AI Risk Management Framework: A Discussion of NIST's Second Draft

The buzz around Artificial Intelligence (AI) is still going strong as the technology expands into more industries, such as healthcare, transportation, manufacturing, and finance. If you are planning to incorporate AI into your business, or you are a business that develops AI products, chances are you have experienced a time when your technology was used in a way it was never intended to be. 

Unfortunately, even the best designers, programmers, or ‘enter-your-job-title-here’ cannot predict with 100% certainty how an end user will try to use their technology or how the technology will respond in those cases. So even with the best intentions, how can you be sure your technology does not inadvertently exhibit bias when in use? 

To combat this, the National Institute of Standards and Technology (NIST) is developing an AI Risk Management Framework (AI RMF) and an accompanying playbook to guide businesses like yours in limiting bias and building trustworthiness into AI technologies. 

NIST Weighs in on AI Risk

On August 18, NIST released the second draft of its AI Risk Management Framework. The goal of this framework is to promote the development and use of responsible AI technologies and systems. The first draft, released earlier this year, identified some of the common AI risks and factors of trustworthiness, the core functions of the AI RMF, and the promise of a practice guide. 

In the second draft, NIST moves away from the three factors of trustworthiness (technical characteristics, socio-technical characteristics, and guiding principles) and instead lists seven characteristics of trustworthy AI: 

  1. Valid and reliable – Validity refers to the accuracy and robustness of AI systems; a system achieves this by maintaining its level of performance across a variety of circumstances. When an AI system is reliable, it performs as needed, without failure, for a given time and under given conditions. 

  2. Safe – In general, AI systems should not cause physical or psychological harm to people or damage to property. Safety considerations can be introduced as early as the planning and design phases and later tested through simulations. 

  3. Fair – and bias is managed – Fairness comes into play when reviewing an AI system’s decision process or outputs for disparities. A system becomes unfair if it bases its decisions on sensitive attributes such as gender, ethnicity, or disability. Correcting this decision process helps manage bias in future outcomes (a minimal sketch of one such bias check follows this list). 

  4. Secure and resilient – Security in AI is the ability of the system to avoid or protect itself from attacks, unauthorized access, and unauthorized use. In the event of an attack or an unexpected change in its environment, a resilient AI system will be able to return to normal function without support. 

  5. Transparent and accountable – When interacting with a transparent AI system, the user is given specific information about the system. This availability helps hold the system accountable for its responses (outputs) to the user while ensuring fairness and reducing bias. 

  6. Explainable and interpretable – Explainable AI refers to the methods that allow human users to understand and trust the responses created by the AI system. When an AI system is interpretable, the user will be able to understand WHY the system responded the way it did. 

  7. Privacy-enhanced – AI systems should promote values such as anonymity and confidentiality. They can do this by limiting intrusion and observation and by requiring consent before aspects of a user’s identity, data, or reputation are disclosed or controlled.
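
To make the fairness characteristic more concrete, here is a short, hypothetical Python sketch of one common bias check, demographic parity, which compares how often a model produces a positive outcome for different groups. The AI RMF does not prescribe any specific metric; the function name, data, and group labels below are invented purely for illustration.

```python
# Hypothetical bias check: demographic parity.
# The NIST AI RMF does not prescribe this (or any) specific metric;
# the predictions and group labels below are made up for illustration.

def demographic_parity_gap(predictions, groups):
    """Return the difference between the highest and lowest
    positive-prediction rates across groups (0.0 = perfectly balanced)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy loan-approval outputs: 1 = approved, 0 = denied.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Group A's approval rate is 0.75 and group B's is 0.25, so the gap is 0.50.
```

A large gap like this would be a signal to review the decision process, which is exactly the kind of correction the fairness characteristic calls for.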

Breaking Down the Key Functions of the AI Risk Management Framework

NIST maintains the same four high-level functions of AI risk management – Govern, Map, Measure, and Manage – in the second draft. These functions, with their categories and subcategories, serve as guidance that organizations can apply to better manage AI risks. 

  • Govern – This function is designed to make sure risks and their potential impacts are identified, measured, and managed effectively and consistently. It is a cross-cutting function that runs throughout the AI risk management process. It includes six categories and 15 subcategories to consider. 

  • Map – This function maps out the purpose, expectations, and impacts of the AI system. This information-gathering function enables risk prevention and informed decision making. It includes five categories and 18 subcategories to consider. 

  • Measure – This function uses a mixture of tools, techniques, and methods to analyze, assess, and monitor AI risks and their related impacts. During this function, you will track metrics and perform testing and performance assessments of the AI system (a minimal sketch of this kind of metric tracking follows this list). It includes four categories and 17 subcategories to consider. 

  • Manage – This function focuses on prioritizing risks and managing them effectively using the risk management resources allocated earlier. The key to this function is to maximize the benefits and minimize the negative impacts of the AI system. It includes four categories and nine subcategories to consider. 
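
To make the Measure function less abstract, the Python sketch below shows one hypothetical way an organization might track metrics for an AI system and flag values that drift outside agreed limits. The metric names, thresholds, and values are invented for this example; the AI RMF leaves the choice of tools and metrics to each organization.

```python
# Hypothetical sketch of the Measure function: compare measured metrics
# against agreed thresholds and flag anything out of bounds.
# Metric names, thresholds, and values are invented for illustration.

THRESHOLDS = {
    "accuracy": 0.90,               # minimum acceptable overall accuracy
    "subgroup_accuracy_gap": 0.05,  # maximum acceptable gap between groups
}

def assess(run):
    """Return a list of findings for metrics outside their thresholds."""
    findings = []
    if run["accuracy"] < THRESHOLDS["accuracy"]:
        findings.append(f"accuracy {run['accuracy']:.2f} below 0.90 minimum")
    if run["subgroup_accuracy_gap"] > THRESHOLDS["subgroup_accuracy_gap"]:
        findings.append(
            f"subgroup gap {run['subgroup_accuracy_gap']:.2f} above 0.05 maximum"
        )
    return findings

# Made-up measurements from a periodic test run of the AI system.
latest_run = {"accuracy": 0.93, "subgroup_accuracy_gap": 0.08}

for finding in assess(latest_run):
    print("MEASURE flag:", finding)
```

Findings flagged this way would then feed the Manage function, where they are prioritized and acted on, closing the loop the framework describes.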

Understanding AI Risk and Impacts

AI has brought countless benefits to our everyday lives, such as self-driving cars, smart assistants, chatbots, and facial recognition. While most people enjoy these benefits, just as many are concerned about how risky the use of AI can be. 

Like many other technologies, AI carries many potential risks. As it continues to grow and become smarter, these risks will also continue to evolve. AI risk management is about minimizing the potential negative impacts of AI systems while maximizing the positive ones. 

Some risks associated with AI include violations of personal privacy, bias in decision making, and unclear legal responsibility. These risks often lead to negative impacts that can be felt by individuals, groups, communities, organizations, and the environment. 

NIST has also published a draft Playbook that organizations can use to help them navigate the AI RMF. This resource provides suggested actions, references, and guidance for minimizing the risks and impacts associated with AI. 

Conclusion

The final version of the AI RMF is expected to be released in early 2023. Although the framework is not mandatory, it will most likely influence industry standards. 

Contact one of GSec LLC’s experts today with any questions you may have. 

Jazmyne Davis