NIST’s Risk Management Framework and Guidelines for Addressing Biases in AI


As more companies develop and/or use artificial intelligence (AI), it is important to consider risk management and best practices for addressing issues such as bias in AI. The National Institute of Standards and Technology (NIST) recently released a draft of its AI risk management framework (the Framework) and guidance for addressing bias in AI (the Guidance). The voluntary Framework addresses the risks associated with the design, development, use, and evaluation of AI systems. The Guidance offers considerations for the reliable and responsible development and use of AI, including suggestions for governance processes to address bias.

Who should pay attention?

The Framework and Guidance will be useful to anyone designing, developing, using, or evaluating AI technologies. The language is written to be understandable by a wide audience, including senior executives and those who are not AI professionals, while NIST includes enough technical depth that the documents will also be useful to practitioners. The Framework is designed to be scalable to organizations of all sizes, public or private, across sectors, and to national and international organizations.

What is NIST, and what are these publications?

NIST is part of the U.S. Department of Commerce and was founded in 1901. Congress directed NIST to develop an AI risk management framework in 2020. Elham Tabassi, Chief of Staff of NIST’s Information Technology Laboratory and coordinator of the agency’s work on AI, says: “We have developed this draft with many contributions from the private and public sectors, knowing full well how quickly AI technologies are being developed and used, and how much there is to learn about the associated benefits and risks.” The Framework considers approaches to building trustworthiness characteristics, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security, and mitigation of unintended and/or harmful uses.

In summary, the Framework covers the following points:

  1. Technical characteristics, socio-technical characteristics, and guiding principles;

  2. Governance, including risk mapping, measurement and management; and

  3. A practical guide.

The Guidance addresses three high-level points:

  1. Describes the issues and challenges of bias in artificial intelligence and provides examples of how and why bias can undermine public trust;

  2. Identifies three categories of bias in AI – systemic, statistical and human – and describes how and where they contribute to harm; and

  3. Describes three major challenges to mitigating bias – data sets, testing and evaluation, and human factors – and presents preliminary tips for addressing them.

Why is AI governance important?

Governance processes affect nearly every aspect of AI management. Governance includes administrative procedures and standard operating policies, but it is also part of the organizational processes and cultural competencies that directly affect the people involved in training, deploying, and monitoring AI systems. Monitoring systems and redress channels help end users report incorrect or potentially harmful results and hold organizations accountable. It is also essential that written policies and procedures address key roles, responsibilities, and processes at different stages of the AI lifecycle. Clear documentation helps an organization implement policies and procedures consistently and standardizes how it manages bias.

AI and bias

The detrimental impacts of AI are felt not only at the individual or organizational level; they can quickly ripple outward to a much wider reach. The scale and speed of the harm AI can cause make it a unique risk. NIST points out that machine learning processes and the data used to train AI software are subject to bias, both human and systemic. These biases influence the development and deployment of AI. Systemic biases can stem from institutions operating in ways that disadvantage certain groups, such as discriminating against individuals because of their race. Human biases can come from people drawing biased inferences or conclusions from data. When human, systemic, and computational biases combine, they can compound one another and lead to crippling consequences. To address these issues, the NIST authors argue for a “socio-technical” approach to tackling bias in AI. This approach combines sociology and technology, recognizing that AI operates in a larger social context, so efforts to address bias must go beyond purely technical fixes.

State AI and Privacy Laws

Upcoming state privacy laws also address AI activities. Beginning in 2023, AI, profiling, and other forms of automated decision-making will be regulated by comprehensive privacy laws in California, Virginia, and Colorado, including the right for consumers to opt out of certain processing of their personal information by AI and similar processes. Organizations should also be prepared to provide information about the logic involved in automated decision-making in response to consumer access requests.

What’s next?

NIST will accept public comments on the AI Framework until April 29. Additionally, NIST is hosting a public workshop March 29-31. NIST’s notice seeking comments on its Framework and providing more information on its Guidance can be found here. The Guidance can be found here. NIST is planning a series of public workshops over the next few months aimed at producing a technical report that addresses AI bias and connects the Guidance to the Framework. More details about the workshops are coming soon. A second draft of the Framework, incorporating comments received by April 29, will be released this summer or fall.
