NIST Releases AI Risk Management Framework and Updates on Biases in AI – Privacy Protection



Advances in artificial intelligence (AI) have driven innovation across many aspects of our society and economy, including verticals such as healthcare, transportation, and cybersecurity. Recognizing that AI also carries limitations and risks that must be considered, regulators and legislators around the world have turned their attention to the technology.

In 2020, Congress directed the National Institute of Standards and Technology (NIST) to develop an AI risk management framework in collaboration with the public and private sectors. Last week, in keeping with that mandate, and following the initial requests for information and AI workshops it hosted in 2021, NIST released two documents relating to its broader AI efforts. First, on March 17 it published an initial draft of the AI Risk Management Framework. Public comment on the framework is open until April 29, and the agency is hosting a public workshop March 29-31. Second, it updated a special publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. While it is unclear whether NIST's efforts will lead to broader consensus or federal legislation on AI, the Federal Trade Commission (FTC) and state legislatures are already focusing on AI in the near term.

As we have previously reported here on CPW, the FTC is focused on AI and has indicated that it is considering enacting AI-related regulations. Although Commissioner Wilson's statements cast doubt on the likelihood of the Commission issuing AI-focused regulations in the first half of this year, its recent Weight Watchers settlement reinforces the agency's commitment to consumer privacy and related issues, including the effects AI has on them.

State AI and Privacy Laws

AI is also a priority at the state level. Beginning in 2023, AI, profiling, and other forms of automated decision-making will be regulated by broad and sweeping privacy laws in California, Virginia, and Colorado, which give consumers corresponding rights to opt out of certain processing of their personal information by AI and similar processes. We can expect the concepts of AI and profiling to be significantly fleshed out in regulations promulgated under the California Privacy Rights Act (CPRA). For now, the CPRA is very light on specifics regarding profiling and AI, but it will apparently require companies, in response to consumer know/access requests, "to include meaningful information about the logic involved in such decision-making processes" — in other words, information about the algorithms used in AI and automated decision-making. We can also expect regulations to be issued under Colorado's privacy law (the picture is less clear in Virginia, where the attorney general has not been given rulemaking authority). Organizations must understand the requirements for AI, profiling, and automated decision-making under these fast-approaching privacy regimes, and continue to pay attention as rulemaking in California and Colorado progresses.

The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.
