NIST’s AI Risk Management Framework Must Focus on Ethical AI

Our world is becoming increasingly digital, and the volume of data we generate is growing exponentially. At the same time, AI is developing at a breakneck pace, and research and policy guidance must keep up to ensure that AI is implemented ethically.

To implement AI ethically, agencies must ensure that AI acts within its intended scope and that its behavior is thoroughly investigated for fairness and potential harm. If an AI model cannot be trusted to meet this standard, or if it cannot be shown to limit its harm to the public, it cannot be part of the public infrastructure.

Therefore, explainable AI (XAI) is an essential part of a risk management framework. Having both the tools to understand AI models and model architectures capable of generating human-understandable explanations is a prerequisite for risk assessment in any AI system. Protecting against risks requires first being able to see them.

The National Institute of Standards and Technology’s four principles for XAI describe its component requirements: a model must be able to generate an explanation; that explanation must be accurate enough to serve as a useful metric; it must be meaningful to stakeholders in the context of the outcome; and it must state the limits of its explanatory power, so that the model cannot be misused without raising alarms.

Although necessary, practicing XAI according to the NIST principles can be challenging. Many AI models attempt to replicate the way humans think, but the underlying mathematics is vastly different, which makes it difficult for humans to understand the logic or heuristics behind model decisions. Three steps can help agencies use AI responsibly and align with the NIST principles: adopting a transparency-focused approach to AI, maintaining production oversight, and training employees.

Make AI Explainable

First, agencies must be able to interpret the algorithms of varying complexity that drive these AI models. Depending on the underlying technique, that interpretation can be relatively intuitive, as with regressions and decision trees, or far more abstract, as with natural language transformers, whose results are harder to trace.
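As a minimal sketch of that difference in interpretability, the scikit-learn snippet below (the dataset and depth limit are illustrative choices, not recommendations) exports a small decision tree’s learned rules as plain if/else statements a reviewer can read line by line; a transformer offers no comparable, directly readable artifact.

```python
# Illustrative sketch: an interpretable model's parameters map directly to
# human-readable rules. Dataset and hyperparameters are placeholder choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the learned decision rules as nested if/else statements
# that a domain expert can review directly.
print(export_text(tree, feature_names=list(data.feature_names)))
```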

For a model to be explainable, the interpretation must also be easy to understand. This is where expertise in AI and human-computer interaction can join forces, translating statistical metrics and model measurements into language that helps humans decide whether to act on the model’s output.
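One hedged illustration of that translation step is sketched below: it turns a linear model’s coefficients into a one-sentence summary a non-specialist could act on. The dataset and the summary wording are stand-ins, not a recommended phrasing.

```python
# Illustrative sketch: translating raw model measurements (coefficients)
# into a plain-language statement for stakeholders. Names are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression().fit(X, data.target)

def explain_top_factors(model, feature_names, k=3):
    """Summarize the k most influential features in plain language."""
    top = np.argsort(np.abs(model.coef_[0]))[::-1][:k]
    factors = ", ".join(feature_names[i] for i in top)
    return f"The model's output is driven mostly by: {factors}."

print(explain_top_factors(model, list(data.feature_names)))
```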

For example, with computer vision algorithms that help identify abnormalities in medical images, such as finding cancer on a lung CT scan, final diagnoses must be made by medical professionals. The algorithms highlight ‘areas of interest’, such as clusters of pixels that aid in the automated detection of cancerous tissue, but those results are confirmed by experienced human professionals.
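The sketch below is a simplified, hypothetical illustration of how such areas of interest can be surfaced, using input-gradient saliency on a toy PyTorch model that stands in for a real medical-imaging network; it shows the mechanism only, not a clinical tool.

```python
# Toy sketch of input-gradient saliency: the tiny CNN and random image stand
# in for a real imaging model and a CT slice.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in for a CT slice
score = model(image)[0, 1]   # score for the "abnormal" class
score.backward()             # gradients with respect to input pixels

# Pixels with large gradient magnitude are the model's "areas of interest";
# a clinician, not the model, decides what they mean.
saliency = image.grad.abs().squeeze()
print(saliency.shape, float(saliency.max()))
```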

A transparency-driven approach to AI

There are two major components that enable a transparent approach to developing and operationalizing AI models. The first is MLOps (machine learning operations): the technology infrastructure and processes that make model development and deployment repeatable and replicable. Without MLOps, model explainability is ad hoc and manual, minor changes or upgrades to models become time-consuming and costly, and monitoring of production AI model performance is inconsistent and unreliable.
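As a minimal, library-agnostic sketch of the record keeping that MLOps tooling automates, the snippet below logs what a training run would need to be reproduced later; real pipelines would use dedicated experiment-tracking and model-registry systems, and all names and paths here are illustrative.

```python
# Illustrative sketch of reproducibility record keeping for one training run.
import hashlib
import json
import time

def log_training_run(model_name, params, train_data_path, metrics, registry="runs.jsonl"):
    """Append a reproducibility record for one training run to a local registry file."""
    with open(train_data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "model": model_name,
        "timestamp": time.time(),
        "params": params,          # hyperparameters used for this run
        "data_sha256": data_hash,  # fingerprint of the exact training data
        "metrics": metrics,        # evaluation results to monitor against later
    }
    with open(registry, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example call (names, paths and values are placeholders):
# log_training_run("claims-triage-v2", {"max_depth": 5}, "data/train.csv",
#                  {"accuracy": 0.91})
```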

The reality is that all AI models lose performance over time. This can happen because of data or concept drift, newer algorithms, or changing business ROI. Whatever the reason, production monitoring is how the owner or operator is alerted and the right corrective actions are triggered.
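One common form of production monitoring is a distribution-drift check such as the population stability index; the sketch below (bin counts and the 0.2 alert threshold are rules of thumb, not a standard) compares a training-time feature distribution with simulated live traffic and raises an alert when drift appears.

```python
# Illustrative drift check: population stability index (PSI) for one feature.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between training-time and live samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution seen at training time
live_feature = rng.normal(0.5, 1.0, 10_000)   # simulated drifted production traffic

if psi(train_feature, live_feature) > 0.2:    # rule-of-thumb alert threshold
    print("Drift detected: alert the model owner and trigger a review.")
```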

The second component is algorithmic explainability: the ability to extract the heuristics behind model decisions and translate them into actionable language. Machine learning expertise is crucial to extracting that decision heuristic from the model. Depending on the use case, there may be a trade-off between model explainability and model performance in terms of accuracy and prediction time, so human judgment is required to select and recommend the optimal algorithm.
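One way to extract such a heuristic, sketched below under illustrative settings, is to fit an interpretable surrogate tree to a more opaque model’s predictions and then compare accuracy and fidelity, which makes the explainability-versus-performance trade-off concrete.

```python
# Illustrative global-surrogate sketch: a shallow tree trained to mimic a
# more opaque model. Dataset and hyperparameters are placeholder choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X_train, black_box.predict(X_train))   # trained to mimic the opaque model

print("black-box accuracy:", black_box.score(X_test, y_test))
print("surrogate accuracy:", surrogate.score(X_test, y_test))
print("fidelity to black box:", surrogate.score(X_test, black_box.predict(X_test)))
```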

The role of employee training

Similar to the adoption of other new technologies, part of the solution is proper internal training and change management. AI is often most effective when integrated invisibly, behind the scenes, into existing technology infrastructure and workflows. But that same invisibility makes it harder to educate employees about where AI is in use.

A few key pieces of information should be communicated and made available to employees: a list of the AI models in their workflow, a basic introduction to each model’s scope, the model’s owners and their contact details, a guide to proper use of the model, training on how to protect the model’s security, and a feedback mechanism for reporting issues with the model.
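A hypothetical sketch of how that information might be captured in an internal model inventory is shown below; the field names and values are illustrative, not a standard schema.

```python
# Hypothetical internal "model card" record; all fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str                    # the model as employees see it in their workflow
    scope: str                   # what the model is (and is not) intended to do
    owner: str                   # accountable team
    contact: str                 # where to reach the owner
    usage_guide_url: str         # guidance on proper use
    security_training_url: str   # how to protect the model
    feedback_channel: str        # where employees report issues

card = ModelCard(
    name="Invoice triage assistant",
    scope="Ranks incoming invoices for manual review; does not approve payments.",
    owner="Data Science Team",
    contact="ds-team@example.gov",
    usage_guide_url="https://intranet.example.gov/ai/invoice-triage/guide",
    security_training_url="https://intranet.example.gov/ai/security-basics",
    feedback_channel="https://intranet.example.gov/ai/feedback",
)
```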

It is also important to highlight the crucial role that humans play in the adoption of AI. Keeping a human in the loop is critical to ensuring ethical AI applications, and explainable AI strives to build the information bridge that keeps humans in the loop as the ultimate decision makers.

AI has made great strides over the past decade, but those advances have largely been measured in terms of model performance, often at the expense of interpretability and explainability. Today, explainability is one of the most active areas of research and investment.

We are seeing promising new findings in the use of contrastive and counterfactual learning approaches to explain certain deep learning techniques. Exciting research has also been done on understanding the fundamental logic behind the main components of neural networks, as well as on comparing different powerful neural network architectures.
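As a toy illustration of the counterfactual idea, far simpler than the methods this research explores, the sketch below searches for the smallest single-feature change that flips a model’s prediction on a synthetic dataset.

```python
# Toy sketch of a counterfactual explanation: the smallest change to one
# feature that flips the model's decision. Dataset and ranges are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual_shift(x, feature, model, deltas=np.linspace(-3, 3, 121)):
    """Search one feature for the smallest shift that changes the prediction."""
    original = model.predict([x])[0]
    for delta in sorted(deltas, key=abs):
        x_cf = x.copy()
        x_cf[feature] += delta
        if model.predict([x_cf])[0] != original:
            return delta
    return None

delta = counterfactual_shift(X[0], feature=0, model=model)
if delta is None:
    print("No single-feature change in the search range flips the prediction.")
else:
    print(f"The prediction flips if feature 0 shifts by {delta:+.2f}.")
```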

Applying these new techniques and integrating them as a standard part of AI implementation should be a priority for every organization. Explainable AI is not a luxury; it is an essential part of keeping humans in the decision-making process, making AI models more resilient and durable, and minimizing potential harm to users.

Henry Jia is Head of Data Science at Excella.
