The National Institute of Standards and Technology (NIST) is emphasizing in a new report the need to tackle biases in artificial intelligence beyond just the data sets and machine learning processes, which tend to be the main points of emphasis in efforts to make AI less biased and more equitable.
NIST recommends in the report, "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence," that AI researchers and developers widen the scope of where they look for biases in AI. NIST said developers and researchers should emphasize understanding both how biases arise in algorithms and the data they use, and the larger societal context in which AI systems are deployed.
“Context is everything,” Reva Schwartz, one of the report’s authors and NIST principal investigator for AI bias, said in a release. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives.”
“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI,” Schwartz added. “Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”
Specifically, NIST said that while data and other computational and statistical sources remain "highly important" areas from which biases can emerge, a fuller picture requires a deeper look at both human and systemic biases. Human biases are those that arise when people use data to fill in gaps in their knowledge, while systemic biases are those ingrained in institutions in ways that can disadvantage whole groups of people.
The report finalizes a draft publication released last summer and will help inform a larger AI effort at the agency. It is part of NIST's push for trustworthy and responsible AI, and its guidance is connected to the development of the NIST AI Risk Management Framework.