NSF and Amazon collaborate to advance fairness in AI
From personalized music playlists to improved health care, artificial intelligence continues to enhance daily life. The field of AI moves quickly, making it important to understand how to ensure that this technology is fair, transparent, accountable, inclusive and beneficial to all. The U.S. National Science Foundation leads the nation’s investments in university research to answer these challenging questions, exploring everything from the theoretical foundations of machine learning to the social and economic impacts of AI. This basic research unlocks new knowledge, while partnerships with industry stakeholders enhance the ability to answer crucial questions about the fairness of AI systems. Embedding new concepts and methods in real-world systems keeps the research relevant to solving societal challenges.
In collaboration with Amazon, NSF is pleased to announce the 2021 awardees of its Program on Fairness in Artificial Intelligence. The program supports computational research focused on fairness in AI, with the goal of building trustworthy AI systems that can be deployed to tackle grand challenges facing society.
This year’s projects cover a variety of AI topics, including:
- Theoretical and algorithmic foundations.
- Principles for human interaction with AI systems.
- Technologies such as natural language understanding and computer vision.
- Applications including hiring decisions, education, criminal justice and human services.
“NSF is partnering with Amazon to support this year’s cohort of fairness in AI projects,” said Henry Kautz, director of NSF’s Division of Information and Intelligent Systems. “Understanding how AI systems can be designed on principles of fairness, transparency and trustworthiness will advance the boundaries of AI applications. And it will help us build a more equitable society in which all citizens can be designers of these technologies as well as benefit from them.”
The 2021 awardees include:
- Fairness in Machine Learning with Human in the Loop explores the long-term effects of AI decision-support systems by using human-AI feedback during repeated interactions involving sequences of decisions.
- End-To-End Fairness for Algorithm-in-the-Loop Decision-Making in the Public Sector builds tools for use in criminal and civil court proceedings, housing, and gauging health impacts of adverse environmental exposure.
- Foundations of Fair AI in Medicine: Ensuring the Fair Use of Patient Attributes develops theoretical tools to identify which group attributes (such as age, weight and employment status) may lead a system to perform poorly for specific population subgroups.
- Organizing Crowd Audits to Detect Bias in Machine Learning harnesses crowdsourcing to identify bias and unfairness in AI-enabled systems.
- Using Machine Learning to Address Structural Bias in Personnel Selection integrates statistical machine learning methods with established hiring practices grounded in the legal, social, behavioral and economic sciences.
- Towards Adaptive and Interactive Post Hoc Explanations constructs personalized AI explanations for education, health care and hiring so that users better understand how machine learning decision-support systems function.
- Using AI to Increase Fairness by Improving Access to Justice develops AI natural language systems that will make legal documents intelligible to the lay public, thus enhancing public access to justice.
- Fair AI in Public Policy: Achieving Fair Societal Outcomes in Machine Learning Applications to Education, Criminal Justice, and Health and Human Services integrates perspectives from computer science, statistics and public policy to help improve fair AI applications in the real world, including education, child welfare and justice.
- Towards Holistic Bias Mitigation in Computer Vision Systems develops a unified framework to prevent biases from entering systems that collect and analyze visual data in domains from health care to law enforcement.
- Measuring and Mitigating Biases in Generic Image Representation helps uncover and mitigate biases in deep neural network computer vision systems in a way that is intended to generalize across many application tasks.
- Quantifying and Mitigating Disparities in Language Technologies addresses the effects of language diversity on the quality of results one can expect from natural language processing technology.
“We are excited to see NSF select an incredibly talented group of researchers whose research efforts are informed by a multiplicity of perspectives,” said Prem Natarajan, vice president of Natural Understanding in Amazon’s Alexa unit. “As AI technologies become more prevalent in our daily lives, AI fairness is an increasingly important area of scientific endeavor. And we are delighted to collaborate with NSF to accelerate progress in this area by supporting the work of the top research teams in the world.”
Learn more about NSF’s Fairness in Artificial Intelligence Program by visiting nsf.gov.
For additional background, read the NSF Science Matters article, “Supporting the Foundation of Fairness in AI.”