Artificial intelligence is shaping our world in profound ways, seemingly at the speed of light.
Leaders in government, industry and academia agree that the rapid expansion of AI-based technologies brings both opportunity and risk. AI holds the potential to boost economic growth, improve individual health care, mitigate the climate crisis and much more. At the same time, used without intention, AI could also increase the spread of disinformation and discrimination, exacerbating some of our toughest societal challenges.
At the heart of this exploration is one critical question: How do we harness the power of AI to build a more just and equitable society?
Over the last decade, the U.S. National Science Foundation has been working to advance high-impact interdisciplinary research that will allow society to use AI for the greater good, creating a framework for building ethical and trustworthy AI systems. NSF's National Research Traineeship (NRT) program is at the forefront of that effort, supporting a new generation of AI researchers committed to exploring how AI can be directed to support equity and well-being in society. Through the NRT program, NSF is funding explorations that will ultimately help mitigate the potential harms that arise from the use of AI and support the creation of a diverse AI workforce focused on the ethical use of this powerful technology.
Exploring standards for ethical AI at UT Austin
An NRT program at The University of Texas (UT) at Austin, NRT-AI: Convergent, Responsible, and Ethical Artificial Intelligence Training Experience, brings together a diverse team of researchers committed to expanding the ethical use of AI. Experts in fields ranging from computer science and robotics to aerospace engineering are partnering with graduate students to pursue cutting-edge AI research focused on social good.
In 2016, UT launched its Good Systems initiative to examine how humans can leverage AI to transform society for the better, grounded in the values of agency, equity, trust, transparency, democracy and justice. The NRT program builds on that important work by expanding research funding opportunities for a new generation of scientists committed to understanding the ethical implications of AI at the deepest levels. Through their work, these graduate students are preparing to help guide technological advances in ways that support equity and justice across society. Today, 14 trainees are part of the NRT program, working across disciplines to imagine and design human-centered ethical AI systems and standards.
For Junfeng Jiao, director of the NRT program at UT, principal investigator and a founding member of Good Systems, this traineeship is providing a vital space for emerging researchers to explore how AI can be used to support a stronger society and a better world.
"Imagine if you'd fallen asleep in the year 1800 and awakened in the year 2000 to see how much the tools we use in our daily lives — like cars and computers — had radically evolved. There's no doubt we'll see just as many radical shifts with AI technology but in a much shorter time frame — decades instead of centuries. It is already changing our world in extremely significant ways," said Jiao.
"Initially, we began this project to help future roboticists design the next generation of robots using ethical standards. But we soon realized the need to expand our work to explore aspects of AI technology — using AI to create smart cities, improve generative AI, reduce the spread of disinformation and much more," Jiao said. "Our hope is that we can contribute to the creation of standards for how society designs, develops and deploys AI systems ethically and with the good of society in mind."
UT's NRT program defines ethical AI as grounded in four core values: fairness, privacy, safety and inclusivity. The project reflects these values by intentionally recruiting trainees from extremely diverse backgrounds — not only in terms of racial and ethnic diversity but also in terms of their academic experience and research interests. While some trainees are focusing on what it means to create a humanoid robot that upholds human autonomy and ethical standards, others have come to NRT to learn how to apply AI in non-STEM fields like law and public health.
"We have such an impressive and diverse group of trainees, and all of them bring their own unique experiences to examine what it means to create an ethical AI system that is really applicable to how we live our lives today," said Kirsten Dalquist, NRT's senior research program coordinator. "Their projects are so compelling — from exploring how AI can help us create more efficient, equitable cities to developing smart hand tools that can help make industrial jobs safer for workers. They are all highly motivated, and it's inspiring to see."
Close mentorship is a foundational part of NRT, with two faculty members pairing with each trainee to help guide and sharpen their research topics. Faculty provide trainees with the freedom and autonomy to pursue their own research questions but work with them to develop personalized roadmaps to plot their course through the program.
"For example, our trainee Connor entered the program from an urban planning background with a strong interest in generative AI. So, we worked together to develop his individual development plan to give him the knowledge in computer science he'd need to pursue his research questions," explained Jiao. "Now, he's going full speed in generative AI, using it to explore how a graphic AI generator can design cities and how that design compares to human design. He's already published three papers in this area, and we're excited to see what comes next."
Along with focusing on their individual research projects, Jiao and his team encourage trainees to build connections and community with students and faculty across a wide range of interests. It's the synergies that arise from working together and sharing knowledge — the convergent experience — that are foundational to the NRT experience. To spark even more collaboration around ethical AI through the NRT program, the university now has a permanent graduate portfolio that brings students from across the university through a socio-technical class sequence focused on "the complex interactions between AI-based technologies and society." Through all this work, UT has created a thriving culture of exploration around AI that will help to drive new discoveries.
"When we think about it from a big picture perspective, we want our project to help agencies like NSF define how AI should be used ethically in the future — and ultimately to help contribute to the development of an international universal standard for AI," said Jiao. "I hope to be able to tell my grandchildren someday, 'Your grandpa did something meaningful for AI so that we can use it to support better well-being around the world.' And I think, through NRT, our trainees will be at the forefront of critical research on how to make AI more ethical for all of us."
Researchers at Stony Brook University are bridging data-based and human-centered science to combat bias
Halfway across the country, another NRT project at Stony Brook University is confronting one of AI's most serious ethical implications — the risk of increased systemic discriminatory bias. The project, "Detecting and Addressing Bias in Data, Humans, and Institutions (Bias-NRT)," seeks to build an interdisciplinary bridge between the data sciences and human-centered sciences to explore how machine learning and AI are affected by bias at the deepest and most holistic levels.
Bias-NRT's principal investigator Susan Brennan says the early vision for the project came through reflection about an increasing lack of diversity in fields like computer science. She and her colleagues believe that harnessing AI ethically to reduce risks like discriminatory bias will require a diverse, inclusive workforce — one with the range of perspectives and awareness to confront the most insidious societal challenges.
"My faculty colleagues Wei Zhu from applied math and statistics and C.R. Ramakrishnan from computer science and I began wondering why women were so underrepresented in the data sciences," said Brennan. "Once we put some statistics together, we realized that quantitatively oriented women researchers were voting with their feet and gravitating toward the topics they cared passionately about — human-centered topics like psychology, sociology and linguistics. And so we started exploring how to build a two-way bridge between those two areas of focus."
"At the same time, our university and the entire country was in the midst of a deeper conversation about diversity, equity and inclusion. Our student committees had begun to examine critical questions — like why we don't have more professors of color or speakers of color. That sparked new reflection around the reality of bias and how we can detect and address it when it comes to our data and our institutions. We want every single graduate student to develop their own viewpoint on this issue and to examine how bias may be present even in their own research. That self-awareness will give them the tools and perspectives they'll need to detect and confront bias in data and institutions throughout their careers."
Stony Brook's Bias-NRT program brings together faculty from a wide range of disciplines. The human-centered sciences are represented by researchers in Africana studies, economics, linguistics, neurobiology and behavior, political science, psychology and sociology, and the data sciences include researchers in applied math and statistics and computer science. A convergent approach to collaborative research projects helps trainees with a human-centered background build new skills in data manipulation, while helping data science-focused trainees increase their insight into the impacts that AI/machine learning tools can have on humans.
In addition to individual coursework, trainees participate in multiple research practica — the centerpiece of the Bias-NRT experience at Stony Brook. With guidance from diverse faculty and guest speakers from a range of disciplines, trainees develop foundational knowledge on the numerous ways bias can arise, then apply that knowledge as they begin to dig deep into small group research projects.
"It's been exciting to see a lot of lightbulbs go off for our first cohort of trainees as they've moved through the research practicum," said Brennan. "Typically, with computer science training, if there's something wrong with your results you just throw more data at the problem. But when it comes to human-centered research, often data sets are incomplete, undependable or inconsistent. Through Bias-NRT, we're teaching and exploring a new response that's much more applicable to society's current challenges. By examining and curating the data carefully, we can figure out where the gaps and problems are and then use that information effectively to confront injustice."
That's exactly what a group of trainees are now doing for the Post-Conviction Project, a research effort in collaboration with content experts from the Innocence Network, a coalition of nonprofit organizations that work to prevent wrongful convictions and exonerate innocent people who have been wrongfully incarcerated.
The trainees' research is focused on creating AI-based tools to identify individuals whose cases have significant potential to be overturned, while eliminating bias from imperfect data that can stem from systemic bias. The ultimate goal is to develop an AI-based tool to gather and assess data more effectively and efficiently and to pursue and obtain more exonerations. This powerful work has inspired Bias-NRT trainees and faculty to jointly submit the first of several papers on leveraging machine learning to help exonerate individuals — with the goals of increasing both transparency in machine learning and equity in the criminal justice system. The paper will be featured at the Innocence Project's "2023 Just Data Conference" and published in the associated issue of The Wrongful Conviction Law Review.
Several specific issues have emerged in the research:
- The accuracy of facial recognition technology in identifying people with darker skin tones.
- The use of AI and machine learning to support environmental justice in underserved communities.
- The equitable distribution of relief resources to communities during natural disasters.
- The equitable mitigation of the effects of climate change.
For project coordinator Kristen Kalb-DellaRatta, it's been energizing to see the trainees work together on issues of such vital importance to society.
"Our first cohort of trainees primarily came from a human-centered science background, and it's been wonderful to see them connect and build relationships. Our second cohort that is just getting started is mostly computer science and applied math students. So, we're really excited to see these two groups work together on issues of bias and combine their skills in computational and human-centered science to tackle some critical research questions."
After just a year, Stony Brook's team is already seeing the profound value of this interdisciplinary work — both for individual trainees and for the institution as a whole. Trainees are gravitating to the Bias-NRT program because they believe it will advance and deepen their training in ways that will make them more competitive in the job market. Rather than just building generic expertise in a field, they will be able to demonstrate their experience in building more effective and ethical systems.
A number of trainees have already obtained competitive internships with prominent public and private sector organizations. A computer science-focused trainee who has been conducting research with the linguistics faculty on large language models worked this summer with an industry-based AI research and development lab. A social psychologist trainee who is exploring health disparities and racism in Latino communities is working with the U.S. Department of Health and Human Services. And a cognitive scientist who has focused her research on spoken conversation spent the summer working on search agents with Google.
But beyond individual trainees, Brennan and her team are also beginning to see some institutional cultural shifts. Students and faculty are working together in new ways — and exploring new research questions using an interdisciplinary approach. That process of convergent training, which weaves together different perspectives and viewpoints, is crucial to ensuring that AI technology is ultimately used to reduce damaging biases in society.