NSF News

Advancing artificial intelligence research infrastructure through new NSF investments

Today, the U.S. National Science Foundation announced a $16.1 million investment to support shared research infrastructure that gives artificial intelligence researchers and students across the nation access to transformative resources, including high-quality data on human-machine interactions in the contexts of collaborative teams, automated driving and news recommendation. The projects will include platforms for carrying out AI research on social robotics and immersive virtual environments.

The awards are part of NSF's Computer and Information Science and Engineering Community Research Infrastructure — or CCRI — program, which seeks to create, enhance and democratize access to research infrastructures, focusing on scientific agendas across computer and information science and engineering.

"A critical element to the success of the AI research revolution is ensuring that researchers have access to the data and platforms required to continue to drive innovation and scalability in AI technologies and systems," said NSF Director Sethuraman Panchanathan. "This infrastructure must be accessible to a full breadth and diversity of talent interested in AI R&D, as that is the driving force behind modern discoveries."

This investment supports five collaborative projects led by the University of Central Florida; the University of Pennsylvania; the University of Minnesota, Twin Cities; UCLA; and Penn State. The institutions will collectively provide hands-on training and educational opportunities to researchers and students, with the goal of building a community whose knowledge sharing and collaboration tools enrich and expand the nation's cutting-edge AI workforce.

The awards will focus on the following aspects of AI research infrastructure and instrumentation:

An Open Source Simulation Platform for AI Research on Autonomous Driving

Led by UCLA, researchers in this project aim to develop an open-ended driving simulation platform that fosters innovations in various aspects of autonomous driving. This platform will support realistic driving simulation with a diverse range of traffic assets and scenarios imported from the real world. It is envisioned as a common testing ground for researchers in academia and industry to develop new AI methods, share data and models, and benchmark progress.

Autonomous driving is expected to transform daily life and the economy with promised benefits like safe transportation and efficient mobility. However, much of today's research on autonomous driving is carried out on expensive commercial vehicles, a costly and risky way to evaluate AI and machine-learning methods. The team's proposed driving simulator infrastructure will provide a cost-effective and safe alternative for developing and evaluating new AI algorithms for autonomous driving. The research will be conducted in collaboration with the University of California, Berkeley.

Virtual Experience Research Accelerator (VERA)

Led by the University of Central Florida, this project focuses on sharing resources for carrying out research in extended reality, or XR — virtual reality, augmented reality and mixed reality environments. The team aims to develop a transformative infrastructure for the research community that combines and extends aspects of distributed lab-based studies, online studies, research panels and crowdsourcing, supporting researchers conducting lab-based human-subject research in this field.

VERA will be designed with and for the XR community to supply needed data and scalability and increase equity by providing opportunities for researchers (including students) from all parts of the nation to do high-impact research, even if they do not have access to XR labs and equipment locally. VERA will also support the creation of large and diverse training data sets for AI related to XR. The research will be conducted in collaboration with Cornell Tech, Davidson College, Lehigh University and Stanford University.

Uniting, Broadening, and Sustaining a Research Community Around a Modular Social Robot Platform

Led by the University of Pennsylvania, researchers in this project aim to build a standardized infrastructure to create a collaborative platform for robotics and AI research. This project will build 50 humanoid robots and distribute them to selected research teams across the U.S. to enable online data collection, collaboration and the development of novel approaches to AI decision-making.

In addition, the team seeks to provide training for researchers to work together, creating a community of roboticists who can learn from each other and share ideas. The expected results are rapid advancement of U.S. robotics and human-robot interaction research and greater diversity among new teams of robotics researchers. The research will be conducted in collaboration with Oregon State University.

A Research News Recommender Infrastructure with Live Users for Algorithm and Interface Experimentation

Led by the University of Minnesota, Twin Cities, this project aims to develop a shared news recommender system that enables researchers nationwide to study live one-time and longitudinal interactions between users and AI systems that personalize their experience based on past behaviors. The project team will include accomplished experimental researchers who have extensively investigated end-user privacy, including the privacy of research participants, as well as ethics experts to address privacy-related impacts on the end users of these technologies.

Recommender systems have extraordinarily broad impact: they determine, for example, which products are ranked and shown to an online shopper based on past shopping behavior. Recommender systems are also behind most online news sources and can shape which news people see. Given the importance of these systems, it is critical for researchers to be able to carry out studies that evaluate different algorithm and interface designs and their impact on users. This project will be conducted in collaboration with Clemson University, Boise State University, Northwestern University and the University of Colorado Boulder.

An Open Data Infrastructure for Bodily Expressed Emotion Understanding

Led by Penn State, this project seeks to unlock the wealth of information about human expression found in videos on the internet. As in other areas of AI, such as image recognition, human bodily movement, including the emotions it expresses, can provide essential insights for developing future human-machine interaction systems. While sentiment analysis of text has garnered extensive attention, analyzing sentiments and emotions through non-text inputs has been less explored and can be greatly advanced through the infrastructure developed in this project.

The multidisciplinary research team comprises experts in AI, computer vision, expressive robotics, emotion recognition, psychology, statistics, data mining and other fields. They will work with experts in research ethics to ensure that the use of images in AI training, and its applications in the project, follows best practices. This research will be conducted in collaboration with the University of Illinois Chicago and the Robotics, Automation and Dance Lab.