
Safe Learning-Enabled Systems

NSF 23-562

Important information about NSF’s implementation of the revised 2 CFR

NSF Financial Assistance awards (grants and cooperative agreements) made on or after October 1, 2024, will be subject to the applicable set of award conditions, dated October 1, 2024, available on the NSF website. These terms and conditions are consistent with the revised guidance specified in the OMB Guidance for Federal Financial Assistance published in the Federal Register on April 22, 2024.

Important information for proposers

All proposals must be submitted in accordance with the requirements specified in this funding opportunity and in the NSF Proposal & Award Policies & Procedures Guide (PAPPG) that is in effect for the relevant due date to which the proposal is being submitted. It is the responsibility of the proposer to ensure that the proposal meets these requirements. Submitting a proposal prior to a specified deadline does not negate this requirement.

Supports research into the design and implementation of safe learning-enabled systems in which safety is ensured with high levels of confidence.

Synopsis

As artificial intelligence (AI) systems rapidly increase in size, acquire new capabilities, and are deployed in high-stakes settings, their safety becomes extremely important. Ensuring system safety requires more than improving accuracy, efficiency, and scalability: it requires ensuring that systems are robust to extreme events and that they are monitored for anomalous and unsafe behavior.

The objective of the Safe Learning-Enabled Systems program, which is a partnership between the National Science Foundation, Open Philanthropy, and Good Ventures, is to foster foundational research that leads to the design and implementation of learning-enabled systems in which safety is ensured with high levels of confidence.

While traditional machine learning systems are evaluated pointwise against a fixed test set, such static coverage provides only limited assurance when a system is exposed to unprecedented conditions in high-stakes operating environments. Verifying that the learning components of such systems achieve safety guarantees for all possible inputs may be difficult, if not impossible. Instead, a system’s safety guarantees will often need to be established with respect to systematically generated data from realistic (yet appropriately pessimistic) operating environments. Safety also requires resilience to “unknown unknowns,” which necessitates improved methods for monitoring for unexpected environmental hazards or anomalous system behaviors, including during deployment.

In some instances, safety may further require new methods for reverse-engineering, inspecting, and interpreting the internal logic of learned models to identify unexpected behavior that could not be found by black-box testing alone, as well as methods for improving performance by directly adapting a system’s internal logic. Whatever the setting, a learning-enabled system’s end-to-end safety guarantees must be specified clearly and precisely. Any system claiming to satisfy a safety specification must provide rigorous evidence, through analysis corroborated empirically and/or by mathematical proof.
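To make the deployment-monitoring idea above concrete, the sketch below is a minimal, purely illustrative example (not a method prescribed by this solicitation): a monitor fit on training-time feature statistics that flags deployment-time inputs drifting far out of distribution. The class name RuntimeMonitor, the z-score rule, the threshold, and the synthetic data are all hypothetical choices for illustration.

```python
import numpy as np

class RuntimeMonitor:
    """Illustrative deployment-time monitor: flags inputs whose features
    drift far from the statistics of the training distribution."""

    def __init__(self, threshold: float = 4.0):
        self.threshold = threshold  # hypothetical z-score cutoff
        self.mean = None
        self.std = None

    def fit(self, train_features: np.ndarray) -> None:
        # Per-feature statistics of the nominal (training) distribution.
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-8  # avoid divide-by-zero

    def is_anomalous(self, x: np.ndarray) -> bool:
        # Flag the input if any feature deviates beyond `threshold` std devs.
        z = np.abs((x - self.mean) / self.std)
        return bool(np.max(z) > self.threshold)


# Usage sketch: fit on in-distribution data, then screen deployment inputs.
rng = np.random.default_rng(0)
monitor = RuntimeMonitor(threshold=4.0)
monitor.fit(rng.normal(0.0, 1.0, size=(10_000, 16)))   # nominal data
print(monitor.is_anomalous(rng.normal(0.0, 1.0, 16)))  # likely False
print(monitor.is_anomalous(np.full(16, 10.0)))         # True: far out of distribution
```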

Program contacts

Name                Title             Email             Phone           Organization
Jie Yang            Program Director  jyang@nsf.gov     (703) 292-4768  CISE/IIS
Anindya Banerjee    Program Director  abanerje@nsf.gov  (703) 292-7885  CISE/CCF
David Corman        Program Director  dcorman@nsf.gov   (703) 292-8754  CISE/CNS
Pavithra Prabhakar  Program Director  pprabhak@nsf.gov  (703) 292-2585  CISE/CCF
