About the series
Join us for the Safe Learning-Enabled Systems Webinar (NSF 23-562). As artificial intelligence (AI) systems rapidly increase in size, acquire new capabilities, and are deployed in high-stakes settings, ensuring their safety becomes critically important. Ensuring system safety requires more than improving accuracy, efficiency, and scalability: it requires ensuring that systems are robust to extreme events and monitoring them for anomalous and unsafe behavior. A learning-enabled system is one that has embedded machine-learning components. Increasingly, these systems underpin critical components of large-scale, safety-critical systems in domains such as healthcare and medicine, criminal justice, autonomous and cyber-physical systems, finance, and high-performance computing. Given their deployment in such high-stakes settings, it is imperative that learning-enabled systems be safe: developers must ensure that undesirable system behaviors do not arise once a system is deployed. Undesirable behaviors encompass not only overt blunders, such as prediction errors and system crashes, but also silent failures, such as reporting unjustified confidence on out-of-distribution inputs or competently achieving unintended objectives. This webinar will discuss the solicitation and answer questions from the research community.
Register in advance for this webinar:
After registering, you will receive a confirmation email containing information about joining the webinar.