Embodied Artificial Intelligence Safety

Spring 2026. 16-886. Monday / Wednesday 11:00-12:20. GHC 4215.


Announcements

Hello!

Jan 1

See you soon! 🤓

Course Overview

Safety is a nuanced concept. For embodied systems like robots, we commonly equate safety with collision avoidance. But out in the “open world” it can mean much more: for example, a safe mobile manipulator should recognize when it is not confident about a requested task and know that areas roped off by caution tape should never be breached. However, designing systems with such a nuanced understanding remains an outstanding challenge, especially in the era of large robot behavior models.

In this graduate seminar, we study whether (and how) the rise of modern artificial intelligence (AI) models (e.g., deep neural trajectory predictors, large vision-language models, and latent world models) can be harnessed to unlock new avenues for generalizing safety to the open world. From a foundations perspective, we study safety methods from two complementary communities: control theory (which enables the computation of safe decisions) and machine learning (which enables uncertainty quantification and anomaly detection). Throughout the class, there will also be several guest lectures from experts in the field. Students will practice essential research skills, including reviewing papers, writing project proposals, and technical communication.
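To make the control-theoretic side of the course concrete, below is a minimal illustrative sketch (not course-provided code) of a switching-style safety filter: a nominal action is kept only if a safety value function certifies the resulting next state as safe, and a backup controller takes over otherwise. The helpers value, dynamics, and u_safe are hypothetical placeholders for a safety value function (e.g., from HJ reachability), a one-step dynamics model, and a safe backup policy.

```python
# Minimal, illustrative sketch of a switching-style safety filter.
# Assumed, hypothetical helpers (not defined here):
#   value(x):       safety value function, e.g., from HJ reachability; value(x) >= 0 means x is safe
#   dynamics(x, u): one-step (simulated) dynamics, returning the next state
#   u_safe(x):      safe backup controller, e.g., a safety-preserving policy

def safety_filter(x, u_nominal, value, dynamics, u_safe, margin=0.0):
    """Keep the nominal action if the predicted next state stays inside the safe set;
    otherwise override it with the backup action."""
    x_next = dynamics(x, u_nominal)
    if value(x_next) >= margin:
        return u_nominal   # nominal action certified safe for this step
    return u_safe(x)       # fall back to the safety-preserving action
```

This "least-restrictive" switching is the simplest form of a safety filter; the first unit of the course covers how such value functions and backup policies can be computed and learned.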

Prerequisites

The course is open to graduate students and advanced undergraduates. While there are no strict prerequisites, familiarity with sequential decision-making, machine learning, optimization, and probability is highly encouraged. Experience with a high-level programming language such as Python or MATLAB is also strongly encouraged.

Schedule (Tentative)

Control-Theoretic Safety Foundations

Jan. 12
Course Overview
Syllabus
Jan. 14
Sequential Decision-Making
Jan. 19
No Class (MLK Day)
Jan. 21
Why is Safe Control Hard and What are Safety Filters?
Data-Driven Safety Filters, Model Predictive Shielding
Jan. 26
Safety Filter Synthesis via Optimal Control
HJ Reachability Overview, HJ Viscosity Solution
Jan. 28
Robust Safety I
Differential Games, HJ Reach-Avoid Games I, HJ Reach-Avoid Games II
Feb. 2
Robust Safety II
Feb. 4
Guest Lecture · Computation I: Reinforcement Learning (Kensuke Nakamura)
HW #1 Due · Discounted Reachability, ISAACS
Feb. 9
Computation II: Self-Supervised Learning
DeepReach

Frontiers I

Feb. 11
Updating Safety Online
Parameterized Reachability, Reachability Adapted with Gaussian Processes, Local Updates, AnySafe
Feb. 16
“Semantic Safety”
Project Proposal Due · ASIMOV, Safety Representations from Language, SALT
Feb. 18
Latent-Space Safety
Latent Safety Filters, How to Train Your Latent CBF, Safety Filters for LLM Agents
Feb. 23
Latent-Space Safety
Paper Reading: Latent Representations for Provable Safety, What You Don’t Know Can Hurt You
Feb. 25
Runtime Monitoring & Recovery via VLMs
HW #2 Due: Feb 28 · Paper Reading: LLM Fallbacks, FOREWARN
Mar. 2
No Class (Spring Break 🏝️)
Mar. 4
No Class (Spring Break 🏝️)

Machine Learning & Statistical Safety Foundations

Mar. 9
Uncertainty Quantification I
On the Calibration of Modern NNs, Prof. Eric Nalisnick’s research and talks
Mar. 11
Mid-term Project Pitches
Mid-term Presentation Due
Mar. 16
Uncertainty Quantification II
Deep Ensembles, Deep Laplace Approx, Gaussian Processes Book
Mar. 18
Conformal Prediction
Mid-term Report Due: March 18 · Gentle Intro to Conformal, Perceive With Confidence
Mar. 23
Quantifying and Resolving Robot Uncertainty
Paper Reading: EnsembleDAgger, Robots that Ask for Help
Mar. 25
System vs. Component-Level Anomalies
System-Level OOD, Not All Errors, BYOVLA
Mar. 30
Risk-Aware Decision-Making
Paper Reading: What is Risk in Robotics?, Risk-Calibrated Interaction

Frontiers II

Apr. 1
Guest Lecture: Red-Teaming for Robotics
HW #3 Due
Apr. 6
Controlling In-Distribution
UNISafe, In-D CBF, Lyapunov Density Models, DynaGuide
Apr. 8
Uncertainty in Generative Models
Paper Reading: How Confident are Video Models?, World Models that Know When They Don’t Know
Apr. 13
Statistical Testing of Learned Policies
Robot Learning as an Empirical Science, Misinterpretations of Statistical Tests
Apr. 15
Statistical Testing of Learned Policies
Paper Reading: Statistical Safe Set Verification, How Generalizable is My BC Policy?

Project Presentations

Apr. 20
Project Presentations
Slides Due 11:59 pm ET, April 19
Apr. 22
Project Presentations
Project Report Due May 1

Instructor


Teaching Assistant
