Flyer: Supervised Autonomy with Formal Logic: Interpretable Learning, Reasoning, and Safety

As autonomous systems increasingly rely on supervised autonomy, where learning-based policies are trained, monitored, and corrected using structured guidance, their decision-making must be interpretable, verifiable, and trustworthy. In open-world and safety-critical environments, purely black-box supervision is insufficient for diagnosing failures, enforcing constraints, and ensuring reliable deployment. This talk presents a unified framework for supervised, interpretable autonomy grounded in formal logic and its spatial extension SpaTiaL, enabling principled supervision across learning, perception, planning, and control. We introduce methods for supervising policy behavior through formal specifications inferred from demonstrations and trajectories, with statistical guarantees provided by conformal prediction and its differentiable extensions. We further show how natural-language supervision can be translated into SpaTiaL formulas encoding object-centric geometric and temporal relations, enabling structured oversight of perception and motion planning via quantitative robustness measures. Finally, we demonstrate how logic-guided supervision of reinforcement learning enables safe exploration, interpretable failure diagnosis, and sample-efficient policy adaptation in safety-critical settings.
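The quantitative robustness measures mentioned above assign a trajectory a signed satisfaction margin rather than a yes/no verdict: positive means the specification holds, and the magnitude says by how much. As a minimal illustrative sketch (not SpaTiaL's actual API; the function name, trajectory, and goal below are invented for illustration), here is the standard quantitative semantics of one simple temporal property, "eventually the agent is within a given radius of a goal":

```python
import numpy as np

def rho_eventually_near(traj, goal, radius):
    """Signed robustness of 'eventually ||x_t - goal|| < radius'.

    Under the usual quantitative semantics, 'eventually' is a max over
    time of the instantaneous margin (radius - distance). The result is
    positive iff the property holds, and its magnitude (in distance
    units) says how robustly it holds or how badly it fails.
    """
    dists = np.linalg.norm(np.asarray(traj, dtype=float)
                           - np.asarray(goal, dtype=float), axis=1)
    return float(np.max(radius - dists))  # max over time = "eventually"

# Toy planar trajectory that passes within 0.1 of the goal.
traj = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.9), (3.0, 3.0)]
goal = (2.0, 2.0)
print(rho_eventually_near(traj, goal, radius=0.5))   # positive: satisfied
print(rho_eventually_near(traj, goal, radius=0.05))  # negative: violated
```

Because the margin is a real number rather than a Boolean, it can serve directly as a reward shaping term or a monitoring signal, which is what makes logic-guided supervision of learned policies tractable.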
