AI for Dynamic Systems (AIDyS) Lab

The AI for Dynamic Systems (AIDyS, pronounced "i-dis") Lab focuses on artificial intelligence methods for offline and online learning in dynamic systems. The primary methodology we develop is offline and online reinforcement learning, currently the most promising approach for such systems and learning problems. Themes of interest to the lab include Offline and Online Reinforcement Learning (RL), Statistical Learning Theory for Dynamic Systems, Safe RL, Intelligent Autonomy, Multi-Agent RL, Imitation Learning, and Interpretable Approximations to Deep RL. The main applications of interest are autonomous robotics and vehicles, energy systems, transportation, and healthcare. The USC autoDrive Lab is a sister lab that focuses on AI-based design and formal verification of control for intelligent autonomous systems.

People

Prof. Rahul Jain, Director and PI

Dhruva Mokhavinasu, Postdoc (from October 2021)

Krishna Kalagarla, RA, PhD Student

Nathan Dahlin, RA, PhD Student

Mehdi Jafarnia, RA, PhD Student

Yogesh Awate, RA, PhD Student

Rishabh Agarwal, RA, PhD Student

Akhil Agnihotri, RA, PhD Student

Cenyi Liu, MS Student Researcher

William Chang, CURVE Fellow, Undergraduate Researcher

Matthew Cho, CURVE Fellow, Undergraduate Researcher

Boyuan Chang, Undergraduate Researcher

Projects

Formal Reinforcement Learning Methods, NSF

Online Learning for Real-Time Control of Stochastic Systems, NSF

New Approach to Design and Analysis of Reinforcement Learning Algorithms, NSF