AI for Dynamic Systems: Reinforcement Learning, Online Learning, Statistical Learning Theory and Intelligent Autonomy
My research interests have spanned Reinforcement Learning, Stochastic Control, Game Theory and Networks. My current interests focus on the intersection of Reinforcement Learning (RL) and Control, especially as it relates to problems of intelligent autonomy, data-driven optimization and control, and multi-agent systems. I work on fundamental problems with applications in autonomous robotics and vehicles, energy systems, transportation and healthcare.
One theme of my research has been Stochastic Control & Reinforcement Learning, including Markov decision processes (MDPs), multi-armed bandit (MAB) models and stochastic games. My work on such problems has ranged from developing an ‘empirical process’ or PAC theory for MDPs to, more recently, universal empirical dynamic programming algorithms for continuous state and action space systems, which have been used to train a quadrupedal robot to walk. Online learning for MAB and MDP models is a significant interest: I have introduced and worked on combinatorial multi-armed bandit models and decentralized learning for multi-armed bandit models, both of which are highly relevant to opportunistic spectrum access. I have also worked on online reinforcement learning for MDPs. Some of my recent research focuses on the problem of ‘Safe and Intelligent Autonomy’, where the goal is to develop algorithms for autonomous control of robots and vehicles using RL and ‘Inverse RL/Imitation Learning’ techniques (which aim to learn from expert demonstrations).
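To make the online-learning setting concrete, here is a minimal sketch of the classical UCB1 index policy for a stochastic multi-armed bandit. This is the textbook algorithm, not any specific algorithm from my own work; the Bernoulli reward model and all parameter names are illustrative.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Minimal UCB1 sketch for a stochastic multi-armed bandit.

    arm_means holds the true Bernoulli reward means (unknown to the
    learner; used here only to simulate rewards). Returns the total
    reward collected and the pull counts per arm.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms   # number of pulls of each arm
    sums = [0.0] * n_arms   # cumulative reward of each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize the indices
        else:
            # pick the arm maximizing empirical mean + confidence bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts
```

Over a long horizon the suboptimal arms are pulled only logarithmically often, which is the sense in which such index policies achieve low regret.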
My approach to these research themes draws on mathematical methods from probability, optimization, control theory and game theory for the design and analysis of stochastic systems and networks, via the solution of fundamental problems in stochastic control, online, reinforcement and statistical learning, and game theory.
Other Research Interests
Queueing Theory, Game Theory, Energy System Economics, Network Economics
Another theme of my research has been Networks & Game Theory, which led me to Queueing Theory and Smart Energy Systems. Along with my collaborators, I introduced a new (non-classical) queueing model that arises in many real-world scenarios, which we call the ∆(i)/GI/1 queueing model, and developed the mathematical theory for such models, which we call transitory queues. I have also worked on scheduling problems in healthcare operations.
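The transitory setting can be illustrated with a short simulation sketch: a finite population of customers draw their arrival times i.i.d. over an interval (so the arrival process is given by the order statistics), and a single server works through them first-come-first-served. The uniform arrival and exponential service distributions below are illustrative choices, not the general model.

```python
import random

def transitory_queue_waits(n, horizon, mean_service, seed=0):
    """Illustrative simulation in the spirit of a Delta_(i)/GI/1 queue.

    n customers each draw an arrival time i.i.d. uniform on
    [0, horizon]; the sorted arrival times (order statistics) form the
    arrival process. A single FCFS server has i.i.d. exponential
    service times. Waiting times follow the Lindley-type recursion.
    """
    rng = random.Random(seed)
    arrivals = sorted(rng.uniform(0.0, horizon) for _ in range(n))
    waits = []
    depart = 0.0  # departure time of the previous customer
    for a in arrivals:
        start = max(a, depart)   # service starts when the server frees up
        waits.append(start - a)  # time this customer spends waiting
        depart = start + rng.expovariate(1.0 / mean_service)
    return waits
```

Unlike a classical stationary queue, the queue here builds up and drains over a finite horizon, which is the regime the transitory-queue theory addresses.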
On Smart Energy Systems, my focus has been on economics, pricing, market mechanisms, and integration of renewable energy sources. We developed dynamic pricing algorithms for “demand response” under uncertainty, and stochastic incentive mechanisms for renewable energy integration. I also have a continuing interest in the theory of Network Market Design, to which I have contributed by devising several incentive mechanisms for efficient network resource allocation.
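As a toy illustration of dynamic pricing under uncertainty, the sketch below runs a simple dual-gradient (tâtonnement-style) price update: the price rises when noisy aggregate demand exceeds uncertain renewable supply and falls otherwise. This is a generic textbook mechanism, not the specific algorithms referenced above; the linear demand model and all parameters are assumptions for illustration.

```python
import random

def dynamic_price(demand_fn, mean_supply, steps=200, step_size=0.05, seed=0):
    """Toy dual-gradient dynamic-pricing loop for demand response.

    demand_fn maps a price to aggregate demand; supply each period is
    mean_supply plus Gaussian noise (uncertain renewables). The price
    moves in the direction of the demand-supply imbalance.
    """
    rng = random.Random(seed)
    price = 1.0
    for _ in range(steps):
        supply = mean_supply + rng.gauss(0.0, 0.1)   # uncertain supply
        excess = demand_fn(price) - supply           # imbalance signal
        price = max(0.0, price + step_size * excess)  # gradient-style step
    return price
```

With a linear demand curve such as `lambda p: 10 - 2 * p` and mean supply 4, the update contracts toward the market-clearing price of 3, hovering nearby because of the supply noise.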
Current Funded Projects
Learning for Decentralized Control of Multi-Agent Systems, NSF
Verification and Synthesis of AI-Enabled Systems, ONR
Formal Reinforcement Learning Methods, NSF
Online Learning for Real-Time Control of Stochastic Systems, NSF
New Approach to Design and Analysis of Reinforcement Learning Algorithms, NSF