
Check out Guanya's new faculty lightning talk at CMU SCS for a 5-min research summary.


Safe Learning-based Nonlinear Control with Learned Robotic Agility

Recent advances in machine learning open the door to new capabilities in autonomous systems. However, in safety-critical settings, the learning components must interact with the rest of the autonomous system in a way that guards against catastrophic failures, with provable guarantees. In addition, from computational and statistical standpoints, the learning system must incorporate prior knowledge for efficiency and generalizability. Leveraging control-theoretic tools and prior knowledge, we aim to develop learning-based control methods with both guarantees and new capabilities.


Offline Learning and Online Adaptation

Real-world robotic systems must operate in unknown and dynamic environments, where the decision-maker has to adapt quickly to uncertainties. For example, search and rescue with legged robots necessitates traversing complicated terrain. Deep learning offers representational power but is often too slow to update onboard; adaptive control, on the other hand, can update as fast as the feedback control loop and comes with guarantees. Our goal is to develop algorithms that learn effectively from offline data and fine-tune/adapt efficiently in real time.
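One common pattern behind this idea can be sketched in a few lines: offline, a deep model learns nonlinear basis features; online, only a last linear layer is adapted at control rates. The example below is a minimal, hypothetical illustration (the features, gain, and target residual are invented for this sketch, not taken from any specific method):

```python
import numpy as np

# Minimal sketch (hypothetical): offline we learn basis features phi(x);
# online we adapt only the linear head a_hat, which is cheap enough to run
# inside the feedback control loop. True residual: f(x) = phi(x) @ a_true.
rng = np.random.default_rng(0)

def phi(x):
    # Stand-in for learned features, frozen after offline training.
    return np.array([1.0, x, np.sin(x)])

a_true = np.array([0.5, -1.0, 2.0])   # unknown environment parameters
a_hat = np.zeros(3)                   # online estimate
gamma = 0.1                           # adaptation gain

for _ in range(2000):
    x = rng.uniform(-2, 2)
    y = phi(x) @ a_true               # measured residual (e.g., a force)
    err = phi(x) @ a_hat - y          # prediction error
    a_hat -= gamma * phi(x) * err     # gradient adaptation law

print(np.round(a_hat, 2))
```

The split matters because the adaptation step is a constant-time vector update, whereas retraining the full network online would be far too slow for a high-rate control loop.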


Structured Reinforcement Learning and Control

Most RL algorithms (e.g., TRPO, SAC) are general-purpose and task-agnostic. In contrast, control methods are often designed for specific systems or tasks, and their success relies heavily on the structures inside those systems and tasks. We seek to encode these structures and algorithmic principles into black-box RL algorithms to make them more data-efficient, robust, interpretable, and safe. We are particularly interested in hierarchical and superpositional RL and control approaches.
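As a toy illustration of the hierarchical idea (invented for this sketch, not a method from the source), the code below composes a slow high-level policy that proposes subgoals with a fast low-level tracking controller; all names, gains, and dynamics are hypothetical:

```python
import numpy as np

# Hypothetical hierarchical decomposition: a slow high-level policy picks
# subgoals, and a fast low-level proportional controller tracks each subgoal
# on simple single-integrator dynamics x' = u.

def high_level(goal, x, horizon=5):
    # Propose a subgoal a fraction of the way toward the final goal.
    return x + (goal - x) / horizon

def low_level(x, subgoal, kp=0.5):
    # Proportional tracking law; runs at the fast control rate.
    return kp * (subgoal - x)

x, goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
for _ in range(50):                  # slow loop: replan subgoal
    sg = high_level(goal, x)
    for _ in range(10):              # fast loop: track subgoal
        x = x + low_level(x, sg)
print(np.round(x, 2))
```

The design choice being illustrated: the high level can be a learned policy while the low level stays a simple controller with well-understood behavior, which is one way structure can make an RL system more interpretable.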


Learning and Control Theory: Towards a Unified Framework

Many concepts in the machine learning and control communities are closely related: model-based RL and optimal control, online learning and adaptive control, domain randomization and robust control, online optimization and MPC, to name a few. We seek to build interfaces and unified frameworks that not only deepen the fundamental connections between learning and control but also inspire new algorithms. One example of such a connection is analyzing MPC's dynamic regret (a learning-theoretic metric).
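To make the dynamic-regret metric concrete, the sketch below compares a receding-horizon controller against the trajectory that is optimal in hindsight, on a toy online problem with tracking and switching costs. The setup (costs, horizon, targets) is illustrative and not drawn from any particular result:

```python
import numpy as np

# Toy dynamic regret: per-step cost (x_t - theta_t)^2 plus switching cost
# lam*(x_t - x_{t-1})^2. Dynamic regret = (cost of receding-horizon control)
# minus (cost of the hindsight-optimal trajectory).
rng = np.random.default_rng(0)
T, lam = 60, 1.0
theta = np.cumsum(rng.normal(0, 0.5, T))   # time-varying targets

def solve_traj(theta_win, x_prev):
    # Minimize sum (x_t - th_t)^2 + lam*(x_t - x_{t-1})^2 over the window;
    # the unconstrained quadratic reduces to a tridiagonal linear system.
    n = len(theta_win)
    A = np.zeros((n, n))
    b = np.array(theta_win, dtype=float)
    for t in range(n):
        A[t, t] = 1.0 + lam + (lam if t < n - 1 else 0.0)
        if t > 0:
            A[t, t - 1] = A[t - 1, t] = -lam
    b[0] += lam * x_prev
    return np.linalg.solve(A, b)

def cost(xs, x_prev=0.0):
    sw = np.diff(np.concatenate([[x_prev], xs]))
    return np.sum((xs - theta) ** 2) + lam * np.sum(sw ** 2)

def mpc(h):
    # Receding horizon: plan h steps ahead, apply only the first action.
    xs, x_prev = [], 0.0
    for t in range(T):
        x_prev = solve_traj(theta[t:t + h], x_prev)[0]
        xs.append(x_prev)
    return np.array(xs)

opt = solve_traj(theta, 0.0)               # optimal in hindsight
regret = cost(mpc(3)) - cost(opt)
print(round(float(regret), 4))
```

By construction the hindsight-optimal trajectory lower-bounds every online policy, so the regret is nonnegative, and it shrinks to zero as the preview horizon approaches T.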


Swarm Intelligence

Learning and control for robot swarms present new challenges, such as complex inter-agent interactions and dynamic network topology. We aim to develop scalable and robust decision-making methods for multi-agent robotic systems by leveraging properties such as symmetry, locality, and invariance. We are also interested in how heterogeneous robots interact (e.g., drones and legged robots), and in human-robot teaming.
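As a minimal illustration of the symmetry/invariance idea (invented for this sketch), the consensus controller below is permutation-invariant: relabeling the agents commutes with the control law, so the swarm's behavior does not depend on agent ordering:

```python
import numpy as np

# Hypothetical permutation-invariant swarm controller: each agent moves
# toward the mean of all agent positions (a classic consensus law), so
# the update is symmetric under any relabeling of the agents.
rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, size=(8, 2))   # 8 agents in the plane

def consensus_step(p, k=0.2):
    # u_i = k * mean_j(p_j) - k * p_i: built from a symmetric pooling op.
    return p + k * (p.mean(axis=0, keepdims=True) - p)

for _ in range(100):
    pos = consensus_step(pos)
spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()

# Permutation invariance: shuffling agents before the step equals
# shuffling them after the step.
p0 = rng.uniform(-5, 5, size=(8, 2))
perm = rng.permutation(8)
print(np.allclose(consensus_step(p0)[perm], consensus_step(p0[perm])))
```

Because the controller is expressed through a pooling operation over the set of agents, the same law scales to any number of agents without retraining, which is one reason such invariances matter for scalable multi-agent design.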