Reinforcement Learning-Based Control of Uncertain Nonlinear Systems
Prof. Lihua Xie (Nanyang Technological University)
April 2, 2021, 10:00–11:30 · Tencent Meeting ID: 923 684 991

Host: Prof. Tao Li


Reinforcement learning (RL), inspired by learning behaviour in nature, is a goal-oriented learning strategy in which an agent learns a policy that optimizes a pre-defined reward by interacting with its environment. Because it is data-driven, effective in reaching optimal behaviour, and adaptive to uncertain environments, RL has seen rapid progress in the control community. In this talk, we shall first discuss RL-based disturbance rejection control for uncertain nonlinear systems with a known nominal part. An extended state observer is first designed to estimate both the system state and the total uncertainty. Based on the observer output, the controller compensates for the total uncertainty in real time and, simultaneously, approximates the optimal policy for the compensated system online using a simulation-of-experience-based RL technique. The approach requires neither a persistence-of-excitation (PE) condition nor probing signals. We then extend the study to systems with an unknown nominal part, where a novel concurrent adaptive extended observer is developed to jointly estimate the system parameters and state, and simulation-of-experience-based RL is again used to approximate the optimal policy.
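To give a flavour of the extended-state-observer idea underlying the talk, the sketch below implements a standard linear ESO for a second-order plant, where the "total uncertainty" (lumped model error plus external disturbance) is treated as an extra state to be estimated. This is a generic textbook construction, not the speaker's specific method; the bandwidth parameterization of the gains and all numerical values are illustrative assumptions.

```python
import numpy as np

def eso_step(z, y, u, dt, omega=20.0, b0=1.0):
    """One Euler step of a linear extended state observer (ESO) for
    the second-order plant  x1' = x2,  x2' = f(t, x) + b0*u,
    where f lumps model uncertainty and external disturbance.
    z = [z1, z2, z3] estimates [x1, x2, f]."""
    e = y - z[0]                                  # output estimation error
    b1, b2, b3 = 3 * omega, 3 * omega**2, omega**3  # bandwidth-parameterized gains
    dz = np.array([z[1] + b1 * e,
                   z[2] + b0 * u + b2 * e,
                   b3 * e])
    return z + dt * dz

# Demo (assumed setup): estimate a constant total uncertainty f = 2.0
# acting on a double integrator, with the control held at zero.
dt, f_true = 1e-3, 2.0
x = np.zeros(2)            # true plant state [x1, x2]
z = np.zeros(3)            # observer state [z1, z2, z3]
for _ in range(5000):      # simulate 5 seconds
    u = 0.0
    x = x + dt * np.array([x[1], f_true + u])    # true plant dynamics
    z = eso_step(z, x[0], u, dt)                 # observer sees only x1

print(np.round(z, 2))      # z[2] should approach f_true = 2.0
```

In the disturbance-rejection scheme described in the abstract, the estimate `z[2]` would be fed back (as `-z[2]/b0`) to cancel the uncertainty in real time, leaving a compensated nominal system on which the RL component approximates the optimal policy.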


Prof. Lihua Xie received his Ph.D. in electrical engineering from the University of Newcastle, Australia, in 1992. From 1986 to 1989 he taught in the Department of Automatic Control at Nanjing University of Science and Technology. Since 1992 he has been with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, where he is currently a professor and the director of two university-level laboratories and research centres (the Delta-NTU Corporate Laboratory for Cyber-Physical Systems and the Centre for Advanced Robotics Technology Innovation), with total funding of about US$100 million. From July 2011 to June 2014 he served as Head of the Division of Control and Instrumentation. His research interests include robust control, networked control, compressed sensing, localization, and unmanned systems. He has been named a Highly Cited Researcher by Thomson Reuters and Clarivate Analytics every year since 2014. He is currently the Editor-in-Chief of Unmanned Systems and an Associate Editor of IEEE Transactions on Control of Network Systems. He has served as Editor-in-Chief of the IET Book Series on Control and as an Associate Editor of journals including IEEE Transactions on Automatic Control, IEEE Transactions on Control Systems Technology, Automatica, and IEEE Transactions on Circuits and Systems II. He was an IEEE Distinguished Lecturer (2011–2014) and an elected member of the Board of Governors of the IEEE Control Systems Society (January 2016–December 2018). He is a Fellow of the Academy of Engineering, Singapore, a Fellow of IEEE, a Fellow of IFAC, and a Fellow of CAA.