Reinforcement Learning based Computational Adaptive Optimal Control and System Identification

Abstract: The duality of estimation and control problems is a well-known fact in the control theory literature. Parameter convergence and closed-loop stability are usually of paramount interest for a given adaptive control scheme. However, a typical adaptive controller guarantees only closed-loop stability and signal boundedness; it guarantees neither any other performance measure nor convergence of the parameter estimates to their true values. Thus there is a need for a higher-level abstraction: a control scheme that acts in stages and prioritizes different objectives at each stage. This staged abstraction is inspired by human intuition for dealing with control and identification simultaneously and is hence named the “Intuitive Control Framework”.

The first stage prioritizes stabilization of the states only. The controller moves on to the next stage after the unknown system is stabilized. Subsequent stages involve optimization with respect to different performance metrics (typically a linear quadratic cost) through adaptive learning. Once enough information for identification has been acquired, the control schemes developed for the various optimality metrics are used to estimate the unknown parameters in the final stage. This narrative of selective prioritization of objectives, together with the higher-level abstraction of the control scheme, is illustrated for a continuous-time linear time-invariant state-space realization with state feedback.
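As a rough illustration of the final stage described above, the sketch below estimates the unknown matrices of a continuous-time LTI system, ẋ = Ax + Bu, by least squares from samples of states, inputs, and state derivatives. This is a minimal, idealized sketch (noise-free data, hypothetical matrices and sample counts), not the specific adaptive scheme of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" continuous-time LTI system (unknown to the controller)
A_true = np.array([[0.0, 1.0], [-2.0, -3.0]])
B_true = np.array([[0.0], [1.0]])

# Final stage: once the earlier stages have gathered sufficiently rich
# (persistently exciting) data, estimate [A B] by least squares from
# samples of (x, u, xdot).
X, U, Xdot = [], [], []
for _ in range(50):
    x = rng.standard_normal(2)
    u = rng.standard_normal(1)
    X.append(x)
    U.append(u)
    Xdot.append(A_true @ x + B_true @ u)  # idealized noise-free derivative

Z = np.hstack([np.array(X), np.array(U)])          # regressor rows [x^T u^T]
Theta, *_ = np.linalg.lstsq(Z, np.array(Xdot), rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T            # since xdot^T = [x^T u^T] [A^T; B^T]
```

With noise-free, sufficiently exciting data the least-squares estimates recover the true matrices; in practice the excitation provided by the earlier stabilization and optimization stages determines how well-conditioned this estimation is.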

Numerous real-world applications can benefit from this online system identification routine inspired by the human cognitive process, which offers a seamless integration of control and identification under a hierarchy of priorities. Further, the identified system matrices enable the computation of forward reachable sets to assess future states, a computation that is otherwise infeasible for an unknown system without extensive experimentation.
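One simple way to approximate such a forward reachable set, once estimates of A and B are in hand, is to simulate many trajectories under randomly sampled admissible inputs and collect the terminal states. The sketch below (all numerical values are illustrative assumptions, and forward Euler is used only for brevity) produces a sample-based under-approximation:

```python
import numpy as np

# Sample-based sketch of the forward reachable set at time T for
# xdot = A x + B u with |u| <= u_max, using identified matrices.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # identified A_hat (illustrative)
B = np.array([[0.0], [1.0]])               # identified B_hat (illustrative)
x0 = np.zeros(2)                           # known initial state
u_max, T, dt = 1.0, 1.0, 0.001

rng = np.random.default_rng(1)
endpoints = []
for _ in range(200):
    x = x0.copy()
    for _ in range(int(T / dt)):           # forward-Euler integration
        u = rng.uniform(-u_max, u_max, 1)  # random admissible input sample
        x = x + dt * (A @ x + B @ u)
    endpoints.append(x)
endpoints = np.array(endpoints)

# Axis-aligned bounding box of the sampled reachable states
lo, hi = endpoints.min(axis=0), endpoints.max(axis=0)
```

Rigorous reachability tools use set representations such as zonotopes or support functions rather than sampling; the point here is only that the computation becomes possible at all once the system matrices have been identified.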

Bio sketch of the Speaker: Dr. Kamesh Subbarao is currently an Associate Professor in the Department of Mechanical and Aerospace Engineering at the University of Texas at Arlington. His research interests include control of nonlinear dynamical systems that are subject to large uncertainties. He is actively involved in research on non-parametric mathematical models for dynamical systems with embedded sensing and distributed actuation capabilities. He has also been working on applying control-theoretic methods to aerodynamic and structural design optimization and on designing robust adaptive control laws for free-flying robotic spacecraft. He is an active member of the Autonomous Vehicle Laboratory initiative at UTA. He is an Associate Fellow of the AIAA and a Senior Member of the IEEE.


Lecture by:
Dr. Kamesh Subbarao

11th December 2017

Dayananda Sagar University,
Kumarswamy Layout,
Bangalore – 78



The Main Campus:

  • Campus 1: Dayananda Sagar University
  • 6th Floor, Dental Block, Shavige Malleshwara Hills, Kumaraswamy Layout,
    Bangalore-560 111, India.
  • Schools at Campus 1:
    School of Commerce & Management Studies (B.Com - ACCA, CMA, CA | BBA, BBA - BFSI),
    Basic & Applied Sciences (B.Sc. | M.Sc.),
    Health Sciences (Physiotherapy - BPT | MPT | Pharmacy - B.Pharm, Pharm.D, M.Pharm | Nursing - B.Sc., PB B.Sc. | M.Sc.),
    Arts & Humanities (BA (Hons.) in Journalism).
  • Office of Admissions: +91 80 4646 1800

  • Campus 3: Dayananda Sagar University
  • Innovation Campus, Hosur Main Road, Kudlu Gate, Bangalore-560 114, Karnataka, India.
  • Schools at Campus 3:
    School of Engineering (B.Tech. | M.Tech.| BCA),
    Department of MBA (MBA)
  • Office of Registrar: +91 80 4909 2910 / 11
    Office of Dean (School of Engineering): +91 80 4909 2986 / 32 / 33
    Dean - MBA: +91 80 4909 2931
    Research Cell: +91 80 4909 2912 / +91 97390 17462
    Office of Admissions: +91 80 4909 2924 / 25
    Fax : +91 80 4220 1997
  • E-mail: [email protected] / [email protected] / [email protected]