AE8803: Optimization-Based Learning Control and Games, Fall 2020

Course Description

This course covers analysis and design techniques for learning-based optimal control systems and differential games. Topics include constrained optimization, multiplier methods, descent and quasi-Newton methods, the calculus of variations, Pontryagin's maximum principle, matrix Riccati equations, nonlinear optimality and its relation to Hamilton's principle in physics, suboptimal control, dynamic and approximate dynamic programming, the duality of optimal control and optimal estimation, differential games, Hamilton-Jacobi equations, value and policy iteration, Q-learning, and reinforcement learning approximation methods that converge to optimal control solutions without knowledge of the full system dynamics. Tentative Syllabus: Here

Instructor

Prof. Kyriakos G. Vamvoudakis
Montgomery Knight Building, Office 415-B,
The Daniel Guggenheim School of Aerospace Engineering,
Georgia Tech, Atlanta,
GA 30313 USA
E-mail: kyriakos at gatech.edu
Website: http://kyriakos.ae.gatech.edu/
Telephone: +1 (404)-385-3342

Office Hours

TuThu, 11:00am - 12:30pm, Montgomery Knight Building, Office 415-B

Please email or phone me in advance to schedule an appointment. If you have any questions about the course, please send me an email; I will try to respond as quickly as possible. I will also broadcast particularly good questions (and their answers) to the entire class. If you plan to come to office hours with questions about the homework, please be prepared to show your prior attempts at solving the problem.

Lecture Time and Place

TuTh, 9:30am - 10:45am, Guggenheim 244

Course Credit

Units: 3

Prerequisites

Competency in linear algebra and probability, and an understanding of modern (state-space) control theory.

Textbooks

There is no required text. The instructor will provide notes and research papers.

Website

The website for this course will be maintained on Canvas and on this page.