Basic problems, methods and results in the theory of stochastic dynamical systems will be presented, concerning optimal control and stochastic filtering. Both discrete-time and continuous-time models will be considered, over finite and infinite horizon; continuous-time models will be described by stochastic differential equations. The main approaches will be dynamic programming and the study of the Hamilton-Jacobi-Bellman equation (including cases of solutions with low regularity), backward stochastic differential equations, and the stochastic maximum principle (in the sense of Pontryagin). In the framework of filtering theory, the fundamental evolution equations for the conditional laws given the observations will be presented. A brief introduction will be given to other optimization problems, such as optimal stopping and impulse control, together with applications to basic models, for instance optimal investment problems in Mathematical Finance and linear-quadratic optimal control.
Expected learning outcomes
Students attending the course will become acquainted with various classes of control and optimization problems for stochastic systems (in discrete and continuous time, the latter formulated by stochastic differential equations, over finite and infinite horizon). They will learn the basic methods to solve such problems: dynamic programming and Hamilton-Jacobi-Bellman equations, backward stochastic differential equations, and the stochastic maximum principle. They will also see how important models can be analyzed, such as optimal investment problems in Mathematical Finance and linear-quadratic problems. Finally, they will learn the fundamentals of stochastic filtering theory, both in discrete and continuous time.
1) Discrete-time stochastic optimal control. Stochastic controlled dynamical systems, payoff functionals over finite or infinite horizon. Value function, dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation. Applications to reference models.
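To illustrate the dynamic programming method of item 1, here is a minimal sketch of the backward recursion for a finite-horizon problem on a finite state space. The model below (3 states, 2 actions, transition matrices `P`, rewards `r`, terminal reward `g`) is a hypothetical toy example, not one of the course's reference models.

```python
import numpy as np

n_states, n_actions, horizon = 3, 2, 5

# P[a][x, y] = probability of moving from state x to state y under action a.
P = np.array([
    [[0.7, 0.3, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]],   # action 0: sluggish dynamics, no cost
    [[0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8],
     [0.1, 0.1, 0.8]],   # action 1: drift toward state 2, at a cost
])

# r[x, a]: running reward; state 2 pays off, action 1 costs 0.5.
r = np.array([[0.0, -0.5],
              [0.0, -0.5],
              [1.0,  0.5]])

g = np.array([0.0, 0.0, 2.0])  # terminal reward

# Backward recursion (the HJB equation in discrete time):
#   V_N = g,   V_n(x) = max_a [ r(x, a) + sum_y P_a(x, y) V_{n+1}(y) ].
V = g.copy()
policy = np.zeros((horizon, n_states), dtype=int)
for n in reversed(range(horizon)):
    Q = r + np.stack([P[a] @ V for a in range(n_actions)], axis=1)  # Q[x, a]
    policy[n] = Q.argmax(axis=1)
    V = Q.max(axis=1)

print(V)          # value function at time 0
print(policy[0])  # optimal action at time 0 in each state
```

The recursion computes the value function and an optimal feedback policy in a single backward pass; an infinite-horizon discounted version would instead iterate the same maximization to a fixed point.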
2) Optimal control of stochastic differential equations. Controlled stochastic differential equations, payoff functionals over finite and infinite horizon. Value function and the dynamic programming principle. Hamilton-Jacobi-Bellman (HJB) equations of parabolic and elliptic type. Verification theorems for regular solutions of the HJB equation. Linear-quadratic stochastic optimal control. Introduction to generalized solutions to the HJB equation, in the viscosity sense. Application to optimal portfolio problems.
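For orientation on item 2, the parabolic HJB equation associated with a controlled diffusion $dX_t = b(X_t, a_t)\,dt + \sigma(X_t, a_t)\,dW_t$, running payoff $f$ and terminal payoff $g$ can be written as follows (a standard form for a maximization problem; sign and notation conventions vary across references):

```latex
\begin{cases}
\partial_t v(t,x) + \sup_{a \in A}\Big\{ b(x,a)\cdot D_x v(t,x)
  + \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^{\top}(x,a)\,D_x^2 v(t,x)\big)
  + f(x,a) \Big\} = 0, & (t,x)\in[0,T)\times\mathbb{R}^n,\\[4pt]
v(T,x) = g(x), & x\in\mathbb{R}^n.
\end{cases}
```

Verification theorems identify a sufficiently regular solution $v$ with the value function; when such regularity fails, the viscosity-solution framework mentioned above takes over.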
3) Backward stochastic differential equations. Formulation, existence and uniqueness results. Probabilistic representation of solutions to semilinear partial differential equations and of the value function of an optimal control problem. Stochastic maximum principle in the sense of Pontryagin.
4) Overview of other problems and methods. Time permitting, various other topics may also be presented, for instance control with partial observation, ergodic control, optimal stopping problems, optimal switching, impulse control.
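As a pointer to the formulation in item 3: in its basic form, a backward stochastic differential equation asks for an adapted pair $(Y, Z)$ satisfying (notation may differ slightly from the lecture notes)

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
\qquad t \in [0, T],
```

where $\xi$ is a given terminal condition and $f$ is the driver. Unlike a forward equation, the terminal value is prescribed, and the second unknown $Z$ is what makes the solution adapted to the filtration of the Brownian motion $W$.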