Advanced Topics in Probability Theory

A.Y. 2017/2018
Learning objectives
Basic problems, methods and results in the theory of stochastic dynamical systems will be presented, concerning optimal control and stochastic filtering. Both discrete-time and continuous-time models will be considered, over finite and infinite horizon; continuous-time models will be described by stochastic differential equations. The main approaches will be dynamic programming and the study of the Hamilton-Jacobi-Bellman equation (including cases of solutions with low regularity), backward stochastic differential equations, and the stochastic maximum principle (in the sense of Pontryagin). In the framework of filtering theory, the fundamental evolution equations for the conditional laws given the observations will be presented. A brief introduction to other optimization problems, such as optimal stopping and impulse control, will be given, and applications to basic models will be presented, for instance optimal investment problems in Mathematical Finance and linear-quadratic optimal control.
Expected learning outcomes
Students attending the course will become acquainted with various classes of control and optimization problems for stochastic systems (with discrete time, with continuous time and formulated by stochastic differential equations, on finite and infinite horizon). They will learn the basic methods to solve such problems: dynamic programming and Hamilton-Jacobi-Bellman equations, backward stochastic differential equations, the stochastic maximum principle. They will also see how important models can be analyzed, such as optimal investment problems in Mathematical Finance and linear quadratic problems. Finally, they will learn the fundamentals of stochastic filtering theory, both in discrete and continuous time.
Course syllabus and organization

Single session

Lesson period
First semester
Course syllabus
1) Discrete-time stochastic optimal control.
Stochastic controlled dynamical systems, payoff functionals over finite or infinite horizon. Value function, dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation. Applications to reference models.
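As an illustration of the backward dynamic programming recursion on a finite horizon, the following sketch solves a toy controlled Markov chain (all numbers, states and rewards are hypothetical, not taken from the course) via the Bellman equation V_t(x) = max_a [ r(x,a) + Σ_y P(y|x,a) V_{t+1}(y) ]:

```python
# Toy model: 2 states, 2 actions, horizon N (all data hypothetical).
# P[a][x][y] = probability of moving from state x to state y under action a
P = [
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
]
r = [[1.0, 0.0], [0.5, 0.8]]    # r[a][x]: running reward in state x under action a
g = [0.0, 2.0]                  # g[x]: terminal reward
N = 5                           # time horizon

V = list(g)                     # V_N = g
policy = []                     # policy[t][x] = optimal action at time t in state x
for t in range(N - 1, -1, -1):
    # Q[x][a] = r(x,a) + sum_y P(y|x,a) * V_{t+1}(y)
    Q = [[r[a][x] + sum(P[a][x][y] * V[y] for y in range(2))
          for a in range(2)] for x in range(2)]
    policy.insert(0, [max(range(2), key=lambda a: Q[x][a]) for x in range(2)])
    V = [max(Q[x]) for x in range(2)]   # Bellman recursion: V_t(x) = max_a Q[x][a]

print(V)         # value function at time 0
print(policy[0]) # optimal action at time 0, for each state
```

The same backward recursion underlies the discrete-time Hamilton-Jacobi-Bellman equation treated in the course; here the maximization over actions is exhaustive because the action set is finite.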

2) Optimal control of stochastic differential equations.
Controlled stochastic differential equations, payoff functionals over finite and infinite horizon. Value function and the dynamic programming principle. Hamilton-Jacobi-Bellman (HJB) equations of parabolic and elliptic type. Verification theorems for regular solutions of the HJB equation. Linear-quadratic stochastic optimal control. Introduction to generalized solutions to the HJB equation, in the viscosity sense. Application to optimal portfolio problems.
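As a notational reference (a standard formulation; the conventions actually adopted in the course may differ), the parabolic HJB equation associated with a controlled diffusion can be written as:

```latex
% Controlled SDE:  dX_s = b(X_s, a_s)\,ds + \sigma(X_s, a_s)\,dW_s,
% payoff  J = \mathbb{E}\Big[\int_t^T f(X_s, a_s)\,ds + g(X_T)\Big].
% The value function v(t,x) formally satisfies
\partial_t v(t,x)
  + \sup_{a \in A} \Big[\, b(x,a) \cdot D_x v(t,x)
  + \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma(x,a)\,\sigma(x,a)^{\top} D_x^2 v(t,x)\big)
  + f(x,a) \,\Big] = 0,
\qquad v(T,x) = g(x).
```

The verification theorems mentioned above make this formal derivation rigorous when a sufficiently regular solution exists; viscosity solutions handle the low-regularity case.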

3) Backward stochastic differential equations.
Formulation, existence and uniqueness results. Probabilistic representation of solutions to partial differential equations of semilinear type and of the value function of an optimal control problem. Stochastic maximum principle in the sense of Pontryagin.
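As a notational reference (a standard form; the course's conventions may differ), a backward stochastic differential equation on [0, T] driven by a Brownian motion W reads:

```latex
% BSDE with terminal condition \xi and generator f:
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
\qquad t \in [0,T].
% Nonlinear Feynman-Kac representation: if X is a forward diffusion with
% diffusion coefficient \sigma and u solves the associated semilinear PDE,
% then (formally)
Y_t = u(t, X_t), \qquad Z_t = \sigma(X_t)^{\top} D_x u(t, X_t).
```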

4) Introduction to stochastic filtering.
Formulation and examples of observation processes. The filtering equations in discrete time. Non-linear filtering equations in continuous time for observation processes with Brownian noise (the Fujisaki-Kallianpur-Kunita and the Duncan-Mortensen-Zakai equations).
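For reference, the two continuous-time filtering equations named above take the following standard form in the scalar-observation case (notation is illustrative):

```latex
% Signal X with generator L; observation  dY_t = h(X_t)\,dt + dB_t.
% Fujisaki-Kallianpur-Kunita equation for the conditional law \pi_t:
d\pi_t(\varphi) = \pi_t(L\varphi)\,dt
  + \big(\pi_t(h\varphi) - \pi_t(h)\,\pi_t(\varphi)\big)
    \big(dY_t - \pi_t(h)\,dt\big).
% Duncan-Mortensen-Zakai equation for the unnormalized law \rho_t:
d\rho_t(\varphi) = \rho_t(L\varphi)\,dt + \rho_t(h\varphi)\,dY_t.
```

The Zakai equation is linear in ρ_t, which is why the unnormalized formulation is often the starting point of the analysis; π_t is recovered by normalization.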

5) Overview on other problems and methods.
Depending on available time, various other topics may also be presented, for instance control under partial observation, ergodic control, optimal stopping problems, optimal switching, and impulse control.
MAT/06 - PROBABILITY AND STATISTICS - University credits: 6
Lessons: 42 hours
Monday, 10:30 am - 1:30 pm (by appointment; may be cancelled due to academic duties)
Department of Mathematics, via Saldini 50, office 1017. Online if required by pandemic conditions.