Discrete-time stochastic optimal control problems are stated over a finite number of decision stages, and the state vector is assumed to be perfectly measurable. Such problems are infinite-dimensional, since one has to find control functions of the state. Because of the general assumptions under which the problems are formulated, two approximation techniques are addressed. The first is an approximation of dynamic programming based on discretizing the state space. Instead of regular grids, which lead to an exponential growth of the number of samples (and thus to the curse of dimensionality), low-discrepancy sequences (such as quasi-Monte Carlo sequences) are used. The second approximation technique is the application of the “Extended Ritz Method” (ERIM), which consists of replacing the admissible functions with fixed-structure parametrized functions containing vectors of “free” parameters. This reduces the problem to easier nonlinear programming problems. If suitable regularity assumptions are satisfied, such problems can be solved by stochastic gradient algorithms. The gradient can be computed by resorting to the classical adjoint equations of deterministic optimal control, with the addition of one term that depends on the chosen family of fixed-structure parametrized functions.
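As a minimal sketch of the first point, the snippet below contrasts the sample count of a regular grid (exponential in the state dimension) with a Sobol low-discrepancy sequence, whose number of points is chosen independently of the dimension. The dimension `d = 6` and the grid resolution `m = 5` are illustrative assumptions, not values from the chapter.

```python
# Sketch: state-space sampling for approximate dynamic programming.
# A regular grid needs m**d points (curse of dimensionality), while a
# low-discrepancy (quasi-Monte Carlo) Sobol sequence covers [0, 1]^d
# with a sample size chosen freely, independent of d.
import numpy as np
from scipy.stats import qmc

d = 6  # state dimension (hypothetical)
m = 5  # grid points per coordinate (hypothetical)

# Regular grid: exponential growth of the number of samples.
grid_size = m ** d  # 5**6 = 15625 points

# Sobol sequence: 2**10 = 1024 points, regardless of d.
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
sobol_points = sampler.random_base2(m=10)

print(grid_size, sobol_points.shape)
```

Doubling the grid resolution multiplies `grid_size` by `2**d`, whereas the Sobol sample can be refined point by point, which is what makes low-discrepancy discretizations attractive in higher-dimensional state spaces.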
|Title:||Stochastic optimal control with perfect state information over a finite horizon|
|Publication date:||2020|
|Appears in the typologies:||02.01 - Contribution in volume (Chapter or essay)|