Controlled Diffusion Processes

Author :
Publisher : Springer Science & Business Media
Total Pages : 314
Release :
ISBN-10 : 9783540709145
ISBN-13 : 3540709142
Rating : 4/5 (45 Downloads)

Synopsis Controlled Diffusion Processes by : N. V. Krylov

Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.

Relative Optimization of Continuous-Time and Continuous-State Stochastic Systems

Author :
Publisher : Springer Nature
Total Pages : 376
Release :
ISBN-10 : 9783030418465
ISBN-13 : 3030418464
Rating : 4/5 (65 Downloads)

Synopsis Relative Optimization of Continuous-Time and Continuous-State Stochastic Systems by : Xi-Ren Cao

This monograph applies the relative optimization approach to time nonhomogeneous continuous-time and continuous-state dynamic systems. The approach is intuitively clear and does not require deep knowledge of the mathematics of partial differential equations. The topics covered have the following distinguishing features: long-run average with no under-selectivity, non-smooth value functions with no viscosity solutions, diffusion processes with degenerate points, multi-class optimization with state classification, and optimization with no dynamic programming. The book begins with an introduction to relative optimization, including a comparison with the traditional approach of dynamic programming. The text then studies the Markov process, focusing on infinite-horizon optimization problems, and moves on to discuss optimal control of diffusion processes with semi-smooth value functions and degenerate points, and optimization of multi-dimensional diffusion processes. The book concludes with a brief overview of performance derivative-based optimization. Among the more important novel considerations presented are: the extension of the Hamilton–Jacobi–Bellman optimality condition from smooth to semi-smooth value functions by derivation of explicit optimality conditions at semi-smooth points and application of this result to degenerate and reflected processes; proof of semi-smoothness of the value function at degenerate points; attention to the under-selectivity issue for the long-run average and bias optimality; discussion of state classification for time nonhomogeneous continuous processes and multi-class optimization; and development of the multi-dimensional Tanaka formula for semi-smooth functions and application of this formula to stochastic control of multi-dimensional systems with degenerate points. The book will be of interest to researchers and students in the field of stochastic control and performance optimization alike.

Deterministic and Stochastic Optimal Control

Author :
Publisher : Springer Science & Business Media
Total Pages : 231
Release :
ISBN-10 : 9781461263807
ISBN-13 : 1461263808
Rating : 4/5 (07 Downloads)

Synopsis Deterministic and Stochastic Optimal Control by : Wendell H. Fleming

This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one-semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in the calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters. In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
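The linear regulator mentioned in the synopsis has a closed-form solution through a Riccati equation, and (by certainty equivalence) the same feedback gain is optimal in the stochastic linear-quadratic case. A minimal numerical sketch, with illustrative matrices that are not taken from the book, using SciPy's algebraic Riccati solver:

```python
# Infinite-horizon linear regulator: minimize the integral of x'Qx + u'Ru
# subject to dx/dt = Ax + Bu. The example system is a double integrator;
# all matrices here are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double-integrator dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                     # state cost weight
R = np.array([[1.0]])             # control cost weight

# Solve the algebraic Riccati equation A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback law u = -K x
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK is stable: all eigenvalues have negative real part
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.all(closed_loop_eigs.real < 0))
```

The solution matrix P is symmetric positive definite, and the resulting feedback stabilizes the system, which is the qualitative content of the regulator theory sketched here.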

Applied Stochastic Control of Jump Diffusions

Author :
Publisher : Springer Science & Business Media
Total Pages : 263
Release :
ISBN-10 : 9783540698265
ISBN-13 : 3540698264
Rating : 4/5 (65 Downloads)

Synopsis Applied Stochastic Control of Jump Diffusions by : Bernt Øksendal

Here is a rigorous introduction to the most important and useful solution methods for various types of stochastic control problems for jump diffusions, together with their applications. The discussion includes the dynamic programming method and the maximum principle method, and the relationship between them. The text emphasises real-world applications, primarily in finance. Results are illustrated by examples, with end-of-chapter exercises including complete solutions. The second edition adds a chapter on optimal control of stochastic partial differential equations driven by Lévy processes, and a new section on optimal stopping with delayed information. Basic knowledge of stochastic analysis, measure theory and partial differential equations is assumed.

Controlled Markov Processes and Viscosity Solutions

Author :
Publisher : Springer Science & Business Media
Total Pages : 436
Release :
ISBN-10 : 9780387310718
ISBN-13 : 0387310711
Rating : 4/5 (18 Downloads)

Synopsis Controlled Markov Processes and Viscosity Solutions by : Wendell H. Fleming

This book is an introduction to optimal stochastic control for continuous time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization, in pricing derivatives in incomplete markets, and in two-controller, zero-sum differential games.

Ergodic Control of Diffusion Processes

Author :
Publisher : Cambridge University Press
Total Pages : 341
Release :
ISBN-10 : 9780521768405
ISBN-13 : 0521768403
Rating : 4/5 (05 Downloads)

Synopsis Ergodic Control of Diffusion Processes by : Ari Arapostathis

The first comprehensive account of controlled diffusions with a focus on ergodic or 'long run average' control.

Stochastic Analysis and Diffusion Processes

Author :
Publisher : OUP Oxford
Total Pages : 368
Release :
ISBN-10 : 9780191004520
ISBN-13 : 0191004529
Rating : 4/5 (20 Downloads)

Synopsis Stochastic Analysis and Diffusion Processes by : Gopinath Kallianpur

Stochastic Analysis and Diffusion Processes presents a simple, mathematical introduction to Stochastic Calculus and its applications. The book builds the basic theory and offers a careful account of important research directions in Stochastic Analysis. The breadth and power of Stochastic Analysis, and the probabilistic behavior of diffusion processes, are presented without compromising on the mathematical details. Starting with the construction of stochastic processes, the book introduces Brownian motion and martingales. The book proceeds to construct stochastic integrals, establish the Itô formula, and discuss its applications. Next, attention is focused on stochastic differential equations (SDEs), which arise in modeling physical phenomena perturbed by random forces. Diffusion processes are solutions of SDEs and form the main theme of this book. The Stroock-Varadhan martingale problem, the connection between diffusion processes and partial differential equations, Gaussian solutions of SDEs, and Markov processes with jumps are presented in successive chapters. The book culminates with a careful treatment of important research topics such as invariant measures, ergodic behavior, and the large deviation principle for diffusions. Examples are given throughout the book to illustrate concepts and results. In addition, exercises are given at the end of each chapter that will help the reader to understand the concepts better. The book is written for graduate students, young researchers and applied scientists who are interested in stochastic processes and their applications. The reader is assumed to be familiar with probability theory at graduate level. The book can be used as a text for a graduate course on Stochastic Analysis.
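As a concrete illustration of the SDE-to-diffusion connection described in the synopsis, here is a minimal Euler-Maruyama simulation of the Ornstein-Uhlenbeck process; the scheme and all parameter values are a standard textbook sketch, not taken from this book.

```python
# Euler-Maruyama discretisation of the Ornstein-Uhlenbeck SDE
#     dX_t = -theta * X_t dt + sigma dW_t,
# whose stationary law is Gaussian N(0, sigma^2 / (2*theta)).
# Parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 5000, 2000

x = np.zeros(n_paths)                            # all paths start at the origin
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments ~ N(0, dt)
    x = x - theta * x * dt + sigma * dw          # Euler-Maruyama update

# After long time the empirical variance approaches sigma^2/(2*theta) = 0.125
print(float(x.var()))
```

Running many independent paths and reading off the empirical variance is a quick sanity check that the discretised process has reached its stationary Gaussian law, the kind of long-run behavior (invariant measures, ergodicity) treated in the book's final chapters.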

Numerical Methods for Stochastic Control Problems in Continuous Time

Author :
Publisher : Springer Science & Business Media
Total Pages : 480
Release :
ISBN-10 : 9781461300076
ISBN-13 : 146130007X
Rating : 4/5 (76 Downloads)

Synopsis Numerical Methods for Stochastic Control Problems in Continuous Time by : Harold Kushner

Stochastic control is a very active area of research. This monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels, that of practice and that of mathematical development. It is broadly accessible for graduate students and researchers.

Stochastic Controls

Author :
Publisher : Springer Science & Business Media
Total Pages : 459
Release :
ISBN-10 : 9781461214663
ISBN-13 : 1461214661
Rating : 4/5 (63 Downloads)

Synopsis Stochastic Controls by : Jiongmin Yong

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question one will ask is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? There did exist some research (prior to the 1980s) on the relationship between these two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
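In generic notation (a standard formulation sketched for orientation, not the book's own), the two objects described above can be written side by side. For a controlled diffusion dX_t = b(X_t, u_t) dt + σ(X_t, u_t) dW_t with running cost f and terminal cost g, the HJB equation of dynamic programming is the second-order PDE

```latex
-\partial_t V(t,x) = \inf_{u}\Big\{\, b(x,u)^{\top}\nabla_x V
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(x,u)\,\nabla_x^{2}V\big)
  + f(x,u) \Big\}, \qquad V(T,x) = g(x),
```

while the adjoint equation of the stochastic maximum principle is the backward SDE

```latex
dp_t = -\nabla_x H(X_t,u_t,p_t,q_t)\,dt + q_t\,dW_t, \qquad p_T = \nabla g(X_T),
```

with Hamiltonian H(x,u,p,q) = b(x,u)·p + tr(σ(x,u)ᵀq) + f(x,u). When the value function is smooth, p_t = ∇_x V(t, X_t), and making this identification rigorous under weak assumptions is precisely the question (Q) raised in the synopsis.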