Dynamic programming

Date

1964

Abstract

This thesis is a survey of the present status of the mathematical aspects of Dynamic Programming. Dynamic Programming is a method of solving multi-stage decision processes. These are processes with the inherent characteristic that a decision, or a sequence of decisions affecting the outcome, is required at specific stages of the process. Dynamic Programming applies the "Principle of Optimality" and results in the "functional equation" of the process. The formulation of Dynamic Programming is developed in its principal forms, and the principal methods of solution are discussed. The treatment of the Calculus of Variations is presented in some detail because of the general interest in this subject. In order to present as complete a picture as possible, the application of Dynamic Programming to various problems is reviewed along with the computational aspects. The power of Dynamic Programming is that it provides a uniform approach, the fundamental functional equation, to many different types of problems. The decision variable can be defined over discrete sets or a continuum of values, with constraints restricting the range of decisions. The processes themselves can be deterministic or stochastic in nature. The basic form of the functional equation remains the same; however, the functions which enter it depend on the process. The reference material on this subject can be divided into three categories: (1) basic references, which develop the theory; (2) general references, which describe applications to specific problems; and (3) computational references, which deal with the numerical methods used in solving specific problems.
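
As a concrete illustration of the functional equation referred to above (a minimal sketch, not drawn from the thesis itself; the symbols state x, decision d, stage return g, and transition T are illustrative), the deterministic N-stage case can be written as

    f_N(x) = \max_{d \in D(x)} \left[ g(x, d) + f_{N-1}\big(T(x, d)\big) \right], \qquad f_0(x) \equiv 0,

where f_N(x) is the maximum total return obtainable from an N-stage process starting in state x. The Principle of Optimality appears in the second term: whatever the initial decision d, the remaining N-1 decisions must themselves be optimal for the process starting in the resulting state T(x, d). In the stochastic case the same form is retained, with the bracketed quantity replaced by its expectation over the random next state.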
