A study of methods for the solution of Markovian decision processes

dc.contributor.advisorDawkins, George S.
dc.contributor.committeeMemberMotard, Rodolphe L.
dc.contributor.committeeMemberElrod, John T.
dc.creatorBertazzi, Paulo de Azevedo
dc.date.accessioned2022-10-28T16:26:35Z
dc.date.available2022-10-28T16:26:35Z
dc.date.issued1966
dc.description.abstractProblems of sequential decisions are marked by the fact that the consequences of a decision made at a certain time cannot be evaluated without considering the future evolution of the process. The general solution for problems of this type was presented by Richard Bellman, based on his "Principle of Optimality". This solution consists of assuming a "truncation" of the process at a chosen time, which allows the computation of the optimal decisions for the preceding times by a recurrence relation. A number of the difficulties encountered in the application of the general dynamic programming method can be circumvented when some information about the behavior of the process is known. In particular, some problems of this type possess certain linear features. The mathematical model known as a Markov process is one such case. When the process follows a Markov chain and is assumed to have infinite duration, a steady-state optimal solution can be reached in a finite number of iterations, following a criterion of "partial maximization". This method was presented by Ronald Howard. Howard's approach will always give an optimal solution when the process is far from its ending, but will not solve the problem when the process is in its "transient" period (near the end of the process). The steady-state case can also be solved by means of linear programming, as presented by Guy de Ghellinck. Like Howard's method, linear programming will yield only steady-state solutions. If it is necessary to know the optimal decisions in the transient period, the dynamic programming solution must be employed. Five examples were solved using the three different methods. By varying the number of states of the processes and the number of possible alternatives in each state, some comparisons were made concerning the effectiveness of the solutions.
When the number of alternatives is high in comparison with the number of states, there was no significant difference in computation time. When the number of states is high in comparison with the number of alternatives, the linear programming method reached the optimal solution in a shorter time.
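The iterative method described in the abstract can be sketched in modern code. The following is a minimal illustration of a discounted variant of Howard's policy iteration, alternating value determination and policy improvement until the policy stops changing; the two-state, two-alternative process, its transition probabilities, rewards, and discount factor are all invented for illustration and are not taken from the thesis.

```python
# Hypothetical two-state, two-alternative Markov decision process (invented numbers).
# P[a][s] = transition probabilities from state s to each next state under alternative a
# r[a][s] = expected immediate reward in state s under alternative a
P = [[[0.5, 0.5], [0.4, 0.6]],
     [[0.8, 0.2], [0.7, 0.3]]]
r = [[6.0, -3.0],
     [4.0, -5.0]]
gamma = 0.9  # discount factor, standing in for the infinite-duration (steady-state) case

def policy_iteration(P, r, gamma, tol=1e-10):
    n = len(r[0])
    policy = [0] * n  # start with an arbitrary decision in every state
    while True:
        # Value determination: evaluate the current policy by successive approximation
        v = [0.0] * n
        while True:
            v_new = [r[policy[s]][s]
                     + gamma * sum(P[policy[s]][s][t] * v[t] for t in range(n))
                     for s in range(n)]
            converged = max(abs(v_new[s] - v[s]) for s in range(n)) < tol
            v = v_new
            if converged:
                break
        # Policy improvement: in each state, pick the alternative that
        # maximizes the one-step lookahead value ("partial maximization")
        new_policy = []
        for s in range(n):
            q = [r[a][s] + gamma * sum(P[a][s][t] * v[t] for t in range(n))
                 for a in range(len(P))]
            new_policy.append(max(range(len(P)), key=lambda a: q[a]))
        if new_policy == policy:
            return policy, v  # no state changed its decision: steady-state optimum
        policy = new_policy

policy, v = policy_iteration(P, r, gamma)
```

As the abstract notes, this yields only the steady-state decisions; the transient (near-the-end) decisions would require the finite-horizon dynamic programming recurrence instead.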
dc.description.departmentIndustrial Engineering, Department of
dc.format.digitalOriginreformatted digital
dc.format.mimetypeapplication/pdf
dc.identifier.other13806161
dc.identifier.urihttps://hdl.handle.net/10657/12414
dc.language.isoen
dc.rightsThis item is protected by copyright but is made available here under a claim of fair use (17 U.S.C. Section 107) for non-profit research and educational purposes. Users of this work assume the responsibility for determining copyright status prior to reusing, publishing, or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires express permission of the copyright holder.
dc.titleA study of methods for the solution of Markovian decision processes
dc.type.dcmiText
dc.type.genreThesis
thesis.degree.collegeCullen College of Engineering
thesis.degree.departmentIndustrial Engineering, Department of
thesis.degree.disciplineIndustrial Engineering
thesis.degree.grantorUniversity of Houston
thesis.degree.levelMasters
thesis.degree.nameMaster of Science

Files

Original bundle

Name:
Bertazzi_1966_13806161.pdf
Size:
2.89 MB
Format:
Adobe Portable Document Format