
dc.contributor.advisor: Kamrani, Ali K.
dc.creator: Ibekwe, Henry I 1981-
dc.date.accessioned: 2015-08-24T01:21:19Z
dc.date.available: 2015-08-24T01:21:19Z
dc.date.created: May 2013
dc.date.issued: 2013-05
dc.identifier.uri: http://hdl.handle.net/10657/1022
dc.description.abstract: Decision-making for autonomous systems acting in real-world domains is complex and difficult to formalize. For instance, consider the task of autonomously navigating a mobile robot in an automated manufacturing facility. Its task is to transport hazardous material from a collection site to a disposal site. This is a navigation problem in which the robot must consider numerous variables such as collision avoidance, recognition of goal locations, accurate selection of the desired material, and knowledge of its location within the facility. The difficulty often lies in reliably modeling the uncertainties and dynamics of the robot-environment interaction when the robot can only partially observe the states of the environment. A principal problem in designing mobile robots that can efficiently navigate indoor domains to achieve a desired task autonomously is therefore constructing robust models for efficient planning and motion control in stochastic domains. Despite significant advances, this remains a difficult and open problem. The robot must generate efficient policies to reliably accomplish its tasks while accounting for uncertainty in both its actions and its perception. In this dissertation we model the uncertainties in action selection and perception using a sequential decision-making model. The mathematical formalism adopted is the Partially Observable Markov Decision Process (POMDP), a generalization of the well-known Markov Decision Process (MDP). Though the POMDP is a robust formalism for modeling agent-based decision-making, computing optimal solutions remains very difficult and often intractable for problems with large state spaces, owing to the high dimensionality of the underlying belief space. We propose a technique called Goal-Specific Representation (GSR) that exploits domain structure and generates policies over a subset of the state space, given a map of the domain, a starting location, and a goal location. We solve the resulting POMDP model using a Point-Based Value Iteration (PBVI) solver and apply the generated policies for navigation on an autonomous robot. We anticipate that the results of this work can be applied in manufacturing facilities to enhance automation, and in healthcare domains for assisted care.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.subject: Decision-Making
dc.subject: Autonomous Systems
dc.subject: Autonomous Robots
dc.subject: POMDPs
dc.subject: Partially Observable Environments
dc.subject: Robot Navigation
dc.subject: Goal-Specific Representation
dc.title: Decision-Making for Autonomous Systems in Partially Observable Environments
dc.date.updated: 2015-08-24T01:21:19Z
dc.type.genre: Thesis
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
thesis.degree.discipline: Industrial Engineering
thesis.degree.grantor: University of Houston
thesis.degree.department: Industrial Engineering
dc.contributor.committeeMember: Feng, Qianmei (May)
dc.contributor.committeeMember: Kundakcioglu, Erhun
dc.contributor.committeeMember: Rao, Jagannatha R.
dc.contributor.committeeMember: Wang, Keh-Han
dc.type.dcmi: Text
dc.format.digitalOrigin: born digital
dc.description.department: Industrial Engineering
thesis.degree.college: Cullen College of Engineering
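The abstract above describes planning over a belief space: because the robot only partially observes its state, it maintains a probability distribution over states and revises it after each action and observation. A minimal sketch of that POMDP belief-state (Bayes filter) update is shown below; the two-state domain and the transition/observation matrices are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Toy two-state navigation domain (states 0 and 1), one action, two observations.
# T[a][s, s']: probability of reaching state s' from state s under action a.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
# O[a][s', o]: probability of observing o after arriving in state s' via action a.
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])

def belief_update(b, a, o):
    """Bayes update: b'(s') proportional to O(s', a, o) * sum_s T(s, a, s') b(s)."""
    predicted = b @ T[a]                 # prediction step: sum_s b(s) T(s, a, s')
    unnormalized = predicted * O[a][:, o]  # correction step: weight by observation likelihood
    return unnormalized / unnormalized.sum()

b0 = np.array([0.5, 0.5])                # uniform initial belief
b1 = belief_update(b0, a=0, o=0)         # belief after acting and observing o=0
```

A point-based solver such as PBVI operates on a sampled set of such belief points rather than the full continuous belief simplex, which is what makes planning tractable for the larger state spaces the abstract mentions.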

