Decision-Making for Autonomous Systems in Partially Observable Environments

dc.contributor.advisorKamrani, Ali K.
dc.contributor.committeeMemberFeng, Qianmei
dc.contributor.committeeMemberKundakcioglu, Erhun
dc.contributor.committeeMemberRao, Jagannatha R.
dc.contributor.committeeMemberWang, Keh-Han
dc.creatorIbekwe, Henry I. 1981- 2013
dc.description.abstractDecision-making for autonomous systems acting in real-world domains is complex and difficult to formalize. For instance, consider the task of autonomously navigating a mobile robot in an automated manufacturing facility. Its task is to transport hazardous material from a collection site to a disposal site. This is a navigation problem in which the robot must consider numerous variables such as collision avoidance, recognition of goal locations, accurate selection of the desired material, and knowledge of its location within the facility. The difficulty often lies in reliably modeling the uncertainties and dynamics of the robot-environment interaction when the robot can only partially observe the states of the environment. Therefore, a principal problem in designing mobile robots that can efficiently navigate indoor domains to achieve a desired task autonomously is to construct robust models for efficient planning and motion control in stochastic domains. Despite significant advances, this remains a difficult and open problem. The robot must generate efficient policies to reliably accomplish its tasks while accounting for uncertainty in both its actions and its perception. In this dissertation we model the uncertainties in action selection and perception using a sequential decision-making model. The mathematical formalism adopted is the Partially Observable Markov Decision Process (POMDP), a generalization of the well-known Markov Decision Process (MDP). Though the POMDP is a robust formalism for modeling agent-based decision-making, computing optimal solutions remains very difficult and often intractable for problems with large state spaces due to the high dimensionality of the underlying belief space. We propose a technique called Goal-Specific Representation (GSR) that exploits domain structure and generates policies over a subset of the state space given a map of the domain, a starting location, and a goal location.
We solve the resulting POMDP model using a Point-Based Value Iteration (PBVI) solver and apply the generated policies for navigation on an autonomous robot. We anticipate that the results from this work can be applied in manufacturing facilities to enhance automation, and in healthcare domains for assisted care.
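The POMDP machinery the abstract refers to can be illustrated with a small sketch. The code below shows a Bayesian belief update and one point-based value-iteration backup on a toy two-state navigation problem; the states, actions, and all probability matrices are illustrative assumptions, not the dissertation's actual GSR model or domain.

```python
import numpy as np

# Hypothetical 2-state navigation POMDP (illustrative, not the dissertation's model):
# states {0: at_goal, 1: not_at_goal}, actions {0: move, 1: stay},
# observations {0: see_goal, 1: see_nothing}.
n_states, n_actions, n_obs = 2, 2, 2

# T[a, s, s']: transition probabilities P(s' | s, a)
T = np.array([[[0.8, 0.2], [0.3, 0.7]],   # move
              [[1.0, 0.0], [0.0, 1.0]]])  # stay

# O[a, s', o]: observation probabilities P(o | s', a)
O = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.9, 0.1], [0.2, 0.8]]])

# R[a, s]: immediate rewards
R = np.array([[1.0, -0.1],
              [0.5, -0.5]])
gamma = 0.95  # discount factor

def belief_update(b, a, o):
    """Bayes filter over states: b'(s') ∝ O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    b_next = O[a, :, o] * (T[a].T @ b)
    return b_next / b_next.sum()

def pbvi_backup(beliefs, alphas):
    """One PBVI backup: for each belief point, construct the best alpha-vector
    via a one-step lookahead over actions and observations."""
    new_alphas = []
    for b in beliefs:
        best_val, best_alpha = -np.inf, None
        for a in range(n_actions):
            alpha_a = R[a].astype(float).copy()
            for o in range(n_obs):
                # project each successor alpha-vector back through T and O,
                # keeping the one that scores highest at this belief point
                g = [gamma * (T[a] * O[a, :, o]) @ al for al in alphas]
                alpha_a += max(g, key=lambda v: v @ b)
            if alpha_a @ b > best_val:
                best_val, best_alpha = alpha_a @ b, alpha_a
        new_alphas.append(best_alpha)
    return new_alphas
```

PBVI's key idea, as used here, is to maintain the value function only at a finite set of belief points rather than over the whole continuous belief simplex, which is what makes the backup tractable for larger state spaces.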
dc.description.departmentIndustrial Engineering, Department of
dc.format.digitalOriginborn digital
dc.rightsThe author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subjectAutonomous Systems
dc.subjectPartially Observable Environments
dc.subjectRobot Navigation
dc.subjectGoal-Specific Representation
dc.titleDecision-Making for Autonomous Systems in Partially Observable Environments
dc.type.genreThesis
thesis.degree.collegeCollege of Engineering
thesis.degree.grantorUniversity of Houston
thesis.degree.nameDoctor of Philosophy