Optimal Decision Making Models in Changing Environments
Radillo, Adrian Ernesto 1986-
Mathematical decision-making theory has been successfully applied to the neuroscience of sensation, behavior, and cognition for more than fifty years. Classical models rely on the assumption that the environment does not change during the period of observation. This assumption has been relaxed in more recent studies of adaptive decision making. We develop new ideal observer (Bayes-optimal) models for this latter setting, and more specifically for the case in which temporal integration of noisy evidence improves choice accuracy. The generative model of the stimulus is a Hidden Markov Model that the ideal observer must filter and, more generally, learn.

In a first part, we derive and study models tailored to pulsatile evidence with Poisson-distributed timing. We characterize the model parameters that determine choice accuracy, and compare the ideal observer to a finely tuned linear-leak model. We show that the linear model is both more sensitive to parameter perturbations and easier to fit to choice data.

In a second part, we derive Bayes-optimal models that learn the change rates of their environment. We do so in several configurations: in discrete time, in continuous time, when more than one change rate must be learned, and for both pulsatile and continuously arriving, drift-diffusion-type evidence. We find that such learning models may outperform incorrectly tuned known-hazard-rate models, but are hard to implement computationally.

We conclude that the mathematical study of optimal decision making is crucial for at least three reasons. First, it helps develop an intuition about the various computations required to perform a task. Second, Bayes-optimal models allow benchmarking accuracy and other dependent variables from experiments. Finally, approximate schemes may be built from them, hopefully taking us one step closer to understanding the human brain.
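The known-hazard-rate setting described above can be illustrated with a minimal sketch. The following is not the dissertation's model itself but a standard discrete-time construction under stated assumptions: a binary hidden state in {+1, -1} that flips with a fixed hazard rate between samples, Gaussian observations centered on the state, and a Bayes-optimal filter that propagates its posterior through the Markov transition before applying Bayes' rule. All parameter values (`mu`, `sigma`, the hazard of 0.05) are illustrative.

```python
import numpy as np

def ideal_observer_filter(observations, hazard, mu=1.0, sigma=1.0):
    """Bayes-optimal filtering of a two-state HMM with a known hazard rate.

    The hidden state s in {+1, -1} switches with probability `hazard`
    between observations; each observation is drawn from N(mu * s, sigma^2).
    Returns the posterior probability of s = +1 after each observation.
    """
    p = 0.5  # flat prior on s = +1
    posteriors = []
    for x in observations:
        # Propagate the posterior through the Markov transition
        # (the state may have changed since the last observation).
        prior = (1.0 - hazard) * p + hazard * (1.0 - p)
        # Gaussian likelihoods of the new observation under each state.
        like_pos = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        like_neg = np.exp(-0.5 * ((x + mu) / sigma) ** 2)
        # Bayes' rule.
        p = like_pos * prior / (like_pos * prior + like_neg * (1.0 - prior))
        posteriors.append(p)
    return np.array(posteriors)

# Simulate a changing environment and filter it with the true hazard rate.
rng = np.random.default_rng(0)
h, n = 0.05, 200
states = [1]
for _ in range(n - 1):
    states.append(-states[-1] if rng.random() < h else states[-1])
states = np.array(states)
obs = states + rng.normal(0.0, 1.5, size=n)
post = ideal_observer_filter(obs, hazard=h, mu=1.0, sigma=1.5)
accuracy = np.mean((post > 0.5) == (states > 0))
```

Note how the hazard rate enters only through the transition step: with `hazard = 0`, the recursion reduces to classical static-environment evidence accumulation, while a positive hazard discounts old evidence, which is the computation a linear-leak model approximates.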
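The second part of the abstract concerns observers that must learn the change rate itself. One standard way to sketch this, again under illustrative assumptions rather than as the dissertation's derivation, is to maintain a joint posterior over the hidden state and a discrete grid of candidate hazard rates, updating every hazard hypothesis through its own transition and marginalizing to read out the state belief and a running hazard estimate.

```python
import numpy as np

def learn_hazard_filter(observations, hazard_grid, mu=1.0, sigma=1.0):
    """Jointly infer a binary hidden state and an unknown, fixed hazard rate.

    Maintains a posterior over (state, hazard) on a discrete grid of
    candidate hazard values. Marginalizing over hazard gives the state
    posterior; the posterior mean over the grid gives a hazard estimate.
    """
    n_h = len(hazard_grid)
    # Joint posterior: rows are states {+1, -1}, columns are hazard values.
    joint = np.full((2, n_h), 0.5 / n_h)
    state_post, hazard_est = [], []
    for x in observations:
        # Each hazard hypothesis propagates through its own transition.
        prior_pos = (1 - hazard_grid) * joint[0] + hazard_grid * joint[1]
        prior_neg = (1 - hazard_grid) * joint[1] + hazard_grid * joint[0]
        like_pos = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        like_neg = np.exp(-0.5 * ((x + mu) / sigma) ** 2)
        joint = np.vstack([like_pos * prior_pos, like_neg * prior_neg])
        joint /= joint.sum()
        state_post.append(joint[0].sum())                      # P(s = +1)
        hazard_est.append(np.dot(hazard_grid, joint.sum(axis=0)))  # E[h]
    return np.array(state_post), np.array(hazard_est)

# Simulate with a true hazard of 0.1 and let the observer learn it.
rng = np.random.default_rng(1)
true_h, n = 0.1, 500
s = np.ones(n)
for t in range(1, n):
    s[t] = -s[t - 1] if rng.random() < true_h else s[t - 1]
obs = s + rng.normal(0.0, 1.0, size=n)
grid = np.linspace(0.01, 0.5, 50)
state_post, hazard_est = learn_hazard_filter(obs, grid, mu=1.0, sigma=1.0)
```

The grid construction makes the computational cost visible: the joint posterior grows with the number of hazard hypotheses, and with several unknown change rates the state space grows multiplicatively, which is consistent with the abstract's remark that such learning models are hard to implement computationally.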