Optimizing Bayesian methods for high dimensional data sets on array-based parallel database systems

Date

2014-12

Abstract

In this work we solve the problem of variable selection for linear regression on large data sets stored in Database Management Systems (DBMSs), under the Bayesian approach. This problem is challenging because data sets with a large number of variables present a combinatorial search space, and because data sets may be too large to fit in main memory. We introduce a three-step algorithm to solve variable selection for large data sets: pre-selection, summarization, and accelerated Gibbs sampling. Because Markov chain Monte Carlo methods such as Gibbs sampling require thousands of iterations for the Markov chain to stabilize, unoptimized algorithms can either run for many hours or fail due to insufficient main memory. We overcome these issues with several non-trivial database-oriented optimizations, which we experimentally validate in both a parallel Array DBMS and a Row DBMS. We highlight the superiority of the Array DBMS, a novel kind of DBMS, for analyzing large matrices. We analyze performance from two perspectives: data set size and dimensionality. Our algorithm identifies small subsets of variables that model the dependent variable with reasonable accuracy, and it shows promising performance, especially on high dimensional data sets: our prototype is generally two orders of magnitude faster than a variable selection package for R.
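The summarization and accelerated Gibbs sampling steps described above can be sketched briefly. The Python sketch below is illustrative only, not the thesis's implementation: it assumes a Zellner g-prior on the regression coefficients and a uniform prior over variable subsets, and all names (summarize, log_marginal, gibbs_select) and parameters (g, iters) are hypothetical. What it demonstrates is the core idea: once the sufficient statistics n, XᵀX, Xᵀy, and yᵀy have been computed in a single pass over the data (inside the DBMS, in the thesis), every Gibbs iteration operates only on small d×d summaries and never rescans the data set.

    import numpy as np

    def summarize(X, y):
        """One pass over the data: sufficient statistics for linear regression.
        In the thesis this summarization runs inside the DBMS; here, in memory."""
        n = X.shape[0]
        XtX = X.T @ X          # d x d Gram matrix
        Xty = X.T @ y          # d-vector
        yty = float(y @ y)     # scalar
        return n, XtX, Xty, yty

    def log_marginal(gamma, n, XtX, Xty, yty, g=100.0):
        """Log marginal likelihood of the subset gamma under an assumed
        Zellner g-prior, computed from the summaries alone (no data rescan)."""
        idx = np.flatnonzero(gamma)
        k = len(idx)
        if k == 0:
            rss = yty
        else:
            A = XtX[np.ix_(idx, idx)]          # sub-block of the Gram matrix
            b = Xty[idx]
            beta_hat = np.linalg.solve(A, b)   # least-squares fit on the subset
            rss = yty - (g / (1.0 + g)) * (b @ beta_hat)
        return -0.5 * k * np.log(1.0 + g) - 0.5 * n * np.log(rss)

    def gibbs_select(X, y, iters=2000, seed=0):
        """Gibbs sampler over inclusion indicators gamma, flipping one
        variable at a time. Burn-in is omitted for brevity."""
        rng = np.random.default_rng(seed)
        n, XtX, Xty, yty = summarize(X, y)
        d = X.shape[1]
        gamma = np.zeros(d, dtype=int)
        counts = np.zeros(d)
        for _ in range(iters):
            for j in range(d):
                gamma[j] = 0
                l0 = log_marginal(gamma, n, XtX, Xty, yty)
                gamma[j] = 1
                l1 = log_marginal(gamma, n, XtX, Xty, yty)
                p1 = 1.0 / (1.0 + np.exp(l0 - l1))  # P(gamma_j = 1 | rest)
                gamma[j] = int(rng.random() < p1)
            counts += gamma
        return counts / iters  # marginal inclusion probabilities

    if __name__ == "__main__":
        # Synthetic check: only the first 3 of 30 variables matter.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 30))
        y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(500)
        probs = gibbs_select(X, y, iters=200)
        print(np.argsort(probs)[-5:])  # the true variables should rank highest

Note that each call to log_marginal solves only a k×k system, where k is the current subset size; this is the sense in which summarization accelerates the sampler for large n.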

Keywords

Databases, Linear regression, Variable selection, Algorithms
