Quentin F. Stout
Janis Hardwick
University of Michigan
Abstract: We discuss the use of parallel computing in the design and analysis of adaptive sampling procedures, and show how efficient parallel programs were developed that allow one to analyze useful sample sizes. Response adaptive designs are an important class of learning algorithms for a stochastic environment and apply in a large number of situations. As an illustrative example, we focus on the problem of optimally assigning patients to treatments in controlled clinical trials. While response adaptive designs have significant ethical and cost advantages, until recently they were rarely utilized because of the complexity of optimizing and analyzing them.
Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, showing that, while standard approaches do not necessarily work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical.
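To make the loop structure concrete, the following minimal serial sketch (our own illustration under assumed priors, not the paper's program) performs backward induction for a two-armed Bernoulli bandit with uniform Beta(1,1) priors and horizon N. It shows the multiply-nested loops whose inner bounds depend on the outer indices, and the large flat state array that the paper's parallelization must distribute.

```c
/* Hedged sketch: serial backward induction for a two-armed Bernoulli bandit,
 * Beta(1,1) priors, horizon N.  Illustrates the nested loops with dynamic
 * index bounds and the large value table; it is not the authors' code. */
#include <stdio.h>
#include <stdlib.h>

#define N 30   /* horizon (number of patients); practical designs are far larger */

/* flat index of the state (s1,f1,s2,f2): successes/failures on each arm */
static size_t idx(int s1, int f1, int s2, int f2) {
    return (((size_t)s1 * (N + 1) + f1) * (N + 1) + s2) * (N + 1) + f2;
}

int main(void) {
    size_t states = (size_t)(N + 1) * (N + 1) * (N + 1) * (N + 1);
    double *V = calloc(states, sizeof *V);   /* value-to-go; horizon states stay 0 */
    if (!V) return 1;

    /* Process all states with m observations, m = N-1 down to 0.
     * Each inner loop bound depends on the outer indices ("dynamic indices"). */
    for (int m = N - 1; m >= 0; m--)
        for (int s1 = 0; s1 <= m; s1++)
            for (int f1 = 0; f1 <= m - s1; f1++)
                for (int s2 = 0; s2 <= m - s1 - f1; s2++) {
                    int f2 = m - s1 - f1 - s2;
                    /* posterior success probabilities under Beta(1,1) priors */
                    double p1 = (s1 + 1.0) / (s1 + f1 + 2.0);
                    double p2 = (s2 + 1.0) / (s2 + f2 + 2.0);
                    /* expected successes-to-go if the next patient gets arm 1 or 2 */
                    double v1 = p1 * (1.0 + V[idx(s1 + 1, f1, s2, f2)])
                              + (1.0 - p1) * V[idx(s1, f1 + 1, s2, f2)];
                    double v2 = p2 * (1.0 + V[idx(s1, f1, s2 + 1, f2)])
                              + (1.0 - p2) * V[idx(s1, f1, s2, f2 + 1)];
                    V[idx(s1, f1, s2, f2)] = v1 > v2 ? v1 : v2;
                }

    printf("Expected successes under the optimal design, N=%d: %.4f\n",
           N, V[idx(0, 0, 0, 0)]);
    free(V);
    return 0;
}
```

Even this toy version touches each of roughly (N+1)^4 states once with only a few arithmetic operations per memory access, which is why memory size and memory bandwidth, rather than raw computation, dominate the parallelization decisions discussed in the paper.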
Keywords: response adaptive sampling, parallel computing, statistical computing, design of experiments, controlled clinical trial, bandit problem, delayed response, stochastic optimization, computational learning theory
Complete paper. This appears in Handbook on Parallel Computing and Statistics, E.J. Kontoghiorghes, ed., Marcel Dekker, 2006, pp. 347-373.