Sequentially Deciding Between Two Experiments for Estimating a Common Success Probability

Janis Hardwick
University of Michigan

Connie Page
Michigan State University

Quentin F. Stout
University of Michigan


Abstract: To estimate a success probability p, two experiments are available: individual Bernoulli(p) trials, or the product of r individual Bernoulli(p) trials. A product trial succeeds exactly when all r of its component trials succeed, so it is a single Bernoulli(p^r) observation. This problem has its roots in reliability, where either single components can be tested or a system of r identical components can be tested. A total of N experiments can be performed, and the problem is to sequentially select some combination (allocation) of these two experiments, along with an estimator of p, to achieve low mean squared error of the final estimate. This scenario is similar to that of the better-known group testing problem, but here the goal is to estimate failure rates rather than to identify defective units. The problem also arises in epidemiological applications such as estimating disease prevalence.

Information maximization considerations, and analysis of the asymptotic mean squared error of several estimators, lead to the following adaptive procedure: use the maximum likelihood estimator to estimate p, and if this estimate is below (above) the cut-point a_r, then observe an individual (product) trial at the next stage. In a Bayesian setting with squared error estimation loss and suitable regularity conditions on the prior distribution, the same adaptive procedure, with the maximum likelihood estimator replaced by the Bayes estimator, is asymptotically Bayes.
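
To make the rule concrete, here is a minimal sketch in Python (using numpy and scipy). It assumes, purely for illustration, that the cut-point a_r is the point at which the two experiments carry equal Fisher information about p: an individual trial contributes 1/(p(1-p)), while a product trial, being a single Bernoulli(p^r) observation, contributes r^2 p^(r-2)/(1-p^r); equating the two gives r^2 p^(r-1)(1-p) = 1-p^r. The function names and numerical details below are ours, not the paper's.

    import numpy as np
    from scipy.optimize import brentq, minimize_scalar

    def cutpoint(r):
        # Assumed cut-point a_r: the root in (0,1) of the information-balance
        # equation r^2 p^(r-1) (1-p) = 1 - p^r.
        f = lambda p: r**2 * p**(r - 1) * (1 - p) - (1 - p**r)
        return brentq(f, 1e-9, 1 - 1e-9)

    def mle(s, m, t, n, r):
        # Numerical MLE of p given s successes in m individual trials and
        # t successes in n product trials (a product trial is Bernoulli(p^r)).
        def negll(p):
            return -(s * np.log(p) + (m - s) * np.log1p(-p)
                     + r * t * np.log(p) + (n - t) * np.log1p(-p**r))
        return minimize_scalar(negll, bounds=(1e-6, 1 - 1e-6),
                               method='bounded').x

    def adaptive_estimate(p_true, r, N, rng):
        # Run the adaptive procedure for N stages; return the final MLE.
        a_r = cutpoint(r)
        s = m = t = n = 0
        for _ in range(N):
            # With no data yet, default to an individual trial.
            p_hat = mle(s, m, t, n, r) if m + n > 0 else 0.0
            if p_hat < a_r:   # small p: individual trials more informative
                m += 1
                s += rng.random() < p_true
            else:             # large p: product trials more informative
                n += 1
                t += all(rng.random() < p_true for _ in range(r))
        return mle(s, m, t, n, r)

Under this information-balance assumption, cutpoint(2) works out to exactly 1/3, and adaptive_estimate(0.9, r=5, N=200, rng=np.random.default_rng(0)) runs the full procedure once.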

Exact computational evaluations of the adaptive procedure for fixed sample sizes show that it behaves roughly as the asymptotics predict. The exact analyses also reveal parameter regions in which the adaptive procedure achieves negative regret, as well as regions in which its normalized mean squared error is superior to what is asymptotically achievable.
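
Exact (non-simulated) evaluation is possible because the procedure is deterministic given the accumulated data, so the probability distribution over data states (s, m, t, n) can be pushed forward one stage at a time. Here is a sketch of such an exact MSE computation, reusing cutpoint and mle from the sketch above; it illustrates the idea and is not necessarily the paper's own algorithm.

    def exact_mse(p_true, r, N):
        # Exact mean squared error of the adaptive procedure at sample size N
        # for true success probability p_true, by forward induction over the
        # distribution of data states (s, m, t, n).
        a_r = cutpoint(r)
        dist = {(0, 0, 0, 0): 1.0}
        for _ in range(N):
            nxt = {}
            for (s, m, t, n), prob in dist.items():
                p_hat = mle(s, m, t, n, r) if m + n > 0 else 0.0
                if p_hat < a_r:   # individual Bernoulli(p) trial next
                    q = p_true
                    up, down = (s + 1, m + 1, t, n), (s, m + 1, t, n)
                else:             # product Bernoulli(p^r) trial next
                    q = p_true**r
                    up, down = (s, m, t + 1, n + 1), (s, m, t, n + 1)
                nxt[up] = nxt.get(up, 0.0) + prob * q
                nxt[down] = nxt.get(down, 0.0) + prob * (1 - q)
            dist = nxt
        return sum(prob * (mle(s, m, t, n, r) - p_true) ** 2
                   for (s, m, t, n), prob in dist.items())

Since the number of reachable states grows only polynomially in N, exact curves of N times mean squared error against p can be tabulated for moderate N.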

An example and a discussion of extensions conclude the work.

Keywords: response adaptive sampling design, batch testing, grouped data, composite testing, design of experiments

The complete paper appeared in Journal of the American Statistical Association 93 (1998), pp. 1502-1511.



Related Work
Adaptive Sampling Designs:
Here is an explanation of this topic, and here are our relevant publications.
Dynamic Programming (also known as backward induction):
Here is an overview of our work.

