Active Learning

In many applications, measurements can be taken sequentially and used for inference. Examples include genetic experiments, environmental sensing, and crowdsourced image labeling. The problem of how to design such sequential measurements for machine learning inference is called active learning. We can and should exploit expert knowledge about the signal of interest; however, if we trust a model too much, we may miss the true signal. We have studied active learning algorithms for image clustering and classification and for spatial environmental sampling.
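To give a flavor of the spatial-sampling setting, here is a minimal sketch in the spirit of quantile search: a sensor locates a change point along a one-dimensional transect, and the quantile parameter trades off the number of measurements against the distance traveled. The noiseless step-signal model and the parameter `q` below are illustrative assumptions, not the exact algorithm from the papers.

```python
def quantile_search(signal, lo=0.0, hi=1.0, q=0.5, tol=1e-3):
    """Locate the change point of `signal` (False below it, True above) on [lo, hi].

    q = 0.5 recovers plain bisection; q < 0.5 takes shorter steps from the
    low end of the interval, typically trading extra measurements for less
    travel. Returns (estimate, total_travel, num_samples).
    """
    pos = lo        # current sensor position
    travel = 0.0    # cumulative distance traveled
    samples = 0
    while hi - lo > tol:
        x = lo + q * (hi - lo)   # next measurement location
        travel += abs(x - pos)
        pos = x
        samples += 1
        if signal(x):            # measurement says we are past the change point
            hi = x
        else:
            lo = x
    return 0.5 * (lo + hi), travel, samples
```

For a step at 0.7, running this with `q = 0.25` takes a few more measurements than bisection (`q = 0.5`) but travels a noticeably shorter total distance, which is the tradeoff that motivates distance-penalized active learning.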

Lipor, J., B. P. Wong, D. Scavia, B. Kerkez, and L. Balzano. 2017. “Distance-Penalized Active Learning Using Quantile Search.” IEEE Transactions on Signal Processing 65 (20): 5453–65. https://doi.org/10.1109/TSP.2017.2731323.
Lipor, J., and L. Balzano. 2017. “Leveraging Union of Subspace Structure to Improve Constrained Clustering.” In Proceedings of the 34th International Conference on Machine Learning (ICML), PMLR 70:2130–39. http://proceedings.mlr.press/v70/lipor17a.html.
Lipor, J., and L. Balzano. 2015. “Margin-Based Active Subspace Clustering.” In 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 377–80. https://doi.org/10.1109/CAMSAP.2015.7383815.
Lipor, J., L. Balzano, B. Kerkez, and D. Scavia. 2015. “Quantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling.” In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1241–48. https://doi.org/10.1109/ALLERTON.2015.7447150.