Theory, Algorithms, Privacy, and Learning
The emerging science of crowdsourcing and human computation relies on foundational principles from voting theory, management science, theory of computation, and more. As human computation matures, there is a need for a better understanding of core underlying theoretical principles. This research direction aims to understand formal properties and models of computational systems that integrate human intelligence.
- How can collective response systems be used to exceed individual human performance, even on complex tasks?
- How can workers be incentivized and rewarded fairly for their contributions?
- How can we identify correct answers even when most people cannot, meaning that simple majority voting fails?
- How do we maintain end-user and worker privacy while completing tasks with potentially sensitive information?
- How can crowds optimally be leveraged to train AI systems?
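On the question of finding correct answers when majority voting fails, one family of approaches weights each worker's vote by a reliability estimate inferred from their agreement across many questions. The sketch below is a simplified, illustrative take on Dawid-Skene-style aggregation, not the method of any specific system described here; the function name, data layout, and smoothing choice are ours.

```python
import math
from collections import defaultdict

def weighted_answers(labels, n_iter=5):
    """Estimate true answers from noisy worker labels.

    labels maps (worker, question) -> answer, and is assumed complete
    (every worker answers every question). Each round: (1) estimate
    answers by weighted vote, then (2) re-score each worker by agreement
    with those estimates, so consistently agreeing workers count more.
    """
    workers = {w for w, _ in labels}
    questions = {q for _, q in labels}
    weight = dict.fromkeys(workers, 1.0)   # round 1 is a plain majority vote
    for _ in range(n_iter):
        answers = {}
        for q in questions:
            scores = defaultdict(float)
            for w in workers:
                scores[labels[(w, q)]] += weight[w]
            answers[q] = max(sorted(scores), key=scores.get)
        for w in workers:
            agree = sum(answers[q] == labels[(w, q)] for q in questions)
            r = (agree + 1) / (len(questions) + 2)   # smoothed accuracy
            weight[w] = math.log(r / (1 - r))        # log-odds vote weight
    return answers
```

With two workers who are always right and three unreliable workers who happen to agree on a wrong answer for one question, the log-odds weights learned from the other questions let the minority of reliable workers outvote the majority on that question.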
Crowd Agents / Real-Time, Continuous Crowdsourcing
Traditional crowdsourcing distributes batches of human computation tasks to workers for offline completion. The crowd agents architecture instead organizes crowds into collective entities that can act consistently and continuously in real time, as in Legion. These agents can even be structured to maintain collective memory.
Crowd agents can collectively outperform individuals on complex tasks, such as real-time captioning. Workflows that vary aspects of information presentation, such as the rate of audio playback while captioning, can improve collective performance even though individuals alone could not take advantage of them (e.g., a single captionist hearing slowed playback would not be able to keep up with live speech).
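For a crowd agent to act as a single consistent entity, many workers' simultaneous inputs must be merged into one control stream each time step. A minimal sketch of one mediation strategy, simple plurality voting over a short time window, is below; Legion also explored other mediators, such as electing a temporary leader whose inputs are forwarded directly, and `mediate_window` is our illustrative name, not an API from that system.

```python
from collections import Counter

def mediate_window(proposals):
    """Pick the single input a crowd agent issues for one time window.

    proposals: list of (worker_id, input_event) pairs collected during
    a short window. Each worker gets one vote (their latest proposal),
    and the event backed by the most workers is forwarded.
    """
    latest = {}
    for worker, event in proposals:
        latest[worker] = event          # keep each worker's latest input
    if not latest:
        return None                     # crowd was silent this window
    counts = Counter(latest.values())
    event, _ = counts.most_common(1)[0]
    return event
```

Called once per window (e.g., every few hundred milliseconds), this lets a crowd jointly drive an interface built for a single user, with outlier inputs filtered out by the vote.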
We are exploring new ways to leverage collective action to enable more effective systems and superhuman performance.
Critical Thinking Agents
Many tasks are too difficult to rely on the answer of a single crowd worker. A simple majority vote can solve this problem in some cases, but what if only a few workers know the correct answer? Allowing workers to discuss the problem together would ideally lead to the most logical answer; in actuality, social dynamics push them to conform to the opinion of the most convincing person, even when that opinion is illogical. In current work in progress, we are investigating a fix to this issue by coordinating workers to diagram their argument and identify logical fallacies within it. We are also looking into applications in other domains, including critical thinking assistants that aid people as they forage for information online.
While non-expert crowds are the focus of much prior work, leveraging expertise makes far more complex tasks possible but brings different organizational challenges. For example, Foundry helps end users create and manage expert workflows to accomplish large, multi-phase tasks. In ongoing work, we are exploring how to create interactive tools that leverage expert workers on demand. The challenge is making these systems efficient and scalable while giving workers the freedom to complete tasks creatively as they see fit, yet still allowing the system to understand the context of their work.
Privacy and Security Threats in Crowdsourcing
Intelligent systems that combine human and machine intelligence have the potential to expose sensitive user information to members of the crowd. Exploring potential threats and ways to avoid exposing such information, without limiting the abilities of the system, is critical for crowdsourcing to become a mainstream source of human computation.
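One idea in this space, explored in CrowdMask, is progressive filtering: content is shown to workers in coarse segments, and any segment flagged as possibly sensitive is subdivided and routed to different workers at a finer granularity, so no single worker sees the whole sensitive region in context. The sketch below assumes binary splitting and uses a stand-in `is_flagged` oracle in place of the crowd query; the names and granularity scheme are illustrative, not the paper's actual pipeline.

```python
def progressive_filter(segment, is_flagged, min_size=1):
    """Recursively narrow down which parts of a work item to mask.

    segment: a (start, end) span of the item. is_flagged(span) stands in
    for asking a crowd worker whether the span *might* be sensitive.
    Cleared spans are released for normal processing; flagged spans are
    split and re-checked at finer granularity (by different workers, in
    a real deployment). Spans still flagged at min_size get masked.
    """
    start, end = segment
    if not is_flagged(segment):
        return []                        # cleared: safe to show to workers
    if end - start <= min_size:
        return [segment]                 # still suspect at finest level: mask
    mid = (start + end) // 2
    return (progressive_filter((start, mid), is_flagged, min_size)
            + progressive_filter((mid, end), is_flagged, min_size))
```

For example, with a message containing a phone number, the returned spans cover the number (plus a little surrounding context, depending on where splits fall) while the rest of the message is released unmasked.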
Selected Publications
A. Lundgard, Y. Yang, M.L. Foster, W.S. Lasecki. Bolt: Instantaneous Crowdsourcing via Just-in-Time Training. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2018). Montreal, Canada.
H. Kaur, M. Gordon, Y. Yang, J.P. Bigham, J. Teevan, E. Kamar, W.S. Lasecki. CrowdMask: Using Crowds to Preserve Privacy in Crowd-Powered Systems via Progressive Filtering. In Proceedings of the AAAI Conference on Human Computation (HCOMP 2017). Quebec City, Canada.
W.S. Lasecki, C.M. Homan, and J.P. Bigham. Architecting Real-Time Crowd-Powered Systems. In Human Computation Journal (HCJournal October, 2014).
D. Retelny, S. Robaszkiewicz, A. To, W.S. Lasecki, J. Patel, N. Rahmati, T. Doshi, M. Valentine, and M.S. Bernstein. Expert Crowdsourcing with Flash Teams. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2014). Honolulu, HI. Best Paper
W.S. Lasecki, J. Teevan, E. Kamar. Information Extraction and Manipulation Threats in Crowd-Powered Systems. In Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2014). Baltimore, MD.
W.S. Lasecki, C.D. Miller, J.P. Bigham. Warping Time for More Effective Real-Time Crowdsourcing. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2013). Paris, France. p2033-2036. Best Paper Honorable Mention
W.S. Lasecki, C.D. Miller, A. Sadilek, A. Abumoussa, D. Borrello, R. Kushalnagar, J.P. Bigham. Real-time Captioning by Groups of Non-Experts. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2012). Boston, MA. p23-34. Best Paper Award Nominee
W.S. Lasecki, K.I. Murray, S. White, R.C. Miller, J.P. Bigham. Real-time Crowd Control of Existing Interfaces. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2011). Santa Barbara, CA. p23-32.
W.S. Lasecki, L. Weingard, G. Ferguson, J.P. Bigham. Finding Dependencies Between Actions Using the Crowd. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2014). Toronto, Canada.
W.S. Lasecki, S. White, K.I. Murray, J.P. Bigham. Crowd Memory: Learning in the Collective. In Proceedings of Collective Intelligence 2012 (CI 2012). Boston, MA.