Human Computation:
Theory, Algorithms, Privacy, and Learning


Figure: an information extraction task setup, in which one task attempts to hire a worker to return information from other tasks.




The emerging science of crowdsourcing and human computation relies on foundational principles from voting theory, management science, theory of computation, and more. As human computation matures, there is a need for a better understanding of core underlying theoretical principles. This research direction aims to understand formal properties and models of computational systems that integrate human intelligence.


Challenges:
  • How can collective response systems be used to exceed individual human performance, even on complex tasks?
  • How can workers be incentivized and rewarded fairly for their contributions?
  • How can we identify correct answers even when most people cannot (i.e., when majority voting fails)?
  • How do we maintain end-user and worker privacy while completing tasks with potentially sensitive information?
  • How can crowds optimally be leveraged to train AI systems?
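The majority-voting failure mentioned above can be made concrete: plurality aggregation simply returns the most common response, so it confidently returns the wrong answer whenever most workers err in the same way. A minimal sketch (the task and responses are hypothetical):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among worker responses (plurality)."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical task: workers transcribe a hard-to-hear word.
responses = ["their", "their", "there", "they're", "their"]
print(majority_vote(responses))  # -> their

# If the true answer is "there" but most workers mishear it,
# plurality aggregation returns the majority's mistake.
```

This is why aggregation schemes that weight worker reliability, rather than raw vote counts, are of interest.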


Crowd Agents / Real-Time, Continuous Crowdsourcing

Traditional crowdsourcing distributes batches of human computation tasks to workers for offline completion. The crowd agents architecture [1] instead organizes crowds into collective entities that can act consistently and continuously in real time, as in Legion [4]. These agents can even be structured to maintain collective memory [7].

Crowd agents can collectively outperform individuals on complex tasks, such as real-time captioning [3]. Workflows that vary aspects such as information presentation rate (e.g., audio playback speed while captioning) can be used to improve collective performance, even though individuals alone could not take advantage of them (e.g., one captionist with slower playback would not be able to keep up with live speech) [2].
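One way such a collective entity can act consistently is to mediate worker inputs at each time step, forwarding the input most agreed upon by the crowd. The sketch below illustrates this idea in miniature; the specific function names and agreement rule are assumptions for illustration, not Legion's actual mediator implementation:

```python
from collections import Counter

def mediate(inputs):
    """Given each worker's proposed input for this time step,
    forward the input most agreed upon by the crowd, along with
    the fraction of workers who proposed it."""
    counts = Counter(inputs.values())
    choice, votes = counts.most_common(1)[0]
    return choice, votes / len(inputs)

# Four workers propose control inputs for the same moment in time.
workers = {"w1": "LEFT", "w2": "LEFT", "w3": "UP", "w4": "LEFT"}
action, agreement = mediate(workers)
print(action, agreement)  # -> LEFT 0.75
```

Running such a mediator continuously yields a single coherent stream of control from many independent, intermittent workers.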

We are exploring new ways to leverage collective action to enable more effective systems and superhuman performance.



Leveraging Expertise

While non-expert crowds are the focus of much prior work, leveraging expertise can make far more complex tasks possible, but brings different organizational challenges. For example, Foundry [8] helps end-users create and manage expert workflows to accomplish large, multi-phase tasks. In ongoing work, we are exploring how to create interactive tools that leverage expert workers on demand. These systems must be efficient and scalable, and must give workers the freedom to complete tasks creatively as they see fit, while still allowing the system to understand the context of their work.


Privacy and Security Threats in Crowdsourcing

Intelligent systems that use a combination of human and machine intelligence have the potential to expose sensitive user information to members of the crowd. Exploring potential threats [5] and ways to avoid exposing such information without limiting the abilities of the system is critical for crowdsourcing to become a mainstream source of human computation.
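One illustrative mitigation is to partition sensitive content so that no single worker sees the whole record. The sketch below is a toy example of this general idea, not the specific threats or defenses studied in [5]:

```python
def segment_for_workers(text, n_workers):
    """Split text into up to n_workers disjoint fragments so that
    no single worker sees the whole record (a toy mitigation sketch)."""
    words = text.split()
    size = -(-len(words) // n_workers)  # ceiling division
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Hypothetical sensitive record split across four workers.
fragments = segment_for_workers("card number 4111 1111 1111 1111 exp 09/27", 4)
for fragment in fragments:
    print(fragment)
```

Of course, segmentation alone does not stop colluding workers or tasks that are inherently revealing, which is part of why a systematic study of threats is needed.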



Publications

[1]  W.S. Lasecki, C.M. Homan, and J.P. Bigham. Architecting Real-Time Crowd-Powered Systems. In Human Computation Journal (HCJournal, October 2014).

[2]  W.S. Lasecki, C.D. Miller, and J.P. Bigham. Warping Time for More Effective Real-Time Crowdsourcing. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2013). Paris, France. pp. 2033-2036. Best Paper Honorable Mention.

[3]  W.S. Lasecki, C.D. Miller, A. Sadilek, A. Abumoussa, D. Borrello, R. Kushalnagar, and J.P. Bigham. Real-time Captioning by Groups of Non-Experts. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2012). Boston, MA. pp. 23-34. Best Paper Award Nominee.

[4]  W.S. Lasecki, K.I. Murray, S. White, R.C. Miller, and J.P. Bigham. Real-time Crowd Control of Existing Interfaces. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2011). Santa Barbara, CA. pp. 23-32.

[5]  W.S. Lasecki, J. Teevan, and E. Kamar. Information Extraction and Manipulation Threats in Crowd-Powered Systems. In Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2014). Baltimore, MD.

[6]  W.S. Lasecki, L. Weingard, G. Ferguson, and J.P. Bigham. Finding Dependencies Between Actions Using the Crowd. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2014). Toronto, Canada.

[7]  W.S. Lasecki, S. White, K.I. Murray, and J.P. Bigham. Crowd Memory: Learning in the Collective. In Proceedings of Collective Intelligence 2012 (CI 2012). Boston, MA.

[8]  D. Retelny, S. Robaszkiewicz, A. To, W.S. Lasecki, J. Patel, N. Rahmati, T. Doshi, M. Valentine, and M.S. Bernstein. Expert Crowdsourcing with Flash Teams. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2014). Honolulu, HI. Best Paper.