Crowd-Sourcing Technologies


Linguists increasingly recognize the importance of backing up their intuitions and theories with empirical data in the form of native speaker judgements, acceptability ratings, and behavioral measures. While appropriately controlled lab-based experiments entail significant costs in money, time, and expertise, crowdsourcing technologies allow experiments or surveys to be designed and run over the internet quickly and cheaply. More than a million workers currently log in to services like Amazon's Mechanical Turk to complete short tasks for pay-per-task compensation. These platforms were developed to allow companies to outsource work, but are now being used in research: first for simple annotation and translation (Callison-Burch, 2009; Hsueh et al., 2009; Marge et al., 2010; Snow et al., 2008), but increasingly for more sophisticated research in varied experimental paradigms (Gibson & Fedorenko, submitted; Munro et al., 2010; Schnoebelen & Kuperman, submitted). This workshop will bring together linguists who are utilizing crowdsourcing technologies and those who want to know more about them. It will combine a half-day "how-to" session, where participants will learn to conduct experiments using crowdsourcing platforms, with a half-day workshop where researchers come together to share results, ideas, and strategies.


  • Robert Munro, Stanford University, rmunro AT stanford DOT edu
  • Hal Tily, MIT, hjt AT mit DOT edu