Crowdsourcing and Gamification with Applications to Computer Vision (IN0014, IN2107)
Organizational Information
Announcements
- On November 9, the seminar will take place in MW 0250.
- For organizational requests, please write an e-mail using this link. Please do not alter the predefined subject!
- As we currently have fewer than 20 participants, we offer only 20 slots, mostly before the Christmas holidays, in order to keep the time after the holidays free for exam preparation.
- It is crucial to participate in the introductory meeting on Monday, October 12.
- The list of topics can be found below!
Criteria for passing the seminar
- Attendance is mandatory: If you cannot attend one of the dates, you must notify me no later than 9:00 am on that day. Furthermore, you have to provide evidence, e.g. an attestation, showing that you were unable to come. If this criterion is not fulfilled, the course is graded as not passed.
- Presentation: Every student has to give a presentation on his or her chosen topic. The duration is 40 min presentation + 10 min "freestyle" + 10 min discussion and feedback.
- Freestyle part: During the freestyle part, students are encouraged to extend their presentation with a contribution that augments the presented topic in some way. Such contributions include ideas, concepts, pieces of software, demos, etc.
- Discussion part: Active participation in the discussions is absolutely necessary for achieving a good grade.
- Presentation preview: Each student can discuss his or her presentation one week before the actual talk. Slots for obtaining feedback are Monday 2:10pm - 2:55pm and 2:55pm - 3:40pm.
Abstract
Crowdsourcing has gained considerable interest during the last decade, mainly facilitated by a drastic increase in available computing power as well as the increasing availability of digital communication such as the internet or mobile communication in general. Inspired by the term outsourcing, crowdsourcing refers to the process of outsourcing a specified task to a group of people (the crowd) who perform this task on a (mostly) voluntary basis. Amazon, for instance, as one of the largest service providers in this area, offers so-called human intelligence tasks (HITs) on its Mechanical Turk platform, where crowd workers receive a small payment for each fulfilled HIT.
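To make the HIT workflow concrete, here is a minimal sketch of how a requester might publish an image-labeling HIT programmatically, using the AWS SDK for Python (boto3). The labeling URL, reward, and assignment numbers are hypothetical placeholders, and the sandbox endpoint is used so that no real payments are made.

```python
# Minimal sketch: publishing an image-labeling HIT on Mechanical Turk
# via boto3. All concrete values (URL, reward, counts) are hypothetical.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint: HITs are published for testing and no payments occur.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds the requester's own labeling web page in the HIT.
external_question = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/label?image=0001.jpg</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Label the object in the image",
    Description="Draw a bounding box around the animal shown in the image.",
    Keywords="image, labeling, computer vision",
    Reward="0.05",                    # payment per fulfilled HIT, in USD
    MaxAssignments=5,                 # collect 5 independent answers per image
    LifetimeInSeconds=24 * 3600,      # HIT stays visible for one day
    AssignmentDurationInSeconds=300,  # each worker has 5 minutes
    Question=external_question,
)
print("Created HIT:", response["HIT"]["HITId"])
```

Requesting several assignments per HIT is a common way to cross-check the workers' answers against each other afterwards.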
In order to provide an additional incentive, or even to avoid paying clickworkers altogether, the concept of gamification, i.e., the usage of games or game elements in non-game contexts, has been exploited extensively in this domain. This particular form of crowdsourcing is called playsourcing. Playsourcing refers to the usage of games with high numbers of players - such as online and smartphone games - for solving real-world problems. Probably the most popular examples of applying playsourcing to scientific problems so far are the games "Foldit", "nanocrafter", and "Play to Cure: Genes in Space". Recently, there have also been first attempts to use crowdsourcing in the area of computer vision and pattern recognition.
In this seminar we will discuss several principles as well as application examples of gamification in the context of crowdsourcing - with a particular focus on computer vision.
Schedule
Literature
(1) Luis von Ahn and Laura Dabbish. Designing Games with a Purpose. Communications of the ACM 51(8), pp. 58-67 (2008).
(2) Seth Cooper, Firas Khatib, Adrien Treuille, Janos Barbero, Jeehyung Lee, Michael Beenen, Andrew Leaver-Fay, David Baker, Zoran Popović, and Foldit players. Predicting protein structures with a multiplayer online game. Nature 466, pp. 756-760 (2010).
(3) Gabe Zichermann and Christopher Cunningham. Gamification by design: Implementing game mechanics in web and mobile apps. O'Reilly (2011).
(4) Mathieu Lafourcade, Alain Joubert, and Nathalie Le Brun. Games with a Purpose (GWAPs). Wiley (2015).
(5) Play to Cure: Genes in Space. Cancer Research UK.
(6) Catherine Wah. Crowdsourcing and Its Applications in Computer Vision. Technical report (2011).
(7) Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. TurKit: Human Computation Algorithms on Mechanical Turk. Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST) (2010).
(8) Richard Souvenir et al. Gamesourcing to acquire labeled human pose estimation data. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1-6 (2012).
(9) Catherine Wah, Grant Van Horn, Steven Branson, Subhransu Maji, Pietro Perona, Serge Belongie. Similarity Comparisons for Interactive Fine-Grained Categorization. Workshop on Computer Vision and Human Computation at CVPR (2014).
(10) Ejaz Ahmed, Subhransu Maji, Gregory Shakhnarovich, Larry Davis. Using Human Knowledge to Judge Part Goodness: Interactive Part Selection. Workshop on Computer Vision and Human Computation at CVPR (2014).
(11) Michael Wilber, Sam Kwak, Serge Belongie. Cost-Effective HITs for Relative Similarity Comparisons. Workshop on Computer Vision and Human Computation at CVPR (2014).
(12) Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba. Acquiring Visual Classifiers from Human Imagination. Workshop on Computer Vision and Human Computation at CVPR (2014).
(13) CrowdTruth Project.
(14) Ferran Cabezas, Axel Carlier, Amaia Salvador, Xavier Giró-i-Nieto, Vincent Charvillat. Quality Control in Crowdsourced Object Segmentation. Preprint (2015).