Crowdsourcing Satellite Imagery Analysis: Study of Parallel and Iterative Models

  • Nicolas Maisonneuve
  • Bastien Chopard
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7478)


In this paper we investigate how a crowdsourcing approach, i.e. the involvement of non-experts, can support the efforts of experts to analyze satellite imagery, e.g. to geo-reference objects. An underlying challenge in crowdsourcing, and especially in volunteered geographic information (VGI), is the strategy used to allocate volunteers so as to optimize a set of criteria, especially data quality. We study two main organizational strategies: the parallel and iterative models. In the parallel model, a set of volunteers performs the same task independently and an aggregation function generates a collective output. In the iterative model, a chain of volunteers improves the work of previous workers. We first study their qualitative differences. We then introduce the use of the Mechanical Turk service as a simulator for VGI to benchmark both models. We ask volunteers to identify buildings on three maps and investigate the relationship between the number of untrained volunteers and the accuracy and consistency of the result. For the parallel model we propose a new clustering algorithm, the Democratic Clustering Algorithm (DCA), which takes spatial and democratic constraints into account to form clusters. While both strategies are sensitive to their parameters and implementations, we find that the parallel model tends to reduce type I errors (fewer false identifications) by retaining only consensual results, while the iterative model tends to reduce type II errors (better completeness) and outperforms the parallel model on difficult/complex areas thanks to knowledge accumulation. In terms of consistency, however, the parallel model is better than the iterative one. Second, Linus' law, studied for OpenStreetMap [7] (the iterative model), is of limited validity for the parallel model: beyond a given threshold, adding more volunteers does not change the consensual output.
As a side analysis, we also investigate the use of spatial inter-agreement as an indicator of the intrinsic difficulty of analyzing an area.
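The parallel model's aggregation step can be pictured with a toy sketch: group the point annotations of independent volunteers spatially, allow each volunteer at most one vote per group (the "democratic" constraint), and keep only groups supported by a sufficient share of volunteers. This is a hypothetical illustration under assumed names and parameters (`radius`, `min_fraction`), not the paper's actual DCA, whose details are not given in the abstract.

```python
def democratic_clusters(annotations, radius=10.0, min_fraction=0.5):
    """Toy consensus aggregation for the parallel model.

    `annotations` maps a volunteer id to a list of (x, y) points marking
    buildings. Points within `radius` of a cluster centroid are grouped,
    with at most one point per volunteer per cluster (democratic
    constraint); only clusters supported by at least `min_fraction` of
    all volunteers are kept as consensual detections.
    Hypothetical sketch; not the paper's actual DCA.
    """
    clusters = []  # each: {"members": {volunteer: point}, "centroid": (x, y)}
    for vol, points in annotations.items():
        for (x, y) in points:
            placed = False
            for c in clusters:
                cx, cy = c["centroid"]
                near = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
                if near and vol not in c["members"]:
                    c["members"][vol] = (x, y)
                    n = len(c["members"])
                    c["centroid"] = (
                        sum(px for px, _ in c["members"].values()) / n,
                        sum(py for _, py in c["members"].values()) / n,
                    )
                    placed = True
                    break
            if not placed:
                clusters.append({"members": {vol: (x, y)}, "centroid": (x, y)})
    n_vol = len(annotations)
    return [c["centroid"] for c in clusters
            if len(c["members"]) / n_vol >= min_fraction]
```

With three volunteers agreeing on one building and a single stray annotation, only the consensual location survives, which is how the parallel model filters type I errors at the cost of dropping minority (possibly correct) detections.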


Keywords: volunteered geographic information, crowdsourcing, satellite image analysis




  1. Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval. Addison-Wesley (1999)
  2. Egidi, M., Narduzzo, A.: The emergence of path-dependent behaviors in cooperative contexts. International Journal of Industrial Organization 15(6), 677–709 (1997)
  3. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise, pp. 226–231. AAAI Press (1996)
  4. Fang, C., Lee, J., Schilling, M.A.: Balancing exploration and exploitation through structural design: The isolation of subgroups and organization learning. Organization Science 21(3), 625–642 (2010)
  5. Friess, S.: 50,000 Volunteers Join Distributed Search for Steve Fossett (2007)
  6. Hafner, K.: Silicon Valley's High-Tech Hunt for Colleague (2007)
  7. Haklay, M., Basiouka, S., Antoniou, V., Ather, A.: How Many Volunteers Does it Take to Map an Area Well? The Validity of Linus' Law to Volunteered Geographic Information. The Cartographic Journal 47(4), 315–322 (2010)
  8. Howe, J.: Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business. Crown Business (2008)
  9. Kanefsky, B., Barlow, N.G., Gulick, V.C.: Can distributed volunteers accomplish massive data analysis tasks? Lunar and Planetary Science 32, 1272 (2001)
  10. Lazer, D., Friedman, A.: The network structure of exploration and exploitation. Administrative Science Quarterly 52(4), 667–694 (2007)
  11. Lorenz, J., Rauhut, H., Schweitzer, F., Helbing, D.: How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences of the United States of America 108(22), 9020–9025 (2011)
  12. Malone, T.W., Laubacher, R., Dellarocas, C.: Harnessing crowds: Mapping the genome of collective intelligence. MIT Center for Collective Intelligence, Working Paper No. 4732-09, 1–20 (2009)
  13. March, J.G.: Exploration and exploitation in organizational learning. Organization Science 2(1), 71–87 (1991)
  14. Mason, W.A.: How to use Mechanical Turk for cognitive science research, New York (2011)
  15. Quinn, A.J., Bederson, B.B.: A taxonomy of distributed human computation. Human-Computer Interaction Lab Tech Report, University of Maryland (2009)
  16. Raymond, E.: The cathedral and the bazaar. Knowledge, Technology & Policy 12(3), 23–49 (1999)
  17. ImageCat, RIT, World Bank, GFDRR: Remote Sensing and Damage Assessment Mission, Haiti (2010)
  18. Snow, R., O'Connor, B., Jurafsky, D., Ng, A.Y.: Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 254–263 (October 2008)
  19. Surowiecki, J.: The Wisdom of Crowds: Why the Many Are Smarter than the Few and How... Doubleday (2004)
  20. Welinder, P., Branson, S., Belongie, S., Perona, P.: The Multidimensional Wisdom of Crowds (2010)
  21. Whitehill, J., Ruvolo, P., Wu, T., Bergsma, J., Movellan, J.: Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise, 1–9
  22. Woolley, A.W., Chabris, C.F., Pentland, A., Hashmi, N., Malone, T.W.: Evidence for a collective intelligence factor in the performance of human groups. Science 330(6004), 686–688 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Nicolas Maisonneuve (1)
  • Bastien Chopard (1)

  1. Computer Science Department, University of Geneva, Switzerland
