
Crowdsourcing Satellite Imagery Analysis: Study of Parallel and Iterative Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7478)

Abstract

In this paper we investigate how a crowdsourcing approach, i.e. the involvement of non-experts, can support the effort of experts to analyze satellite imagery, e.g. by geo-referencing objects. An underlying challenge in crowdsourcing, and especially in volunteered geographic information (VGI), is the strategy used to allocate volunteers so as to optimize a set of criteria, above all data quality. We study two main organizational strategies: the parallel and the iterative model. In the parallel model, a set of volunteers independently performs the same task and an aggregation function generates a collective output. In the iterative model, a chain of volunteers successively improves the work of previous workers. We first study their qualitative differences. We then introduce the use of the Mechanical Turk service as a simulator in VGI to benchmark both models: we ask volunteers to identify buildings on three maps and investigate the relationship between the number of untrained volunteers and the accuracy and consistency of the result. For the parallel model we propose a new clustering algorithm, the democratic clustering algorithm (DCA), which takes spatial and democratic constraints into account to form clusters. While both strategies are sensitive to their parameters and implementations, we find that the parallel model tends to reduce type I errors (fewer false identifications) by retaining only consensual results, whereas the iterative model tends to reduce type II errors (better completeness) and outperforms the parallel model on difficult/complex areas thanks to knowledge accumulation. In terms of consistency, however, the parallel model is better than the iterative one. Second, Linus' law, studied for OpenStreetMap [7] (an iterative model), is of limited validity for the parallel model: beyond a given threshold, adding more volunteers does not change the consensual output.
As a side analysis, we also investigate the use of spatial inter-agreement as an indicator of the intrinsic difficulty of analyzing an area.
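To make the parallel model's aggregation step concrete, the following is a minimal sketch of a "spatial + democratic" consensus procedure in the spirit the abstract describes: volunteers' point annotations are merged when they fall within a spatial radius, and a merged cluster is kept only if enough distinct volunteers support it. The function name, the `radius` and `quorum` parameters, and the union-find implementation are illustrative assumptions, not the paper's actual DCA.

```python
import math
from collections import defaultdict


def democratic_clusters(annotations, radius=10.0, quorum=0.5):
    """Aggregate independent point annotations (parallel model sketch).

    annotations: dict mapping volunteer id -> list of (x, y) points.
    radius: spatial constraint -- points within this distance merge.
    quorum: democratic constraint -- keep only clusters supported by
            at least this fraction of all volunteers.
    Returns the centroids of the retained (consensual) clusters.
    Hypothetical re-implementation; parameters are assumptions.
    """
    # Flatten to (volunteer, point) pairs so each annotation is one node.
    points = [(v, p) for v, pts in annotations.items() for p in pts]
    n = len(points)

    # Union-find with path halving over pairs within `radius`.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (_, (x1, y1)), (_, (x2, y2)) = points[i], points[j]
            if math.hypot(x1 - x2, y1 - y2) <= radius:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    # Group annotations by cluster root.
    clusters = defaultdict(list)
    for i in range(n):
        clusters[find(i)].append(points[i])

    # Democratic filter: count distinct volunteers, not raw points,
    # so one volunteer cannot outvote the crowd by clicking twice.
    total = len(annotations)
    kept = []
    for members in clusters.values():
        voters = {v for v, _ in members}
        if len(voters) / total >= quorum:
            xs = [p[0] for _, p in members]
            ys = [p[1] for _, p in members]
            kept.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return kept
```

For example, if two of three volunteers mark a building near the origin and one marks an isolated point far away, a 50% quorum keeps the consensual centroid and filters the outlier, which illustrates why this filtering reduces type I errors at the cost of discarding minority (possibly correct) identifications.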


References

  1. Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval, vol. 463. Addison Wesley (1999)

  2. Egidi, M., Narduzzo, A.: The emergence of path-dependent behaviors in cooperative contexts. International Journal of Industrial Organization 15(6), 677–709 (1997)

  3. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise, pp. 226–231. AAAI Press (1996)

  4. Fang, C., Lee, J., Schilling, M.A.: Balancing exploration and exploitation through structural design: The isolation of subgroups and organizational learning. Organization Science 21(3), 625–642 (2010)

  5. Friess, S.: 50,000 Volunteers Join Distributed Search for Steve Fossett (2007)

  6. Hafner, K.: Silicon Valley's High-Tech Hunt for Colleague (2007)

  7. Haklay, M., Basiouka, S., Antoniou, V., Ather, A.: How Many Volunteers Does it Take to Map an Area Well? The Validity of Linus’ Law to Volunteered Geographic Information. The Cartographic Journal 47(4), 315–322 (2010)

  8. Howe, J.: Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business, unedited edition. Crown Business (2008)

  9. Kanefsky, B., Barlow, N.G., Gulick, V.C.: Can distributed volunteers accomplish massive data analysis tasks? Lunar and Planetary Science 32, 1272 (2001)

  10. Lazer, D., Friedman, A.: The network structure of exploration and exploitation. Administrative Science Quarterly 52(4), 667–694 (2007)

  11. Lorenz, J., Rauhut, H., Schweitzer, F., Helbing, D.: How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences of the United States of America 108(22), 9020–9025 (2011)

  12. Malone, T.W., Laubacher, R., Dellarocas, C.: Harnessing crowds: Mapping the genome of collective intelligence. MIT Center for Collective Intelligence (No. 4732-09), 1–20 (2009) (retrieved June 10, 2009)

  13. March, J.G.: Exploration and exploitation in organizational learning. Organization Science 2(1), 71–87 (1991)

  14. Mason, W.A.: How to use mechanical turk for cognitive science research, New York (2011)

  15. Quinn, A.J., Bederson, B.B.: A taxonomy of distributed human computation. Human-Computer Interaction Lab Tech Report, University of Maryland (2009)

  16. Raymond, E.: The cathedral and the bazaar. Knowledge, Technology & Policy 12(3), 23–49 (1999)

  17. ImageCat, RIT, World Bank, GFDRR: Remote Sensing and Damage Assessment Mission, Haiti (2010)

  18. Snow, R., O'Connor, B., Jurafsky, D., Ng, A.Y.: Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 254–263 (October 2008)

  19. Surowiecki, J.: The wisdom of crowds: why the many are smarter than the few and how... Doubleday (2004)

  20. Welinder, P., Branson, S., Belongie, S., Perona, P.: The multidimensional wisdom of crowds. In: Advances in Neural Information Processing Systems (NIPS 2010) (2010)

  21. Whitehill, J., Ruvolo, P., Wu, T., Bergsma, J., Movellan, J.: Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In: Advances in Neural Information Processing Systems (NIPS 2009) (2009)

  22. Woolley, A.W., Chabris, C.F., Pentland, A., Hashmi, N., Malone, T.W.: Evidence for a collective intelligence factor in the performance of human groups. Science 330(6004), 686–688 (2010)

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Maisonneuve, N., Chopard, B. (2012). Crowdsourcing Satellite Imagery Analysis: Study of Parallel and Iterative Models. In: Xiao, N., Kwan, MP., Goodchild, M.F., Shekhar, S. (eds) Geographic Information Science. GIScience 2012. Lecture Notes in Computer Science, vol 7478. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33024-7_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33023-0

  • Online ISBN: 978-3-642-33024-7
