Applied Geomatics, Volume 10, Issue 1, pp 1–11

A crowdsourcing-based game for land cover validation

  • Maria Antonia Brovelli
  • Irene Celino
  • Andrea Fiano
  • Monia Elisa Molinari
  • Vijaycharan Venkatachalam
Original Paper

Abstract

Land cover datasets are a critical source of environmental information and are increasingly released as open data. The accuracy of these datasets is key to their use in manifold applications and can be assessed through validation processes, e.g., intercomparison with other existing land cover data. The results of this procedure usually highlight disagreements between the compared products, which should be further analyzed. The work presented here addresses this need by proposing an innovative crowdsourcing-based game that engages citizens in validating disagreements between land cover datasets. The game was played by participants of the Free and Open Source Software for Geospatial (FOSS4G) Europe Conference 2015 and made it possible to evaluate the disagreements between the GlobeLand30 and DUSAF land cover datasets over the Como city area (Italy). The results show the feasibility of the proposed approach and the potential of gaming for user engagement in land cover validation campaigns.
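The intercomparison step described above can be pictured as a per-cell comparison of two co-registered land cover rasters, where cells carrying different class codes are flagged for crowdsourced review. The following is a minimal sketch, not the authors' code: the `disagreement_mask` helper and the toy 3×3 grids are illustrative, and the class codes merely resemble GlobeLand30-style values (e.g., 10 cultivated land, 20 forest, 80 artificial surfaces).

```python
import numpy as np

def disagreement_mask(lc_a, lc_b):
    """Return a boolean mask marking cells where two co-registered
    land cover rasters assign different class codes."""
    lc_a = np.asarray(lc_a)
    lc_b = np.asarray(lc_b)
    if lc_a.shape != lc_b.shape:
        raise ValueError("rasters must be co-registered (same shape)")
    return lc_a != lc_b

# Toy grids standing in for GlobeLand30 and DUSAF over the same area;
# codes are illustrative (10 cultivated, 20 forest, 80 artificial).
globeland30 = np.array([[10, 10, 20],
                        [20, 80, 80],
                        [10, 20, 80]])
dusaf       = np.array([[10, 20, 20],
                        [20, 80, 10],
                        [10, 20, 80]])

mask = disagreement_mask(globeland30, dusaf)
print(mask.sum(), "disagreeing cells out of", mask.size)  # 2 of 9
```

In a real campaign, the flagged cells (rather than every pixel) would be the candidate locations shown to players for validation against high-resolution imagery.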

Keywords

Land cover validation · Citizen science · Human computation · Game With A Purpose

Acknowledgments

The authors would like to thank Blom CGR S.p.a for providing the high-resolution photos used in the game.


Copyright information

© Società Italiana di Fotogrammetria e Topografia (SIFET) 2017

Authors and Affiliations

  • Maria Antonia Brovelli (1)
  • Irene Celino (2)
  • Andrea Fiano (2)
  • Monia Elisa Molinari (1)
  • Vijaycharan Venkatachalam (1)
  1. Politecnico di Milano, Como, Italy
  2. CEFRIEL, ICT Institute, Politecnico di Milano, Milan, Italy
