When a Metadata Provider Task Is Successful

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10450)

Abstract

Computer services are normally assumed to work well all the time. In this work we examined the operation and the errors of metadata harvesting services and looked for clues that help predict the consistency of their behavior and the quality of the harvesting. The large number of such services, the huge amount of harvested information and the possibility of encountering transient conditions make this work hard. We studied 395530 harvesting tasks from 2138 harvesting services in 185 harvesting rounds over a period of 9 months; 214163 of these tasks ended with error messages, and the remaining tasks occasionally returned fewer records than expected. A significant part of the OAI services never worked or have ceased working, while many other services occasionally fail to respond. It is not trivial to decide when a task is successful: tasks that return with an error message sometimes do return records, and tasks that declare that they completed normally sometimes return fewer records, or none at all. This issue is fundamental for any further analysis of the harvesting outcome and any assessment that may follow. Therefore, in this work we studied the error messages and the task outcome patterns in which they appear, as well as the tasks that returned no records, in order to determine the most essential condition for deciding when a task is successful. Our conclusion is that a task should be considered successful when it returns some records.
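The success criterion the abstract arrives at can be sketched as a small classifier over task outcomes. This is a minimal illustration, not the paper's implementation; the type and field names (`HarvestTask`, `records_returned`, `error_message`) are assumptions introduced here for clarity.

```python
# Sketch of the paper's conclusion: a harvesting task counts as successful
# iff it returned at least one record, regardless of any error message.
# All names below are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HarvestTask:
    records_returned: int        # metadata records obtained by this task
    error_message: Optional[str] # reported error, if any

def is_successful(task: HarvestTask) -> bool:
    # Error messages alone are not decisive: tasks that report errors may
    # still deliver records, and tasks that end "normally" may deliver none.
    return task.records_returned > 0

# Examples:
assert is_successful(HarvestTask(120, None))
assert is_successful(HarvestTask(40, "transient error"))  # error, but records arrived
assert not is_successful(HarvestTask(0, None))            # clean exit, nothing harvested
```

Using record count rather than the error status as the decisive signal is exactly the distinction the abstract argues for: the two indicators disagree often enough that only the presence of returned records is a reliable condition.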

Keywords

OAI · Metadata harvesting · Reliability · Services · Temporary error · Permanent error


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Laboratory on Digital Libraries and Electronic Publishing, Department of Archive, Library and Museum Sciences, Ionian University, Corfu, Greece