An Operational Framework for Evaluating the Performance of Learning Record Stores

  • Chahrazed Labba
  • Azim Roussanaly
  • Anne Boyer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12315)


Nowadays, Learning Record Stores (LRS) are increasingly used within digital learning systems to store learning experiences. Multiple LRS products have appeared on the market. These systems provide the same basic functional features, including receiving, storing and retrieving learning records, and some offer additional features such as visualization functions and interfaces to various external systems. However, non-functional properties such as scalability, response time and throughput may differ from one LRS to another. Choosing the appropriate LRS is therefore of high importance for an organization, since adopting one that is not optimized for its non-functional requirements may lead to a loss of money, time and effort. In this paper, we focus on the performance aspect and introduce an operational framework for analyzing the performance behaviour of an LRS under a set of test scenarios. Moreover, the framework lets the user choose a suitable strategy for sending storing requests so as to optimize their processing while taking the underlying infrastructure into account. A set of metrics provides performance measurements at the end of each test. To validate our framework, we studied and analyzed the performance of two open-source LRS: Learning Locker and Trax.
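The abstract mentions that a set of metrics is computed at the end of each test. As a minimal illustration (not the paper's actual implementation), the sketch below aggregates per-request latencies from a test run into the kind of measurements named in the abstract: average response time, a tail-latency percentile, and throughput. The function name and the sample values are hypothetical.

```python
import statistics

def performance_summary(latencies_ms, wall_time_s):
    """Aggregate per-request latencies (ms) from one test scenario
    into test-level metrics: average response time, 95th-percentile
    latency, and throughput (requests per second of wall-clock time)."""
    n = len(latencies_ms)
    ordered = sorted(latencies_ms)
    # Nearest-rank 95th percentile: smallest latency that covers
    # at least 95% of the observed requests.
    rank = -(-95 * n // 100)  # ceil(0.95 * n)
    p95 = ordered[rank - 1]
    return {
        "requests": n,
        "avg_ms": statistics.fmean(latencies_ms),
        "p95_ms": p95,
        "throughput_rps": n / wall_time_s,
    }

# Example: ten storing requests completed in 2 s of wall-clock time.
print(performance_summary([10, 12, 11, 9, 14, 13, 10, 12, 11, 50], 2.0))
# → {'requests': 10, 'avg_ms': 15.2, 'p95_ms': 50, 'throughput_rps': 5.0}
```

In a real test harness the latencies would come from timing each xAPI storing request against the LRS endpoint; here they are supplied as literals so the metric computation itself is easy to verify.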


Keywords: Test scenarios · Non-functional requirements · Learning record store · xAPI specifications



This work has been done in the framework of the LOLA (see Footnote 8) project, with the support of the French Ministry of Higher Education, Research and Innovation.



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Lorraine University, Loria, KIWI Team, Nancy, France
