“Overloaded!” — A Model-Based Approach to Database Stress Testing
With the advent of the "Big Data" era, contemporary database management systems (DBMS) have introduced new features to satisfy the volume and velocity requirements of big data applications. Although development proceeds at full pace, the testing agenda has not kept up, especially for validating non-functional requirements such as performance and scalability. Current testing approaches rely heavily on a combination of unit testing tools and benchmarks. A testing methodology is still missing in which testers can model the runtime environment of the DBMS under test, define testing goals, and obtain harness support for executing test cases. The major contribution of this paper is MoDaST (Model-based Database Stress Testing), an approach that leverages a state transition model to reproduce the runtime behavior of a DBMS under dynamically shifting workload volume and velocity. Each state in the model represents a possible running state of the DBMS. Testers can therefore define state goals, or target specific state transitions that reveal bugs. Testers can also use MoDaST to pinpoint the conditions that lead to performance loss and thrashing states. We put MoDaST into practice by testing two popular DBMS: PostgreSQL and VoltDB. The results show that MoDaST reaches portions of source code that are only reachable through non-functional testing. Among the defects revealed by MoDaST while increasing code coverage, we highlight one that was confirmed as a major bug and promptly fixed by the VoltDB developers.
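The state-machine idea behind MoDaST can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the state names and the performance-slope thresholds below are assumptions chosen for illustration, where the "performance slope" is the change in observed throughput as the test driver increases the workload.

```python
# Illustrative sketch of a database state machine for stress testing.
# State names and the slope thresholds are assumptions, not the paper's
# exact model. States are ordered from lightest to heaviest load regime.
STATES = ["warm-up", "steady", "under-pressure", "stress", "thrashing"]

def next_state(current: str, perf_slope: float) -> str:
    """Advance the state machine based on the observed performance slope
    (throughput delta per added unit of workload)."""
    idx = STATES.index(current)
    if perf_slope > 0.1:             # throughput still improving
        return "steady" if idx <= 1 else STATES[idx - 1]
    if perf_slope < -0.1:            # throughput degrading: move toward thrashing
        return STATES[min(idx + 1, len(STATES) - 1)]
    return current                   # plateau: remain in the same state

def drive_until(goal: str, slopes) -> list:
    """A test driver could ramp up the workload, observing one slope per
    step, until the goal state (e.g. 'thrashing') is reached."""
    state, trace = "warm-up", ["warm-up"]
    for s in slopes:
        state = next_state(state, s)
        trace.append(state)
        if state == goal:
            break
    return trace
```

With this sketch, a tester expresses a testing goal simply as a target state; for example, `drive_until("thrashing", observed_slopes)` keeps pushing the workload until the DBMS is driven into the (assumed) thrashing state, and the returned trace records the state transitions that led there.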
Keywords: Code Coverage · Test Driver · Performance Input · Connection Module · Performance Slope
Supported by the Digital Inclusion Project (Ministry of Communication of Brazil), the National Research Fund of Luxembourg, and CNPq grant 441944/2014-0.