Abstract
Code quality is a constant challenge faced by today's software industry. To ensure that developers follow good coding practices, a variety of program analysis and test coverage tools are routinely deployed. However, these tools often fail to engage developers and change their practices when applied to legacy systems, as they output a huge number of warnings that quickly overwhelm developers. In this article, we explore how individual feedback and gamification can motivate developers to pay more attention to good coding practices. To that end, we implemented these two concepts in a tool that we deployed at two large companies, where we conducted a case study. We find that individual feedback is essential for motivating developers. We also find that gamification can be useful but must be used with caution, as it can frustrate some developers. Finally, we reflect on lessons learned during our case studies and conclude that the promising approach of our tool needs to be supported by longitudinal as well as comparative studies.
Notes
A quality gate is defined by a threshold on a quality metric, for instance 85% code coverage. Once set, developers have no choice but to respect the threshold.
Themis does not create any code coverage action for test code, which explains why there is no such action in TestCustomer.js.
Acknowledgements
We thank our research participants and Cassandra Petrachenko for improving our paper.
Communicated by: Martin Robillard
Cite this article
Foucault, M., Blanc, X., Falleri, JR. et al. Fostering good coding practices through individual feedback and gamification: an industrial case study. Empir Software Eng 24, 3731–3754 (2019). https://doi.org/10.1007/s10664-019-09719-4