Tools for Injecting Software Faults at the Binary and Source-Code Level

Chapter in: Innovative Technologies for Dependable OTS-Based Critical Systems

Abstract

Software Fault Injection (SFI) is an approach for the assessment of fault-tolerant software, which emulates software faults (i.e., bugs) in a software component to assess the impact of these faults on system behavior and on fault-tolerance properties. To make SFI feasible, support tools are required to reduce the cost of implementing the experimental testbed, to automate SFI experiments and keep their duration low, and to oversee the phases of SFI campaigns. The focus of this chapter is on the workflow of SFI campaigns and on their implementation and execution. We detail the steps to be performed in an SFI campaign, so that a tester can reproduce them in SFI experiments, and we highlight the key aspects that deserve attention when performing SFI. SFI is presented in the context of two SFI tools developed during the CRITICAL-STEP project: SAFE, which injects software faults by mutating the source code of the target, and csXception™/G-SWFIT, which injects software faults by mutating binary code. We also compare the two tools in terms of accuracy (i.e., the ability to inject faults that match faults in the source code) and report lessons learned about the implementation of SFI tools.
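As a concrete illustration of what an injected software fault looks like at the source-code level, the sketch below emulates a Missing Function Call (MFC) fault in a small C routine, guarded by a compile-time switch. The function names and the FAULT_MFC macro are hypothetical and are used only for illustration; they do not reflect the actual mechanisms of SAFE or csXception™/G-SWFIT.

    /* Hypothetical target routine: writes a buffer while holding a device lock.
     * The MFC (Missing Function Call) fault type is emulated by omitting
     * the call that releases the lock. */
    #include <stdio.h>

    static int device_locked = 0;

    static void lock_device(void)   { device_locked = 1; }
    static void unlock_device(void) { device_locked = 0; }

    int flush_buffer(const char *buf, size_t len)
    {
        lock_device();
        size_t written = fwrite(buf, 1, len, stdout);  /* stand-in for device I/O */
    #ifndef FAULT_MFC
        unlock_device();  /* correct code: the lock is always released */
    #endif
        /* When built with -DFAULT_MFC, the unlock call is missing,
         * emulating an omission bug left in the component's source code. */
        return written == len ? 0 : -1;
    }

Running a workload against the faulty build and comparing its behavior with a fault-free reference run then reveals how the injected fault propagates and whether the system's fault-tolerance mechanisms contain it.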

Notes

  1.

    Each fault operator is related to a specific fault type and is denoted with the “O” prefix (e.g., the OMIA fault operator injects the MIA fault type, i.e., a missing “if” construct around statements), as illustrated in the sketch below.
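For instance, a hypothetical rendering of the OMIA operator in C is sketched below: the condition that should guard a statement is removed, while the guarded statement is kept. The function name and the FAULT_OMIA macro are assumptions for illustration and do not correspond to the operator's actual implementation in SAFE or csXception™/G-SWFIT.

    /* MIA fault type: missing "if" construct around statement(s).
     * The guarded statement remains, but the bound check that should
     * protect it is removed in the faulty build. */
    #include <string.h>

    void copy_message(char *dst, size_t dst_len, const char *src)
    {
    #ifndef FAULT_OMIA
        if (strlen(src) < dst_len)   /* correct code: bound check */
    #endif
        {
            strcpy(dst, src);        /* without the check, the copy is unconditional,
                                        emulating a missing-check bug */
        }
    }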

Author information

Correspondence to Anna Lanzaro.

Copyright information

© 2013 Springer-Verlag Italia

About this chapter

Cite this chapter

Lanzaro, A., Natella, R., Barbosa, R. (2013). Tools for Injecting Software Faults at the Binary and Source-Code Level. In: Cotroneo, D. (eds) Innovative Technologies for Dependable OTS-Based Critical Systems. Springer, Milano. https://doi.org/10.1007/978-88-470-2772-5_7

  • DOI: https://doi.org/10.1007/978-88-470-2772-5_7

  • Publisher Name: Springer, Milano

  • Print ISBN: 978-88-470-2771-8

  • Online ISBN: 978-88-470-2772-5
