A Review on Benchmarking: Comparing the Static Analysis Tools (SATs) in Web Security

  • Rekha Deshlahre
  • Namita Tiwari
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 100)


In the present IoT (Internet of Things) era, strong security in a Web application is critical to the success of an online presence, and the importance of security has grown rapidly across Web applications. Static analysis tools (SATs) are useful for developers to uncover vulnerabilities in the source code of a Web application early in development; their aim is to improve the correctness and quality of that code. Many SATs exist today, but different tools report different results depending on the complexity of the source code under analysis and on the application scenario. Benchmarking is therefore applied to SATs to compare their abilities: benchmarks allow different systems and components to be compared and assessed. A tool that raises alarms yet misses vulnerabilities gives a misleading picture of the quality of the source code, and benchmarks are used to address this limitation of SATs. However, existing benchmarks impose strict representativeness restrictions and disregard the specificity of the domain in which the benchmarked tools will be used. In this paper, a benchmark is introduced to compare and assess static analysis tools (SATs) in terms of their security vulnerability detection capabilities. The benchmark uses four real-life development scenarios, with workloads having different goals and constraints.
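Benchmarks of this kind typically score each SAT by standard detection metrics such as precision, recall, and F-measure over a workload with known vulnerabilities. The sketch below illustrates that scoring step only; it is not code from the paper, and the tool names and true/false positive counts are hypothetical.

```python
# Minimal sketch (not from the paper): ranking SATs by standard
# detection metrics computed from true positives (tp), false
# positives (fp), and false negatives (fn).

def detection_metrics(tp, fp, fn):
    """Return (precision, recall, F-measure) for one tool's report."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical results of two SATs run over a benchmark workload
# containing 100 known vulnerabilities.
reports = {
    "tool_a": {"tp": 70, "fp": 30, "fn": 30},
    "tool_b": {"tp": 50, "fp": 5, "fn": 50},
}

# Rank tools by F-measure, highest first.
ranking = sorted(
    reports,
    key=lambda t: detection_metrics(**reports[t])[2],
    reverse=True,
)
```

Depending on the development scenario, a benchmark may weight the metrics differently, e.g. favoring recall when missed vulnerabilities are costly, or precision when developer time to triage alarms is the constraint.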


Keywords: Static analysis tools (SATs) · Benchmarking · OWASP · SAMATE · Security metrics · Vulnerability detection



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Maulana Azad National Institute of Technology, Bhopal, India
