Encyclopedia of Database Systems

Living Edition
| Editors: Ling Liu, M. Tamer Özsu

Intrusion Detection Technology

  • Tyrone Grandison
  • Evimaria Terzi
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7993-3_209-3

Definition

Intrusion detection (ID) is the process of monitoring events occurring in a system and signaling responsible parties when interesting (suspicious) activity occurs.

Intrusion detection systems (IDSs) consist of (1) an agent that collects information on the stream of monitored events, (2) an analysis engine that detects signs of intrusion, and (3) a response module that generates responses based on the outcome from the analysis engine.
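
As a concrete (if highly simplified) illustration of these three components, the following Python sketch wires an event stream, an analysis function, and a response callback together; the Event fields, event names, and function names are hypothetical and not taken from any particular IDS.

    from dataclasses import dataclass
    from typing import Callable, Iterable

    # Hypothetical event record; real agents consume audit logs, packets, or application traces.
    @dataclass
    class Event:
        source: str   # e.g., "os", "network", "application"
        actor: str    # user or process responsible for the event
        action: str   # what happened, e.g., "login_failed"

    def run_ids(events: Iterable[Event],
                analyze: Callable[[Event], bool],
                respond: Callable[[Event], None]) -> None:
        """Wire the three components together: the collected event stream (agent),
        a detection function (analysis engine), and an alerting callback (response module)."""
        for event in events:      # (1) agent: consume the monitored stream
            if analyze(event):    # (2) analysis engine: look for signs of intrusion
                respond(event)    # (3) response module: act on the finding

    # Example wiring: flag failed logins and print an alert.
    stream = [Event("os", "alice", "login_failed")] * 4
    run_ids(stream,
            analyze=lambda e: e.action == "login_failed",
            respond=lambda e: print(f"ALERT: suspicious {e.action} by {e.actor}"))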

Historical Background

The concept of ID has existed for decades in the domains of personal home security, defense, and early-warning systems. However, automated IDSs emerged in the public domain in 1980 [1] and sought to identify possible violations of the system’s security policy by a user or a set of users.

One of the basic elements of an intrusion detection system is the audit log that captures system activity. The initial IDSs presented to the academic community stored operating system actions, i.e., they addressed the operating system layer. Over time, other IDSs have emerged that store different artifacts and try to identify intrusive behaviors at other layers of operation. The following layers of operation can be identified.

Operating System: The logs in this layer contain information from the kernel and other operating system components and help determine if an attacker is trying to compromise the OS. Examples include the Audit Analysis Project [2], HayStack [3], USTAT [4], Wisdom and Sense [5], ComputerWatch [6], Information Security Officer’s Assistant (ISOA) [7], IDES [8], Hyperview [9], ASAX [10], DPEM [11], IDIOT [12], and Next-Generation Intrusion Detection Expert System (NIDES) [13, 14, 15, 16].

Network: At the network layer, communication data is analyzed to determine if an attacker is trying to access one’s network. Examples of IDSs that operate on this layer include Network Audit Director and Intrusion Reporter (NADIR) [17], Network Security Monitor (NSM) [18], Distributed Intrusion Detection System (DIDS) [19], Graph Based Intrusion Detection System (GrIDS) [20], JiNao [21], EMERALD [22], and Bro [23].

Application: Application-level IDSs examine the operations executed within an application to ascertain whether the application is being manipulated into behavior that is prohibited. Examples include Multics Intrusion Detection and Alerting System (MIDAS) [24] and Janus [6]. Database-specific IDSs form an important group of application-level IDSs. Examples of such systems include Discovery [25] and RIPPER [26]. Due to the sensitive information stored in database systems, issues related to database-specific IDSs were among the first to be addressed [27, 28, 29].

The above categorization is historical and depends mostly on the type of log data the IDS uses to identify abnormal patterns.

Over the last decade, there has been increased interest in IDSs for distributed systems, which may emerge someday as another level that is a hybrid of the other levels. These systems are a product of the current set of systems, architectures, and domains, e.g., sensor networks [5, 30, 31], mobile networks [32, 33], Web services [34], SCADA networks [35], grid computing and advanced metering infrastructure [36, 37, 38], cloud systems and virtual machines [39, 40], cyber-physical systems [41], etc. IDSs for distributed environments may utilize data from any combination of the operating system, network, or application levels.

Irrespective of the operational layer, the basic detection techniques used by different IDSs share a common basis, which we describe in the next section.

Scientific Fundamentals and Key Applications

Types of Attacks

In this section, we give a generic classification of the types of attacks that ID systems have traditionally tried to cope with. The classification is mainly inspired by the one provided in [42]:
  • External break-ins: When an unauthorized user tries to gain access to a computer system.

  • Masquerader (internal) attacks: When an authorized user attempts to assume the identity of another user. These attacks are also called internal attacks because they are initiated by users who are already authorized.

  • Penetration attack: A user attempts to directly violate the system’s security policy.

  • Leakage: Moving potentially sensitive data out of the system.

  • Denial of Service: Denying legitimate users the use of system resources by making those resources unavailable to them.

  • Malicious use: In this category fall miscellaneous attacks such as file deletion, viruses, resource hogging, etc.

Detection Methodologies

In this section, we provide a high-level categorization of IDSs and give an abstract idea of how they work. In the discussion, we provide examples of existing IDSs. However, the examples presented here are indicative rather than exhaustive. For a more complete discussion of IDSs, we refer the reader to [42, 43, 44].

Traditionally, there are two basic approaches to intrusion detection: anomaly detection and misuse detection. In anomaly detection, the goal is to define and characterize legitimate behaviors of the users and then detect anomalous behaviors by quantifying deviations from the former. However, the distance between anomalous and legitimate behaviors is a difficult notion to quantify.

Anomaly detection can be static or dynamic. A static anomaly detection system is based on the assumption that there is a static portion of the system being monitored. Static portions of the system can be represented as a binary string or a set of binary strings (like files). If the static portion of the system ever deviates from its original form, either an error has occurred or an intruder has altered the static portion of the system. Examples of static anomaly detectors are Tripwire [45, 46] and virus-specific checkers [3].
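
The idea behind such static checkers can be sketched in a few lines of Python: record a cryptographic digest of each monitored file, then later re-hash the files and compare against that baseline. This is only a minimal illustration in the spirit of integrity checkers such as Tripwire, not their actual implementation; the file paths and function names are made up.

    import hashlib
    from pathlib import Path
    from typing import Dict, Iterable, List

    def snapshot(paths: Iterable[str]) -> Dict[str, str]:
        """Record the baseline: a SHA-256 digest for each monitored (static) file."""
        return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def check(baseline: Dict[str, str]) -> List[str]:
        """Re-hash the files and report any that deviate from the baseline;
        each mismatch is either an error or evidence of tampering."""
        return [p for p, digest in baseline.items()
                if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

    # Usage: take the snapshot at a known-good point in time, re-run check() periodically.
    # baseline = snapshot(["/etc/passwd", "/usr/bin/login"])
    # print(check(baseline))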

Dynamic anomaly detectors are harder to build since building them requires a definition of behavior, which is often defined as a sequence (or partially ordered sequence) of distinct events. Differentiating between normal and anomalous activity in dynamic anomaly detection systems is much harder than distinguishing changes in static elements. Dynamic anomaly detection systems usually create a base profile to characterize normal, acceptable behavior. A profile usually consists of a set of observed measures of behavior for a selected set of dimensions. Once the base profile has been initialized, dynamic anomaly detection systems operate similarly to static ones; they monitor behavior by comparing the current behavior with that implied by the base profile. Typically, there is a wide variation of acceptable behaviors, and statistical methods are employed to measure deviation from the base profile. The main challenge for dynamic anomaly detection systems is that they must build accurate base profiles and then recognize behaviors that significantly deviate from the profile. An example of a dynamic anomaly detection system that uses statistical approaches to measure deviation from the base profile is the Next-Generation Intrusion Detection Expert System (NIDES) [13, 14, 15, 16], developed by SRI.
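
The base-profile idea can be illustrated with a simplified statistical sketch: estimate the mean and standard deviation of each observed measure during training, then score new behavior by how many standard deviations it lies from the profile. The measures, data, and function names below are illustrative assumptions and do not reproduce the actual NIDES statistical component.

    from statistics import mean, stdev
    from typing import Dict, List, Tuple

    def build_profile(training: Dict[str, List[float]]) -> Dict[str, Tuple[float, float]]:
        """Base profile: mean and standard deviation of each observed measure
        (e.g., files accessed per session, CPU seconds consumed)."""
        return {measure: (mean(values), stdev(values)) for measure, values in training.items()}

    def deviation_score(profile: Dict[str, Tuple[float, float]],
                        observed: Dict[str, float]) -> float:
        """Sum of squared z-scores across measures: how far current behavior
        lies from the profile, in units of its normal variation."""
        score = 0.0
        for measure, value in observed.items():
            mu, sigma = profile[measure]
            if sigma > 0:
                score += ((value - mu) / sigma) ** 2
        return score

    profile = build_profile({"files_accessed": [10, 12, 9, 11], "cpu_seconds": [3.0, 2.5, 3.2, 2.8]})
    print(deviation_score(profile, {"files_accessed": 55, "cpu_seconds": 3.0}))  # large value -> anomalous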

The main advantage of dynamic anomaly detection systems is that they do not require any configuration, since they automatically learn the behavior of a large number of subjects. Lacking prior knowledge of how an intrusion would manifest itself, anomaly detection systems are capable of identifying novel intrusions or variations of known intrusions. However, building base profiles and defining measures of deviation from them is not an easy computational task. For that reason, it has been an active area of research in which several machine-learning, time-series analysis, and other data-analysis techniques have been employed [27, 47, 48, 49, 50, 51, 52, 53].

Misuse detection is concerned with identifying intruders who attempt to break into a system using some known technique. If a system security administrator were aware of all known vulnerabilities, then a misuse detection system would be able to identify their occurrences and eliminate them. A precisely characterized, known kind of intrusion is called an intrusion scenario. A misuse detection system compares current system activity against a set of intrusion scenarios in an attempt to identify a scenario in progress.

The differentiating factor between the various misuse detection techniques is the model used for describing the bad behaviors that constitute intrusions. Rules have primarily been used to model the system administrator’s knowledge about the system; MIDAS [54] and IDES [8] are examples of rule-based systems. Rule-based systems accumulate large numbers of rules, which usually prove difficult to interpret and modify. In order to overcome these problems, model-based rule organizations and state-transition representations were proposed. These modeling approaches are more intuitive, particularly in misuse detection systems, where users need to express and understand scenarios. An example of such a system is the Unix State Transition Analysis Tool (USTAT) [55].
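
As a rough illustration of the state-transition idea (the scenario, event names, and code below are hypothetical and far simpler than systems such as USTAT), an intrusion scenario can be encoded as an ordered sequence of per-user events and matched by advancing one small finite-state matcher per user:

    from collections import defaultdict
    from typing import Dict, List, Tuple

    # Hypothetical intrusion scenario: repeated failed logins, a successful login,
    # then access to a sensitive file.
    SCENARIO: List[str] = ["login_failed", "login_failed", "login_ok", "read_shadow_file"]

    def detect(events: List[Tuple[str, str]]) -> List[str]:
        """Advance one finite-state matcher per user; a user who completes
        the whole scenario is reported as a suspected intruder."""
        state: Dict[str, int] = defaultdict(int)
        suspects: List[str] = []
        for user, action in events:
            if action == SCENARIO[state[user]]:
                state[user] += 1
                if state[user] == len(SCENARIO):
                    suspects.append(user)
                    state[user] = 0
            # events that do not advance the scenario are simply ignored in this sketch
        return suspects

    audit = [("bob", "login_failed"), ("bob", "login_failed"),
             ("bob", "login_ok"), ("bob", "read_shadow_file")]
    print(detect(audit))  # ['bob']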

The main advantage of a misuse detection system is that it knows precisely how the intrusions it looks for manifest themselves. This leads to simple and efficient processing of the audit data. The obvious disadvantage of such systems is that specifying the signatures to be detected is a time-consuming task that requires a great deal of domain knowledge. At the same time, misuse detection systems lack the ability to identify novel intrusion profiles.

Future Directions

One of the major concerns associated with IDSs and their utility is their runtime efficiency. More often than not, IDSs must consume substantial system resources in order to be effective. Developing resource-aware IDSs raises some interesting challenges. One possible way of addressing this concern is to extend IDSs for distributed systems [5, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] into generic, holistic intrusion detection systems. These systems would simultaneously monitor all layers of an arbitrary environment. That is, the system administrator would not have to run separate ID software for operating system- and application-specific attacks but only a single system that can simultaneously detect intrusions at all the desired operational layers. Such systems are expected to be less resource demanding; however, their development will certainly create several new design challenges.
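
The following sketch shows the holistic idea only at the level of plumbing: per-layer detectors run over their own event feeds and their alerts are merged into a single administrator view. The layer names and the trivial stand-in detectors are hypothetical; they are not drawn from any existing system.

    from typing import Callable, Dict, List

    LayerDetector = Callable[[List[str]], List[str]]  # per-layer detector: events in, alerts out

    def holistic_ids(feeds: Dict[str, List[str]],
                     detectors: Dict[str, LayerDetector]) -> List[str]:
        """Run every layer's detector over its own feed and merge the results,
        so the administrator operates one system instead of one per layer."""
        alerts: List[str] = []
        for layer, events in feeds.items():
            for alert in detectors[layer](events):
                alerts.append(f"[{layer}] {alert}")
        return alerts

    # Example wiring with trivial stand-in detectors.
    print(holistic_ids(
        feeds={"os": ["login_failed"] * 5, "network": ["syn", "syn", "syn"]},
        detectors={"os": lambda ev: ["possible brute force"] if ev.count("login_failed") >= 5 else [],
                   "network": lambda ev: ["possible SYN flood"] if ev.count("syn") >= 3 else []}))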

The accuracy and reliability of IDSs also need to be given further attention. Current intrusion detection tools have a proclivity for producing too many false positives and false negatives, i.e., signaling a security incident when none has occurred and not signaling an incident when one has occurred. More significantly, IDS efficacy with regard to true positives and true negatives is critical to the continued use of (and confidence in) intrusion detection technology. Computational intelligence [56] and collaboration strategies [57] are interesting starting points for finding solutions to this set of issues.
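
These quantities follow directly from the counts of correctly and incorrectly signaled events; the small helper below uses the standard definitions of the detection (true positive) rate and the false positive rate and is not tied to any particular IDS or benchmark.

    def rates(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Summarize IDS accuracy from counted outcomes:
        tp/fn - intrusions that were / were not signaled,
        fp/tn - benign activity that was / was not flagged."""
        return {
            "detection_rate": tp / (tp + fn) if tp + fn else 0.0,        # true positive rate
            "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        }

    # Example: 90 of 100 intrusions caught, but 500 false alarms over 10,000 benign events.
    print(rates(tp=90, fp=500, tn=9500, fn=10))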

IDSs process a large amount of data and are often required to provide the system administrator with a large amount of possibly technically dense information. Creating the appropriate visualization, determining the relevant data points to display, choosing the level of abstraction (if any), and reducing the cognitive load on the administrator are significant usability challenges that need to receive more focus from the research community.

For seamless integration with other tools, including other IDSs, there is a need to standardize the messages sent between the various components of an intrusion detection system and the messages sent to external programs. This message standardization is slowly occurring [58] and requires further research; it will enable interoperability of IDSs, which will likely speed up technological advances in the field.

In this entry, we have mainly focused on IDSs and described them as mechanisms that help secure other systems. However, IDSs are themselves systems, and as such they have their own security risks. Therefore, they also require some protection to prevent an intruder from manipulating the intrusion detection system itself.

Recommended Reading

  1. Bace RG. Intrusion detection. Macmillan Technical Publishing; 2000.
  2. Lunt T, Halme L, Van Horne J. Automated analysis of computer system audit trails for security purposes. In: Thirteenth National Computer Security Conference; 1990.
  3. Skardhamar R. Virus: detection and elimination. AP Professional; 1996.
  4. Ilgun K. USTAT: a real-time intrusion detection system for UNIX. In: IEEE Symposium on Research in Security and Privacy; 1993.
  5. Vaccaro HS, Liepins GE. Detection of anomalous computer session activity. In: IEEE Symposium on Research in Security and Privacy; 1989.
  6. Goldberg I, Wagner D, Thomas R, Brewer E. A secure environment for untrusted helper applications (confining the wily hacker). In: Sixth USENIX Security Symposium; 1996.
  7. Winkler JR. A UNIX prototype for intrusion and anomaly detection in secure networks. In: Thirteenth National Computer Security Conference; 1990.
  8. Lunt TF, Jagannathan R, Lee R, Listgarten S, Edwards DL, Neumann PG, Javitz HS, Valdes A. IDES: the enhanced prototype, a real-time intrusion detection system. Technical Report SRI Project 4185-010, SRI-CSL-88-12; 1988.
  9. Debar H, Becker M, Siboni D. A neural network component for an intrusion detection system. In: IEEE Computer Society Symposium on Research in Security and Privacy; 1992.
  10. Habra J, Le Charlier B, Mounji A, Mathieu I. ASAX: software architecture and rule-based language for universal audit trail analysis. In: ESORICS; 1992. p. 6.
  11. Ko C, Fink G, Levitt K. Automated detection of vulnerabilities in privileged programs by execution monitoring. In: 10th Annual Computer Security Applications Conference; 1994.
  12. Kumar S, Spafford EH. An application of pattern matching in intrusion detection. Purdue University Technical Report CSD-TR-94-013; 1994.
  13. Anderson D, Frivold T, Valdes A. Next-generation intrusion detection expert system (NIDES): a summary. SRI International Computer Science Laboratory Technical Report SRI-CSL-95-07; 1995.
  14. Anderson D, Lunt T, Javitz H, Tamaru A, Valdes A. Detecting unusual program behavior using the statistical component of the next-generation intrusion detection expert system (NIDES). SRI International Computer Science Laboratory Technical Report SRI-CSL-95-06; 1995.
  15. Javitz H, Valdes A. The NIDES statistical component: description and justification. SRI International Computer Science Laboratory Technical Report; 1993.
  16. Lunt TF. A survey of intrusion detection techniques. Comput Secur. 1993;12(4):405–18.
  17. Hochberg J, Jackson J, Stallings C, McClary JF, Dubois D, Ford J. NADIR: an automated system for detecting network intrusion and misuse. Comput Secur. 1993;12(3):235–48.
  18. Heberlein LT. A network security monitor. In: IEEE Symposium on Research in Security and Privacy; 1990.
  19. Snapp SR, Brentano J, Dias GV, Goan TL, Heberlein LT, Ho C, Levitt KN, Mukherjee B, Smaha SE, Grance T, Teal DM, Mansur D. DIDS (distributed intrusion detection system): motivation, architecture, and an early prototype. In: Internet besieged: countering cyberspace scofflaws; 1998. p. 211–27.
  20. Staniford-Chen S, Cheung S, Crawford R, Dilger M, Frank J, Hoagland J, Levitt K, Wee C, Yip R, Zerkle D. GrIDS – a graph-based intrusion detection system for large networks. In: 19th National Information Systems Security Conference; 1996.
  21. Frank Jou Y, Gong F, Sargor C, Wu SF, Rance CW. Architecture design of a scalable intrusion detection system for the emerging network infrastructure. North Carolina State University Technical Report CDRL A005; 1997.
  22. Porras PA, Neumann PG. EMERALD: event monitoring enabling responses to anomalous live disturbances. In: Nineteenth National Computer Security Conference; 1997.
  23. Paxson V. Bro: a system for detecting network intruders in real-time. In: 7th USENIX Security Symposium; 1998.
  24. Sebring MM, Shellhouse E, Hanna ME, Whitehurst RA. Expert systems in intrusion detection: a case study. In: Eleventh National Computer Security Conference; 1988.
  25. Tener WT. Discovery: an expert system in the commercial data security environment. In: IFIP Security Conference; 1986.
  26. Lee W. A data mining framework for building intrusion detection models. In: IEEE Symposium on Security and Privacy; 1999.
  27. Bertino E, Kamra A, Terzi E, Vakali A. Intrusion detection in RBAC-administered databases. In: ACSAC; 2005. p. 170–82.
  28. Lee VCS, Stankovic JA, Son SH. Intrusion detection in real-time database systems via time signatures. In: IEEE Real-Time Technology and Applications Symposium; 2000. p. 124–33.
  29. Wenhui S, Tan D. A novel intrusion detection system model for securing web-based database systems. In: COMPSAC; 2001. p. 249.
  30. Butun I, Morgera SD, Sankar R. A survey of intrusion detection systems in wireless sensor networks. IEEE Commun Surv Tutor. 2014;16(1):266–82.
  31. Krontiris I, Dimitriou T, Freiling FC. Towards intrusion detection in wireless sensor networks. In: Proceedings of the 13th European Wireless Conference; 2007.
  32. Sun B, Osborne L, Yang X, Guizani S. Intrusion detection techniques in mobile ad hoc and wireless sensor networks. IEEE Wirel Commun. 2007;14(5):56–63.
  33. Yazji S, Scheuermann P, Dick RP, Trajcevski G, Jin R. Efficient location aware intrusion detection to protect mobile devices. Pers Ubiquit Comput. 2014;18(1):143–62.
  34. Brahmkstri K, Thomas D, Sawant ST, Jadhav A, Kshirsagar DD. Ontology based multi-agent intrusion detection system for web service attacks using self learning. In: Networks and communications (NetCom2013). Springer International Publishing; 2014. p. 265–74.
  35. Cheung S, Dutertre B, Fong M, Lindqvist U, Skinner K, Valdes A. Using model-based intrusion detection for SCADA networks. In: Proceedings of the SCADA Security Scientific Symposium, vol. 46; 2007. p. 1–12.
  36. Berthier R, Sanders WH, Khurana H. Intrusion detection for advanced metering infrastructures: requirements and architectural directions. In: First IEEE International Conference on Smart Grid Communications (SmartGridComm). IEEE; 2010. p. 350–5.
  37. Gulisano V, Almgren M, Papatriantafilou M. METIS: a two-tier intrusion detection system for advanced metering infrastructures. In: Proceedings of the 5th International Conference on Future Energy Systems. ACM; 2014. p. 211–2.
  38. Vieira K, Schulter A, Westphall C, Westphall CM. Intrusion detection for grid and cloud computing. IT Prof. 2010;12(4):38–43.
  39. Moffie M, Kaeli D, Cohen A, Aslam J, Alshawabkeh M, Dy J, Azmandian F. VMM-based intrusion detection system. US Patent 8,719,936, issued May 6, 2014.
  40. Roschke S, Cheng F, Meinel C. Intrusion detection in the cloud. In: Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing (DASC'09). IEEE; 2009. p. 729–34.
  41. Mitchell R, Chen I-R. A survey of intrusion detection techniques for cyber-physical systems. ACM Comput Surv. 2014;46(4):55.
  42. Axelsson S. Research in intrusion detection systems: a survey. Technical Report 98-17 (revised 1999), Chalmers University of Technology; 1999.
  43. Lee W, Fan W. Mining system audit data: opportunities and challenges. SIGMOD Rec. 2001;30(4):35–44.
  44. Stolfo SJ, Lee W, Chan PK, Fan W, Eskin E. Data mining-based intrusion detectors: an overview of the Columbia IDS project. SIGMOD Rec. 2001;30(4):5–14.
  45. Kim GH, Spafford EH. The design and implementation of Tripwire: a file system integrity checker. Purdue Technical Report CSD-TR-93-071; 1993.
  46. Kim GH, Spafford EH. Experiences with Tripwire: using integrity checkers for intrusion detection. Purdue Technical Report CSD-TR-94-012; 1994.
  47. Bertino E, Leggieri T, Terzi E. Securing DBMS: characterizing and detecting query floods. In: ISC; 2004. p. 195–206.
  48. Huang Y, Fan W, Lee W, Yu P. Cross-feature analysis for detecting ad-hoc routing anomalies. In: Proceedings of the 23rd International Conference on Distributed Computing Systems; 2003.
  49. Kruegel C, Mutz D, Robertson W, Valeur F. Bayesian event classification for intrusion detection. In: ACSAC; 2003.
  50. Lane T, Brodley CE. Temporal sequence learning and data reduction for anomaly detection. ACM Trans Inf Syst Secur. 1999;2(3):295–331.
  51. Lee W, Xiang D. Information-theoretic measures for anomaly detection. In: IEEE Symposium on Security and Privacy; 2001. p. 130–43.
  52. Ramadas M, Ostermann S, Tjaden BC. Detecting anomalous network traffic with self-organizing maps. In: RAID; 2003. p. 36–54.
  53. Tsai C-F, Hsu Y-F, Lin C-Y, Lin W-Y. Intrusion detection by machine learning: a review. Expert Syst Appl. 2009;36(10):11994–12000.
  54. Sebring M, Shellhouse E, Hanna M, Whitehurst R. MIDAS: multics intrusion detection and alerting system. Technical Report, National Computer Security Center, SRI International, Ft. Meade; 1988. p. 7.
  55. Ilgun K, Kemmerer RA, Porras PA. State transition analysis: a rule-based intrusion detection approach. IEEE Trans Softw Eng. 1995;21(3):181–99.
  56. Wu SX, Banzhaf W. The use of computational intelligence in intrusion detection systems: a review. Appl Soft Comput. 2010;10(1):1–35.
  57. Zhou CV, Leckie C, Karunasekera S. A survey of coordinated attacks and collaborative intrusion detection. Comput Secur. 2010;29(1):124–40.
  58. Wood M, Erlinger MA. Intrusion detection message exchange requirements. IETF Network Working Group; 2007. http://www.ietf.org/rfc/rfc4765.txt.
  59. Dowell C, Ramstedt P. The ComputerWatch data reduction tool. In: IEEE Symposium on Research in Security and Privacy; 1989.
  60. Smaha SE. An intrusion detection system for the air force. In: Fourth Aerospace Computer Security Applications Conference; 1988.
  61. Wang Y, Wang X, Xie B, Wang D, Agrawal DP. Intrusion detection in homogeneous and heterogeneous wireless sensor networks. IEEE Trans Mob Comput. 2008;7(6).

Copyright information

© Springer Science+Business Media LLC 2017

Authors and Affiliations

  1. Proficiency Labs, Ashland, USA
  2. Computer Science Department, Boston University, Boston, USA
  3. IBM Almaden Research Center, San Jose, USA

Section editors and affiliations

  • Elena Ferrari
  1. DISTA, Università degli Studi dell’Insubria, Varese, Italy