
Learning Symbolic User Models for Intrusion Detection: A Method and Initial Results

  • Ryszard S. Michalski
  • Kenneth A. Kaufman
  • Jaroslaw Pietrzykowski
  • Bartłomiej Śnieżyński
  • Janusz Wojtusiak
Part of the Advances in Soft Computing book series (AINSC, volume 35)

Abstract

This paper briefly describes the LUS-MT method for automatically learning user signatures (models of computer users) from datastreams capturing users’ interactions with computers. The signatures take the form of collections of multistate templates (MTs), each characterizing a pattern in the user’s behavior. By applying the models to new user activities, the system can detect an imposter or verify legitimate user activity. Advantages of the method include the high expressive power of the models (a single template can characterize a large number of different user behaviors) and the ease of their interpretation, which makes it possible for an expert to edit or enhance them. Initial results are very promising and show the potential of the method for user modeling.
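To make the matching idea concrete, the sketch below approximates a multistate template as a short sequence of per-event conditions, each mapping an attribute to a set of admissible values, and scores how much of a new activity stream a collection of templates explains. This is a minimal illustration under simplified assumptions, not the authors' LUS-MT/AQ21 implementation; the attribute names ("process", "hour") and the functions `window_matches` and `match_score` are hypothetical.

```python
# Minimal illustrative sketch (not the authors' LUS-MT/AQ21 implementation):
# a "multistate template" is approximated as a list of per-event conditions,
# each condition mapping an attribute to the set of values it may take.
# A window of consecutive events matches a template if every event satisfies
# the corresponding condition; a user model is a collection of such templates.

from typing import Dict, List, Set

Event = Dict[str, str]              # one record from the user-activity datastream
Condition = Dict[str, Set[str]]     # attribute -> admissible values
Template = List[Condition]          # one condition per consecutive event

def event_matches(event: Event, cond: Condition) -> bool:
    """True if the event's attribute values fall within the condition's value sets."""
    return all(event.get(attr) in values for attr, values in cond.items())

def window_matches(window: List[Event], template: Template) -> bool:
    """True if a window of consecutive events satisfies the whole template."""
    if len(window) < len(template):
        return False
    return all(event_matches(e, c) for e, c in zip(window, template))

def match_score(stream: List[Event], model: List[Template]) -> float:
    """Fraction of sliding windows explained by at least one template;
    a low score for a claimed user suggests a possible imposter."""
    if not model or not stream:
        return 0.0
    width = max(len(t) for t in model)
    windows = [stream[i:i + width] for i in range(len(stream) - width + 1)] or [stream]
    hits = sum(any(window_matches(w, t) for t in model) for w in windows)
    return hits / len(windows)

# Hypothetical example: a template saying the user tends to open a browser and
# then a mail client during morning hours.
model = [[{"process": {"firefox", "chrome"}, "hour": {"9", "10", "11"}},
          {"process": {"outlook", "thunderbird"}, "hour": {"9", "10", "11"}}]]
session = [{"process": "firefox", "hour": "9"}, {"process": "outlook", "hour": "10"}]
print(match_score(session, model))  # 1.0 -> behavior consistent with the model
```

In practice, deciding between "legitimate user" and "imposter" would require a threshold on such a score (or a comparison against scores from other users' models), which is the kind of policy choice the paper's experiments address.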

Keywords

Intrusion Detection, User Model, Anomaly Detection, Machine Learning Technique, Intrusion Detection System

Copyright information

© Springer 2006

Authors and Affiliations

  • Ryszard S. Michalski (1, 2)
  • Kenneth A. Kaufman (1)
  • Jaroslaw Pietrzykowski (1)
  • Bartłomiej Śnieżyński (1, 3)
  • Janusz Wojtusiak (1)

  1. Machine Learning and Inference Laboratory, George Mason University, Fairfax, USA
  2. Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
  3. AGH University of Science and Technology, Krakow, Poland
