Human-Machine Decision Support Systems for Insider Threat Detection

Part of the Data Analytics book series (DAANA)


Insider threats are widely recognised as among the most damaging attacks an organisation can experience. Insiders hold privileged access and knowledge, and already bear great responsibility for the security and operation of the organisation. Should an individual choose to exploit this privilege, perhaps through disgruntlement or external coercion by a competitor, the impact on the organisation can be severe. Many proposals apply machine learning and anomaly detection techniques, as a form of large-scale data analytics, to automate decisions about which insiders are acting in a suspicious or malicious manner. However, it is well recognised that this poses many challenges: for example, how do we capture an accurate representation of normality against which to assess insiders, within a dynamic and ever-changing organisation? More recently, there has been interest in incorporating visual analytics with machine-based approaches, to alleviate the data analytics challenges of anomaly detection and to support human reasoning through visual interactive interfaces. Furthermore, by combining visual analytics with active machine learning, analysts can impart their domain expertise back to the system, iteratively improving the machine-based decisions in line with analyst preferences. With this combined human-machine approach to decision-making about potential threats, the system can begin to capture the human rationale for the decision process more accurately, and to reduce the false positives flagged by the system. In this work, I reflect on the challenges of insider threat detection, and consider how human-machine decision support systems can offer solutions.
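The feedback loop described above, where analyst verdicts iteratively adjust the machine's scoring, can be sketched in miniature. This is an illustrative toy, not the chapter's actual system: the feature names (`logins`, `files`), the weighted-deviation score, and the proportional down-weighting rule are all assumptions made for the example.

```python
# Illustrative sketch of human-in-the-loop anomaly scoring (assumed design,
# not the chapter's actual system). A user's activity is scored by weighted
# deviation from a baseline; an analyst marking an alert as a false positive
# down-weights the features that drove that alert.

class FeedbackAnomalyScorer:
    def __init__(self, feature_names, learning_rate=0.5):
        self.features = feature_names
        self.weights = {f: 1.0 for f in feature_names}  # all features equal at first
        self.lr = learning_rate

    def fit_baseline(self, observations):
        # observations: list of {feature: value} dicts representing normal activity
        n = len(observations)
        self.mean = {f: sum(o[f] for o in observations) / n for f in self.features}

    def score(self, obs):
        # Weighted absolute deviation from the baseline mean
        return sum(self.weights[f] * abs(obs[f] - self.mean[f]) for f in self.features)

    def feedback(self, obs, is_true_threat):
        # Active-learning step: on a false positive, shrink each feature's
        # weight in proportion to its share of the alert's score.
        if is_true_threat:
            return
        contrib = {f: self.weights[f] * abs(obs[f] - self.mean[f]) for f in self.features}
        total = sum(contrib.values()) or 1.0
        for f in self.features:
            self.weights[f] *= 1 - self.lr * contrib[f] / total


# Usage: an unusual burst of file access is flagged, but the analyst judges
# it benign; the next identical observation scores lower.
baseline = [{"logins": 5, "files": 20},
            {"logins": 6, "files": 18},
            {"logins": 4, "files": 22}]
scorer = FeedbackAnomalyScorer(["logins", "files"])
scorer.fit_baseline(baseline)

obs = {"logins": 5, "files": 60}
before = scorer.score(obs)                    # 40.0: alert raised
scorer.feedback(obs, is_true_threat=False)    # analyst says benign
after = scorer.score(obs)                     # 20.0: system absorbed the verdict
```

The key property is that the analyst never edits the model directly; their accept/reject decision alone reshapes the scoring, which is the essence of the active-learning interaction the chapter discusses.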


Insider Threat Detection · Anomaly Detection · Machine-based Approach · Machine Active Learning · Linguistic Inquiry Word Count (LIWC)



Many thanks to my colleagues from Oxford Cyber Security, Dr. Ioannis Agrafiotis, Dr. Jassim Happa, Dr. Jason Nurse, Dr. Oliver Buckley (now with Cranfield University), Professor Michael Goldsmith, and Professor Sadie Creese, with whom my early work on insider threat detection was carried out.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
