Abstract
Cyber-physical systems are taking on a permanent role in industries such as oil and gas and mining. These systems are expected to perform increasingly autonomous tasks in complex settings, removing human operators from remote and potentially hazardous environments. High autonomy requires more extensive use of artificial intelligence methods, such as anomaly detection, to identify unusual occurrences in the monitored environment. The absence of data characterizing potentially hazardous events leads to disruptive noise in the form of false alarms, a common anomaly detection issue in hazard identification applications. Conversely, disregarding the false alarms can have the opposite effect, causing the loss of early indications of hazardous occurrences. Existing research addresses this trade-off either by simulating and extrapolating under-represented data to expand the information on hazards and semi-supervise the methods, or by introducing thresholds and rule-based methods to balance noise against meaningful information; both approaches demand intensive computing resources. This research proposes a novel Warning Identification Framework that evaluates risk analysis objectives and applies them to discern between true and false warnings identified by anomaly detection. We demonstrate the results by analyzing three seismic hazard assessment methods for identifying seismic tremors and comparing their outcomes to anomalies found using an unsupervised anomaly detection method. The demonstrated approach shows great potential for enhancing the reliability and transparency of anomaly detection outcomes and, thus, supporting the operational decision-making process of a cyber-physical system.
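The two-stage idea described above, unsupervised anomaly detection followed by risk-informed filtering of the resulting warnings, can be illustrated with a minimal sketch. Note that the z-score detector, the 50-unit energy rule, and the synthetic signal below are hypothetical stand-ins for illustration only, not the paper's actual framework or data:

```python
import statistics

def flag_anomalies(readings, z_thresh=3.0):
    """Unsupervised step: flag readings that deviate strongly from the sample mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, x in enumerate(readings)
            if stdev > 0 and abs(x - mean) / stdev > z_thresh]

def classify_warnings(readings, anomaly_idx, is_hazard=lambda x: x >= 50.0):
    """Risk-informed step: a hazard-assessment rule separates true warnings
    from false alarms among the flagged anomalies."""
    true_warnings = [i for i in anomaly_idx if is_hazard(readings[i])]
    false_alarms = [i for i in anomaly_idx if not is_hazard(readings[i])]
    return true_warnings, false_alarms

# Mostly quiet signal with two spikes: one above and one below the rule threshold.
signal = [1.0] * 40 + [80.0] + [1.0] * 40 + [30.0] + [1.0] * 40
anomalies = flag_anomalies(signal)                     # both spikes are flagged
true_w, false_a = classify_warnings(signal, anomalies)  # only one is a true warning
```

The point of the sketch is that the detector alone cannot distinguish the two spikes; only the downstream risk criterion separates the meaningful warning from the false alarm.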
Data Availability
The datasets generated and/or analysed during the current study are available in the University of California (UC) Irvine Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/seismic-bumps.
Change history
20 August 2023
Missing Open Access funding information has been added in the Funding Note.
Acknowledgements
This research is a part of Better Resource Utilization in the 21st Century (BRU21), the Norwegian University of Science and Technology (NTNU) Research and Innovation Program on Digital and Automation Solutions for the Oil and Gas Industry (www.ntnu.edu/bru21), and is supported by Equinor.
Funding
Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital). This research is supported by the Better Resource Utilization in the 21st Century (BRU21) Research and Innovation Program at the Norwegian University of Science and Technology.
Author information
Contributions
All authors contributed to the research conception. Rialda Spahic performed material preparation, data gathering, analysis, and manuscript writing. Mary Ann Lundteigen performed writing reviews and supervision of all prior drafts of the manuscript. Vidar Hepsø contributed to the concept visualisation of the research. All authors reviewed and commented on prior manuscript versions. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval
Not applicable.
Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Conflicts of Interest
The authors have no financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Spahic, R., Hepsø, V. & Lundteigen, M.A. A Novel Warning Identification Framework for Risk-Informed Anomaly Detection. J Intell Robot Syst 108, 17 (2023). https://doi.org/10.1007/s10846-023-01887-2