Introduction

This chapter provides up-to-date information about recent technological developments in artificial intelligence (AI), as well as insights on the relevance of these technologies for the intelligence communities (IC) and law enforcement agencies (LEAs). This work arrives at a delicate and crucial moment: the final approval and entry into force of the European Union (EU) AI Act, the first EU regulation on AI and AI-based technologies. The global interest of scientists, professionals, organizations, and policymakers in AI technologies is rising continuously as AI adoption becomes ubiquitous, being applied across a multitude of domains. AI is an umbrella term that embraces multiple technologies in the field of data analytics; a subset of them, namely machine learning (ML) and deep learning (DL) techniques, is attracting intense attention and being widely exploited. AI-based systems and applications used to improve and automate crime investigation and solving may demand high computation and storage capabilities in order to ensure efficient and rapid management of the enormous amounts of data involved in model training and testing. This is especially relevant where data enters the AI system as a continuous flow of streaming data that needs to be analyzed. Therefore, big data and cloud technologies used to create information systems, databases, and diverse associated services on top of them are closely tied to AI adoption.

The focus of this chapter is not placed on a detailed technological explanation of each AI technique, but rather on the description of the opportunities and usage scenarios of AI to facilitate and enhance security intelligence tasks. This work aims to be a source of inspiration and a reference for intelligence and security practitioners for further discussion on the potential opportunities and disadvantages of adopting AI in their daily activities.

The study descends from the research carried out in the first year of the NOTIONES project, which aims at providing intelligence and security practitioners with up-to-date information on technologies and research initiatives—specifically EC-funded projects—so that they can elaborate requirements and needs to be followed up by academic researchers and industrial technology providers. The technologies covered by the study include any kind of machine-based method, technique, mechanism, or system (equipment, platform, software, application) that constitutes, uses, or optimizes models to enable predictions, recommendations, classifications, and other tasks necessary for security intelligence activities. Such AI technologies are key for the efficiency and accuracy of intelligence tasks, and thus, the adoption of AI solutions is of paramount importance in the intelligence cycle [1].

The chapter is structured as follows. The first section presents the landscape of EU research projects on AI for civil security. The second section describes some relevant examples of AI-based solutions for intelligence and security practitioners as well as related datasets. The third section presents the main challenges posed by the adoption of AI solutions. Finally, several recommendations on AI-based systems and applications used for security intelligence are provided.

EC-Funded Research Projects on AI for Civil Security

In this section, the main initiatives of EU-funded research projects that support civil security, especially security and intelligence practitioners as well as LEAs, are reported. The list was obtained through desk research on the Community Research and Development Information Service (CORDIS). These initiatives include European research projects funded under the Horizon 2020 funding program, particularly under the secure societies framework [2]. Many current projects deal with the explainability, trustworthiness, and ethical challenges of AI. Most of them have developed or are developing AI solutions oriented to law enforcement and/or involve the direct collaboration of LEAs in these technologies. Below are some of the most relevant EU-funded research projects supporting LEA activities with regard to the use of AI technologies and the challenges of their adoption in intelligence scenarios and use cases.

  (a) Artificial Intelligence Data Analysis (AIDA). Closed in 2022, the AIDA project aimed to develop a framework for analyzing large volumes of data using AI technologies to improve the capabilities of LEAs to fight cybercrime and cyberterrorism [3]. Specifically, AIDA has developed a big data analysis and analytics framework equipped with a complete set of effective, efficient, and automated data mining and analytics solutions to deal with standardized investigative workflows, extensive content acquisition, information extraction and fusion, knowledge management, and enrichment through novel applications of big data processing, ML, AI, and predictive and visual analytics.

  (b) Deep AR Law Enforcement Ecosystem (DARLENE). The DARLENE project aims to offer European LEAs a proactive security solution that will enable them to sort through massive volumes of data to predict, anticipate, and prevent criminal activities [4]. To achieve this, it aims to combine augmented reality (AR) and AI techniques in order to improve LEAs’ decision-making and daily operations with regard to forensics and situational awareness.

  (c) Investigative, Immersive, and Interactive Collaboration Environment (INFINITY). The primary goals of INFINITY are to revolutionize data-driven investigations through the use of AI, ML, and big data analytics to increase the effectiveness of investigations, and to exploit modern innovations in virtual reality (VR), AR, and visual analytics in order to support a better intelligence cycle [5]. The INFINITY project will tackle the challenging task of dealing with enormous amounts of data in investigations of cybercrime, terrorism, and other hybrid threats. The project will build a collaborative platform for different LEAs relying on VR, AR, ML, and big data technologies. The INFINITY system for LEAs’ operations addresses the whole intelligence cycle, including the generation of the required reporting and the management of evidence admissible in court.

  (d) An Interoperable Multidomain CBRN System (NEST). The NEST project will develop systems to provide threat indications and warnings, as well as guidance for facility security, through appropriate information-sharing and analysis mechanisms [6]. Specifically, the aim of NEST is the creation of an Internet of Things (IoT) network of low-cost chemical, biological, radiological, and nuclear (CBRN) sensors in different physical infrastructures, and the use of AI for the detection of CBRN threats and pandemic viruses. All threats and dangers will be displayed with the help of AR. The CBRN detectors will be low-cost sensors integrated into a single detection unit placed in the infrastructure or carried by security staff. The detectors will send data to the IoT platform, which processes and merges data from internal and third-party services. Besides the use of AR to display hazards and CBRN threats, the NEST solution adopts AI for the generation of threat alerts and for decision-making in facility security.

  (e) Artificial Intelligence Roadmap for Policing and Law Enforcement (ALIGNER). The objective of the project is to bring together the main European actors in the field of AI applied to law enforcement and policing services [7]. The project will organize a series of workshops that gather stakeholders with different points of view in order to focus, prioritize, and establish a roadmap of the most beneficial actions and research areas for cooperation in the field of AI applied to law enforcement. Furthermore, the project will conduct a study on the policy and research needs as well as the AI capability needs of European LEAs. The project will also aid in the prevention of offensive AI by delivering a taxonomy of AI-powered crime. Finally, ALIGNER will assess and monitor AI technologies with potential for use by LEAs, together with an evaluation of their security, ethical, societal, and legal risks.

  (f) SusTainable Autonomy and Resilience for LEAs using AI against High-priority Threats (STARLIGHT). The STARLIGHT project aims to create a community that brings together LEAs, researchers, industry, and practitioners in the security ecosystem under a coordinated and strategic effort to bring AI into operational practices [8]. It is focused on improving the capacities and autonomy of LEAs in the use of AI tools. Two further objectives of the project are to protect LEAs’ own AI systems against attacks and to improve LEAs’ capacity to defend against attacks that use AI to be more effective, that is, against AI-powered crime and terrorist acts.

  (g) A European Positive Sum Approach Toward AI Tools in Support of Law Enforcement and Safeguarding Privacy and Fundamental Rights (popAI). popAI aims to address the concerns related to the use of AI-based technologies in the security domain [9]. To achieve this goal, popAI brings together security practitioners, AI scientists, ethics and privacy researchers, civil society organizations, as well as social sciences and humanities experts, aiming to boost trust in AI by increasing awareness and fostering social engagement, and by delivering a unified European view and set of recommendations. The project also envisages the creation of an ecosystem and the structural basis for a sustainable and inclusive European AI hub for LEAs.

It is worth noting that ALIGNER, STARLIGHT, and popAI are cluster projects that have been funded under the Horizon 2020 Artificial Intelligence calls (2020) and have approached the topic of AI-based technologies in the security domain from multiple perspectives (e.g., legal, ethical, and technological), facilitating knowledge sharing and nourishing joint activities.

AI Technologies and Tools for Intelligence and Security

This section reports on relevant examples of AI-based software tools and techniques that appear promising to support the data processing and analysis activities performed by intelligence and security practitioners. The technologies were selected based on desk research on the open web and selected databanks such as TheLens [10], an integrated search engine on scholarly works and patents with an export functionality. The research was focused on technologies and tools that have been developed or are being developed to serve particular intelligence needs, including both commercial and open-source solutions. Furthermore, some relevant datasets were identified, which can be used for the training of the AI models. In this chapter, we report on the selected case study scenarios and their related datasets, namely biometric recognition and video surveillance-based crime detection, violence detection, illegal trafficking detection, and crime prevention.

  (a) Biometric Recognition and Video Surveillance-Based Crime Detection. Some AI-based commercial tools for face recognition, fingerprint identification, and surveillance are (1) “Amazon Rekognition,” which can perform facial analysis and facial search and identify objects and scenes; it also offers face detection and analysis in videos, live or stored [11]. (2) “BioID,” a cloud-based online service providing biometric technology that includes an application programming interface (API) [12]. (3) “Biometric Identification Services” [13]. (4) “Defendry,” which can autodetect hundreds of different kinds of guns and weapons, and monitor for intruders such as banned former employees, expelled students, and more, using existing security cameras and/or Defendry EyesOn Cameras [14]. Some related AI-based datasets are “FFHQ,” which consists of around 70,000 high-quality PNG images of human faces [15]; “Labelled Faces,” which provides face photographs designed for studying the problem of unconstrained face recognition [16]; “Google Facial,” which contains face image triplets along with human annotations that specify which two faces in each triplet form the most similar pair in terms of facial expression [17]; “YouTube Faces DB,” which provides face videos designed for studying the problem of unconstrained face recognition in videos [18]; “FVC2000_DB4_B,” with several hundred reference fingerprints of varying quality [19]; and “SOCOFing,” a biometric fingerprint database designed for academic research purposes [20].
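Face-recognition systems of this kind typically reduce each face image to a numeric embedding produced by a deep network and then compare embeddings against an enrolled gallery. A minimal sketch of that comparison step in Python (the vectors, threshold value, and function names below are illustrative assumptions, not part of any listed product):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_person(embedding_a, embedding_b, threshold=0.8):
    """Decide whether two face embeddings likely belong to the same person."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy 3-dimensional embeddings; real systems use hundreds of dimensions
# derived from a trained deep network.
probe = [0.9, 0.1, 0.3]
enrolled = [0.85, 0.15, 0.28]
impostor = [0.1, 0.9, 0.2]

print(is_same_person(probe, enrolled))   # similar vectors -> True
print(is_same_person(probe, impostor))   # dissimilar vectors -> False
```

The threshold trades false accepts against false rejects and is normally calibrated on a validation set such as the face datasets listed above.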

  (b) Violence Detection. An interesting AI-based commercial tool for violence detection is “Jarvis,” which is a customizable Video Analytics Engine with state-of-the-art facial recognition technology and intelligent monitoring of objects, crowds (focused on violence detection), perimeters, and vehicles [21]. Examples of related datasets are “RWF-2000,” which is a large-scale video database for violence detection [22]; “airtlab,” which contains 350 video clips labeled “nonviolent” and “violent,” to be used to train and test algorithms for violence detection in videos [23]; and “XD-Violence,” which contains more than 4700 untrimmed videos with audio signals and weak labels [24].
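Video-based violence detectors commonly start from low-level motion cues before a trained classifier is applied. A naive sketch of such a cue, mean absolute frame difference, in Python (the toy frames, threshold, and function names are illustrative assumptions, not Jarvis internals):

```python
def motion_score(frame_a, frame_b):
    """Mean absolute pixel difference between two grayscale frames
    (frames are lists of rows of pixel intensities)."""
    diffs = [abs(p - q)
             for row_a, row_b in zip(frame_a, frame_b)
             for p, q in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def flag_high_motion(frames, threshold=30.0):
    """Return indices of frame transitions whose motion exceeds the threshold."""
    return [i for i in range(len(frames) - 1)
            if motion_score(frames[i], frames[i + 1]) > threshold]

# Two calm 2x2 frames followed by a sudden burst of change.
calm = [[10, 10], [10, 10]]
calm2 = [[12, 11], [10, 9]]
burst = [[200, 180], [190, 210]]

print(flag_high_motion([calm, calm2, burst]))  # -> [1]
```

In practice, such motion scores would only pre-filter candidate segments; the datasets above are what allow a learned model to distinguish violent from merely energetic scenes.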

  (c) Illegal Trafficking Detection. An AI-based tool funded by the European Union for firearms identification is “iARMS” (Illicit Arms Records and Tracing Management System). Police worldwide can record illicit firearms in the iARMS database and can search it against seized firearms to check whether they have been reported as lost, stolen, trafficked, or smuggled [25]. Some of the relevant datasets for illegal trafficking detection are “INTERPOL Open Databases,” which is the only database at the international level with certified police information on stolen and missing works of art [26]; and “Global Human Trafficking,” which contains information on almost 50,000 victims of human trafficking, including the reason, means of control, origin and destination, as well as other variables [27].
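At its core, this kind of check is a keyed lookup of a seized item's identifier against a register of reports. A minimal sketch in Python (the serial numbers, statuses, and function name below are invented for illustration and are not real iARMS data or interfaces):

```python
# Hypothetical register mapping firearm serial numbers to report status.
reported_firearms = {
    "SN-1001": "stolen",
    "SN-2002": "lost",
    "SN-3003": "trafficked",
}

def check_seized_firearm(serial):
    """Return the report status for a seized firearm's serial number,
    or None if it has not been reported."""
    return reported_firearms.get(serial)

print(check_seized_firearm("SN-1001"))  # -> stolen
print(check_seized_firearm("SN-9999"))  # -> None
```

The operational value of systems like iARMS comes less from the lookup itself than from the international scope and certification of the underlying records.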

  (d) Crime Prevention. Finally, an example of a tool for crime prevention is “PRECOBS”: it generates forecasts using the most up-to-date crime data, which can be used by police authorities for operational and preventive purposes. Control centers and operational units receive temporal and spatial indications for situation-oriented operational planning [28]. Examples of datasets for crime prevention are “FBI Crimes,” which provides crime and policing analysis within the United States [29]; and “London Crime,” which covers the number of criminal reports that occurred in London by month, area, and major/minor category from 2008 to 2016 [30]. It is noteworthy that some of the tools presented, although categorized under one of the use cases, may serve more than one use case. For example, tools used for face recognition may aid in different policing tasks such as the identification of violent criminals, the detection of identity fraud crimes, or the identification of criminal pedophiles.
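Place-based forecasting of the kind PRECOBS performs ultimately rests on aggregating recent incidents per spatial cell and flagging cells whose counts suggest elevated near-term risk. A minimal sketch in Python (the grid size, threshold, and coordinates are illustrative assumptions, not PRECOBS internals):

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=1.0, min_count=3):
    """Bucket incident (x, y) coordinates into grid cells and return
    the set of cells with at least min_count recent incidents."""
    counts = Counter((int(x // cell_size), int(y // cell_size))
                     for x, y in incidents)
    return {cell for cell, n in counts.items() if n >= min_count}

incidents = [(0.2, 0.3), (0.7, 0.1), (0.5, 0.9),  # three reports in cell (0, 0)
             (3.1, 2.2)]                           # a single report elsewhere
print(hotspot_cells(incidents))  # -> {(0, 0)}
```

Real predictive-policing systems weight incidents by recency and crime type rather than raw counts, but the temporal and spatial indications delivered to control centers derive from aggregations of this shape.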

Trustworthiness Challenges

As in other fields of application, ensuring the trustworthiness of AI technologies used for law enforcement is one of the major challenges. The high sophistication of these technologies raises uncertainty about whether they may turn against the very purpose they were created for, that is, aiding humans. Therefore, intense research is being carried out to ensure AI compliance with the proposed EU AI Act [31], which calls for trustworthy AI solutions for Europe on the basis of the seven main requirements of trustworthy AI defined by the EU’s HLEG on AI [32]. From a technical perspective, a trustworthy AI system shall ensure three main characteristics. (a) Explainability of AI: having means to guarantee the transparency and interpretability of the AI algorithm or model result is key to understanding whether the system is deviating from its originally designed objective. (b) Fairness of AI: ensuring that the model or algorithm does not fall into bias, discrimination, or stigmatization of humans. (c) Technical robustness of AI: robustness includes reliability as well as the security and safety of AI, that is, the system must treat data securely in storage, transit, and operation, keep personal data private, and protect humans from any intentional harm. Although this is extremely challenging, since AI is still an emerging technology growing quickly in capabilities and implementation platforms, law enforcement is required to take steps to ensure that all these aspects are respected in the AI systems adopted to facilitate their work.
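The fairness requirement can be made measurable with simple disparity metrics computed over a model's decisions. A minimal sketch of a demographic parity check in Python (the group names, decision vectors, and function names are toy assumptions for illustration):

```python
def positive_rate(decisions):
    """Fraction of decisions that are positive (1 = flagged by the model)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates across groups;
    0 means all groups are flagged at the same rate."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Model decisions split by a protected attribute (toy data).
decisions = {
    "group_a": [1, 0, 1, 0],  # 50% flagged
    "group_b": [1, 1, 1, 0],  # 75% flagged
}
print(demographic_parity_gap(decisions))  # -> 0.25
```

Checks of this kind are only one of several fairness notions (equalized odds and calibration are others), but routinely computing them on operational data is one concrete way for LEAs to monitor the bias risks described above.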

Conclusion

In recent years, AI technologies have become extremely popular thanks to the broad applicability of AI across multiple domains. The wide variety of research projects funded in recent years by the European Commission on AI solutions for security, and the growing number of commercial solutions available to practitioners, demonstrate that the advances in ML, DL, and all sorts of algorithms for processing vast amounts of data are transforming intelligence, security, cybersecurity, and law enforcement operations. Indeed, AI technologies may be adopted in almost all activities within the intelligence life cycle that require classification, identification, problem solving, decision-making, and prediction, among others.

Despite all the benefits that AI can bring to intelligence and security practices, it is necessary to study and continuously monitor the potential downsides of the use of AI. Civil society needs to make sure that security practitioners do not overstep the limits of fundamental rights when using AI-based systems in intelligence gathering and processing, crime prevention, crime detection, case investigations, and criminal prosecution. In particular, it is necessary that practitioners closely follow trustworthy AI practices and recommendations, as well as ensure the acquisition and use of tested AI systems trained with adequate, unbiased data. As AI standardization progresses and research on more secure and resilient AI systems advances, their adoption with full guarantees will become easier.