1 Introduction

Undoubtedly, AI has brought enormous benefits and advantages to humanity in the last decade, and this trend will likely continue in the coming years as AI gradually becomes part of the digital services that we use in our daily lives. Many governments around the world are considering the deployment of AI systems and applications to help them carry out their activities and, more concretely, to facilitate the identification and prediction of crime.Footnote 1 Further, national security and intelligence agencies have also realized the potential of AI technologies to support and achieve national and public security objectives.

There are significant developments in AI technologies, such as the use of facial recognition in the criminal justice realm, drones, lethal autonomous weapons and self-driving vehicles, which, when improperly configured or managed without proper oversight mechanisms in place, have the potential to be used for disruptive purposes and to harm individuals’ rights and freedoms.

Currently, there is an ongoing discussion in international policy and legislative circles on the revision and improvement of the liability framework and threshold concerning AI systems and technologies,Footnote 2 although, due to the complexity of the topic and the different legal approaches around the world concerning civil liability, there will probably not be a consensus on a harmonized and uniform response, at least not in the near future.

Further, AI and machine learning have the potential to detect and respond to cyberattacks targeting critical infrastructure sectors, including water, energy and electricity supplies, and to support the correct management of cybersecurity solutions to help reduce and mitigate security risks.Footnote 3 However, many complex challenges remain, particularly for small and medium-sized enterprises, which continue to rely on limited budgets to improve their cybersecurity capabilities.

Due to the COVID-19 pandemic, a large part of the world’s connected population was confined. This situation made companies and individuals more dependent on the use of systems, technologies and applications based on AI to conduct their activities, including remote work, distance learning, online payments or simply having access to more entertainment options like streaming and video-on-demand services. Unfortunately, this situation also led organized criminal groups to reconsider and reorganize their criminal activities in order to specifically target a number of stakeholders, including international organizations,Footnote 4 research and health sector entities,Footnote 5 supply chain companiesFootnote 6 and individuals. We have witnessed that organized criminal groups have largely improved their CaaS (crime-as-a-service) capabilities and turned their activities into higher financial profits, with very little chance of being traced by law enforcement and brought to justice.

Through the use of AI technologies, cybercriminals have not only found a novel vehicle to leverage their unlawful activities, but also new opportunities to design and conduct attacks against governments, enterprises and individuals. Although there is not yet sufficient evidence that criminal groups have strong technical expertise in the management and manipulation of AI and machine learning systems for criminal purposes, it is true that these groups have realized the enormous potential of such systems for criminal and disruptive ends.Footnote 7 Further, organized criminal groups currently recruit and bring technically skilled hackers into their ranks to manipulate, exploit and abuse computer systems and to perpetrate attacks and conduct criminal activities 24/7 from practically anywhere in the world.Footnote 8

2 Current cybercrime trends

Current trends and statistics show that cybercriminals are relying increasingly on the IoT to write and distribute malware and to target ransomware attacks, which are largely enhanced through AI technologies.Footnote 9 This trend will likely continue, as it is expected that more than 2.5 million devices will be fully connected online in the next 5 years, including industrial devices and critical infrastructure operators, which will make companies and consumers more vulnerable to cyberattacks.Footnote 10

Furthermore, bias and discriminationFootnote 11 are also relevant and much-debated aspects of AI policy in many international and policy-making circles.Footnote 12 The widespread use of technologies based on facial recognition systemsFootnote 13 deserves further attention in the international policy arena: even though facial recognition may be very appealing for some governments seeking to enhance public security and safety and to prioritize national security activities, including countering terrorist activities, this technology also raises relevant and contentious issues concerning the protection of fundamental rights, including privacy and data protection under existing international treaties and conventions. These topics are currently being discussed in relevant international fora, including the Council of Europe, the European Commission, the European ParliamentFootnote 14 and the OECD.

There is an ongoing global trend to promote misinformation with the support of AI technologies known as ‘bots’.Footnote 15 Bots are mainly used to spread fake news and content throughout the internet and social networks, and they have the chilling effect of disinforming and misleading the population, particularly younger generations who cannot easily differentiate between legitimate sources of information and fake news. Further, the use of ‘bots’ has the potential to erode trust, question the credibility of the media and destabilize democratic and government institutions.

Although AI holds the prospect of enhancing the analysis of large amounts of data to curb the spread of misinformation on social networks,Footnote 16 humans still face the challenge of checking and verifying the credibility of sources, an activity that is usually conducted by content moderators of technology companies and media outlets without specific links to government spheres. This situation has led relevant policy-making institutions such as the European Commission to implement comprehensive and broad sets of actions to tackle the spread and impact of online misinformation.Footnote 17
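By way of illustration only, and not as a description of any system used by the institutions discussed here, automated support for content moderation is commonly framed as a text-classification task: a model trained on labelled examples ranks new posts by how likely they are to be misleading, and human reviewers then verify the flagged items against credible sources. The sketch below assumes a generic scikit-learn pipeline; the training examples, labels and names are hypothetical placeholders.

  # Illustrative sketch only: a generic text classifier of the kind that can
  # pre-filter content for human moderators. The labelled examples and all
  # names are hypothetical placeholders, not real moderation data.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Hypothetical labelled examples (1 = likely misinformation, 0 = legitimate).
  posts = [
      "Miracle cure suppressed by governments, share before it is deleted!",
      "Ministry of Health publishes the updated vaccination schedule.",
      "Secret document proves the election results were fabricated.",
      "Central bank raises interest rates by 0.5 percentage points.",
  ]
  labels = [1, 0, 1, 0]

  # TF-IDF features plus logistic regression: a deliberately simple baseline.
  model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
  model.fit(posts, labels)

  # The model only ranks new items for review; a human moderator still
  # verifies the source, as the paragraph above emphasises.
  new_post = ["Leaked memo shows journalists are paid to spread panic."]
  print(model.predict_proba(new_post)[0][1])  # probability the post needs review

A model of this kind does not replace the verification work described above; it merely narrows the queue of content that human moderators must examine.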

Another trend and technology widely used across many industries is deepfakes.Footnote 18

The abuse and misuse of deepfakes has become a major concern in national politicsFootnote 19 and among law enforcement circles.Footnote 20 Deepfakes have been used to impersonate politicians,Footnote 21 celebrities and CEOs of companies, and they may be used in combination with social engineering techniques and system automation to perpetrate fraudulent criminal activities and cyberattacks. The use of deepfake technologies for malicious purposes is expanding rapidly and is currently being exploited by cybercriminals on a global scale. For example, in 2019, cybercriminals used AI voice-generating software to impersonate the voice of the chief executive of an energy company based in the United Kingdom, obtaining $243,000 and distributing the funds through transfers to bank accounts located in Mexico and other countries.Footnote 22

Another relevant case occurred in January 2020, when criminals used deep-voice technology to simulate the voice of the director of a transnational company. Through various calls with the branch manager of a bank based in the United Arab Emirates, the criminals were able to steal $35 million, which was deposited into several bank accounts, by making the branch manager believe that the funds would be used for the acquisition of another company.Footnote 23
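Both incidents follow the same pattern: a convincing voice, an urgent request and a transfer that no second channel ever confirms. Purely as an illustration, and not as a measure taken by any of the companies involved, the minimal sketch below shows the kind of out-of-band verification rule often recommended against CEO-fraud schemes, under which high-value or voice-only payment requests are held until confirmed through an independent channel; all field names and the threshold are hypothetical.

  # Minimal sketch of an out-of-band verification rule against voice-spoofing
  # fraud. Field names and the threshold are hypothetical, for illustration only.
  from dataclasses import dataclass

  @dataclass
  class TransferRequest:
      amount_usd: float
      channel: str             # e.g. "phone_call", "signed_email", "banking_portal"
      beneficiary_known: bool  # is the beneficiary on an approved list?

  def requires_callback(req: TransferRequest, threshold_usd: float = 50_000) -> bool:
      """Hold the request for confirmation through an independent channel
      (e.g. a call-back to a number on file) before any funds move."""
      high_value = req.amount_usd >= threshold_usd
      voice_only = req.channel == "phone_call"
      return high_value and (voice_only or not req.beneficiary_known)

  # A request resembling the 2019 incident would be held for call-back.
  print(requires_callback(TransferRequest(
      amount_usd=243_000, channel="phone_call", beneficiary_known=False)))  # True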

The spoofing of voices and videos through deepfakes raises relevant and complex legal challenges for the investigation and prosecution of these crimes. First and foremost, many law enforcement authorities around the world do not yet have the full capabilities and trained experts needed to secure evidence across borders, and oftentimes the lack of legal frameworks, particularly procedural measures in criminal law to order the preservation of digital evidence and to investigate cybercrime, represents another major obstacle. Second, since most of these attacks are usually orchestrated by well-organized criminal groups located in different jurisdictions, there is a clear need for international cooperation, in particular close collaboration with global service providers to secure subscriber and traffic data, as well as more expedited investigations and law enforcement actions with other countries through the deployment of joint investigation teams, in order to trace and locate the suspects and follow the final destination of illicit funds.Footnote 24 Cross-border cybercrime investigations are complex, lengthy, and do not always result in convictions of the perpetrators.

Further, cyberattacks based on AI systems are a growing trend identified by the European Cybercrime Centre (EC3) of EUROPOL in its Internet Organised Crime Threat Assessment (IOCTA) 2020. According to the EC3, the risks concerning the use of AI for criminal purposes need to be well understood in order to protect society against malicious actors: “through AI, criminals may facilitate and improve their attacks by maximizing their opportunities for profit in a shorter period of time and create more innovative criminal business models, while reducing the possibility of being traced and identified by criminal justice authorities”.Footnote 25

The EC3 further recommends developing additional knowledge regarding the potential use of AI by criminals, with a view to better anticipating possible malicious and criminal activities facilitated by AI, as well as preventing, responding to and mitigating the effects of such attacks in a more proactive manner, in close cooperation with industry and academia.Footnote 26

3 Strategic partnerships

Due to the complexities that the misuse and abuse of AI systems for criminal purposes entail for law enforcement agencies, key stakeholders are trying to promote the development of strategic partnerships between law enforcement, international organizations and the private sector to counter the misuse and abuse of AI technologies for criminal purposes more effectively. For example, in November 2020, Trend Micro Research, the EC3 of EUROPOL and the Centre for Artificial Intelligence and Robotics of the UN Interregional Crime and Justice Research Institute (UNICRI) published the report Malicious Uses and Abuses of Artificial Intelligence.Footnote 27 This report contains an in-depth technical analysis of present and future malicious uses and abuses of AI and related technologies, drawing on the outcomes of a workshop organized by EUROPOL, Trend Micro and UNICRI in March 2020. The report highlights relevant technical findings and contains examples of AI capabilities divided into “malicious AI uses” and “malicious AI abuses”. The report also sets forth future scenarios in areas such as AI-supported ransomware and AI detection systems, and presents a case study on deepfakes that highlights major policy developments to counter them, as well as recommendations and considerations for further and future research.Footnote 28

Strategic initiatives and more partnerships like the one mentioned above are needed in the field of AI and cybercrime to ensure that relevant stakeholders, particularly law enforcement authorities and the judiciary, understand the complexities and dimensions of AI systems and start developing cooperation partnerships that, with the support of the private sector, may help to identify and locate perpetrators who misuse and abuse AI systems. The task is complex and needs to be achieved with the support of the technical and business community; otherwise, isolated investigative and law enforcement efforts against criminals making use of AI systems are unlikely to succeed.

AI policy has been at the core of these discussions only in recent years. At the regional level, the European Commission has recently published a regulation proposal known as the Digital Services Act,Footnote 29 though this proposal has only recently been opened for consultation and it will take a few years until it is finally approved.

On April 21, 2021, the European Commission published its long-awaited Regulation proposal for Artificial Intelligence Systems.Footnote 30 The proposal contains broad and strict rules and obligations that must be met before AI services can be placed on the European market, based on the assessment of different levels of risk. The Commission’s regulation proposal also contains express prohibitions of AI practices that may contravene EU values and violate the fundamental rights of citizens, and it establishes the European Artificial Intelligence Board (EAIB) as the official body that will supervise the application and enforcement of the regulation across the EU.Footnote 31

The development of a new international convention regulating relevant aspects of the impact and development of AI systems and their intersection with the protection of fundamental rights has been proposed by the Ad-Hoc Committee on Artificial Intelligence of the Council of Europe, better known as ‘CAHAI’. The work of CAHAI is analysed in section 5.1 of this paper.

4 International instruments to counter cybercrime

At the international level, there are a number of international and regional instruments that are used to investigate “cyber-dependent crime”, “cyber-enabled crime” and “computer-supported crime”.Footnote 32 This paper focuses only on the analysis of three major instruments of the Council of Europe, which are applicable to criminal conduct and activities concerning the use of computer and information systems, the exploitation and abuse of children, and violence against women committed through information and computer systems:

  • The Convention on Cybercrime, better known as ‘the Budapest Convention’;

  • The Convention on Protection of Children against Sexual Exploitation and Sexual Abuse, better known as ‘the Lanzarote Convention’; and

  • The Convention on preventing and combating violence against women and domestic violence, better known as ‘the Istanbul Convention’.

4.1 The Budapest Convention

The Council of Europe’s Budapest Convention on Cybercrime is the only international treaty that criminalizes conduct and crime typologies committed through computer and information systems. This instrument contains substantive and procedural provisions for the investigation, prosecution and adjudication of crimes committed through computer systems and information technologies.Footnote 33 The Budapest Convention is mainly used as a vehicle for international cooperation to investigate and prosecute cybercrime among the now 66 State Parties, which include many countries outside Europe.Footnote 34

The Cybercrime Convention Committee (T-CY), which is formed by State Parties, country observers invited to accede to the Budapest Convention and ad hoc participants, is the entity responsible, inter alia, for conducting assessments of the implementation of the provisions of the Budapest Convention, as well as for adopting opinions and recommendations regarding the interpretation and implementation of its main provisions.Footnote 35

During the Octopus Conference on Cooperation against Cybercrime held in November 2021, which marked the 20th anniversary of the Budapest Convention, the organizers announced that the Committee of Ministers of the Council of Europe had approved the adoption of the Second Additional Protocol to the Budapest Convention on enhanced cooperation and the disclosure of electronic evidence, as originally adopted by the 24th Plenary Session of the T-CY Committee in May 2021. The text of the Second Additional Protocol will be officially opened for signature among State Parties to the Budapest Convention in the summer of 2022.Footnote 36

The Second Additional Protocol regulates, inter alia, how information and electronic evidence - including subscriber information, traffic data and content data - may be ordered and preserved in criminal investigations among State Parties to the Budapest Convention. It provides a legal basis for the disclosure of domain name registration information from registries and registrars, and it regulates other key aspects of cross-border investigations, including mutual legal assistance procedures, direct cooperation with service providers, disclosure of data in emergency situations, safeguards for transborder access to data and joint investigation teams.Footnote 37

Although the T-CY Committee has not yet fully explored how the Budapest Convention and its first additional protocol on xenophobia and racism may be applicable in the context of technologies and systems based on AI, it is worth mentioning that the Budapest Convention was drafted with broad consideration of the principle of technological neutrality, precisely because the original drafters of this instrument anticipated how the cybercrime threat landscape would likely evolve and change in the future.Footnote 38

The Budapest Convention contains only a minimum of definitions; however, this instrument criminalizes a number of conducts and typifies many offenses concerning computer- and content-related crimes that may also be applicable to crimes committed through the use of AI systems.

During the 2018 Octopus Conference on Cooperation against Cybercrime, the Directorate General of Human Rights and Rule of Law of the Council of Europe convened a panel on AI and CybercrimeFootnote 39 where representatives of the CoE presented the organization’s early activities and findings on AI policy.Footnote 40 Although the panel presentations were largely descriptive of the technical terminology used in the field of AI at that time, some speakers highlighted and discussed some of the challenges that AI poses to law enforcement authorities, such as the criminalization of video and document forgery and how authorities could address the challenge of obtaining and preserving electronic evidence for use in court.Footnote 41

The 2021 Octopus Conference on Cooperation against Cybercrime, held fully online from 16 to 18 November 2021 due to the COVID-19 situation, included a panel on “Artificial Intelligence, cybercrime and electronic evidence”.Footnote 42 This panel discussed complex questions concerning criminal liability and the trustworthiness of evidence from AI systems in auditing and in driving automation and assistance, as well as other relevant aspects concerning the harms and threats of misinformation and disinformation produced by AI systems and effective responses, countermeasures and technical solutions from the private sector.

AI and cybercrime are relevant aspects that need further analysis and detailed discussion among the T-CY and State Parties to the Budapest Convention, particularly since there has been an increase in cases concerning the misuse of AI technologies by cybercriminals and their use as vehicles to launch cyberattacks and commit criminal offenses against individuals in cyberspace. Questions such as who will bear responsibility for conduct committed through the use of algorithms and machine learning, and what the liability threshold among State Parties should be, need further discussion and clarification, since the regulation of criminal liability differs significantly among the legal systems of many countries. The development of strategic partnerships in other regions of the world to counter attacks based on AI systems should also be explored.

4.2 The Lanzarote Convention

The Council of Europe Lanzarote Convention is an international treaty that contains substantive legal measures for the protection of children from sexual violence, including the sexual exploitation and abuse of children online.Footnote 43 This convention harmonizes minimum legal standards at the domestic level to combat crimes against children and provides measures for international cooperation to counter the sexual exploitation of children. The Lanzarote Convention requires the current 48 State Parties to offer a holistic response to sexual violence against children through the “4Ps approach”: Prevention, Protection, Prosecution and Promotion of national and international cooperation.Footnote 44 The monitoring of the implementation of the Lanzarote Convention is conducted by the Committee of the Parties, also known as the ‘Lanzarote Committee’. This committee is formed by State Parties and is primarily responsible for monitoring how State Parties put legislation, policies and countermeasures into practice, including by organizing capacity-building activities to exchange information and best practices concerning the implementation of the Lanzarote Convention across State Parties.Footnote 45

Like the T-CY, the ‘Lanzarote Committee’ has not yet fully explored how the substantive and procedural criminal law provisions of the Lanzarote Convention may apply in the context of the use of AI systems for criminal purposes. This situation needs to be further discussed among State Parties, not only to share and disseminate knowledge on current trends among the Parties to that treaty, but also to help identify illicit conduct and the abuse and exploitation of children through AI systems, and to analyse positive uses of AI technologies for the prevention of crimes concerning the protection of children online.

4.3 The Istanbul Convention

The Istanbul Convention is another Council of Europe treaty, the main purpose of which is to protect women against all forms of violence and to counter and eliminate violence against women, including domestic violence.Footnote 46 The Istanbul Convention consists of four main pillars: (i) prevention, (ii) protection of victims, (iii) prosecution of offenders, and (iv) implementation of comprehensive and coordinated policies to combat violence against women at all levels of government. The Istanbul Convention establishes an independent group of experts known as the GREVIO (Group of Experts on Action against Violence against Women and Domestic Violence). The GREVIO is responsible for monitoring the effective implementation of the provisions of the Istanbul Convention by the now 34 State Parties.Footnote 47

The Istanbul Convention does not contain specific provisions on violence committed through the use of information technologies; however, the GREVIO is currently analysing approaches to extend its application to illegal conduct committed through the use of computer and information systems within the national legal frameworks of State Parties.Footnote 48 During its twenty-fifth meeting on 20 October 2021, the GREVIO adopted a General Recommendation on the Digital Dimension of Violence against Women.Footnote 49 The Recommendation addresses, inter alia, the application of the general provisions of the Istanbul Convention in relation to conduct and crime typologies committed against women in cyberspace and proposes specific actions to take, based on the four pillars of the Istanbul Convention: prevention, protection, prosecution and coordinated policies.

To promote the scope of the adopted General Recommendation, the GREVIO held a conference in Strasbourg on 24 November 2021 that featured a keynote address by the Commissioner for Human Rights of the Council of Europe and presentations by the President of the GREVIO and the Chair of the Committee of the Parties to the Istanbul Convention, followed by a panel discussion with representatives of EU member states, the internet industry and civil society.Footnote 50 Among the relevant points made during the panel discussion were how the Recommendation may help to advance legal and policy developments, improve attention to victims of current forms of cyberviolence, further international cooperation and contribute to the general understanding of the scope of the provisions of the Istanbul Convention and other key instruments of the Council of Europe, including the Budapest Convention and the Lanzarote Convention, in relation to digital violence against women.Footnote 51

The Cybercrime Convention Committee (T-CY) issued a comprehensive report titled Mapping Study on Cyberviolence, with recommendations adopted by the T-CY on 9 July 2018.Footnote 52

The mapping study developed a working definition of “cyberviolence”Footnote 53 and described how the different forms of cyberviolence may be classified and criminalized under the Budapest, Lanzarote and Istanbul Conventions. According to the mapping study, “not all forms of violence are equally severe and not all of them necessarily require a criminal law solution but could be addressed with a combination of preventive, educational, protective and other measures”. The main conclusions of the T-CY in the Mapping Study on Cyberviolence were:

  (i) the Budapest Convention and its Additional Protocol on Racism and Xenophobia cover and address some types of cyberviolence;

  (ii) the procedural powers and the provisions on international cooperation of the Budapest Convention help to support the investigation of cyberviolence and the securing and preservation of digital evidence; and

  (iii) the Budapest, Istanbul and Lanzarote Conventions complement each other and should promote synergies. These synergies may include raising further awareness and organizing capacity-building activities among Parties to said treaties; encouraging Parties to the Lanzarote and Istanbul Conventions to introduce the procedural powers contained in the Budapest Convention (Arts. 16-21) into domestic law and to consider becoming Parties to the Budapest Convention to facilitate international cooperation on electronic evidence in relation to cyberviolence-related crimes; and encouraging Parties to the Budapest Convention to implement the provisions on psychological violence, stalking and sexual harassment of the Istanbul Convention, as well as the provisions on the sexual exploitation and abuse of children online of the Lanzarote Convention, among others.Footnote 54

Cyberviolence and crimes concerning the abuse and exploitation of children online require the strategic cooperation of different stakeholders. Other key institutions at the regional level, such as the European Commission, have also explored how AI systems may help to identify, categorise and remove child sexual abuse images and minimise the exposure of human investigators to distressing images, and have highlighted the importance of the role of internet hotlines in facilitating the reporting process.Footnote 55

5 Ongoing work of international organizations

5.1 Council of Europe CAHAI

The Ad-Hoc Committee on Artificial Intelligence of the Council of Europe (CAHAI)Footnote 56 was established by the Committee of Ministers during its 1353rd meeting on 11 September 2019.Footnote 57 The specific task of CAHAI is “to complete the feasibility study and produce the potential elements on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.”

The work of CAHAI is relevant because it establishes a multi-stakeholder group in which global experts may provide their views on the development of AI policies, put forward meaningful proposals to ensure the application of international treaties and technical standards on AI, and submit proposals for the creation of a future legal instrument that will regulate AI while ensuring the protection of the fundamental rights, rule of law and democracy principles contained in relevant Council of Europe instruments such as Convention 108+ and the Budapest, Lanzarote and Istanbul Conventions, among others.Footnote 58

The work of CAHAI will impact the 47 member states and country observers of the Council of Europe, particularly state institutions, including national parliamentarians and policy makers, who are responsible for the implementation of international treaties into their national legal frameworks. Therefore, the inclusion and participation of relevant stakeholders from different nations will play a decisive role in the future implementation of a global treaty on AI in the coming years.

5.2 European Parliament

The European Parliament (EP) is perhaps the most proactive legislative and policy-making institution worldwide in this field. The European Parliament has a Centre for Artificial Intelligence, known as C4AI, which was established in December 2019.Footnote 59 The EP has committees that analyse the impact of policy-related aspects of AI in many different areas, including cybersecurity, defence, predictive policing and criminal justice. The most active committee is the Special Committee on Artificial Intelligence in a Digital Age (AIDA Committee),Footnote 60 which has organized many hearings and workshops with experts and stakeholders on AI from different regions of the world to hear views and opinions on the Regulation proposal for Artificial Intelligence Systems.Footnote 61

According to the President of the AIDA Committee, “the use of AI in law enforcement is a political decision and not a technical one, our duty is to apply the political worldview to determine what are the allowed uses of AI and under which conditions”.Footnote 62

As a result of the existing dangers and risks posed by the use of AI systems across Europe, the European Parliament adopted a resolution on 6 October 2021 that calls for a permanent ban on AI systems which allow for the use of automated recognition of individuals by law enforcement in public spaces. Further, the resolution calls for a moratorium on the deployment of facial recognition systems for law enforcement purposes and a ban on predictive policing based on behavioural data and social scoring in order to ensure the protection of fundamental rights of European citizens.Footnote 63

The Committee on Civil Liberties, Justice and Home Affairs of the European Parliament has also conducted relevant work on AI and criminal justice. On February 20, 2020, said committee conducted a public hearing on “Artificial Intelligence in Criminal Law and its use by the Police and Judicial Authorities” where relevant opinions and recommendations of experts and international organizations were discussed and presented.Footnote 64

Further, the AIDA Committee of the European Parliament held a two-day public hearing with the AFET Committee on 1 and 4 March 2021. The first hearing was on “AI Diplomacy and Governance in a Global Setting: Toward Regulatory Convergence”, and the second on “AI, Cybersecurity and Defence”.Footnote 65 Many relevant aspects of AI policy were mentioned during the hearings, including support for a transatlantic dialogue and cooperation on AI, the development of ethical frameworks and standards, the development of a shared system of norms, respect for fundamental rights, diplomacy and capacity building, among others. Although the importance of AI for cybersecurity in the defence realm was mentioned, along with how AI might help to mitigate cyberattacks and protect critical infrastructure, there was no specific mention of how the current international treaties on cybercrime and national legal frameworks may coexist with a future treaty on AI to counter cybercrime more effectively.

The dialogue and engagement of the different committees of the European Parliament on AI policy are key for the future implementation of policies in the criminal justice area concerning the use and deployment of AI systems and applications. The European Parliament should continue to promote further dialogue and activities with other international organizations like the Council of Europe and the OECD, as well as with national parliamentarians around the world, to help them understand the dimensions and implications of creating regulations and policies on AI to specifically counter cybercrime.

5.3 The UN Interregional Crime and Justice Research Institute (UNICRI) Centre for Artificial Intelligence and Robotics

The Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI), a research arm of the United Nations, is very active in organizing workshops and producing information and reports to demystify the world of robotics and AI and to facilitate an in-depth understanding of the crimes and threats conducted through AI systems among law enforcement officers, policy makers, practitioners, academia and civil society. In 2019, UNICRI and INTERPOL drafted the report “Artificial Intelligence and Robotics for Law Enforcement”,Footnote 66 which draws upon the discussions of a workshop held in Singapore in July 2018. Among the main findings of UNICRI and INTERPOL’s report are:

“AI and Robotics are new concepts for law enforcement and there are expertise gaps that should be filled to avoid law enforcement falling behind.”

“Some countries have explored further than others and a variety of AI techniques are materializing according to different law enforcement authorities. There is, however, a need for greater international coordination on this issue.”

The mandate of the Centre for Artificial Intelligence and Robotics of UNICRI is quite broad. It covers policy-related aspects of AI in the field of criminal justice, including areas such as cybersecurity, autonomous weapons, self-driving vehicles and autonomous patrol systems. UNICRI organizes the Global Meeting on Artificial Intelligence for Law Enforcement every year, an event that discusses relevant developments in AI with experts and stakeholders from different sectors and countries to enhance and improve the capabilities of law enforcement authorities and the criminal justice system in the use and deployment of AI technologies.Footnote 67

The Centre for Artificial Intelligence and Robotics of UNICRI is currently working with a group of experts from INTERPOL, the European Commission and other relevant institutions and stakeholders on the development of a Toolkit for Responsible AI Innovation in Law Enforcement. The toolkit will provide practical guidance for law enforcement agencies around the world on the use of AI in a trustworthy, lawful and responsible manner. It addresses practical insights, use cases, principles, recommendations, best practices and resources that will help support law enforcement agencies in using AI technologies and applications.Footnote 68

6 Conclusion

The use of AI systems across different sectors is an ongoing trend, and this includes authorities of the criminal justice system, which have realized the benefits and advantages of using this technology. However, national law enforcement authorities involved in the investigation of cybercrime are not yet fully prepared to deal with the technical and legal dimensions of AI when it is used for disruptive or malicious purposes. Further, there is not yet sufficient evidence to determine whether law enforcement authorities around the world are well equipped and trained to gather cross-border evidence and to conduct national investigations where an AI system was involved in the commission or perpetration of illicit conduct.

Second, coordination and cooperation with service providers and companies that manage and operate AI systems and services is crucial to help determine their abuse and misuse by perpetrators. However, these tasks bring a number of technical and legal challenges, since most AI systems rely on an internet connection to function, and oftentimes subscriber and traffic data are needed to conduct an investigation. Therefore, global service providers will also have an important role to play in the identification and location of cybercriminals, a situation that requires well-coordinated efforts, measures and responses between law enforcement authorities and private sector entities, based on international treaties and national laws. The need for further strategic partnerships to counter cybercrime is more important than ever.

The future work of international organizations like UNICRI and the Council of Europe, through CAHAI and the T-CY Committee of the Budapest Convention, will be very relevant for policy makers and law enforcement authorities, providing guidance for the implementation of future national policies on AI. CAHAI may fill the gap in international fora regarding discussions on the use of AI specifically to counter cybercrime, building on the current standards of the Council of Europe such as the Budapest, Lanzarote and Istanbul Conventions, as well as on the emerging practices of member states to counter cyber-enabled crimes.

The creation of national taskforces on cybercrime (composed of law enforcement authorities, representatives of the judiciary, AI technology developers and global service providers) may serve as a relevant vehicle to coordinate responses and tackle illicit conduct involving the misuse and abuse of AI technologies. These taskforces may be established in the context of national strategies on AI and should be linked to the tasks of the criminal justice authorities that specifically counter cybercrime.