Administrative due process when using automated decision-making in public administration: some notes from a Finnish perspective

Abstract

Various due process provisions designed for use by civil servants in administrative decision-making may become redundant when automated decision-making is taken into use in public administration. Problems with mechanisms of good government, with responsibility and liability for automated decisions and with the rule of law require the attention of the law-maker in adapting legal provisions to this new form of decision-making. Although the General Data Protection Regulation of the European Union is important in acknowledging automated decision-making, most of the legal safeguards within administrative due process have to be provided by the national law-maker. It is suggested that all countries need to review their rules of administrative due process with a view to bringing them up to date with the requirements of automated decision-making. In whichever way the legislation is framed, two issues are key. First, the persons who develop the algorithm and the code, as well as the persons who run or deal with the software within public authorities, must be aware of the preventive safeguards of legality in the context of automated decision-making, not only of the reactive safeguards constituted by the complaint procedures. Second, legal mechanisms must exist under which these persons can be held accountable and liable for decisions produced by automated decision-making. It is also argued that only rule-based systems of automated decision-making are compatible with the rule of law and that there is a general interest in preventing a development towards a rule of algorithm.

Introduction

The use of automated decision-making (hereinafter ADM) is on the increase not only within the private sector (banks, insurance companies, etc.), but also within the public sector, where it may be considered suitable for various decision-making processes within public administration. In that respect, the use of ADM is linked to a paradigmatic change in the method of production taking place in society, from the early agriculture-based rule of king via the recent industrialism-based rule of law to the incoming digitalization-based rule of algorithm (Suksi 2017, 285–294). When ADM is used in the service of public administration, the objective is to produce a decision that involves the exercise of public powers in a manner that defines, for an individual or for a private legal entity, a particular right, duty or benefit on the basis of material legislation.

In particular, expectations of speedy decision-making have an impact on areas of public administration where so-called mass decisions are made (taxation, social benefits, etc.) and where ADM can be used to perform tasks of an uncomplicated nature. Legislation may, indeed, contain provisions that underline the need for fast decision-making, which is the case, for instance, in Finland: according to Section 21(1) of the Constitution of Finland (731/1999), everyone has the right to have his or her case dealt with without undue delay. The provision is repeated in Section 23(1) of the Administration Act (434/2003), according to which an administrative matter must be dealt with without undue delay, supported by a provision in Section 14(1) of the Act on the Civil Servants of the State (750/1994), according to which a civil servant of the state must perform his or her tasks without delay. For these structural reasons alone, and to support the use of ADM in public administration, it is important to analyze the administrative due process preconditions that ADM either is or should be placed under, not only in Finland but also in other countries.

However, few substantive requirements are currently imposed upon ADM. At the European level, Art. 22 of the General Data Protection Regulation of the European Union (2016/679; the GDPR) creates, as of 25 May 2018, a right for an individual to opt out from ADM as long as the ADM procedure is not regulated in national or European law so as to make it compulsory for the individual. In addition, Articles 13(2)(f), 14(2)(g) and 15(1)(h) of the GDPR create a right for the individual to know the logic of the ADM. However, this EU regulation is relatively narrowly confined to the area of data protection; whilst important in itself, it leaves a large part of ADM processes to be regulated in other law, mainly at the national level. National ADM rules vary from non-existent to very general and in some cases to specific rules on particular ADM systems (Malgieri 2019). In Sweden, the new Administration Act (2017:900), in force since 1 July 2018, contains a very open provision in Section 28(1), according to which an administrative decision may be made by an individual civil servant alone, or jointly by several civil servants, or by way of automated procedure (Suksi 2018a). In Finland, it appears that old provisions on decision-making might apply, as supplemented by the Act on Electronic Communication in the Activities of Public Authorities (13/2003), which, inter alia, makes it possible to sign decisions electronically, although it does not contain substantive rules about decision-making by ADM (Suksi 2018b). The assumption appears to be that the existing legislation is technology neutral, but at least Section 118 of the Constitution of Finland and Section 91 of the Local Government Act (410/2015) proceed from the premise of human beings as decision-makers.

This article deals with the issue of fully automated decision-making and does not explicitly consider the use of ADM as decision-support when civil servants are making decisions. The discussion is here carried out largely from a Finnish perspective, with some limited observations concerning Sweden, Denmark and other countries. However, the issues relating to the use of ADM in public administration are of a general nature. It is therefore hoped that readers may relate these notes to corresponding phenomena in their own jurisdictions.

Issues of a general nature presented from the point of view of Finnish law

A central issue is how to consistently maintain the rule of law when automated decisions are made by public authorities: are there technological solutions on the spectrum from rule-based ADM to machine-learning ADM that should be excluded as options for software creation? At this juncture, it is very difficult to know what actually happens inside at least some automated decision-making processes in terms of the substantive requirements of administrative due process. Certainly, the right created by the GDPR to know the logic of the ADM for the purposes of data protection is important (although the reach of the term “logic” is not yet known: is it a general description of the ADM system or complete publication of the algorithm and the code?). Here, the reference to “logic” brings to the fore the right of an individual to receive an explanation of the reasons behind the actual decision made by means of ADM (the so-called local explanation, mentioned also in the preambular para. 71 of the GDPR, but regulated in the procedural law of the Member States). This would seem clearly to lean towards the opening up of the algorithm, although the precise reach of the term “logic” remains uncertain.
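To make the notion of a “local explanation” concrete, the following minimal sketch may be helpful. It is purely hypothetical and does not represent any authority's actual system; the rule, the income limit and all names are illustrative assumptions. The point is that a rule-based ADM step can record which codified rule produced the outcome, so that the reasons behind an individual decision can be communicated to the person concerned:

```python
# Hypothetical sketch of one rule-based ADM step. The statutory rule and
# the figure 1500 are invented for illustration only.

def decide_benefit(income: int, income_limit: int = 1500) -> dict:
    """Apply one codified rule and return both the outcome and its reasons."""
    if income <= income_limit:
        return {"granted": True,
                "reasons": [f"income {income} does not exceed the statutory limit {income_limit}"]}
    return {"granted": False,
            "reasons": [f"income {income} exceeds the statutory limit {income_limit}"]}

decision = decide_benefit(1200)
print(decision["granted"])   # True
print(decision["reasons"])   # the local explanation for this decision
```

In a rule-based system of this kind, the reasons for the decision fall out of the rules themselves; in a machine-learning system, no comparable rule-by-rule trace exists, which is one way of framing the rule-of-law concerns discussed in this article.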

Another major issue appears to be that the use of ADM renders redundant a considerable proportion of the procedural rules that national law has created for decision-making by a human being. The decision to use ADM, often made by a public authority without any backing from an Act of Parliament, thus in effect sets aside legislation, such as provisions concerning good government, designed under the assumption that the decision-maker is a human being. In essence, this means that the internal decision by a public authority to start to use ADM is almost of a legislative nature. Yet at the same time, there are few rules in current law that require anything of ADM systems in terms of good governance. To what extent can ADM, producing consequences of this kind, be used in (Finnish) public administration? Summarizing the above issues, it can be asked whether ADM should (and could) be made to comply with existing law, or whether existing law should be amended to accommodate ADM. As the suggestions in Sect. 7 below indicate, the former is preferred.

By way of example, it can be seen that certain safeguards required by Section 21 of the Constitution of Finland, as established in the Administration Act, are rendered obsolete by the use of ADM, thereby weakening the system of preventive safeguards ex ante in administrative proceedings. What impact does the use of ADM have on preventive legal safeguards in administrative due process? Further, the use of ADM not only impacts on preventive safeguards and preventive legal protection of the individual in relation to administrative decision-making, but also weakens the reactive safeguards ex post, that is, legal guarantees after the administrative decision has been made. For instance, the legal responsibility in criminal law and in tort law of a civil servant for decisions involving public powers, as established in Section 118 of the Constitution on official accountability, loses its addressee ratione personae when a public authority uses an ADM system. What, then, is the impact of the use of ADM on the liability of civil servants?

As lacunae of this kind emerge when ADM is taken into use, the question arises as to what kinds of rules should be introduced concerning ADM in order to compensate for the missing legal protections, alluded to also in Art. 22(2)(b) of the GDPR: legislation should lay down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests. What rules should be inserted in an Act or Acts of Parliament to remedy, in particular, the absent preventive legal protections in ADM environments? The overall question following from the above issues links to the paradigmatic change that society is undergoing, from a rule of law to—potentially—a rule of algorithm: when does automated decision-making in public administration leave the realm of the rule of law and turn into a rule of algorithm, that is, where should the red line be drawn for applications of artificial intelligence (AI) within public administration? The discussion here in the main presumes weak AI in the form of robotic process automation (RPA) (Ranerup and Zinner Henriksen 2019, 2), but recognizes that the development of stronger forms of AI based on machine-learning is a future potential and creates particular challenges for the rule of law.

Initiation of ADM regulation

European and Nordic perspectives

The legal development of general regulation of ADM has commenced at different points in time in different countries during the past several decades (Malgieri 2019, 2–25), but it appears that the existing legal regulation at the national level is normally less than comprehensive (e.g. concerning Denmark, see Motzfeldt and Taheri Abkenar 2019; examples of full ADM in Denmark in the areas of taxation, study benefits, retirement pensions of public employees at 31–32). However, some examples exist of exceptions to the lack of comprehensive rules, such as France, where requirements concerning ADM have existed for a long time in Act no. 78-17 of 6 January 1978 on information technology, data files and civil liberties and in the Code of the Relationship Between the Public and the Administration. When the former was brought into line with the GDPR by means of amendments, the Conseil constitutionnel (CC, décision no 2018-765 DC du 12 juin 2018, para. 72) found that the legislator had established suitable safeguards for the protection of the rights and liberties of individuals subjected to ADM based exclusively on an algorithm. As a consequence, the impugned provision was found compliant with the French constitution, more specifically with Art. 16 of the Declaration of 1789 and Art. 21 of the Constitution.

According to the CC, an algorithm which is the basis of an individual administrative decision is subject to three conditions: 1) an individual administrative decision must explicitly mention that it has been adopted on the basis of an algorithm, and the main characteristics of its implementation must be communicated to the person in question upon request; 2) the individual administrative decision must be subject to administrative recourse, and the administrative decision, in the event of a dispute, must be reviewed by a judge, who may require the administration to disclose the characteristics of the algorithm; 3) the use of an algorithm alone is excluded if the data processing relates to any of the sensitive data mentioned in Paragraph I of Article 8 of the Law of 6 January 1978. Meanwhile, ADM has been taken into use in the public administrations of many countries, often without a legal basis and without comprehensive rules safeguarding administrative due process. The former data protection directive of the EU (95/46/EC), adopted in 1995 but now repealed, also contributed to awareness about ADM.

The entering into force of the GDPR has increased the attention given to ADM because of the provision in Art. 22, which provides a rule for ADM in the area of data protection: the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. This right to opt out from ADM or to trigger human intervention does not apply if the decision is, inter alia, authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests. This data protection perspective, applicable in principle both to private and public ADM solutions, is obviously a narrow one in the entire context of administrative due process of public authorities (although it is broad at least for private entities using ADM; see below), but it constitutes a good starting point for more comprehensive regulation in the area.

Art. 22 of the GDPR is often understood as a general prohibition of ADM, and if understood as such with regard to ADM by public authorities, it would have dramatic consequences in Member States of the EU. This interpretation appears to arise from Chapter IV of the Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, adopted by the Working Party on the Protection of Individuals with regard to the Processing of Personal Data (last revised and adopted on 6 February 2018; endorsed by the European Data Protection Board on 25 May 2018). This is an understanding that has caused some controversy (Bodea et al. 2018, 40, fn. 60), because up to half of the Member States do not consider the provision to be a prohibition.

According to Chapter IV (p. 19) of the Guidelines, Article 22(1) establishes a general prohibition of decision-making based solely on automated processing. Apparently, however, the interpretation of Art. 22 as a prohibition of all ADM is subject to some hesitation, because the Guidelines actually advance the prohibition against the background of a (plausible) opposite interpretation: “Interpreting Article 22 as a prohibition rather than a right to be invoked means that individuals are automatically protected from the potential effects this type of processing may have.” Because the legal basis of the GDPR is Art. 16 of the Treaty on the Functioning of the European Union, which creates an individual right to the protection of personal data, an interpretation of Art. 22(1) as a prohibition, if applied as such to the administrative agencies of Member States, would, under the principle of conferral of competence in Art. 5(2) of the Treaty on European Union, interfere with the competence of Member States and also disturb their institutional autonomy (for legal orders where a distinction between public and private entities is made, such as France and the Netherlands, see Malgieri 2019, 11, 13).

The “minimum” interpretation of the impact of Art. 22 in public administration as an individual right to opt out from ADM that is not based on provisions in the law would not, against the background of Art. 16(2) TFEU, apply where administrative agencies of the Member States are carrying out activities which fall within the scope of Union law. Therefore, Art. 22 applies, for instance, within national customs authorities, because material customs law is an EU competence throughout. Because Art. 6 of the EU Customs Code requires the use of ADM in customs decisions, individuals cannot opt out from ADM in that context. The situation is different in areas typically under national competence, such as social benefits and taxation, where the EU is not competent to act; in those contexts, Art. 22 of the GDPR creates an individual right to opt out from ADM if no national law exists that authorizes the use of ADM. If states wish to use ADM and to exclude the right to opt out, national legislation on the use of ADM is therefore necessary. Such national law could take on at least two different forms: either a case-by-case legal basis for each context in which the use of ADM is authorized, or more comprehensive system-level regulation in legislation (Wiese Schartum 2018, 400; for recent EU plans and strategies in the area of AI and digitalization, see a white paper, two reports and a strategy by the European Commission 2020a, b, c, d).

According to Danish interpretation, Art. 22 of the GDPR, even if understood as a prohibition (as may be the case in Denmark), is likely to have little impact on the use of ADM in the public sector, because a legal basis for such use would normally exist, as well as suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests (Motzfeldt and Taheri Abkenar 2019, 297). However, for instance, Sections 38a, 38b and 43a of the Danish Act on Study Benefits of the State (Lov om statens uddannelsesstøtte, LBK nr 1037 af 30/08/2017) do not seem to specify ADM for study benefit decisions, although the provisions establish that the person shall file applications and receive decisions in electronic form, and the Minister of Education is authorized to set rules about electronic communication between the applicant and public authorities and about the use of electronic signatures. It is thus not clear, on the basis of the Act, that ADM will be used for the actual decision-making in individual cases. Obviously, electronic handling of information within the administration would qualify as digital administration even in a situation where ADM is not used (on digital administration, see Motzfeldt and Taheri Abkenar 2019).

Developments in Finland

As noted above, a very general provision permitting the use of ADM was inserted into the Administration Act in Sweden, while the situation in the other Nordic countries mainly relies on existing law on administrative procedure. By and large, this is the case in Finland, too, but in 2018–2019, some first steps were taken in the development of general rules about ADM. The Constitutional Committee of the Parliament of Finland required, in two of its Opinions (68/2018 and 7/2019), that the Government carry out an evaluation of the regulatory situation and of how the law meets the requirements of the rule of law, good government and the liability of civil servants. The legislation proposed by the Government in two separate Bills, which would have resulted in piecemeal regulation of ADM processes for use by separate authorities, was thus returned to the Government. The message sent was that general regulation of ADM is probably needed, and most likely particular rules, too, specifying ADM procedures within certain branches of the administration. This preference would tend to lead to systemic regulation of ADM on a comprehensive basis instead of regulation of decision-making processes on a case-by-case basis (Wiese Schartum 2018, 400). Each decision-making process where ADM is to be used would additionally have to be authorized by inserting a legal basis into national law.

The Opinions of the Constitutional Committee also raise the issue of transparency, against the background of Section 12(2) of the Constitution of Finland, which establishes the right of access to documents. The relevant Government Bills suggested that the algorithm that has resulted in a final decision of the public authority by means of ADM should be public. In addition, the registered individual would have the right to receive a separate explanation about the algorithm that has been used in the ADM process for an individual decision in her or his case. The Constitutional Committee repeated its previous observations about the publication of the algorithm and emphasized that a correct publication of the algorithm in a form that is understandable for individual persons requires that the law contains an exact and well-delineated definition of ADM by means of an algorithm. The Committee was of the opinion that the proposed provisions about the publication of algorithms should be made more precise and clear before the proposed law could be adopted in the ordinary legislative procedure. It can be added that this would also be important in the event that the individual wants to use his or her right to appeal, because such provisions in law would also enable the courts to understand the matter and thus perform a review of the case.

In addition to the Constitutional Committee, the Deputy Ombudsman issued three decisions in November 2019 on the basis of a review of ADM processes that are in use at the Finnish Taxation Authority (EOAK 3379/2018, EOAK 2898/2018, EOAK 2216/2018). In these decisions, she concluded that the ADM of the Taxation Authority, established by means of internal decisions of the Taxation Authority of a non-transparent nature, did not comply with several sections of the Constitution. The problematic points were, inter alia, Section 2(3) on the rule of law, Section 21 on good government, Section 80 on the requirement of provisions in an Act for rights and duties of individuals, Section 81(1) on legal safeguards within taxation, and Section 118 on official accountability. She also made critical observations ranging from insufficient justification of taxation decisions and insufficient consideration of general principles of administrative law, including the doctrine of legitimate expectation, to insufficient compliance with the duty to provide advice and further to insufficient legal basis. For instance, she found that it was unclear how information that taxpayers had submitted on the basis of requests of complementary information, sent by the Taxation Authority to those who had made electronic tax returns, had been taken into account when the taxation decision had been made by means of ADM.

As a consequence, the Deputy Ombudsman in these decisions required legislative measures to be taken to rectify the situation by way of providing a material basis in law for the use of ADM in taxation and creating procedural rules for the use of ADM. This expectation is in line with the Opinions of the Constitutional Committee, above. The Chancellor of Justice, who is another supervisor of legality in Finland, has requested information from the Social Insurance Institution concerning ADM processes (see below), and his review is to some extent likely to lead to conclusions similar to those of the Deputy Ombudsman. Relatively far-reaching legislative measures are therefore likely to be taken in Finland in the future with the purpose of comprehensive regulation of ADM, as indicated by the preliminary report on the general need for regulation concerning ADM, produced by the Government of Finland upon request of the Constitutional Committee (Automaattiseen päätöksentekoon liittyvät yleislainsäädännön sääntelytarpeet 2020). Such provisions at the level of an Act of Parliament will also have the consequence of fulfilling the requirements in Art. 22 of the GDPR concerning the legal basis for ADM use as well as suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests.

“But you can always complain to a court of law…”

The guarantees of legality within administrative law fall into two principal categories: preventive safeguards ex ante and reactive safeguards ex post. Preventive safeguards operate before and until the decision is made and result in various principles and mechanisms which try to ensure that a correct decision is made in the first place, by means of administrative due process. Conversely, reactive safeguards operate after the decision has been made and imply the possibility of a complaint to a court of law. This distinction between preventive and reactive safeguards is probably relevant in most countries, although the examples used in this context are mainly Finnish. The self-rectification decision discussed below in this section is actually not part of the reactive safeguards, because it is not a complaint procedure in the formal sense. The requirement for legal rules about suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests is also present in Art. 22(2)(b) of the GDPR. It appears from the Preamble of the GDPR that the term “suitable measures to safeguard” operates mainly in the ex ante situation, separated from the remedies and redress to which the GDPR makes reference.

According to Section 21 of the Constitution of Finland, there is a constitutional right to protection under the law. Preventive protection under the law in administrative procedure is formulated in Section 21(1) of the Constitution: everyone has the right to have his or her case dealt with appropriately and without undue delay by a legally competent authority. A decision by a public authority is thus a determination at first instance of the rights and duties of a person, as laid down in material law. However, the constitutional provision does not stop with the formulation of a general right of everyone to an appropriate procedure, but continues in Section 21(2) with a requirement that provisions concerning the right to be heard, the right to receive a reasoned decision and the other guarantees of a fair trial and good government shall be laid down by an Act. As a consequence, a relatively extensive infrastructure of legislation, including the Administration Act (434/2003), the Act on the Publicity of the Documents of Public Agencies (621/1999) and the Language Act (423/2003) as well as the Civil Servants’ Act (750/1994) and the Act on Municipal Civil Servants (304/2003), has been enacted by Parliament.

All of these pieces of law regulate administrative procedure under the assumption that the decision-maker is a human being: either a single individual, an individual together with a formal drafter of the decision, or a collegial body of physical persons. The recruitment, status and conduct of civil servants in the course of executing their duties is also subject to extensive regulation. In the context of automated decision-making, what happens is that the physical person is taken out of the decision-making context, which in itself renders much of the administrative legislation inapplicable (see below, Sect. 5). When ADM is taken into use and a natural person is no longer in charge of or even part of the process of decision-making, the automated public function slips outside of such legal provisions, which require the involvement of a human being. Yet at the same time, no equivalent regulation of administrative process is imposed upon systems of automated decision-making. The situation is not unique to administrative decision-making: similar observations have been made, for instance, with regard to arbitration, where, on the basis of the UNCITRAL Model Law and legislation of several countries, it has been suggested that the arbitrator ought to be a natural person, not a robot (Hope 2019).

It is said that a starting point in all computer science is that software will contain problems and bugs that influence the outcome. Such problems can be embedded in the algorithm or in the code. Administrative law—and also constitutional law, for that matter—proceeds from the same premise: in administrative decision-making, errors will take place, both errors in fact and errors in law. In particular concerning the latter, there should be safeguards designed to minimize errors.

Errors are thus bound to take place, as exemplified by a case concerning the study benefits of Mr. R.L. (information obtained during interview, last checked on 12 December 2019 by Markku Suksi). In his case, an automated decision in 2015 by the Social Insurance Institution (the SII) concerning study benefits had not, when calculating the total length of studies that would count for tax deductions, taken into account his absence from university due to compulsory military service (as described in the SII decision of 15 May 2019). As a consequence, when the information was delivered by the SII directly, by electronic transfer, to the tax authority (but not to the person concerned; he had never been made aware that the decision of the SII would be made by ADM), the Taxation Authority did not grant, in its automated tax procedure, the deductions to which Mr. R.L. should have been entitled. Upon noticing this several years later, Mr. R.L. requested an explanation, but it was difficult for the SII to ascertain what the problem was. Upon finally receiving the information, Mr. R.L. filed a new application, and when the new application was dealt with, it was noticed that the automated decision-making system of the SII had produced the error in the first place.

Eventually, the SII corrected its original automated decision from 2015 by manually issuing a new decision on 15 May 2019. The decision was signed electronically by a civil servant, on the basis of various provisions in material study benefits law and tax legislation (and presumably also on the basis of the self-rectification provision in Section 50 of the Administration Act, although the provision is not mentioned in the decision). However, the new decision makes no reference to the erroneous ADM decision and provides no explanation of what might have taken place in the ADM process. When speaking on the phone with relevant officials, Mr. R.L. was left with the impression that this case had revealed a larger problem in the system and that, as a consequence, a good number of other ADM decisions on study time justifying a tax deduction had to be revisited and corrected.
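The mechanics of an error of this kind can be pictured with a purely hypothetical sketch. The figures, names and logic below are illustrative assumptions and in no way represent the SII's actual rules; the point is only that a single omitted rule in a rule-based system silently distorts the output, which a second ADM system then consumes as input data:

```python
# Illustrative only: a rule set that should subtract periods of absence
# (such as military service) when counting months of study, and a faulty
# variant in which that rule was never codified.

def counted_study_months(total_months: int, absence_months: int,
                         deduct_absences: bool = True) -> int:
    """Return the months that count towards the benefit or deduction."""
    if deduct_absences:
        return total_months - absence_months
    return total_months  # the omitted rule: absences are silently ignored

correct = counted_study_months(60, 12)                         # 48
faulty = counted_study_months(60, 12, deduct_absences=False)   # 60
# If the faulty figure is transferred electronically to another ADM
# system (here, hypothetically, a tax calculation), the downstream
# decision inherits the error without any human being noticing it.
```

Because neither system logs which rule produced the figure, the individual, and even the authority itself, may find it difficult to reconstruct where the error originated, which is exactly what the case above illustrates.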

The case illustrates problems connected with transparency, the right to be heard and the right to an explanation for a decision. In addition, it reveals problems that may emerge when two ADM systems communicate with each other: the ADM system of the Taxation Authority had used the information produced by the ADM system of the SII as the input data for its own decision. As pointed out by Wiese Schartum, this kind of “use of decision data which represent results from individual decisions that have been made” is likely to become more common in the future and therefore, there should be provisions in the law on the gathering and re-use of decision data from machine-readable sources (Wiese Schartum 2018, 385, the two other categories being “reuse of decision data which have been the basis of previous individual decisions” and the “use of data that have been established as a common government service”).

Taking ADM applications into use may thus make much of the legislation on preventive guarantees inapplicable or redundant. To render some of the guarantees (such as the conflict of interest rules in Sections 27 and 28 of the Administration Act concerning disqualification of a civil servant from decision-making) redundant might be viewed as entirely welcome. However, a considerable portion of the administrative due process guarantees that are still needed may also entirely disappear when ADM is used by a public authority. Meanwhile, Section 21 of the Constitution remains applicable and requires appropriate decision-making pursuant to the principles of good government. Examples of provisions in the Administration Act that become redundant when ADM is taken into use include Section 8 on the principle of service, Section 10 on assistance to other public authorities and cooperation between authorities, Section 12 on counsel, potentially Section 14 on actions of under-age persons and Section 15 concerning action by a legal trustee on behalf of a person, Sections 16–19, Section 21 on transfer of a matter from an authority to the correct public authority and Section 22 on complementing documentation (see Suksi 2018b, 359–368).

Without doubt, after a decision has been made by means of ADM, reactive safeguards ex post become applicable: everyone has the right to have a decision pertaining to his or her rights or obligations reviewed by a court of law or other independent organ for the administration of justice. This means that under the Finnish Constitution there must always be the possibility to file a complaint over a decision by ADM to an administrative court and to the Supreme Administrative Court in the final instance (provided that leave for appeal is granted). Here, pursuant to Section 21(2) of the Constitution, the publicity of proceedings, the right of appeal and the other guarantees of a fair trial shall be provided by an Act, in the Finnish case the Act on Court Proceedings in Administrative Matters (808/2019). According to a decision of the Constitutional Court of France, the point of departure is the same in France: there should always be recourse to administrative justice (décision no 2018-765 DC du 12 juin 2018, para. 70; see Section 3.1 above). However, only a relatively limited proportion of administrative decisions are normally brought before a court for review, which means that the great bulk of administrative decisions rely on the preventive safeguards ex ante. Furthermore, if procedural law governing due process in the making of automated decisions is sidelined without being replaced by new ADM-adapted procedural law, an important yardstick for assessing the correctness of administrative decisions in courts will disappear. A significant proportion of administrative decisions made by civil servants that are currently overturned by courts are overturned for reasons of failure to comply with procedural safeguards and requirements ex ante.

Although it would always be possible to complain to a court of law or another independent organ for the administration of justice when ADM is used, the intention of preventive safeguards in administrative law and with administrative due process in general is to diminish the burden of court cases. It should not be the case that individuals, when confronted with a potential legal problem in conjunction with decisions produced in an ADM procedure, have no standard by which to judge the operation of the ADM procedure, but, as a matter of routine, have to complain to a court in order to learn whether or not the decision was correctly made according to law and fact. The same applies to courts, as they also use due process standards for assessing administrative decisions, not simply material provisions. Indeed, procedural safeguards ex ante should be in operation at the stage when the ADM system is designed: the law of administrative procedure governing ADM ought to be implemented in the architecture of the ADM system.

The proper functioning of administrative decision-making is entirely dependent on the existence of the two arms of legal protection under the law, preventive safeguards and reactive safeguards, and it is problematic if the former arm disappears in the areas of administration where ADM is used. The problem is of a constitutional nature in Finland, where a specific provision requires preventive safeguards, but it is at least a matter of principle in other jurisdictions.

Excursus on the vanishing physical decision-maker: Section 118 of the Constitution of Finland on official accountability

Liability issues related to the use of ADM in public administration have the potential to create concerns in all states. In this respect, it appears that awareness has not, as of yet, reached very high levels (Wirtz et al. 2019, 603). It should always be possible to determine who is in charge of and legally responsible for decisions made by means of ADM. The point of departure should be that humans are always responsible for the consequences associated with ADM technology and that data is processed in compliance with the relevant legislation.

The introduction of ADM in Finland has, until now, in most cases occurred by means of an internal decision of the public authority or by means of other internal measures of such agencies. A public authority that wishes to speed up decision-making in matters it is in charge of, in particular mass decisions, and thus also save financial resources, decides by means of its own measures that the process of decision-making will take place through ADM. Taking ADM solutions into use in such a context impacts the principles and content of official accountability as established in Section 118 of the Constitution of Finland, which has both a preventive and a reactive function (the preventive function is probably more important, because the reactive function is, in practice, not much used). Again, the example is taken from the particular context of Finland, but the phenomenon is likely to be of a more general nature, applicable to many countries (except perhaps the UK and similar governmental systems, where the responsibility and liability rest with the relevant minister).

The first sub-section of Section 118 underlines that a civil servant is accountable for the lawfulness of his or her official actions. A civil servant is also accountable for decisions of a collegial body that he or she has supported. When ADM is taken into use, the civil servant who would be accountable for decisions and measures disappears. Yet the same decisions continue to be made by the public authority, by ADM. For the sake of simplicity, it is assumed here that decisions made by collegial bodies (municipal boards and committees, faculty boards of universities, etc.) are not easy to transform into ADM, and thus that the part of the constitutional provision on collegiate accountability remains largely unaffected by ADM.

The second sub-section of the provision holds that a rapporteur who has not objected to a decision made on the basis of his or her report is responsible for the decision in the same manner as the formal decision-maker. Mass decisions within administration are rarely made upon a report, but if simple decisions where the report procedure is currently used are transformed into ADM, the rapporteur function, under which a civil servant would be accountable for the decisions made, disappears.

There is, in the third sub-section of the provision, an individual right of everyone who has suffered a violation of his or her rights or sustained loss through an unlawful act or omission by a civil servant or other person performing a public task, as provided by an Act, to require that the civil servant or other person performing the public task is charged in criminal proceedings and that the public organization, civil servant or other person be held liable for damages caused. If ADM is used, there is no longer any civil servant who could be held liable for the administrative decisions, and as a consequence, the criminal and tort law safeguards, however rarely activated in real life, become inoperable. An algorithm or code cannot, according to common perceptions, be held liable in criminal law or tort law. What is left of those safeguards is the tort liability of the public entity (the state, the municipality or other public entity).

Against this background, it can be asked what remains of the accountability categories (administrative, criminal and tort liability) in Section 118 in a situation where ADM is used on the basis of an (internal) decision of the public authority. It appears that what remains is the accountability of the civil servant for decisions he or she has taken as a part of a collegial body and the tort liability of a public entity, while most elements of this form of constitutional protection cease at the point when a decision is transferred from a human decision-maker to ADM.

It is difficult to estimate the magnitude of the legal problems of accountability that follow from the use of ADM for internal decisions or measures of public authorities, but it appears clear that such decisions and measures have normative effects even at the level of the Constitution, at least in Finland and also presumably in some other countries. Evidently, the effects are not limited to the constitutional level; they can also impact at the level of ordinary legislation, such as the Administration Act, by making redundant a good part of the provisions in that Act. The understanding of a decision-maker as a human being is particularly clear in Section 91(1) of the Municipalities Act (410/2015), under which the Municipal Council can delegate its decision-making authority to other municipal decision-making bodies, to elected officials or to civil servants (a similar delegation provision exists in the Swedish Municipalities Act, see Suksi 2018a, 469–470). It is very likely that this provision would have to be amended if the municipalities were to start using ADM, albeit on the basis of a more explicit legal ground in material legislation.

In Finland, the Constitutional Committee of Parliament has concluded in its Opinions 68/2018 and 7/2019 that the constitutional provisions on the rule of law and public liability of the State and civil servants are grounded in the principle of administration through civil servants. As a consequence, the requirements of legal security and good government lead to the handling of matters under the general administrative legislation so that those involved act under public liability. Therefore, indirect liability of civil servants would not be sufficient.

However, the proposal in Government Bill 18/2019 to designate the director general of an administrative agency as the person individually liable for each decision taken by means of ADM was also criticized by the Constitutional Committee. The Committee concluded that, in such a scenario, ADM disassociates itself from a construction of liability which is based upon the functional role of a human decision-maker. The problem was that liability of the director general, although statutorily the leading civil servant of the agency, was an imaginary and artificial construction. At the same time, the Committee felt that the link to the provision on the rule of law in Section 2(3) was lost. Therefore, the Committee opined that ADM must, for reasons of Section 118 of the Constitution, be carefully supervised and legally controllable: in the last resort, it must be possible to link ADM to the liability of civil servants for decisions taken. As a consequence, the proposed provision could not be passed in the ordinary legislative procedure.

The issue of who is responsible for the correctness of decisions made by ADM at the Taxation Authority was also raised by the Deputy Ombudsman in two of her decisions. In a decision of 20 November 2019 (EOAK/3379/2018), she opined that criminal liability currently remains indirect in ADM decisions by the Taxation Authority. At the agency, there is an internal set of rules according to which a so-called owner of the process decides matters that deal with the nationwide procedure of work, distribution of work, planning, development, process management, communication and follow-up. According to the Deputy Ombudsman, this internal set of rules is too general and does not make it possible, in the context of ADM, to allocate criminal liability or define its content and extent. In addition, she considered insufficient a system which is based upon the liability of a civil servant in the capacity of the process owner (apparently the Director General of the Taxation Authority), because the connection to tort liability of a civil servant and the principle of legality in criminal matters in Section 8 of the Constitution become unclear. Evidently, the Deputy Ombudsman is thus voicing strong concern over the same matters as the Constitutional Committee in relation to ADM.

As these examples show, attribution of responsibility and liability becomes more difficult when decision-points are taken away from human beings, especially because inputs from humans decrease. Designing accountability and liability therefore needs to be a central part of building information systems architectures (Smith et al. 2010, 10; Automaattiseen päätöksentekoon liittyvät… 2020, 11), much in the same way as designing and developing the algorithm and software (see below, Section 6).

Rule of law in rule-based ADM versus machine-learning ADM by public authorities

The above sections raise concerns regarding ADM in the context of public administration and in a system of the rule of law in general. It is unclear how the liability and accountability issues should be resolved (or—potentially—how an algorithm or a code could be held liable), although some effort has already gone into the thought of whether an algorithm could be defined as a legal person or perhaps made an object of taxation or of liability (see e.g., Dahiyat 2020 and European Commission 2020a, b, c, d). As indicated by the sections above, the taking into use of ADM, often by internal decisions of public authorities without explicit support in relevant legislation, creates redundancies where the “rule of algorithm” may turn code into law that replaces legislation enacted by Parliament. From that point of view, there is self-evidently a need for supervision, oversight and transparency and for ADM-adapted legislation (see below). In addition, it has been submitted that explicit representation of procedural knowledge in ADM may be difficult (Bench-Capon and Coenen 1992, 66–67). This means that implementing a proper level of administrative due process in ADM by means of legal safeguards ex ante may be challenging. An illustration of the difficulties has been seen in an attempt to create a single-source-of-knowledge on norms and rules (van Doesburg and van Engers 2019, 2). The pursuit of completeness of the legal framework comprising more than 500 laws and regulations and the choice of a deontic approach proved to be an insurmountable task that led van Doesburg and van Engers to choose a modular approach for a subsequent rule-based project (see below). It is likely that provisions regulating administrative procedure constitute a significant portion of the legal rules needed for setting up an ADM system.

A number of particular problems emerge with the use of ADM, such as the need to modify the software when the law is amended or when a court changes the interpretation of the material provisions. It is typical of legislation that it is in constant change, and if the current letter of the material law, such as a provision on a social benefit, is changed, the amendment should be reflected in a correct manner in the ADM software from the day the amendment enters into force. There thus exists a need to maintain preparedness to modify the ADM solution in accordance with amendments in legislation by way of so-called adaptive maintenance (see e.g. Bench-Capon 1996, 309–313, 315–321). The same holds true for situations where a court of law provides an interpretation that deviates from or modifies the rules that the ADM system is running on: the change should be reflected in the operation of the ADM system so as to produce decisions that are in line with the new interpretation. Additionally, in situations where a court makes determinations about the correctness of ADM decisions and potentially overturns them, there must be constant willingness and ability to modify the ADM solution and the software it is running on.

As explained in a review of several legal expert systems, it is important that rules in the knowledge base “maintain links to the statute or in general to the primary sources of the law” (Bratley et al. 1991, 74; see also Bench-Capon and Gordon 2009, 18–19, on system operation). A traditional knowledge-based system can therefore be amended more or less quickly depending on how far-reaching the changes are (although it might still take weeks at best from the perspective of maintenance). However, machine-learning systems can adapt only when there are sufficient new decisions to obliterate the memory of the past situation, and because a machine-learning system relies on past cases, there exists the possibility that a machine-learning ADM leads to a rigid system of a stare decisis kind (see also Branting et al. 2019, 26). It is easy to agree with Bench-Capon that while machine-learning is retrospective, interpretation of law is dynamic, able to meet new situations and to adapt as society changes (Bench-Capon 2020, 28): “But how it will develop cannot be predicted from a consideration of the past. A legal expert might be able to conjecture some trends, but that art is beyond the capacity of current machine learning algorithms.”
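The idea that rules in the knowledge base should maintain links to their primary legal sources, and that adaptive maintenance should take effect on the day an amendment enters into force, can be sketched in code. The following is a minimal, purely illustrative sketch: the statute name, section numbers, income thresholds and dates are all invented for the example and do not correspond to any real legislation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

# Hypothetical rule objects: each rule keeps (i) an explicit citation of
# its primary legal source and (ii) validity dates, so an amendment is
# inserted as a new rule version effective from its entry into force.

@dataclass(frozen=True)
class Rule:
    source: str                        # citation of the primary legal source
    valid_from: date
    valid_to: Optional[date]           # None = still in force
    condition: Callable[[dict], bool]
    conclusion: str

RULES = [
    # Original provision, superseded by an amendment as of 2021-01-01.
    Rule("Hypothetical Benefits Act, Section 5(1) (original)",
         date(2015, 1, 1), date(2020, 12, 31),
         lambda f: f["monthly_income"] <= 1000, "grant_benefit"),
    # Amended provision with a higher income threshold.
    Rule("Hypothetical Benefits Act, Section 5(1) (as amended)",
         date(2021, 1, 1), None,
         lambda f: f["monthly_income"] <= 1200, "grant_benefit"),
]

def rules_in_force(on: date) -> list:
    """Select the rule versions in force on the decision date."""
    return [r for r in RULES
            if r.valid_from <= on and (r.valid_to is None or on <= r.valid_to)]

def decide(facts: dict, on: date) -> list:
    """Apply the rules in force; return (conclusion, legal source) pairs."""
    return [(r.conclusion, r.source)
            for r in rules_in_force(on) if r.condition(facts)]
```

Under this sketch, an applicant with a monthly income of 1100 would receive no benefit before the (hypothetical) amendment but would receive it afterwards, and every conclusion carries a citation of the exact provision applied, which is precisely the traceability that adaptive maintenance presupposes.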

An issue of principal importance is whether to create the ADM solution as a rule-based system (“if x, then y”) or as a machine-learning system. Because ADM may be viewed as an application of artificial intelligence and because AI can be used, for instance, for profiling individuals in the manner defined by Art. 4, para. 4, of the GDPR, it is arguably important to rule out the use of machine-learning applications of ADM in public administration. This would seem to be important at least in relation to potential applications of AI to the public sector (Wirtz et al. 2019, 600–601) if actual decisions might be made without human involvement, although there might be more leeway for AI applications that are only used to support human decision-making. However, ruling out machine-learning applications of AI in the ADM context appears appropriate also for a number of other reasons, despite the fact that this probably means that the implementation of AI in public sector decision-making will be severely limited (Wirtz et al. 2019, 608; Automaattiseen päätöksentekoon liittyvät… 2020, 4).

Firstly, if the ADM facilitates certain types of profiling mentioned in Art. 6 of the GDPR, the individual has a right to object to such procedures on the basis of Art. 21 of the GDPR, with the result that this type of ADM may not be functional in situations where many individuals use their right to object. However, it appears that the right to object in Art. 21 can lead to interruption of data processing only in direct marketing situations (para. 3) and under certain conditions in situations of scientific and historical research and when collating statistical information (para. 6). This means that the right to object would not lead to interruption of data processing in situations where public authorities are carrying out data processing on the basis of law. Also, where ADM has not been made compulsory under national or European law, an individual has the right to opt out from ADM on the basis of Art. 22 of the GDPR, which might also render the ADM system dysfunctional if several individuals make use of this right. This means that national law covering the use of ADM by public authorities is in practice necessary.

Secondly, machine-learning ADM would probably not be able to independently observe and take into account changes in law or court practice so as to re-orientate the output in the categorical and imperative manner intended by the law-maker or the court, but would instead relate such changes to earlier data stored in the machine-learning system, thus making the amendment relative rather than categorical (see above). In contrast, a rule-based ADM system would undergo systematic changes by which amendments in law or court practice will be inserted as new rules in the software, provided that there is proper maintenance of the rule-based system (Bench-Capon and Gordon 2009, 12). The use of certain modular techniques based on abstract dialectical framework methods appears promising with a view to refurbishing and complementing the case materials and increasing transparency in machine-learning contexts (for an interesting application of the ANGELIC methodology, see Al-Abdulkarim et al. 2019, 12–20; see also Al-Abdulkarim et al. 2016a for a comprehensive review of the development of the facilitation of management in situations of changing case-law). However, further research is needed within this complex field before it is possible to create “a substantial system to perform factor based reasoning in a legal domain” (see Al-Abdulkarim et al. 2016b, 1–49; Al-Abdulkarim et al. 2019, 10–11, 21).

Thirdly, it would in all likelihood be difficult to ensure transparency of a machine-learning ADM system so as to produce a relevant explanation to the individual of how, and on the basis of what information, the ADM had arrived at a certain conclusion (see Branting et al. 2019, 22, where the point is made that “purely machine-learning-based systems require relatively little development effort but typically have little or no explanatory capability”). As pointed out above, it is difficult to avoid Black box situations. This is problematic, in particular, if the algorithm evolves on its own and amends itself while learning by doing, and particularly so if the programme is based on neural networks involving many levels of nodes (see, e.g., Geslevich Packin and Lev-Aretz 2018, 92–93; Al-Abdulkarim et al. 2019, 12). A rule-based ADM is better suited for these purposes, because it should be possible to retrieve log information on how an individual decision was made. It has, however, been pointed out that the use of AI does not necessarily lead ADM to Black box situations, because “it should be technically feasible to create AI systems that provide the level of explanation that is currently required of humans” (Doshi-Velez et al. 2017, 6, 11–12; see also Al-Abdulkarim et al. 2019, 19, for an example based on abstract dialectical framework method, ANGELIC, that promotes transparency and provides justifications). Public decision-making should nevertheless not take machine-learning ADM systems into use before transparency and explainability issues are completely resolved.
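The retrievability of log information from a rule-based system can be illustrated with a short sketch. Everything here is hypothetical: the rule names, conditions and case data are invented, and a real system would write to a persistent, audited store rather than an in-memory list.

```python
import json

# Hypothetical rule-based engine that records, for every decision, the
# input facts and the rules that fired, so that the reasoning behind an
# individual decision can be retrieved and explained afterwards.

DECISION_LOG = []   # in practice: a persistent, audited store

RULES = {
    "R1": ("applicant is resident",  lambda f: f["resident"]),
    "R2": ("income below threshold", lambda f: f["income"] <= 1200),
}

def decide(case_id: str, facts: dict) -> str:
    """Apply all rules; grant only if every condition is satisfied."""
    fired = [rid for rid, (_, cond) in RULES.items() if cond(facts)]
    outcome = "grant" if len(fired) == len(RULES) else "deny"
    DECISION_LOG.append({"case_id": case_id, "facts": facts,
                         "rules_fired": fired, "outcome": outcome})
    return outcome

def explain(case_id: str) -> str:
    """Retrieve a human-readable explanation from the decision log."""
    entry = next(e for e in DECISION_LOG if e["case_id"] == case_id)
    reasons = [RULES[rid][0] for rid in entry["rules_fired"]]
    return json.dumps({"outcome": entry["outcome"], "reasons": reasons})
```

After `decide("C-1", {"resident": True, "income": 1100})`, a call to `explain("C-1")` returns a justification listing the statutory conditions that were satisfied; a trained machine-learning model typically offers no comparable record of its internal path to a conclusion.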

Fourthly, and perhaps most importantly, machine-learning techniques in ADM for making administrative decisions cannot work in a rule of law situation because machine-learning is actually based on predictions, where a new decision is made on the basis of a data pool of previous decisions (on the results of machine learning in ADM as prediction, see Geslevich Packin and Lev-Aretz 2018, 88–89, 93, 96, 100–102; see also Al-Abdulkarim et al. 2019, 8–9, and Alkhatib and Bernstein 2019, 4, 6, where the point is made that more training data does not eliminate errors that lead to incorrect decisions, because there will always be new real cases that do not conform to the patterns extracted from training data). Because decisions of public authorities cannot be based on predictions on the basis of earlier decisions but must be based on provisions in material (and procedural) law and fulfil requirements of legality which, for instance, in Finland follow from Section 2(3) of the Constitution, a rule-based ADM solution appears to be the only option when ADM systems are designed for administrative decision-making (see also Automaattiseen päätöksentekoon liittyvät… 2020, 3–4; in a similar vein, Branting et al. 2019, 23: “Legal justifications and explanations differ from typical XAI [explainable AI –MS] applications in that they must make explicit reference to authoritative legal sources to be persuasive”). Therefore, as pointed out by Branting et al., “decision prediction is not appropriate for completely autonomous processes” because “[d]enial of benefits by an automated process, no matter how accurate, raises significant due process issues, and in any event prediction accuracy is limited in this paradigm by the absence of explicit modeling of legal rules or issues” (Branting et al. 2019, 25).
If machine-learning ADM were used, it is likely that the connection to the law and to the principle of legality would be broken, whereupon the rule of law would turn into the rule of algorithm. Such a development would be very problematic and should not be countenanced. Therefore, in the terminology of the GDPR on the right to know the logic of an automated decision-making procedure, the logic of ADM should always be rule-based, not based on machine learning, in the context of public administration.
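The difference between applying a statutory rule and predicting from past decisions can be made concrete with a deliberately simplified sketch. The statute, threshold and case data below are all invented; the "predictor" is a naive nearest-neighbour stand-in for machine learning, used only to expose the structural problem.

```python
# The (hypothetical) statutory rule: grant the benefit if income <= 1000.
def rule_based(income: float) -> str:
    return "grant" if income <= 1000 else "deny"

# Past decisions, all involving incomes far from the statutory threshold.
PAST_CASES = [(200, "grant"), (300, "grant"), (400, "grant"),
              (2000, "deny"), (2500, "deny"), (3000, "deny")]

def predicted(income: float) -> str:
    """Decide like the most similar past case, ignoring the statute itself."""
    nearest = min(PAST_CASES, key=lambda case: abs(case[0] - income))
    return nearest[1]

# A new case near the threshold exposes the difference: an income of 1100
# must be denied under the rule, but the nearest past case (400, "grant")
# makes the predictor grant it, because no comparable case exists in the
# training data.
```

However sophisticated the learning technique, the decision remains anchored in the pool of earlier cases rather than in the provision itself, which is the core of the rule-of-law objection raised above.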

Fifthly, because the procedural law that controls the application of material law should normally be included in a machine-learning ADM application, the difficulty of inserting administrative due process into such systems must be taken into account. As pointed out by Bench-Capon and Coenen, Karpf requires that “if procedural law is part of the domain of the model then the law module will have representation of material as well as procedural rules and it is demanded that the whole system functions in accordance with and in the order following the procedural rules” (quote in Bench-Capon and Coenen 1992, 66–67). Bench-Capon and Coenen state they would not insist on the above-mentioned condition, which is one of five, “being met by any representation claiming to exhibit isomorphism: but the explicit representation of procedural knowledge remains a difficult issue” (Bench-Capon and Coenen 1992, 67), where isomorphism is defined as correspondence between source documents and the representation of the information the documents contain as used in the system (Bench-Capon and Coenen 1992, 66; see also Bench-Capon and Gordon 2009, 11–12). It appears that the above-mentioned ANGELIC example is entirely based on material law, in particular case law (Al-Abdulkarim et al. 2019, 19), and it remains an open question whether or how procedural law, that is, provisions concerning administrative due process, could be inserted as control elements into a modular technique of ADM.

However, the use of algorithmic methods of decision-making might, at least in some future scenario, be able to accommodate a more open attitude towards machine learning. Hence, although the point of departure would be rule-based ADM, the borderline towards machine learning might not have to be so categorical (Bench-Capon 2018). Also, it has been suggested that the distinction between rule-based systems and machine learning systems should not entirely exclude the possibility of using machine learning systems in ADM, inter alia, for the reason that ADM systems may combine different techniques (Koulu et al. 2019, 132–133; Al-Abdulkarim et al. 2019, 12). It is, of course, possible that the problems which currently exist with machine learning in ADM can be overcome in a water-tight manner in some not too distant future, but at this point in time it appears safest to advocate a complete ban on using machine learning techniques in ADM used by public authorities. Such a ban can be repealed at a point in the future when it can be guaranteed that the link to the rule of law can be maintained for ADM based on machine learning. In this context, a precautionary principle should be observed.

A number of particular problems also arise with the use of rule-based ADM, such as the need to modify the software when the law is changed or when courts change the interpretation of the material rules on which the decision is based. In such situations, changes need to be made in the programme, but that appears to be much more feasible in a rule-based setting, in particular concerning provisions of procedural law. Therefore, if the contents of material and procedural law are changed, such as a provision about a social benefit or a due process provision, the amendment should be inserted in the correct manner in the software of the rule-based ADM from the day the amendment enters into force (Automaattiseen päätöksentekoon liittyvät… 2020, 4). This is also the case when a court decides on a new interpretation that departs from or modifies those rules that the rule-based ADM system applies: the new interpretation should be inserted in the functions of the ADM so that decisions made using ADM comply with the new interpretation. Thus, there should, in situations where a court resolves issues about the legality of the ADM decisions, exist a constant preparedness to modify the ADM system, both the data and the software that the system is running on (Wiese Schartum 2018, 384, 393; Motzfeldt and Taheri Abkenar 2019, 95–98, 101). The work on the necessary modifications is a task for a human being, acting under proper rules of liability and accountability (see above, Sect. 5).

Proposals for procedural rules on ADM in legislation

For reasons related to administrative due process, accountability of civil servants and various aspects of the rule of law (including the requirement for preventive safeguards of legality found in, for instance, Section 21 of the Constitution of Finland), there is a need to specify in an Act of Parliament the conditions under which ADM can be used by public authorities. This need also exists on the basis of the lacunae created by the taking into use of ADM solutions in public administration. Therefore, in addition to context-specific authorization of the use of ADM, there is a need for general legislation that outlines the requirements of good government, transparency and public liability. The content of such legislation is likely to be different from the corresponding rules applied to human decision-makers in public administration.

Usual appeals procedures, used in an ADM context, may result in courts opening up the “Black boxes” of automated decision-making but only in that relatively limited number of cases where appeals are filed. Therefore, reactive safeguards via the courts are not enough in an ADM context; preventive safeguards for administrative due process, applicable to all decisions, are also needed. It is deeply disturbing that many of the existing preventive safeguards to ensure legality of administrative procedure are rendered redundant when ADM is taken into use. Arguably, the Black box should already be open, or at least partially open, before decisions are made through ADM.

Legislative needs within the area of ADM are not specifically Finnish, but of a wider nature. General legislation should, against the background of what has been mentioned in the above sections, define what good government means in an ADM environment by formulating principles and provisions of, for example, the following kind:

  • a civil servant should always be designated as a responsible party for the ADM system and its use, and the person in question should be accountable under administrative law as well as under the particular criminal law and tort law applicable in such situations;

  • the use of ADM for making various types of decisions should be authorized in material legislation, that is, in an Act of Parliament or on the basis of such an Act, for each of the instances where ADM is used;

  • the party receiving a decision made by ADM should be informed about the fact that the decision has been made by means of ADM;

  • there should be a determination as to which existing legal provisions on administrative procedure are applicable in ADM-situations;

  • there should be a requirement to identify those public (and private, as the case may be) registers that the ADM system has used when making the decision, and an explanation of how the external register information has been used (this is important, because government-held data normally encompasses all individuals in society and is highly accurate; it not only constitutes big data, but very big data);

  • there should be principles concerning the purging of unnecessary information from the ADM system after it has been used for the decision-making purpose;

  • there should be provisions on the transparency of the ADM system as concerns the algorithm and the code (“the logic”) as well as the information used;

  • there should be a provision that prevents the use of machine-learning ADM and requires the use of rule-based ADM in administrative decision-making (the prohibition should be in place at least until such time that it is entirely certain that inconsistencies between machine-learning ADM and the rule of law and other issues of legality have been resolved);

  • there should be a definition of situations where ADM systems must not be used (such as in relation to the revisiting or correcting of decisions made by the ADM system, in relation to children according to the preambular para. 71, first part, last sentence, of the GDPR, etc.);

  • there should be principles that are to be observed when the software is coded, in particular if such software has to be obtained by means of public procurement (which should include provisions about conditions for the involvement of private entities in the creation of ADM software).

The legislation envisioned in this non-exclusive list of proposals should be such that individuals can use it as a yardstick by which to assess whether the ADM decisions affecting them were correctly made, much in the same way as the Administration Act currently provides a measure for human-made administrative decisions. Even more importantly, such legislation would provide a yardstick for the persons who create the ADM software and who operate it in public authorities, at the stage when the ADM system is designed and maintained. The discretion of system designers and software developers should thus be limited by provisions in law, because their actions and potential inability to take into account the requirements of administrative due process may have a considerable impact on the outcomes of ADM (Pääkkönen et al. 2020; see also Smith et al. 2010, 3; Automaattiseen päätöksentekoon liittyvät… 2020, 11). As a consequence, the ADM system as a decision-maker would be subjected to legislation corresponding to that which civil servants and human decision-making are subjected to. Finally, legislation of this kind would function as a yardstick for courts of law by which to assess whether the appealed ADM decisions were correctly made from a procedural point of view.
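Several items in the list above (a designated responsible official, identification of the registers consulted, notice of automation, transparency of the logic, rule-based operation) can be illustrated with a minimal sketch of what a rule-based ADM decision record might look like. The sketch is purely illustrative: the class and field names, the sample structure and any statutory references are invented, not drawn from Finnish law or from any existing system.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Rule:
    """A single rule-based criterion: a statutory reference plus a predicate."""
    statute: str                        # provision authorizing the check
    description: str
    predicate: Callable[[dict], bool]   # facts -> bool

@dataclass
class DecisionRecord:
    """Audit record reflecting the due-process checklist above."""
    responsible_official: str           # designated accountable civil servant
    registers_consulted: List[str]      # external registers used for the decision
    automated: bool = True              # the party must be told it was ADM
    reasoning: List[Tuple[str, str, bool]] = field(default_factory=list)
    outcome: str = "undecided"

def decide(facts: dict, rules: List[Rule],
           official: str, registers: List[str]) -> DecisionRecord:
    """Evaluate the rules against the facts, logging every step."""
    record = DecisionRecord(responsible_official=official,
                            registers_consulted=registers)
    for rule in rules:
        satisfied = rule.predicate(facts)
        # each evaluation is logged, so the "logic" can be disclosed on request
        record.reasoning.append((rule.statute, rule.description, satisfied))
        if not satisfied:
            record.outcome = "rejected"
            return record
    record.outcome = "granted"
    return record
```

Because every rule evaluation is appended to the record, the logic of the decision can be disclosed afterwards, and the record names both the accountable official and the registers consulted, in line with the proposals above.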

Conclusions

Society today is highly technology-driven, and one consequence, intended or unintended, of the on-going development of AI is that algorithms will increasingly take over functions not only in the private sector, but also within public authorities, with the risk of turning the general system of the rule of law into a rule of algorithm. There are legislative lacunae in the regulation of ADM not only in Finland but probably also in a number of other countries. General ADM legislation should spell out how far the use of AI can be allowed to reach in public administration, and the reach of ADM should be determined so that decision-making within public administration does not leave the ambit of the rule of law and turn into a rule of algorithm.

At the moment, there is, in most countries, no good understanding of how far the use of AI applications in the form of ADM might extend into decision-making by public authorities. When establishing boundaries for the use of various AI applications in such decision-making, continued connection to the rule of law should be the starting point, and a further evolution into a rule of algorithm in public administration should be prevented. This endeavor is underpinned by a premise of an ethical nature; it is a duty to try to uphold the rule of law and to prevent its replacement by the rule of algorithm. It is submitted that, at the level of legislation, use of algorithms in ADM should be limited to rule-based variants by ruling out machine learning, and that entirely new types of ex ante safeguards of legality are needed.

It is thus clear that legislative provisions addressing ADM are needed and, as indicated by a preliminary review in Finland, in the making. Whether such provisions should be inserted into existing Acts, where they would supplement traditional forms of decision-making, or collected in separate and entirely new legislation on automated decision-making is a further relevant issue. Some provisions about ADM could probably be inserted into existing law, but a good number of the particular provisions needed for the regulation of ADM should be collected in a separate Act that spells out the general requirements of good government and of accountability in instances where ADM is used.

The use of ADM is likely to necessitate new kinds of procedural rules that are specific to ADM contexts, and it is important to start identifying what kind of new rules might be needed. Different countries may also need different kinds of rules, for instance because of constitutional requirements. Reliance on the existing rules, although these are in many cases in principle technology-neutral, may already have been deemed insufficient and, if not, will undoubtedly prove insufficient in the relatively near future. It is suggested that all countries have a need to review their rules of administrative due process with a view to adapting them to ADM. In whichever way the legislation is framed, the key issues are that the persons who develop the algorithm and the code, as well as the persons who run or deal with the software within public authorities, are aware of the preventive safeguards of legality in the ADM context, not only of the reactive safeguards constituted by the complaint procedures, and that legal mechanisms exist under which they can be held accountable and liable for decisions produced by ADM.

Notes

  1. See also Branting et al. 2019, 26, 28, 31, who make the observation that “the relatively stereotyped language of administrative case decisions means that statements with similar legal effect in different cases tend to be close to each other in semantic-embedding space. Thus, annotation tags applied to a subset of cases can be mapped to an entire corpus, with accuracy and completeness that depends on the consistency of the case language within the corpus, the typicality of the annotated cases, and the threshold for semantic similarity.” The aim is to develop a model for applying contemporary computational linguistics techniques to homogeneous case corpora so as to produce explainable decision-prediction systems with a minimum of manual case annotation, as a methodology equally applicable to routine administrative adjudications in agencies throughout the world. The accuracy achieved by the method presented in the paper (Semi-supervised Case Annotation for Legal Explanations, SCALE) is supposed to be sufficient for prediction, triage and decision support, but, by implication, not for fully automated decision-making.

  2. An interesting rule-based project is in a trial phase concerning the Dutch Aliens Act, which comprises altogether 1096 material clauses. A development project broke the provisions of the Act down into 428 act frames with descriptions of outcomes under certain logical rules, applying the Calculemus method and using the FLINT language for the explicit interpretation of sources of norms. The work was performed by a single person in one month (van Doesburg and van Engers 2019, 7, 10). For ADM to become operative on the basis of this Act following this method, clauses from other acts, including the General Administrative Law Act, would have to be transformed into additional act frames to make the ADM system complete. In fact, all relevant provisions of the legal order that affect decision-making on alien matters would have to be represented by act frames in the system, which may mean that a far greater number of act frames is needed. The article does not indicate how accountability and liability issues have been dealt with. However, this modular technique may offer an interesting practical solution to the problem of constant modifications in law and practice.
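The tag-propagation idea described in note 1 can be sketched in a few lines: annotation tags attached to a handful of labeled statements are mapped onto unlabeled statements whose embeddings are sufficiently similar. This is only a toy illustration of the principle; the vectors and the threshold are invented, and SCALE itself works with learned sentence embeddings over full case corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def propagate_tags(labeled, unlabeled, threshold=0.9):
    """labeled: list of (embedding, tag) pairs; unlabeled: list of embeddings.
    Returns one tag (or None) per unlabeled embedding: the tag of the most
    similar labeled statement, provided the similarity reaches the threshold."""
    tags = []
    for emb in unlabeled:
        best_tag, best_sim = None, threshold
        for ref_emb, tag in labeled:
            sim = cosine(emb, ref_emb)
            if sim >= best_sim:
                best_tag, best_sim = tag, sim
        tags.append(best_tag)  # None when nothing is similar enough
    return tags
```

As the quoted passage notes, the quality of such propagation depends on the consistency of the case language, the typicality of the annotated cases and the chosen similarity threshold.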
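The act-frame representation discussed in note 2 can likewise be caricatured as a small data structure pairing a source provision with the conditions under which a legal act may be performed. The field names and the structure below are hypothetical; the actual Calculemus/FLINT representation is considerably richer.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ActFrame:
    """A rule frame: a source provision plus the preconditions of a legal act."""
    provision: str                              # source of the norm in the Act
    actor: str                                  # who may perform the act
    action: str                                 # the legal act described
    preconditions: List[Callable[[Dict], bool]] # predicates over the facts

    def applies(self, facts: Dict) -> bool:
        """The frame applies when every precondition holds for the facts."""
        return all(cond(facts) for cond in self.preconditions)

def applicable_frames(frames: List[ActFrame], facts: Dict) -> List[ActFrame]:
    """Return the frames whose preconditions are all met by the fact situation."""
    return [frame for frame in frames if frame.applies(facts)]
```

On this picture, keeping the system current under "constant modifications in law and practice" amounts to adding, removing or amending individual frames rather than rewriting the whole rule base, which is the practical attraction of the modular technique.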

References

  • Al-Abdulkarim L, Atkinson K, Bench-Capon T (2016a) Accommodating change. Artif Intell Law 24(4):409–427

  • Al-Abdulkarim L, Atkinson K, Bench-Capon T (2016b) A methodology for designing systems to reason with legal cases using abstract dialectical frameworks. Artif Intell Law 24(1):1–49

  • Al-Abdulkarim L, Atkinson K, Bench-Capon T, Whittle S, Williams R, Wolfenden C (2019) Noise induced hearing loss: building an application using the ANGELIC methodology. Argum Comput 10:5–22

  • Alkhatib A, Bernstein M (2019) Street-level algorithms: a theory at the gaps between policy and decisions. CHI 2019, Glasgow

  • Automaattiseen päätöksentekoon liittyvät yleislainsäädännön sääntelytarpeet–esiselvitys (2020). Oikeusministeriö, Helsinki

  • Bench-Capon TJM (1996) Maintenance of legal knowledge-based systems. In: Kent A (ed) Encyclopedia of library and information science, vol 57. Marcel Dekker Inc., New York, pp 308–322

  • Bench-Capon T (2018) Legal cases: argumentation versus ML. ArgSOC at COMMA 2018. https://cgi.csc.liv.ac.uk/~tbc/publications/commaWorkshop18.pdf. Accessed 11 Dec 2019

  • Bench-Capon T (2020) The need for good old fashioned ai and law. In: Hotzendorfer W, Tschohl C, Kummer F (eds) International trends in legal informatics: a Festschrift for Erich Schweighofer. Weblaw, Bern, pp 23–36

  • Bench-Capon TJM, Coenen FP (1992) Isomorphism and legal knowledge based systems. Artif Intell Law 1:65–86

  • Bench-Capon T, Gordon TF (2009) Isomorphism and argumentation. In: ICAIL ‘09: Proceedings of the 12th international conference on artificial intelligence and law. Association for Computing Machinery, New York, pp 11–20

  • Bodea G, Karanikolova K, Mulligan DK, Makagon J (2018) Automated decision-making on the basis of personal data that has been transferred from the EU to companies certified under the EU-U.S. Privacy Shield – fact-finding and assessment of safeguards provided by U.S. law. Final report. TNO. Directorate-General for Justice and Consumers, Directorate C: Fundamental Rights and Rule of Law, Unit C.4 International Data Flows and Protection. Brussels 2018

  • Branting K, Weiss B, Brown B, Pfeifer C, Chakraborty A, Ferro L, Pfaff M, Yeh A (2019) Semi-supervised methods for explainable legal prediction. In: Proceedings of the seventeenth international conference on artificial intelligence and law ICAIL’19, June 17–21, 2019, Montreal, pp 22–31

  • Bratley P, Frémont J, Mackaay E, Poulin D (1991) Coping with change. In: Proceedings of the 3rd international conference on artificial intelligence and law. ACM Press, New York, pp 69–76

  • Conseil Constitutionnel of France (2018) Decision no. 2018-765 DC of 12 June 2018. www.conseil-constitutionnel.fr/en/decision/2018/2018765DC.htm. Accessed 13 Feb 2020

  • Constitutional Committee of Finland (2019) Opinions 62/2018 (14 February 2019) and 7/2019 (14 November 2019). www.eduskunta.fi/FI/valiokunnat/perustuslakivaliokunta/Sivut/default.aspx. Accessed 13 Feb 2020

  • Dahiyat E (2020) Law and software agents: are they “Agents” by the way? Artif Intell Law. https://doi.org/10.1007/s10506-020-09265-1

  • Deputy Ombudsman of Finland (2019) Decisions EOAK 3379/2018 (20 November 2019), EOAK 2898/2018 (25 November 2019), and EOAK 2216/2018 (20 November 2019). www.eduskunta.fi/FI/valiokunnat/perustuslakivaliokunta/Sivut/default.aspx. Accessed 13 Feb 2020

  • Doshi-Velez F, Kortz M, Budish R, Bavitz C, Gershman S, O’Brien D, Schieber S, Waldo J, Weinberger D, Wood A (2017) Accountability of AI under the law: the role of explanation, pp 1–15. arXiv:1711.01134v2 [cs.AI] 21 Nov 2017

  • European Commission (2020) A European strategy for data-communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Brussels, 19.2.2020, COM(2020) 66 final

  • European Commission (2020) Report on the safety and liability implications of artificial intelligence, the internet of things and robotics—report from the Commission to the European Parliament, the Council and the European Economic and Social Committee. Brussels, 19.2.2020. COM(2020) 64 final

  • European Commission (2020) Shaping Europe’s digital future—communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Brussels, 19.2.2020. COM(2020) 67 final

  • European Commission (2020) White paper on artificial intelligence—a European approach to excellence and trust. Brussels, 19.2.2020, COM(2020) 65 final

  • Geslevich Packin N, Lev-Aretz Y (2018) Learning algorithms and discrimination. In: Barfield W, Pagallo U (eds) Research handbook on the law of artificial intelligence. Edward Elgar, Cheltenham, pp 88–113

  • Hope J (2019) Can a robot be an arbitrator? In: Calissendorff A, Schöldström P (eds) Stockholm arbitration yearbook 2019. Kluwer Law, Alphen aan den Rijn, pp 104–111

  • Koulu R, Mäihäniemi B, Kyyrönen V, Hakkarainen J, Markkanen K (2019) Algoritmi päätöksentekijänä? Tekoälyn hyödyntämisen mahdollisuudet ja haasteet kansallisessa sääntely-ympäristössä. Valtioneuvoston selvitys- ja tutkimustoiminnan julkaisusarja 2019:44. Valtioneuvoston kanslia, Helsinki

  • Malgieri G (2019) Automated decision-making in the EU Member States: the right to explanation and other “suitable safeguards” in the national legislations. Comput Law Secur Rev 35:1–26

  • Motzfeldt HM, Taheri Abkenar A (2019) Digital forvaltning – udvikling af sagsbehandlende løsninger. DJØF Forlag, København

  • Pääkkönen J, Nelimarkka M, Haapoja J, Lampinen A (2020) Bureaucracy as a lens for analyzing and designing algorithmic systems. CHI’20, April 25–30, 2020, Honolulu, HI, USA (accepted for presentation)

  • Ranerup A, Zinner Henriksen H (2019) Value positions viewed through the lens of automated decision-making: the case of social services. Gov Inf Q 36:1–13

  • Smith ML, Noorman ME, Martin AK (2010) Automating the public sector and organizing accountabilities. Commun Assoc Inf Syst 26(1):1–16

  • Social Insurance Institution of Finland, decision of 15 May 2019 in the case of R.L.

  • Suksi M (2017) On the openness of the digital society: from religion via language to algorithm as the basis for the exercise of public powers. In: Lind AS, Reichel J, Österdahl I (eds) Transparency in the future—Swedish openness 250 years. Ragulka Press, Tallinn, pp 285–317

  • Suksi M (2018a) Automatiserat beslutsfattande enligt den svenska förvaltningslagen. Tidskrift, utgiven av Juridiska Föreningen i Finland 154:463–472

  • Suksi M (2018b) Förvaltningsbeslut genom automatiserat beslutsfattande – statsförfattnings- och förvaltningsrättsliga frågor i en digitaliserad myndighetsmiljö. Tidskrift, utgiven av Juridiska Föreningen i Finland 154:329–371

  • van Doesburg R, van Engers T (2019) Explicit interpretation of the Dutch Aliens Act—specifications for decision support systems and administrative practice. In: Proceedings of the workshop on artificial intelligence and the administrative state (AIAS 2019), June 17, 2019, Montreal, pp 1–10

  • Wiese Schartum D (2018) From facts to decision-data: About the factual basis of automated individual decisions. In: Wahlgren P (ed) 50 Years of Law and IT. The Swedish Law and informatics research institute 1968–2018. Scandinavian Studies in Law, Volume 65. Stockholm Institute for Scandinavian Law, Stockholm, pp 379–400

  • Wirtz BW, Weyerer JC, Geyer G (2019) Artificial intelligence and the public sector—applications and challenges. Int J Public Admin 42(7):596–615

Acknowledgements

Open access funding provided by Abo Akademi University (ABO). The author is grateful to the participants in the seminar “Opening the Black Box” (Åbo Akademi University, 28 August 2019) for excellent comments, to Dr. Anna Barlow for comments and linguistic corrections and to two anonymous reviewers for learned suggestions.

Funding

Not applicable.

Author information

Corresponding author

Correspondence to Markku Suksi.

Ethics declarations

Conflicts of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Suksi, M. Administrative due process when using automated decision-making in public administration: some notes from a Finnish perspective. Artif Intell Law 29, 87–110 (2021). https://doi.org/10.1007/s10506-020-09269-x

Keywords

  • Administrative due process
  • Legal safeguards
  • Administration
  • Decision-making
  • Constitution
  • Software design