
Technology, institutions and regulation: towards a normative theory

  • OPEN FORUM
  • AI & SOCIETY

Abstract

Technology regulation is one of the most important public policy issues currently facing society and governments, and greater clarity could improve decision making in this complex and challenging area. Since the rise of the internet in the late 1990s, a number of approaches to technology regulation have been proposed, prompted by the changes in society, business and law that this development brought with it. However, over the past decade, the impact of technology has been profound and the associated issues for government have been extremely challenging, ranging across cyber security, artificial intelligence, and many other areas. In response, this article introduces a Theory of Institutional Technology Actors and Norms (TITAN), a normatively informed and institutionally based account of technology regulation. It focuses on the moral and legal (including regulatory) rights and responsibilities of the relevant actors and seeks to inform the development of regulation that is fit for purpose, rights-compliant and fair for all concerned. The account incorporates the perspectives of four key groups in society: producers of technology, users of technology, government regulators, and normative policy shapers.


Data availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Notes

  1. A non-rival good is one that can be consumed or used by multiple persons at once; a non-excludable good is one from whose consumption or other use it is costly or impossible to exclude others.

  2. As mentioned above, many goods that might be thought of as collective goods in some sense are not produced, e.g. the atmosphere. Moreover, a good can be a collective good in our sense without being non-rival or non-excludable, i.e. without being a public good in the economist’s sense, since collective goods in our sense can be aggregates of goods, such as, for instance, the supply of housing in a city.

  3. There is some simplification here for the sake of clarity: what is said is not strictly correct, at least in the case of many actions performed by members of organizations. Rather, typically some threshold set of actions is necessary to achieve the end; moreover, the boundaries of this set are vague.



Author information


Corresponding author

Correspondence to Marcus Smith.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Smith, M., Miller, S. Technology, institutions and regulation: towards a normative theory. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01803-0
