1 Introduction

Cities are increasingly embracing data-driven infrastructures and algorithmic decision-making to improve urban planning and operational efficiency as well as mobility, sustainability, and safety for their residents. Much of the current discussion around the use of big data, computational systems, and artificial intelligence (AI) in urban spheres centers around so-called “smart cities” and intelligent infrastructures (Aurigi and DeCindio 2008; Foth 2018; Hollands 2008; Kitchin 2019); these often implicitly assume benign intentions from the public and private actors involved, even though the ethical implications of data-driven urbanism for society are crucial (Kitchin 2016).

Our cities today confront many urban contestations and intersecting crises, for example the ongoing challenges of (1) inequity, affordable housing, and inclusive employment for poor, marginalized residents and migrant communities, (2) unprecedented adverse health and economic effects of climate change and global pandemics such as COVID-19, and (3) systemic discrimination and violence against migrants, people identifying as BIPOC (Black, Indigenous, and People of Color), LGBTQ+ and other marginalized groups due to structural racism and homophobia. The effects of these crises are in many ways ameliorated, amplified, or mediated through the use of technologies (Sawhney 2019), algorithmic infrastructures, and discriminatory policies enacted in urban spaces, often affecting the most marginalized segments of the population in far more severe ways.

As algorithmic and data-centric infrastructures become more prevalent in our urban environments and affect our lived experiences in cities, we must critically question their social, political, and ethical implications, particularly for the most vulnerable. I examine these concerns using an expanded notion of ‘urban mobility’ and anchor my arguments in a rights-based discourse to reveal emerging risks and responsibilities. Urban mobility is not just about getting from one place to another; it means being able to access health, education, culture, employment, and leisure using safe, environmentally friendly, and affordable transportation. In cities and elsewhere, mobility is one of the fundamental means of participating in society. I argue that urban mobility extends beyond accessible transportation to all forms of movement, equitable access, and the liberty to assemble, protest, and engage fully in a city’s urban fabric, without surveillance, coercion, or restrictions on civic and human rights.

When new technological infrastructures are introduced in urban contexts (e.g., smart tolls or surveillance systems), they are usually imposed on society either by state/municipal governments or by market-driven forces; initiatives to democratize innovation often seek consensus through participatory processes among relevant actors, while generally ignoring or silencing dissenting voices, particularly among marginalized groups in the city. In a pluralistic democracy, constituted by diverse stakeholders, there needs to be room for differing views, disagreements, and “conflictual consensus” to emerge as real alternatives to imposed dispositions, forced choices, and tokenistic participation. Chantal Mouffe (1999, 2013) proposes the notion of agonistic pluralism and elevates contestation (the act of arguing or disagreeing) as a political alternative to the pursuit of consensus; this serves to confront the multiplicity of voices and complexity of power structures embedded in a pluralistic society (Sawhney 2020). Contestations in urban mobility are emerging situations of dissent and the deliberation of alternative views, which counter the lived experiences and societal implications of the increasing datafication and algorithmic decision-making involved in smart-city infrastructures constituted as forms of Urban AI.

First, the article discusses the many kinds of contestations emerging around the role of equity, inclusion, and non-discrimination in Urban AI, including: (1) provision of urban mobility services in city neighborhoods, (2) digital contact tracing systems deployed during a pandemic, and (3) the use of technologies for urban policing and public surveillance. Second, these illustrative scenarios reveal the many challenging ethical implications, potential counter-actions by civil society, and speculative futures for Urban AI in general. Third, the article argues that these issues must be critically examined through a framework of rights, risks, and responsibilities for all stakeholders (citizens and non-citizens, human and non-human), providers of technologies/services, and state actors involved in wider Urban AI ecosystems. To this end, we consider the legal/policy and technology implications of proposed regulations on AI systems being deliberated by the European Commission (EC AI Act 2021) for mediating rights, risks, and responsibilities in the context of Urban AI. That section discusses how this ‘AI Act’ defines what may constitute an AI system, assesses risks and the challenges of making them accountable and auditable, and promotes responsible transparency and governance practices. We examine what implications the AI Act would have in the case of an automated parking control system introduced by the City of Amsterdam, which uses AI (with some human assistance) to validate car parking permits. The example shows how such Urban AI systems can address transparency and accountability, while highlighting potential risks for privacy, wrongful identification, and inadvertent profiling of residents or visitors in the city. This case study, along with the other scenarios of urban contestations discussed in the article, allows us to consider the societal implications of the proposed EC regulations on aspects of equity, inclusion, and non-discrimination in Urban AI systems. Finally, the article concludes by expanding these concerns beyond the city, to sustaining urban ecosystems and learning from protocols practiced by indigenous peoples living in native territories, such as in North America or New Zealand, for responsible stewardship and oversight of Urban AI technologies in a cooperative manner.

2 Urban mobility as a right to the city

Cities today are dynamic urban ecosystems with evolving physical, social, and technological infrastructures facilitating, regulating, and often constraining the free movement of their inhabitants in crucial ways; how urban mobility is managed can both sustain and transform a city’s socio-economic and cultural capital.

Individuals residing or working in these metropolitan contexts increasingly rely on acquired movement sensibilities and accessible choices for urban mobility to regulate and enrich their own livelihoods and quality of life, including access to resources and services, everyday safety, exposure to pollution, and civic agency. This is particularly crucial for marginalized and vulnerable populations including children, the elderly, people with disabilities, and lower-income communities, as well as women, ethnic minorities, and migrants who are often at greater social risk in particular locales or times of day.

Walking, cycling, commuting, ride-sharing, and other forms of urban (micro-)mobility can make cities liveable and help them thrive culturally and economically. However, it can be argued that unconstrained, unmanipulated, and affordable mobility, what one may characterize as “free movement” (not unlike the political ethos of Free Software), is crucial to recognize within the notion of the “right to the city.” The concept was first proposed by Henri Lefebvre in his book Le Droit à la ville (1968), to reclaim the city as a co-created space and mediate its socio-economic and spatial inequities. Lefebvre bemoaned the effects of capitalism in commodifying urban life, shared governance, and social interactions in the city (Lefebvre 1996; Purcell 2002). David Harvey (2008) has since argued that the right to the city is far more than an individual liberty to access urban resources; it is a means for citizens to exercise collective agency in transforming urban space and the processes of urbanization. Harvey suggests that “the freedom to make and remake our cities and ourselves is one of the most precious yet most neglected of our human rights” (Harvey 2008, 23).

While many social movements in Europe and Latin America have taken up this concept of the right to the city for social justice struggles, they have primarily focused on squatter rights, housing equity, and inclusive use of public spaces. Extending this notion to free movement in the city provides a powerful argument for broader notions of spatial justice, mobility, and renewed access to urban life. Framed within a “right to the city,” it can be forcefully leveraged to advocate for fair and affordable access to transportation amenities and mobility alternatives. However, this presumes that citizens have the means (and incentive) to access meaningful information about evolving transportation infrastructure and mobility services, operating scope and costs, and actual patterns of provision and usage across the city. Access to such information, data, and the algorithmic policies that underlie urban mobility services in a city could be constituted as part of a citizen’s “Digital Rights to the City” (Foth et al. 2015; Shaw and Graham 2017; Anastasiu 2019; Cardullo et al. 2019; Heitlinger et al. 2019; Shingne 2020; Walker et al. 2020). How should such digital rights be recognized by cities as forms of public good and leveraged by citizens for civic advocacy? We consider this in the context of participatory approaches to designing urban mobility services in the following section.

3 Contestations in urban mobility services and city planning

Many contestations emerge as municipal governments, private entities, and citizen groups increasingly undertake or cooperatively participate in transforming the urban, cultural, and digital fabric of neighborhoods in the city. How can we improve everyday urban mobilities through technologies, mobility data, and urban policies that become more accessible and usable for citizens? In this section, we begin by examining these questions through the lens of urban mobility services and city planning, highlighting the nature of rights and risks for transparency and equitable access. Through participatory design workshops conducted with diverse stakeholders, data scientists, and urban practitioners, we consider the challenges for engaging citizens in making sense of open-access urban data and the civic implications for urban mobility in the city.

3.1 Mediating digital rights to the city through open data policies and platforms

Municipal governments have been slow to adopt digital infrastructure and make their data accessible to citizens; however, some cities have begun to develop open data initiatives to recast urban data as a public asset in the spirit of transparency and accountability. This is often done to incentivize companies to develop technology-based services leveraging urban mobility data. On the other hand, municipal authorities have struggled to get private urban service providers (like Airbnb and Uber) to share data openly with them; a few cities have begun to create their own data standards or “mobility data specifications” as a way for city governments and companies to share knowledge. For example, the Los Angeles Department of Transportation (LADOT) has been developing data standards for “new mobility” options allowing two-way sharing of vehicle mobility in city streets (Marshall 2018). Cities like Brisbane in Australia have made their urban datasets openly available to all citizens, while others have created digital portals like the Dublin dashboard in Ireland and CivicDashboards in the U.S. While many municipal governments around the world do engage stakeholders in various ways, such efforts are often undertaken superficially to appeal to the public and mitigate community backlash from the adoption of new urban technology services. Much remains to be done to improve open data policies through initiatives like MyData.org, a human-centric declaration for fair and trustworthy data-sharing, being adopted by dozens of cities around the world.

Several technology companies have been trying to address gaps in IT capacity among municipal agencies by developing software platforms for cities to manage and share urban mobility data, such as Remix for New Mobility, offered by San Francisco-based Remix, and Mobility Manager, offered by Populus. The non-profit initiative SharedStreets, emerging from the World Bank’s Open Transport Partnership, seeks to support public–private collaborations around transport, combining technology, policy, and governance standards to help solve issues like street safety, curb use, and congestion. While these platforms have laudable aspirations, they are not designed to directly engage citizens in these urban policy questions; in some cases, selective data from these systems can be exposed through APIs (application programming interfaces) created for specific purposes like access to traffic or bike-sharing data, but these still require a great deal of technical proficiency to use, as well as interoperability standards (Robinson et al. 2012) for integrating with other digital urban infrastructures. Hence, such urban digital rights are not easily obtained or usable by citizens themselves, let alone able to serve as tools for civic advocacy, co-design, or urban transformation, as Lefebvre and Harvey would insist.
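
To make the proficiency barrier concrete, the sketch below reads a public bike-share feed in the GBFS (General Bikeshare Feed Specification) format that many systems expose. It is a minimal illustration, assuming the CitiBike discovery URL and standard GBFS field names; even this “simple” query presumes familiarity with HTTP, JSON, and the feed’s schema.

```python
# Minimal sketch: reading a public GBFS feed. The discovery URL below is an
# assumption based on CitiBike's publicly documented feed and may change.
import requests

DISCOVERY_URL = "https://gbfs.citibikenyc.com/gbfs/gbfs.json"  # assumed endpoint

# 1. Auto-discovery: find the station_status feed among the published feeds.
feeds = requests.get(DISCOVERY_URL, timeout=10).json()["data"]["en"]["feeds"]
status_url = next(f["url"] for f in feeds if f["name"] == "station_status")

# 2. Fetch live station data and aggregate availability.
stations = requests.get(status_url, timeout=10).json()["data"]["stations"]
total_bikes = sum(s.get("num_bikes_available", 0) for s in stations)
empty_stations = sum(1 for s in stations if s.get("num_bikes_available", 0) == 0)

print(f"{len(stations)} stations, {total_bikes} bikes available, "
      f"{empty_stations} stations empty")
```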

3.2 Risks of urban mobility services for discriminatory access and safety

Urban-tech start-ups such as Stae, based in New York City, have been working closely with cities to create uniform APIs, user interfaces, and open data policies to help make such data more accessible and usable both for government agencies and citizen groups. Another NYC-based firm, Coord, recently examined the nature of micro-mobility services in Washington, DC, by analyzing data it gathered from different companies (like Bird, Lime, Lyft, Skip, and Spin) that provided electric scooters and dockless bikes for sharing across neighborhoods in DC. Such urban micro-mobility services combine data gathered about city neighborhoods and mobility patterns with algorithmic decision-making to optimize the pricing, allocation, and availability of their services across the city. In its analysis, Coord found that while scooters were deployed in the densest and most affluent areas of the city, including Downtown DC, Georgetown, and Dupont Circle, none were available in lower-income neighborhoods on the east side of the Anacostia River (Lazo 2018). This underscores how most urban mobility services, including ride-hailing and bike-sharing, often bypass a city’s poorest neighborhoods, leaving those communities underserved until there is greater awareness, advocacy, and intervention by citizens and city officials demanding fair and equitable access.
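
A minimal sketch of the kind of equity analysis Coord performed might look as follows; the filenames and column names are hypothetical, and the actual analysis was surely more involved.

```python
# Illustrative equity audit: join (hypothetical) scooter deployment counts to
# neighborhood demographics and compare service levels across income quartiles.
import pandas as pd

deployments = pd.read_csv("scooter_deployments.csv")  # columns: neighborhood, scooters
demographics = pd.read_csv("neighborhoods.csv")       # columns: neighborhood, median_income, population

df = demographics.merge(deployments, on="neighborhood", how="left").fillna({"scooters": 0})
df["scooters_per_1k"] = 1000 * df["scooters"] / df["population"]

# Average service level in the poorest vs. richest quartiles of neighborhoods.
df["income_quartile"] = pd.qcut(df["median_income"], 4,
                                labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"])
print(df.groupby("income_quartile")["scooters_per_1k"].mean())

# Neighborhoods with no service at all -- the pattern found east of the Anacostia.
print(df.loc[df["scooters"] == 0, "neighborhood"].tolist())
```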

These forms of Urban AI for the provision of mobility services pose many risks regarding bias, inclusion, fairness, transparency, and trust that cities must reconcile and hold service providers accountable for. Hence, with digital rights come responsibilities: ensuring that the public assets comprising citizens’ urban mobility data are adequately protected both by private companies and by the cities that serve as their caretakers. There are currently many fragmented policies emerging among cities regarding data capture, ownership, access, and dissemination, with little or no oversight from citizen rights groups, or even despite their active opposition (Mann et al. 2020). There is a need for standardized regulations regarding the use of Urban AI (as we discuss in Sect. 5) to safeguard privacy and protect against government/corporate surveillance and commercial exploitation. There are limited means to allow individuals to control the confidentiality of their own urban mobility data; lapses in urban mobility and mobile phone location data inadvertently allow re-identification of people’s whereabouts without their explicit knowledge (Gayomali 2014; Yin et al. 2015), while more recent examples of surveillance on Strava, a social network for athletes (Couture 2021), demonstrate how digital privacy is easily compromised. More alarmingly, there is an immense risk that unregulated access to such data can be used by governments and companies to deny services, discriminate, and in effect manipulate the free movement of citizens in their very own cities.

Recently the Swedish electric scooter company Voi announced plans to automatically reduce the speeds of its scooters in areas of Helsinki that have large concentrations of restaurants and bars, to alleviate the rise in nightly accidents regularly reported by local hospitals (YLE 2021). The speeds would be dropped from 25 km/h to 15 km/h, enforced between 11 pm and 6 am on weekends. In the south-western Finnish city of Turku, reduced speeds are being tested on streets favored by pedestrians, while in Helsinki they are also based on the time of day. The trial, the first of its kind in Europe, demonstrates how algorithmic decision-making can be used to dynamically change urban mobility provisions in a city to reduce risks introduced by urban mobility technologies. Such forms of ‘geo-fencing’ have already been introduced by companies like Lime for use with scooters in Australian cities like Brisbane, for instance. Increasingly, AI could be used to automatically reduce or increase the speeds and availability of scooters in city neighborhoods, without adequate notification or consultation with residents or municipal authorities. While the experiment in Helsinki showcases the proactive initiative of a private provider (albeit to protect its own market viability), it also highlights the case for engaging public actors and city residents in consultations for cooperative analysis, design, and assessment of such urban policies to ensure principles of safety, accessibility, and free movement.
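
A geofenced speed policy of this kind reduces to a simple rule: check whether a scooter is inside a designated zone and whether the current time falls in the restricted window. The sketch below illustrates the logic under stated assumptions (toy zone coordinates, a simplified weekend-night definition); it is not Voi’s implementation.

```python
from datetime import datetime

DEFAULT_LIMIT_KMH = 25
NIGHT_LIMIT_KMH = 15

# One hypothetical rectangular nightlife zone (toy coordinates near central Helsinki).
NIGHTLIFE_ZONES = [(60.165, 60.172, 24.930, 24.950)]  # (lat_min, lat_max, lon_min, lon_max)

def in_nightlife_zone(lat: float, lon: float) -> bool:
    return any(la0 <= lat <= la1 and lo0 <= lon <= lo1
               for la0, la1, lo0, lo1 in NIGHTLIFE_ZONES)

def max_speed_kmh(lat: float, lon: float, now: datetime) -> int:
    # Weekend nights: Fri/Sat from 23:00, spilling into Sat/Sun until 06:00.
    late = now.weekday() in (4, 5) and now.hour >= 23
    early = now.weekday() in (5, 6) and now.hour < 6
    if in_nightlife_zone(lat, lon) and (late or early):
        return NIGHT_LIMIT_KMH
    return DEFAULT_LIMIT_KMH

print(max_speed_kmh(60.168, 24.94, datetime(2021, 7, 2, 23, 30)))  # Friday night -> 15
print(max_speed_kmh(60.168, 24.94, datetime(2021, 7, 5, 12, 0)))   # Monday noon -> 25
```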

3.3 Making sense of urban mobility services and city planning with stakeholders

There is a crucial need to recognize the contestations in such policy concerns and issues regarding urban mobility data in dialogue with a diverse set of stakeholders in the city. In 2018–2019, I co-facilitated design workshops hosted at The New School to examine contestations in urban mobility data in partnership with the urban-tech start-up Stae (Sawhney 2020). Stae had previously created backend solutions to generate uniform APIs to ingest open datasets on ridesharing in the city. Data scientists and urban practitioners in Stae’s team had worked with data activists and municipalities to develop visualizations and findings pertinent to them. Through this work Stae had been highlighting civic issues in New York City such as the usage and obstruction of shared bike lanes in certain neighborhoods.

For the workshop we invited over a dozen urban practitioners, data scientists, and policy advocates to examine focused case studies, mobility data, and patterns in neighborhoods of Manhattan. Participants undertook role-play activities, taking differing perspectives as job creators, safety advocates, placemakers, and technology disrupters, to unpack the conflicting objectives and demands inevitably emerging for different stakeholders in the city. Participants then worked in teams using a digital sandbox to examine the contestations in accessing and analyzing urban mobility data, and their implications for civic action. These included how ride-sharing data is used by companies for planning docking stations in neighborhoods, which may cause conflicts with residents opposed to them, or with others who feel left behind as their areas are deemed less lucrative for such private mobility infrastructure. Participants gained an overview of civic data, how it can be used, and the new ways in which organizations like Stae engage multiple stakeholders for awareness and informed decision-making.

Participants considered privacy concerns, questions around whom this data serves, and how the data is being captured. We then transitioned to using Stae’s datasets and exploratory user interfaces to surface contestations in civic data. The initial exercise included conducting a brief “data scavenger hunt” of publicly accessible ride-sharing data in NYC neighborhoods, to familiarize participants with the tool, while revealing the limitations of leveraging such datasets for policy action.

In the workshop, participants without any background in data science found it challenging to locate and browse the local bike-sharing trip data (provided by CitiBike); many found it difficult to handle the amount of data being streamed through the platform on their laptops. Some found the platform’s user interface unintuitive for explaining, contextualizing, and managing the large datasets. Participants struggled with the legibility and implications of the emerging data, including what certain variables meant, which ones to target for search, and the syntax in which to phrase their queries to generate visualizations. The user interface for accessing the data was clearly designed for people with the capacity and expertise for urban data analysis. This revealed the degree of expertise required for stakeholders to participate in meaningful data-centric decision-making for urban planning.
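
For contrast, the sketch below shows what the same exploration looks like for someone already fluent in data tooling, using a month of CitiBike’s public trip data; the filename and column names assume the current public CSV schema and may differ for older data dumps. The ease of the expert path underscores how steep the barrier is for everyone else.

```python
import pandas as pd

# Load one month of public trip data (filename pattern is an assumption).
trips = pd.read_csv("202106-citibike-tripdata.csv",
                    parse_dates=["started_at", "ended_at"])

trips["duration_min"] = (trips["ended_at"] - trips["started_at"]).dt.total_seconds() / 60

print(trips["start_station_name"].value_counts().head(10))      # busiest origins
print(trips.groupby("member_casual")["duration_min"].median())  # members vs. casual riders
```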

First, large datasets like these expose the power imbalance between citizen access to data and institutional resources. Second, the data itself required expertise, not necessarily in data science but in what certain variables, syntax, and values meant. While Stae was often able to mediate this process by offering data services and visualizations to municipal entities and citizen groups, engaging them in participatory research and action is far more challenging, as we learned in the workshop. Some participants, for example, examined the contradictory goals and values of supporting cycling lanes and ride sharing in traditional neighborhoods like Chinatown, which increasingly face gentrification and conflicts due to loading trucks blocking such lanes. Examining the many layers of mobility infrastructures (e.g., for delivery trucks vs. cycling) and actual data usage patterns highlighted the inherent agonistic spaces in participatory urban design and planning processes, where all stakeholders' interests and values must be better addressed.

Despite these challenges, the workshop still allowed participants and stakeholders with diverse interests to examine, interrogate, and contest urban issues in the city. The outcomes and critical insights emerging from such workshops can be used to help co-design alternative tools, cooperative data platforms, and speculative policies. They can also inform ongoing practices and policies for data access and develop the capacity among citizen groups to engage and challenge how municipal governments and tech companies influence urban mobilities in the city. In many other situations, Urban AI technologies are rolled out city-wide to address concerns of public safety by authorities and providers, without adequately consulting diverse stakeholders to ameliorate the risks and ensure accountability. In the following sections, we examine urban contestations with surveillance technologies used for public health monitoring in pandemics and discriminatory policing of marginalized communities and civic protests.

4 Urban contestations for mobility, privacy, policing and protest

In this section we begin by examining some key rights afforded to individuals in democratic societies that have implications for new technologies introduced in urban contexts. We will do so by highlighting contestations for the rights of citizens in the ways that mobility restrictions, privacy, surveillance, policing, and protest are mediated by urban technologies.

4.1 Right to information, non-discrimination, and privacy

Access to information empowers citizens by informing them of their rights to voting, education, basic healthcare, and government services provided by the city. The right to information is considered vital for transparency, reducing corruption, and holding governments accountable. Nearly 120 countries have laws enabling it, though in practice such information is often not easily available to all citizens without legal advocacy and investigative journalism, which amplifies inequities in society.

The notion of equity is also tied to discrimination, a multi-faceted social phenomenon that cuts across all public, private, and socio-cultural spheres of society. While explicit forms of racism, as manifested in violence against Blacks and historically marginalized groups, are more widely reported in the media today, many discriminatory practices are implicitly embedded in the systems created and used by the state, private companies, and civil society on a daily basis. These serve to further disenfranchise marginalized individuals and communities in areas of health, education, housing, employment, and political participation, among other facets of civic life. The Universal Declaration of Human Rights, adopted with the founding of the United Nations in 1948, and many subsequent international conventions have declared the right to equality and non-discrimination for all people without distinction as to race, sex, language, or religion; these principles were later expanded to specifically combat discrimination against women, indigenous peoples, and people with disabilities among others, as well as discrimination based on sexual orientation and gender identity.

The right to privacy allows for selectively revealing oneself to the world and is considered a fundamental human right. While the concept varies according to culture and context, it often enshrines protection of one’s personal and confidential data, as well as one’s locations, movements, communication exchanges, and transactions. In the digital realm, the right to privacy is supported by most European Union (EU) countries through compliance with the General Data Protection Regulation (GDPR). MyData initiatives embraced by many cities offer a policy declaration to “empower individuals by improving their right to self-determination regarding their personal data”, as a human-centric paradigm for data sharing.

4.2 Inequity in information access and mobility restrictions in a pandemic

During the COVID-19 pandemic many governments have provided clear and timely updates on the virus’ epidemiological spread and public healthcare guidelines along with restrictions for physical distancing and urban mobility, in effect reducing the rate of infections. Other countries and state agencies have intentionally hidden or obfuscated such public health statistics while adopting less restrictive mobility measures, endangering the lives of their citizens. In addition to widely available testing and well-prepared healthcare infrastructures, the right to information and data-driven epidemiological analysis has been paramount to how some cities and countries successfully responded to the pandemic. As new waves of infections arise, public health agencies must continually build on such experiences while using statistical data analysis and AI/machine learning to improve modeling and prediction of epidemiological spread in urban contexts using data collected by hospitals and healthcare providers. But there are also opportunities to design tools and practices that better support open and collaborative dialogue among all stakeholders for greater transparency, accountability, and civic agency during and post-pandemic (Foth et al. 2021), especially to bridge government mistrust and the social and cultural divides (say, among vulnerable or migrant communities) that often impede equitable information exchange in a crisis. Recent research has found that mobility restrictions can induce a segregation effect, especially for neighborhoods and communities experiencing inequality based on income, class, gender, or migrant backgrounds (Bonaccorsi et al. 2020; Dobusch and Kreissl 2020), requiring fiscal measures to compensate but also dialogue with all stakeholders to mediate potential socio-economic consequences. Urban informatics can also make planning processes more democratic and participatory, especially for disadvantaged groups (Pan et al. 2020). Others have proposed practices for supporting citizen participation (Falanga 2020) and community activism (Mendes 2020), as well as transparent, agile, and participatory governance (Moon 2020) during the COVID-19 pandemic.

4.3 Implications of using surveillance technologies on civil liberties

To mitigate the spread of COVID-19 many governments have introduced a range of surveillance technologies, such as mobile applications, facial recognition, biometric wearables, crowd monitoring, and predictive analysis, that also create many implications for civil liberties, inequity, and privacy (Kitchin 2020). Digital contact tracing (Beaudouin-Lafon 2020; Berke and Larson 2021) has been widely adopted for identifying and isolating persons who may have been in contact with infected individuals. While contact tracing has traditionally been conducted by teams of healthcare workers, there has been a push to develop digitally enabled contact tracing on mobile phones using either location-based or proximity-based anonymous data (Crocker et al. 2020). Systems using GPS-based location tracking and centralized data storage are particularly susceptible to privacy concerns. Norway halted its contact tracing app and deleted all data collected from over 600,000 active users after the Norwegian Data Protection Authority raised concerns about the disproportionate threat to user privacy from capturing location-based data (Lomas 2020). Exposure Notifications, a decentralized proximity-based contact tracing framework for use with Bluetooth-based mobile phones, was created by Apple and Google (2020); it handles privacy by anonymizing personal identifiers and is being adopted by many public health authorities. However, there are still many lingering concerns about the privacy and security implications of contact tracing using mobile phones (Mann et al. 2021). It has also been argued that the growing intervention of global technology corporations in digital governance threatens state sovereignty in determining public health responses (Mann 2020). An alternative to using the Exposure Notifications framework on mobile phones proposes anonymous physical tokens with more accurate ultra-wideband technology (EIT Digital 2020), yet there are privacy risks in all device-centric technologies. Researchers have proposed improving privacy by collecting anonymous statistics and conducting epidemiological modeling to monitor the probability of infections over time (Honkela 2020).
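
To clarify why the decentralized, proximity-based approach is considered more privacy-preserving, the following is a heavily simplified illustration of its core logic; it is not the actual Exposure Notifications cryptography, and key sizes, rotation schedules, and matching are reduced to their bare essentials.

```python
# Simplified decentralized proximity tracing: phones broadcast short-lived
# pseudonymous identifiers; only diagnosed users' daily keys are published,
# and exposure matching happens locally on each device.
import hmac, hashlib, os

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day; it never leaves
    the device unless the user tests positive."""
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier (rotated, e.g., every 10 min)."""
    return hmac.new(day_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Phone A broadcasts rolling IDs over Bluetooth; Phone B stores what it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(100, 104)}  # four 10-min windows

# If A's user tests positive, only A's daily key is published. B re-derives
# candidate IDs locally and checks for overlap -- no central location database.
published_keys = [key_a]
exposed = any(rolling_id(k, i) in heard_by_b
              for k in published_keys
              for i in range(0, 144))  # all 10-min intervals in a day
print("Exposure detected:", exposed)
```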

During the pandemic the South Korean government combined surveillance camera footage, smartphone location data, and credit card purchase records to track positive cases and their contacts (Singer & Sang-Hun 2020). Some researchers have proposed privacy-preserving mechanisms for surveillance in public spaces to analyze crowd behavior and physical distancing measures (Gencoglu 2020). While these would rely on people’s spatial and movement patterns instead of facial recognition, they may still violate social norms and civil liberties in many democratic societies.

Algorithmic decision-making systems today are rife with both explicit and implicit biases that entrench such discrimination in the civic and urban spheres of people’s lives. Cathy O’Neil (2016) critiques the widely held assumption that big data reduces or eliminates human bias and subjectivity, arguing that predictive models are simply “opinions embedded in math”. Safiya Noble (2018) examines the ways in which algorithms often perpetuate data discrimination while worsening inequality and injustice. MIT researcher Joy Buolamwini has investigated how facial recognition algorithms carry deeply flawed gender and skin-based biases, misclassifying darker-skinned women over a third of the time in the Gender Shades project (Buolamwini & Gebru 2018). With such glaring racial and gender discrepancies, decision-making systems relying on such flawed algorithms for surveillance, identification, or policing would misclassify many marginalized people as criminals, leading to racial profiling. Researchers at UCLA found that Amazon’s commercially available facial recognition software, Rekognition, incorrectly matched dozens of students and faculty to actual criminals, the vast majority of them being People of Color (Jones 2020). A similar test conducted by the American Civil Liberties Union (ACLU) with members of the U.S. Congress also wrongly associated many of them with criminality, the overwhelming majority of the false positives being those of Black and Latino legislators (Snow 2018). While some would argue that more comprehensive training data would address such biases, I believe the very act of designing AI infrastructures of power and control, embedded in our urban realm and everyday life through public and private surveillance or provision of services, continually perpetuates discriminatory practices and inequity. The global outcry and widespread Black Lives Matter (BLM) protests since late May 2020, following the killing of George Floyd and ongoing violence against Blacks and People of Color by the police, have brought greater scrutiny to the use of facial recognition and racial profiling by law enforcement agencies. Since then, IBM has decided to stop offering general purpose facial recognition or analysis software (Meyer 2020). In March 2020, Microsoft divested its stake in an Israeli company called AnyVision following controversy over facial recognition targeting Palestinians in the occupied West Bank (Dastin 2020).
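
The methodological point of audits like Gender Shades can be stated compactly: aggregate accuracy hides subgroup disparities, so error rates must be disaggregated. A minimal sketch with toy data follows; the column names and values are invented for illustration.

```python
# Disaggregated audit sketch: report error rates per demographic subgroup
# instead of a single aggregate accuracy figure.
import pandas as pd

results = pd.DataFrame({
    "gender":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "skin_type": ["darker", "darker", "lighter", "darker",
                  "lighter", "lighter", "darker", "darker"],
    "y_true":    [1, 0, 1, 1, 0, 1, 1, 0],   # ground-truth labels
    "y_pred":    [0, 1, 1, 1, 0, 1, 0, 0],   # classifier outputs
})

results["error"] = (results["y_true"] != results["y_pred"]).astype(int)
audit = results.groupby(["gender", "skin_type"])["error"].mean().rename("error_rate")
print(audit)                      # disparities invisible in the overall mean:
print(results["error"].mean())    # aggregate error rate
```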

4.4 Countering urban policing through civic action and protest

Racially biased policing has also led to increased scrutiny of AI-based programs for predictive policing, pioneered by the Los Angeles Police Department (LAPD). These algorithm-driven systems analyze crime data to find patterns predicting where in the city crimes are likely to be committed, in order to re-direct police resources. In 2011, the LAPD deployed a tool, PredPol, which it helped develop for location-based analysis of historical crime data (Moony and Baek 2020); however, critics have pointed out that such data is overwhelmingly biased towards communities of color whom the police have regularly stopped, detained, frisked, and arrested. The Stop LAPD Spying Coalition (2016) stated that “because historic crime data is biased through the practice of racialized enforcement of law, predictive policing will inherently reinforce and perpetuate this structural racism.” Analysis conducted by the AI Now Institute at NYU of predictive policing data across three U.S. cities showed that using it in jurisdictions with extensive histories of unlawful police practices elevates the risks that “dirty data” would lead to flawed or unlawful predictions, in turn further perpetuating criminal injustice for these communities (Richardson et al. 2019; Crawford et al. 2019).
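
The feedback dynamic critics describe can be shown with a toy simulation: two districts with identical true crime rates, but patrols allocated from biased historical records, and new records generated where police happen to be looking. The numbers below are invented; the point is the self-confirming loop.

```python
true_crime = {"district_a": 0.5, "district_b": 0.5}  # identical ground truth
patrols = {"district_a": 0.7, "district_b": 0.3}     # biased initial allocation

for year in range(5):
    # Recorded incidents reflect both true crime and where police are looking.
    recorded = {d: true_crime[d] * patrols[d] for d in patrols}
    total = sum(recorded.values())
    # "Predictive" reallocation proportional to recorded (not true) crime.
    patrols = {d: recorded[d] / total for d in patrols}
    print(year, {d: round(p, 2) for d, p in patrols.items()})

# Output: the 70/30 split persists every year. The recorded data "confirms"
# the initial bias, so the equal underlying crime rates never surface, and the
# over-policed district keeps generating the records that justify its policing.
```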

The recent protests have not only highlighted these concerns on the national and global stage but have also shown how police have violently targeted protesters themselves, disproportionately Black and People of Color. World-wide protest movements have continued exercising their right to free assembly despite the imposition of curfews, violence, and police surveillance to identify and target protestors for arrest. Networks of surveillance cameras using AI-enhanced facial identification of protestors in public spaces, in China and the U.S. alike, have been turned into technologies for countering protests and suppressing dissent. Mobile video in the hands of citizens and protestors has, in turn, offered testimonial evidence circulated widely in the media to hold law enforcement agencies accountable for their actions. However, this alone is insufficient for justice, as historically such video testimony has rarely led to police convictions (Zuckerman 2020); it must be backed by stronger laws for oversight and reform of law enforcement, as currently being debated.

Establishing open information access, robust privacy policies, accountability, and good governance is fundamental to developing more trusted, secure, and flexible Urban AI ecosystems that preserve civil liberties while enabling novel ways to securely share information with algorithmic data infrastructures. In the next section, we consider a potential regulatory framework to mediate rights, risks, and responsibilities for AI systems used in society.

5 Rethinking Urban AI through rights, risks and responsibilities

Supporting open, vibrant, and pluralistic urban spaces in cities, free of oppressive and discriminatory practices towards all their residents, requires radically altering how we imagine the roles of law enforcement, state and private actors, and city residents, and how they can participate in multi-faceted aspects of urban civic life. Technologies and digital infrastructures devised to support these complex urban ecologies must also share emerging values, principles, and cooperative sensibilities that honor the rights and responsibilities of all stakeholders. The scenarios of urban contestations discussed in this article thus far highlight the many inter-related challenges, ethical implications, and opportunities for critically rethinking the role of algorithmic infrastructures, big data, novel technologies, and inclusive policies that collectively constitute Urban AI ecosystems.

5.1 European commission proposal for regulating artificial intelligence systems

One way forward is to examine the European Commission’s proposed regulations on AI systems and their potential implications for Urban AI ecosystems. These include defining what may constitute an AI system, assessing risks, and addressing the challenges of making such systems accountable and auditable through responsible governance practices.

The new European Commission proposal for regulating artificial intelligence systems, the so-called Artificial Intelligence Act (EC AI Act 2021), was published on April 21, 2021. Its purpose is to lay down harmonized rules for the regulation of AI technologies developed, placed, and used in the European Union (EU) market. It is “based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them” (EC AI Act 2021). The proposal was prepared in response to expressed calls for legislative action to ensure a well-functioning internal market for AI systems where both the benefits and risks of AI are adequately addressed. Towards these objectives the proposal undertakes a proportionate, horizontal, and risk-based regulatory approach to AI, based on a robust and flexible legal framework. It claims that the regulations are “limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market” (EC AI Act 2021). While there is already a good deal of debate around the efficacy and practical feasibility of the proposal, consultations and deliberations with EU member states continue, with the aim of refining and implementing this framework as EU-wide regulation in the near future.

The proposed AI Act has several distinctive aspects that need to be carefully considered. It proposes a single, future-proof definition of AI, set out in a supplementary Annex (which can be subsequently revised). Some experts consider defining AI in this manner somewhat simplistic, incomplete, and too open-ended to encompass the breadth of AI methods and technologies (it may inevitably include other, non-AI software), which are themselves broad and ever-changing. Others believe it offers a means for governments to interpret the definition more widely and is a tactical means for making these regulations impactful.

The AI Act prohibits particularly harmful AI practices deemed to contravene EU values, while “specific restrictions and safeguards are designed to address certain uses of remote biometric identification systems for the purpose of law enforcement” (EC AI Act 2021). The proposal offers a well-devised risk methodology that defines “high-risk” AI systems as those posing “significant risks to the health and safety or fundamental rights of persons” (EC AI Act 2021). Such high-risk AI systems would need to comply with a set of horizontal mandatory requirements for trustworthy AI by following procedures to assess how well they conform before they can be introduced in the EU market. For other, low-risk AI systems, only very limited obligations for transparency are imposed. Proportionate obligations are also placed on providers and users of such AI systems to ensure safety and compliance with existing legislation throughout the whole AI system’s lifecycle.

The proposed regulations will be enforced “through a governance system at Member States level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.” (EC AI Act 2021) The AI Act also proposes additional measures to support innovation, through AI regulatory sandboxes, means for reducing regulatory burden among EU member states, and mechanisms to support small and medium-sized enterprises (SMEs) and technology-based start-ups.

5.2 Case study of Urban AI: Amsterdam parking control system

How would these proposed regulations in the AI Act affect the development, use, and introduction of Urban AI technologies in the EU marketplace? As an illustrative example, we examine a simple automated parking control system that was recently introduced by the City of Amsterdam.

In many European cities, including Amsterdam, only a limited number of cars are allowed to park in the city, to make urban areas more liveable and accessible, especially for pedestrians and cyclists. In Amsterdam, the municipality requires car owners to use approved parking permits or pay via a parking meter or mobile app, and levies parking fines when they have not. The city has begun enforcing such parking measures automatically using municipal “scan cars” equipped with video cameras, which process license plates and conduct background checks on the drivers using automated image scanning and an AI-based identification service. The City of Amsterdam currently uses this service for over 150,000 parking spaces on the city streets.

As part of the parking control service, the scan cars drive through Amsterdam using object recognition software to scan and identify the license plates of nearby cars they encounter. The license plate numbers are validated through a National Parking Register to ensure the cars are allowed to park in certain areas of the city. If no valid permit or payment can be determined for a parked car, the case is sent to a human inspector for further processing. In the final step, parking inspectors assess the scanned images to verify whether license plates were correctly recognized or whether cars were parked temporarily for special situations, like loading/unloading or waiting in front of traffic lights. Based on the remote assessment, the inspectors can decide whether to conduct an on-site visit to verify the situation before parking tickets are issued. Hence, the parking control system takes a hybrid approach, combining automated AI-based scanning and verification with the assistance of experienced human operators.
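
Schematically, the hybrid pipeline described above can be sketched as follows; the function names, confidence threshold, and registry interface are hypothetical placeholders, not the city’s actual implementation.

```python
# Hybrid human-in-the-loop enforcement sketch: automated scanning and registry
# checks, with unresolved cases escalated to a human inspector.
from dataclasses import dataclass

@dataclass
class Scan:
    plate: str          # OCR result from the scan car's camera
    confidence: float   # OCR confidence in [0, 1]
    location: str
    timestamp: str

def has_valid_permit_or_payment(plate: str) -> bool:
    """Placeholder for the National Parking Register lookup."""
    return plate in {"AB-123-C"}  # toy data

def process(scan: Scan) -> str:
    if scan.confidence < 0.9:
        return "human_review"        # uncertain OCR: never fine automatically
    if has_valid_permit_or_payment(scan.plate):
        return "cleared"
    # No permit/payment found: a human inspector reviews the images for
    # exceptions (loading/unloading, waiting at a light) before any fine.
    return "inspector_assessment"

print(process(Scan("AB-123-C", 0.97, "Jordaan", "2021-06-01T10:00")))  # cleared
print(process(Scan("XY-999-Z", 0.97, "Jordaan", "2021-06-01T10:05")))  # inspector_assessment
```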

For all its safeguards, such an Urban AI system may pose a range of risks to citizens, including privacy, wrongful identification, and inadvertent neighborhood profiling. The video cameras may not simply scan and identify license plates but also capture other kinds of visual features in the environment, including private details of people and their cars or homes, while driving through the city. In some cases, wrongfully identified license plates may incriminate drivers who have legitimate parking permits. Finally, the routes and locations where the scan cars are most readily deployed in the city may also put certain residents at proportionally higher rates of being fined, not unlike the concerns expressed regarding discriminatory practices around stop-and-frisk and predictive policing in certain so-called “high-risk” neighborhoods in U.S. cities. To mediate these concerns, the municipal authorities in the City of Amsterdam must clearly provide mechanisms for transparency, auditability, and accountability for such automated parking control systems to city residents. This would enable residents and visitors to the city to better understand the nature of the risks such systems may introduce, what rights they have for due recourse, and how the system handles its overall services and policies in a responsible and trustworthy manner.

To ensure greater transparency the City of Amsterdam has publicly documented (at least partially) the automated parking control system in an “Algorithm Register” established online with the assistance of a company called Saidot.ai. Saidot provides a platform that allows organizations to publish documentation of their AI systems through public AI registers. Saidot has been working with cities including Amsterdam and Helsinki as well as several private companies to support AI transparency and algorithmic accountability through their platform (Haataja, van de Fliert and Rautio 2020). The documentation of the parking system indicates several key provisions: (1) the car’s scanning software finds and isolates license plates only from the camera’s data stream of the street surroundings, (2) the data collected by the scan cars consist only of scanned images of license plates along with car location and timestamp, (3) the data are retained for 48 h for cases with paid parking fees, and for 13 weeks for cases with unpaid fees, and (4) the system claims that it does not process or use information in a discriminatory manner for car owners; the service works the same way for all license plates regardless of the car model, age, or the owner’s profile.
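
The stated retention provisions translate directly into executable policy, as in this minimal sketch (assuming the 48-hour and 13-week periods documented in the register; everything else is illustrative):

```python
from datetime import datetime, timedelta

def retention_deadline(scanned_at: datetime, fee_paid: bool) -> datetime:
    # 48 hours when the fee was paid, 13 weeks when unpaid (per the register).
    return scanned_at + (timedelta(hours=48) if fee_paid else timedelta(weeks=13))

def purge(records: list, now: datetime) -> list:
    """Keep only records still within their retention window; a real system
    would enforce this inside the data store itself."""
    return [r for r in records
            if retention_deadline(r["scanned_at"], r["fee_paid"]) > now]
```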

The City of Amsterdam considers this automated parking control service a low-risk AI system. Regarding risk management, the documentation indicates the system could sometimes fine car owners undeservingly if a character in the license plate is incorrectly recognized by both the algorithm and the inspector. To ameliorate this risk, the municipality allows people to make an appeal online within 6 weeks. Car owners are given an opportunity to see images of the license plate and a “situation photo, if available. Any bystanders, unrelated license plates and other privacy-sensitive information are made unrecognizable in those images.” (City of Amsterdam 2020) While this documentation makes the risks somewhat transparent, it also indicates that contextual imagery of the car’s surroundings is indeed being captured and stored by the system, in addition to the license plate information.

Automatic processing of license plates requires scrutiny of the image processing and storage algorithms for de-identification and privacy preservation, as well as policies to safeguard information that may reveal more than the system is purported to capture and use for the parking control service. Another set of risks emerges for the cases with unpaid fees, where a car’s location data is held for 13 weeks; without adequate safeguards in place, such geo-located data about city residents could be used as a form of surveillance of their activities, or for incrimination (if the data is requested for policing purposes). Here the residents’ right to privacy and right to information may allow for greater transparency and accountability of such systems; however, without consistent regulatory directives enforced across the municipality, nationally, and EU-wide, the responsibility for upholding such rights is not always enforceable. This situation thus requires that citizens (and non-citizen residents) confront municipal authorities through other legislative and civic actions, to address specific instances, or the potential, of bias, unfair discrimination, or flawed algorithmic outcomes in such Urban AI systems.

5.3 Implications of proposed EC regulations for Urban AI: rights, risks, and responsibilities

The European Commission proposal for regulating artificial intelligence systems offers provisions that safeguard and improve the trustworthiness of AI systems, using a proportional risk-based regulatory approach. There are several implications of these proposed regulations for Urban AI. We examine them using the automated parking control system introduced by the City of Amsterdam as an illustrative example, though they apply more broadly to Urban AI systems in general.

5.3.1 Defining what constitutes AI

The proposed regulations apply rules that cover the placing on the EU market, putting into service, and use of an AI system, rather than merely developing one for exploratory use in a research context. AI systems are defined in this legal framework “to be as technology neutral and future proof as possible”, recognizing the rapidly changing technological and market-related developments in AI. A working definition is specified in Annex I of the proposed regulations describing AI as follows: “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and (c) Statistical approaches, Bayesian estimation, search and optimization methods.” (EC AI Act 2021) This remains one of the most contested aspects of the proposed regulations, as many AI experts consider such a definition to be either overly broad, applying to wide-ranging software systems, or too limited given ongoing developments in AI.

Through online public consultations led by the Commission on the proposed AI Act, most stakeholders requested a narrow, clear, and precise definition of AI. However, having a single definition of AI, specified in an annex that serves as a living document, is an intentional aspect of the framework, allowing for flexible interpretation and future changes to the definition as AI technologies and methods evolve. The final version of the regulations adopted will likely have a revised scope and definition of AI, after ongoing consultations with experts in the area.

We must then consider the implications for how Urban AI is defined, as these regulations would clearly apply to such systems whether developed in the private or public sector. Urban AI technologies are part of the complex ecologies of products and services offered in the context of urban places. Aale Luusua and Johanna Ylipulli (2020) argue that urban technologies act as a gateway for the introduction of AI technologies, especially in cities (Luusua 2016), as rapid digitization, mobile services, and infrastructural computing become increasingly embedded in everyday urban experiences, whether in the home, workplace, public spaces, or travel. These urban places, as suggested by Ray Oldenburg (1989), are being permeated by technologies, many of which integrate AI-based algorithmic decision-making and learning from big data about people’s interactions in the urban sphere.

Identifying the data-centric and algorithmic components that rely on forms of artificial intelligence in an overall Urban AI system is crucial to determining which aspects, if any, would fall under the purview of the proposed AI regulations. For example, in the automated parking control system the AI-based component would primarily be the license plate scanning software, which may itself be outsourced to a commercial provider. However, city authorities would need to take responsibility for ensuring safeguards for privacy and the non-discriminatory use of such AI components in their overall municipal service, while documenting this in a transparent and auditable manner.

5.3.2 Acknowledging rights

As such Urban AI systems are deployed in a city, we must critically examine how they affect the rights of the citizens (and non-citizens) subject to their use. These include, for example, the right to privacy, the right to information, and the right to equality and non-discrimination, and how municipalities uphold such rights for all residents of a city, which we discussed through illustrative examples in previous sections of this article. While there may be differences in how the laws are interpreted and enforced in certain regions, the proposed EC regulations on AI systems seek to provide a legal framework of consistent rules that govern them. One of the key objectives of the EC is to ensure that AI systems placed and used in the EU market are “safe and respect existing law on fundamental rights and Union values”. Stakeholder consultations on the proposed AI Act indicated the need to take into account the impact on fundamental rights and safety when assessing the level of risk posed by AI systems.

5.3.3 Recognizing risks

To achieve those objectives, the proposed AI Act undertakes a “proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.” (EC AI Act 2021) In the online consultations conducted in 2020, stakeholders highlighted the need to define differing notions of risk including ‘high-risk’, ‘low-risk’, ‘remote biometric identification’ and ‘harm’, to better clarify the proportional risks introduced by AI systems defined within the scope of the AI Act. “The types of risks and threats should be based on a sector-by-sector and case-by-case approach.” (EC AI Act 2021) This risk-based approach is a central feature of the proposed EC regulations allowing greater scrutiny of what are considered high-risk AI systems.

The proposed risk-based methodology in the AI Act primarily imposes regulatory burdens on “high-risk” AI systems that are likely to pose significant risks to the fundamental rights and safety of persons, while low-risk systems would carry only very limited transparency obligations. A limited set of high-risk AI systems, in pre-defined areas, is specified in Annex III of the regulations; these include systems for biometric identification, law enforcement, managing and operating critical infrastructure, access to essential private and public services, employment, education, and vocational training, migration, asylum and border control, as well as administration of justice and democratic processes. Clearly, many Urban AI systems fall within the scope of such high-risk areas, while the risk level of others would need to be carefully assessed. This annex for high-risk AI would be expanded or amended in the future by applying a set of criteria and a risk assessment methodology (also listed in Annex III) for identifying systems that pose harms or adverse effects to safety and fundamental rights.
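
As a coarse illustration of this triage, one can model the Annex III screening as a set-membership test; the area labels below are paraphrased from the list above, and the real legal assessment is considerably more nuanced.

```python
# Coarse sketch of the Act's risk triage: a system whose intended purpose
# falls in an Annex III area is treated as high-risk; otherwise it carries
# only limited transparency obligations.
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_asylum_border", "justice_democratic_processes",
}

def risk_tier(intended_purposes: set) -> str:
    return "high-risk" if intended_purposes & ANNEX_III_AREAS else "limited obligations"

# The Amsterdam scan car arguably touches law enforcement, so its AI
# components would warrant careful assessment under this screening.
print(risk_tier({"parking_enforcement", "law_enforcement"}))  # high-risk
```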

5.3.4 Reconciling responsibilities

For these kinds of high-risk AI systems, the regulations in the AI Act stipulate a range of requirements for high-quality data, documentation and traceability, transparency, human oversight, accuracy, and robustness, to mitigate the potential risks arising from their use. As we have seen in the Urban AI case study, the automated parking control system clearly poses potentially high risks related to the privacy of drivers and residents through image-based surveillance, incrimination due to wrongful identification of license plates, and inadvertent neighborhood profiling if the system is selectively deployed. Hence, it falls within the purview of the proposed regulations, which impose certain obligations for documenting the risks and ensuring accountability for safeguarding the trustworthiness of the overall service.

Assessing the risks and devising documentation for regulatory compliance would require a multi-disciplinary, team-based effort to examine these Urban AI systems from technological, legal, and ethical perspectives. For example, in the case of the automated parking control system, stakeholders from the city, including different municipal actors, private providers, and citizen advocacy groups, would need to be engaged in the process to ensure that the emerging service remains trustworthy and the compliance framework robust and accountable, based on the priorities of the city as well as the rights and values of all stakeholders involved. The design research workshops we hosted in New York City to examine urban mobility data with stakeholders, described in previous sections of this article (Sawhney 2020), offer an approach for examining conflicting objectives and demands, while engaging urban practitioners, technology experts, and citizen advocates to devise a cooperative understanding of the implications of using Urban AI for equitable mobility in the city. Employing participatory methodologies for urban informatics offers a cultural shift in policy and governance towards collaborative city-making (Foth 2018), while nurturing shared responsibility among municipal authorities and city residents for a kind of cooperative digital urbanism.

In the next phase of the regulatory deliberations with EU member states for the proposed AI Act, the European Commission plans to establish compliance and enforcement mechanisms for high-risk AI systems placed in the EU market. After providers have conducted conformity assessments of their high-risk AI systems, these would be registered in a centralized EU database managed by the Commission to increase public transparency and to enable oversight and supervision by competent authorities in the member states. I believe that such a framework for the assessment and compliance of high-risk AI systems, while seemingly onerous, would ultimately strengthen trust, acceptance, and accountability, and thereby enable the widespread use of trustworthy Urban AI technologies and services in society.

6 Beyond the city: sustaining urban ecosystems and engaging indigenous protocols

The notion of the right to the city should not apply simply to de facto citizens, but to all residents and non-human inhabitants of these changing urban ecosystems. Here it is also important to take into consideration the right to culture and the right to livelihoods. These rights are manifested in many ways today. For example, built monuments and heritage sites in cities are often designated as historical landmarks, while indigenous peoples, cultural artifacts such as languages, and many vulnerable animal species are afforded legal protection. There is also an emerging drive to grant legal rights to vulnerable ecological entities. Many authors have highlighted the value of indigenous perspectives (Yunkaporta 2019) on data sovereignty (Kukutai et al. 2016; Walter and Suina 2019), co-production and knowledge sovereignty in decision-making (Latulippe and Klenk 2020), and sustainability (Vásquez-Fernández and Ahenakew 2020).

The Whanganui River in New Zealand, revered as sacred by the indigenous Māori people, was granted legal personhood on March 20, 2017 (Roy 2017). New Zealand’s parliament passed legislation declaring that Te Awa Tupua (the river and all its physical and metaphysical elements) is an indivisible, living whole that henceforth possesses “all the rights, powers, duties, and liabilities” of a legal person (Ware 2019). The river sustains many communities, including the Māori tribes and the Pākehā (non-Māori New Zealanders), so their collective right to livelihood is intertwined with such legal protection and preservation. The symbolic declaration has fostered a form of shared identity and stewardship of the river, gradually displacing historical distrust with reconciliation and cooperation. In the context of climate crises in cities, these trends offer crucial means for recognizing and embedding a broader rights-based framework in urban ecologies. However, it is important to distinguish between the rights of nature (Graham and Maloney 2019), legal personhood, and rights to livelihood, as these have different implications for the stakeholders involved.

Designing Urban AI ecosystems that inevitably affect both human and non-human entities provokes critical questions about engaging a wider sphere of stakeholders and devising means of honoring their rights to support new forms of cooperative agency. Many authors have proposed decentering human agency to take a more-than-human, participatory, and non-anthropocentric perspective on the smart-city agenda (Luusua et al. 2017; Lupton 2019; Clarke et al. 2019; Giaccardi and Redström 2020; Loh et al. 2020). For example, data-driven urban systems that monitor and manage the flow of water transport in city rivers could autonomously regulate their usage and pollution by issuing warnings, reducing capacity, or dynamically changing tolls for private boats and public ferry traffic at certain times of day or during the year. By treating the river as a legitimate actor, the city’s algorithmic infrastructures, with a distributed network of environmental sensors, could monitor, forecast, and readily act on the river’s changing ecological health, thereby preserving its rights as a legal entity in the urban ecosystem. It is unlikely that such systems would be entirely autonomous; they would regularly rely on the domain expertise of environmental engineers, with municipal agencies, indigenous communities, and environmental advocacy groups ideally providing oversight in a cooperative manner. But is that in itself sufficient for designing, using, and governing such Urban AI systems, especially when they can have long-lasting and unknown impacts?
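
As a thought experiment, the following minimal rule-based sketch illustrates how such a system might encode the river’s ecological interests in its traffic decisions. The sensor thresholds, toll multipliers, and escalation step are entirely hypothetical assumptions, not a description of any deployed system.

```python
# Hypothetical sketch of a rule-based regulator that adjusts river traffic
# in response to sensed ecological conditions. Thresholds, multipliers,
# and the escalation-to-human step are illustrative assumptions only.

BASE_TOLL = 10.0  # nominal toll per vessel, in arbitrary currency units

def regulate_traffic(pollution_index: float, water_level_m: float) -> dict:
    """Map sensor readings to a traffic decision on behalf of the river."""
    if pollution_index > 0.8 or water_level_m < 1.0:
        # Severe ecological stress: cap capacity and refer the case to
        # human oversight (engineers, agencies, community stewards).
        return {"capacity_factor": 0.25, "toll": BASE_TOLL * 3, "escalate": True}
    if pollution_index > 0.5:
        # Moderate stress: discourage discretionary traffic via pricing.
        return {"capacity_factor": 0.6, "toll": BASE_TOLL * 1.5, "escalate": False}
    return {"capacity_factor": 1.0, "toll": BASE_TOLL, "escalate": False}

print(regulate_traffic(pollution_index=0.85, water_level_m=1.4))
# -> {'capacity_factor': 0.25, 'toll': 30.0, 'escalate': True}
```

Note that the escalation flag is where the river’s legal personhood becomes operational: contested or severe cases are handed back to the cooperative network of human stewards rather than resolved autonomously.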

6.1 Indigenous protocols as ethical guidelines for Urban AI

How do we center Indigenous concerns to design Urban AI systems for broader ecosystems in an ethical manner? A recent position paper by Jason Edward Lewis (2020) and other Indigenous AI researchers foregrounds the role of Indigenous knowledge systems as alternative approaches to reframe the conversations around the challenges of AI in society. The relational paradigms of Indigenous epistemologies refuse to center or elevate the human, focusing instead on principles and practices that engage social and environmental sustainability, while establishing reciprocal relationships (between humans, machines, and non-human entities) through mutual respect and aid. Here the notion of “Indigenous Protocols” offers a way forward; protocols in Indigenous contexts refer to guidelines for initiating, maintaining, and evolving relationships. “Learning, understanding and following proper protocol is central to many Indigenous interactions, whether informal or formal. Nations and even individual communities have their own sets of protocols, which are informed by the specific epistemologies of the communities using them.” (Lewis et al. 2020).

In designing Urban AI technologies and services, such protocols reinforce the notion that these systems not only introduce new transactions but also materialize new kinds of relationships between stakeholders, often with reciprocal responsibilities. The position paper introduces “Guidelines for Indigenous-centered AI Design,” which include principles such as Locality, Relationality and Reciprocity, Responsibility, Relevance and Accountability, as well as Respect and Support for Data Sovereignty (including open data principles that respect the rights of Indigenous peoples). The guidelines suggest that all technological systems (and computation itself) are cultural materials: expressions of cultural and social frameworks for understanding and engaging with the world. This demands an awareness of socially dominant concepts and normative ideals, along with the biases and cultural values that accompany them.

In a section of the position paper, Suzanne Kite (2020), in conversation with other Indigenous practitioners and researchers, discusses how Indigenous Protocols can guide the design of physical computing devices, using the protocol for building a Lakota sweat lodge as a framework. Kite notes that “Lakota decision-making processes, as with many Indigenous decision-making processes, embed ethics that look Seven Generations ahead”; this implies a much longer-horizon impact assessment, especially when it comes to developing Urban AI systems designed to function in urban ecosystems over a far greater lifespan than typical AI technologies. Kite further explains that Lakota knowledge is not static: “protocols change, decision-making shifts,” so decisions that have effects on the world must continue to be made and revised within a network of relations, i.e., with the stakeholders, both human and non-human, that are affected by them.

In building a physical computing device in a “Good Way” through guidelines derived from Indigenous Protocols (using the analogy of a sweat lodge), many key aspects emerge, including: (1) recognizing why such a device would be desired in the first place; (2) consulting a committee of “knowledge keepers” with expertise in computation, materials, and ethics; (3) identifying stakeholders, from the communities who build and use the devices to the non-human materials and the environment in which they are placed; (4) extracting materials and constructing the devices in an environmentally sustainable manner; (5) arranging the design elements, algorithms, and code structures in an intentional and intricate manner that promotes responsible design, from training and interaction to transforming the code into semiotic information rendered sensible to humans (what may be considered Explainable AI); (6) ensuring that the device created is announced to stakeholders with transparency about its relational impact; and finally (7) designing for the overall life-cycle and death-cycle of systems, i.e., for ease of repair, recycling, reuse, or subsequent transformation.

Hence, Indigenous Protocols potentially offer an expanded framework of guidelines for the ethical design and use of Urban AI systems, one that inherently embeds rights, risks, and responsibilities with greater attention to socio-cultural contexts, longer-horizon multi-generational impacts, and relational effects on a wider circle of affected stakeholders, including humans, non-human entities, and the environment. The lessons emerging in this article suggest a framework for critically rethinking ethical practices of designing Urban AI systems, whereby we (1) consider the purpose and implications of introducing Urban AI systems in a societal and environmental context, (2) de-center human agency while promoting participation, collaboration, and collective decision-making with Indigenous peoples, non-Indigenous people, and non-human stakeholders, (3) design for the lifecycle of introducing, deploying, using, and terminating or repurposing systems to ensure sustainable practices, and (4) critically engage responsible policies and practices that mediate the rights and risks of all stakeholders involved to ensure trust, accountability, and good governance. The framework of rights, risks, and responsibilities introduced in this article, coupled with participatory, responsible, and relational approaches across a wider ecosystem of human and non-human stakeholders, potentially supports more equitable, inclusive, and sustainable Urban AI.

7 Conclusions

As Urban AI systems become more pervasive in our lives and embedded in the many kinds of urban places that we experience, we must consider the living algorithmic and data infrastructures they create and their wider implications for society and the environment over longer time horizons. In this article, I examined the contestations that emerge from scenarios of urban mobility services, surveillance, and policing in cities, extending to the design of wider Urban AI ecosystems. Grounding these discussions in a rights-based discourse allows us to consider the risks and responsibilities of all stakeholders (human and non-human), providers, and state actors in mediating their design and usage in an ethically responsible manner. There is a crucial role for engaging policy in smart mobility (Foth 2018) to mediate the risk that an engineering-led push for innovation at the intersection of mobility and Urban AI could simply lead to more “tech fixes” and “solutionism” (Morozov 2014).

The emerging regulations on AI systems proposed by the European Commission offer many challenges and opportunities to anchor rights, risks, and responsibilities in a framework for the assessment and compliance of such Urban AI systems. The illustrative example of the automated parking control system introduced by the City of Amsterdam, discussed here, demonstrates how such systems can be designed to address bias, fairness, and transparency, while highlighting the potential risks of privacy violations and wrongful incrimination of citizens and residents if the system is not made accountable and trustworthy. While the proposed AI Act provides a means to assess and document high-risk AI systems, confirm their conformity, and enforce obligations and oversight, I believe it is not a sufficient tool for conceptualizing, designing, and assessing the wider ethical implications of Urban AI. An Indigenous perspective that engages guidelines derived through Indigenous Protocols offers an alternative framing, one that examines the relational effects on all stakeholders (human and non-human), socio-cultural contexts, and longer-horizon effects in designing and using Urban AI systems across their lifecycle and environmental context. I believe that engaging both the formal regulatory provisions of the proposed AI Act and informal guidelines based on Indigenous perspectives offers a more holistic way forward for designing technologies in a societal context.

Allison Powell (2021) argues that while civic life has been reconfigured by our use and expectations of urban technologies, notions of citizenship have also shifted in relation to how such technologies create contention over governance and civic liberties. Shannon Mattern (2021) imagines how we might rethink data-driven urbanism and algorithmic infrastructures through the myriad forms of local and indigenous intelligences and knowledge institutions in cities, to constitute more diverse, open, and inclusive urban forms; she cites the example of public libraries functioning as stewards of urban intelligence. There is an opportunity to devise better participatory means of engaging communities and co-designing urban technologies that honor the rights, risks, and responsibilities of all stakeholders in society. Urban AI can thus potentially offer democratic, equitable, and inclusive futures by embracing the critical contestations, civic agency, good governance, and diverse intelligences embedded in the city’s urban fabric.