Introduction

The confluence of big data, open-source platforms and improved computational power has catalysed the expansion of AI technologies in urban planning and built-environment applications, enabling more sophisticated and data-driven approaches to urban analysis and decision-making (Park et al. 2023; Yigitcanlar et al. 2023a). Among these technologies, geospatial AI or GeoAI, which represents the integration of geospatial analysis and AI, has revolutionised human-environment modelling. By leveraging the capabilities of AI and geospatial analysis, GeoAI offers a range of advantages over traditional methods, including increased geographic coverage, reduced data bias and improved cost and time efficiency (ElHaj et al. 2023; Kang et al. 2023; Ying et al. 2023). These capabilities have been demonstrated across a range of urban applications, from smart urban management and urban analysis to data visualisation and support for decision-making in planning processes (Koseki et al. 2022; Marasinghe et al. 2024).

As cities seek to enhance their planning and efficiency through data-driven approaches, the integration of AI technologies into the planning profession has gained momentum (D’Amico et al. 2020; Sanchez 2023; Sawhney 2023). Geospatial analysis, fuelled by big data and AI technologies, supports data-driven planning efforts while also emphasising the need for a responsible framework in implementing GeoAI (Kovacs-Györi et al. 2020; Deep Block 2023). The intersection of AI and the built environment, termed ‘urban AI’, raises complex ethical considerations and questions of spatial justice in the urban context (Alfrink et al. 2023; Son et al. 2023), such as matters of transparency and auditability (Faßbender 2021; Pansoni et al. 2023).

Responsible use of AI for geospatial tasks also necessitates a comprehensive understanding of hidden geospatial biases, the fostering of connections between algorithm designers and the communities they serve, and awareness of ethical and social implications (Rudin and Radin 2019; Gevaert et al. 2021). The ethical management of geographic data, the trustworthiness of results and attention to accuracy issues in variables and models, data accessibility and ethical concerns all play a pivotal role in the broader discourse on AI ethics (Micheli et al. 2022; Zhang et al. 2023; Schirpke et al. 2023).

Thus, a strategic approach that considers a range of factors, such as clear objectives, ethical alignment, human interactions, context-specific adaptations and addressing knowledge gaps (particularly those related to policymakers’ and planners’ understanding of complex AI outcomes), as well as the necessity for deep knowledge in areas like conceptualisation, parameterisation, system simplification and model accuracy and validation, is essential for the successful integration of GeoAI (Koseki et al. 2022; Sanchez 2023; Schirpke et al. 2023; Du et al. 2023). By taking a comprehensive and responsible approach to AI integration, cities can harness the potential of these technologies to improve the lives of their residents while minimising potential risks and negative consequences.

GeoAI leverages CV and ML as fundamental instruments to analyse, model and extract information from geospatial data. GeoAI has become the frontier for spatial analytics and is widely used in large-scale image analysis (Li & Hsu 2022; Mai et al. 2022), making CV a cornerstone of GeoAI that enables the automated extraction of meaningful information from visual data. This approach creates significant opportunities in geospatial science and offers novel solutions to address the complexities of modern cities, thereby supporting sustainable urban decision-making processes (Asif et al. 2023; Nassar et al. 2023; Deep Block 2023). Nonetheless, the impact of AI-driven decisions on citizens’ lives and the implementation of AI in geospatial tasks present risks and ethical dilemmas, including data privacy, accuracy and interpretability, compounded by the absence of standardisation in data collection and analysis and the need for specialised skills for effective application (Koseki et al. 2022; Tingzon et al. 2023).

Given the potential benefits and challenges of GeoAI in urban planning, it is crucial to consider a range of ethical and responsible factors throughout the integration process to ensure its effective and equitable deployment. For instance, the incorporation of high-quality data and robust model development, along with solutions such as Explainable AI (XAI) and multidisciplinary collaboration, can enhance transparency, trust and the ethical use of AI in urban decision-making processes (Kumar et al. 2020; Akbarighatar et al. 2023). AI in support of sustainable urban development resides at the intersection of the availability of quality source and training data, technical feasibility, deployer’s capabilities, value creation and legal and ethical compliance (PwC 2020; Yigitcanlar et al. 2023b; Regona et al. 2024).

Sanchez (2023) describes three primary challenges to AI implementation in urban planning: (a) the necessity for new skills; (b) evolving data requirements; and (c) the incorporation of transparency. The most common policy areas of responsible technology are trustworthiness and the acceptability of technology (Li et al. 2023b). Hence, ‘Responsible AI’, the ethical and socially beneficial development, deployment and utilisation of AI, is a central concept in AI studies and is particularly relevant in the field of GeoAI (Asif et al. 2023; Nassar et al. 2023).

In light of these considerations, it is imperative to examine the complexities involved in integrating AI into geospatial applications and to develop strategies for addressing these challenges in a responsible and ethical manner. This paper seeks to contribute to this effort by exploring the challenges associated with AI integration in these contexts, advocating for responsible practices that can support impactful and beneficial GeoAI applications and proposing criteria for the successful incorporation of GeoAI. Drawing on insights from a review of both academic (white) and professional (grey) literature, this study aims to provide practical guidance for responsible AI integration in the geospatial domain, with a focus on supporting sustainable and equitable urban development outcomes.

Methodology

This study employs a systematic literature review methodology adhering to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) protocol, along with word frequency analysis, and a synthesis of insights from a review of grey literature. This study is guided by two key research questions: (a) What are the complexities involved in employing AI in geospatial applications? and (b) What constitutes responsible practices for the effective implementation of GeoAI? The PRISMA review process is structured into four stages: Identification, Screening, Eligibility, and Inclusion of sources in the final analysis (Figure 1).

Figure 1. Flow of the PRISMA review

Literature Identification

To identify relevant academic or white literature, a comprehensive search was conducted using the Scopus database, leveraging a strategically designed Boolean operation. The Boolean operation was formulated as follows: TITLE-ABS-KEY ( “responsi*” OR “ethic*” OR “best practice*” ) AND TITLE-ABS-KEY ( “computer vision” OR “image process*” OR “machine vision” OR “AI” OR “Artificial Intelligence” ) AND ALL ( “urban” OR “Geo*AI” OR “city planning” OR “regional planning”). This search strategy was carefully crafted to extract articles that span a wide range of responsible AI and CV uses in urban planning and geospatial studies. The literature search was carried out on 20 October 2023, resulting in the retrieval of 1742 articles from the Scopus database.
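As an illustration, the search could be reproduced programmatically. The following sketch issues the Boolean operation above against Elsevier’s Scopus Search API; the API key is a placeholder, and result handling is reduced to a record count.

```python
# Sketch: issuing the review's Boolean operation against the Scopus Search API.
# Requires an Elsevier API key (placeholder below).
import requests

QUERY = (
    'TITLE-ABS-KEY("responsi*" OR "ethic*" OR "best practice*") '
    'AND TITLE-ABS-KEY("computer vision" OR "image process*" OR "machine vision" '
    'OR "AI" OR "Artificial Intelligence") '
    'AND ALL("urban" OR "Geo*AI" OR "city planning" OR "regional planning")'
)

response = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    headers={"X-ELS-APIKey": "YOUR_API_KEY"},  # placeholder, not a real key
    params={"query": QUERY, "count": 25},
)
response.raise_for_status()
total = response.json()["search-results"]["opensearch:totalResults"]
print("Records retrieved by the query:", total)
```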

Literature Screening

To ensure the relevance and quality of the included sources, a set of inclusion and exclusion criteria was established and applied during the screening process, as summarised in Table 1. Only full-text articles published in peer-reviewed journals or conference proceedings and written in English were considered for inclusion. Articles that did not meet these criteria were excluded, leaving 1420 articles. The titles, keywords and abstracts of these articles were subsequently evaluated to ascertain their pertinence to the study aim. Articles deemed irrelevant were systematically excluded from the review, resulting in a final set of 164 articles deemed eligible for inclusion in the study (see Figure 1 for a summary of the selection process).

Table 1 Inclusion and exclusion criteria
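For reproducibility, the criteria-based screening step could be expressed as a simple filter over the exported records. The sketch below assumes a Scopus CSV export with ‘Language’ and ‘Document Type’ columns; the actual column names in an export may differ.

```python
# Sketch: applying the language and document-type criteria as a filter.
# Column names ("Language", "Document Type") are assumptions about the export.
import pandas as pd

records = pd.read_csv("scopus_export.csv")  # hypothetical export of 1742 records

screened = records[
    (records["Language"] == "English")
    & records["Document Type"].isin(["Article", "Conference Paper"])
]
print(len(screened), "records retained for title/abstract screening")
```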

Literature Eligibility and Inclusion

Subsequently, each article was rigorously assessed to ensure its alignment with the review objectives. This phase narrowed the article pool to 79. A concluding round of thorough full-text review resulted in the identification of 55 relevant articles. The chosen papers were subsequently utilised for qualitative analysis and reporting. The analysis was reported under four primary themes: (a) complexities of employing AI in geospatial applications; (b) responsible practices for effective AI adoption in geospatial applications; (c) word frequency analysis; and (d) synthesis of professional or grey literature on effective GeoAI applications.

Word Frequency Analysis and Grey Literature Review

Further to the literature review, the study integrated a word frequency analysis utilising NVivo software to facilitate a more comprehensive analysis. The process highlighted the most frequent words within the document content collected during the literature review, thereby providing valuable insights into the prevalent themes and topics. Furthermore, to complement the findings of the systematic literature review and gain insights into best practices for the use of AI and CV in geospatial applications, a review of grey literature was also conducted. Grey literature, which includes technical reports, government documents and other non-traditional sources, often provides valuable insights and practical knowledge that may not be captured in formal academic publications. To identify relevant grey literature, a Google search was conducted on 19 September 2023, using the terms ‘best practices for geo*AI and computer vision deployment’. The sources were selected based on their relevance to the study aim and their potential to provide insights into best practices for the use of GeoAI. This screening yielded 17 relevant grey literature sources.

Results

General Observations

The descriptive analysis reveals a notable increase in research interest concerning the responsible use of AI in the geospatial domain. The responsible implementation of AI technologies for urban planning and other geospatial studies has gained increased attention, with 53% of the articles reviewed for this study published within the first 10 months of 2023. These articles span a wide array of journals, all of which emphasise the responsible use of technology in urban contexts (Figure 2).

Figure 2. Descriptive analysis of the reviewed articles

To provide a clearer visual summary of the literature review process, an alluvial diagram is presented below (Figure 3). The diagram illustrates the 55 articles that were reviewed, with the height of the bars proportional to the number of articles under each category. The figure illustrates each article’s focus and application area, along with an indication of whether the article is directly oriented towards responsible practices in the context of AI integration in geospatial studies. ‘Article Focus’ represents specific branches or techniques within the broader field of AI, and ‘Application Area’ illustrates the more specific fields where these AI technologies are applied. Although terms like Generative AI and Explainable AI are subsets within the broader field of artificial intelligence, these specific applications or branches are distinctly represented in the diagram to avoid ambiguity. These articles were examined to identify the key challenges and responsible practices essential for the effective integration of GeoAI technologies. The subsequent sections explain the findings derived from this review.

Figure 3. Summary of the reviewed literature

Complexities of Employing AI in Geospatial Applications

This section presents the multifaceted challenges associated with the use of AI tools and techniques in geospatial tasks. A crucial yet highly challenging aspect in the development of GeoAI is data. As depicted in Figure 4, data quality and availability and the ethical handling of data are fundamental to the effective application of AI in geospatial tasks, yet they present significant hurdles. Most of the literature underscores data and resource constraints, as well as the ethical and regulatory implications of gathering and analysing data, as the primary themes of challenge in employing AI for geospatial tasks; these are mentioned more frequently than other technical constraints. This observation aligns with the practical experiences of researchers in the field and highlights the importance and complexity of multifaceted ethical and data issues. When applying AI techniques to urban analysis and studies, researchers often encounter significant hurdles related to ethical and regulatory implications in terms of data privacy, algorithmic biases and errors, the ethics of generative AI (GenAI), transparency, accountability, the availability and quality of relevant data, knowledge gaps and many other application-specific limitations.

Figure 4. Challenges of AI adoption in geospatial studies derived from academic literature

In addition to these, the ‘Blackbox’ nature of AI-based models, the difficulties planners face in understanding the complex relationships arising from big data, gaps in the knowledge base and the systematic limitations of AI use in urban-related applications are also commonly mentioned challenges in the literature. Moreover, the literature underscores potential conflicts in values at different levels (e.g. automated decision-making vs. human values, privacy vs. accuracy) that emerge when employing AI for urban applications, as well as technical and analytical challenges. These multifaceted complexities of employing AI tools and techniques for geospatial tasks necessitate a deeper understanding. A detailed elucidation of each of these challenges is presented in the sections below.

Data, Technical and Resource-Based Constraints

AI adoption can be costly and inaccessible, which hinders its widespread adoption and benefits (Hariri-Ardebili et al. 2023). The main barrier to AI implementation is the quantity, quality and availability of data (Rapp et al. 2023; Abimannan et al. 2023; Akbarighatar et al. 2023). In the context of GeoAI, challenges include data scarcity, reliability issues and the cost of data acquisition, especially in developing countries. There are also concerns about the limited data relevant to geospatial studies and their inconsistency, inaccuracy, bias and subjectivity (Schirpke et al. 2023; Tingzon et al. 2023; Pansoni et al. 2023).

According to Du et al. (2023), data constraints and limited resources in the planning field pose challenges to AI use in urban planning. Biases in training datasets, particularly those collected from online social data or crowdsourcing, raise issues of fairness and discrimination (Araujo et al. 2020). Data inequalities (Nisar et al. 2022), historical biases and manipulation of metadata also pose challenges (Pansoni et al. 2023). The quality of data and the availability of adequate training samples are crucial for the accuracy of CV-based urban analysis and generalisation (Bernasco et al. 2023). As the choice of tasks is driven by dataset availability (Lai et al. 2023), the demand for quality and diverse data for geospatial tasks is crucial (Velev & Zlateva 2023). Furthermore, financial capabilities, organisational capabilities, computational resources and infrastructure are important aspects that pose barriers for AI implementation (Akbarighatar et al. 2023; Lucchi 2023).

Moreover, implementing AI solutions for the built environment requires the development of accurate and interpretable models that can adapt to changing conditions. However, this process is often hindered by computing complexities, concerns about security and the need for system scalability and flexibility (Abimannan et al. 2023). In addition, the high volume of data associated with these applications, often referred to as ‘big data’, along with the resource-intensive parametrisation of ML models and high technical requirements, introduces further complications (Schirpke et al. 2023). In conclusion, while AI offers promising capabilities for geospatial applications, it is imperative to acknowledge the resource constraints and technical and analytical complexities inherent in these systems.

Multifaceted Ethical and Regulatory Implications of Gathering and Analysing Data

The application of AI, ML and CV algorithms in data gathering and analysis raises important ethical, regulatory and legal considerations. These include the potential for algorithmic errors due to biased datasets or training processes, the increased risk of discrimination, fairness concerns and vulnerability to security threats such as spoofing and adversarial inputs (Dufresne-Camaro et al. 2020; Bernasco et al. 2023; Velev & Zlateva 2023). In the context of urban sciences, the use of generative AI poses additional ethical concerns, including the potential for misinformation and bias in AI outputs (Jang et al. 2023; Bae & Xu 2023). For example, AI-generated maps have been identified as a source of ethical concerns, including the potential for misinformation, lack of reproducibility, inherent randomness in the generation process and an inability to understand underlying geographic processes (Zhang et al. 2023). These issues present significant challenges for the validation and replication of cartographic research on GeoAI, highlighting the need for robust and transparent methodologies and clear ethical guidelines.

The use of AI in decision-making and participatory planning introduces ethical considerations such as potential biases, imbalanced power dynamics, fairness and exclusionary conduct, and raises legal challenges, including privacy and accountability, as well as concerns about individuals’ willingness to cede decision-making power to AI (Du et al. 2023; Hariri-Ardebili et al. 2023). Major challenges for AI use in urban studies, data-driven decision-making and big data include data privacy, security and the ethics of data collection. These issues are particularly pertinent for visual data and data involving human faces and personal information (Araujo et al. 2020; Lucchi 2023; Hariri-Ardebili et al. 2023; Bernasco et al. 2023). Other ethical challenges include a lack of transparency in reporting and documenting methods and decision-making algorithms (Schirpke et al. 2023). Given the wide variations in the quality and coverage of geospatial data and errors in algorithm design, bias is a significant risk (Gevaert et al. 2021). Hence, when using AI for geospatial tasks, attention must be paid to transparency, responsibility, accountability attribution (Pansoni et al. 2023) and multifaceted ethical considerations.

Blackbox Nature of AI-Based Models

The integration of AI and ML models into decision-making processes has been hindered by the opacity and limited interpretability of these models, often referred to as the ‘Blackbox’ problem. Despite their high accuracy, the inability to fully understand, explain and trust the models’ outputs has raised concerns among decision-makers (Tingzon et al. 2023; Hariri-Ardebili et al. 2023). This lack of transparency, and the inability of planners to explain the models’ outputs to non-experts, leads to uncertainty and makes the integration of these models into planning problematic (Du et al. 2023).

In the field of urban studies, researchers often rely on pre-trained models, particularly in CV, due to the complexities, data intensities and resource requirements associated with developing specific models for their studies (Araujo et al. 2020). Nonetheless, these commercial APIs are not specifically designed for studies related to the built environment: they offer little insight into the quality of the data used for training, are unable to comprehend complex spatial relations, and their ‘Blackbox’ nature poses challenges (Araujo et al. 2020). Researchers must be mindful of potential biases, issues related to fairness and the need for explainability and transparency when using these models (Araujo et al. 2020; Velev & Zlateva 2023). Therefore, while AI and ML models offer promising capabilities, their application must be approached with caution and with a thorough understanding of their limitations to avoid potential negative consequences, such as perpetuating inequalities or undermining public trust in the technologies.
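The following sketch illustrates the domain-mismatch caveat: a generic pre-trained CV model (here, a torchvision ResNet-50 with ImageNet weights, used purely as an example) will label a street-level image with generic object categories rather than built-environment concepts, and offers no visibility into its training data.

```python
# Sketch: a generic pre-trained model applied to an urban scene. Its ImageNet
# label space was never designed for built-environment analysis, and the
# training data behind the weights are not inspectable by the end user.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # preprocessing published with the weights

img = Image.open("street_view.jpg")  # hypothetical street-level photograph
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.topk(3)
for p, idx in zip(top.values[0], top.indices[0]):
    # Labels are generic ImageNet categories, not urban-planning concepts.
    print(f"{weights.meta['categories'][idx]}: {p:.2f}")
```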

Knowledge Gaps

The adoption and implementation of AI systems in various fields, including urban planning and geospatial tasks, are hindered by knowledge gaps. Professionals often hesitate to employ technologies they do not fully understand, creating a barrier to the wider adoption of AI (Rapp et al. 2023; Tingzon et al. 2023). This hesitation is further compounded by a lack of trained professionals or expertise in AI systems (Rapp et al. 2023).

Successful AI-based modelling and simulation necessitates profound knowledge in areas such as conceptualisation, parameterisation, system simplification and model accuracy and validation (Schirpke et al. 2023). A comprehensive understanding of AI functionalities and advantages, along with clear explanations of the impact of AI methods, is often lacking, and planners frequently struggle to comprehend the complex relationships and patterns arising from big data (Du et al. 2023; Lucchi 2023; Tingzon et al. 2023). In the field of urban planning, this general unfamiliarity with AI techniques poses significant challenges (Du et al. 2023). Building knowledge in these areas is crucial for the effective integration of AI systems into geospatial studies and planning processes.

Systematic Limitations

AI systems, despite their potential, have inherent limitations that can impact their adoption in geospatial applications. For instance, systematic limitations such as a dependence on instrumental rationality, the use of quantitative methods and the automation of technology contribute to the complexities involved in employing AI tools and techniques for geospatial studies (Rittenbruch et al. 2022). Inflated expectations surrounding AI can encourage the use of unsuitable approaches and an excessive dependence on technology that is unfit for the intended use, leading to a decline in human decision-making abilities, a loss of context awareness and diminished human responsibility (Yigitcanlar et al. 2021a; Velev & Zlateva 2023; Cruz et al. 2023).

The absence of standards for data collection, model creation and interoperability, subjectivity in analytical capabilities and interpretation, and the limited accuracy of virtual representations of physical entities all represent challenges of employing AI for geospatial applications (Lucchi 2023). These challenges are of heightened importance when AI is used for decision-making that involves risks and societal impacts. The level of expertise and training (see the previous section on Knowledge Gaps) can influence the decision outcome, adding further subjectivity to the process (Lai et al. 2023). The application of AI must therefore be approached with caution and a thorough understanding of its limitations.

Potential Conflicts in Values Between Different Levels

The application of AI in diverse domains has raised significant ethical and value-based concerns. The task of aligning values is critical to ensure that the objectives and actions of AI are harmonised with human values (Rezwana & Maher 2023). In the context of geospatial tasks, the application of AI technologies necessitates a thorough consideration of the needs, values and ethics of local communities, with sensitivity to potential conflicts and power asymmetries (Micheli et al. 2022). Insufficient comprehension of the local context may lead to the exclusion of local resources and values from geospatial data representation (Gevaert et al. 2021). For instance, GeoAI approaches, which often evaluate the built environment based on visual representations, could yield inaccurate and unethical outcomes in spatial analysis if the local context, such as the cultural or ecological significance of a site, is overlooked (Kang et al. 2023). Furthermore, biases in the GeoAI training phase could result in unfair decisions. Indeed, research indicates that models trained on region-specific data are more likely to accurately represent locals’ perceptions (Kang et al. 2023).

The development and implementation of AI technologies, which can have broad social implications, necessitate careful consideration of ethical and social implications (Velev & Zlateva 2023), user acceptability and potential conflicts between values such as efficiency and privacy. This is essential to prevent disputes and ensure that AI applications align with human values and local contexts, thereby promoting ethical and fair outcomes.

Responsible Practices of Effective AI Implementation in Geospatial Applications

Recent studies have attempted to provide a more holistic perspective on AI, resulting in several new conceptualisations including “responsible AI”, “ethical AI” and “explainable AI” (Yigitcanlar et al. 2021b). Numerous studies have focused on the ethical and responsible dimensions of AI systems, frequently mentioning generic responsibility aspects such as explainability, transparency, scalability, ethics, safety, environmental impact, privacy, data governance and accountability (Pansoni et al. 2023; Fan et al. 2023; Akbarighatar et al. 2023).

This study delves into responsible practices that are crucial for the effective implementation of AI tools in geospatial tasks. In the geospatial domain, most studies that focus on the integration of ML and remote sensing are driven by technical and ethical factors (Li et al. 2023a). For the successful deployment of GeoAI-based systems in human-centric applications such as urban planning, it is imperative to consider aspects such as the reliability, robustness, interpretability, social and ethical implications and sensitivities of these GeoAI methods in planning practices (Falco 2019; Ahmad et al. 2022; Kang et al. 2023). Therefore, this study offers insights into the responsible practices that are most frequently mentioned in the literature and are distinctive to urban decision-making when using AI technologies (Table 2). These insights underscore the importance of responsible AI use in urban planning and decision-making.

Table 2 Responsible practices in effective GeoAI applications derived from the academic literature

Accordingly, participatory approaches and strong multidisciplinary collaborations have been identified as the most crucial factors for successful GeoAI integration (Table 2). A human-in-the-loop approach is necessary for AI-based decision-making, requiring a blend of technical expertise, local knowledge and domain expertise. This approach guides the development of robust and context-appropriate GeoAI solutions and addresses ethical dilemmas, accuracy and accountability issues, while also mitigating risks associated with GeoAI such as potential misinterpretation, the overlooking of social and underlying geographical significance, and privacy and transparency concerns. Furthermore, the review underscores the significance of integrating explainable AI methods to enhance the clarity, interpretability, transparency and trustworthiness of AI solutions for urban decision-making.

In GeoAI, where AI-informed decisions can significantly impact people, it becomes particularly crucial to provide meaningful explanations for model decisions and actions. The findings from the review also highlight the importance of incorporating quality and context-specific data, considering multifaceted ethical considerations, ensuring algorithmic transparency and accountability, implementing strategies for privacy protection, recognising the necessity of capacity building, ensuring robust design and evaluation and exploring more inclusive and human-centred approaches, among others (Table 2). Detailed explanations of each identified responsible AI practice are provided in the subsequent sections.

Engagement and Ethics

Multidisciplinary engagement and consideration of multifaceted ethical aspects throughout the development and deployment stages are necessary for responsible GeoAI applications. Participatory approaches, also known as collaborative, community-based or co-design approaches, involve strong developer-stakeholder interactions in the design of planning support systems, fostering common goals and expectations (Rittenbruch et al. 2022). In the context of geospatial tasks, AI-based tools necessitate a blend of technical expertise, local knowledge and insights from the humanities and social sciences to mitigate potential community risks, particularly for applications with high societal impact (Dufresne-Camaro et al. 2020; Rapp et al. 2023).

Furthermore, the successful integration of AI in geospatial applications necessitates a multifaceted consideration of ethical aspects that extend to the values embedded in AI, the methods of documenting data algorithms, human perceptions, contextual morality, social values and bias among other facets (Kuberkar et al. 2022; Capel & Brereton 2023). A human-centred design of AI assistance tools, prioritizing decision-makers’ requirements over technical availability, facilitates better human-AI decision-making (Lai et al. 2023; Capel & Brereton 2023).

The potential of AI, while significant, is not a substitute for human expertise; rather, it necessitates a human-in-the-loop approach involving active interdisciplinary engagement throughout the full life cycle of AI adoption, from data collection to design to the final output of an algorithmic decision-making system and beyond, to ensure effectiveness (Nisar et al. 2022; Hariri-Ardebili et al. 2023; Abimannan et al. 2023). Furthermore, as AI training without human oversight can lead to harmful outcomes, such as bias and privacy violations, AI-based decision-making necessitates human supervision and diverse perspectives for a well-defined scope and impactful AI adoption (Micheli et al. 2022; Rapp et al. 2023; Li et al. 2023c). For instance, when using AI in cartography or urban map-making, a collaborative approach among cartographers, AI developers and local knowledge holders can address inherent limitations, improve map accuracy and mitigate potential ethical issues (Zhang et al. 2023).

Community science approaches that leverage social networking platforms and mobile apps can be used for comprehensive dataset preparation for AI adoption (Wu et al. 2023). Hence, collaboration across multiple disciplines, locations and communities enhances digital technology outcomes in urban planning and geospatial applications by improving trust, reducing bias that could lead to poor AI decisions in cities, developing robust, context-appropriate solutions and addressing risks associated with generative AI (Falco 2019; Engin et al. 2020; Bae & Xu 2023). Importantly, interdisciplinary collaboration does not necessarily require planners and researchers to have technical know-how.

Instead, they should collaborate with technical experts in the development of AI tools for their specific research needs, considering the significance of domain expertise in guiding GeoAI towards resolving urban issues (Bernasco et al. 2023; Kang et al. 2023). Hence, collaboration and partnerships support funding and resources, address data deficits, enhance transparency, tackle ethical and legal issues and fill knowledge gaps for a more inclusive and meaningful AI adoption (Lepri et al. 2021; Hariri-Ardebili et al. 2023; Schirpke et al. 2023).

Ethical considerations are particularly crucial for geospatial applications and AI in urban cartography, with a focus on bias and trustworthiness (Tingzon et al. 2023; Zhang et al. 2023). The use of co-creative or generative AI tools raises additional ethical concerns, necessitating a shift from automated modelling to AI-assisted data analysis (Rezwana & Maher 2023; Rittenbruch et al. 2022; Rapp et al. 2023). Although GeoAI possesses the capabilities to tackle various geographical and urban issues, it is imperative to resolve ethical dilemmas before deploying these capabilities in practical scenarios (Kang et al. 2023).

Trust in AI, linked to the ethics of algorithms, data and practice, requires sensitivity to local contexts, understanding of decision-making systems and integration of human experience (Kumar et al. 2020; Tingzon et al. 2023). Trustworthy GeoAI considers local needs, values, societal and environmental well-being, diversity, non-discrimination, fairness and ethics (Akbarighatar et al. 2023; Pansoni et al. 2023). This forms the basis of ethically aligned, human-centric AI that upholds justice and fairness (Rezwana & Maher 2023).

Communications and Transparency

Responsible AI adoption in geospatial applications necessitates effective communication of AI methodologies and assurance of transparency and accountability in algorithmic processes. The literature review underscores these factors, highlighting the importance of explainable AI methods, algorithmic transparency and accountability and a comprehensive understanding of inherent biases and limitations. All these elements are crucial to fostering trust and promoting ethical practices. Explainability is a key aspect of trustworthy AI (Pansoni et al. 2023) and is necessary in the development of reliable and interpretable AI models for urban measurement (Abimannan et al. 2023).

The adoption of Explainable AI (XAI) methods enhances the clarity and interpretability of AI solutions for urban decision-making (Akbarighatar et al. 2023). Algorithmic transparency and accountability are fundamental components of trustworthy AI (Pansoni et al. 2023) and hold particular significance in generative AI or human-AI co-creation and in the development and deployment processes of AI systems, thereby enhancing the trustworthiness of AI implementation (Akbarighatar et al. 2023; Rezwana & Maher 2023). Furthermore, the development of trustworthy GeoAI systems and better practical applications necessitates monitoring various model biases and examining the characteristics and limitations of GeoAI approaches (Kang et al. 2023).

In the context of co-creative AI, the necessity of explainability is underscored as it enables users to understand the appropriateness, ethical implications and potential risks associated with the AI models (Rezwana & Maher 2023). Especially in urban applications, model interpretation is a crucial step in examining whether model predictions align with domain sciences (Zhong et al. 2021). Various techniques, including random forests, SHAP, rule-based explanations, model visualisations and hybrid models can enhance the explainability and interpretability of AI models (Tingzon et al. 2023; Fan et al. 2023). In Earth Observation, ML explainability methods include interpretable models, domain knowledge incorporation, feature selection and saliency maps or heat maps (Gevaert et al. 2021).
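As a concrete illustration of one of the techniques named above, the sketch below applies SHAP to a random forest trained on synthetic tabular data; the feature names are hypothetical stand-ins for geospatial indicators rather than any dataset used in the reviewed studies.

```python
# Sketch: post-hoc explanation of a random forest with SHAP on synthetic data.
# Feature names are hypothetical stand-ins for geospatial indicators.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["green_cover", "road_density", "population", "night_lights"]
X = rng.random((500, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)  # synthetic target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)         # efficient for tree ensembles
shap_values = explainer.shap_values(X[:100])  # per-feature contributions
shap.summary_plot(shap_values, X[:100], feature_names=features)
```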

‘Whitebox’ algorithms, such as classification and regression trees, can provide understandable, transparent and interpretable results (Schirpke et al. 2023). Leveraging XAI methods enhances research potential by improving transparency, explainability and interpretability (Kök et al. 2023). However, limitations such as computational intensity, system inefficiency and high cost should be considered in geospatial studies (Tingzon et al. 2023; Kök et al. 2023; Cruz et al. 2023). Effective model communication should cater to a diverse audience, from AI researchers and domain experts to non-experts, providing understandable and interpretable results (Schirpke et al. 2023).
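For contrast with post-hoc explanation, a minimal ‘whitebox’ sketch is shown below: a shallow regression tree, again on synthetic stand-in data, whose decision rules can be printed verbatim for non-expert audiences.

```python
# Sketch: an interpretable-by-design alternative. A shallow regression tree's
# split rules can be printed verbatim, with no post-hoc explainer required.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
features = ["green_cover", "road_density", "population", "night_lights"]
X = rng.random((500, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))  # human-readable rules
```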

The necessity for transparency and accountability extends to the mitigation of potential biases and discriminatory outcomes, with the responsibility being on the AI designers and deployers, especially when such decisions impact communities (Hariri-Ardebili et al. 2023; Akbarighatar et al. 2023). Addressing potential biases in algorithms, enhancing accountability and incorporating solutions such as fairness algorithms, auditing algorithms, explainable algorithms and decentralised AI algorithms are necessary steps to ensure the validity of findings and the transparency of methods in AI systems (Bernasco et al. 2023; Asif et al. 2023).

Transparency in AI also encompasses the logic behind the algorithms and participatory labelling and designing processes, which can mitigate the risk of misconceptions arising from exclusive data labelling by technical experts (Falco 2019; Faßbender 2021). Initiatives such as workshops involving all stakeholders for initial label identification not only enhance the transparency and accountability of the system but also promote an inclusive and participatory system design (Faßbender 2021). Better urban planning and management decisions necessitate a comprehensive understanding of data processing quality (Engin et al. 2020). Consequently, algorithmic transparency for cities involving the documentation and public sharing of algorithms (Falco 2019) and the thorough documentation of inherent biases and limitations of data and training schemes emerge as critical facets of responsible AI practice.

Data and Inputs

The literature underscores the importance of quality, context-specific data management and of data privacy and security for the responsible integration of AI into geospatial applications.

The responsible adoption of AI in urban planning and geospatial applications necessitates the utilisation of high-quality, appropriate and diverse data types that are representative, balanced, timely and unbiased for AI-based urban measurement (Dufresne-Camaro et al. 2020; Akbarighatar et al. 2023; Abimannan et al. 2023). The integration of multiple data sources, including publicly accessible datasets like social media data and satellite images, can enhance the accessibility of AI-generated maps, enable comprehensive analysis and expand dataset availability (Zhang et al. 2023; Tingzon et al. 2023; Lai et al. 2023; Li et al. 2023c). Bias-free training data is vital for GenAI or co-creative AI, and pre-processing of the dataset is necessary for quality results (Rezwana & Maher 2023; Jena et al. 2023). Furthermore, the scalability and high dimensionality of data are important considerations when using AI and deep learning (DL) for urban tasks (Fan et al. 2023).

The data used to train and operate models determine the reliability of the outcome, necessitating the implementation of bias mitigation mechanisms and the assurance of diversity and representativeness in the training data to prevent unintentional harm from decision-making systems (Akbarighatar et al. 2023; Asif et al. 2023). Hence, representative data, rather than simply big data, are key to obtaining robust, powerful ML models (Zhong et al. 2021). The limitations of many AI- and CV-based models can be significantly mitigated by integrating quality and context-specific data (Wu et al. 2023).
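One simple, concrete expression of the representativeness requirement is a stratified split, sketched below on synthetic data with a hypothetical ‘region’ grouping variable; real bias mitigation would of course go well beyond this step.

```python
# Sketch: preserving regional proportions with a stratified split, one simple
# step towards representativeness. Data and the "region" variable are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((1000, 5))                                  # synthetic features
region = rng.choice(["inner", "suburban", "peri-urban"], size=1000)

X_train, X_test, r_train, r_test = train_test_split(
    X, region, test_size=0.2, stratify=region, random_state=42
)
for name, arr in [("train", r_train), ("test", r_test)]:
    vals, counts = np.unique(arr, return_counts=True)
    # Each split preserves the regional proportions of the full dataset.
    print(name, dict(zip(vals, (counts / len(arr)).round(2))))
```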

Furthermore, addressing privacy concerns is a key aspect of developing strategies for the responsible adoption of AI in geospatial tasks. Particularly in the domain of geospatial studies, ethical AI necessitates a significant local aspect and consideration of privacy concerns for gaining public trust (Micheli et al. 2022). For urban analysis based on CV technologies, privacy concerns present a significant challenge, making the development of privacy-preserving AI strategies essential for sustainable solutions (Dufresne-Camaro et al. 2020). Privacy and data governance are prerequisites for trustworthy AI and are particularly crucial in data collection and usage for co-creative AI or generative AI (Pansoni et al. 2023; Rezwana & Maher 2023). Solutions such as establishing data-sharing protocols, data anonymisation and collaborations can ensure confidentiality, facilitate responsible data governance and enhance privacy and security (Hariri-Ardebili et al. 2023; Schirpke et al. 2023).
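As an illustration of one anonymisation option mentioned above, the following sketch applies a simple ‘donut’ geomasking step that displaces point coordinates by a random distance within a ring before sharing; the radii and coordinates are illustrative assumptions, not recommended values.

```python
# Sketch: 'donut' geomasking, displacing a point by a random bearing and a
# random distance within a ring before release. Radii are illustrative only.
import numpy as np

def donut_mask(lat, lon, r_min_m=100.0, r_max_m=500.0, rng=None):
    """Return a masked (lat, lon) displaced by r_min_m..r_max_m metres."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0, 2 * np.pi)
    dist = rng.uniform(r_min_m, r_max_m)
    dlat = dist * np.cos(theta) / 111_320  # ~metres per degree of latitude
    dlon = dist * np.sin(theta) / (111_320 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon

print(donut_mask(-27.4698, 153.0251))  # hypothetical point in Brisbane
```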

AI Design and Operation

The literature emphasises the importance of robustness and context-appropriateness in AI-based designs, the necessity for robust validation of automated tasks and the significance of understanding the broader impacts of AI for responsible GeoAI design and operation. These elements are crucial as they ensure the reliability of the outcomes and effectiveness of AI systems, evaluate the model appropriateness, foster trust among users and mitigate potential negative impacts on society.

Model designers need to collaborate with field experts to create robust, context-specific solutions, thereby ensuring responsible AI use and enhancing the system’s reliability and effectiveness across various conditions and contexts (Dufresne-Camaro et al. 2020). A trustworthy AI model should consider various factors, including the rationale behind the task, suitability, technical robustness, safety and scalability (Kumar et al. 2020; Fan et al. 2023; Pansoni et al. 2023; Lai et al. 2023). For instance, it is necessary to robustly examine how well generative AI represents accurate and place-specific contexts for trustworthy AI (Jang et al. 2023).

The development of scalable, flexible AI architectures is crucial for GeoAI to effectively manage increasing data quantities and expanding, evolving urban conditions (Abimannan et al. 2023). Furthermore, regardless of whether one chooses to train one’s own models, which allows greater customisation but requires field expertise and sufficient labelled data, or to use pre-trained models, which offer faster implementation and reduced costs but may still require customisation, ensuring the context-appropriateness of AI-based designs remains a critical aspect of responsible AI adoption.

Furthermore, validation protocols that incorporate human involvement and real-world validations are essential for successful GeoAI implementation (Tingzon et al. 2023). Robust validation of automated tasks in geospatial analysis necessitates not only technical expertise but also the input from target groups, including communities and institutions (Faßbender 2021). The process also requires continuous impact assessments and periodic evaluations of AI decisions (Akbarighatar et al. 2023). Insightful feedback and constructive criticism are pivotal in tackling the ethical issues associated with co-creative or generative AI (Rezwana & Maher 2023).

Robust validation of automated geospatial tasks can be achieved through human reviews, feedback loops, evaluation, verification, validation and accreditation of models (Zhong et al. 2021; Alfrink et al. 2023; Lucchi 2023). Geospatial tasks or AI-assisted tasks with significant societal impact should involve and be monitored by experienced professionals, rather than being fully automated (Rapp et al. 2023). This includes an evaluation of impacts that extends beyond the confines of ML performance metrics (Tingzon et al. 2023). It is also crucial to understand the broader impacts of AI and comprehend the potential societal, environmental and economic outcomes (Yigitcanlar et al. 2021b).
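A minimal sketch of such a human-in-the-loop gate is shown below: model outputs under a confidence threshold are routed to an expert review queue instead of being acted on automatically. The threshold, record structure and labels are illustrative assumptions.

```python
# Sketch: a human-in-the-loop gate. Predictions below a confidence threshold
# are queued for expert review instead of being acted on automatically.
from dataclasses import dataclass

@dataclass
class Prediction:
    feature_id: str
    label: str
    confidence: float

def triage(predictions, threshold=0.85):
    """Split predictions into auto-accepted and human-review queues."""
    accepted, review_queue = [], []
    for p in predictions:
        (accepted if p.confidence >= threshold else review_queue).append(p)
    return accepted, review_queue

preds = [Prediction("bldg-001", "residential", 0.97),
         Prediction("bldg-002", "informal settlement", 0.62)]
auto, review = triage(preds)
print(f"auto-accepted: {len(auto)}, flagged for expert review: {len(review)}")
```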

Strategic Deployment

Data scarcity, financial constraints and skill gaps are significant barriers to the implementation of GeoAI (Tingzon et al. 2023; Rezwana & Maher 2023). Overcoming these challenges requires strategic planning capabilities within organisations, which are critical for the successful adoption and execution of AI (Akbarighatar et al. 2023). The strategic use of AI, aimed at enhancing the value of geospatial tasks for all stakeholders, is characterised by a strong emphasis on co-design, fostering robust interactions between developers and stakeholders throughout the development process of Planning Support Systems, with a focus on value uplift and value capture (Rittenbruch et al. 2022).

To effectively leverage AI solutions, it is imperative that not only system design experts but also planners and policymakers possess a comprehensive understanding of AI systems. This necessitates the promotion of AI literacy and ethical awareness among non-experts and addressing knowledge gaps through increased access to data, training, user education and the sharing of knowledge and research (Schirpke et al. 2023; Lucchi 2023; Bae & Xu 2023; Rezwana & Maher 2023).

Furthermore, the progression of ML and the increasing computational capabilities make it crucial for urban practitioners to be adequately prepared to utilise these tools effectively (Zhong et al. 2021). An effective participatory approach in the urban sector requires equipping citizens with the basic skills and knowledge to engage in dialogues about public AI system outcomes, and professionals need capacity building for accountability and effective engagement (Falco 2019; Alfrink et al. 2023).

Organisational strategies are particularly important for developing GeoAI in data-scarce and low-resource settings (Tingzon et al. 2023; Li et al. 2023c; Alfrink et al. 2023; Nassar et al. 2023). These strategies include necessary changes in laws, regulations and policies before technological alterations and the development of GeoAI solutions that leverage advances such as affordable cloud computing, accessible sensor technology and the continuous growth of geospatial data (Henman 2020; Akbarighatar et al. 2023).

The responsible use of AI depends on the values that are incorporated into the problem formulation (Doorn 2021); hence, a comprehensive understanding of these values is crucial for the impactful implementation of AI in geospatial applications. Particularly in the context of GenAI, the creation of legal and policy guidelines and regulations can mitigate negative impacts (Bae & Xu 2023). Multidisciplinary collaboration is a viable solution for managing regulatory and ethical considerations, as it facilitates the development of guidelines and standards (Hariri-Ardebili et al. 2023). Governance strategies, including continuous performance monitoring, user feedback and improvement updates for optimisation, ensure the sustainability of GeoAI implementation in cities (Shaamala et al. 2024). It is of utmost importance that the outcomes of these AI systems are beneficial to all, contributing to societal and environmental well-being (Micheli et al. 2022; Pansoni et al. 2023).

Additionally, drawing upon the Responsible Innovation Technology (RIT) assessment framework proposed by Li et al. (2023b), the responsible practices identified through this study can be classified into five primary factors of responsible technology, as depicted in Figure 5. It is noteworthy that all aspects of responsible technology outlined in Li et al.’s (2023b) framework align with and can be traced back to the responsible practices identified through this review, validating the comprehensiveness of the study in the context of responsible technology practices.

Figure 5. Categorisation of responsible practices derived from Li et al. (2023b)

Word Frequency Analysis

The qualitative word frequency analysis, conducted using NVivo software, highlighted the most frequent words within the document content. This content was collected during the literature review, providing valuable insights into prevalent themes and topics. The analysis involved creating a word cloud with the 100 most frequent words and performing a word cluster analysis based on word similarity for the 30 most frequent words. In conjunction with the manual literature review, this analysis yielded confirmatory insights into the dominant themes prevalent in the literature and contributed to the development of the conceptual framework (Figure 9).
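For readers without NVivo, a rough equivalent of the frequency step can be sketched in standard Python, as below; the corpus path and stop-word list are assumptions, and NVivo’s stemming and synonym grouping are not replicated.

```python
# Sketch: word frequencies over the collected corpus with standard Python.
# The corpus path and stop-word list are assumptions; NVivo's stemming and
# synonym grouping are not replicated here.
import re
from collections import Counter

with open("review_corpus.txt", encoding="utf-8") as f:  # hypothetical corpus
    text = f.read().lower()

stopwords = {"the", "and", "of", "in", "to", "a", "for", "is", "on", "with"}
tokens = [t for t in re.findall(r"[a-z]{3,}", text) if t not in stopwords]

for word, count in Counter(tokens).most_common(100):  # 100 most frequent words
    print(word, count)
```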

Much like the manual synthesis of literature, the word frequency analysis highlighted key themes such as humans, data, model design, explainability and interpretability, ethics, transparency and accountability, biases, community, privacy, collaboration and trustworthiness, among many others (Figure 6). Figure 7 illustrates the generated clusters of key themes, namely explainability and interpretability, privacy, human-centric design approach, algorithmic transparency and accountability, model and technological aspects, geospatial data, ethics and biases and result evaluation.

Figure 6. Word frequency analysis: responsible practices in AI adoption in geospatial application derived from academic literature

Figure 7. Word cluster analysis: responsible practices in AI adoption in geospatial application derived from academic literature

Ultimately, the findings underscore that, for the adoption of AI in geospatial tasks, factors such as a human-centric approach, the ability to explain and interpret model outputs, the ethics, transparency and accountability of algorithmic processes, robust model design, the quality and availability of data, stakeholder engagement, comprehension of the limitations and capacity of automated tasks, and the fairness of outcomes all play crucial roles in achieving reliable and effective results.

Synthesis of Grey Literature on Effective GeoAI Applications

This section integrates insights from the grey literature (refer to “Word Frequency Analysis and Grey Literature Review”) to explore responsible practices for the deployment of GeoAI. GeoAI, leveraging CV and ML as cornerstones, is at the forefront of spatial analytics, particularly in large-scale image analysis for built environment analysis and the extraction of information from geospatial data (Li & Hsu 2022; Deep Block 2023; Marasinghe et al. 2024). However, the application of GeoAI involves a strategic process and numerous ethical and social considerations that must be addressed throughout the entire process. Therefore, a framework for responsible AI applications is necessary to ensure the effective and ethical deployment of these technologies. The grey literature, which includes technical reports, policy documents and other non-peer-reviewed sources, provides practical insights that complement the findings from the scholarly literature. Table 3 summarises the key aspects identified in the grey literature that are essential for responsible AI applications in geospatial studies.

Table 3 Considerations for responsible GeoAI applications derived from grey literature

As demonstrated in Table 3, the criteria identified through the grey literature review align with the responsible factors identified in the scholarly literature review, underscoring the importance of these factors in ensuring the ethical and effective deployment of AI in geospatial applications. This alignment suggests a common set of responsible considerations that should be taken into account.

The responsible practices identified through the scholarly literature review bear relevance at various stages of AI adoption. This analysis, illustrated in Figure 8, shows that participatory approaches and ethical considerations are integral factors that warrant consideration throughout the entire process of AI application in geospatial studies. The insights derived from the analysis contributed to the development of the responsible GeoAI conceptual framework (Figure 9).

Figure 8. Mapping responsible aspects to stages of AI adoption derived from grey literature

Figure 9. Framework for responsible GeoAI, derived from academic and grey literature

Findings and Discussion

Key Findings

The findings derived from the literature review reveal several key lessons for the responsible and effective implementation of GeoAI. These lessons underscore the importance of a participatory approach that incorporates diverse stakeholders, including local communities and domain specialists, into the design and deployment of GeoAI systems. This approach is essential for ensuring that GeoAI systems are tailored to the specific needs, priorities and contexts of the communities they are intended to serve.

Another crucial lesson is the need for enhanced model interpretability, which refers to the ability of GeoAI systems to provide clear and meaningful explanations for their decisions and actions. This is particularly important in the context of urban planning and geospatial applications, where the consequences of AI-informed decisions can have significant impacts on people’s lives and livelihoods. The importance of utilising high-quality and context-appropriate data in GeoAI systems is also emphasised in the literature. This includes ensuring the data are valid, representative and free from biases or errors that could compromise the performance and reliability of GeoAI systems.

Furthermore, the literature review highlights the need to account for a broad spectrum of ethical considerations in the development and deployment of GeoAI systems. This includes considerations of algorithmic transparency and accountability, data privacy and context awareness. A clear and precise problem framing stage, a robust evaluation and validation process, addressing implementation gaps and knowledge gaps are also necessary for responsible and successful implementation of GeoAI.

The subsequent sections provide a more detailed discussion and elaboration of these findings.

  • Comprehending the Complexities of GeoAI Applications Is Crucial

The effective application of GeoAI necessitates a comprehensive understanding of the complexities and challenges associated with these technologies. This understanding is not just about the technical aspects but also includes an appreciation of the various ethical considerations and data constraints that can significantly impact the performance and effectiveness of GeoAI systems. Notable issues include misinformation, potential bias, privacy, transparency, accountability and the quality and availability of relevant data (Araujo et al. 2020; Jang et al. 2023; Du et al. 2023). Additionally, complexities arising from model opacity (Tingzon et al. 2023), knowledge gaps (Rapp et al. 2023), methodological limitations (Rittenbruch et al. 2022) and technical challenges (Schirpke et al. 2023) need to be comprehended. Understanding and strategising to address these issues is crucial to harness the full potential of GeoAI while minimising potential risks and harms.

  • Collaboration and a Human-Centric Approach Hold the Key

In the context of geospatial applications, collaboration is not just beneficial but essential, as these applications require not only technical knowledge but also an understanding of complex spatial interactions. The literature underscores participatory approaches as the most significant factor for effective geospatial applications, as they enhance trust and mitigate bias in AI decision-making in cities (Falco 2019). They foster interactions between developers and stakeholders, incorporating a human-in-the-loop methodology that amalgamates technical expertise with local and domain-specific knowledge (Dufresne-Camaro et al. 2020). Hence, multidisciplinary participation in GeoAI for real-world algorithmic decision-making not only bolsters resources but also ensures reliable and context-specific designs and outcomes; addresses data deficits, transparency, ethical and legal issues and knowledge gaps; and tackles inherent limitations while promoting inclusive AI adoption (Lepri et al. 2021; Hariri-Ardebili et al. 2023; Schirpke et al. 2023). Furthermore, effective GeoAI applications require a user-focused design and evaluation approach, which necessitates sensitivity to local contexts, the values embedded in problem formulation and the integration of human knowledge (Tingzon et al. 2023; Pansoni et al. 2023). Human-centric approaches are therefore indispensable in the field of GeoAI, as they ensure its ethical and effective use.

  • Clarity and Interpretability of AI Solutions Are Necessary for Urban Decision-Making

Explainability is vital for developing reliable AI models for geospatial analysis (Abimannan et al. 2023), as it demystifies AI models, making them understandable to a diverse audience, including AI researchers, domain experts and non-experts (Kök et al. 2023; Schirpke et al. 2023). It aids in understanding the appropriateness of AI models, their ethical implications and potential risks in geospatial applications (Rezwana & Maher 2023). Model interpretation is necessary in geospatial applications for aligning predictions with domain sciences (Zhong et al. 2021). Hence, utilising XAI methods provides insights into how AI models make decisions, thereby enhancing transparency, trust and acceptance among users.

  • Data Serves as the Foundation

In the context of responsible AI adoption in geospatial applications, the quality, diversity and context-appropriateness of data are paramount (Dufresne-Camaro et al. 2020; Akbarighatar et al. 2023). The reliability of GeoAI analysis outcomes hinges on high-quality, appropriate and diverse data that are representative, balanced and free of bias (Akbarighatar et al. 2023; Asif et al. 2023), and the limitations of many AI- and CV-based models can be significantly mitigated by integrating quality, context-specific data (Wu et al. 2023). Because the data used to train and operate models determine the quality and reliability of the outcomes, bias-mitigation mechanisms must be implemented and the representativeness, accuracy and diversity of the training data assured, so that errors or biases do not compromise the AI system's performance or cause unintentional harm through decision-making systems (Akbarighatar et al. 2023; Asif et al. 2023).
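As a minimal illustration of what such checks might look like in practice, the sketch below audits a hypothetical training dataset for missing values, class imbalance and geographic representativeness before any model is trained. The column names, thresholds and data are assumptions for demonstration, not prescriptions from the reviewed literature.

```python
# Illustrative pre-training data audit for a hypothetical GeoAI dataset.
# Column names, thresholds and data are invented for demonstration only.
import pandas as pd

def audit_training_data(df: pd.DataFrame,
                        label_col: str = "land_use",
                        region_col: str = "district",
                        min_class_share: float = 0.10,
                        min_region_share: float = 0.05) -> list:
    """Return warnings about missing values, class imbalance and
    under-represented geographic regions."""
    warnings = []

    # 1. Missing values can silently bias a model towards complete records.
    missing = df.isna().mean()
    for col, share in missing[missing > 0].items():
        warnings.append(f"{share:.1%} missing values in '{col}'")

    # 2. Severe class imbalance degrades minority-class performance.
    class_share = df[label_col].value_counts(normalize=True)
    for cls, share in class_share[class_share < min_class_share].items():
        warnings.append(f"class '{cls}' is only {share:.1%} of labels")

    # 3. Geographic representativeness: every district should contribute
    #    a minimum share of samples, else results may not generalise.
    region_share = df[region_col].value_counts(normalize=True)
    for region, share in region_share[region_share < min_region_share].items():
        warnings.append(f"district '{region}' under-represented ({share:.1%})")

    return warnings

# Hypothetical usage with a deliberately flawed dataset:
df = pd.DataFrame({
    "land_use": ["residential"] * 90 + ["industrial"] * 10,
    "district": ["north"] * 60 + ["south"] * 38 + ["east"] * 2,
    "ndvi": [0.4] * 99 + [None],
})
for warning in audit_training_data(df):
    print("WARNING:", warning)
```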

  • Multifaceted Ethical Implications Must Be Considered

Effective GeoAI applications necessitate a multifaceted consideration of ethical aspects, ensuring responsible technology deployment, fostering trust and ensuring that local contexts and societal values are honoured in spatial analysis and decision support. This includes the values embedded in AI, human perceptions, contextual morality, social values, data privacy and the monitoring of bias and trustworthiness (Kuberkar et al. 2022; Capel & Brereton 2023). Algorithmic transparency and accountability are fundamental for trustworthy GeoAI (Pansoni et al. 2023), and privacy and data governance are prerequisites when handling sensitive location data, human face data or other personal information, especially during data collection and when addressing the potential ethical concerns raised by generative AI (Pansoni et al. 2023; Rezwana & Maher 2023).
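As one concrete example of the kind of geoprivacy safeguard this implies, the sketch below applies a simple "donut" geomask, displacing each sensitive point location by a random distance within a bounded ring before the data are shared. The radii and coordinates are illustrative assumptions; production systems would pair such masking with formal data governance and, where appropriate, stronger guarantees such as differential privacy.

```python
# Illustrative 'donut' geomasking of sensitive point locations.
# Radii and coordinates are assumptions for demonstration only.
import math
import random

EARTH_RADIUS_M = 6_371_000.0

def donut_geomask(lat, lon, r_min_m=100.0, r_max_m=500.0, rng=None):
    """Displace a point by a random distance in [r_min_m, r_max_m] in a
    uniformly random direction, so the exact location cannot be recovered
    while broad spatial patterns are preserved."""
    rng = rng or random.Random()
    # Sampling sqrt of a uniform squared radius makes displaced points
    # uniform over the ring's area; bearing is uniform in [0, 2*pi).
    distance = math.sqrt(rng.uniform(r_min_m ** 2, r_max_m ** 2))
    bearing = rng.uniform(0, 2 * math.pi)

    # Small-displacement approximation: convert metres to degrees.
    dlat = (distance * math.cos(bearing)) / EARTH_RADIUS_M
    dlon = (distance * math.sin(bearing)) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# Hypothetical usage: mask a survey respondent's home location.
masked = donut_geomask(-27.4698, 153.0251, rng=random.Random(7))
print(f"masked location: {masked[0]:.5f}, {masked[1]:.5f}")
```

The minimum radius is the privacy-critical parameter: it guarantees a floor on how close the published point can be to the true one.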

  • AI Literacy and Organisational Strategies Are Required for Effective Implementation

The rapid progression of ML and AI technologies necessitates AI literacy and effective organisational strategies for successful implementation (Zhong et al. 2021; Rezwana & Maher 2023). Addressing the knowledge gap among planners and urban practitioners is vital in this respect, as are organisational strategies for tackling data scarcity and financial constraints (Tingzon et al. 2023). Sustainable GeoAI implementation is further supported by making the necessary changes to standards, laws and policies, developing GeoAI solutions, promoting knowledge and data sharing, expanding infrastructure and fostering multidisciplinary collaboration.

  • Contextual Appropriateness and Robust Validation Are Vital for GeoAI

In the domain of GeoAI, the significance of contextual appropriateness and robust validation, coupled with continuous impact assessments and constructive feedback, cannot be overstated (Akbarighatar et al. 2023; Rezwana & Maher 2023). Developing a robust, context-specific AI model requires attention to the specific characteristics and requirements of the geospatial domain under study. Contextual appropriateness ensures that AI systems are finely tuned to specific geographical contexts, thereby enhancing their reliability and effectiveness, while robust validation rigorously tests AI performance in real-world scenarios, drawing upon technical expertise, stakeholder engagement and real-world validations (Jang et al. 2023; Tingzon et al. 2023), thereby fostering trust in these systems; one validation practice suited to geographic data is sketched below. Collectively, these elements form the foundation of accurate and effective GeoAI adoption.
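One widely used way to make validation sensitive to geographic context is spatial (group-based) cross-validation: models are scored on districts withheld entirely from training, giving a more honest estimate of performance in unseen areas than a random split, which lets spatially autocorrelated samples leak between folds. The sketch below illustrates this under assumed data; the feature names, target and district labels are invented.

```python
# Illustrative spatial cross-validation: each fold withholds whole
# districts, so scores reflect generalisation to unseen areas rather
# than interpolation within already-seen ones. Data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 300

# Hypothetical features and target for parcels across five districts.
X = rng.normal(size=(n, 4))            # e.g. NDVI, density, height, slope
y = X @ np.array([1.5, -0.7, 0.3, 0.0]) + rng.normal(0, 0.5, n)
districts = rng.integers(0, 5, n)      # group label per parcel

model = RandomForestRegressor(n_estimators=100, random_state=0)

# GroupKFold guarantees no district appears in both train and test folds.
spatial_cv = GroupKFold(n_splits=5)
scores = cross_val_score(model, X, y, groups=districts,
                         cv=spatial_cv, scoring="r2")
print("per-district-fold R^2:", np.round(scores, 3))
```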

Conceptual Framework for Responsible GeoAI

As an outcome of the comprehensive analysis and insights derived from both academic and grey literature, the authors have endeavoured to construct a responsible GeoAI framework. The findings underscore the pivotal role of human-centric or participatory approaches and multidisciplinary collaboration for impactful AI applications in geospatial tasks. This is because local and domain-specific knowledge is as crucial as technical expertise, and active stakeholder engagement is key to success at any stage.

Given that urban AI involves location-specific data and human elements, ethics and geoprivacy emerge as important principles of responsibility. Moreover, the necessity of clear problem formulation, robust data and model design, coupled with explainable models, algorithmic transparency and accountability, a rigorous validation and evaluation process and governance for improvement, is emphasised for ensuring reliable outcomes from AI adoption for geospatial decision-making.

The resulting responsible GeoAI framework is depicted in Figure 9. The detailed exploration of each responsible factor’s significance, as presented throughout the “Results” section, underscores the comprehensive and multifaceted approach necessary for the effective implementation of AI in urban-related applications.

Conclusion

This study has shed light on the ethical and responsible aspects of AI, particularly in the context of urban planning and geospatial applications. It culminated in the development of a comprehensive framework for responsible GeoAI, following a systematic literature review in accordance with the PRISMA protocol and a review of grey literature. The emergence of new conceptualisations such as “responsible AI” and “ethical AI” (Yigitcanlar et al. 2021a) has underscored the importance of key considerations such as explainability, transparency, scalability, ethics, safety, environmental impact, privacy, data governance and accountability, among others (Pansoni et al. 2023; Fan et al. 2023). There is a growing consensus around responsible AI principles that underscore the necessity for domain-specific perspectives that address the ethical challenges and prospects inherent in different application areas (Faßbender 2021; Pansoni et al. 2023).

While this review study contributes to the literature by identifying overarching responsible practices that can culminate in impactful and responsible GeoAI applications, it is imperative to acknowledge its limitations: it is primarily confined to papers sourced from the Scopus database, supplemented by a selection of grey-literature documents and webpages. Consequently, the study does not encompass detailed technical aspects or other practical strategies, such as best practices for data collection and model development.

The literature outlines data and resource constraints and ethical considerations as primary challenges in GeoAI applications. Additionally, the opacity of AI models, knowledge and skill gaps and systematic limitations of AI adoption for urban decision-making pose significant concerns. Potential conflicts in values and technical challenges also present barriers. Strategies to address these issues are essential for the responsible integration of AI in the geospatial domain.

This study has proposed a comprehensive framework for responsible GeoAI derived from academic and grey literature, providing a roadmap for the ethical and responsible integration of AI technologies in geospatial applications. This framework encompasses various clusters of responsible practices, offering guidance on incorporating human-centric approaches, leveraging explainable AI methods, ensuring data quality and privacy, designing robust AI models and deploying AI strategically. The framework emphasises the importance of collaboration, participatory approaches and partnerships as key factors for effective geospatial implementation. It advocates for co-design approaches that are instrumental in addressing knowledge and resource limitations, improving accuracy, facilitating the development of context-appropriate solutions and mitigating potential ethical issues associated with geospatial tasks, thereby enhancing the effectiveness of geospatial applications. For instance, Visan and Mone (2023) showcase the practical use of participatory GeoAI, emphasising the value of interdisciplinary collaboration and human-in-the-loop processes. Furthermore, the UN’s 2030 Agenda (Scott 2015) and recent studies (Zaman et al. 2023; De Sabbata et al. 2023) highlight the need for closer collaboration in GeoAI to address challenges and ensure responsible use of GeoAI models.

The framework underscores the importance of implementing XAI methods, which enhance the clarity, interpretability, transparency and trustworthiness of AI solutions for urban decision-making. Furthermore, the integration of high-quality, context-specific data is essential for ensuring the accuracy, reliability and relevance of GeoAI outcomes. Ethical considerations and human-centric approaches are also critical in the development and deployment of GeoAI. This includes attention to issues of algorithmic transparency and accountability, data privacy and security, and the potential impacts of GeoAI on individuals, communities and the environment. Addressing knowledge and skill gaps, along with organisational strategies, can help bridge the implementation gap of GeoAI. Additionally, a robust and context-specific design, a thorough validation process, bias and limitation identification, impact assessment and a strategic framework are among the responsible practices mentioned in the literature for effective GeoAI applications.

The framework for responsible GeoAI recognises the unique dimensions and challenges of applying AI for geospatial decision-making and emphasises the need for a deep understanding of local context, geographical complexities, values, societal and environmental well-being and ethics (Pansoni et al. 2023). Domain knowledge is crucial in directing GeoAI studies towards effective urban problem-solving (Bernasco et al. 2023; Kang et al. 2023). Without such consideration, GeoAI-based perceptions could result in inaccurate and unethical outcomes in spatial analysis (Kang et al. 2023), and the implementation of GeoAI as a data-driven approach alone can be misleading (Kovacs-Györi et al. 2020). The framework, while rooted in theoretical constructs, serves as a guide for the effective implementation of GeoAI. Insights from practitioners will be invaluable in refining and enhancing the framework.

The validation through verified application cases strengthens the framework, offering practical illustrations of its potential to guide the ethical and responsible integration of GeoAI in real-world scenarios. For instance, a recent urban planning study by Gan et al. (2024) utilised generative AI to autonomously generate urban design schemes. The study ensured the AI model was trained on diverse and representative data to minimise biases in the predictions, and that the model’s predictions were explainable and interpretable, fostering trust among stakeholders and facilitating their understanding of the model’s outputs. Furthermore, GeoAI research by Cheng et al. (2023) and Purbahapsari and Batoarung (2022) explores technical and practical considerations such as scalability, computational efficiency, data privacy, ethical considerations and the quality and availability of diverse training datasets. These studies highlight the need for multidisciplinary expertise, verification of data and validation of results, adequate resources, and a policy and legal basis for GeoAI implementation. The study by Mahmood (2022) provides additional insights into the drivers of GeoAI adoption, such as quality datasets, access to infrastructure and technology, localisation of GeoAI models, cross-cutting collaboration, a skilled workforce and a flexible regulatory framework for GeoAI. These examples not only validate the paper’s objective of providing a comprehensive and actionable framework for responsible AI adoption in the geospatial domain but also offer readers a practical understanding of how the framework can be applied in various contexts.

In conclusion, the framework for responsible GeoAI offers foundational principles for the ethical and responsible integration of AI in geospatial applications. This framework is intended to inspire and guide future research and practice. Future research could benefit from a more detailed exploration of practical strategies for implementing GeoAI, as well as insights from practitioners who have successfully deployed GeoAI technologies. Such investigations would identify best practices, common challenges and potential solutions applicable across a variety of urban planning and geospatial contexts (Yigitcanlar et al. 2016). Such a contribution would provide valuable insights and guidance for practitioners, policymakers and researchers, further contributing to the development and application of GeoAI, enhancing the outcomes of digital technology adoption and fostering public trust.