Dear Editor,

I am writing to provide a critique and analysis of the recent paper titled “Will ChatGPT/GPT-4 be a Lighthouse to Guide Spinal Surgeons?” by He et al. [1]. While the authors discuss potential applications of OpenAI’s advanced large language model GPT-4 and its chatbot-style interface ChatGPT in spinal surgery, it is crucial to examine the broader implications of technological determinism, biases, challenges in surgical domains, access and equity issues, cost implications, and global disparities in technology adoption within the context of clinical medicine and the biomedical enterprise.

Firstly, it is essential to address the issue of algorithmic bias in AI applications. Algorithms, including language models like GPT-4, can inherit and perpetuate biases present in the data they are trained on. These biases may disproportionately impact marginalized populations, leading to unequal healthcare outcomes [2, 3]. It is therefore crucial to evaluate and mitigate biases thoroughly to ensure fair and equitable treatment for all patients. For example, Ahsen et al.'s study on breast cancer diagnosis demonstrated that biases inherited by algorithms from human-generated data can diminish performance [4]. In the same study, however, a bias-aware algorithm integrated into a clinical decision support system mitigated the adverse impact of bias, improving the accuracy of decisions based on mammography. Specific examples of algorithmic bias relevant to spinal surgery must be explored to assess their potential impact on diagnostic accuracy, treatment recommendations, and surgical decision-making; once such biases are identified, mitigating them in AI applications, including language models like GPT-4, is essential to ensuring equitable care.
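To make the bias-mitigation point concrete, consider a minimal sketch of one well-known preprocessing technique, reweighing, in which training samples are weighted so that group membership and outcome label become statistically independent. This is an illustration in the spirit of bias-aware approaches generally, not the specific algorithm of Ahsen et al.; the data and group labels below are purely hypothetical.

```python
# Minimal sketch of the "reweighing" bias-mitigation technique.
# Hypothetical example only: groups, labels, and data are invented.
from collections import Counter


def reweigh(groups, labels):
    """Assign each sample a weight w(g, y) = P(g) * P(y) / P(g, y),
    so that group and label are independent in the weighted data."""
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]


# Hypothetical training data: group A has positive labels far more often.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweigh(groups, labels)


def weighted_positive_rate(g):
    """Positive-label rate within group g under the computed weights."""
    num = sum(w for grp, y, w in zip(groups, labels, weights) if grp == g and y == 1)
    den = sum(w for grp, y, w in zip(groups, labels, weights) if grp == g)
    return num / den
```

After reweighing, the weighted positive rate is identical across the two groups (0.5 for both A and B in this toy example), so a model trained on the weighted data no longer sees the spurious group-outcome association. Analogous corrections could, in principle, be applied to training data for spinal-surgery decision support, though doing so presupposes that the relevant patient subgroups and biased associations have first been identified.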

Furthermore, when considering the integration of AI technologies in surgical domains, unique challenges arise. Surgical procedures require real-time decision-making, precise manual dexterity, and adaptability to unforeseen circumstances. While ChatGPT/GPT-4 may assist with information retrieval and decision support, its ability to account for dynamic surgical situations and respond appropriately remains unverified. The limitations and challenges faced by AI in surgical domains should be acknowledged, including the need for rigorous validation, addressing technical limitations, and ensuring seamless integration with existing surgical workflows (Table 1).

Table 1 Specific examples of potential challenges in integrating ChatGPT/GPT-4 into spinal surgery practice, mapped to five main considerations: Algorithmic Bias, Surgical Domain Challenges, Access and Equity, Global Disparities, and Technological Determinism. The table illustrates each potential issue, its implications, and how it may manifest in the context of spinal surgery.

The discussion of ChatGPT/GPT-4 in spinal surgery cannot overlook access and equity issues. Implementing AI-driven technologies requires robust infrastructure, adequate training, and financial resources [5]. However, access to such resources may be limited in certain regions or healthcare settings, exacerbating healthcare disparities. The potential cost implications associated with the adoption and maintenance of AI systems, as well as training healthcare professionals, should be carefully considered to ensure equitable access for all patients, regardless of socioeconomic status or geographical location.

Global access to advanced technologies like ChatGPT/GPT-4 faces significant obstacles. Technological advancements are often concentrated in high-income regions and prestigious institutions, creating a digital divide that hampers equal access to cutting-edge tools. Developing countries and underserved communities may struggle to overcome barriers related to infrastructure, funding, and expertise. Efforts should be directed toward promoting global collaboration, technology transfer, and capacity building to address these disparities and foster equitable healthcare delivery.

The notion of technological determinism in clinical medicine and the biomedical enterprise must also be critically examined. While AI technologies like ChatGPT/GPT-4 hold promise, they should not be viewed as a panacea for all healthcare challenges. Human judgment, experience, and the doctor-patient relationship remain integral to medical practice. An uncritical, deterministic view of technology can undermine the essential role of healthcare professionals, fostering overreliance on AI systems and neglect of critical aspects of patient care that require human expertise and compassion [6].

In conclusion, I urge a comprehensive evaluation of the potential implications of ChatGPT/GPT-4 in spinal surgery, considering algorithmic biases, challenges in surgical domains, access and equity issues, cost implications, global disparities, and the impact of technological determinism. Responsible integration of AI technologies in healthcare should prioritize patient well-being, address biases, ensure equitable access, and maintain the essential role of healthcare professionals in decision-making and patient care.

Sincerely,

Aaron Lawson McLean