Introduction

The healthcare industry is in the midst of a transformation, driven by the rising cost of health care and the resulting scarcity of trained experts. As a result, the industry is attempting to integrate new IT-based technologies and processes that may cut costs and offer solutions to these growing difficulties [1].

Accessibility, high costs, waste, and an aging population are just a few of the numerous difficulties confronting the world's healthcare systems. During pandemics such as the coronavirus disease (COVID-19), healthcare systems are stressed, leading to problems such as insufficient protective equipment, insufficient or erroneous diagnostic tests [2], overworked physicians, and a lack of information exchange. More critically, a healthcare crisis like COVID-19, or the emergence of the human immunodeficiency virus (HIV) in the 1980s, exposes the flaws in our healthcare systems. When crises exacerbate existing difficulties [3], such as uneven access to treatment, a lack of on-demand services, unreasonably high costs, and a lack of price transparency, we may envision and implement new systems of care and administrative support for healthcare [4].

When tackling these issues, we must keep in mind that they are interdependent and that access to healthcare is difficult partly because it is delivered through complex networks. This is not to say that providing high-quality healthcare is simple, but it does imply that we have some alternatives [5] for creating simpler mechanisms that will offer better care and benefit everyone. Machine learning (ML) is a technique used in healthcare systems to assist medical practitioners in patient care and clinical data management. It is an application of artificial intelligence (AI), in which computers are programmed to imitate how humans think and learn. AI has the potential to play a critical role in simplifying healthcare systems and advancing medical research, and it is increasingly embedded in medical care delivery systems. The COVID-19 crisis exemplifies its potential uses: diagnostics [6], treatment choices, and communication are just a few of the many applications adopting AI-powered technologies [7, 8].

Artificial intelligence has the potential to make substantial progress toward the goal of making healthcare more personalized, predictive, preventative, and interactive [9]. We believe AI will continue on its present path and ultimately become a mature and effective tool for biology [10]. The remainder of this paper concentrates on the most essential applications of AI. There are several obstacles to successfully implementing any information technology in healthcare, let alone AI. These obstacles arise at all levels of AI adoption, including data collection, technological development, clinical application, and ethical and societal concerns. This paper highlights the drawbacks of AI in the healthcare industry alongside its benefits.

Drawbacks

Data Collection Concerns

The first problem is the inaccessibility of relevant data. Massive datasets are required for ML and deep learning (DL) models to classify or predict properly across a wide range of tasks. The most significant advances in ML's ability to produce refined and accurate algorithms have occurred in sectors with easy access to large datasets. The healthcare sector faces a complex data-accessibility problem [11]. Because patient records are generally regarded as confidential, institutions are naturally reluctant to exchange health data. Another difficulty is that data may not remain readily available once an algorithm has been initially implemented using it. Ideally, ML-based systems would constantly improve as more data were added to their training sets, but internal organizational resistance can make this difficult to achieve. It has been argued that the effective application of information technology and artificial intelligence in healthcare requires a paradigm shift from treating patients individually to improving healthcare as a whole. Some modern algorithms may be able to operate on a unimodal or less data-intensive basis rather than relying on multimodal learning, and the converse problem of storing these ever-expanding datasets may be alleviated by the growing use of cloud computing servers [12].

AI-based systems raise concerns regarding data security and privacy. Because health records are valuable and sensitive, hackers often target them in data breaches, so maintaining the confidentiality of medical records is crucial [13]. As AI advances, users may mistake artificial systems for people and consent to more covert data collection, raising serious privacy concerns [11]. Patient consent is a key component of data privacy issues, since healthcare providers may permit broad use of patient information for AI research without obtaining specific patient approval. DeepMind, a leader in healthcare AI acquired by Google in 2014, came under criticism when it was discovered that the NHS had transferred data on 1.6 million patients to DeepMind servers, without the patients' consent, to build the algorithm behind Streams, an app for managing patients with acute kidney injury. In the USA, a patient data privacy investigation was carried out into Google's Project Nightingale. Data privacy has become even more of a problem since Streams was formally moved onto Google's servers [13, 14].

The General Data Protection Regulation (GDPR) in Europe and the Health Research Regulations, both of which came into force in 2018, are recent examples of legislation that may help resolve this problem by restricting the collection, use, and sharing of personal information. However, because divergent laws in different countries complicate collaboration and cooperative research, the data privacy regulations established to solve this issue may also restrict the quantity of data available to train AI systems at national and global scales [15]. If such restrictions are not to stifle innovation in the industry, stronger technical safeguards for data are needed. One method is to improve client-side data encryption; another is to employ federated learning, which trains models without moving the data from where it is held [12].
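As a minimal sketch of the federated learning idea just mentioned, the following example, which assumes only numpy and uses entirely synthetic "hospital" datasets, shows federated averaging in miniature: each site fits a small logistic-regression model on its own private data, and only the model weights, never patient records, are sent to a central server for averaging. This illustrates the general technique, not any particular federated-learning framework.

```python
# Minimal federated-averaging (FedAvg) sketch: weights travel, data does not.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=50):
    """Gradient-descent steps for logistic regression on one site's data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

# Three hospitals, each holding a private (here: synthetic) dataset.
sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]

w_global = np.zeros(5)
for _ in range(10):                            # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)  # server averages the updates

print("global weights after federated training:", w_global)
```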

Assessing the quality of the data used to develop algorithms is equally challenging. Given that patient data are estimated to have a half-life of around four months, certain predictive algorithms may not be as successful at predicting future outcomes as they are at recreating the past. Additionally, medical records are seldom organized neatly: they are often erroneous and inconsistently stored. Datasets used to develop AI systems will always include unforeseen gaps, despite intensive attempts to clean and analyze the data. Although the broad deployment of electronic medical records is expected to help solve this issue, the amount of data that can be used to develop efficient algorithms is still constrained by regulatory issues and by incompatibility across institutions [16].
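As a small, hypothetical illustration of auditing such data-quality problems, the pandas sketch below quantifies per-column missingness in a toy EHR extract and flags records older than the roughly four-month half-life noted above; all column names, dates, and the cutoff are invented for the example.

```python
# Toy data-quality audit for an EHR extract: missingness and staleness.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "hba1c": [6.1, None, 7.4, None],           # lab value, partly missing
    "last_updated": pd.to_datetime(
        ["2024-01-10", "2023-05-02", "2024-03-01", "2022-11-20"]),
})

# Fraction of missing values per column: the "unforeseen gaps" in the text.
print(ehr.isna().mean())

# Flag records older than ~4 months as potentially stale.
cutoff = pd.Timestamp("2024-03-01") - pd.DateOffset(months=4)
print(ehr[ehr["last_updated"] < cutoff])
```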

Algorithm Development Concerns

Biases in the data collection processes used to inform model development can produce distorted outcomes. For instance, under-representation of minorities resulting from racial biases in dataset construction can lead to subpar predictive performance. Many methods exist for combating this bias, such as the creation of multi-ethnic training sets. AI models may also be able to mitigate bias themselves, for example through de-biasing network architectures that dampen the influence of such confounding attributes. Time will tell whether these strategies succeed in eliminating bias in the real world [15, 16].
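One simple version of the rebalancing idea above is inverse-frequency sample weighting, sketched below with synthetic group labels: each group receives equal total weight in the loss, so an under-represented group is no longer drowned out. The group names are hypothetical, and the weights would be passed to whatever training routine is used.

```python
# Inverse-frequency re-weighting of an under-represented group.
import numpy as np

groups = np.array(["A"] * 90 + ["B"] * 10)   # group B is under-represented
counts = {g: int((groups == g).sum()) for g in np.unique(groups)}

# Each group gets the same total weight: n / (n_groups * group_count).
weights = np.array([len(groups) / (len(counts) * counts[g]) for g in groups])

print(weights[groups == "A"][0])  # ~0.56 per majority example
print(weights[groups == "B"][0])  # 5.0 per minority example
# e.g., model.fit(X, y, sample_weight=weights) in scikit-learn estimators
# that accept a sample_weight argument.
```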

The development of AI technology presents a new challenge after data collection. Overfitting occurs when an algorithm learns unimportant associations between patient features and outcomes; it happens when too many variables influence the results, leading the algorithm to make inaccurate predictions. The algorithm may therefore perform well on the training dataset yet give inaccurate results when projecting future events. Data leakage is another area of worry: when an algorithm achieves extremely high predictive accuracy, a covariate inside the dataset may have inadvertently encoded the outcome, diminishing the method's ability to foretell events beyond the training dataset. A fresh dataset is therefore required to corroborate the results [17,18,19].
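The sketch below, assuming scikit-learn and synthetic data, demonstrates the leakage failure mode just described: a covariate deliberately constructed to encode the outcome (imagine a post-diagnosis billing code) yields near-perfect held-out accuracy that would not survive real deployment, which is exactly why corroboration on a genuinely fresh dataset is needed.

```python
# Demonstration of target leakage inflating apparent accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # uninformative features
y = rng.integers(0, 2, 500)                   # outcome is pure noise here

# A leaky covariate: the outcome plus tiny noise sneaks into the features.
X_leaky = np.column_stack([X, y + rng.normal(scale=0.01, size=500)])

X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Near-perfect accuracy on random labels is a red flag for leakage,
# not evidence of a brilliant model.
print("held-out accuracy with leaky feature:", model.score(X_te, y_te))
```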

One typical criticism leveled at AI systems is the so-called "black-box" problem. Deep learning algorithms typically cannot provide convincing explanations for their forecasts; if a recommendation is wrong, the system has no way to justify itself legally, and it is harder for scientists to understand how the data connect to the predictions. On top of that, the "black box" may cause people to lose faith in the medical system altogether. Although this discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as paracetamol (Panadol), is poorly understood, and that the majority of doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography. Building AI systems that humans can understand is still an active field of study, and Google has recently published a tool to help with this [20].
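As one concrete, model-agnostic way to peek inside a black box, the sketch below applies permutation importance via scikit-learn to a synthetic example: shuffling one feature at a time and measuring the drop in performance reveals how much the model relied on it. This is just one of many explainability techniques, and not necessarily the Google tool referred to above.

```python
# Permutation importance: a model-agnostic explanation technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")  # features 0, 1 rank highest
```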

Ethical Concerns

Artificial intelligence has raised ethical concerns ever since it was first conceived. The main problem is accountability, beyond the data privacy and security issues already noted. Because of the gravity of the consequences, the current system requires that someone be held accountable when poor decisions are made, especially in the medical field. Many people see AI as a "black box," and researchers worry that it will be difficult to determine how an algorithm arrived at a certain conclusion. Some have suggested that the "black-box" problem is less of a concern for algorithms used in lower-stakes applications, such as non-medical ones that prioritize efficiency or operational improvement. The issue of responsibility becomes much more important, however, for AI applications that attempt to improve medical outcomes, particularly when errors occur. It is not apparent who is to blame in the event of a system failure: it is hard to blame the doctor, who had no part in developing or overseeing the algorithm, yet blaming the developer may seem disconnected from the clinical setting. The use of artificial intelligence for ethical decision-making in healthcare is prohibited in China and Hong Kong [8,9,10, 21].

The absence of standard guidelines for the ethical use of AI and ML in healthcare has only worsened the situation. There is debate about how far AI may ethically be used in healthcare settings, since no universal guidelines for its use exist. In that vein, the first US attempt to establish criteria for evaluating the safety and efficacy of AI systems has been undertaken by the Food and Drug Administration (FDA). To avoid adding unnecessary complexity to innovation and adoption during the screening process, the NHS is also drafting standards for demonstrating the effectiveness of AI-driven solutions. Both efforts are still in progress, which makes it more difficult for courts and regulatory agencies to approve actions based on AI. Equally important is a public conversation about these ethical dilemmas, in the hope of arriving at a universal ethical standard that benefits patients [15, 16, 22].

Social Concerns

Humans have long feared that artificial intelligence in healthcare might eliminate their jobs, and some people are skeptical of, or even hostile to, AI-based projects because of the threat of being replaced. This perspective, however, is largely based on a misinterpretation of AI in its various manifestations. Even setting aside the time it will take for AI to evolve to the point where it could successfully replace healthcare personnel, the arrival of AI does not imply that jobs will become obsolete [15], but rather that they will need to be re-engineered. Because of the human element and the inherent unpredictability of many medical processes, they will never be as linear or as well ordered as an algorithm. Skepticism about AI, although understandable, clearly has a detrimental effect and acts as a barrier to wider acceptance of the technology. Naiveté about the consequences and efficacy of AI, though, can lead to unrealistic expectations: the public may become disillusioned with AI if its current capabilities are overestimated. Greater public dialog about AI in health care is essential to address these attitudes among patients and medical professionals [2, 3].

Clinical Implementation Concerns

The main obstacle to successful deployment is the lack of empirical data validating the effectiveness of AI-based interventions in prospective clinical trials. Most research on AI's application has been conducted in commercial settings, so we lack information on how it affects final outcomes for patients. Thus far, the majority of healthcare AI research has been done in non-clinical settings, which makes research results difficult to generalize. Randomized controlled trials, the gold standard in medicine, have so far been unable to demonstrate the benefits of AI in healthcare. Owing to the absence of practical data and the uneven quality of research, organizations are hesitant to implement AI-based solutions, and doing so is difficult [22].

For artificial intelligence to be widely accepted, it must be integrated into medical workflows for efficient use. Effective workload reduction relies on the usability of information systems: AI-based tools must not slow down clinicians while they examine or explore electronic medical data. The price tag includes the time and resources required to train medical professionals to use the technology effectively. Few instances of successfully incorporating AI into clinical care have been demonstrated so far, with most cases remaining in the experimental phase [23]. In many examples of innovation adoption, the key barrier to successful integration has been a lack of stakeholder participation in the development phase; getting input from a wide range of people is crucial to developing a solution that can be seamlessly integrated into clinical practice. Many AI advancements were made in the wake of the SARS and Ebola outbreaks with the goal of improving outcomes through, for example, more accurate epidemiological forecasting or faster diagnosis. These rapidly evolving advances have their limitations, however, since their usefulness in healthcare depends on seamless incorporation into existing procedures without confusing or slowing down clinicians who lack training in AI; in addition, the clinical research itself has faced issues related to the algorithms [24, 25].

Biased and Discriminatory Algorithms

The issue of "bias" is not limited to the social and cultural domains; it is also present in the technological domain. Biased software and technological artifacts may result from poor design or from incorrect or unbalanced data being fed into algorithms. AI thus replicates the racial, gender, and age prejudice that already exists in our society, thereby widening the gap between the rich and the poor. Amazon's controversial experiment with a nontraditional approach to recruiting from a few years back is a well-known example. Its candidate-search tool relied on AI to rate applicants on a scale of one to five stars, much the way Amazon customers rate products. The computer models Amazon developed to screen job applications turned out to be biased in favor of male applicants and against resumes containing the word "women," because they had been trained on a decade of historical hiring data [26].

The lack of diversity within development teams is a problem, as is the biased nature of the data used to build products. Without diversity, developers' cultural prejudices and misconceptions become embedded in the very fabric of technological development. As a result, businesses that fail to embrace diversity risk creating services or goods that exclude large segments of the population. A study conducted four years ago found that certain face recognition algorithms misclassified less than 1% of white men but over 33% of black women. Even though the systems' creators insisted the programs were highly accurate, the pool of subjects used to gauge effectiveness was over 77% male and over 83% white [23,24,25, 28].

Suggested Potential Solutions to the Drawbacks of AI in the Healthcare Sector

Ethical Concerns—Possible Solutions

Ethical concerns relating to AI in the healthcare sector fall mainly into three categories, fairness, accountability, and transparency, which has encouraged investigators to advocate for these three pillars of AI ethics [26]. Biases can originate from datasets in which some groups are over- or under-represented, or which are missing entire attributes that carry information relevant to the operation in question. There is also the threat of "automation bias," in which people come to depend entirely on the machine's output instead of exercising their own judgment and inspection [27]. Moreover, the use of AI in the health sector evokes concerns about data security and the privacy of patients' personal information. Since algorithm training involves access to large datasets that should ideally characterize diverse population groups, concerns about consent and about effective de-identification and anonymization of data remain critical [28].
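As a toy illustration of the de-identification step mentioned above, the sketch below drops a direct identifier and replaces the medical record number with a salted hash before data are released for research. This is illustrative only: real anonymization regimes (e.g., HIPAA Safe Harbor) cover many more fields, and every column name here is hypothetical.

```python
# Pseudonymization sketch: strip identifiers, hash the record number.
import hashlib
import pandas as pd

SALT = b"site-secret-salt"  # held by the data controller, never shared

def pseudonymize(patient_id: str) -> str:
    """Deterministic salted hash so records stay linkable, not identifiable."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

records = pd.DataFrame({
    "patient_id": ["MRN001", "MRN002"],
    "name": ["A. Example", "B. Example"],   # direct identifier: must go
    "age": [54, 61],
    "diagnosis": ["I10", "E11"],
})

released = records.drop(columns=["name"]).assign(
    patient_id=records["patient_id"].map(pseudonymize))
print(released)
```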

To overcome these hindrances, possible solutions have been suggested for the issues of fairness, accountability, and transparency, through the implementation of ethical governance, model explainability, model interpretability, and ethical auditing [29]. In this way, the development, certification, and application of AI in the healthcare sector make possible biases transparent, which will lead to better AI-based analysis and decision-making in various medical domains. These approaches also demand improvements in the training and education of health experts, providing efficient training sessions to medical staff and students on the proper interaction with and management of artificially intelligent equipment [30]. Regulation problems can also be addressed through the two major approaches distinguished in [31]. The precautionary approach holds that the deployment of AI is not allowed if the practice would lead to harm or social inequality, even in the absence of evidence of the risk; that is, the application of AI is strictly controlled whenever it increases social inequities, even without evidence of risk. The second, permissionless approach argues the opposite: if there is no evidence of hazards, then technological development is allowed. In broad terms, the European approach is more strictly precautionary than those of other countries, because it does not allow the deployment of a technology even in the absence of evidence of harm, and it requires that the possible advantages and dangers be researched in depth.

AI and Education—Possible Solutions

AI education requires improvement at every level, from basic knowledge to advanced practical skills [32]. It must be designed and developed so that healthcare professionals learn to understand and work with AI as it is implemented in their clinical settings. Trainees should also be given a grounding in AI that enables them to contribute to health policy decisions associated with their field of practice [33]. AI will have a great impact on future healthcare practice; it is therefore crucial to integrate the basics of AI, its tools, applications, and terminology into the study programs of medical institutes. In particular, training sessions on the use of AI tools should be provided to present and future medical professionals so that they can deliver valuable healthcare services while respecting the ethical limits of AI systems [34].

Researchers have suggested a stepwise method for providing AI education and its related applications in the healthcare sector to future health professionals, starting in undergraduate programs and continuing through specialization in medical education [35]. In line with the findings of [36], an ideal model of AI education can be organized into the three stages of medical education, with reference to Oxford Medicine: undergraduate, postgraduate, and specialization. In undergraduate medical courses, future professionals should be acquainted with AI terminology; basic knowledge of machine learning, deep learning, and data science; core AI proficiencies; and the identification and suitable implementation of AI applications in healthcare. In the postgraduate phase, engagement in the validation and evaluation of models and in the deployment of technologies should be emphasized, with deep focus on ethical considerations and strategic governance policies. Throughout specialization and continuing professional development, AI educational training, ethical guidance, social dialog, and up-to-date AI knowledge and skills should be provided consistently [37].

Algorithm Development Concerns—Possible Solutions

Various AI algorithms have been, and will continue to be, used in clinical interpretation; the question that arises is whether these algorithms have previously been approved for clinical use. AI-based algorithms designed for clinical interpretation require proper validation, whether hardware-based or software-based, because clinical experts use them for patient treatment and care, such as decision-making in diagnosis and related treatments; approval from regulatory authorities must therefore be mandatory [38,39,40,41]. In clinical trials, it must be verified how accurately an established AI algorithmic solution performs against clinical standards such as the sensitivity and specificity of diagnostic tests. However, it is not entirely settled whether good performance of an AI algorithm is satisfactory when the solution is a "black-box" algorithm that lacks transparency and logical explainability [40]. In addition, it is not yet clear what suitable validation of a continuously learning solution entails. A critical point is that deep learning-based "black-box" algorithms lack transparency, so they cannot be rectified as easily as Bayesian models, which are constructed on transparent structures [41,42,43].
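The sketch below shows how the two clinical standards named above, sensitivity and specificity, are computed from a confusion matrix; the labels and algorithm outputs are hypothetical.

```python
# Sensitivity and specificity from a confusion matrix.
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = disease present
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])  # algorithm output

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # missed cases
tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false alarms

sensitivity = tp / (tp + fn)   # share of true cases the algorithm catches
specificity = tn / (tn + fp)   # share of healthy patients it clears
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```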

Various new solutions are capable of, and prepared for, continuous learning [44]. Under present regulations, however, an AI system in a clinical setting must be "frozen," so it cannot learn online and immediately apply new knowledge. Instead, offline validation of the acquired "frozen" model is required on an independent sample of patients. After each subsequent phase of continuous learning, the validation procedure must be repeated before the model is deployed anew. Ideally, new clinically approved pathways should be established to reduce the validation burden for digital applications in a patient-safe environment; it is expected that such new processes will enable regulatory acceptance of upgraded algorithms. In this connection, the Food and Drug Administration is actively engaged in developing a plan for handling AI-based solutions [45]. Wherever possible, the use of current knowledge in causal and transparent model algorithms, such as Bayesian models, is intended to assist validation in clinical settings and the acquisition of regulatory acceptance, for both unimodal and multimodal data. Obtaining regulatory approval and properly validating algorithms are therefore crucial [46, 47].
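As a schematic of the "frozen model" regime described above, the sketch below, which assumes scikit-learn, synthetic data, and an invented acceptance threshold, serializes a trained model and requires that exact frozen artifact to pass offline validation on an independent sample before it may be deployed; a retrained model would have to repeat the same gate.

```python
# "Freeze, validate offline, then deploy" workflow sketch.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
X_holdout, y_holdout = rng.normal(size=(80, 4)), rng.integers(0, 2, 80)

model = LogisticRegression().fit(X_train, y_train)
frozen = pickle.dumps(model)        # "freeze": snapshot the exact artifact

# Before (re)deployment: validate the frozen artifact offline on an
# independent patient sample, mirroring a regulatory checkpoint.
candidate = pickle.loads(frozen)
score = candidate.score(X_holdout, y_holdout)
MIN_ACCEPTABLE = 0.70               # hypothetical acceptance criterion
print("deploy" if score >= MIN_ACCEPTABLE else "reject", f"(score={score:.2f})")
```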

Appropriate Methods to Apply AI Algorithms in Clinical Systems

Various AI algorithms have been developed for clinical applications [48]. Some have proved beneficial, and some have failed in clinical settings, depending on the type of application. Research suggests matching the algorithm to the specific diagnostic task: for examining pathology tissue-slide images, deep learning has been verified as a suitable method, while for multimodal problems, such as predicting clinical outcomes and evaluating patients, approaches that incorporate domain knowledge are often preferred [49]. Probabilistic techniques such as Bayesian modeling have proved advantageous in dealing with complicated biological problems (e.g., omics data such as proteomics and metabolomics samples) and have also proved useful in diagnostics and drug development [50]. Where domain knowledge is lacking, on the other hand, domain-agnostic generative AI methods are suitable, and the combination of Bayesian reasoning with deep learning networks is considered well suited [51, 52]. It is therefore important to apply the appropriate AI algorithm to each specific clinical application.
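A small worked example of the Bayesian reasoning favored above: applying Bayes' rule to a diagnostic test makes every step of the calculation transparent and shows why even an accurate test yields a modest post-test probability when the disease is rare. The numbers are illustrative only.

```python
# Posterior probability of disease given a positive test (Bayes' rule).
def posterior_positive(prevalence, sensitivity, specificity):
    # P(test+) = P(test+|disease)P(disease) + P(test+|healthy)P(healthy)
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_pos

# A 95%-sensitive, 90%-specific test for a disease with 1% prevalence:
print(f"{posterior_positive(0.01, 0.95, 0.90):.3f}")  # ~0.088, not 0.95
```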

Some Crucial Recommendations for AI Approaches in Clinical Systems

Van Hartskamp et al. recommended first identifying the relevant and precise clinical question. Data analytics without domain knowledge can be applied in the medical domain, but it will yield clinically irrelevant results. Every new AI task must therefore begin with explicit clinical questions and discussions with clinical professionals, and the results should be reviewed again in clinical and biological terms [53]. A suitable and accurate dataset is required to answer the clinical question: a dataset with ground truth must be adequately clean and authentic, awareness of concealed variations that are not visible in the dataset is essential, and the dataset must fit the question and represent the population under examination [54].

To obtain appropriate outcomes in AI approaches, it is useful to work with sufficiently large datasets, to reduce the number of variables where possible, and to use domain knowledge to avoid spurious correlations. The association between the given input variables and the expected output, as the dependent value, should be as causal and direct as possible, and the ground truth in the data must relate to the clinical question. On this basis, discovering new pathological features that strongly differentiate between two distinct pathological diagnoses can be efficacious [55].
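As a toy illustration of applying these recommendations, the sketch below restricts a model's inputs to a clinician-approved feature list, excluding administrative columns that could carry spurious or outcome-leaking signal; all column names and values are hypothetical.

```python
# Domain-knowledge feature restriction to avoid spurious correlations.
import pandas as pd

cohort = pd.DataFrame({
    "age": [54, 61, 47],
    "systolic_bp": [142, 130, 125],
    "ward_id": [3, 3, 7],              # administrative, not causal
    "billing_code": [810, 810, 220],   # may leak the diagnosis itself
    "outcome": [1, 1, 0],
})

APPROVED = ["age", "systolic_bp"]      # agreed with clinical experts
X = cohort[APPROVED]                   # only causally plausible inputs
y = cohort["outcome"]
print(X)
```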

From the perspective of clinical research, AI, ML, and DL bring innovations to medical professionals as well as to related approaches: in materials science, the structures of drug-delivery vehicles (cyclodextrins [56], Ag nanoparticles [57], nanogels, TMPS [58]) are simulated by purpose-built algorithms [59] to explore their potential benefits. In addition, Miley et al. (2021) reported the current issues, prognosis, and possible solutions regarding health hazards, clinical testing, approval, and technological uptake by patients and physicians in the domain of smart ingestible electronics. It is further concluded that endoscopic therapies and diagnostics will become more reliant on AI, ML, and personalized treatments. Eventually, video capsule endoscopy might successfully supplement current surgical and radiologic procedures by enabling safe, high-quality outpatient treatments, fewer medical complications, and faster diagnostics at lower cost [60].

Conclusion

This discussion of AI in health systems concludes by highlighting several implementation issues, both within and outside the health sector. Data privacy, social issues, ethical issues, hacking threats, and developer-related issues are among the obstacles to successfully implementing AI in the medical sector. Based on our review, AI's presence in the present day seems unavoidable. Given the significant technical developments that have occurred since the dawn of the modern age, it seems that technology such as AI will expand swiftly and become a vital requirement throughout the globe. The AI created so far, however, remains narrow AI, and it is currently weak: for the time being, the technology is employed to accomplish specific jobs, concentrating on recognizing objects using sensors and then taking appropriate action based on preprogrammed rules.

The primary goal of today's scientists is to develop a complete, general AI with advanced and trustworthy algorithms, whose specialized duties would be far more sophisticated than those of current AI. It is important to see the adoption of AI systems in healthcare as a dynamic learning experience at all levels, calling for a more sophisticated systems-thinking approach in the health sector to overcome these issues.