Abstract
With its robust ability to integrate and learn from large sets of clinical data, artificial intelligence (AI) can now play a role in diagnosis, clinical decision making, and personalized medicine; it can be seen as the natural progression of traditional statistical techniques. Currently, there are many unmet needs in nephrology and, more particularly, in the kidney transplantation (KT) field. The growing volume and complexity of clinical data, together with the multitude of nephrology registries worldwide, have enabled the rapid expansion of AI within the field. Nephrologists in many countries are already at the center of experiments and advances in this cutting-edge technology, and our aim is to generalize the use of AI among nephrologists worldwide. In this paper, we provide an overview of AI from a medical perspective. We cover the core concepts of AI relevant to the practicing nephrologist in a consistent and simple way to help them get started, and we discuss the technical challenges. Finally, we focus on the KT field: we describe the unmet needs and the potential role of AI in filling these gaps, and we summarize the published KT-related studies, including the predictive factors used in each, to allow researchers to quickly focus on the most relevant issues.
Introduction
Artificial intelligence (AI), described as the science and engineering of making intelligent machines able to mimic human intelligence and to learn, was officially introduced in 1956. Since then, AI has evolved considerably and has become a basic tool in many sectors, such as banking [1], agriculture [2], and medicine [3]. It also contributes significantly to reducing human involvement in critically dangerous activities [4, 5]. With the explosion in the availability of digital data and the ability of AI algorithms to integrate and learn from large datasets, AI has been widely applied in clinical decision-making, biomedical research, and medical education [6]. The US Food and Drug Administration (FDA) and other regulatory agencies have allowed clinicians to use AI-based tools in several medical fields [6, 7]. Currently, AI can be used for routine detection of diabetic retinopathy without the need for ophthalmologist confirmation. AI applications also extend into the physical realm, with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine. Many endoscopy manufacturers have launched AI devices on the market with regulatory approval in Europe and Asia [8]. Nephrology seems to have all the assets to lend itself to AI experiments and advances, and the kidney transplantation (KT) field is taking the lead in the use of AI within the specialty. A large number of studies in the literature have applied AI to different aspects of KT. Some authors used machine learning (ML) to predict the bioavailability of tacrolimus during the immediate post-transplant period and to estimate the risk of post-transplant diabetes mellitus; these outcomes were predicted from ABCB1 and CYP3A5 genetic phenotypes, age, gender, and body mass index [9]. Predicting KT outcomes using data-driven approaches has drawn the interest of many researchers.
Senanayake et al. [10] and Sekercioglu et al. [11] recently reviewed the ML models used in the field of KT. ML is a subtype of AI commonly used for prediction tasks. The first review, published in 2019, covered eighteen studies that developed ML-based models to predict short- and long-term KT outcomes in adult patients; these studies were performed in the US, Iran, Italy, the UK, Australia, Korea, Belgium, Germany, and Egypt. The second review, published in 2021, examined ML studies predicting long-term kidney allograft survival and identified eleven studies, most of which were case studies and pilot projects. Very few of them resulted in approved tools officially introduced into daily practice.
This paper covers the AI basics, core concepts, and challenges, after which we focus on the KT field. We review the related studies and summarize the predictive factors to help nephrologists quickly concentrate on the most relevant work.
Core concepts
The ability to supervise the development of AI tools and their use will become a must-have skill for nephrologists in the near future [12]. The first step to understanding AI methods is to become familiar with the basic concepts and the terms in use. In this section, we provide a precise and simplified explanation of the core concepts of AI useful to healthcare practitioners, which will help them adequately understand how predictive models are created so that they can: (1) evaluate the models critically; (2) participate actively in minimizing current limitations; and (3) collaborate with computer scientists and data scientists and take action to meet the current needs in their field.
A summary of the basic terms used in AI-related medical articles is provided in Table 1.
Big data
Big data refers to data of large size and high complexity. The concept includes an ensemble of techniques used to collect, store, analyze, and manage an immense volume of both structured and unstructured data that is beyond the ability of traditional data management tools [13].
There are many types and structures of data that can be used in AI. Algorithms can learn from structured data, which adheres to a pre-defined model and is therefore ready to analyze. Structured data conforms to a tabular format with a relationship between the different rows and columns; Excel files, with sortable rows and columns, are a common example. Training data can also be unstructured, meaning the information either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information may contain photos (e.g., computed tomography images, X-ray images, pathology images), videos, audio files, or text (e.g., medical records, datasheets). Machines cannot read text and images directly: the input data need to be transformed, or encoded, into numbers. These numbers are represented as vectors and matrices so that they can be used to train and deploy the models. For example, in ML an image is treated as an ensemble of pixels.
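To make this concrete, the sketch below (illustrative only, with made-up pixel values and made-up diagnosis labels) shows how an image and a short list of text labels can be encoded as the numeric arrays a model actually consumes:

```python
import numpy as np

# A hypothetical 3x3 grayscale "image": each entry is a pixel intensity (0-255).
image = np.array([
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
])

# Flattening turns the 2-D pixel grid into a 1-D feature vector
# that a model can consume.
feature_vector = image.flatten()
print(feature_vector.shape)  # (9,)

# Text is encoded similarly: here, a toy one-hot encoding of three
# hypothetical biopsy labels into a numeric matrix.
codes = ["rejection", "normal", "rejection"]
vocab = sorted(set(codes))  # ['normal', 'rejection']
one_hot = np.array([[1 if c == v else 0 for v in vocab] for c in codes])
print(one_hot)
```

Whatever the original format, the result is always the same: a matrix of numbers the training algorithm can operate on.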
Artificial intelligence
AI is a branch of computer science that implies the use of a computer to model intelligent behaviors with minimal human intervention. AI started with the invention of robots [14]; however, it has evolved to cover a multitude of other branches (see Fig. 1).
Machine learning
ML is a branch of AI. It focuses on developing computer programs that can access data and learn from it without being explicitly programmed for a specific task. This property makes ML fundamentally different from classic statistics [15]. ML uses a set of algorithms to analyze, interpret, and learn from a given set of data and, based on what is learned, make the best possible decisions.
An algorithm is a set of rules that precisely defines a sequence of operations. ML algorithms learn from data without human intervention. The algorithm is fed with data from which it learns and adapts without following explicit instructions. It analyzes the dataset and draws inferences from patterns in the data.
For example, if we want to predict 10-year kidney graft survival (the output), we provide the algorithm with a database containing many variables, such as recipient age, gender, history of rejection, and infections, in which each KT (instance) is labeled as survived or failed by 10 years. The algorithm uses the provided data to detect the function that maps the input variables to the output values. The trained algorithm then generates a model capable of predicting the output for new input values different from the training data.
The input to any ML algorithm is called predictors/features, and the output from the algorithm is referred to as a target/label.
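The graft-survival example above can be sketched in code. The snippet below is a toy illustration using scikit-learn and purely synthetic data: the three features and the rule generating the labels are invented for demonstration, not taken from any real cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors/features mentioned above.
n = 500
X = np.column_stack([
    rng.normal(50, 12, n),   # recipient age (years)
    rng.integers(0, 2, n),   # history of rejection (0/1)
    rng.integers(0, 2, n),   # post-transplant infection (0/1)
])

# Synthetic target/label: 10-year graft survival (1 = survived),
# generated from an arbitrary rule for illustration only.
logit = -0.04 * (X[:, 0] - 50) - 1.2 * X[:, 1] - 0.8 * X[:, 2] + 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train on one part of the data, evaluate on unseen instances.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The fitted model can then predict the label for a new, previously unseen KT from its feature vector alone.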
Supervised/unsupervised learning
In ML, there are two main types of tasks: supervised learning and unsupervised learning.
Supervised learning requires prior knowledge of the output values; therefore, the goal is to determine a function that best approximates the relationship between input and output, given a sample of data and the desired outputs (labels). Since kidney allograft biopsy contextualization will be based on ML in the upcoming Banff classifications [16], we will explain the concepts of supervised and unsupervised learning using similar examples. For example, to train the machine to classify a given image from kidney allograft biopsy, we input multiple specimens with known labels. The label of each image will be one of the six categories of the Banff classification [17]. Then the trained model will predict the category of a new input image. For this, the machine generates the output as a vector of scores: one score for each category. The goal is for the desired category to be assigned the highest score after training. An objective function that measures the error (or distance) between the output scores and the desired pattern of scores is computed. The machine then modifies its internal parameters to reduce this error. Farris et al. used supervised learning to develop a model for kidney allograft image analysis for evaluation and fibrosis quantification with satisfactory accuracy levels [18].
Supervised learning is applicable in the context of classification when we want to map the input to output classes, such as predicting whether the graft will survive or not [19] or classifying images into different categories [18]. Supervised learning can also be applied in the context of regression when we want to map the input to a continuous output, such as predicting the estimated glomerular filtration rate [20].
Unsupervised learning, on the other hand, does not have labeled outputs, so its goal is to infer the natural structure present within a set of data points. The most common task within unsupervised learning is clustering where we wish to learn the inherent structure of our data without using explicitly-provided labels. If we take the same previous example of the kidney allograft biopsy analysis with unsupervised learning, we will provide the algorithm with a set of images with no label, then the machine will infer the patterns in the images and will automatically divide them into groups with similar features (categories).
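As a minimal sketch of clustering, the snippet below feeds unlabeled, synthetic two-dimensional feature vectors (imagined as two summary features extracted from biopsy images; the numbers are invented) to k-means, which recovers the two underlying groups without ever seeing a label:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Two well-separated synthetic groups of feature vectors,
# standing in for two distinct biopsy patterns.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[4, 4], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# No labels are supplied; k-means infers two clusters
# from the structure of the data alone.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
print(np.bincount(labels))
```

In a real biopsy application the feature vectors would come from image analysis, but the principle is the same: the algorithm groups instances by similarity, not by any provided category.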
Deep learning
Conventional ML algorithms are limited in their ability to process data in their raw form. For several years, constructing a ML model required considerable domain expertise and meticulous engineering to implement a feature extractor in order to transform the raw data (e.g., the pixel values of an image) into a suitable internal representation or feature vector from which the learning algorithm, often a classifier, could detect or classify patterns in the input.
Deep learning is a learning method where a machine can be fed with raw data and automatically discover the representations/features needed for detection or classification. Suppose that a model wants to predict if an image contains a malignant tumor. The algorithm will learn from data (mammogram images) and try to find the patterns (features) present in the images labeled as containing a malignant tumor.
Deep learning structures the algorithm into multiple layers to create an artificial neural network (ANN). ANNs are algorithms that mimic human brain structure. An ANN has one input layer, optional hidden layers, and one output layer. Layers are rows of so-called “Neurons”. The number of neurons in each layer, the number of layers, and the type of connections between the layers (fully connected/not fully connected) are modifiable parameters for each ANN. In Fig. 2 we present an example of an ANN aiming to predict kidney graft survival. Deep Learning is called “deep” because of the additional layers added to learn from the provided data. In an ANN the input layer takes the input signals and passes them to the next layer. Several weights are applied within the nodes of the hidden layers. Weights define the importance of a feature in predicting the target value. For example, a single node may take the input data and multiply it by an assigned weight value, then add a bias before passing the data to the next layer (input × weight + bias = output). The final layer of the neural network, the output layer, uses the inputs from the hidden layers to produce the desired output. When a deep learning model is learning, it is simply updating the weights through an optimization function. Through these transformations, the machine will learn complex functions. For classification tasks, the layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. This helps the system to understand the complex perception tasks with maximum accuracy. Deep learning requires much more data than a traditional ML algorithm to function properly.
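The "input × weight + bias" computation described above can be written out directly. The following toy forward pass (all weights and inputs are arbitrary, and the three input features are hypothetical) sends three inputs through one hidden layer and a sigmoid output to produce a survival probability:

```python
import numpy as np

def relu(x):
    # Common activation: passes positive values, zeroes out negatives.
    return np.maximum(0, x)

# Three hypothetical (scaled) input features.
x = np.array([0.6, 0.3, 0.9])

# Hidden layer of 2 neurons: each row holds one neuron's weights.
W_hidden = np.array([[0.2, -0.5,  0.1],
                     [0.7,  0.4, -0.3]])
b_hidden = np.array([0.1, -0.2])
hidden = relu(W_hidden @ x + b_hidden)  # input x weight + bias, per neuron

# Output neuron combines the hidden activations into one score.
W_out = np.array([[0.5, -0.8]])
b_out = np.array([0.05])
logit = W_out @ hidden + b_out
prob = 1 / (1 + np.exp(-logit))  # sigmoid maps the score to a probability
print(prob)
```

Training a real network consists of repeating this forward pass over many instances and updating `W_hidden`, `b_hidden`, `W_out`, and `b_out` through an optimization function to reduce the prediction error.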
In their recent study, Kers et al. used deep learning to classify the histology of kidney allograft biopsies into a three-category output (normal/rejection/other diseases) using 5844 digital slide images of kidney allograft biopsies. Their model’s area under the curve reached 87% [21].
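For readers unfamiliar with the metric, an area under the curve (AUC) like the one reported by Kers et al. is computed by comparing a model's predicted probabilities against the true labels; the values below are invented purely for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = rejection) and model-predicted probabilities
# for eight instances.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.35, 0.9, 0.2, 0.7, 0.6])

# AUC is the probability that a randomly chosen positive instance
# receives a higher score than a randomly chosen negative one.
auc = roc_auc_score(y_true, y_prob)
print(round(auc, 3))  # 0.875
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect discrimination, which is why values such as 87% are reported as good performance.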
Barriers to the integration of artificial intelligence
AI applications have been validated as standard solutions for different tasks in many medical fields [6, 7]. Nephrology has all the characteristics to benefit from AI advances since patients are followed for several decades. There are enough universal recommendations and consensus to make the practice homogeneous, whether for dialysis, KT, or clinical nephrology. Actually, in many countries, nephrology has been digitalized for more than 20 years [22], which resulted in well-organized databases and easily exploitable data. Table 2 presents a number of nephrology registries worldwide.
The development and implementation of AI tools in healthcare are fundamentally different from the use of ML or big data in other fields. The limitations holding AI back from being fully integrated into healthcare systems are mainly linked to data structure, ethical challenges, and legal concerns [23]. These limitations can be categorized as follows:
- Incompatible data formats
- Unstructured datasets
- High data sparsity
- Lack of precision
- Difficult data storage or transfer
- Legal concerns
- Heterogeneous data types
- Large volumes of data
- Data standardization (terminology, language, etc.)
- Data timelines, time-series, real-time analyses, etc.
- Lack of skills
- Privacy protection
In healthcare, no two patient experiences are alike. Even at a standardized routine exam, two different doctors would likely record different data for the same patient. This problem is partially addressed by the development of international classifications and guidelines, such as the definition and classification of chronic kidney disease; hence the importance of a homogeneous practice of nephrology worldwide [24].
Moreover, outcomes in healthcare such as kidney function or kidney graft survival are affected by complex parameters [25], most of which cannot be collected during a doctor visit. Other data that affect the outcome of interest, if present in the record at all, are usually based on the patient's imperfect recall and subjective description. In addition, these clinical features may vary on diverse time scales, and this variability plays a vital role in indicating health status. For example, intra-individual variability in kidney function biomarkers is associated with negative outcomes in terms of patient survival and renal survival [26].
Recent research may help overcome these issues. We cite the example of AdaCare which is a representation learning model that captures the variability of the biomarkers in the short and long term as clinical features to predict the health status at different time points [27]. It adaptively selects the clinical features that strongly indicate the health status of patients in diverse conditions and provides a personalized feature selection.
Data size is another concern in the healthcare field. ML shines when the model is trained on large databases [28]. In other fields, data are easily collected, sometimes with a simple click. The Google Ads model, one of the most robust AI tools in the world, determines when and where ads are shown for specific audiences and on specific pages; every search and click on Google is captured as data [29]. Clinical datasets are inevitably far smaller, which means less training data for algorithms to learn from. A randomized clinical trial aiming to collect high-quality data might involve fewer than 100 patients. In a systematic review of 40,970 clinical trials including 1054 nephrology trials, the authors found that, compared with other specialties, nephrology trials were more likely to be small, with 64.5% of them enrolling fewer than 100 patients [30].
Bigger medical datasets do exist, with millions of patients, produced from imaging, electronic health records, telemedicine, genomics, and other sources of data [23]. However, poor quality remains the main issue with these datasets. They require rigorous data cleaning, which is very challenging and reduces the data size considerably. Data cleaning is the process that ensures that datasets are correct, accurate, relevant, and consistent. Messy data can derail a big AI project, especially when disparate data sources are brought together [31]. While many data cleaning processes are still performed manually, some vendors offer increasingly sophisticated data cleaning tools that use intelligent rules to correct large datasets, reducing the time and expense required to obtain high levels of integrity and accuracy in medical databases [32]. Recent research has also proposed models to handle irregular medical records and extract feature interrelationships for individualized healthcare prediction [33].
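A minimal sketch of what such data cleaning involves is shown below, on a hypothetical five-row extract with duplicate records, a missing value, and creatinine reported in mixed units (the factor 88.4 converts µmol/L to mg/dL):

```python
import numpy as np
import pandas as pd

# Hypothetical, deliberately messy extract from two data sources.
df = pd.DataFrame({
    "patient_id": [1, 2, 2, 3, 4],
    "creatinine": [1.2, 106.0, 106.0, np.nan, 0.9],
    "unit":       ["mg/dL", "umol/L", "umol/L", "mg/dL", "mg/dL"],
})

# 1. Remove exact duplicate rows (the repeated record for patient 2).
df = df.drop_duplicates()

# 2. Harmonize units: 1 mg/dL of creatinine = 88.4 umol/L.
mask = df["unit"] == "umol/L"
df.loc[mask, "creatinine"] = df.loc[mask, "creatinine"] / 88.4
df["unit"] = "mg/dL"

# 3. Impute the missing value with the median (one simple strategy
#    among many; the right choice depends on the analysis).
df["creatinine"] = df["creatinine"].fillna(df["creatinine"].median())
print(df)
```

Real cleaning pipelines must additionally reconcile terminologies, resolve conflicting records, and validate ranges, which is why the process is so labor-intensive at scale.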
Besides these data processing techniques, transfer learning can help to overcome the lack of data available for analysis. It is a method of ML where knowledge developed from previous training is recycled to help perform a new task.
Ma et al. [34] proposed a transfer learning framework, which leverages the massive publicly available online medical records, then learns to embed the medical features relevant to a specific task. Finally, the transferred parameters are further used for training. The authors applied the proposed framework for COVID-19 prognosis assessment and end-stage renal disease (ESRD) mortality prediction.
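As a toy illustration of the transfer-learning idea (not the framework of Ma et al.), the sketch below pre-trains a small neural network on a large synthetic "source" dataset and then, using scikit-learn's `warm_start` option, continues training the same weights on a small synthetic "target" dataset; all data and decision rules are invented:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# "Source" task: a large synthetic dataset, standing in for massive
# publicly available medical records.
X_src = rng.normal(size=(2000, 5))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)

# "Target" task: a small dataset with a related (but not identical)
# decision rule, standing in for a scarce local cohort.
X_tgt = rng.normal(size=(80, 5))
y_tgt = (X_tgt[:, 0] + 0.8 * X_tgt[:, 1] > 0).astype(int)

# warm_start=True makes the second fit() resume from the weights
# learned on the source task instead of re-initializing them.
model = MLPClassifier(hidden_layer_sizes=(16,), warm_start=True,
                      max_iter=300, random_state=0)
model.fit(X_src, y_src)   # pre-train on the large source data
model.fit(X_tgt, y_tgt)   # fine-tune on the small target data
acc = model.score(X_tgt, y_tgt)
print(f"target accuracy: {acc:.2f}")
```

Because the network starts from knowledge acquired on the source task, it needs far fewer target examples than a model trained from scratch, which is the core appeal of transfer learning for small clinical cohorts.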
Another concern in the medical field is that the timelines are far longer than those in other sectors. In nephrology, we mainly deal with chronic diseases where our biggest concern is often chronic kidney disease and reaching ESRD. Similarly, in the KT field, interests have shifted to forecasting long-term outcomes [35]. An AI tool that is built to predict a long-term outcome will take years to begin to collect any feedback.
Not only is the nature of healthcare data more complex and variable, but ethical challenges exist as well, and include the cost of the error itself, interpretability issues, and patient privacy protection concerns.
Errors made by models in industry sectors generally result in lost revenue; in healthcare, however, mistakes are far costlier, as the problem may be a question of life or death [36].
The lack of interpretability is another major ethical problem with ML algorithms.
ML aims to perform a prediction that is as accurate as possible, at the expense of clear interpretability. Broadly, interpretability is focused on finding an explanation for the decisions made by the models.
Most of the powerful ML algorithms operate as black boxes which raises reliability issues for both doctors and patients [37]. This issue may prevent the wide adoption of these methods by practitioners. It is easier for humans to trust a system that explains its decisions. On the other hand, it is hard to ignore the benefits of black box algorithms such as deep learning algorithms. Hence, we recommend using what works best after careful testing and large external validations [25].
Developing useful AI tools for healthcare is therefore challenging but the promise is enormous [38]. It is the way to make healthcare practice benefit from the vast amounts of data and experiences generated daily. An AI model is able to analyze and learn from the experience of millions of patients and the knowledge of thousands of clinicians, thus dramatically improving diagnosis and treatment. AI tools in healthcare will never replace physicians. Instead, they will help them do more than they could before.
Kidney transplantation outcomes: unmet needs and potential role of artificial intelligence
Since the early 1980s, short-term outcomes of KT have markedly improved due to the advancement of surgical techniques and immunosuppressive drugs; however, when it comes to long-term outcomes, no significant improvement has been achieved since the 2000s. Interest has now shifted to forecasting long-term patient and graft survival after KT [39,40,41].
Many factors such as delayed graft function (DGF) due to ischemia reperfusion injury, acute rejection (AR) and more particularly antibody-mediated rejection (AMR), chronic allograft nephropathy, and morbidities related to immunosuppressive treatment are blamed for the lack of long-term improvements in terms of patient and graft survival.
Table 3 provides an overview of the published studies aiming to predict these complications with AI: the predicted outcome, year, sample size, and findings of the studies in terms of predictors and performance measures of the predictive models.
Delayed graft function
There are many definitions of DGF in the literature but the most commonly adopted one is that of the United Network for Organ Sharing (UNOS), which is “the need for dialysis at least once within the first seven days after transplantation, indicated outside the context of hyperacute rejection, vascular or urinary tract complications, or hyperkalemia” [42].
DGF contributes to poor long-term gains [43], and its impact on KT outcomes is expected to grow as the use of marginal kidneys increases due to organ shortages.
DGF is associated with a significant reduction in graft half-life. In an American cohort including more than 65,000 KTs, the half-life was 11.5 years in the absence of DGF versus 7.2 years in cases involving DGF [44].
To date, no treatment or therapeutic strategy has become standard of care in the prevention or treatment of DGF. An accurate prediction of DGF can help establish an effective preventive strategy based on the predictors selected by the ML algorithm. Such a model may be beneficial not only in better graft allocation but also in determining the factors that predict DGF which enables interventions to prevent the modifiable ones.
Many authors have used AI (ML) to predict DGF (Table 3). However, most of the published studies did not generate an approved predictive model that can be introduced into daily practice. Such an achievement requires large, high-quality datasets including the relevant variables needed to predict DGF. The rate of DGF may also be reduced with more robust kidney graft allocation systems, which is likewise achievable with AI.
Antibody-mediated rejection
Several studies have evaluated the impact of AR on long-term graft survival, and it has been demonstrated that an AR episode is a major risk factor of chronic graft dysfunction and graft failure [45,46,47]. The FDA held an open public workshop in June 2010 to discuss the challenges in the treatment of AMR and highlighted the need for a clinical trial design aimed at improving the long-term outcomes [48]. In April 2017, another workshop was held to discuss new advances in AMR and the challenges of clinical trial design for its prevention and treatment [49]. Such trials can now be performed thanks to the advances of AI [50].
Shaikhina et al. used a very small dataset (80 KTs) for predicting acute AMR at 30 days post KT using ML algorithms, and their model had an accuracy of 85% [51].
Despite the decrease in its incidence, AR is still a major issue because of the high rate of subclinical rejection, which is detectable only by protocol biopsies [52]. AI, and more particularly deep learning, can help in extracting the features of this subtype of rejection. AI was introduced into the Banff classification in 2019, and its contribution was discussed with regard to image recognition and rejection-type recognition [16].
Graft survival
There is no unanimous definition of long-term kidney graft survival in the literature [53]; several thresholds have been used in the published studies (3 years, 5 years, 10 years, etc.) [25, 53]. All practitioners desire a graft that remains functional for life, hence the interest in extending the period required to judge prolonged survival.
For long-term graft survival, the main endpoint over time in some of the published studies was defined as the time of graft failure by either a return to dialysis or retransplantation [54]. Other studies developed models for a combined outcome of graft failure and death (graft and patient survival) [55]. The challenge in predicting and improving long-term graft survival is: (1) the multiplicity and the complexity of the involved factors; (2) the lack of study designs that can address this need. These limitations can be overcome thanks to the ML ability to integrate and learn from large and complex datasets, and its powerful prediction ability. The predictive models can be used as surrogate endpoints in the clinical trials on long-term outcomes (see Sect. 4.5).
Patient survival
Patient survival after KT is also far below that of the general population. The leading causes of death in the KT population have changed in the past few years. Even if cardiovascular disease and infections are still the main causes of death in this population, higher rates of death from malignancy are observed, even overtaking cardiovascular disease in some series. Infections and malignancies in KT are primarily due to immunosuppressive therapy. Cardiovascular complications can also be partly linked to immunosuppressive drugs. Hence, there is a vital need for new therapies in KT.
Development of new treatments and new study designs
The development of innovative therapies that are safer and better able to prevent DGF and reduce AMR is critical to improving long-term graft and patient outcomes.
The main barrier to the development of new drugs is the lack of acceptable new study designs that can address the current needs in KT. The main end-points accepted by the FDA for the past 20 years have been the incidence of biopsy-proven rejection, as well as 1-year patient and graft survival. These end-points, however, are insufficient to assess the long-term impact of the drugs. The traditional end-points have now forced non-inferiority trial designs, given that short-term outcomes are relatively good. Long-term efficiency assessment requires clinical trials with a follow-up period of 5–10 years. The extended survey periods result in an inefficient return on investment for pharmaceutical companies; therefore, the regulatory agencies do not enforce them. Moreover, long-term studies impose delays in offering potentially beneficial treatments to transplant recipients. This led two of the main regulatory agencies worldwide, the FDA and the European Medicines Agency (EMA), to emphasize the need for an early and powerful alternative tool in KT that pertinently predicts long-term outcomes [56]. AI has been used to meet this requirement. The iBox, which is a validated AI-based predictive model, has been approved as an alternative endpoint of long-term kidney graft survival. It has been applied in the large randomized controlled trial TRANSFORM in KT in order to project the long-term risk of kidney graft failure up to 11 years post-randomization using the 1-year post-randomization validated data [25, 50].
Conclusions
The areas of application of AI are expanding exponentially worldwide. Nephrologists will have to interact with AI in their daily practice in the near future, and the nephrology community therefore needs to be well informed about this technology. AI has the potential to help address the unmet needs in the field by enabling accurate predictions and data analyses beyond the reach of conventional statistics, especially in this era of data abundance, by capturing complex relationships in large datasets with many variables. With the existing KT databases and registries, AI technologies seem to be the best solution for filling current gaps, especially regarding long-term outcomes. To generalize the use of AI in nephrology, nephrologists worldwide need to understand the core concepts of AI and its subtypes, and how the models are created, so that they can evaluate them critically and participate actively in minimizing current challenges.
References
Kaya O, Schildbach J, AG DB, Schneider S (2019) Artificial intelligence in banking. Artif Intell
Kanika K, Priyanka P, Latika L, Kumar D (2019) Artificial intelligence... Application in Agriculture
Gómez-González E, Gomez E, Márquez-Rivas J, Guerrero-Claro M, Fernández-Lizaranzu I, Relimpio-López MI et al. (2020) Artificial intelligence in medicine and healthcare: a review and classification of current and near-future applications and their ethical and social Impact. arXiv Prepr arXiv200109778
Caselli M, Fracasso A, Traverso S (2021) Robots and risk of COVID-19 workplace contagion: evidence from Italy. Technol Forecast Soc Change 173:121097. Available from: https://www.sciencedirect.com/science/article/pii/S0040162521005308
Fishel JA, Oliver T, Eichermueller M, Barbieri G, Fowler E, Hartikainen T et al. (2020) Tactile telerobots for dull, dirty, dangerous, and inaccessible tasks. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). pp 11305–10
Benjamens S, Dhunnoo P, Meskó B (2020) The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. Npj Digit Med 3(1):118. https://doi.org/10.1038/s41746-020-00324-0
United States Food & Drug Administration (2019) Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. US Food Drug Adm, pp 1–20. Available from: https://www.fda.gov/media/122535/download
Mori Y, Neumann H, Misawa M, Kudo SE, Bretthauer M (2020) Artificial intelligence in colonoscopy: now on the market. What’s next? J Gastroenterol Hepatol 36:7–11
Thishya K, Vattam KK, Naushad SM, Raju SB, Kutala VK (2018) Artificial neural network model for predicting the bioavailability of tacrolimus in patients with renal transplantation. PLoS ONE 13(4):e0191921. https://doi.org/10.1371/journal.pone.0191921
Senanayake S, White N, Graves N, Healy H, Baboolal K, Kularatna S (2019) Machine learning in predicting graft failure following kidney transplantation: a systematic review of published predictive models. Int J Med Inform 130:103957. http://www.sciencedirect.com/science/article/pii/S1386505619302977
Sekercioglu N, Fu R, Kim SJ, Mitsakakis N (2021) Machine learning for predicting long-term kidney allograft survival: a scoping review. Irish J Med Sci 190(2):807–817. https://doi.org/10.1007/s11845-020-02332-1
Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P (2019) Introducing artificial intelligence training in medical education. JMIR Med Educ 5(2):e16048. https://pubmed.ncbi.nlm.nih.gov/31793895
Zulkarnain N, Anshari M (2016) Big data: concept, applications, and challenges. In: 2016 International Conference on Information Management and Technology (ICIMTech). pp 307–10
Hamet P, Tremblay J (2017) Artificial intelligence in medicine. Metabolism 69:S36–40. http://www.sciencedirect.com/science/article/pii/S002604951730015X
Niel O, Bastard P (2019) Artificial intelligence in nephrology: core concepts, clinical applications, and perspectives. Am J Kidney Dis 74(6):803–810. https://doi.org/10.1053/j.ajkd.2019.05.020
Loupy A, Mengel M, Haas M (2022) Thirty years of the international banff classification for allograft pathology: the past, present, and future of kidney transplant diagnostics. Kidney Int 101(4):678–691. https://doi.org/10.1016/j.kint.2021.11.028
Roufosse C, Simmonds N, Clahsen-van Groningen M, Haas M, Henriksen KJ, Horsfield C et al (2018) A 2018 reference guide to the Banff classification of renal allograft pathology. Transplantation 102(11):1795–1814. https://pubmed.ncbi.nlm.nih.gov/30028786
Farris AB, Vizcarra J, Amgad M, Donald Cooper LA, Gutman D, Hogan J (2021) Image analysis pipeline for renal allograft evaluation and fibrosis quantification. Kidney Int Rep 6(7):1878–1887. https://pubmed.ncbi.nlm.nih.gov/34307982
Badrouchi S, Ahmed A, Mongi Bacha M, Abderrahim E, Ben Abdallah T (2021) A machine learning framework for predicting long-term graft survival after kidney transplantation. Expert Syst Appl 182:115235. https://www.sciencedirect.com/science/article/pii/S0957417421006679
Van Loon E, Zhang W, Coemans M, De Vos M, Emonds M-P, Scheffner I et al (2021) Forecasting of patient-specific kidney transplant function with a sequence-to-sequence deep learning model. JAMA Netw Open 4(12):e2141617. https://doi.org/10.1001/jamanetworkopen.2021.41617
Kers J, Bülow RD, Klinkhammer BM, Breimer GE, Fontana F, Abiola AA et al (2022) Deep learning-based classification of kidney transplant pathology: a retrospective, multicentre, proof-of-concept study. Lancet Digit Health 4(1):e18–e26. https://doi.org/10.1016/S2589-7500(21)00211-9
Boenink R, Astley ME, Huijben JA, Stel VS, Kerschbaum J, Ots-Rosenberg M et al (2022) The ERA Registry Annual Report 2019: summary and age comparisons. Clin Kidney J 15(3):452–472. https://doi.org/10.1093/ckj/sfab273
Kruse CS, Goswamy R, Raval Y, Marawi S (2016) Challenges and opportunities of big data in health care: a systematic review. JMIR Med Inform 4(4):e38. https://pubmed.ncbi.nlm.nih.gov/27872036
Levey AS, Eckardt K-U, Tsukamoto Y, Levin A, Coresh J, Rossert J et al. (2005) Definition and classification of chronic kidney disease: a position statement from kidney disease: improving global outcomes (KDIGO). Kidney Int 67(6):2089–100. https://www.sciencedirect.com/science/article/pii/S0085253815506984
Loupy A, Aubert O, Orandi BJ, Naesens M, Bouatou Y, Raynaud M, et al. (2019) Prediction system for risk of allograft loss in patients receiving kidney transplants: international derivation and validation study. BMJ 366:l4923. http://www.bmj.com/content/366/bmj.l4923.abstract
Al-Aly Z, Balasubramanian S, McDonald JR, Scherrer JF, O’Hare AM (2012) Greater variability in kidney function is associated with an increased risk of death. Kidney Int 82(11):1208–14. https://www.sciencedirect.com/science/article/pii/S0085253815554732
Ma L, Gao J, Wang Y, Zhang C, Wang J, Ruan W et al (2020) AdaCare: explainable clinical health status representation learning via scale-adaptive feature extraction and recalibration. Proc AAAI Conf Artif Intell 3(34):825–832
Mullainathan S, Spiess J (2017) Machine learning: an applied econometric approach. J Econ Perspect 31(2):87–106
Passarelli G (2022) "Don't Google It": the effects of Google's ads dominance for users and competitors. In: Marchisio E (ed) Handbook of research on applying emerging technologies across multiple disciplines. IGI Global, Hershey, pp 333–351. https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-7998-8476-7.ch019
Inrig JK, Califf RM, Tasneem A, Vegunta RK, Molina C, Stanifer JW et al. (2014) The landscape of clinical trials in nephrology: a systematic review of Clinicaltrials.gov. Am J Kidney Dis 63(5):771–80. https://pubmed.ncbi.nlm.nih.gov/24315119
Ehsani-Moghaddam B, Martin K, Queenan JA (2019) Data quality in healthcare: a report of practical experience with the Canadian Primary Care Sentinel Surveillance Network data. Health Inf Manag J 50(1–2):88–92. https://doi.org/10.1177/1833358319887743
Oni S, Chen Z, Hoban S, Jademi O (2019) A comparative study of data cleaning tools. Int J Data Warehous Min 15(4):48–65
Ma L, Zhang C, Wang Y, Ruan W, Wang J, Tang W et al (2020) ConCare: personalized clinical feature embedding via capturing the healthcare context. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 833–840
Ma L, Ma X, Gao J, Jiao X, Yu Z, Zhang C et al. (2021) Distilling knowledge from publicly available online EMR data to emerging epidemic for prognosis, pp 3558–3568
Yoo KD, Noh J, Lee H, Kim DK, Lim CS, Kim YH et al (2017) A machine learning approach using survival statistics to predict graft survival in kidney transplant recipients: a multicenter cohort study. Sci Rep 7(1):1–12. https://doi.org/10.1038/s41598-017-08008-8
Ibrahim SA, Pronovost PJ (2021) Diagnostic errors, health disparities, and artificial intelligence: a combination for health or harm? JAMA Health Forum 2(9):e212430. https://doi.org/10.1001/jamahealthforum.2021.2430
Vellido A (2020) The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput Appl 32(24):18069–18083. https://doi.org/10.1007/s00521-019-04051-w
Pastorino R, De Vito C, Migliara G, Glocker K, Binenbaum I, Ricciardi W et al (2019) Benefits and challenges of big data in healthcare: an overview of the European initiatives. Eur J Public Health 29(Suppl 3):23–27. https://pubmed.ncbi.nlm.nih.gov/31738444
Coemans M, Süsal C, Döhler B, Anglicheau D, Giral M, Bestard O et al (2018) Analyses of the short- and long-term graft survival after kidney transplantation in Europe between 1986 and 2015. Kidney Int 94(5):964–973. https://doi.org/10.1016/j.kint.2018.05.018
Meier-Kriesche H-U, Schold JD, Srinivas TR, Kaplan B (2004) Lack of improvement in renal allograft survival despite a marked decrease in acute rejection rates over the most recent era. Am J Transplant 4(3):378–383
Rana A, Godfrey EL (2019) Outcomes in solid-organ transplantation: success and stagnation. Tex Heart Inst J 46(1):75–76. https://pubmed.ncbi.nlm.nih.gov/30833851
Schröppel B, Legendre C (2014) Delayed kidney graft function: from mechanism to translation. Kidney Int 86(2):251–258. https://doi.org/10.1038/ki.2014.18
Gorayeb-Polacchini FS, Caldas HC, Fernandes-Charpiot IMM, Ferreira-Baptista MAS, Gauch CR, Abbud-Filho M (2020) Impact of cold ischemia time on kidney transplant: a mate kidney analysis. Transplant Proc 52(5):1269–71. https://www.sciencedirect.com/science/article/pii/S0041134519314083
Halloran PF, Hunsicker LG (2001) Delayed graft function: state of the art, November 10–11, 2000. Summit Meeting, Scottsdale, Arizona, USA. Am J Transplant 1(2):115–20. https://doi.org/10.1034/j.1600-6143.2001.10204.x
Koo EH, Jang HR, Lee JE, Park JB, Kim S-J, Kim DJ et al (2015) The impact of early and late acute rejection on graft survival in renal transplantation. Kidney Res Clin Pract 34(3):160–4. https://pubmed.ncbi.nlm.nih.gov/26484041
Jalalzadeh M, Mousavinasab N, Peyrovi S, Ghadiani MH (2015) The impact of acute rejection in kidney transplantation on long-term allograft and patient outcome. Nephrourol Mon 7(1):e24439. https://pubmed.ncbi.nlm.nih.gov/25738128
Pallardó Mateu LM, Sancho Calabuig A, Capdevila Plaza L, Franco EA (2004) Acute rejection and late renal transplant failure: risk factors and prognosis. Nephrol Dial Transplant 19(Suppl 3):iii38-42
Archdeacon P, Chan M, Neuland C, Velidedeoglu E, Meyer J, Tracy L et al (2011) Summary of FDA antibody-mediated rejection workshop. Am J Transplant 11:896–906
Velidedeoglu E, Cavaillé-Coll MW, Bala S, Belen OA, Wang Y, Albrecht R (2018) Summary of 2017 FDA Public Workshop: Antibody-mediated Rejection in Kidney Transplantation. Transplantation 102(6). https://journals.lww.com/transplantjournal/Fulltext/2018/06000/Summary_of_2017_FDA_Public_Workshop_.15.aspx
Aubert O, Divard G, Pascual J, Oppenheimer F, Sommerer C, Citterio F et al (2021) Application of the iBox prognostication system as a surrogate endpoint in the TRANSFORM randomised controlled trial: proof-of-concept study. BMJ Open 11(10):e052138. http://bmjopen.bmj.com/content/11/10/e052138.abstract
Shaikhina T, Lowe D, Daga S, Briggs D, Higgins R, Khovanova N (2019) Decision tree and random forest models for outcome prediction in antibody incompatible kidney transplantation. Biomed Signal Process Control 52:456–62. http://www.sciencedirect.com/science/article/pii/S1746809417300204
Rush DN (2020) Subclinical rejection: a universally held concept? Curr Transplant Rep 7(3):163–168. https://doi.org/10.1007/s40472-020-00290-2
Legendre C, Canaud G, Martinez F (2014) Factors influencing long-term outcome after kidney transplantation. Transpl Int 27(1):19–27
Brown TS, Elster EA, Stevens K, Graybill JC, Gillern S et al (2012) Bayesian modeling of pretransplant variables accurately predicts kidney graft survival. Am J Nephrol 36(6):561–569. https://doi.org/10.1159/000345552
Lin RS, Horn SD, Hurdle JF, Goldfarb-Rumyantzev AS (2008) Single and multiple time-point prediction models in kidney transplant outcomes. J Biomed Inform 41(6):944–52. http://www.sciencedirect.com/science/article/pii/S1532046408000439
Stegall MD, Morris RE, Alloway RR, Mannon RB (2016) Developing new immunosuppression for the next generation of transplant recipients: the path forward. Am J Transplant 16(4):1094–1101
Jen K-Y, Albahra S, Yen F, Sageshima J, Chen L-X, Tran N et al. (2021) Automated en masse machine learning model generation shows comparable performance as classic regression models for predicting delayed graft function in renal allografts. Transplantation 105(12). https://journals.lww.com/transplantjournal/Fulltext/2021/12000/Automated_En_Masse_Machine_Learning_Model.38.aspx
Konieczny A, Stojanowski J, Rydzyńska K, Kusztal M, Krajewska M (2021) Artificial intelligence-a tool for risk assessment of delayed-graft function in kidney transplant. J Clin Med 10(22):5244. https://pubmed.ncbi.nlm.nih.gov/34830526
Bae S, Massie AB, Caffo BS, Jackson KR, Segev DL (2020) Machine learning to predict transplant outcomes: helpful or hype? A national cohort study. Transpl Int 33(11):1472–1480. https://doi.org/10.1111/tri.13695
Kawakita S, Beaumont JL, Jucaud V, Everly MJ (2020) Personalized prediction of delayed graft function for recipients of deceased donor kidney transplants with machine learning. Sci Rep 10(1):18409. https://doi.org/10.1038/s41598-020-75473-z
Costa SD, de Andrade LGM, Barroso FVC, de Oliveira CMC, Daher EDF, Fernandes PFCBC et al (2020) The impact of deceased donor maintenance on delayed kidney allograft function: a machine learning analysis. PLoS ONE 15(2):e0228597. https://doi.org/10.1371/journal.pone.0228597
Decruyenaere A, Decruyenaere P, Peeters P, Vermassen F, Dhaene T, Couckuyt I (2015) Prediction of delayed graft function after kidney transplantation: comparison between logistic regression and machine learning methods. BMC Med Inform Decis Mak 15:83. https://pubmed.ncbi.nlm.nih.gov/26466993
Li J, Serpen G, Selman S, Franchetti M, Riesen M, Schneider C (2010) Bayes net classifiers for prediction of renal graft status and survival period. World Acad Sci Eng Technol 1(63):144–150
Brier ME, Ray PC, Klein JB (2003) Prediction of delayed renal allograft function using an artificial neural network. Nephrol Dial Transplant 18(12):2655–2659. https://doi.org/10.1093/ndt/gfg439
Shoskes DA, Ty R, Barba L, Sender M (1998) Prediction of early graft function in renal transplantation using a computer neural network. Transplant Proc 30(4):1316–7. https://www.sciencedirect.com/science/article/pii/S0041134598002577
Tapak L, Hamidi O, Amini P, Poorolajal J (2017) Prediction of kidney graft rejection using artificial neural network. Healthc Inform Res 23(4):277–284. https://doi.org/10.4258/hir.2017.23.4.277
Esteban C, Staeck O, Baier S, Yang Y, Tresp V (2016) Predicting clinical events by combining static and dynamic information using recurrent neural networks, pp 93–101
Hummel AD, Maciel RF, Rodrigues RGS, Pisa IT (2010) Application of artificial neural networks in renal transplantation: classification of nephrotoxicity and acute cellular rejection episodes. Transplant Proc 42(2):471–2. https://www.sciencedirect.com/science/article/pii/S0041134510001429
Petrovsky N, Tam SK, Brusic V, Russ GR, Socha LA, Bajic VB (2002) Use of artificial neural networks in improving renal transplantation outcomes. Graft 5:6–13
Abdolmaleki P, Movhead M, Taniguchi R-I, Masuda K, Buadu LD (1997) Evaluation of complications of kidney transplantation using artificial neural networks. Nucl Med Commun 18(7). https://journals.lww.com/nuclearmedicinecomm/Fulltext/1997/07000/Evaluation_of_complications_of_kidney.5.aspx
Nematollahi M, Akbari R, Nikeghbalian S, Salehnasab C (2017) Classification models to predict survival of kidney transplant recipients using two intelligent techniques of data mining and logistic regression. Int J Organ Transplant Med 8(2):119–122. https://pubmed.ncbi.nlm.nih.gov/28959387
Bashiri A, Ghazisaeedi M, Safdari R, Shahmoradi L, Ehtesham H (2017) Improving the prediction of survival in cancer patients by using machine learning techniques: experience of gene expression data: a narrative review. Iran J Public Health 46(2):165–172. https://pubmed.ncbi.nlm.nih.gov/28451550
Lofaro D, Maestripieri S, Greco R, Papalia T, Mancuso D, Conforti D et al. (2010) Prediction of chronic allograft nephropathy using classification trees. Transplant Proc 42(4):1130–3. http://europepmc.org/abstract/MED/20534242
Greco R, Papalia T, Lofaro D, Maestripieri S, Mancuso D, Bonofiglio R (2010) Decisional trees in renal transplant follow-up. Transplant Proc 42(4):1134–6. http://www.sciencedirect.com/science/article/pii/S0041134510003490
Akl A, Ismail AM, Ghoneim M (2008) Prediction of graft survival of living-donor kidney transplantation: nomograms or artificial neural networks? Transplantation 86(10). https://journals.lww.com/transplantjournal/Fulltext/2008/11270/Prediction_of_Graft_Survival_of_Living_Donor.12.aspx
Krikov S, Khan A, Baird BC, Barenbaum LL, Leviatov A, Koford JK et al (2007) Predicting kidney transplant survival using tree-based modeling. ASAIO J 53(5):592–600. http://europepmc.org/abstract/MED/17885333
Ethics declarations
Conflict of interest
The authors of this manuscript have no conflicts of interest to disclose.
Ethical statement
This research is the authors’ original work conducted in compliance with ethical standards. All sources used are properly cited.
Cite this article
Badrouchi, S., Bacha, M.M., Hedri, H. et al. Toward generalizing the use of artificial intelligence in nephrology and kidney transplantation. J Nephrol 36, 1087–1100 (2023). https://doi.org/10.1007/s40620-022-01529-0