1 Introduction

Edge computing brings data collection, processing and storage to the location where the data are generated. An edge device deployed at a hospital can process data at the source instead of sending the data to the cloud. This approach can help healthcare applications reduce latency and optimize the collection, storage and analysis of data. There are ongoing discussions in the health and life sciences industry about new data management models that can switch between cloud and edge computing just-in-time (JIT) based on need, cost and benefit. However, such a model does not exist in the industry, nor has one been discussed in the current literature.

Some authors have argued that it makes sense to limit communications from patient wearable devices to the cloud, i.e., data are processed at the edge while data summaries are sent to the cloud at a prescribed interval [1, 2], while others have stated that all data need to be processed in the cloud [3, 4] because patient data are so large.

With respect to processing data at the edge, the authors in [5] proposed an edge-based heart disease prediction device in which patient health data such as temperature, pulse rate and accelerometer readings are sent to the edge for processing, thereby achieving low latency. In another article [6], the authors proposed an architecture that enhances service allocation by leveraging transparent computing, taking advantage of edge-computing devices to improve scalability and reduce response delay. Another article [7] proposed an IoT-based smart healthcare video surveillance system in which the network bandwidth required for transmitting video data is reduced significantly through edge-level computing and filtering. In another article [8], the authors developed a clustering model using edge computing for medical applications (CMMA) to provide effective communication for IoMT-based applications. Though these studies convey that edge computing is powerful, processing all data at the edge, as proposed in these papers, can be problematic for many real-world healthcare use cases since IoMT devices generate massive amounts of data. It is therefore practically impossible for edge devices to process such massive data without incurring processing delays [7, 9].

With respect to processing data in the cloud, several studies have used the cloud to store and process healthcare data [8, 10, 11], but most of them note that latency is one of the major challenges since all data need to be sent to the cloud for processing. Other studies have also pointed out that the cloud can pose a challenge for healthcare use cases since network breakdowns are possible [12], and high latency or failure of internet access during critical patient monitoring could lead to catastrophic situations.

1.1 The problem statement

Based on the above discussion, it is clear that edge devices cannot process massive healthcare data without incurring processing delays. On the other hand, sending all data to the cloud introduces latency and is also not a feasible solution. The literature defines solutions that process data either at the edge [7, 9] or in the cloud [4], but there is no model that can, in real time and based on the scenario, switch data processing and storage between cloud and edge.

We need an intelligent model that can automatically detect, just-in-time, whether data need to be processed at the edge or in the cloud, based on the latency requirement of the scenario. Such a model has not been discussed in the literature so far, which constitutes an important gap.

1.2 The solution

As part of this paper, we have developed an application that can automatically and in real time determine whether data need to be processed and stored at the edge or in the cloud, based on the latency requirements of the use case, and process the data accordingly. The rule of thumb is that if the use case demands low latency, data are processed at the edge; otherwise, data are processed in the cloud. Latency is used as the trigger to determine where data are stored and processed (i.e., at the edge or in the cloud) because some critical healthcare use cases demand low latency. For example, a patient who is at high risk of heart attack after a surgical procedure needs to be monitored round the clock, and doctors need real-time, instant access to the patient's data during an emergency. For such use cases low latency is mandatory, and therefore the data are placed at the edge.

In this paper, we define a unique solution called the Automatic edge application (Automatic edge app) to monitor high risk patients (aged between 65 and 85 years with a history of cardiovascular disease) who carry a risk of heart attack after non-cardiac surgery. Since the chances of a heart attack during the first 30 days [13, 14] after a non-cardiac surgery (such as coronary angiography) are high, especially for these high risk patients, it is essential that they are continuously monitored after surgery to predict an occurrence of heart attack. In such a scenario, low latency is extremely important: if a risk of heart attack is predicted, it must be immediately flagged to the doctor. This is achieved by placing the specific patient's data at the edge and using a machine learning model (deployed at the edge) to predict a heart attack from the patient's real-time data. The edge servers are placed within the hospital premises to ensure there are no latency issues due to network or connectivity. After the specific patient's health is stable (i.e., after 30 days), the patient can be discharged and their data are automatically removed from the edge and moved to the cloud. By default, the Automatic edge app places all high risk patients' data on the edge for a duration of 30 days. After the 30-day period, the patient's data are automatically moved to the cloud, releasing the edge resources for other patients and use cases.

1.3 Datasets

The Heart Attack ML model was trained using the Heart Failure Prediction Dataset on Kaggle [15]. In this dataset, 5 heart datasets are combined over 11 common features, which makes it the largest heart disease dataset available so far for research purposes [15]; we therefore used this dataset.

1.4 Latency results

We performed experiments by processing 66 patients' data on the edge, and the same patient data were loaded on the cloud. The results across these 66 patients show that edge computing processes data approximately 55% faster than the cloud.

The rest of this paper is organized as follows: Section 2 covers the functional scenario of the Automatic edge app, the datasets and technologies used for the implementation. Section 3 describes the solution. Section 4 provides the results. Section 5 shares a discussion, and Section 6 provides a conclusion and future research.

2 Methods

In this section, we describe the Automatic edge app, which can automatically decide where data processing needs to happen for a specific patient (edge or cloud) and for what duration, based on the latency requirements of the use case.

2.1 Application scenario

2.1.1 Current challenge

Approximately 251,000 patients die each year from human mistakes, diagnostic errors and preventable patient monitoring events, making known medical errors the nation's third leading cause of death [16]. Patients' data availability with low latency is one of the most important requirements for healthcare use cases [17], especially during emergencies when doctors are treating critically ill patients. Latency is one of the biggest challenges of using cloud computing in healthcare [18] and could lead to catastrophic outcomes during emergencies [19] such as surgeries.

More than 300 million surgeries are performed worldwide per year [20], and despite the undoubted benefits of surgery, non-cardiac surgical procedures are possible triggers for major adverse cardiac events (MACE) in high risk patients [13, 21], which typically occur during the first 30 days after the surgery. These adverse cardiac events include myocardial infarction, acute heart failure and cardiovascular death [22, 23]. Several complex pathophysiological processes triggered by surgical and anesthetic stress, including an increase in sympathetic and neurohumoral activity, pro-coagulant factors, intravascular volume load, and systemic inflammation [24,25,26], as well as several periprocedural factors such as intraoperative tachycardia [24], intraoperative hypertension [27], perioperative hypotension [26, 28], and perioperative anaemia [29], seem to contribute to MACE following non-cardiac surgery. High risk patients are defined as individuals aged between 65 and 85 years who have a history of cardiovascular disease (coronary artery disease, peripheral artery disease, or stroke) [30]. For the sake of clarity, we will refer to all other patients as "low risk patients".

2.1.2 Our solution to address the challenge (using automatic edge app)

The proposed Automatic edge app addresses the challenges arising from the above research findings by continuously monitoring high risk individuals for heart attacks after non-cardiac surgical procedures for a duration of 30 days, which is considered the "risk period". We used a trained ML algorithm to predict heart attack from the patients' real-time health readings.

The Automatic edge app can monitor high risk individuals for heart attacks after non-cardiac surgical procedures. However, from a technical standpoint it is essential that such monitoring is accomplished with low latency [31]. Latency plays an important role in delivering high quality patient care [27], and doctors need to be immediately informed of any risk of heart attack to prevent catastrophic health situations. As cloud models introduce latency [12], they are not an ideal solution for monitoring such critical cases. Therefore, we have developed the Automatic edge app, which can intelligently and automatically perform data processing at the edge to predict heart attack for high risk individuals (who have undergone a non-cardiac surgical procedure) during the first 30-day risk period, while data processing for low risk patients is performed in the cloud.

2.1.3 Controlling variables

  • Data placement on the edge—Data of all high risk individuals who have undergone a non-cardiac surgical procedure are automatically placed on the edge for the first 30-day risk period.

  • Data placement on the cloud—Data of all low risk individuals are automatically placed on the cloud indefinitely.

  • Data movement from edge to cloud—After the 30-day risk period, high risk patient data are automatically deleted from the edge and transferred to the cloud.
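The placement rules above can be sketched as a simple decision function. This is a minimal illustration, assuming a fixed 30-day risk period; the function and field names are ours, not the app's actual code:

```python
from datetime import date, timedelta

RISK_PERIOD_DAYS = 30  # high risk patients stay on the edge for 30 days


def is_high_risk(age, has_cardiovascular_disease):
    """High risk: aged 65-85 with a history of cardiovascular disease."""
    return 65 <= age <= 85 and has_cardiovascular_disease


def placement(age, has_cardiovascular_disease, surgery_date, today):
    """Return 'edge' during the 30-day risk period for a high risk
    patient, otherwise 'cloud'."""
    if is_high_risk(age, has_cardiovascular_disease):
        if today <= surgery_date + timedelta(days=RISK_PERIOD_DAYS):
            return "edge"
    return "cloud"
```

For example, a 70-year-old patient with cardiovascular disease, 15 days after surgery, would be placed on the edge; the same patient after day 30 would be placed on the cloud.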

2.2 Automatic edge application environment

2.2.1 Servers and storage

Cloud server—We used a Linux server on x86-based Elastic Compute Cloud (EC2) Mac instances hosted on Amazon Web Services (AWS).

  • Intel 8th generation 3.2 GHz (4.6 GHz turbo) Core i7 processors.

  • 6 physical and 12 logical cores.

  • 32 GiB of memory.

30 GB of storage was available through Amazon Elastic Block Store (EBS).

Edge server—We used a Lenovo ThinkPad X390 Yoga 2-in-1 laptop with an Intel Core i5-8265U 1.6 GHz processor, 8 GB DDR4 RAM, a 256 GB SSD, Intel UHD graphics, and Windows 10 Pro.

2.2.2 Software

We used a combination of ReactJS version 18, NodeJS version 18 and Python 3.10 to develop the application, with PostgreSQL version 12 as the relational database.

  • Front End User Interfaces were developed using ReactJS

  • EdgeController (depicted in Fig. 1) and CloudController are two back-end programs developed using NodeJS.

  • Heart Attack ML Model—We used an optimized XGBoost-based heart disease prediction model [32] for the implementation. One-hot encoding was used to encode the categorical features in the data pre-processing step. We then applied Bayesian optimization for hyper-parameter tuning to improve the prediction results. XGBoost is a package maintained by the Distributed Machine Learning Community [33]. The XGBoost algorithm is an advanced version of the gradient boosting algorithm, has been used across several other implementations [34,35,36], and can handle regularization and overfitting/underfitting issues. The Heart Attack ML model is trained using the dataset described in Sect. 2.3.1.
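One-hot encoding of a categorical feature such as ChestPainType can be illustrated in plain Python. This is a sketch only; the actual pipeline presumably uses a library encoder:

```python
def one_hot(value, categories):
    """Encode a categorical value as a 0/1 vector over a fixed
    category order, as done for features such as ChestPainType."""
    if value not in categories:
        raise ValueError(f"unknown category: {value}")
    return [1 if value == c else 0 for c in categories]


# The four chest pain types from the dataset (Sect. 2.3.1)
CHEST_PAIN_TYPES = ["TA", "ATA", "NAP", "ASY"]

# "ATA" (Atypical Angina) becomes [0, 1, 0, 0]
encoded = one_hot("ATA", CHEST_PAIN_TYPES)
```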

Fig. 1

Automatic edge app architecture

2.2.3 Deploying heart attack ml model on edge and cloud (Amazon EC2)

We hosted the ML model on the edge device and AWS EC2 instance.

As a first step, the Heart Attack ML model code was downloaded and deployed locally on the edge server and started as a Flask app. We installed all the required packages and executed app.py (the entry point of the Flask application) to start the ML model on the edge.

Similarly, we launched the AWS EC2 instance, copied all the files to it, installed the packages, and executed app.py to start the ML model on AWS.

2.3 Automatic edge application data collection instruments

2.3.1 Heart attack prediction dataset

We used the Heart Failure Prediction Dataset from Kaggle [15]. In this dataset, 5 heart datasets are combined over 11 common features which makes it the largest heart disease dataset available so far for research purposes [15]. The five datasets used are:

  1. Cleveland: 303 observations

  2. Hungarian: 294 observations

  3. Switzerland: 123 observations

  4. Long Beach VA: 200 observations

  5. Stalog (Heart) Data Set: 270 observations

Total: 1190 observations

Duplicated: 272 observations

The Optimized XGBoost-based Heart Attack ML model was trained using this data set [15].
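The observation counts above can be cross-checked directly: the five source datasets sum to 1190 observations, and removing the 272 duplicates leaves 918 unique records (assuming the duplicates are dropped before training, per the Kaggle dataset description):

```python
# Observation counts from the five combined heart datasets
sources = {
    "Cleveland": 303,
    "Hungarian": 294,
    "Switzerland": 123,
    "Long Beach VA": 200,
    "Stalog (Heart)": 270,
}

total = sum(sources.values())   # 1190 observations in total
unique = total - 272            # 918 after removing duplicates
print(total, unique)            # 1190 918
```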

Each patient record has the following data attributes:

  1. Age: age of the patient [years]

  2. Sex: sex of the patient [M: Male, F: Female]

  3. ChestPainType: chest pain type [TA: Typical Angina, ATA: Atypical Angina, NAP: Non-Anginal Pain, ASY: Asymptomatic]

  4. RestingBP: resting blood pressure [mm Hg]

  5. Cholesterol: serum cholesterol [mg/dl]

  6. FastingBS: fasting blood sugar [1: if FastingBS > 120 mg/dl, 0: otherwise]

  7. RestingECG: resting electrocardiogram results [Normal: Normal, ST: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV), LVH: showing probable or definite left ventricular hypertrophy by Estes' criteria]

  8. MaxHR: maximum heart rate achieved [Numeric value between 60 and 202]

  9. ExerciseAngina: exercise-induced angina [Y: Yes, N: No]

  10. Oldpeak: oldpeak = ST [Numeric value measured in depression]

  11. ST_Slope: the slope of the peak exercise ST segment [Up: upsloping, Flat: flat, Down: downsloping]

  12. HeartDisease: output class [1: heart disease, 0: Normal]

2.3.2 Patients data

Simulated real-time patient health data are captured in a CSV file (named "PatientSimilationFile.csv"); we will call this the Patient Simulator file henceforth. This file is read by the Automatic edge app, and each patient's data are then stored in the database for further processing. In future, this CSV file can be replaced by integrating the Automatic edge app with wearable devices.

2.3.2.1 Patient real time data (simulation file)

The Patient Simulator file simulates data capture from wearable devices such as heart rate monitors, sugar tracking devices and ECG readers. The file contains all patient data (listed in Sect. 2.3.1) and uses Patient ID and Patient Name as the link (primary key) to identify an individual patient.

2.3.2.2 Database tables

We created two database tables to store patients' data. These tables exist both on the AWS cloud and on the edge server: high risk patient data are stored in the edge database, whereas low risk patient data are stored in the cloud database using these tables.

One table stores the patient's permanent data and the second stores the patient's real-time data (which change continuously). The reason for keeping these two tables separate is database performance: reads and writes most often hit Table 2, so keeping it separate can improve query performance.
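The two-table split can be sketched with illustrative DDL. The column names beyond the attributes listed in Sect. 2.3.1 are assumptions rather than the app's exact schema, and an in-memory SQLite database stands in for PostgreSQL here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the PostgreSQL database
conn.executescript("""
CREATE TABLE Patient_Basic_Data (                -- rarely-changing data
    patient_id   INTEGER PRIMARY KEY,
    patient_name TEXT NOT NULL,
    age          INTEGER,
    sex          TEXT
);
CREATE TABLE Patient_Transactional_Heart_Data (  -- frequently updated readings
    patient_id   INTEGER REFERENCES Patient_Basic_Data(patient_id),
    resting_bp   INTEGER,
    cholesterol  INTEGER,
    max_hr       INTEGER,
    data_on_edge INTEGER,    -- 1 if this patient's data are held on the edge
    reading_time TEXT
);
""")
```

Keeping the high-churn transactional table separate from the basic-data table is what lets the frequent reads and writes avoid touching the stable patient records.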

The first table is named "Patient_Basic_Data" and contains patients' basic data, as depicted in Table 1.

Table 1 Sample dataset used for the application (Patient_Basic_Data table)

The second database table is named "Patient_Transactional_Heart_Data" and consists of patient transactional data, as depicted in Table 2. In addition, the "Data on edge" field indicates where the specific patient's data are stored, i.e., cloud or edge.

Table 2 Transactional Data (Patient_Transactional_Heart_Data Table)

Patient data are read from the Patient Simulator file by the Automatic edge app, which is invoked automatically whenever "PatientSimilationFile.csv" changes. Patients are identified by the application using the Patient ID: for every new patient, a record is added to the edge database or the cloud database depending on whether the patient is categorized as high risk or low risk, respectively. Similarly, changes to an existing patient's record are identified via the Patient ID, and the record is updated in the edge database or cloud database. As an example, for a high risk patient, if the Patient ID does not exist in the database, the record is inserted into the edge database as a new patient; if the Patient ID already exists, the edge database is updated with the new health reading for that patient.
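The insert-or-update behaviour keyed on Patient ID can be sketched as follows. This is a minimal illustration against SQLite; the app itself targets PostgreSQL, and the function name here is ours:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE patient_readings (patient_id INTEGER PRIMARY KEY, max_hr INTEGER)"
)


def add_update_patient(conn, patient_id, max_hr):
    """Insert a new patient record, or update the readings if the
    Patient ID already exists (mirroring the EdgeController's logic)."""
    conn.execute(
        "INSERT INTO patient_readings (patient_id, max_hr) VALUES (?, ?) "
        "ON CONFLICT(patient_id) DO UPDATE SET max_hr = excluded.max_hr",
        (patient_id, max_hr),
    )


add_update_patient(conn, 1, 120)  # new Patient ID -> record inserted
add_update_patient(conn, 1, 135)  # existing Patient ID -> reading updated
```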

2.4 Automatic edge architecture

Our Automatic edge app architecture is depicted in Fig. 1.

The patient body readings used to initialize the Automatic edge app are currently simulated using the Patient Simulator file. In future, this file can be replaced by integration with wearable devices. Doctors are connected to the Automatic edge app and receive recommendations on patients' health via email.

The "edge layer" is generally owned by hospitals, which deploy the Automatic edge app on an edge server within the hospital premises.

In the "cloud layer", high computational activities are performed, such as training the machine learning algorithm and performing data processing for low risk patients. The cloud layer is connected to the edge layer over an internet connection.

  1. As soon as a new patient's data are entered in the Patient Simulator file, the EdgeController is invoked. It automatically selects all high risk patients who have undergone a non-cardiac surgical procedure (using a combination of the Age and Heart Disease fields), stores their data on the edge for a duration of 30 days, and monitors any risk of heart attack using the Heart Attack ML model deployed on the edge. Similarly, all other (low risk) patients' data are stored on the cloud, where the cloud-side Heart Attack ML model monitors the risk of heart attack. The Heart Attack ML model is pretrained using the Kaggle dataset and deployed on the cloud and the edge, respectively. It runs automatically on the edge and the cloud as soon as a new patient record is added to the Patient Simulator file.

  2. In addition to the default behaviour where high risk patients' data are automatically stored on the edge for 30 days, the doctor is provided with User Interface 1 (Fig. 3) to extend the duration for which a specific patient's data remain on the edge. By default, after the 30-day limit, all data existing on the edge for a specific patient are moved to the cloud and deleted from the edge, unless the doctor specifically extends the duration beyond the default 30-day period.

  3. The EdgeController program continuously monitors changes to the Patient Simulator file. Whenever a specific high risk patient's record changes (with new readings), the EdgeController detects this change and automatically updates the edge database with the new readings. Subsequently, the Heart Attack ML model at the edge is triggered to calculate the risk of heart attack. If a risk is predicted, an email is sent to the doctor indicating the heart attack risk. The same process is executed when a new high risk patient record is added to the Patient Simulator file.

  4. Similarly to step 3, whenever a specific low risk patient's record changes (with new readings), the EdgeController detects this change and automatically calls the CloudController, which in turn updates the cloud database with the new readings. Subsequently, the Heart Attack ML model in the cloud is invoked to calculate the patient's risk of heart attack, and if a risk is predicted, an email is sent to the doctor. The same process is executed when a new low risk patient record is added to the Patient Simulator file.

  5. The doctor is also provided with User Interface 2 (Fig. 5) to view all patients' data stored on both cloud and edge servers. As soon as the doctor logs into User Interface 2, the EdgeController retrieves all patients' data from the edge database and cloud database, respectively, and displays the patient details.

3 Automatic edge app implementation

In this section, we describe the application flow of the Automatic edge app. There are three specific application flows.

Data Processing Decision Flow—The first flow (Sect. 3.1) relates to the data processing location for a specific patient's data, which is either the edge or the cloud and is decided automatically by our Automatic edge app.

Heart Attack Prediction and Notification—In this flow (Sect. 3.2), patients' data are analyzed by the Heart Attack ML model to predict the risk of heart attack, and the doctor is notified accordingly.

Reporting—The third flow (Sect. 3.3) is the data visualization flow, where the doctor is provided with a user interface to view patient data.

3.1 Data processing decision flow

In this section, we describe the application flow (depicted in Fig. 2) to store patient data either on the cloud database or the edge database based on patient health condition and their age.

Fig. 2

The logical execution flow to store patient data

New patients' data can be added to the Patient Simulator file, or existing patients' data can be updated in it. As soon as a change is made to this file, the EdgeController program is invoked automatically via addupdatePatientData().

If new patients are added to the Patient Simulator file, the EdgeController program automatically identifies all high risk patients (those suffering from cardiovascular disease, aged between 65 and 85 years, who have undergone a non-cardiac surgery) and stores their data in the edge database using the addupdatePatientDataonedge() function; the remaining low risk patients' data are sent to the CloudController for storage in the cloud database.

Alternatively, if an existing patient record in the Patient Simulator file is modified with new readings, the EdgeController program updates the edge database using addupdatePatientDataonedge() for high risk patients, or updates the cloud database for low risk patients.

The doctor can view all of his patients' records, whether the data reside on the edge or on the cloud, as depicted in Fig. 3. All patient records residing on the edge remain there for 30 days by default; however, the doctor is provided with an option via User Interface 1 (Fig. 3) to extend the duration based on his requirements.

Fig. 3

User interface 1, to view patient records or extend their duration on the edge

If the doctor chooses to extend the duration for which specific patient records remain on the edge beyond the default 30 days, the updateSelection() function invokes the EdgeController program, which then calls the addupdatePatientDataonedge() function to update the new values (date and time) for the specific patient record.
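The 30-day expiry and the doctor's extension can be sketched as a per-patient deadline. This is an illustrative sketch with names of our choosing; in the app, updateSelection() effectively rewrites this date:

```python
from datetime import date, timedelta

DEFAULT_EDGE_DAYS = 30  # default risk period on the edge


def edge_deadline(surgery_date, extension_days=0):
    """Date after which a patient's data are moved from edge to cloud:
    30 days by default, plus any extension granted by the doctor."""
    return surgery_date + timedelta(days=DEFAULT_EDGE_DAYS + extension_days)


def should_move_to_cloud(surgery_date, today, extension_days=0):
    """True once the (possibly extended) edge deadline has passed."""
    return today > edge_deadline(surgery_date, extension_days)
```

With a surgery on 1 January, data would leave the edge after 31 January unless the doctor grants an extension.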

3.2 Heart attack prediction and notification (on the edge and cloud)

The Patient Simulator file consists of all patient data, and any change to this file invokes the Automatic edge app. A change could mean new patient data being added to the file or existing patient data being updated with new readings. In both cases, the next step is to predict heart attacks based on the latest patient data available in the database. The steps shown in Fig. 4 are executed by the EdgeController and CloudController programs for each individual patient whose data have changed in the edge database and cloud database, respectively. We describe only the edge-side process flow below, but note that a similar flow is executed on the cloud for all patients whose data have been modified there.

Fig. 4

Heart attack ML prediction process flow

As soon as new patient data are inserted or existing patient data are modified, the EdgeController program sends the patient's data to the Heart Attack ML model using the sendPatientData() function. The Heart Attack ML model then runs the prediction on the patient's data using the process() function and sends the results back to the EdgeController using sendHeartAttackResults(). Based on the results, the EdgeController triggers an email recommendation to the doctor if a risk of heart attack is determined.
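The message sequence of Fig. 4 can be sketched with stub functions. Only the control flow mirrors the app; the model and the email step are stand-ins of our own making:

```python
def process(patient_data, model):
    """Stand-in for the Heart Attack ML model's process() step."""
    return model(patient_data)  # True if a heart attack risk is predicted


def send_patient_data(patient_data, model, notify):
    """Mirror of sendPatientData()/sendHeartAttackResults(): run the
    prediction and notify the doctor only when a risk is flagged."""
    at_risk = process(patient_data, model)
    if at_risk:
        notify(f"Heart attack risk for patient {patient_data['patient_id']}")
    return at_risk


# Usage with a trivial stub model and a list capturing the "emails"
emails = []
stub_model = lambda d: d["max_hr"] > 180  # toy threshold, not the XGBoost model
send_patient_data({"patient_id": 1, "max_hr": 190}, stub_model, emails.append)
```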

3.3 Reporting/data visualization

As part of the data visualization flow, we developed User Interface 2 (Fig. 5) for doctors to view patients' detailed information. In this flow, the doctor can review all patient records irrespective of where their data reside, i.e., the cloud database or the edge database.

Fig. 5

User interface 2 to review patient details

As depicted in Fig. 5, a doctor can retrieve a patient's information by logging into User Interface 2. The patient's information includes name, age and current medical condition. Doctors can also see whether a specific patient's record is located on the cloud or the edge, along with the time until which the record will reside on the edge. This user interface can be extended further by other researchers who wish to add new fields to the screen.

Figure 6 depicts the process flow to retrieve patient data from the cloud and edge databases, respectively. As soon as the page is loaded, the retrieveSummary() function is invoked, which retrieves all patient data residing in the edge database using the retrievepatientDetailsedge() function. Subsequently, the retrievepatientDetailsCloud() function on the CloudController program is invoked to retrieve all patient data from the cloud database. Finally, the patient data retrieved from the edge and cloud databases are displayed on User Interface 2 by the EdgeController program.

Fig. 6

Process flow to review patient details

4 Results

In this section, we evaluate the performance of our Automatic edge app from a latency perspective.

4.1 Latency

Latency is determined by measuring the time taken (in seconds) for each patient's data to be loaded on the edge server and on the cloud server.

We performed experiments across 66 patients' data, split into four batches: the first batch of approximately 19 patients was tested on a weekday (Monday) and the second batch of another 20 patients on a weekend (Sunday). We tested another batch of 16 patients during peak internet usage hours (Monday between 10 AM and 11 AM) and a final batch during off-peak hours (Monday between 10 PM and 11 PM). We split the experiments across weekend, weekday, and peak and off-peak internet usage hours because we wanted to test our application under different stress conditions.

For each experiment, we updated the patient's record in the "PatientSimilationFile.csv" file. We ensured that each patient's data first reached the edge for processing and then the cloud, and finally compared the end-to-end latency for each patient on the edge and the cloud, respectively. We achieved this by alternately changing two fields (Age and Heart Disease) for each patient in the CSV file.

The execution flow for Patient A on the edge is demonstrated below. The same experiment can easily be repeated by anyone who desires to perform more tests:

  (a) As an initial step, Patient A's data are entered manually into the "PatientSimilationFile.csv" file from the back end. We entered the patient's age as 70 and heart disease as "Y". Based on the Age and Heart Disease fields, the data are automatically sent to the edge for processing.

  (b) The EdgeController program starts automatically as soon as the file is saved, and the start time is recorded. The EdgeController then executes the business logic, i.e., it invokes the Heart Attack ML model and an email notification is sent to the doctor, at which point the end time is recorded.

  (c) The difference between start time and end time gives the time it took to execute the program, i.e., the Loading Time (or latency) on the edge (LTE).
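The start/end measurement in the steps above can be reproduced with a simple timer. This is a sketch; the paper does not state which timer the app uses, so time.perf_counter() is our assumption, and the stand-in workload replaces the real ML-plus-notification flow:

```python
import time


def measure_loading_time(run_business_logic):
    """Record start and end times around the full flow (ML prediction
    plus doctor notification) and return the loading time in seconds."""
    start = time.perf_counter()
    run_business_logic()
    end = time.perf_counter()
    return end - start


# Stand-in workload instead of the real EdgeController business logic
lte = measure_loading_time(lambda: time.sleep(0.01))
```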

We next modified Patient A's age to 26 and heart disease to "N" in the "PatientSimilationFile.csv" file, based on which the data are again read from the file by the Automatic edge app, this time being sent to the cloud for processing. We then captured the latency (also referred to as loading time) on the cloud for Patient A (LTC).

We repeated the same experiment across all 66 patients and for each patient calculated the latency percentage difference, using the formula:

$$ {\text{Latency}}\,{\text{Percentage}}\,{\text{Difference}} = \frac{{\left| {{\text{LTC}} - {\text{LTE}}} \right|}}{{\left( {\frac{{{\text{LTC}} + {\text{LTE}}}}{2}} \right)}} \times 100 $$

For each group of experiments, we then calculated the average latency percentage difference.

$$ {\text{Average}}\,{\text{Latency}}\,{\text{Percentage}}\,{\text{Difference}} = \frac{{{\text{Sum}}\,{\text{of}}\,{\text{all}}\,{\text{latency}}\,{\text{percentage}}\,{\text{differences}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{readings}}}} $$
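The latency percentage difference and its average can be computed directly; note that the average is taken over the per-patient percentage differences:

```python
def latency_pct_difference(ltc, lte):
    """Symmetric percentage difference between the loading time on the
    cloud (LTC) and on the edge (LTE), both in seconds."""
    return abs(ltc - lte) / ((ltc + lte) / 2) * 100


def average_latency_pct_difference(pairs):
    """Average of the per-patient latency percentage differences,
    where pairs is a list of (LTC, LTE) readings."""
    diffs = [latency_pct_difference(ltc, lte) for ltc, lte in pairs]
    return sum(diffs) / len(diffs)


# e.g. LTC = 3.0 s and LTE = 1.5 s gives a 66.67% difference
```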

4.1.1 Experiment 1: Weekday tests

Results for experiments conducted during weekday peak working hours (between 10 AM and 10:30 AM BST) are depicted in Table 3 below. In summary, during weekday peak hours the edge server processed patient data on average 56% faster than the cloud.

Table 3 Latency chart for weekday test

Based on the above results, patient data are processed on the edge server 57% faster than on the cloud server, implying that our Automatic edge app can deliver low latency for heart condition monitoring of high risk patients. The deviation in latency percentage difference observed for Patient 9 is due to poor internet bandwidth at that specific time, which affected the latency on the cloud.

4.1.2 Experiment 2: Weekend tests

Results for experiments conducted during the weekend are depicted in Table 4 below. In summary, during the weekend the edge server processed patient data 57% faster than the cloud.

Table 4 Latency chart for weekend test

Based on the above results, patient data are processed on the edge server 57% faster than on the cloud server.

4.1.3 Experiment 3: Weekday peak internet usage time

Results are depicted in Table 5 below for experiments conducted during weekday peak internet usage hours. This experiment shows that the edge server can process patient data 54% faster than the cloud.

Table 5 Latency tests during peak hours

The above results show that patient data retrieved from the edge server is 54% faster than data retrieved from the cloud server.

4.1.4 Experiment 4: Weekday off-peak internet usage time

We finally conducted experiments during off-peak hours on a weekday; the results, depicted in Table 6 below, show that the edge server can process patient data 54% faster than the cloud.

Table 6 Latency chart during off-peak hours

Based on the above results, it is clear that patient data retrieved from the edge server is 54% faster than data retrieved from the cloud server.

4.1.5 Further experiments by other researchers

The application and its installation steps can be made available to the research community by the corresponding author on reasonable request.

After installing the application, other researchers can repeat similar experiments. The application is very simple to use: to perform more experiments, the researcher simply needs to add new patient data to the “PatientSimulatorfile.csv” file and save it at the same location. As soon as the file is saved, the Automatic edge app automatically detects the changes and runs the complete business process for all new patients.
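The paper does not describe how the app detects file changes; a common approach is to poll the file's modification time. The sketch below illustrates this under that assumption, with `process_new_patients` as a hypothetical stand-in for the app's business process.

```python
import csv
import os
import time

CSV_PATH = "PatientSimulatorfile.csv"  # file the app watches (from the paper)


def changed_since(path, last_mtime):
    """Return (changed?, current mtime) by comparing modification times."""
    mtime = os.path.getmtime(path)
    return mtime != last_mtime, mtime


def process_new_patients(path):
    """Hypothetical stand-in for the app's business process: count patient rows."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.DictReader(f))


def watch(path, interval=1.0, max_polls=None):
    """Poll the file and re-run processing whenever it is saved again."""
    last_mtime = None
    polls = 0
    while max_polls is None or polls < max_polls:
        changed, last_mtime = changed_since(path, last_mtime)
        if changed:
            process_new_patients(path)
        time.sleep(interval)
        polls += 1
```

A production system might instead use OS-level file-change notifications, but mtime polling keeps the sketch portable.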

5 Discussion

This paper proposes a novel solution for the integration of edge computing with ML models for patient health monitoring, an important concept for monitoring high-risk patients for critical illnesses such as heart attacks after non-cardiac surgery. Most importantly, we have developed an application that can automatically and in real time determine whether patient data need to be processed and stored at the edge or on the cloud, based on the latency requirements of the use case, and process the data accordingly.

Several papers have discussed processing data on the edge, such as for detecting heart attacks [5] or using the edge for smart healthcare [2, 37]. Others have specifically discussed healthcare use cases on the cloud [3, 4]. Though these interesting papers reflect the substantial research performed using edge computing and cloud computing technologies, the biggest challenge the industry faces today is the workload placement decision. In simple terms, the workload placement decision is about where healthcare data should be placed, i.e., on the edge, on the cloud, or on both. This is a hard decision to make, since placing workloads either on the edge or on the cloud has its own challenges. For example, since healthcare use cases are latency sensitive, not all data can be sent to the cloud for processing and storage [12]. On the other hand, placing all data on the edge is also not an option, as the edge has limited processing and storage capacity. We need a solution that tackles this problem.

In contrast to these studies, we have developed a unique Automatic edge app that detects high-risk individuals (those who have undergone a non-cardiac surgical operation), deploys their workloads automatically on the edge, and then monitors these individuals for heart attacks for an initial duration of 30 days. Since the risk of a heart attack during the first 30 days [21] after a non-cardiac surgery such as coronary angiography is high, especially for these high-risk patients [13], it is essential that they are continuously monitored for heart attack risk during this period. During this period there is zero tolerance for latency delays, and, therefore, these workloads are deployed on the edge, meeting the stringent latency requirements. The workloads of all other patients (i.e., low-risk individuals) are automatically deployed on the cloud and monitored for heart attack risks there.
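The placement rule described above can be sketched as a small decision function. This is a minimal illustration of the rule (high-risk and within 30 days of surgery → edge; otherwise → cloud); the function and field names are our own, not from the app's source.

```python
from datetime import date, timedelta

# High-risk post-surgery monitoring window, per the paper.
MONITORING_WINDOW_DAYS = 30


def placement(high_risk: bool, surgery_date: date, today: date) -> str:
    """Decide where a patient's monitoring workload runs.

    High-risk patients within 30 days of non-cardiac surgery are placed
    on the edge; everyone else is placed on the cloud.
    """
    in_window = timedelta(0) <= (today - surgery_date) <= timedelta(
        days=MONITORING_WINDOW_DAYS
    )
    return "edge" if (high_risk and in_window) else "cloud"
```

For example, a high-risk patient 14 days after surgery is placed on the edge, while the same patient 59 days after surgery reverts to the cloud.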

Heart attack risks are identified using the Heart Attack ML model, which is pretrained on the cloud using the public dataset provided by Kaggle [15]. After training, the model was deployed on both the cloud and the edge.
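The paper does not specify the model's architecture, so the following is a hypothetical minimal stand-in: a logistic-regression classifier trained by gradient descent on synthetic, normalized features. The feature names and data are illustrative only and do not reflect the Kaggle dataset's actual schema.

```python
import math


def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression trainer (stochastic gradient descent),
    standing in for the pretrained Heart Attack ML model."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b


def predict(w, b, x):
    """Classify a feature vector: 1 = at risk, 0 = not at risk."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0


# Synthetic training data: [normalized age, normalized resting BP] (illustrative).
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.8]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

Once trained, the same `(w, b)` parameters can be serialized and deployed to both the cloud and the edge, matching the deployment pattern described above.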

The latency evaluation discussed in the results section of this paper shows that processing data at the edge significantly improves latency compared to the cloud. The performance comparison across the various patients' data demonstrates that our edge scheme outperforms the cloud schemes and scales extremely well to increased offloading demand and varying data sizes. We also demonstrated that our Automatic edge app can achieve around 54% better performance on the edge compared to the cloud.

It is now abundantly clear that hosting computing resources close to the end users, possibly at the access-network edge, is the only viable option for achieving a satisfactory quality of experience and low latency [5], but this must be done based on business need, owing to the limited capacity of edge servers. The Automatic edge app we developed addresses this challenge.

6 Conclusion and future work

This paper has investigated the suitability of edge computing for healthcare use cases with low latency requirements. Specifically, we evaluated the performance of edge computing on patient data as a representative scenario to showcase how edge computing and cloud computing can be used in tandem. This can address not only cloud latency issues but also the edge computing challenges relating to limited processing power. The Automatic edge app proposed in this paper is unique because it automatically ensures that only high-risk patients' data, as deemed important by the doctor, is transferred to the edge servers for a stipulated duration, thereby avoiding the associated data latency issues. No manual intervention is required in the overall process.

An interesting research direction we will undertake in the future is large-scale processing and storage of patient data on the edge. Such a study could consider, for instance, performing machine learning predictions on massive datasets at the edge for critical illnesses. Security is another potential area for future work, and blockchain could address the security and privacy concerns around patient data.