The Federation’s Pages


The challenges of contemporary times put us at risk of a perfect storm in which the waves of demand (the demographic and epidemiological transition) and the waves of supply (technological innovation, differentiation, and the scarcity of professional resources) could overwhelm health systems. Even the systems of the richest countries could be seriously destabilized. These challenges must be taken up by the entire global Public Health community, which needs to open up its conceptual horizons and consider new tools.

Time has brought several changes in the way we communicate, travel, write, and read. Among the innovations that now play a significant role in our lives are wearables, which belong to the broader class of 'Internet of Things (IoT)' devices. In simple terms, IoT devices are sensors that can capture, and broadcast over the internet, an immense quantity of data coming from our bodies.

IoT devices are today a major business and revenue component for many companies. Revenue comes from the sale of the devices as well as from the data, and the data have become the most profitable part of the IoT business: most companies' business models center on analyzing the data and repackaging them into marketing products. This is one of many problems related to the sharing of information. Fundamentally, there is nothing wrong with using the internet to create and market products for interested buyers; however, major problems surface when international entities and governments decide to protect these data with laws that restrict their collection and use. This tension around consent, cookies, and data collection is most acute when it comes to medical data.

In the United States (U.S.), because many healthcare corporations own both hospitals and insurance plans, every piece of information related to patients becomes part of the corporation's data assets, and it is not shared easily with others. To complicate matters, the U.S. health data privacy law known as HIPAA (the Health Insurance Portability and Accountability Act) has instituted rules that make the acquisition, use, and sharing of patients' medical data extremely difficult.

In Europe, the General Data Protection Regulation (GDPR), the privacy and security law governing the exchange and use of data, is among the toughest in the world, no less demanding than the U.S. framework. Several IoT companies worldwide are very advanced in the production of medical applications, especially for small, highly targeted markets. Medical professionals and patients, unfortunately, are at a different level of advancement and innovation. The use of Machine Learning algorithms is becoming more accepted day by day. While one side of the healthcare world is enriched by extraordinary futuristic solutions, the other suffers from the immense limitation of data imposed by privacy laws and company rules on data ownership.

If we take a quick look at the fields of Machine Learning (ML) and Artificial Intelligence (AI), the first thing we discover is that a Machine Learning algorithm needs an enormous quantity of data to perform the specific task we then call Artificial Intelligence. These data need to be labeled, categorized, and normalized to become a training dataset for an algorithm. To predict medical problems, the algorithm must be trained on a large quantity of data under the supervision of doctors, domain experts, and computer scientists who further tune it.
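As a minimal illustration of what "labeled, categorized, and normalized" can mean in practice, the sketch below turns a tiny, entirely synthetic set of records into a training-ready dataset; the field names and the diagnosis label are invented for the example and carry no clinical meaning.

```python
# Minimal sketch: turning raw records into a labeled, normalized training set.
# The field names (age, systolic_bp, glucose) and the labels are hypothetical.
import numpy as np

raw_records = [
    {"age": 54, "systolic_bp": 142, "glucose": 118, "diagnosis": "hypertension"},
    {"age": 31, "systolic_bp": 118, "glucose": 92,  "diagnosis": "healthy"},
    {"age": 67, "systolic_bp": 155, "glucose": 134, "diagnosis": "hypertension"},
]

# 1. Label: map the clinical category to an integer class.
label_map = {"healthy": 0, "hypertension": 1}
y = np.array([label_map[r["diagnosis"]] for r in raw_records])

# 2. Categorize: select the features in a fixed order.
features = ["age", "systolic_bp", "glucose"]
X = np.array([[r[f] for f in features] for r in raw_records], dtype=float)

# 3. Normalize: zero mean and unit variance per feature (z-score).
X = (X - X.mean(axis=0)) / X.std(axis=0)

print(X, y)  # ready to be fed to a learning algorithm
```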

During model training, good practice is to use about 80% of the data to train the algorithm and the remaining 20% to test it. Accuracy, precision, and recall are three metrics commonly used in Machine Learning. Accuracy is the percentage of all predictions that are correct; precision is the percentage of cases predicted as positive that are truly positive; and recall is the percentage of truly positive cases that the algorithm manages to identify.
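A minimal sketch of the 80/20 split and these three metrics, using synthetic data and an off-the-shelf scikit-learn classifier (so the numbers have no clinical meaning), might look like this:

```python
# Minimal sketch: 80/20 train/test split and the three standard metrics.
# Synthetic data only; the model and numbers are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 80% of the data for training, 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Accuracy: share of all predictions that are correct.
# Precision: share of predicted positives that are truly positive.
# Recall: share of true positives that the model actually finds.
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
```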

We need a tremendous quantity of data to satisfy these requirements. When Artificial Intelligence is applied to the human body, these data must be not only large in quantity but also collected systematically over time. This is exactly what an IoT device such as a smartwatch does; at the same time, privacy rules prohibit unauthorized use of these data.

Machine Learning, and in particular 'deep' learning, a family of machine learning techniques in which multiple layers of processing are used to extract progressively higher-level features from the data, is one of the ways to achieve artificial intelligence. Machine learning builds on artificial neural network theory, conceptualized in the mid-twentieth century. A neural network is a computing system that takes its name from the human brain: simple components analogous to neurons are grouped together and communicate, imitating a human learning process. Modern technology enables us to use deep learning to train such algorithms.
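To make the layered idea concrete, here is a toy, untrained forward pass through three layers; the layer sizes and random weights are arbitrary and purely illustrative, and a real model would learn them from labeled data.

```python
# Minimal sketch of the layered idea behind deep learning: each layer
# transforms the previous layer's output into higher-level features.
# Weights are random here, so this network is untrained and illustrative only.
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))          # one input example with 16 raw features

W1 = rng.normal(size=(16, 8)); b1 = np.zeros(8)   # layer 1: 16 -> 8 features
W2 = rng.normal(size=(8, 4));  b2 = np.zeros(4)   # layer 2: 8 -> 4 features
W3 = rng.normal(size=(4, 1));  b3 = np.zeros(1)   # output layer: 4 -> 1 score

h1 = relu(x @ W1 + b1)    # lower-level features
h2 = relu(h1 @ W2 + b2)   # higher-level features
score = h2 @ W3 + b3      # final prediction score

# Training would adjust W1..W3 (for example by backpropagation) so that the
# score matches labeled examples; only the layered forward pass is shown.
print(score)
```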

There are several ways to train an algorithm. One of them uses Digital Twins. A digital twin is a virtual model designed to accurately reflect a physical object; it represents how its physical counterpart may behave in the real world. In medicine we could focus on possible cell interactions, possible organ rejection, or an allergic reaction to a specific drug. A digital twin model requires a detailed study to fully understand a structure, a process, or a system in all its complexity, as well as a computational platform of extremely high performance.
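As a very rough illustration of the idea, the sketch below simulates a toy "virtual patient" with a one-compartment drug-elimination model; every parameter (dose, volume of distribution, half-life, threshold) is invented for the example, and a real digital twin would be vastly more detailed.

```python
# Toy "digital twin" sketch: a one-compartment pharmacokinetic model that
# simulates how a drug concentration might evolve in a virtual patient.
# All parameters (dose, volume, half-life, threshold) are invented.
import numpy as np

dose_mg = 500.0          # hypothetical oral dose
volume_l = 42.0          # hypothetical volume of distribution
half_life_h = 6.0        # hypothetical elimination half-life
k_elim = np.log(2) / half_life_h

t = np.linspace(0, 24, 25)                          # hours after the dose
concentration = (dose_mg / volume_l) * np.exp(-k_elim * t)

# The virtual model can be queried before exposing the real patient,
# e.g. to check whether the concentration stays below a toxicity threshold.
toxic_threshold_mg_per_l = 15.0                     # hypothetical limit
print(all(c < toxic_threshold_mg_per_l for c in concentration))
```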

Healthcare today faces a paradox: a desire to protect data, alongside a vast number of healthcare organizations that lack computational platforms strong enough to consistently process the data and produce substantial AI results. Private corporations are providing cloud-based platforms and high-performance computers to run these algorithms for healthcare. In countries with socialized medicine this approach is more complicated, because the sector is handled by the government. With the computational power available today, AI is nonetheless the most important and productive way to proceed.

Where expertise is very limited, deep learning can make the difference. For example, in countries with few radiologists, an automated system that works largely unsupervised (supervised only once, when the data are introduced) can help. Many noteworthy and interesting projects use deep learning, for example projects that detect the presence or absence of diabetic retinopathy (DR). Approximately 455 million people with diabetes need to be screened at least once per year. Analysis starts with a photograph of the retina. The doctor then grades the disease on a scale ranging from "No DR" to "Mild DR", "Moderate DR", "Severe DR", and "Proliferative DR". Doctors grade the disease by looking for small aneurysms (microaneurysms) that impair blood circulation inside the blood vessels of the eye.
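For illustration only, a five-class image classifier of the kind used for DR grading could be sketched as follows; the architecture, image size, and layer choices here are assumptions for the example, not the published screening model.

```python
# Minimal sketch of a 5-class image classifier of the kind used for diabetic
# retinopathy grading (No DR, Mild, Moderate, Severe, Proliferative).
# The architecture and the input size are illustrative assumptions.
import torch
import torch.nn as nn

NUM_GRADES = 5  # No DR, Mild, Moderate, Severe, Proliferative

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_GRADES),          # one score per DR grade
)

# One fake "retinal photograph": batch of 1, 3 color channels, 224x224 pixels.
fake_image = torch.randn(1, 3, 224, 224)
scores = model(fake_image)
predicted_grade = scores.argmax(dim=1)

# Training would fit the weights on a large set of fundus photographs graded
# by ophthalmologists; only the untrained model skeleton is shown here.
print(scores.shape, predicted_grade)
```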

In some countries there is a shortage of eye doctors. This is true in India, where the shortfall is estimated at 127,000 eye doctors and where 45% of the people who undergo their first exam for diabetic retinopathy are already at a "Severe DR" stage and suffering vision loss. In this specific project, in 2016, researchers had to collect a great deal of data to train the algorithm and reach a very high accuracy in prediction mode. What is the accuracy of the predictions? It is never 100%, and for certain diseases it may be only around 50%. The way we train models to improve their performance in terms of accuracy and precision is key. In a very narrow and focused area like diabetic retinopathy, the parameters the scientists had to improve were on several different scales. Scientists deployed thermography and direct medical diagnosis to compare images and train the algorithm.

IoT devices and wearables accumulate the data needed to build these algorithms. It would be extremely beneficial also to create a catalog of usable and non-usable data. It is important to understand that, to train an algorithm, we need data from various tests and devices and must then validate or discard them. Wearables matter greatly here, because isolated events can lead to a wrong diagnosis. A patient with a momentary spike in glucose levels generates data that are not useful for training an algorithm meant to reflect a stable profile. Such data could better come, for example, from a continuous glucose monitoring (CGM) module that provides a stream of readings over time. This produces a better way to train the algorithm and, of course, to increase its quality.
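A minimal sketch of this curation step, on synthetic CGM-style data, might flag an isolated spike so that it does not distort the training signal; the threshold and the values are invented for the example.

```python
# Minimal sketch of why a continuous glucose stream (CGM) is more useful for
# training than an isolated reading: with a time series we can flag transient
# spikes instead of letting a single anomalous value define the patient.
# All values and the 3-sigma threshold are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
cgm = 100 + rng.normal(0, 5, size=288)    # ~24 h of readings every 5 minutes
cgm[150] = 240                            # one isolated post-meal-style spike

mean, std = cgm.mean(), cgm.std()
is_isolated_spike = np.abs(cgm - mean) > 3 * std   # simple outlier flag

# Curated training signal: keep the continuous profile, mark the spike so the
# algorithm is not trained as if the spike were the patient's stable baseline.
curated = cgm[~is_isolated_spike]
print(f"flagged {is_isolated_spike.sum()} of {cgm.size} readings as isolated spikes")
```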

This short overview of what can be done with Machine Learning demonstrates the real power of new data analytics. The more data, the better: there is an immense need for well-collected data that can be used to train algorithms properly. Computational power is another important problem to solve, because the more data we collect, the more computation we need. There are a few supercomputers in the world, but they cannot be devoted constantly to a single team working on one task. This approach is also inefficient for worldwide health AI given serious concerns about cyber security and the protection of data against leaks and uncontrolled exchange. The problem is real and ranges from stealing data to modifying or deleting it.

Several universities and private corporations are moving fast to create quantum computers, whose computational power on certain problems is incomparable to that of any supercomputer. Another problem our advancing world will soon face is how to protect the exchange and storage of information once quantum computers are in use. The way we encrypt data today will have to be replaced by new, quantum-resistant forms of encryption.

One of the frontier advances we hope to see is a new generation of nano-technology wearable devices that can monitor our bodies 24/7. This would generate a constant and well-organized flow of data that can be used to train machine-learning algorithms at the level of the individual. A person needs constant monitoring and predictive analysis of how his or her own body is doing. Predictions would first be analyzed and sent to a physician for review and then to the patient. This would open a cutting-edge future of benefit to everyone, patients and the medical industry alike. AI could then benefit healthcare as a whole rather than remaining isolated in single projects that all suffer from a lack of updated data. The same training methods will keep improving the quality of the algorithms if new real-time data can feed constant retraining.

At that point, each of us could act as a neuron communicating with the others through cyber-quantum synapses elaborated by a centralized cluster of quantum computers. This new network, representing millions of people from several nations, would be monitored by algorithms trained in real time, possibly also through regressions over time. It would constitute an immense artificial neural network, opening a new era that we like to call the Health Neural Network. Public health must be at the center of this development to prevent the promise of these new technologies from becoming an engine of inequalities and injustices.