Classroom lectures and practical sessions at many medical schools have rapidly migrated towards online teaching in response to the COVID-19 pandemic. Despite the many benefits of distance learning, educators began to realize that it comes with certain limitations [1]. Laboratory sessions in visual-centric subjects such as histology and pathology are difficult to replace with common distance learning platforms, because interpreting images is a skill that students master only after repeated exposure to a considerable number of micrographs, often with guidance from an instructor. Alongside this modern approach to teaching, institutions should take advantage of digital technologies such as cloud computing and artificial intelligence (AI) to create an enhanced educational experience.

Histology courses usually introduce students to the basic tissue types, providing them with information about the structure and function of complex organs. To help students correctly identify the basic tissues, I developed the Histology Classifier app (https://play.google.com/store/apps/details?id=com.app.bima). This Android application uses deep convolutional neural networks (CNNs) to quickly and accurately identify histological images captured with a smartphone camera. In addition, a website (https://bimascope.wordpress.com/2020/07/15/remote-lab/) provides sample micrographic images of each topic that students can display on any screen to test the application, thus recreating a laboratory session.

For image recognition, MobileNetV2 [2], a memory-efficient CNN architecture designed for mobile applications, was chosen to train five different models covering subclasses of the basic tissues: epithelium, glands, connective tissue, muscle, and nerve tissue. Each CNN model was trained on histologic images from the Virtual Microscopy Learning Resources of the University of Michigan and the University of British Columbia [3, 4]. Training, validation, and testing of each model were performed with the TensorFlow framework (v1.14.0). After training, the test accuracy of each model was recorded; comparing these values gave a better understanding of each model's performance on its corresponding dataset (see references in Table 1 for details about the evaluation protocol). The user interface wraps the TensorFlow Lite API in an easy-to-use design (Fig. 1).
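As a rough illustration of this kind of training setup, the sketch below shows how one of the five tissue classifiers could be built by transfer learning on MobileNetV2. It is written against the current TensorFlow 2.x Keras API rather than the exact v1.14.0 toolchain reported above, and the folder name, image size, class count, and hyperparameters are assumptions for illustration only, not the published configuration.

```python
# Minimal transfer-learning sketch for one tissue classifier (illustrative only).
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution for MobileNetV2
NUM_CLASSES = 4         # e.g. subclasses of epithelial tissue (hypothetical)

# Pretrained MobileNetV2 backbone without its ImageNet classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Histology images organised in one sub-folder per class (hypothetical path)
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
    validation_split=0.2)
train = datagen.flow_from_directory("epithelium/", target_size=IMG_SIZE,
                                    subset="training")
val = datagen.flow_from_directory("epithelium/", target_size=IMG_SIZE,
                                  subset="validation")

model.fit(train, validation_data=val, epochs=10)
model.save("epithelium_model.h5")  # later converted to TensorFlow Lite for the app
```

Freezing the pretrained backbone and training only a small classification head is a common choice when, as here, the per-class datasets are relatively small.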

Table 1 Information regarding image datasets, training steps, and overall results
Fig. 1 Screen captures from the model selection, main activity, and details window. The “Show details” prompt displays quick references including the location, description, and function of the observed tissue

The app was built by BIMA, a biomedical informatics group in Venezuela, and trialed with students enrolled in the University of the Andes (ULA) School of Medicine Histology course, with the content adapted to their recommended bibliography. When the campus was closed due to the pandemic, illustrated step-by-step instructions were sent to the students, encouraging them to use the app to validate their own analysis and identification results. Within the app, users start by preselecting one of the five models from a drop-down list, which enables the device’s camera for image classification. A thresholding procedure was also integrated: if the match probability of a specific tissue exceeds 80%, the app displays a text description corresponding to the observed image (Fig. 1).
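The confidence gate described above can be sketched as follows, using the TensorFlow Lite Python interpreter as a stand-in for the on-device Android code. The model file name, label list, descriptions, and preprocessing are hypothetical and serve only to illustrate the thresholding logic.

```python
# Illustrative sketch of classifying a camera frame and applying the >80% gate.
import numpy as np
import tensorflow as tf

LABELS = ["simple squamous", "simple cuboidal",
          "simple columnar", "stratified squamous"]        # hypothetical classes
DESCRIPTIONS = {"simple squamous":
                "Single flat cell layer; lines blood vessels and body cavities."}

interpreter = tf.lite.Interpreter(model_path="epithelium_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame):
    """frame: HxWx3 uint8 camera frame already resized to the model's input."""
    x = (frame.astype(np.float32) / 127.5) - 1.0   # MobileNetV2-style scaling
    interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    best = int(np.argmax(probs))
    if probs[best] > 0.8:                          # confidence threshold
        label = LABELS[best]
        return label, DESCRIPTIONS.get(label, "")
    return None, None                              # below threshold: no details shown
```

In the app itself this logic runs per camera frame through the TensorFlow Lite Android API, but the control flow is the same: only predictions above the threshold trigger the “Show details” text.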

This digital tool was well received by both students and faculty. In particular, students praised the app for serving as an unconventional self-review tool. One student mentioned that she found it an entertaining way to assess her competency in recognizing cells and tissues. Feedback from course instructors was positive as well; they asked about the possibility of adding interactive quizzes and expanding the app’s availability to iOS devices. The next steps under development include a randomized controlled trial to analyze the app’s effectiveness in aiding students’ learning. Low-quality or ambiguous images can still be mislabeled by the Histology Classifier app, so upcoming versions will aim to improve the models’ recognition ability while also introducing features requested by users.

I believe that this learning method offers students a practical approach to using AI-assisted tools in research and medical diagnostics, while also generating valuable data through user insights and input. Smartphone-based histopathological image diagnostics has already shown promising results [5]; therefore, implementations such as this one could become a future clinical option for fast and reliable analysis of large numbers of samples.