Introduction

Mosquito-borne diseases remain major public health concerns. Across the world today, surveillance of mosquito vectors is still a largely manual process. Steps include trap placement, collection of specimens, and identification of each specimen one by one under a microscope to determine the genus and species. Unfortunately, this process is cognitively demanding and takes hours to complete, because mosquitoes that fall into traps include not only vectors but also many species that are not vectors. Recently, AI approaches have been designed to automate classification of mosquitoes. Works like1,2,3,4 design machine learning models based on hand-crafted features from image data generated from either smartphones or digital cameras. Two recent 2020 papers design deep neural network techniques (that do not need hand-crafted features) to classify mosquitoes from image data generated via smartphones5,6. Other works process sounds of mosquito flight for classification, based on the notion that wing-beat frequencies are unique across mosquito species7,8,9,10.

In this paper, we demonstrate novel applications for mosquito images when processed using AI techniques. Since the most descriptive anatomical components of mosquitoes are the thorax, abdomen, wings and legs, we present a technique that extracts just the pixels comprising these specific anatomical components from any mosquito image. Our technique is based on Mask Region-based Convolutional Neural Network11. Here, we first extract feature maps from our training dataset of 1500 smartphone images of 200 mosquito specimens spread across nine species trapped in Florida. Our network to extract feature maps is ResNet-101 with a Feature Pyramid Network12 (an architecture that can handle images at multiple scales, and one well suited for our problem). Subsequently, we detect and localize only the anatomical components (denoted as foreground) in the images in the form of rectangular anchors. Once the foreground is detected, the next step is to segment the foreground pixels by adding a branch to mask (i.e., extract pixels of) each component present in the foreground. This is done in parallel with two other branches that classify the extracted rectangular anchors and tighten them to improve accuracy. Evaluation of our technique reveals favorable results. We see that the thorax, wings, abdomen and legs are extracted with high Precision (i.e., very low False Positives). For legs, though, False Negatives are high, since the number of background pixels overwhelms the number of leg pixels in the image. Nevertheless, enough descriptive features within the leg of a mosquito are indeed extracted, since mosquito legs are long and the descriptive features repeat across the leg.

We believe that extracting images of mosquito anatomy has impact towards (a) faster classification of mosquitoes in the wild; (b) new digital, larger-scale and low-cost training programs for taxonomists; and (c) new and engaging tools to stimulate broader participation in citizen-science efforts, among others. Also, to evaluate generality, we tested our architecture, trained only on mosquito images, with images of bumblebees (which are important pollinators). We see excellent accuracy in extracting the wings, and to a certain extent the thorax, hence demonstrating the generality of our technique for many classes of insects.

Results

We trained a Mask Region-based Convolutional Neural Network (Mask R-CNN) to automatically detect and separately extract anatomical components of mosquitoes (thorax, wings, abdomen and legs) from images. For this study, we utilized 23 specimens of Aedes aegypti and Aedes infirmatus, and 22 specimens of Aedes taeniorhynchus, Anopheles crucians, Anopheles quadrimaculatus, Anopheles stephensi, Culex coronator, Culex nigripalpus and Culex salinarius. After imaging the specimens via multiple smartphones and augmenting the training images (see Methods), our dataset comprised 1600 mosquito images. These were split into 1500 images for training the neural network and 100 images for validation. Together, this dataset yielded 1600 images of thorax, 1600 images of abdomen, 3109 images of wings and 6223 images of legs. We trained our architecture, illustrated in Fig. 1, on an Nvidia graphics processing unit (GPU) cluster of four GeForce GTX TITAN X cards, each with 3,583 cores and 12 GB of memory. It took 48 h to train and validate the architecture. For testing, we trapped and imaged (via smartphones) another set of 27 mosquitoes, i.e., three per species. The testing dataset consisted of 27 images each of the thorax and abdomen, 48 images of wings and 105 images of legs.

Figure 1

The workflow of our architecture based on Mask R-CNN.

First, we visually present results of our technique to extract anatomical components of a mosquito in Fig. 2, for one sample image among the nine species in our testing dataset. This figure is representative of all other images tested. We see that the anatomical components are indeed extracted clearly from the image data. Next, we quantify performance for our entire dataset using four standard metrics: Precision, Recall, Intersection over Union (IoU) and Mean Average Precision (mAP). Precision is the fraction of relevant instances (here, pixels) among those instances (again, pixels) that are retrieved. Recall is the fraction of the relevant instances that are actually retrieved. IoU is the ratio of the area of intersection to the area of union between the predicted pixels and the ground truth. A higher IoU means more overlap between predictions and the ground truth, and hence better classification. To define our final metric, Mean Average Precision (mAP), we first define Average Precision (AP), which is the average of the Precision values over the range of Recall (0 to 100% for our problem) at a preset IoU threshold for a particular class among our four classes (wings, thorax, legs and abdomen). This metric balances both Precision and Recall at a particular IoU threshold for one class. Finally, Mean Average Precision (mAP) is the average of the AP values over all four classes.
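To make these metrics concrete, the following is a minimal sketch (not the authors' evaluation code) of how pixel-level Precision, Recall and IoU could be computed for one predicted mask against its ground truth; the function name mask_metrics and the use of NumPy boolean arrays are our own illustrative assumptions.

```python
# Hedged illustration: pixel-level Precision, Recall and IoU for one predicted mask.
import numpy as np

def mask_metrics(pred: np.ndarray, truth: np.ndarray):
    """pred and truth are boolean arrays of the same shape (True = anatomy pixel)."""
    tp = np.logical_and(pred, truth).sum()    # anatomy pixels correctly extracted
    fp = np.logical_and(pred, ~truth).sum()   # background pixels labeled as anatomy
    fn = np.logical_and(~pred, truth).sum()   # anatomy pixels missed by the model
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, iou

# At a given IoU threshold (e.g., 0.5), a prediction counts as correct if its IoU
# with the ground truth meets the threshold; AP averages Precision over the Recall
# range at that threshold, and mAP averages AP over the four classes.
```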

Figure 2

Results of extracting anatomical components for one sample image among the nine mosquito species in our dataset.

The Precision and Recall values for the validation and testing datasets are presented in Tables 1 and 2, respectively, for various IoU thresholds. We see that the performance metrics on the validation dataset during training match the metrics during testing (i.e., on unseen images) post training across all IoUs. This convinces us that our architecture is robust and not overfitted. Precision for all classes is high, which means that false positives are low. Recall is also high for the thorax, abdomen and wings, indicating low false negatives for these classes. However, Recall for the legs class is relatively poor. It turns out that a non-trivial portion of the leg pixels are classified as background in our architecture. While this may seem a bit discouraging, we again direct readers to Fig. 2, where we can see that a very good portion of the legs is still identified and extracted correctly by our architecture (due to the high Precision). As such, the goal of gleaning the morphological markers from all anatomical components is still enabled. Finally, the Mean Average Precision is presented in Table 3 for all classes. The lower numbers in Table 3 are due to poorer performance in classifying legs, as compared to the thorax, abdomen and wings.

Table 1 Precision and Recall for different IoU thresholds on validation set.
Table 2 Precision and Recall for different IoU thresholds on testing set.
Table 3 mAP scores for masking.

Results from a small experiment with bumblebee images We subsequently verified how our AI architecture, trained only with mosquito images, performs when tested with images of bumblebees. Bumblebees (genus Bombus) are important pollinators, and detecting them in nature is vital. Figure 3 presents our results for one representative image among three species of bumblebees, although these results are representative of the more than 100 bumblebee images we tested. Our image source for bumblebees was the Smithsonian National Museum of Natural History in Washington, D.C.; the images can be found here13. As we see in Fig. 3, our technique is robust in detecting and extracting wings. While the thorax is mostly extracted correctly, the ability to extract the abdomen and legs is relatively poor. With these results, we are confident that our architecture in its present form could be used to extract the wings of many insects. With appropriate ground-truth data, only minimal tweaks to our architecture would be needed to ensure robust extraction of all anatomical components for a wide range of insects.

Figure 3

Results of extracting anatomical components for bumblebees.31,32,33

Discussion

We now discuss the significance of the contributions in this paper.

(a) Faster classification of trapped mosquitoes Across the world, where mosquito-borne diseases are problematic, it is standard practice to lay traps, return the next day to pick up specimens, freeze them and bring them to a facility, where expert taxonomists identify each specimen one by one under a microscope to classify the genus and species. This process takes hours each day and is cognitively demanding. During rainy seasons and outbreaks, hundreds of mosquitoes get trapped, and it may take an entire day to process a batch from one trap alone. Based on the technologies we design in this paper, we expect that mobile cameras can assist in taking high-quality pictures of trapped mosquito specimens, and that the extracted anatomies can be used for classification by experts looking at a digital monitor rather than peering through a microscope. This will result in lower cognitive stress for taxonomists and also speed up surveillance efforts. For interested readers, Table 4 presents details on the morphological markers that taxonomists look for to identify the mosquitoes used in our study14.

Table 4 Anatomical components and markers aiding mosquito classification.26,27,28,29,30

(b) AI and cloud support for training next-generation taxonomists The training of taxonomists today relies on very few institutes across the world, which store a few frozen samples of local and non-local mosquitoes. Trainees interested in these programs are not only professional taxonomists, but also hobbyists. The costs of storing frozen mosquitoes are not trivial (especially in low-income countries), which severely limits entry into these programs and also makes them expensive to enroll in. With technologies like the ones we propose in this paper, digital support for trainees is enabled. Benefits include the potential for remote education, reduced operational costs for institutes, reduced costs of enrollment, and opportunities to enroll more trainees. These benefits, when enabled in practice, will have positive impact on taxonomy, entomology, public health and more.

(c) Digital preservation of insect anatomies under extinction threats Recently, there have been concerning reports that insects are disappearing at rapid rates. We believe that digital preservation of their morphologies could itself aid preservation, as more and more citizen-scientists explore nature and share data to identify species under immediate threat. Preservation of insect images may also help educate future scientists across a diverse spectrum.

Conclusions

In this paper, we design a deep neural network framework to extract anatomical components (thorax, wings, abdomen and legs) from mosquito images. Our technique is based on the notion of Mask R-CNN, wherein we learn feature maps from images, emplace anchors around foreground components, and then classify and segment the pixels corresponding to the anatomical components within the anchors. Our results are favorable when interpreted in the context of being able to glean descriptive morphological markers for classifying mosquitoes. We believe that our work has broader impact on public health, entomology, taxonomy education, and newer incentives to engage citizens in participatory sensing.

Methods

Generation of Image Dataset and Preprocessing

In Summer 2019, we partnered with the Hillsborough County mosquito control district in Florida to lay outdoor mosquito traps over multiple days. Each morning after laying traps, we collected all captured mosquitoes, froze them in a portable container and took them to the county lab, where taxonomists identified them for us. For this study, we utilized 23 specimens of Aedes aegypti and Aedes infirmatus, and 22 specimens of Aedes taeniorhynchus, Anopheles crucians, Anopheles quadrimaculatus, Anopheles stephensi, Culex coronator, Culex nigripalpus and Culex salinarius. We point out that specimens of eight species were trapped in the wild; the An. stephensi specimens alone were lab-raised, from a colony whose ancestors were originally trapped in India.

Each specimen was then placed on a plain flat surface and imaged using one smartphone (among an iPhone 8, iPhone 8 Plus, Samsung Galaxy S8 and Samsung Galaxy S10) under normal indoor light conditions. To take images, the smartphone was attached to a movable platform 4 to 5 inches above the mosquito specimen, and three photos were taken at different angles: one directly above the specimen and two at \(45^{\circ }\) angles on opposite sides. As a result of these procedures, we generated a total of 600 images. Then, 500 of these images were preprocessed to generate the training dataset, and the remaining 100 images were set aside for validation. For preprocessing, the images were scaled down to \(1024 \times 1024\) pixels for faster training (which did not lower accuracy). The images were then augmented by adding Gaussian blur and randomly flipping them from left to right. These augmentation methods are standard in image processing and better account for variances during run-time execution. After this procedure, our training dataset increased to 1500 images.
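As an illustration of this kind of preprocessing (a hedged sketch, not the authors' pipeline; the torchvision library, kernel size and sigma range below are our own assumptions), a comparable resize/flip/blur chain could look as follows. Note that for instance segmentation, any flip must also be applied to the corresponding ground-truth polygons or masks.

```python
# Illustrative preprocessing/augmentation sketch using torchvision (assumed library).
# The exact blur parameters used in the paper are not specified; values are examples.
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((1024, 1024)),                          # scale down for faster training
    transforms.RandomHorizontalFlip(p=0.5),                   # random left-right flip
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))  # mild Gaussian blur
])

# augmented_image = augment(pil_image)  # pil_image: a PIL.Image loaded from the dataset
```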

Note that all mosquitoes used in this study are vectors. Among these, Aedes aegypti is particularly dangerous, since it spreads Zika fever, dengue, chikungunya and yellow fever. This mosquito is also now globally distributed.

Our Deep Neural Network Framework based on Mask R-CNN

To address our goal of extracting anatomical components from a mosquito image, a straightforward approach is to try a mixture of Gaussians model to remove the background from the image1,15. But this only removes the background, without extracting the anatomical components in the foreground separately. There are other recent approaches in this realm as well. One technique is U-Net16, which proposes semantic segmentation based on deep neural networks. However, this technique does not lend itself to instance segmentation (i.e., segmenting and labeling pixels across multiple classes). Multi-task Network Cascade17 (MNC) is an instance segmentation technique, but it is prone to information loss and is not suitable for images as complex as mosquitoes with multiple anatomical components. Fully Convolutional Instance-aware Semantic Segmentation18 (FCIS) is another instance segmentation technique, but it is prone to systematic errors on overlapping instances and creates spurious edges, which are not desirable. DeepMask19, developed by Facebook, extracts masks (i.e., pixels) and then uses the Fast R-CNN20 technique to classify the pixels within each mask. This technique, though, is slow, as it does not perform segmentation and classification in parallel. Furthermore, it uses selective search to find regions of interest, which further adds to delays in training and inference.

For our problem, we leverage the Mask R-CNN11 neural network architecture to extract masks (i.e., pixels) comprising the objects of interest within an image. It eliminates selective search and instead uses a Region Proposal Network (RPN)21 to learn the correct regions of interest, an approach well suited for quicker training and inference. Apart from that, it uses a superior feature-map alignment technique, which helps prevent information loss. The basic architecture is shown in Fig. 1. Adapting it for our problem requires the series of steps presented below; a code sketch of a comparable model setup follows the list of steps.

  • Annotation for Ground-truth First, we manually annotate our training and validation images using the VGG Image Annotator (VIA) tool22. To do so, we manually (and carefully) emplace bounding polygons around each anatomical component in our training and validation images. The pixels within the polygons and the associated labels (i.e., thorax, abdomen, wing or leg) serve as ground truth. One sample annotated image is shown in Fig. 4, and a sketch of converting such annotations into masks follows the figure.

  • Generate Feature Maps using CNN Then, we learn semantically rich features from the training image dataset to recognize the complex anatomical components of the mosquito. To do so, our neural network architecture combines the popular ResNet-101 architecture with Feature Pyramid Networks (FPN)12. Very briefly, ResNet-10123 is a CNN with residual connections, specifically designed to avoid vanishing gradients at later layers during training. It is relatively simple, with 345 layers. The addition of a feature pyramid network to ResNet was attempted in another study, where the motivation was to leverage the naturally pyramidal shape of CNNs and to create a feature pyramid that combines low-resolution, semantically strong features with high-resolution, semantically weak features using a top-down pathway and lateral connections12. The resulting architecture is well suited to learning objects at multiple scales from single-scale input images. Ensuring scale-invariant learning is specifically important for our problem, since mosquito images can be generated at different scales during run-time, due to diversity in camera hardware and human-induced variations.

  • Emplacing anchors on anatomical components in the image In this step, we leverage the notion of a Region Proposal Network (RPN)21 and the results from the previous two steps to design a simpler CNN that learns feature maps corresponding to ground-truthed anatomical components in the training images. The end goal is to emplace anchors (rectangular boxes) that enclose the detected anatomical components of interest in the image.

  • Classification and pixel-level extraction Finally, we align the feature maps of the anchors (i.e., regions of interest) learned in the above step into fixed-size feature maps, which serve as input to three branches that: (a) label each anchor with its anatomical component; (b) extract only the pixels within the anchor that represent an anatomical component; and (c) tighten the anchors for improved accuracy. All three steps are done in parallel.
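To make the overall setup concrete, below is a hedged sketch of a comparable Mask R-CNN configuration using a recent version of torchvision (an assumption on our part; this is not the authors' implementation). The stock ResNet-50 + FPN model stands in for the paper's ResNet-101 + FPN backbone, and the heads are replaced for five classes (background plus the four anatomical components).

```python
# Hedged sketch of a comparable Mask R-CNN setup (ResNet-50 + FPN as a stand-in).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 5  # background + thorax, abdomen, wing, leg

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Branches (a) and (c): box classification and box regression for our class count.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Branch (b): mask head that extracts the pixels of each anatomical component.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, NUM_CLASSES)

# At inference time, model([image_tensor]) returns boxes, labels, scores and masks.
```

The three heads above correspond to the three parallel branches described in the last step of the list.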

Figure 4

Manual annotation of each anatomy (thorax, abdomen, wings, and legs) using VGG Image Annotator (VIA) tool.
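Relating to the annotation step and Fig. 4, the following hedged sketch shows one way VIA polygon annotations could be rasterized into per-instance binary masks for training. The JSON keys follow the common VIA 2.x export layout, and the label key ("name" under region_attributes) is a project-specific assumption; neither is taken from the paper.

```python
# Hedged sketch: converting VIA polygon annotations into per-instance binary masks.
import json
import numpy as np
from PIL import Image, ImageDraw

def via_regions_to_masks(via_json_path, image_key, height, width):
    """Return a list of (label, mask) pairs for one annotated image."""
    with open(via_json_path) as f:
        annotations = json.load(f)
    masks = []
    for region in annotations[image_key]["regions"]:
        shape = region["shape_attributes"]           # polygon vertices
        label = region["region_attributes"]["name"]  # e.g., "thorax", "leg" (assumed key)
        polygon = list(zip(shape["all_points_x"], shape["all_points_y"]))
        canvas = Image.new("1", (width, height), 0)
        ImageDraw.Draw(canvas).polygon(polygon, outline=1, fill=1)
        masks.append((label, np.array(canvas, dtype=bool)))
    return masks
```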

Loss functions

For our problem, recall that there are three specific sub-problems: labeling the anchors as thorax, abdomen, wings or legs; masking the corresponding pixels within each anchor; and regressing the anchors to tighten them. We now elaborate on the loss functions used for these three sub-problems. We do so because loss functions are a critical component during training and validation of deep neural networks, to improve learning accuracy and avoid overfitting.

Labeling (or classification) loss For classifying the anchors, we utilized the Categorical Cross Entropy loss function, and it worked well. For a single anchor j, the loss is given by,

$$\begin{aligned} CCE_j=-\log (p), \end{aligned}$$
(1)

where p is the model-estimated probability for the ground-truth class of the anchor.
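As a brief numeric illustration (ours, not from the paper): an anchor whose ground-truth class is predicted confidently incurs little loss, while a confident mistake is penalized heavily,

$$\begin{aligned} -\log (0.9)\approx 0.105, \qquad -\log (0.1)\approx 2.303. \end{aligned}$$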

Masking loss Masking is the most challenging sub-problem, considering the complexity of learning to detect only the pixels comprising anatomical components within an anchor. Initially, we experimented with the simple Binary Cross Entropy loss function. With this loss function, we noticed good accuracy for pixels corresponding to the thorax, wings and abdomen, but many pixels corresponding to legs were misclassified as background. This is because of the class imbalance highlighted in Fig. 5, where we see a significantly larger number of background pixels than foreground pixels for anchors (colored blue) emplaced around legs. This imbalance leads to poor learning for legs, because the Binary Cross Entropy loss function is biased towards the (much more numerous, and easier to classify) background pixels.

Figure 5

After emplacement of anchors, we see significantly more background pixels than foreground pixels for anchors encompassing legs.

To fix this shortcoming, we investigated a more recent loss function called focal loss24, which lowers the effect of well-classified samples on the loss and places more emphasis on the harder samples. This loss function hence prevents the more commonly occurring background pixels from overwhelming the less common foreground pixels during learning, thus overcoming the class imbalance problem. The focal loss for a pixel i is represented as,

$$\begin{aligned} FL(i)=-(1-p)^\gamma \log (p), \end{aligned}$$
(2)

where p is the model-estimated probability for the ground-truth class, and \(\gamma\) is a tunable parameter, which was set to 2 in our model. With these definitions, it is easy to see that when a pixel is misclassified and \(p \rightarrow 0\), the modulating factor \((1-p)^\gamma\) tends to 1 and the loss \(-\log (p)\) is not affected. However, when a pixel is classified correctly and \(p \rightarrow 1\), the loss is down-weighted. In this manner, priority during training is placed on the hard negative classifications, yielding superior classification performance on unbalanced datasets. Utilizing the focal loss gave us superior classification results for all anatomical components.
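A small numeric sketch (ours, not the authors' code) illustrates this down-weighting with \(\gamma = 2\); the probabilities 0.95 and 0.10 are arbitrary examples of an easy, well-classified background pixel and a hard, misclassified leg pixel.

```python
# Illustrative comparison of plain cross entropy and focal loss (gamma = 2).
import math

def cross_entropy(p):          # p = model probability for the ground-truth class
    return -math.log(p)

def focal_loss(p, gamma=2.0):  # (1 - p)^gamma down-weights easy, well-classified pixels
    return -((1.0 - p) ** gamma) * math.log(p)

easy, hard = 0.95, 0.10
print(cross_entropy(easy), focal_loss(easy))  # ~0.051 vs ~0.00013: easy pixel nearly ignored
print(cross_entropy(hard), focal_loss(hard))  # ~2.303 vs ~1.865: hard pixel keeps most weight
```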

Regressor loss To tighten the anchors and hence improve masking accuracy, the loss function we utilized is based on a summation of Smooth L1 functions computed across the RPN anchor, the ground truth and the predicted anchor. Let \((x, y)\) denote the top-left coordinate of a predicted anchor, and let \(x_a\) and \(x^*\) denote the same for anchors generated by the RPN and for the manually generated ground truth, respectively. The notation is analogous for the y coordinate, width w and height h of an anchor. We define several terms first, following which the loss function \(L_{reg}\) used in our architecture is presented.

$$\begin{aligned} \begin{array}{l} t_x^*=\frac{(x^*-x_a)}{w_a},\quad \quad t_y^*=\frac{(y^*-y_a)}{h_a},\quad \quad t_w^*=\log \left( \frac{w^*}{w_a}\right) ,\quad \quad t_h^*=\log \left( \frac{h^*}{h_a}\right) ,\\ \\ t_x=\frac{(x-x_a)}{w_a},\quad \quad t_y=\frac{(y-y_a)}{h_a},\quad \quad t_w=\log \left( \frac{w}{w_a}\right) ,\quad \quad t_h=\log \left( \frac{h}{h_a}\right) ,\\ \\ smooth_{L_1}(x)= {\left\{ \begin{array}{ll} 0.5x^2, &{} \text {if } |x|< 1\\ |x|-0.5, &{} \text {otherwise} \end{array}\right. } \quad \text {and} \\ \\ L_{reg}(t_i,t_{i}^{*})=\sum _{i\in \{x,y,w,h\}}smooth_{L_1}(t_{i}^{*}-t_i). \end{array} \end{aligned}$$
(3)
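A minimal sketch of Eq. (3) (our illustration, not the training code; the function names are hypothetical) follows, computing the parameterized targets and the Smooth L1 regression loss for a single anchor.

```python
# Illustrative implementation of the box-regression targets and Smooth L1 loss in Eq. (3).
import numpy as np

def box_targets(box, anchor):
    """box = (x, y, w, h) with (x, y) the top-left corner; anchor likewise."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha, np.log(w / wa), np.log(h / ha)])

def smooth_l1(d):
    d = np.abs(d)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)

def regressor_loss(predicted, ground_truth, anchor):
    t = box_targets(predicted, anchor)           # t_x, t_y, t_w, t_h
    t_star = box_targets(ground_truth, anchor)   # t*_x, t*_y, t*_w, t*_h
    return smooth_l1(t_star - t).sum()           # sum over i in {x, y, w, h}
```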

Hyperparameters

For convenience, Table 5 lists values of critical hyperparameters in our finalized architecture.

Table 5 Values of critical hyperparameters in our architecture.