1 Introduction

A subset of machine learning-based tools, called computer vision tools, deals with identifying, classifying and tracking objects in images or videos. State-of-the-art computer vision tools can read handwritten text [1,2,3,4], find objects in images [5,6,7,8], find product defects [8, 9], make medical diagnoses from medical images with accuracy surpassing humans [10, 11] and track objects [12, 13], to name a few applications. In the last few years, they have been steadily consolidating their place in all scientific fields and industries as reliable and fast analysis methods.

Fig. 1

Schematic representation of the microfluidic device used for droplet generation

Computer vision tools have shown remarkable success in studying microfluidic systems. Artificial neural networks, for example, can predict physical observables, such as flow rate and chemical composition, from images of microfluidic systems with high accuracy, thus reducing the hardware required to measure these quantities in a microfluidics experiment [14, 15]. More recently, a convolutional autoencoder model was trained to distinguish stable from unstable droplets by their shapes within a concentrated emulsion [16].

Another application of computer vision tools in microfluidics is droplet tracking. Droplet recognition and tracking in experiments such as Refs. [17,18,19] and in simulation studies [20, 21] can yield rich information without human intervention. For example, counting droplets, measuring flow rates, obtaining droplet size distributions and computing statistical quantities are cumbersome tasks when droplets must be marked manually across several frames. Two natural questions when using computer vision tools for image analysis are i) how accurate the application is at finding and tracking the objects, and ii) how fast the application analyzes each image. A typical digital camera operates at 30 frames per second (fps); thus, one challenge is to analyze the images at the same or a higher rate for real-time applications.

Along with a few other algorithms, You Only Look Once (YOLO) can analyze images at a few hundred frames per second [22, 23] and is designed to detect 80 classes of objects in a given image. The first version of YOLO was introduced in 2015, and subsequent versions have focused on making the algorithm faster and more accurate at detecting objects. The latest release of YOLO is its seventh version [24], with reported significant gains in speed and accuracy for object detection on standard datasets containing several objects in realistic scenes. In our previous study, we trained YOLOv5 and DeepSORT for real-time droplet identification and tracking in microfluidic experiments and simulations [25, 26], and we reported the image analysis speed for various YOLOv5 models. In the present work, we train the latest YOLOv7 models along with DeepSORT and compare the performance and image analysis speed of these models with their YOLOv5 counterparts. In particular, this paper studies and compares the training time, droplet detection accuracy and inference time of an application that combines YOLOv5/YOLOv7 with DeepSORT for droplet recognition and tracking.

2 Experimental methods

The images analyzed in this study were obtained from a microfluidic device for the generation of droplets exploiting a flow-focusing configuration (scheme of the device in Fig. 1). The device has two inlets for the oil flow (length: 7 mm, width: 300 \(\upmu \hbox {m}\), depth: 500 \(\upmu \hbox {m}\)), one inlet for the flow of an aqueous solution (length: 5 mm, width: 500 \(\upmu \hbox {m}\), depth: 500 \(\upmu \hbox {m}\)), a Y-shaped junction for droplet generation and an expansion channel. The latter is connected to an outlet for collecting the two-phase emulsion. The device was fabricated with a stereolithography system (Envisiontec, Micro Plus HD) using E-shell®600 (Envisiontec) as the pre-polymer. The continuous phase consists of silicone oil (Sigma-Aldrich, oil viscosity 350 cSt at \(25^{\circ }\hbox {C}\)), while an aqueous solution constitutes the dispersed phase. The latter was made by dissolving 7 mg of a black pigment (Sigma-Aldrich, Brilliant Black BN) in 1 mL of distilled water. Both phases were injected through the inlets at constant flow rates by a programmable syringe pump with two independent channels (Harvard Apparatus, model 33). The images analyzed in this study were obtained with flow rates of 10 \(\upmu \)l/min for the dispersed phase and 150 \(\upmu \)l/min for the continuous phase. The droplets have an average diameter of 185 \(\upmu \hbox {m}\). The droplet formation is imaged with a stereomicroscope (Leica, MZ 16 FA) and a camera (Photron, Fastcam APX RS). The fast camera acquired the images at 3000 fps, a capture rate far higher than the real-time object detection capability of any present algorithm. The image playback rate is set to 30 fps. The sequences of images were stored as AVI video files. Later, images from the video were used to train the YOLO and DeepSORT models as described in the following section.

3 Training YOLOv5 and YOLOv7 models

Fig. 2

Example from the custom training dataset used to train the YOLOv5 and YOLOv7 object detector models. Each object is manually enclosed in a rectangle (called the bounding box) and the dimensions of the rectangle are recorded in an associated label file

Table 1 Training time of the YOLO models on the same machine with an identical training dataset. The YOLO model descriptions can be found in Ref. [27] for v5 and in Ref. [24] for v7
Fig. 3

Loss function of YOLOv5 and YOLOv7 as the training progresses. See Ref. [22] for a detailed description of the loss function. Figure legends are the same as in Fig. 4

The steps required to train YOLOv5 and YOLOv7 are identical. First, a training dataset was prepared by manually annotating 1000 images taken from the microfluidics experiment described in Sect. 2. Each image in this dataset contains approximately 13 to 14 droplets. One example from the training dataset is shown in Fig. 2. The droplets in these images are identified, and the dimensions of a rectangle that fully covers each droplet are recorded in a separate text file called the label file. We used the PyTorch implementations of YOLOv5 [27] and YOLOv7 [28] to train several YOLO models on an HPC system, on a single node containing two Intel(R) Xeon(R) Gold 6240 CPUs @ 2.60 GHz (Cascade Lake) and an NVIDIA Tesla V100 GPU with 32 GB VRAM. YOLOv5 and YOLOv7 models differ in the number of trainable parameters (see Table 1). The YOLOv7 algorithm includes extended efficient layer aggregation networks to enhance the features learned by different feature maps and to improve the use of parameters and calculations over previous versions [22]. Typical training times are listed in Table 1.
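
For reference, each line of a YOLO-format label file encodes one bounding box as a class index followed by the box centre and size, normalized to the image dimensions. The following minimal sketch parses such a file; the file name and image resolution are illustrative, not those of the actual dataset:

```python
# Minimal sketch: parse a YOLO-format label file, one box per line as
# "class x_center y_center width height" with coordinates normalized to [0, 1].
# The file name and image resolution below are illustrative assumptions.

IMG_W, IMG_H = 1024, 512  # hypothetical image resolution in pixels

def read_labels(path):
    """Return a list of (class_id, x_center, y_center, width, height) in pixels."""
    boxes = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            cls, xc, yc, w, h = line.split()
            boxes.append((int(cls),
                          float(xc) * IMG_W, float(yc) * IMG_H,
                          float(w) * IMG_W, float(h) * IMG_H))
    return boxes

boxes = read_labels("frame_0001.txt")  # one label file per training image
print(f"{len(boxes)} droplets annotated")
```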

During the training phase, a subset of the data (called a batch) is passed through the network, and a loss value is computed from the difference between the network's predictions and the ground truth provided in the label files. The loss value is then used to update the network's trainable parameters so as to minimize the loss in subsequent passes. An epoch is completed when all of the training data has passed through the network. YOLO's loss calculation accounts for the error in bounding box prediction, the error in object detection and the error in object classification [22]. The loss components computed with training and validation data are shown in Fig. 3.
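
Schematically, the total loss is a weighted sum of these three components. The sketch below is a conceptual illustration only; the weights are hypothetical hyperparameters, not the exact values used in the YOLOv5/YOLOv7 implementations:

```python
# Conceptual sketch of the YOLO loss composition; the weights are
# hypothetical hyperparameters, not the values used by YOLOv5/YOLOv7.
def total_loss(box_loss, obj_loss, cls_loss,
               w_box=0.05, w_obj=1.0, w_cls=0.5):
    # box_loss: error in the predicted bounding-box geometry
    # obj_loss: error in detecting whether an object is present
    # cls_loss: error in assigning the object class (only "droplet" here)
    return w_box * box_loss + w_obj * obj_loss + w_cls * cls_loss
```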

Fig. 4

Mean average precision (mAP) comparison between YOLOv5 and YOLOv7 models on the custom dataset

Table 2 Inference time per frame - CPU

4 Inference with YOLO and DeepSORT

During the training phase, the quality of the YOLOv5 and YOLOv7 models is measured with the well-known mean average precision (mAP), calculated with an Intersection over Union (IoU) threshold of 0.5 (see Fig. 4). For both versions, the mAP value quickly saturates to unity after training for 20 epochs. Similarly, the mAP averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05 lies between 0.9 and 0.94 for the YOLOv5 models and between 0.8 and 0.9 for the YOLOv7 models. These differences in the mAP values are practically insignificant for droplet detection with the YOLOv5 and YOLOv7 models.
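
For completeness, the IoU quantifies the overlap between a predicted and a ground-truth box, and a prediction counts as correct when the IoU exceeds the chosen threshold. A minimal implementation for axis-aligned boxes in corner format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333..., below a 0.5 threshold
```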

Table 3 Inference time per frame - GPU

After the models are trained, they can be deployed for real-world applications. One challenging milestone for any computer vision application is to run in real time, i.e., with an image analysis speed exceeding 30 fps. YOLO models on their own do deliver real-time performance. In Tables 2 and 3, we show the total time for droplet identification and tracking, combining YOLOv5/YOLOv7 with DeepSORT, on two hardware configurations. Here, we measured the YOLO and DeepSORT times as the time taken by the functions implementing the algorithms to analyze the input; the time to load the input and write the output is not taken into account. The benchmarking study was carried out on an MSI GS77 Stealth laptop with an i7-12700H CPU, 32 GB RAM and an NVIDIA RTX 3070 Ti GPU with 8 GB VRAM. The two 'X' entries in the table mark the YOLOv7 models that require more than 8 GB VRAM, making them unfeasible to run on the RTX 3070 Ti GPU. Running on the GPU, we observe an approximately 10% improvement in inference speed for YOLOv7 over YOLOv5. However, the additional time spent by the object tracking algorithm DeepSORT is comparable to that of the heavier YOLO models. Taking 30 fps as the commonly accepted threshold for real-time tracking, the single application combining object identification and tracking delivers real-time tracking with the lighter YOLO models (YOLOv5s, YOLOv5m, YOLOv7-tiny and YOLOv7-x), but falls below the real-time mark with the heavier YOLO models. Finally, a video of droplet tracking is provided in the supplemental material (see SM1.avi).
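
The timing protocol can be reproduced with a few lines of Python. The sketch below is illustrative: it assumes trained weights stored in a file named best.pt, loads YOLOv5 through the public Ultralytics torch.hub interface and replaces the experimental video with dummy frames so that the script is self-contained; the DeepSORT time is measured in the same way around its update call.

```python
import time

import numpy as np
import torch

# Illustrative timing sketch. 'best.pt' stands for the trained weights file;
# dummy frames replace the experimental video so the script is self-contained.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

frames = [np.random.randint(0, 255, (512, 640, 3), dtype=np.uint8)
          for _ in range(100)]

start = time.perf_counter()
for frame in frames:
    detections = model(frame)   # detection only; loading/writing is excluded
if device == 'cuda':
    torch.cuda.synchronize()    # flush pending GPU work before stopping the clock
elapsed = time.perf_counter() - start
print(f'{len(frames) / elapsed:.1f} fps')
```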

5 Conclusion

This paper studied two versions of the YOLO object detector coupled with the DeepSORT tracking algorithm in a single tool we call DropTrack. DropTrack produces bounding boxes and unique IDs for the detected droplets, which help in constructing droplet trajectories across sequential frames, thus allowing other derived quantities to be computed in real time, such as the droplet flow rate, the droplet size distribution, the distance between droplets and local order parameters, which are desired observables in other applications [29,30,31,32]. The benchmarks studied in this work serve as a guide to the computational resources required to train the networks and indicate the expected inference time for various models on diverse hardware configurations.
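
As an illustration of such bookkeeping, the sketch below derives a droplet diameter and speed from hypothetical track records (frame index, track ID and box corners in pixels); the calibration factor and the track format are assumptions for illustration, not DropTrack's actual output format:

```python
from collections import defaultdict

# Hypothetical track records: (frame, track_id, x1, y1, x2, y2) in pixels,
# assumed to be ordered by frame. Calibration and playback rate are illustrative.
tracks = [(0, 1, 10, 20, 50, 60), (1, 1, 18, 20, 58, 60),
          (0, 2, 80, 20, 120, 60), (1, 2, 88, 20, 128, 60)]
PX_PER_UM = 0.5   # illustrative pixel-to-micrometre calibration factor
FPS = 30          # playback rate used in this study

by_id = defaultdict(list)
for frame, tid, x1, y1, x2, y2 in tracks:
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    by_id[tid].append((frame, cx, cy, x2 - x1))

for tid, pts in by_id.items():
    (f0, x0, y0, w0), (f1, xe, ye, _) = pts[0], pts[-1]
    # Centre displacement per frame, converted to micrometres per second
    speed = ((xe - x0) ** 2 + (ye - y0) ** 2) ** 0.5 / (f1 - f0) * FPS / PX_PER_UM
    print(f'droplet {tid}: diameter ~ {w0 / PX_PER_UM:.0f} um, speed ~ {speed:.0f} um/s')
```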

YOLOv5 and YOLOv7 networks were trained with identical training datasets on the same HPC machine with an NVIDIA V100 GPU. The training time per epoch is comparable for the lighter YOLOv5 and YOLOv7 models, but the heavier YOLOv7 models take almost double the time to complete training.

We observe a significant increase in inference speed for the YOLOv7 models compared to their YOLOv5 counterparts, as one would expect. Moreover, we report detailed computational costs of the object detection and object tracking routines and the overall performance of the combined application. Lighter YOLO models identify objects much faster than DeepSORT tracks them, and the object identification time increases with the complexity of the object-detecting network. Thus, for real-time tracking, it is crucial to choose the right combination of YOLO network and hardware configuration, possibly trading bounding box accuracy for speed.