MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking

Standardized benchmarks have been crucial in pushing the performance of computer vision algorithms, especially since the advent of deep learning. Although leaderboard results should not be over-claimed, they often provide the most objective measure of performance and are therefore important guides for research. We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT) launched in late 2014, created to collect existing and new data and to provide a framework for the standardized evaluation of multiple object tracking methods. The benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community, with applications ranging from robot navigation to self-driving cars. This paper collects the first three releases of the benchmark: (i) MOT15, along with numerous state-of-the-art results submitted over the years, (ii) MOT16, which contains new challenging videos, and (iii) MOT17, which extends the MOT16 sequences with more precise labels and evaluates tracking performance on three different object detectors. The second and third releases not only offer a significant increase in the number of labeled boxes, but also provide labels for multiple object classes besides pedestrians, as well as the level of visibility for every single object of interest. We finally provide a categorization of state-of-the-art trackers and a broad error analysis. This will help newcomers understand the related work and research trends in the MOT community, and hopefully shed some light on potential future research directions.


Introduction
Evaluating and comparing single-camera multi-target tracking methods is not trivial for numerous reasons [Milan et al., 2013]. Firstly, unlike for other tasks, such as image denoising, the ground truth, i.e., the perfect solution one aims to achieve, is difficult to define clearly. Partially visible, occluded, or cropped targets, reflections in mirrors or windows, and objects that very closely resemble targets all impose intrinsic ambiguities, such that even humans may not agree on one particular ideal solution. Secondly, many different evaluation metrics with free parameters and ambiguous definitions often lead to conflicting quantitative results across the literature. Finally, the lack of pre-defined test and training data makes it difficult to compare different methods fairly.
Even though multi-target tracking is a crucial problem in scene understanding, until recently it lacked large-scale benchmarks providing a fair comparison between tracking methods. Typically, methods were tuned for each sequence, reaching over 90% accuracy on well-known sequences like PETS [Ferryman and Ellis, 2010].
Nonetheless, the real challenge for a tracking system is to perform well on a variety of sequences with different levels of crowdedness, camera motion, illumination, etc., without overfitting the set of parameters to a specific video sequence.
To address this issue, we released the MOTChallenge benchmark in 2014, which consisted of three main components: (1) a (re-)collection of publicly available and new datasets, (2) a centralized evaluation method, and (3) an infrastructure that allows for crowdsourcing of new data, new evaluation methods, and even new annotations. The first release of the dataset, named MOT15, consists of 11 sequences for training and 11 for testing, with a total of 11,286 frames or 996 seconds of video. 3D information was also provided for 4 of those sequences. Pre-computed object detections, annotations (only for the training sequences), and a common evaluation method for all datasets were provided to all participants, which allowed for all results to be compared fairly.
Since October 2014, over 1,000 methods have been publicly tested on the MOTChallenge benchmark, and over 1833 users have registered, see Fig. 1. In particular, 760 methods have been tested on MOT15, 1,017 on MOT16 and 692 on MOT17; 132, 213 and 190 (respectively) were published on the public leaderboard. This established MOTChallenge as the first standardized large-scale tracking benchmark for single-camera multiple people tracking.
Despite its success, the first tracking benchmark, MOT15, was lacking in a few aspects:
- The annotation protocol was not consistent across all sequences, since some of the ground truth was collected from various online sources.
- The distribution of crowd density was not balanced between training and test sequences.
- Some of the sequences were well-known (e.g., PETS09-S2L1) and methods were overfitted to them, which made them not ideal for testing purposes.
- The provided public detections did not perform well on the benchmark, which led some participants to switch to other pedestrian detectors.
To resolve the aforementioned shortcomings, we introduced the second benchmark, MOT16. It consists of a set of 14 sequences with crowded scenarios, recorded from different viewpoints, with and without camera motion, and covers a diverse set of weather and illumination conditions. Most importantly, the annotations for all sequences were carried out from scratch by qualified researchers following a strict protocol, and finally double-checked to ensure high annotation accuracy. In addition to pedestrians, we also annotated classes such as vehicles, sitting people, and occluding objects. With this fine-grained level of annotation, it was possible to accurately compute the degree of occlusion and cropping of all bounding boxes, which is also provided with the benchmark.
For the third release, MOT17, we (1) further improved the annotation consistency over the sequences and (2) proposed a new evaluation protocol with public detections. In MOT17, we provided three sets of public detections, obtained using three different object detectors. Participants were required to evaluate their trackers using all three detection sets, and results were then averaged to obtain the final score. The main idea behind this new protocol was to establish the robustness of trackers when fed with detections of varying quality. In addition, we released a separate subset for evaluating object detectors, MOT17Det.
In this work, we categorize and analyze 73 published trackers that have been evaluated on MOT15, 74 trackers on MOT16, and 57 on MOT17. Having results on such a large number of sequences allows us to perform a thorough analysis of trends in tracking, currently best-performing methods, and special failure cases. We aim to shed some light on potential research directions for the near future in order to further improve tracking performance.
In summary, this paper has two main goals:
- To present the MOTChallenge benchmark for a fair evaluation of multi-target tracking methods, along with its first releases: MOT15, MOT16, and MOT17.
- To analyze the performance of 73 state-of-the-art trackers on MOT15, 74 on MOT16, and 57 on MOT17 in order to identify trends in MOT over the years. We analyze the main weaknesses of current trackers and discuss promising research directions for the community to advance the field of multi-target tracking.

Related work
Benchmarks and challenges. In the recent past, the computer vision community has developed centralized benchmarks for numerous tasks including object detection [Everingham et al., 2015], pedestrian detection [Dollár et al., 2009], 3D reconstruction [Seitz et al., 2006], optical flow [Baker et al., 2011, Geiger et al., 2012], visual odometry [Geiger et al., 2012], single-object short-term tracking [Kristan et al., 2014], and stereo estimation [Geiger et al., 2012, Scharstein and Szeliski, 2002]. Despite potential pitfalls of such benchmarks [Torralba and Efros, 2011], they have proven to be extremely helpful to advance the state of the art in the respective area.
For single-camera multiple target tracking, in contrast, there has been very limited work on standardizing quantitative evaluation. One of the few exceptions is the well-known PETS dataset [Ferryman and Ellis, 2010], addressing primarily surveillance applications. The 2009 version consists of 3 subsets: S1 targeting person count and density estimation, S2 targeting people tracking, and S3 targeting flow analysis and event recognition. The simplest sequence for tracking (S2L1) consists of a scene with few pedestrians, and for that sequence, state-of-the-art methods perform extremely well, with accuracies of over 90% given a good set of initial detections [Henriques et al., 2011, Milan et al., 2014, Zamir et al., 2012]. Therefore, methods started to focus on tracking objects in the most challenging sequence, i.e., with the highest crowd density, but hardly ever on the complete dataset. Even for this widely used benchmark, we observe that tracking results are commonly obtained inconsistently: different subsets of the available data are used, model training is inconsistent and often prone to overfitting, and evaluation scripts and detection inputs vary. Results are thus not easily comparable. Hence, the questions that arise are: (i) are these sequences already too easy for current tracking methods?, (ii) do methods simply overfit?, and (iii) are existing methods poorly evaluated?
The PETS team organizes a workshop approximately once a year to which researchers can submit their results, and methods are evaluated under the same conditions. Although this is indeed a fair comparison, the fact that submissions are evaluated only once a year means that the use of this benchmark for high impact conferences like ICCV or CVPR remains challenging. Furthermore, the sequences tend to be focused only on surveillance scenarios and lately on specific tasks such as vessel tracking. Surveillance videos have a low frame rate, fixed camera viewpoint, and low pedestrian density. The ambition of MOTChallenge is to tackle more general scenarios including varying viewpoints, illumination conditions, different frame rates, and levels of crowdedness.
A well-established and useful way of organizing datasets is through standardized challenges. These are usually in the form of web servers that host the data and through which results are uploaded by the users. Results are then evaluated in a centralized way by the server and afterward presented online to the public, making a comparison with any other method immediately possible.
There are several datasets organized in this fashion: Labeled Faces in the Wild [Huang et al., 2007] for unconstrained face recognition, PASCAL VOC [Everingham et al., 2015] for object detection, and the ImageNet large scale visual recognition challenge [Russakovsky et al., 2015].
The KITTI benchmark [Geiger et al., 2012] was introduced for challenges in autonomous driving, which includes stereo/flow, odometry, road and lane estimation, object detection, and orientation estimation, as well as tracking. Some of the sequences include crowded pedestrian crossings, making the dataset quite challenging, but the camera position is located at a fixed height for all sequences.
Another work that is worth mentioning is [Alahi et al., 2014], in which the authors collected a large amount of data containing 42 million pedestrian trajectories. Since annotation of such a large collection of data is infeasible, they use a denser set of cameras to create the "ground-truth" trajectories. Though we do not aim at collecting such a large amount of data, the goal of our benchmark is somewhat similar: to push research in tracking forward by generalizing the test data to a larger set that is highly variable and hard to overfit.
DETRAC [Wen et al., 2020] is a benchmark for vehicle tracking, following a similar submission system to the one we proposed with MOTChallenge. This benchmark consists of a total of 100 sequences, 60% of which are used for training. Sequences are recorded from a high viewpoint (surveillance scenarios) with the goal of vehicle tracking.
Evaluation. A critical question with any dataset is how to measure the performance of the algorithms. In the case of multiple object tracking, the CLEAR-MOT metrics [Stiefelhagen et al., 2006] have emerged as the standard measures. By measuring the intersection over union of bounding boxes and matching those from ground-truth annotations and results, measures of accuracy and precision can be computed. Precision measures how well persons are localized, while accuracy evaluates how many distinct errors, such as missed targets, ghost trajectories, or identity switches, are made.
Alternatively, trajectory-based measures by Wu and Nevatia [2006] evaluate how many trajectories are mostly tracked, mostly lost, or partially tracked, relative to the track lengths. These are mainly used to assess track coverage. The IDF1 metric [Ristani et al., 2016] was introduced for MOT evaluation in a multi-camera setting. Since then, it has been adopted for evaluation in the standard single-camera setting in our benchmark. Contrary to MOTA, the ground-truth-to-prediction mapping is established at the level of entire tracks instead of frame by frame, and IDF1 therefore measures long-term tracking quality. In Sec. 8 we report IDF1 performance in conjunction with MOTA. A detailed discussion of the measures can be found in Sec. 7.
A key parameter in both families of metrics is the intersection over union threshold which determines whether a predicted bounding box was matched to an annotation. It is fairly common to observe methods compared under different thresholds, varying from 25% to 50%. There are often many other variables and implementation details that differ between evaluation scripts, which may affect results significantly. Furthermore, the evaluation script is not the only factor. Recently, a thorough study [Mathias et al., 2014] on face detection benchmarks showed that annotation policies vary greatly among datasets. For example, bounding boxes can be defined tightly around the object, or more loosely to account for pose variations. The size of the bounding box can greatly affect results since the intersection over union depends directly on it.
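To make the role of the overlap threshold concrete, the following minimal Python sketch (function names are ours, not taken from any official evaluation script) computes the intersection over union of two axis-aligned boxes and applies a configurable matching threshold. Shifting the threshold from 50% to 25% can turn a rejected match into an accepted one, which is exactly why mixed thresholds across papers make results incomparable:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_match(pred_box, gt_box, threshold=0.5):
    """A prediction counts as a true positive only if its IoU reaches the threshold."""
    return iou(pred_box, gt_box) >= threshold
```

For instance, two 10x10 boxes shifted by 5 pixels overlap with IoU = 1/3: a match at a 25% threshold, a miss at 50%.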
Standardized benchmarks are preferable for comparing methods in a fair and principled way. Using the same ground-truth data and evaluation methodology is the only way to guarantee that the only part being evaluated is the tracking method that delivers the results. This is the main goal of the MOTChallenge benchmark.

History of MOTChallenge
The first benchmark was released in October 2014 and consists of 11 sequences for training and 11 for testing, where the testing sequences are not publicly available. We also provided a set of detections and evaluation scripts. Since its release, 692 tracking results have been submitted to the benchmark, which has quickly become the standard for evaluating multiple pedestrian tracking methods at high-impact conferences such as ICCV, CVPR, and ECCV. Together with the release of the new data, we organized the 1st Workshop on Benchmarking Multi-Target Tracking (BMTT) in conjunction with the IEEE Winter Conference on Applications of Computer Vision (WACV) in 2015. After the success of the first release of sequences, we created a 2016 edition, with 14 longer and more crowded sequences and a more accurate annotation policy, which we describe in this manuscript (Sec. C.1). For the release of MOT16, we organized the second workshop in conjunction with the European Conference on Computer Vision (ECCV) in 2016.
For the third release of our dataset, MOT17, we improved the annotation consistency over the MOT16 sequences and provided three public sets of detections, on which trackers need to be evaluated. For this release, we organized a Joint Workshop on Tracking and Surveillance in conjunction with the Performance Evaluation of Tracking and Surveillance (PETS) workshop [Ferryman and Ellis, 2010, Ferryman and Shahrokni, 2009] and the Conference on Computer Vision and Pattern Recognition (CVPR) in 2017.
In this paper, we focus on the MOT15, MOT16, and MOT17 benchmarks, because numerous methods have submitted their results to these challenges over several years, which allows us to analyze these methods and draw conclusions about research trends in multi-object tracking. Nonetheless, we are continuously working on our benchmark and frequently release new challenges and datasets. At the 4th MOTChallenge workshop, organized in conjunction with CVPR 2019, we presented eight new sequences that we released as the [Voigtlaender et al., 2019]. The future vision of MOTChallenge is to establish it as a general platform for benchmarking multi-object tracking, expanding beyond pedestrian tracking. To this end, we recently added a public benchmark for multi-camera 3D zebrafish tracking [Pedersen et al., 2020], and a benchmark for the large-scale Tracking Any Object (TAO) dataset [Dave et al., 2020]. The TAO dataset consists of 2,907 videos, covering 833 classes with 17,287 tracks.
In Fig. 1, we plot the evolution of the number of users, submissions, and trackers created since MOTChallenge was released to the public in 2014. Since our 2nd workshop was announced at ECCV, we have experienced steady growth in the number of users as well as submissions.

MOT15 Release
One of the key aspects of any benchmark is data collection. The goal of MOTChallenge is not only to compile yet another dataset with completely new data but rather to: (1) create a common framework to test tracking methods on, and (2) gather existing and new challenging sequences with very different characteristics (frame rate, pedestrian density, illumination, or point of view) in order to challenge researchers to develop more general tracking methods that can deal with all types of sequences. In Tab. 5 of the Appendix we show an overview of the sequences included in the benchmark.

Sequences
We have compiled a total of 22 sequences that combine videos from several sources [Andriluka et al., 2010, Benfold and Reid, 2011, Ess et al., 2008, Ferryman and Ellis, 2010, Geiger et al., 2012] with new data collected by us. We use half of the data for training and half for testing, and the annotations of the testing sequences are not released to the public to avoid (over)fitting of methods to specific sequences. Note that the test data contains over 10 minutes of footage and 61,440 annotated bounding boxes; it is therefore hard for researchers to over-tune their algorithms on such a large amount of data. This is one of the major strengths of the benchmark.
We collected 6 new challenging sequences: 4 filmed from a static camera and 2 from a moving camera held at pedestrian height. Three sequences are particularly challenging: a night sequence filmed from a moving camera and two outdoor sequences with a high density of pedestrians. For the night sequence, the moving camera together with the low illumination creates a lot of motion blur, making it extremely challenging. A smaller subset of the benchmark including only these six new sequences was presented at the 1st Workshop on Benchmarking Multi-Target Tracking, where the top-performing method reached a MOTA (tracking accuracy) of only 12.7%, confirming the difficulty of the new sequences.

Detections
To detect pedestrians in all images of the MOT15 edition, we use the object detector of Dollár et al. [2014], which is based on aggregated channel features (ACF). We rely on the default parameters and the pedestrian model trained on the INRIA dataset [Dalal and Triggs, 2005], rescaled by a factor of 0.6 to enable the detection of smaller pedestrians. The detector performance, along with three sample frames, is depicted in Fig. 2 for both the training and the test set of the benchmark. Recall does not reach 100% because of the non-maximum suppression applied.
We cannot (nor necessarily want to) prevent anyone from using a different set of detections. However, we require that this is noted as part of the tracker's description and is also displayed in the rating table.

Weaknesses of MOT15
By the end of 2015, it was clear that a new release was due for the MOTChallenge benchmark. The main weaknesses of MOT15 were the following:
- Annotations: we collected annotations online for the existing sequences, while we manually annotated the new sequences. Some of the collected annotations were not accurate enough, especially in scenes with moving cameras.
- Difficulty: we originally wanted to include some well-known sequences, e.g., PETS2009, in the MOT15 benchmark. However, these sequences turned out to be too simple for state-of-the-art trackers, which is why we decided to create a new and more challenging benchmark.
To overcome these weaknesses, we created MOT16, a collection of all-new challenging sequences (including our new sequences from MOT15), with annotations created following a stricter protocol (see Sec. C.1 of the Appendix).

MOT16 and MOT17 Releases
Our ambition for the release of MOT16 was to compile a benchmark with new and more challenging sequences compared to MOT15. Figure 3 presents an overview of the benchmark training and test sequences (detailed information about the sequences is presented in Tab. 9 in the Appendix).
MOT17 consists of the same sequences as MOT16 but contains two important changes: (i) the annotations are further improved, i.e., the accuracy of the bounding boxes is increased, missed pedestrians are added, and additional occluders are annotated, following the comments received from many anonymous benchmark users as well as a second round of sanity checks; (ii) the evaluation system significantly differs from that of MOT16: tracking methods are evaluated using three different detectors in order to show their robustness to varying levels of noisy detections.

MOT16 sequences
We compiled a total of 14 sequences, of which we use half for training and half for testing. The annotations of the testing sequences are not publicly available. The sequences can be classified according to moving/static camera, viewpoint, and illumination conditions (Fig. 12 in the Appendix). The new data contains almost 3 times more bounding boxes for training and testing than MOT15. Most sequences are filmed in high resolution, and the mean crowd density is 3 times higher compared to the first benchmark release. Hence, the new sequences present a more challenging benchmark than MOT15 for the tracking community.

Detections
We evaluate several state-of-the-art detectors on our benchmark, and summarize the main findings in Fig. 4. To evaluate the performance of the detectors for the task of tracking, we evaluate them using all annotated bounding boxes. Besides the DPM detections used for MOT16, for MOT17 we provide detections from Faster R-CNN [Ren et al., 2015] and from a detector with scale-dependent pooling (SDP), both of which outperform the previous DPM method. After a discussion held in one of the MOTChallenge workshops, we agreed to provide all three detection sets as public detections, effectively changing the way MOTChallenge evaluates trackers. The motivation is to further challenge trackers to be more general and to work with detections of varying quality. These detectors have different characteristics, as can be seen in Fig. 4. Hence, a tracker that can work with all three inputs is inherently more robust. The MOT17 evaluation therefore scores the output of trackers on all three detection sets, averaging their performance for the final ranking. A detailed breakdown of detection bounding box statistics on individual sequences is provided in Tab. 10 in the Appendix.

MOT20 Outlook
In 2020, we released our latest dataset, MOT20, focusing on very crowded scenes where the object density can reach up to 246 pedestrians per frame. We provide eight sequences filmed in both indoor and outdoor locations, during day and night. The dataset contains over 2,000,000 bounding boxes and 3,833 tracks, 10 times more than MOT16. The benchmark therefore provides new challenging sequences in which trackers have to demonstrate their ability to cope with extremely dense scenarios. For the benchmark, we provide public detections (examples are shown in Fig. 5(d)) obtained using the Faster R-CNN [Ren et al., 2015] detector.
At the time of writing, we have received only 11 submissions for MOT20; a discussion of the results would therefore not yet be significant or informative and is left for future work.

Evaluation
MOTChallenge is also a platform for a fair comparison of state-of-the-art tracking methods. By providing authors with standardized ground-truth data, evaluation metrics, scripts, as well as a set of precomputed detections, all methods are compared under the same conditions, thereby isolating the performance of the tracker from other factors. In the past, a large number of metrics for quantitative evaluation of multiple target tracking have been proposed [Bernardin and Stiefelhagen, 2008, Li et al., 2009, Schuhmacher et al., 2008, Smith et al., 2005, Stiefelhagen et al., 2006, Wu and Nevatia, 2006]. Choosing "the right" one is largely application dependent and the quest for a unique, general evaluation measure is still ongoing. On the one hand, it is desirable to summarize the performance into a single number to enable a direct comparison between methods. On the other hand, one might want to provide more informative performance estimates by detailing the types of errors the algorithms make, which precludes a clear ranking.
Following a recent trend [Bae and Yoon, 2014, Milan et al., 2014, Wen et al., 2014], we employ three sets of tracking performance measures that have been established in the literature: (i) the frame-to-frame based CLEAR-MOT metrics proposed by Stiefelhagen et al. [2006], (ii) track quality measures proposed by Wu and Nevatia [2006], and (iii) trajectory-based IDF1 proposed by Ristani et al. [2016]. These evaluation measures give a complementary view on tracking performance. The main representative of the CLEAR-MOT measures, Multi-Object Tracking Accuracy (MOTA), is evaluated based on frame-to-frame matching between track predictions and ground truth. It explicitly penalizes identity switches between consecutive frames, thus evaluating tracking performance only locally. This measure tends to put more emphasis on object detection performance compared to temporal continuity. In contrast, track quality measures [Wu and Nevatia, 2006] and IDF1 [Ristani et al., 2016] perform prediction-to-ground-truth matching on a trajectory level and emphasize the temporal continuity aspect of the tracking performance. In this section, we first introduce the matching between predicted tracks and ground-truth annotations before we present the final measures. All evaluation scripts used in our benchmark are publicly available.

Multiple Object Tracking Accuracy
MOTA summarizes three sources of errors with a single performance measure:
\[
\text{MOTA} = 1 - \frac{\sum_t \left( \text{FN}_t + \text{FP}_t + \text{IDSW}_t \right)}{\sum_t \text{GT}_t},
\]
where t is the frame index and GT_t is the number of ground-truth objects in frame t. FN are the false negatives, i.e., the number of ground-truth objects that were not detected by the method. FP are the false positives, i.e., the number of objects that were falsely detected by the method but do not exist in the ground truth. IDSW is the number of identity switches, i.e., how many times a given trajectory changes from one ground-truth object to another. The computation of these values as well as other implementation details of the evaluation tool are detailed in Appendix Sec. D. We report the percentage MOTA in (−∞, 100] in our benchmark. Note that MOTA can also be negative in cases where the number of errors made by the tracker exceeds the number of all objects in the scene.
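The MOTA computation described above reduces to a few lines once the per-frame error counts are available. The following Python sketch is an illustrative simplification (the function name and input format are ours, not the official evaluation code, which additionally performs the matching of predictions to ground truth that produces these counts):

```python
def mota(per_frame_errors, per_frame_gt):
    """MOTA = 1 - (sum over frames of FN + FP + IDSW) / (total ground-truth objects).

    per_frame_errors: one (fn, fp, idsw) tuple per frame t.
    per_frame_gt:     number of ground-truth objects GT_t per frame t.
    """
    total_errors = sum(fn + fp + idsw for fn, fp, idsw in per_frame_errors)
    total_gt = sum(per_frame_gt)
    return 1.0 - total_errors / total_gt
```

For example, two frames with five ground-truth objects each and errors (1 FN) and (1 FP, 1 IDSW) give MOTA = 1 − 3/10 = 0.7, i.e., 70%; a tracker making more errors than there are objects yields a negative score.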
Justification. We note that MOTA has been criticized in the literature for not properly balancing its different sources of errors. However, to this day, MOTA is still considered the most expressive measure for single-camera MOT evaluation. It has been widely adopted for ranking methods in more recent tracking benchmarks, such as PoseTrack [Andriluka et al., 2018], KITTI tracking [Geiger et al., 2012], and the newly released Lyft [Kesten et al., 2019], Waymo [Sun et al., 2020], and ArgoVerse [Chang et al., 2019] benchmarks. We adopt MOTA for ranking; however, we recommend taking alternative evaluation measures [Ristani et al., 2016, Wu and Nevatia, 2006] into account when assessing a tracker's performance.
Robustness. One incentive behind compiling this benchmark was to reduce dataset bias by keeping the data as diverse as possible. The main motivation is to challenge state-of-the-art approaches and analyze their performance in unconstrained environments and on unseen data. Our experience shows that most methods can be heavily overfitted on one particular dataset, and may not be general enough to handle an entirely different setting without a major change in parameters or even in the model.

Multiple Object Tracking Precision
The Multiple Object Tracking Precision is the average dissimilarity between all true positives and their corresponding ground-truth targets. For bounding box overlap, it is computed as
\[
\text{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t},
\]
where c_t denotes the number of matches in frame t and d_{t,i} is the bounding box overlap of target i with its assigned ground-truth object in frame t. MOTP thereby gives the average overlap between all correctly matched hypotheses and their respective objects, and ranges between the matching threshold t_d := 50% and 100%. It is important to point out that MOTP is a measure of localization precision, not to be confused with the positive predictive value or relevance in the context of precision / recall curves used, e.g., in object detection.
In practice, it quantifies the localization precision of the detector, and therefore, it provides little information about the actual performance of the tracker.
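As a minimal sketch (our own naming and input format, not the official script), MOTP is simply the mean IoU over all true-positive matches, accumulated across frames:

```python
def motp(overlaps_per_frame):
    """Average IoU over all true-positive matches.

    overlaps_per_frame: list of lists; overlaps_per_frame[t] holds the overlap
    d_{t,i} of each matched hypothesis i in frame t (so c_t is the length of
    that inner list).
    """
    total_overlap = sum(sum(frame) for frame in overlaps_per_frame)
    total_matches = sum(len(frame) for frame in overlaps_per_frame)
    return total_overlap / total_matches if total_matches else 0.0
```

With matches of overlap 0.9 and 0.7 in one frame and 0.8 in the next, MOTP is 2.4 / 3 = 0.8, regardless of how many targets were missed entirely, which is why the measure says little about tracking quality itself.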

Identification Precision, Identification Recall, and F1 Score

CLEAR-MOT evaluation measures provide event-based tracking assessment. In contrast, the IDF1 measure [Ristani et al., 2016] is an identity-based measure that emphasizes the track identity preservation capability over the entire sequence. In this case, the predictions-to-ground-truth mapping is established by solving a bipartite matching problem, connecting pairs with the largest temporal overlap. After the matching is established, we can compute the number of True Positive IDs (IDTP), False Negative IDs (IDFN), and False Positive IDs (IDFP), which generalize the concept of per-frame TPs, FNs, and FPs to tracks. Based on these quantities, we can express the Identification Precision (IDP) as
\[
\text{IDP} = \frac{\text{IDTP}}{\text{IDTP} + \text{IDFP}},
\]
and the Identification Recall (IDR) as
\[
\text{IDR} = \frac{\text{IDTP}}{\text{IDTP} + \text{IDFN}}.
\]
Note that IDP and IDR are the fractions of computed (ground-truth) detections that are correctly identified. IDF1 is then expressed as the ratio of correctly identified detections over the average number of ground-truth and computed detections, balancing identification precision and recall through their harmonic mean:
\[
\text{IDF1} = \frac{2\,\text{IDTP}}{2\,\text{IDTP} + \text{IDFP} + \text{IDFN}}.
\]
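Once the bipartite matching has produced the global counts IDTP, IDFP, and IDFN, the three identification scores reduce to simple ratios. The sketch below (our own naming, not the official evaluation script, and omitting the matching step itself) makes this explicit:

```python
def id_scores(idtp, idfp, idfn):
    """Identification precision, recall, and F1 from the global ID counts
    produced by the track-level bipartite matching."""
    idp = idtp / (idtp + idfp) if (idtp + idfp) else 0.0
    idr = idtp / (idtp + idfn) if (idtp + idfn) else 0.0
    denom = 2 * idtp + idfp + idfn
    idf1 = 2 * idtp / denom if denom else 0.0
    return idp, idr, idf1
```

For instance, IDTP = 80, IDFP = 20, IDFN = 40 give IDP = 0.8, IDR = 2/3, and IDF1 = 160/220 ≈ 0.727, their harmonic mean.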

Track quality measures
The final measures that we report on our benchmark are qualitative and evaluate the percentage of each ground-truth trajectory that is recovered by a tracking algorithm. Each ground-truth trajectory is classified as mostly tracked (MT), partially tracked (PT), or mostly lost (ML). As defined in [Wu and Nevatia, 2006], a target is mostly tracked if it is successfully tracked for at least 80% of its life span, and considered mostly lost if it is covered for less than 20% of its total length. The remaining tracks are considered partially tracked. A high number of MT and few ML are desirable. Note that it is irrelevant for this measure whether the ID remains the same throughout the track.
We report MT and ML as a ratio of mostly tracked and mostly lost targets to the total number of ground-truth trajectories.
In certain situations, one might be interested in obtaining long, persistent tracks without trajectory gaps. To that end, the number of track fragmentations (FM) counts how many times a ground-truth trajectory is interrupted (untracked). A fragmentation event occurs each time a trajectory changes its status from tracked to untracked and is resumed at a later point. Similarly to the ID switch ratio (cf. Sec. D.1), we also provide the relative number of fragmentations as FM / Recall.
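A minimal sketch of the track-quality classification, assuming a per-frame boolean coverage mask for a single ground-truth trajectory (our own formulation and naming, not the official evaluation code):

```python
def track_quality(covered_flags):
    """Classify one ground-truth trajectory from a per-frame coverage mask.

    covered_flags: booleans over the track's life span, True where the target
    is tracked in that frame. Returns (status, fragmentations), with status
    'MT' (>= 80% covered), 'ML' (< 20% covered), or 'PT' otherwise.
    """
    ratio = sum(covered_flags) / len(covered_flags)
    if ratio >= 0.8:
        status = 'MT'
    elif ratio < 0.2:
        status = 'ML'
    else:
        status = 'PT'
    # A fragmentation is a tracked -> untracked transition that is resumed later.
    fragmentations = sum(
        1 for prev, cur in zip(covered_flags, covered_flags[1:]) if prev and not cur
    )
    # A gap that runs to the end of the track is never resumed, so do not count it.
    if fragmentations and not covered_flags[-1]:
        fragmentations -= 1
    return status, fragmentations
```

Note that the MT/ML decision ignores identity entirely: a track covered 80% of the time counts as mostly tracked even if its ID changed in every frame.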

Analysis of State-of-the-Art Trackers
We now present an analysis of recent multi-object tracking methods submitted to the benchmark. This is divided into two parts: (i) a categorization of the methods, where our goal is to help young scientists navigate the recent MOT literature, and (ii) an error and runtime analysis, where we point out methods that have shown good performance on a wide range of scenes. We hope this can eventually lead to new promising research directions.
We consider all valid submissions to all three benchmarks that were published before April 17th, 2020, and that used the provided set of public detections. For this analysis, we focus on methods that are peer-reviewed, i.e., published at a conference or in a journal. We evaluate a total of 101 (public) trackers; 73 trackers were tested on MOT15, 74 on MOT16 and 57 on MOT17. A small subset of the submissions (11) was made by the benchmark organizers and not by the original authors of the respective methods. Results for MOT15 are summarized in Tab. 1, for MOT16 in Tab. 2 and for MOT17 in Tab. 3. The performance of the top 15 ranked trackers is demonstrated in Fig. 6.

Fig. 6: Performance of the top-ranked trackers (e.g., MPNTrack, Tracktor++v2, TrctrD15, Tracktor++) across sequences. The entries are ordered from easiest sequence / best-performing method to hardest sequence / poorest performance. The mean performance across all sequences / submissions is depicted with a thick black line.

Trends in Tracking
Global optimization. The community has long used the paradigm of tracking-by-detection for MOT, i.e., dividing the task into two steps: (i) object detection and (ii) data association, or temporal linking between detections. The data association problem can be viewed as finding a set of disjoint paths in a graph, where nodes represent object detections and links hypothesize feasible associations. Detectors usually produce multiple spatially-adjacent detection hypotheses, which are typically pruned using heuristic non-maximum suppression (NMS).
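The greedy NMS heuristic mentioned above can be sketched in a few lines (an illustrative, unoptimized version; boxes are (x1, y1, x2, y2) tuples and the threshold value is an assumption):

```python
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop spatially-adjacent hypotheses that overlap a kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

This pruning is exactly the pre-processing step that the multi-cut methods discussed below seek to avoid, since a hard IoU threshold can delete correct detections of closely interacting pedestrians.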
Before 2015, the community mainly focused on finding strong, preferably globally optimal methods to solve the data association problem. The task of linking detections into a consistent set of trajectories was often cast as, e.g., a graphical model and solved with k-shortest paths, as a generalized maximum multi-clique problem [Dehghan et al., 2015], using a joint probabilistic data association filter (JPDA) [Rezatofighi et al., 2015], or as a variational Bayesian model in OVBT [Ban et al., 2016]. A number of tracking approaches investigate the efficacy of a Probability Hypothesis Density (PHD) filter-based tracking framework [Baisa, 2019a, Baisa and Wallace, 2019, Fu et al., 2018, Sanchez-Matilla et al., 2016, Song and Jeon, 2016, Song et al., 2019, Wojke and Paulus, 2016]. This family of methods estimates the states of multiple targets and the data association simultaneously, reaching 30.72% MOTA on MOT15 (GMPHD OGM), 41% and 40.42% on MOT16 (PHD GSDL and GM-PHD ReId, respectively), and 49.94% (GMPHD OGM) on MOT17.
Newer methods [Tang et al., 2015] bypassed the need to pre-process object detections with NMS. They proposed a multi-cut optimization framework, which finds the connected components in a graph that represent feasible solutions, clustering all detections that correspond to the same target. This family of methods (JMC [Tang et al., 2016], LMP [Tang et al., 2017], NLLMPA [Levinkov et al., 2017], JointMC [Keuper et al., 2018]) thus avoids the NMS pre-processing step.

Motion models. A lot of attention has also been given to motion models, used as additional association affinity cues, e.g., SMOT [Dicle et al., 2013], CEM [Milan et al., 2014], TBD [Geiger et al., 2014], ELP [McLaughlin et al., 2015] and MotiCon [Leal-Taixé et al., 2014]. The pairwise costs for matching two detections were based on either simple distances or simple appearance models, such as color histograms. These methods achieve around 38% MOTA on MOT16 (see Tab. 2) and 25% on MOT15 (see Tab. 1).
Hand-crafted affinity measures. After that, the attention shifted towards building robust pairwise similarity costs, mostly based on strong appearance cues or a combination of geometric and appearance cues. This shift is clearly reflected in an improvement in tracker performance and in the ability of trackers to handle more complex scenarios. For example, LINF1 [Fagot-Bouquet et al., 2016] uses sparse appearance models, and oICF [Kieritz et al., 2016] uses appearance models based on integral channel features. Top-performing methods of this class incorporate long-term interest point trajectories, e.g., NOMT [Choi, 2015], and, more recently, learned models for sparse feature matching, e.g., JMC [Tang et al., 2016] and JointMC [Keuper et al., 2018].

LP SSVM [Wang and Fowlkes, 2016] demonstrates a significant performance boost by learning the parameters of linear cost association functions within a network flow tracking framework, especially when compared to methods using a similar optimization framework but hand-crafted association cues, e.g., Leal-Taixé et al. [2014]. The parameters are learned using a structured SVM [Taskar et al., 2003]. MDP [Xiang et al., 2015] goes one step further and proposes to learn track management policies (birth/death/association) by modeling the lifetime of a target as a Markov decision process.

In parallel, methods started leveraging the representational power of deep learning, initially by utilizing transfer learning. MHT DAM [Kim et al., 2015] learns to adapt appearance models online using multi-output regularized least squares. Instead of weak appearance features, such as color histograms, they extract base features for each object detection using a pre-trained convolutional neural network. With the combination of the powerful MHT tracking framework [Reid, 1979] and online-adapted features used for data association, this method surpasses MDP and attains over 32% MOTA on MOT15 and 45% MOTA on MOT16.
Alternatively, JMC [Tang et al., 2016] and JointMC [Keuper et al., 2018] use a pre-learned deep matching model to improve the pairwise affinity measures. All aforementioned methods leverage pre-trained models.
Learning appearance models. The next clearly emerging trend goes in the direction of learning appearance models for data association in an end-to-end fashion directly on the target (i.e., MOT15, MOT16, MOT17) datasets. SiameseCNN [Leal-Taixé et al., 2016] trains a siamese convolutional neural network to learn spatio-temporal embeddings based on object appearance and estimated optical flow using a contrastive loss [Hadsell et al., 2006]. The learned embeddings are then combined with contextual cues for robust data association. This method uses a linear-programming-based optimization framework [Zhang et al., 2008] similar to that of LP SSVM [Wang and Fowlkes, 2016]; however, it surpasses it significantly performance-wise, reaching 29% MOTA on MOT15. This demonstrates the efficacy of fine-tuning appearance models directly on the target dataset and of utilizing convolutional neural networks. This approach is taken a step further by QuadMOT [Son et al., 2017], which similarly learns spatio-temporal embeddings of object detections. However, they train their siamese network using a quadruplet loss [Chen et al., 2017b] and learn to place embedding vectors of temporally adjacent detection instances closer in the embedding space. These methods reach 33.42% MOTA on MOT15 and 41.1% on MOT16.
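The contrastive loss [Hadsell et al., 2006] that underlies this line of work can be written down in a few lines; the sketch below is only an illustration of the loss itself, not of the SiameseCNN training code (embedding values and margin are ours):

```python
import numpy as np

def contrastive_loss(e1, e2, same_identity, margin=1.0):
    """Contrastive loss on a pair of embeddings: pull detections of the
    same identity together, push different identities at least `margin`
    apart in the embedding space."""
    d = np.linalg.norm(e1 - e2)  # Euclidean distance between embeddings
    if same_identity:
        return 0.5 * d ** 2                       # penalize any separation
    return 0.5 * max(0.0, margin - d) ** 2        # penalize only if too close
```

Pairs of detections belonging to the same track supply the positive term, while detections of different pedestrians supply the margin-based negative term; the pairwise distance then serves directly as an association cost.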
The learning process, in this case, is supervised. Different from that, HCC [Ma et al., 2018b] learns appearance models in an unsupervised manner. To this end, they train their method using object trajectories obtained from the test set with an offline correlation-clustering-based tracking framework [Levinkov et al., 2017]. TO [Manen et al., 2016], on the other hand, proposes to mine detection pairs over consecutive frames using single-object trackers in order to learn affinity measures.

Legend for the tracker-overview tables: App. - appearance model; OA - online target appearance adaptation; TR - target regression; ON - online method; (L) - learned. Components: MC - motion compensation module; OF - optical flow; Re-id - learned re-identification module; HoG - histogram of oriented gradients; NCC - normalized cross-correlation; IoU - intersection over union. Association: GMMCP - generalized maximum multi-clique problem; MCF - min-cost flow formulation [Zhang et al., 2008]; LP - linear programming; MHT - multi-hypothesis tracking [Reid, 1979]; MWIS - maximum-weight independent set problem; CRF - conditional random field formulation.

AMIR [Sadeghian et al., 2017] leverages recurrent neural networks in order to encode appearance, motion, and pedestrian interactions, and learns to combine these sources of information. STRN [Xu et al., 2019] proposes to leverage relational neural networks to learn to combine association cues, such as appearance, motion, and geometry. RAR [Fang et al., 2018] proposes recurrent auto-regressive networks for learning a generative appearance and motion model for data association. These methods achieve 37.57% MOTA on MOT15 and 47.17% on MOT16.
Fine-grained detection. A number of methods employ additional fine-grained detectors and incorporate their outputs into affinity measures, e.g., a head detector in the case of FWT [Henschel et al., 2018], or body-joint detectors in JBNOT [Henschel et al., 2019], which are shown to help significantly with occlusions. The latter attains 52.63% MOTA on MOT17, which makes it the second-highest-scoring method published in 2019.
Tracking-by-regression. Several methods leverage ensembles of (trainable) single-object trackers (SOTs), used to regress tracking targets from the detected objects, in combination with simple track management (birth/death) strategies. We refer to this family of models as MOT-by-SOT or tracking-by-regression. We note that this paradigm departs from the traditional view of multi-object tracking as a generalized (multi-dimensional) assignment problem, i.e., the problem of grouping object detections into a discrete set of tracks. Instead, methods based on target regression bring the focus back to target state estimation. We believe the reason for the success of these methods is two-fold: (i) the rapid progress in learning-based SOT [Held et al., 2016], and (ii) their ability to exploit image evidence that is not covered by the given detection bounding boxes.

Xiang et al. [2020] use the MHT framework [Reid, 1979], re-evaluating appearance/motion models based on progressively merged tracklets. This approach is one of the top performers on MOT17, achieving 54.87% MOTA.
In the spirit of combining optimization-based methods with learning, Zhang et al. [2020] revisit CRF-based tracking models and learn unary and pairwise potential functions in an end-to-end manner. On MOT16, this method attains a MOTA of 50.31%.
We also observe a trend towards learning to perform MOT end-to-end. To the best of our knowledge, the first method attempting this is RNN LSTM [Milan et al., 2017], which uses recurrent neural networks (RNNs) to jointly learn motion affinity costs and to perform bipartite detection association. FAMNet [Chu and Ling, 2019] uses a single network to extract appearance features from images, learn association affinities, and estimate multi-dimensional assignments of detections into object tracks. The multi-dimensional assignment is performed via a differentiable network layer that computes a rank-1 estimation of the assignment tensor, which allows for back-propagation of the gradient. Learning is performed with respect to a binary cross-entropy loss between the predicted assignments and the ground truth.
All aforementioned methods have one thing in common: they optimize network parameters with respect to proxy losses that do not directly reflect tracking quality, most commonly measured by the CLEAR-MOT evaluation measures [Stiefelhagen et al., 2006]. To evaluate MOTA, the assignment between track predictions and the ground truth needs to be established; this is usually performed using the Hungarian algorithm [Kuhn, 1955], which contains non-differentiable operations. To address this discrepancy, DeepMOT [Xu et al., 2020] proposes the missing link: a differentiable matching layer that allows expressing soft, differentiable variants of MOTA and MOTP.
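DeepMOT itself trains a deep Hungarian network for this purpose; as a simpler illustration of how a matching step can be relaxed into differentiable operations, the sketch below uses Sinkhorn normalization, a different technique than the paper's (temperature and iteration count are our assumptions):

```python
import numpy as np

def soft_assignment(cost, n_iters=50, tau=0.1):
    """Differentiable relaxation of bipartite matching: alternating
    row/column normalisation of exp(-cost/tau) yields a soft, nearly
    doubly-stochastic assignment matrix instead of hard 0/1 matches."""
    P = np.exp(-cost / tau)  # low cost -> high initial affinity
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalise rows
        P = P / P.sum(axis=0, keepdims=True)  # normalise columns
    return P
```

Because every operation is smooth, a loss defined on the soft assignment (e.g., a soft MOTA surrogate) can be back-propagated through the matching, which is exactly what the hard Hungarian algorithm forbids.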
In general, the major trends we observe in the past years all leverage the representational power of deep learning for learning association affinities, learning to adapt appearance models online [Chu et al., 2019, Kim et al., 2018, Zhu et al., 2018], and learning to regress tracking targets [Bergmann et al., 2019, Chu et al., 2019, Zhu et al., 2018]. Figure 7 visualizes the promise of deep learning for tracking by plotting the performance of submitted models over time and by type.
The main common components of top-performing methods are (i) learned single-target regressors (single-object trackers), such as [Held et al., 2016], and (ii) re-identification modules [Bergmann et al., 2019]. These methods fall short in bridging large occlusion gaps. To this end, we identify Graph Neural Network-based methods [Brasó and Leal-Taixé, 2020] as a promising direction for future research. We also observe the emergence of methods attempting to learn to track objects in an end-to-end fashion instead of training individual modules of tracking pipelines [Chu and Ling, 2019, Milan et al., 2017]. We believe this is one of the key aspects to be addressed to further improve performance, and expect to see more approaches leveraging deep learning for that purpose.

Runtime Analysis
Different methods require a varying amount of computational resources to track multiple targets. Some methods may require large amounts of memory, while others need to be executed on a GPU. For our purpose, we ask each benchmark participant to provide the number of seconds required to produce the results on the entire dataset, regardless of the computational resources used.

Fig. 9: Detailed error analysis. The plots show the error ratios for trackers w.r.t. the detector (taken at the lowest confidence threshold) for two types of errors: false positives (FP) and false negatives (FN). Values above 1 indicate a higher error count for trackers than for detectors. Note that most trackers concentrate on removing false alarms provided by the detector at the cost of eliminating a few true positives, as indicated by the higher FN count.
It is important to note that the resulting numbers are therefore only indicative of each approach and are not immediately comparable to one another. Figure 8 shows the relationship between each submission's performance, measured by MOTA, and its efficiency in terms of frames per second, averaged over the entire dataset. There are two observations worth pointing out. First, the majority of methods are still far below real-time performance, which is assumed to be 25 Hz. Second, the average processing rate of ∼5 Hz does not differ much between the different sequences, which suggests that the different object densities (9 ped./fr. in MOT15 and 26 ped./fr. in MOT16/MOT17) do not have a large impact on the speed of the models. One explanation is that novel learning methods have an efficient forward computation, which does not vary much with the number of objects. This is in clear contrast to classic methods that relied on solving complex optimization problems at inference time, whose computation increased significantly with the pedestrian density. However, this conclusion has to be taken with caution, because the runtimes are reported by the users on a trust basis and cannot be verified by us.

Error Analysis
As we know, different applications have different requirements: e.g., for surveillance it is critical to have few false negatives, while for behavior analysis, a false positive can mean computing wrong motion statistics. In this section, we take a closer look at the most common errors made by the tracking approaches. This simple analysis can guide researchers in choosing the best method for their task. In Fig. 9, we show the number of false negatives (FN, blue) and false positives (FP, red) produced by the trackers on average, relative to the number of FN/FP of the object detector used as input. A ratio below 1 indicates that the trackers have improved over the detector in terms of FN/FP. We show the performance of the top 15 trackers, averaged over sequences, ordered by MOTA from left to right in decreasing order.
We observe that all top-performing trackers reduce the number of FPs and FNs compared to the public detections. While the trackers reduce FPs significantly, FNs are decreased only slightly. Moreover, we can see a direct correlation between FNs and tracker performance, especially for the MOT16 and MOT17 datasets, since the number of FNs is much larger than the number of FPs. The question is then: why are methods not focusing on reducing FNs? It turns out that "filling the gaps" between detections, which is commonly thought to be what trackers should do, is not an easy task.
It is not until 2018 that we see methods drastically decreasing the number of FNs and, as a consequence, MOTA performance leaps forward. As shown in Fig. 7, this is due to the appearance of learning-based tracking-by-regression methods [Bergmann et al., 2019, Brasó and Leal-Taixé, 2020, Chu et al., 2017, Zhu et al., 2018]. Such methods decrease the number of FNs the most by effectively using image evidence not covered by detection bounding boxes and regressing targets to areas where they are visible but missed by detectors. This brings us back to the common wisdom that trackers should be good at "filling the gaps" between detections.
Overall, it is clear that MOT17 still presents a challenge, both in terms of detection and of tracking, and significant further efforts will be required to bring performance to the next level. In particular, the next challenge future methods will need to tackle is bridging large occlusion gaps, which cannot be naturally resolved by methods performing target regression, as these only work as long as the target is (at least partially) visible.

Conclusion and Future Work
We have introduced MOTChallenge, a standardized benchmark for a fair evaluation of single-camera multi-person tracking methods. We presented its first two data releases with about 35,000 frames of footage and almost 700,000 annotated pedestrians. Accurate annotations were carried out following a strict protocol, and extra classes such as vehicles, sitting people, reflections, or distractors were also annotated in the second release to provide further information to the community.
We have further analyzed the performance of 101 trackers: 73 on MOT15, 74 on MOT16, and 57 on MOT17, obtaining several insights. In the past, methods focusing on global optimization for data association were at the center of vision-based MOT. Since then, we observed that large improvements were made by hand-crafting strong affinity measures and by leveraging deep learning to learn appearance models for better data association. More recent methods moved towards directly regressing bounding boxes and learning to adapt target appearance models online. As the most promising recent trend, holding large potential for future research, we identified methods going in the direction of learning to track objects in an end-to-end fashion, combining optimization with learning.
We believe our Multiple Object Tracking Benchmark and the presented systematic analysis of existing tracking algorithms will help identify the strengths and weaknesses of the current state of the art and shed some light into promising future research directions.

A Benchmark Submission
Our benchmark consists of the database and evaluation server on one hand, and the website as the user interface on the other. It is open to everyone who respects the submission policies (see next section). Before participating, every user is required to create an account, providing an institutional rather than a generic e-mail address. After registering, the user can create a new tracker with a unique name and enter all additional details. It is mandatory to indicate:
- the full name and a brief description of the method,
- a reference to the publication of the method, if already existing,
- whether the method operates online or on a batch of frames and whether the source code is publicly available, and
- whether only the provided or also external training and detection data were used.
After entering all details of a new tracker, it is possible to assign open challenges to this tracker and submit results to the different benchmarks. To participate in a challenge, the user has to provide the following information for each challenge they want to submit to:
- the name of the challenge in which the tracker will be participating,
- a reference to the publication of the method, if already existing,
- the total runtime in seconds for computing the results on the test sequences and the hardware used, and
- whether only the provided data was used for training, or data from other sources was also involved.
The user can then submit the results to the challenge in the format described in Sec. B.1. The tracking results are automatically evaluated and appear on the user's profile. The results are not automatically displayed in the public ranking table. The user can decide at any point in time to make the results public. Results can be published anonymously, e.g., to enable a blind review process for a corresponding paper. In this case, we ask to provide the venue and the paper ID or a similar unique reference. We request that a proper reference to the method's description is added upon acceptance of the paper. Anonymous entries are hidden from the benchmark after six months of inactivity.
The trackers and challenge meta information such as description, project page, runtime, or hardware can be edited at any time. Visual results of all public submissions, as well as annotations and detections, can be viewed and downloaded on the individual result pages of the corresponding tracker.

A.1 Submission Policy
The main goal of this benchmark is to provide a platform that allows for objective performance comparison of multiple target tracking approaches on real-world data. Therefore, we introduce a few simple guidelines that must be followed by all participants.
Training. Ground truth is only provided for the training sequences. It is the participant's own responsibility to find the best setting using only the training data. The use of additional training data must be indicated during submission and will be visible in the public ranking table. The use of ground truth labels on the test data is strictly forbidden. This or any other misuse of the benchmark will lead to the deletion of the participant's account and their results.
Detections. We also provide a unique set of detections (see Sec. 4.2) for each sequence. We expect all tracking-by-detection algorithms to use the given detections. In case the user wants to present results with another set of detections or is not using detections at all, this should be clearly stated during submission and will also be displayed in the results table.
Submission frequency. Generally, we expect a single submission per method and benchmark. If for any reason the user needs to re-compute and re-submit the results (e.g., due to a bug discovered in the implementation), they must wait for a period of 72 hours after the last submission before submitting to the same challenge with any of their trackers. This policy should discourage using the benchmark server for training and parameter tuning on the test data. The number of submissions is counted and displayed for each method. We allow a maximum of 4 submissions per tracker and challenge. A user may create several tracker instances for different tracking models, but can only create a new tracker every 30 days. Under no circumstances may anyone create a second account and attempt to re-submit in order to bypass the waiting period. Such behavior will lead to the deletion of the accounts and the exclusion of the user from the benchmark.

A.2 Challenges and Workshops
We have two modalities for submission: the general open-ended challenges and the special challenges. The main challenges, 2D MOT 2015, 3D MOT 2015, MOT16, and MOT17, are always open for submission and are nowadays the standard evaluation platform for multi-target tracking methods submitted to computer vision conferences such as CVPR, ICCV, or ECCV.
Special challenges are similar in spirit to the widely known PASCAL VOC series [Everingham et al., 2015] or the ImageNet competitions [Russakovsky et al., 2015]. Each special challenge is linked to a workshop. The first edition of our series was the WACV 2015 Challenge, which consisted of six outdoor sequences with both moving and static cameras, followed by a second edition held in conjunction with ECCV 2016, in which we evaluated methods on the new MOT16 sequences. The MOT17 sequences were presented in the Joint Workshop on Tracking and Surveillance, held in conjunction with the Performance Evaluation of Tracking and Surveillance (PETS) [Ellis and Ferryman, 2010, Ferryman and Shahrokni, 2009] benchmark at the Conference on Computer Vision and Pattern Recognition (CVPR) in 2017. The results and winning methods were presented during the respective workshops. Submission to these challenges is open only for a short period of time, i.e., there is a fixed submission deadline for all participants. Each method must have an accompanying paper presented at the workshop. The results of the methods are kept hidden until the date of the workshop itself, when the winning method is revealed and a prize is awarded.

B MOT 15
We have compiled a total of 22 sequences, of which we use half for training and half for testing. The annotations of the testing sequences are not released in order to avoid (over)fitting of the methods to the specific sequences. Nonetheless, the test data contains over 10 minutes of footage and 61,440 annotated bounding boxes; it is therefore hard for researchers to over-tune their algorithms on such a large amount of data. This is one of the major strengths of the benchmark. We classify the sequences according to:
- Moving or static camera: the camera can be held by a person, placed on a stroller [Ess et al., 2008] or on a car [Geiger et al., 2012], or fixed in the scene.
- Viewpoint: the camera can overlook the scene from a high position, a medium position (at pedestrian height), or a low position.
- Weather: the illumination conditions in which the sequence was taken. Sequences with strong shadows and saturated parts of the image make tracking challenging, while night sequences contain a lot of motion blur, which is often a problem for detectors. Indoor sequences contain many reflections, while sequences classified as normal do not contain heavy illumination artifacts that could affect tracking.
We divide the sequences into training and testing to have a balanced distribution, as shown in Figure 10.

B.1 Data format
Each line in an annotation or detection file encodes one bounding box in 2D image coordinates. The position is indicated by the top-left corner as well as the width and height of the bounding box. This is followed by a single number, which in the case of detections denotes their confidence score. The last three numbers indicate the 3D position, in real-world coordinates, of the pedestrian. This position represents the feet of the person. In the case of 2D tracking, these values are ignored and can be left at −1.
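Reading one line of this comma-separated format can be sketched as follows (a minimal parser assuming the ten-column layout described above; the field names and the example line are ours):

```python
def parse_mot_line(line: str) -> dict:
    """Parse one CSV line of the 2D format:
    frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z."""
    f = line.strip().split(",")
    return {
        "frame": int(f[0]),
        "id": int(f[1]),                 # -1 in detection files (no ID yet)
        "bb_left": float(f[2]),          # top-left corner of the box
        "bb_top": float(f[3]),
        "bb_width": float(f[4]),
        "bb_height": float(f[5]),
        "conf": float(f[6]),             # detection confidence / validity flag
        "xyz": (float(f[7]), float(f[8]), float(f[9])),  # -1 if unavailable
    }

# Hypothetical detection entry (values are illustrative only):
det = parse_mot_line("1,-1,794.2,247.5,71.2,174.8,0.88,-1,-1,-1")
```

For 2D tracking, only the first seven fields matter; the trailing 3D coordinates stay at −1.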

C.1 Annotation rules
We follow a set of rules to annotate every moving person or vehicle within each sequence with a bounding box as accurately as possible. In this section, we define a clear protocol that was obeyed throughout the entire dataset annotations of MOT16 and MOT17 to guarantee consistency.

C.1.1 Target class
In this benchmark, we are interested in tracking moving objects in videos; in particular, we are interested in evaluating multiple people tracking algorithms. Therefore, people will be the center of attention of our annotations. We divide the pertinent classes into three categories: (i) moving or standing pedestrians; (ii) people that are not in an upright position or artificial representations of humans; and (iii) vehicles and occluders.
In the first group, we annotate all moving or standing (upright) pedestrians that appear in the field of view and can be determined as such by the viewer. People on bikes or skateboards will also be annotated in this category (and are typically found by modern pedestrian detectors). Furthermore, if a person briefly bends over or squats, e.g. to pick something up or to talk to a child, they shall remain in the standard pedestrian class. The algorithms that submit to our benchmark are expected to track these targets.
In the second group, we include all people-like objects whose exact classification is ambiguous and can vary depending on the viewer, the application at hand, or other factors. We annotate all static people that are not in an upright position, e.g. sitting, lying down. We also include in this category any artificial representation of a human that might fire a detection response, such as mannequins, pictures, or reflections. People behind glass should also be marked as distractors. The idea is to use these annotations in the evaluation such that an algorithm is neither penalized nor rewarded for tracking, e.g., a sitting person or a reflection.
In the third group, we annotate all moving vehicles such as cars, bicycles, motorbikes, and non-motorized vehicles (e.g., strollers), as well as other potential occluders. These annotations do not play any role in the evaluation, but are provided to the users both for training purposes and for computing the level of occlusion of pedestrians. Static vehicles (parked cars, bicycles) are not annotated as long as they do not occlude any pedestrians. The rules are summarized in Tab. 7, and in Fig. 11 we present a diagram of the classes of objects we annotate, as well as a sample frame with annotations.

Fig. 11: Left: An overview of annotated classes. The classes in orange are the central ones to evaluate on. The red classes include ambiguous cases, such that neither recovering nor missing them is penalized in the evaluation. The classes in green are annotated for training purposes and for computing the occlusion level of all pedestrians. Right: An exemplar of an annotated frame. Note how partially cropped objects are also marked outside of the frame. Also note that the bounding box encloses the entire person but not, e.g., the white bag of Pedestrian 1 (bottom left).

Tab. 7 (annotation rules, summarized):
When?
- Start as early as possible.
- End as late as possible.
- Keep the ID as long as the person is inside the field of view and its path can be determined unambiguously.
How?
- The bounding box should contain all pixels belonging to that person and at the same time be as tight as possible.
Occlusions
- Always annotate during occlusions if the position can be determined unambiguously.
- If the occlusion is very long and it is not possible to determine the path of the object using simple reasoning (e.g., a constant velocity assumption), the object is assigned a new ID once it reappears.

C.1.2 Bounding box alignment
The bounding box is aligned with the object's extent as accurately as possible. It should contain all object pixels belonging to that instance and at the same time be as tight as possible. This implies that a walking side-view pedestrian will typically have a box whose width varies periodically with the stride, while a front view or a standing person will maintain a more constant aspect ratio over time. If the person is partially occluded, the extent is estimated based on other available information such as expected size, shadows, reflections, previous and future frames and other cues. If a person is cropped by the image border, the box is estimated beyond the original frame to represent the entire person and to estimate the level of cropping. If an occluding object cannot be accurately enclosed in one box (e.g. a tree with branches or an escalator may require a large bounding box where most of the area does not belong to the actual object), then several boxes may be used to better approximate the extent of that object.
Persons on vehicles are only annotated separately from the vehicle when clearly visible. For example, children inside strollers or people inside cars are not annotated, while motorcyclists or bikers are.

C.1.3 Start and end of trajectories
The box (track) appears as soon as the person's location and extent can be determined precisely. This is typically the case when ≈ 10% of the person becomes visible. Similarly, the track ends when it is no longer possible to pinpoint the exact location. In other words, the annotation starts as early and ends as late as possible without sacrificing accuracy. The box coordinates may exceed the visible area. A person leaving the field of view and re-appearing at a later point is assigned a new ID.

C.1.4 Minimal size
Although the evaluation will only take into account pedestrians that have a minimum height in pixels, annotations contain all objects of all sizes as long as they are distinguishable by the annotator. In other words, all targets are annotated independently of their sizes in the image.

C.1.5 Occlusions
There is no need to explicitly annotate the level of occlusion. This value is computed automatically from the annotations. We leverage the assumption that, for two or more overlapping bounding boxes, the object with the lowest bounding box y-value is closest to the camera and therefore occludes the other objects behind it. Each target is fully annotated through occlusions as long as its extent and location can be determined accurately. If a target becomes completely occluded in the middle of a sequence and does not become visible later, the track is terminated (marked as 'outside of view'). If a target reappears after a prolonged period such that its location is ambiguous during the occlusion, it is assigned a new ID.
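The automatic occlusion computation can be sketched as follows. This is a simplified, hypothetical illustration (the function name and box convention are ours, not from the benchmark code): boxes are (x, y, w, h) in image coordinates with y growing downward, "lowest" is read as the box whose bottom edge has the larger y-value, and overlapping occluders may be double-counted in this sketch.

```python
def occlusion_level(target, others):
    """Estimate the fraction of `target` covered by occluding boxes.

    Heuristic from the annotation rules: among overlapping boxes, the one
    whose bottom edge is lower in the image (larger y-value) is assumed
    closer to the camera and therefore occludes the box behind it.
    """
    tx, ty, tw, th = target
    covered = 0.0
    for ox, oy, ow, oh in others:
        # Only boxes assumed "in front" of the target can occlude it.
        if oy + oh <= ty + th:
            continue
        # Intersection of the occluder with the target box.
        ix = max(0.0, min(tx + tw, ox + ow) - max(tx, ox))
        iy = max(0.0, min(ty + th, oy + oh) - max(ty, oy))
        covered += ix * iy
    return min(1.0, covered / (tw * th))
```

For example, a box whose lower-right quarter is covered by a nearer pedestrian would receive an occlusion level of 0.25 under this sketch.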

C.1.6 Sanity check
Upon annotating all sequences, a "sanity check" is carried out to ensure that no relevant entities are missed. To that end, we run a pedestrian detector on all videos and add all high-confidence detections that correspond to either humans or distractors to the annotation list.

D.1 Data format

The first number indicates the frame in which the object appears, while the second number identifies that object as belonging to a trajectory by assigning a unique ID (set to −1 in a detection file, as no ID is assigned yet). Each object can be assigned to only one trajectory. The next four numbers indicate the position of the bounding box of the pedestrian in 2D image coordinates, given by the top-left corner as well as the width and height of the box. This is followed by a single number, which in the case of detections denotes their confidence score. The last two numbers of detection files are ignored (set to −1).
An example of such a 2D detection file is:

1, -1, 794.2, 47.5, 71.2, 174.8, 67.5, -1, -1
1, -1, 164.1, 19.6, 66.5, 163.2, 29.4, -1, -1
1, -1, 875.4, 39.9, 25.3, 145.0, 19.6, -1, -1
2, -1, 781.7, 25.1, 69.2, 170.2, 58.1, -1, -1

For the ground truth and result files, the 7th value (confidence score) acts as a flag indicating whether the entry is to be considered. A value of 0 means that this particular instance is ignored in the evaluation, while a value of 1 marks it as active. The 8th number indicates the type of object annotated, following the convention of Tab.

Fig. 13: Four cases illustrating tracker-to-target assignments. (a) An ID switch occurs when the mapping switches from the previously assigned red track to the blue one. (b) A track fragmentation is counted in frame 3 because the target is tracked in frames 1–2, then interrupts, and then reacquires its 'tracked' status at a later point. A new (blue) track hypothesis also causes an ID switch at this point. (c) Although the tracking results are reasonably good, an optimal single-frame assignment in frame 1 is propagated through the sequence, causing 5 missed targets (FN) and 4 false positives (FP). Note that no fragmentations are counted in frames 3 and 6 because tracking of those targets is not resumed at a later point. (d) A degenerate case illustrating that target re-identification is not handled correctly. An interrupted ground-truth trajectory will typically cause a fragmentation. Also note the less intuitive ID switch, which is counted because blue is the closest target in frame 5 that is not in conflict with the mapping in frame 4.

its matched hypothesis. False positives and false negatives are plotted as empty circles. See figure caption for more details.
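The comma-separated 9-value lines described above can be read with a small parser like the following sketch. The names (`Entry`, `cls`, `extra`, `parse_line`) are illustrative, not part of the benchmark's specification; the last field is simply carried along, since it is ignored (−1) in detection files.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    frame: int      # 1st value: frame in which the object appears
    track_id: int   # 2nd value: trajectory ID (-1 in detection files)
    x: float        # 3rd-6th values: top-left corner, width, height
    y: float
    w: float
    h: float
    conf: float     # 7th value: detection confidence, or 0/1 flag in GT/result files
    cls: int        # 8th value: object type in ground-truth files, -1 otherwise
    extra: float    # 9th value: ignored (-1) in detection files

def parse_line(line):
    """Parse one comma-separated line of a MOTChallenge-style file."""
    f, i, x, y, w, h, c, cl, v = (float(t) for t in line.split(","))
    return Entry(int(f), int(i), x, y, w, h, c, int(cl), v)
```

Applied to the first example line above, this yields frame 1, trajectory ID −1, a 71.2 × 174.8 box at (794.2, 47.5), and a confidence of 67.5.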
After determining true matches and establishing correspondences, it is possible to compute the metrics. We do so by concatenating all test sequences and evaluating the entire benchmark at once. This is in general more meaningful than averaging per-sequence figures because of the large variation in the number of targets per sequence.

D.2 Distance measure
The relationship between ground-truth objects and a tracker output is established using bounding boxes on the image plane. Similar to object detection [Everingham et al., 2015], the intersection over union (a.k.a. the Jaccard index) is usually employed as the similarity criterion, with the threshold t_d set to 0.5, i.e., 50%.
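The criterion above amounts to the following computation (a minimal sketch; the function names `iou` and `is_match` are ours, and boxes are assumed to be (x, y, w, h) tuples in image coordinates):

```python
def iou(a, b):
    """Intersection over union (Jaccard index) of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def is_match(gt_box, hyp_box, t_d=0.5):
    """A hypothesis counts as a true match if its IoU reaches t_d = 0.5."""
    return iou(gt_box, hyp_box) >= t_d
```

Note that two equal-size boxes overlapping by half their area have an IoU of only 1/3 and therefore do not match at t_d = 0.5, which is what makes the 50% threshold fairly strict.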

D.3 Target-like annotations
People are a common object class present in many scenes, but should we track all people in our benchmark? For example, should we track static people sitting on a bench? Or people on bicycles? How about people behind a glass? We define the target class of MOT16 and MOT17 as all upright people, standing or walking, that are reachable along the viewing ray without a physical obstacle. For instance, reflections or people behind a transparent wall or window are excluded. We also exclude from our target class people on bicycles (riders) or other vehicles.
For all these cases where the class is very similar to our target class (see Fig. 14), we adopt a strategy similar to [Mathias et al., 2014]. That is, a method is neither penalized nor rewarded for tracking or not tracking those similar classes. Since a detector is likely to fire in those cases, we do not want to penalize a tracker with a set of false positives for properly following that set of detections, e.g., of a person on a bicycle. Likewise, we do not want to penalize with false negatives a tracker that is based on motion cues and therefore does not track a sitting person.

Fig. 14: The annotations include different classes of objects similar to the target class, a pedestrian in our case. We consider these special classes (distractor, reflection, static person and person on vehicle) to be so similar to the target class that a tracker should neither be penalized nor rewarded for tracking them in the sequence.
To handle these special cases, we adapt the tracker-to-target assignment algorithm to perform the following steps:
1. At each frame, all bounding boxes of the result file are matched to the ground truth via the Hungarian algorithm.
2. All result boxes that overlap by more than the matching threshold (> 50%) with one of these special classes (distractor, static person, reflection, person on vehicle) are excluded from the evaluation.
3. During the final evaluation, only those boxes that are annotated as pedestrians are used.
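The steps above can be sketched per frame as follows. This is a simplified illustration, not the benchmark's implementation: a greedy best-overlap matching stands in for the Hungarian algorithm of step 1, boxes are (x, y, w, h) tuples, and the class labels in `SPECIAL` are placeholder strings of our choosing.

```python
def _iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Placeholder labels for the special, non-penalized classes.
SPECIAL = {"distractor", "static_person", "reflection", "person_on_vehicle"}

def filter_frame(result_boxes, gt_boxes, gt_classes, t=0.5):
    """Return the result boxes that remain after step 2.

    Each result box is greedily matched to its best-overlapping unused
    ground-truth box (stand-in for the Hungarian step); boxes matched to
    a special-class annotation are excluded from the evaluation.
    """
    kept, used = [], set()
    for rb in result_boxes:
        best, best_iou = None, t
        for i, gb in enumerate(gt_boxes):
            if i in used:
                continue
            o = _iou(rb, gb)
            if o > best_iou:  # "> 50%" overlap, per the matching threshold
                best, best_iou = i, o
        if best is not None:
            used.add(best)
            if gt_classes[best] in SPECIAL:
                continue  # excluded from the evaluation (step 2)
        kept.append(rb)
    return kept
```

Step 3 then restricts the remaining comparison to ground-truth boxes annotated as pedestrians, so a tracker is neither punished for following a distractor detection nor rewarded for it.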