BrackishMOT: The Brackish Multi-Object Tracking Dataset

Conference paper
Image Analysis (SCIA 2023)

Abstract

There exist no publicly available annotated underwater multi-object tracking (MOT) datasets captured in turbid environments. To remedy this, we propose the BrackishMOT dataset, which focuses on tracking schools of small fish, a notoriously difficult MOT task. BrackishMOT consists of 98 sequences captured in the wild. Alongside the novel dataset, we present baseline results obtained by training a state-of-the-art tracker. Additionally, we propose a framework for creating synthetic sequences to expand the dataset. The framework consists of animated fish models and realistic underwater environments. We analyse the effects of including synthetic data during training and show that a combination of real and synthetic underwater training data can enhance tracking performance. Links to code and data can be found at https://www.vap.aau.dk/brackishmot.
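Since the dataset and baseline trackers sit in the MOTChallenge ecosystem, a brief sketch of how such annotations are typically consumed may be useful. This is a minimal sketch assuming the annotations follow the standard MOTChallenge gt.txt convention (frame, id, x, y, w, h, conf, class, visibility); the directory layout and helper names below are illustrative assumptions, not part of the released toolchain.

```python
# Minimal sketch: parse MOTChallenge-style ground truth and pool real
# and synthetic sequences into one training set. Column order follows
# the standard gt.txt convention; all paths here are illustrative.
import csv
from pathlib import Path

def load_gt(gt_file):
    """Parse one gt.txt into a list of per-detection dicts."""
    rows = []
    with open(gt_file, newline="") as f:
        for frame, tid, x, y, w, h, conf, cls, vis in csv.reader(f):
            rows.append({
                "frame": int(frame), "id": int(tid),
                "bbox": (float(x), float(y), float(w), float(h)),
                "conf": float(conf), "class": int(cls),
                "visibility": float(vis),
            })
    return rows

def gather_sequences(root):
    """Collect (sequence_name, annotations) pairs under a dataset root."""
    return [(seq.name, load_gt(seq / "gt" / "gt.txt"))
            for seq in sorted(Path(root).iterdir()) if seq.is_dir()]

# Combining real and synthetic data, in the spirit of the paper's
# training experiments, is then a matter of pooling the sequences.
train_pool = (gather_sequences("BrackishMOT/train")
              + gather_sequences("BrackishMOT-synthetic/train"))
```

Training on the combined pool then amounts to pointing the tracker's data loader at train_pool rather than at the real sequences alone.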

Acknowledgements

This work has been funded by the Independent Research Fund Denmark under case number 9131-00128B.

Author information

Corresponding author

Correspondence to Malte Pedersen.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Pedersen, M., Lehotský, D., Nikolov, I., Moeslund, T.B. (2023). BrackishMOT: The Brackish Multi-Object Tracking Dataset. In: Gade, R., Felsberg, M., Kämäräinen, J.-K. (eds) Image Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13885. Springer, Cham. https://doi.org/10.1007/978-3-031-31435-3_2

  • DOI: https://doi.org/10.1007/978-3-031-31435-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31434-6

  • Online ISBN: 978-3-031-31435-3

  • eBook Packages: Computer Science, Computer Science (R0)
