
Automated, high-throughput image calibration for parallel-laser photogrammetry

Mammalian Biology



Parallel-laser photogrammetry is growing in popularity as a way to collect non-invasive body size data from wild mammals. Despite its many appeals, this method requires researchers to hand-measure (i) the pixel distance between the parallel laser spots (inter-laser distance) to produce a scale within the image, and (ii) the pixel distance between the study subject’s body landmarks (inter-landmark distance). This manual effort is time-consuming and introduces human error: a researcher measuring the same image twice will rarely return the same values both times (resulting in within-observer error), as is also the case when two researchers measure the same image (resulting in between-observer error). Here, we present two independent methods that automate the inter-laser distance measurement of parallel-laser photogrammetry images. One method uses machine learning and image processing techniques in Python, and the other uses image processing techniques in ImageJ. Both of these methods reduce labor and increase precision without sacrificing accuracy. We first introduce the workflow of the two methods. Then, using two parallel-laser datasets of wild mountain gorilla and wild savannah baboon images, we validate the precision of these two automated methods relative to manual measurements and to each other. We also estimate the reduction of variation in final body size estimates in centimeters when adopting these automated methods, as these methods have no human error. Finally, we highlight the strengths of each method, suggest best practices for adopting either of them, and propose future directions for the automation of parallel-laser photogrammetry data.
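The calibration logic the abstract describes can be sketched in a few lines: the known real-world separation of the laser beams divided by the inter-laser pixel distance gives a cm-per-pixel scale, which converts any inter-landmark pixel measurement to centimeters. The function name and the 4 cm beam separation below are illustrative assumptions for the example, not values taken from this study.

```python
def body_size_cm(inter_landmark_px, inter_laser_px, beam_separation_cm):
    """Convert a pixel measurement to cm using the parallel-laser scale.

    The known real-world separation of the laser beams divided by their
    pixel distance in the image gives a cm-per-pixel scale factor.
    """
    cm_per_pixel = beam_separation_cm / inter_laser_px
    return inter_landmark_px * cm_per_pixel


# Lasers 4 cm apart that appear 80 px apart give a 0.05 cm/px scale,
# so a 400 px inter-landmark distance corresponds to 20 cm.
print(body_size_cm(400, 80, 4.0))  # → 20.0
```

Because every downstream body size estimate is divided by the inter-laser distance, any measurement error in that one quantity propagates to all measurements from the image, which is why automating it pays off.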


Availability of data and material

Data are available on Dryad data repository at and on GitHub at

Code availability

Code to run the automated methods is available at

Change history

  • 01 October 2022

    Supplementary Information was updated.




This paper was a collaboration between researchers associated with the Bwindi Gorilla Research Project and the Amboseli Baboon Research Project. We thank the Uganda Wildlife Authority and the Uganda National Council for Science and Technology for permission to conduct research on mountain gorillas in Uganda and for support of this research. We are greatly indebted to the many field assistants who have contributed to this work, and to the Institute for Tropical Forest Conservation for providing logistical support. In particular, we thank M. Akantorana, D. Musinguzi, J. Mutale, and B. Turyananuka for their long-term contributions to the Bwindi Gorilla Research Project. We also thank Edward Wright for help with the development and training needed for the data collection in Bwindi and Chen Zeng for expertise in developing the method. We thank Anna Lee for collecting baboon images, Elise Malone for measuring baboon images, and Emma Helmich for assisting with the development of the ImageJ method. Particular thanks go to the Amboseli Baboon Research Project directors (J. Altmann, S.C. Alberts, E.A. Archie, J. Tung), and the long-term field team (R.S. Mututua, S. Sayialel, J.K. Warutere, I.L. Siodi). For a complete set of acknowledgments of funding sources, logistical assistance, and data collection and management for the long-term baboon research, please visit


The Bwindi Gorilla Research Project gratefully acknowledges the following funding support for this work: The Wenner-Gren Foundation (ICRG 123), National Science Foundation (NSF BCS 1753651), The George Washington University, The Max Planck Society, United States Fish and Wildlife Service Great Ape Fund, and Berggorilla & Regenwald Direkthilfe. The Amboseli Baboon Research Project gratefully acknowledges the following specific support for this project: the National Science Foundation via a Graduate Research Fellowship to EJL (DGE1644868), the National Institute on Aging (R01AG053308), The Leakey Foundation, the Animal Behavior Society, the Society for the Study of Evolution, and Duke University.

Author information

Authors and Affiliations



Conceptualization: EJL, MER, and SCM; method development skimage: JLR, HY, MER, JG, AC, and SCM; method development ImageJ: EJL, RR, and EEM; data analysis: JLR and EJL; project administration: SCA, MMR, and SCM; writing—original draft: JLR and EJL; writing—review and editing: JLR, EJL, EEM, JG, MMR, SCA, MER, and SCM.

Corresponding author

Correspondence to Shannon C. McFarlin.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare.

Ethics approval

This research was approved by the IACUC at Duke University and adhered to all the laws and regulations governing research in Kenya and Uganda.

Consent to participate

Not applicable.

Consent for publishing

The authors gave consent to publish this paper.

Additional information

Handling editors: Leszek Karczmarski and Scott Y.S. Chui.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is a contribution to the special issue on “Individual Identification and Photographic Techniques in Mammalian Ecological and Behavioural Research – Part 1: Methods and Concepts” — Editors: Leszek Karczmarski, Stephen C. Y. Chan, Daniel I. Rubenstein, Scott Y.S. Chui and Elissa Z. Cameron.

Electronic supplementary material



Skimage method without machine learning

As noted in ‘Schematics and Workflow for the skimage and ImageJ Methods’, the skimage method can also be performed without the pre-processing step of machine learning to mask the gorilla.

Here we compare the results of the skimage method with machine learning (as presented in the manuscript, Table A1: row 1) to the skimage method using only image processing steps on the full image, not using the machine learning masks, ‘skimage without machine learning’ (Table A1: row 2). This was performed on the original dataset of 100 gorilla images. The image processing takes longer without the machine learning pre-processing step (Table A2). The skimage without machine learning method returned measurements for 100% of images; however, in one image the laser spots were mis-identified (not included in Table A1). Using skimage with and without machine learning produces very similar results on the gorilla dataset (Table A2). Thus, using the skimage method without machine learning may be more suitable for researchers without access to the processing power needed to run the machine learning step, or who are unfamiliar with machine-learning algorithms. However, the image processing code was developed for use with photos of dark animals against green backgrounds. Thus, for researchers who are using images that are distinctly different in color or animal species, implementing the machine learning step on their photographs will almost certainly yield better-quality masking than image processing approaches based on color alone.
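The kind of color-based processing this appendix describes, finding the bright laser spots by thresholding and taking the distance between their centroids, can be sketched as follows. This is a minimal illustration assuming red laser spots and scikit-image's `measure` module; the function name and threshold values are assumptions for the example, not the published pipeline.

```python
import numpy as np
from skimage import measure


def inter_laser_distance(rgb):
    """Return the pixel distance between the two brightest red spots in an RGB image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Laser spots are near-saturated red: strong red channel, weak green/blue.
    # (Illustrative thresholds; real images need tuning.)
    mask = (r > 200) & (g < 120) & (b < 120)
    # Label connected components and keep the two largest blobs as the spots.
    labels = measure.label(mask)
    regions = sorted(measure.regionprops(labels), key=lambda p: p.area, reverse=True)
    if len(regions) < 2:
        raise ValueError("Could not find two laser spots")
    (y1, x1), (y2, x2) = regions[0].centroid, regions[1].centroid
    return float(np.hypot(x2 - x1, y2 - y1))
```

A color-only rule like this is exactly why the appendix warns about transferring the method: on images whose background or subject shares the lasers' color range, the thresholding mis-fires, whereas a learned mask restricts the search to the animal first.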

Table A1 Percent differences [pixel differences] among inter-laser distances measured manually vs. the skimage method with machine learning and manually vs. the skimage without machine learning method on the gorilla dataset; diff = difference
Table A2 Computing time (minutes) required for the skimage, skimage without machine learning, and ImageJ methods to process 100 images. Images that must be hand-cropped for ImageJ require approximately 5 additional seconds of manual effort each. Note that the gorilla images have a longer processing time because they contain more pixels per image

Table A3 compares the three methods as in Table 1, but it reports the absolute differences instead of the percent differences. Table A4 compares the three methods as in Table 1, but it provides percent differences as a function of inter-beam distance.

Table A3 Absolute percent differences [and absolute pixel differences] among inter-laser distances measured manually vs. the ImageJ method, manually vs. the skimage method and via the ImageJ method vs. the skimage method on both study animal datasets; diff = difference
Table A4 Percent differences [pixel differences] as a function of inter-beam distance (cm); data are from rows 1 and 4 of Table 1, split by inter-beam distance in centimeters, diff = difference

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Richardson, J.L., Levy, E.J., Ranjithkumar, R. et al. Automated, high-throughput image calibration for parallel-laser photogrammetry. Mamm Biol 102, 615–627 (2022).

Download citation

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: