Automated Video Lifting Posture Classification Using Bounding Box Dimensions

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 820)

Abstract

A method is introduced for automatically classifying lifting postures using simple features obtained by drawing a rectangular bounding box tightly around the body in the sagittal plane in video recordings. Mannequin postures encompassing a variety of hand locations were generated using the University of Michigan 3DSSPP software and classified into squatting, stooping, and standing. For each mannequin posture, a rectangular bounding box was drawn tightly around the mannequin for views in the sagittal plane and rotated horizontally by 30°. The bounding box dimensions were measured and normalized to the standing height of the corresponding mannequin. A classification and regression tree (CART) algorithm was trained on the height and width of the bounding box to classify the postures. The resulting algorithm misclassified 0.36% of training-set cases. The algorithm was tested on 30 lifting postures collected from video recordings of a variety of industrial lifting tasks, misclassifying 3.33% of test-set cases. Sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. The algorithm classified lifting postures based only on the dimensions of bounding boxes, simple features that can be measured automatically and continuously. We have developed computer vision software that continuously tracks the subject's body and automatically applies the described bounding box.

Keywords

  • Computer vision
  • Musculoskeletal disorders
  • Exposure assessment
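The classification pipeline described in the abstract can be sketched in a few lines: compute the tight axis-aligned bounding box around sagittal-plane body points, normalize its width and height by the subject's standing height, and map the two normalized features to a posture class. The threshold rules below are illustrative placeholders standing in for the paper's trained CART model; the cutoff values (0.8, 0.6) and the `classify_posture` function are assumptions for demonstration, not values from the study.

```python
from typing import List, Tuple

def bounding_box(points: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Width and height of the tightest axis-aligned box around 2-D body points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(xs) - min(xs), max(ys) - min(ys)

def classify_posture(points: List[Tuple[float, float]],
                     standing_height: float) -> str:
    """Classify a sagittal-plane posture from normalized bounding box dimensions.

    The thresholds here are hypothetical stand-ins for the decision rules
    a CART algorithm would learn from labeled mannequin postures.
    """
    width, height = bounding_box(points)
    norm_w = width / standing_height   # normalize so features are
    norm_h = height / standing_height  # comparable across body sizes
    if norm_h > 0.8:                   # box nearly as tall as the person
        return "standing"
    if norm_w > 0.6:                   # trunk flexion lengthens the box
        return "stooping"
    return "squatting"                 # compact box: low height, modest width
```

In practice the two corner points of the box would come from a body tracker, and the decision rules would be fit by a CART learner rather than hand-coded; the value of the representation is that only two scalar features per frame are needed.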


Author information

Correspondence to Robert G. Radwin.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Greene, R. et al. (2019). Automated Video Lifting Posture Classification Using Bounding Box Dimensions. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds) Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). IEA 2018. Advances in Intelligent Systems and Computing, vol 820. Springer, Cham. https://doi.org/10.1007/978-3-319-96083-8_72
