Tuning Machine-Learning Algorithms for Battery-Operated Portable Devices

  • Ziheng Lin
  • Yan Gu
  • Samarjit Chakraborty
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6458)


Machine learning algorithms in various forms are now increasingly used on a variety of portable devices, ranging from cell phones to PDAs. They often form part of standard applications (e.g., grammar checking in email clients) that run on these devices and occupy a significant fraction of processor and memory bandwidth. However, most research within the machine learning community has ignored issues such as the memory usage and power consumption of the processors running these algorithms. In this paper we investigate how machine-learned models can be developed in a power-aware manner for deployment on resource-constrained portable devices. We show that by tolerating a small loss in accuracy, it is possible to dramatically improve the energy consumption and data cache behavior of these algorithms. More specifically, we explore a typical sequential labeling problem, part-of-speech tagging in natural language processing, and show that a power-aware design can achieve up to a 50% reduction in power consumption in exchange for a minimal 3% decrease in tagging accuracy.
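The tradeoff described above can be illustrated with a minimal sketch (not the paper's actual model): a unigram tag lexicon whose table is pruned to its most frequent entries, shrinking the memory footprint, and hence cache pressure and energy per lookup, at a small cost in tagging accuracy. The toy corpus, `prune`, and `tag` helpers here are all hypothetical illustrations.

```python
# Hypothetical sketch: pruning a unigram POS-tag lexicon to cut memory
# footprint (and thus data cache misses) at a small accuracy cost.
from collections import Counter, defaultdict

# Tiny toy corpus of (word, tag) pairs; a real deployment would train on
# an annotated corpus such as the Penn Treebank.
corpus = [("the", "DT"), ("dog", "NN"), ("barks", "VBZ"), ("the", "DT"),
          ("cat", "NN"), ("sleeps", "VBZ"), ("a", "DT"), ("dog", "NN")]

def train(pairs):
    # Map each word to its most frequent tag; fall back to the globally
    # most frequent tag for unknown words.
    counts = defaultdict(Counter)
    for w, t in pairs:
        counts[w][t] += 1
    lexicon = {w: c.most_common(1)[0][0] for w, c in counts.items()}
    default = Counter(t for _, t in pairs).most_common(1)[0][0]
    return lexicon, default

def prune(lexicon, pairs, k):
    # Keep only the k most frequent words: a smaller table fits better
    # in cache, trading some accuracy for lower energy per lookup.
    freq = Counter(w for w, _ in pairs)
    keep = {w for w, _ in freq.most_common(k)}
    return {w: t for w, t in lexicon.items() if w in keep}

def tag(words, lexicon, default):
    return [lexicon.get(w, default) for w in words]

lexicon, default = train(corpus)
small = prune(lexicon, corpus, k=3)
print(tag(["the", "dog", "sleeps"], small, default))
```

Words evicted by `prune` fall back to the default tag and are sometimes mis-tagged; this mirrors, in miniature, the paper's accuracy-for-power tradeoff.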


Keywords: Low-power Machine Learned Models · Part-of-speech Tagging · Mobile Machine Learning Applications · Power-aware Design





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Ziheng Lin (1)
  • Yan Gu (2)
  • Samarjit Chakraborty (3)
  1. Department of Computer Science, National University of Singapore, Singapore
  2. Continental Automotive Singapore Pte Ltd, Singapore
  3. Institute for Real-time Computer Systems, TU Munich, Germany
