Article

Machine Learning, Volume 90, Issue 1, pp 59–90

New algorithms for budgeted learning

  • Kun Deng, Department of Statistics, University of Michigan
  • Yaling Zheng, Department of Computer Science & Eng., University of Nebraska
  • Chris Bourke, Department of Computer Science & Eng., University of Nebraska
  • Stephen Scott, Department of Computer Science & Eng., University of Nebraska (corresponding author)
  • Julie Masciale, Union Pacific

Abstract

We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples’ class labels but must pay for each attribute value it requests. This learning model is appropriate in many areas, including medical applications. We present new algorithms, based on algorithms for the multi-armed bandit problem, for choosing which attributes of which examples to purchase. We also evaluate a group of algorithms based on the idea of incorporating second-order statistics into decision making. Most of our algorithms are competitive with the current state of the art, and some (in particular, our new algorithm AbsoluteBR2) perform better when the budget is highly limited. Finally, we present new heuristics for selecting the instance to purchase once an attribute has been selected, instead of selecting an instance uniformly at random, as is typically done. While experimental results showed some performance improvements from the new instance selectors, there was no consistent winner among these methods.
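To make the setting concrete, the sketch below shows one way a bandit-style purchasing loop might look: each attribute is treated as an arm, the budget is spent one attribute value at a time, and the instance is drawn uniformly at random (the baseline that the paper's instance selectors aim to improve on). This is a minimal, hypothetical epsilon-greedy illustration, not the paper's AbsoluteBR2 or its other bandit-based algorithms; the names budgeted_purchase and estimate_value, and the exploration rate, are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' algorithms): an epsilon-greedy
# bandit loop that spends a fixed budget purchasing attribute values,
# treating each attribute as an arm and a user-supplied value estimate
# (e.g., held-out accuracy of a model trained on purchased data) as reward.
import random

def budgeted_purchase(examples, labels, n_attrs, budget, estimate_value, eps=0.1):
    """Spend `budget` attribute purchases, favoring attributes whose past
    purchases have looked most informative (epsilon-greedy selection)."""
    purchased = [dict() for _ in examples]   # attribute values bought per example
    reward_sum = [0.0] * n_attrs             # cumulative reward per attribute (arm)
    pulls = [0] * n_attrs                    # number of purchases per attribute

    for _ in range(budget):
        # Choose an attribute: explore uniformly with probability eps,
        # otherwise exploit the arm with the best average reward so far.
        if random.random() < eps or all(p == 0 for p in pulls):
            attr = random.randrange(n_attrs)
        else:
            attr = max(range(n_attrs),
                       key=lambda a: reward_sum[a] / max(pulls[a], 1))

        # Choose an instance uniformly at random among those where this
        # attribute is still unpurchased (the simple baseline strategy).
        candidates = [i for i, bought in enumerate(purchased) if attr not in bought]
        if not candidates:
            continue
        i = random.choice(candidates)
        purchased[i][attr] = examples[i][attr]   # "pay" for this attribute value

        # Credit the arm with the resulting value estimate.
        reward_sum[attr] += estimate_value(purchased, labels)
        pulls[attr] += 1

    return purchased
```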

Keywords

Budgeted learning · Multi-armed bandit