Active Learning of Combinatorial Features for Interactive Optimization

  • Paolo Campigotto
  • Andrea Passerini
  • Roberto Battiti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6683)

Abstract

We address the automated discovery of preferred solutions through an interactive optimization procedure. The algorithm iteratively learns a utility function modeling the quality of candidate solutions and uses it to generate novel candidates for the next refinement iteration. We focus on combinatorial utility functions expressed as weighted conjunctions of Boolean variables. The learning stage exploits the sparsity-inducing property of 1-norm regularization to select a combinatorial function from the power set of all conjunctions up to a given degree. The optimization stage uses a stochastic local search method to solve the resulting weighted MAX-SAT problem. We show how the proposed approach generalizes to a large class of optimization problems formulated in terms of satisfiability modulo theories. Experimental results demonstrate the effectiveness of the approach in focusing the search towards the optimal solution and its ability to recover from suboptimal initial choices.
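To make the learning stage concrete, the following Python sketch illustrates the idea under stated assumptions: it uses scikit-learn's Lasso as the 1-norm regularizer and a hypothetical toy utility for the data. It is a minimal sketch, not the authors' implementation; the optimization stage would then hand the surviving weighted conjunctions to a weighted MAX-SAT solver.

```python
# Minimal sketch of the learning stage described in the abstract; NOT the
# authors' implementation. Assumes scikit-learn's Lasso as the 1-norm
# regularizer and a hypothetical toy utility for illustration.
from itertools import combinations

import numpy as np
from sklearn.linear_model import Lasso


def conjunction_features(X, degree=2):
    """Expand 0/1 assignments into all conjunctions of up to `degree` variables.

    Each output column is the logical AND (product) of one subset of the
    Boolean variables, so a linear model over these columns is exactly a
    weighted combination of conjunctions.
    """
    cols, terms = [], []
    for d in range(1, degree + 1):
        for subset in combinations(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(subset)], axis=1))
            terms.append(subset)
    return np.column_stack(cols), terms


# Toy data: hidden utility 2*(x0 AND x1) - x2, observed with noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5))
y = 2.0 * X[:, 0] * X[:, 1] - X[:, 2] + 0.05 * rng.normal(size=200)

Phi, terms = conjunction_features(X, degree=2)
model = Lasso(alpha=0.05).fit(Phi, y)  # 1-norm penalty zeroes most weights

# The surviving terms define the learned combinatorial utility; maximizing
# it over Boolean assignments is then a weighted MAX-SAT problem.
for weight, term in zip(model.coef_, terms):
    if abs(weight) > 1e-3:
        print(term, round(weight, 2))
```

The sparsity of the 1-norm solution is what keeps this tractable: only a handful of the exponentially many candidate conjunctions receive nonzero weight, and only those need to be encoded as weighted clauses for the local search.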

Keywords

Utility Function · Multiobjective Optimization · Gold Solution · Interactive Optimization · Stochastic Local Search



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Paolo Campigotto (1)
  • Andrea Passerini (1)
  • Roberto Battiti (1)

  1. DISI - Dipartimento di Ingegneria e Scienza dell'Informazione, Università degli Studi di Trento, Italy
