Structural and Multidisciplinary Optimization, Volume 57, Issue 3, pp 925–945

Active expansion sampling for learning feasible domains in an unbounded input space

  • Wei Chen
  • Mark Fuge


Abstract

Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car-body styling designs subject to constraints such as aesthetics and functionality. Existing active-learning methods learn feasible domains only over bounded input spaces, yet we usually lack the prior knowledge needed to set those input variable bounds: bounds that are too small fail to cover all feasible domains, while bounds that are too large waste the query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between refining the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region that is independent of the number of iterations or labeled samples, so it can be used for real-time prediction of a sample's feasibility within the explored region. We evaluate AES on three test examples and compare it with two adaptive sampling methods that operate over fixed input variable bounds: the Neighborhood-Voronoi algorithm and the straddle heuristic.
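To make the comparison concrete, the straddle heuristic (Bryan et al. 2006), one of the fixed-bound baselines mentioned above, can be sketched as follows. This is not the authors' AES code: the toy circular constraint, the Gaussian process hyperparameters, and the candidate-pool setup are illustrative assumptions. The heuristic queries points where the Gaussian process both is uncertain (high predictive standard deviation) and sits near the feasibility threshold, i.e. it "straddles" the decision boundary; note it requires the fixed input bounds that AES is designed to avoid.

```python
# Minimal sketch of the straddle heuristic baseline, assuming a toy
# implicit constraint (unit circle) and fixed input bounds [-2, 2]^2.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def feasible(x):
    # Illustrative implicit constraint: feasible inside the unit circle.
    return 1.0 if x[0]**2 + x[1]**2 <= 1.0 else -1.0

rng = np.random.default_rng(0)
# Candidate pool over fixed bounds -- the assumption AES removes.
candidates = rng.uniform(-2.0, 2.0, size=(2000, 2))

# Seed with a few labeled samples, then iteratively query the candidate
# maximizing the straddle score 1.96*sigma(x) - |mu(x) - t| (threshold t = 0).
X = rng.uniform(-2.0, 2.0, size=(5, 2))
y = np.array([feasible(x) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4,
                              optimizer=None, normalize_y=True)

for _ in range(30):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    straddle = 1.96 * sigma - np.abs(mu)
    x_next = candidates[np.argmax(straddle)]
    X = np.vstack([X, x_next])
    y = np.append(y, feasible(x_next))

# Predicted feasibility = sign of the GP mean; measure pool accuracy.
gp.fit(X, y)
mu, _ = gp.predict(candidates, return_std=True)
pred = np.where(mu >= 0.0, 1.0, -1.0)
truth = np.array([feasible(x) for x in candidates])
accuracy = float(np.mean(pred == truth))
```

Because the score trades the uncertainty term against distance from the threshold, queries concentrate near the estimated decision boundary once one is found; with bounds that are too small, however, no amount of querying can recover feasible domains lying outside them.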


Keywords: Active learning · Adaptive sampling · Feasible domain identification · Gaussian process · Exploitation–exploration trade-off



Acknowledgements

The authors thank the anonymous reviewers, whose efforts improved the manuscript. This work was funded through a University of Maryland Minta Martin Grant.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Mechanical Engineering, University of Maryland, College Park, USA
