Ageev, A.A., Sviridenko, M.I.: Pipage rounding: a new method of constructing algorithms with proved performance guarantee. J. Comb. Optim. 8(3), 307–328 (2004)
Allen-Zhu, Z., Li, Y., Singh, A., Wang, Y.: Near-optimal design of experiments via regret minimization. In: International Conference on Machine Learning, pp. 126–135 (2017)
Avron, H., Boutsidis, C.: Faster subset selection for matrices and applications. SIAM J. Matrix Anal. Appl. 34(4), 1464–1499 (2013)
Barber, C.B., Dobkin, D.P., Huhdanpaa, H.: The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. (TOMS) 22(4), 469–483 (1996)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, New York (2004)
Bro, R., De Jong, S.: A fast non-negativity-constrained least squares algorithm. J. Chemom. 11(5), 393–401 (1997)
Brooks, T.F., Pope, D.S., Marcolini, M.A.: Airfoil self-noise and prediction (1989)
Chaudhuri, K., Kakade, S.M., Netrapalli, P., Sanghavi, S.: Convergence rates of active learning for maximum likelihood estimation. In: Advances in Neural Information Processing Systems, pp. 1090–1098 (2015)
Dereziński, M., Warmuth, M.K.: Subsampling for ridge regression via regularized volume sampling. arXiv preprint arXiv:1710.05110 (2017)
Dolia, A.N., De Bie, T., Harris, C.J., Shawe-Taylor, J., Titterington, D.M.: The minimum volume covering ellipsoid estimation in kernel-defined feature spaces. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) ECML 2006. LNCS (LNAI), vol. 4212, pp. 630–637. Springer, Heidelberg (2006). https://doi.org/10.1007/11871842_61
Dulá, J.H., Helgason, R.V.: A new procedure for identifying the frame of the convex hull of a finite collection of points in multidimensional space. Eur. J. Oper. Res. 92(2), 352–367 (1996)
Dulá, J.H., López, F.J.: Competing output-sensitive frame algorithms. Comput. Geom. 45(4), 186–197 (2012)
Fedorov, V.V.: Theory of Optimal Experiments. Elsevier, New York (1972)
Kulesza, A., Taskar, B.: Determinantal point processes for machine learning. Found. Trends Mach. Learn. 5(2–3), 123–286 (2012)
Lawson, C.L., Hanson, R.J.: Solving Least Squares Problems, vol. 15. SIAM (1995)
Li, C., Jegelka, S., Sra, S.: Polynomial time algorithms for dual volume sampling. In: Advances in Neural Information Processing Systems, pp. 5045–5054 (2017)
López, F.J.: Generating random points (or vectors) controlling the percentage of them that are extreme in their convex (or positive) hull. J. Math. Model. Algorithms 4(2), 219–234 (2005)
Mair, S., Boubekki, A., Brefeld, U.: Frame-based data factorizations. In: International Conference on Machine Learning, pp. 2305–2313 (2017)
Mariet, Z.E., Sra, S.: Elementary symmetric polynomials for optimal experimental design. In: Advances in Neural Information Processing Systems, pp. 2136–2145 (2017)
Nocedal, J., Wright, S.J.: Sequential quadratic programming. In: Nocedal, J., Wright, S.J. (eds.) Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York (2006). https://doi.org/10.1007/978-0-387-40065-5_18
Ottmann, T., Schuierer, S., Soundaralakshmi, S.: Enumerating extreme points in higher dimensions. Nord. J. Comput. 8(2), 179–192 (2001)
Pace, R.K., Barry, R.: Sparse spatial autoregressions. Stat. Probab. Lett. 33(3), 291–297 (1997)
Pukelsheim, F.: Optimal Design of Experiments. SIAM (2006)
Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2002)
Smola, A.J., Schölkopf, B., Müller, K.R.: The connection between regularization operators and support vector kernels. Neural Netw. 11(4), 637–649 (1998)
Sugiyama, M., Nakajima, S.: Pool-based active learning in approximate linear regression. Mach. Learn. 75(3), 249–274 (2009)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996)
Wang, Y., Yu, A.W., Singh, A.: On computationally tractable selection of experiments in measurement-constrained regression models. J. Mach. Learn. Res. 18(143), 1–41 (2017)
Yeh, I.C.: Modeling of strength of high-performance concrete using artificial neural networks. Cem. Concr. Res. 28(12), 1797–1808 (1998)