Abstract
Inspired by the Hardy–Littlewood maximal function, we propose a novel pooling strategy called maxfun pooling. We present it both as a viable alternative to two of the most popular pooling functions, max pooling and average pooling, and as a way of interpolating between the two. We demonstrate the features of maxfun pooling in two applications: first in the context of convolutional sparse coding, and then for image classification.
This paper is dedicated to our friend, Professor John Benedetto, on the occasion of his 80th birthday.
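The chapter's precise definition of maxfun pooling is not reproduced in this excerpt, so the following is only a plausible one-dimensional sketch of the idea described in the abstract: within each pooling window, take the maximum over the averages of contiguous sub-windows. The function name `maxfun_pool` and the parameters `k` (pooling window length) and `m` (sub-window length) are illustrative choices, not the authors' notation.

```python
def maxfun_pool(x, k, m):
    """Hypothetical sketch of maxfun pooling on a 1-D signal x.

    For each non-overlapping window of length k, return the maximum
    over the averages of all contiguous sub-windows of length m,
    loosely in the spirit of the Hardy-Littlewood maximal function.
    With m == 1 this reduces to max pooling; with m == k it reduces
    to average pooling, so m interpolates between the two.
    """
    out = []
    for start in range(0, len(x) - k + 1, k):
        window = x[start:start + k]
        # average over every contiguous sub-window of length m
        avgs = [sum(window[i:i + m]) / m for i in range(k - m + 1)]
        out.append(max(avgs))
    return out

# e.g. maxfun_pool([1, 2, 3, 4, 3, 2, 1, 0], 4, 2) -> [3.5, 2.5]
```

At the extremes, `maxfun_pool(x, 4, 1)` behaves like max pooling and `maxfun_pool(x, 4, 4)` like average pooling, which is the interpolation property the abstract attributes to maxfun pooling.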
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Czaja, W., Li, W., Li, Y., Pekala, M. (2021). Maximal Function Pooling with Applications. In: Hirn, M., Li, S., Okoudjou, K.A., Saliani, S., Yilmaz, Ö. (eds) Excursions in Harmonic Analysis, Volume 6. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-69637-5_21
Print ISBN: 978-3-030-69636-8
Online ISBN: 978-3-030-69637-5
eBook Packages: Mathematics and Statistics (R0)