
Increasing the Receptive Field of Neurons in Convolutional Neural Networks

Published in: Cybernetics and Systems Analysis

Convolutional neural network architectures for classifying 1D and 2D signals are analyzed. The authors show that, for a high-dimensional input signal, adequate classification accuracy can be ensured only by using a large number of layers, which is infeasible under limited computing resources. If the number of layers is limited, accuracy degrades once the input dimensionality exceeds a critical value. A method for modifying a convolutional neural network with a relatively small number of layers is proposed to solve this problem, and its effectiveness is experimentally confirmed.
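This preview does not describe the proposed modification itself. As background for why depth is the limiting factor, the receptive field of a stack of convolutional layers can be computed with the standard recurrence sketched below; the layer configurations are illustrative assumptions, not the authors' architecture.

```python
def receptive_field(layers):
    """Receptive field (in input samples) of a stack of 1D conv layers.

    layers: list of (kernel_size, stride) tuples, ordered from the input.
    Each layer adds (kernel_size - 1) times the product of all earlier
    strides to the receptive field.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# With plain 3-tap, stride-1 convolutions the receptive field grows only
# linearly with depth: ten layers cover just 21 input samples, far too few
# for a high-dimensional input signal.
print(receptive_field([(3, 1)] * 10))  # 21

# Dilated convolutions (as in WaveNet) make a small kernel span a wider
# window; here each 3-tap kernel with dilation d behaves like a
# (2*d + 1)-tap kernel, and doubling d per layer gives exponential growth.
print(receptive_field([(2 * 2**i + 1, 1) for i in range(10)]))  # 2047
```

This illustrates the trade-off the abstract describes: covering a large input with ordinary convolutions requires many layers, whereas enlarging the effective receptive field per layer keeps the network shallow.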




Corresponding author

Correspondence to S. Shapovalova.

Additional information

Translated from Kibernetyka ta Systemnyi Analiz, No. 2, March–April, 2023, pp. 182–189.


Cite this article

Shapovalova, S., Moskalenko, Y. & Baranichenko, O. Increasing the Receptive Field of Neurons in Convolutional Neural Networks. Cybern Syst Anal 59, 339–345 (2023). https://doi.org/10.1007/s10559-023-00568-0
