
Hierarchical Features Integration and Attention Iteration Network for Juvenile Refractive Power Prediction

  • Conference paper

Neural Information Processing (ICONIP 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13109)

Abstract

Refractive power is recognized as one of the key indicators for myopia detection in clinical medical practice. The standard technique for measuring refractive power, cycloplegic autorefraction, requires the administration of cycloplegic eye drops, which can cause side effects and sequelae in juvenile patients. In addition, several fundus lesions and ocular disorders degrade the accuracy of objective refractive power measurement owing to equipment limitations. To address these problems, we propose a novel hierarchical features integration method and an attention iteration network that automatically estimate refractive power by reasoning from relevant biomarkers. In our method, hierarchical features integration generates ensembled features at different levels. An end-to-end deep neural network then encodes the feature maps in parallel and exploits an inter-scale attentive parallel module to enhance the representation through a top-down fusion path. Experimental results demonstrate that the proposed approach outperforms the baselines on the refractive power prediction task, and it could be deployed clinically to assist ophthalmologists and optometric physicians in inferring the progression of related ocular diseases.
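The abstract names two ideas without specifying them: building feature ensembles at several levels, then fusing per-level summaries with attention weights along a fusion path. As a rough, hypothetical sketch of that pattern (not the paper's actual architecture — the function names, the choice of pairwise products as a second feature level, and the scalar level summaries are all illustrative assumptions):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def integrate_hierarchical(biomarkers):
    """Build feature ensembles at two levels: the raw biomarker
    values, and their pairwise products (a hypothetical choice of
    higher-level feature)."""
    level0 = list(biomarkers)
    level1 = [a * b for i, a in enumerate(biomarkers)
                    for b in biomarkers[i + 1:]]
    return [level0, level1]

def attentive_fusion(levels, level_weights):
    """Summarise each level to a scalar, then combine the summaries
    with softmax attention weights -- a stand-in for the attentive
    top-down fusion the abstract describes."""
    summaries = [sum(level) / len(level) for level in levels]
    attn = softmax([w * s for w, s in zip(level_weights, summaries)])
    return sum(a * s for a, s in zip(attn, summaries))

levels = integrate_hierarchical([1.0, 2.0, 3.0])
prediction = attentive_fusion(levels, [1.0, 1.0])
```

In the paper itself the levels would be learned feature maps and the fusion a trained network module; the sketch only shows the data flow the abstract implies.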



Author information

Corresponding author: Jiang Liu

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y., Higashita, R., Long, G., Li, R., Santo, D., Liu, J. (2021). Hierarchical Features Integration and Attention Iteration Network for Juvenile Refractive Power Prediction. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds.) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13109. Springer, Cham. https://doi.org/10.1007/978-3-030-92270-2_41


  • DOI: https://doi.org/10.1007/978-3-030-92270-2_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92269-6

  • Online ISBN: 978-3-030-92270-2

  • eBook Packages: Computer Science, Computer Science (R0)
