
Image Abstraction Framework as a Pre-processing Technique for Accurate Classification of Archaeological Monuments Using Machine Learning Approaches

  • Original Research
  • Published:
SN Computer Science

A Correction to this article was published on 15 July 2022

This article has been updated

Abstract

This work extracts characteristic image features for the classification of archaeological monument images. At the pre-processing stage, sample images from the archaeological dataset are treated with a structure-preserving image abstraction framework, which produces effective abstraction output by manipulating the perceptible features of low-illuminated and underexposed color images. The proposed framework boosts significant image properties such as color, edges, sharpness, and contrast, while suppressing complexity and noise. Image properties are further refined at each phase based on the statistical feature-distribution information obtained. The Harris feature-detection technique is used to identify the most significant features in both the input and the enhanced images. By intelligently integrating a series of filters, arrived at through rigorous experimentation, the framework preserves significant foreground features while diminishing background content. Every stage of the result is evaluated both with assorted subjective criteria and with statistical image-quality and property-assessment attributes; in this way, prominent image features are recognized. The efficiency of the approach is corroborated by trials on the selected archaeological dataset, and both user visual feedback and standard image-quality assessment techniques are used to evaluate the proposed pre-processing framework. From the abstraction images produced by the framework, this work extracts gray-level texture features using the gray-level co-occurrence matrix (GLCM), color texture features using color texture moments (CTMs), and deep-learning features using AlexNet for archaeological monument classification, with a support vector machine adopted as the classifier.
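The role of the Harris detector here is to check that abstraction preserves, or even strengthens, salient structure: the same detector is run on the raw and enhanced images and the detected features are compared. As an illustration only (the paper's implementation is in MATLAB; the window radius `r`, sensitivity `k`, and threshold below are assumptions, not the authors' settings), a minimal NumPy sketch of a Harris corner response:

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                      # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                    # window sum of the structure tensor
        out = np.zeros_like(a)
        p = np.pad(a, r)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += p[r + dy:r + dy + a.shape[0], r + dx:r + dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

def count_corners(img, thresh_ratio=0.1):
    """Number of pixels whose response exceeds a fraction of the maximum."""
    R = harris_response(img)
    return int(np.sum(R > thresh_ratio * R.max()))
```

Comparing `count_corners` on an input image and on its abstracted counterpart gives one coarse, objective measure of whether pre-processing preserved prominent structure.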
To corroborate the efficiency of the proposed method, experiments were conducted on our own dataset of Chalukya, Kadamba, Hoysala, and new-engraving monuments, with 500 samples per class exhibiting large intra-class variation, varied environmental lighting conditions, low illumination, and different poses. The work was implemented in MATLAB 2020 on an HPC with an Nvidia Tesla P100 GPU, and the obtained results show that combining multiple features significantly improves performance, reaching 98.10%.
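As a rough illustration of the classification stage — not the authors' MATLAB pipeline, and with the gray-level count, pixel offset, and feature set below chosen arbitrarily — GLCM statistics such as contrast, homogeneity, and energy can be fed to an SVM, here scikit-learn's `SVC` standing in for the paper's classifier:

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast, homogeneity, and energy from a normalized GLCM.

    `img` is a 2-D grayscale array with values in [0, 255]; the
    co-occurrence offset (dx, dy) and `levels` are illustrative choices.
    """
    q = np.floor(img / (256 / levels)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):                       # accumulate pixel-pair counts
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                            # normalize to a joint distribution
    i, j = np.indices((levels, levels))
    contrast = np.sum((i - j) ** 2 * glcm)
    homogeneity = np.sum(glcm / (1 + np.abs(i - j)))
    energy = np.sum(glcm ** 2)
    return np.array([contrast, homogeneity, energy])

# Train an SVM on GLCM features of two synthetic texture classes:
# smooth horizontal ramps (class 0) vs. uniform noise (class 1).
rng = np.random.default_rng(0)
base = np.tile(np.linspace(0, 255, 32), (32, 1))
X, y = [], []
for _ in range(20):
    X.append(glcm_features(np.clip(base + rng.normal(0, 2, (32, 32)), 0, 255)))
    y.append(0)
    X.append(glcm_features(rng.integers(0, 256, (32, 32)).astype(float)))
    y.append(1)
X = np.array(X)
clf = SVC(kernel="linear").fit(X[:30], y[:30])   # hold out the last 10 samples
```

The same pattern extends to the paper's setting by concatenating GLCM, CTM, and AlexNet feature vectors per image before training the SVM.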



References

  1. Kumar MPP, Poornima B, Nagendraswamy HS, et al. A comprehensive survey on non-photorealistic rendering and benchmark developments for image abstraction and stylization. Iran J Comput Sci. 2019;2:131–65. https://doi.org/10.1007/s42044-019-00034-1.


  2. Pavan Kumar MP, Poornima B, Nagendraswamy HS, Manjunath C, Rangaswamy BE. Structure preserving image abstraction and artistic stylization from complex background and low illuminated images. ICTACT J Image Video Process. 2020;11(1). https://doi.org/10.21917/ijivp.2020.0316.


  3. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng. 2010;22(10):1345–59.


  4. Kazimi B, Thiemann F, Malek K, Sester M, Khoshelham K. Deep learning for archaeological object detection in airborne laser scanning data. In: Proceedings of the 2nd workshop on computing techniques for spatio-temporal data in archaeology and cultural heritage co-located with 10th international conference on geographical information science. 2018. https://doi.org/10.4230/LIPIcs.COARCH.2018.

  5. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern. 1973;6:610–21.


  6. Retrieval using texture features in high-resolution, multispectral satellite imagery. In: Data mining and knowledge discovery: theory, tools, and technology, VI, Proceedings of SPIE, vol 5433. SPIE Press, Bellingham, WA, pp 21–32; 2004.

  7. Guru DS, Sharath Kumar YH, Manjunath S. Textural features in flower classification. Math Comput Model. 2011;54(3–4):1030–6. https://doi.org/10.1016/j.mcm.2010.11.032 (ISSN 0895-7177).


  8. Guru D, Kumar YH, Shantharamu M. Texture features and KNN in classification of flower images. Int J Comput Appl Spec Issue RTIPPR. 2010;1:21–9.


  9. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90. https://doi.org/10.1145/3065386.


  10. Kumar MPP, Poornima B, Nagendraswamy HS, et al. Structure-preserving NPR framework for image abstraction and stylization. J Supercomput. 2021;77:8445–513. https://doi.org/10.1007/s11227-020-03547-w.


  11. Pavan Kumar MP, Poornima B, Nagendraswamy HS, Manjunath C, Rangaswamy BE. Image-abstraction framework as a preprocessing technique for extraction of text from underexposed complex background and graphical embossing images. IJDAI. 2021;13(1):1–35. https://doi.org/10.4018/IJDAI.2021010101.


  12. Kyprianidis JE, Collomosse J, Wang T, Isenberg T. State of the “art”: a taxonomy of artistic stylization techniques for images and video. IEEE Trans Vis Comput Graph. 2013;19(5):866–85. https://doi.org/10.1109/TVCG.2012.160.


  13. Shang Y, Wong H-C. Automatic portrait image pixelization. Comput Graph. 2021;95:47–59. https://doi.org/10.1016/j.cag.2021.01.008 (ISSN 0097-8493).


  14. Pavan Kumar MP, Poornima B, Nagendraswamy HS, Manjunath C, Rangaswamy BE. A refined structure preserving image abstraction framework as a pre-processing technique for desire focusing on prominent structure and artistic stylization. WSPC-Vietnam J Comput Sci. 2021. https://doi.org/10.1142/S2196888822500038.


  15. Pavan Kumar MP, Poornima B, Nagendraswamy HS, Manjunath C. Structure preserving non-photorealistic rendering framework for image abstraction and stylization of low-illuminated and underexposed images. IJCVIP. 2021;11(2):22–45. https://doi.org/10.4018/IJCVIP.2021040102.


  16. Zhao C. A survey on image style transfer approaches using deep learning. J Phys Conf Ser. 2020;1453: 012129. https://doi.org/10.1088/1742-6596/1453/1/012129.


  17. Söchting M, Trapp M. Controlling image-stylization techniques using eye tracking (presentation). 2020. https://doi.org/10.13140/RG.2.2.27256.39688.

  18. Li S, Wen Q, Zhao S, Sun Z, He S. Two-stage photograph cartoonization via line tracing. Comput Graph Forum. 2020;39:587–99. https://doi.org/10.1111/cgf.14170.


  19. Zhuoqi M, Jie L, Nannan W, Xinbo G. Semantic-related image style transfer with dual-consistency loss. Neurocomputing. 2020;406:135–49. https://doi.org/10.1016/j.neucom.2020.04.027 (ISSN 0925-2312).


  20. Ma Z, Li J, Wang N, Gao X. Image style transfer with collection representation space and semantic-guided reconstruction. Neural Netw. 2020;129:123–37. https://doi.org/10.1016/j.neunet.2020.05.028 (ISSN 0893-6080).


  21. Kim J, Lee J. Layered non-photorealistic rendering with anisotropic depth-of-field filtering. Multimed Tools Appl. 2020;79:1291–309. https://doi.org/10.1007/s11042-019-08387-2.


  22. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell. 1990;12(7):629–39. https://doi.org/10.1109/34.56205.


  23. Bartyzel K. Adaptive Kuwahara filter. SIViP. 2016;10:663–670. https://doi.org/10.1007/s11760-015-0791-3.


  24. Kyprianidis JE, Semmo A, Kang H, Döllner J. Anisotropic Kuwahara filtering with polynomial weighting functions. EG UK Theory Pract Comput Graph. 2010;25–30. https://doi.org/10.2312/LocalChapterEvents/TPCG/TPPCG10/025-030.


  25. Sadreazami H, Asif A, Mohammadi A. Iterative graph-based filtering for image abstraction and stylization. IEEE Trans Circuits Syst II Express Briefs. 2018;65(2):251–5. https://doi.org/10.1109/TCSII.2017.2669866.


  26. Azami R, Mould D. Detail and color enhancement in photo stylization. In: Proceedings of the symposium on computational aesthetics (CAE ‘17), Spencer SN, editor. ACM, New York, NY, USA, Article 5, 11 pages. 2017. https://doi.org/10.1145/3092912.3092917.

  27. Nagendra Swamy HS, Pavan Kumar MP. An integrated filter based approach for image abstraction and stylization. In: Swamy P, Guru D, editors. Multimedia processing, communication and computing applications, vol. 213. Lecture Notes in Electrical Engineering. New Delhi: Springer; 2013. https://doi.org/10.1007/978-81-322-1143-3_20.

  28. Shakeri H, Nixon M, DiPaola S. Saliency-based artistic abstraction with deep learning and regression trees. J Imaging Sci Technol. 2017;61(6):60402-1-60402–9.


  29. Kang H, Lee S, Chui CK. Flow-based image abstraction. IEEE Trans Vis Comput Graph. 2009;15(1):62–76. https://doi.org/10.1109/TVCG.2008.81.


  30. Cheng G, Zhou P, Han J. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Trans Geosci Remote Sens. 2016;54(12):7405–15.


  31. Lawrence S, Giles CL, Tsoi AC, Back AD. Face recognition: a convolutional neural-network approach. IEEE Trans Neural Netw. 1997;8(1):98–113.


  32. Li S, Chan AB. 3D human pose estimation from monocular images with deep convolutional neural network. In: Asian conference on computer vision. Springer; 2014. pp. 332–347.

  33. He Z, Nan F, Li X, Lee S, Yang Y. Traffic sign recognition by combining global and local features based on semi-supervised classification. IET Intell Transp Syst. 2020;14(5):323–30. https://doi.org/10.1049/iet-its.2019.0409.


  34. Rana A, Singh P, Valenzise G, Dufaux F, Komodakis N, Smolic A. Deep tone mapping operator for high dynamic range images. IEEE Trans Image Process. 2019. https://doi.org/10.1109/TIP.2019.2936649.


  35. Hiary H, Saadeh H, Saadeh M, Yaqub M. Flower classification using deep convolutional neural networks. IET Comput Vis. 2018;12(6):855–62.


  36. Guan H, Yongtao Yu, Ji Z, Li J, Zhang Qi. Deep learning-based tree classification using mobile lidar data. Remote Sens Lett. 2015;6(11):864–73.


  37. Yongtao Y, Guan H, Ji Z. Automated detection of urban road manhole covers using mobile laser scanning data. IEEE Trans Intell Transp Syst. 2015;16(6):3258–69.


  38. Ji S, Xu W, Yang M, Yu K. 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell. 2013;35(1):221–31.


  39. Oliveira TP, Barbar JS, Soares AS. Multilayer perceptron and stacked autoencoder for internet traffic prediction. In: Hsu C-H, Shi X, Salapura V, editors. Network and parallel computing. Berlin: Springer; 2014. p. 61–71.


  40. Yongtao Y, Li J, Guan H, Jia F, Wang C. Learning hierarchical features for automated extraction of road markings from 3-D mobile lidar point clouds. IEEE J Sel Top Appl Earth Obs Remote Sens. 2015;8(2):709–26.


  41. Badem H, Caliskan A, Basturk A, Yuksel ME. Classification and diagnosis of the Parkinson disease by stacked autoencoder. In: 2016 National conference on electrical, electronics and biomedical engineering (ELECO), IEEE; 2016. pp. 499–502.

  42. Sahay T, Mehta A, Jadon S. Architecture classification for Indian monuments. Technical report, University of Massachusetts Amherst; 2017. https://doi.org/10.13140/RG.2.2.32105.13920.

  43. Cintas C, Lucena M, Fuertes JM, Delrieux C, Navarro P, González-José R, Molinos M. Automatic feature extraction and classification of Iberian ceramics based on deep convolutional networks. J Cult Herit. 2020;41:106–12. https://doi.org/10.1016/j.culher.2019.06.005 (ISSN 1296-2074).


  44. Rasheed N, Nordin MdJ. Archaeological fragments classification based on RGB color and texture features. J Theor Appl Inf Technol. 2015;3076:358–65 (E-ISSN: 1817-3195).


  45. Amato G, Falchi F, Gennaro C. Fast image classification for monument recognition. J Comput Cult Herit. 2015. https://doi.org/10.1145/2724727.


  46. Bhatt MS, Patalia TP. Genetic programming evolved spatial descriptor for Indian monuments classification. In: 2015 IEEE international conference on computer graphics, vision and information security (CGVIS), Bhubaneswar; 2015. pp. 131–136.

  47. Triantafyllidis G, Kalliatakis G. Image based monument recognition using graph based visual saliency. Electron Lett Comput Vis Image Anal. 2013;12:88–97. https://doi.org/10.5565/rev/elcvia.524.


  48. Desai P, Pujari J, Ayachit NH, Prasad VK. Classification of archaeological monuments for different art forms with an application to CBIR. In: Proceedings of the 2013 international conference on advances in computing, communications and informatics, ICACCI 2013; 2013. pp. 1108–1112. https://doi.org/10.1109/ICACCI.2013.6637332.

  49. Bhatt M, Patalia T. Indian monuments classification using support vector machine. Int J Electr Comput Eng IJECE. 2017;7:1952. https://doi.org/10.11591/ijece.v7i4.pp1952-1963.


  50. Das R, Thepade S, Bhattacharya S, Ghosh S. Retrieval architecture with classified query for content based image recognition. Appl Comput Intell Soft Comput. 2016;1(2016):2.


  51. Ying L, Gang W. Kernel fuzzy clustering based classification of ancient ceramic fragments. In: Proceedings of the conference on information management and engineering, IEEE; 2010. pp. 348–350.

  52. Smith P, Bespalov D, Shokoufandeh A, Jeppson P. Classification of archaeological ceramic fragments using texture and color descriptors. In: IEEE, computer society conference on computer vision and pattern recognition workshops (CVPRW); 2010. pp. 49–54.

  53. Karasik A, Smilansky U. Computerized morphological classification of ceramics. J Archaeol Sci. 2011;38(10):2644–57.


  54. Makridis M, Daras P. Automatic classification of archaeological pottery sherds. ACM J Comput Cult Herit. 2012;5(4):1–21.


  55. Jankovic R. Machine learning models for cultural heritage image classification: comparison based on attribute selection. MDPI Inf. 2020. https://doi.org/10.3390/info11010012.


  56. Abulnour AMH. Protecting the Egyptian monuments: fundamentals of proficiency. Alex Eng J. 2013;52(4):779–85. https://doi.org/10.1016/j.aej.2013.09.003 (ISSN 1110-0168).


  57. Polak A, et al. Hyperspectral imaging combined with data classification techniques as an aid for artwork authentication. J Cult Herit. 2017. https://doi.org/10.1016/j.culher.2017.01.013.


  58. Kulkarni U, Meena SM, Gurlahosur SV, Mudengudi U. Classification of cultural heritage sites using transfer learning. In: 2019 IEEE fifth international conference on multimedia big data (BigMM); 2019. pp. 391–397. https://doi.org/10.1109/BigMM.2019.00020.

  59. Sharma S, Aggarwal P, Bhattacharyya AN, Indu S. Classification of Indian monuments into architectural styles, vol. 841. Singapore: Springer; 2018.


  60. Yi YK, Zhang Y, Myung J. House style recognition using deep convolutional neural network. Autom Constr. 2020;118:103307. https://doi.org/10.1016/j.autcon.2020.103307.


  61. Wojna A, Latkowski R. Rseslib 3: library of rough set and machine learning methods with extensible architecture. In: Transactions on Rough Sets XXI, Springer; 2019. pp. 301–323.

  62. Etaati M, Majidi B, Manzuri MT. Cross platform web-based smart tourism using deep monument mining. In: 2019 4th International conference on pattern recognition and image analysis (IPRIA); 2019. pp. 190–194.

  63. Shukla P, Rautela B, Mittal A. A computer vision framework for automatic description of Indian monuments. In: 2017 13th International conference on signal-image technology & internet-based systems (SITIS); 2017. pp. 116–122. https://doi.org/10.1109/SITIS.2017.29.

  64. Grilli E, Dininno D, Petrucci G, Remondino F. From 2D to 3D supervised segmentation and classification for cultural heritage applications. In: ISPRS TC II mid-term symposium “Towards Photogrammetry 2020”, vol. 42, no. 42; 2018. pp. 399–406.

  65. Verschoof-van der Vaart WB, Lambers K. Learning to look at LiDAR: the use of R-CNN in the automated detection of archaeological objects in LiDAR data from the Netherlands. J Comput Appl Archaeol. 2019;2(1):31–40. https://doi.org/10.5334/jcaa.32.


  66. Navarro P, Cintas C, Lucena M, Fuertes JM, Delrieux C, Molinos M. Learning feature representation of Iberian ceramics with automatic classification models. J Cult Herit. 2021;48:65–73. https://doi.org/10.1016/j.culher.2021.01.003 (ISSN 1296-2074).


  67. Fiorucci M, Khoroshiltseva M, Pontil M, Traviglia A, Del Bue A, James S. Machine learning for cultural heritage: a survey. Pattern Recognit Lett. 2020;133:102–8. https://doi.org/10.1016/j.patrec.2020.02.017 (ISSN 0167-8655).


  68. Paul AJ, Ghose S, Aggarwal K, Nethaji N, Pal S, Purkayastha AD. Machine learning advances aiding recognition and classification of Indian monuments and landmarks. arXiv preprint arXiv:2107.14070. 2021.

  69. El Hajj H. Interferometric SAR and machine learning: using open source data to detect archaeological looting and destruction. J Comput Appl Archaeol. 2021;4(1):47–62. https://doi.org/10.5334/jcaa.70.


  70. Kuntitan P, Chaowalit O. Using deep learning for the image recognition of motifs on the Center of Sukhothai Ceramics. Curr Appl Sci Technol. 2022;22(2).

  71. Hesham S, Khaled R, Yasser D, Refaat S, Shorim N, Ismail FH. Monuments recognition using deep learning vs machine learning. In: 2021 IEEE 11th annual computing and communication workshop and conference (CCWC); 2021. pp. 258–263.

  72. Immerkær J. Fast noise variance estimation. Comput Vis Image Underst. 1996;64(2):300–2. https://doi.org/10.1006/cviu.1996.0060.


  73. Smith SM, Brady JM. SUSAN—a new approach to low level image processing. Int J Comput Vis. 1997;23(1):45–78. https://doi.org/10.1023/A:1007963824710.


  74. Machado P, Cardoso A. Computing aesthetics. In: Proceedings of the 14th Brazilian symposium on artificial intelligence: advances in artificial intelligence (SBIA ‘98), de Oliveira FM, editor. Springer-Verlag, London, UK; 1998. pp. 219–228.

  75. Bahrami K, Kot AC. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Process Lett. 2014;21(6):751–5. https://doi.org/10.1109/LSP.2014.2314487.


  76. Matković K, Neumann L, Neumann A, Psik T, Purgathofer W. Global contrast factor - a new approach to image contrast. In: Proceedings of the first Eurographics conference on computational aesthetics in graphics, visualization and imaging (Computational Aesthetics'05), Neumann L, Sbert M, Gooch B, Purgathofer W, editors. Eurographics Association, Aire-la-Ville, Switzerland; 2005. pp. 159–167. https://doi.org/10.2312/COMPAESTH/COMPAESTH05/159-167.

  77. Hasler D, Suesstrunk SE. Measuring colorfulness in natural images. Proc SPIE Int Soc Opt Eng. 2003;5007:87–95. https://doi.org/10.1117/12.477378.


  78. Harris C, Stephens M. A combined corner and edge detector. In: Proc. of the fourth Alvey vision conference; 1988. pp. 147–151.

  79. Garcia V, Debreuve E, Barlaud M. Region of interest tracking based on key point trajectories on a group of pictures. In: International workshop on content-based multimedia indexing, Bordeaux; 2007. pp. 198–203. https://doi.org/10.1109/CBMI.2007.385412.

  80. Ashikhmin M. A tone mapping algorithm for high contrast images. In: EUROGRAPHICS 2002, Debevec P, Gibson S, editors, Pisa, Italy; 2002. pp. 1–11.

  81. Banterle F, Artusi A, Sikudova E, Bashford-Rogers T, Ledda P, Bloj M, Chalmers A. Dynamic range compression by differential zone mapping based on psychophysical experiments. In: Proceedings of the ACM symposium on applied perception (SAP ’12). Association for Computing Machinery, New York, NY, USA; 2012. pp. 39–46. https://doi.org/10.1145/2338676.2338685.

  82. Banterle F, Ledda P, Debattista K, et al. A framework for inverse tone mapping. Vis Comput. 2007;23:467–78. https://doi.org/10.1007/s00371-007-0124-9.


  83. Aggarwal U, Trocan M, Coudoux F. An HVS-inspired video deinterlacer based on visual saliency. Vietnam J Comput Sci. 2017;4:61–9. https://doi.org/10.1007/s40595-016-0081-1.


  84. Di Zenzo S. A note on the gradient of a multi-image. Comput Vis Graph Image Process. 1986;33(1):116–25.


  85. Kyprianidis J, Kang H. Image and video abstraction by coherence-enhancing filtering. Comput Graph Forum. 2011;30:593–602. https://doi.org/10.1111/j.1467-8659.2011.01882.x.


  86. Bhat P, Zitnick CL, Cohen M, Curless B. Gradientshop: a gradient-domain optimization framework for image and video filtering. ACM Trans Graph. 2010;29(2):1–14.


  87. Zeng Y, Chen W, Peng Q. A novel variational image model: towards a unified approach to image editing. J Comput Sci Technol. 2006;21:224–31.


  88. Kang H, Lee S. Shape-simplifying image abstraction. Comput Graph Forum. 2008;27:1773–80. https://doi.org/10.1111/j.1467-8659.2008.01322.x.


  89. Kumar P, Swamy N. Line drawing for conveying shapes in HDR images. Int J Innovations Eng Technol. 2013;2(2):353–362 (ISSN 2319-1058)


  90. Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process. 2017;26(7):3142–55.


  91. Yu H, Li M, Zhang H-J, Feng J. Color texture moments for content-based image retrieval. In: Proceedings of international conference on image processing, IEEE; 2012. pp. 929–932.

  92. Hsu C-W, Lin C-J. A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw. 2002;13:415–25.


  93. Crammer K, Singer Y. On the algorithmic implementation of multiclass kernel-based vector machines. J Mach Learn Res. 2001;2:265–92.


  94. Mittal A, Soundararajan R, Bovik AC. Making a completely blind image quality analyzer. IEEE Signal Process Lett. 2013;22(3):209–12.


  95. Yeganeh H, Wang Z. Objective quality assessment of tone mapped images. IEEE Trans Image Process. 2013;22(2):657–67.


  96. De Arruda FAPV, de Queiroz JER, Gomes HM. Non-photorealistic neural-sketching. J Braz Comput Soc. 2012;18:237. https://doi.org/10.1007/s13173-012-0061-y.


  97. Venkatanath N, Praneeth D, Chandrasekhar BhM, Channappayya SS, Medasani SS. Blind image quality evaluation using perception based features. In: Proceedings of the 21st national conference on communications (NCC), Piscataway, NJ, IEEE; 2015.

  98. Al-Najjar YAY, Soong DC. Comparison of image quality assessment: PSNR, HVS, SSIM. UIQI Int J Sci Eng Res. 2012;3(8):1 (ISSN 2229-5518).


  99. Mould D, Rosin PL. Developing and applying a benchmark for evaluating image stylization. Comput Graph. 2017;67(C):58–76. https://doi.org/10.1016/j.cag.2017.05.025.


  100. Mould D, Rosin PL. A benchmark image set for evaluating stylization. In: Proceedings of the joint symposium on computational aesthetics and sketch based interfaces and modeling and non-photorealistic animation and rendering (Expressive ‘16). Eurographics Association, Aire-la-Ville; 2016. pp. 11–20.

  101. Pavan Kumar MP, Poornima B, Nagendraswamy HS, et al. HDR and image abstraction framework for dirt free line drawing to convey the shapes from blatant range images. Multidim Syst Sign Process. 2021. https://doi.org/10.1007/s11045-021-00803-x



Acknowledgements

This work is funded by VGST, Government of Karnataka (GoK), under the K-FIST L2 scheme (Grant No. KSTePS/VGST-K_FIST L2/2019-20/GRD No. 758/315) with INR 40 lakhs. We are thankful to the High Performance Computing Lab, DoS in Computer Science, University of Mysore, Mysore, for facilitating the high-speed computation lab, and to Mr. Adithya N Prabhu and Mr. Deekshith K. V., Department of Information Science and Engineering, J.N.N. College of Engineering, for the courtesy of the archaeological dataset.

Author information


Corresponding author

Correspondence to M. P. Pavan Kumar.

Ethics declarations

Conflict of interest

The authors certify that there is no conflict of interest with any organization for the present work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: Reference citation [11, 14] provided in page 11 was incorrect. Now, it has been corrected to [14, 15].


About this article


Cite this article

Pavan Kumar, M.P., Poornima, B., Nagendraswamy, H.S. et al. Image Abstraction Framework as a Pre-processing Technique for Accurate Classification of Archaeological Monuments Using Machine Learning Approaches. SN COMPUT. SCI. 3, 87 (2022). https://doi.org/10.1007/s42979-021-00935-8

