
Wall-Cor Net: wall color replacement via Clifford chance-based deep generative adversarial network

  • Original Paper
Signal, Image and Video Processing

Abstract

Color design for interior environments is challenging because of the many aspects that must be matched. Although learning from images is a popular approach, it works best for natural scenes containing objects with generally stable colors. Wall color selection is particularly difficult because much of an interior's design depends on a wide range of colors related to the location and layout of the space. Our objective is to create a system that automatically repaints the walls of indoor scene images in a preferred color. To achieve this goal, a novel deep learning-based wall color replacement network built on a generative adversarial network (Wall-Cor Net) is proposed. To improve input quality, the indoor scenes are first preprocessed with a Clifford gradient based on the RGB representation; the Clifford gradient algorithm analyzes the input and prevents issues such as the accumulation of noise distortions in the images. To segment the indoor scene images, a deep learning-based attention V-Net separates the objects from the wall. The GAN helps both designers and homeowners visualize different wall color schemes instantly, saving time and money: its strength for wall color replacement lies in its ability to synthesize diverse color schemes realistically, enabling designers and residents to visually explore and choose optimal schemes for enhanced interior esthetics. Finally, to identify suitable and unsuitable images based on user preference, the indoor scene, the segmented mask and the hex color code are supplied to the generator and discriminator. The average classification accuracy of the proposed Wall-Cor Net is 99.33%, improving overall accuracy by 19.8%, 17.33% and 4.33% over FCN, a surrogate-assisted method and CNN, respectively.
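The core recoloring step described in the abstract, combining a segmented wall mask with a user-chosen hex color code, can be illustrated with a minimal sketch. The helper names (`hex_to_rgb`, `recolor_wall`) and the luminance-preserving blend below are illustrative assumptions, not the paper's GAN-based method, which learns the synthesis rather than applying a fixed rule.

```python
import numpy as np

def hex_to_rgb(hex_code):
    """Convert a hex color code such as '#4A90D9' to an RGB triple in [0, 1]."""
    hex_code = hex_code.lstrip('#')
    return np.array([int(hex_code[i:i + 2], 16) for i in (0, 2, 4)]) / 255.0

def recolor_wall(image, wall_mask, hex_code):
    """Recolor masked wall pixels to the target hue while keeping each pixel's
    original luminance, so shading and texture on the wall are preserved."""
    target = hex_to_rgb(hex_code)
    weights = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma weights
    luma = image @ weights                      # per-pixel luminance of the scene
    target_luma = target @ weights              # luminance of the flat target color
    # Scale the target color by the original luminance pattern, then clip to [0, 1].
    recolored = np.clip(luma[..., None] * (target / max(target_luma, 1e-6)), 0.0, 1.0)
    out = image.copy()
    out[wall_mask] = recolored[wall_mask]       # replace only the wall pixels
    return out

# Toy example: a 4x4 mid-gray "scene" where the left half is wall.
image = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
result = recolor_wall(image, mask, '#3366CC')
```

In the full system, the segmentation mask would come from the attention V-Net and the recolored output from the trained generator; this sketch only shows how the mask and hex code jointly determine which pixels change and to what color.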


Availability of data and materials

Data sharing is not applicable to this article, as no new datasets were generated or analyzed for this research project.


Acknowledgements

The authors greatly appreciate the reviewers' thorough, helpful and perceptive feedback on this work.

Author information


Contributions

The authors confirm the following contributions to the work: MSP, MRG, TJ and TR developed the concept and design, collected the data, analyzed and interpreted the results, and prepared the draft manuscript. All authors reviewed the findings and approved the final version of the manuscript.

Corresponding author

Correspondence to M. Sabitha Preethi.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest regarding the publication of this paper.

Ethical approval

The article was reviewed and approved by a research advisor before being submitted for publication in this journal.

Human and animal rights

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

The authors attest that the participants were fully informed about the nature and goals of this study, as well as the possible advantages of participating. The participants' inquiries concerning the research have been addressed, and the authors remain available to answer any further queries.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Preethi, M.S., Geetha, M.R., Jaya, T. et al. Wall-Cor Net: wall color replacement via Clifford chance-based deep generative adversarial network. SIViP (2024). https://doi.org/10.1007/s11760-024-03054-y

