
A Digitalized Recomposition Technique Based on Photo Quality Evaluation Criteria

Published in Wireless Personal Communications.

Abstract

Photographic evaluation rests mainly on qualitative factors that are personal and subjective. To develop a digitalized recomposition algorithm, however, these qualitative factors must be converted into quantitative measures, and the compositional theories of photography must be explored so that photo quality evaluation criteria can be digitized. This paper presents a new evaluation algorithm for photographic recomposition based on photo quality evaluation principles. Specifically, the rule of thirds is formulated as an optimization problem over the feature vector of the image; simplicity is formulated as a calculation of the size of region-of-interest (ROI) segments; and the rule of space is formulated from the size of the ROI and the moving direction of the foreground object. The algorithm can also be extended to the broader field of photographic evaluation, and its effectiveness is demonstrated by experimental results. Unlike previous works, which require manual interaction, the proposed technique is fully automatic. The authors expect the algorithm to be applicable in the near future, since many of the related state-of-the-art technologies are already embedded in commercial cameras.
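The three criteria the abstract names can be sketched as simple quantitative scores. This is a minimal illustration only: the function names, the binary-mask ROI representation, and the scoring formulas below are assumptions for exposition, not the paper's actual formulations.

```python
import numpy as np

def rule_of_thirds_score(roi_mask):
    """Score in [0, 1]: how close the ROI centroid lies to the nearest
    rule-of-thirds power point (illustrative formula, not the paper's)."""
    h, w = roi_mask.shape
    ys, xs = np.nonzero(roi_mask)
    cy, cx = ys.mean() / h, xs.mean() / w        # normalized centroid
    power_points = [(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)]
    d = min(np.hypot(cy - py, cx - px) for py, px in power_points)
    # An image corner is the farthest any point gets from its nearest
    # power point, so normalize by that distance.
    return 1.0 - d / np.hypot(1/3, 1/3)

def simplicity_score(roi_mask):
    """Illustrative simplicity measure: the smaller the ROI's share of the
    frame, the less cluttered the composition is assumed to be."""
    return 1.0 - roi_mask.mean()

def rule_of_space_score(roi_mask, moving_right=True):
    """Illustrative rule-of-space measure: fraction of frame width left
    open ahead of the subject in its direction of motion."""
    _, w = roi_mask.shape
    cx = np.nonzero(roi_mask)[1].mean() / w
    return (1.0 - cx) if moving_right else cx
```

A recomposition step could then search crop windows that maximize a weighted sum of such scores; the weighting scheme here would likewise be a design choice, not something specified in the abstract.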

(Figures 1–16 appear in the full article.)


Acknowledgments

This research was supported by the Daegu University Research Grant, 2012.


Correspondence to Han-Jin Cho.


About this article


Cite this article

Jeong, K., Cho, HJ. A Digitalized Recomposition Technique Based on Photo Quality Evaluation Criteria. Wireless Pers Commun 86, 301–314 (2016). https://doi.org/10.1007/s11277-015-2977-y
