
High-dimensional features of adaptive superpixels for visually degraded images


Abstract

This study presents a novel and highly efficient superpixel algorithm, depth-fused adaptive superpixel (DFASP), which can generate accurate superpixels in degraded images. In many applications, particularly in real scenes, visual degradation such as motion blur, overexposure, and underexposure often occurs. Well-known color-based superpixel algorithms cannot produce accurate superpixels in degraded images because of the ambiguity of color information caused by such degradation. To eliminate this ambiguity, we use both depth and color information to generate superpixels. We map the depth and color information to a high-dimensional feature space and then develop a fast multilevel clustering algorithm to produce superpixels. Furthermore, we design an adaptive mechanism that automatically adjusts the weighting of color and depth information during pixel clustering. Experimental results demonstrate that in terms of boundary recall, undersegmentation error, run time, and achievable segmentation accuracy, DFASP outperforms state-of-the-art superpixel methods.
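The abstract does not include an implementation, so the following is only a minimal sketch of the general idea it describes: each pixel's color, depth, and position are fused into one high-dimensional feature vector, and those vectors are clustered into superpixels. The function names, the fixed weights color_w/depth_w/spatial_w, and the plain k-means loop are illustrative assumptions; DFASP itself adapts the color/depth weighting automatically during clustering and uses a fast multilevel clustering scheme rather than ordinary k-means.

```python
# Illustrative sketch only (not the authors' DFASP code): fuse per-pixel color
# and depth into a high-dimensional feature vector and cluster it into
# superpixels. Weights and the naive k-means step are assumptions.
import numpy as np

def rgbd_features(rgb, depth, color_w=1.0, depth_w=1.0, spatial_w=0.5):
    """Map an H x W x 3 color image and an H x W depth map to per-pixel
    feature vectors [R, G, B, depth, x, y], each term scaled by a weight.
    DFASP adapts the color/depth weights automatically; here they are
    fixed constants purely for illustration."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [color_w * rgb.reshape(h * w, 3),
         depth_w * depth.reshape(h * w, 1),
         spatial_w * np.stack([xs.ravel(), ys.ravel()], axis=1)],
        axis=1).astype(np.float64)
    return feats

def kmeans_superpixels(feats, k=200, iters=10, seed=0):
    """Naive k-means over the fused features; DFASP instead uses a fast
    multilevel clustering scheme, which this loop only approximates."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest center in the fused feature space.
        d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of the pixels assigned to it.
        for c in range(k):
            members = feats[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

if __name__ == "__main__":
    rgb = np.random.rand(64, 64, 3)   # stand-in color image
    depth = np.random.rand(64, 64)    # stand-in depth map
    labels = kmeans_superpixels(rgbd_features(rgb, depth), k=50)
    print(labels.reshape(64, 64)[:4, :4])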



Author information


Corresponding author

Correspondence to Sheng Liu (刘盛).

Additional information

This work has been supported by the Science Technology Department of Zhejiang Province (No. LGG19F020010), the Department of Education of Zhejiang Province (No. Y201329938), and the National Natural Science Foundation of China (No. 61876167).


About this article


Cite this article

Liao, Ff., Cao, Ky., Zhang, Yx. et al. High-dimensional features of adaptive superpixels for visually degraded images. Optoelectron. Lett. 15, 231–235 (2019). https://doi.org/10.1007/s11801-019-9008-2
