Abstract
This study presents a novel, highly efficient superpixel algorithm, the depth-fused adaptive superpixel (DFASP), which generates accurate superpixels in degraded images. In many applications, particularly in real scenes, vision degradation such as motion blur, overexposure, and underexposure often occurs. Well-known color-based superpixel algorithms cannot produce accurate superpixels in degraded images because vision degradation makes the color information ambiguous. To eliminate this ambiguity, we use both depth and color information to generate superpixels. We map the depth and color information to a high-dimensional feature space and then develop a fast multilevel clustering algorithm to produce superpixels. Furthermore, we design an adaptive mechanism that automatically reweights the color and depth information during pixel clustering. Experimental results demonstrate that DFASP outperforms state-of-the-art superpixel methods in boundary recall, undersegmentation error, run time, and achievable segmentation accuracy.
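The core idea of clustering pixels on fused depth and color features can be illustrated with a minimal SLIC-style sketch. The function below (a hypothetical simplification, not the authors' DFASP: it uses plain k-means-style iterations on a joint color-depth-position feature space, with a `depth_weight` parameter standing in for the paper's adaptive reweighting mechanism) shows how adding a depth channel to the feature vector lets clusters separate regions whose colors are ambiguous.

```python
import numpy as np

def rgbd_superpixels(color, depth, n_segments=16, compactness=10.0,
                     depth_weight=1.0, n_iters=5):
    """SLIC-style superpixels on joint (color, depth, x, y) features.

    A simplified sketch: depth_weight balances depth against color,
    loosely mirroring the idea of fusing depth to disambiguate
    degraded color. Not the authors' exact DFASP algorithm.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Spatial scale so compactness trades position against appearance, as in SLIC.
    step = np.sqrt(h * w / n_segments)
    feats = np.concatenate([
        color.reshape(-1, 3).astype(float),
        depth_weight * depth.reshape(-1, 1).astype(float),
        (compactness / step) * xs.reshape(-1, 1),
        (compactness / step) * ys.reshape(-1, 1),
    ], axis=1)
    # Seed cluster centers on a regular grid over the image.
    grid = int(np.ceil(np.sqrt(n_segments)))
    cy = np.linspace(0, h - 1, grid).astype(int)
    cx = np.linspace(0, w - 1, grid).astype(int)
    idx = (np.repeat(cy, grid) * w + np.tile(cx, grid))[:n_segments]
    centers = feats[idx].copy()
    labels = np.zeros(h * w, dtype=int)
    for _ in range(n_iters):
        # Assign each pixel to its nearest center in the joint feature space.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)
```

On a synthetic RGB-D image whose two halves share the same (degraded, uniform) color but differ strongly in depth, the depth term dominates the feature distance, so no superpixel straddles the depth boundary even though color alone gives no cue.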
Additional information
This work has been supported by the Science Technology Department of Zhejiang Province (No. LGG19F020010), the Department of Education of Zhejiang Province (No. Y201329938), and the National Natural Science Foundation of China (No. 61876167).
Cite this article
Liao, Ff., Cao, Ky., Zhang, Yx. et al. High-dimensional features of adaptive superpixels for visually degraded images. Optoelectron. Lett. 15, 231–235 (2019). https://doi.org/10.1007/s11801-019-9008-2