Abstract
Countries are increasingly interested in spacecraft surveillance and recognition, which play an important role in on-orbit maintenance, space docking, and other applications. Traditional detection methods, including radar, have many restrictions, such as excessive costs and energy supply problems. For many on-orbit servicing spacecraft, image recognition is a simple but relatively accurate method for obtaining sufficient position and direction information to offer services. However, to the best of our knowledge, few practical machine-learning models focusing on the recognition of spacecraft feature components have been reported. In addition, it is difficult to find a sufficient number of on-orbit images with which to train or evaluate such a model. In this study, we first created a new dataset containing numerous artificial images of on-orbit spacecraft with labeled components. Our base images were derived from 3D Max and STK software and cover many satellite types and postures. Considering real-world illumination conditions and imperfect camera observations, we developed a degradation algorithm that enabled us to produce thousands of artificial spacecraft images. The feature components of the spacecraft in all images were labeled manually. We found that directly applying the DeepLab V3+ model leads to poor edge recognition; poorly defined edges provide imprecise position or direction information and degrade the performance of on-orbit services. Thus, the edge information of the target was taken as a supervisory guide and used to develop the proposed Edge Auxiliary Supervision DeepLab Network (EASDN). The main idea of EASDN is to provide a new edge auxiliary loss by calculating the L2 loss between the predicted and ground-truth edge masks during training. Our extensive experiments demonstrate that the network performs well both on our benchmark and on real on-orbit spacecraft images from the Internet. Furthermore, its device usage and processing time meet the demands of engineering applications.
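To make the edge auxiliary loss concrete, the following is a minimal PyTorch-style sketch of how such a combined training objective could be assembled. It is illustrative only and not the authors' implementation; the tensor names (`seg_logits`, `edge_logits`, `seg_target`, `edge_target`) and the weighting factor `lam` are assumptions introduced here.

```python
# Illustrative sketch of an edge-auxiliary training loss (not the authors' code).
# Assumed tensor shapes:
#   seg_logits:  (B, C, H, W)  class scores from the segmentation head
#   edge_logits: (B, 1, H, W)  predicted edge mask from an auxiliary edge head
#   seg_target:  (B, H, W)     ground-truth component labels
#   edge_target: (B, 1, H, W)  ground-truth edge mask derived from the label map
import torch
import torch.nn.functional as F

def edge_auxiliary_loss(seg_logits, edge_logits, seg_target, edge_target, lam=1.0):
    # Standard cross-entropy loss for the semantic segmentation branch.
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    # Edge auxiliary term: L2 (mean squared error) between the predicted
    # edge mask and the ground-truth edge mask, as described in the abstract.
    edge_loss = F.mse_loss(torch.sigmoid(edge_logits), edge_target.float())
    # Weighted sum; lam is a placeholder hyperparameter, not a value from the paper.
    return seg_loss + lam * edge_loss
```

In practice, the ground-truth edge mask could be obtained by applying an edge operator (e.g., Sobel or Laplacian) to the component label map before training.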
Acknowledgements
The authors acknowledge support from the National Natural Science Foundation of China (No. 11772023) and the Science and Technology on Space Intelligent Control Laboratory (No. KGJZDSYS-2018-14).
Ethics declarations
The authors have no competing interests to declare that are relevant to the content of this article.
Additional information
Linwei Qiu received his B.S. degree in detection, guidance, and control techniques from Beihang University in 2018. He is currently pursuing his M.S. degree in control science and engineering at the School of Astronautics, Beihang University. His research interests include AI applications and intelligent control.
Liang Tang received his Ph.D. degree in spacecraft design from Beihang University in 2005. He is currently employed as a researcher at the Key Laboratory of Space Intelligent Control Technology. He is also a doctoral supervisor, model deputy chief designer, and senior director researcher in the field of complex spacecraft attitude control. He has directed and completed multiple major national projects and experiments in space engineering.
Rui Zhong received his Ph.D. degree in spacecraft design from Beihang University in 2011. He was a postdoctoral fellow in the Department of Earth and Space Science and Engineering at York University from 2011 to 2013. In 2013, he accepted a position as assistant professor at the School of Astronautics at Beihang University, Beijing, China, where he is currently an associate professor. His research interests include tethered satellite systems, spatial multi-flexible body dynamics, spacecraft dynamics and control, and aerospace applications of machine learning.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Qiu, L., Tang, L. & Zhong, R. Toward the recognition of spacecraft feature components: A new benchmark and a new model. Astrodyn 6, 237–248 (2022). https://doi.org/10.1007/s42064-021-0103-3