
Non-rigid Reconstruction of Casting Process with Temperature Feature


Abstract

Off-line reconstruction of rigid scenes has made great progress over the past decade, but on-line reconstruction of non-rigid scenes remains a challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process that lacks geometric features. To reconstruct the casting process robustly, an on-line fusion strategy is proposed for its dynamic reconstruction. First, the geometric and flow features of the casting are parameterized as a TSDF (truncated signed distance field) volume; this parameterization enables real-time tracking and optimal deformation of the casting process. Second, the data structure of the volume grid is extended with a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel, which allows the casting temperature to be tracked dynamically through the deformation stages. Then, sparse RGB features are extracted from the casting scene to establish correspondences between the geometric representation and the depth constraints; the extracted color data ensures robust tracking of the flowing motion of the casting. Finally, the optimal deformation of the target space is formulated as a nonlinear regularized variational optimization problem, which yields a smooth and optimal deformation of the casting process. Experimental results show that the proposed method reconstructs the casting process robustly and reduces drift during non-rigid reconstruction of the casting.
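
The abstract outlines the extended voxel data structure and the temperature interpolation step without giving implementation details. As an illustrative sketch only, not the authors' implementation, the Python snippet below assumes a dense cubic TSDF grid whose voxels carry a (TSDF, weight, temperature) triple, a running-average fusion rule for new observations, and trilinear interpolation to query the temperature at an arbitrary continuous point. All names (ThermalTSDFVolume, integrate_voxel, interpolate_temperature) and parameter values are hypothetical.

import numpy as np


class ThermalTSDFVolume:
    """Dense TSDF volume whose voxels also carry a temperature value.

    Each voxel stores (tsdf, weight, temperature); the temperature at an
    arbitrary point is obtained by trilinear interpolation over the eight
    surrounding voxels.
    """

    def __init__(self, resolution=64, voxel_size=0.01):
        self.res = resolution
        self.voxel_size = voxel_size          # edge length of a voxel in metres
        shape = (resolution,) * 3
        self.tsdf = np.ones(shape, dtype=np.float32)          # truncated signed distance
        self.weight = np.zeros(shape, dtype=np.float32)       # fusion weight
        self.temperature = np.zeros(shape, dtype=np.float32)  # temperature channel

    def integrate_voxel(self, idx, sdf, temp, trunc=0.03, max_weight=64.0):
        """Fuse one observation (signed distance, temperature) into voxel idx."""
        d = np.clip(sdf / trunc, -1.0, 1.0)   # truncate and normalise the SDF
        w_old = self.weight[idx]
        self.tsdf[idx] = (self.tsdf[idx] * w_old + d) / (w_old + 1.0)
        self.temperature[idx] = (self.temperature[idx] * w_old + temp) / (w_old + 1.0)
        self.weight[idx] = min(w_old + 1.0, max_weight)

    def interpolate_temperature(self, point):
        """Trilinearly interpolate the temperature at a continuous 3D point (metres)."""
        g = np.asarray(point, dtype=np.float32) / self.voxel_size
        i0 = np.clip(np.floor(g).astype(int), 0, self.res - 2)
        fx, fy, fz = np.clip(g - i0, 0.0, 1.0)   # fractional offsets inside the cell
        x, y, z = i0
        t = self.temperature
        # interpolate along x, then y, then z
        c00 = t[x, y, z] * (1 - fx) + t[x + 1, y, z] * fx
        c10 = t[x, y + 1, z] * (1 - fx) + t[x + 1, y + 1, z] * fx
        c01 = t[x, y, z + 1] * (1 - fx) + t[x + 1, y, z + 1] * fx
        c11 = t[x, y + 1, z + 1] * (1 - fx) + t[x + 1, y + 1, z + 1] * fx
        c0 = c00 * (1 - fy) + c10 * fy
        c1 = c01 * (1 - fy) + c11 * fy
        return float(c0 * (1 - fz) + c1 * fz)


# Example: fuse one (SDF, temperature) sample and query the field nearby.
vol = ThermalTSDFVolume(resolution=64, voxel_size=0.01)
vol.integrate_voxel((10, 10, 10), sdf=0.005, temp=930.0)
print(vol.interpolate_temperature((0.10, 0.10, 0.10)))

If such a field were added to a standard TSDF pipeline, the same trilinear weights used to sample the distance field during ray casting could be reused for the temperature channel, so the extra attribute would add little per-query cost.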



Acknowledgements

This work was supported by the National High-tech R&D Program (Grant No. 2014AA7031010B) and the Science and Technology Project of the Thirteenth Five-Year Plan (Grant No. 2016345).

Author information

Corresponding author

Correspondence to Jinhua Lin.

About this article

Cite this article

Lin, J., Wang, Y., Li, X. et al. Non-rigid Reconstruction of Casting Process with Temperature Feature. 3D Res 8, 32 (2017). https://doi.org/10.1007/s13319-017-0143-x

