RGB-D Sensors Data Quality Assessment and Improvement for Advanced Applications


Part of the book series: Advances in Computer Vision and Pattern Recognition (ACVPR)

Abstract

Since the advent of the first Kinect as a motion controller for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices, capable of capturing a digital RGB image and the corresponding depth map (RGB-D), have been introduced in the market. Although initially designed for the video gaming market, with the aim of capturing an approximate 3D image of a human body to create gesture-based interfaces, RGB-D sensors' low cost and their ability to gather streams of 3D data in real time at frame rates of 15–30 fps boosted their popularity for several other purposes, including 3D multimedia interaction, robot navigation, 3D body scanning for garment design, and proximity sensing for automotive applications. However, data quality is not the RGB-D sensors' strong point, and additional considerations are needed to maximize the amount of information that can be extracted from the raw data, together with proper criteria for data validation and verification. The present chapter provides an overview of RGB-D sensor technology and an analysis of how random and systematic 3D measurement errors affect the overall 3D data quality in the various technological implementations. Typical applications are also reported, with the aim of providing readers with the basic knowledge and understanding of the potentialities and challenges of this technology.
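The random error component mentioned in the abstract is commonly characterized by acquiring many depth frames of a static scene and computing the temporal standard deviation at each pixel. The sketch below illustrates this idea on synthetic data; the frame size, noise level, and `temporal_depth_noise` helper are illustrative assumptions, not taken from the chapter or from any specific sensor SDK.

```python
import numpy as np

def temporal_depth_noise(frames):
    """Estimate the per-pixel random depth error from N repeated frames of
    a static scene as the temporal standard deviation (same units as the
    input). Pixels with any invalid (zero) reading are masked with NaN."""
    stack = np.asarray(frames, dtype=np.float64)  # shape (N, H, W)
    valid = np.all(stack > 0, axis=0)             # depth 0 = no return
    sigma = stack.std(axis=0, ddof=1)             # unbiased sample std
    return np.where(valid, sigma, np.nan)

# Synthetic example: a flat wall at 1500 mm observed over 30 frames with
# 3 mm Gaussian noise (values chosen only to make the estimator visible).
rng = np.random.default_rng(0)
frames = 1500.0 + rng.normal(0.0, 3.0, size=(30, 48, 64))
sigma_map = temporal_depth_noise(frames)
print(f"mean random depth error: {np.nanmean(sigma_map):.2f} mm")
```

With a real sensor, the same procedure applied at several stand-off distances yields the distance-dependent noise curves discussed in metrological characterizations of these devices; systematic errors, by contrast, require comparison against a reference measurement and do not average out over frames.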



Author information

Correspondence to Pablo Rodríguez-Gonzálvez.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Rodríguez-Gonzálvez, P., Guidi, G. (2019). RGB-D Sensors Data Quality Assessment and Improvement for Advanced Applications. In: Rosin, P., Lai, YK., Shao, L., Liu, Y. (eds) RGB-D Image Analysis and Processing. Advances in Computer Vision and Pattern Recognition. Springer, Cham. https://doi.org/10.1007/978-3-030-28603-3_4

  • DOI: https://doi.org/10.1007/978-3-030-28603-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-28602-6

  • Online ISBN: 978-3-030-28603-3

  • eBook Packages: Computer Science (R0)
