
Clustering Evaluation in High-Dimensional Data

  • Chapter in: Unsupervised Learning Algorithms

Abstract

Clustering evaluation plays an important role in unsupervised learning systems, since it is often necessary to automatically quantify the quality of generated cluster configurations. This is especially useful for comparing the performance of different clustering algorithms, as well as for determining the optimal number of clusters in algorithms that do not estimate it internally. Many clustering quality indexes have been proposed over the years, and different indexes are used in different contexts; there is no unifying protocol for clustering evaluation, so it is often unclear which index is appropriate in a given case. In this chapter, we review existing clustering quality measures and evaluate them in the challenging context of high-dimensional data clustering. High-dimensional data is sparse and distances tend to concentrate, which can affect the applicability of various clustering quality indexes. We analyze the stability and discriminative power of a set of standard clustering quality measures under increasing data dimensionality. Our evaluation shows that the curse of dimensionality affects different clustering quality indexes in different ways and that some are preferable when determining clustering quality in many dimensions.
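
The distance-concentration effect the abstract refers to can be illustrated with a minimal, self-contained sketch (not from the chapter; the function name and parameters are our own choices): as dimensionality grows, the relative contrast between a query point's farthest and nearest neighbor shrinks, which is what undermines distance-based quality indexes such as the silhouette in many dimensions.

```python
import math
import random

def pairwise_distance_spread(n_points, n_dims, seed=0):
    """Relative contrast (d_max - d_min) / d_min of Euclidean distances
    from one query point to n_points uniformly random points in the
    unit hypercube. Small values mean distances have concentrated."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(n_dims)]
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(n_dims)]
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(query, p)))
        dists.append(d)
    return (max(dists) - min(dists)) / min(dists)

if __name__ == "__main__":
    # The spread shrinks steadily as the number of dimensions grows.
    for d in (2, 10, 100, 1000):
        print(d, pairwise_distance_spread(500, d))
```

Running this shows the contrast dropping by orders of magnitude between 2 and 1000 dimensions, which is the behavior whose impact on standard quality indexes the chapter evaluates.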




Author information

Correspondence to Nenad Tomašev.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Tomašev, N., Radovanović, M. (2016). Clustering Evaluation in High-Dimensional Data. In: Celebi, M., Aydin, K. (eds) Unsupervised Learning Algorithms. Springer, Cham. https://doi.org/10.1007/978-3-319-24211-8_4


  • DOI: https://doi.org/10.1007/978-3-319-24211-8_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24209-5

  • Online ISBN: 978-3-319-24211-8

  • eBook Packages: Engineering; Engineering (R0)
