
On Distance Mapping from non-Euclidean Spaces to Euclidean Spaces

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10410)

Abstract

Most machine learning techniques traditionally rely on some form of Euclidean distance, computed in a Euclidean space (typically \(\mathbb {R}^{d}\)). In more general cases, the data may not live in a classical Euclidean space, and it can be difficult (or impossible) to find a direct representation for it in \(\mathbb {R}^{d}\). A distance mapping from the non-Euclidean space to a canonical Euclidean space is therefore needed. In this paper we present a possible distance-mapping algorithm, designed so that the behavior of the pairwise distances in the mapped Euclidean space is preserved relative to those in the original non-Euclidean space. Experimental results of the mapping algorithm are discussed on a specific type of dataset consisting of timestamped GPS coordinates. We compare the original and mapped distances and discuss the standard errors of the mapped distributions.
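The abstract does not specify the paper's actual algorithm, so the following is only a minimal sketch of the general idea it describes: given a pairwise distance matrix computed in the original (possibly non-Euclidean) space, find points in \(\mathbb {R}^{d}\) whose Euclidean distances approximate it. The sketch uses classical multidimensional scaling (MDS), one standard technique for this; the function name classical_mds and the toy distance matrix are illustrative assumptions, not the authors' method.

import numpy as np

def classical_mds(D: np.ndarray, d: int) -> np.ndarray:
    """Embed an (n x n) symmetric distance matrix D into R^d via classical MDS."""
    n = D.shape[0]
    # Double-center the squared distances: B = -1/2 * J D^2 J,
    # where J = I - (1/n) 11^T is the centering matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of the symmetric Gram matrix B.
    eigvals, eigvecs = np.linalg.eigh(B)
    # Keep the d largest eigenvalues; negative ones indicate that D is not
    # exactly Euclidean-embeddable and are clipped to zero.
    order = np.argsort(eigvals)[::-1][:d]
    lam = np.clip(eigvals[order], 0.0, None)
    return eigvecs[:, order] * np.sqrt(lam)

# Hypothetical toy distance matrix (standing in for distances derived from
# timestamped GPS coordinates, as in the paper's experiments).
D = np.array([[0.0, 2.0, 3.0],
              [2.0, 0.0, 2.5],
              [3.0, 2.5, 0.0]])
X = classical_mds(D, d=2)
# Compare the pairwise Euclidean distances in the embedding against D.
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.round(D_hat, 2))

For a distance matrix that is exactly Euclidean-embeddable, the recovered distances match the input; for genuinely non-Euclidean distances, the eigenvalue clipping makes the embedding the best rank-d Euclidean approximation in the least-squares sense.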


Copyright information

© IFIP International Federation for Information Processing 2017

Authors and Affiliations

  1. Nokia Bell Labs, Espoo, Finland
  2. Arcada University of Applied Sciences, Helsinki, Finland
  3. The University of Iowa, Iowa City, USA
