Abstract
Equivalence structure (ES) extraction determines correspondence relations within a dataset or between multiple datasets. Its applications include time-series analysis and preprocessing for imitation learning and transfer learning. Pairwise incremental search (PIS) is currently the fastest method for extracting ESs; however, it can suffer from combinatorial explosion. In this paper, we demonstrate that combinatorial explosion occurs in PIS and propose a new method that avoids this problem. We evaluate the proposed method experimentally; the results show that it is 39 times faster than PIS on synthetic datasets containing a 20-dimensional ES. In an experiment on video datasets, the proposed method obtained a 29-dimensional ES, whereas PIS could not because its memory usage reached the limit at 9 dimensions. In that experiment, the total processing time of the proposed method up to 29 dimensions was 6.3 times shorter than that of PIS up to only 8 dimensions.
Cite this article
Satoh, S., Yamakawa, H. Bypassing combinatorial explosions in equivalence structure extraction. Knowl Inf Syst 63, 2621–2644 (2021). https://doi.org/10.1007/s10115-021-01599-9