Inferring objects from a multitude of oscillations

Abstract

Oscillations often carry information about their origin. For instance, electrical oscillations measured by electroencephalograms and electrocardiograms provide clues to cognitive disorders and cardiac dysfunction, respectively. In particular, vibrations in air pressure, i.e., sounds, convey rich information about the surroundings. Here, we consider the problem of inferring the types of coins from the sounds of their collisions, and we search for mechanisms that make such an inference possible. By devising a Bayesian learning algorithm and by training a deep neural network, we reveal that optimizing the inference naturally leads both machines to select frequencies at which individual coins exhibit characteristic peaks in their sound spectra, indicating that the inference can be made efficiently by detecting the resonance sounds inherent to different coins. Both learning machines achieve high accuracy in correctly identifying coins. The developed methods are general and may be applicable not only to other sound identification tasks but also to various oscillatory phenomena, such as correlating brain activity with behavior in brain–computer interfaces.
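The core idea of the abstract — identify a coin by the frequency at which its collision sound resonates — can be illustrated with a minimal sketch. Everything below is illustrative, not the paper's actual method: the coin labels, resonance frequencies, damping constant, and nearest-peak classifier are assumptions chosen for the example, whereas the paper optimizes the frequency selection via Bayesian learning and a deep neural network.

```python
import numpy as np

# Hypothetical resonance frequencies (Hz) for three coin types;
# these values are made up for illustration only.
RESONANCES = {"coin_A": 5200.0, "coin_B": 7400.0, "coin_C": 9600.0}
FS = 44100  # sampling rate (Hz)

def synth_collision(f0, duration=0.1, snr=5.0, rng=None):
    """Synthesize a damped sinusoid mimicking a coin's collision sound."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.arange(int(FS * duration)) / FS
    signal = np.exp(-30.0 * t) * np.sin(2 * np.pi * f0 * t)
    noise = rng.normal(scale=signal.std() / snr, size=t.size)
    return signal + noise

def dominant_frequency(x):
    """Return the frequency of the largest peak in the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / FS)
    return freqs[np.argmax(spectrum)]

def classify(x):
    """Assign the coin whose assumed resonance is nearest the dominant peak."""
    f = dominant_frequency(x)
    return min(RESONANCES, key=lambda c: abs(RESONANCES[c] - f))

# Demo: each synthetic collision is recovered from its spectral peak.
for coin, f0 in RESONANCES.items():
    print(coin, "->", classify(synth_collision(f0)))
```

The sketch captures only the final inference step suggested by the abstract (peak detection at informative frequencies); the paper's contribution is that such discriminative frequencies emerge automatically when the inference itself is optimized.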


Figs. 1–8 (figures not reproduced here)


Acknowledgements

We thank Hiromichi Suetani, Tohru Aridome, Hiroki Hata, and two anonymous reviewers for their advice and comments. This study was supported in part by a Grant-in-Aid for Scientific Research to SS from MEXT Japan (26280007) and by JST CREST.

Author information

Corresponding author

Correspondence to Shigeru Shinomoto.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Furukawa, M., Shinomoto, S. Inferring objects from a multitude of oscillations. Neural Comput & Applic 30, 2471–2478 (2018). https://doi.org/10.1007/s00521-016-2752-3


Keywords

  • Sound identification
  • Bayesian inference
  • Deep learning
  • Resonance frequency