
Automatic Music Generation by Deep Learning

  • Conference paper
Distributed Computing and Artificial Intelligence, 15th International Conference (DCAI 2018)

Abstract

This paper presents a model capable of automatically generating and completing musical compositions. The model is based on generative machine learning and deep learning paradigms, in particular recurrent neural networks. Related works treat music as text in a natural language, requiring the network to learn the full syntax of the sheet music and the dependencies among its symbols. This demands very intensive training and often leads to overfitting. This paper contributes a data preprocessing step that eliminates the most complex dependencies, allowing the musical content to be abstracted from its syntax. Moreover, a web application based on the trained models is presented. The tool allows inexperienced users to generate music automatically, either from scratch or from a given fragment of sheet music.
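The preprocessing idea in the abstract can be illustrated with a minimal sketch (a hypothetical tokenizer, not the authors' code): when tunes are stored in a textual notation such as ABC, header lines and bar lines are pure syntax, so stripping them leaves a flat sequence of note tokens for a sequence model to learn, instead of forcing the network to learn the notation's grammar as well.

```python
import re

def abc_to_tokens(abc_text):
    """Strip ABC-notation syntax so only musical content remains.

    Header lines (e.g. 'X:1', 'K:C') are skipped entirely, and only
    note tokens (optional accidental, note letter, octave/length marks)
    are kept, discarding bar lines and other structural symbols.
    """
    notes = []
    for line in abc_text.splitlines():
        if re.match(r"^[A-Za-z]:", line):  # header line like 'K:C'
            continue
        notes += re.findall(r"[_^=]?[A-Ga-gz][,']*\d*", line)
    return notes

tune = """X:1
T:Example
K:C
C D E F|G A B c|"""
print(abc_to_tokens(tune))  # → ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'c']
```

A recurrent network trained on such token sequences only has to model note-to-note dependencies, which is the kind of simplification the paper's preprocessing aims for.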


Notes

  1. https://ifdo.ca/~seymour/nottingham/nottingham.html.

  2. http://www-etud.iro.umontreal.ca/~boulanni/icml2012.


Acknowledgments

This research work is supported by the Universidad Politécnica de Madrid under the educational innovation project “Aprendizaje basado en retos para la Biología Computacional y la Ciencia de Datos”, code IE1718.1003, and by the Spanish Ministry of Economy, Industry and Competitiveness under the R&D project “Datos 4.0: Retos y soluciones” (TIN2016-78011-C4-4-R, AEI/FEDER, UE).

Author information

Correspondence to Emilio Serrano.


Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

García, J.C., Serrano, E. (2019). Automatic Music Generation by Deep Learning. In: De La Prieta, F., Omatu, S., Fernández-Caballero, A. (eds) Distributed Computing and Artificial Intelligence, 15th International Conference. DCAI 2018. Advances in Intelligent Systems and Computing, vol 800. Springer, Cham. https://doi.org/10.1007/978-3-319-94649-8_34
