Prof. Dr. Felix Naumann

Music Walks

Generative Deep Learning has received considerable attention in the last few years. This development has sparked renewed interest in an old philosophical question:

Can artificial intelligence produce art?
or, more generally,
Can machines be creative?

In last year’s very successful Master Project “Art Generation with GANs”, we were able to demonstrate that current Generative Adversarial Networks (GANs) can indeed produce high-quality artworks. This year, we want to experiment with the generation of music using deep learning. Together with the Museum Barberini and producer/composer Henrik Schwarz, we want to push the boundaries of AI and art even further. For the anniversary of the Museum Barberini, an experiential musical level will be added to the permanent impressionist collection. It combines three components:

  1. The artistic ideas of the impressionists,
  2. Contemporary electronic music, and
  3. Innovative technology based on artificial intelligence.

The result shall be a museum experience that is unique in the world, in which visitors can experience the collection in an audio-visual combination that is different and individual with each visit.

Project Outline

Through the layering of several musical levels as well as subtle changes of mood and the conveyance of a fleeting atmosphere, the combination process will draw on the working methods of the impressionists. Influenced by external data, the music will be modulated differently again and again, and combined into new music. Parameters affecting the composition include:

  1. Movement patterns of the visitors
  2. The respective galleries and artworks of the exhibition
  3. The individual length of stay of the visitors
  4. Other signals, e.g., from smartphone sensors

Visitors can thus influence the modulation of the music themselves, and enjoy the Impressionism collection with a new and individual composition created in real time during each visit.
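To make the idea concrete, the parameters listed above could be derived from visitor signals roughly as follows. This is a minimal sketch; the signal names, value ranges, and weightings are illustrative assumptions, not the project's actual design.

```python
def modulation_params(steps_per_minute, dwell_seconds, hour_of_day):
    """Derive illustrative modulation parameters from visitor signals.

    All thresholds and weights below are assumptions for this sketch.
    """
    movement = min(steps_per_minute / 120.0, 1.0)   # 0 = standing still, 1 = brisk walk
    calmness = min(dwell_seconds / 600.0, 1.0)      # a long stay in front of a work reads as calm
    return {
        "tempo_bpm": round(70 + 50 * movement),                    # 70-120 BPM
        "mood": round(0.7 * movement + 0.3 * (1 - calmness), 2),   # 0 = calm .. 1 = energetic
        "brightness": 0.3 if hour_of_day < 12 else 0.6,            # coarse time-of-day tint
    }
```

A fast-moving visitor who lingers nowhere would get an energetic, up-tempo modulation; a visitor dwelling long in one gallery would get a calmer one.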

Project Approach

Deep Learning methods are able to compose/generate music given enough training data. In this project, we want to explore these possibilities using GANs and transformers. Further, we want to be able to control the generated music based on external input. To this end, we need to investigate conditional GANs and approaches to include and reuse existing music snippets.
One particular challenge is the application scenario: we want to produce the music in real time and individually for each visitor. Thus, we will probably need to pre-compute specific parts and then arrange and merge them on the fly.
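The pre-compute-then-arrange idea can be sketched as a simple selection loop: snippets are generated offline and tagged, and at runtime the snippet closest to the current target mood is chosen for each gallery. The snippet library, file names, and mood tags below are hypothetical placeholders.

```python
# Hypothetical library of pre-computed snippets per gallery, each tagged
# with a mood value in [0, 1]; names and tags are illustrative only.
SNIPPETS = {
    "gallery_1": [("g1_calm", 0.2), ("g1_mid", 0.5), ("g1_bright", 0.8)],
    "gallery_2": [("g2_calm", 0.1), ("g2_mid", 0.6), ("g2_bright", 0.9)],
}

def next_snippet(gallery, target_mood):
    """Pick the pre-computed snippet whose mood tag is closest to the target."""
    name, _ = min(SNIPPETS[gallery], key=lambda s: abs(s[1] - target_mood))
    return name

def arrange(visit_events):
    """Arrange snippets on the fly for a sequence of (gallery, mood) events."""
    return [next_snippet(gallery, mood) for gallery, mood in visit_events]
```

In a real deployment the selected snippets would additionally have to be beat-matched and crossfaded; this sketch only covers the selection step.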


The HPI Information Systems chair and the Information Profiling and Retrieval Group (ZBW + CAU Kiel) will jointly supervise this project. If you have any questions, please do not hesitate to contact Prof. Dr. Ralf Krestel or Alejandro Sierra.