Brain data is often visualized through colourful fMRI images or lines of EEG plots. However, much of what goes on in the brain cannot be intuitively understood or analysed by looking at static images alone. There are dynamics, there are "rhythms" in the brain. With 3D sonification models for headphones, or with multiple speakers arranged in a room, brain data can be presented in an auditory manner, allowing for insights into neuronal activity that visual representations cannot provide.
The aim of a project by Lukas Hartmann, Tim Strauch, Philipp Steigerwald and Luca Hilbrich (from the Technical University of Berlin) is to use EEG data to create a 3D sound installation which presents rhythms and topographies of brain activity in a way that is intuitively understandable, even without extensive prior training in neuroscience. To control audio parameters, the team tracks the energy in different EEG frequency bands over time (alpha, low beta, high beta, etc.). The distribution of audio channels mirrors the EEG channel locations on the skull: EEG activity captured at frontal positions triggers speakers at the front of the room, activity from lateral/medial positions is represented by speakers in the middle of the room, and signals from occipital positions (the back of the head) by speakers at the back of the room.
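As a rough illustration of the band-tracking step, the following Python sketch estimates the energy in a few EEG frequency bands for one channel window using Welch's method. The band edges, sampling rate, and window length are assumptions for the example; the project's actual signal chain is not documented here.

```python
import numpy as np
from scipy.signal import welch

# Assumed band edges in Hz; the project's exact definitions may differ.
BANDS = {"alpha": (8.0, 12.0), "low_beta": (12.0, 20.0), "high_beta": (20.0, 30.0)}

def band_energies(eeg_window, fs):
    """Estimate the energy per frequency band for one EEG channel window."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    df = freqs[1] - freqs[0]  # frequency resolution of the PSD estimate
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

# Example: 2 s of synthetic data at 250 Hz with a dominant 10 Hz (alpha) tone.
fs = 250
t = np.arange(0, 2, 1 / fs)
window = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_energies(window, fs))  # alpha energy should dominate
```

Tracking these energies over successive windows yields the slowly varying control signals that can then drive audio parameters.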
About Sonification
Sonification is the use of non-speech audio to convey information or perceptualize data. It offers high temporal, spatial, amplitude, and frequency resolution, and can thus be a promising alternative or complement to visualization techniques. It has been used in a wide range of application contexts, including healthcare [1], gravitational waves [2], auditory altimeters [3], and weather sonification [4].
One major concern in sonification projects is the synthesis approach: how can musical cues (such as rhythm or pitch) be used so that listeners understand the given kind of data intuitively? The design of a sonification hinges on its intended application. For example, when the goal is to trace EEG alpha activity in a participant working on a creative task, the acoustic signal for alpha activity needs to stand out in the overall sonic experience.
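One way to make a single band perceptually salient is to map its energy onto pitch, since pitch movement cuts through a static sonic backdrop. The mapping below is a hypothetical illustration, not the project's actual design; the frequency range and the octave-per-doubling rule are assumptions.

```python
import math

def alpha_to_pitch(alpha_energy, baseline, lo_hz=220.0, hi_hz=880.0):
    """Hypothetical mapping: alpha energy at baseline sounds at lo_hz, and
    each doubling of alpha energy raises the tone by one octave, clamped
    to a two-octave range so the cue stays musically plausible."""
    ratio = max(alpha_energy / baseline, 1e-6)
    octaves = min(max(math.log2(ratio), 0.0), math.log2(hi_hz / lo_hz))
    return lo_hz * 2.0 ** octaves

print(alpha_to_pitch(1.0, baseline=1.0))  # 220.0 Hz at resting baseline
print(alpha_to_pitch(2.0, baseline=1.0))  # 440.0 Hz after a doubling
```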
The Approach
The approach used in this project differs from usual sonification applications, where the data is sonified directly (e.g. by controlling the pitch of a sound source). Instead, the data is converted into signals which control the distribution of sound in a 3D space. In the case of EEG, this makes it possible to use the spatial distribution of frequency bands to dictate an acoustic topography. Since the data only controls the distribution of a sound source, the audio input remains variable and open to a variety of applications for diagnostic, educational or artistic purposes.
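A minimal sketch of this idea, assuming a six-speaker layout and a hypothetical grouping of 10-20 electrode positions into front, middle, and back speaker pairs: per-electrode band energies are pooled per region, normalized, and applied as gains to a shared mono source, so the sound "moves" toward the region with the strongest activity. The groupings and gain law are illustrative assumptions, not the project's documented design.

```python
import numpy as np

# Hypothetical assignment of 10-20 electrodes to six room speakers.
SPEAKER_GROUPS = {
    "front_left":  ["Fp1", "F3"], "front_right": ["Fp2", "F4"],
    "mid_left":    ["C3", "T7"],  "mid_right":   ["C4", "T8"],
    "back_left":   ["P3", "O1"],  "back_right":  ["P4", "O2"],
}

def speaker_gains(channel_energy):
    """Pool per-electrode band energies per region and normalize to gains."""
    raw = {spk: sum(channel_energy.get(ch, 0.0) for ch in chans)
           for spk, chans in SPEAKER_GROUPS.items()}
    total = sum(raw.values()) or 1.0
    return {spk: e / total for spk, e in raw.items()}

def distribute(source_block, gains):
    """Scale one mono audio block onto the six speaker channels."""
    return {spk: g * source_block for spk, g in gains.items()}

# Example: strong occipital alpha pulls the sound to the back of the room.
energies = {"O1": 0.9, "O2": 0.8, "Fp1": 0.1, "Fp2": 0.1}
gains = speaker_gains(energies)
out = distribute(np.zeros(512), gains)  # placeholder mono audio block
```

Because the source material is only scaled, not synthesized from the data, any input signal can be substituted without changing the spatial logic.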
Project Outcomes
This project delivers two kinds of outcomes. The first is a web-browser-based binaural rendering for demonstration and research purposes; its sound is optimized for headphones, and it will be available on the project's website soon. It features a built-in dataset to demonstrate the approach of spatial sonification in a virtual room. The second outcome is an installation in a real room, realized with six speakers and a self-written Pure Data program controlling the distribution of sound in the room; the installation will mirror the different regions of the brain, and visitors will be able to walk around "the brain" and experience how the different brainwaves move through it.
About the Team
The project is conducted by Lukas Hartmann, Luca Hilbrich, Philipp Steigerwald and Tim Strauch, master's students in Audio Communication & Technology at the Technische Universität Berlin, who apply their interests in sound production, perception, and performance to the evolving field of Neurodesign in order to explore novel avenues for the sonification of EEG data. The project is supported by Chris Chafe, Director of the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University.
References
[1] Parvizi, J., Gururangan, K., Razavi, B., & Chafe, C. (2018). Detecting silent seizures by their sound. Epilepsia, 59(4), 877-884.
[2] Tech, G. (2016, February 11). LIGO Gravitational Wave Chirp. www.youtube.com/watch
[3] Montgomery, E. T., & Schmitt, R. W. (1997). Acoustic altimeter control of a free vehicle for near-bottom turbulence measurements. Deep Sea Research Part I: Oceanographic Research Papers, 44(6), 1077-1084.
[4] Schuett, J. H., Winton, R. J., Batterman, J. M., & Walker, B. N. (2014, October). Auditory weather reports: demonstrating listener comprehension of five concurrent variables. In Proceedings of the 9th Audio Mostly: A Conference on Interaction With Sound (pp. 1-7).