Emotions are indicators of affective states and play a significant role in human daily life, shaping our thoughts, behaviour, and interactions. Equipping machines with emotional intelligence could thus benefit many areas, for instance by facilitating early detection and prediction of (mental) diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than relying on indirect measurements of other physiological responses initiated by the brain. In recent years, EEG measures have become more prominent with the increasing popularity of non-invasive, portable EEG sensors, making it possible to provide real-time solutions to many research questions in areas such as human-computer interaction (HCI), cognitive science, marketing, and (connected) healthcare. Moreover, significant individual differences in EEG data, as well as changing environmental influences on that data, call for methods that can be trained quickly for each new subject and can adapt to data distribution changes over time.

This thesis presents a real-time emotion classification pipeline that trains separate binary classifiers for the dimensions of Valence and Arousal from an incoming EEG data stream by leveraging online learning techniques. After achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-score than the state of the art on the AMIGOS dataset, the pipeline was applied to a dataset collected with an emotion elicitation experimental framework developed within the scope of this thesis. Following two different protocols, the data of 15 participants were recorded with two different consumer-grade EEG devices while
watching 16 short emotional videos in a controlled environment. In an immediate-label setting, mean F1-scores of 87% and 82% were achieved for Arousal and Valence, respectively. In a live scenario, while being continuously updated on the incoming data stream with delayed labels, the pipeline proved fast enough to deliver predictions in real time. However, the significant gap relative to the classification scores obtained with readily available labels motivates future work that includes more data with frequent delayed labels in live settings.
The results of this project have been submitted to the MDPI journal Sensors. As future work, we plan to extend the pipeline to multimodal sensors, incorporate biofeedback, and increase the cohort size of the curated dataset.