Analysis of Deep Transfer Learning Using DeepConvLSTM for Human Activity Recognition from Wearable Sensors. Kalabakov, Stefan; Gjoreski, Martin; Gjoreski, Hristijan; Gams, Matjaz in Informatica (2021). 45(2)
Human Activity Recognition (HAR) from wearable sensors has gained significant attention in the last few decades, largely because of its potential healthcare benefits. For many years, HAR was performed using classical machine learning approaches that require feature extraction. With the resurgence of deep learning, a major shift occurred, and HAR researchers are now mainly investigating different kinds of deep neural networks. However, deep learning requires access to large amounts of labeled examples, which in the field of HAR are expensive to obtain, both in terms of time and effort. Another challenge is that training and testing data in HAR can differ because different people have personal preferences when performing the same activity. To mitigate these problems, in this paper we explore transfer learning, a paradigm for transferring knowledge from a source domain to a related target domain. More specifically, we explore the effects of transferring knowledge between two open-source datasets, Opportunity and JSI-FOS, using weight transfer for the DeepConvLSTM architecture. We also examine how this transfer performs at different amounts of labeled data from the target domain. The experiments showed that it is beneficial to transfer the weights of fewer layers, and that deep transfer learning can outperform a domain-specific deep end-to-end model in specific circumstances. Finally, we show that deep transfer learning is a viable alternative to classical machine learning approaches, as it produces comparable results and does not require feature extraction.
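The weight-transfer idea described in the abstract can be sketched as follows. This is a hypothetical, framework-free illustration (not the authors' code): each model is represented as an ordered dict mapping layer names to weight lists, and only the first `n_layers` layers are copied from the source-domain model to the target-domain model, whose remaining layers would then be trained on target-domain data.

```python
# Hypothetical sketch of layer-wise weight transfer between two models,
# represented as ordered dicts: layer name -> weight list.

def transfer_weights(source, target, n_layers):
    """Copy the weights of the first n_layers layers from source to target.

    The remaining target layers keep their own (e.g. randomly initialised)
    weights and would be fine-tuned on the target-domain data.
    """
    transferred = dict(target)  # start from the target's own parameters
    for name in list(source)[:n_layers]:
        transferred[name] = list(source[name])  # copy, don't alias
    return transferred

# Toy "DeepConvLSTM-like" parameter dicts: convolutional layers first,
# then recurrent and dense layers (layer names are illustrative).
source_model = {"conv1": [0.1], "conv2": [0.2], "lstm": [0.3], "dense": [0.4]}
target_model = {"conv1": [9.0], "conv2": [9.0], "lstm": [9.0], "dense": [9.0]}

# Transfer only the two convolutional layers; the paper reports that
# transferring the weights of fewer layers tends to be beneficial.
new_model = transfer_weights(source_model, target_model, n_layers=2)
```

In a real DeepConvLSTM the same pattern would be applied to the framework's state dict, with the non-transferred layers left trainable.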
Head-AR: Human Activity Recognition with Head-Mounted IMU Using Weighted Ensemble Learning. Gjoreski, Hristijan; Kiprijanovska, Ivana; Stankoski, Simon; Kalabakov, Stefan; Broulidakis, John; Nduka, Charles; Gjoreski, Martin in Activity and Behavior Computing, M. A. R. Ahad, S. Inoue, D. Roggen, K. Fujinami (eds.) (2021). 153–167.
This paper describes the machine learning (ML) method Head-AR, which achieved the highest performance in a competition with 11 other algorithms and won the Emteq Activity Recognition challenge. The goal of the challenge was to recognize eight activities of daily life from a head-mounted device, which provided data from an IMU comprising a 3-axis accelerometer, gyroscope, and magnetometer. The challenge dataset was collected from four subjects, one of whom was held out as the test subject for the challenge evaluation. The method processes the stream of sensor data and recognizes one of the eight activities every two seconds. It is based on weighted ensemble learning, which combines three models: (i) a dynamic time warping classification model, which analyzes raw accelerometer data; (ii) a classification model that uses expert features; and (iii) a classification model that uses features selected by a feature selection algorithm. To compute the final output, the predictions of the three models are combined using a novel weighting scheme. The method achieved an F1-score of 61.25% on the competition’s evaluation.
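The combination step can be illustrated with a generic weighted majority vote. The abstract does not specify the paper's novel weighting scheme, so this sketch simply sums hypothetical per-model weights (e.g. validation scores) for each predicted label and picks the label with the highest total; all names and values below are illustrative.

```python
# Hypothetical weighted-vote combiner for an ensemble of classifiers.
# This is NOT the paper's novel weighting scheme, only a generic baseline.
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-model class predictions into one final label.

    predictions: list of class labels, one per model
    weights: list of per-model weights (e.g. validation performance)
    """
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)

# Three hypothetical model outputs for one two-second window:
# (i) DTW model, (ii) expert-feature model, (iii) selected-feature model.
preds = ["walking", "sitting", "walking"]
weights = [0.5, 0.9, 0.6]
final = weighted_vote(preds, weights)  # "walking": 0.5 + 0.6 > 0.9
```

A scheme like this lets a strong single model be outvoted only when the other models agree with enough combined weight.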