Deep Learning Methodologies for Human Activity Recognition

Alhumayyani, Maha Mohammed; Mounir, Mahmoud; Ismail, Rasha

Abstract


Human activity recognition (HAR) has attracted considerable attention in recent years, driven by high demand across several application domains; the HAR process uses time-series sensor data to infer activities. In this paper, three deep learning methodologies based on recurrent neural network (RNN) architectures are proposed: long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), and the gated recurrent unit (GRU). The proposed methodologies classify six main movements with acceptable performance. Data were collected from 30 subjects, each performing the six activities. Five classifiers are applied to evaluate the methodologies: random trees, random forests, k-nearest neighbor, artificial neural network (ANN), and support vector machine (SVM). The highest accuracy, 95.2155%, was achieved using Bi-LSTM with the ANN classifier. Several performance measurements are reported to assess the methodologies' recognition capability, and a comparison with related work shows that the proposed methods provide reasonable accuracy for HAR.
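The record does not include implementation details. As a rough illustration of the kind of Bi-LSTM pipeline the abstract describes, the following is a minimal sketch assuming a Keras/TensorFlow implementation and fixed-length inertial sensor windows (e.g. 128 timesteps x 9 channels, as in the UCI smartphone HAR dataset); the layer sizes, dropout rate, and the softmax head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal Bi-LSTM sketch for HAR (assumed Keras/TensorFlow setup, not the
# authors' exact model). Inputs are fixed-length sensor windows, e.g.
# 128 timesteps x 9 accelerometer/gyroscope channels; outputs are 6 activities.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, CHANNELS, NUM_ACTIVITIES = 128, 9, 6  # illustrative dimensions

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    # Bidirectional LSTM reads each sensor window forwards and backwards.
    layers.Bidirectional(layers.LSTM(64, return_sequences=False)),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_ACTIVITIES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy arrays standing in for windowed sensor signals and activity labels.
X = np.random.randn(32, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_ACTIVITIES, size=(32,))
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
print(model.predict(X[:2]).shape)  # (2, 6) class probabilities
```

In the paper, the recurrent models are evaluated with separate classifiers (random trees, random forests, k-NN, ANN, SVM); here a dense softmax head stands in for that stage purely for brevity.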


Other data

Title Deep Learning Methodologies for Human Activity Recognition
Authors Alhumayyani, Maha Mohammed; Mounir, Mahmoud; Ismail, Rasha
Keywords Deep learning; Gated Recurrent Unit; Human activity recognition; Long Short-Term Memory; Recurrent Neural Networks; Sensors; Smartphone
Issue Date 1-Jan-2021
Conference Proceedings of the 2021 IEEE 10th International Conference on Intelligent Computing and Information Systems (ICICIS 2021)
ISBN 9781665440769
DOI 10.1109/ICICIS52592.2021.9694111
Scopus ID 2-s2.0-85127032005




Items in Ain Shams Scholar are protected by copyright, with all rights reserved, unless otherwise indicated.