Human Activity Recognition Based on Transfer Learning with Spatio-Temporal Representations


Saeedeh Zebhi, SMT Almodarresi, and Vahid Abootalebi

Electrical Engineering Department, Yazd University, Iran

Abstract: A Gait History Image (GHI) is a spatial template that accumulates regions of motion into a single image in which moving pixels are brighter than the others. A new descriptor named Time-Sliced Averaged Gradient Boundary Magnitude (TAGBM) is also designed to capture the temporal variations of motion. The spatial and temporal information of each video can be condensed using these templates. Based on this idea, a new method is proposed in this paper. Each video is split into N and M groups of consecutive frames, and the GHI and TAGBM are computed for each group, resulting in spatial and temporal templates. Transfer learning with fine-tuning is used to classify these templates. The proposed method achieves recognition accuracies of 96.50%, 92.30%, and 97.12% on the KTH, UCF Sports, and UCF-11 action datasets, respectively. It is also compared with state-of-the-art approaches, and the results show that the proposed method achieves the best performance.
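The abstract does not give the exact GHI formula, but the description — motion regions accumulated into a single image in which moving pixels are brighter — matches a motion-history-style accumulation. The sketch below is an illustrative NumPy implementation under that assumption (the threshold value and the linear brightness ramp are choices of this sketch, not taken from the paper):

```python
import numpy as np

def gait_history_image(frames, thresh=15):
    """Accumulate motion regions from a group of consecutive frames
    into one spatial template. Pixels that moved more recently are
    brighter; pixels that never moved stay black. This is a sketch of
    the idea described in the abstract, not the paper's exact formula.
    """
    frames = [f.astype(np.float32) for f in frames]
    n = len(frames) - 1  # number of frame-difference steps
    ghi = np.zeros_like(frames[0])
    for t in range(1, len(frames)):
        # binary motion mask from a thresholded frame difference
        motion = np.abs(frames[t] - frames[t - 1]) > thresh
        # more recent motion overwrites with a higher intensity
        ghi[motion] = 255.0 * t / n
    return ghi.astype(np.uint8)

# Usage: a 2x2 bright square sliding right across five 8x8 frames.
frames = []
for t in range(5):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[3:5, t:t + 2] = 200
    frames.append(f)
template = gait_history_image(frames)
```

In the resulting template, the pixels the square vacated early in the group are dim, while the pixels it entered last are at full brightness, so a single image encodes where and roughly when motion occurred — the property that lets a fine-tuned image classifier such as VGG-16 operate on these templates.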

Keywords: Deep learning, fine-tuning, VGG-16, action recognition.

Received August 5, 2020; accepted January 6, 2021

https://doi.org/10.34028/iajit/18/6/11
