Vision-Based Human Activity Recognition Using LDCRFs

Mahmoud Elmezain¹·² and Ayoub Al-Hamadi³

¹Faculty of Science and Computer Engineering, Taibah University, KSA
²Computer Science Division, Faculty of Science, Tanta University, Egypt
³Institute of Information Technology and Communications, Otto-von-Guericke-University, Germany

Abstract: In this paper, an innovative approach to human activity recognition that relies on affine-invariant shape descriptors and motion flow is proposed. The first phase of this approach employs background modelling with an adaptive Gaussian mixture to distinguish moving foregrounds from their moving cast shadows. The extracted features are then derived from the 3D spatio-temporal action volume and include elliptic Fourier descriptors, Zernike moments, the mass center and optical flow. Finally, the discriminative model of Latent-dynamic Conditional Random Fields (LDCRFs) performs action training and testing on the combined features, which yields a robust view-invariant recognition task. Our experiments on the Weizmann action dataset demonstrate that the proposed approach is more robust to problematic phenomena and more efficient than previously reported methods, and it achieves this without sacrificing the real-time performance required by many practical action applications.

Keywords: Action recognition, Invariant elliptic Fourier, Invariant Zernike moments, Latent-dynamic conditional random fields.

Received August 15, 2015; accepted January 11, 2016
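
The pipeline summarised in the abstract (adaptive Gaussian mixture background subtraction with shadow suppression, silhouette shape and motion features, then LDCRF sequence labelling) can be prototyped with off-the-shelf tools. The following Python sketch is not the authors' implementation: the clip name, the parameter values, the use of OpenCV's MOG2 subtractor, mahotas for Zernike moments, Farneback dense optical flow and the simple mean-flow pooling are all illustrative assumptions.

    # Minimal sketch (assumptions noted above) of the front end described in the abstract.
    import cv2
    import numpy as np
    import mahotas  # stand-in library for Zernike moments

    cap = cv2.VideoCapture("action_clip.avi")   # hypothetical input clip
    bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                            detectShadows=True)
    prev_gray = None
    features = []                               # one feature vector per frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Adaptive GMM foreground mask; MOG2 marks cast shadows with value 127,
        # so keeping only 255 separates the moving foreground from its moving shadow.
        mask = bg.apply(frame)
        fg = np.where(mask == 255, 255, 0).astype(np.uint8)

        # Mass center of the silhouette from image moments.
        m = cv2.moments(fg, binaryImage=True)
        if m["m00"] == 0:
            prev_gray = gray
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

        # Zernike moments as a rotation-invariant shape descriptor; elliptic
        # Fourier descriptors of the outer contour could be appended similarly.
        radius = max(fg.shape) // 2
        zern = mahotas.features.zernike_moments(fg, radius, degree=8)

        # Dense optical flow between consecutive frames as the motion cue,
        # pooled here to a single mean displacement for brevity.
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mean_flow = flow.reshape(-1, 2).mean(axis=0)
        else:
            mean_flow = np.zeros(2)
        prev_gray = gray

        features.append(np.hstack([[cx, cy], zern, mean_flow]))

    cap.release()
    # The per-frame sequence in `features` would then be fed to an LDCRF
    # (e.g., an HCRF/LDCRF toolkit) for training and frame labelling.

The sketch stops before the LDCRF stage because no particular LDCRF implementation is assumed; its only purpose is to show how the background modelling and the combined shape/motion features fit together per frame.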

 
