Fall Motion Detection with Fall Severity Level
Estimation by Mining Kinect 3D Data Stream
Orasa Patsadu¹, Bunthit Watanapa¹, Piyapat Dajpratham², and Chakarida Nukoolkit¹
¹School of Information Technology, King Mongkut’s University of Technology Thonburi, Thailand
²Faculty of Medicine Siriraj Hospital, Mahidol University, Thailand
Abstract: This paper proposes an integrative model of fall motion detection
and fall severity level estimation. For fall motion detection, a continuous
stream of data representing time-sequential frames of fifteen body-joint
positions was obtained from Kinect’s 3D depth camera. A set of features was then
extracted and fed into the designated machine learning model. Compared with
existing models that rely on depth-image inputs, the proposed scheme resolves
the ambiguity between the human body and the background. The experimental results
demonstrated that the proposed fall detection method achieved an accuracy of
99.97% with zero false negatives and was more robust than the
state-of-the-art approach based on depth images. Another key novelty of our
approach is the framework, called Fall Severity Injury Score (FSIS), for
determining the severity level of falls as a surrogate for the seriousness of
injury in three selected risk areas of the body: the head, hip, and knee. The framework
is based on two crucial pieces of information from the fall: 1) the velocity at
the impact position and 2) the kinetic energy of the fall impact. Our proposed
method is beneficial to caregivers, nurses, and doctors in giving first
aid, diagnosis, or treatment to the subject, especially in cases where the subject
loses consciousness or is unable to respond.
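As an illustration only (not the paper’s exact formulation), the two quantities named above can be sketched from the Kinect joint stream as follows; the frame rate, the effective segment mass, and the function names are assumptions made for this example.

```python
import math

def impact_velocity(p_prev, p_curr, dt):
    """Approximate joint speed (m/s) from two consecutive Kinect frames.

    p_prev, p_curr: (x, y, z) joint positions in meters.
    dt: time between frames in seconds (~1/30 s for Kinect).
    """
    dx, dy, dz = (c - p for c, p in zip(p_curr, p_prev))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / dt

def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy (J) of the falling segment: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Example: head joint moving between two frames at an assumed 30 fps,
# with an assumed effective head mass of 5 kg.
v = impact_velocity((0.10, 1.60, 2.00), (0.12, 1.52, 2.01), dt=1 / 30)
e = kinetic_energy(5.0, v)
print(f"impact velocity: {v:.2f} m/s, kinetic energy: {e:.2f} J")
```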
Keywords: Kinect 3D data stream, fall motion
detection, fall severity level estimation, machine learning, smart home system.