Soccer Event Detection via Collaborative Multimodal Feature Analysis and Candidate Ranking

Alfian Abdul Halin1, Mandava Rajeswari2, and Mohammad Abbasnejad3
1Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia
2School of Computer Sciences, Universiti Sains Malaysia, Malaysia
3College of Engineering & Computer Science, Australian National University, Australia

 

Abstract:
This paper presents a framework for soccer event detection through collaborative analysis of the textual, visual, and aural modalities. The basic notion is to decompose a match video into progressively smaller segments until the desired eventful segment is identified. Simple features are considered, namely minute-by-minute reports from sports websites (i.e., text), the semantic shot classes of far and close-up views (i.e., visual), and the low-level features of pitch and log-energy (i.e., audio). The framework demonstrates that, despite relying on simple features and avoiding the use of labeled training examples, event detection can be achieved with very high accuracy. Experiments conducted on ~30 hours of soccer video show very promising results for the detection of goals, penalties, yellow cards, and red cards.
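The candidate-ranking step described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes shots have already been classified into far/close-up views and that per-second audio samples are available, keeps far-view shots as event candidates, and ranks them by mean audio log-energy (crowd and commentator excitement). All function and field names are invented for this example.

```python
import math

def log_energy(samples):
    # Log-energy of one audio frame: log of the sum of squared amplitudes.
    # A small constant avoids log(0) for silent frames.
    return math.log(sum(s * s for s in samples) + 1e-12)

def rank_candidates(shots, audio):
    # shots: list of dicts with 'start', 'end' (seconds) and 'view' ('far' / 'closeup')
    # audio: dict mapping each second to its list of samples (toy representation)
    # Keep far-view shots as candidates, then rank them by mean log-energy,
    # so the loudest (most excited) segment surfaces first.
    candidates = [s for s in shots if s['view'] == 'far']

    def score(shot):
        seconds = range(shot['start'], shot['end'])
        values = [log_energy(audio.get(t, [0.0])) for t in seconds]
        return sum(values) / max(len(values), 1)

    return sorted(candidates, key=score, reverse=True)
```

In this toy form, the top-ranked segment is the far-view shot whose audio track carries the most energy; the actual framework additionally uses pitch and the webcasting-text timestamps to narrow the search window before ranking.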


Keywords: Soccer event detection, sports video analysis, semantic gap, webcasting text.
 
Received August 20, 2011; accepted December 30, 2011