Improved YOLOv3-tiny for Silhouette Detection Using Regularisation Techniques

Donia Ammous

National School of Engineers of Sfax, University of Sfax, Sfax, Tunisia


Achraf Chabbouh

Anavid France, 10 Rue de Penthièvre, France


Awatef Edhib

Sogimel: a Consulting Company in Computer Engineering and Video Surveillance, Sfax Technopole, Tunisia


Ahmed Chaari

Anavid France, 10 Rue de Penthièvre, France


Fahmi Kammoun

National School of Engineers of Sfax, University of Sfax, Sfax, Tunisia


Nouri Masmoudi

National School of Engineers of Sfax, University of Sfax, Sfax, Tunisia


Abstract: Although recent advances in Deep Learning (DL) have achieved high accuracy in many Computer Vision (CV) tasks, detecting humans in video streams remains a challenging problem. Several studies have therefore focused on regularisation techniques to prevent overfitting, one of the most fundamental issues in Machine Learning (ML). This paper thoroughly examines these techniques and proposes an improved You Only Look Once (YOLO) v3-tiny based on a modified neural network and an adjusted hyperparameter configuration file. The experimental results, validated on two tests, show that the proposed method is more effective than the original YOLOv3-tiny model. The first test, which includes only the data augmentation techniques, indicates that the proposed approach reaches higher accuracy than the original YOLOv3-tiny model: on the Visual Object Classes (VOC) test dataset, the accuracy rate increases by 32.54% compared to the initial model. The second test, which combines the three tasks, reveals that the adopted combined method achieves a further gain over the existing model; for instance, on the labelled crowd_human test dataset, the accuracy rises by 22.7% compared to the data-augmentation-only model.
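To make the data-augmentation side of the abstract concrete, the sketch below shows the kind of Darknet-style augmentation (random horizontal flip plus exposure and saturation jitter) commonly used when training YOLOv3-tiny. It is an illustrative Python example under stated assumptions, not the authors' pipeline; the augment function, its parameter values, and the normalised YOLO box convention are assumptions made for the sake of the example.

```python
"""Illustrative sketch (not the authors' code) of Darknet-style data
augmentation for YOLO training: random horizontal flip plus exposure
and saturation jitter. Boxes follow the YOLO convention
(x_center, y_center, w, h), all normalised to [0, 1]."""

import random
import numpy as np


def augment(image, boxes, flip_prob=0.5, exposure=1.5, saturation=1.5):
    """Return an augmented copy of `image` (H x W x 3, float32 in [0, 1])
    and the correspondingly adjusted `boxes` (N x 4 YOLO-format array)."""
    image = image.copy()
    boxes = boxes.copy()

    # Random horizontal flip: mirror the pixels and the x-centres of the boxes.
    if random.random() < flip_prob:
        image = image[:, ::-1, :]
        boxes[:, 0] = 1.0 - boxes[:, 0]

    # Exposure jitter: scale overall brightness by a factor in [1/e, e].
    scale = random.uniform(1.0 / exposure, exposure)
    image = np.clip(image * scale, 0.0, 1.0)

    # Saturation jitter: blend each pixel towards (or away from) its grey value.
    sat = random.uniform(1.0 / saturation, saturation)
    grey = image.mean(axis=2, keepdims=True)
    image = np.clip(grey + sat * (image - grey), 0.0, 1.0)

    return image, boxes


if __name__ == "__main__":
    img = np.random.rand(416, 416, 3).astype(np.float32)      # dummy 416x416 input
    lbl = np.array([[0.5, 0.5, 0.2, 0.4]], dtype=np.float32)  # one person box
    aug_img, aug_lbl = augment(img, lbl)
    print(aug_img.shape, aug_lbl)
```

In Darknet itself, comparable jitter is controlled by hyperparameters such as saturation, exposure, and hue in the network configuration file, which is the kind of configuration adjustment the abstract refers to.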

Keywords: Silhouette/person detection, GPU, loss function, convolutional neural network, YOLOv3-tiny.

Received April 15, 2021; accepted December 14, 2022

https://doi.org/10.34028/iajit/20/2/14

