Enhanced Bagging (eBagging): A Novel Approach for Ensemble Learning
Goksu Tuysuzoglu¹ and Derya Birant²
¹Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Turkey
²Department of Computer Engineering, Dokuz Eylul University, Turkey
Abstract: Bagging is a well-known ensemble learning method that combines several classifiers trained on different subsamples of the dataset. However, a drawback of bagging is its purely random sampling: classification performance depends on chance yielding suitable subsets of training objects. This paper proposes a novel modified version of bagging, named enhanced Bagging (eBagging), which uses a new mechanism (error-based bootstrapping) when constructing training sets in order to cope with this problem. In the experimental setting, the proposed eBagging technique was tested on 33 well-known benchmark datasets and compared with the bagging, random forest, and boosting techniques using well-known classification algorithms: Support Vector Machines (SVM), decision trees (C4.5), k-Nearest Neighbour (kNN), and Naive Bayes (NB). The results show that eBagging outperforms its counterparts by classifying the data points more accurately while reducing the training error.
Keywords: Bagging, boosting, classification algorithms, machine learning, random forest, supervised learning.
Received July 31, 2018; accepted December 12, 2019
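
As a rough illustration of the error-based bootstrapping idea summarized in the abstract, the sketch below biases bootstrap sampling toward training objects that a pilot model misclassifies, instead of sampling uniformly at random. All names here (error_based_bootstrap_ensemble, predict_majority), the pilot-model step, and the boost weighting factor are assumptions made for illustration; this is not the authors' exact eBagging procedure.

# A minimal sketch of error-based bootstrapping (illustrative only; the
# one-shot pilot model and the fixed 'boost' factor are assumptions, not
# the paper's own method). Assumes X and y are NumPy arrays and that
# class labels are integer-coded.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def error_based_bootstrap_ensemble(X, y, base=None, n_estimators=10,
                                   boost=2.0, seed=0):
    if base is None:
        base = DecisionTreeClassifier()
    rng = np.random.default_rng(seed)
    n = len(y)

    # Fit a pilot model on the full training set to locate the
    # hard-to-classify (misclassified) objects.
    pilot = clone(base).fit(X, y)
    misclassified = pilot.predict(X) != y

    # Give misclassified objects 'boost' times the sampling weight of
    # correctly classified ones, then normalize to probabilities.
    p = np.where(misclassified, boost, 1.0)
    p /= p.sum()

    # Train each ensemble member on a weighted bootstrap replicate, so
    # the training sets are no longer chosen purely at random.
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=p)
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def predict_majority(models, X):
    # Plain majority vote over the ensemble members, as in standard bagging.
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

Unlike boosting, which reweights the training set iteratively, this one-shot reweighting keeps the members independently trainable, as in bagging, while still steering the bootstrap replicates toward difficult training objects.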