TY - GEN
T1 - On splitting dataset
T2 - 2012 12th International Conference on Control, Automation, Robotics and Vision, ICARCV 2012
AU - Wang, Sheng
AU - Wu, Qiang
AU - He, Xiangjian
AU - Xu, Min
PY - 2012
Y1 - 2012
N2 - In this paper, we study the impact of learning an Adaboost classifier with a small sample set (i.e., with few training examples). In particular, we use car localization as the underlying application, because car localization is relevant to a wide range of real-world applications. To evaluate the performance of Adaboost learning with few examples, we apply Adaboost learning to a recently proposed feature descriptor, the Locally Adaptive Regression Kernel (LARK). As a state-of-the-art feature descriptor, LARK is robust against illumination changes and noise. More importantly, we use LARK because its spatial property is also favorable for our purpose (i.e., each patch in the LARK descriptor corresponds to one unique pixel in the original image). In addition to learning a detector from the entire training dataset, we also split the original training dataset into several sub-groups and train one detector for each sub-group. We compare the features selected by each sub-group's detector with those of the detector learnt from the entire training dataset, and propose improvements based on the comparison results. Our experimental results indicate that Adaboost learning is successful on a small dataset only when the learnt features simultaneously satisfy two conditions: 1. the features are learnt from the Region of Interest (ROI), and 2. the features are sufficiently far away from each other.
AB - In this paper, we study the impact of learning an Adaboost classifier with a small sample set (i.e., with few training examples). In particular, we use car localization as the underlying application, because car localization is relevant to a wide range of real-world applications. To evaluate the performance of Adaboost learning with few examples, we apply Adaboost learning to a recently proposed feature descriptor, the Locally Adaptive Regression Kernel (LARK). As a state-of-the-art feature descriptor, LARK is robust against illumination changes and noise. More importantly, we use LARK because its spatial property is also favorable for our purpose (i.e., each patch in the LARK descriptor corresponds to one unique pixel in the original image). In addition to learning a detector from the entire training dataset, we also split the original training dataset into several sub-groups and train one detector for each sub-group. We compare the features selected by each sub-group's detector with those of the detector learnt from the entire training dataset, and propose improvements based on the comparison results. Our experimental results indicate that Adaboost learning is successful on a small dataset only when the learnt features simultaneously satisfy two conditions: 1. the features are learnt from the Region of Interest (ROI), and 2. the features are sufficiently far away from each other.
UR - http://www.scopus.com/inward/record.url?scp=84876034422&partnerID=8YFLogxK
U2 - 10.1109/ICARCV.2012.6485320
DO - 10.1109/ICARCV.2012.6485320
M3 - Conference contribution
AN - SCOPUS:84876034422
SN - 9781467318716
T3 - 2012 12th International Conference on Control, Automation, Robotics and Vision, ICARCV 2012
SP - 1154
EP - 1159
BT - 2012 12th International Conference on Control, Automation, Robotics and Vision, ICARCV 2012
Y2 - 5 December 2012 through 7 December 2012
ER -