Abstract:
An effective and robust face descriptor is an essential component of a good facial expression recognition system. Many popular appearance-based methods such as local binary pattern (LBP), local directional pattern (LDP) and local ternary pattern (LTP) have been proposed to serve this purpose and have proven both accurate and efficient. During the last few years, many researchers have devoted significant effort and ideas to improving these methods. In this research work, we present a new face descriptor, the Adaptive Robust Local Complete Pattern (ARLCP). ARLCP effectively encodes significant information about emotion-related features by using the sign, magnitude and directional information of edge responses, which makes it more robust to noise and illumination variation. In this histogram-based approach, the obtained feature image is divided into several regions, the histogram of each region is computed independently, and all histograms are concatenated to generate the final feature vector. We have evaluated our method on several datasets using cross-validation schemes to assess its performance. These experiments show that our method (ARLCP) provides better accuracy in facial expression recognition.
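To make the region-wise histogram construction described above concrete, the following Python sketch illustrates the general procedure on an already-computed pattern-coded image; the grid size, number of pattern bins, and function name are illustrative assumptions rather than the paper's actual ARLCP parameters.

```python
import numpy as np

def regional_histogram_descriptor(coded_image, grid=(7, 7), n_bins=59):
    """Concatenate per-region histograms of a pattern-coded image.

    coded_image : 2-D array of integer pattern codes (e.g. an ARLCP map).
    grid        : (rows, cols) to split the image into -- assumed values.
    n_bins      : number of distinct pattern codes -- assumed value.
    """
    h, w = coded_image.shape
    rows, cols = grid
    histograms = []
    for i in range(rows):
        for j in range(cols):
            # Slice one spatial region of the coded image.
            region = coded_image[i * h // rows:(i + 1) * h // rows,
                                 j * w // cols:(j + 1) * w // cols]
            # Histogram of pattern codes within this region.
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins))
            histograms.append(hist)
    # Final feature vector: all regional histograms concatenated.
    return np.concatenate(histograms)
```

Computing each regional histogram independently preserves coarse spatial layout (e.g. which expressions affect the mouth versus the eye region), while concatenation yields a single fixed-length vector suitable for a standard classifier.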