Face Age Estimation With Multi-feature Fusion Model

Authors: Nguyen Van Anh
Conference: ICIC 2024 Posters, Tianjin, China, August 5-8, 2024
Pages: 317-328
Keywords: Age estimation, Global feature, Attention, Local feature, CNN

Abstract

Age is one of the most important attributes of a human face, so the task of extracting age information from face images has received extensive attention from researchers. The appearance of a face as a person ages is affected by factors such as gender, race, and environment, which makes this task both significant and challenging. In recent years, many researchers have applied deep learning techniques to face age estimation. One approach uses the VGG16 model to extract features and then a classifier to estimate age; however, this model has a large number of parameters and a deep network, which makes it slow to run. Another approach uses a rank-consistent ordinal regression method with a ResNet34 backbone for feature extraction, combined with a binary-classification extension to perform age prediction. This model achieves better results than previous ordinal regression networks on the UTKFace dataset, but its MAE is still large. To overcome these shortcomings and improve accuracy, we introduce a composite model that leverages multiple types of features, called TransCNNFusion. The TransCNNFusion model combines the global feature extraction ability of the attention mechanism with the local facial feature extraction of a CNN. Experimental results demonstrate that the proposed model is as effective as or superior to other Vision Transformer and CNN models, indicating its potential for practical applications.
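To make the described architecture concrete, the following is a minimal sketch of a dual-branch fusion network in PyTorch: a small Transformer encoder supplies global (attention-based) features, a small CNN supplies local features, and the two are concatenated before an age regression head. This is an illustration under our own assumptions, not the authors' implementation; the class name `TransCNNFusionSketch`, the layer sizes, and the pooling choices are all hypothetical.

```python
# Illustrative sketch (not the paper's implementation): fuse global features
# from a Transformer encoder with local features from a CNN, then regress age.
import torch
import torch.nn as nn


class TransCNNFusionSketch(nn.Module):
    def __init__(self, img_size=224, patch_size=16, embed_dim=256):
        super().__init__()
        # --- Attention (global) branch: patch embedding + Transformer encoder ---
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=8, dim_feedforward=512, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=4)

        # --- CNN (local) branch: a small convolutional feature extractor ---
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

        # --- Fusion head: concatenate both feature vectors, predict a scalar age ---
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        # Global features: mean-pool the Transformer token outputs
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        g = self.transformer(tokens + self.pos_embed).mean(dim=1)  # (B, D)
        # Local features from the CNN branch
        l = self.cnn(x).flatten(1)                                  # (B, D)
        # Fuse and regress age
        return self.head(torch.cat([g, l], dim=1)).squeeze(-1)


if __name__ == "__main__":
    model = TransCNNFusionSketch()
    faces = torch.randn(2, 3, 224, 224)  # dummy batch of face crops
    print(model(faces).shape)            # torch.Size([2])
```

In this sketch the two branches are fused by simple concatenation followed by an MLP; the paper's actual fusion strategy and training objective may differ.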