AQUAFace: Age-Invariant Quality Adaptive Face Recognition for Unconstrained Selfie vs ID Verification

AAAI 2025

1IIT Jodhpur, 2Swiggy
*Equal Contribution, +Work done while at Swiggy, $Work done as part of a summer internship at IITJ

Figure 1. Examples of image pairs depicting variations in age, quality, and combined age + quality factors. The top row illustrates easy pairs with high recognizability, the middle row depicts pairs of medium difficulty in recognition, and the bottom row showcases hard pairs posing challenges in verification. It is evident that the combined effect of age and quality variations notably reduces recognizability.

Abstract

Face recognition in the presence of age and quality variations poses a formidable challenge. While recent margin-based loss functions have shown promise in addressing these variations individually, real-world scenarios such as selfie versus ID face matching often involve simultaneous variations of both. In response, we propose a comprehensive framework that mitigates the impact of these variations while preserving the identity-related information crucial for accurate face recognition. The proposed adaptive margin-based loss function, AQUAFace, adapts to hard samples characterized by significant age and quality variations: it prioritizes the preservation of identity-related features while mitigating the adverse effects of age and quality variations on recognition accuracy. To validate the effectiveness of our approach, we focus on the specific task of selfie versus ID document matching. Our results demonstrate that AQUAFace effectively handles age and quality differences, leading to enhanced recognition performance. Additionally, we explore the benefits of fine-tuning the recognition model with synthetic data, which further boosts performance. As a result, AQUAFace achieves state-of-the-art performance on six benchmark datasets (CALFW, CPLFW, CFP-FP, AgeDB, IJB-C, and TinyFace), each exhibiting diverse age and quality variations.

AQUAFace Framework

Overview of the proposed AQUAFace model: We introduce a novel adaptive margin-based loss for age-invariant, quality-aware face recognition, specifically targeting selfie vs. ID verification. The framework comprises three key components:

  1. AQUALR: A Gaussian Mixture Model (GMM)-based module computes an Age and Quality Likelihood Ratio (AQUALR), integrating age and quality labels into pairwise similarity scores. This enables dynamic sample weighting based on age and quality variations.
  2. Adaptive Contrastive Loss: The loss dynamically adjusts margins using AQUALR to penalize harder samples characterized by large age differences or low quality, enhancing intra-class compactness and inter-class separation.
  3. Identity Preservation: A fine-tuned ArcFace model ensures robust identity-related feature extraction, using a margin-based softmax loss to maintain discriminative power under diverse variations.

The combined loss function integrates these components to optimize for both identity preservation and resilience against age and quality changes. Training uses both real and synthetic datasets, with synthetic-data fine-tuning enriching intra-class variability. The architecture is a Siamese network with shared weights and cosine similarity for feature comparison, achieving state-of-the-art performance across benchmark datasets.
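The AQUALR weighting in component 1 can be sketched as a likelihood ratio between a "hard" and an "easy" Gaussian component over per-pair (age gap, quality) features. This is a simplified stand-in, not the paper's exact formulation: the component means, the isotropic variance, and the squashing of the ratio to (0, 1) are all illustrative assumptions.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Isotropic Gaussian density, evaluated per row of x."""
    d = x.shape[1]
    diff = x - mean
    exponent = -0.5 * np.sum(diff ** 2, axis=1) / var
    norm = (2.0 * np.pi * var) ** (d / 2.0)
    return np.exp(exponent) / norm

def aqualr_weight(age_gap, quality, easy_mean, hard_mean, var=0.1):
    """Likelihood ratio of a pair belonging to the 'hard' component
    (large age gap, low quality) vs. the 'easy' component, squashed
    to (0, 1). Higher weight -> harder pair -> larger adaptive margin."""
    feats = np.stack([age_gap, quality], axis=1)
    p_hard = gaussian_pdf(feats, hard_mean, var)
    p_easy = gaussian_pdf(feats, easy_mean, var)
    ratio = p_hard / (p_easy + 1e-12)
    return ratio / (1.0 + ratio)

# Toy example: the two component means are illustrative placeholders.
easy = np.array([0.1, 0.9])   # small age gap, high quality
hard = np.array([0.8, 0.2])   # large age gap, low quality
age_gap = np.array([0.05, 0.85])
quality = np.array([0.95, 0.15])
w = aqualr_weight(age_gap, quality, easy, hard)
```

In a full pipeline, the two components would be fit with a GMM (e.g. EM on training pairs) rather than fixed by hand; the point here is only how a pairwise (age, quality) statistic maps to a per-sample weight.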

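The adaptive contrastive loss of component 2 can be sketched as a standard contrastive loss on cosine similarity whose margin grows with a per-pair hardness weight. The `base_margin` value and the linear modulation `base_margin * (1 + hardness)` are illustrative assumptions, not the paper's exact margin schedule.

```python
import numpy as np

def adaptive_contrastive_loss(cos_sim, same_identity, hardness, base_margin=0.3):
    """Contrastive loss on cosine similarity with a per-pair margin.

    cos_sim:        cosine similarity of the two embeddings, in [-1, 1]
    same_identity:  1 for genuine pairs, 0 for impostor pairs
    hardness:       AQUALR-style weight in (0, 1); harder pairs get a
                    larger margin and are therefore penalized more strongly
    """
    margin = base_margin * (1.0 + hardness)   # margin grows with hardness
    # Genuine pairs are pushed above the margin in similarity...
    pos_loss = same_identity * np.maximum(0.0, margin - cos_sim)
    # ...impostor pairs are pushed below (1 - margin).
    neg_loss = (1 - same_identity) * np.maximum(0.0, cos_sim - (1.0 - margin))
    return np.mean(pos_loss + neg_loss)

# Toy batch: one genuine pair, one impostor pair.
cos_sim = np.array([0.4, 0.5])
labels = np.array([1, 0])
hardness = np.array([0.9, 0.1])
loss = adaptive_contrastive_loss(cos_sim, labels, hardness)
```

With a hard genuine pair (hardness 0.9) the margin widens and the pair incurs a loss even at moderate similarity, whereas the same pair with a small hardness weight would not; this is the sense in which harder samples are penalized more.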
Dataset

A synthetic dataset comprising 14,017 subjects was created by utilizing single images of each identity from the AgeDB and MORPH datasets. The Lifespan Age Transformation Synthesis (LATS) model (Or-El et al., 2020) was employed to generate age-transformed images at three target age ranges: 15-19, 30-35, and 50-65. To mimic real-world scenarios, these synthetic images underwent quality degradation using a pipeline similar to that of GFPGAN (Wang et al., 2021), introducing variations in image quality. As a result, the synthetic images differ significantly from the original images in both quality and age.
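A minimal NumPy sketch of the kind of quality degradation described above. The specific steps (block-average downsampling, nearest-neighbour upsampling, Gaussian noise, coarse quantization as a crude stand-in for compression artifacts) are simplified assumptions, not the actual GFPGAN-style pipeline.

```python
import numpy as np

def degrade(image, scale=4, noise_std=8.0, levels=32, seed=0):
    """Degrade an HxW grayscale uint8 image: downsample by `scale`
    with block averaging, upsample by repetition, add Gaussian noise,
    then coarsely quantize to mimic compression-like banding."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    h2, w2 = h - h % scale, w - w % scale          # crop to a multiple of scale
    img = image[:h2, :w2].astype(np.float64)
    # Downsample: average over scale x scale blocks.
    small = img.reshape(h2 // scale, scale, w2 // scale, scale).mean(axis=(1, 3))
    # Upsample: nearest-neighbour repetition back to the cropped size.
    up = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)
    noisy = up + rng.normal(0.0, noise_std, size=up.shape)
    # Quantize to `levels` grey levels.
    step = 256.0 / levels
    quant = np.round(noisy / step) * step
    return np.clip(quant, 0, 255).astype(np.uint8)

# Toy "face": a synthetic gradient image standing in for a real photo.
face = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
low_quality = degrade(face)
```

A real pipeline would operate on RGB crops and randomize the degradation parameters per image; the structure (resolution loss, then noise, then compression) is the part that carries over.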

Figure 2: Samples of AgeDB and MORPH datasets along with their synthesized counterparts.

Results

Table 1: Performance evaluation of various face verification models on Real (VerifyMe) and Synthetic (Syn AM) datasets after bin-wise partitioning. We report GAR@0.1%FAR and GAR@1%FAR in easy, medium and hard categories.

Table 2: Performance comparison of recent methods on benchmark datasets with the AQUAFace model for ResNet100 and ResNet18 backbones. For high-quality datasets, 1:1 verification accuracy is reported, following the protocol from (Kim, Jain, and Liu 2022). For mixed-quality datasets, TAR@FAR=0.01% is reported. For TinyFace, closed-set rank retrieval (Rank-1 and Rank-5) is reported. Note: The performance of the pretrained models on MS1MV2 is sourced from their respective papers.

Table 3: Comparison on benchmark datasets with the AQUAFace model trained on different data types. The first row represents the model trained on real data (VerifyMe). The second row shows the model trained on the synthetic AgeDB and MORPH datasets. The last row presents the model's performance when trained on VerifyMe and fine-tuned on the synthetic AgeDB and MORPH. 1:1 verification accuracy is reported, following the protocol from (Kim, Jain, and Liu 2022).

BibTeX

@inproceedings{agarwal2025AQUAFace,
  title={{AQUAFace}: Age-Invariant Quality Adaptive Face Recognition for Unconstrained Selfie vs ID Verification},
  author={Shivang Agarwal and Jyoti Chaudhary and Sadiq Siraj Ebrahim and Mayank Vatsa and Richa Singh and Shyam Prasad Adhikari and Sangeeth Reddy Battu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={1},
  year={2025}
}