Improving 3D Finger Traits Recognition via Generalizable Neural Rendering

IJCV 2024
Hongbin Xu 1,   Junduan Huang 1,   Yuer Ma 1,   Zifeng Li 1,   Wenxiong Kang 1*

1 South China University of Technology

Abstract

3D biometric techniques on finger traits have become a new trend and have demonstrated a powerful ability for recognition and anti-counterfeiting. Existing methods follow an explicit 3D pipeline that reconstructs 3D models first and then extracts features from them. However, these explicit 3D methods suffer from the following problems: 1) inevitable information loss during 3D reconstruction; 2) tight coupling between specific hardware and the 3D reconstruction algorithm. This leads us to a question: Is it indispensable to reconstruct 3D information explicitly in recognition tasks? Hence, we consider this problem in an implicit manner, leaving the nerve-wracking 3D reconstruction problem to learnable neural networks with the help of neural radiance fields (NeRFs). We propose FingerNeRF, a novel generalizable NeRF for 3D finger biometrics. To handle the shape-radiance ambiguity problem that may result in incorrect 3D geometry, we involve extra geometric priors based on the correspondence of binary finger traits like fingerprints or finger veins. First, we propose a novel Trait Guided Transformer (TGT) module to enhance the feature correspondence with the guidance of finger traits. Second, we impose extra geometric constraints on the volume rendering loss with the proposed Depth Distillation Loss and Trait Guided Rendering Loss. To evaluate the performance of the proposed method on different modalities, we collect two new datasets: SCUT-Finger-3D with finger images and SCUT-FingerVein-3D with finger vein images. Moreover, we also utilize the UNSW-3D dataset with fingerprint images for evaluation. In experiments, our FingerNeRF achieves 4.37% EER on the SCUT-Finger-3D dataset, 8.12% EER on the SCUT-FingerVein-3D dataset, and 2.90% EER on the UNSW-3D dataset, showing the superiority of the proposed implicit method in 3D finger biometrics.
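The abstract's idea of constraining volume rendering with extra depth supervision can be illustrated with a minimal sketch of standard NeRF volume rendering plus a depth penalty. This is only an illustration: the actual Depth Distillation Loss and Trait Guided Rendering Loss in FingerNeRF differ, and the weight `lam_depth` and function names below are hypothetical.

```python
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Standard NeRF volume rendering along one ray.

    sigmas: (N,) densities; colors: (N, 3) RGB; t_vals: (N,) sample depths.
    Returns the rendered color, the expected depth, and the sample weights.
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)  # inter-sample distances
    alphas = 1.0 - np.exp(-sigmas * deltas)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = trans * alphas                            # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * t_vals).sum()                    # expected ray termination depth
    return rgb, depth, weights

def total_loss(rgb, rgb_gt, depth, depth_prior, lam_depth=0.1):
    # Photometric rendering loss plus a depth term, as a stand-in for the
    # paper's depth supervision; the real formulation may differ.
    return np.sum((rgb - rgb_gt) ** 2) + lam_depth * (depth - depth_prior) ** 2
```

Because the expected depth is a differentiable function of the densities, a depth prior propagates gradients into the geometry and discourages shape-radiance-ambiguous solutions that render correct colors from wrong surfaces.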

Method Overview

Qualitative Comparison

We provide qualitative comparison results between the proposed FingerNeRF and MVSNeRF, a representative generalizable NeRF method. The figure shows the rendered images and depth maps along a continuous sequence of camera trajectories. Due to the shape-radiance ambiguity, the depth maps rendered by MVSNeRF tend to be incorrect: the depth values at the edge and the center of the finger show no clear difference, which is implausible given that the shape of a finger is close to an elliptic cylinder. In contrast, our method renders both reasonable depth maps and plausible images, as shown in the figure.
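The elliptic-cylinder argument can be checked numerically: for rays cast toward an elliptic cross-section, the depth at the finger center must be clearly smaller than near the silhouette edge, so a flat depth map is a sign of wrong geometry. The semi-axes and camera distance below are illustrative values, not taken from the paper, and the rays are simplified to be orthographic.

```python
import numpy as np

def finger_depth(x, a=8.0, b=6.0, z_cam=50.0):
    """Depth to an elliptic-cylinder 'finger' surface for orthographic rays
    along -z. a, b: semi-axes of the cross-section (mm, illustrative);
    z_cam: camera plane position; x: lateral ray offset from the axis."""
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) < a
    # Surface height from x^2/a^2 + z^2/b^2 = 1 (upper half of the ellipse).
    z_surf = np.where(inside, b * np.sqrt(np.clip(1 - (x / a) ** 2, 0.0, 1.0)), 0.0)
    return np.where(inside, z_cam - z_surf, np.inf)  # rays off the finger miss

center = finger_depth(0.0)  # closest point of the surface: z_cam - b
edge = finger_depth(7.9)    # near the silhouette, depth approaches z_cam
```

With these numbers the center ray terminates at depth 44.0 while a near-edge ray terminates around 49.1, a gap of several millimeters that a correct depth map must reproduce.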