Leaderboard: metrical evaluation

(last updated: 26.08.2022)

For the metrical evaluation, no scale is computed during the rigid alignment of the reconstructed mesh to the reference scan. The unit of measurement of the reconstructions must therefore be provided with the submission.
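The following is a minimal sketch (not the official NoW evaluation code) of what a scale-free rigid alignment and a millimetre error computation can look like. It assumes the reconstruction is already expressed in millimetres, that corresponding landmark points are available on both meshes, and it simplifies the scan-to-mesh distance to a nearest-vertex distance; all variable names are illustrative.

```python
import numpy as np

def rigid_align(src_pts, dst_pts):
    """Kabsch alignment: rotation + translation only, no scale."""
    src_c, dst_c = src_pts.mean(0), dst_pts.mean(0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def scan_to_mesh_error(scan_pts_mm, recon_verts_mm, R, t):
    """Distance from each scan point to the nearest aligned reconstruction vertex (mm)."""
    aligned = recon_verts_mm @ R.T + t
    # brute-force nearest neighbour; a KD-tree or point-to-surface distance is used in practice
    d = np.linalg.norm(scan_pts_mm[:, None, :] - aligned[None, :, :], axis=-1)
    return d.min(axis=1)

# errors = scan_to_mesh_error(scan, recon, *rigid_align(recon_lmks, scan_lmks))
# print(np.median(errors), errors.mean(), errors.std())  # median / mean / std in mm
```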

Results submitted to the NoW challenge are kept confidential. No information about the participants or their results is released or shared until explicitly approved by the participants, or after the results appear in a peer-reviewed conference or journal. To publish results on the NoW website, follow the participation instructions on the front page.

You can find the BibTeX information for all ranked methods here.

| Rank | Method | Median (mm) | Mean (mm) | Std (mm) | Error files | PDF | Code |
|------|--------|-------------|-----------|----------|-------------|-----|------|
| 1 | TokenFace [Zhang et al., ICCV 2023] | 0.97 | 1.24 | 1.07 | Download (415 MB) | PDF | |
| 2 | MICA [Zielonka et al. 2022] | 1.08 | 1.37 | 1.17 | Download (415 MB) | PDF | Code |
| 3 | DECA [Feng et al., SIGGRAPH 2021] | 1.35 | 1.80 | 1.64 | Download (415 MB) | PDF | Code |
| 4 | Wood et al. [ECCV 2022] | 1.36 | 1.73 | 1.47 | Download (415 MB) | PDF | |
| 5 | FOCUS [Li et al. 2022] | 1.41 | 1.85 | 1.70 | Download (415 MB) | PDF | Code |
| 6 | FLAME 2020 template [Li et al., SIGGRAPH Asia 2017] | 1.49 | 1.92 | 1.68 | Download (415 MB) | PDF | |
| 7 | RingNet [Sanyal et al., CVPR 2019] | 1.50 | 1.98 | 1.77 | Download (415 MB) | PDF | Code |
| 8 | 3DDFA-V2 [Guo et al., ECCV 2020]* | 1.53 | 2.06 | 1.95 | Download (415 MB) | PDF | Code |
| 9 | [Dib et al., ICCV 2021] | 1.59 | 2.12 | 1.93 | Download (415 MB) | PDF | |
| 10 | Deep3DFaceRecon PyTorch [Deng et al., CVPRW 2019] | 1.62 | 2.21 | 2.08 | Download (415 MB) | PDF | Code |
| 11 | MGCNet [Shang et al., ECCV 2020] | 1.70 | 2.47 | 3.02 | Download (415 MB) | PDF | Code |
| 12 | Deep3DFaceRecon [Deng et al., CVPRW 2019] | 2.26 | 2.90 | 2.51 | Download (415 MB) | PDF | Code |
| 13 | SynergyNet [Wu et al., 3DV 2021] | 2.28 | 2.86 | 2.39 | Download (415 MB) | PDF | Code |
| 14 | UMDFA [Koizumi et al., ECCV 2020] | 2.31 | 2.97 | 2.57 | Download (415 MB) | PDF | |
| 15 | 3DMM-CNN [Tran et al., CVPR 2017] | 3.91 | 4.84 | 4.02 | Download (415 MB) | PDF | Code |

The table only considers methods that were declared public when submitted to the NoW challenge, or that are publicly available through a peer-reviewed conference or journal. Methods are ranked by median reconstruction error; methods with the same median (after rounding) are ordered by mean error. FLAME 2020 template refers to the performance of the static template of the FLAME face model. Results for [Deng et al. 2019] and 3DDFA-V2 are taken from DECA. To create a cumulative error plot, download the complete error files. *) 3DDFA-V2 failed for a few images; the reported errors therefore exclude these images. Methods that fail for more than 5 images in total are not accepted for comparison.
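As a hedged sketch, assuming the downloaded error files can be loaded into a flat array of per-point distances in millimetres (the exact file format is not specified here), a cumulative error curve could be computed along these lines; the file name `method_errors.npy` is purely hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def cumulative_error_curve(errors_mm, max_mm=7.0, steps=500):
    """Fraction of points with error below each threshold in [0, max_mm]."""
    thresholds = np.linspace(0.0, max_mm, steps)
    fractions = (errors_mm[None, :] <= thresholds[:, None]).mean(axis=1)
    return thresholds, fractions

# x, y = cumulative_error_curve(np.load("method_errors.npy"))  # hypothetical error file
# plt.plot(x, 100 * y); plt.xlabel("error (mm)"); plt.ylabel("% of points"); plt.show()
```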

Referencing the metrical evaluation

The metrical evaluation for the NoW challenge was introduced in MICA. When reporting metrical evaluation errors in a scientific publication, cite the NoW challenge (see the bibliographic information on the front page) and additionally:

@inproceedings{MICA:2022,
  title = {Towards Metrical Reconstruction of Human Faces},
  author = {Zielonka, Wojciech and Bolkart, Timo and Thies, Justus},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2022}
}