NoW

Leaderboard: non-metrical evaluation (standard)

(last updated: 19.06.2024)

For the non-metrical evaluation, a scale factor is estimated as part of the rigid alignment between the reconstructed mesh and the reference scan. The reported errors are therefore invariant to the scale of the reconstructed mesh.
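For intuition, the sketch below shows how a scale-aware rigid (similarity) alignment can be estimated with Umeyama's closed-form method. It is an illustration only, not the official NoW evaluation code: the actual protocol additionally relies on landmark correspondences and scan-to-mesh distances, and the function name `similarity_align` is our own.

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, and translation t minimizing
    sum_i || s * R @ src_i + t - dst_i ||^2  (Umeyama, TPAMI 1991).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)                 # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.array([1.0, 1.0, d])                      # guard against reflections
    R = U @ np.diag(D) @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = (S * D).sum() / var_src                      # optimal scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Illustrative per-vertex error after scale-invariant alignment
# (mesh_points / scan_points are hypothetical (N, 3) correspondences):
# s, R, t = similarity_align(mesh_points, scan_points)
# errors = np.linalg.norm((s * mesh_points @ R.T + t) - scan_points, axis=1)
```

Because the optimal scale is estimated jointly with the rotation and translation, uniformly rescaling the reconstructed mesh leaves the resulting errors unchanged.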

Results submitted to the NoW challenge are kept confidential. No information about the participants or their results is released or shared until explicitly approved by the participants, or until the method has appeared in a peer-reviewed conference or journal. To publish results on the NoW website, follow the Participation instructions on the front page.

You can find the BibTeX entries for all ranked methods here.

Rank Method Median (mm) Mean (mm) Std (mm) Error files PDF Code
1. TokenFace [Zhang et al., ICCV 2023] 0.76 0.95 0.82 Download (415 MB) PDF  
2. FlowFace [Taubner et al., CVPR 2024] 0.87 1.07 0.88 Download (415 MB) PDF  
3. MICA [Zielonka et al., ECCV 2022] 0.90 1.11 0.92 Download (415 MB) PDF Code
4. AlbedoGAN [Rai et al., 2023] 0.98 1.21 0.99 Download (415 MB) PDF Code
5. Wood et al. [ECCV 2022] 1.02 1.28 1.08 Download (415 MB) PDF  
6. FOCUS [Li et al. 2022] 1.04 1.30 1.10 Download (415 MB) PDF Code
7. CCFace [Yang et al., IEEE TMM 2023] 1.08 1.35 1.14 Download (415 MB) PDF  
8. DECA [Feng et al., SIGGRAPH 2021] 1.09 1.38 1.18 Download (415 MB) PDF Code
9. Deep3DFaceRecon PyTorch [Deng et al., CVPRW 2019] 1.11 1.41 1.21 Download (415 MB) PDF Code
10. PyMAF-X [Zhang et al. 2022] 1.13 1.42 1.20 Download (415 MB) PDF Code
11. RingNet [Sanyal et al., CVPR 2019] 1.21 1.53 1.31 Download (415 MB) PDF Code
12. FLAME 2020 template [Li et al., SIGGRAPH Asia 2017] 1.21 1.53 1.31 Download (415 MB) PDF  
13. Deep3DFaceRecon [Deng et al., CVPRW 2019] 1.23 1.54 1.29 Download (415 MB) PDF Code
14. 3DDFA-V2 [Guo et al., ECCV 2020]* 1.23 1.57 1.39 Download (414 MB) PDF Code
15. Dib et al. [ICCV 2021] 1.26 1.57 1.31 Download (415 MB) PDF  
16. SynergyNet [Wu et al., 3DV 2021] 1.27 1.59 1.31 Download (415 MB) PDF Code
17. MGCNet [Shang et al., ECCV 2020] 1.31 1.87 2.63 Download (415 MB) PDF Code
18. PRNet [Feng et al., ECCV 2018] 1.50 1.98 1.88 Download (415 MB) PDF Code
19. UMDFA [Koizumi et al., ECCV 2020] 1.52 1.89 1.57 Download (415 MB) PDF  
20. 3DMM-CNN [Tran et al., CVPR 2017] 1.84 2.33 2.05 Download (415 MB) PDF Code

The table only lists methods that were marked as public when submitted to the NoW challenge, or that have been published at a peer-reviewed conference or journal. Methods are ranked by the median reconstruction error; methods with the same median (after rounding) are sorted by the mean error. FLAME 2020 template refers to the performance of the static template of the FLAME face model. Results for [Deng et al. 2019] and 3DDFA-V2 are taken from DECA. To create a cumulative error plot, download the complete error files (see the sketch below). *) 3DDFA-V2 failed for a few images; the reported errors exclude these images. Methods that fail for more than 5 images in total are not accepted for comparison.
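As a starting point, the following sketch computes the tabulated statistics and a cumulative error curve from a downloaded error file. It assumes the file can be read as a flat list of distances in millimeters; the actual format of the NoW error files may differ, and the filename `method_errors.txt` is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical filename; replace with a file from the downloaded archive.
errors = np.loadtxt("method_errors.txt")   # per-point distances in mm

# Cumulative error curve: fraction of points below each error threshold.
errors = np.sort(errors)
fraction = np.arange(1, len(errors) + 1) / len(errors)

plt.plot(errors, fraction)
plt.xlabel("Error (mm)")
plt.ylabel("Fraction of points")
plt.title("Cumulative error curve")
plt.grid(True)
plt.show()

# The leaderboard statistics, in the order shown in the table.
print(f"median: {np.median(errors):.2f} mm, "
      f"mean: {errors.mean():.2f} mm, std: {errors.std():.2f} mm")
```

Plotting several methods' curves on the same axes reproduces the usual benchmark comparison: a curve that rises faster (more points below a given error) indicates a more accurate reconstruction.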

© 2020 Max-Planck-Gesellschaft - Imprint - Privacy Policy - License