ShapeNet Intrinsic Images v2.0 Extended

Creators
Publication date 2024
Description
The synthetic ShapeNet intrinsic image decomposition dataset of 90,000 images; 50,000 of them were used to train the deep CNN models of CVIU'2021 (see Section 4 of the paper). This extends the first release of the synthetic ShapeNet intrinsic image decomposition dataset of 20,000 images, which was used to train the deep CNN models IntrinsicNet and RetiNet of CVPR'2018; see Section 4.1 of the CVPR paper for details of the data rendering.

As in the initial dataset, the albedo and shading ground-truth images were rendered in HDR and then normalized to [0,1] using min-max scaling. The composite RGB image was then created by element-wise multiplying the corresponding albedo and shading ground truths.

- albedo -> albedo (reflectance) ground-truth images
- shading -> gray-scale shading (illumination) ground-truth images
- mask -> object masks
- composite -> composite RGB images (albedo x shading)
- shading_prior_initial -> initial sparse shading estimations (see Section 3.3 of the paper)
- shading_prior_filled -> dense shading map reconstructions (see Section 3.4 of the paper)

The shading_prior_filled folder is split into two parts (shading_prior_filled.z01 and shading_prior_filled.zip). To extract them, you need to unzip the split archive; if you are not sure how, see https://superuser.com/a/336224
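The normalization and compositing steps described above can be sketched as follows. This is a minimal illustration, not code from the dataset release: the arrays are hypothetical placeholders, and it assumes NumPy with per-image min-max scaling.

```python
import numpy as np

# Hypothetical HDR ground-truth images (arbitrary HDR value ranges).
albedo_hdr = np.array([[0.2, 1.8], [3.0, 0.5]])
shading_hdr = np.array([[0.1, 2.5], [4.0, 1.0]])

def min_max_normalize(img):
    """Scale an image to [0, 1] using min-max normalization."""
    return (img - img.min()) / (img.max() - img.min())

albedo = min_max_normalize(albedo_hdr)
shading = min_max_normalize(shading_hdr)

# Composite image: element-wise product of the normalized ground truths.
composite = albedo * shading
```

Because both factors lie in [0,1], the composite also stays in [0,1] without further clipping.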
Publisher Harvard Dataverse
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Document type Dataset
Related publication Physics-based Shading Reconstruction for Intrinsic Image Decomposition
DOI https://doi.org/10.7910/DVN/1XMMBZ
Other links https://dataverse.harvard.edu/citation?persistentId=doi:10.7910/DVN/1XMMBZ