4DHumanPercept
4D human animations containing some of the most common generation artefacts, perceptually evaluated against their reference versions.
4DHumanPercept is the first dataset of virtual human animations acquired with a 4D acquisition system and distorted using controlled factors, with labels identifying the corresponding degree of perceptual similarity. The dataset is split into a training and validation set, and a test set:
The former comprises 240 simulated images created from 8 acquired reference animations covering different actors (1 female, 1 male), motions (walking, jumping) and clothing (tight, loose). Each reference animation was distorted with 6 error types at 5 distortion levels.
The latter is composed of 10 simulated images created by randomly applying one level of distortion to 8 newly acquired reference animations from 5 subjects (2 females, 3 males) wearing either tight or loose outfits and performing either walking or hopping movements.
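The training set's factorial structure (8 reference animations, each distorted by every combination of 6 error types and 5 levels) can be sketched as follows. The factor labels below are hypothetical placeholders, not the dataset's actual file identifiers:

```python
from itertools import product

# Hypothetical factor labels; the dataset's actual identifiers may differ.
actors = ["female", "male"]
motions = ["walking", "jumping"]
clothing = ["tight", "loose"]
error_types = [f"error_{i}" for i in range(1, 7)]  # 6 distortion types
levels = [1, 2, 3, 4, 5]                           # 5 distortion levels

# 2 actors x 2 motions x 2 outfits = 8 reference animations
references = list(product(actors, motions, clothing))

# Every reference crossed with every (error type, level) pair
training_set = list(product(references, error_types, levels))

print(len(references))    # 8 reference animations
print(len(training_set))  # 240 distorted stimuli
```

This enumeration reproduces the 240-image count stated above (8 × 6 × 5).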