We present a live demo that lets attendees interactively explore, in 3D and in Virtual Reality (VR), our new Virtual Human Actions Dataset (VHAD). Virtual worlds are rapidly gaining momentum as a reliable technique for generating visual training data. This is particularly true for video, where manual labeling is extremely difficult or even impossible. The resulting scarcity of adequately labeled training data is widely accepted as a major bottleneck for deep learning algorithms on important video understanding tasks such as action recognition.

VHAD is a tentative solution to this issue: it uses modern game technology (in particular, realistic rendering and physics engines) to generate large-scale, densely labeled, high-quality synthetic video data without any manual intervention. In contrast to approaches that use existing video games to record limited data from human game sessions (e.g., [7]), we build upon the more powerful approach of "virtual world generation" [1,2], which can be seen as building a kind of serious game (a dynamic virtual environment) played only by (game) AIs in order to generate training data for other (perceptual) AI algorithms.

The objective of our demo is to introduce attendees to the benefits of using such realistic virtual worlds, and to help them identify new challenges and opportunities, both in research and in applications, in particular for action recognition, scene understanding, autonomous driving, deep learning, domain adaptation, multi-task learning, data generation, and related fundamental scientific problems. Our demo lets users navigate through the dynamic virtual worlds used in VHAD with state-of-the-art VR headsets.
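To make the idea concrete, the following toy Python sketch illustrates why "virtual world generation" yields dense labels for free: since the generator itself scripts every actor's action, the ground truth is known by construction. The Scene and Actor classes and all method names below are simplified stand-ins invented purely for illustration; the actual VHAD pipeline is built on a full game engine with realistic rendering and physics, which is not shown here.

import json
import random
from dataclasses import dataclass

ACTIONS = ["walk", "run", "sit", "wave"]

@dataclass
class Actor:
    actor_id: int
    action: str = "idle"

class Scene:
    """Toy stand-in for a rendered, physics-driven 3D scene."""
    def __init__(self):
        self.actors = []
        self.frame = 0

    def spawn_actor(self):
        actor = Actor(actor_id=len(self.actors))
        self.actors.append(actor)
        return actor

    def step(self):
        self.frame += 1  # a real engine would advance physics/animation here

    def render(self, path):
        pass  # a real engine would rasterize the frame to `path`

def generate_clip(clip_id, num_frames=300, fps=30):
    scene = Scene()
    for _ in range(random.randint(1, 4)):
        scene.spawn_actor()
    labels = []
    for t in range(num_frames):
        for actor in scene.actors:
            if t % fps == 0:  # an AI controller re-scripts each actor every second
                actor.action = random.choice(ACTIONS)
        scene.step()
        scene.render(f"clip{clip_id:04d}/frame{t:06d}.png")
        # dense per-frame ground truth, with zero manual annotation:
        labels.extend(
            {"frame": t, "actor": a.actor_id, "action": a.action}
            for a in scene.actors
        )
    with open(f"clip{clip_id:04d}_labels.json", "w") as f:
        json.dump(labels, f)

if __name__ == "__main__":
    generate_clip(0)

In the real pipeline, step() and render() would be backed by the game engine's physics and rendering, but the labeling logic is unchanged: every annotation is emitted by the same program that staged the scene.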