Real-world learning systems have practical limitations on the quality and quantity of the training data they can collect and consider. How should a system choose a subset of the possible training examples that still allows for learning accurate, generalizable models? To help address this question, we draw inspiration from a highly efficient practical learning system: the human child. Using head-mounted cameras, eye gaze trackers, and a model of foveated vision, we collected first-person (egocentric) images that represent a close approximation of the "training data" that toddlers' visual systems collect in everyday, naturalistic learning contexts. We used state-of-the-art computer vision models (convolutional neural networks) to characterize the structure of these data, and found that child data produce significantly better object models than egocentric data experienced by adults in exactly the same environment. Using the CNNs as a modeling tool to investigate which properties of the child data may enable this rapid learning, we found that child data exhibit a unique combination of quality and diversity: not only many similar large, high-quality object views, but also a greater number and diversity of rare views. This novel methodology of analyzing the visual "training data" used by children may not only reveal insights to improve machine learning, but also suggest new experimental tools to better understand infant learning in developmental psychology.
Author Information
Sven Bambach (The Research Institute at Nationwide Children's Hospital)
David Crandall (Indiana University)
Linda Smith (Indiana University)
Chen Yu (Indiana University)
More from the Same Authors
- 2021: Enhanced Zero-Resource Speech Challenge 2021: Language Modelling from Speech and Images + Q&A
  Ewan Dunbar · Alejandrina Cristia · Okko Räsänen · Bertrand Higy · Marvin Lavechin · Grzegorz Chrupała · Afra Alishahi · Chen Yu · Maureen De Seyssel · Tu Anh Nguyen · Mathieu Bernard · Nicolas Hamilakis · Emmanuel Dupoux
- 2020 Workshop: BabyMind: How Babies Learn and How Machines Can Imitate
  Byoung-Tak Zhang · Gary Marcus · Angelo Cangelosi · Pia Knoeferle · Klaus Obermayer · David Vernon · Chen Yu
- 2019: Panel Discussion
  Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency
- 2019: Linda Smith
  Linda Smith
- 2019 Poster: A Self Validation Network for Object-Level Human Attention Estimation
  Zehua Zhang · Chen Yu · David Crandall
- 2019 Poster: Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition
  Satoshi Tsutsui · Yanwei Fu · David Crandall
- 2017: Panel Discussion
  Felix Hill · Olivier Pietquin · Jack Gallant · Raymond Mooney · Sanja Fidler · Chen Yu · Devi Parikh
- 2017: How infant learn to speak by interacting with the visual world?
  Chen Yu
- 2016 Poster: Stochastic Multiple Choice Learning for Training Diverse Deep Ensembles
  Stefan Lee · Senthil Purushwalkam · Michael Cogswell · Viresh Ranjan · David Crandall · Dhruv Batra