Poster
Curriculum Learning by Dynamic Instance Hardness
Tianyi Zhou · Shengjie Wang · Jeffrey A Bilmes

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1333

A good teacher can adjust a curriculum based on students' learning history. By analogy, in this paper we study the dynamics of a deep neural network's (DNN) performance on individual samples during training. The observed properties allow us to develop an adaptive curriculum that leads to faster learning of more accurate models. We introduce dynamic instance hardness (DIH), the exponential moving average of a sample's instantaneous hardness (e.g., its loss or a change in output) over the training history. A low DIH indicates that the model retains knowledge about the sample over time. For DNNs, we find that a sample's DIH early in training predicts its DIH in later stages. Hence, we can train a model mostly on samples with higher DIH and safely deprioritize those with lower DIH. This motivates a DIH-guided curriculum learning (DIHCL) procedure. Compared to existing CL methods: (1) DIH is more stable over time than instantaneous hardness alone, which is noisy due to stochastic training and the non-smoothness of DNNs; (2) DIHCL is computationally inexpensive, since it uses only a byproduct of back-propagation and thus requires no extra inference. On 11 datasets, DIHCL significantly outperforms random mini-batch SGD and recent CL methods in terms of both efficiency and final performance. Code for DIHCL is available at https://github.com/tianyizhou/DIHCL.
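The bookkeeping the abstract describes is simple enough to sketch. Below is a minimal, illustrative Python sketch, not the released DIHCL implementation: all names (`gamma`, `dih`, `update_dih`, `select_batch_pool`) and the discount value are assumptions. It shows DIH maintained as an exponential moving average of per-sample loss, and a curriculum that prioritizes high-DIH samples.

```python
import numpy as np

# Illustrative sketch of the DIH idea from the abstract -- not the
# authors' released code. All names and hyperparameter values here
# are assumptions made for clarity.

n_samples = 50000
gamma = 0.9                   # assumed EMA discount factor
dih = np.zeros(n_samples)     # running DIH estimate, one entry per sample

def update_dih(sample_ids, losses):
    """EMA update of instantaneous hardness; here hardness is the
    per-sample loss, already a byproduct of back-propagation, so no
    extra inference pass is needed."""
    dih[sample_ids] = gamma * dih[sample_ids] + (1.0 - gamma) * losses

def select_batch_pool(k):
    """Prioritize the k samples the model still finds hard (highest
    DIH); low-DIH samples, i.e., knowledge the model retains, are
    safely deprioritized."""
    return np.argsort(-dih)[:k]
```

In a training loop, one would call `update_dih` with each mini-batch's sample indices and losses, and periodically rebuild the training pool via `select_batch_pool`. The paper's actual selection rule may differ (e.g., randomized or weighted sampling rather than a hard top-k), so treat this purely as an illustration of the EMA bookkeeping.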

Author Information

Tianyi Zhou (University of Washington, Seattle)

Tianyi Zhou (https://tianyizhou.github.io) is a tenure-track assistant professor of computer science at the University of Maryland, College Park. He received his Ph.D. from the School of Computer Science & Engineering at the University of Washington, Seattle. His research interests are in machine learning, optimization, and natural language processing (NLP). His recent work studies curriculum learning that combines high-level human learning strategies with model training dynamics to create a hybrid intelligence, with applications in semi-/self-supervised learning, robust learning, reinforcement learning, meta-learning, ensemble learning, and more. He has published more than 80 papers and is a recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE Computer Society TCSC Most Influential Paper Award.

Shengjie Wang (University of Washington)
Jeffrey A Bilmes (University of Washington, Seattle)

Jeffrey A. Bilmes is a professor in the Department of Electrical and Computer Engineering at the University of Washington, Seattle, and an adjunct professor in Computer Science & Engineering and in the Department of Linguistics. He is the founder of the MELODI (MachinE Learning for Optimization and Data Interpretation) lab. He received his Ph.D. from the Computer Science Division of the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, and a master's degree from MIT. He was also a researcher at the International Computer Science Institute and a member of the Realization group there.

Prof. Bilmes is a 2001 NSF CAREER award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship recipient, and a 2012/2013 ISCA Distinguished Lecturer. He was, along with Andrew Ng, one of the two program chairs of UAI (Conference on Uncertainty in Artificial Intelligence) in 2009 and then its general chair in 2010. He was also a workshop chair (2011) and the tutorials chair (2014) at NIPS/NeurIPS (Neural Information Processing Systems), and has regularly served as a senior technical chair at NeurIPS/NIPS since then. He was an action editor for the Journal of Machine Learning Research (JMLR).

His primary interests lie in statistical modeling (particularly graphical-model approaches) and signal processing for pattern classification, speech recognition, language processing, bioinformatics, machine learning, submodularity in combinatorial optimization and machine learning, active and semi-supervised learning, and audio/music processing. He is particularly interested in temporal graphical models (or dynamic graphical models, which include HMMs, DBNs, and CRFs), in designing efficient algorithms for them, and in designing their structure so that they perform better as structured classifiers. He also has strong interests in speech-based human-computer interfaces, the statistical properties of natural objects and natural scenes, information theory and its relation to natural computation by humans and pattern recognition by machines, and computational music processing (such as human timing subtleties). He is also interested in high-performance computing systems, computer architecture, and software techniques to reduce power consumption.

Prof. Bilmes pioneered (starting in 2003) the development of submodularity within machine learning, receiving best paper awards at ICML 2013, NIPS 2013, and ACM BCB 2016, all in this area. In 2014 he received a most-influential-paper-in-25-years award from the International Conference on Supercomputing for a paper on high-performance matrix optimization. He is the author of the Graphical Models Toolkit (GMTK), a dynamic graphical-model-based software system widely used in speech, language, bioinformatics, and human-activity recognition.
