
A Channel Coding Benchmark for Meta-Learning
Rui Li · Ondrej Bohdal · Rajesh K Mishra · Hyeji Kim · Da Li · Nicholas Lane · Timothy Hospedales

Meta-learning provides a popular and effective family of methods for data-efficient learning of new tasks. However, several important issues in meta-learning have proven hard to study thus far. For example, performance degrades in real-world settings where meta-learners must learn from a wide and potentially multi-modal distribution of training tasks, and when distribution shift exists between the meta-train and meta-test task distributions. These issues are typically hard to study because the shape of task distributions, and the shift between them, are not straightforward to measure or control in standard benchmarks. We propose the channel coding problem as a benchmark for meta-learning. Channel coding is an important practical application in which task distributions naturally arise and fast adaptation to new tasks is practically valuable. We use this benchmark to study several aspects of meta-learning, including the impact of task-distribution breadth and shift on meta-learner performance, both of which can be controlled in the coding problem. Going forward, this benchmark provides a tool for the community to study the capabilities and limitations of meta-learning, and to drive research on practically robust and effective meta-learners.
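To make the benchmark idea concrete, here is a minimal hypothetical sketch (not the authors' code) of how channel decoding yields a family of meta-learning tasks: each task is a noisy channel, and the breadth of the task distribution is controlled by the range of channel noise levels (SNR). All function names and parameter choices here are illustrative assumptions.

```python
# Hypothetical sketch: channel decoding as a meta-learning task family.
# Each task is an AWGN channel with its own noise level; the learner
# must map noisy observations back to the transmitted bits.
import numpy as np

def sample_task(rng, snr_db_range=(0.0, 10.0)):
    """Sample one task: an AWGN channel with a random SNR (in dB).
    A wider snr_db_range gives a broader task distribution; disjoint
    meta-train/meta-test ranges induce distribution shift."""
    snr_db = rng.uniform(*snr_db_range)
    noise_std = 10 ** (-snr_db / 20)  # unit-power BPSK signal assumed
    return noise_std

def make_dataset(rng, noise_std, n=256, k=8):
    """Generate (noisy observation, bits) pairs for one task."""
    bits = rng.integers(0, 2, size=(n, k))
    symbols = 1 - 2 * bits.astype(float)  # BPSK mapping: 0 -> +1, 1 -> -1
    noisy = symbols + rng.normal(0.0, noise_std, size=symbols.shape)
    return noisy, bits

rng = np.random.default_rng(0)
noise_std = sample_task(rng)
x, y = make_dataset(rng, noise_std)
```

A meta-learner would adapt to each sampled channel from a small support set of `(x, y)` pairs; controlling `snr_db_range` at meta-train versus meta-test time is one simple way to realize the measurable task-distribution breadth and shift that the abstract describes.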

Author Information

Rui Li (Samsung AI Center)
Ondrej Bohdal (University of Edinburgh)
Rajesh K Mishra (University of Texas at Austin)
Hyeji Kim (University of Texas at Austin)
Da Li
Nicholas Lane (Samsung AI Center Cambridge & University of Oxford)
Timothy Hospedales (University of Edinburgh)
