Poster
Learning to Multitask
Yu Zhang · Ying Wei · Qiang Yang

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #113

Multitask learning has shown promising performance in many applications, and many multitask models have been proposed. To identify an effective multitask model for a given multitask problem, we propose a learning framework called Learning to MultiTask (L2MT). To achieve this goal, L2MT exploits historical multitask experience, organized as a training set of tuples, each containing a multitask problem with multiple tasks, a multitask model, and the relative test error. Based on such a training set, L2MT first uses a proposed layerwise graph neural network to learn task embeddings for all the tasks in a multitask problem, and then learns an estimation function that predicts the relative test error from the task embeddings and the representation of the multitask model, which is derived from a unified formulation. Given a new multitask problem, the estimation function is used to identify a suitable multitask model. Experiments on benchmark datasets show the effectiveness of the proposed L2MT framework.
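The selection pipeline described above can be sketched in miniature. This is a hedged illustration only: the function names, the toy mean-pooling "embedding" (standing in for the paper's layerwise graph neural network), and the dot-product "estimator" (standing in for the learned estimation function) are all assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of the L2MT model-selection step.
# All names and computations here are hypothetical stand-ins,
# not the method from the paper.

def task_embedding(task_data):
    """Toy embedding: per-feature mean over a task's examples
    (a stand-in for the layerwise graph neural network)."""
    n = len(task_data)
    dim = len(task_data[0])
    return [sum(x[j] for x in task_data) / n for j in range(dim)]

def estimate_relative_error(embeddings, model_repr):
    """Toy estimation function: dot product between the averaged
    task embedding and a model's representation vector
    (a stand-in for the learned estimator)."""
    avg = [sum(e[j] for e in embeddings) / len(embeddings)
           for j in range(len(embeddings[0]))]
    return sum(a * m for a, m in zip(avg, model_repr))

def select_model(multitask_problem, candidate_models):
    """Pick the candidate multitask model with the lowest
    estimated relative test error."""
    embeddings = [task_embedding(t) for t in multitask_problem]
    return min(candidate_models,
               key=lambda m: estimate_relative_error(embeddings, m["repr"]))

# Two toy tasks, each with a few 2-d examples.
problem = [[[1.0, 0.0], [3.0, 0.0]],
           [[0.0, 2.0], [0.0, 4.0]]]
# Hypothetical candidate models with made-up representation vectors.
models = [{"name": "model-A", "repr": [1.0, 1.0]},
          {"name": "model-B", "repr": [-1.0, 0.5]}]
best = select_model(problem, models)
print(best["name"])  # → model-B (lowest toy score)
```

The sketch only conveys the interface: embed the tasks of a new problem, score each candidate model with the estimator, and return the minimizer.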

Author Information

Yu Zhang (HKUST)
Ying Wei (Tencent AI Lab)
Qiang Yang (Hong Kong University of Science and Technology)