TreeDNN: A Deep Container Network
Brijraj Singh · Swati Gupta · Mayukh Das · Praveen Doreswamy Naidu · Sharan Allur

Multi-Task Learning (MTL) has shown its importance in user products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not offer a solution when each task needs to be trained on a different dataset. To solve the stated problem, we have proposed an architecture named TreeDNN along with its training methodology. TreeDNN helps in training the model with multiple datasets simultaneously, where each branch of the tree may need a different training dataset. We have shown in the results that TreeDNN provides competitive performance with the advantage of reduced ROM requirement for parameter storage and increased responsiveness of the system by loading only the specific branch at inference time.
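The branch-loading idea from the abstract can be illustrated with a minimal sketch: a shared trunk feeds several task-specific branches, and only the trunk plus the requested branch are needed at inference. All layer sizes, task names, and the plain NumPy layers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # A toy fully connected layer; weights would normally come from training.
    return {"W": rng.standard_normal((in_dim, out_dim)) * 0.1,
            "b": np.zeros(out_dim)}

def relu_forward(layer, x):
    return np.maximum(x @ layer["W"] + layer["b"], 0.0)

# Shared trunk (stored once) and per-task branches (stored separately,
# hypothetical task names "task_a" and "task_b").
trunk = linear(16, 8)
branches = {"task_a": linear(8, 3), "task_b": linear(8, 5)}

def infer(x, task):
    # At inference only the trunk and the selected branch are used, so the
    # remaining branches never need to be loaded into memory.
    h = relu_forward(trunk, x)
    head = branches[task]
    return h @ head["W"] + head["b"]

x = rng.standard_normal((2, 16))
print(infer(x, "task_a").shape)  # (2, 3)
print(infer(x, "task_b").shape)  # (2, 5)
```

Each branch here could, per the abstract, be trained on its own dataset while the trunk parameters are shared across tasks.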

Author Information

Brijraj Singh (Sony Research India)
Swati Gupta (JIIT NOIDA)
Mayukh Das (Microsoft Research)
Praveen Doreswamy Naidu (Samsung R&D)
Sharan Allur (Samsung Research Institute Bangalore, India)
