In many sequential tasks, a model must remember relevant events from the distant past to make correct predictions. Unfortunately, a straightforward application of gradient-based training requires intermediate computations to be stored for every element of a sequence. Storing this intermediate data becomes prohibitively expensive when a sequence consists of thousands or even millions of elements and, as a result, makes learning very long-term dependencies infeasible. However, the majority of sequence elements can usually be predicted from temporally local information alone. Predictions that do depend on long-term context, on the other hand, are sparse and carry high uncertainty when only local information is available. We propose \texttt{MemUP}, a new training method that learns long-term dependencies without backpropagating gradients through the whole sequence at once. The method can potentially be applied to any recurrent architecture. An LSTM network trained with \texttt{MemUP} performs comparably to or better than baselines while storing less intermediate data.
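The abstract's key observation is that hard-to-predict steps are sparse and can be identified by their local predictive uncertainty. A minimal sketch of that selection step, assuming per-step uncertainty scores are already available (the helper name and toy data are hypothetical, not the authors' code):

```python
import numpy as np

def select_uncertain_targets(uncertainty, k):
    """Return the indices of the k time steps with the highest predictive
    uncertainty. These sparse steps are the ones a memory module would be
    trained to predict, so gradients need not flow through the full sequence.
    (Hypothetical helper illustrating the idea, not the paper's code.)"""
    return np.sort(np.argsort(uncertainty)[-k:])

# Toy 12-step sequence: most steps are locally predictable (low uncertainty),
# a few depend on distant context (high uncertainty).
uncertainty = np.array([0.1, 0.1, 0.9, 0.1, 0.2, 0.1,
                        0.1, 0.8, 0.1, 0.1, 0.7, 0.1])
targets = select_uncertain_targets(uncertainty, k=3)
print(targets)  # -> [ 2  7 10], the sparse long-dependency predictions
```

Because only these few target steps require memory-based prediction, training can proceed over short truncated segments while keeping intermediate activations only where they matter, which is consistent with the reduced storage the abstract claims.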
Author Information
Artyom Sorokin (AIRI, MIPT)
Nazar Buzun (Skolkovo Institute of Science and Technology)
Leonid Pugachev (Moscow Institute of Physics and Technology)
Mikhail Burtsev (Artificial Intelligence Research Institute (AIRI))
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Explain My Surprise: Learning Efficient Long-Term Memory by predicting uncertain outcomes
  Tue, Nov 29 through Wed, Nov 30. Room: Hall J #424

More from the Same Authors
- 2022: Fifteen-minute Competition Overview Video
  Maartje Anne ter Hoeve · Mikhail Burtsev · Zoya Volovikova · Ziming Li · Yuxuan Sun · Shrestha Mohanty · Negar Arabzadeh · Mohammad Aliannejadi · Milagro Teruel · Marc-Alexandre Côté · Kavya Srinet · arthur szlam · Artem Zholus · Alexey Skrynnik · Aleksandr Panov · Ahmed Awadallah · Julia Kiseleva
- 2022 Spotlight: Lightning Talks 5A-2
  Qiang LI · Zhiwei Xu · Jiaqi Yang · Thai Hung Le · Haoxuan Qu · Yang Li · Artyom Sorokin · Peirong Zhang · Mira Finkelstein · Nitsan levy · Chung-Yiu Yau · dapeng li · Thommen Karimpanal George · De-Chuan Zhan · Nazar Buzun · Jiajia Jiang · Li Xu · Yichuan Mo · Yujun Cai · Yuliang Liu · Leonid Pugachev · Bin Zhang · Lucy Liu · Hoi-To Wai · Liangliang Shi · Majid Abdolshah · Yoav Kolumbus · Lin Geng Foo · Junchi Yan · Mikhail Burtsev · Lianwen Jin · Yuan Zhan · Dung Nguyen · David Parkes · Yunpeng Baiia · Jun Liu · Kien Do · Guoliang Fan · Jeffrey S Rosenschein · Sunil Gupta · Sarah Keren · Svetha Venkatesh
- 2022 Competition: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment
  Julia Kiseleva · Alexey Skrynnik · Artem Zholus · Shrestha Mohanty · Negar Arabzadeh · Marc-Alexandre Côté · Mohammad Aliannejadi · Milagro Teruel · Ziming Li · Mikhail Burtsev · Maartje Anne ter Hoeve · Zoya Volovikova · Aleksandr Panov · Yuxuan Sun · arthur szlam · Ahmed Awadallah · Kavya Srinet
- 2022 Poster: Recurrent Memory Transformer
  Aidar Bulatov · Yury Kuratov · Mikhail Burtsev
- 2021: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment + Q&A
  Julia Kiseleva · Ziming Li · Mohammad Aliannejadi · Maartje Anne ter Hoeve · Mikhail Burtsev · Alexey Skrynnik · Artem Zholus · Aleksandr Panov · Katja Hofmann · Kavya Srinet · arthur szlam · Michel Galley · Ahmed Awadallah
- 2018: Mikhail Burtsev and Varvara Logacheva - Wild evaluation of chat-bots
  Mikhail Burtsev · Varvara Logacheva
- 2017: Competition III: The Conversational Intelligence Challenge
  Mikhail Burtsev · Ryan Lowe · Iulian Vlad Serban · Yoshua Bengio · Alexander Rudnicky · Alan W Black · Shrimai Prabhumoye · Artem Rodichev · Nikita Smetanin · Denis Fedorenko · CheongAn Lee · EUNMI HONG · Hwaran Lee · Geonmin Kim · Nicolas Gontier · Atsushi Saito · Andrey Gershfeld · Artem Burachenok