Poster

NeuralThink: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks

Bernardo Esteves · Miguel Vasco · Francisco S. Melo


Abstract:

While machine learning methods excel at pattern recognition, they struggle to perform complex reasoning in a scalable, algorithmic manner. Deep Thinking methods show promise in learning algorithms that extrapolate: algorithms learned in small environments can be executed in larger environments without loss in performance. However, prior work is limited to symmetrical tasks (such as image generation), where the input and output dimensionalities are the same. We propose NeuralThink, a novel Deep Thinking architecture that can efficiently and consistently learn algorithms that extrapolate in both symmetrical and asymmetrical tasks, where the dimensionalities of the input and output differ. We introduce a set of novel asymmetrical tasks to evaluate the extrapolation performance of Deep Thinking methods. We show that NeuralThink consistently outperforms prior state-of-the-art Deep Thinking approaches at extrapolating to large observations, while training on smaller observation sizes and requiring fewer parameters.
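
To make the extrapolation setting concrete, below is a minimal, hypothetical PyTorch sketch of the general Deep Thinking recipe the abstract describes: a weight-tied recurrent block that can be iterated more times at test time, plus a global-pooling head so a variable-size input produces a fixed-size output (the asymmetrical setting). This is an illustrative assumption, not the actual NeuralThink architecture; all layer choices and names are invented for the example.

import torch
import torch.nn as nn


class RecurrentSolver(nn.Module):
    """Hypothetical deep-thinking-style model, NOT the authors' NeuralThink."""

    def __init__(self, in_channels: int = 1, hidden: int = 32, out_dim: int = 1):
        super().__init__()
        # Encode the raw observation into a hidden feature map; every layer
        # is convolutional, so the model accepts grids of any size.
        self.encode = nn.Conv2d(in_channels, hidden, 3, padding=1)
        # Weight-tied "thinking" block applied repeatedly; the input is
        # re-injected at every step, a common deep-thinking design choice.
        self.step = nn.Sequential(
            nn.Conv2d(hidden + in_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
        )
        # Asymmetrical head: global average pooling collapses the spatial
        # dimensions, so an HxW grid of any size maps to a fixed-size output.
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor, iterations: int) -> torch.Tensor:
        h = self.encode(x)
        for _ in range(iterations):
            h = self.step(torch.cat([h, x], dim=1))
        return self.head(h.mean(dim=(2, 3)))  # shape: (batch, out_dim)


model = RecurrentSolver()
small = torch.randn(4, 1, 8, 8)    # training-size observations
large = torch.randn(4, 1, 32, 32)  # larger test-time observations
print(model(small, iterations=10).shape)  # torch.Size([4, 1])
print(model(large, iterations=40).shape)  # same output shape on larger inputs

Because the recurrent block is weight-tied and fully convolutional, the same parameters trained on 8x8 grids run unchanged on 32x32 grids, with extra iterations standing in for longer "thinking" time.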
