
Turing Completeness of Bounded-Precision Recurrent Neural Networks
Stephen Chung · Hava Siegelmann

Fri Dec 10 08:30 AM -- 10:00 AM (PST)

Previous work has proved that recurrent neural networks (RNNs) are Turing-complete. However, these proofs rely on neurons with unbounded precision, which is neither practical to implement nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memory is needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine's time and independent of the memory size. The result extends to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and bounded-precision RNNs, revisiting and extending the theoretical foundations of RNNs.
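The core idea of the memory module, recruiting a fixed-precision neuron per stored symbol on a push and releasing it on a pop, can be illustrated with a toy sketch. This is an assumption-laden illustration, not the authors' actual 54-neuron construction; the class and method names are invented for exposition.

```python
# Toy sketch of a dynamically growing memory module (illustrative only;
# the names and structure here are assumptions, not the paper's construction).
# Each "neuron" stores a single stack symbol at fixed precision, so no
# individual unit ever needs unbounded precision: capacity grows by
# recruiting neurons rather than by packing more bits into one activation.

class GrowingMemoryModule:
    def __init__(self):
        # Activations of the currently recruited neurons, each in {0.0, 1.0}.
        self.neurons = []

    def push(self, bit):
        # Recruit a new fixed-precision neuron to hold one symbol.
        self.neurons.append(float(bit))

    def pop(self):
        # Release the most recently recruited neuron and return its symbol.
        return int(self.neurons.pop())

    def top(self):
        # Read the current top-of-stack symbol without releasing the neuron.
        return int(self.neurons[-1]) if self.neurons else None


mem = GrowingMemoryModule()
for b in (1, 0, 1):
    mem.push(b)
print(mem.pop(), mem.pop(), mem.top())  # last-in-first-out behavior
```

Because each symbol lives in its own unit, push and pop both take constant time, matching the paper's claim that simulation time is independent of memory size.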

Author Information

Stephen Chung (University of Massachusetts Amherst)

Stephen Chung graduated from the University of Massachusetts Amherst with a master's degree in 2021.

Hava Siegelmann (UMASS)
