

Poster

Pretraining with Random Noise for Fast and Robust Learning without Weight Transport

Jeonghwan Cheon · Sang Wan Lee · Se-Bum Paik

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Unlike artificial neural networks, the brain prepares for learning even before interacting with the environment, by refining and optimizing its structures through prenatal, spontaneous neural activity that resembles random noise. However, the mechanism of this pretraining process is not yet thoroughly understood, and it is unclear whether it can benefit the training of machine learning algorithms. Here, we study this issue using a neural network trained with the feedback alignment algorithm, demonstrating that pretraining neural networks with random noise can greatly increase learning efficiency and generalization ability without weight transport. First, we found that random noise training modifies forward weights to match the backward synaptic feedback, which is necessary for feedback alignment, a biologically plausible error backpropagation algorithm. As a result, a network with pre-aligned weights learns notably faster than a network without random noise training, even reaching a convergence speed comparable to that of the backpropagation algorithm. We confirm that sequential training with both random noise and data brings the weights closer to the synaptic feedback than training with data alone, enabling more precise credit assignment and faster learning. We also found that each readout probability approaches chance level and that the effective dimensionality of the weights decreases in a network pretrained with random noise. This pre-regularization through random noise training allows the network to learn simple, low-rank solutions, reducing the generalization loss during subsequent training. We also observed that this enables the network to generalize robustly to a novel “out-of-distribution” dataset. Lastly, by examining the task-agnostic property of random noise training, we confirmed that random noise pretraining reduces meta-loss, enhancing the network’s ability to adapt to various tasks. Overall, our results suggest that random noise training with feedback alignment, modeled after the early brain’s strategy, offers a straightforward yet effective method of network pretraining that facilitates fast and reliable learning without weight transport.
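
To make the core idea concrete, below is a minimal sketch of feedback alignment with a random-noise pretraining phase. It is not the authors' implementation: the two-layer architecture, layer sizes, learning rate, Gaussian-noise inputs with random one-hot targets, and the `fa_step` helper are all illustrative assumptions. The key property it does reflect from the abstract is that the backward pass uses a fixed random feedback matrix instead of the transposed forward weights (no weight transport), and that a noise-only phase updates the forward weights before any data are seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes and learning rate are hypothetical, chosen only for illustration.
n_in, n_hid, n_out = 784, 256, 10
lr = 0.01

# Forward weights (learned) and a fixed random feedback matrix (never updated).
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B2 = rng.normal(0, 0.1, (n_hid, n_out))   # used in place of W2.T in the backward pass

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def fa_step(x, y_onehot):
    """One feedback-alignment update; the error is routed through B2, not W2.T."""
    global W1, W2
    h_pre = W1 @ x
    h = relu(h_pre)
    y_hat = softmax(W2 @ h)
    e = y_hat - y_onehot                    # output error (softmax + cross-entropy)
    dW2 = e @ h.T
    dh = (B2 @ e) * (h_pre > 0)             # fixed random feedback, no weight transport
    dW1 = dh @ x.T
    W2 -= lr * dW2 / x.shape[1]
    W1 -= lr * dW1 / x.shape[1]

# Phase 1: pretraining on pure random noise (inputs and targets are both random).
for _ in range(1000):
    x = rng.normal(0, 1, (n_in, 64))         # Gaussian-noise "images"
    labels = rng.integers(0, n_out, 64)       # arbitrary random labels
    y = np.eye(n_out)[:, labels]
    fa_step(x, y)

# Phase 2 would call fa_step on real data; after phase 1 the forward weights are
# expected to be better aligned with the fixed feedback B2, speeding up learning.
```

In this sketch, only the noise-pretraining phase is shown end to end; measuring the alignment angle between `W2.T` and `B2` before and after phase 1 would be one way to probe the pre-alignment effect the abstract describes.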
