

Poster
in
NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

Activation Functions in Non-Negative Neural Networks

Marlon Becker · Dominik Drees · Frank Brückerhoff-Plückelmann · Carsten Schuck · Wolfram Pernice · Benjamin Risse


Abstract:

Optical neural networks (ONNs) have the potential to overcome the scaling limitations of transistor-based systems due to their inherently low latency and large available bandwidth. However, encoding information directly in the physical properties of light fields also imposes new computational constraints, for example the restriction to only positive intensity values in incoherent photonic processors. In this work, we address the design and training challenges of physically constrained information processing, with a particular focus on activation functions in non-negative neural networks (4Ns). Building on biological inspiration, we revisit the concept of inhibitory (decreasing) and excitatory (increasing) activation functions, explore their effects experimentally, and introduce a general approach to weight initialization for non-negative neural networks. Our results indicate that both excitatory and inhibitory elements in activation functions are important for incoherent ONNs and should be considered in the future design of optical activation functions. Code is available at https://XXXXXXXX.
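To make the two activation-function classes concrete, the sketch below illustrates the distinction the abstract draws: on non-negative inputs, an excitatory function is monotonically increasing while an inhibitory one is monotonically decreasing, and both keep outputs non-negative so they remain representable as light intensities. The specific functional forms, the `NonNegativeLinear` layer, and the softplus-based initialization are illustrative assumptions, not the functions or initialization scheme proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical examples of the two activation classes named in the abstract.
# Both map non-negative inputs to non-negative outputs; only the monotonicity
# differs. These are placeholder choices, not the paper's proposed functions.

def excitatory(x: torch.Tensor) -> torch.Tensor:
    """Increasing and non-negative: a saturating response, 1 - exp(-x)."""
    return 1.0 - torch.exp(-x)

def inhibitory(x: torch.Tensor) -> torch.Tensor:
    """Decreasing and non-negative: larger inputs suppress the output."""
    return torch.exp(-x)

class NonNegativeLinear(nn.Module):
    """Linear layer whose effective weights are kept >= 0 via a softplus
    reparameterization (one common construction; the paper's initialization
    approach may differ)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        raw = torch.empty(out_features, in_features)
        # Initialize the raw parameters so that softplus(raw) starts with a
        # small positive mean, preventing activations from growing layer by
        # layer (all-positive weights cannot cancel each other out).
        nn.init.normal_(raw, mean=-3.0, std=0.5)
        self.raw_weight = nn.Parameter(raw)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = F.softplus(self.raw_weight)  # elementwise, always >= 0
        return x @ weight.t()

if __name__ == "__main__":
    x = torch.rand(4, 8)             # non-negative "intensity" inputs
    layer = NonNegativeLinear(8, 16)
    h = layer(x)
    # Both activation types stay non-negative, as required for incoherent ONNs.
    print(bool(excitatory(h).min() >= 0), bool(inhibitory(h).min() >= 0))
```

Because all-positive weighted sums can only accumulate, a purely excitatory non-negative network has limited ability to express cancellation; inhibitory activations offer one route to recover it, which is consistent with the abstract's finding that both elements matter.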
