How does CORNN optimize recurrent neural networks for fast neural dynamics inference?

Original title: CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics

Authors: Fatih Dinc, Adam Shai, Mark Schnitzer, Hidenori Tanaka

Advances in recording technologies now allow the activity of very large neural populations to be tracked simultaneously, opening a window onto how these populations compute. Extracting computational insight from this wealth of data typically relies on training data-constrained recurrent neural networks (dRNNs), but existing dRNN training methods are slow and scale poorly, limiting their practical use. CORNN is a new training approach that addresses these limitations by casting dRNN fitting as a convex optimization problem. In simulations, CORNN achieved training speeds roughly 100 times faster than traditional methods while preserving or improving modeling accuracy. It reproduced computations such as a flip-flop task and timed responses in networks of thousands of simulated cells, and it maintained accuracy even with incomplete observations or mismatched time scales, robustly recovering network dynamics. Its speed and its capacity to handle millions of parameters make it a practical tool for real-time analysis of large-scale neural recordings and a step toward understanding neural computation as it unfolds.
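To make the "convex optimization" idea concrete, here is a minimal, hypothetical sketch of the general principle: when the full rate trajectory of a network is observed, the pointwise nonlinearity can be inverted at every time step, and fitting each neuron's incoming weights reduces to a convex (ridge) regression. The rate model, solver, and all parameter names below are illustrative assumptions for this sketch, not CORNN's exact formulation or solver.

```python
import numpy as np

# Assumed rate model for illustration:
#   r[t+1] = (1 - alpha) * r[t] + alpha * tanh(W @ r[t])
# With r fully observed, tanh(W @ r[t]) is known at every step, so arctanh
# of it gives a linear regression target for W -- a convex problem.

rng = np.random.default_rng(0)
N, T, alpha, lam = 50, 2000, 0.1, 1e-3  # neurons, time steps, leak, ridge

# Teacher network generates the "recorded" rates.
W_true = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))
r = np.zeros((T, N))
r[0] = rng.uniform(-0.5, 0.5, N)
for t in range(T - 1):
    r[t + 1] = (1 - alpha) * r[t] + alpha * np.tanh(W_true @ r[t])

# Invert the update rule to recover tanh(W @ r[t]), then invert tanh
# (clipping for numerical safety) to obtain linear regression targets.
phi = (r[1:] - (1 - alpha) * r[:-1]) / alpha
targets = np.arctanh(np.clip(phi, -0.999, 0.999))   # shape (T-1, N)

# Ridge regression: every neuron's incoming weights solve a convex
# least-squares problem with the same design matrix (the observed rates),
# so all N problems can be solved jointly with one linear solve.
X = r[:-1]                                            # shape (T-1, N)
A = X.T @ X + lam * np.eye(N)
W_hat = np.linalg.solve(A, X.T @ targets).T           # row i = weights into neuron i

err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
print(f"relative weight recovery error: {err:.3e}")
```

Because each neuron's fit is an independent convex problem sharing one design matrix, this kind of formulation avoids the slow, non-convex gradient descent used by conventional dRNN training and scales to very large populations.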

Original article: https://arxiv.org/abs/2311.10200