Opening the black box: Machine learning for Next-Level Animal Science

Artificial neural networks (ANNs) are typically trained to perform a single task; learning further tasks generates conflicts that reduce the model's accuracy. In striking contrast, even the simplest of animals can learn multiple tasks. This project aims to understand how animals learn multiple tasks, using ANNs as minimal models of brains.

The researchers investigate what needs to change in an ANN to facilitate multi-task learning by carrying out numerical or “in-silico” experiments. Although ANNs are a crude approximation of the brain, they share important similarities with biological learning: an objective function, an architecture, and a learning rule. In an ANN, once the researchers open the black box and code up their own models, each of these ingredients can be made biologically plausible. Importantly, the minimal approach does not attempt to accurately simulate a biological brain. Instead, it aims to reveal general, yet biologically relevant principles about the biophysics of multi-task learning.
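As an illustration, the three shared ingredients can be sketched in a few lines. The linear model, data, and learning rate below are placeholder assumptions for exposition, not the project's models:

```python
import numpy as np

# Minimal sketch of the three ingredients shared with biological learning:
# an architecture (a single linear unit), an objective function (mean squared
# error), and a learning rule (gradient descent). Purely illustrative.

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))            # inputs
y = X @ np.array([1.0, -2.0, 0.5])      # target rule to learn

w = np.zeros(3)                         # architecture: y_hat = X @ w
for _ in range(200):
    err = X @ w - y                     # objective: mean squared error
    grad = 2 * X.T @ err / len(X)
    w -= 0.1 * grad                     # learning rule: gradient descent

print(np.round(w, 2))                   # should approach [1.0, -2.0, 0.5]
```

In a biologically plausible variant, each ingredient can be swapped out independently, e.g. replacing gradient descent with a local learning rule.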

The researchers identified the choice of learning protocol as a further ingredient for multi-task learning. They find that repeatedly alternating between tasks during learning can allow the network to accommodate multiple tasks. What is the optimal frequency of alternation? Does it depend on the types of tasks? Furthermore, they find that multi-task learning generates an emergent partitioning of the network, in which different sub-tasks are stored in different nodes. How much does this partitioning encode information about the relatedness of tasks? Which features of biological networks, such as sparsity, influence the ability to learn and maintain multiple tasks? The researchers aim to answer these questions in this project.
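The alternation protocol can be sketched as a simple batch schedule, where a single parameter sets how often training switches between tasks. The function name and parameters below are illustrative assumptions, not the project's code:

```python
# Hypothetical sketch of an interleaved training schedule. `block_size`
# controls the alternation frequency: how many consecutive batches are drawn
# from one task before switching to the next.

def interleaved_schedule(n_batches, n_tasks, block_size):
    """Return a task index per batch, switching every `block_size` batches."""
    return [(i // block_size) % n_tasks for i in range(n_batches)]

print(interleaved_schedule(8, 2, 1))  # strict alternation: [0, 1, 0, 1, 0, 1, 0, 1]
print(interleaved_schedule(8, 2, 4))  # near-sequential:    [0, 0, 0, 0, 1, 1, 1, 1]
```

Sweeping `block_size` from 1 (strict alternation) toward the full training length (fully sequential) is one way to probe the optimal frequency of alternation.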

While they use deep learning models to uncover principles of multi-task learning in the biological brain, the work will also give them a better understanding of deep learning itself and might lead to further bio-inspired development of artificial intelligence with a broad range of applications. For example, their work explores the importance of individual neurons or nodes for different tasks. This shows how the network allocates its limited capacity, and how the different ingredients determine this allocation.
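One common way to probe the importance of individual nodes is ablation (lesioning): zero out one hidden node at a time and measure the resulting accuracy drop on a task. The tiny network, data, and function names below are illustrative assumptions, not the project's models:

```python
import numpy as np

# Hypothetical lesioning sketch: estimate each hidden node's importance for a
# task as the accuracy lost when that node is ablated (set to zero).

def forward(X, W1, W2, ablate=None):
    h = np.maximum(0, X @ W1)          # ReLU hidden layer
    if ablate is not None:
        h[:, ablate] = 0.0             # lesion one hidden node
    return h @ W2

def node_importance(X, y, W1, W2):
    """Accuracy drop per hidden node when that node is ablated."""
    def acc(ablate=None):
        preds = forward(X, W1, W2, ablate).argmax(axis=1)
        return (preds == y).mean()
    base = acc()
    return np.array([base - acc(j) for j in range(W1.shape[1])])

# Demo with random placeholder weights; labels are the intact network's own
# predictions, so baseline accuracy is 1 and every importance is non-negative.
rng = rng_demo = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 3))
y = forward(X, W1, W2).argmax(axis=1)
imp = node_importance(X, y, W1, W2)
print(imp)
```

Computing such importance profiles separately per task is one way to quantify how capacity is partitioned across nodes.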

Progress (September 2022)

The researchers are currently running numerical experiments, training simple neural networks on multiple, unrelated tasks. They are comparing different model ingredients, including variants that are more biologically plausible than those used in conventional applications. So far, they find that some training protocols are much more effective than others. Building on these experiments, the researchers are designing protocols for learning experiments with animals, to determine whether the same training schedules are as effective in animal trials.