Minutes of Roundtable Discussion 7

Our seventh roundtable discussion took place after Rishidev Chaudhuri’s MADDD seminar talk “Noise driven synaptic pruning.” The topic of the roundtable discussion was: Challenges and future of graph/network data analysis in neuroscience. It was conducted in a Q&A format between the host and the speaker with responses from the audience.

The speaker put up slides highlighting several interesting aspects of graphs and networks in neuroscience. First, the same brain network can be viewed at multiple levels, ranging from anatomy to dynamics to function, and understanding how these levels relate is a major question. Second, networks in the brain are plastic and change on multiple timescales. Third, an ongoing challenge is figuring out which features of a brain network are relevant and which can be ignored. Fourth, most brain networks are only very partially observed, and a major challenge is how to reconstruct the activity or function of a large network from a small sample. Fifth, multiple computations (such as the decoding of error-correcting codes and other inference problems) can be framed in terms of an abstract graph, and an interesting emerging approach is to ask whether and how these abstract graphs might have counterparts in actual brain networks. Finally, the speaker highlighted that there is a shortage of theories and data analysis tools in neuroscience, making it an exciting field in which to work, but that a major challenge is combining mathematical knowledge and skills with the domain-specific knowledge needed to ask interesting neuroscience questions.

Q: Over the years, neuroscientists and physiologists have built very precise microscopic models; for example, how a pulse/action potential propagates along a nerve axon is described by the Hodgkin-Huxley (HH) equations, which agree with experimental data very well. People then developed simplified versions of the HH equations, e.g., the FitzHugh-Nagumo (FHN) equations, to understand action potential behavior more mathematically. On the other hand, people also measure more macroscopic activities of the brain, for example via functional MRI (fMRI) data sets, and then use systems-theory-type approaches to try to understand the behavior of brains. And finally, these days, some mathematicians and scientists are exploring ways to integrate microscopic models into more macroscopic models using metric (or quantum) graphs, where each edge of such a graph is isomorphic to an interval, i.e., one can simulate action potentials along those edges as described by the HH or FHN equations. But these are still far from the complexity of real brains. So: how do those two different approaches (i.e., the bottom-up approach and the top-down approach) communicate with each other? I think that brains are very complex systems and both approaches are necessary, but I am curious to know what people in this field think about these issues.

A: This is like physics, where some people study elementary particles and other people study multi-atomic systems. It is important to have those levels, and sometimes it is a mistake to try to bring the bottom and top together because it would just be hopelessly complicated, just as you don't solve the Schrödinger equation to describe multi-electron systems. But each level can constrain and speak to the next, and I think the diversity of approaches is very important; each level should try to link to the levels above and below it. In general, bringing all the levels into one coherent theory is a very difficult task and may even be counterproductive.

Reaction from the audience: I think it is quite important for non-biologists working on neuroscientific problems to listen to their biological colleagues, because the biologists know what's important, which is not necessarily obvious from a mathematical point of view.
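[Scribe's note: for readers unfamiliar with the models mentioned in the question, below is a minimal simulation sketch of the FitzHugh-Nagumo equations, which reduce the four Hodgkin-Huxley variables to a fast voltage-like variable v and a slow recovery variable w. This illustration was added by the scribe and was not part of the discussion; the parameter values are conventional textbook choices, not values from the talk.]

```python
# FitzHugh-Nagumo model, a two-variable reduction of Hodgkin-Huxley:
#   dv/dt = v - v^3/3 - w + I_ext
#   dw/dt = eps * (v + a - b*w)
# Integrated with forward Euler. With constant input current I_ext = 0.5
# the system settles into sustained spiking (a limit cycle).
import numpy as np

def fitzhugh_nagumo(I_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, T=200.0):
    n = int(T / dt)
    v = np.empty(n)  # fast, voltage-like variable
    w = np.empty(n)  # slow recovery variable
    v[0], w[0] = -1.0, 1.0
    for t in range(n - 1):
        v[t + 1] = v[t] + dt * (v[t] - v[t] ** 3 / 3.0 - w[t] + I_ext)
        w[t + 1] = w[t] + dt * eps * (v[t] + a - b * w[t])
    return v, w

v, w = fitzhugh_nagumo()
print("v ranges over [%.2f, %.2f]" % (v.min(), v.max()))  # spiking oscillations
```

On a metric graph, as described in the question, one would additionally couple such dynamics with a diffusion term along each edge; that spatial coupling is beyond this sketch.]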

Q from the audience: I am curious how much predictive power your models have at this point. Can you make unexpected predictions using these models that are then verified by neuroscientists?

A: It depends on the level. It's a hard subject. For example, there are many good models of the retina at the moment, and while they don't capture everything happening in the natural world, they do decently well. A challenge is that to study an organism in a reproducible manner, you often make the experimental setting really simple, and consequently your models strip away some of the system's natural variation. So you can often predict pretty well what's happening within that sandbox, but the predictions may not generalize well outside of it. Still, the visual system is a good place to look for predictive theories: you can often get a lot of features, like basic neuron responses, coming out of these models. Reinforcement learning has also had some nice successes in predicting what neural responses should encode.

Q: Recently I read an article in Nature Machine Intelligence saying that the NeurIPS conference, initially a relatively small gathering, has now grown to something like 8000 participants each time, and many high-tech companies send their employees to it. The article pointed out that over the years the conference's emphasis has shifted toward machine learning and away from neuroscientific talks and presentations. Deep learning itself was originally motivated by a neuroscientific viewpoint, but now it seems rather separated from neuroscience. So, what do you think about interactions between deep learning and hardcore neuroscience? Interestingly, some neuroscientists (e.g., Margaret Livingstone, Harvard) have started to use deep nets on their experimental data to predict certain things.

A: They are pretty different communities and sets of ideas. First of all, I think it is important to keep the basic science strong. I also think it's important not to link the two fields superficially, for example by saying "these neuroscientific facts should be the source of deep learning" or "deep learning should do what the brain does." Brains operate in different regimes from machine learning systems, and we care about different things. I feel that studying hierarchical neuronal systems will become important again. A lot of the people I have worked with have moved to AI and machine learning, and the rest of the people in theoretical neuroscience have to figure out what the field means and whether it should be something between machine learning and neuroscience. But a lot of this has happened during the last ten years, and we have not yet figured out the issues precisely. Detailed neuroscientific facts may or may not lead to interesting machine learning algorithms.

Reaction from the audience: I think a deeper question is whether machine learning turns out to be the right model for the brain, and what neuroscience can tell us about deep learning. For example, everyone knows the brain is organized quite hierarchically; e.g., the visual system has layer after layer connected in both feedforward and feedback directions. But it does not look like most deep nets. On the other hand, deep nets make abundantly clear how powerful hierarchical networks are and how they can do seemingly magical things, e.g., image classification, avoiding the shortcomings of the shallow networks of earlier days, which got stuck in local minima and at saddle points. I see more people in neuroscience now trying to think about the relationship of their work to deep nets. In addition, I think deep reinforcement learning might explain why having those deeper representations is critical for good classification performance.

Q: Do deep nets have any feedback mechanism from deeper layers to shallower ones, as real nervous systems do?

A from the audience: The main algorithm you may have heard about is the backpropagation algorithm, i.e., error signals need to flow backward through the network. There is a paper arguing that this is less biologically implausible than it might seem: one only needs to pass the error signal from one layer back to the previous one, and one can build backward-flowing connections so that it looks like a biological version of the gradient descent algorithm. Most neuroscientists would say that the brain's backward connections are more related to things like attention; in effect, you are using your higher systems to tell your lower systems what to do. I would say deep nets really do not have such a mechanism yet.
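[Scribe's note: to make the backward-connections point concrete, below is a minimal sketch of the "feedback alignment" idea, in which the backward pass uses a fixed random matrix instead of the transpose of the forward weights, so error signals can flow backward without the weight symmetry that exact backpropagation requires. The network, toy data, and hyperparameters here are the scribe's invention for illustration, not anything specified in the discussion.]

```python
# Feedback alignment on a toy regression task: the backward pass uses a
# fixed random matrix B in place of W2.T, so no weight symmetry is needed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))             # toy inputs
Y = np.tanh(X @ rng.standard_normal((10, 1)))  # toy targets

W1 = 0.1 * rng.standard_normal((10, 20))       # input -> hidden weights
W2 = 0.1 * rng.standard_normal((20, 1))        # hidden -> output weights
B = rng.standard_normal((1, 20))               # fixed random feedback weights

lr = 0.05
for step in range(500):
    H = np.tanh(X @ W1)                        # forward pass
    err = H @ W2 - Y                           # output error signal
    dH = (err @ B) * (1.0 - H ** 2)            # backward pass via B, not W2.T
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ dH / len(X)

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```

Empirically, the forward weights tend to align with the fixed feedback weights during training, which is why the random backward pathway still provides a useful error signal.]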

The big idea behind these feedback connections is predictive coding: the top layer sends down "this is what I know; this is my current prediction of what the world is," and the bottom layer should only pass along things that violate that prediction. This is a powerful idea that could be adopted by both the machine learning community and the neurophysiology community.
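[Scribe's note: below is a minimal sketch of the predictive-coding loop just described, under simple linear assumptions invented for illustration. A top layer holds a latent estimate z, sends down its prediction of the input, and the bottom layer passes up only the residual error, which in turn refines z.]

```python
# Two-layer predictive coding: the top layer predicts the input via a fixed
# generative matrix G; only the prediction error is passed upward and used
# to refine the top layer's latent estimate z.
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 3))        # generative weights: latent -> input
z_true = np.array([1.0, -0.5, 2.0])    # latent cause of the observation
x = G @ z_true + 0.05 * rng.standard_normal(8)  # noisy observed input

z = np.zeros(3)                        # top layer's current belief
lr = 0.05
for step in range(300):
    prediction = G @ z                 # top-down: "my current prediction of the world"
    error = x - prediction             # bottom-up: only violations are passed along
    z += lr * G.T @ error              # belief update driven by prediction error

print("recovered latent:", np.round(z, 2), "true latent:", z_true)
```

The update is just gradient descent on the squared prediction error, so the belief z converges to the latent values that best explain the input.]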


[Scribe: Ji Chen (GGAM)]
