Publications & Preprints (* indicates equal contribution)
- Kim, T.D., Luo, T.Z., Can, T., Krishnamurthy, K., Pillow, J.W., Brody, C.D. (2023). Flow-field inference from neural data using deep recurrent networks. bioRxiv.
Abstract: Computations involved in processes such as decision-making, working memory, and motor control are thought to emerge from the dynamics governing the collective activity of neurons in large populations. But the estimation of these dynamics remains a significant challenge. Here we introduce Flow-field Inference from Neural Data using deep Recurrent networks (FINDR), an unsupervised deep learning method that can infer low-dimensional nonlinear stochastic dynamics underlying neural population activity. Using population spike train data from frontal brain regions of rats performing an auditory decision-making task, we demonstrate that FINDR outperforms existing methods in capturing the heterogeneous responses of individual neurons. We further show that FINDR can discover interpretable low-dimensional dynamics when it is trained to disentangle task-relevant and irrelevant components of the neural population activity. Importantly, the low-dimensional nature of the learned dynamics allows for explicit visualization of flow fields and attractor structures. We suggest FINDR as a powerful method for revealing the low-dimensional task-relevant dynamics of neural populations and their associated computations.
- Luo, T.Z.*, Kim, T.D.*, Gupta, D., Bondy, A.G., Kopec, C.D., Elliot, V.A., DePasquale, B., Brody, C.D. (2023). Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv.
Abstract: Perceptual decision-making is the process by which an animal uses sensory stimuli to choose an action or mental proposition. This process is thought to be mediated by neurons organized as attractor networks. However, whether attractor dynamics underlie decision behavior and the complex neuronal responses remains unclear. Here we use an unsupervised, deep learning-based method to discover decision-related dynamics from the simultaneous activity of neurons in frontal cortex and striatum of rats while they accumulate pulsatile auditory evidence. We show that contrary to prevailing hypotheses, attractors play a role only after a transition from a regime in the dynamics that is strongly driven by inputs to one dominated by the intrinsic dynamics. The initial regime mediates evidence accumulation, and the subsequent intrinsic-dominant regime subserves decision commitment. This regime transition is coupled to a rapid reorganization in the representation of the decision process in the neural population (a change in the "neural mode" along which the process develops). A simplified model approximating the coupled transition in the dynamics and neural mode allows inferring, from each trial's neural activity, the internal decision commitment time in that trial, and captures diverse and complex single-neuron temporal profiles, such as ramping and stepping. It also captures trial-averaged curved trajectories, and reveals distinctions between brain regions. Our results show that the formation of a perceptual choice involves a rapid, coordinated transition in both the dynamical regime and the neural mode of the decision process, and suggest pairing deep learning and parsimonious models as a promising approach for understanding complex data.
- Kim, T.D., Can, T.*, Krishnamurthy, K.* (2023). Trainability, Expressivity and Interpretability in Gated Neural ODEs. Proceedings of the 40th International Conference on Machine Learning (ICML).
Abstract: Understanding how the dynamics in biological and artificial neural networks implement the computations required for a task is a salient open question in machine learning and neuroscience. In particular, computations requiring complex memory storage and retrieval pose a significant challenge for these networks to implement or learn. Recently, a family of models described by neural ordinary differential equations (nODEs) has emerged as powerful dynamical neural network models capable of capturing complex dynamics. Here, we extend nODEs by endowing them with adaptive timescales using gating interactions. We refer to these as gated neural ODEs (gnODEs). Using a task that requires memory of continuous quantities, we demonstrate the inductive bias of the gnODEs to learn (approximate) continuous attractors. We further show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability, even allowing explicit visualization of the structure of learned attractors. We introduce a novel measure of expressivity which probes the capacity of a neural network to generate complex trajectories. Using this measure, we explore how the phase-space dimension of the nODEs and the complexity of the function modeling the flow field contribute to expressivity. We see that a more complex function for modeling the flow field allows a lower-dimensional nODE to capture a given target dynamics. Finally, we demonstrate the benefit of gating in nODEs on several real-world tasks.
- Kim, T.D., Luo, T.Z., Pillow, J.W., Brody, C.D. (2021). Inferring latent dynamics underlying neural population activity via neural differential equations. Proceedings of the 38th International Conference on Machine Learning (ICML). (long talk)
Abstract: An important problem in systems neuroscience is to identify the latent dynamics underlying neural population activity. Here we address this problem by introducing a low-dimensional nonlinear model for latent neural population dynamics using neural ordinary differential equations (neural ODEs), with noisy sensory inputs and Poisson spike train outputs. We refer to this as the Poisson Latent Neural Differential Equations (PLNDE) model. We apply the PLNDE framework to a variety of synthetic datasets, and show that it accurately infers the phase portraits and fixed points of nonlinear systems augmented to produce spike train data, including the FitzHugh-Nagumo oscillator, a 3-dimensional nonlinear spiral, and a nonlinear sensory decision-making model with attractor dynamics. Our model significantly outperforms existing methods at inferring single-trial neural firing rates and the corresponding latent trajectories that generated them, especially in the regime where the spike counts and number of trials are low. We then apply our model to multi-region neural population recordings from medial frontal cortex of rats performing an auditory decision-making task. Our model provides a general, interpretable framework for investigating the neural mechanisms of decision-making and other cognitive computations through the lens of dynamical systems.
- Kim, T.D., Kabir, M., Gold, J.I. (2017). Coupled decision processes update and maintain saccadic prior in a dynamic environment. Journal of Neuroscience.
Abstract: Much of what we know about how the brain forms decisions comes from studies of saccadic eye movements. However, saccadic decisions are often studied in isolation, which limits the insights that they can provide about real-world decisions with complex interdependencies. Here, we used a serial reaction time (RT) task to show that prior expectations affect RTs via interdependent, normative decision processes that operate within and across saccades. We found that human subjects performing the task generated saccades that were governed by a rise-to-threshold decision process with a starting point that reflected expected state-dependent transition probabilities. These probabilities depended on decisions about the current state (the correct target) that, under some conditions, required the accumulation of information across saccades. Without additional feedback, this information was provided by each saccadic decision threshold, which represented the total evidence in favor of the chosen target. Therefore, the output of the within-saccade process was used, not only to generate the saccade, but also to provide input to the across-saccade process. This across-saccade process, in turn, helped to set the starting point of the next within-saccade process. These results imply a novel role for functional information-processing loops in optimizing saccade generation in dynamic environments.
Recent Conference Abstracts
- Kim, T.D., Luo, T.Z., Can, T., Krishnamurthy, K., Pillow, J.W., Brody, C.D. (2024, poster). Flow-field inference from neural data using deep recurrent networks. Computational and Systems Neuroscience (Cosyne).
- Luo, T.Z.*, Kim, T.D.*, Gupta, D., Bondy, A.G., Kopec, C.D., Elliot, V.A., DePasquale, B., Brody, C.D. (2024, talk). Transitions in dynamical regime and neural mode underlie perceptual decision-making. Computational and Systems Neuroscience (Cosyne).
- Kim, T.D., Luo, T.Z., Can, T., Krishnamurthy, K., Pillow, J.W., Brody, C.D. (2023, talk). Flow-field inference from neural data using deep recurrent networks. Bernstein Conference.
- Luo, T.Z., Kim, T.D., DePasquale, B., Brody, C.D. (2023, poster). Distinct mechanisms for evidence accumulation and choice memory explain diverse neuronal dynamics. Computational and Systems Neuroscience (Cosyne).
- Kim, T.D., Can, T.*, Krishnamurthy, K.* (2022, poster). Learning and Shaping Manifold Attractors for Computation in Gated Neural ODEs. Symmetry and Geometry in Neural Representations Workshop at Neural Information Processing Systems (NeurIPS).
- Luo, T.Z., Kim, T.D., DePasquale, B., Brody, C.D. (2022, poster). Inference of the time-varying relationship between spike trains and a latent decision variable. Computational and Systems Neuroscience (Cosyne).
- Kim, T.D., Luo, T.Z., Pillow, J.W., Brody, C.D. (2021, poster). Inferring latent dynamics underlying neural population activity via neural differential equations. Computational and Systems Neuroscience (Cosyne).