Three papers to read at UW's library
Mar. 2nd, 2006 11:49 am

Nature 412, 787-792 (23 August 2001) | doi: 10.1038/35090500
Efficiency and ambiguity in an adaptive neural code
Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek and Robert R. de Ruyter van Steveninck
Abstract
We examine the dynamics of a neural code in the context of stimuli whose statistical properties are themselves evolving dynamically. Adaptation to these statistics occurs over a wide range of timescales—from tens of milliseconds to minutes. Rapid components of adaptation serve to optimize the information that action potentials carry about rapid stimulus variations within the local statistical ensemble, while changes in the rate and statistics of action-potential firing encode information about the ensemble itself, thus resolving potential ambiguities. The speed with which information is optimized and ambiguities are resolved approaches the physical limit imposed by statistical sampling and noise.
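The kind of stimulus ensemble the paper studies can be sketched in a few lines: white noise whose standard deviation switches between two values, plus a toy adaptive code that tracks the local variance and rescales its input. This is only an illustration of the idea of fast gain rescaling, not the paper's actual experiment or model; the segment length, sigmas, and time constant are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise stimulus whose ensemble (standard deviation) switches every
# 2000 samples between sigma = 1 and sigma = 4, in the spirit of
# variance-switching experiments.  All numbers here are illustrative.
seg = 2000
sigmas = [1.0, 4.0, 1.0, 4.0]
stim = np.concatenate([s * rng.standard_normal(seg) for s in sigmas])

# Toy adaptive code: estimate the local variance with an exponential moving
# average and divide by the running standard deviation.  This mimics the
# fast gain rescaling described in the abstract, not the fly's actual code.
tau = 50.0                      # adaptation time constant, in samples
alpha = 1.0 / tau
var_est = 1.0
normalized = np.empty_like(stim)
for i, x in enumerate(stim):
    var_est = (1 - alpha) * var_est + alpha * x * x
    normalized[i] = x / np.sqrt(var_est)

# After the adaptation transient, the rescaled signal has roughly unit
# variance in both low- and high-variance segments, while the raw stimulus
# obviously does not.
for k in range(4):
    chunk = normalized[k * seg + 500:(k + 1) * seg]   # skip the transient
    print(round(float(chunk.std()), 2))
```

Because the rescaled input-output relation no longer depends on which variance segment you are in, the variance itself must be carried by some other signal, which is exactly the ambiguity the paper shows is resolved by the firing rate and spike statistics.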
Neural Computation. 2003;15:1789-1807
The MIT Press
What Causes a Neuron to Spike?
Blaise Agüera y Arcas
blaisea@princeton.edu, Rare Books Library, Princeton University, Princeton, NJ 08544, U.S.A.
Adrienne L. Fairhall
fairhall@princeton.edu, Department of Molecular Biology, Princeton University, Princeton, NJ 08544, U.S.A.
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimulus and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to "ask" neurons which dimensions in stimulus space they are sensitive to and to characterize the nonlinearity of the response. In this article, we apply reverse correlation to the simplest model neuron with temporal dynamics—the leaky integrate-and-fire model—and find that even for this simple case, standard techniques do not recover the known neural computation. To overcome this, we develop novel reverse-correlation techniques by selectively analyzing only "isolated" spikes and taking explicit account of the extended silences that precede these isolated spikes. We discuss the implications of our methods for the characterization of neural adaptation. Although these methods are developed in the context of the leaky integrate-and-fire model, our findings are relevant for the analysis of spike trains from real neurons.
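The basic machinery the abstract refers to, simulating a leaky integrate-and-fire (LIF) neuron under white noise and computing spike-triggered averages, fits in a short sketch. This is a minimal illustration of the reverse-correlation setup, including the paper's trick of conditioning on isolated spikes (spikes preceded by a long silence); all parameters are illustrative choices, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Leaky integrate-and-fire neuron driven by white-noise current.
# Parameters (time constant, threshold, input statistics) are illustrative.
dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
v_th, v_reset = 1.0, 0.0     # spike threshold and reset (arbitrary units)
n = 200_000
stim = 0.8 + 2.0 * rng.standard_normal(n)   # white noise with a DC offset

v, spike_times = 0.0, []
for t in range(n):
    v += (dt / tau) * (-v + stim[t])
    if v >= v_th:
        spike_times.append(t)
        v = v_reset

# Reverse correlation: the spike-triggered average (STA) over all spikes,
# and over "isolated" spikes only (no earlier spike within `silence` steps),
# echoing the paper's strategy of conditioning on a preceding silence.
win, silence = 50, 100
def sta(times):
    wins = [stim[t - win:t] for t in times if t >= win]
    return np.mean(wins, axis=0)

all_sta = sta(spike_times)
isolated = [t for prev, t in zip([-silence] + spike_times, spike_times)
            if t - prev >= silence]
iso_sta = sta(isolated)

# Both STAs rise toward the spike time (the stimulus must push v over
# threshold), but the isolated-spike STA is uncontaminated by the reset
# dynamics of preceding spikes.
print(len(spike_times), len(isolated))
```

The point of the paper is that for non-isolated spikes the recovered "feature" mixes the true stimulus filter with interspike interaction effects, which is why restricting to isolated spikes matters.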
Neural Computation. 2003;15:1715-1749
© 2003 The MIT Press
Computation in a Single Neuron: Hodgkin and Huxley Revisited
Blaise Agüera y Arcas
blaisea@princeton.edu, Rare Books Library, Princeton University, Princeton, NJ 08544, U.S.A.
Adrienne L. Fairhall
fairhall@princeton.edu, NEC Research Institute, Princeton, NJ 08540, and Department of Molecular Biology, Princeton, NJ 08544, U.S.A.
William Bialek
wbialek@princeton.edu, NEC Research Institute, Princeton, NJ 08540, and Department of Physics, Princeton, NJ 08544, U.S.A.
A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin–Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering "feature space" as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two-dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as "integrate and fire," the HH model is not an integrator nor is it well described by a single threshold.
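The dimensionality-reduction step the abstract describes rests on spike-triggered covariance (STC): directions in stimulus space along which the spike-conditional variance differs from the prior show up as outlying eigenvalues. The sketch below demonstrates the technique on a toy cell whose spiking depends on exactly two known temporal features, so the recovered subspace can be checked directly; the paper's actual analysis applies this machinery to the Hodgkin–Huxley model, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model neuron: it spikes whenever a quadratic form over its projections
# onto two fixed temporal features exceeds a threshold.  The features and
# threshold are arbitrary choices made so STC has a known right answer.
win, n = 40, 100_000
stim = rng.standard_normal(n)

t_axis = np.arange(win)
f1 = np.exp(-t_axis / 8.0)
f1 /= np.linalg.norm(f1)                      # feature 1: exponential filter
f2 = t_axis * np.exp(-t_axis / 8.0)
f2 -= f1 * (f1 @ f2)                          # feature 2: delayed, made
f2 /= np.linalg.norm(f2)                      # orthogonal to feature 1

spikes = []
for t in range(win, n):
    s = stim[t - win:t][::-1]                 # most recent sample first
    if (s @ f1) ** 2 + (s @ f2) ** 2 > 6.0:
        spikes.append(t)

# Spike-triggered covariance: change in the covariance of spike-triggered
# stimulus windows relative to the white-noise prior (the identity).
windows = np.array([stim[t - win:t][::-1] for t in spikes])
stc = np.cov(windows.T) - np.eye(win)

# Two eigenvalues stand out; the remaining ones are near zero
# (just sampling noise), revealing a two-dimensional relevant subspace.
eigvals = np.sort(np.abs(np.linalg.eigvalsh(stc)))[::-1]
print(np.round(eigvals[:4], 2))
```

For this symmetric quadratic model the spike-triggered average vanishes, so the two relevant dimensions are visible only through the covariance, which is precisely why STC-style generalizations of reverse correlation are needed for models like HH.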
no subject
Date: 2006-03-02 08:34 pm (UTC)
(Neural Computation is in Engr; Nature is, well, everywhere. We have it, but so does everyone else.)
I had hoped that you'd be in my library tomorrow, but oh well. :)
no subject
Date: 2006-03-02 09:33 pm (UTC)
"Where is book x?"
"Here's how to use the catalog. (teaches)"
"Yeah, but where *is* it? I'm too lazy to use the catalog."
"Oh...fine. (call number) (directions)"