Noise tolerance of attractor and feedforward memory models

Sukbin Lim, Mark S Goldman

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of Gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
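
To make the linear-network comparison concrete, the following is a minimal sketch (not the authors' code) that computes Fisher information about a briefly presented stimulus amplitude in two toy discrete-time models with additive Gaussian noise: a feedforward delay chain read out from the unit currently carrying the signal, and a single recurrent unit with feedback gain lam standing in for an attractor network. The time step, noise level, gains, and single-unit readout are illustrative assumptions, not parameters from the paper.

# Minimal sketch under illustrative assumptions (not the authors' model):
# Fisher information about a stimulus amplitude s stored either in a
# feedforward delay chain or in a single recurrent ("attractor-like") unit,
# with independent Gaussian noise of standard deviation sigma added per step.
#
# Feedforward chain:  x_i(t) = w * x_{i-1}(t-1) + noise,  x_0(0) = s + noise
#   -> read out the unit currently holding the signal; noise picked up along
#      the path is amplified but eventually passed out of the network.
# Recurrent unit:     x(t)   = lam * x(t-1) + noise,      x(0)   = s + noise
#   -> noise injected at every step is recirculated and accumulates.
# For Gaussian noise, Fisher information = (d mean / d s)^2 / variance.

sigma = 0.1  # noise standard deviation per step (assumed value)


def feedforward_fisher(t, w=1.2):
    """FI after t steps, reading the chain unit that carries the signal."""
    gain = w ** t                                         # d mean / d s
    var = sigma ** 2 * sum(w ** (2 * k) for k in range(t + 1))
    return gain ** 2 / var


def attractor_fisher(t, lam=1.0):
    """FI after t steps for a leaky/perfect integrator with feedback lam."""
    gain = lam ** t
    var = sigma ** 2 * sum(lam ** (2 * k) for k in range(t + 1))
    return gain ** 2 / var


for t in (1, 5, 10, 20, 40):
    print(f"t={t:3d}   feedforward (w=1.2): {feedforward_fisher(t):8.2f}   "
          f"perfect integrator (lam=1): {attractor_fisher(t):8.2f}")

In this toy comparison the amplifying chain's Fisher information levels off near (w^2 - 1)/(sigma^2 w^2), whereas the perfect integrator's decays roughly as 1/t, qualitatively echoing the abstract's claim that, for purely linear networks, feedforward architectures can amplify signals faster than noise accumulates while attractor networks cannot.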

Original language: English (US)
Pages (from-to): 332-390
Number of pages: 59
Journal: Neural Computation
Volume: 24
Issue number: 2
DOI: 10.1162/NECO_a_00234
State: Published - Feb 2012

Fingerprint

  • Noise
  • Neurons
  • Short-Term Memory
  • Tolerance
  • Memory Model
  • Attractor

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Arts and Humanities (miscellaneous)

Cite this

Noise tolerance of attractor and feedforward memory models. / Lim, Sukbin; Goldman, Mark S.

In: Neural Computation, Vol. 24, No. 2, 02.2012, p. 332-390.
