Approximating distributions in stochastic learning

Todd K. Leen, Robert Friel, David Nielsen

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte Carlo simulations in regimes where the FPE breaks down.
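For orientation, the evolution equations named in the abstract take the following standard textbook forms; the notation here is generic and is an assumption, not taken from the paper. For a state w with single-step transition density T(w | w'), the discrete-time Chapman-Kolmogorov equation propagates the density P:

    P(w, t+1) = \int T(w \mid w')\, P(w', t)\, dw'

Its continuous-time analogue is the master equation,

    \partial_t P(w, t) = \int \big[ W(w \mid w')\, P(w', t) - W(w' \mid w)\, P(w, t) \big]\, dw'

and truncating the Kramers-Moyal expansion of the master equation at second order yields the nonlinear Fokker-Planck equation,

    \partial_t P(w, t) = -\partial_w \big[ A(w)\, P(w, t) \big] + \tfrac{1}{2}\, \partial_w^2 \big[ B(w)\, P(w, t) \big],

whose adequacy for the jump processes generated by discrete learning updates is exactly what the paper questions.

The Monte Carlo simulations mentioned in the abstract amount to running the stochastic learning rule many times and histogramming the resulting weights. The sketch below is a minimal illustration of that procedure in Python; the 1-D LMS-style rule and all parameters are illustrative assumptions, not the paper's actual machine-learning example.

    import numpy as np

    # Monte-Carlo estimate of the equilibrium weight density of a simple
    # online learning rule (a Markov process in w). The LMS-style update
    # and all parameters below are illustrative, not taken from the paper.
    rng = np.random.default_rng(0)

    eps = 0.1            # learning rate: sets the jump size; FPE-style
                         # approximations typically degrade as eps grows
    n_walkers = 50_000   # independent realizations of the process
    n_steps = 1_000      # steps taken to reach approximate equilibrium

    w = np.zeros(n_walkers)
    for _ in range(n_steps):
        x = rng.normal(size=n_walkers)                        # random input
        y = 1.0 * x + rng.normal(scale=0.5, size=n_walkers)   # noisy target, true weight 1.0
        w += eps * (y - w * x) * x                            # stochastic (LMS) update

    # Empirical equilibrium density: this is the kind of histogram against
    # which a perturbative approximation of P(w) would be compared.
    density, edges = np.histogram(w, bins=100, density=True)
    print("empirical mean:", w.mean(), "empirical std:", w.std())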

Original language: English (US)
Pages (from-to): 219-228
Number of pages: 10
Journal: Neural Networks
Volume: 32
DOIs: https://doi.org/10.1016/j.neunet.2012.02.006
State: Published - Aug 2012

Keywords

  • Fokker-Planck equation
  • Master equation
  • Online learning
  • Perturbation theory
  • State-space distributions
  • STDP

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cognitive Neuroscience

Cite this

Leen, T. K., Friel, R., & Nielsen, D. (2012). Approximating distributions in stochastic learning. Neural Networks, 32, 219-228. https://doi.org/10.1016/j.neunet.2012.02.006