### Abstract

On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically-observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte-Carlo simulations in regimes where the FPE breaks down.
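As context for the abstract's setting: an online learning rule is a Markov process in the weights, and the equilibrium density it converges to is what the paper's perturbation expansion approximates analytically. The minimal sketch below is not the paper's method; it only illustrates the Monte-Carlo baseline the abstract compares against, using a hypothetical one-dimensional linear learning rule chosen so the equilibrium mean and variance are known in closed form.

```python
import random

random.seed(0)

def online_update(w, eta=0.1):
    # Hypothetical one-dimensional online rule (illustrative only):
    # a jump-process step w -> w + eta * (x - w) toward a noisy sample x.
    x = random.gauss(0.0, 1.0)
    return w + eta * (x - w)

def equilibrium_samples(n_chains=2000, n_steps=500, eta=0.1):
    # Run many independent Markov chains and keep each final weight;
    # their histogram approximates the equilibrium density whose exact
    # evolution is governed by a Chapman-Kolmogorov equation.
    samples = []
    for _ in range(n_chains):
        w = 0.0
        for _ in range(n_steps):
            w = online_update(w, eta)
        samples.append(w)
    return samples

samples = equilibrium_samples()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For this linear rule the equilibrium mean is 0 and the equilibrium
# variance is eta / (2 - eta), about 0.053 for eta = 0.1.
```

For nonlinear rules (such as the STDP examples in the paper) no such closed form exists, which is why a controlled perturbation expansion of the density is useful.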

| Original language | English (US) |
|---|---|
| Pages (from-to) | 219-228 |
| Number of pages | 10 |
| Journal | Neural Networks |
| Volume | 32 |
| DOIs | https://doi.org/10.1016/j.neunet.2012.02.006 |
| State | Published - Aug 2012 |


### Keywords

- Fokker-Planck equation
- Master equation
- Online learning
- Perturbation theory
- State-space distributions
- STDP

### ASJC Scopus subject areas

- Artificial Intelligence
- Cognitive Neuroscience

### Cite this

Leen, T. K., Friel, R., & Nielsen, D. (2012). **Approximating distributions in stochastic learning.** *Neural Networks*, *32*, 219-228. https://doi.org/10.1016/j.neunet.2012.02.006

Research output: Contribution to journal › Article

TY - JOUR

T1 - Approximating distributions in stochastic learning

AU - Leen, Todd K.

AU - Friel, Robert

AU - Nielsen, David

PY - 2012/8

Y1 - 2012/8

N2 - On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically-observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte-Carlo simulations in regimes where the FPE breaks down.

AB - On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically-observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte-Carlo simulations in regimes where the FPE breaks down.

KW - Fokker-Planck equation

KW - Master equation

KW - Online learning

KW - Perturbation theory

KW - State-space distributions

KW - STDP

UR - http://www.scopus.com/inward/record.url?scp=84861784567&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84861784567&partnerID=8YFLogxK

U2 - 10.1016/j.neunet.2012.02.006

DO - 10.1016/j.neunet.2012.02.006

M3 - Article

C2 - 22418034

AN - SCOPUS:84861784567

VL - 32

SP - 219

EP - 228

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

ER -