### Abstract

Typical theoretical descriptions of the ensemble dynamics of stochastic learning algorithms rely on a truncated expansion to approximate the time-evolution operator appearing in the master equation. In this paper we give an exact expression for the time-evolution operator for Manhattan learning, a variant of stochastic gradient-descent learning in which the weights are updated in proportion to the sign of the cost function gradient. This closed form for the time evolution captures the full nonlinearity of the problem without approximation, allowing exact study of the ensemble dynamics.
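The Manhattan update rule the abstract describes — each weight stepped by a fixed amount set by the sign of the cost-function gradient — can be sketched as follows. This is a minimal illustration, not the paper's method: the quadratic cost, noise model, and all variable names are assumptions for the example.

```python
import numpy as np

def manhattan_step(w, grad, eta):
    """One Manhattan-learning update: move each weight by a fixed
    step eta opposite the sign of the cost-function gradient."""
    return w - eta * np.sign(grad)

# Illustrative setting (assumed, not from the paper): minimize the
# quadratic cost C(w) = 0.5 * ||w - target||^2 from noisy gradient
# estimates, so each update uses only the sign of a stochastic gradient.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])
w = np.zeros(2)
eta = 0.05
for _ in range(200):
    noisy_w = w + rng.normal(scale=0.1, size=2)  # noisy observation of w
    grad = noisy_w - target                      # stochastic gradient of C
    w = manhattan_step(w, grad, eta)
```

Because the step size is fixed, the weights drift toward the minimum at a constant rate and then fluctuate in a band of width set by eta — the kind of ensemble behavior the paper's exact time-evolution operator characterizes without a truncated expansion.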

Original language | English (US)
---|---
Pages (from-to) | 1262-1265
Number of pages | 4
Journal | Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics
Volume | 56
Issue number | 1 SUPPL. B
State | Published - Jul 1997

### ASJC Scopus subject areas

- Mathematical Physics
- Physics and Astronomy (all)
- Condensed Matter Physics
- Statistical and Nonlinear Physics

### Cite this

Leen, T. K., & Moody, J. E. (1997). Stochastic Manhattan learning: Time-evolution operator for the ensemble dynamics. *Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics*, *56*(1 SUPPL. B), 1262-1265.

Research output: Contribution to journal › Article

RIS export:

```
TY  - JOUR
T1  - Stochastic Manhattan learning
T2  - Time-evolution operator for the ensemble dynamics
AU  - Leen, Todd K.
AU  - Moody, John E.
PY  - 1997/7
Y1  - 1997/7
AB  - Typical theoretical descriptions of the ensemble dynamics of stochastic learning algorithms rely on a truncated expansion to approximate the time-evolution operator appearing in the master equation. In this paper we give an exact expression for the time-evolution operator for Manhattan learning, a variant of stochastic gradient-descent learning in which the weights are updated in proportion to the sign of the cost function gradient. This closed form for the time evolution captures the full nonlinearity of the problem without approximation, allowing exact study of the ensemble dynamics.
UR  - http://www.scopus.com/inward/record.url?scp=0031188949&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=0031188949&partnerID=8YFLogxK
M3  - Article
AN  - SCOPUS:0031188949
VL  - 56
SP  - 1262
EP  - 1265
JO  - Physical Review E
JF  - Physical Review E
SN  - 2470-0045
IS  - 1 SUPPL. B
ER  -
```