Abstract
Typical theoretical descriptions of the ensemble dynamics of stochastic learning algorithms rely on a truncated expansion to approximate the time-evolution operator appearing in the master equation. In this paper we give an exact expression for the time-evolution operator for Manhattan learning, a variant of stochastic gradient-descent learning in which the weights are updated in proportion to the sign of the cost function gradient. This closed form for the time evolution captures the full nonlinearity of the problem without approximation, allowing exact study of the ensemble dynamics.
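To make the update rule referred to in the abstract concrete, here is a minimal sketch contrasting a Manhattan-learning step with an ordinary stochastic-gradient step. The function names and the learning rate `eta` are illustrative assumptions, not notation from the paper; the sketch only shows the sign-based update, not the exact ensemble-dynamics result.

```python
import numpy as np

def manhattan_update(w, grad, eta=0.01):
    """One Manhattan-learning step: each weight moves by a fixed amount eta,
    with direction set by the sign of the cost-function gradient."""
    return w - eta * np.sign(grad)

def sgd_update(w, grad, eta=0.01):
    """Standard stochastic gradient-descent step, for comparison:
    the step size scales with the gradient magnitude."""
    return w - eta * grad
```

Because the Manhattan step depends only on the sign of the gradient, its per-step change is bounded, which is the nonlinearity the exact time-evolution operator in the paper accounts for.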
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1262-1265 |
| Number of pages | 4 |
| Journal | Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics |
| Volume | 56 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1 1997 |
ASJC Scopus subject areas
- Statistical and Nonlinear Physics
- Statistics and Probability
- Condensed Matter Physics