Constraints on generalization by adaptive networks

M. Pavel, Holly Jimison, Rebecca T. Moore

Research output: Contribution to journal › Conference article › peer-review

Abstract

There are three classes of constraints that can be used to optimize generalization: constraints on the network architecture, constraints on the learning algorithm, and constraints imposed by the representation of inputs and outputs. In this paper we focus on the constraints imposed by the architecture. These constraints concern the topology of the networks, limitations on the number of units, and connectivity. In this project we used a computational approach (rather than simulation) to examine the effects of architectural constraints on generalization. Our analysis is restricted to two-layer non-recurrent networks with linear threshold units. Using this approach we characterized the effects of a variety of constraints, including minimizing the number of hidden units.
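As a point of reference for the class of architectures the abstract describes, the following is a minimal illustrative sketch (not taken from the paper) of a two-layer, non-recurrent network of linear threshold units in Python. All names, weights, and the XOR example are hypothetical choices made here for clarity; the architectural constraints the abstract mentions (topology, number of hidden units, connectivity) correspond to the shapes and sparsity of the weight matrices.

```python
# Illustrative sketch only: a two-layer feed-forward network of linear
# threshold units. Weights, sizes, and the XOR task are hypothetical.
import numpy as np

def linear_threshold(x):
    """Linear threshold (step) unit: 1 if the weighted sum is at or above
    the threshold (folded into the bias), else 0."""
    return (x >= 0).astype(int)

def two_layer_network(inputs, w_hidden, b_hidden, w_output, b_output):
    """Feed-forward pass: inputs -> hidden threshold units -> output units.
    Constraining the architecture amounts to restricting the shapes and
    connectivity pattern of w_hidden and w_output."""
    hidden = linear_threshold(inputs @ w_hidden + b_hidden)
    return linear_threshold(hidden @ w_output + b_output)

# Example: a 2-input, 2-hidden-unit, 1-output network computing XOR,
# a standard illustration of the role of hidden units.
w_h = np.array([[1.0, 1.0],
                [1.0, 1.0]])
b_h = np.array([-0.5, -1.5])   # hidden units act as OR and AND detectors
w_o = np.array([[1.0], [-1.0]])
b_o = np.array([-0.5])         # output fires when OR is on and AND is off

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(two_layer_network(X, w_h, b_h, w_o, b_o).ravel())  # -> [0 1 1 0]
```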

Original language: English (US)
Pages (from-to): 41
Number of pages: 1
Journal: Neural Networks
Volume: 1
Issue number: 1 SUPPL
DOIs
State: Published - 1988
Externally published: Yes
Event: International Neural Network Society 1988 First Annual Meeting - Boston, MA, USA
Duration: Sep 6, 1988 – Sep 10, 1988

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
