Abstract
There are three classes of constraints that can be used to optimize generalization: constraints on the network architecture, constraints on the learning algorithm, and constraints imposed by the representation of inputs and outputs. In this paper we focus on constraints imposed by the architecture, which concern the topology of the network, limits on the number of units, and restrictions on connectivity. We used a computational approach (rather than simulation) to examine the effects of architectural constraints on generalization, restricting our analysis to two-layer non-recurrent networks with linear threshold units. Using this approach we characterized the effects of a variety of constraints, including minimizing the number of hidden units.
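To make the network class concrete, here is a minimal sketch (not from the paper) of a two-layer non-recurrent network of linear threshold units: each unit outputs 1 if the weighted sum of its inputs reaches its threshold and 0 otherwise. The helper names, weights, and thresholds below are illustrative assumptions, and the XOR weights are the classic two-hidden-unit construction.

```python
import numpy as np

def linear_threshold_unit(x, w, theta):
    """Output 1 if the weighted input sum reaches the threshold, else 0."""
    return 1 if np.dot(w, x) >= theta else 0

def two_layer_network(x, W_hidden, theta_hidden, w_out, theta_out):
    """Two-layer feedforward network of linear threshold units.

    W_hidden holds one weight vector per hidden unit (rows); theta_hidden
    holds one threshold per hidden unit. The number of rows in W_hidden is
    the hidden-unit count that an architectural constraint such as
    hidden-unit minimization would restrict.
    """
    hidden = np.array([
        linear_threshold_unit(x, w, t)
        for w, t in zip(W_hidden, theta_hidden)
    ])
    return linear_threshold_unit(hidden, w_out, theta_out)

# Example: XOR computed with two hidden units (OR minus AND).
W_hidden = np.array([[1.0, 1.0],   # fires when x1 OR x2 is on
                     [1.0, 1.0]])  # fires when x1 AND x2 are on
theta_hidden = np.array([1.0, 2.0])
w_out = np.array([1.0, -1.0])
theta_out = 1.0

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    y = two_layer_network(np.array(x), W_hidden, theta_hidden, w_out, theta_out)
    print(x, "->", y)  # prints 0, 1, 1, 0
```

XOR is a standard example here because it cannot be computed by a single linear threshold unit, so the hidden-unit count is the kind of architectural quantity whose minimization the abstract refers to.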
Original language | English (US)
---|---
Number of pages | 1
Journal | Neural Networks
Volume | 1
Issue number | 1 SUPPL
State | Published - Jan 1 1988
Event | International Neural Network Society 1988 First Annual Meeting, Boston, MA, USA, Sep 6 1988 – Sep 10 1988
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence