It is observed that the computational model of the olfactory cortex given by J. Ambros-Ingerson et al. (1990) is closely related to multistage vector quantization. Variations of the architecture and learning rules are given. The authors evaluate the performance of the various models on the task of encoding and classifying vowels extracted from spoken letters. The efficacy of neural implementations of multistage and tree-search quantization is demonstrated. For a fixed branching ratio, the tree-search quantizer consistently outperforms the multistage structure, though at considerable resource cost. For networks with equal neural resources, the multistage architecture yields significantly lower mean-squared error (MSE) than the flat and tree-search architectures. Experiments show that pattern rescaling confers a degree of noise immunity.
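The multistage (residual) quantization scheme named above can be sketched as follows. This is a minimal illustration with tiny hand-picked codebooks, not the paper's method: the model in the paper learns its codebooks through neural learning rules, whereas here the codebooks and the test vector are arbitrary assumptions chosen only to show the stage-by-stage residual encoding.

```python
import numpy as np

def quantize(x, codebook):
    """Return the codeword in `codebook` nearest to x (Euclidean distance)."""
    d = np.linalg.norm(codebook - x, axis=1)
    return codebook[np.argmin(d)]

def multistage_vq(x, codebooks):
    """Multistage VQ: each stage quantizes the residual left by the
    previous stages; the reconstruction is the sum of the codewords."""
    recon = np.zeros_like(x)
    for cb in codebooks:
        recon = recon + quantize(x - recon, cb)
    return recon

# Hypothetical two-stage setup: a coarse codebook plus a finer one
# that encodes the residual of the first stage.
stage1 = np.array([[0.0, 0.0], [1.0, 1.0]])
stage2 = np.array([[0.05, 0.0], [0.0, 0.05]])

x = np.array([1.06, 1.03])
x_hat = multistage_vq(x, [stage1, stage2])
err_multistage = np.linalg.norm(x - x_hat)
err_one_stage = np.linalg.norm(x - quantize(x, stage1))
```

The second stage refines the first, so the two-stage error is smaller than the single-stage error, while the storage cost is the sum (not the product) of the stage codebook sizes, which is why the multistage structure is cheap in resources relative to a flat or tree-search quantizer of equal rate.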