Generalizing a neuropsychological model of visual categorization to auditory categorization of vowels

W. Todd Maddox, Michelle R. Molis, Randy L. Diehl

Research output: Contribution to journal › Article

23 Scopus citations

Abstract

Twelve male listeners categorized 54 synthetic vowel stimuli that varied in second and third formant frequency on a Bark scale into the American English vowel categories /ɪ/, /ʊ/, and /ɝ/. A neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC; Ashby & Waldron, 1999), is generalized to the auditory domain and applied separately to the data from each observer. Performance of the SPC is compared with that of the successful Normal A Posteriori Probability model (NAPP; Nearey, 1990; Nearey & Hogan, 1986) of auditory categorization. A version of the SPC that assumed piecewise linear response region partitions provided a better account of the data than the SPC that assumed linear partitions, and was indistinguishable from a version that assumed quadratic response region partitions. A version of the NAPP model that assumed nonlinear response regions was superior to the NAPP model with linear partitions. The best fitting SPC provided a good account of each observer's data but was outperformed by the best fitting NAPP model. Implications for bridging the gap between the domains of visual and auditory categorization are discussed.
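To illustrate the kind of decision-bound categorization the abstract describes, the following sketch classifies points in a two-dimensional (F2, F3) Bark space by the largest Gaussian likelihood under equal priors, in the spirit of the NAPP model. All numeric values (category means, covariances) are invented for demonstration and are not the stimulus values or fitted parameters from the study; with unequal covariance matrices this rule yields quadratic response-region partitions, matching the model classes compared in the paper.

```python
import numpy as np

# Hypothetical (F2, F3) Bark-scale means and covariances for three vowel
# categories. These numbers are illustrative only, NOT the stimuli or
# fitted parameters reported by Maddox, Molis, & Diehl (2002).
CATEGORIES = {
    "I":  (np.array([13.5, 15.5]), np.array([[0.30, 0.05], [0.05, 0.20]])),
    "U":  (np.array([10.5, 14.8]), np.array([[0.30, 0.05], [0.05, 0.20]])),
    "3r": (np.array([12.0, 12.5]), np.array([[0.25, 0.00], [0.00, 0.25]])),
}

def napp_classify(x):
    """Assign a stimulus to the category with the highest normal a posteriori
    probability (equal priors, so the highest Gaussian log-likelihood).
    Unequal covariances induce quadratic partitions of the formant space."""
    def log_likelihood(mean, cov):
        d = x - mean
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d @ inv @ d + logdet)
    return max(CATEGORIES, key=lambda c: log_likelihood(*CATEGORIES[c]))

# A stimulus near the invented /I/ mean is assigned to "I".
print(napp_classify(np.array([13.4, 15.4])))
```

A piecewise-linear SPC variant, by contrast, would partition the same space with straight-line segments joined at vertices rather than with a smooth quadratic boundary.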

Original language: English (US)
Pages (from-to): 584-597
Number of pages: 14
Journal: Perception and Psychophysics
Volume: 64
Issue number: 4
DOIs
State: Published - Jan 1 2002


ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Sensory Systems
  • Psychology (all)