Dual-channel acoustic detection of nasalization states

Xiaochuan Niu, Jan P.H. Van Santen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Automatic detection of different oral-nasal configurations during speech is useful for understanding normal nasalization and for assessing certain speech disorders. We propose an algorithm to extract a nasalization feature from dual-channel acoustic signals acquired with a simple two-microphone setup. The feature is based on a dual-channel acoustic model and its associated analysis method. We test this feature in both speaker-dependent and speaker-independent tasks, comparing it with the conventional single-channel MFCC feature; the proposed feature performs uniformly better in both tasks.
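The abstract does not detail the authors' dual-channel model. Purely as an illustration of the general two-microphone idea (not the paper's method), a crude nasalization cue can be computed as the per-frame log energy ratio between a nasal-proximate and an oral-proximate channel; all function names and parameters below are hypothetical:

```python
import numpy as np

def frame_energy(x, frame_len=400, hop=160):
    """Short-time energy per frame (frame_len samples, hop samples apart)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.array([np.sum(x[i * hop : i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def nasal_oral_log_ratio(oral, nasal, eps=1e-12):
    """Per-frame log energy ratio of the nasal channel to the oral channel.

    Higher values suggest stronger nasal radiation relative to oral
    output -- a crude stand-in for a nasalization feature, NOT the
    dual-channel model proposed in the paper.
    """
    return np.log((frame_energy(nasal) + eps) / (frame_energy(oral) + eps))

# Synthetic demo: an "oral" segment followed by a "nasalized" one,
# simulated by boosting the nasal channel in the second half.
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
oral = np.sin(2 * np.pi * 150 * t) + 0.01 * rng.standard_normal(fs)
nasal = 0.1 * oral.copy()
nasal[fs // 2:] += 0.8 * np.sin(2 * np.pi * 250 * t[fs // 2:])

ratio = nasal_oral_log_ratio(oral, nasal)
mid = len(ratio) // 2
print(ratio[:mid].mean() < ratio[mid:].mean())  # True: second half reads as more nasal
```

A real detector would feed such frame-level features (or, per the abstract, features from the dual-channel acoustic model) into a classifier of nasalization states rather than thresholding a raw energy ratio.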

Original language: English (US)
Title of host publication: International Speech Communication Association - 8th Annual Conference of the International Speech Communication Association, Interspeech 2007
Pages: 1077-1080
Number of pages: 4
State: Published - Dec 1 2007
Event: 8th Annual Conference of the International Speech Communication Association, Interspeech 2007 - Antwerp, Belgium
Duration: Aug 27 2007 - Aug 31 2007

Publication series

Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2
ISSN (Electronic): 1990-9772

Other

Other: 8th Annual Conference of the International Speech Communication Association, Interspeech 2007
Country: Belgium
City: Antwerp
Period: 8/27/07 - 8/31/07

Keywords

  • Nasal resonance
  • Nasalization
  • Speech pathology
  • Speech production
  • Velopharyngeal function

ASJC Scopus subject areas

  • Computer Science Applications
  • Software
  • Modeling and Simulation
  • Linguistics and Language
  • Communication

