Automated vocal emotion recognition using phoneme class specific features

Géza Kiss, Jan Van Santen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Methods for automated vocal emotion recognition often use acoustic feature vectors that are computed for each frame in an utterance, together with global statistics based on these acoustic feature vectors. However, at least two considerations argue for the use of phoneme class specific features for emotion recognition. First, there are well-known effects of phoneme class on some of these features. Second, it is plausible that emotion influences the speech signal in ways that differ between phoneme classes. A new method based on the concept of phoneme class specific features is proposed, in which different features are selected for regions associated with different phoneme classes and then optimally combined using machine learning algorithms. A small but significant improvement was found when this method was compared with an otherwise identical method in which features were used uniformly across phoneme classes.
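The record does not include the authors' implementation, so the following is only a minimal sketch of the general idea: computing global statistics of frame-level acoustic features separately per phoneme class and feeding the combined vector to a classifier. The phoneme-class inventory, the function names, and the use of a single logistic-regression classifier over the concatenated per-class statistics are assumptions made for illustration; the paper itself selects and combines features per class with machine learning in a way not reproduced here.

```python
# Hypothetical sketch of phoneme-class-specific emotion features.
# Assumes frame-level acoustic features (e.g. F0, energy, MFCCs) and a
# per-frame phoneme-class label obtained from a forced alignment.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative class inventory; not taken from the paper.
PHONEME_CLASSES = ["vowel", "nasal", "fricative", "stop", "silence"]

def class_specific_statistics(frame_features, frame_classes):
    """Per-phoneme-class mean and std of frame-level acoustic features.

    frame_features: (n_frames, n_features) array of acoustic features.
    frame_classes:  length-n_frames sequence of phoneme-class labels.
    Returns one utterance-level vector concatenating the statistics of all classes.
    """
    parts = []
    for cls in PHONEME_CLASSES:
        mask = np.array([c == cls for c in frame_classes])
        if mask.any():
            sub = frame_features[mask]
            parts.append(np.concatenate([sub.mean(axis=0), sub.std(axis=0)]))
        else:
            # No frames of this class in the utterance: pad with zeros.
            parts.append(np.zeros(2 * frame_features.shape[1]))
    return np.concatenate(parts)

def train(utterances, labels):
    """Train a single classifier on the combined per-class statistics.

    utterances: list of (frame_features, frame_classes) pairs.
    labels:     list of emotion labels, one per utterance.
    """
    X = np.stack([class_specific_statistics(f, c) for f, c in utterances])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```

The phoneme-class-uniform baseline mentioned in the abstract would correspond to computing the same statistics over all frames pooled together rather than per class.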

Original language: English (US)
Title of host publication: Proceedings of the 11th Annual Conference of the International Speech Communication Association, INTERSPEECH 2010
Pages: 1161-1164
Number of pages: 4
State: Published - 2010
Event: 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010 - Makuhari, Chiba, Japan
Duration: Sep 26, 2010 - Sep 30, 2010

Keywords

  • Biomedical application
  • Emotion recognition
  • Phoneme class specific features

ASJC Scopus subject areas

  • Language and Linguistics
  • Speech and Hearing

Cite this

Kiss, G., & Van Santen, J. (2010). Automated vocal emotion recognition using phoneme class specific features. In Proceedings of the 11th Annual Conference of the International Speech Communication Association, INTERSPEECH 2010 (pp. 1161-1164).
