Landmark-based speech recognition: Report of the 2004 Johns Hopkins summer workshop

Mark Hasegawa-Johnson, James Baker, Sarah Borys, Ken Chen, Emily Coogan, Steven Greenberg, Amit Juneja, Katrin Kirchhoff, Karen Livescu, Srividya Mohan, Jennifer Muller, Mustafa (Kemal) Sonmez, Tianyu Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

52 Citations (Scopus)

Abstract

Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multi-frame acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
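The two-stage design described in the abstract can be illustrated with a short sketch. The following is a minimal illustration, not code from the paper: it assumes scikit-learn's SVC (with Platt scaling via probability=True) as a stand-in for the workshop's SVM landmark classifiers, and every array shape, score name, and weight is hypothetical.

    # Minimal sketch (assumptions, not the paper's code): an SVM estimates
    # distinctive-feature posteriors from multi-frame acoustic windows, and a
    # log-linear combination rescores word candidates from a first-pass lattice.
    import numpy as np
    from sklearn.svm import SVC

    # Stage 1: acoustic-to-distinctive-feature transformation.
    # Each example is a multi-frame window around a candidate landmark
    # (7 frames x 40 coefficients here is purely illustrative).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 7 * 40))
    y_train = rng.integers(0, 2, size=200)  # e.g. presence of a stop-release landmark

    # probability=True adds Platt scaling, so the classifier returns
    # distinctive-feature probabilities rather than hard +/- decisions.
    svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    posteriors = svm.predict_proba(rng.normal(size=(5, 7 * 40)))[:, 1]

    # Stage 2: log-linear combination of per-word log-probability scores.
    def combined_score(log_scores, weights):
        # A weighted sum of log scores is exactly a log-linear combination.
        return sum(weights[name] * score for name, score in log_scores.items())

    # Hypothetical scores for one word in the lattice; the weights would be
    # tuned on held-out data.
    scores = {"first_pass": -42.3, "language_model": -11.8, "landmark_model": -37.5}
    weights = {"first_pass": 1.0, "language_model": 0.9, "landmark_model": 0.4}
    print(combined_score(scores, weights))

One consequence of the log-linear formulation, reflected in the sketch, is that each knowledge source's score stays separate, so the interpolation weights can be tuned on held-out data without retraining the underlying models.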

Original language: English (US)
Title of host publication: 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '05 - Proceedings - Image and Multidimensional Signal Processing Multimedia Signal Processing
Publisher: Institute of Electrical and Electronics Engineers Inc.
Volume: I
ISBN (Print): 0780388747, 9780780388741
DOIs: https://doi.org/10.1109/ICASSP.2005.1415088
State: Published - 2005
Externally published: Yes
Event: 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '05 - Philadelphia, PA, United States
Duration: Mar 18, 2005 – Mar 23, 2005



ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Acoustics and Ultrasonics

Cite this

Hasegawa-Johnson, M., Baker, J., Borys, S., Chen, K., Coogan, E., Greenberg, S., ... Wang, T. (2005). Landmark-based speech recognition: Report of the 2004 Johns Hopkins summer workshop. In 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '05 - Proceedings - Image and Multidimensional Signal Processing Multimedia Signal Processing (Vol. I). [1415088] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2005.1415088
