Deriving phrase-based language models

Peter A. Heeman, Geraldine Damnati

Research output: Contribution to conference › Paper › peer-review

Abstract

Phrase-based language models have grown in popularity because they allow the speech recognition process to use more context when recognizing words. Previous approaches have used perplexity reduction to identify groups of words to be linked into phrases, and have used these phrases as the basis for computing the language model probabilities. In this paper, we argue that perplexity reduction is only one of three aspects to consider in choosing the phrases. We also argue that the chosen phrases should not be the basis for computing the language model probabilities. Rather, the probabilities should be derived from a language model built at the lexical level.
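To make the perplexity-reduction criterion mentioned above concrete, the following is a minimal sketch (not the paper's actual method) of merging a word pair into a single phrase token and comparing perplexity before and after, using a simple add-one-smoothed bigram model; the toy corpus, the phrase pair, and the helper names are illustrative assumptions.

```python
import math
from collections import Counter

def bigram_logprob(sents):
    """Total log-probability of the sentences under an add-one-smoothed
    bigram model trained on those same sentences (illustration only)."""
    bigrams, contexts = Counter(), Counter()
    for s in sents:
        toks = ["<s>"] + s
        for a, b in zip(toks, toks[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
    vocab = len(set(w for s in sents for w in s)) + 1  # +1 for <s>
    lp = 0.0
    for s in sents:
        toks = ["<s>"] + s
        for a, b in zip(toks, toks[1:]):
            lp += math.log((bigrams[(a, b)] + 1) / (contexts[a] + vocab))
    return lp

def merge_phrase(sents, pair, joiner="_"):
    """Rewrite sentences so the given word pair becomes one phrase token."""
    out = []
    for s in sents:
        merged, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) == pair:
                merged.append(joiner.join(pair))
                i += 2
            else:
                merged.append(s[i])
                i += 1
        out.append(merged)
    return out

corpus = [["i", "live", "in", "new", "york"],
          ["new", "york", "is", "big"],
          ["i", "like", "new", "york"]]

# Normalize by the ORIGINAL word count so perplexities stay comparable
# after merging shortens the token sequences.
n_words = sum(len(s) for s in corpus)
ppl_before = math.exp(-bigram_logprob(corpus) / n_words)

merged = merge_phrase(corpus, ("new", "york"))
ppl_after = math.exp(-bigram_logprob(merged) / n_words)
```

Note that normalizing by the original word count, rather than the shorter merged-token count, is what keeps the comparison fair; how such comparisons interact with phrase choice is part of what the paper examines.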

Original language: English (US)
Pages: 41-48
Number of pages: 8
State: Published - 1997
Event: Proceedings of the 1997 IEEE Workshop on Automatic Speech Recognition and Understanding - Santa Barbara, CA, USA
Duration: Dec 14, 1997 - Dec 17, 1997


ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
