An anticorrelation kernel for subsystem training in multiple classifier systems

Luciana Ferrer, Mustafa (Kemal) Sonmez, Elizabeth Shriberg

Research output: Contribution to journal › Article


Abstract

We present a method for training support vector machine (SVM)-based classification systems for combination with other classification systems designed for the same task. Ideally, a new system should be designed such that, when combined with existing systems, the resulting performance is optimized. We present a simple model for this problem and use the understanding gained from this analysis to propose a method to achieve better combination performance when training SVM systems. We include a regularization term in the SVM objective function that aims to reduce the average class-conditional covariance between the resulting scores and the scores produced by the existing systems, introducing a trade-off between such covariance and the system's individual performance. That is, the new system "takes one for the team", falling somewhat short of its best possible performance in order to increase the diversity of the ensemble. We report results on the NIST 2005 and 2006 speaker recognition evaluations (SREs) for a variety of subsystems. We show a gain of 19% on the equal error rate (EER) of a combination of four systems when applying the proposed method with respect to the performance obtained when the four systems are trained independently of each other.
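The core idea in the abstract — adding a regularization term to the SVM objective that penalizes the average class-conditional covariance between the new system's scores and an existing system's scores — can be sketched as follows. This is an illustrative toy version only, not the paper's actual formulation: it uses a primal linear SVM trained by subgradient descent, penalizes the squared within-class covariance, and all function names and hyperparameters (`train_anticorrelated_svm`, `mu`, etc.) are made up for this sketch.

```python
import numpy as np

def class_conditional_cov(s, t, y):
    """Average over the two classes of Cov(s, t) computed within each class."""
    covs = []
    for c in np.unique(y):
        m = y == c
        covs.append(np.mean((s[m] - s[m].mean()) * (t[m] - t[m].mean())))
    return np.mean(covs)

def train_anticorrelated_svm(X, y, t, lam=1e-2, mu=0.0, lr=0.05, epochs=500):
    """Linear SVM (hinge loss, L2 penalty) with an extra penalty on the
    squared class-conditional covariance between its scores X @ w + b and
    the scores t of an existing system. y must be in {-1, +1}.
    Purely illustrative; the paper's method embeds this trade-off in the kernel.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        s = X @ w + b
        # Subgradient of mean hinge loss + (lam/2)||w||^2.
        active = y * s < 1
        gw = lam * w
        gb = 0.0
        if active.any():
            gw -= (y[active, None] * X[active]).sum(axis=0) / n
            gb -= y[active].sum() / n
        # Gradient of mean_c Cov_c(s, t)^2; the bias b cancels inside each
        # covariance, so only w is affected by this term.
        cov_grad = np.zeros(d)
        for c in (-1.0, 1.0):
            m = y == c
            tc = t[m] - t[m].mean()
            cov = np.mean((s[m] - s[m].mean()) * tc)
            dcov = ((X[m] - X[m].mean(axis=0)) * tc[:, None]).mean(axis=0)
            cov_grad += cov * dcov  # factor 2 (from square) / 2 (class mean) = 1
        gw += mu * cov_grad
        w -= lr * gw
        b -= lr * gb
    return w, b
```

With `mu = 0` this is a plain linear SVM; raising `mu` trades individual accuracy for decorrelation from the existing system's scores, which is the "takes one for the team" behavior the abstract describes.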

Original language: English (US)
Pages (from-to): 2079-2114
Number of pages: 36
Journal: Journal of Machine Learning Research
Volume: 10
Publication status: Published - 2009

Keywords

  • Ensemble diversity
  • Kernel methods
  • Multiple classifier systems
  • Speaker recognition
  • Support vector machines
  • System combination

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
