Multi-task and multi-view learning of user state

Melih Kandemir, Akos Vetek, Mehmet Gönen, Arto Klami, Samuel Kaski

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, shows how multiple learning tasks can be learned together to improve accuracy. We demonstrate two types of multi-task learning: learning multiple state indicators together, and learning models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.
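
To make the two concepts in the abstract concrete, below is a minimal, hypothetical sketch of how they can interact: several sensor "views" enter through per-view kernels whose mixing weights are learned (multi-view learning), while all task targets are fitted jointly against the shared combined kernel (multi-task learning). The synthetic data, the alternating updates, and the kernel ridge solver are illustrative assumptions only, not the novel algorithm proposed in the paper.

    # Illustrative sketch only (NOT the authors' algorithm): learn convex
    # view weights beta for a combined kernel K = sum_m beta_m * K_m, and
    # solve all tasks jointly against that shared kernel.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical sensor views (e.g., physiological vs. eye-tracking
    # features) for N samples, and two related tasks (e.g., two affective
    # state indicators) as target columns.
    N = 100
    views = [rng.normal(size=(N, 5)), rng.normal(size=(N, 3))]
    Y = rng.normal(size=(N, 2))  # columns = tasks

    # One linear kernel per view; start from uniform view weights.
    kernels = [X @ X.T for X in views]
    beta = np.ones(len(kernels)) / len(kernels)

    lam = 1.0  # ridge regularizer
    for _ in range(20):
        # Multi-view step: form the weighted kernel combination.
        K = sum(b * Km for b, Km in zip(beta, kernels))
        # Multi-task step: kernel ridge regression for all task columns
        # at once, so the tasks share the same learned representation.
        A = np.linalg.solve(K + lam * np.eye(N), Y)  # dual coeffs, N x T
        # Re-weight each view by its alignment with the current solution
        # (sum over tasks of a_t' K_m a_t), then renormalize.
        scores = np.array([np.sum(A * (Km @ A)) for Km in kernels])
        beta = np.maximum(scores, 1e-12)
        beta /= beta.sum()

    print("learned view weights:", beta)

The printed weights show how much each view contributes to the shared kernel; an uninformative sensor is automatically down-weighted rather than combined by hand, which is the "justified" sensor fusion the abstract refers to.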

Original language: English (US)
Pages (from-to): 97-106
Number of pages: 10
Journal: Neurocomputing
Volume: 139
DOIs
State: Published - Sep 2 2014
Externally published: Yes

Keywords

  • Affect recognition
  • Machine learning
  • Multi-task learning
  • Multi-view learning

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
