Multi-task and multi-view learning of user state

Melih Kandemir, Akos Vetek, Mehmet Gonen, Arto Klami, Samuel Kaski

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, tells how multiple learning tasks can be learned together to improve the accuracy. We demonstrate the use of two types of multi-task learning: learning both multiple state indicators and models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.
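The paper itself develops kernel-based models; purely to illustrate the two ideas named in the abstract, the following is a minimal numpy sketch under stated assumptions (all data, variable names, and the learning rule are synthetic/hypothetical, not the authors' algorithm). Two feature "views" stand in for two sensors, a single weight matrix is shared across two related classification "tasks" (e.g. two state indicators), and per-view scale factors play the role of automatically learned sensor importance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 60 samples seen through two "sensor views",
# labeled for two related binary "tasks" (state indicators).
n = 60
view_a = rng.normal(size=(n, 3))   # e.g. physiological features (hypothetical)
view_b = rng.normal(size=(n, 2))   # e.g. motion features (hypothetical)
signal = view_a[:, 0] + 0.5 * view_b[:, 0]
y = np.stack([(signal > 0).astype(float),
              (signal + 0.3 * rng.normal(size=n) > 0).astype(float)], axis=1)

X = np.hstack([view_a, view_b])            # concatenated views
view_slices = [slice(0, 3), slice(3, 5)]   # which columns belong to which view

W = np.zeros((X.shape[1], 2))  # weights shared jointly across both tasks
s = np.ones(2)                 # per-view importance, learned from data


def predict(X, W, s):
    """Sigmoid predictions with each view rescaled by its importance."""
    Xs = X.copy()
    for v, sl in enumerate(view_slices):
        Xs[:, sl] *= s[v]
    return 1.0 / (1.0 + np.exp(-Xs @ W))


# Joint gradient descent on logistic loss summed over both tasks:
# the shared W couples the tasks, the scalars s weight the views.
lr = 0.3
for _ in range(500):
    Xs = X.copy()
    for v, sl in enumerate(view_slices):
        Xs[:, sl] *= s[v]
    P = 1.0 / (1.0 + np.exp(-Xs @ W))
    G = P - y                              # (n, tasks) residuals
    W -= lr * Xs.T @ G / n
    for v, sl in enumerate(view_slices):
        s[v] -= lr * np.sum((X[:, sl] @ W[sl, :]) * G) / n

acc = float(((predict(X, W, s) > 0.5) == (y > 0.5)).mean())
```

After training, `s` indicates how much each view contributed, which is the intuition behind "automatically learning the importance of each sensor"; the shared `W` is the crudest form of coupling multiple tasks.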

Original language: English (US)
Pages (from-to): 97-106
Number of pages: 10
Journal: Neurocomputing
Volume: 139
DOI: 10.1016/j.neucom.2014.02.057
State: Published - Sep 2 2014
Externally published: Yes

Fingerprint

  • Learning
  • Sensors
  • Learning systems

Keywords

  • Affect recognition
  • Machine learning
  • Multi-task learning
  • Multi-view learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Cognitive Neuroscience

Cite this

@article{b2531591f6a94f148582bca4717cbc0f,
title = "Multi-task and multi-view learning of user state",
abstract = "Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, tells how multiple learning tasks can be learned together to improve the accuracy. We demonstrate the use of two types of multi-task learning: learning both multiple state indicators and models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.",
keywords = "Affect recognition, Machine learning, Multi-task learning, Multi-view learning",
author = "Melih Kandemir and Akos Vetek and Mehmet Gonen and Arto Klami and Samuel Kaski",
year = "2014",
month = "9",
day = "2",
doi = "10.1016/j.neucom.2014.02.057",
language = "English (US)",
volume = "139",
pages = "97--106",
journal = "Neurocomputing",
issn = "0925-2312",
publisher = "Elsevier",

}
