Representing the reinforcement learning state in a negotiation dialogue

Peter A. Heeman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

21 Scopus citations

Abstract

Most applications of Reinforcement Learning (RL) for dialogue have focused on slot-filling tasks. In this paper, we explore a task that requires negotiation, in which conversants need to exchange information in order to decide on a good solution. We investigate what information should be included in the system's RL state so that an optimal policy can be learned and so that the state space stays reasonable in size. We propose keeping track of the decisions that the system has made, and using them to constrain the system's future behavior in the dialogue. In this way, we can compositionally represent the strategy that the system is employing. We show that this approach is able to learn a good policy for the task. This work is a first step to a more general exploration of applying RL to negotiation dialogues.
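The abstract's core idea can be illustrated with a small sketch. This is not the paper's implementation; the state fields, action names, and constraint rule below are hypothetical, chosen only to show how recording past decisions in the RL state can prune the available actions while keeping the state compact.

```python
# Illustrative sketch (hypothetical names, not the paper's code): a dialogue
# state that records the system's past decisions and uses them to constrain
# which actions remain available in the rest of the dialogue.

from dataclasses import dataclass, field

ALL_ACTIONS = ["ask_preference", "propose_solution", "accept", "reject"]

@dataclass(frozen=True)
class NegotiationState:
    turn: int = 0
    # Past commitments, stored in the state instead of the full dialogue
    # history, so the state space stays reasonable in size.
    decisions: frozenset = field(default_factory=frozenset)

def available_actions(state: NegotiationState) -> list:
    """Past decisions prune the action set, so the learned policy only
    chooses among moves consistent with the strategy adopted so far."""
    actions = list(ALL_ACTIONS)
    if "committed_to_proposal" in state.decisions:
        # Example constraint: stop gathering information once committed.
        actions.remove("ask_preference")
    return actions

def record_decision(state: NegotiationState, decision: str) -> NegotiationState:
    """Return a new state whose decision set includes the new commitment."""
    return NegotiationState(state.turn + 1, state.decisions | {decision})

# Usage: the recorded decision, not a raw history, carries the constraint.
s0 = NegotiationState()
s1 = record_decision(s0, "committed_to_proposal")
```

Because each decision is a separate element of the set, combinations of decisions compose into a strategy without enumerating every strategy as its own state, which is the compositional representation the abstract describes.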

Original language: English (US)
Title of host publication: Proceedings of the 2009 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2009
Pages: 450-455
Number of pages: 6
DOIs
State: Published - 2009
Event: 2009 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2009 - Merano, Italy
Duration: Dec 13 2009 - Dec 17 2009

Publication series

Name: Proceedings of the 2009 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2009

Other

Other: 2009 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2009
Country/Territory: Italy
City: Merano
Period: 12/13/09 - 12/17/09

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Signal Processing

