Reading Users: Using Language Models to Close the Pragmatic Gap in Task-Oriented Conversations

Task-oriented conversational systems are becoming increasingly popular, as shown by the rise of conversational recommendation systems across multiple platforms (e.g., Google Home, Alexa, and Siri) and domains (e.g., local establishments, e-commerce, books, music, and movies). However, users are still largely limited in what preferences they can express and how, as current systems tend to predefine the set of possible user intents and their slots (e.g., looking for establishments by price and number of stars). In this dissertation, we focus on building conversational recommendation systems that can understand users’ preferences without making strong assumptions about their use of language. First, we consider that making recommendations from free-form preferences is a multi-step task: for example, completing the prompt “You don’t like cold weather. Between London and Lisbon, you should visit…” involves steps as diverse as inferring a user preference (“You prefer warmer weather”) and finding the destination that best satisfies the semantics of such a preference (Lisbon, as the temperature is higher than in London). Then, we investigate two ways of utilizing large neural language models (LMs) to perform this task: deploying an LM to address a specific step in which language is particularly challenging (e.g., inferring positive preferences from critiques); or deploying an LM to address the entire recommendation prompt. We conclude with a comprehensive comparison of the two deployment strategies along different dimensions: effectiveness, robustness, generalizability, adherence to current system architectures, deployment risks, and future perspectives.
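The two-step decomposition described above can be illustrated with a toy sketch. This is not the dissertation's implementation: the rule-based functions below stand in for the LM calls, and the city temperatures are hypothetical averages chosen only to match the London/Lisbon example.

```python
# Illustrative sketch of the multi-step recommendation task.
# Hypothetical average temperatures (degrees Celsius), for illustration only.
AVG_TEMP_C = {"London": 11, "Lisbon": 17}

def infer_preference(critique: str) -> str:
    """Step 1: turn a negative critique into a positive preference
    (the role an LM plays in the targeted-deployment strategy)."""
    if "cold" in critique.lower():
        return "prefers warmer weather"
    return "no inferred preference"

def recommend(cities: list, preference: str) -> str:
    """Step 2: pick the candidate that best satisfies the
    semantics of the inferred preference."""
    if "warmer" in preference:
        return max(cities, key=AVG_TEMP_C.get)
    return cities[0]

pref = infer_preference("You don't like cold weather.")
print(recommend(["London", "Lisbon"], pref))  # → Lisbon
```

In the end-to-end deployment strategy, by contrast, a single LM call would complete the whole prompt ("You don't like cold weather. Between London and Lisbon, you should visit…") without an explicit intermediate preference representation.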
