“Alexa, how can I trust you again?” Trust Repair in Human-AI Teams

The advent of advanced computing and AI has led to social technologies becoming agentic teammates in human-autonomy teams. Interpersonal trust, vital for team functioning, is crucial in determining these teams' success or failure. Trust, while essential, can be easily broken and requires maintenance and repair. This dissertation addresses two questions: Which factors drive trust reparation, and how can AI teammates effectively navigate it? An integrative review of the trust literature is presented, providing a framework for understanding trust reparation in human-autonomy teams. Hypotheses are developed and tested across two studies. The first study uses a vignette design with an MTurk sample to fine-tune manipulations of trust violations and reparative responses. The second study uses a Wizard of Oz methodology with a live team of participants and a confederate AI. The findings contribute to understanding the complex interplay between response behavior, violation type, and attributions in AI trust violations. Findings from Study 1 suggest that team members' attributions of stability and controllability to an AI's behavior following a trust violation depend on both the type of response the AI gives and the type of violation committed. Findings from Study 2 are inconclusive due to the small sample size. In summary, this dissertation establishes a foundation for future research on trust violations in human-autonomy teams, providing guiding principles for AI trust-reparation behavior.
