Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis

Paper Model

Abstract

Turn-taking is a fundamental aspect of human communication in which speakers convey their intention to either hold or yield their turn through prosodic cues. Using the recently proposed Voice Activity Projection model, we propose an automatic evaluation approach to measure these aspects of conversational speech synthesis. We investigate the ability of three commercial and two open-source Text-To-Speech (TTS) systems to generate turn-taking cues over simulated turns. By varying the stimuli, or controlling the prosody, we analyze the models' performance. We show that while commercial TTS systems largely provide appropriate cues, they often produce ambiguous signals, and that further improvements are possible. TTS systems trained on read or spontaneous speech produce strong turn-hold but weak turn-yield cues. We argue that this approach, which focuses on functional aspects of interaction, provides a useful addition to other important speech metrics, such as intelligibility and naturalness.
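The core idea of the evaluation can be illustrated with a minimal sketch. This is not the authors' code: it assumes a hypothetical interface in which a VAP-style model has already produced, for each audio frame after a synthesized turn's end, the probability that the listener (the other speaker) takes the next turn. Aggregating those probabilities yields a simple hold/yield classification of the turn-taking cue.

```python
# Hypothetical sketch (not the paper's implementation): classify a
# turn-taking cue from per-frame "shift" probabilities. Each value in
# p_shift is assumed to come from a Voice Activity Projection model and
# gives the probability that the OTHER speaker takes the next turn,
# evaluated over the frames just after the synthesized turn ends.

def turn_cue(p_shift, threshold=0.5):
    """Aggregate frame-wise shift probabilities into a cue label.

    Returns a ("yield" or "hold", mean_score) pair: a mean shift
    probability above `threshold` is read as a turn-yield cue,
    otherwise as a turn-hold cue.
    """
    score = sum(p_shift) / len(p_shift)
    label = "yield" if score > threshold else "hold"
    return label, score

# A completed statement with falling pitch should cue a yield...
print(turn_cue([0.80, 0.75, 0.90, 0.85]))  # high shift probabilities
# ...while a mid-utterance pause with level pitch should cue a hold.
print(turn_cue([0.15, 0.20, 0.10, 0.25]))  # low shift probabilities
```

Scores near the threshold correspond to the ambiguous signals noted in the abstract: the model assigns neither speaker a clear claim to the next turn.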


Authors

Erik Ekstedt, Siyang Wang, Eva Szekely, Joakim Gustafson & Gabriel Skantze

[erikekst, siyangw, szekely, jkgu, skantze]@kth.se

KTH, Royal Institute of Technology, Stockholm, Sweden

Amazon

Amazon Visualization

Microsoft

Microsoft Visualization

Google

Google Visualization

Tacotron2

Tacotron2 Visualization

FastPitch

FastPitch Visualization