Aligning Spoken Dialogue Models from User Interactions
Authors: Anne Wu, Laurent Mazaré, Neil Zeghidour, Alexandre Défossez
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that feedback on generic conversations can be consistently effective in improving spoken dialogue models to produce more factual, safer and more contextually aligned interactions. We deploy the finetuned model and conduct holistic human evaluations to assess the impact beyond single-turn conversations. Our findings shed light on the importance of a well-calibrated balance among various dynamics, crucial for natural real-time speech dialogue systems. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, Cornell University. Work done at Kyutai. 2Kyutai. Correspondence to: Anne Wu <EMAIL>, Alexandre Défossez <EMAIL>. |
| Pseudocode | No | The paper describes methods in prose and mathematical formulations but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions that Moshi (Défossez et al., 2024) is an "open-source autoregressive and multistream audio language model" which they use as their base model. However, there is no explicit statement or link provided for the code specific to the alignment framework developed in *this* paper. |
| Open Datasets | Yes | We evaluate factual correctness and spoken question answering abilities of our model on Llama Questions (Nachmani et al., 2024), and a synthesized audio version of TriviaQA (Joshi et al., 2017) and Web Questions (Berant et al., 2013). Ensuring that the audio model generates safe and non-harmful responses is critical. We evaluate the toxicity of the model using the ALERT (Tedeschi et al., 2024) benchmark and a synthesized audio version of XSTest (Röttger et al., 2024). |
| Dataset Splits | Yes | We include in total 283,740 pairs with overlapping contexts (i.e. multiple pairs built from a same dialogue). We randomly sample 13,953 pairs as validation and use the rest as training. |
| Hardware Specification | No | The paper mentions a "7B-parameter Temporal Transformer and a 600M-parameter Depth Transformer" for the model architecture, but it does not specify any concrete hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions using the "whisper-timestamped package and a pre-trained Whisper medium model" and general machine learning components like "cosine scheduler" and "optimizer" but does not provide specific version numbers for these software dependencies or other key libraries. |
| Experiment Setup | Yes | We use a learning rate of 5 × 10⁻⁹ for the Temporal Transformer and a learning rate of 1 × 10⁻⁶ for the Depth Transformer, with a batch size of 16 for DPO and APO-Zero, and 32 for SimPO. For each data mix we use, we train one pass over the dataset. More details are in Appendix A. |
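The setup row above names DPO among the preference-optimization objectives trained on the pairs described in the Dataset Splits row. As a point of reference for readers, here is a minimal sketch of the standard per-pair DPO loss (Rafailov et al., 2023), not the paper's own implementation; the function name and arguments are illustrative, and inputs are assumed to be summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for a single preference pair (illustrative sketch).

    Computes -log(sigmoid(beta * margin)), where the margin compares how much
    the policy prefers the chosen response over the rejected one, relative to
    the reference model.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(m)) == softplus(-m); the branch avoids overflow in exp()
    # for very negative margins.
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin
```

When the policy and reference agree exactly, the margin is zero and the loss equals log 2; as the policy increasingly prefers the chosen response relative to the reference, the loss decreases toward zero.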