Improving Diversity in Language Models: When Temperature Fails, Change the Loss

Authors: Alexandre Verine, Florian Le Bronnec, Kunhao Zheng, Alexandre Allauzen, Yann Chevaleyre, Benjamin Negrevergne

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we empirically assess the theoretical insights on temperature scaling (Section 4) and training methods (Section 5). Our experiments aim to address the following questions: To what extent do our simplified theoretical settings align with real-world language modeling scenarios? What is the impact of temperature scaling on the Precision-Recall trade-off in language models? How do the proposed training methods affect Recall?
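The temperature scaling discussed in this row can be illustrated with a minimal sketch (not code from the paper): dividing the logits by a temperature T before the softmax sharpens the distribution for T < 1 (favoring Precision) and flattens it for T > 1 (favoring Recall/diversity).

```python
import numpy as np

def temperature_softmax(logits, T=1.0):
    """Softmax over logits scaled by temperature T.
    T < 1 sharpens the distribution (more probability mass on top tokens),
    T > 1 flattens it (more diverse samples)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.0]
p_sharp = temperature_softmax(logits, T=0.5)
p_flat = temperature_softmax(logits, T=2.0)
# The low-temperature distribution concentrates on the top token.
assert p_sharp[0] > p_flat[0]
```

This is the standard logit-rescaling definition of temperature; the paper's point is that adjusting T alone trades Precision against Recall rather than improving both.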
Researcher Affiliation | Collaboration | 1 École Normale Supérieure, Université PSL, DIENS, Paris, France; 2 Miles, LAMSADE, Université Paris Dauphine-PSL, Paris, France; 3 Sorbonne Université, CNRS, ISIR, Paris, France; 4 Meta FAIR, Paris, France. Correspondence to: Alexandre Verine <EMAIL>, Florian Le Bronnec <EMAIL>.
Pseudocode | Yes | Algorithm 1: Weighted NLL Training Algorithm; Algorithm 2: Estimating the Support Size of the Target Distribution; Algorithm 3: Precision and Recall for Integer Multiplication; Algorithm 4: Precision and Recall for Writing Prompts; Algorithm 5: Precision and Recall for Code Generation.
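Algorithm 1 (Weighted NLL Training) is listed here by title only. As a rough, hypothetical sketch of the general idea, a weighted NLL objective reweights per-sequence log-likelihoods before averaging; the weights below are placeholders, not the paper's actual weighting scheme.

```python
import numpy as np

def weighted_nll(log_probs, weights):
    """Weighted negative log-likelihood over a batch of sequences.
    log_probs[i]: model log-probability of target sequence i.
    weights[i]:   per-sequence weight (hypothetical here; the paper's
                  Algorithm 1 defines its own weighting scheme)."""
    log_probs = np.asarray(log_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return -np.sum(weights * log_probs) / weights.sum()

# With uniform weights this reduces to the standard mean NLL.
lp = [-1.0, -2.0, -3.0]
assert np.isclose(weighted_nll(lp, [1.0, 1.0, 1.0]), 2.0)
```

Upweighting rare or under-covered sequences in such an objective is one generic way a loss change (rather than a temperature change) can push the model toward higher Recall.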
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. It mentions using and fine-tuning existing models such as Llama and OLMo, but not sharing its own implementation.
Open Datasets | Yes | Code Contests (Li et al., 2022); MathQA-Python (Chen et al., 2021); Writing Prompts (Fan et al., 2018); instruction tuning on Alpaca: "We fine-tuned Llama3.1-8B on the Alpaca dataset (Taori et al., 2023)."
Dataset Splits | Yes | Code Contests (Li et al., 2022): "We use the test set comprising 165 challenging problems." MathQA-Python (Chen et al., 2021): "We use the same evaluation as in Code Contests, using pass@k metrics."
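The pass@k metric cited in this row is conventionally computed with the unbiased estimator introduced by Chen et al. (2021); a small sketch:

```python
import math

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator from Chen et al. (2021):
    n = total samples drawn per problem, c = number of correct samples,
    pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples: at least one correct is guaranteed.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# 10 samples, 5 correct: pass@1 = 1 - 5/10 = 0.5
assert pass_at_k(10, 5, 1) == 0.5
```

In practice the estimate is averaged over all problems in the test set (e.g. the 165 Code Contests problems mentioned above).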
Hardware Specification | Yes | "All experiments were conducted using PyTorch and Hugging Face Transformers. For MathQA-Python generation, we used the vLLM library to speed up the generation process. We used both A100-80GB and H100-80GB GPUs for the experiments."
Software Dependencies | No | The paper mentions the software tools used (PyTorch, Hugging Face Transformers, vLLM) but does not provide specific version numbers for any of them, which is required for reproducibility.
Experiment Setup | Yes | Integer multiplication: "... We used the Adam optimizer, with a learning rate of 0.001, a weight decay of 1, 500 epochs, and a batch size of 512 sequences." Writing Prompts & MathQA-Python: "... trained for 3 epochs. ... For all training, we used the Adam optimizer, with a constant learning rate of 1e-6, with 1000 linear warmup steps, a batch size of 8."
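The constant-learning-rate schedule with linear warmup described in this setup can be sketched as a simple function (an illustrative sketch using the stated hyperparameters, not the authors' code):

```python
def lr_schedule(step, base_lr=1e-6, warmup_steps=1000):
    """Linear warmup to base_lr over warmup_steps, then constant,
    matching the fine-tuning setup reported above
    (Adam, constant lr 1e-6, 1000 linear warmup steps)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

# Learning rate ramps up linearly, then stays flat.
assert lr_schedule(0) == 0.0
assert lr_schedule(2000) == 1e-6
```

In a real run this function would be passed to an optimizer's LR scheduler (e.g. via a lambda-based scheduler in PyTorch) alongside the reported batch size of 8.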