Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM

Authors: Zheng Wei Lim, Nitish Gupta, Honglin Yu, Trevor Cohn

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type | Experimental | "Our experiments on En-XX translations over the Flores-200 dataset show LLMs finetuned against Mufu-style prompts are robust to poor quality auxiliary translation candidates, achieving performance superior to NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs."
Researcher Affiliation | Collaboration | Zheng Wei Lim, Nitish Gupta, Honglin Yu, Trevor Cohn (The University of Melbourne; Google)
Pseudocode | No | The paper describes the Mufu process through textual explanations and figures (Figure 1, Table 1) but does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | "As a low-data setup, we train and validate on the FLORES-200 dev split (Costa-jussà et al., 2022)..."
Dataset Splits | Yes | "Out of 997 source sentences in the split, we randomly sampled 787 sentences as the train set, 100 sentences as the validation data, and another 100 sentences to perform initial prompt selection."
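The split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the random seed, the placeholder sentences, and the function name are assumptions, and note that 787 + 100 + 100 = 987, so 10 of the 997 dev sentences go unused.

```python
import random

def split_flores_dev(sentences, seed=0):
    """Randomly partition the FLORES-200 dev sentences into
    train (787), validation (100), and prompt-selection (100) subsets.
    The remaining 10 sentences are left unused, per the paper's counts."""
    assert len(sentences) == 997
    rng = random.Random(seed)  # seed is an assumption; the paper gives none
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    train = shuffled[:787]
    valid = shuffled[787:887]
    prompt_sel = shuffled[887:987]
    return train, valid, prompt_sel

# Usage with placeholder sentences standing in for the dev split
sents = [f"sentence-{i}" for i in range(997)]
train, valid, prompt_sel = split_flores_dev(sents)
print(len(train), len(valid), len(prompt_sel))  # 787 100 100
```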
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud instance specifications used for running the experiments. It only mentions "LLMs" and "Mufu models" without specifying the underlying hardware.
Software Dependencies | No | The paper mentions various models like PaLM2, Gemma, and BLOOMZ 1B7, but it does not specify any software dependencies such as libraries, frameworks, or programming languages with their version numbers that are needed to replicate the experimental setup.
Experiment Setup | Yes | "We perform full parameter updates for 25 epochs across all models... All Gemma models are finetuned at a learning rate of 1e-5. We set the initial learning rate to 1e-4 for PaLM2 models. When the models fail to converge, we reduce the rate to 1e-5 in the reruns."
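The quoted setup can be condensed into a configuration sketch. The field names, the config structure, and the `pick_lr` helper are all illustrative assumptions; the paper does not name a training framework.

```python
# Hedged sketch of the finetuning setup quoted above.
# All names here are hypothetical; only the numeric values come from the paper.
FINETUNE_CONFIG = {
    "epochs": 25,                 # full parameter updates for 25 epochs
    "update": "full-parameter",   # no adapters/LoRA mentioned in the paper
    "learning_rate": {
        "gemma": 1e-5,            # all Gemma models
        "palm2": 1e-4,            # initial rate for PaLM2 models
    },
}

def pick_lr(model_family, rerun=False):
    """Return the learning rate per the paper's schedule: on reruns after
    failed convergence, the rate is reduced to 1e-5."""
    lr = FINETUNE_CONFIG["learning_rate"][model_family]
    if rerun:
        lr = min(lr, 1e-5)
    return lr
```

For example, `pick_lr("palm2")` yields the initial 1e-4, while `pick_lr("palm2", rerun=True)` yields the reduced 1e-5.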