Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
Authors: Xiang Yue, Yueqi Song, Akari Asai, Seungone Kim, Jean de Dieu Nyandwi, Simran Khanuja, Anjali Kantharuban, Lintang Sutawika, Sathyanarayanan Ramamoorthy, Graham Neubig
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results show that PANGEA significantly outperforms existing open-source models in multilingual settings and diverse cultural contexts. Ablation studies further reveal the importance of English data proportions, language popularity, and the number of multimodal training samples on overall performance. We fully open-source our data, code, and trained checkpoints, to facilitate the development of inclusive and robust multilingual MLLMs, promoting equity and accessibility across a broader linguistic and cultural spectrum. |
| Researcher Affiliation | Academia | Xiang Yue , Yueqi Song , Akari Asai, Seungone Kim, Jean de Dieu Nyandwi, Simran Khanuja, Anjali Kantharuban, Lintang Sutawika, Sathyanarayanan Ramamoorthy, Graham Neubig EMAIL Carnegie Mellon University |
| Pseudocode | No | The paper describes a data generation pipeline and training process but does not present any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | We fully open-source our data, code, and trained checkpoints, to facilitate the development of inclusive and robust multilingual MLLMs, promoting equity and accessibility across a broader linguistic and cultural spectrum. |
| Open Datasets | Yes | This paper introduces PANGEA, a multilingual multimodal LLM trained on PANGEAINS, a diverse 6M instruction dataset spanning 39 languages. PANGEAINS features: 1) high-quality English instructions, 2) carefully machine-translated instructions, and 3) culturally relevant multimodal tasks to ensure cross-cultural coverage. To rigorously assess models' capabilities, we introduce PANGEABENCH, a holistic evaluation suite encompassing 14 datasets covering 47 languages. ...We fully open-source PANGEAINS, PANGEABENCH, PANGEA-7B, and code, to advance culturally inclusive MLLMs across diverse languages. |
| Dataset Splits | Yes | PANGEABENCH assesses MLLMs' performance on open-domain multimodal chat, image captioning, cultural understanding, multimodal reasoning, and text-only tasks including question answering and complex math reasoning. A key highlight of PANGEABENCH is the introduction of xChat, a human-crafted benchmark designed to evaluate open-ended, information-seeking multimodal conversations. ... xMMMU is a machine-translated version of MMMU (Yue et al., 2024a), testing college-level multimodal reasoning across seven languages. We randomly sample 300 questions from the MMMU (Yue et al., 2024a) validation set and employ GPT-4o to translate them into the six non-English languages. |
| Hardware Specification | Yes | We pretrain and finetune the model for 1 epoch, where pretraining took 4 hours with 8 H100 (32 GPU hours), and finetuning took 168 hours with 8 H100 (1344 GPU hours). |
| Software Dependencies | No | The paper does not explicitly state specific version numbers for software dependencies like Python, PyTorch, or other libraries. It only mentions using a text backbone (Qwen2-7B-Instruct) and a vision encoder (clip-vit-large-patch14-336). |
| Experiment Setup | Yes | The model uses LLaVA-Next as architecture (Liu et al., 2024), Qwen2-7B-Instruct (Yang et al., 2024) as the language model backbone and clip-vit-large-patch14-336 (Radford et al., 2021) as the vision encoder. The training consists of two stages. First, we pretrain the vision-language connector that aligns the outputs of the vision encoder to the backbone, with LLaVA LCS-558K (Liu et al., 2023b;a). Then, we perform finetuning on PANGEAINS, where we employ a learning rate of 2e-5, a batch size of 512, coupled with a cosine decay schedule with a warmup ratio of 0.03. We pretrain and finetune the model for 1 epoch... |
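The reported setup and compute figures above can be summarized in a short sketch. This is a minimal illustration, not code from the Pangea repository: the `FinetuneConfig` dataclass and `gpu_hours` helper are hypothetical names that merely collect the hyperparameters the paper states (learning rate 2e-5, batch size 512, cosine schedule, 0.03 warmup, 1 epoch) and check the GPU-hour arithmetic (wall-clock hours times GPU count).

```python
from dataclasses import dataclass


@dataclass
class FinetuneConfig:
    """Hyperparameters as reported in the paper (names here are illustrative)."""
    base_model: str = "Qwen2-7B-Instruct"             # language backbone
    vision_encoder: str = "clip-vit-large-patch14-336"  # CLIP vision encoder
    learning_rate: float = 2e-5
    batch_size: int = 512
    lr_schedule: str = "cosine"
    warmup_ratio: float = 0.03
    epochs: int = 1


def gpu_hours(wall_clock_hours: float, num_gpus: int) -> float:
    """Total accelerator time = wall-clock hours x number of GPUs."""
    return wall_clock_hours * num_gpus


# Reported compute: 4 h pretraining and 168 h finetuning, each on 8x H100.
print(gpu_hours(4, 8))    # 32.0 GPU hours (pretraining)
print(gpu_hours(168, 8))  # 1344.0 GPU hours (finetuning)
```

The two printed values match the paper's stated totals of 32 and 1344 GPU hours, confirming the hardware figures are internally consistent.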