SaMer: A Scenario-aware Multi-dimensional Evaluator for Large Language Models
Authors: Kehua Feng, Keyan Ding, Jing Yu, Yiwen Qu, Zhiwen Chen, Chengfei Lv, Gang Yu, Qiang Zhang, Huajun Chen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on eight single-rating and pairwise comparison datasets demonstrate that SaMer outperforms existing baselines in a variety of evaluation tasks, showcasing its robustness, versatility, and generalizability. |
| Researcher Affiliation | Collaboration | Kehua Feng (1,2), Keyan Ding (2), Jing Yu (2,3), Yiwen Qu (5), Zhiwen Chen (5), Chengfei Lv (5), Gang Yu (5), Qiang Zhang (2,4), Huajun Chen (1,2). Affiliations: (1) College of Computer Science and Technology, Zhejiang University; (2) ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University; (3) Polytechnic Institute, Zhejiang University; (4) ZJU-UIUC Institute, Zhejiang University; (5) Alibaba Group |
| Pseudocode | No | The paper describes the model architecture and training process in Section 4 and illustrates it in Figure 3, but it does not include any explicit pseudocode blocks or algorithms with structured steps. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | To collect data across a wide range of scenarios, we gathered a large volume of publicly available preference data from multiple sources, including Chatbot Arena Conversations (55K) (Zheng et al., 2023b), Synthetic GPT-J (Havrilla, 2023), Stanford SHP-2 (Ethayarajh et al., 2022), HelpSteer2 (Wang et al., 2024b), UltraFeedback (Cui et al., 2023), PKU-SafeRLHF (Ji et al., 2024), and Preference Collection (200K) (Kim et al., 2024). |
| Dataset Splits | No | The paper mentions collecting and balancing data across scenarios (e.g., "balanced the number of samples per scenario between 2K and 5K to maintain similar proportions across all scenarios, with 135,402 data in total.") and refers to evaluation benchmarks, but it does not specify explicit training, validation, and test splits for the fine-grained preference dataset (D) used to train SaMer, beyond noting its use for training. |
| Hardware Specification | Yes | To efficiently train the model, we leverage the DeepSpeed library (Rasley et al., 2020), Zero Redundancy Optimizer (ZeRO) Stage 2 (Rajbhandari et al., 2020), and FlashAttention-2 (Dao, 2023) across 2 NVIDIA GeForce RTX 4090. ... Experiments were conducted using an NVIDIA GeForce RTX 4090 GPU, with all model parameters stored in bf16 precision. |
| Software Dependencies | No | The paper mentions several software components like "DeepSpeed library", "Zero Redundancy Optimizer (ZeRO) Stage 2", "FlashAttention-2", "AdamW", and "Hugging Face transformers Python library" but does not provide specific version numbers for most of them, which is required for reproducibility. |
| Experiment Setup | Yes | During training, we set γ = 0.3 in Eq. (2) and (3), λ1 = λ2 = 1 in Eq. (4). Particularly, the impact of λ1 and λ2 on the performance of SaMer is discussed in the Appendix A4.1. The text embedding dimension h is 4096, consistent with the hidden size of Llama-3-8B, while the dimension N of the three MLP layers is 42, representing the total number of pre-defined dimensions. ... We adopt AdamW (Loshchilov, 2017) as our optimizer, with β1 = 0.9, β2 = 0.95, and a weight decay of 0.1. The peak learning rate is set to 5e-5, with 10% warm-up steps, and a cosine decay to 0. We set the batch size to 32 and the maximum sequence length to 8,192. The model is trained for 3 epochs to ensure convergence and optimal performance. |
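As a reproducibility aid, the learning-rate schedule quoted above (peak 5e-5, 10% linear warm-up, cosine decay to 0) can be sketched as a standalone function. This is a minimal illustration, not code from the paper; the function name and the total step count are assumptions, since the paper specifies the schedule shape but not the step budget.

```python
import math

def lr_at_step(step: int, total_steps: int,
               peak_lr: float = 5e-5, warmup_frac: float = 0.10) -> float:
    """Learning rate at a given step: linear warm-up over the first
    warmup_frac of training up to peak_lr, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        # Linear warm-up from 0 to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```

For example, with a hypothetical 1,000-step run, the rate is 0 at step 0, reaches the 5e-5 peak at step 100 (end of the 10% warm-up), and decays back to 0 by step 1,000.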