Understanding the Logic of Direct Preference Alignment through Logic
Authors: Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We report on a small-scale case study demonstrating the feasibility of this approach, motivating an exciting avenue for future work. (Introduction) ... Table 5 shows the results of a feasibility study involving Qwen-0.5B tuned on the new losses (rows) compared against the known loss ℓCPO (second column) on ultrafeedback (Cui et al., 2024) test in aggregate (2nd column) and on subsets (right columns). |
| Researcher Affiliation | Collaboration | 1Allen Institute for AI 2University of Utah. Correspondence to: Kyle Richardson <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Translation of loss to logic (decompilation) |
| Open Source Code | Yes | Our full code is available at https://github.com/allenai/declarative_preference_alignment. |
| Open Datasets | Yes | We train models on the ultrafeedback dataset (Cui et al., 2024), which contains around 60k binarized preference pairs aggregated from several individual preference datasets (the different categories are listed in Table 5). |
| Dataset Splits | Yes | For tuning (detailed below) we used a custom held-out development set containing around 1.3k examples taken from the train set and reserve the test set (containing 2k examples) for final evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU models, CPU models, or cloud computing instances with their specifications) used for running its experiments. It mentions using a Qwen-0.5B LLM and vLLM for inference, but does not specify the hardware. |
| Software Dependencies | No | The paper mentions using the 'trl library' (von Werra et al., 2020) and the 'Sympy' computer algebra library (Meurer et al., 2017), but does not specify version numbers for these or any other key software dependencies. |
| Experiment Setup | Yes | we kept β set to 1, and experimented with learning rates in the range {1e-6, 3e-6, 8e-6}, number of epochs in the range {3, 5, 8}, and batch sizes in the range {32, 128} (for efficiency reasons, most tuning was done with a batch size of 32)... We used λs in the range {0.0, 0.01, 0.1, 0.3, 1.0} (we found lower values, around 0.01 and 0.1, to be most effective). |
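The tuning ranges quoted in the Experiment Setup row can be enumerated as a simple grid sweep. The sketch below only reproduces the reported ranges; the `sweep_configs` helper and the configuration dictionary keys are hypothetical, not the authors' actual tuning harness.

```python
from itertools import product

# Hyperparameter ranges as reported in the paper's experiment setup.
BETA = 1.0
LEARNING_RATES = [1e-6, 3e-6, 8e-6]
EPOCHS = [3, 5, 8]
BATCH_SIZES = [32, 128]               # most tuning reportedly used 32 for efficiency
LAMBDAS = [0.0, 0.01, 0.1, 0.3, 1.0]  # 0.01 and 0.1 reported as most effective

def sweep_configs():
    """Yield every configuration in the reported tuning grid."""
    for lr, n_epochs, batch, lam in product(
        LEARNING_RATES, EPOCHS, BATCH_SIZES, LAMBDAS
    ):
        yield {
            "beta": BETA,
            "lr": lr,
            "epochs": n_epochs,
            "batch_size": batch,
            "lambda": lam,
        }

configs = list(sweep_configs())
print(len(configs))  # 3 * 3 * 2 * 5 = 90 configurations
```

Enumerating the grid this way makes the size of the search space explicit (90 configurations), which helps when budgeting a reproduction run.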