Federated Disentangled Tuning with Textual Prior Decoupling and Visual Dynamic Adaptation

Authors: Yihao Yang, Wenke Huang, Guancheng Wan, Bin Yang, Mang Ye

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various image classification tasks show the effectiveness of our work in addressing data heterogeneity. ... We conduct extensive experiments on four datasets: Office31, PACS, Office-Home, and DomainNet.
Researcher Affiliation | Academia | National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China. Correspondence to: Mang Ye <EMAIL>.
Pseudocode | Yes | Algorithm 1 FedDDA. Input: communication rounds T, participant set M, the i-th client's private dataset Di and parameter collections θi = {Pg,i, Pl,i, Wg,i, Ws,i, Wgate,i}, learning rate η, and tokenized embedding of guidance words GW.
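From the inputs listed for Algorithm 1, the outer federated loop can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names (`local_update`, the FedAvg-style averaging) and the choice to aggregate every parameter group are assumptions; FedDDA may aggregate only the shared subset (e.g., Pg and Wg).

```python
# Hypothetical sketch of a FedDDA-style outer loop inferred from the
# Algorithm 1 inputs (rounds T, participant set, learning rate eta,
# guidance-word embedding GW). Aggregation rule is an assumption.

def federated_rounds(clients, T, eta, guidance_embedding):
    """clients: objects exposing .params (dict of parameter tensors) and
    .local_update(global_params, eta, gw) -> updated parameter dict."""
    # Initialize the global state from the first client's collection.
    global_params = dict(clients[0].params)
    for _ in range(T):
        updates = []
        for client in clients:
            # Each client tunes its parameter collection theta_i locally
            # (one local epoch in the paper) from the current global state.
            updates.append(client.local_update(global_params, eta,
                                               guidance_embedding))
        # Simple FedAvg-style averaging over all participating clients.
        global_params = {
            key: sum(u[key] for u in updates) / len(updates)
            for key in global_params
        }
    return global_params
```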
Open Source Code | Yes | The codes are released at https://github.com/MoratalYang/FedDDA.
Open Datasets | Yes | Datasets. We extensively evaluate our method on the following four multi-domain classification tasks: Office31 (Saenko et al., 2010) contains 31 classes of common objects in office scenarios across 3 domains: Amazon (A), Webcam (W), and DSLR (D). PACS (Li et al., 2017) includes 4 domains: Art-painting (A), Cartoon (C), Photo (P), and Sketch (S), with 7 classes. Office-Home (Venkateswara et al., 2017) consists of 4 domains: Art (A), Clipart (C), Product (P), and Real-world (R), each with 65 categories. DomainNet (Peng et al., 2019) includes 6 domains: Clipart (C), Infograph (I), Painting (P), Quickdraw (Q), Real (R), and Sketch (S), each with 345 categories.
Dataset Splits | No | The paper describes how data is distributed among clients for federated learning (e.g., "evenly assign each client a distinct domain", "partition the data within each domain based on a Dirichlet distribution"), but it does not give explicit training, validation, and test splits for these datasets, nor does it cite predefined splits that would allow reproduction.
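The Dirichlet-based label-skew partition the paper alludes to is a standard construction and can be sketched as follows. The concentration value `alpha` and the helper name `dirichlet_partition` are assumptions for illustration; the paper does not state its alpha or release the split logic in the quoted text.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=1):
    """Split sample indices among clients so that each class's samples are
    divided according to Dirichlet(alpha) proportions.
    Smaller alpha -> more heterogeneous (non-IID) client data."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Draw per-client proportions for this class, then cut the
        # shuffled index array at the corresponding boundaries.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_indices[cid].extend(part.tolist())
    return client_indices
```

Every index is assigned to exactly one client, so the per-client subsets partition the original dataset.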
Hardware Specification | No | The supercomputing system at the Supercomputing Center of Wuhan University supported the numerical calculations in this paper.
Software Dependencies | No | The paper uses the publicly available CLIP model (Radford et al., 2021) with a ViT-B/16 backbone and the SGD optimizer (Robbins & Monro, 1951), but does not specify version numbers for these or any other software libraries or environments.
Experiment Setup | Yes | Implementation Details. ... We utilize the SGD optimizer (Robbins & Monro, 1951) to optimize selected candidate parameters for 50 communication rounds with 1 local epoch. The learning rate lr is 0.001 and the training batch size for images is 32. We fixed the random seed at 1 to ensure reproducibility. ... The prompt length is set to 16 and the prompts are randomly initialized from a normal distribution.
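The stated hyperparameters can be collected into a single configuration, with the seed fixed up front as the paper does. The config-dict layout, `set_seed` helper, and the prompt-embedding dimension and standard deviation in `init_prompts` are illustrative assumptions; only the numeric values quoted above come from the paper.

```python
import random
import numpy as np

# Values taken from the paper's Implementation Details; the dict layout
# itself is illustrative, not from the released code.
CONFIG = {
    "backbone": "ViT-B/16 (CLIP)",
    "optimizer": "SGD",
    "communication_rounds": 50,
    "local_epochs": 1,
    "learning_rate": 0.001,
    "batch_size": 32,
    "prompt_length": 16,
    "seed": 1,
}

def set_seed(seed):
    """Fix Python and NumPy RNGs (the paper fixes seed 1; a full setup
    would also seed the deep-learning framework, e.g. torch.manual_seed)."""
    random.seed(seed)
    np.random.seed(seed)

def init_prompts(length, dim, std=0.02):
    """Randomly initialize prompt vectors from a normal distribution, as
    stated in the paper; dim and std here are assumptions."""
    return np.random.normal(0.0, std, size=(length, dim))
```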