Test-time Adaptation for Image Compression with Distribution Regularization

Authors: Kecheng Chen, Pingping Zhang, Tiexin Qin, Shiqi Wang, Hong Yan, Haoliang Li

ICLR 2025

Reproducibility Variable | Result | Evidence (LLM Response)
Research Type | Experimental | "Extensive experiments on six in- and cross-domain datasets demonstrate that our proposed method not only improves the R-D performance compared with other latent refinement counterparts, but also can be flexibly integrated into existing TTA-IC methods with incremental benefits. Our code is available at https://tonyckc.github.io/TTA-IC-DR/."
Researcher Affiliation | Academia | "Kecheng Chen, Pingping Zhang, Tiexin Qin, Shiqi Wang, Hong Yan, Haoliang Li. City University of Hong Kong, Hong Kong SAR. Corresponding author: EMAIL; EMAIL"
Pseudocode | No | "The paper describes its methods using mathematical formulations, equations, and figures, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks."
Open Source Code | Yes | "Our code is available at https://tonyckc.github.io/TTA-IC-DR/."
Open Datasets | Yes | "We collect six different datasets with four types of image styles to comprehensively evaluate the R-D performance of different approaches on cross-domain TTA-IC tasks, including natural image (Kodak), screen content image (SIQAD (Yang et al., 2015), SCID (Ni et al., 2017), CCT (Min et al., 2017)), pixel-style gaming image (self-collected by Lv et al. (2023)), and painting image (DomainNet (Peng et al., 2019)) datasets. The details of the used datasets can be found in the Appendix. ... Kodak: https://r0k.us/graphics/kodak/"
Dataset Splits | No | "The paper mentions using pre-trained models and datasets for in-domain and cross-domain evaluations, but it does not specify explicit training, validation, or test splits for these datasets within the context of the experiments conducted in the paper."
Hardware Specification | Yes | "Table 4: Correlation between adaptation performance and adaptation time (using a single NVIDIA GeForce 3090 GPU) on SIQAD."
Software Dependencies | No | "We use CompressAI (Bégaint et al., 2020) to implement our proposed and baseline methods. ... The Adam optimizer is utilized..."
Experiment Setup | Yes | "For TTA-IC, we use the same values of the hyperparameter λ = [0.0018, 0.0035, 0.0067, 0.013, 0.025, 0.048] for latent refinement. The Adam optimizer is utilized to update the latent variables with a learning rate of 1 × 10⁻³ for 2000 iterations. T is empirically set to 20 for MC sampling. We discuss different implementations and hyperparameter settings (e.g., β) of dropout variational inference in Sec. 4.3. ... For HLR, we follow Yang et al. (2020) to use a temperature annealing schedule with default hyperparameters, where τ₀ = 0.5, c₀ = 0.001, and t₀ = 700."
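The reported setup (Adam at 1 × 10⁻³ for 2000 iterations, T = 20 MC samples, and a Yang et al. (2020)-style temperature annealing with τ₀ = 0.5, c₀ = 0.001, t₀ = 700) can be collected into a minimal sketch. The exact functional form of the annealing schedule is not given in the evidence quoted above; the exponential decay below, and the constant names, are illustrative assumptions only.

```python
import math

def annealed_temperature(t, tau0=0.5, c0=0.001, t0=700):
    """One plausible annealing form: hold tau0 until step t0, then decay
    exponentially at rate c0. The defaults tau0/c0/t0 match the reported
    values; the decay shape itself is an assumption, not taken from the paper."""
    return tau0 * math.exp(-c0 * max(0, t - t0))

# Refinement hyperparameters as reported in the experiment-setup evidence.
LAMBDAS = [0.0018, 0.0035, 0.0067, 0.013, 0.025, 0.048]  # R-D trade-off grid
LEARNING_RATE = 1e-3   # Adam step size for updating the latent variables
NUM_ITERS = 2000       # latent-refinement iterations
MC_SAMPLES = 20        # T forward passes for MC (dropout) sampling
```

Under this assumed schedule the temperature stays at τ₀ = 0.5 through step 700 and decays to roughly 0.5 · exp(−1.3) ≈ 0.14 by the final iteration 2000.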