Towards Loss-Resilient Image Coding for Unstable Satellite Networks

Authors: Hongwei Sha, Muchen Dong, Quanyou Luo, Ming Lu, Hao Chen, Zhan Ma

AAAI 2025

Reproducibility Assessment (Variable: Result — Supporting excerpt from the LLM response)
Research Type: Experimental — "Extensive evaluations show that our approach outperforms traditional and deep learning-based methods in terms of compression performance and stability under diverse packet loss, offering robust and efficient progressive transmission even in challenging environments."
Researcher Affiliation: Academia — "Hongwei Sha, Muchen Dong, Quanyou Luo, Ming Lu*, Hao Chen, Zhan Ma, School of Electronic Science and Engineering, Nanjing University, EMAIL, EMAIL"
Pseudocode: No — The paper describes the proposed methods (Spatial-Channel Rearrangement and Mask Conditional Aggregation) but does not present them in a structured pseudocode or algorithm block.
Open Source Code: Yes — "Code is available at https://github.com/NJUVISION/Loss Resilient LIC."
Open Datasets: Yes — "We use the Flicker2W dataset (Liu et al. 2020) for training, which will be randomly cropped into the size of 256×256. We perform the evaluation experiments using the Kodak dataset (Kodak 1993) with the resolution of 768×512, which is the size acceptable for extremely low-bandwidth transmission."
Dataset Splits: No — The paper uses the Flicker2W dataset for training and the Kodak dataset for evaluation, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts). It states that training images are "randomly cropped into the size of 256×256," with no further split information.
Hardware Specification: Yes — "All training is conducted on an NVIDIA RTX 3090 GPU, with 200 epochs and a fixed batch size of 16."
Software Dependencies: No — The paper mentions the CompressAI library and the mbt-mean model (Ballé et al. 2018) but does not provide specific version numbers for any software dependencies.
Experiment Setup: Yes — "Three lower bitrate points are selected to cover the extremely low bandwidth range of satellite networks, corresponding to lambda values of 0.0018, 0.0035, and 0.0067 when using Mean Squared Error (MSE) optimization. During training, we need to simultaneously simulate tail-drop loss in progressive coding and network packet loss. The tail-drop rate is randomly selected within the range [0, 1.0] as described in (Hojjat, Haberer, and Landsiedel 2023). To simulate varying packet loss rates with moderate fluctuations in practice (Cheng et al. 2024), we randomly select values from [0.5%, 1.5%, 2.5%, 3.5%, 5%] to approximate the 5% packet loss rate, from [1%, 3%, 5%, 7%, 10%] to simulate the 10%, and from [2%, 6%, 10%, 14%, 20%] to simulate the 20%. For real-world satellite network simulation during training, we set the GE model parameters [p, r, h, k] to [0.378, 0.883, 0.810, 0.938], as derived from (Pieper 2023), to simulate a 10% packet loss rate, based on our practical evaluations. All training is conducted on an NVIDIA RTX 3090 GPU, with 200 epochs and a fixed batch size of 16. We choose Adam as the optimizer and the learning rate is set at 10^-4."
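The quoted GE (Gilbert-Elliott) parameters can be sanity-checked with a short simulation. The sketch below is a minimal illustration, not the authors' code; it assumes the common two-state convention in which p is the Good-to-Bad transition probability, r the Bad-to-Good transition probability, and k and h the per-packet delivery probabilities in the Good and Bad states. Under that convention, the stationary loss rate is pi_G*(1-k) + pi_B*(1-h) with pi_B = p/(p+r), which for [0.378, 0.883, 0.810, 0.938] works out to roughly 10%, matching the quoted target.

```python
import random

def gilbert_elliott_losses(n_packets, p=0.378, r=0.883, h=0.810, k=0.938, seed=0):
    """Simulate per-packet loss on a two-state Gilbert-Elliott channel.

    Assumed convention (not confirmed by the paper excerpt):
      p: P(Good -> Bad) transition probability
      r: P(Bad -> Good) transition probability
      k: delivery probability while in the Good state
      h: delivery probability while in the Bad state
    Returns a list of booleans, True where the packet was lost.
    """
    rng = random.Random(seed)
    good = True          # start in the Good state
    lost = []
    for _ in range(n_packets):
        deliver_p = k if good else h
        lost.append(rng.random() >= deliver_p)   # packet lost with prob 1 - deliver_p
        # Markov state transition for the next packet
        if good:
            good = rng.random() >= p             # leave Good with probability p
        else:
            good = rng.random() < r              # leave Bad with probability r

    return lost

losses = gilbert_elliott_losses(200_000)
rate = sum(losses) / len(losses)
# empirical rate should sit near the ~10% stationary loss rate
```

Because the loss process is Markov (bursty), the empirical rate converges to the stationary value more slowly than i.i.d. loss would, which is exactly why such a model is preferred over a fixed drop probability for satellite links.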