Block-Based Multi-Scale Image Rescaling
Authors: Jian Li, Siwang Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that BBMR significantly enhances the SR image quality on the 2K and 4K test datasets compared to initial network image rescaling methods. |
| Researcher Affiliation | Academia | Hunan University, China |
| Pseudocode | Yes | Algorithm 1: Block Scaling Rate Allocation Algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We use the DIV2K and Flickr2K (Timofte et al. 2017) datasets for training. With bicubic downscaling, LR image sizes are 64×64 for ×2 and ×4 scaling factors, and 32×32 for ×8. Down-Net employs end-to-end training with 256×256 HR input and SR output. For Joint SR, we randomly crop 768×768 regions from DIV2K and Flickr2K as HR images and divide them into 36 HR sub-blocks of size 128×128. Subsequently, we use three different scaling rates to randomly select 12 sub-blocks for downscaling as the input of Joint SR. Data augmentation includes random horizontal flips and 90/270-degree rotations. Testing data Test2K and Test4K each contain 100 images, which were selected from the DIV8K (Gu et al. 2019) dataset and downscaled using bicubic interpolation. |
| Dataset Splits | Yes | Testing data Test2K and Test4K each contain 100 images, which were selected from the DIV8K (Gu et al. 2019) dataset and downscaled using bicubic interpolation. We divided the HR images of the DIV2K validation dataset into 128×128 sub-blocks and validated them using bicubic downscaling and Omni-SR upscaling methods. |
| Hardware Specification | Yes | All models are built with PyTorch and trained on NVIDIA GeForce RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the 'AdamW optimizer' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The batch size for Omni-SR and CRAFT at all upscaling rates is set to 32, trained for 1000 epochs. Joint SR uses a batch size of 4 and is trained for 100 epochs. All networks start with a learning rate of 0.0005, which is halved every 250 epochs for the SR networks and every 20 epochs for Joint SR. Training uses the AdamW optimizer with L1 loss. |
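
The Joint SR data preparation quoted above (crop a 768×768 HR region, split it into 36 sub-blocks of 128×128, randomly select 12 sub-blocks, and downscale each at one of three scaling rates) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name `make_joint_sr_batch` is invented for this sketch, and average pooling stands in for the paper's bicubic downscaling operator.

```python
import numpy as np

def make_joint_sr_batch(hr_image, crop=768, block=128, n_select=12,
                        scales=(2, 4, 8), rng=None):
    """Sketch of the described Joint SR data preparation: crop a 768x768
    HR region, split it into 36 sub-blocks of 128x128, then randomly
    pick 12 sub-blocks and assign each one of three scaling rates.
    Average pooling stands in for bicubic downscaling here."""
    rng = rng or np.random.default_rng()
    h, w = hr_image.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    region = hr_image[top:top + crop, left:left + crop]

    n = crop // block                      # 6 blocks per side -> 36 total
    blocks = [region[i * block:(i + 1) * block, j * block:(j + 1) * block]
              for i in range(n) for j in range(n)]

    chosen = rng.choice(len(blocks), size=n_select, replace=False)
    batch = []
    for idx in chosen:
        s = int(rng.choice(scales))        # one of the three scaling rates
        hr_blk = blocks[idx]
        # average-pool downscale by factor s (stand-in for bicubic)
        lr_blk = hr_blk.reshape(block // s, s, block // s, s).mean(axis=(1, 3))
        batch.append((s, lr_blk, hr_blk))
    return batch
```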
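
The reported learning-rate schedule (start at 0.0005, halved every 250 epochs for the SR networks and every 20 epochs for Joint SR) amounts to a simple step decay, sketched below. The helper name `lr_at_epoch` is an assumption for illustration; the paper pairs this schedule with the AdamW optimizer and L1 loss.

```python
def lr_at_epoch(epoch, base_lr=5e-4, half_every=250):
    """Step-decay schedule from the reported setup: start at base_lr
    and halve every `half_every` epochs (250 for the SR networks,
    20 for Joint SR)."""
    return base_lr * 0.5 ** (epoch // half_every)
```

For example, the SR networks train at 5e-4 for epochs 0-249, 2.5e-4 for epochs 250-499, and so on through 1000 epochs, while Joint SR halves five times over its 100 epochs.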