Achieving Lightweight Super-Resolution for Real-Time Computer Graphics

Authors: Yu Wen, Chen Zhang, Chenhao Xie, Xin Fu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation results show that CGSR significantly reduces parameter size, multi-add operations, and inference time while maintaining high SR quality across various backbone SR networks. Our qualitative and quantitative analysis of the SR process and rendering reveals that readily accessible rendering information can significantly enhance neural network design by serving as additional features.
Researcher Affiliation | Academia | ¹University of Houston, ²Beihang University (EMAIL, EMAIL)
Pseudocode | No | The paper describes the methodology using textual explanations and diagrams (Figure 2, Figure 3, Figure 4) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Project Page: https://github.com/UH-ECOMS-Lab/CGSR
Open Datasets | No | Traditionally, training an efficient SR network relies on large datasets specific to each application to enhance SR quality. However, due to the absence of publicly available datasets containing real-time rendering information, we took the initiative to pioneer the generation of such datasets.
Dataset Splits | Yes | 10% of the frames, uniformly sampled from the entire period, are reserved as a test set to evaluate the performance and SR quality of CGSR. Additionally, 70% of the frames are used for training, while the remaining 20% are set aside for validation.
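The 70/20/10 split quoted above can be sketched as follows. This is a minimal illustration, not code from the paper: the function name, the choice of every-10th-frame sampling to realize "uniformly sampled from the entire period", and the ordered-list representation of frames are all our assumptions.

```python
def split_frames(frames):
    """Illustrative split: 10% test frames sampled uniformly across the
    whole sequence; of the remainder, 70% of all frames go to training
    and the final 20% to validation."""
    n = len(frames)
    # Take every 10th frame as the test set, spread over the full period.
    test_idx = set(range(0, n, 10))
    test = [frames[i] for i in sorted(test_idx)]
    rest = [frames[i] for i in range(n) if i not in test_idx]
    # 70% of all frames for training, the remaining 20% for validation.
    n_train = round(0.7 * n)
    return rest[:n_train], rest[n_train:], test
```

For a 100-frame sequence this yields 70 training, 20 validation, and 10 uniformly spaced test frames.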
Hardware Specification | Yes | The final network is evaluated on 10 NVIDIA Tesla V100 GPUs.
Software Dependencies | Yes | The CGSR framework is implemented in PyTorch 1.5.0 and integrated into the selected backbone to create the CG-enhanced network.
Experiment Setup | Yes | For architecture and parameter shrinking, we use a population size of 50 and a candidate selection ratio of 0.2. In each generation, candidate networks are trained for 5 epochs using an Adam optimizer with a cosine annealing learning rate that decays from 1e-3 to 1e-6. The CG-optimized network is then trained for 30 epochs. During rendering-guided hybrid pruning, the CG-optimized network is fine-tuned for 2 epochs in each dynamic pruning run.
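The learning-rate schedule quoted above (cosine annealing from 1e-3 to 1e-6) matches the standard cosine annealing formula, equivalent to what PyTorch's CosineAnnealingLR computes. A minimal sketch, with illustrative function and parameter names of our own:

```python
import math

def cosine_annealed_lr(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
    """Standard cosine annealing: starts at lr_max at epoch 0 and
    decays smoothly to lr_min at the final epoch."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )
```

For the 5-epoch candidate training runs described above, this gives 1e-3 at epoch 0 and 1e-6 at epoch 5.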