PlaNet: Learning to Mitigate Atmospheric Turbulence in Planetary Images
Authors: Yifei Xia, Chu Zhou, Chengxuan Zhu, Chao Xu, Boxin Shi
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method achieves state-of-the-art performance on both synthetic and real-world images. |
| Researcher Affiliation | Academia | 1National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University 2National Engineering Research Center of Visual Technology, School of Computer Science, Peking University 3National Institute of Informatics, Japan 4National Key Lab of General AI, School of Intelligence Science and Technology, Peking University EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the architecture and methodology in detail within the text and figures (e.g., Figure 3: Architecture of the proposed PlaNet), but it does not include a distinct pseudocode block or algorithm section. |
| Open Source Code | No | The paper mentions software like AutoStakkert and provides a URL for it, but there is no explicit statement or link indicating that the authors' own source code for PlaNet is publicly available. |
| Open Datasets | No | The paper states: "We first collect 27 different 3D planet models in our solar system... obtained from the NASA website" and "collect some planetary images from the Internet". It provides a general NASA website and a URL for internet images (http://www.skyimaging.com/astronomy-videos.php), but it does not provide a specific link or repository for the *dataset they generated* from these sources, nor for the real-world images they captured themselves using a Celestron C11 telescope and a ZWO ASI 290 camera. |
| Dataset Splits | Yes | We first collect 27 different 3D planet models in our solar system... Then, we split them into two parts that contain 22 and 5 different 3D models for making the training and test sets respectively. ...so that the training (testing) set contains 1056 (60) different image sequences finally. |
| Hardware Specification | Yes | We implement our method using PyTorch on a computer with an Intel Xeon Platinum 8358P CPU and two NVIDIA A100 GPUs. |
| Software Dependencies | No | We implement our method using PyTorch on a computer with an Intel Xeon Platinum 8358P CPU and two NVIDIA A100 GPUs. The network is trained for 100 epochs with a batch size of 8. For optimization, we use Adam optimizer (β1 = 0.9, β2 = 0.999) with a constant learning rate of 10^-4 during training. We add an instance normalization (Ulyanov, Vedaldi, and Lempitsky 2016) layer and a ReLU activation function after each convolution layer. No specific version numbers for PyTorch or other libraries are provided. |
| Experiment Setup | Yes | The overall loss function L is defined as L = Lout + λLdec, where λ is a weighting coefficient set to be 10.0. ...where N is the number of input frames, L1 denotes the ℓ1 loss, the superscript j denotes the j-th scale, and α1,2,3 are weighting coefficients set to be 9.0, 3.0, and 1.0 respectively. ...The network is trained for 100 epochs with a batch size of 8. For optimization, we use Adam optimizer (Kingma and Ba 2014) (β1 = 0.9, β2 = 0.999) with a constant learning rate of 10^-4 during training. |
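The loss weights quoted above (λ = 10.0, α1,2,3 = 9.0, 3.0, 1.0) fully determine how the per-scale ℓ1 terms are combined. The minimal sketch below illustrates that weighted combination in plain Python; the function names and the exact per-scale form are assumptions, since the paper quotes only the weights and the overall structure L = Lout + λLdec, not full pseudocode.

```python
LAMBDA = 10.0             # λ: weight on the decomposition loss Ldec (from the paper)
ALPHAS = (9.0, 3.0, 1.0)  # α1, α2, α3: per-scale weights (from the paper)

def l1(pred, target):
    """ℓ1 loss averaged over a flat list of values."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def multiscale_l1(preds, targets, alphas=ALPHAS):
    """Weighted sum of ℓ1 losses over the three scales (j = 1, 2, 3)."""
    return sum(a * l1(p, t) for a, p, t in zip(alphas, preds, targets))

def total_loss(out_preds, out_targets, dec_preds, dec_targets, lam=LAMBDA):
    """Overall loss L = Lout + λ·Ldec, as stated in the paper."""
    l_out = multiscale_l1(out_preds, out_targets)
    l_dec = multiscale_l1(dec_preds, dec_targets)
    return l_out + lam * l_dec
```

For example, with a per-scale ℓ1 of 1.0 at every scale, Lout evaluates to 9.0 + 3.0 + 1.0 = 13.0. In the actual implementation each `pred`/`target` would be a multi-frame tensor batch rather than a flat list.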