Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
EvFocus: Learning to Reconstruct Sharp Images from Out-of-Focus Event Streams
Authors: Lin Zhu, Xiantao Ma, Xiao Wang, Lizhi Wang, Hua Huang
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both simulated and real-world datasets demonstrate that EvFocus outperforms existing methods across varying lighting conditions and blur sizes, proving its robustness and practical applicability in event-based defocus imaging. |
| Researcher Affiliation | Academia | 1School of Computer Science & Technology, Beijing Institute of Technology, Beijing, China; 2School of Computer Science, Anhui University, Hefei, China; 3School of Artificial Intelligence, Beijing Normal University, Beijing, China. Correspondence to: Hua Huang <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Synthetic Data Generation Pipeline. Require: a set of background images {Bi}; a set of foreground images {Fi,j} for each background i; motion parameters (e.g., translation, rotation) for background and foregrounds; number of time steps T. Ensure: synthetic dataset containing rendered scenes with events and optical flow. 1: for each scene i do; 2: select one background image Bi; 3: select M foreground images {Fi,j}, j = 1..M; 4: generate motion trajectories for background and each foreground: trajB ← GenerateTrajectory(motion parameters), trajFi,j ← GenerateTrajectory(motion parameters); 5: for t = 1, ..., T do; 6: sample camera pose pt ← SamplePose(); 7: sample camera distortion dt ← SampleDistortion(); 8: render defocus brightness image It ← Render(Bi, {Fi,j}, trajB[t], {trajFi,j[t]}, pt, dt); 9: compute brightness change ΔIt = It − It−1 (if t > 1); 10: generate events Et ← EventGeneration(ΔIt); 11: compute optical flow ut ← OpticalFlow(It); 12: end for; 13: store {It}, {Et}, {ut} for t = 1..T as the dataset for scene i; 14: end for. |
| Open Source Code | No | The text does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | As stated in Sec. 2, we generate sequences of defocus events, sharp images, and optical flows, in which 41 sequences are used in the training set and 6 sequences in the test set. To verify the effectiveness of our model on real data, we use the DAVIS 346 cameras to capture 7 real-world scenes. |
| Dataset Splits | Yes | As stated in Sec. 2, we generate sequences of defocus events, sharp images, and optical flows, in which 41 sequences are used in the training set and 6 sequences in the test set. |
| Hardware Specification | Yes | Our model is trained for 300 epochs with batch size of 1 on 3 NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | Our model is implemented using the PyTorch framework. The paper mentions a software framework (PyTorch) but does not specify its version number or any other software dependencies with version information. |
| Experiment Setup | Yes | We adopt a constant strategy of learning rate during training, which is set at 1e-4. Our model is trained for 300 epochs with batch size of 1 on 3 NVIDIA GeForce RTX 3090 GPUs. |
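The pseudocode row above (Algorithm 1) describes the paper's synthetic data generation loop: per scene, generate motion trajectories, render a frame at each time step, difference consecutive frames into brightness changes, and threshold those changes into events. A minimal sketch of that loop is shown below. All function names and motion/event models here (`generate_trajectory`, `render`, `generate_events`, the shift-based renderer, the event threshold) are illustrative placeholders, not the authors' actual implementation, which is not released.

```python
# Hypothetical sketch of Algorithm 1's per-scene loop, assuming a toy
# translation-only motion model and a simple contrast-threshold event model.
import numpy as np

def generate_trajectory(num_steps, max_shift=2.0, rng=None):
    """Placeholder motion model: cumulative random 2D translation offsets."""
    rng = rng or np.random.default_rng()
    steps = rng.uniform(-max_shift, max_shift, size=(num_steps, 2))
    return np.cumsum(steps, axis=0)

def render(background, offset):
    """Toy 'renderer': shift the background by an integer pixel offset."""
    dy, dx = np.round(offset).astype(int)
    return np.roll(background, (dy, dx), axis=(0, 1))

def generate_events(delta, threshold=0.1):
    """Threshold the brightness change into +/-1 polarity event maps."""
    events = np.zeros_like(delta, dtype=np.int8)
    events[delta > threshold] = 1
    events[delta < -threshold] = -1
    return events

def generate_scene(background, num_steps=8):
    """Steps 4-12 of Algorithm 1, minus foregrounds, pose, and optical flow."""
    traj = generate_trajectory(num_steps)
    images, events = [], []
    prev = None
    for t in range(num_steps):
        frame = render(background, traj[t])
        images.append(frame)
        if prev is not None:                       # delta only for t > 1
            events.append(generate_events(frame - prev))
        prev = frame
    return images, events

rng = np.random.default_rng(0)
bg = rng.random((32, 32))
images, events = generate_scene(bg)
print(len(images), len(events))  # 8 frames, 7 event maps
```

The real pipeline additionally composites moving foreground layers, samples camera pose and distortion per step, renders true defocus blur, and computes ground-truth optical flow; this sketch only mirrors the frame-differencing and event-thresholding structure of the algorithm.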