EyeSeg: An Uncertainty-Aware Eye Segmentation Framework for AR/VR
Authors: Zhengyuan Peng, Jianqing Xu, Shen Li, Jiazhen Ji, Yuge Huang, Jingyun Zhang, Jinmin Li, Shouhong Ding, Rizen Guo, Xin Tan, Lizhuang Ma
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on multiple real-world datasets and challenging scenarios, demonstrating significant improvements in metrics such as MIoU, E1, F1, and ACC with 1.53G FLOPs. [...] Segmentation. Tab. 1 shows the segmentation results across four datasets. [...] Uncertainty Estimation. We conduct experiments to evaluate the performance of our uncertainty-aware approach on hard samples in four settings: in-domain, cross-domain, occlusion, and blur. [...] Ablation Study. We conduct experimental comparisons of the two optimization objectives proposed in Eq. (9) in this paper. From the results presented in Tab. 3, it can be observed that both optimization objectives yield improvements. |
| Researcher Affiliation | Collaboration | 1. Shanghai Jiao Tong University; 2. Tencent; 3. National University of Singapore; 4. Tsinghua University; 5. East China Normal University |
| Pseudocode | Yes | Algorithm 1: Training Algorithm [...] Algorithm 2: Test Algorithm |
| Open Source Code | No | The text discusses releasing 'annotated labels' for a dataset, but not the source code for the methodology described in the paper. No explicit statement of code release or repository link for EyeSeg is found. |
| Open Datasets | Yes | We conduct extensive experiments to evaluate our proposed EyeSeg on multiple widely used datasets, including OpenEDS [Garbin et al., 2019], LPW [Tonsen et al., 2015], and Dikablis [Fuhl et al., 2022]. These datasets provide a diverse range of real-world scenarios and variations in imaging conditions. Furthermore, we also curate a meticulously annotated dataset, ElSe [Fuhl et al., 2016b], to assess the performance of competing methods under more challenging conditions. The annotated labels will be open-sourced. |
| Dataset Splits | No | The paper mentions using different datasets for training and evaluation in various settings (e.g., 'trained on the ElSe dataset and tested on other datasets', 'trained on ElSe and evaluated on OpenEDS'). However, it does not specify concrete percentages or sample counts for training, validation, and test splits within these datasets. |
| Hardware Specification | No | The paper mentions 'computational efficiency with only 1.53G FLOPs' but does not provide specific hardware details such as GPU/CPU models or memory specifications used for training or inference. |
| Software Dependencies | No | The paper mentions using 'YOLOv3 [Redmon and Farhadi, 2018]' for object detection and 'DeepVOG [Yiu et al., 2019]' and 'DenseElNet [Kothari et al., 2021]' as backbones. However, it does not provide specific version numbers for these or other software libraries/frameworks (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | No | The paper mentions general experimental settings, such as 'conventional data augmentation techniques such as random rotation, translation, scaling, and horizontal flipping', 'gamma correction', and that 'the input size of the network is standardized to 96 × 96'. However, specific hyperparameters such as learning rate, batch size, optimizer type, or number of training epochs are not provided in the main text. |
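The paper's headline segmentation metric, MIoU, is not defined in the excerpts above; the sketch below shows the standard mean intersection-over-union computation it presumably refers to (the function name and the convention of skipping classes absent from both masks are our assumptions, not details from the paper).

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Standard mean IoU over integer label maps of the same shape.

    Classes absent from both prediction and target are skipped so they
    do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class c appears in neither mask
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny worked example on 4-pixel label maps with 3 classes:
pred = np.array([0, 1, 1, 2])
target = np.array([0, 1, 2, 2])
score = mean_iou(pred, target, num_classes=3)
# class 0: IoU 1.0; class 1: IoU 0.5; class 2: IoU 0.5 -> mean = 2/3
```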
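Since the paper names its augmentations (flipping, gamma correction, 96 × 96 inputs) but reports no parameters, the following is a minimal illustrative pass over a single image; all numeric ranges here are guesses, and rotation/translation/scaling would in practice come from a library such as torchvision or albumentations rather than hand-rolled numpy.

```python
import numpy as np

def augment(img, rng):
    """Illustrative augmentation: random horizontal flip plus gamma
    correction on an image with values in [0, 1].

    The flip probability (0.5) and gamma range (0.8-1.2) are assumptions;
    the paper does not report them.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip
    gamma = rng.uniform(0.8, 1.2)
    return np.clip(img, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
img = rng.random((96, 96))  # paper standardizes inputs to 96 x 96
out = augment(img, rng)     # same shape, values still in [0, 1]
```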