Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

kgMBQA: Quality Knowledge Graph-driven Multimodal Blind Image Assessment

Authors: Wuyuan Xie, Tingcheng Bian, Miaohui Wang

IJCAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental The experimental results demonstrate that our kgMBQA achieves the best performance compared to recent representative methods on the KonIQ-10k, LIVE Challenge, BIQ2021, TID2013, and AIGC-3K datasets.
Researcher Affiliation Academia ¹College of Computer Science & Software Engineering, Shenzhen University; ²Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University. EMAIL, EMAIL
Pseudocode No The paper describes the methodology in prose and figures but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code No The paper states: "We have implemented the proposed kgMBQA on the PyTorch platform", but it does not provide any concrete access information (e.g., repository link, explicit release statement) for the source code.
Open Datasets Yes We have carried out the comparison experiments on five different image datasets: KonIQ-10k [Hosu et al., 2020]: This dataset contains 10,073 natural images... LIVE-Challenge [Ghadiyaram and Bovik, 2015]: This dataset contains 1,162 natural images... BIQ2021 [Ahmed and Asif, 2022]: Each image in the BIQ2021 dataset has a MOS... TID2013 [Ponomarenko et al., 2015]: TID2013 contains 25 reference images and 3,000 distorted images... AIGC-3K [Li et al., 2023a]: The AIGC-3K dataset employed six different image generation models to generate 2,982 images.
Dataset Splits Yes In the experiments, we randomly split the dataset into training, validation, and test sets in an 8:1:1 ratio.
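The 8:1:1 random split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, seed handling, and the rounding of fractional split sizes are assumptions.

```python
import random

def split_8_1_1(items, seed=0):
    """Randomly split a sequence into train/val/test at an 8:1:1 ratio.

    Fractional boundaries are truncated, so any remainder images land
    in the test set (an assumption -- the paper does not specify).
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Example with the 10,073 images of KonIQ-10k:
train, val, test = split_8_1_1(range(10073))
print(len(train), len(val), len(test))  # 8058 1007 1008
```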
Hardware Specification Yes All experiments in this paper have been conducted on a computing platform with an Intel(R) Xeon(R) Silver 4210R @ 2.40GHz CPU, 62GB RAM, and six NVIDIA A100-PCIE-40GB GPUs.
Software Dependencies No We have implemented the proposed kgMBQA on the PyTorch platform. While PyTorch is mentioned, no specific version number is provided for it or any other software component.
Experiment Setup Yes During the training, the initial learning rate is set to 5e-5 and decreased to 90% of the original value every 10 epochs. Additionally, the batch size is set to 20, and the adaptive moment estimation method (Adam) is used to optimize the learning parameters. We directly use a pre-trained model [Wu et al., 2022] for local text generation. In the experiments, we train our kgMBQA model for a total of 100 epochs. To alleviate memory pressure, we crop the input images to 224x224.
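The quoted schedule (initial rate 5e-5, multiplied by 0.9 every 10 epochs) is a standard step decay; in PyTorch it would correspond to `torch.optim.lr_scheduler.StepLR` with `step_size=10, gamma=0.9`. A dependency-free sketch of the resulting rate per epoch (function name is ours, not the paper's):

```python
def lr_at_epoch(epoch, initial_lr=5e-5, decay=0.9, step=10):
    """Step-decay schedule: lr is multiplied by `decay` every `step` epochs."""
    return initial_lr * (decay ** (epoch // step))

# Learning rate over the paper's 100-epoch run:
print(lr_at_epoch(0))   # 5e-05
print(lr_at_epoch(10))  # 4.5e-05
print(lr_at_epoch(99))  # 5e-5 * 0.9**9
```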