Disentangled Table-Graph Representation for Interpretable Transmission Line Fault Location

Authors: Na Yu, Yutong Deng, Shunyu Liu, Kaixuan Chen, Tongya Zheng, Mingli Song

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the 7-bus system, the 36-bus system, and a realistic 325-bus system in China demonstrate that the proposed method adapts to different topological structures and handles different types of faults. Compared to traditional methods, DTG4Power achieves high accuracy in identifying both fault lines and fault points.
Researcher Affiliation | Academia | (1) State Key Laboratory of Blockchain and Data Security, Zhejiang University; (2) Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security; (3) Nanyang Technological University; (4) Big Graph Center, Hangzhou City University. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology in detail through equations and textual explanations, but it does not include a clearly labeled pseudocode block or algorithm section with structured steps.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | No | The paper states: "The dataset utilized in this study is derived from real data and generated through fault simulation using PSASP (Zhongxi and Xiaoxin 1998)." It describes the characteristics of the generated data (e.g., "7 and 36 buses, and a real regional power system in China composed of 325 buses") but does not provide concrete access information (link, DOI, repository) for these datasets, nor does it refer to them as publicly available.
Dataset Splits | No | The paper mentions that "all sample data were classified into a total of 144 categories, amounting to 144,000 samples" and discusses training. However, it does not specify explicit training, validation, or test dataset splits (e.g., percentages or exact counts) needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the use of the "Adam optimizer" and refers to general methods such as "K-Nearest Neighbors (KNN)", "Random Forest", "Multi-Layer Perceptron (MLP)", and "Graph Convolutional Network (GCN)", as well as the simulation software "PSASP (Zhongxi and Xiaoxin 1998)". However, it does not specify version numbers for any of the software libraries or frameworks used in the implementation (e.g., specific versions of PyTorch, TensorFlow, scikit-learn, etc.).
Experiment Setup | Yes | The experiments employed a batch size of 16 and a learning rate of 1e-3, with all models converging within 300 epochs. Training used the Adam optimizer and cross-entropy as the task loss.
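The reported setup (batch size 16, learning rate 1e-3, Adam, cross-entropy loss, up to 300 epochs) can be expressed as a minimal sketch. This is not the authors' code: the paper names no framework, so the snippet below shows the two named components, cross-entropy and an Adam update, in plain numpy, with the paper's hyperparameters as constants.

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
BATCH_SIZE = 16
LEARNING_RATE = 1e-3
MAX_EPOCHS = 300

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch (logits: [B, C], labels: [B] of class ids)."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

class Adam:
    """Minimal Adam optimizer (Kingma & Ba 2015) for a single parameter array."""
    def __init__(self, lr=LEARNING_RATE, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, param, grad):
        if self.m is None:
            self.m, self.v = np.zeros_like(param), np.zeros_like(param)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad          # first moment
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2     # second moment
        m_hat = self.m / (1 - self.b1 ** self.t)                  # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return param - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

With uniform logits over C classes the loss equals log(C), and the first Adam step moves a parameter by roughly the learning rate, which makes both pieces easy to sanity-check against the reported 1e-3 setting.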