ALTBI: Constructing Improved Outlier Detection Models via Optimization of Inlier-Memorization Effect
Authors: Seoyoung Cho, Jaesung Hwang, Kwan-Young Bak, Dongha Kim
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide extensive experimental results to demonstrate that ALTBI achieves state-of-the-art performance in identifying outliers compared to other recent methods, even with lower computation costs. Additionally, we show that our method yields robust performances when combined with privacy-preserving algorithms. |
| Researcher Affiliation | Collaboration | 1 Department of Statistics, Sungshin Women's University; 2 SK Telecom; 3 Data Science Center, Sungshin Women's University. EMAIL, EMAIL |
| Pseudocode | Yes | The pseudo algorithm of ALTBI is presented in Algorithm 1. |
| Open Source Code | No | The paper states, "We acknowledge that we implemented ALTBI and ODIM ourselves," but it does not provide an explicit statement of code release or a link to a code repository for the methodology described in the paper. |
| Open Datasets | Yes | We analyze all 57 outlier detection benchmark datasets from ADBench (Han et al. 2022), including tabular, image, and text data. |
| Dataset Splits | No | The paper mentions evaluating AUC values of the training data and running experiments for three trials with random parameter initializations, but it does not specify explicit training/test/validation dataset splits (e.g., percentages, sample counts, or references to predefined splits) needed for reproduction. |
| Hardware Specification | Yes | We use the PyTorch framework to run our algorithm using a single NVIDIA TITAN XP GPU. |
| Software Dependencies | No | The paper mentions using the "Pytorch framework" and "Adam" optimizer, but it does not provide specific version numbers for these software components or any other key libraries. |
| Experiment Setup | Yes | For the optimizer, we use Adam (Kingma and Ba 2014) with a learning rate of 1e-3. Throughout our experimental analysis, we fix the hyperparameters necessary for our proposed method, (n0, γ, ρ, T0, T1, T2), to (128, 1.03, 0.92, 10, 60, 80), unless stated otherwise. |
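The reported experiment setup can be collected into a single configuration sketch. Since the authors did not release code, the dictionary below is purely illustrative: the keys mirror the paper's notation, and the comments state only what the table above reports, not the paper's internal semantics.

```python
# Illustrative configuration sketch of the reported ALTBI experiment setup.
# Names and structure are assumptions; the paper does not release code.
ALTBI_CONFIG = {
    "n0": 128,               # reported hyperparameter n0
    "gamma": 1.03,           # reported hyperparameter γ
    "rho": 0.92,             # reported hyperparameter ρ
    "T0": 10,                # reported hyperparameter T0
    "T1": 60,                # reported hyperparameter T1
    "T2": 80,                # reported hyperparameter T2
    "optimizer": "Adam",     # Kingma and Ba (2014)
    "learning_rate": 1e-3,   # reported Adam learning rate
    "trials": 3,             # three trials with random parameter initializations
}

def describe(config: dict) -> str:
    """Render the configuration as a short human-readable summary line."""
    return (f"Adam(lr={config['learning_rate']}), "
            f"(n0, γ, ρ, T0, T1, T2) = "
            f"({config['n0']}, {config['gamma']}, {config['rho']}, "
            f"{config['T0']}, {config['T1']}, {config['T2']})")
```

A reproduction attempt would still need the unspecified pieces flagged elsewhere in the table (library versions, dataset splits) before this configuration is sufficient.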