A Max-Min Approach to the Worst-Case Class Separation Problem
Authors: Mohammad Mahdi Omati, Prabhu Babu, Petre Stoica, Arash Amini
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we propose a novel discriminative feature learning method... Experiments on several machine learning datasets demonstrate the effectiveness of the MM4MM approach. |
| Researcher Affiliation | Academia | Mohammad Mahdi Omati, Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran; Prabhu Babu, Centre for Applied Research in Electronics (CARE), Indian Institute of Technology Delhi, New Delhi-110016, India; Petre Stoica, Division of Systems and Control, Department of Information Technology, Uppsala University, Uppsala, Sweden 75237; Arash Amini, Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran |
| Pseudocode | Yes | Algorithm 1 MM4MM for WCCS (SDP approach) ... Algorithm 2 Alternating Minimization Approach for Solving (30) ... Algorithm 3 MM4MM for WCCS with sparsity penalty |
| Open Source Code | No | The text does not contain any explicit statements about the release of source code for the methodology described in this paper, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | The evaluation is conducted on six real-world datasets from the UCI Machine Learning Repository and Kaggle. These datasets are briefly described below: The Iris dataset... The Wine dataset... The Seeds dataset... The Prestige dataset... The Diamonds dataset... Finally, the Digit dataset Prabhu (2019) is a high-dimensional dataset... |
| Dataset Splits | Yes | For the five non-Digit datasets (Iris, Wine, Seeds, Prestige, Diamond), we performed five-fold cross-validation: each dataset was divided into five subsets, with four subsets used for training and one for testing per fold. For the high-dimensional Digit dataset, we followed Wang et al. (2024), performing 20 independent experimental runs where 50% of samples were randomly selected for training and the remainder for testing in each run. |
| Hardware Specification | No | The paper discusses computational complexity of the algorithms but does not provide specific hardware details (e.g., CPU/GPU models, memory, or processing units) used for running the experiments. |
| Software Dependencies | Yes | The problem (20) is convex and can be transformed into a semidefinite program (SDP):...which can be efficiently handled using, for example, CVX Grant & Boyd (2014). (Referencing: Michael Grant and Stephen Boyd. CVX: MATLAB software for disciplined convex programming, version 2.1, 2014.) |
| Experiment Setup | Yes | PCA preprocessing was applied to all datasets following Wang et al. (2024); Su et al. (2015), preserving 98% of the variance. For MM4MM (Sparse), we performed a grid search over λ ∈ {0.001, …, 0.1, …, 0.2, …, 1.0} and, for each value, computed the objective function according to the optimal solution formulation. We then selected the λ that yielded the highest objective. Algorithms 1, 2, and 3 specify a convergence threshold ϵ = 10⁻⁵. |
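The reported protocol (PCA retaining 98% of the variance, followed by five-fold cross-validation on the non-Digit datasets) can be sketched as below. This is a minimal illustration with numpy only; the function names and the toy data are mine, not the paper's, and the paper itself uses MATLAB/CVX rather than Python.

```python
import numpy as np

def pca_keep_variance(X, var_ratio=0.98):
    """Project X onto the fewest principal components whose cumulative
    variance is at least var_ratio (98% in the reported setup)."""
    Xc = X - X.mean(axis=0)
    # SVD of centered data; squared singular values are component variances
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, var_ratio) + 1)  # first k reaching the ratio
    return Xc @ Vt[:k].T

def five_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into 5 folds, as in the
    five-fold cross-validation described above."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), 5)

# Toy usage: 30 samples, 10 features (stand-in for a UCI dataset)
X = np.random.default_rng(1).normal(size=(30, 10))
Z = pca_keep_variance(X, 0.98)
for i, test_idx in enumerate(five_fold_indices(len(X))):
    folds = five_fold_indices(len(X))
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train the classifier on Z[train_idx], evaluate on Z[test_idx]
```

Each fold uses four subsets for training and one for testing, matching the split reported in the Dataset Splits row; the 20-run 50/50 random split for the Digit dataset would replace `five_fold_indices` with a per-run random permutation.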