Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models
Authors: Alkis Kalavasis, Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas, Manolis Zampetakis
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our work is theoretical. |
| Researcher Affiliation | Academia | All six authors (Alkis Kalavasis, Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas, Manolis Zampetakis) are affiliated with Yale University. |
| Pseudocode | No | The paper describes procedures in descriptive text and numbered steps (e.g., Section 5), but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code. The NeurIPS Paper Checklist explicitly marks 'NA' for questions related to code availability. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets. It defines a generic dataset S = {(x_i, y_i)}_{i=1}^m for its theoretical framework, but no concrete dataset is specified or made available. |
| Dataset Splits | No | The paper is theoretical and reports no empirical validation, so it describes no training/validation/test splits. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or system-level training settings. |