Position: Contextual Integrity is Inadequately Applied to Language Models
Authors: Yan Shvartzshnaider, Vasisht Duddu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This position paper argues that the existing literature inadequately applies CI to LLMs without embracing the theory's fundamental tenets. |
| Researcher Affiliation | Academia | ¹York University, ²University of Waterloo. Correspondence to: Yan Shvartzshnaider <EMAIL>, Vasisht Duddu <EMAIL>. |
| Pseudocode | No | The paper describes theoretical concepts and critiques methodologies, but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | As a position paper, it does not describe a methodology for which open-source code would typically be provided or referenced. |
| Open Datasets | No | The paper critiques existing research on Contextual Integrity for LLMs and does not present new experimental results requiring a dataset. |
| Dataset Splits | No | Since the paper presents no new experimental results, there is no mention of splits for training, validation, or testing. |
| Hardware Specification | No | The paper focuses on theoretical arguments and critiques of existing literature; it does not describe experimental procedures that would require specific hardware specifications. |
| Software Dependencies | No | The paper is a theoretical position paper and does not detail any experimental implementation requiring specific software dependencies or their version numbers. |
| Experiment Setup | No | The paper presents a theoretical position and a critique of existing work rather than new experiments, so no setup details such as hyperparameters are provided. |