Position: Humanity Faces Existential Risk from Gradual Disempowerment

Authors: Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, David Duvenaud

ICML 2025

Reproducibility Assessment

Research Type: Theoretical
This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of gradual disempowerment, in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. This position paper argues that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity.

Researcher Affiliation: Collaboration
1 ACS research group, CTS, Charles University; 2 Telic Research; 3 Advanced Research + Invention Agency (ARIA); 4 AI Objectives Institute; 5 Metaculus; 6 Mila, University of Montreal; 7 University of Toronto. The affiliations include universities (Charles University, University of Montreal, University of Toronto), research institutes (Telic Research, AI Objectives Institute), a government agency (ARIA), and a company (Metaculus), indicating a collaboration.

Pseudocode: No
The paper presents conceptual ideas and arguments about AI risks and societal dynamics. It does not include any structured pseudocode or algorithm blocks.

Open Source Code: No
As a position paper discussing theoretical risks, it presents no methodology that would involve source code. There is no mention of a code release or links to repositories.

Open Datasets: No
The paper is a theoretical position paper that discusses concepts and the societal implications of AI. It does not involve experimental data or mention any datasets, public or otherwise.

Dataset Splits: No
The paper does not describe any experimental methodology involving datasets, so no dataset splits are provided.

Hardware Specification: No
The paper describes no experiments or computational work that would require specific hardware. No hardware specifications are mentioned.

Software Dependencies: No
The paper describes no computational implementation, so no software dependencies with version numbers are mentioned.

Experiment Setup: No
The paper describes no experimental procedures or model training, so no setup details such as hyperparameters or training configurations are provided.