The Geometry of Categorical and Hierarchical Concepts in Large Language Models
Authors: Kiho Park, Yo Joong Choe, Yibo Jiang, Victor Veitch
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate these theoretical results on the Gemma and LLaMA-3 large language models, estimating representations for 900+ hierarchically related concepts using data from WordNet. |
| Researcher Affiliation | Academia | Kiho Park, Yo Joong Choe, Yibo Jiang, and Victor Veitch University of Chicago |
| Pseudocode | No | The paper describes methods and proofs using mathematical notation, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at github.com/KihoPark/LLM_Categorical_Hierarchical_Representations. |
| Open Datasets | Yes | To that end, we extract concepts from the WordNet hierarchy (Miller, 1995), estimate their representations, and show that the geometric structure of the representations aligns with the semantic hierarchy of WordNet. |
| Dataset Splits | Yes | To evaluate this, for each synset w in WordNet we split Y(w) into train words (70%) and test words (30%), fit the LDA estimator to the train words, and examine the projection of the unembedding vectors for the test and random words onto the estimated vector representation. |
| Hardware Specification | No | The paper mentions using 'Gemma-2B model' and 'LLaMA-3-8B model' but does not provide specific details about the hardware (e.g., GPU, CPU models) used for running the experiments. |
| Software Dependencies | Yes | We employ the Gemma-2B version of the Gemma model (Mesnard et al., 2024), which is accessible online via the huggingface library. Its two billion parameters are pre-trained on three trillion tokens. This model utilizes 256K tokens and 2,048 dimensions for the representation space. |
| Experiment Setup | Yes | The results in this paper rely on transforming the representation spaces so that the Euclidean inner product is a causal inner product, aligning the embedding and unembedding representations. Following Park et al. (2024), we estimate the required transformation as g(y) = Cov(γ)^(−1/2) (γ(y) − E[γ]). ... Formally, we estimate the vector representation of a binary feature W for an attribute w as ℓ̄_W = (g̃_w^⊤ E(g_w)) g̃_w, with g̃_w = Cov(g_w)^† E(g_w) / ‖Cov(g_w)^† E(g_w)‖_2, where g_w is the unembedding vector of a word sampled uniformly from Y(w) and Cov(g_w)^† is the pseudo-inverse of the covariance matrix. We estimate the covariance matrix Cov(g_w) using the Ledoit-Wolf shrinkage estimator (Ledoit & Wolf, 2004), because the dimension of the representation space is much higher than the number of samples. |
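The causal-inner-product transformation quoted in the Experiment Setup row can be sketched in a few lines. This is a minimal NumPy/scikit-learn illustration, not the authors' released code: it whitens a matrix of unembedding vectors using the Ledoit-Wolf shrinkage covariance, as the excerpt describes. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def causal_transform(gamma):
    """Whiten unembedding vectors: g(y) = Cov(gamma)^(-1/2) (gamma(y) - E[gamma]).

    gamma: (n_words, d) matrix of unembedding vectors.
    Returns a (n_words, d) matrix in which the Euclidean inner product
    plays the role of the causal inner product.
    """
    mean = gamma.mean(axis=0)
    # Shrinkage covariance estimate: d may exceed the number of sampled words.
    cov = LedoitWolf().fit(gamma).covariance_
    # Inverse matrix square root via eigendecomposition (cov is symmetric PSD,
    # and shrinkage keeps its eigenvalues strictly positive).
    eigvals, eigvecs = np.linalg.eigh(cov)
    inv_sqrt = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    return (gamma - mean) @ inv_sqrt

# Synthetic stand-in for an unembedding matrix (real ones are ~256K x 2,048).
rng = np.random.default_rng(0)
gamma = rng.normal(size=(500, 8))
g = causal_transform(gamma)
# After whitening, g is centered and its sample covariance is close to
# the identity (up to shrinkage bias).
```

With many more samples than dimensions the shrinkage term is small, so the transformed vectors are nearly exactly whitened; in the paper's regime (few words per concept, 2,048 dimensions) the shrinkage is what keeps the covariance invertible.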
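The LDA estimator and the 70/30 evaluation protocol from the Dataset Splits row can likewise be sketched. This is a hedged toy reconstruction on synthetic vectors, not the paper's pipeline: `lda_direction` follows the quoted formula ℓ̄_W = (g̃_w^⊤ E(g_w)) g̃_w with g̃_w the normalized Cov(g_w)^† E(g_w), and the split/projection step mirrors the described evaluation. All names and the synthetic data are illustrative.

```python
import numpy as np

def lda_direction(G):
    """LDA-style estimate of a binary feature direction.

    G: (n_words, d) unembedding vectors for words expressing attribute w,
    assumed already in causal-inner-product coordinates.
    Returns ell = (g_tilde . mean(G)) * g_tilde, where
    g_tilde = pinv(Cov(G)) @ mean(G), normalized to unit length.
    """
    mean = G.mean(axis=0)
    direction = np.linalg.pinv(np.cov(G.T)) @ mean
    direction /= np.linalg.norm(direction)
    return (mean @ direction) * direction

# Hypothetical 70/30 split mirroring the paper's evaluation protocol.
rng = np.random.default_rng(1)
words = rng.normal(loc=1.0, size=(100, 16))   # stand-in for Y(w) vectors
idx = rng.permutation(len(words))
train, test = words[idx[:70]], words[idx[70:]]

ell = lda_direction(train)
# Projections onto the estimated representation (scaled so that vectors
# matching the concept project near 1).
proj_test = test @ ell / np.dot(ell, ell)
proj_rand = rng.normal(size=(100, 16)) @ ell / np.dot(ell, ell)
```

Held-out words drawn from the same concept project near 1 onto the estimated vector, while random words project near 0, which is the qualitative pattern the paper's split-based evaluation checks.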