Laplace-HDC: Understanding the Geometry of Binary Hyperdimensional Computing

Authors: Saeid Pourmand, Wyatt D. Whiting, Alireza Aghasi, Nicholas F. Marshall

JAIR 2025

Reproducibility Variable | Result | LLM Response
Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "Several numerical experiments highlighting the improved accuracy of Laplace-HDC in contrast to alternative methods are presented. We also numerically study other aspects of the proposed framework, such as robustness and the underlying translation-equivariant encoding. [...] 2. Experiments: We present several numerical experiments on the binary HDC schemes described in this paper and their limitations and extensions."
Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: "Saeid Pourmand (EMAIL), School of Electrical Engineering and Computer Science, Oregon State University; Wyatt D. Whiting (EMAIL, corresponding author), Department of Mathematics, Oregon State University; Alireza Aghasi (EMAIL), School of Electrical Engineering and Computer Science, Oregon State University; Nicholas F. Marshall (EMAIL), Department of Mathematics, Oregon State University"
Reproducibility Variable: Pseudocode
Result: Yes
LLM Response: "More precisely, given a desired covariance structure K ∈ R^{m×m}, Yu et al. (2022) use the following algorithm to construct a matrix V ∈ {−1, +1}^{N×m} of m hypervectors.
Require: similarity matrix K ∈ R^{m×m}, hyperdimension N
  W ← sin((π/2) K) (where the function sin is applied entrywise)
  USU^T ← W (eigendecomposition)
  Generate G ∈ R^{N×m} with i.i.d. standard Gaussian entries
  V ← sign(G S_+^{1/2} U^T) (with sign applied entrywise; S_+ sets negative entries of S to 0)
  return V ∈ {−1, +1}^{N×m}"
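The quoted construction can be sketched in NumPy as follows. The function name and the tie-breaking for exact zeros are our own; the authors' actual implementation is in the repository cited under Open Source Code.

```python
import numpy as np

def correlated_hypervectors(K, N, rng=None):
    """Sample N-dimensional {-1,+1} hypervectors (columns of V) whose
    empirical correlation approximates the target similarity matrix K.

    Follows the quoted algorithm (Yu et al., 2022): W = sin((pi/2) K),
    eigendecompose W = U S U^T, clip negative eigenvalues to 0, then
    take entrywise signs of Gaussian projections.
    """
    rng = np.random.default_rng(rng)
    m = K.shape[0]
    W = np.sin((np.pi / 2) * K)          # sine applied entrywise
    S, U = np.linalg.eigh(W)             # W = U diag(S) U^T
    S_plus = np.clip(S, 0.0, None)       # S_+ sets negative entries to 0
    G = rng.standard_normal((N, m))      # i.i.d. standard Gaussian entries
    V = np.sign((G * np.sqrt(S_plus)) @ U.T)  # sign(G S_+^{1/2} U^T)
    V[V == 0] = 1                        # break (measure-zero) ties
    return V
```

By the arcsine law, sign(z_j) and sign(z_k) of a Gaussian vector with covariance W have correlation (2/π) arcsin(W_jk), so the entrywise sine in W cancels it and the columns of V inherit (approximately) the correlations in K.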
Reproducibility Variable: Open Source Code
Result: Yes
LLM Response: "Code for all presented methods is publicly available at: https://github.com/HDStat/Laplace-HDC"
Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: "In this section, we run some experiments to highlight the effect of having uncorrelated or loosely correlated features when using an HDC framework. We note that, in some cases, large datasets may have this property without needing a decorrelation process such as SVD. To evaluate the effects of the SVD transformation, we applied the method to the MNIST and Fashion MNIST data and used a total of 8 variations of permutation schemes and classifier pairs."
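As a concrete reading of the decorrelation step mentioned in the quote, features can be rotated into the right singular basis of the centered data, after which their sample covariance is diagonal. This is a generic PCA-style sketch, not necessarily the exact transform used in the paper:

```python
import numpy as np

def svd_decorrelate(X):
    """Rotate features into the right singular basis of the centered data.

    X : (n, d) data matrix. Returns Z = Xc @ Vt.T, whose columns are
    uncorrelated (sample covariance is diagonal), since Xc = U S Vt
    implies Z = U S.
    """
    Xc = X - X.mean(axis=0)                            # center features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # thin SVD
    return Xc @ Vt.T
```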
Reproducibility Variable: Dataset Splits
Result: Yes
LLM Response: "In each case, we use the largest possible hyperdimensionality N ≤ 10^4. For example, the 1D-Block family of permutations requires that N = dM for some positive integer M. The image data we consider has dimension d = 28^2, so we choose M = ⌊10^4/28^2⌋ = 12, which results in N = 9408. When setting the bandwidth parameter λ > 0, we use (15) with c = 1, except for binary SGD where c = 4 provides better performance, and we estimate the median of the ℓ1-distances of the data using 1000 samples selected uniformly at random from X. The accuracy for each is presented as the mean ± one standard deviation in all cases; see Table 1."
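The arithmetic behind the quoted choice of hyperdimension, together with one plausible way to estimate the median ℓ1-distance (pairing random samples is our own reading of the procedure; the exact form of equation (15), which maps this median and the constant c to λ, is not quoted here):

```python
import numpy as np

# Hyperdimension for the 1D-Block permutations: largest N = d*M <= 10^4.
d = 28 ** 2       # MNIST / Fashion MNIST images are 28x28, flattened
M = 10 ** 4 // d  # largest integer M with d*M <= 10^4
N = d * M         # resulting hyperdimension

def median_l1_distance(X, n_samples=1000, rng=None):
    """Estimate the median pairwise l1-distance from random sample pairs.

    The quote estimates the median of l1-distances using 1000 samples
    drawn uniformly at random from X; this median then sets the
    bandwidth lambda via the paper's equation (15).
    """
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(X), size=n_samples)
    j = rng.integers(0, len(X), size=n_samples)
    dists = np.abs(X[i] - X[j]).sum(axis=1)  # l1 distance per pair
    return np.median(dists)
```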
Reproducibility Variable: Hardware Specification
Result: No
LLM Response: The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Reproducibility Variable: Software Dependencies
Result: No
LLM Response: The paper mentions using Adam (Kingma & Ba, 2017) as an optimizer and stochastic gradient descent, but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Reproducibility Variable: Experiment Setup
Result: Yes
LLM Response: "The Float SGD classifier determines the class representatives ψ_i by optimizing a cross-entropy loss function using stochastic gradient descent; more precisely, we use Adam (Kingma & Ba, 2017) with a learning rate parameter α = 0.01, where the model takes ψ_x and outputs one of the c classes. We perform 3 epochs (training passes over the data in X) in all experiments. [...] When setting the bandwidth parameter λ > 0, we use (15) with c = 1, except for binary SGD where c = 4 provides better performance, and we estimate the median of the ℓ1-distances of the data using 1000 samples selected uniformly at random from X."
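A self-contained NumPy sketch of the quoted Float SGD training loop. The function name, batch size, zero initialization, and synthetic interface are assumptions; only Adam, α = 0.01, cross-entropy, and 3 epochs come from the quote:

```python
import numpy as np

def train_float_sgd(Psi, y, c, lr=0.01, epochs=3, batch=64, rng=0):
    """Train class representatives by minimizing cross-entropy with Adam.

    Psi : (n, N) array of encoded hypervectors psi_x.
    y   : (n,) integer class labels in {0, ..., c-1}.
    Returns W of shape (N, c); logits are Psi @ W, and the predicted
    class is the argmax over the c columns.
    """
    rng = np.random.default_rng(rng)
    n, N = Psi.shape
    W = np.zeros((N, c))
    m, v = np.zeros_like(W), np.zeros_like(W)       # Adam moment estimates
    b1, b2, eps, t = 0.9, 0.999, 1e-8, 0
    for _ in range(epochs):                          # 3 passes over the data
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            logits = Psi[idx] @ W
            logits -= logits.max(axis=1, keepdims=True)   # stable softmax
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(idx)), y[idx]] -= 1.0         # dL/dlogits = p - onehot
            g = Psi[idx].T @ p / len(idx)                 # mini-batch gradient
            t += 1
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            W -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return W
```

On linearly separable encodings, three epochs of this loop are already enough to place each class representative on the correct side of the decision boundary, which is consistent with the short training budget quoted above.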