Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach

Authors: Giovanni Luca Marchetti, Gabriele Cesa, Kumar Pratik, Arash Behboodi

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement and empirically compare our model with the classical LLL algorithm for lattice reduction. We show that we can achieve comparable results in terms of the complexity-performance trade-off. An intriguing insight from the result is that, hypothetically, without the knowledge of LLL, one can employ self-supervised learning to discover a lattice reduction algorithm.
Researcher Affiliation | Collaboration | Giovanni Luca Marchetti, Royal Institute of Technology (KTH), Stockholm; Gabriele Cesa, Kumar Pratik, Arash Behboodi, Qualcomm AI Research, Amsterdam
Pseudocode | Yes | Algorithm 1: LLL Algorithm. Require: basis of an (integral) lattice B ∈ GL_n(R) ∩ Z^(n×n). Ensure: Siegel-reduced basis B.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the methodology described, nor does it include a link to a code repository.
Open Datasets | No | For data generation in our experiments, we used a 5G-compliant physical downlink shared channel (PDSCH) simulator implemented with MATLAB's 5G Toolbox (Inc., 2022b;a). We used the standard wireless channel models to simulate the communication pipeline. For the sake of simplicity, we assume perfect channel estimation at the receiver. To get the perfect channel estimate, we used MATLAB's nrPerfectChannelEstimate function. We simulated a tapped delay line (TDL) channel model with the TDLA30 delay profile and a Doppler shift of 100 Hz.
Dataset Splits | Yes | For evaluation, we sample once 4000 lattices from each distribution and use them to evaluate both our method and LLL. For each experimental setting, we use a fixed training set of 20000 grids and evaluate the methods on a fixed set of 10000 grids.
Hardware Specification | No | The paper mentions software tools like MATLAB's 5G Toolbox and channel models, but does not specify any particular hardware (CPU, GPU models, etc.) used for running the experiments.
Software Dependencies | Yes | For data generation in our experiments, we used a 5G-compliant physical downlink shared channel (PDSCH) simulator implemented with MATLAB's 5G Toolbox (Inc., 2022b;a). The MathWorks Inc. MATLAB version 9.13.0 (R2022b), 2022b.
Experiment Setup | Yes | Additionally, to facilitate optimization, we find it useful to increase the number of steps k gradually up to k = 2n during the training iterations. The number of extended Gauss moves is set to k = 2n in the single-lattice experiments and k = n + 1 in the joint lattice reduction ones. Training is performed via the Adam optimizer (Kingma & Ba, 2014).
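The Pseudocode row above refers to the classical LLL algorithm that the paper's learned model is compared against. The following is a minimal sketch of textbook LLL in Python, not the paper's implementation: it uses exact rational arithmetic via `fractions.Fraction`, the standard Lovász parameter δ = 3/4, and naively recomputes Gram-Schmidt after each basis update for clarity rather than efficiency. The function names (`gram_schmidt`, `lll_reduce`) are illustrative.

```python
from fractions import Fraction

def gram_schmidt(B):
    # Exact Gram-Schmidt orthogonalization of the rows of B,
    # returning the orthogonal vectors B* and the mu coefficients.
    n = len(B)
    Bs = [[Fraction(x) for x in row] for row in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            denom = sum(x * x for x in Bs[j])
            mu[i][j] = sum(Fraction(B[i][k]) * Bs[j][k] for k in range(n)) / denom
            Bs[i] = [Bs[i][k] - mu[i][j] * Bs[j][k] for k in range(n)]
    return Bs, mu

def lll_reduce(B, delta=Fraction(3, 4)):
    # Classical LLL: size-reduce each vector, then swap adjacent
    # vectors whenever the Lovász condition fails.
    B = [list(row) for row in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size reduction: make |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [B[k][i] - q * B[j][i] for i in range(n)]
                Bs, mu = gram_schmidt(B)  # recompute for simplicity
        # Lovász condition on the Gram-Schmidt norms.
        sq = lambda v: sum(x * x for x in v)
        if sq(Bs[k]) >= (delta - mu[k][k - 1] ** 2) * sq(Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

Because all arithmetic is exact, the reduced basis spans the same lattice as the input (the determinant is preserved up to sign); production implementations instead use floating-point Gram-Schmidt with careful error control.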