TT-TFHE: a Torus Fully Homomorphic Encryption-Friendly Neural Network Architecture

Authors: Adrien Benamira, Tristan Guérand, Thomas Peyrin, Sayandeep Saha

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluation shows that TT-TFHE greatly outperforms in terms of time and accuracy all Homomorphic Encryption (HE) set-ups on three tabular datasets, all other features being equal. On image datasets such as MNIST and CIFAR-10, we show that TT-TFHE consistently and largely outperforms other TFHE set-ups and is competitive against other HE variants such as BFV or CKKS...
Researcher Affiliation | Academia | Adrien Benamira (EMAIL), Nanyang Technological University, Singapore; Tristan Guérand (EMAIL), Nanyang Technological University, Singapore; Thomas Peyrin (EMAIL), Nanyang Technological University, Singapore; Sayandeep Saha (EMAIL), IIT Bombay, India
Pseudocode | No | The paper describes methods and architectures verbally and with diagrams (Figure 1) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that the Concrete library, which TT-TFHE utilizes, is open-source and provides a link to its documentation (https://docs.zama.ai/concrete/2.10). It also states that 'The code used to obtain this table is available on Zama website', referring to specific data in Table 9. However, it does not explicitly state that the authors' specific implementation of the TT-TFHE framework or its design toolbox is open-source, nor does it provide a direct link to their own implementation code.
Open Datasets | Yes | The paper uses well-known public datasets like MNIST, CIFAR-10, and ImageNet (Krizhevsky et al., 2017). Additionally, it provides specific URLs for the tabular datasets: Adult (https://archive.ics.uci.edu/ml/datasets/Adult), Cancer (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)), and Diabetes (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008).
Dataset Splits | Yes | All datasets were split five times with an 80-20 train-test split for k-fold testing.
Hardware Specification | Yes | Our workstation consists of 4 Nvidia GeForce 3090 GPUs (only for training) with 24576 MiB memory and an eight-core Intel(R) Core(TM) i7-8650U CPU clocked at 1.90 GHz, with 16 GB RAM. For all experiments, CPU Turbo Boost is deactivated and the processes were limited to using four cores.
Software Dependencies | Yes | The project implementation was done in Python, with the PyTorch library (Paszke et al., 2019) for training, NumPy for testing in the clear, and the Concrete library v2.10.02 for FHE inference.
Experiment Setup | Yes | To improve the accuracy of our model, we took several steps to optimize the training process. First, we removed the use of PGD attacks during training... Next, we employed the DoReFa-Net method from Zhou et al. (2016) for CIFAR-10... Finally, to overcome the limitations of the TTnet grouping method, we extended the training to 500 epochs... All our models use a table lookup bitwidth of n = 5, except for Diabetes, where we use n = 6.
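The evaluation protocol reported under "Dataset Splits" (each dataset split five times with an 80-20 train-test split) can be sketched as below. This is a minimal illustration of repeated random hold-out splitting, not the authors' code; the function name, seeding, and use of NumPy are assumptions.

```python
import numpy as np

def repeated_holdout_splits(n_samples, n_repeats=5, test_fraction=0.2, seed=0):
    """Generate repeated random train/test index splits.

    Mirrors the paper's protocol of splitting each dataset five times
    into 80% train / 20% test. The RNG seeding is an illustrative
    assumption for reproducibility of this sketch.
    """
    rng = np.random.default_rng(seed)
    n_test = int(round(n_samples * test_fraction))
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)
        test_idx, train_idx = perm[:n_test], perm[n_test:]
        splits.append((train_idx, test_idx))
    return splits

splits = repeated_holdout_splits(100)
print(len(splits))                           # 5 repeats
print(len(splits[0][0]), len(splits[0][1]))  # 80 train / 20 test
```

Reported accuracies would then be averaged over the five resulting test sets.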
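The "Experiment Setup" row mentions a table lookup bitwidth of n = 5 (n = 6 for Diabetes): an n-bit lookup table has 2^n entries indexed by a group of n input bits. The snippet below is a plain-Python sketch of that indexing on cleartext bits only; the function name and the random table are illustrative assumptions, and the actual TT-TFHE pipeline evaluates such tables homomorphically via TFHE's Concrete library.

```python
def apply_lut(bits, lut):
    """Evaluate an n-bit truth-table lookup on a group of input bits.

    The n bits are packed big-endian into an integer index into a
    table of 2**n entries (here cleartext; TFHE would evaluate the
    same table under encryption).
    """
    idx = 0
    for b in bits:
        idx = (idx << 1) | int(b)
    return lut[idx]

n = 5
identity_lut = list(range(2 ** n))   # illustrative table: returns the index itself
out = apply_lut([1, 0, 1, 1, 0], identity_lut)
print(out)  # 0b10110 = 22
```

With n = 6 (as used for Diabetes), the table simply grows to 2^6 = 64 entries; the indexing logic is unchanged.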