Improving BDD enumeration for LWE problem using GPU

Esseissah, Mohamed S.; Bhery, Ashraf; Bahig, Hatem M.

Abstract


In this paper, we present a GPU-based parallel algorithm for the Learning With Errors (LWE) problem using a lattice-based Bounded Distance Decoding (BDD) approach. To the best of our knowledge, this is the first GPU-based implementation for the LWE problem. Compared to the sequential BDD implementations of the Lindner-Peikert and pruned-enumeration strategies by Kirshanova [1], our GPU-based implementation is faster by factors of almost 6 and 9, respectively, on an NVIDIA GeForce GTX 1060 6GB. We also provide a parallel implementation using two GPUs; the results show that our algorithm is scalable and faster than the sequential versions (Lindner-Peikert and pruned enumeration) by factors of almost 13 and 16, respectively. Moreover, our two-GPU implementation is more efficient than Kirshanova et al.'s parallel implementation on 20 CPU cores.
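
The abstract frames LWE key recovery as Bounded Distance Decoding: the target vector b = A·s + e (mod q) lies unusually close to the lattice point A·s (mod q), and enumeration finds that closest point, revealing the secret s. The following minimal Python sketch, with hypothetical toy parameters (n = 2, q = 11), illustrates only this BDD formulation; it brute-forces candidate secrets rather than running the Lindner-Peikert NearestPlanes or pruned enumeration that the paper actually parallelizes on GPU.

    # Toy BDD view of LWE (illustration only, not the paper's algorithm).
    # Parameters are hypothetical and far below cryptographic size.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, q = 2, 6, 11                   # secret dim, samples, modulus

    A = rng.integers(0, q, size=(m, n))  # public matrix
    s = rng.integers(0, q, size=n)       # secret vector
    e = rng.integers(-1, 2, size=m)      # small error in {-1, 0, 1}
    b = (A @ s + e) % q                  # LWE sample: b = A s + e (mod q)

    def centered_dist(v):
        """Squared norm of v with coordinates centered mod q."""
        r = v % q
        r = np.where(r > q // 2, r - q, r)
        return int(r @ r)

    # BDD by exhaustive enumeration over all q^n candidate secrets:
    # the lattice point A s' (mod q) closest to b reveals s.
    best = min((centered_dist(b - A @ np.array(c)), c)
               for c in np.ndindex(*(q,) * n))
    print("recovered s =", np.array(best[1]), " true s =", s)

At cryptographic dimensions this q^n candidate space is astronomically large, which is why practical BDD attacks instead combine basis reduction (LLL/BKZ) with nearest-plane enumeration, and why parallelizing that enumeration on GPUs, as the paper does, pays off.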


Other data

Title: Improving BDD enumeration for LWE problem using GPU
Authors: Esseissah, Mohamed S.; Bhery, Ashraf; Bahig, Hatem M.
Keywords: bounded distance decoding; closest vector problem; cryptanalysis; GPU; lattice-based cryptography; learning with errors; LLL algorithm; shortest vector problem
Issue Date: 1-Jan-2020
Publisher: IEEE (Institute of Electrical and Electronics Engineers Inc.)
Journal: IEEE Access
Volume: 8
Pages: 19737-19749
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2961091
Scopus ID: 2-s2.0-85079822961
Web of Science ID: WOS:000525389200006

Citations: 4 (Scopus)


Items in Ain Shams Scholar are protected by copyright, with all rights reserved, unless otherwise indicated.