
Learning with Errors over Rings

  • Oded Regev
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6197)

Abstract

The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications.
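To make the distinguishing task concrete, here is a minimal sketch of how LWE samples are generated, assuming toy parameters n = 8, q = 97 and small uniform noise in {−2, …, 2}; an actual instantiation would use a discrete Gaussian error distribution and cryptographically sized parameters.

```python
import random

n, q = 8, 97                                  # toy dimension and modulus
s = [random.randrange(q) for _ in range(n)]   # secret vector in Z_q^n

def lwe_sample():
    """One noisy linear equation (a, b) with b = <a, s> + e mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                 # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def uniform_sample():
    """A truly uniform pair (a, b), with b independent of a."""
    a = [random.randrange(q) for _ in range(n)]
    return a, random.randrange(q)
```

The decision version of LWE asks a distinguisher, given many pairs (a, b), to tell which of these two generators produced them.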

Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. After a short introduction to the area, we will discuss recent work on making LWE and its applications truly efficient by exploiting extra algebraic structure. Namely, we will define the ring-LWE problem, and prove that it too enjoys very strong hardness guarantees.
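As a rough illustration of the ring variant, the sketch below generates ring-LWE samples over R_q = Z_q[x]/(x^n + 1), again with assumed toy parameters n = 8 (a power of two) and q = 97 and small uniform noise rather than the Gaussian error used in the actual hardness proofs.

```python
import random

n, q = 8, 97

def poly_mul(f, g):
    """Multiply two polynomials of degree < n modulo x^n + 1 and q."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                res[k] = (res[k] + fi * gj) % q
            else:                              # x^n = -1 wraps with a sign flip
                res[k - n] = (res[k - n] - fi * gj) % q
    return res

s = [random.randrange(q) for _ in range(n)]    # secret ring element

def ring_lwe_sample():
    """One sample (a, b) with b = a*s + e in R_q, e a small error polynomial."""
    a = [random.randrange(q) for _ in range(n)]
    e = [random.randint(-2, 2) % q for _ in range(n)]
    b = [(c + ei) % q for c, ei in zip(poly_mul(a, s), e)]
    return a, b
```

Each ring sample packs n pseudorandom Z_q values into a single uniform ring element a, which is the source of the efficiency gain over plain LWE.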

Based on joint work with Vadim Lyubashevsky and Chris Peikert.

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Oded Regev
    Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
