Introduction
Lattice theory sits at the intersection of linear algebra, number theory, and optimization. If you're starting out in cryptography, computational algebra, or integer optimization, you'll encounter these algorithms repeatedly. This guide breaks down 20 essential lattice algorithms with bite-sized examples that you can verify by hand.
What's a lattice anyway? Think of it as all integer linear combinations of some basis vectors. For example, with basis vectors b₁ = (1, 0) and b₂ = (0, 1), the lattice is simply ℤ² - all integer coordinate points.
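To make the definition concrete, here's a tiny Python sketch (the basis and the coefficient bound are arbitrary illustrative choices) that lists integer combinations of two basis vectors:

```python
# All integer combinations x1*b1 + x2*b2 with small coefficients.
# The basis and the coefficient bound are arbitrary illustrative choices.

def lattice_points(b1, b2, bound=1):
    pts = set()
    for x1 in range(-bound, bound + 1):
        for x2 in range(-bound, bound + 1):
            pts.add((x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1]))
    return pts

# With the standard basis we recover a 3x3 patch of integer grid points.
print(sorted(lattice_points((1, 0), (0, 1))))
```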
Part I: Foundation - Orthogonalization
1. Gram-Schmidt Algorithm for Lattice Basis
What it does: Converts your basis into an orthogonal one (perpendicular vectors).
Example:
Input basis:
b₁ = (3, 1)
b₂ = (2, 2)
Step 1: b₁* = b₁ = (3, 1)
Step 2: Calculate projection coefficient
μ₂,₁ = ⟨b₂, b₁*⟩ / ⟨b₁*, b₁*⟩ = (6 + 2) / (9 + 1) = 8/10 = 0.8
Step 3: b₂* = b₂ - μ₂,₁ · b₁*
= (2, 2) - 0.8(3, 1)
= (2, 2) - (2.4, 0.8)
= (-0.4, 1.2)
Output: Orthogonal basis {(3, 1), (-0.4, 1.2)}
Note: The Gram-Schmidt vectors b* generally do not lie in the lattice, so they generate a different lattice from the original basis; we use b* purely for analysis (for example, in LLL's exchange condition).
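The computation above can be sketched in a few lines of Python. This is a minimal float-based version for checking small examples by hand; serious lattice code keeps the μ coefficients exact (e.g., with rationals):

```python
# Minimal float-based Gram-Schmidt for checking small examples by hand.

def gram_schmidt(basis):
    """Return (orthogonalized vectors b*, projection coefficients mu)."""
    n = len(basis)
    bstar = [list(map(float, b)) for b in basis]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = (sum(x * y for x, y in zip(basis[i], bstar[j]))
                        / sum(x * x for x in bstar[j]))
            bstar[i] = [x - mu[i][j] * y for x, y in zip(bstar[i], bstar[j])]
    return bstar, mu

bstar, mu = gram_schmidt([(3, 1), (2, 2)])
print(mu[1][0])    # 0.8, matching the worked example
print(bstar[1])    # approximately (-0.4, 1.2)
```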
Part II: Classical Reduction
2. Lagrange Basis Reduction Algorithm
What it does: Finds the two shortest independent vectors in a 2D lattice.
Example:
Input: b₁ = (5, 2), b₂ = (7, 3)
Step 1: If |b₂| < |b₁|, swap them
|b₁| = √29 ≈ 5.39, |b₂| = √58 ≈ 7.62 ✓ OK
Step 2: Reduce: b₂ ← b₂ - ⌊⟨b₂,b₁⟩/⟨b₁,b₁⟩⌉ · b₁
⟨b₂,b₁⟩/⟨b₁,b₁⟩ = (35 + 6)/29 = 41/29 ≈ 1.41 → round to 1
b₂ ← (7, 3) - 1·(5, 2) = (2, 1)
Step 3: Now |b₂| = √5 < |b₁| = √29, so swap!
b₁ = (2, 1), b₂ = (5, 2)
Step 4: Reduce again: ⟨b₂,b₁⟩/⟨b₁,b₁⟩ = 12/5 = 2.4 → round to 2
b₂ ← (5, 2) - 2·(2, 1) = (1, 0)
Step 5: Now |b₂| = 1 < |b₁| = √5, so swap again: b₁ = (1, 0), b₂ = (2, 1)
Step 6: Reduce: (2, 1) - 2·(1, 0) = (0, 1); now no swap or reduction changes anything
Final reduced basis: b₁ = (1, 0), b₂ = (0, 1)
These are the shortest possible vectors: the lattice turns out to be all of ℤ²!
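Here's a compact Python sketch of the loop just traced: swap so b₁ is the shorter vector, subtract the rounded projection, and repeat until the rounding coefficient is zero. Running it to completion on this input ends at the basis {(1, 0), (0, 1)} of ℤ².

```python
# Lagrange (Gauss) reduction in 2D: swap so b1 is shorter, subtract the
# rounded projection, repeat until the rounding coefficient is zero.
# (Python's round() rounds half-to-even, which matters only at exact .5.)

def lagrange_reduce(b1, b2):
    b1, b2 = list(b1), list(b2)
    while True:
        if sum(x * x for x in b2) < sum(x * x for x in b1):
            b1, b2 = b2, b1
        m = round(sum(x * y for x, y in zip(b2, b1))
                  / sum(x * x for x in b1))
        if m == 0:
            return b1, b2
        b2 = [x - m * y for x, y in zip(b2, b1)]

print(lagrange_reduce((5, 2), (7, 3)))   # ([1, 0], [0, 1])
```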
3-4. Size-Reduce(i, j): Partial Size Reduction
What it does: Makes vector bᵢ smaller by subtracting multiples of bⱼ.
Example:
b₁ = (1, 0)
b₂ = (17, 3)
Size-reduce b₂ with respect to b₁:
μ₂,₁ = ⟨b₂, b₁*⟩/⟨b₁*, b₁*⟩ = 17/1 = 17
b₂ ← b₂ - ⌊17⌉·b₁ = (17, 3) - 17·(1, 0) = (0, 3)
Result: b₁ = (1, 0), b₂ = (0, 3) [much better!]
The full Size-reduce version does this systematically for all vector pairs.
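A minimal sketch of one Size-Reduce(i, j) step, reproducing the example. Here bⱼ* is passed in explicitly for simplicity; the full algorithm reads μ from stored Gram-Schmidt data instead:

```python
# One Size-Reduce step: subtract round(mu) copies of b_j from b_i.

def size_reduce(bi, bj, bj_star):
    mu = (sum(x * y for x, y in zip(bi, bj_star))
          / sum(x * x for x in bj_star))
    return [x - round(mu) * y for x, y in zip(bi, bj)]

# Reproduce the example: reduce b2 = (17, 3) against b1 = (1, 0).
print(size_reduce((17, 3), (1, 0), (1, 0)))   # [0, 3]
```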
Part III: The LLL Family
5. LLL (Lenstra-Lenstra-Lovász) Algorithm
What it does: The gold standard - finds a reasonably short basis in polynomial time.
Example (2D for simplicity):
Input: b₁ = (6, 2), b₂ = (3, 5) [δ = 0.75]
Step 1: Size reduce
μ₂,₁ = ⟨b₂, b₁⟩/⟨b₁, b₁⟩ = (18 + 10)/40 = 28/40 = 0.7 → ⌊0.7⌉ = 1
b₂ ← (3, 5) - 1·(6, 2) = (-3, 3); now μ₂,₁ = -12/40 = -0.3
Step 2: Check Lovász condition
|b₂*|² ≥ (δ - μ²₂,₁)|b₁*|²
Need: |b₂*|² ≥ (0.75 - 0.09)·40 = 26.4
b₂* = b₂ - μ₂,₁·b₁* = (-3, 3) + 0.3·(6, 2) = (-1.2, 3.6)
|b₂*|² = 1.44 + 12.96 = 14.4 < 26.4 ✗ FAILS!
Step 3: Swap b₁ and b₂
New basis: b₁ = (-3, 3), b₂ = (6, 2)
Repeat until satisfied...
Real power: Works in high dimensions where hand computation is impossible!
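For concreteness, here is a deliberately naive LLL sketch in Python. It recomputes the GSO from scratch on every pass, which is slow but keeps the two steps above visible; running it on the example basis carries the computation through to a reduced basis:

```python
# Naive LLL sketch (delta = 0.75): size-reduce, test the Lovasz
# condition, swap on failure. The GSO is recomputed from scratch each
# time for clarity; practical code updates it incrementally.

def lll(basis, delta=0.75):
    basis = [list(b) for b in basis]
    n = len(basis)

    def gso():
        bstar = [list(map(float, b)) for b in basis]
        mu = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i):
                mu[i][j] = (sum(x * y for x, y in zip(basis[i], bstar[j]))
                            / sum(x * x for x in bstar[j]))
                bstar[i] = [x - mu[i][j] * y for x, y in zip(bstar[i], bstar[j])]
        return bstar, mu

    def norm2(v):
        return sum(x * x for x in v)

    k = 1
    while k < n:
        bstar, mu = gso()
        for j in range(k - 1, -1, -1):            # size-reduce b_k
            m = round(mu[k][j])
            if m != 0:
                basis[k] = [x - m * y for x, y in zip(basis[k], basis[j])]
                bstar, mu = gso()
        if norm2(bstar[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(bstar[k - 1]):
            k += 1                                 # Lovasz condition holds
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)                      # swap and step back

    return basis

print(lll([(6, 2), (3, 5)]))   # [[-3, 3], [3, 5]]
```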
6. GSOUpdate-LLL(k): Efficient GSO Updates
What it does: When you swap vectors at position k, you don't recalculate everything—just update affected parts.
Example concept:
Basis: [b₁, b₂, b₃, b₄, b₅]
If we swap b₃ and b₄:
❌ Don't: Recompute all GSO vectors b₁*, b₂*, b₃*, b₄*, b₅* from scratch
✓ Do: Only update b₃*, b₄* and the μ coefficients involving positions 3 and 4, reusing stored values (b₁*, b₂*, and even b₅* are unchanged, since the span of the first four vectors is the same)
Speedup: roughly a factor of n per swap compared with full recomputation
This is crucial for practical implementations!
7. DeepLLL: Going Deeper
What it does: Regular LLL checks position k with k-1. DeepLLL checks k with earlier positions too.
Example:
After regular LLL: [b₁, b₂, b₃]
DeepLLL checks:
- b₃ vs b₂ ✓ (LLL does this)
- b₃ vs b₁ ✓ (DeepLLL additionally checks)
If swapping b₃ with b₁ gives better reduction → do it!
Trade-off: Slower but finds shorter vectors.
8. GSOUpdate-DeepLLL(i, k): Deep Updates
Same efficient update principle as #6, but for DeepLLL's extended swaps.
9. MLLL: Modified LLL for Dependent Vectors
What it does: A variant of LLL (due to Pohst) that accepts input vectors that may be linearly dependent, discards the zero vectors that appear during reduction, and outputs a reduced basis of the lattice the inputs generate.
Example concept:
Input: b₁ = (2, 0), b₂ = (0, 2), b₃ = (2, 2) [dependent: b₃ = b₁ + b₂]
Regular LLL assumes linear independence, so it can't be run on this input directly.
MLLL reduces anyway, eliminating the dependency, and returns a two-vector basis such as {(2, 0), (0, 2)}.
Handy whenever your generators come from a construction (e.g., a kernel computation) that doesn't guarantee independence!
Part IV: Exact Shortest Vectors
10. ENUM: Enumeration of Shortest Lattice Vectors
What it does: Systematically searches for the truly shortest non-zero vector.
Example (2D):
Basis (already LLL-reduced): b₁ = (1, 0), b₂ = (1, 3)
Search for vectors shorter than R = 4:
Check:
x₁·b₁ + x₂·b₂ where |result|² < 16
x₁=0, x₂=1: (1, 3) → |v|² = 10 ✓
x₁=1, x₂=0: (1, 0) → |v|² = 1 ✓ [shortest!]
x₁=1, x₂=1: (2, 3) → |v|² = 13 ✓
x₁=-1,x₂=1: (0, 3) → |v|² = 9 ✓
...
Shortest: (1, 0) or (-1, 0) with length 1
Reality check: In dimension 100, this becomes astronomically slow!
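The search above can be mimicked with a brute-force scan over small coefficients. This is a sketch only, with an arbitrary coefficient bound; real ENUM prunes the search tree using GSO information instead of trying everything:

```python
# Brute-force stand-in for ENUM in 2D: scan all small coefficient pairs
# and return a shortest nonzero vector found below the radius R.

def enum_2d(b1, b2, R, bound=5):
    found = []
    for x1 in range(-bound, bound + 1):
        for x2 in range(-bound, bound + 1):
            v = (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1])
            if v != (0, 0) and v[0] ** 2 + v[1] ** 2 < R * R:
                found.append(v)
    return min(found, key=lambda v: v[0] ** 2 + v[1] ** 2)

print(enum_2d((1, 0), (1, 3), R=4))   # a shortest vector: (1, 0) or (-1, 0)
```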
11. BKZ (Block Korkine-Zolotarev)
What it does: Processes basis in overlapping blocks, applying SVP solving (like ENUM) to each block.
Example (blocksize = 2):
Basis: [b₁, b₂, b₃, b₄]
Round 1:
- Block [b₁, b₂]: Apply 2-dimensional SVP → improve
- Block [b₂, b₃]: Apply 2-dimensional SVP → improve
- Block [b₃, b₄]: Apply 2-dimensional SVP → improve
Round 2:
- Repeat until no improvement
Result: Much shorter than LLL, but slower
Key parameter: Blocksize β. Larger β → better quality, exponentially slower.
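As a loose illustration of the block pattern (not faithful BKZ, which reduces projected blocks), one can Lagrange-reduce each adjacent pair of basis vectors until a full tour changes nothing:

```python
# Toy "blocksize-2 tour" sketch: Lagrange-reduce each adjacent pair of
# basis vectors until a full pass changes nothing. Real BKZ reduces
# projected blocks, so this only illustrates the overlapping-block idea.

def reduce_pair(b1, b2):
    while True:
        if sum(x * x for x in b2) < sum(x * x for x in b1):
            b1, b2 = b2, b1
        m = round(sum(x * y for x, y in zip(b2, b1))
                  / sum(x * x for x in b1))
        if m == 0:
            return list(b1), list(b2)
        b2 = [x - m * y for x, y in zip(b2, b1)]

def toy_bkz2(basis):
    basis = [list(b) for b in basis]
    changed = True
    while changed:                        # repeat tours until stable
        changed = False
        for i in range(len(basis) - 1):   # overlapping adjacent blocks
            r1, r2 = reduce_pair(basis[i], basis[i + 1])
            if [r1, r2] != [basis[i], basis[i + 1]]:
                basis[i], basis[i + 1] = r1, r2
                changed = True
    return basis

print(toy_bkz2([(5, 2, 0), (7, 3, 0), (1, 1, 9)]))
```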
12. GSOupdate-BKZ(k, r): BKZ GSO Updates
Efficient update strategy when BKZ modifies a block—avoids full recomputation.
13. Slide: Slide Reduction
What it does: Alternates exact SVP reduction on disjoint blocks with dual-SVP reduction on blocks shifted by one position.
Visualization:
Basis: [b₁ b₂ b₃ b₄ b₅ b₆]
Primal step: disjoint blocks [b₁ b₂ b₃], [b₄ b₅ b₆] → SVP-reduce each
Dual step: shifted blocks such as [b₂ b₃ b₄] → dual-SVP-reduce
Alternate the two steps until nothing improves.
Property: Guarantees better quality than LLL with theoretical bounds.
Part V: Randomized & Sampling Approaches
14. SA: Random Sampling Algorithm
What it does: Randomly generates lattice vectors and keeps the shortest ones found.
Example:
Basis: b₁ = (3, 1), b₂ = (1, 4)
Trial 1: 2b₁ + 3b₂ = (9, 14) → |v| = √277
Trial 2: -b₁ + b₂ = (-2, 3) → |v| = √13
Trial 3: b₁ - 2b₂ = (1, -7) → |v| = √50
...
After 10,000 trials: shortest found = (3, 1) with |v| = √10 (here b₁ itself happens to be a shortest lattice vector)
Advantage: Sometimes faster than deterministic methods in high dimensions.
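A sketch of the idea in Python, where the coefficient range, trial count, and seed are arbitrary illustrative choices:

```python
# Random sampling sketch: draw small random integer coefficient vectors
# and keep the shortest nonzero lattice vector seen.
import random

def sample_short(basis, trials=10_000, coeff=5, seed=0):
    rng = random.Random(seed)
    best, best_norm = None, float("inf")
    dim = len(basis[0])
    for _ in range(trials):
        xs = [rng.randint(-coeff, coeff) for _ in basis]
        v = [sum(x * b[i] for x, b in zip(xs, basis)) for i in range(dim)]
        n = sum(c * c for c in v)
        if 0 < n < best_norm:
            best, best_norm = v, n
    return best, best_norm

best, n = sample_short([(3, 1), (1, 4)])
print(best, n)
```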
15. RSR: Random Sampling Reduction
Combines random sampling with reduction techniques—sample randomly, but guide the search intelligently.
16. (2ᵗ, m, k)-VSSP Solver
What it does: Solves specific variants of the Shortest Vector Problem with additional structure parameters.
Example concept:
Standard SVP: Find shortest v in lattice L
VSSP variant: Instead of searching all of L, search a structured sampling space of roughly 2ᵗ candidate vectors whose Gram-Schmidt coefficients are bounded (with m and k controlling the bounds)
Specialized algorithm exploits this extra structure!
17. GBS: Generalized Birthday Sampling
What it does: Uses the birthday paradox trick—generate two large lists and look for collisions.
Example (simplified):
Goal: Find short vector v = v₁ + v₂
Step 1: Generate List A with 1000 random short vectors
A = {a₁, a₂, ..., a₁₀₀₀}
Step 2: Generate List B with 1000 random short vectors
B = {b₁, b₂, ..., b₁₀₀₀}
Step 3: Look for collisions: aᵢ + bⱼ ≈ target
With ~1M combinations, high probability of finding good v!
Birthday paradox: about √N samples from a space of size N already yield a collision with good probability
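A toy sketch of the collision idea on a small 2D basis. List sizes, coefficient range, and seed are arbitrary choices, and a real implementation would sort or hash the lists rather than scanning all pairs:

```python
# Toy collision sketch: build two lists of random lattice vectors and
# keep the pair whose sum is the shortest nonzero lattice vector found.
import random

def birthday_short(basis, list_size=200, coeff=8, seed=1):
    rng = random.Random(seed)
    dim = len(basis[0])

    def rand_vec():
        xs = [rng.randint(-coeff, coeff) for _ in basis]
        return tuple(sum(x * b[i] for x, b in zip(xs, basis))
                     for i in range(dim))

    A = [rand_vec() for _ in range(list_size)]
    B = [rand_vec() for _ in range(list_size)]
    best, best_norm = None, float("inf")
    for a in A:                    # naive all-pairs scan for clarity
        for b in B:
            v = tuple(x + y for x, y in zip(a, b))
            n = sum(c * c for c in v)
            if 0 < n < best_norm:
                best, best_norm = v, n
    return best, best_norm

best, n = birthday_short([(3, 1), (1, 4)])
print(best, n)
```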
Part VI: Closest Vector Problems
18. Babai's Nearest Plane Algorithm
What it does: Given a target point t, find the closest lattice point.
Example:
Basis: b₁ = (1, 0), b₂ = (0, 1) [simple integer lattice]
Target: t = (2.7, 3.2)
Step 1: Express t in basis coordinates
t = 2.7·b₁ + 3.2·b₂
Step 2: Round coefficients
v = ⌊2.7⌉·b₁ + ⌊3.2⌉·b₂ = 3·b₁ + 3·b₂ = (3, 3)
Result: (3, 3) is closest lattice point to (2.7, 3.2)
Distance: √(0.3² + 0.2²) = √0.13 ≈ 0.36
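A sketch of the rounding step for a 2D basis, solving for the basis coordinates with Cramer's rule. For an orthogonal basis like the one above this coincides with nearest-plane; for skewed bases the true nearest-plane algorithm rounds against the Gram-Schmidt vectors instead:

```python
# Babai-style rounding in 2D: write t in basis coordinates, round each
# coordinate to an integer, and map back into the lattice.

def babai_round(b1, b2, t):
    det = b1[0] * b2[1] - b1[1] * b2[0]
    c1 = (t[0] * b2[1] - t[1] * b2[0]) / det   # Cramer's rule for
    c2 = (b1[0] * t[1] - b1[1] * t[0]) / det   # c1*b1 + c2*b2 = t
    k1, k2 = round(c1), round(c2)
    return (k1 * b1[0] + k2 * b2[0], k1 * b1[1] + k2 * b2[1])

print(babai_round((1, 0), (0, 1), (2.7, 3.2)))   # (3, 3)
```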
19. Babai's Nearest Plane Algorithm [8] (High-Speed Version)
What it does: Same as #18 but optimized for speed using precomputed data structures.
Speedup techniques:
- Precompute Gram-Schmidt orthogonalization
- Store projection coefficients
- Use fast rounding operations
Example effect:
Standard version: 100 operations per query
High-speed version: 10 operations per query (10× faster!)
Essential when you need millions of closest-point queries.
20. Target Vector Enumeration
What it does: List ALL lattice vectors within distance R of target t.
Example:
Basis: b₁ = (1, 0), b₂ = (0, 1)
Target: t = (0, 0)
Radius: R = 2
Enumerate all v with |v| ≤ 2:
(0, 0): |v| = 0 ✓
(±1, 0): |v| = 1 ✓
(0, ±1): |v| = 1 ✓
(±1, ±1): |v| = √2 ≈ 1.41 ✓
(±2, 0): |v| = 2 ✓
(0, ±2): |v| = 2 ✓
(±2, ±1): |v| > 2 ✗
...
Total: 13 vectors within radius 2
Use case: Counting or listing all "good enough" approximate solutions.
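The count above is easy to verify with a brute-force scan over the integer lattice:

```python
# Brute-force check: scan a box around the origin and keep every
# integer point with |v| <= R.

def vectors_within(R):
    bound = int(R) + 1
    hits = []
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            if x * x + y * y <= R * R:
                hits.append((x, y))
    return hits

print(len(vectors_within(2)))   # 13
```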
The Big Picture: When to Use What?
| Need | Algorithm | Speed | Quality |
|---|---|---|---|
| Quick & practical | LLL | Fast | Good |
| Better basis | BKZ-20 | Medium | Better |
| Best possible | ENUM | Very Slow | Optimal |
| Closest point | Babai | Fast | Approximate |
| High dimensions | Sampling methods | Medium | Varies |
The fundamental trade-off: Computation time vs. solution quality. In cryptography, even the difference between "LLL-reduced" and "BKZ-20-reduced" can be the difference between secure and broken!
Conclusion
These 20 algorithms form the computational backbone of modern lattice-based cryptography, integer programming, and computational number theory. Start with understanding LLL deeply—it's polynomial time and surprisingly powerful. Then explore BKZ when you need better quality, and sampling methods when dimensions grow too large for deterministic approaches.
Next steps:
- Implement LLL in your favorite language (2D first!)
- Read the original LLL paper (1982)
- Explore lattice-based cryptography (NTRU, Learning With Errors)
Resources:
- Galbraith's "Mathematics of Public Key Cryptography" (Chapter 17)
- Nguyen & Vallée's "The LLL Algorithm" (comprehensive survey)
- Micciancio & Goldwasser's "Complexity of Lattice Problems"
Happy lattice exploring! 🔢