Approaches for SVP (Shortest Vector Problem) and CVP (Closest Vector Problem) in small lattices.
SVP (Shortest Vector Problem)
Find the shortest non-zero vector in the lattice
Steps for small lattices:
1. Lattice Reduction (LLL or BKZ)
- Apply LLL algorithm to get a reduced basis
- The first basis vector often approximates the shortest vector
2. Enumeration Method
- Set a search radius R (start with ||b₁||)
- Enumerate all lattice points within sphere of radius R
- Find the shortest non-zero vector
- Use pruned enumeration for efficiency
3. Babai's Nearest Plane Algorithm (approximation)
- Simple rounding approach
- Good for quick approximations
Example (2D mini lattice):
Basis: b₁ = (3, 1), b₂ = (1, 3)
Step 1: Check combinations: ±b₁, ±b₂, ±b₁±b₂
Step 2: Compute norms:
- ||b₁|| = √10
- ||b₂|| = √10
- ||b₁ - b₂|| = ||(2, -2)|| = 2√2 ✓ shortest!
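For a 2D toy lattice like this, the enumeration in the steps above can be done by brute force over small coefficients. A minimal sketch (pure Python; `svp_bruteforce` is an illustrative name, and the coefficient bound of ±3 assumes the basis is already fairly reduced):

```python
import itertools
import math

def svp_bruteforce(b1, b2, bound=3):
    """Brute-force SVP for a 2D lattice: try all small integer combinations."""
    best, best_norm = None, math.inf
    for a, b in itertools.product(range(-bound, bound + 1), repeat=2):
        if a == 0 and b == 0:
            continue  # the zero vector is excluded by definition
        v = (a * b1[0] + b * b2[0], a * b1[1] + b * b2[1])
        n = math.hypot(*v)
        if n < best_norm:
            best, best_norm = v, n
    return best, best_norm

v, n = svp_bruteforce((3, 1), (1, 3))
print(v, n)  # ±(2, -2), norm 2√2 ≈ 2.83
```

This scales as (2·bound + 1)^n, so it is only a sanity check for tiny dimensions.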
CVP (Closest Vector Problem)
Find the lattice point closest to a target vector t
Steps:
1. Babai's Rounding
- Express t in basis coordinates: t = Σ cᵢbᵢ
- Round coordinates: v = Σ ⌊cᵢ⌉bᵢ
- Fast but approximate
2. Babai's Nearest Plane
- Use Gram-Schmidt orthogonalization
- Project sequentially onto hyperplanes
- More accurate than rounding
3. Exact Enumeration (for mini lattices)
- Center search at Babai approximation
- Enumerate nearby lattice points
- Find true closest point
Example (2D):
Target: t = (5, 4)
Basis: b₁ = (3, 1), b₂ = (1, 3)
Babai rounding:
t = 1.375·b₁ + 0.875·b₂
Round: v = 1·b₁ + 1·b₂ = (4, 4)
Distance: ||t - v|| = 1
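The rounding step can be verified in code. A minimal 2×2 sketch using Cramer's rule for the basis coordinates (`babai_round_2d` is an illustrative name):

```python
def babai_round_2d(b1, b2, t):
    """Babai rounding in 2D: write t = c1*b1 + c2*b2, then round each cᵢ."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    # Cramer's rule for the coordinates of t in the basis (b1, b2)
    c1 = (t[0] * b2[1] - t[1] * b2[0]) / det
    c2 = (b1[0] * t[1] - b1[1] * t[0]) / det
    k1, k2 = round(c1), round(c2)
    v = (k1 * b1[0] + k2 * b2[0], k1 * b1[1] + k2 * b2[1])
    return (c1, c2), v

coords, v = babai_round_2d((3, 1), (1, 3), (5, 4))
print(coords, v)  # (1.375, 0.875) (4, 4)
```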
For larger lattices, use BKZ reduction + sophisticated enumeration or approximation algorithms!
---
SVP & CVP for Mid-Size Lattices (dimensions ~50-200)
SVP (Shortest Vector Problem)
Step 1: Strong Lattice Reduction
BKZ Algorithm (Block Korkine-Zolotarev)
Input: Basis B, block size β
1. LLL-reduce the basis (preprocessing)
2. For i = 1 to n:
- Extract local block of size β
- Solve SVP in this block (enumeration/sieving)
- Update basis
- Re-orthogonalize
3. Repeat until no improvement
Block sizes for mid-size:
- β = 20-30: Fast, good approximation
- β = 40-60: Moderate time, better quality
- β = 60+: Slow but near-optimal
Step 2: Enumeration with Pruning
Extreme Pruning (Gama-Nguyen-Regev)
1. Set pruning bounds R = (R₁, R₂, ..., Rₙ)
- Rᵢ controls search radius at level i
- Aggressive pruning: R exponentially decreasing
2. Enumeration:
for each level i:
if ||projection|| > Rᵢ: prune this branch
else: continue enumeration
3. Success probability: typically 0.1-0.5
4. Repeat multiple times with randomization
Example pruning function:
Rᵢ = R · (i/n)^k where k = 1-2
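The bounds can be tabulated with a tiny helper (a hypothetical sketch; real implementations index levels from the enumeration root, and conventions vary):

```python
def pruning_bounds(R, n, k=1.5):
    """Pruning radii Rᵢ = R·(i/n)^k for levels i = 1..n."""
    return [R * (i / n) ** k for i in range(1, n + 1)]

bounds = pruning_bounds(10.0, 5)
print(bounds[0], bounds[-1])  # innermost radius is much smaller than R = 10.0
```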
Step 3: Sieving (Alternative)
Gauss Sieve (for dimensions 50-90)
1. Initialize list L with random short vectors
2. Repeat:
- Sample random vector v
- Reduce v using L (find closest pair)
- If ||v|| is short enough, add to L
- Remove redundant vectors
3. Shortest vector in L is solution
Complexity: 2^(0.415n) time, versus enumeration's super-exponential 2^(O(n log n))
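The idea can be illustrated on the 2D toy lattice from the SVP example. This is a toy sketch of the list-reduction step only, not a faithful GaussSieve (it never re-reduces vectors already in the list, and the sampling is naive):

```python
import random

def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def reduce_against(v, L):
    """Greedily shorten v by adding/subtracting vectors already in the list."""
    changed = True
    while changed:
        changed = False
        for w in L:
            for s in (1, -1):
                cand = (v[0] - s * w[0], v[1] - s * w[1])
                if 0 < norm2(cand) < norm2(v):
                    v, changed = cand, True
    return v

def toy_sieve(b1, b2, samples=200, seed=0):
    rng = random.Random(seed)
    pool = [b1, b2]
    for _ in range(samples):
        a, c = rng.randint(-5, 5), rng.randint(-5, 5)
        pool.append((a * b1[0] + c * b2[0], a * b1[1] + c * b2[1]))
    L = []
    for v in pool:
        if norm2(v) == 0:
            continue
        L.append(reduce_against(v, L))
    return min(L, key=norm2)

print(toy_sieve((3, 1), (1, 3)))  # a vector of squared norm 8, e.g. (-2, 2)
```

In dimension 2 this reliably recovers ±(2, -2); real sieves work the same way but need exponentially large lists.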
CVP (Closest Vector Problem)
Approach 1: Babai + Enumeration
Input: Basis B, target t
Step 1: Babai's Nearest Plane
- Compute Gram-Schmidt B* = {b₁*, ..., bₙ*}
- For i = n down to 1:
cᵢ = ⟨t, bᵢ*⟩ / ||bᵢ*||²
v = v + ⌊cᵢ⌉ · bᵢ
- Output: approximation v_babai
Step 2: Bounded Enumeration
- R = ||t - v_babai|| × 1.5
- Enumerate lattice points within radius R of t
- Pruning: use similar bounds as SVP
Step 3: Verify and refine
- Check if better solution exists
- Possibly increase R and repeat
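Step 1 of this approach can be sketched with NumPy. A minimal, unoptimized version assuming a full-rank basis given as the rows of B (`gram_schmidt` and `babai_nearest_plane` are illustrative names):

```python
import numpy as np

def gram_schmidt(B):
    """Classical Gram-Schmidt on the rows of B (orthogonal, not normalized)."""
    Bstar = np.array(B, dtype=float)
    for i in range(len(Bstar)):
        for j in range(i):
            mu = np.dot(Bstar[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu * Bstar[j]
    return Bstar

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm; basis vectors are the rows of B."""
    B = np.array(B, dtype=float)
    Bstar = gram_schmidt(B)
    b = np.array(t, dtype=float)  # residual target
    v = np.zeros_like(b)          # lattice approximation being built
    for i in range(len(B) - 1, -1, -1):  # i = n down to 1
        c = np.dot(b, Bstar[i]) / np.dot(Bstar[i], Bstar[i])
        k = np.rint(c)            # round to the nearest hyperplane
        v += k * B[i]
        b -= k * B[i]
    return v

print(babai_nearest_plane([[3, 1], [1, 3]], [5, 4]))  # [4. 4.]
```

On the 2D example this matches the rounding result; in higher dimensions nearest-plane is generally strictly better.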
Approach 2: CVP via SVP Embedding
Kannan's Embedding:
1. Create embedded lattice L':
Basis: [B | t ]
[0 | M ]
where M is the embedding factor (chosen below)
2. Solve SVP in L':
The shortest vector has the form ±(t - v, M),
where v ∈ L is the lattice point closest to t
3. Extract solution:
v = t - s₁..ₙ for the short vector s
(negate s first if its last coordinate is -M)
Choosing M:
- M ≈ dist(t, L) works well; for random targets dist(t, L) ≈ GH(L)
- Too small: the short vector may involve several copies of t (wrong solution)
- Too large: the shortest vector of L' is a plain lattice vector (w, 0) that ignores t
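The embedding can be demonstrated end to end on the 2D example, with brute force standing in for a real SVP solver (an illustrative sketch; here dist(t, L) = 1, so M = 1 is a reasonable embedding factor):

```python
import itertools
import math

def cvp_via_embedding(b1, b2, t, M=1, bound=3):
    """Solve 2D CVP by embedding: brute-force SVP in the 3D lattice
    with rows (b1, 0), (b2, 0), (t, M)."""
    rows = [(b1[0], b1[1], 0), (b2[0], b2[1], 0), (t[0], t[1], M)]
    best, best_n2 = None, math.inf
    for a, b, c in itertools.product(range(-bound, bound + 1), repeat=3):
        s = tuple(a * rows[0][i] + b * rows[1][i] + c * rows[2][i] for i in range(3))
        n2 = sum(x * x for x in s)
        if 0 < n2 < best_n2:
            best, best_n2 = s, n2
    if best[2] < 0:       # normalize the sign so the last coordinate is +M
        best = tuple(-x for x in best)
    assert abs(best[2]) == M, "embedding failed: short vector does not involve t"
    # best = (t - v, M)  =>  v = t - best[:2]
    return (t[0] - best[0], t[1] - best[1])

print(cvp_via_embedding((3, 1), (1, 3), (5, 4)))  # (4, 4)
```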
Approach 3: List Decoding (Probabilistic)
1. Generate k random close vectors:
vᵢ = Babai(t + noise_i)
2. Use as starting points for local search
3. Enumerate small region around each vᵢ
4. Return best solution found
Practical Workflow for Mid-Size
SVP Pipeline:
1. Preprocessing (minutes):
- LLL reduction
- Quality check: δ ≈ 0.99
2. BKZ-β (hours to days):
- Start with β = 20, increase gradually
- Monitor first vector norm
- Stop when diminishing returns
3. Final enumeration (hours):
- Extreme pruning, success prob = 0.2
- Run 10-20 trials
- Randomize basis between trials
4. Post-processing:
- Verify solution
- Check if ||v|| < GH bound
CVP Pipeline:
1. Basis reduction: BKZ-β (β ≈ 30-50)
2. Initial approximation: Babai O(n²)
3. Main solve:
Option A: Direct enumeration (radius 2× Babai)
Option B: Embedding + BKZ + enumeration
4. Multiple trials with randomization
Time: minutes to hours for n = 100
Complexity Comparison
| Method | Time | Quality | Dimension Range |
|---|---|---|---|
| LLL only | O(n⁵) | ~2^(n/2) approx | Any |
| BKZ-20 | Hours | ~2^(n/3) approx | 50-200 |
| BKZ-50 | Days | Near-optimal | 50-150 |
| Enumeration | 2^(0.3n) | Exact | < 80 |
| Gauss Sieve | 2^(0.415n) | Exact | 50-90 |
Software Tools
Implementation:
- FPLLL: BKZ, enumeration, pruning
- G6K: Advanced sieving
- NTL: General lattice operations
- SageMath: Prototyping and testing
Example (FPLLL):
from fpylll import IntegerMatrix, BKZ
# SVP
A = IntegerMatrix.from_file("basis.txt")
BKZ.reduction(A, BKZ.Param(block_size=50))
# First row approximates shortest vector
Lattice Volume & Information Theory Bounds
1. Lattice Volume (Determinant)
Definition:
For lattice L with basis B = [b₁, b₂, ..., bₙ]:
vol(L) = det(L) = |det(B)| = √det(B^T B)
Geometric Interpretation:
Volume of fundamental parallelepiped (unit cell)
Examples:
2D Lattice:
b₁ = (3, 1), b₂ = (1, 3)
det(B) = |3×3 - 1×1| = 8
vol(L) = 8
n-dimensional:
B = [b₁ | b₂ | ... | bₙ]
vol(L) = |det(B)|
Invariant: Same for any basis of L
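The invariance is easy to check numerically: a unimodular row operation (here b₂' = b₂ + 2·b₁) changes the basis but not |det|. A minimal sketch:

```python
def det2(B):
    """Determinant of a 2x2 matrix given as rows."""
    return B[0][0] * B[1][1] - B[0][1] * B[1][0]

B1 = [(3, 1), (1, 3)]
# Unimodular change of basis: b2' = b2 + 2*b1 generates the same lattice
B2 = [(3, 1), (1 + 2 * 3, 3 + 2 * 1)]  # rows (3, 1) and (7, 5)
print(abs(det2(B1)), abs(det2(B2)))  # 8 8
```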
2. Gaussian Heuristic
Core Idea:
Lattice points are "randomly" distributed in space
Expected shortest vector length:
λ₁ ≈ √(n/(2πe)) · vol(L)^(1/n)
Where:
- λ₁ = length of shortest non-zero vector
- n = dimension
- vol(L)^(1/n) = geometric mean spacing
Derivation (Heuristic):
Volume of n-ball:
Vₙ(R) = (π^(n/2) / Γ(n/2 + 1)) · R^n
For large n: Vₙ(R) ≈ (2πe/n)^(n/2) · R^n
Expected number of lattice points in ball of radius R:
N(R) ≈ Vₙ(R) / vol(L)
Set N(R) = 1 to find expected shortest vector:
(2πe/n)^(n/2) · R^n / vol(L) = 1
R^n = vol(L) · (n/(2πe))^(n/2)
R = vol(L)^(1/n) · √(n/(2πe))
GH(L) := √(n/(2πe)) · vol(L)^(1/n)
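The heuristic is most convenient to evaluate in log form, which also avoids overflow for large volumes. A small sketch (`gaussian_heuristic` is an illustrative name):

```python
import math

def gaussian_heuristic(n, log2_vol):
    """Expected shortest-vector length from the Gaussian heuristic,
    given the dimension n and log2 of the lattice volume."""
    log2_gh = log2_vol / n + 0.5 * math.log2(n / (2 * math.pi * math.e))
    return 2 ** log2_gh

# n = 10, vol(L) = 1000^10 (the "small lattice" example below)
print(gaussian_heuristic(10, 10 * math.log2(1000)))  # ≈ 765
```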
3. Logarithmic Form (Information Theory)
Log-Volume (Entropy Connection):
log vol(L) = log det(L)
For basis B with orthogonal Gram-Schmidt B*:
log vol(L) = Σᵢ log ||bᵢ*||
Information interpretation:
- log vol(L) measures "uncertainty" in lattice
- Bits needed to specify position in fundamental domain
Log-Gaussian Heuristic:
log λ₁ ≈ (1/n) log vol(L) + (1/2) log(n/(2πe))
Simplifies to:
log λ₁ ≈ (1/n) log vol(L) + O(log n)
Key insight: Shortest vector grows with geometric mean of basis norms
4. Information-Theoretic Bounds
Ball-Box Bound (heuristic lower bound):
A box of side 2R contains roughly
(2R)^n / vol(L)
lattice points, since each point accounts for one fundamental cell.
Requiring the ball of radius λ₁ to contain no non-zero point gives, for "typical" lattices:
λ₁ ≳ vol(L)^(1/n) / c_n
where c_n is a packing-type constant.
(Caution: highly skewed lattices can have λ₁ far below this; it is a heuristic, not a theorem.)
Minkowski's Theorem (Upper Bound):
λ₁ ≤ √n · vol(L)^(1/n)
In log form:
log λ₁ ≤ (1/n) log vol(L) + (1/2) log n
Tight bounds (random lattices):
λ₁ = Θ(√n · vol(L)^(1/n)) for random lattices; in general only the Minkowski upper bound is guaranteed.
5. Rough Calculation Examples
Example 1: Small Lattice
Dimension: n = 10
Basis norms: ||bᵢ|| ≈ 1000 (typical)
vol(L) ≈ 1000^10 = 10^30
log₂ vol(L) = 10 × log₂(1000) ≈ 10 × 10 = 100 bits
Gaussian Heuristic:
λ₁ ≈ √(10/(2πe)) · 10^3
λ₁ ≈ √0.59 · 10^3 ≈ 765
log₂ λ₁ ≈ 9.6 ≈ 10 bits
Example 2: Mid-Size Lattice
Dimension: n = 100
Determinant: det(L) = 2^500
log₂ vol(L) = 500 bits
Gaussian Heuristic:
log₂ λ₁ ≈ (1/100) × 500 + (1/2) log₂(100/(2πe))
log₂ λ₁ ≈ 5 + (1/2) log₂(5.9)
log₂ λ₁ ≈ 5 + 1.3 ≈ 6.3 bits
λ₁ ≈ 2^6.3 ≈ 79
Example 3: LWE-Based Cryptography
Learning With Errors (LWE) lattice:
n = 512, q = 2^15 (modulus)
vol(L) = q^(n/2) = (2^15)^256 = 2^3840
log₂ vol(L) = 3840 bits
Expected λ₁:
log₂ λ₁ ≈ 3840/512 + (1/2) log₂(512/(2πe))
log₂ λ₁ ≈ 7.5 + (1/2) log₂(30) ≈ 7.5 + 2.5 ≈ 10 bits
λ₁ ≈ 2^10 ≈ 1024
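The three examples can be reproduced with a few lines (log form only; `log2_gh` is an illustrative name):

```python
import math

def log2_gh(n, log2_vol):
    """log2 of the Gaussian-heuristic estimate for λ₁."""
    return log2_vol / n + 0.5 * math.log2(n / (2 * math.pi * math.e))

print(round(log2_gh(10, 10 * math.log2(1000)), 1))  # 9.6  (Example 1)
print(round(log2_gh(100, 500), 1))                  # 6.3  (Example 2)
print(round(log2_gh(512, 3840), 1))                 # 10.0 (Example 3)
```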
6. Complexity Bounds (Information-Theoretic)
Search Space Size:
Number of lattice points in a ball of radius c·λ₁ around the origin:
N ≈ Vₙ(c·λ₁) / vol(L)
Substituting λ₁ ≈ √(n/(2πe)) · vol(L)^(1/n) and Vₙ(R) ≈ (2πe/n)^(n/2) · R^n:
N ≈ c^n
So only 2^(O(n)) lattice points lie near the Gaussian-heuristic radius.
The enumeration tree is larger, however: it has up to roughly n^(n/2) nodes, i.e.
log N ≈ (n/2) log n
because intermediate levels branch over many candidates that are pruned only later.
Enumeration Complexity:
Time ≈ 2^(O(n log n)) in general; ≈ n^(n/(2e)) with strong (Kannan-style) preprocessing
For well-pruned enumeration on reduced bases, one published empirical fit (Chen-Nguyen, extreme pruning) is roughly:
log₂ Time ≈ 0.000784·n² + 0.366·n
n = 50: ≈ 20 bits
n = 100: ≈ 44 bits
Sieving Complexity:
Gauss Sieve: 2^(0.415n)
log₂ Time = 0.415n
n = 100: ~41.5 bits = 10^12 operations
n = 200: ~83 bits = 10^25 operations
7. Information-Theoretic Hardness
Bit Security:
For cryptographic lattices:
Security ≈ 0.3n to 0.4n bits
n = 512: ~154-205 bit security
n = 1024: ~307-410 bit security
Volume-Security Relation:
Volume alone does not determine hardness; what matters is the gap between λ₁ and GH(L):
λ₁ ≈ GH(L) (random-looking lattice) → hard for its dimension
λ₁ << GH(L) (unusually short vector) → easier to attack
Hermite factor δ (root Hermite factor):
||b₁|| ≈ δ^n · vol(L)^(1/n) after reduction
δ ≈ 1.01: good reduction (cheap to achieve)
δ ≈ 1.005: excellent reduction (much more expensive to achieve)
Smaller δ means a stronger basis, so reaching it costs the attacker more.
8. Practical Formula Card
╔════════════════════════════════════════╗
║ LATTICE ROUGH CALCULATIONS ║
╚════════════════════════════════════════╝
Volume:
vol(L) = |det(B)|
Shortest Vector (Gaussian Heuristic):
λ₁ ≈ √(n/(2πe)) · vol(L)^(1/n)
λ₁ ≈ 0.24·√n · vol(L)^(1/n)  (since √(1/(2πe)) ≈ 0.24)
Log form:
log λ₁ ≈ (log vol(L))/n + 0.5·log(n/(2πe))
Attack complexity (bits):
Enumeration: 0.1n² to 0.2n²
Sieving: 0.265n (quantum) / 0.292n (best classical) / 0.415n (Gauss sieve)
Cryptographic security:
λ ≈ 128 bits → n ≈ 512
λ ≈ 256 bits → n ≈ 1024
Quick Sanity Checks
Is my lattice hard?
import math

def check_hardness(n, det):
    """Rough hardness estimate from dimension n and lattice determinant det."""
    log_det = math.log2(det)  # works even for very large integer det
    # Gaussian heuristic, computed in log form to avoid overflow
    log_lambda = log_det / n + 0.5 * math.log2(n / (2 * math.pi * math.e))
    lambda_1_estimate = 2 ** log_lambda
    # Root Hermite factor achievable by BKZ-beta (standard asymptotic estimate)
    beta = 50
    delta = (beta / (2 * math.pi * math.e) * (math.pi * beta) ** (1 / beta)) ** (1 / (2 * (beta - 1)))
    achievable = delta ** n * 2 ** (log_det / n)  # ≈ δ^n · det^(1/n)
    print(f"Expected λ₁: {lambda_1_estimate:.2f}")
    print(f"BKZ-50 achieves: {achievable:.2f}")
    print(f"Security estimate: {0.3*n:.0f}-{0.4*n:.0f} bits")

check_hardness(100, 2**500)
Next steps
- Dive deeper into entropy connections?
- Show concrete cryptographic parameter examples?
- Explain the connection to coding theory bounds?