2D Lattice SVP Examples
A 2D lattice is defined by a basis of two vectors, $\mathbf{b_1}$ and $\mathbf{b_2}$.
Example 1: Orthogonal Basis (The "Easy" Case)
This is the simplest lattice, where the basis vectors are already short and perpendicular.
$\mathbf{b_1} = (1, 0)$
$\mathbf{b_2} = (0, 1)$
This basis generates the standard integer grid $Z^2$. The shortest non-zero vectors are obviously $\mathbf{b_1}$, $\mathbf{b_2}$, and their negatives, all with a length of 1.
Example 2: Slightly Skewed Basis
This example shows how the shortest vector isn't always one of the basis vectors.
$\mathbf{b_1} = (5, 3)$
$\mathbf{b_2} = (2, 2)$
The basis vectors have lengths $\sqrt{5^2+3^2}=\sqrt{34}\approx 5.83$ and $\sqrt{2^2+2^2}=\sqrt{8}\approx 2.83$. However, a shorter vector can be found by combining them:
$\mathbf{v}=\mathbf{b_1}-2\mathbf{b_2}=(5,3)-2(2,2)=(5-4,3-4)=(1,-1)$
The length of $\mathbf{v}$ is $\sqrt{1^2+(-1)^2}=\sqrt{2}\approx 1.41$, making it the shortest vector in this lattice.
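Because the numbers are so small, the claim can be checked by brute force. The sketch below (plain Python; the cutoff $|x|, |y| \le 10$ is a heuristic choice, not a proof) confirms that no non-zero combination in that range is shorter than $\sqrt{2}$:

```python
# Brute-force scan of x*b1 + y*b2 over small integer coefficients.
b1, b2 = (5, 3), (2, 2)

best2 = min(
    (x * b1[0] + y * b2[0]) ** 2 + (x * b1[1] + y * b2[1]) ** 2
    for x in range(-10, 11)
    for y in range(-10, 11)
    if (x, y) != (0, 0)
)
print(best2)  # 2, the squared length of (1, -1)
```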
Example 3: Highly Skewed Basis (The "Hard" Case)
This is a "bad" basis, where the vectors are long and nearly parallel. Finding the short vector is non-trivial and demonstrates why reduction algorithms like LLL are needed.
$\mathbf{b_1} = (65537, 65536)$
$\mathbf{b_2} = (65536, 65535)$
Both basis vectors are very long (length $> 92000$). But a simple integer combination reveals a tiny vector:
$\mathbf{v}=\mathbf{b_1}-\mathbf{b_2}=(65537-65536,65536-65535)=(1,1)$
The length of $\mathbf{v}$ is $\sqrt{1^2+1^2}=\sqrt{2}\approx 1.41$. In fact, this basis has determinant $65537 \cdot 65535 - 65536^2 = -1$, so the lattice is all of $Z^2$: one more reduction step, $\mathbf{b_2} - 65535\,\mathbf{v} = (1, 0)$, yields a vector of length 1.
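The reduction steps used in these 2D examples can be automated. Below is a minimal Python sketch of Gauss-Lagrange reduction (the function name `gauss_reduce` is mine): it repeatedly subtracts the best integer multiple of the shorter vector from the longer one, just like the Euclidean algorithm. On Example 3's basis it reduces past $(1,1)$ all the way to a length-1 vector, since that basis has determinant $-1$ and therefore generates all of $Z^2$.

```python
from fractions import Fraction

def gauss_reduce(b1, b2):
    """Gauss-Lagrange reduction in 2D: returns (u, v) where u is a
    shortest non-zero vector of the lattice spanned by b1 and b2."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    while True:
        if norm2(b2) < norm2(b1):
            b1, b2 = b2, b1          # keep b1 as the shorter vector
        # nearest integer to <b1,b2>/<b1,b1>, computed exactly
        m = round(Fraction(dot(b1, b2), norm2(b1)))
        if m == 0:
            return b1, b2            # reduced: b1 is a shortest vector
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

print(gauss_reduce((5, 3), (2, 2))[0])                  # (1, -1)
print(gauss_reduce((65537, 65536), (65536, 65535))[0])  # (0, -1), length 1
```

Each pass through the loop is one "reduction step" in the complexity discussion further down.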
3D Lattice SVP Examples
A 3D lattice is defined by a basis of three vectors, $\mathbf{b_1}$, $\mathbf{b_2}$, and $\mathbf{b_3}$.
Example 1: Orthogonal Basis
Similar to the 2D case, this is the standard $Z^3$ integer lattice.
$\mathbf{b_1} = (1, 0, 0)$
$\mathbf{b_2} = (0, 1, 0)$
$\mathbf{b_3} = (0, 0, 1)$
The shortest non-zero vectors have a length of 1.
Example 2: Symmetric, Non-Orthogonal Basis
This is a non-trivial but structured lattice.
$\mathbf{b_1} = (15, -7, -7)$
$\mathbf{b_2} = (-7, 15, -7)$
$\mathbf{b_3} = (-7, -7, 15)$
The basis vectors are relatively long (length $\sqrt{323} \approx 18.0$). The shortest vector is not immediately obvious from this basis. (For reference, the shortest vector in this lattice is $(1, 1, 1)$, which is simply $\mathbf{b_1} + \mathbf{b_2} + \mathbf{b_3}$ and has length $\sqrt{3} \approx 1.73$.)
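A quick numeric check (plain Python; the coefficient cutoff $|x|,|y|,|z| \le 4$ is a heuristic, not a proof) shows that the three basis vectors sum to the very short vector $(1, 1, 1)$ and that no combination in that range is shorter:

```python
b = [(15, -7, -7), (-7, 15, -7), (-7, -7, 15)]

# The basis vectors sum to a very short lattice vector.
v = tuple(sum(col) for col in zip(*b))
print(v)  # (1, 1, 1)

# Scan small coefficient combinations for anything shorter.
R = 4
best2 = min(
    sum(c * c for c in (x * b[0][i] + y * b[1][i] + z * b[2][i] for i in range(3)))
    for x in range(-R, R + 1)
    for y in range(-R, R + 1)
    for z in range(-R, R + 1)
    if (x, y, z) != (0, 0, 0)
)
print(best2)  # 3, the squared length of (1, 1, 1)
```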
Example 3: Highly Skewed "Bad" Basis
This is a classic example used to show the power of the LLL algorithm. The basis vectors are enormous and, although any two of them look nearly orthogonal, $\mathbf{b_3}$ lies almost in the plane spanned by $\mathbf{b_1}$ and $\mathbf{b_2}$, so the basis hides a very short vector.
$\mathbf{b_1} = (1, 1894885908, 0)$
$\mathbf{b_2} = (0, 1, 1894885908)$
$\mathbf{b_3} = (0, 0, 2147483648)$
The shortest vector in this lattice is $\mathbf{v}=(-3, 17, 4)$, which has a length of $\sqrt{(-3)^2+17^2+4^2}=\sqrt{9+289+16}=\sqrt{314}\approx 17.7$. This vector is found using a massive integer combination:
$\mathbf{v}=-3\mathbf{b_1}+5684657741\,\mathbf{b_2}-5015999938\,\mathbf{b_3}$
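The huge coefficients are easy to verify with Python's arbitrary-precision integers:

```python
b1 = (1, 1894885908, 0)
b2 = (0, 1, 1894885908)
b3 = (0, 0, 2147483648)
coeffs = (-3, 5684657741, -5015999938)

# v = -3*b1 + 5684657741*b2 - 5015999938*b3, computed coordinate-wise
v = tuple(
    sum(c * vec[i] for c, vec in zip(coeffs, (b1, b2, b3)))
    for i in range(3)
)
print(v)                      # (-3, 17, 4)
print(sum(x * x for x in v))  # 314
```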
(next - Upper vs Lower Boundary in calculation size)
Here are my guesses for the calculation size boundaries for each of those 6 cases.
First, a quick clarification: "Calculation size" in this context isn't a single number but a measure of computational complexity. This complexity depends almost entirely on two things:
Dimension ($n$): Fixed at 2 or 3. Because the dimension is so small, the problem is "easy" and can be solved efficiently.
Bit-Length ($B$): The size of the numbers in your basis vectors. This is the main factor that will change the calculation time for your examples.
For a fixed dimension $n$, the time to solve SVP is polynomial in the bit-length $B$.
Lower Bound (all cases): $O(B)$. An algorithm must at least read the numbers, so the bare-minimum effort is proportional to the size of the numbers.
Upper Bound (2D): $O(B^2)$. 2D SVP is solved by Gauss's algorithm, which is very fast—its complexity is related to the square of the bit-length.
Upper Bound (3D): $O(B^3)$. 3D SVP can be solved by algorithms like LLL. The complexity is also polynomial, roughly scaling with the cube of the bit-length.
2D Lattice Guesses
Example 1: Orthogonal Basis
Basis: $\mathbf{b_1} = (1, 0)$, $\mathbf{b_2} = (0, 1)$
Bit-Length ($B$): Trivial ($B=1$).
Lower Bound: $O(1)$ operations.
Upper Bound: $O(1)$ operations.
Guess: This is a "zero-calculation" problem. Any algorithm would check the vectors, see they are orthogonal and of length 1, and stop immediately.
Example 2: Slightly Skewed Basis
Basis: $\mathbf{b_1} = (5, 3)$, $\mathbf{b_2} = (2, 2)$
Bit-Length ($B$): Very small ($B \approx 3$ bits).
Lower Bound: A few operations (proportional to $B$).
Upper Bound: A handful of operations (proportional to $B^2$, so roughly $3^2 = 9$).
Guess: This is extremely fast. It's analogous to running the Euclidean algorithm on single-digit numbers, and it takes only 1-2 reduction steps.
Example 3: Highly Skewed Basis
Basis: $\mathbf{b_1} = (65537, 65536)$, $\mathbf{b_2} = (65536, 65535)$
Bit-Length ($B$): Small ($B \approx 16$ bits).
Lower Bound: Proportional to 16.
Upper Bound: Proportional to $16^2 = 256$.
Guess: Still instantaneous for a computer. This is a "bad" basis but for such small numbers, the $O(B^2)$ complexity is trivial. It would perform a few more reduction steps than Example 2, but the calculation size is still very low.
3D Lattice Guesses
Example 1: Orthogonal Basis
Basis: $\mathbf{b_1} = (1, 0, 0)$, $\mathbf{b_2} = (0, 1, 0)$, $\mathbf{b_3} = (0, 0, 1)$
Bit-Length ($B$): Trivial ($B=1$).
Lower Bound: $O(1)$ operations.
Upper Bound: $O(1)$ operations.
Guess: Identical to the 2D orthogonal case. It's a "zero-calculation" problem.
Example 2: Symmetric, Non-Orthogonal Basis
Basis: $\mathbf{b_1} = (15, -7, -7)$, etc.
Bit-Length ($B$): Very small ($B \approx 4$ bits).
Lower Bound: Proportional to 4.
Upper Bound: Proportional to $4^3 = 64$.
Guess: Very fast. A 3D reduction algorithm (like LLL) would perform a handful of vector swaps and subtractions and finish.
Example 3: Highly Skewed "Bad" Basis
Basis: $\mathbf{b_1} = (1, 1894885908, 0)$, etc.
Bit-Length ($B$): Moderate ($B \approx 31$ bits, as $1.8 \times 10^9 \approx 2^{31}$).
Lower Bound: Proportional to 31.
Upper Bound: Proportional to $31^3 \approx 30,000$.
Guess: This is the only example with a non-trivial calculation size. The algorithm must perform many arithmetic operations (additions, multiplications, divisions) on 31-bit numbers. While still "instant" (microseconds) on a modern computer, its calculation size is thousands of times larger than any of the other examples. This is where you first see the algorithm actually having to do work.
(try to generalize it)
Here's the generalization for the upper and lower boundaries for 2D and 3D SVP, based on the bit-length $B$ of the largest number in your basis vectors.
The most important takeaway is that for a fixed, small dimension (like 2 or 3), the problem is not exponentially hard. The difficulty scales polynomially with the size of the numbers ($B$), not the dimension.
Lower Boundary (For both 2D and 3D)
The lower bound is determined by the "cost of reading the input." An algorithm can't solve the problem without at least looking at all the numbers.
Generalization: $O(B)$
Explanation: The input consists of a fixed number of integers (4 for 2D, 9 for 3D). The algorithm must read each of these integers, and the time this takes is proportional to their bit-length, $B$. Therefore, the absolute minimum time required is linear in $B$.
Upper Boundary (The "Hardest Case")
The upper bound is determined by the worst-case performance of the best-known algorithm for that dimension.
2D SVP
Algorithm: Gauss's Reduction Algorithm (which is essentially the Euclidean algorithm for vectors).
Generalization: $O(B^2)$
Explanation: The Gauss-Lagrange algorithm is guaranteed to find the shortest vector in 2D, and its complexity is well-known to be polynomial. It performs $O(B)$ reduction steps (the step count is logarithmic in the size of the input numbers, i.e. linear in their bit-length), each involving arithmetic operations (like division) on numbers of size $B$; an amortized analysis gives a total bit-complexity that is quadratic in $B$.
3D SVP
Algorithm: LLL Algorithm (Lenstra–Lenstra–Lovász).
Generalization (Classic): $O(B^3)$
Generalization (Modern): $O(B^2)$
Explanation: This is slightly more complex as algorithms have improved.
The classic LLL algorithm's complexity analysis (when fixing the dimension $n=3$) involves operations on numbers that can grow during the calculation, leading to a worst-case bound of $O(B^3)$.
More modern variants of LLL (like $L^2$) use clever floating-point arithmetic to control the size of the numbers, achieving a faster worst-case bound of $O(B^2)$.
For practical purposes, you can think of both 2D and 3D SVP as having a worst-case complexity of $O(B^2)$ using the best algorithms. The $O(B^3)$ bound is a more traditional, but still valid, upper bound for 3D if you are using a simpler LLL implementation.
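To make this concrete, here is a textbook LLL sketch in exact rational arithmetic (the function name `lll_reduce` is mine, and $\delta = 3/4$ is the classic parameter; recomputing Gram-Schmidt from scratch on every change is simple but far from optimized). Run on the "bad" 3D basis from Example 3, it reduces the enormous vectors to short ones: LLL theory guarantees the shortest reduced vector has squared length at most $2^{n-1}\lambda_1^2 \le 4 \cdot 314 = 1256$, and in practice it typically recovers the $\sqrt{314}$ vector itself.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction using exact rational arithmetic.
    Fine for tiny dimensions; not an optimized implementation."""
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                w = [x - mu[i][j] * y for x, y in zip(w, bstar[j])]
            bstar.append(w)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovász condition: accept, or swap and step back
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

bad = [[1, 1894885908, 0], [0, 1, 1894885908], [0, 0, 2147483648]]
reduced = lll_reduce(bad)
print(min(sum(x * x for x in v) for v in reduced))
```

The exact-`Fraction` bookkeeping here is what drives the classic $O(B^3)$ bound; the $L^2$-style floating-point variants mentioned above exist precisely to avoid this cost.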