https://dl.acm.org/doi/10.1145/800157.805047
This paper, "The Complexity of Theorem-Proving Procedures" by Stephen A. Cook, introduces the concept of polynomial-time reducibility to classify the difficulty of computational problems. It famously demonstrates that the problem of determining whether a propositional formula is a tautology is "at least as hard" as any problem that can be solved by a nondeterministic Turing machine in polynomial time.
Polynomial-Time Reducibility
The paper establishes a formal way to relate the complexity of different problems.
- P-Reducibility: A set of strings $S$ is P-reducible (polynomial-time reducible) to a set $T$ if $S$ can be solved by a deterministic Turing machine in polynomial time, provided it has access to an "oracle" that can instantly decide membership in $T$. This is defined formally using "query machines".
- Polynomial Degrees: This notion of reducibility is transitive, so it can be used to group problems into equivalence classes that share the same polynomial degree of difficulty.
- The Class $L_*$ (P): The paper uses $L_*$ to denote the class of sets that can be recognized by a deterministic Turing machine in polynomial time.
Theorem 1: The Cook-Levin Theorem
The paper's first main theorem establishes the foundational concept of NP-completeness (though not using that term).
Theorem 1 states that if a set $S$ is accepted by any nondeterministic Turing machine within polynomial time, then $S$ is P-reducible to the set of {DNF tautologies} (propositional formulas in disjunctive normal form that are tautologies).
The proof involves a "master reduction":
- Given a nondeterministic Turing machine $M$ and an input $w$, a propositional formula $A(w)$ in conjunctive normal form (CNF) is constructed.
- The formula is built from proposition symbols that describe the machine's computation (e.g., $P_{s,t}^i$ for "tape square $s$ contains symbol $\sigma_i$ at time $t$", $Q_t^i$ for "the machine is in state $q_i$ at time $t$", and $S_{s,t}$ for "tape square $s$ is scanned at time $t$").
- $A(w)$ is constructed in parts (B, C, D, E, F, G, H, I) that assert the machine starts correctly, follows its transition rules, and reaches an accepting state.
- Crucially, $A(w)$ is satisfiable if and only if $M$ accepts $w$.
- Therefore, its negation $\neg A(w)$ is a tautology if and only if $w \notin S$.
- $\neg A(w)$ can be converted to disjunctive normal form (DNF) using De Morgan's laws.
- Since the entire construction can be performed in polynomial time, membership in $S$ reduces to determining tautologyhood.
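The De Morgan step is mechanical enough to sketch in a few lines of Python. This is not from the paper, just a toy encoding: a literal is a signed integer ($k$ for variable $x_k$, $-k$ for its negation), a CNF is a list of OR-clauses, and a DNF is a list of AND-terms.

```python
from itertools import product

def negate_cnf_to_dnf(cnf):
    """De Morgan: the negation of a CNF (an AND of OR-clauses) is a DNF
    (an OR of AND-terms) obtained by flipping every literal in place."""
    return [[-lit for lit in clause] for clause in cnf]

def cnf_satisfiable(cnf, n):
    """Brute force over all 2^n valuations of variables 1..n."""
    return any(all(any((lit > 0) == v[abs(lit) - 1] for lit in clause)
                   for clause in cnf)
               for v in product([False, True], repeat=n))

def dnf_tautology(dnf, n):
    """A DNF is a tautology iff every valuation makes some term true."""
    return all(any(all((lit > 0) == v[abs(lit) - 1] for lit in term)
                   for term in dnf)
               for v in product([False, True], repeat=n))

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable,
# so its negation is a DNF formula that is NOT a tautology.
cnf = [[1, 2], [-1, 2]]
dnf = negate_cnf_to_dnf(cnf)
```

The brute-force checkers make the paper's pivot concrete: the CNF is satisfiable exactly when the negated DNF fails to be a tautology.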
A corollary notes that this implies many other problems, such as the subgraph problem, are also P-reducible to {DNF tautologies}, because these problems (or their complements) are accepted by nondeterministic machines in polynomial time.
Theorem 2: Equivalent Difficult Problems
The second main theorem shows that several problems are equivalent in difficulty.
Theorem 2 states that the following sets are all P-reducible to each other, and thus share the same polynomial degree of difficulty:
- {tautologies}
- {DNF tautologies}
- $D_3$ (DNF tautologies in which each disjunct has at most three conjuncts)
- {subgraph pairs} (determining whether one graph is isomorphic to a subgraph of another)
The proof demonstrates the necessary reductions, including:
- {DNF tautologies} to $D_3$: A DNF formula with large disjuncts (e.g., $R_1 \& \dots \& R_s$ where $s > 3$) is reduced to a formula with smaller disjuncts by introducing new proposition symbols, a process that can be repeated in polynomial time.
- $D_3$ to {subgraph pairs}: For a $D_3$ formula $A$, two graphs $G_1$ and $G_2$ are constructed. The construction is designed so that $G_1$ can be embedded into $G_2$ if and only if $A$ is not a tautology.
The paper remarks that {primes} and {isomorphic graph pairs} could not be added to this list of equivalent problems.
Implications and Discussion
The theorems provide strong evidence that these problems are "difficult" to solve.
- The author suggests it is "fruitless" to search for a polynomial-time procedure for the subgraph problem, because success would yield polynomial-time procedures for many other "apparently intractable problems".
- Cook conjectures that {tautologies} is a "good candidate for an interesting set not in $L_*$" (the class P).
- He states that proving this conjecture (i.e., proving P $\neq$ NP) would be a "major breakthrough in complexity theory".
Complexity of Predicate Calculus
The paper also introduces a method to measure the complexity of proof procedures for the predicate calculus, which is known to be undecidable.
- This measure is based on Herbrand's theorem.
- It defines $\phi(A)$ as the smallest number $k$ such that the conjunction of the first $k$ substitution instances ($A_1 \& \dots \& A_k$) of a formula $A$ is truth-functionally inconsistent.
- The efficiency of a proof procedure $Q$ is then measured by a function $T_Q(k)$, the time required for $Q$ to terminate on a formula $A$ with $\phi(A) \le k$ and bounded length ($|A| \le \log_2 k$).
- The paper proves a lower bound and an upper bound for $T_Q(k)$, noting the "large gap" between them.
- This discussion is linked back to the P vs. NP problem, suggesting that improving these bounds is related to solving major open questions in complexity theory.
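In symbols, these two definitions can be paraphrased as follows (with $T_Q$ read as a worst case over the qualifying formulas; consult the paper for the exact wording):

```latex
\phi(A) \;=\; \min\bigl\{\, k \;:\; A_1 \wedge A_2 \wedge \dots \wedge A_k \ \text{is truth-functionally inconsistent} \,\bigr\}

T_Q(k) \;=\; \max\bigl\{\, \text{running time of } Q \text{ on } A \;:\; \phi(A) \le k,\ |A| \le \log_2 k \,\bigr\}
```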
Based on the provided text, the logical structure of the proofs is sound and follows a clear, deductive path. The paper's arguments are built on constructive reductions, where one problem is methodically transformed into another.
Here is a validation of the logic for the key theorems as presented in the document.
Theorem 1: NP P-reducible to {DNF tautologies}
The logic of this proof is to show that any problem $S$ in NP can be "disguised" as a tautology problem.
- The Setup: The proof starts with a nondeterministic Turing machine (NDTM) $M$ that accepts a set of strings $S$ within a polynomial time bound $Q(n)$.
- The Construction: For any given input string $w$, the goal is to build a specific propositional formula $A(w)$ in conjunctive normal form (CNF).
- The Formula's Meaning: $A(w)$ is meticulously constructed from proposition symbols that describe the machine's computation (tape contents, machine state, and head position at each step). The formula itself is a large conjunction of clauses that enforce all the rules of the Turing machine:
  - It must be in exactly one state at a time.
  - It must have exactly one symbol per tape square.
  - It must start in the correct initial configuration for input $w$.
  - It must follow its transition rules at every step.
  - It must eventually reach an accepting state.
- The Logical Pivot: Because of this construction, the formula $A(w)$ has a critical property: $A(w)$ is satisfiable if and only if the machine $M$ accepts the input $w$. If $M$ accepts $w$, there exists a valid accepting computation, and the steps of that computation provide a satisfying assignment that makes $A(w)$ true. If $M$ does not accept $w$, no such computation exists, and the formula $A(w)$ is contradictory (unsatisfiable).
- The Final Transformation: The proof then uses a standard logical equivalence:
  - Since $A(w)$ is satisfiable if and only if $w \in S$, its negation $\neg A(w)$ is a tautology (always true) if and only if $w \notin S$.
  - By De Morgan's laws, $\neg A(w)$ (the negation of a CNF formula) can be put directly into disjunctive normal form (DNF).
  - This entire construction (building $A(w)$ and negating it) is feasible in polynomial time.
- Validation: The logic holds. It reduces the question "Is $w \in S$?" to "Is the corresponding DNF formula $\neg A(w)$ a tautology?", establishing that {DNF tautologies} is at least as hard as any problem in NP.
Theorem 2: Equivalence of Problems
The logic here is to show that several problems are P-reducible to each other, forming a "cycle" of reductions that proves they all have the same degree of difficulty.
The proof logic, as described, relies on two key new reductions:
1. {DNF tautologies} is P-reducible to $D_3$
- Goal: To take any DNF formula $A$ and transform it into a $D_3$ formula $A'$ (where each disjunct has at most 3 conjuncts) such that $A$ is a tautology iff $A'$ is.
- Logic: The proof uses a "splitting" trick. If a disjunct $B_1$ has too many conjuncts (e.g., $R_1 \& R_2 \& \dots \& R_s$ with $s > 3$), a new, unused atom $P$ is introduced.
- The single disjunct $B_1$ is replaced by two new ones: $(P \& R_3 \& \dots \& R_s) \vee (\neg P \& R_1 \& R_2)$.
- This transformation preserves tautologyhood, and the process is repeated until every disjunct is small enough.
- Validation: This is a standard and logically sound reduction, and the transformation completes in polynomial time.
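The splitting trick can be sketched and sanity-checked by brute force. This is a toy Python encoding of my own (literals are signed integers, a DNF is a list of disjuncts), not code from the paper:

```python
from itertools import product

def to_d3(dnf, n):
    """Repeatedly split any disjunct with more than three conjuncts:
    R1 & R2 & ... & Rs  becomes  (P & R3 & ... & Rs) OR (NOT P & R1 & R2),
    where P is a fresh atom. Variables are 1..n; fresh atoms get n+1, n+2, ..."""
    terms = [list(t) for t in dnf]
    i = 0
    while i < len(terms):
        t = terms[i]
        if len(t) > 3:
            n += 1                       # fresh proposition symbol P
            terms[i] = [n] + t[2:]       # P & R3 & ... & Rs (one literal shorter)
            terms.append([-n] + t[:2])   # NOT P & R1 & R2
        else:
            i += 1
    return terms, n

def dnf_tautology(dnf, n):
    """Brute force: every valuation must make some disjunct true."""
    return all(any(all((lit > 0) == v[abs(lit) - 1] for lit in term)
                   for term in dnf)
               for v in product([False, True], repeat=n))

# A tautology with one long disjunct:
# (x1&x2&x3&x4) OR NOT x1 OR NOT x2 OR NOT x3 OR NOT x4
taut = [[1, 2, 3, 4], [-1], [-2], [-3], [-4]]
d3, m = to_d3(taut, 4)
```

Each split shortens the offending disjunct by one literal and adds a size-3 disjunct, so the whole process is polynomial, matching the paper's claim.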
2. $D_3$ is P-reducible to {subgraph pairs}
- Goal: To take a $D_3$ formula $A$ and transform it into two graphs, $G_1$ and $G_2$, such that $G_1$ is isomorphic to a subgraph of $G_2$ if and only if $A$ is not a tautology.
- Logic: This is the most complex reduction in the paper.
  - The formula $A$ is a disjunction of clauses: $C_1 \vee \dots \vee C_k$.
  - $A$ is not a tautology iff some "falsifying assignment" makes every clause $C_i$ false.
  - Since each $C_i$ is a conjunction ($R_{i1} \& R_{i2} \& R_{i3}$), making $C_i$ false means making at least one of its literals $R_{ij}$ false.
  - Graph $G_1$ is a complete graph on $k$ vertices $\{v_1, \dots, v_k\}$ (one vertex per clause).
  - Graph $G_2$ has vertices $\{u_{ij}\}$ (one vertex for each literal occurrence in the formula).
  - Edges in $G_2$ are created very specifically: an edge connects $u_{ij}$ and $u_{rs}$ only if they come from different clauses ($i \neq r$) and the corresponding literals $R_{ij}$ and $R_{rs}$ are not contradictory (i.e., not an "opposite pair" like $P$ and $\neg P$).
- The Logical Pivot: A mapping from the vertices of $G_1$ to the vertices of $G_2$ (a "homomorphism") corresponds to picking one literal to falsify from each clause. The edge structure of $G_2$ guarantees that such a mapping can exist only if the chosen literals are consistent (you never try to falsify both $P$ and $\neg P$).
- Therefore, a subgraph embedding exists if and only if a consistent falsifying assignment exists.
- The paper then refines this by adding "gadget" graphs to $G_1$ and $G_2$ to ensure the mapping is one-to-one (an isomorphism onto a subgraph), but the core logic remains.

Validation: The logic of this construction is sound. It translates the properties of the formula $A$ into properties of the graphs $G_1$ and $G_2$.
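The core of the construction (before the paper's extra gadgets) is small enough to check by brute force. This Python sketch is my own simplified encoding, not the paper's exact graphs: it checks whether a $k$-clique embeds into $G_2$, which, since same-clause vertices are never adjacent, amounts to one consistent literal choice per clause.

```python
from itertools import combinations, product

def build_graphs(clauses):
    """G1 is the complete graph on one vertex per clause. G2 has one vertex
    per literal occurrence u_ij; an edge joins u_ij and u_rs only when they
    come from different clauses (i != r) and the two literals are not an
    opposite pair like P and NOT P. Literals are signed integers."""
    k = len(clauses)
    g1_edges = set(combinations(range(k), 2))
    verts = [(i, j) for i, cl in enumerate(clauses) for j in range(len(cl))]
    g2_edges = set()
    for (i, j), (r, s) in combinations(verts, 2):
        if i != r and clauses[i][j] != -clauses[r][s]:
            g2_edges.add(((i, j), (r, s)))
    return g1_edges, verts, g2_edges

def k_clique_embeds(clauses, verts, g2_edges):
    """Brute force: does G1 (a k-clique) embed into G2? Same-clause vertices
    are never adjacent, so an embedding picks one literal per clause with all
    pairs adjacent, i.e. a consistent choice of literals to falsify."""
    per_clause = [[v for v in verts if v[0] == i] for i in range(len(clauses))]
    return any(all((a, b) in g2_edges or (b, a) in g2_edges
                   for a, b in combinations(pick, 2))
               for pick in product(*per_clause))

def dnf_tautology(clauses, n):
    return all(any(all((lit > 0) == v[abs(lit) - 1] for lit in cl)
                   for cl in clauses)
               for v in product([False, True], repeat=n))
```

On $P \vee \neg P$ (a tautology) the two literal-vertices are an opposite pair, so no edge and no embedding; on a non-tautology a consistent falsifying pick yields the clique, mirroring the "embedding iff not a tautology" pivot.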
Overall Conclusion: The proof logic described in the paper is valid. It relies on a series of ingenious, polynomial-time constructions that correctly map the properties of one problem (like NP machine acceptance or DNF tautology) onto the properties of another (like DNF tautology or subgraph isomorphism).
The provided text is a landmark academic paper, but it is famously dense. To make the core ideas more readable, you can apply several modern communication techniques.
The key is to add structure, define terms in plain English, and use analogies before presenting the formal logic.
Here is a breakdown of how to "refactor" the theorem's explanation for clarity.
1. Start with a "Why It Matters" Summary
Before diving into "P-reducibility" or "Turing machines," tell the reader the main takeaway. People will follow the complex details if they know the destination.
Original Concept: "It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be 'reduced' to the problem of determining whether a given propositional formula is a tautology."
More Readable Version:
"This paper proves that a huge class of 'hard' problems (called NP) can all be translated into one single, famous problem: SAT (Boolean Satisfiability).
This means that if you could find a fast way to solve SAT, you would automatically have a fast way to solve thousands of other hard problems, like the 'subgraph problem', protein folding, or cracking many forms of encryption.
The paper proves this by showing how to build a 'problem-translating machine' (a "reduction") that turns any NP problem into a giant SAT formula."
2. Define Key Concepts with Simple Analogies
The paper assumes the reader is an expert. A readable explanation defines its terms first.
Concept 1: Nondeterministic Turing Machine (The "Perfect Guesser")
Paper's Language: "a polynomial time-bounded nondeterministic Turing machine"
- Simple Analogy: "Imagine a computer that can perfectly guess the right path.
A normal (deterministic) computer trying to solve a maze has to try one path, hit a dead end, backtrack, and try another. This takes a long time.
A nondeterministic computer gets to a fork in the road and clones itself, with each clone trying one path simultaneously. If any clone finds the exit, the machine says 'Yes, a path exists!'
Problems in NP (Nondeterministic Polynomial time) are all the problems this 'perfect guesser' could solve quickly (in polynomial time)."
Concept 2: Tautology / Satisfiability (The "Logic Puzzle")
Paper's Language: "determining whether a given propositional formula is a tautology." (Note: The paper's proof actually constructs a satisfiable formula, which is the flip-side of a tautology. We'll focus on Satisfiability (SAT) as it's more direct.)
- Plain English: "A SAT problem is a big logic formula, like:
(A or B) AND (NOT A or C) AND (B or NOT C)
The Goal: Can you find a set of True/False values for A, B, and C that makes the entire statement True? (For this one, the answer is yes: try A=True, B=True, C=True.)
Why it's hard: For 3 variables, there are only $2^3 = 8$ combinations to check. For 100 variables, there are $2^{100}$ combinations, far more than any computer could ever try. There is no known shortcut."
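For a formula this small, the "check every combination" approach is easy to write down. A short sketch (the clause encoding and helper name are mine, not from the paper):

```python
from itertools import product

def satisfying_assignments(clauses, names):
    """Try all 2^n True/False assignments. A clause is a list of
    (variable name, wanted polarity) pairs; it is true when any literal
    matches, and the whole formula is an AND of its clauses."""
    sols = []
    for bits in product([False, True], repeat=len(names)):
        v = dict(zip(names, bits))
        if all(any(v[name] == pol for name, pol in cl) for cl in clauses):
            sols.append(v)
    return sols

# (A or B) AND (NOT A or C) AND (B or NOT C)
clauses = [[("A", True), ("B", True)],
           [("A", False), ("C", True)],
           [("B", True), ("C", False)]]
sols = satisfying_assignments(clauses, ["A", "B", "C"])
```

Running this finds exactly three satisfying assignments out of the eight, including A=True, B=True, C=True; the exponential loop over `product([False, True], repeat=n)` is precisely the "no known shortcut" cost.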
Concept 3: P-Reducibility (The "Problem Translator")
Paper's Language: "Here 'reduced' means... that the first problem can be solved deterministically in polynomial time provided an oracle is available for solving the second."
- Simple Analogy: "This is the 'problem translator.' We are 'reducing' Problem A to Problem B.
Imagine you don't know how to solve Sudoku (Problem A), but you have a magic 'friend' (an oracle) who can instantly solve any SAT puzzle (Problem B).
A reduction is a rulebook that lets you translate any Sudoku puzzle into a giant SAT formula. The translation process itself must be fast (polynomial time).
How you solve Sudoku: You get a new Sudoku, use your rulebook to translate it into a SAT formula, and hand it to your friend. Their instant True/False answer also tells you the answer to your Sudoku.
What this proves: Sudoku (Problem A) is "no harder than" SAT (Problem B).
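As a toy instance of this "translate, then ask the oracle" pattern, here is a sketch of my own (not from the paper) that reduces graph 2-colorability to SAT. Note that 2-coloring is actually easy (it is in P), so this only illustrates the shape of a reduction, not hardness:

```python
from itertools import product

def two_coloring_to_sat(edges):
    """The 'rulebook': one Boolean variable per vertex (its color); for each
    edge (u, v), two clauses force the endpoints to get different colors:
    (u OR v) AND (NOT u OR NOT v)."""
    clauses = []
    for u, v in edges:
        clauses.append([(u, True), (v, True)])
        clauses.append([(u, False), (v, False)])
    return clauses

def sat_oracle(clauses, n):
    """Stand-in for the magic friend: brute-force SAT over variables 0..n-1."""
    return any(all(any(bits[var] == pol for var, pol in cl) for cl in clauses)
               for bits in product([False, True], repeat=n))

def two_colorable(num_vertices, edges):
    # Translate the graph question into SAT, then hand it to the oracle.
    return sat_oracle(two_coloring_to_sat(edges), num_vertices)
```

The translation step is clearly polynomial (two clauses per edge), so a fast SAT oracle would immediately give a fast 2-coloring procedure; this is exactly the logic of P-reducibility.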
3. Re-structure the Proof Logic
Now, you can put these pieces together to explain the actual proof clearly.
Readable Proof Outline:
The paper's "master-stroke" is proving that any NP problem can be translated into a SAT problem. It does this with a universal "rulebook."
The Goal: Take any program $M$ (our "perfect guesser" NDTM) and an input $w$, and build a giant SAT formula $A(w)$ that is satisfiable if and only if the program $M$ accepts the input $w$.
- The "Blueprint" for the Formula: The formula is built from thousands of variables that represent the entire computation history of the machine over time. The variables are like cells in a giant spreadsheet:
  - State_is_q_at_time_t: Is the machine in state $q$ at step $t$? (True/False)
  - Head_is_at_cell_s_at_time_t: Is the tape head at position $s$ at step $t$? (True/False)
  - Cell_s_has_symbol_i_at_time_t: Is the symbol $\sigma_i$ written in cell $s$ at step $t$? (True/False)
- The "Rules" of the Formula: The formula $A(w)$ is a massive AND of several groups of rules, which force the variables to behave like a real computation:
  - Part E (Start): "At time $t=1$, the machine must be in the start state, and the tape must contain the input $w$."
  - Part D (Unique state): "At any given time $t$, the machine must be in one and only one state." (Parts B and C impose the same uniqueness on the scanned square and the tape symbols.)
  - Parts F, G, H (Valid moves): "If at time $t$ the machine is in state $q_i$ and reads symbol $\sigma_j$, then at time $t+1$ it must be in a valid next state and the tape must be updated according to the machine's rules."
  - Part I (Finish): "At some point in time, the machine must enter the 'accepting' state."
- The "Aha!" Moment: If the machine $M$ can accept the input $w$, then a valid, accepting computation history exists. This history provides the exact set of True/False values that makes the entire giant formula $A(w)$ satisfiable. If no such computation exists, the rules contradict each other, and the formula is unsatisfiable.
By doing this, Cook created a "translator" that turns any NP problem into a SAT problem, proving that SAT is the "hardest" problem in all of NP.
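The "one and only one" rule groups (B, C, D) are conventionally encoded in CNF as one at-least-one clause plus a pairwise at-most-one clause for every pair of variables. A small sketch (my own toy encoding, matching the clause format used above):

```python
from itertools import combinations, product

def exactly_one(variables):
    """CNF for 'exactly one of these variables is true': one at-least-one
    clause, plus an at-most-one clause (NOT a OR NOT b) for each pair."""
    clauses = [[(v, True) for v in variables]]
    clauses += [[(a, False), (b, False)] for a, b in combinations(variables, 2)]
    return clauses

def count_models(clauses, n):
    """How many of the 2^n assignments satisfy the CNF?"""
    return sum(all(any(bits[v] == pol for v, pol in cl) for cl in clauses)
               for bits in product([False, True], repeat=n))
```

For $n$ variables this produces $1 + \binom{n}{2}$ clauses and admits exactly $n$ satisfying assignments, one per choice of the single true variable, which is why the tableau stays polynomial in size.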
Writing a complete, machine-verified proof of the Cook-Levin theorem in Lean is a massive, research-level undertaking that would span thousands of lines.
However, I can provide a rigorous outline of the proof in Lean, defining the key structures and theorems. This demonstrates how the proof from the paper is formally constructed, which is a common practice in formalization.
We will follow the paper's structure precisely, building the formula $A(w)$ from parts B, C, D, E, F, G, H, and I.
1. Preliminaries: Defining Boolean Logic
First, we define what a propositional formula (PropForm) and satisfiability (Sat) mean. We will use a Fin n type to represent a finite set of $n$ atomic variables.
import Mathlib.Data.Fin.VecNotation
import Mathlib.Data.Polynomial.Basic
-- A valuation is an assignment of True/False to `n` variables
def Valuation (n : Nat) := Fin n → Bool
-- Define propositional formulas inductively
inductive PropForm (n : Nat)
| atom : Fin n → PropForm n
| neg : PropForm n → PropForm n
| and : PropForm n → PropForm n → PropForm n
| or : PropForm n → PropForm n → PropForm n
-- Define what it means for a valuation `v` to satisfy a formula `f`
def Sat (v : Valuation n) : PropForm n → Prop
| .atom i => v i = true
| .neg f => ¬ (Sat v f)
| .and f₁ f₂ => (Sat v f₁) ∧ (Sat v f₂)
| .or f₁ f₂ => (Sat v f₁) ∨ (Sat v f₂)
-- The set of satisfiable formulas
def IsSat (f : PropForm n) : Prop := ∃ (v : Valuation n), Sat v f
2. Defining Nondeterministic Turing Machines (NDTM)
The paper's proof relies on a nondeterministic Turing machine (NDTM). Mathlib provides deterministic TMs, so we must define a nondeterministic one, whose step function returns a Finset (finite set) of possible next configurations.
-- Define the components of a Turing Machine
structure NDTM where
(State : Type)
(TapeSymbol : Type)
(blank : TapeSymbol)
(initialState : State)
(acceptState : State)
[finState : Fintype State]
[finTape : Fintype TapeSymbol]
-- A configuration is the machine's state, tape, and head position
structure Config (M : NDTM) where
state : M.State
tape : ℤ → M.TapeSymbol
headPos : ℤ
-- The nondeterministic transition function
-- Given a state and a symbol, returns a set of next moves
-- (new state, symbol to write, direction to move)
def Transition (M : NDTM) :=
M.State → M.TapeSymbol → Finset (M.State × M.TapeSymbol × (ℤ → ℤ))
-- A single nondeterministic step.
-- (`Finset.image` needs `DecidableEq (Config M)`; a full development would
--  either derive that or use `Set (Config M)` here instead.)
def nd_step (M : NDTM) (t : Transition M) (c : Config M) : Finset (Config M) :=
  (t c.state (c.tape c.headPos)).image (fun ⟨q', σ', move⟩ =>
    { state := q',
      tape := Function.update c.tape c.headPos σ',
      headPos := move c.headPos })
-- A computation is a sequence of configurations
def Computation (M : NDTM) (t : Transition M) (c₀ : Config M) : Prop :=
∃ (steps : ℕ → Config M),
steps 0 = c₀ ∧
(∀ n, steps (n + 1) ∈ nd_step M t (steps n))
-- Acceptance: there exists a computation that reaches the accept state
def Accepts (M : NDTM) (t : Transition M) (w : String) : Prop :=
let c₀ : Config M := sorry -- ... (definition of initial config from w) ...
∃ (comp : ℕ → Config M),
(comp 0 = c₀) ∧
(∀ n, comp (n + 1) ∈ nd_step M t (comp n)) ∧
(∃ n, (comp n).state = M.acceptState)
3. Defining the Variables for the Formula
The proof constructs a formula $A(w)$ based on a polynomial time bound $T = Q(n)$. The variables describe the machine's computation tableau (a $T \times T$ grid).
We follow the paper's variable names: $P$, $Q$, and $S$. We need a way to map these 3D indices (e.g., (i, s, t)) to a single flat index for our PropForm (n).
variable (T : ℕ) (l : ℕ) (r : ℕ) -- T = time, l = symbols, r = states
-- We need (T * T * l) + (T * r) + (T * T) total variables
def num_vars (T l r : ℕ) : ℕ := (T * T * l) + (T * r) + (T * T)
-- We define functions that map our semantic indices to the flat `Fin n`
-- These are our "proposition symbols" from the paper
def varP (s t : Fin T) (i : Fin l) : Fin (num_vars T l r) :=
-- ... (math to compute a unique flat index) ...
def varQ (t : Fin T) (i : Fin r) : Fin (num_vars T l r) :=
-- ... (math to compute a unique flat index) ...
def varS (s t : Fin T) : Fin (num_vars T l r) :=
-- ... (math to compute a unique flat index) ...
-- Helper functions to create atoms from these variables
def P (s t : Fin T) (i : Fin l) : PropForm (num_vars T l r) := .atom (varP s t i)
def Q (t : Fin T) (i : Fin r) : PropForm (num_vars T l r) := .atom (varQ t i)
def S (s t : Fin T) : PropForm (num_vars T l r) := .atom (varS s t)
4. Rigorous Construction of the Formula $A(w)$
Now we build the formula $A(w)$, which the paper calls B & C & D & E & F & G & H & I. We use PropForm.and for & and PropForm.or for v.
-- Helper: "Exactly one of these formulas is true"
def ExactlyOne (fs : List (PropForm n)) : PropForm n :=
  let atLeastOne := match fs with
    | [] => .neg (.atom 0)          -- degenerate case; the lists we build are non-empty
    | f :: rest => rest.foldl .or f
  let atMostOne : PropForm n := sorry -- pairwise: ¬fᵢ ∨ ¬fⱼ for all i < j
  .and atLeastOne atMostOne
-- B: At each time t, one and only one square is scanned
def B_t (t : Fin T) : PropForm (num_vars T l r) :=
ExactlyOne (List.finRange T |>.map (fun s => S s t))
def B : PropForm (num_vars T l r) :=
  match (List.finRange T |>.map B_t) with
  | [] => .atom 0                  -- degenerate case; in practice T > 0
  | f :: rest => rest.foldl .and f
-- C: At each time t and square s, one and only one symbol is present
def C_s_t (s t : Fin T) : PropForm (num_vars T l r) :=
ExactlyOne (List.finRange l |>.map (fun i => P s t i))
def C : PropForm (num_vars T l r) :=
-- ... (a conjunction of C_s_t for all s and t) ...
-- D: At each time t, there is one and only one state
def D_t (t : Fin T) : PropForm (num_vars T l r) :=
ExactlyOne (List.finRange r |>.map (fun i => Q t i))
def D : PropForm (num_vars T l r) :=
-- ... (a conjunction of D_t for all t) ...
-- E: Asserts initial conditions (w = σ_i₁...σ_in)
-- This depends on the specific input `w`
def E (w_symbols : List (Fin l)) : PropForm (num_vars T l r) :=
let t₀ := 0 -- time 0
let q₀ := 0 -- initial state
let s₀ := 0 -- initial square
let b := 0 -- blank symbol
let initialState := Q t₀ q₀
let initialScan := S s₀ t₀
let tapeInit := -- ... (conjunction of P s t₀ i for each symbol i in w_symbols)
let blankInit := -- ... (conjunction of P s t₀ b for squares after w)
.and initialState (.and initialScan (.and tapeInit blankInit))
-- F, G, H: Assert that the computation follows the transition rules
-- This is the most complex part, encoding the machine's logic.
-- G_i_j_t asserts: if at time t, state is q_i and symbol is σ_j...
-- ... then at time t+1, the state is q_k (as per the transition)
def G (t : Transition M) : PropForm (num_vars T l r) :=
-- ... (A massive conjunction over all s, t, i, j encoding the `nd_step` logic)
-- Form: (¬Q t i) ∨ (¬S s t) ∨ (¬P s t j) ∨ (Q (t+1) k)
-- This must be adapted for nondeterminism (a disjunction of possible next states)
sorry
-- I: Asserts that the machine reaches an accepting state
def I (q_accept : Fin r) : PropForm (num_vars T l r) :=
  -- "At some time t, the machine is in the accept state"
  match (List.finRange T |>.map (fun t => Q t q_accept)) with
  | [] => .neg (.atom 0)           -- degenerate case; in practice T > 0
  | f :: rest => rest.foldl .or f
5. The Main Theorem
Finally, we state the Cook-Levin theorem.
Theorem 1 from the paper states that any set $S$ in NP is P-reducible to {DNF tautologies}. The proof constructs a satisfiable CNF formula $A(w)$ iff $M$ accepts $w$. The negation $\neg A(w)$ is then a DNF tautology iff $w \notin S$.
We will state the satisfiability part.
theorem CookLevinTheorem (M : NDTM) (t : Transition M) (w : String) :
-- There exists a polynomial bound Q(n)
∃ (Q : Polynomial ℕ),
let n := w.length
let T := Q.eval n -- The time bound
-- The formula A(w)
  let A_w := .and B (.and C (.and D (.and (E w) (.and G I)))) -- (parts F and H elided in this sketch)
  -- The reduction function `f` (that builds A_w) is poly-time
  -- (`PolyTimeConstructible` is a placeholder predicate, not defined above)
  (PolyTimeConstructible A_w) ∧
-- The core equivalence:
-- M accepts w if and only if A(w) is satisfiable
(Accepts M t w ↔ IsSat A_w)
:= by
  -- The proof in Lean would be a massive, constructive argument:
  -- 1. Provide the polynomial `Q` that bounds the NDTM.
  -- 2. Split the conjunction into its two parts.
  -- 3. Poly-time part: show the size of A_w (and the time to build it)
  --    is polynomial in `n`.
  -- 4. The `↔`:
  --    - (Accepts → IsSat): from `h : Accepts M t w`, extract an accepting
  --      computation (a sequence of Configs), build a `Valuation v` from it,
  --      and prove `Sat v A_w` group by group (B, C, D, ..., I).
  --    - (IsSat → Accepts): from a satisfying valuation `v`, read off a
  --      computation `steps : ℕ → Config M`; the clause groups B, C, D, ...
  --      force `steps` to follow the NDTM's transition rules.
  sorry