Here’s a list of ~12 recent papers (with authors + URLs) on lightweight / resource‐constrained post‐quantum cryptography (PQC) — useful if you’re diving deeper into the field.
“A post-quantum lattice based lightweight authentication and code-based hybrid encryption scheme for resource-constrained IoT” — Comnet, 2022. (ACM Digital Library)
9. D. Xu, X. Wang, Y. Hao, Z. Zhang, Q. Hao, H. Jia, H. Dong, L. Zhang — “Ring-ExpLWE: A High-Performance and Lightweight Post-Quantum Encryption Scheme for Resource-Constrained IoT Devices” — referenced via MDPI survey. (MDPI)
10. “A comprehensive and realistic performance evaluation of post-quantum cryptography algorithms in relation to consumer IoT devices” — ScienceDirect, 2025. (ScienceDirect)
11. Authors of “A Lightweight BRLWE-based Post-Quantum Cryptosystem with Side…”
Here are the labs/companies that most clearly straddle both worlds—active work on lattice/PQC (incl. SVP/LWE, reduction/attacks, HE) and serious LLM research—so they’re closest to the “SVP builder/solver + LLM R&D” vibe:
Meta AI (FAIR) — published concrete LWE attack benchmarks (uSVP, SALSA, etc.) and tooling, while leading major LLM lines (Llama 3/3.1/3.2). (facebookresearch.github.io)
Microsoft Research (MSR) — long-running lattice/PQC work (e.g., LatticeCrypto library) and an in-house LLM program (Phi-2/3/3.5, SLM research). (microsoft.com)
Google Research / DeepMind — PQC adoption and standards engagement on the security side; simultaneously pushing flagship LLMs (Gemini 2.5) and agents for algorithm design. (Google Online Security Blog)
IBM Research — extensive lattice/PQC research and standardization contributions, plus foundation-model/LLM work via watsonx.ai. (IBM Research)
SandboxAQ — explicitly positions at the intersection of AI and quantum/PQC; publishes on lattice-crypto risk and runs PQC deployments with partners. (sandboxaq.com)
Zama — lattice-based FHE stack (RLWE schemes) with an explicit push for encrypted LLM inference (Concrete-ML, HE for transformers). (docs.zama.ai)
NTT Research / NTT R&D — deep cryptography lab plus contemporary LLM/AI research output (ICLR/ICML papers). (NTT Research, Inc.)
Notes
The closest match to “SVP builder/solver” in a modern, public program is Meta AI’s LWE attack benchmarking (which concretely exercises lattice reduction pipelines and reports BKZ/uSVP performance), and academic/industry work exploring ML guidance for BKZ (e.g., RL-tuned BKZ). (facebookresearch.github.io)
If you also want HE-for-LLMs specifically (practical blend of lattice crypto with LLM inference), Zama is the clearest industry example. (docs.zama.ai)
===
Here is a curated list of research papers that lie at the intersection of lattice/cryptography (e.g., lattice reduction, FHE) and large‐language models (LLMs) or transformers. I’ve included the URL for each, plus a short note on relevance.
Focuses on lattice reduction (module-BKZ) analysis rather than LLMs; relevant for the lattice side of the blend.
6. Practical Secure Inference Algorithm for Fine-tuned Large Language Model Based on Fully Homomorphic Encryption — arXiv: https://arxiv.org/abs/2501.01672 (arXiv)
Combines FHE + PEFT (LoRA) in LLMs; relates to protecting inference and model weights via lattice-based crypto.
7. Investigating Deep Reinforcement Learning for BKZ Lattice Reduction — (ResearchGate) [indirect link]
Applies RL (which overlaps ML) to improve BKZ, a core lattice-reduction algorithm (SVP/SIVP context).
8. A Complete Analysis of the BKZ Lattice Reduction Algorithm — (ACM Digital Library) [DOI link]
A rigorous analysis of lattice reduction; again more on the lattice side than the LLM side, but relevant for the “solver” component.
===
PQC/lattices ⇄ LLM/ML intersection. I grouped them so you can skim.
The source, a transcript from a YouTube video titled "Innovation Manager Corner - PQC news," focuses on recent developments in Post-Quantum Cryptography (PQC) and the transition toward its adoption. It reports on the status of the NIST PQC standardization process, noting that the HQC algorithm will be standardized for key establishment. The video also discusses European cybersecurity initiatives, highlighting the release of version 2.0 of the Agreed Cryptographic Mechanisms document, which emphasizes the need for hybridization using both classical and PQC schemes. Furthermore, the source examines various side-channel attacks targeting PQC algorithms such as CRYSTALS-Kyber and Falcon, and announces a new European project, PQC Attack Resilience, aimed at creating robust cryptographic solutions. Finally, it mentions advances in quantum computing hardware, such as Microsoft's Majorana 1 device and the launch of a 50-qubit quantum computer in Europe, along with Google's plan to implement quantum-safe digital signatures in its Key Management Service (KMS).
1. People and Organizations
1.1 Maria Chiara
Role: Security Engineer
Organization: Security Pattern
1.2 Security Pattern
Involvement: Collaborating in the QUBIP European project
2. QUBIP European Project
Start Date: September 2023
Goal:
To design a reference and replicable transition process to Post Quantum Cryptography (PQC) for protocols, networks, and systems.
3. NIST Post Quantum Cryptography Standardization (IR8545)
3.1 Document Overview
Title: IR8545 — Status Report on the Fourth Round of NIST PQC Standardization
Publication Date: March 2025
Content: Describes the evaluation and selection process for key establishment algorithm candidates.
3.2 Fourth-Round Candidate Algorithms
BIKE
Classic McEliece
SIKE
HQC
3.3 Selection Outcome
Standardized Algorithm: HQC (the only key-establishment algorithm selected for standardization from the fourth round)
4. European Cybersecurity Certification Group (ECCG) and ENISA
The text provides excerpts from a mathematical talk concerning the study of singularities of pairs in both positive characteristic $p$ and characteristic zero. The speaker explains that traditional methods for studying singularities in characteristic zero, such as resolution of singularities and vanishing theorems of cohomology, are unavailable in positive characteristic, necessitating the creation of a "bridge" to transport results between the two settings. A key object of study is the pair $(A, \mathcal{J}^e)$, consisting of a variety $A$ and a formal product of ideals $\mathcal{J}_1^{e_1}\dots\mathcal{J}_r^{e_r}$ with real exponents $e_i$, and the speaker introduces several key invariants, such as the log discrepancy and the minimal log discrepancy (MLD). The main goal is to prove that certain crucial theorems regarding properties like discreteness and the ACC (Ascending Chain Condition) hold in positive characteristic, mirroring established results in characteristic zero, via a novel lifting theorem.
==
The subject of the talk is the singularities of pairs in characteristic $p$ and characteristic zero.
The primary approach to studying singularities in characteristic zero relies on the existence of resolution of singularity.
Other key properties available in characteristic zero include generic smoothness of morphism.
Characteristic zero studies also utilize Vanishing theorems of cohomology.
These crucial properties—resolution, generic smoothness, and vanishing theorems—are not available in positive characteristics.
The speaker's project aims to construct a "bridge" to transport statements proven in characteristic zero to positive characteristics.
The core object of study is a pair, denoted $(A, \mathcal{J}^e)$.
In this pair, $A$ represents a variety over a field.
$\mathcal{J}^e$ is a product of ideals, written formally as $\mathcal{J}_1^{e_1} \dots \mathcal{J}_r^{e_r}$.
The exponents $e_i$ in the multi-ideal are required to be positive real numbers.
An ideal raised to a real exponent is considered only a formal product, and not necessarily an ideal itself.
The concept of the pair originated within the field of birational geometry.
Initial singularity studies naturally focused only on the variety $X$.
Birational geometers then began studying the pair consisting of the variety and a divisor, $(X, D)$.
The pair concept is convenient for utilizing the induction of dimension frequently used in birational geometry.
The pair $(X, D)$ is equivalent to the pair consisting of $X$ and the defining ideal of $D$.
This generalized naturally to the pair $(X, \mathcal{A})$, where $\mathcal{A}$ is simply an ideal.
Using integer exponents in the pair is natural, as the result remains an ideal.
Rational exponents are generally acceptable because raising an ideal to an integer multiple corresponding to the denominator still yields an ideal.
Real exponents appear naturally in birational geometry, such as in the BCHM (Birkar-Cascini-Hacon-McKernan) context, when considering the limit of rational exponents.
The speaker’s viewpoint is justified by a "surprising theorem of Sommer".
For the purpose of the talk, the object is simplified to the setting where the variety $A$ is assumed to be smooth.
$A$ is a smooth variety defined over $k$, where $k$ is a field of arbitrary characteristic.
$E$ is defined as a prime divisor over $A$ with center at zero.
The existence of such a prime divisor $E$ implies the existence of a birational modification $\pi: \tilde{A} \to A$ where $E$ is an irreducible divisor on the normal variety $\tilde{A}$.
$v_E$ denotes the discrete valuation corresponding to the prime divisor $E$.
For an ideal $\mathcal{J}$, $v_E(\mathcal{J})$ is defined as the minimum value of $v_E(x)$ for elements $x$ in $\mathcal{J}$.
$k_E$ is the coefficient of $E$ in the relative canonical divisor component and is a non-negative integer.
The log discrepancy for the pair is defined as $a_E(A, \mathcal{J}^e) = k_E + 1 - \sum_i e_i\, v_E(\mathcal{J}_i)$.
This log discrepancy value is a real number, even if $e_i$ are real numbers.
The Minimal Log Discrepancy (MLD) at point 0 is defined as the infimum of $a_E(A, \mathcal{J}^e)$ over all prime divisors $E$ over $A$ with center at zero.
MLD is constrained to either be greater than or equal to zero, or equal to minus infinity.
If one log discrepancy becomes negative, the MLD automatically becomes $-\infty$.
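As a worked illustration (standard, not from the talk): take the simplest pair, the maximal ideal of the origin in the affine plane, where the log discrepancies of monomial valuations can be computed directly from the definition above.

```latex
\[
  A=\mathbb{A}^2,\qquad \mathcal{J}=\mathfrak{m}\ \text{(the ideal of the origin)}.
\]
For the monomial valuation $v_E$ with weights $(s,t)$ on the coordinates,
\[
  k_E = s+t-1, \qquad v_E(\mathfrak{m})=\min(s,t),
  \qquad a_E(A,\mathfrak{m}^e) = s+t-e\min(s,t).
\]
Taking $s=t$ gives $a_E = t(2-e)$, so
\[
  \operatorname{mld}_0(\mathbb{A}^2,\mathfrak{m}^e)=2-e \ \ (e\le 2),
  \qquad \operatorname{mld}_0(\mathbb{A}^2,\mathfrak{m}^e)=-\infty \ \ (e>2)
\]
(let $t\to\infty$ in the second case), and the log canonical threshold of $\mathfrak{m}$ is $2$.
```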
A larger MLD value implies a better or "milder" singularity, meaning it is closer to being non-singular.
A prime divisor computing MLD does not always exist.
In characteristic zero, a prime divisor computing MLD always exists, relying on the existence of appropriate resolution of singularities.
A pair is defined as log canonical if its MLD is greater than or equal to zero.
Log canonical singularity is considered "marginally acceptable" in birational geometry.
Talachita's result, for characteristic zero, states that the set of log canonical multi-ideals is a discrete set.
In characteristic zero, the Log Canonical Threshold (LCT) is known to be a rational number.
The main application of the bridge is demonstrating that the discreteness result (Talachita's theorem) holds for positive characteristic when $A$ is a smooth point with a perfect residue field.
The “Hodge conjecture in toric varieties” sits at the crossroads of algebraic geometry, combinatorics, and topology — with emphasis on intersection cohomology, Hodge modules, and combinatorial structures on fans.
Below is a consolidated roster of leading figures and lines of research, drawn from all three sources.
🧩 Foundational Figures in Intersection Cohomology
Robert MacPherson (IAS) & Mark Goresky
Co-inventors of Intersection Homology (IH) — the replacement for ordinary cohomology on singular spaces.
Foundational for all later work on toric and singular varieties.
The document addresses the Intersection Cohomology of a Fan and the Hodge Conjecture for Toric Varieties.
The problem involves a projective, (potentially singular) toric variety $X_{\Sigma}$ defined by a fan $\Sigma$ in a lattice $N$.
The Intersection Hodge Conjecture for $X_{\Sigma}$ asserts that the "Hodge-theoretic" cycle class map is surjective.
The map in question is $cl_{IH}:\bigoplus_{k}\mathcal{Z}^{k}(X_{\Sigma})_{\mathbb{Q}}\rightarrow\bigoplus_{k}IH^{2k}(X_{\Sigma},\mathbb{Q})\cap IH^{k,k}(X_{\Sigma})$.
$\mathcal{Z}^{k}(X_{\Sigma})$ represents the group of algebraic cycles of codimension $k$.
$IH^{*}$ denotes the intersection cohomology.
The fundamental open problem is to find a purely combinatorial description for both Hodge-theoretic and algebraic cycle classes.
This description should be in terms of the fan $\Sigma$ and the lattice $N$.
The problem also requires proving the equivalence of these two descriptions.
The Two Objectives
Objective 1 (Hodge Side): Develop a combinatorial algorithm using only fan data to compute a basis for "combinatorial Hodge classes".
"Combinatorial Hodge classes" are defined as the rational classes in $IH^{k,k}(X_{\Sigma})$.
Objective 2 (Algebraic Side): Prove that the space from Objective 1 is spanned precisely by the intersection cohomology classes of torus-invariant subvarieties $V(\tau)$.
This must hold for all cones $\tau \in \Sigma$.
Background and Context
This problem is described as a specialized, combinatorial version of one of the deepest unsolved problems in mathematics.
The Hodge Conjecture: It states that for a smooth projective variety $X$, any class in $H^{2k}(X,\mathbb{Q})\cap H^{k,k}(X)$ is the class of an algebraic cycle.
The conjecture connects the abstract topology of $X$ to its concrete algebraic geometry.
Intersection Cohomology: Most toric varieties are singular.
For singular varieties, standard cohomology $H^{*}(X)$ lacks good properties like Poincaré Duality.
Intersection Cohomology ($IH^{*}(X)$) is the correct replacement for singular varieties.
Therefore, the Hodge Conjecture must be reformulated in terms of $IH^{*}(X)$.
The Fan: For a toric variety $X_{\Sigma}$, every geometric and topological property is completely encoded in the combinatorial data of its fan $\Sigma$.
The "natural" algebraic cycles on $X_{\Sigma}$ are the torus-invariant subvarieties $V(\tau)$.
These subvarieties correspond one-to-one with the cones $\tau \in \Sigma$.
The cohomology of smooth toric varieties is well understood combinatorially and related to the Stanley-Reisner ring of the fan.
The combinatorial description of intersection cohomology $IH^{*}(X_{\Sigma})$ is much more complex.
Key Challenges
Challenge 1: Computing $IH^{*}(X_{\Sigma})$ combinatorially is hard, as there is no simple, general formula. A successful approach might need a new combinatorial invariant that captures the "failure" of $X_{\Sigma}$ to be smooth.
Challenge 2: A main creative step is to define a "combinatorial Hodge class" by finding a "combinatorial signature" within the fan data that identifies classes of type $(k, k)$.
This signature would be some new invariant of the cones and their relationships within the lattice $N$.
Challenge 3: A solution would need to prove surjectivity by showing the new combinatorial description (from Challenge 2) generates a set identical to the set of torus-invariant cycles $V(\tau)$.
Proving that no other algebraic cycles are needed would be a major breakthrough.
The source provides excerpts from a conversation between Terence Tao, a celebrated mathematician, and Lex Fridman, where they discuss a vast array of topics in mathematics and physics. The discussion centers on difficult unsolved problems, such as the Navier-Stokes regularity problem and the Riemann Hypothesis, exploring the nature of singularities, fluid dynamics, and the distribution of prime numbers. Tao also describes his work on the Kakeya problem and his conceptualization of a "liquid computer" to model mathematical blowup scenarios, drawing parallels to Turing machines and cellular automata. Furthermore, the conversation examines the role of technology and collaboration in modern mathematics, specifically mentioning the use of the Lean proof assistant and the potential impact of artificial intelligence on conjecture generation and proof formalization.
==
Terence Tao is widely considered one of the greatest mathematicians in history, often referred to as the Mozart of math. He has been recognized with both the Fields Medal and the Breakthrough Prize in mathematics.
Here are 60 detailed points concerning mathematics, physics, AI, and the work discussed in the sources:
Terence Tao has contributed groundbreaking work across an astonishing range of fields in mathematics and physics.
Tao’s ability to go both deep and broad in mathematics is reminiscent of the great mathematician Hilbert.
Tao identifies primarily as a fox, favoring broad knowledge and seeing connections between disparate fields, over the hedgehog style of singular deep focus.
He values mathematical arbitrage: taking tricks learned in one field and adapting them to a seemingly unrelated field.
Really interesting mathematical problems lie on the boundary between what is easy and what is considered hopeless.
The Kakeya problem caught Tao's eye during his PhD studies and has recently been solved.
Historically, the Kakeya problem originated as a puzzle posed by Japanese mathematician Soichi Kakeya around 1918.
The 2D Kakeya puzzle asks for the minimum area required to turn a unit needle around on a plane.
Besicovitch demonstrated that in 2D, the needle can be turned around using arbitrarily small area (e.g., 0.001).
The 3D Kakeya conjecture concerned the minimum volume needed to rotate a very thin object (like a telescope tube of thickness delta) to point in every direction.
The 3D conjecture proposed that this minimum volume decreases very slowly, roughly logarithmically, as the thickness delta diminishes.
The Kakeya problem connects surprisingly to partial differential equations (PDEs), number theory, geometry, and combinatorics.
One connection is to wave propagation, where a localized wave packet (like a light ray) occupies a tube-like region in space and time.
The Navier-Stokes regularity problem is a famous unsolved Millennium Prize Problem offering a million-dollar prize.
This problem concerns the Navier-Stokes equations, which govern the flow of incompressible fluids, such as water.
The key question is whether the velocity of the fluid can ever concentrate so much that it becomes infinite at a point, known as a singularity.
Tao published a 2016 paper on "Finite Time Blowup for an Averaged Three-Dimensional Navier-Stokes Equation," exploring this difficulty.
Finite time blow up occurs if all the energy of a fluid concentrates into a single point in a finite amount of time.
Water is naturally viscous, meaning that if energy is spread out (dispersed), viscosity damps the energy down.
The difficulty arises from the possibility of a "Maxwell's demon" effect, where energy is pushed into smaller and smaller scales faster than viscosity can control it.
The Navier-Stokes equation is a struggle between linear dissipation (viscosity, which calms things down) and nonlinear transport (which causes problems).
3D Navier-Stokes is considered supercritical, meaning that at small scales, the nonlinear transport terms dominate the viscosity terms.
In 2D, blowup was disproved because the equations are critical, where transport and viscosity forces are roughly equal even at small scales.
Tao engineered a blowup for an averaged Navier-Stokes equation to create an obstruction that rules out certain methods for solving the true equation.
This engineered blowup required sophisticated programming of delays, functioning like an electronic circuit or a Rube Goldberg machine described mathematically.
This work suggests the possibility of constructing a liquid computer—a fluid analog of a Turing or von Neumann machine—that could induce blowup through self-replication and scaling.
The concept of a fluid machine that creates a smaller, faster version of itself, transferring all its energy to the new state, provides a roadmap for finite time blowup.
The idea of liquid computers has precedent in cellular automata like Conway's Game of Life, where simple rules lead to complex structures, including self-replicating objects.
The most incomprehensible thing about the universe is that it is comprehensible (the unreasonable effectiveness of mathematics, noted by Einstein).
Universality helps explain comprehensibility: macro-scale laws often emerge from micro-scale complexity depending only on a few parameters (e.g., temperature and pressure).
The Central Limit Theorem is a basic example of universality, explaining the ubiquitous appearance of the Gaussian bell curve in nature.
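As a minimal numerical sketch of this universality (illustrative, with arbitrary sample sizes chosen here): whatever the underlying distribution, means of independent samples cluster into a Gaussian shape with spread shrinking like $1/\sqrt{n}$.

```python
import random
import statistics

random.seed(0)

# Means of n uniform(0,1) samples: the CLT predicts they concentrate
# around 0.5 with standard deviation sqrt(1/12/n), regardless of the
# (non-Gaussian) shape of the uniform distribution itself.
n, trials = 100, 2000
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

print(statistics.fmean(means))   # close to 0.5
print(statistics.stdev(means))   # close to sqrt(1/12/100) ≈ 0.029
```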
Mathematics primarily deals with abstract models of reality and exploring the logical consequences of the axioms within those models.
Euler’s identity ($e^{i\pi} = -1$) is often deemed the most beautiful equation because it unifies concepts of exponential growth, rotation ($\pi$), and complex numbers ($i$), connecting dynamics and geometry.
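A quick numerical check of the identity, using Python's standard complex-math library (illustrative only):

```python
import cmath
import math

# e^{i*pi}: rotating the unit vector by pi radians lands at -1,
# up to floating-point roundoff in the imaginary part.
z = cmath.exp(1j * math.pi)
assert abs(z - (-1)) < 1e-12
```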
Noether’s theorem fundamentally connects symmetries in a physical system (like time translation invariance) to conservation laws (like conservation of energy).
The search for a Theory of Everything requires finding the right mathematical language, similar to how Riemannian geometry was ready for Einstein's general relativity.
The history of physics, like mathematics, has been characterized by unification (e.g., Maxwell unifying electricity and magnetism).
Prime numbers are often referred to as the atoms of mathematics, fundamental to the multiplicative structure of natural numbers.
The Twin-Primes Conjecture proposes that there are infinitely many pairs of primes that differ by two.
Twin primes are sparse and sensitive; their existence cannot be proven merely by aggregate statistical analysis of the primes.
The Green-Tao theorem proves that prime numbers contain arithmetic progressions of any arbitrary length.
Arithmetic progressions are remarkably robust; they remain present even if 99% of primes are eliminated.
Current mathematical work has established that there are infinitely many pairs of primes that differ by at most 246.
The main obstacle to proving the Twin-Primes Conjecture is the parity barrier, which prevents current techniques from establishing a sufficiently high density of primes within "almost primes".
The Riemann Hypothesis conjectures that the primes behave as randomly as possible (square root cancellation) when considering multiplicative properties.
The Collatz conjecture states that applying the rule (3N+1 if odd, N/2 if even) to any natural number eventually leads to 1.
Statistically, the Collatz sequences behave like a random walk with a downward drift, suggesting most numbers will fall to a smaller value.
The Collatz problem is difficult because there might exist a special outlier number—a "heavier than air flying machine" encoded within the number—that shoots off to infinity.
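The Collatz rule is simple enough to state in a few lines of code; a sketch (the function name is mine, not from the conversation):

```python
def collatz_steps(n: int) -> int:
    """Apply the rule (3N+1 if odd, N/2 if even) until reaching 1;
    return the number of steps taken."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 27 is a well-known slow case: it climbs as high as 9232 before
# finally drifting back down to 1.
print(collatz_steps(27))  # 111
```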
Lean is a formal proof programming language that produces computationally verifiable "certificates" guaranteeing the correctness of mathematical arguments.
Lean is like explaining a proof to an "extremely pedantic colleague," requiring explicit justification for every step.
Formalizing a proof in Lean currently requires about 10 times the effort of writing it down in a conventional math paper.
The immense Lean project Mathlib contains tens of thousands of formalized useful mathematical facts.
The ability to localize errors and rely on certificates makes Lean advantageous for updating proofs (e.g., changing a constant like 12 to 11 without rechecking every line).
Lean enables trustless mathematics collaboration, allowing Tao to work with dozens of people globally, relying on the system's verification rather than personal trust.
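To give the flavor of the "pedantic colleague" style (a minimal sketch in Lean 4, not taken from Tao's projects): even a trivial fact must be closed by citing an explicit justification, here a named library lemma.

```lean
-- Every step needs an explicit justification; here we cite the
-- core library lemma `Nat.add_comm` by name to close the goal.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```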
Tao used Lean to organize the Equational Theories Project, a crowdsourced effort involving around 50 authors tackling 22 million problems in abstract algebra.
The goal of the Equational Theories Project was to map the entire graph of which algebraic laws imply which other laws.
AI tools are being applied to Lean for tasks like Lemma Search and sophisticated autocomplete, helping to reduce the friction of formalization.
AI-generated mathematical proofs can be dangerous because they often look superficially flawless and odorless (lacking the "code smell" of bad human work), but contain subtle, stupid errors.
The Fields Medal winner Grigori Perelman famously declined both the medal and the associated million-dollar Millennium Prize for solving the Poincaré conjecture.
Perelman's proof, involving the Ricci flow equation, required classifying all potential singularities—a difficult undertaking that transformed the problem from a supercritical one into a critical one.
Richard Karp is considered one of the most important figures in the history of theoretical computer science, having received the Turing Award in 1985 for his research in algorithm theory. His significant contributions include the development of the Edmonds-Karp algorithm for solving the max flow problem on networks and the Hopcroft-Karp algorithm for finding maximum cardinality matchings in bipartite graphs. He is particularly famous for his landmark paper in complexity theory, "Reducibility among combinatorial problems," which proved that 21 problems were NP-complete, acting as the most important catalyst for the explosion of interest in the P versus NP problem.
Richard Karp is a professor at Berkeley and one of the most important figures in the history of theoretical computer science.
He received the Turing Award in 1985 for his research in the theory of algorithms.
His contributions include the development of the Edmonds-Karp algorithm for solving the max flow problem on networks.
He also developed the Hopcroft-Karp algorithm for finding maximum cardinality matchings in bipartite graphs.
Karp is known for his landmark paper in complexity theory, titled "Reducibility among combinatorial problems".
This paper proved that 21 problems were NP-complete.
The paper served as the most important catalyst for the explosion of interest in the study of NP-completeness and the P versus NP problem.
At age 13, Karp was first exposed to plane geometry and was "wonder struck by the power and elegance of formal proofs".
He enjoyed the fact that pure reasoning could establish a geometric fact "beyond dispute".
Karp found solving puzzles in plane geometry much more enjoyable than earlier mathematics courses focused on arithmetic operations.
He was surprised and convinced by the ease of the proof that the sum of the angles of a triangle is 180 degrees.
Karp notes that he lacked three-dimensional vision and intuition for visualizing 3D objects or hyperplanes.
When working with tools like linear programming, he relies on algebraic properties because he lacks high-dimensional intuition.
When designing algorithms, he visualizes the process as an inner loop where the distance from the desired solution is iteratively reducing until the exact solution is reached.
He finds compelling beauty in the certainty of convergence, where the gap from the optimum point decreases monotonically.
Karp connects his appreciation for the orderly, systematic nature of innovative algorithms to a desire he might have had for orderly activities like woodworking.
He used to amuse himself by performing mental arithmetic, such as multiplying four-digit decimal numbers in his head.
Mathematics offers an "escape from the messiness of the real world where nothing can be proved".
The Assignment Problem requires finding a one-to-one matching (e.g., $N$ boys and $N$ girls) that minimizes the sum of associated costs.
The Hungarian Algorithm solves the Assignment Problem.
A key observation enabling the Hungarian Algorithm is that the optimal assignment is unchanged if a constant is subtracted from any row or column of the cost matrix.
The algorithm proceeds by subtracting constants from rows or columns while ensuring all elements remain non-negative, ultimately aiming for a full permutation of zeros.
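The key invariance can be checked by brute force on a tiny instance (a sketch; the helper name and the cost matrix are mine): every assignment uses exactly one entry from each row, so subtracting a constant from a row shifts all totals equally and leaves the optimum unchanged.

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force the assignment problem: return the permutation
    (row i -> column p[i]) minimizing the total cost."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

# Subtract a constant from row 0; all totals drop by the same amount,
# so the optimal assignment is unchanged (the Hungarian Algorithm's
# key observation).
reduced = [row[:] for row in cost]
reduced[0] = [c - 1 for c in reduced[0]]

assert best_assignment(cost) == best_assignment(reduced)
```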
Jack Edmonds and Karp were the first to show that the Assignment Problem could be solved in polynomial time, specifically $N^3$, improving on earlier $N^4$ algorithms.
As a PhD student in 1955, Karp was at the computational lab at Harvard, where Howard Aiken had built the Mark I and the Mark IV computers.
The Mark IV computer filled a large room, and Karp could walk around inside its rows of relays.
He noted that the machine would sometimes fail due to "bugs," which literally meant flying creatures landing on the switches.
The lab eventually acquired a Univac computer with 2,000 words of storage, which necessitated careful allocation due to varying access times.
Karp was primarily attracted to the underlying algorithms rather than the physical implementation of the machines.
He did not anticipate the future of personal computing or having computers in pockets.
Karp read Turing's paper on the Turing Test but felt the test was too subjective to accurately calibrate intelligence.
He is doubtful that algorithms can achieve human-level intelligence.
Karp suggests that multiplying the speed of computer switches by a large factor will not be useful until the organizational principle behind the network of switches is understood.
A combinatorial algorithm deals with a system of discrete objects that need to be arranged or selected to achieve some goal or minimize a cost function.
A graph is a set of points (vertices) where certain pairs are joined by lines (edges), often representing interconnections.
The maximum flow problem, which Karp worked on, involves finding the maximum rate at which a commodity (like gas, water, or information) can flow from a source to a destination through channels with capacity limits.
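The shortest-augmenting-path idea behind the Edmonds-Karp refinement of Ford-Fulkerson fits in a short sketch (my own minimal implementation, with an invented example network, not code from the source):

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via shortest augmenting paths (BFS).
    `capacity` is a dict-of-dicts: capacity[u][v] = edge capacity."""
    # Residual capacities, with reverse edges initialized to 0.
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for the shortest source->sink path with spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck, then push flow and update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
max_flow = edmonds_karp(capacity, 's', 't')
print(max_flow)  # 5
```

On this network the value 5 equals the capacity of the cut around the source (3 + 2), matching the max-flow/min-cut theorem.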
An algorithm runs in polynomial time (P) if the number of computational steps grows only as some fixed power of the size of the input (e.g., $N, N^2, N^3$).
Theorists generally take polynomial time as the definition of an efficient algorithm.
Complexity theory measures the performance of an algorithm based on its performance in the worst case.
NP (Non-deterministic Polynomial time) is the class of problems where, although solving the problem may be hard, verifying a potential solution can be done efficiently (in polynomial time).
For example, finding the largest clique is hard (NP), but checking whether a given set of vertices forms a clique is easy (P).
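The verification side of this asymmetry takes only a few lines (a sketch with an invented toy graph): checking a candidate clique means checking every pair, which is polynomial in the input size.

```python
from itertools import combinations

def is_clique(adj, vertices):
    """Polynomial-time check: do all pairs in `vertices` share an edge?
    `adj` maps each vertex to the set of its neighbors."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))

# A triangle (1,2,3) plus a pendant vertex 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert is_clique(adj, [1, 2, 3])      # verifying is easy (in P)
assert not is_clique(adj, [1, 2, 4])  # finding the largest clique is the hard part
```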
The central problem in computational complexity is whether P is equal to NP (if every problem easy to check is also easy to solve).
Karp strongly suspects that P is unequal to NP because centuries of intensive study have failed to find polynomial-time algorithms for many easy-to-check problems, such as factoring large numbers.
If P $\neq$ NP, researchers will know that for the great majority of NP-complete problems, they cannot expect to get optimal solutions and must rely on heuristics or approximations.
NP-complete problems are defined as the hardest decision (yes/no) problems within the class NP.
NP-hard problems are optimization problems that correspond to the hardest problems in the class, such as finding the largest clique rather than just deciding if one exists.
Stephen Cook showed that the Satisfiability problem (SAT) of propositional logic is as hard as any problem in the class NP.
Cook proved this using the abstract Turing machine, showing that any NP problem can be translated into an equivalent SAT instance.
Karp extended this, showing that SAT could be reduced to 21 other fundamental problems (e.g., integer programming, clique), establishing their complexity equivalence.
Karp considers the Stable Matching Problem (Stable Marriage Problem) to be one of the most beautiful combinatorial algorithms.
A matching is stable if there is no pair who would prefer to run away with each other, leaving their current partners behind.
An algorithm developed by Gale and Shapley ensures that a stable matching exists and can be found by having one side (e.g., boys) propose and the other side (girls) tentatively accept.
In the Gale and Shapley algorithm, the proposing side (the boys) ends up doing at least as well as they could in any other stable matching.
Karp is especially proud of the Rabin-Karp algorithm for string searching because it demonstrates the power of randomization.
This algorithm associates a fingerprint (a number derived using a random prime) with the word being searched.
Randomization works well for the same reason a random poll sample does in an election: properties that hold almost all the time are very likely to show up in a random selection.
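The fingerprint idea can be sketched with a rolling hash. This is an illustrative implementation of the Rabin-Karp approach; for simplicity it uses a fixed modulus rather than a randomly chosen prime, which is where the randomization in the original scheme lives:

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Find all occurrences of `pattern` in `text` via rolling-hash fingerprints.

    Compare a cheap numeric fingerprint first, and only fall back to a
    character-by-character check on a hash match. In the randomized version,
    `mod` is a random prime, making spurious collisions unlikely.
    """
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)   # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if t_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:              # roll the window one character to the right
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
```

Each window's fingerprint is updated in O(1) from the previous one, so the expected running time is linear in the text length.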
Although problems like Satisfiability and the Traveling Salesman Problem are NP-hard (poor worst-case performance), practical instances arising in digital design or geometry can often be solved efficiently by specialized SAT solvers and optimization codes.
Karp studied average-case analysis by modeling random graphs, but concluded that results based on such simplistic assumptions about typical problems often lacked practical "bite".
He believes that resolving the P versus NP question will require concepts and approaches that "we do not now have".
Karp dedicated his Turing Award lecture to the memory of his father.
He inherited a great desire to be a teacher from his father, remembering his ability to draw perfect circles by hand on the blackboard and engage his students.
His top three pieces of advice for teaching are preparation, preparation, and preparation.
Here’s an ISO-aligned checklist for good customer support, drawing on standards such as ISO 10002:2018 (Customer satisfaction — Guidelines for complaints handling) and ISO 9001:2015 (Quality management systems).
🧭 1. Customer Focus & Policy
A documented customer service policy is in place.
The policy aligns with the organization’s quality objectives.
Customer focus is embedded in company culture and communicated to all staff.
Roles and responsibilities for customer support are clearly defined.
📋 2. Complaint Handling Process (ISO 10002)
A formal, accessible, and simple complaint process is established.
Complaints can be submitted via multiple channels (email, web form, phone, etc.).
Each complaint is acknowledged promptly.
A unique reference number is assigned for tracking.
Resolution timelines are clearly communicated to the customer.
Complaint records are retained and reviewed for trends and improvement.
Escalation procedures are documented and applied consistently.
📞 3. Communication & Responsiveness
Support channels are available and responsive (as per service-level targets).
Staff provide clear, polite, and professional communication.
Customers receive regular updates on open issues.
Support scripts/templates are standardized but allow personalization.
Multilingual support and accessibility options are available (if applicable).
👩‍💼 4. Competence & Training
Customer support staff undergo regular training on products, systems, and soft skills.
Training records are maintained.
Performance reviews include customer satisfaction metrics.
Staff are empowered to resolve issues within defined authority levels.
📊 5. Monitoring, Measurement & Feedback
Customer satisfaction surveys are regularly conducted.
Key Performance Indicators (KPIs) are tracked, such as:
Average response/resolution time
Customer satisfaction score (CSAT)
Net Promoter Score (NPS)
First Contact Resolution (FCR) rate
Feedback results are analyzed and used for improvement.
Reports are reviewed by management periodically.
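The KPIs listed above have simple standard definitions: NPS is the percentage of promoters (scores 9-10 on a 0-10 scale) minus the percentage of detractors (0-6), and CSAT is commonly reported as the percentage of "satisfied" responses (4-5 on a 1-5 scale). A minimal sketch, assuming those conventional scales:

```python
def nps(scores):
    """Net Promoter Score on the standard 0-10 scale:
    % promoters (9-10) minus % detractors (0-6), in the range -100..+100."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

def csat(scores):
    """CSAT on a 1-5 scale: percentage of 'satisfied' responses (4 or 5)."""
    return round(100 * sum(s >= 4 for s in scores) / len(scores))

survey = [10, 9, 8, 7, 3, 10, 6]       # 3 promoters, 2 passives, 2 detractors
print(nps(survey))                     # 14
print(csat([5, 4, 4, 2, 5]))           # 80
```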
🔄 6. Continuous Improvement
A Corrective Action process is in place for recurring issues.
Lessons learned are shared across departments.
The complaint handling process is periodically audited and updated.
Improvement actions are documented and tracked to closure.
🧱 7. Documentation & Record Control
Policies, procedures, and records are document-controlled (revision history, approvals, etc.).
Records of customer interactions and resolutions are secure and confidential.
Data protection complies with ISO/IEC 27001 and the GDPR (where applicable).