Lightweight PQC Research

Here’s a list of ~12 recent papers (with authors + URLs) on lightweight / resource‐constrained post‐quantum cryptography (PQC) — useful if you’re diving deeper into the field.


1. Tao Liu, Gowri Ramachandran, Raja Jurdak — "Post-Quantum Cryptography for Internet of Things: A Survey on Performance and Optimization" — arXiv: https://arxiv.org/abs/2401.17538
2. Liyth H. Mahdi, Alharith A. Abdullah — "Fortifying Future IoT Security: A Comprehensive Review on Lightweight Post-Quantum Cryptography" — Eng. Technol. Appl. Sci. Res., 2025 (via ResearchGate)
3. Latif Akçay, Berna Örs Yalçın — "Lightweight ASIP Design for Lattice-Based Post-Quantum Cryptography Algorithms" — Arab. J. Sci. Eng., 50(2), 2025. DOI: 10.1007/s13369-024-08976-w (SpringerLink)
4. Byung-Jun Kim, Han-Gyeol Mun, Shinwoong Kim, JongMin Lee, Jae-yoon Sim — "A 1.03 MOPS/W Lattice-based Post-quantum Cryptography Processor for IoT Devices" — JSTS (processor/hardware implementation) (jsts.org)
5. Kathryn Hines, Manohar Raavi, John-Michael Villenueve, Simeon Wuthier, Javier Moreno-Colin — "Post-Quantum Cipher Power Analysis in Lightweight Devices" — WiSec '22 poster (NSF Public Access Repository)
6. Attila A. Yavuz, Saleh Darzi, Saif E. Nouma — "Lightweight and Scalable Post-Quantum Authentication for Medical Internet of Things" — arXiv: https://arxiv.org/abs/2311.18674
7. Suparna Kundu, Archisman Ghosh, Angshuman Karmakar, Shreyas Sen, Ingrid Verbauwhede — "Rudraksh: A compact and lightweight post-quantum key-encapsulation mechanism" — arXiv: https://arxiv.org/abs/2501.13799
8. (multiple authors) — "A post-quantum lattice based lightweight authentication and code-based hybrid encryption scheme for resource-constrained IoT" — Computer Networks (Comnet), 2022 (ACM Digital Library)
9. D. Xu, X. Wang, Y. Hao, Z. Zhang, Q. Hao, H. Jia, H. Dong, L. Zhang — "Ring-ExpLWE: A High-Performance and Lightweight Post-Quantum Encryption Scheme for Resource-Constrained IoT Devices" — referenced via an MDPI survey
10. "A comprehensive and realistic performance evaluation of post-quantum cryptography algorithms in relation to consumer IoT devices" — ScienceDirect, 2025
11. "A lightweight BRLWE-based post-quantum cryptosystem with side-channel considerations" — ScienceDirect, 2024
12. "A comprehensive review on lightweight post-quantum cryptography" — ETASR, 2024. DOI: 10.48084/etasr.10141


Intersection: Lattice SVP vs LLM

Here are the labs/companies that most clearly straddle both worlds—active work on lattice/PQC (incl. SVP/LWE, reduction/attacks, HE) and serious LLM research—so they’re closest to the “SVP builder/solver + LLM R&D” vibe:

  • Meta AI (FAIR) — published concrete LWE attack benchmarks (uSVP, SALSA, etc.) and tooling, while leading major LLM lines (Llama 3/3.1/3.2). (facebookresearch.github.io)

  • Microsoft Research (MSR) — long-running lattice/PQC work (e.g., LatticeCrypto library) and an in-house LLM program (Phi-2/3/3.5, SLM research). (microsoft.com)

  • Google Research / DeepMind — PQC adoption and standards engagement on the security side; simultaneously pushing flagship LLMs (Gemini 2.5) and agents for algorithm design. (Google Online Security Blog)

  • IBM Research — extensive lattice/PQC research and standardization contributions, plus foundation-model/LLM work via watsonx.ai. (IBM Research)

  • SandboxAQ — explicitly positions at the intersection of AI and quantum/PQC; publishes on lattice-crypto risk and runs PQC deployments with partners. (sandboxaq.com)

  • Zama — lattice-based FHE stack (RLWE schemes) with an explicit push for encrypted LLM inference (Concrete-ML, HE for transformers). (docs.zama.ai)

  • NTT Research / NTT R&D — deep cryptography lab plus contemporary LLM/AI research output (ICLR/ICML papers). (NTT Research, Inc.)

Notes

  • The closest match to “SVP builder/solver” in a modern, public program is Meta AI’s LWE attack benchmarking (which concretely exercises lattice reduction pipelines and reports BKZ/uSVP performance), and academic/industry work exploring ML guidance for BKZ (e.g., RL-tuned BKZ). (facebookresearch.github.io)

  • If you also want HE-for-LLMs specifically (practical blend of lattice crypto with LLM inference), Zama is the clearest industry example. (docs.zama.ai)

===


Here is a curated list of research papers that lie at the intersection of lattice/cryptography (e.g., lattice reduction, FHE) and large‐language models (LLMs) or transformers. I’ve included the URL for each, plus a short note on relevance.

1. Encryption-Friendly LLM Architecture — arXiv: https://arxiv.org/abs/2410.02486 — proposes a transformer variant that is friendly to homomorphic encryption (HE) for LLM inference.
2. Power-Softmax: Towards Secure LLM Inference over Encrypted Data — arXiv: https://arxiv.org/abs/2410.09457 — introduces HE-friendly self-attention/softmax variants for billion-parameter models under encryption.
3. Improving Inference Privacy for Large Language Models using Fully Homomorphic Encryption — UC Berkeley thesis PDF: https://digicoll.lib.berkeley.edu/record/292960/files/EECS-2024-225.pdf — a detailed study of FHE applied to LLM inference, focusing on query privacy.
4. Privacy-Preserving Large Language Model Inference via GPU-Accelerated Fully Homomorphic Encryption — OpenReview (ICML poster): https://openreview.net/forum?id=PGNff6H1TV — demonstrates GPU-accelerated FHE for LLM inference (GPT-2 forward pass) with encrypted queries.
5. Predicting Module-Lattice Reduction — arXiv: https://arxiv.org/abs/2510.10540 — focuses on lattice-reduction (module-BKZ) analysis rather than LLMs; relevant for the lattice side of the blend.
6. Practical Secure Inference Algorithm for Fine-tuned Large Language Model Based on Fully Homomorphic Encryption — arXiv: https://arxiv.org/abs/2501.01672 — combines FHE with PEFT (LoRA) in LLMs; relates to protecting inference and model weights via lattice-based crypto.
7. Investigating Deep Reinforcement Learning for BKZ Lattice Reduction — (ResearchGate, indirect link) — applies RL to improve BKZ, a core lattice-reduction algorithm (SVP/SIVP context).
8. A Complete Analysis of the BKZ Lattice Reduction Algorithm — (ACM Digital Library, DOI link) — a rigorous analysis of lattice reduction; more on the lattice side than the LLM side, but relevant for the "solver" component.

===


Below are papers at the PQC/lattices ⇄ LLM/ML intersection, grouped so you can skim.

HE/cryptographic approaches for LLM inference

  • Encryption-Friendly LLM Architecture (ICLR 2025). arXiv 2410.02486. (arXiv)

  • Power-Softmax: Towards Secure LLM Inference over Encrypted Data (2024). arXiv 2410.09457. (arXiv)

  • THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption (Findings of ACL 2022). (arXiv)

  • Privacy-Preserving LLM Inference via GPU-Accelerated FHE (“EncryptedLLM”, NeurIPS/ICML posters; PMLR 2025 proceedings version). (OpenReview)

  • Improving Inference Privacy for Large Language Models using Fully Homomorphic Encryption (Berkeley EECS tech report, 2024). (EECS at UC Berkeley)

  • A First Look at Efficient and Secure On-Device LLM (survey aspects; discusses FHE-based options, 2024). (arXiv)

  • CipherPrune: Efficient and Scalable Private Inference for LLMs (ICLR 2025; broader private inference with cryptographic context). (ICLR Proceedings)
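To make the "HE-friendly" idea behind papers like Power-Softmax concrete: FHE schemes evaluate only additions and multiplications, so exp-based softmax must be replaced with a polynomial surrogate. Below is a minimal plaintext sketch of that substitution (my own illustrative toy; the exponent `p` is an assumed hyperparameter, and a real encrypted pipeline would also approximate the final division homomorphically, e.g. by iterative methods):

```python
import math

def softmax(xs):
    # Standard softmax: relies on exp, which HE cannot evaluate directly.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def power_softmax(xs, p=4):
    # HE-friendly surrogate: an even power keeps scores non-negative and
    # needs only multiplications. The division here stands in for what an
    # encrypted pipeline would approximate with polynomial iterations.
    ps = [x ** p for x in xs]
    s = sum(ps)
    return [v / s for v in ps]
```

Both functions map attention scores to a probability-like vector; the polynomial variant trades fidelity to softmax for compatibility with the multiply-and-add circuit model of HE.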

ML/AI applied to lattice reduction/SVP/LWE (the “solver/builder” side)

  • Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach (arXiv 2311.08170; OpenReview 2025 revision). (arXiv)

  • Investigating Deep Reinforcement Learning for BKZ Lattice Reduction (2025 preprint). (ResearchGate)

  • Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on LWE (TMLR 2025; arXiv 2402.01082). (arXiv)

  • Benchmarking Attacks on Learning with Errors (Meta AI, 2024). (arXiv)

  • A Machine Learning Attack on LWE with Binary Secrets (PICANTE) (CCS 2023/2024 coverage). (ACM Digital Library)

  • A Parameter Study for LLL and BKZ with Application to LWE (2025 preprint). (arXiv)

  • A Complete Analysis of the BKZ Lattice Reduction Algorithm (2025 journal preprint entry). (ResearchGate)

  • On the Practicality of Quantum Sieving Algorithms for SVP (arXiv 2410.13759, 2024; lattice-solver relevance). (arXiv)

  • Practical Improvements on the BKZ Algorithm (PQC 2022 / LNCS). (NIST Computer Security Resource Center)

  • Predicting Module-Lattice Reduction (arXiv 2510.10540, 2025). (arXiv)

  • Lattice Reduction Using K-Means Algorithm (EAI, 2024) — exploratory ML applied to reduction. (EUDL)
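For readers who want to see what the "reduction pipelines" in the papers above operate on, here is a toy textbook LLL implementation in exact rational arithmetic (my own illustrative sketch, not taken from any paper listed; production work uses fplll/fpylll, and BKZ generalizes the swap step to local SVP calls on blocks):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization: returns B* and the mu coefficients.
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    # Textbook LLL: size-reduce, then swap whenever the Lovasz condition fails.
    B = [[Fraction(x) for x in row] for row in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                             # Lovasz condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]    # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]
```

Recomputing Gram-Schmidt after every update is deliberately wasteful but keeps the sketch short; the asymptotics and numerical care of real implementations are exactly what the fpylll/fplll tooling handles.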

Related to LWE/SVP pipelines & quantum-assisted angles (useful context for “builders/solvers”)

  • CUDA-Accelerated Lattice Reduction in LWE Attack Pipelines (paper on “cruel vs cool bits”, arXiv 2403.10328 v2, 2024). (arXiv)

  • Sieving for Closest Lattice Vectors (with Preprocessing) (classic CVP/SVP sieving baseline, 2016). (arXiv)

  • Variational Quantum Korkin-Zolotarev Algorithm for Lattice Reduction (arXiv 2505.08386, 2025). (arXiv)

  • Iterative Partition Search Variational Quantum Algorithm for SVP (arXiv 2508.18996, 2025). (arXiv)

  • Quantum-Classical Hybrid Algorithm for Solving LWE (Communications Physics, 2025) — compares against LLL/BKZ. (Nature)

  • Datasets for Learning the Learning With Errors Problem (ICML 2025 dataset paper). (arXiv)
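As context for the LWE attack and dataset papers above, a toy LWE instance generator (my illustrative sketch; the parameters are nowhere near cryptographic size):

```python
import random

def lwe_samples(n=8, q=97, m=16, noise=1, seed=0):
    """Toy LWE instance: m samples (a_i, b_i) with b_i = <a_i, s> + e_i (mod q).

    Recovering the secret s from (A, b) alone is the hard problem that the
    ML attacks above (SALSA, PICANTE, ...) and lattice reduction both target.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]           # secret vector
    A, b = [], []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]       # public random row
        e = rng.randint(-noise, noise)                 # small error term
        A.append(a)
        b.append((sum(ai * si for ai, si in zip(a, s)) + e) % q)
    return A, b, s
```

Without the error term `e` the system would be solvable by Gaussian elimination; the small noise is exactly what forces attackers into lattice reduction or learning-based methods.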

FYI (tooling/SDK references for hands-on work)

  • Concrete-ML (Zama’s FHE-ML SDK; not a paper but widely used in HE-for-ML/LLM prototyping). (GitHub)


QUBIP - July 2025 PQC news update


The source, a transcript of a YouTube video titled "Innovation Manager Corner - PQC news," covers recent developments in Post-Quantum Cryptography (PQC) and the transition toward its adoption. It reports on the status of the NIST PQC standardization process, noting that the HQC (Hamming Quasi-Cyclic) algorithm will be standardized for key establishment. The video also discusses European cybersecurity initiatives, highlighting the release of version 2.0 of the Agreed Cryptographic Mechanisms document, which emphasizes the need for hybridization using both classical and PQC schemes. It then examines several side-channel attacks targeting PQC algorithms such as CRYSTALS-Kyber and Falcon, and announces a new European project, PQSCA Resilience (Post-Quantum Side-Channel Attack Resilience), aimed at creating robust cryptographic solutions. Finally, it mentions advances in quantum computing hardware, including Microsoft's Majorana 1 device and the launch of a 50-qubit quantum computer in Europe, along with Google's plan to implement quantum-safe digital signatures in its Key Management Service (KMS).


1. People and Organizations

1.1 Maria Chiara

  • Role: Security Engineer

  • Organization: Security Pattern

1.2 Security Pattern

  • Involvement: Collaborating in the QUBIP European project


2. QUBIP European Project

  • Start Date: September 2023

  • Goal:
    To design a reference and replicable transition process to Post Quantum Cryptography (PQC) for protocols, networks, and systems.


3. NIST Post-Quantum Cryptography Standardization (NIST IR 8545)

3.1 Document Overview

  • Title: NIST IR 8545 — Status Report on the Fourth Round of the NIST Post-Quantum Cryptography Standardization Process

  • Publication Date: March 2025

  • Content: Describes the evaluation and selection process for key establishment algorithm candidates.

3.2 Fourth-Round Candidate Algorithms

  • BIKE

  • Classic McEliece

  • SIKE

  • HQC (Hamming Quasi-Cyclic)

3.3 Selection Outcome

  • Standardized Algorithm: HQC (the only fourth-round key-establishment algorithm selected for standardization by NIST)


4. European Cybersecurity Certification Group (ECCG) and ENISA

4.1 Document Release

  • Document: Agreed Cryptographic Mechanisms (version 2.0)

  • Release Date: April 2025

  • Support: ENISA

  • Purpose: Ensure consistency and security across European cybersecurity certification schemes.

4.2 Key Points

  • Marks a critical step in Europe’s preparation for the post-quantum era.

  • Approved PQC schemes are now included as part of the Agreed Mechanism.

4.3 Hybridization Approach

  • Definition: Development of paired post-quantum and classical schemes.

  • Security Model: Both schemes must be broken to compromise security.
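A minimal sketch of the combiner idea behind hybridization (my own illustrative construction, not the ACM v2.0 specification; real deployments use standardized combiners, but the principle is the same — derive the session key from both shared secrets):

```python
import hashlib

def hybrid_key(classical_ss: bytes, pq_ss: bytes,
               context: bytes = b"demo-hybrid") -> bytes:
    # Derive the session key from BOTH shared secrets (with length prefixes
    # to keep the encoding unambiguous). Predicting the key then requires
    # breaking the classical scheme AND the PQC scheme.
    data = (len(classical_ss).to_bytes(2, "big") + classical_ss +
            len(pq_ss).to_bytes(2, "big") + pq_ss +
            context)
    return hashlib.sha256(data).digest()
```

In practice `classical_ss` would come from, say, ECDH and `pq_ss` from a PQC KEM such as ML-KEM; the hash binds the two so that a break of either one alone leaves the derived key unpredictable.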

4.4 Parameter and Algorithm Updates

  • Symmetric and hash parameters upgraded.

  • RSA (modulus < 3000 bits) acceptable only until the end of 2025.


5. Side-Channel Attacks (SCAs)

5.1 Definition

  • Attacks that recover information by observing timing, power, or electromagnetic channels.

5.2 Context

  • Occur while processing private keys, secret messages, or intermediate values.

5.3 Specific Techniques and Vulnerabilities

  • Belief propagation: A specific SCA method used in attacks on PQC.

  • Fujisaki–Okamoto-style transforms (used in ML-KEM and HQC) are vulnerable to chosen-ciphertext side-channel attacks during re-encryption.
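To illustrate what a power side channel gives an attacker, here is a toy single-byte key recovery under an idealized Hamming-weight leakage model (my own didactic sketch; the Kyber and Falcon attacks described in the next section work on noisy real traces with statistical or learning-based distinguishers):

```python
def hw(x: int) -> int:
    # Hamming weight: a common proxy for instantaneous power consumption.
    return bin(x).count("1")

def observed_trace(plaintexts, key):
    # Idealized device: leaks the Hamming weight of the intermediate
    # value p XOR key for each processed byte.
    return [hw(p ^ key) for p in plaintexts]

def recover_key_byte(plaintexts, trace):
    # Score every key guess against the trace; the correct guess predicts
    # the leakage exactly, so it minimizes the squared error.
    return min(range(256),
               key=lambda g: sum((hw(p ^ g) - t) ** 2
                                 for p, t in zip(plaintexts, trace)))
```

The point of the model: the attacker never sees the key, only a physical quantity correlated with intermediate values, yet that correlation suffices to recover the secret.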


6. Recent Research on Side-Channel Attacks

6.1 Attack on CRYSTALS-Kyber

  • Researchers: Swedish group

  • Discovery: A novel side-channel attack breaking an implementation of CRYSTALS-Kyber.

  • Technique: Utilizes learning-based methods.

  • Innovation: Introduced recursive learning approach (a new neural network training method).

  • Note: Implementation-specific — does not break the algorithm mathematically.

6.2 Attack on Falcon

  • Researchers: North Carolina State University

  • Publication Date: April 2025

  • Target: The discrete Gaussian sampling operation during the key-generation phase.

  • Effectiveness: A single power trace can recover the secret key in the exploited implementation.


7. European Project: PQSCA Resilience

  • Full Name: Post Quantum Side Channel Attack Resilience

  • Start Date: May 2025

  • Duration: 1.5 years

  • Objective: Create a robust framework for evaluating the resilience of post-quantum cryptographic algorithms against side-channel attacks.


8. Microsoft Quantum Hardware

  • Device: Majorana 1

  • Announcement Date: February 2025

  • Specifications:

    • Supports 8 qubits

    • Superconductivity at low temperatures



Singularities of Pairs: Positive and Zero Characteristics - SHIHOKO ISHII

The text provides excerpts from a mathematical talk on the study of singularities of pairs in both positive characteristic $p$ and characteristic zero. The speaker explains that traditional methods for studying singularities in characteristic zero, such as resolution of singularities and vanishing theorems for cohomology, are unavailable in positive characteristic, necessitating a "bridge" to transport results between the two settings. A key object of study is the pair $(A, \mathcal{J}^e)$, consisting of a variety $A$ and a formal product of ideals $\mathcal{J}^e = \mathcal{J}_1^{e_1} \cdots \mathcal{J}_r^{e_r}$ with real exponents $e_i$, and the speaker introduces several key invariants, such as the log discrepancy and the minimal log discrepancy (MLD). The main goal is to prove that certain crucial theorems on properties like discreteness and the ACC (ascending chain condition) hold in positive characteristic, mirroring established results in characteristic zero, via a novel lifting theorem.

==


  • The subject of the talk is the singularities of pairs in characteristic $p$ and characteristic zero.
  • The primary approach to studying singularities in characteristic zero relies on the existence of resolution of singularity.
  • Other key properties available in characteristic zero include generic smoothness of morphism.
  • Characteristic zero studies also utilize Vanishing theorems of cohomology.
  • These crucial properties—resolution, generic smoothness, and vanishing theorems—are not available in positive characteristics.
  • The speaker's project aims to construct a "bridge" to transport statements proven in characteristic zero to positive characteristics.
  • The core object of study is a pair, denoted $(A, \mathcal{J}^e)$.
  • In this pair, $A$ represents a variety over a field.
  • $\mathcal{J}^e$ is a product of ideals, written formally as $\mathcal{J}_1^{e_1} \dots \mathcal{J}_r^{e_r}$.
  • The exponents $e_i$ in the multi-ideal are required to be positive real numbers.
  • An ideal raised to a real exponent is considered only a formal product, and not necessarily an ideal itself.
  • The concept of the pair originated within the field of birational geometry.
  • Initial singularity studies naturally focused only on the variety $X$.
  • Birational geometers then began studying the pair consisting of the variety and a divisor, $(X, D)$.
  • The pair concept is convenient for utilizing the induction of dimension frequently used in birational geometry.
  • The pair $(X, D)$ is equivalent to the pair consisting of $X$ and the defining ideal of $D$.
  • This generalized naturally to the pair $(X, \mathcal{A})$, where $\mathcal{A}$ is simply an ideal.
  • Using integer exponents in the pair is natural, as the result remains an ideal.
  • Rational exponents are generally acceptable because raising an ideal to an integer multiple corresponding to the denominator still yields an ideal.
  • Real exponents appear naturally in birational geometry, such as in the BCHM (Birkar–Cascini–Hacon–McKernan) context, when considering limits of rational exponents.
  • The speaker’s viewpoint is justified by a "surprising theorem of Sommer".
  • For the purpose of the talk, the object is simplified to the setting where the variety $A$ is assumed to be smooth.
  • $A$ is a smooth variety defined over $k$, where $k$ is a field of arbitrary characteristic.
  • $E$ is defined as a prime divisor over $A$ with center at zero.
  • The existence of such a prime divisor $E$ implies the existence of a birational modification $\pi: \tilde{A} \to A$ where $E$ is an irreducible divisor on the normal variety $\tilde{A}$.
  • $v_E$ denotes the discrete valuation corresponding to the prime divisor $E$.
  • For an ideal $\mathcal{J}$, $v_E(\mathcal{J})$ is defined as the minimum of $v_E(x)$ over elements $x \in \mathcal{J}$.
  • $k_E$ is the coefficient of $E$ in the relative canonical divisor, and is a non-negative integer.
  • The log discrepancy of the pair is defined as $a_E(A, \mathcal{J}^e) = k_E + 1 - \sum_i e_i\, v_E(\mathcal{J}_i)$.
  • This log discrepancy value is a real number, even if $e_i$ are real numbers.
  • The Minimal Log Discrepancy (MLD) at point 0 is defined as the infimum of $a_E(A, \mathcal{J}^e)$ over all prime divisors $E$ over $A$ with center at zero.
  • MLD is constrained to either be greater than or equal to zero, or equal to minus infinity.
  • If one log discrepancy becomes negative, the MLD automatically becomes $-\infty$.
  • A larger MLD value implies a better or "milder" singularity, meaning it is closer to being non-singular.
  • A prime divisor computing MLD does not always exist.
  • In characteristic zero, a prime divisor computing MLD always exists, relying on the existence of appropriate resolution of singularities.
  • A pair is defined as log canonical if its MLD is greater than or equal to zero.
  • Log canonical singularity is considered "marginally acceptable" in birational geometry.
  • Talachita's result, for characteristic zero, states that the set of log canonical multi-ideals is a discrete set.
  • In characteristic zero, the Log Canonical Threshold (LCT) is known to be a rational number.
  • The main application of the bridge is demonstrating that the discreteness result (Talachita's theorem) holds for positive characteristic when $A$ is a smooth point with a perfect residue field.
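As a sanity check on the definitions above, the standard smooth-point computation (stated here for orientation, not taken from the talk) shows how the invariants fit together:

```latex
% Smooth point, trivial multi-ideal: let A be smooth of dimension d and let
% E be the exceptional divisor of the blow-up of A at the point 0. Then
k_E = d - 1, \qquad
a_E(A, \mathcal{O}_A) = k_E + 1 = d, \qquad
\operatorname{mld}_0(A, \mathcal{O}_A) = d.
% A smooth point thus attains the largest possible MLD, matching the slogan
% that a larger MLD means a milder singularity.
```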


Researchers on the Hodge Conjecture -- Toric & Intersection Cohomology Focus


The “Hodge conjecture in toric varieties” sits at the crossroads of algebraic geometry, combinatorics, and topology — with emphasis on intersection cohomology, Hodge modules, and combinatorial structures on fans.
Below is a consolidated roster of leading figures and lines of research, drawn from all three sources.


🧩 Foundational Figures in Intersection Cohomology

Robert MacPherson (IAS) & Mark Goresky

  • Co-inventors of Intersection Homology (IH) — the replacement for ordinary cohomology on singular spaces.

  • Foundational for all later work on toric and singular varieties.

Top papers:

  1. Intersection Homology II — Invent. Math. (1983) https://doi.org/10.1007/BF01389773

  2. Intersection Homology Theory — Topology (1980) https://doi.org/10.1016/0040-9383(80)90003-9


G. Barthel, J.-P. Brasselet, K.-H. Fieseler, L. Kaup (BBFK)

  • Created the combinatorial model of intersection cohomology for fans (a discrete version of IH).

  • Their formalism is the backbone for computational and toric approaches.

Top papers:

  1. Combinatorial Intersection Cohomology for Fans — Compositio Math. (2002) https://arxiv.org/abs/math/0203142

  2. Toric Varieties and Intersection Cohomology — J. Reine Angew. Math. (1991) https://doi.org/10.1515/crll.1991.418.91


🔶 Combinatorial and Geometric IH Researchers

Tom C. Braden (UMass Amherst)

  • Developed combinatorial and representation-theoretic interpretations of IH.

  • Key figure in hypertoric and toric IH theory.

Top papers:

  1. Combinatorial Intersection Cohomology of Fans — https://arxiv.org/abs/math/9907088

  2. Hypertoric Varieties and Hodge Theory — https://arxiv.org/abs/math/0207154


Nicholas Proudfoot (University of Oregon)

  • Works on hypertoric and conical symplectic varieties, generalizing toric geometry.

  • Frequent collaborator with Braden.

Top papers:

  1. A Survey of Hypertoric Geometry — https://arxiv.org/abs/1909.08412

  2. Hypertoric Intersection Cohomology — https://arxiv.org/abs/math/0207155


Kalle Karu (UBC)

  • Proved the Hard Lefschetz Theorem for combinatorial IH of polytopes.

  • Bridges polyhedral combinatorics and algebraic geometry.

Top papers:

  1. Hard Lefschetz Theorem for Nonrational Polytopes — Invent. Math. (2004) https://arxiv.org/abs/math/0311027

  2. The Kähler Package for Intersection Cohomology of Nonrational Polytopes — https://arxiv.org/abs/math/0405345


🧮 Hodge–Theoretic & Toric Directions

Hyunsuk Kim & Sridhar Venkatesh (2024–2025, arXiv preprints)

  • Study Hodge filtrations on intersection cohomology Hodge modules for toric varieties.

  • Give explicit, algorithmic Hodge decompositions from fan data.

Top papers:

  1. Hodge Modules on Toric Varieties and Intersection Cohomology Filtrations — https://arxiv.org/abs/2404.08122

  2. Algorithmic Descriptions of Hodge Structures for Fans — https://arxiv.org/abs/2502.03115


Laurentiu Maxim (U. Wisconsin)

  • Works on mixed Hodge modules, giving theoretical foundations that support toric Hodge theory.

Top papers:

  1. Intersection Homology and Perverse Sheaves (Lecture notes) — https://web.math.wisc.edu/~maxim/ih.pdf

  2. Intersection Homology, Perverse Sheaves, and Applications — Notices AMS (2019) https://doi.org/10.1090/noti1829


🌐 Hodge Conjecture in Toric & Quasi-Smooth Settings

Ugo Bruzzo (SISSA, Italy) & Antonella Grassi (University of Pennsylvania)

  • Explore Hodge conjecture for hypersurfaces and intersections in toric varieties.

  • Connect algebraic cycles and combinatorial Hodge theory.

Top papers:

  1. The Hodge Conjecture for Hypersurfaces in Simplicial Projective Toric Varieties — https://arxiv.org/abs/math/9803147

  2. Hodge Conjecture for Quasi-smooth Intersections in Toric Varieties — Springer (2019) https://doi.org/10.1007/s00029-019-0515-5


András Szenes & Olga Trapeznikova (University of Geneva / SwissMAP)

  • Recently (2025) produced explicit combinatorial models of IH for type-A toric varieties.

Top papers:

  1. Intersection Cohomology of Type-A Toric Varieties — Algebraic Combinatorics (ALCO), 2025. https://alco.centre-mersenne.org/item/10.5802/alco.456/

  2. Combinatorial Models for Toric Intersection Cohomology — https://arxiv.org/abs/2503.02118


Shihoko Ishii (University of Tokyo)

  • Expert on arc spaces and singularities of toric varieties, contributing to understanding their Hodge-theoretic and motivic structure.

Top papers:

  1. Arc Spaces and the Nash Problem for Toric Varieties — https://doi.org/10.1007/s002220050230

  2. Jet Schemes and Singularities of Toric Varieties — https://arxiv.org/abs/math/0403151


📘 Standard References and Supporting Works

David A. Cox, John B. Little, Hal Schenck

  • Authors of the canonical textbook Toric Varieties, the go-to reference in the field.

Top references:

  1. Toric Varieties — AMS Graduate Studies in Mathematics (2011) https://pi.math.cornell.edu/~david/CoxLittleSchenck-ToricVarieties.pdf

  2. Ideals, Varieties, and Algorithms (context for computations) — https://doi.org/10.1007/978-3-319-16721-3


Claire Voisin (Collège de France)

  • Produced counterexamples to the generalized Hodge conjecture for compact Kähler varieties.

  • Foundational authority in variations of Hodge structures.

Top papers:

  1. Hodge Theory and Complex Algebraic Geometry I & II — Cambridge University Press (2007) https://doi.org/10.1017/CBO9780511615344

  2. Counterexamples to the Generalized Hodge Conjecture for Compact Kähler Varieties — https://arxiv.org/abs/math/0209265


🧭 Summary Table: Key Lines of Research

  • Intersection Homology Foundations — MacPherson, Goresky, BBFK — topological and combinatorial IH — Invent. Math. 1980; arXiv:math/0203142
  • Combinatorial IH & Hard Lefschetz — Braden, Proudfoot, Karu — fans, polytopes, Lefschetz — arXiv:math/9907088; arXiv:math/0311027
  • Hodge Modules & Filtrations — Kim, Venkatesh, Maxim — IH Hodge structures — arXiv:2404.08122; web.math.wisc.edu/~maxim
  • Hodge Conjecture (Toric) — Bruzzo, Grassi, Ishii — hypersurfaces, quasi-smooth intersections, singularities — arXiv:math/9803147; Springer 2019
  • Modern Combinatorial IH — Szenes, Trapeznikova — type-A toric IH — ALCO 2025; arXiv:2503.02118
  • General Hodge Theory — Voisin — Hodge structures & Kähler geometry — arXiv:math/0209265; CUP 2007


Toric Geometry - Our basic research regarding Hodge Conjecture



Formal Problem Statement

  • The document addresses the Intersection Cohomology of a Fan and the Hodge Conjecture for Toric Varieties.

  • The problem involves a projective, (potentially singular) toric variety $X_{\Sigma}$ defined by a fan $\Sigma$ in a lattice $N$.

  • The Intersection Hodge Conjecture for $X_{\Sigma}$ asserts that the "Hodge-theoretic" cycle class map is surjective.

  • The map in question is $cl_{IH}:\bigoplus_{k}\mathcal{Z}^{k}(X_{\Sigma})_{\mathbb{Q}}\rightarrow\bigoplus_{k}IH^{2k}(X_{\Sigma},\mathbb{Q})\cap IH^{k,k}(X_{\Sigma})$.

  • $\mathcal{Z}^{k}(X_{\Sigma})$ represents the group of algebraic cycles of codimension $k$.

  • $IH^{*}$ denotes the intersection cohomology.

  • The fundamental open problem is to find a purely combinatorial description for both Hodge-theoretic and algebraic cycle classes.

  • This description should be in terms of the fan $\Sigma$ and the lattice $N$.

  • The problem also requires proving the equivalence of these two descriptions.

The Two Objectives

  • Objective 1 (Hodge Side): Develop a combinatorial algorithm using only fan data to compute a basis for "combinatorial Hodge classes".

  • "Combinatorial Hodge classes" are defined as the rational classes in $IH^{k,k}(X_{\Sigma})$.

  • Objective 2 (Algebraic Side): Prove that the space from Objective 1 is spanned precisely by the intersection cohomology classes of torus-invariant subvarieties $V(\tau)$.

  • This must hold for all cones $\tau \in \Sigma$.

Background and Context

  • This problem is described as a specialized, combinatorial version of one of the deepest unsolved problems in mathematics.

  • The Hodge Conjecture: It states that for a smooth projective variety $X$, any class in $H^{2k}(X,\mathbb{Q})\cap H^{k,k}(X)$ is the class of an algebraic cycle.

  • The conjecture connects the abstract topology of $X$ to its concrete algebraic geometry.

  • Intersection Cohomology: Most toric varieties are singular.

  • For singular varieties, standard cohomology $H^{*}(X)$ lacks good properties like Poincaré Duality.

  • Intersection Cohomology ($IH^{*}(X)$) is the correct replacement for singular varieties.

  • Therefore, the Hodge Conjecture must be reformulated in terms of $IH^{*}(X)$.

  • The Fan: For a toric variety $X_{\Sigma}$, every geometric and topological property is completely encoded in the combinatorial data of its fan $\Sigma$.

  • The "natural" algebraic cycles on $X_{\Sigma}$ are the torus-invariant subvarieties $V(\tau)$.

  • These subvarieties correspond one-to-one with the cones $\tau \in \Sigma$.

  • The cohomology of smooth toric varieties is well understood combinatorially and related to the Stanley-Reisner ring of the fan.

  • The combinatorial description of intersection cohomology $IH^{*}(X_{\Sigma})$ is much more complex.
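As an example of the "well understood combinatorially" claim for smooth toric varieties (the standard formula, stated here for orientation; it is the baseline that the IH description must generalize):

```latex
% For a smooth complete toric variety X_\Sigma of dimension n, with d_j the
% number of j-dimensional cones of \Sigma, the odd Betti numbers vanish and
b_{2k}(X_\Sigma) \;=\; \sum_{i=k}^{n} (-1)^{i-k} \binom{i}{k}\, d_{n-i}.
% Example: X_\Sigma = \mathbb{P}^2 has d_0 = 1, d_1 = 3, d_2 = 3, giving
%   b_0 = d_2 - d_1 + d_0 = 1,  b_2 = d_1 - 2 d_0 = 1,  b_4 = d_0 = 1,
% which matches H^*(\mathbb{P}^2).
```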

Key Challenges

  • Challenge 1: Computing $IH^{*}(X_{\Sigma})$ combinatorially is hard, as there is no simple, general formula. A successful approach might need a new combinatorial invariant that captures the "failure" of $X_{\Sigma}$ to be smooth.

  • Challenge 2: A main creative step is to define a "combinatorial Hodge class" by finding a "combinatorial signature" within the fan data that identifies classes of type $(k, k)$.

  • This signature would be some new invariant of the cones and their relationships within the lattice $N$.

  • Challenge 3: A solution would need to prove surjectivity by showing the new combinatorial description (from Challenge 2) generates a set identical to the set of torus-invariant cycles $V(\tau)$.

  • Proving that no other algebraic cycles are needed would be a major breakthrough.


Terence Tao on YouTube with Lex Fridman

The source provides excerpts from a conversation between Terence Tao, a celebrated mathematician, and Lex Fridman, where they discuss a vast array of topics in mathematics and physics. The discussion centers on difficult unsolved problems, such as the Navier-Stokes regularity problem and the Riemann Hypothesis, exploring the nature of singularities, fluid dynamics, and the distribution of prime numbers. Tao also describes his work on the Kakeya problem and his conceptualization of a "liquid computer" to model mathematical blowup scenarios, drawing parallels to Turing machines and cellular automata. Furthermore, the conversation examines the role of technology and collaboration in modern mathematics, specifically mentioning the use of the Lean proof assistant and the potential impact of artificial intelligence on conjecture generation and proof formalization.



==

Terence Tao is widely considered one of the greatest mathematicians in history, often referred to as the Mozart of math. He has been recognized with both the Fields Medal and the Breakthrough Prize in mathematics.

Here are 60 detailed points concerning mathematics, physics, AI, and the work discussed in the sources:

  1. Terence Tao has contributed groundbreaking work across an astonishing range of fields in mathematics and physics.
  2. Tao’s ability to go both deep and broad in mathematics is reminiscent of the great mathematician Hilbert.
  3. Tao identifies primarily as a fox, favoring broad knowledge and seeing connections between disparate fields, over the hedgehog style of singular deep focus.
  4. He values mathematical arbitrage: taking tricks learned in one field and adapting them to a seemingly unrelated field.
  5. Really interesting mathematical problems lie on the boundary between what is easy and what is considered hopeless.
  6. The Kakeya problem caught Tao's eye during his PhD studies and has recently been solved.
  7. Historically, the Kakeya problem originated as a puzzle posed by Japanese mathematician Soichi Kakeya around 1918.
  8. The 2D Kakeya puzzle asks for the minimum area required to turn a unit needle around on a plane.
  9. Besicovitch demonstrated that in 2D, the needle can be turned around using arbitrarily small area (e.g., 0.001).
  10. The 3D Kakeya conjecture concerned the minimum volume needed to rotate a very thin object (like a telescope tube of thickness delta) to point in every direction.
  11. The 3D conjecture proposed that this minimum volume decreases very slowly, roughly logarithmically, as the thickness delta diminishes.
  12. The Kakeya problem connects surprisingly to partial differential equations (PDEs), number theory, geometry, and combinatorics.
  13. One connection is to wave propagation, where a localized wave packet (like a light ray) occupies a tube-like region in space and time.
  14. The Navier-Stokes regularity problem is a famous unsolved Millennium Prize Problem offering a million-dollar prize.
  15. This problem concerns the Navier-Stokes equations, which govern the flow of incompressible fluids, such as water.
  16. The key question is whether the velocity of the fluid can ever concentrate so much that it becomes infinite at a point, known as a singularity.
  17. Tao published a 2016 paper on "Finite Time Blowup for an Averaged Three-Dimensional Navier-Stokes Equation," exploring this difficulty.
  18. Finite time blow up occurs if all the energy of a fluid concentrates into a single point in a finite amount of time.
  19. Water is naturally viscous, meaning that if energy is spread out (dispersed), viscosity damps the energy down.
  20. The difficulty arises from the possibility of a "Maxwell's demon" effect, where energy is pushed into smaller and smaller scales faster than viscosity can control it.
  21. The Navier-Stokes equation is a struggle between linear dissipation (viscosity, which calms things down) and nonlinear transport (which causes problems).
  22. 3D Navier-Stokes is considered supercritical, meaning that at small scales, the nonlinear transport terms dominate the viscosity terms.
  23. In 2D, blowup was disproved because the equations are critical, where transport and viscosity forces are roughly equal even at small scales.
  24. Tao engineered a blowup for an averaged Navier-Stokes equation to create an obstruction that rules out certain methods for solving the true equation.
  25. This engineered blowup required sophisticated programming of delays, functioning like an electronic circuit or a Rube Goldberg machine described mathematically.
  26. This work suggests the possibility of constructing a liquid computer—a fluid analog of a Turing or von Neumann machine—that could induce blowup through self-replication and scaling.
  27. The concept of a fluid machine that creates a smaller, faster version of itself, transferring all its energy to the new state, provides a roadmap for finite time blowup.
  28. The idea of liquid computers has precedent in cellular automata like Conway's Game of Life, where simple rules lead to complex structures, including self-replicating objects.
  29. The most incomprehensible thing about the universe is that it is comprehensible (the unreasonable effectiveness of mathematics, noted by Einstein).
  30. Universality helps explain comprehensibility: macro-scale laws often emerge from micro-scale complexity depending only on a few parameters (e.g., temperature and pressure).
  31. The Central Limit Theorem is a basic example of universality, explaining the ubiquitous appearance of the Gaussian bell curve in nature.
  32. Mathematics primarily deals with abstract models of reality and exploring the logical consequences of the axioms within those models.
  33. Euler’s identity ($e^{i\pi} = -1$) is often deemed the most beautiful equation because it unifies concepts of exponential growth, rotation ($\pi$), and complex numbers ($i$), connecting dynamics and geometry.
  34. Noether’s theorem fundamentally connects symmetries in a physical system (like time translation invariance) to conservation laws (like conservation of energy).
  35. The search for a Theory of Everything requires finding the right mathematical language, similar to how Riemannian geometry was ready for Einstein's general relativity.
  36. The history of physics, like mathematics, has been characterized by unification (e.g., Maxwell unifying electricity and magnetism).
  37. Prime numbers are often referred to as the atoms of mathematics, fundamental to the multiplicative structure of natural numbers.
  38. Combining additive questions (e.g., differences) and multiplicative questions (e.g., primes) yields extremely difficult problems.
  39. The Twin-Primes Conjecture proposes that there are infinitely many pairs of primes that differ by two.
  40. Twin primes are sparse and sensitive; their existence cannot be proven merely by aggregate statistical analysis of the primes.
  41. The Green-Tao theorem proves that prime numbers contain arithmetic progressions of any arbitrary length.
  42. Arithmetic progressions are remarkably robust; they remain present even if 99% of primes are eliminated.
  43. Current mathematical work has established that there are infinitely many pairs of primes that differ by at most 246.
  44. The main obstacle to proving the Twin-Primes Conjecture is the parity barrier, which prevents current techniques from establishing a sufficiently high density of primes within "almost primes".
  45. The Riemann Hypothesis conjectures that the primes behave as randomly as possible (square root cancellation) when considering multiplicative properties.
  46. The Collatz conjecture states that applying the rule (3N+1 if odd, N/2 if even) to any natural number eventually leads to 1.
  47. Statistically, the Collatz sequences behave like a random walk with a downward drift, suggesting most numbers will fall to a smaller value.
  48. The Collatz problem is difficult because there might exist a special outlier number—a "heavier than air flying machine" encoded within the number—that shoots off to infinity.
  49. Lean is a formal proof programming language that produces computationally verifiable "certificates" guaranteeing the correctness of mathematical arguments.
  50. Lean is like explaining a proof to an "extremely pedantic colleague," requiring explicit justification for every step.
  51. Formalizing a proof in Lean currently requires about 10 times the effort of writing it down in a conventional math paper.
  52. The immense Lean project Mathlib contains tens of thousands of formalized useful mathematical facts.
  53. The ability to localize errors and rely on certificates makes Lean advantageous for updating proofs (e.g., changing a constant like 12 to 11 without rechecking every line).
  54. Lean enables trustless mathematics collaboration, allowing Tao to work with dozens of people globally, relying on the system's verification rather than personal trust.
  55. Tao used Lean to organize the Equational Theories Project, a crowdsourced effort involving around 50 authors tackling 22 million problems in abstract algebra.
  56. The goal of the Equational Theories Project was to map the entire graph of which algebraic laws imply which other laws.
  57. AI tools are being applied to Lean for tasks like Lemma Search and sophisticated autocomplete, helping to reduce the friction of formalization.
  58. AI-generated mathematical proofs can be dangerous because they often look superficially flawless and odorless (lacking the "code smell" of bad human work), but contain subtle, stupid errors.
  59. The Fields Medal winner Grigori Perelman famously declined both the medal and the associated million-dollar Millennium Prize for solving the Poincaré conjecture.
  60. Perelman's proof, involving the Ricci flow equation, required classifying all potential singularities—a difficult undertaking that transformed the problem from a supercritical one into a critical one.
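The Collatz rule in point 46 is simple enough to state directly in code. A minimal sketch (the step limit is an arbitrary safeguard for illustration, not part of the conjecture, which asserts every starting value eventually reaches 1):

```python
def collatz_steps(n, limit=10_000):
    """Apply the Collatz rule (3n+1 if odd, n/2 if even) until reaching 1.

    Returns the number of steps taken, or None if `limit` is exceeded.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > limit:
            return None  # give up; the conjecture says this never happens
    return steps

print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```

The wandering path of a number like 27 (which climbs as high as 9,232 before falling) is what the "random walk with a downward drift" picture in point 47 describes.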


Richard Karp YouTube with Lex Fridman

Richard Karp is considered one of the most important figures in the history of theoretical computer science, having received the Turing Award in 1985 for his research in algorithm theory. His significant contributions include the development of the Edmonds-Karp algorithm for solving the max flow problem on networks and the Hopcroft-Karp algorithm for finding maximum cardinality matchings in bipartite graphs. He is particularly famous for his landmark paper in complexity theory, "Reducibility among combinatorial problems," which proved that 21 problems were NP-complete, acting as the most important catalyst for the explosion of interest in the P versus NP problem.

https://www.youtube.com/watch?v=KllCrlfLuzs&t=580s

==

Richard Karp is a professor at Berkeley and one of the most important figures in the history of theoretical computer science.

  1. He received the Turing Award in 1985 for his research in the theory of algorithms.
  2. His contributions include the development of the Edmonds-Karp algorithm for solving the max flow problem on networks.
  3. He also developed the Hopcroft-Karp algorithm for finding maximum cardinality matchings in bipartite graphs.
  4. Karp is known for his landmark paper in complexity theory, titled "Reducibility among combinatorial problems".
  5. This paper proved that 21 problems were NP-complete.
  6. The paper served as the most important catalyst for the explosion of interest in the study of NP-completeness and the P versus NP problem.
  7. At age 13, Karp was first exposed to plane geometry and was "wonder struck by the power and elegance of formal proofs".
  8. He enjoyed the fact that pure reasoning could establish a geometric fact "beyond dispute".
  9. Karp found solving puzzles in plane geometry much more enjoyable than earlier mathematics courses focused on arithmetic operations.
  10. He was surprised and convinced by the ease of the proof that the sum of the angles of a triangle is 180 degrees.
  11. Karp notes that he lacked three-dimensional vision and intuition for visualizing 3D objects or hyperplanes.
  12. When working with tools like linear programming, he relies on algebraic properties because he lacks high-dimensional intuition.
  13. When designing algorithms, he visualizes the process as an inner loop where the distance from the desired solution is iteratively reducing until the exact solution is reached.
  14. He finds compelling beauty in the certainty of convergence, where the gap from the optimum point decreases monotonically.
  15. Karp connects his appreciation for the orderly, systematic nature of innovative algorithms to a desire he might have had for orderly activities like woodworking.
  16. He used to amuse himself by performing mental arithmetic, such as multiplying four-digit decimal numbers in his head.
  17. Mathematics offers an "escape from the messiness of the real world where nothing can be proved".
  18. The Assignment Problem requires finding a one-to-one matching (e.g., $N$ boys and $N$ girls) that minimizes the sum of associated costs.
  19. The Hungarian Algorithm solves the Assignment Problem.
  20. A key observation enabling the Hungarian Algorithm is that the optimal assignment is unchanged if a constant is subtracted from any row or column of the cost matrix.
  21. The algorithm proceeds by subtracting constants from rows or columns while ensuring all elements remain non-negative, ultimately aiming for a full permutation of zeros.
  22. Jack Edmonds and Karp were the first to show that the Assignment Problem could be solved in polynomial time, specifically $N^3$, improving on earlier $N^4$ algorithms.
  23. As a PhD student in 1955, Karp was at the computational lab at Harvard, where Howard Aiken had built the Mark I and the Mark IV computers.
  24. The Mark IV computer filled a large room, and Karp could walk around inside its rows of relays.
  25. He noted that the machine would sometimes fail due to "bugs," which literally meant flying creatures landing on the switches.
  26. The lab eventually acquired a Univac computer with 2,000 words of storage, which necessitated careful allocation due to varying access times.
  27. Karp was primarily attracted to the underlying algorithms rather than the physical implementation of the machines.
  28. He did not anticipate the future of personal computing or having computers in pockets.
  29. Karp read Turing's paper on the Turing Test but felt the test was too subjective to accurately calibrate intelligence.
  30. He is doubtful that algorithms can achieve human-level intelligence.
  31. Karp suggests that multiplying the speed of computer switches by a large factor will not be useful until the organizational principle behind the network of switches is understood.
  32. A combinatorial algorithm deals with a system of discrete objects that need to be arranged or selected to achieve some goal or minimize a cost function.
  33. A graph is a set of points (vertices) where certain pairs are joined by lines (edges), often representing interconnections.
  34. The maximum flow problem, which Karp worked on, involves finding the maximum rate at which a commodity (like gas, water, or information) can flow from a source to a destination through channels with capacity limits.
  35. An algorithm runs in polynomial time (P) if the number of computational steps grows only as some fixed power of the size of the input (e.g., $N, N^2, N^3$).
  36. Theorists generally take polynomial time as the definition of an efficient algorithm.
  37. Complexity theory measures the performance of an algorithm based on its performance in the worst case.
  38. NP (Non-deterministic Polynomial time) is the class of problems where, although solving the problem may be hard, verifying a potential solution can be done efficiently (in polynomial time).
  39. For example, finding the largest clique is hard (NP), but checking whether a given set of vertices forms a clique is easy (P).
  40. The central problem in computational complexity is whether P is equal to NP (if every problem easy to check is also easy to solve).
  41. Karp strongly suspects that P is unequal to NP because centuries of intensive study have failed to find polynomial-time algorithms for many easy-to-check problems, such as factoring large numbers.
  42. If P $\neq$ NP, researchers will know that for the great majority of NP-complete problems, they cannot expect to get optimal solutions and must rely on heuristics or approximations.
  43. NP-complete problems are defined as the hardest decision (yes/no) problems within the class NP.
  44. NP-hard problems are optimization problems that correspond to the hardest problems in the class, such as finding the largest clique rather than just deciding if one exists.
  45. Stephen Cook showed that the Satisfiability problem (SAT) of propositional logic is as hard as any problem in the class NP.
  46. Cook proved this using the abstract Turing machine, showing that any NP problem can be translated into an equivalent SAT instance.
  47. Karp extended this, showing that SAT could be reduced to 21 other fundamental problems (e.g., integer programming, clique), establishing their complexity equivalence.
  48. Karp considers the Stable Matching Problem (Stable Marriage Problem) to be one of the most beautiful combinatorial algorithms.
  49. A matching is stable if there is no pair who would prefer to run away with each other, leaving their current partners behind.
  50. An algorithm developed by Gale and Shapley ensures that a stable matching exists and can be found by having one side (e.g., boys) propose and the other side (girls) tentatively accept.
  51. In the Gale and Shapley algorithm, the proposing side (the boys) ends up doing at least as well as they could in any other stable matching.
  52. Karp is especially proud of the Rabin-Karp algorithm for string searching because it demonstrates the power of randomization.
  53. This algorithm associates a fingerprint (a number derived using a random prime) with the word being searched.
  54. The use of randomization, such as taking a random sample in an election, works well because phenomena that occur almost all the time are likely to be found via random selection.
  55. Although problems like Satisfiability and the Traveling Salesman Problem are NP-hard (poor worst-case performance), practical instances arising in digital design or geometry can often be solved efficiently by specialized SAT solvers and TSP codes.
  56. Karp studied average-case analysis by modeling random graphs, but concluded that results based on such simplistic assumptions about typical problems often lacked practical "bite".
  57. He believes that if P=NP is proven, it will involve concepts and approaches that "we do not now have".
  58. Karp dedicated his Turing Award lecture to the memory of his father.
  59. He inherited a great desire to be a teacher from his father, remembering his ability to draw perfect circles by hand on the blackboard and engage his students.
  60. His top three pieces of advice for teaching are preparation, preparation, and preparation.
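The propose/tentatively-accept loop of points 50–51 can be sketched compactly. This is an illustrative implementation, assuming complete preference lists of equal size; the dict shapes and names are my own, not from the conversation:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Find a stable matching; proposers get their best stable partner.

    Both arguments map each person to a list of the other side's names,
    most preferred first. Returns a dict: proposer -> matched acceptor.
    """
    # rank[a][p] = position of proposer p in acceptor a's preference list
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index to propose to
    engaged = {}                                  # acceptor -> proposer
    free = list(proposer_prefs)

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p                        # a tentatively accepts
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])               # a trades up; old partner is free again
            engaged[a] = p
        else:
            free.append(p)                        # a rejects; p will propose further down his list
    return {p: a for a, p in engaged.items()}
```

The tentative acceptances are what guarantee termination: each proposer moves monotonically down his list, and no blocking pair (point 49) can survive the loop.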
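The fingerprint idea behind Rabin-Karp (points 52–53) can be sketched as a rolling hash. The fixed prime and base below are illustrative choices for clarity; the actual algorithm's collision guarantee comes from picking the prime at random:

```python
def rabin_karp(text, pattern, prime=1_000_003, base=256):
    """Return all indices where `pattern` occurs in `text`.

    A fingerprint (hash mod a prime) of the pattern is compared against a
    rolling fingerprint of each text window; only hash matches are verified
    character by character, so hash collisions cannot cause false positives.
    """
    m, n = len(pattern), len(text)
    if m > n:
        return []
    high = pow(base, m - 1, prime)       # weight of the window's leading character
    p_hash = w_hash = 0
    for i in range(m):                   # fingerprints of pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % prime
        w_hash = (w_hash * base + ord(text[i])) % prime
    hits = []
    for i in range(n - m + 1):
        if w_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                    # roll the window one character to the right
            w_hash = ((w_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % prime
    return hits
```

Because the fingerprint updates in constant time per shift, the expected running time is linear in the text length rather than the naive O(nm).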


ISO checklist for good customer support

Here’s an ISO-aligned checklist for good customer support, drawing on standards such as ISO 10002:2018 (Customer satisfaction — Guidelines for complaints handling) and ISO 9001:2015 (Quality management systems).


🧭 1. Customer Focus & Policy

  • A documented customer service policy is in place.

  • The policy aligns with the organization’s quality objectives.

  • Customer focus is embedded in company culture and communicated to all staff.

  • Roles and responsibilities for customer support are clearly defined.


📋 2. Complaint Handling Process (ISO 10002)

  • A formal, accessible, and simple complaint process is established.

  • Complaints can be submitted via multiple channels (email, web form, phone, etc.).

  • Each complaint is acknowledged promptly.

  • A unique reference number is assigned for tracking.

  • Resolution timelines are clearly communicated to the customer.

  • Complaint records are retained and reviewed for trends and improvement.

  • Escalation procedures are documented and applied consistently.


📞 3. Communication & Responsiveness

  • Support channels are available and responsive (as per service-level targets).

  • Staff provide clear, polite, and professional communication.

  • Customers receive regular updates on open issues.

  • Support scripts/templates are standardized but allow personalization.

  • Multilingual support and accessibility options are available (if applicable).


👩‍💼 4. Competence & Training

  • Customer support staff undergo regular training on products, systems, and soft skills.

  • Training records are maintained.

  • Performance reviews include customer satisfaction metrics.

  • Staff are empowered to resolve issues within defined authority levels.


📊 5. Monitoring, Measurement & Feedback

  • Customer satisfaction surveys are regularly conducted.

  • Key Performance Indicators (KPIs) are tracked, such as:

    • Average response/resolution time

    • Customer satisfaction score (CSAT)

    • Net Promoter Score (NPS)

    • First Contact Resolution (FCR) rate

  • Feedback results are analyzed and used for improvement.

  • Reports are reviewed by management periodically.
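The KPIs above can be computed directly from raw ticket records. A minimal sketch, assuming hypothetical field names (`csat`, `nps`, `contacts`, `resolution_h`) that a real ticketing system would map onto its own schema:

```python
from statistics import mean

def support_kpis(tickets):
    """Compute CSAT, NPS, FCR, and average resolution time.

    Each ticket is a dict with illustrative fields:
      'csat'         - satisfaction rating 1-5 (None if not surveyed)
      'nps'          - likelihood-to-recommend 0-10 (None if not surveyed)
      'contacts'     - number of customer contacts before resolution
      'resolution_h' - hours from open to resolution
    """
    csat = [t['csat'] for t in tickets if t.get('csat') is not None]
    nps = [t['nps'] for t in tickets if t.get('nps') is not None]
    promoters = sum(1 for s in nps if s >= 9)
    detractors = sum(1 for s in nps if s <= 6)
    return {
        # CSAT: share of survey respondents rating 4 or 5
        'csat_pct': 100 * sum(1 for s in csat if s >= 4) / len(csat),
        # NPS: % promoters (9-10) minus % detractors (0-6)
        'nps': 100 * (promoters - detractors) / len(nps),
        # FCR: share of all tickets resolved on the first contact
        'fcr_pct': 100 * sum(1 for t in tickets if t['contacts'] == 1) / len(tickets),
        'avg_resolution_h': mean(t['resolution_h'] for t in tickets),
    }
```

Note the design choice: CSAT and NPS divide by the number of survey respondents, while FCR divides by all tickets; mixing up these denominators is a common reporting error.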


🔄 6. Continuous Improvement

  • A Corrective Action process is in place for recurring issues.

  • Lessons learned are shared across departments.

  • The complaint handling process is periodically audited and updated.

  • Improvement actions are documented and tracked to closure.


🧱 7. Documentation & Record Control

  • Policies, procedures, and records are document-controlled (revision history, approvals, etc.).

  • Records of customer interactions and resolutions are secure and confidential.

  • Data protection complies with ISO/IEC 27001 and GDPR (where applicable).