
NN.Proofs.Autograd.Coverage

Autograd Proof Coverage #

This module is a curated import surface and roadmap for TorchLean's proved reverse-mode autograd stack. It introduces no new theorems; it gathers the pieces users should import when working with the proof-level layer rather than with only the executable training path.
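As a minimal sketch of what that import surface looks like (assuming the three modules named in the list below are directly importable), a proof-level consumer would pull in:

```lean
-- Sketch of a proof-level import block, assuming the three modules
-- named in this roadmap form the intended public surface.
import NN.Proofs.Gradients.Activation
import NN.Proofs.Autograd.FDeriv.Elementwise
import NN.Proofs.Autograd.Tape.Nodes
```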

Primitive coverage #

The smooth, deterministic primitive path has three layers (sketched after this list):

  1. scalar calculus lemmas in NN.Proofs.Gradients.Activation,
  2. OpSpecCorrect / OpSpecFDerivCorrect proofs in NN.Proofs.Autograd.FDeriv.Elementwise, and
  3. tape-node NodeFDerivCorrect wrappers in NN.Proofs.Autograd.Tape.Nodes.
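To make the layering concrete, here is a hedged sketch of the first two layers using only Mathlib names plus a hypothetical predicate; the real OpSpecFDerivCorrect and NodeFDerivCorrect definitions live in the modules above and are stated over tensor op specs rather than bare real functions:

```lean
import Mathlib

-- Layer 1: a plain scalar calculus lemma (this one is Mathlib's).
example (x : ℝ) : HasDerivAt Real.exp (Real.exp x) x :=
  Real.hasDerivAt_exp x

-- Layer 2: a hypothetical spec-level correctness predicate, standing in
-- for OpSpecFDerivCorrect, which is stated over op specs rather than
-- bare ℝ → ℝ functions.
def OpDerivCorrect (f df : ℝ → ℝ) : Prop :=
  ∀ x, HasDerivAt f (df x) x

example : OpDerivCorrect Real.exp Real.exp :=
  fun x => Real.hasDerivAt_exp x

-- Layer 3 (NodeFDerivCorrect) lifts such facts to tape nodes.
```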

Current fully smooth elementwise coverage includes:

Pointwise / condition-carrying coverage includes:

These hypotheses are not bookkeeping noise: they are the mathematical reason we avoid claiming that nonsmooth primitives have ordinary Fréchet derivatives everywhere. Runtime systems such as PyTorch choose subgradient conventions at kinks (for example, taking relu's derivative at 0 to be 0); TorchLean can model those conventions, but a classical HasFDerivAt theorem must state its domain condition explicitly, as in the sketch below.
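As a hedged sketch of what such a condition-carrying statement looks like (Mathlib vocabulary only; TorchLean's actual relu theorems may be phrased differently), the classical derivative of relu exists away from the kink, and the positivity hypothesis is load-bearing:

```lean
import Mathlib

open Filter Topology

-- Away from the kink at 0, relu agrees with `id` on a neighbourhood,
-- so the classical derivative exists and equals 1. The hypothesis
-- `0 < x` is exactly the kind of explicit domain condition discussed above.
theorem relu_hasDerivAt_of_pos {x : ℝ} (hx : 0 < x) :
    HasDerivAt (fun t : ℝ => max t 0) 1 x := by
  have h : (fun t : ℝ => max t 0) =ᶠ[𝓝 x] id := by
    filter_upwards [eventually_gt_nhds hx] with t ht
    exact max_eq_left ht.le
  exact (hasDerivAt_id x).congr_of_eventuallyEq h
```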

Structured block coverage #

The larger block proofs are built by composing the tape-node theorems:
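As a hedged illustration of the composition pattern itself (Mathlib's chain rule stands in here for TorchLean's actual node-composition lemmas), two per-node derivative facts chain into a block-level fact:

```lean
import Mathlib

-- Composition pattern behind block proofs: per-node derivative facts
-- chain via the chain rule. `hf.comp x hg` is Mathlib's HasDerivAt.comp.
example {f g : ℝ → ℝ} {f' g' x : ℝ}
    (hf : HasDerivAt f f' (g x)) (hg : HasDerivAt g g' x) :
    HasDerivAt (f ∘ g) (f' * g') x :=
  hf.comp x hg
```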

Remaining model-level proof work #

The runtime/API model zoo is broader than the current end-to-end proof zoo. The reusable pieces above deliberately cover the hard foundations; the following model-level theorems remain open work rather than already-proved claims:

Trust boundary #

These are source-level mathematical theorems about TorchLean specs and the proof tape. CUDA kernels, cuBLAS/cuDNN/cuFFT, and compiler backends remain engineering trust boundaries. The intended bridge is to prove the spec/VJP rule here, then to test and contract-check each executable fast path against that spec, as in the sketch below.
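A minimal sketch of what such a contract check could look like on the executable side (all names here are hypothetical; tanhVjpSpec mirrors a proved scalar VJP rule, and the fast-path stand-in would in practice call a compiled kernel through FFI):

```lean
-- Hypothetical contract check: compare an executable fast path against
-- the proved spec formula on sampled inputs. All names are illustrative.

-- Spec-side VJP rule for tanh: g * (1 - tanh x * tanh x), mirroring the
-- proved scalar derivative.
def tanhVjpSpec (x g : Float) : Float :=
  let t := Float.tanh x
  g * (1 - t * t)

-- Stand-in for the executable fast path; in practice this would call a
-- compiled (e.g. CUDA-backed) kernel rather than reuse the spec formula.
def tanhVjpFast (x g : Float) : Float :=
  tanhVjpSpec x g

-- Check agreement within a tolerance on a list of sample points.
def contractCheck (samples : List (Float × Float)) (tol : Float) : Bool :=
  samples.all fun (x, g) =>
    decide (Float.abs (tanhVjpSpec x g - tanhVjpFast x g) ≤ tol)

#eval contractCheck [(0.0, 1.0), (1.5, -2.0), (-3.0, 0.5)] 1e-6  -- true
```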
