Compiled Core #
Proof-compiled execution path.
This module exposes the "approach (a)" workflow:
- Build an executable SSA/DAG graph (`Proofs.Autograd.Algebra.GraphData`).
- Compile it to a runtime `Tape` with `compileAuxData`.
- Run `Tape.backwardDenseFrom` / `Tape.backwardDenseAll`.
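A minimal end-to-end sketch of these three steps, where every signature is an assumption made for illustration (the module's actual API may differ):

```lean
-- Hypothetical sketch of the workflow; the binder types and call shapes
-- here are assumptions, not the module's real signatures.
open NN.Runtime.Autograd.Compiled in
example (g : GraphData Float) : Array Float :=
  let tape := Graph.compileAuxData g   -- (2) emit the runtime eager tape
  Tape.backwardDenseAll tape           -- (3) dense reverse-mode sweep
```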
Notes / trust boundaries:
- If you instantiate `α := Float` or `α := TorchLean.Floats.IEEE754.IEEE32Exec`, you get an
  executable engine, but connecting those runs to real hardware semantics is treated as a
  trusted interface.
- The proof-carrying graph (`Proofs.Autograd.Algebra.Graph`) remains available for backends
  where you can actually discharge algebraic/calc correctness assumptions (e.g. `ℝ`, `ℚ`).
Reading map #
- `NN.Runtime.Autograd.Compiled.GraphM` is the small authoring DSL for compiled graphs.
- `NN.Runtime.Autograd.Compiled.IRExec` bridges `NN.IR.Graph` to executable graph data.
- `NN.Runtime.Autograd.Compiled.IRExec.Correctness` proves the forward-correctness lemmas.
See also:
- Proof link between compiled tapes and proved graph backprop:
NN/Proofs/Autograd/Runtime/Link.lean
Executable SSA/DAG graph for the proof-compiled pipeline.
This is `Proofs.Autograd.Algebra.GraphData` specialized to:
- `Δ := Unit` (no extra opaque environment threaded through evaluation), and
- the `Runtime.Autograd.Compiled` namespace.
Instances For
Compile an executable `GraphData` into a runtime eager tape.
This is the bridge from the proof-compiled SSA representation to the runtime tape engine:
`Graph.compileAuxData` emits a `Runtime.Autograd.Tape` whose nodes replay the graph and whose
backward closures implement the graph's VJP rules.
PyTorch comparison: conceptually similar to the front half of `torch.compile` / TorchDynamo
(tracing a computation to an IR), except our target is an explicit autograd tape for which we
also maintain proof links.
Instances For
Convention for the output node id in a compiled tape.
`compile` places the original inputs first (`Γ.length` nodes), then appends one node per
element of `ss`. The final output is therefore at index `Γ.length + ss.length - 1`.
Invariant: callers should only use this when `ss` is nonempty. For defensive code, prefer
`checkedOutId`, which reports an error instead of relying on `Nat`'s saturating subtraction.
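Concretely, with `Γ.length = 2` inputs: three produced nodes give output id `4`; an empty `ss` makes the same formula point at the last *input*; and when everything is empty, `Nat` subtraction saturates to `0` instead of failing:

```lean
#eval (2 + 3 - 1 : Nat)  -- 4: index of the last appended node
#eval (2 + 0 - 1 : Nat)  -- 1: with `ss` empty this is an input's id, not an output
#eval (0 + 0 - 1 : Nat)  -- 0: `Nat` subtraction saturates rather than erroring
```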
Instances For
Checked version of `outId`.
Compiled scalar/output programs should always have at least one produced node. This helper
makes that precondition executable, which is friendlier for user-facing compiled APIs and
tests.
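One plausible shape for such a checked accessor, assuming an `Except`-style error channel (the real `checkedOutId` may be defined differently):

```lean
-- Hypothetical reimplementation for illustration; not the source definition.
def checkedOutId' (ΓLen ssLen : Nat) : Except String Nat :=
  if ssLen = 0 then
    .error "compiled graph produced no nodes; no output id exists"
  else
    .ok (ΓLen + ssLen - 1)
```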
Instances For
Run reverse-mode backprop on a compiled tape, returning gradients for all node ids.
This uses the "total/dense" variant `Tape.backwardDenseAll`, seeding the output gradient with
`1` (the usual scalar-loss convention).
Instances For
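The dense variant can be pictured as the following generic reverse sweep. This is a standalone sketch, not the project's `Tape.backwardDenseAll`: `vjp` stands in for the tape's backward closures, each scattering a node's incoming gradient into its inputs, with the output slot seeded to `1`:

```lean
-- Generic dense reverse-mode sweep over `n` SSA nodes (illustrative only).
-- `vjp j g` returns (input id, contribution) pairs for node `j` given its
-- incoming gradient `g`.
def denseSweep (n : Nat) (vjp : Nat → Float → List (Nat × Float))
    (outId : Nat) : Array Float := Id.run do
  let mut g := Array.mkArray n 0.0
  g := g.set! outId 1.0                 -- scalar-loss seed: ∂L/∂out = 1
  for i in [0:n] do
    let j := n - 1 - i                  -- visit nodes in reverse SSA order
    for (k, contrib) in vjp j g[j]! do
      g := g.set! k (g[k]! + contrib)   -- accumulate into each input's slot
  return g
```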
Run reverse-mode backprop starting from an explicit seed gradient context.
This is the most general entry point: callers provide a `TList` of initial gradients for every
value in the compiled context (`Γ ++ ss`), and we run the proof-friendly dense loop
`Tape.backwardDenseFrom`.
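For instance, the scalar-loss convention corresponds to a one-hot seed over the compiled context. The sketch below represents that context as an `Array Float` for simplicity; the actual `TList` shape is an assumption we do not rely on:

```lean
-- Hypothetical seed builder: gradient 1 at node `i`, 0 everywhere else.
def oneHotSeed (ctxLen i : Nat) : Array Float :=
  (Array.range ctxLen).map fun j => if j = i then 1.0 else 0.0
```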