TorchLean API

NN.Runtime.Autograd: Runtime Autograd Overview #

This directory is the runtime execution layer for TorchLean's automatic differentiation. It contains three closely related "ways to run" the same differentiable programs:

  1. Eager tape (dynamic DAG): record a runtime tape during the forward pass, then run a reverse-mode loop over that tape to accumulate gradients (a minimal sketch of this core follows the list).
  2. Proof-compiled graph (typed SSA/DAG): record a well-typed IR (GraphData), compile it to the same runtime tape, then reuse the same reverse-mode loop.
  3. Imperative sessions: wrap either backend behind an API that feels closer to PyTorch (TensorRef objects, backward, step, etc.).
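
To make the shared core concrete, here is a minimal sketch of a runtime tape and its reverse-mode loop. The names (`TapeNode`, `Tape`, `backwardLoop`) and the choice of scalar `Float` values are illustrative assumptions, not TorchLean's actual definitions; the point is only the shape of the mechanism that backends 1 and 2 both target.

```lean
-- Hypothetical sketch; `TapeNode`, `Tape`, and `backwardLoop` are
-- illustrative names, not the real TorchLean definitions.

/-- One recorded operation: the ids of its inputs and a pullback that
maps the gradient arriving at this node to the gradient contributions
for each input. Scalar `Float` values keep the sketch small. -/
structure TapeNode where
  inputs   : Array Nat
  pullback : Float → Array Float
deriving Inhabited

/-- The tape: nodes in the order the forward pass recorded them, so a
node's id is its index and its inputs always point backwards. -/
abbrev Tape := Array TapeNode

/-- Reverse-mode loop: seed the chosen output with gradient 1, then walk
the tape from the last node to the first, accumulating each node's
pullback contributions into its inputs' gradient slots. -/
def backwardLoop (tape : Tape) (seedId : Nat) : Array Float := Id.run do
  let mut grads := Array.mkArray tape.size 0.0
  grads := grads.set! seedId 1.0
  for i in [0:tape.size] do
    let j := tape.size - 1 - i          -- visit nodes in reverse order
    let g := grads[j]!
    if g != 0.0 then                    -- skip nodes no gradient reaches
      let node := tape[j]!
      let contribs := node.pullback g
      for k in [0:node.inputs.size] do
        let inp := node.inputs[k]!
        grads := grads.set! inp (grads[inp]! + contribs[k]!)
  return grads
```

Because inputs always point backwards in the tape, a single reverse pass visits every node after all of its consumers, which is exactly the topological-order requirement of reverse-mode accumulation; both the eager recorder and the GraphData compiler can target this one representation, so the loop is written once.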

The key architectural idea is to keep a small, explicit, pure core (the tape and its reverse-mode loop) and to build the convenience layers (imperative sessions, training helpers, optimizers) on top of it. Keeping the core small makes it easier to state and prove facts about execution (see Connection to Proofs below) and lets the ergonomic layers evolve without touching it; the sketch below shows one way such a session layer can wrap the core.
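
As a hedged sketch of what the imperative layer could look like on top of the tape core above (the names `SessionState`, `SessionM`, `leaf`, and the specific `TensorRef` method signatures are hypothetical, chosen only for illustration; only `TensorRef` and `backward` are mentioned by this overview):

```lean
-- Hypothetical sketch of the imperative layer, reusing `Tape` and
-- `backwardLoop` from the sketch above.

/-- Opaque handle to a recorded value, identified by its tape index. -/
structure TensorRef where
  id : Nat

/-- Mutable session state: the tape plus the forward values. -/
structure SessionState where
  tape   : Tape        := #[]
  values : Array Float := #[]

abbrev SessionM := StateM SessionState

/-- Record a leaf (input or parameter) and return a handle to it. -/
def leaf (v : Float) : SessionM TensorRef := do
  let s ← get
  set { s with tape   := s.tape.push { inputs := #[], pullback := fun _ => #[] }
             , values := s.values.push v }
  return ⟨s.tape.size⟩

/-- Record a multiplication; the pullback uses ∂(x·y)/∂x = y and
∂(x·y)/∂y = x, with the forward values captured in its closure. -/
def TensorRef.mul (a b : TensorRef) : SessionM TensorRef := do
  let s ← get
  let x := s.values[a.id]!
  let y := s.values[b.id]!
  set { s with tape   := s.tape.push { inputs   := #[a.id, b.id]
                                     , pullback := fun g => #[g * y, g * x] }
             , values := s.values.push (x * y) }
  return ⟨s.tape.size⟩

/-- `backward` is just the pure reverse-mode loop over the recorded tape. -/
def TensorRef.backward (t : TensorRef) : SessionM (Array Float) := do
  return backwardLoop (← get).tape t.id
```

Running `(do let a ← leaf 3.0; let b ← leaf 4.0; let c ← a.mul b; c.backward).run {}` would produce gradients `4.0` and `3.0` at the two leaves, matching the partial derivatives of `x * y`.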

Where to look #

Connection to Proofs #

The runtime files define executable behavior. Proof modules under NN.Proofs.Autograd.* state and prove facts about the same tape/IR vocabulary, including semantic equivalence of compiled IR execution and reusable tape-node soundness lemmas. CUDA and foreign-process bridges are checked by contracts and tests at this layer, while their external implementations remain outside Lean's trusted kernel.
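
To illustrate the shape such statements can take, reusing the `Tape` sketch above (every identifier below except `GraphData` is a placeholder, and the axioms stand in for the real definitions so that only the theorem's form is visible):

```lean
-- Illustrative only: axioms stand in for the real IR, compiler, and
-- evaluator; the actual theorems and proofs live in NN.Proofs.Autograd.*.
axiom GraphData : Type
axiom Env : Type
axiom evalGraph : GraphData → Env → Array Float  -- reference semantics of the IR
axiom compileToTape : GraphData → Tape           -- compile the IR to a runtime tape
axiom runTape : Tape → Env → Array Float         -- execute the compiled tape

/-- Semantic equivalence of compiled IR execution: running the compiled
tape agrees with the IR's reference semantics in every environment. -/
theorem compile_correct (g : GraphData) (env : Env) :
    runTape (compileToTape g) env = evalGraph g env := by
  sorry -- elided here; the real proof is stated against the real definitions
```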