IRExec #
IR → executable SSA graph bridge.
This module lets us run an op-tagged `NN.IR.Graph` by compiling it into an executable
`Proofs.Autograd.Algebra.GraphData` (the SSA/DAG representation used by the proof-compiled runtime).
Why this exists:

- Verification tooling already targets `NN.IR.Graph` (an op-tagged DAG with external payloads).
- The runtime `.compiled` path executes `GraphData` (closures for each node).
- To enforce a single shared IR contract, we provide a checked translation `IR.Graph → GraphData`.

The forward-correctness theorem connecting `GraphData.eval` to the IR denotation (`NN.IR.Graph.denote*`) lives in `NN.Runtime.Autograd.Compiled.IRExec.Correctness.SemanticEquivalence` (split out so routine runtime imports do not pull in the full semantic proof).
Important:

- The produced `GraphData` is forward-correct by construction.
- Today `jvp`/`vjp` are forward-only sentinels; this bridge is intended for forward execution and for closing the shared-IR semantics gap, not for training-style gradient computation.
PyTorch intuition #
If you’re coming from PyTorch:
- This is closer in spirit to compiled graph execution (TorchScript / `torch.compile`) than to eager mode.
- `NN.IR.Graph` is the "shared IR" we want verifiers and runtimes to agree on.
- `GraphData` is the executable SSA/DAG form: each node becomes a closure that reads parent values from a typed runtime context.
- PyTorch’s autograd engine computes gradients by recording an eager tape; this bridge is about running the forward pass of an IR graph with a proof that it matches the IR semantics.
Reading map #
- `ExecGraphData` packages a compiled graph with its input shape.
- `IRExec.dValsOfCtx` converts typed runtime contexts back into IR-style value arrays.
- `IRExec.buildFrom` is the compiler from `NN.IR.Graph` to executable graph data.
- `IRExec.execGraphOfIR` is the main user-facing bridge entry point.
Main definitions #
- `ExecGraphData`: compiled executable graph package.
- `IRExec.mkIdx`: checked parent-id to typed-index bridge.
- `IRExec.mkFwdNode`: forward-only node constructor used during lowering.
- `IRExec.buildFrom`: recursive compiler from IR graph to executable SSA graph.
- `IRExec.execGraphOfIR`: user-facing compile entrypoint.
Implementation notes #
- We intentionally focus this bridge on forward semantics first; this keeps the shared IR contract clear between verification and runtime execution.
- We set `jvp`/`vjp` sentinels in this layer because gradient compilation is a separate concern from proving forward semantic equivalence.
- Lowering untyped numeric ids goes through typed indices (`Idx`) and explicit shape checks.
Tags #
ir, compiler, runtime, graphdata, forward-semantics
Simp rule for `Except` do-chains: binding an `.error` short-circuits.
Used heavily when discharging impossible branches in compilation correctness proofs.
Definitional simplification for `Except.bind` on `.ok`.
Definitional simplification for `Except.bind` on `.error`.
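The declarations these docstrings attach to were stripped in extraction. The two definitional facts can be reconstructed as follows; the lemma names here are placeholders, not the module's actual identifiers, but the statements hold by `rfl` for Lean core's `Except` monad:

```lean
-- Sketch only: binding `.ok` applies the continuation, binding `.error`
-- short-circuits. Both are definitional in Lean 4 core.
theorem bind_ok' {ε α β : Type} (a : α) (f : α → Except ε β) :
    (Except.ok a >>= f : Except ε β) = f a := rfl

theorem bind_error' {ε α β : Type} (e : ε) (f : α → Except ε β) :
    (Except.error e >>= f : Except ε β) = Except.error e := rfl
```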
A forward-executable SSA graph derived from an `NN.IR.Graph`.
The compiled graph stores:

- one distinguished input shape (`inShape`),
- one shape per compiled node (`ss`, corresponding to IR node ids `1..n-1`),
- and executable node closures (`g`) consumed by `GraphData.eval`.
- inShape : Spec.Shape
  The distinguished IR input node’s shape (node id 0).
- ss : List Spec.Shape
  Shapes of the IR nodes 1..(n-1) (one per executable SSA node).
- g
  Executable SSA/DAG graph for nodes 1..(n-1); inputs live in `Γ := [inShape]`.
Evaluate the compiled executable SSA graph on a concrete input tensor.
The result is the full typed runtime context `[inShape] ++ ss`, i.e. the input followed by every compiled node value in topological order.
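As a small sanity check of the context layout, the fact that a `[inShape] ++ ss` context holds one input slot plus one slot per compiled node is just a list identity (shown here generically, not with the library's actual types):

```lean
-- One input value followed by one value per compiled node:
-- the evaluated context always contains `ss.length + 1` entries.
example {α : Type} (inShape : α) (ss : List α) :
    ([inShape] ++ ss).length = ss.length + 1 := rfl
```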
Denotation Table Helper #
`ExecGraphData.eval` produces a typed runtime context `TList α ([inShape] ++ ss)`.
For debugging and for the forward-correctness development in
`NN.Runtime.Autograd.Compiled.IRExec.Correctness`, we provide a helper that erases this
context into an IR-style value table `Array (NN.IR.DVal α)` in node-id order.
Convert a runtime `AnyTensor` (shape carried as a field) into an IR denotation value `DVal`.
Convert a typed runtime context `TList α ss` into an IR-style value table.
This is phrased in terms of `Array (DVal α)` because the IR denotation functions (`denoteAll*`)
are array-based, while the compiled runtime evaluates into a typed context (`TList`).
Convert the full evaluated context into an IR-style value table (one `DVal` per node id).
This is the concrete bridge used in semantic equivalence statements that compare compiled
evaluation against `NN.IR.Graph.denoteAll*`.
Build a typed runtime index (`Idx`) for a numeric IR parent id.
The compiled runtime context is typed by a list of shapes `[inShape] ++ ss`. `mkIdx` checks that:

- `id` is in bounds, and
- the context shape at that position matches the expected shape `s`.

On failure, this returns a descriptive error string used directly by `buildFrom`.
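The bounds-plus-shape pattern can be sketched generically: a decidable shape type stands in for `Spec.Shape`, and a `Fin` into the context stands in for the typed `Idx`. The name and signature below are illustrative, not the real API:

```lean
-- Hedged sketch of a checked parent-id lookup: the id must be in bounds,
-- and the shape stored at that position must match the expected shape.
def checkedIdx {σ : Type} [DecidableEq σ] [ToString σ]
    (Γ : List σ) (id : Nat) (s : σ) : Except String (Fin Γ.length) :=
  if h : id < Γ.length then
    if Γ[id] = s then
      .ok ⟨id, h⟩
    else
      .error s!"parent id {id}: shape mismatch, expected {s}"
  else
    .error s!"parent id {id} out of bounds (context has {Γ.length} entries)"
```

Returning a `Fin Γ.length` (rather than a bare `Nat`) is what lets downstream node closures read parent values without re-checking bounds.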
Construct a `NodeData` for forward execution only.
The compiled runtime `GraphData` expects each node to supply `forward`, `jvp`, and `vjp`.
For this IR bridge we only care about forward correctness, so `jvp`/`vjp` are populated with
forward-only sentinels that `panic!` if called.
This is intentional: `IRExec` closes the forward semantics gap; full gradient behavior is handled
by other runtime/autograd layers. Using `panic!` here is a safety measure: it prevents silently
wrong gradients if someone accidentally routes differentiation through an `IRExec`-compiled graph.
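The sentinel pattern can be sketched with a simplified stand-in structure (the real `NodeData` fields and types are richer than this):

```lean
-- Forward-only node sketch: `forward` is supplied by the caller, while the
-- differentiation fields default to sentinels that fail loudly instead of
-- silently returning wrong gradients.
structure FwdOnlyNode (α : Type) [Inhabited α] where
  forward : α → α
  jvp : α → α := fun _ => panic! "IRExec: jvp not supported (forward-only bridge)"
  vjp : α → α := fun _ => panic! "IRExec: vjp not supported (forward-only bridge)"
```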
Forward projection for `mkFwdNode`.
The JVP/VJP fields are sentinels in this bridge, but the `forward` field is exactly the function passed to the constructor. This small simp lemma is used by the IR semantic-equivalence proof.
Apply a list of adjacent swaps (specified by swap depths) to a shape.
This is the shape-level companion of `applySwapsTensor`, and mirrors IR permutation lowering.
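Treating a shape as a plain dimension list, a swap depth exchanges two adjacent dimensions, and a permutation is realized by folding a sequence of such swaps. A hedged sketch (the real `Spec.Shape` and `swapShapeBySwaps` may differ):

```lean
-- Swap the adjacent entries at positions d and d+1 (no-op if out of range).
def swapAtDepth (l : List Nat) (d : Nat) : List Nat :=
  match d, l with
  | 0,      a :: b :: rest => b :: a :: rest
  | d' + 1, a :: rest      => a :: swapAtDepth rest d'
  | _,      l              => l

-- Applying depths in sequence composes adjacent transpositions:
-- [2, 3, 4] → [3, 2, 4] → [3, 4, 2]
#eval [0, 1].foldl swapAtDepth [2, 3, 4]
```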
Apply the same swap sequence as `swapShapeBySwaps`, but to a tensor value.
This uses `Tensor.swap_at_depth_helper` repeatedly; it is the runtime companion of the IR-side
`swapDepthsForPerm` lowering used by `.permute`.
Concatenate a list of tensors (all with shape `.dim nP rest`) along dimension 0.
The input list is expressed as typed indices into the runtime context `Γ`; the result tracks the
total concatenated size as a sigma.
This helper supports lowering of IR concat-style operators while preserving shape information.
The concatenated size reported by `concatDim0FromInfos` is the sum of the input sizes.
This theorem is used to justify the output-shape side conditions in concat lowering branches.
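At the list level, the content of this lemma is the generic fact that concatenation lengths add; an illustrative version over plain lists (not the library's actual statement):

```lean
-- Concatenating a list of lists yields a length equal to the sum of the
-- individual lengths (both sides phrased as folds).
example {α : Type} (xs : List (List α)) :
    (xs.foldr (· ++ ·) []).length = xs.foldr (fun x n => x.length + n) 0 := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [List.foldr_cons, List.length_append, ih]
```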
Compile the IR graph starting at node index `i`, extending the current SSA `State`.
This is the main compiler loop:

- it checks `i < g.nodes.size`,
- compiles node `i` into a `NodeData.forward` closure (rejecting unsupported ops/shapes),
- and snocs the resulting node into the accumulating `GraphData`.

The public entrypoint `execGraphOfIR` handles node 0 and calls `buildFrom` starting at `i = 1`.
Operationally, `buildFrom` is a checked compiler:

- success means every visited node had well-typed parents and a supported lowering case,
- failure returns a concrete error explaining the first unsupported/malformed node.
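The control flow of the loop can be sketched abstractly; here `compileNode` stands in for the real per-node lowering, and the accumulated `GraphData` state is elided:

```lean
-- Checked compile loop sketch: stop with an error at the first node that
-- fails to lower; succeed only if every node from i up to n compiles.
partial def buildLoopSketch (n : Nat) (compileNode : Nat → Except String Unit)
    (i : Nat) : Except String Unit :=
  if i < n then do
    compileNode i                        -- may reject unsupported ops/shapes
    buildLoopSketch n compileNode (i + 1)
  else
    .ok ()                               -- success: every node compiled
```

Because the loop runs in `Except`, the `.error`/`.ok` simp lemmas above are exactly what the correctness proofs use to discharge the short-circuiting branches.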
Compile an op-tagged IR graph into an executable SSA graph (`GraphData`) for forward evaluation.
Requirements:

- Node id 0 must be `.input`.
- The graph must satisfy `Graph.checkWellFormed`.
- The external payload must contain entries for every `.const`/`.linear`/`.conv2d` node id.

This returns an `ExecGraphData` whose `eval` computes all node values in topo order.
This is the main API consumed by runtime callers that want executable evaluation while remaining
aligned with the shared `NN.IR.Graph` semantics.
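Putting the pieces together, a hypothetical caller flow; the exact signatures are assumed from this page, not taken verbatim from the module:

```lean
-- Hypothetical usage sketch; `g`, `ext`, `x` and all signatures are assumptions.
-- def runForward (g : NN.IR.Graph) (ext : External) (x : Tensor) := do
--   let exec ← IRExec.execGraphOfIR g ext   -- checked compile: node 0 must be
--                                           -- .input, graph must pass
--                                           -- Graph.checkWellFormed
--   let ctx := exec.eval x                  -- typed context over [inShape] ++ ss
--   pure (IRExec.dValsOfCtx ctx)            -- erase to an IR-style Array (DVal α)
--                                           -- for comparison with denoteAll*
```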