IRPyTorch #
IR → PyTorch code generation.
This module takes an op-tagged NN.IR.Graph plus an NN.MLTheory.CROWN.Graph.ParamStore payload and
emits a standalone PyTorch nn.Module implementation as a Python source string.
What this is (and isn't):
- This is an extraction/convenience layer used in round-trip demos: run/train in Python, then optionally import weights back to Lean.
- This is not a formal proof of semantic equivalence between PyTorch execution and the Lean IR
denotation. The "source of truth" semantics live on the Lean side (NN.IR.Graph.denote* and the compiled runtime).
Assumptions:
- Node ids index g.nodes consistently (we sanity-check this in getNode).
- The ParamStore contains parameters for the node kinds that require them (linear/conv2d/etc.).
- Only a supported subset of IR node kinds is lowered.
Failure modes (reported as Except String):
- missing/malformed ParamStore entries,
- shape mismatches between graph nodes and parameters,
- unsupported node kinds.
PyTorch context (comments only):
PyTorch’s ONNX exporter and torch.export can capture graphs for execution in other runtimes.
TorchLean’s emitter here prints readable Python that mirrors the IR, rather than producing an
execution-focused serialized graph artifact.
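To make the shape of the output concrete, here is a minimal sketch of the kind of standalone module this emitter produces, assuming a tiny two-node graph (one linear node followed by a ReLU). The class name, parameter values, and the exact forward lowering are illustrative, not actual emitter output:

```python
import torch
import torch.nn as nn

class IRModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Parameters materialized from the ParamStore payload;
        # values and shapes here are made up for illustration.
        self.linear_0_W = nn.Parameter(
            torch.tensor([0.5, -0.25, 0.1, 0.9]).reshape(2, 2))
        self.linear_0_b = nn.Parameter(
            torch.tensor([0.0, 1.0]).reshape(2))

    def forward(self, x):
        # One statement per supported IR node, in topological order.
        v0 = x @ self.linear_0_W.t() + self.linear_0_b
        v1 = torch.relu(v0)
        return v1
```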
Flatten a tensor and render it as a Python list literal (used for
torch.tensor([...]).reshape(...)).
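For example, a 2×3 tensor flattens to a six-element row-major list literal, which the emitted code restores with reshape (values illustrative):

```python
import torch

# Row-major flat literal plus reshape, the expression shape this
# helper feeds into (values illustrative).
t = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                 dtype=torch.float32).reshape(2, 3)
assert t[1, 2].item() == 6.0  # element (1, 2) is the last flat value
```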
Return true iff every element of the list equals the first element (vacuously true on []).
Detect whether a flattened (seqLen × embedDim) tensor is a broadcast of a single row.
If so, return that row. This is used to compress large broadcasted constants into a smaller learnable vector parameter in the emitted PyTorch code.
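A minimal Python sketch of the check, assuming row-major flattening (the function name and return convention are hypothetical; the Lean helper is authoritative):

```python
def detect_broadcast_row(flat, seq_len, embed_dim):
    # Hypothetical analogue of the Lean helper: if the flat
    # (seq_len x embed_dim) tensor repeats a single row, return
    # that row; otherwise return None.
    if len(flat) != seq_len * embed_dim:
        return None
    rows = [flat[i * embed_dim:(i + 1) * embed_dim]
            for i in range(seq_len)]
    if not rows:
        return []  # vacuously a broadcast, mirroring allEqual on []
    if all(r == rows[0] for r in rows[1:]):
        return rows[0]
    return None
```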
Options controlling IR-to-PyTorch emission.
Most knobs here configure demo emission (training scaffold, dtype), and how to materialize constant nodes (buffers vs parameters, and whether to compress broadcasted parameters).
- className : String
Class name to use in the emitted Python source.
- dtypeExpr : String
Python expression used for the tensor dtype (e.g. "torch.float32").
- includeTrainingSkeleton : Bool
If true, include a small training skeleton and smoke test in the emitted script.
- learnableConsts : Bool
If true, emit IR const nodes as nn.Parameter when appropriate (learnable).
- compressBroadcastParams : Bool
If true, compress broadcasted 2D consts into a smaller learnable vector parameter.
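Purely as an illustration, a Python dataclass analogue of these knobs; the Lean structure is the authoritative definition, and the defaults shown are assumptions:

```python
from dataclasses import dataclass

@dataclass
class EmitOptions:
    # Illustrative Python analogue of the Lean options structure;
    # field names mirror the docs, defaults are assumptions.
    class_name: str = "IRModule"
    dtype_expr: str = "torch.float32"
    include_training_skeleton: bool = False
    learnable_consts: bool = False
    compress_broadcast_params: bool = True
```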
How an IR const node is represented in the emitted PyTorch module.
- bufferFull: a non-learnable register_buffer(...) tensor
- paramFull: a learnable nn.Parameter with the full tensor shape
- paramBroadcast2D: a learnable vector nn.Parameter that is expanded at runtime to a (seqLen × embedDim) tensor (compression for broadcasted 2D constants)
- bufferFull (attr : String) : ConstBinding
- paramFull (attr : String) : ConstBinding
- paramBroadcast2D (attr : String) (seqLen embedDim : ℕ) : ConstBinding
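A hedged sketch of what each binding might correspond to in the emitted __init__ and forward (attribute names follow the const_<id> convention; shapes and values are illustrative):

```python
import torch
import torch.nn as nn

class ConstDemo(nn.Module):
    def __init__(self, seq_len=4, embed_dim=3):
        super().__init__()
        # bufferFull: non-learnable tensor registered as a buffer.
        self.register_buffer("const_0", torch.zeros(seq_len, embed_dim))
        # paramFull: learnable parameter with the full tensor shape.
        self.const_1 = nn.Parameter(torch.zeros(seq_len, embed_dim))
        # paramBroadcast2D: learnable vector, expanded at runtime.
        self.const_2 = nn.Parameter(torch.zeros(embed_dim))
        self.seq_len = seq_len

    def forward(self):
        # The broadcast binding reconstitutes the (seq_len x embed_dim)
        # tensor by expanding the stored row.
        c2 = self.const_2.unsqueeze(0).expand(self.seq_len, -1)
        return self.const_0 + self.const_1 + c2
```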
Map from IR node id to how its constant should be referenced in the PyTorch module.
The Python attribute name used to reference a bound constant (self.<attr>).
Default attribute name for a const node: self.const_<id>.
Attribute name for a linear layer weight tensor in the ParamStore (self.linear_<id>_W).
Attribute name for a linear layer bias tensor in the ParamStore (self.linear_<id>_b).
Attribute name for a conv2d kernel tensor in the ParamStore (self.conv2d_<id>_kernel).
Attribute name for a conv2d bias tensor in the ParamStore (self.conv2d_<id>_bias).
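The naming scheme is mechanical; an illustrative Python rendering of the conventions listed above (the Lean emitter builds the same strings, but these helper functions are hypothetical):

```python
def const_attr(node_id):
    return f"const_{node_id}"  # self.const_<id>

def linear_attrs(node_id):
    # self.linear_<id>_W, self.linear_<id>_b
    return f"linear_{node_id}_W", f"linear_{node_id}_b"

def conv2d_attrs(node_id):
    # self.conv2d_<id>_kernel, self.conv2d_<id>_bias
    return f"conv2d_{node_id}_kernel", f"conv2d_{node_id}_bias"
```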
Emit a torch.tensor([...]).reshape(...) expression from a flat Python list literal and a
Shape.
Retrieve a node from the graph, with a couple of sanity checks.
This yields more actionable error messages than directly indexing into g.nodes.
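A hypothetical Python analogue of this lookup, showing the kind of checks and error messages involved (the real implementation reports failures via Except String on the Lean side):

```python
def get_node(nodes, node_id):
    # Hypothetical analogue of getNode: validate before indexing
    # so failures carry actionable messages.
    if not (0 <= node_id < len(nodes)):
        return None, f"node id {node_id} out of range ({len(nodes)} nodes)"
    node = nodes[node_id]
    stored = getattr(node, "id", node_id)
    if stored != node_id:
        return None, f"node at index {node_id} carries mismatched id {stored}"
    return node, None
```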
Emit a standalone PyTorch nn.Module class for an IR graph.
This is the main entrypoint for IR exporters: it bundles:
- imports,
- a class definition with parameters/buffers materialized from the ParamStore,
- a forward method implementing the IR, and
- (optionally) a compact training scaffold for export checks.
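When includeTrainingSkeleton is set, the tail of the emitted script might resemble this sketch; it reuses the hypothetical IRModule from the earlier sketch, and the loss, data, and step count are illustrative assumptions rather than the emitter's actual scaffold:

```python
import torch
import torch.nn.functional as F

if __name__ == "__main__":
    # Illustrative smoke test / training scaffold only; IRModule is
    # the hypothetical class sketched above.
    torch.manual_seed(0)
    model = IRModule()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, target = torch.randn(8, 2), torch.randn(8, 2)
    for _ in range(20):
        opt.zero_grad()
        loss = F.mse_loss(model(x), target)
        loss.backward()
        opt.step()
    print("smoke test ok; final loss:", loss.item())
```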