TorchLean API

NN.Runtime.PyTorch.Export.IRPyTorch

IRPyTorch #

IR → PyTorch code generation.

This module takes an op-tagged NN.IR.Graph plus a NN.MLTheory.CROWN.Graph.ParamStore payload and emits a standalone PyTorch nn.Module implementation as a Python source string.

What this is (and isn't):

Assumptions:

Failure modes (reported as Except String):

PyTorch context (comments only):

PyTorch’s ONNX exporter and torch.export can capture graphs for execution in other runtimes. TorchLean’s emitter here prints readable Python that mirrors the IR, rather than producing an execution-focused serialized graph artifact.

Flatten a tensor and render it as a Python list literal (used for torch.tensor([...]).reshape(...)).

Instances For

Return true iff every element of the list equals the first element (vacuously true on []).

Instances For
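In Python terms, this helper can be sketched as follows (hypothetical name `all_equal`; the Lean original operates on a `List`):

```python
# Hypothetical Python analogue of the all-equal helper described above.
def all_equal(xs):
    # all(...) over an empty generator is True, so [] is vacuously equal;
    # xs[0] is never evaluated when xs is empty (the generator yields nothing).
    return all(x == xs[0] for x in xs)
```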
def Export.IRPyTorch.broadcastRow2D? (seqLen embedDim : ℕ) (flat : List Float) : Option (List Float)

Detect whether a flattened (seqLen × embedDim) tensor is a broadcast of a single row.

If so, return that row. This is used to compress large broadcasted constants into a smaller learnable vector parameter in the emitted PyTorch code.

Instances For
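The detection can be sketched in Python as follows (hypothetical helper; returning `None` plays the role of the Lean `Option`'s `none`, and rejecting `seq_len == 0` is an assumption of this sketch):

```python
def broadcast_row_2d(seq_len, embed_dim, flat):
    """Return the repeated row if `flat` (length seq_len * embed_dim) is
    seq_len copies of one embed_dim-length row, else None."""
    if seq_len == 0 or len(flat) != seq_len * embed_dim:
        return None
    row = flat[:embed_dim]
    # Compare every subsequent row against the first one.
    for i in range(1, seq_len):
        if flat[i * embed_dim:(i + 1) * embed_dim] != row:
            return None
    return row
```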

Options controlling IR-to-PyTorch emission.

Most knobs here configure demo emission (training scaffold, dtype) and how to materialize constant nodes (buffers vs. parameters, and whether to compress broadcasted parameters).

• className : String

  Class name to use in the emitted Python source.

• dtypeExpr : String

  Python expression used for the tensor dtype (e.g. "torch.float32").

• includeTrainingSkeleton : Bool

  If true, include a small training skeleton and smoke test in the emitted script.

• learnableConsts : Bool

  If true, emit IR const nodes as nn.Parameter when appropriate (learnable).

• compressBroadcastParams : Bool

  If true, compress broadcasted 2D consts into a smaller learnable vector parameter.

Instances For
@[implicit_reducible]

How an IR const node is represented in the emitted PyTorch module.

• bufferFull: a non-learnable register_buffer(...) tensor
• paramFull: a learnable nn.Parameter with the full tensor shape
• paramBroadcast2D: a learnable vector nn.Parameter that is expanded at runtime to a (seqLen × embedDim) tensor (compression for broadcasted 2D constants)

Instances For
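The three materializations could be emitted along these lines. This is a sketch with hypothetical helper names, not the module's actual emitter; the generated strings assume the emitted script imports `torch` and `torch.nn as nn`:

```python
def emit_const_binding(kind, node_id, tensor_expr, row_expr=None):
    """Emit an __init__ line for a const node (sketch).
    kind: 'bufferFull' | 'paramFull' | 'paramBroadcast2D'."""
    attr = f"const_{node_id}"
    if kind == "bufferFull":
        # Non-learnable: moves with .to(device) / state_dict, but no gradients.
        return f'self.register_buffer("{attr}", {tensor_expr})'
    if kind == "paramFull":
        # Learnable parameter with the full tensor shape.
        return f"self.{attr} = nn.Parameter({tensor_expr})"
    if kind == "paramBroadcast2D":
        # Store only the repeated row; forward() expands it to (seqLen, embedDim).
        return f"self.{attr} = nn.Parameter({row_expr})"
    raise ValueError(f"unknown kind: {kind}")
```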
@[reducible, inline]

Map from IR node id to how its constant should be referenced in the PyTorch module.

Instances For

The Python attribute name used to reference a bound constant (self.<attr>).

Instances For

Default attribute name for a const node: self.const_<id>.

Instances For

Attribute name for a linear layer weight tensor in the ParamStore (self.linear_<id>_W).

Instances For

Attribute name for a linear layer bias tensor in the ParamStore (self.linear_<id>_b).

Instances For

Attribute name for a conv2d kernel tensor in the ParamStore (self.conv2d_<id>_kernel).

Instances For

Attribute name for a conv2d bias tensor in the ParamStore (self.conv2d_<id>_bias).

Instances For
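The naming scheme above can be summarized in one sketch (hypothetical helper; only the `<prefix>_<id>_<suffix>` patterns are taken from the docstrings):

```python
def attr_name(kind, node_id):
    """Attribute name (without the leading 'self.') for a bound tensor."""
    return {
        "const": f"const_{node_id}",
        "linear_W": f"linear_{node_id}_W",
        "linear_b": f"linear_{node_id}_b",
        "conv2d_kernel": f"conv2d_{node_id}_kernel",
        "conv2d_bias": f"conv2d_{node_id}_bias",
    }[kind]
```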
def Export.IRPyTorch.pyTensorFromFlat (flatList : String) (shape : Spec.Shape) (dtypeVar : String := "dtype") : String

Emit a torch.tensor([...]).reshape(...) expression from a flat Python list literal and a Shape.

Instances For
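A Python sketch of this string emission (hypothetical helper; the exact formatting of the real emitter may differ):

```python
def py_tensor_from_flat(flat_list, shape, dtype_var="dtype"):
    """Render a torch.tensor([...]).reshape(...) expression as a string.
    `flat_list` is already a Python list literal, e.g. "[1.0, 2.0]"."""
    dims = ", ".join(str(d) for d in shape)
    return f"torch.tensor({flat_list}, dtype={dtype_var}).reshape({dims})"
```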

Retrieve a node from the graph, with a couple of sanity checks.

This yields more actionable error messages than directly indexing into g.nodes.

Instances For

Emit a standalone PyTorch nn.Module class for an IR graph.

This is the main entrypoint for IR exporters. It bundles:

• imports,
• a class definition with parameters/buffers materialized from the ParamStore,
• a forward method implementing the IR, and
• (optionally) a compact training scaffold for export checks.

Instances For
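The overall assembly can be sketched as a small string builder (hypothetical helper names; the real emitter also threads the ParamStore and emission options through):

```python
def emit_module(class_name, init_lines, forward_lines):
    """Assemble a minimal standalone nn.Module source string (sketch)."""
    lines = [
        "import torch",
        "import torch.nn as nn",
        "",
        f"class {class_name}(nn.Module):",
        "    def __init__(self):",
        "        super().__init__()",
    ]
    lines += [f"        {l}" for l in init_lines]       # parameters / buffers
    lines += ["", "    def forward(self, x):"]
    lines += [f"        {l}" for l in forward_lines]    # IR ops, one per line
    return "\n".join(lines)
```

Because the output is plain Python source, a quick `compile(src, "<emitted>", "exec")` gives a cheap syntax check without needing torch installed.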