TorchLean API

NN.Runtime.PyTorch.Export.TorchExport

PyTorch nn.Module Graph Capture Adapter

This module emits the Python side of the model-import bridge:

PyTorch nn.Module
  --torch.export / FX capture-->
TorchLean graph JSON (`torchlean.ir.v1`)
  --NN.Runtime.PyTorch.Import.TorchExport.parseGraph-->
NN.IR.Graph
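The middle step of this pipeline is a versioned JSON envelope. As a rough sketch of what the Python side emits, the function below serializes captured nodes under a format tag that the Lean parser can check before lowering. The concrete field names (`inputs`, `outputs`, `nodes`, and the per-node keys) are illustrative assumptions, not the actual `torchlean.ir.v1` schema:

```python
import json

def emit_graph_json(nodes, inputs, outputs):
    """Serialize a captured graph into a versioned JSON envelope.

    Field names here are hypothetical; only the idea of a version-tagged,
    reject-on-mismatch payload is taken from the pipeline above.
    """
    payload = {
        "format": "torchlean.ir.v1",  # version tag the Lean parser can verify
        "inputs": inputs,             # graph input names, in order
        "outputs": outputs,           # graph output names, in order
        "nodes": nodes,               # one dict per captured operator
    }
    return json.dumps(payload, indent=2)

# Example: a single matmul node over two graph inputs.
graph = emit_graph_json(
    nodes=[{"name": "mm", "op": "matmul", "args": ["x", "w"]}],
    inputs=["x", "w"],
    outputs=["mm"],
)
```

Keeping the bridge at a plain JSON file means the Lean side (`parseGraph`) only ever consumes data, never live Python state.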

Why generate a Python adapter instead of parsing PyTorch objects in Lean? Capture stays on the Python side, where PyTorch's own machinery (torch.export / FX) does the tracing, and the Lean boundary is fixed at a stable JSON file: Lean parses `torchlean.ir.v1` rather than live Python objects.

The generated adapter is intentionally conservative: it lowers only operators that already exist in NN.IR.OpKind, and unsupported operators fail with a clear error message. This is exactly the behavior we want for verification: better to reject a model than to silently erase its semantics.
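A minimal sketch of this reject-by-default lowering policy. The target strings and the supported set below are illustrative stand-ins, not the module's actual tables:

```python
# Map only known PyTorch targets to NN.IR.OpKind names; fail loudly on
# anything else rather than emitting a graph with altered semantics.
SUPPORTED = {
    "aten.add": "add",
    "aten.mul": "mul",
    "aten.matmul": "matmul",
    "aten.softmax": "softmax",
}

def lower_target(target: str) -> str:
    """Translate a PyTorch target string to an IR op name, or reject."""
    try:
        return SUPPORTED[target]
    except KeyError:
        raise NotImplementedError(
            f"unsupported operator {target!r}: refusing to emit a graph "
            "that would silently change semantics"
        )
```

The point of the explicit `NotImplementedError` is that rejection happens at export time, with the offending target named, instead of surfacing later as a verification mismatch.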

Options for the generated PyTorch graph-capture script.

  • functionName : String

    Name of the Python helper function emitted into the script.

  • preferTorchExport : Bool

    If true, try torch.export.export first and fall back to FX symbolic tracing when export fails.

  • includeDebugTargets : Bool

    If true, include raw PyTorch target strings in each node for debugging.
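The `preferTorchExport` flag describes a try-then-fall-back capture strategy. A minimal sketch of that control flow, with the capture backends passed in as callables so the logic is visible and testable without importing torch (in the real generated script these would be `torch.export.export` and `torch.fx.symbolic_trace`):

```python
def capture(model, example_inputs, prefer_torch_export,
            export_fn, fx_trace_fn):
    """Capture `model`, preferring the export_fn backend when asked.

    export_fn / fx_trace_fn are injected here for illustration; the
    generated adapter would call PyTorch's real capture entry points.
    """
    if prefer_torch_export:
        try:
            return export_fn(model, example_inputs), "torch.export"
        except Exception:
            pass  # torch.export failed; fall through to FX tracing
    return fx_trace_fn(model), "fx"
```

Returning which backend succeeded alongside the captured graph makes the provenance of the capture easy to record in the emitted JSON.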


    Emit a Python script that captures a PyTorch module and writes TorchLean graph JSON.

    The generated script expects a Python file that provides the model as a class or a zero-argument constructor function:

        python export_torchlean_graph.py my_model.py MyModel out_graph.json --example-shape 1,4

    The first implementation target is the shared IR subset: elementwise ops, matmul, reductions, reshape/permute/flatten/concat, softmax, layernorm, and simple pooling/conv metadata when PyTorch exposes enough static arguments. Linear layers can appear either as aten.linear or as lower-level matmul/add depending on PyTorch's graph capture.
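Because Linear can surface either way, an importer can fold the two-node matmul/add form back into a single linear op. A hedged sketch of such a normalization pass, using illustrative node dictionaries rather than the real IR types:

```python
def fuse_matmul_add(nodes):
    """Fold matmul-followed-by-add-of-its-result into one linear node.

    Node dicts ({"name", "op", "args"}) are illustrative, not the actual
    graph representation used by the importer.
    """
    out, i = [], 0
    while i < len(nodes):
        n = nodes[i]
        nxt = nodes[i + 1] if i + 1 < len(nodes) else None
        if (n["op"] == "matmul" and nxt is not None
                and nxt["op"] == "add" and n["name"] in nxt["args"]):
            # matmul result feeds the add: treat (x @ w) + b as linear(x, w, b)
            bias = [a for a in nxt["args"] if a != n["name"]]
            out.append({"name": nxt["name"], "op": "linear",
                        "args": n["args"] + bias})
            i += 2
        else:
            out.append(n)
            i += 1
    return out
```

A pass like this keeps the Lean-side IR canonical regardless of which decomposition PyTorch's capture happened to produce.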
