# PyTorch nn.Module Graph Capture Adapter
This module emits the Python side of the model-import bridge:

```
PyTorch nn.Module
  --torch.export / FX capture-->
TorchLean graph JSON (torchlean.ir.v1)
  --NN.Runtime.PyTorch.Import.TorchExport.parseGraph-->
NN.IR.Graph
```
Why generate a Python adapter instead of parsing PyTorch objects in Lean?

- PyTorch checkpoints are Python/Pickle/zip artifacts; PyTorch is the external loader that knows how to read them.
- `torch.export` and FX already know how to inspect PyTorch programs (see the sketch after this list).
- Lean should receive a small, explicit artifact and then validate it with TorchLean's own IR checkers.
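As a minimal sketch of that second point (the `Tiny` module and shapes are illustrative, not part of TorchLean), `torch.export` exposes a program as an FX graph that a script can walk node by node:

```python
import torch

# Illustrative module, not part of TorchLean.
class Tiny(torch.nn.Module):
    def forward(self, x):
        return torch.softmax(x @ x.transpose(0, 1), dim=-1)

# torch.export captures the program as an ExportedProgram wrapping an FX graph.
ep = torch.export.export(Tiny(), (torch.randn(2, 4),))
for node in ep.graph.nodes:
    # FX node kinds include placeholder, call_function, and output.
    print(node.op, node.target)
```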
The generated adapter is intentionally conservative. It lowers only ops that already exist in
NN.IR.OpKind; unsupported operators fail with a clear message. This is exactly the behavior we
want for verification: better to reject a model than silently erase semantics.
References:

- torch.export: https://docs.pytorch.org/docs/stable/user_guide/torch_compiler/export.html
- torch.fx: https://docs.pytorch.org/docs/stable/fx.html
Options for the generated PyTorch graph-capture script.

- `functionName : String`
  Name of the Python helper function emitted into the script.
- `preferTorchExport : Bool`
  If true, use `torch.export.export` first and fall back to FX symbolic tracing.
- `includeDebugTargets : Bool`
  If true, include raw PyTorch target strings in each node for debugging.
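A minimal sketch of the fallback behavior that `preferTorchExport` describes; `capture_graph_module` is a hypothetical name (the generated script's actual helper name comes from `functionName`):

```python
import torch

def capture_graph_module(model, example_inputs, prefer_torch_export=True):
    """Hypothetical helper: capture `model` as an FX GraphModule."""
    if prefer_torch_export:
        try:
            # torch.export yields a faithful ATen-level graph when it succeeds.
            return torch.export.export(model, example_inputs).graph_module
        except Exception:
            pass  # fall back to FX symbolic tracing below
    # FX symbolic tracing captures a torch-level graph instead.
    return torch.fx.symbolic_trace(model)
```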
Emit a Python script that captures a PyTorch module and writes TorchLean graph JSON.
The generated script expects a Python file containing a zero-argument model constructor or class:
```
python export_torchlean_graph.py my_model.py MyModel out_graph.json --example-shape 1,4
```
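For reference, the `my_model.py` above could be as simple as the following illustrative file; any `nn.Module` subclass (or zero-argument function returning one) named on the command line works the same way:

```python
# my_model.py -- illustrative input file, not part of TorchLean.
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.lin(x))
```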
The first implementation target is the shared IR subset: elementwise ops, matmul, reductions,
reshape/permute/flatten/concat, softmax, layernorm, and simple pooling/conv metadata when PyTorch
exposes enough static arguments. Linear layers can appear either as `aten.linear` or as lower-level
matmul/add, depending on PyTorch's graph capture.
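That variation in how Linear lowers can be observed directly. This sketch assumes nothing beyond the public `torch.export` API; the exact ATen targets it prints vary across PyTorch versions and export modes:

```python
import torch

# A bare Linear layer is itself an nn.Module, so it can be exported directly.
ep = torch.export.export(torch.nn.Linear(4, 3), (torch.randn(1, 4),))
for node in ep.graph.nodes:
    if node.op == "call_function":
        # Typically aten.linear.default, or a decomposition such as
        # aten.addmm / aten.mm + aten.add, depending on the PyTorch version.
        print(node.target)
```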