TorchLean API

NN.Widgets.Interop.PyTorchTranslator

PyTorch Translator Widget

This file implements the editor-side "write PyTorch, see TorchLean" translator workflow.

The goal is deliberately modest and honest:

This is not a verified Python parser and it is not a full PyTorch semantic import. For full graph capture, the right path is still the existing torch.export JSON bridge in NN.Runtime.PyTorch.Import.TorchExport. This widget is the fast, friendly "front door" that shows whether a model is close to the supported TorchLean subset before users commit to the full capture/import path.

The VS Code extension version can reuse this design:

  1. Run a stronger Python-side analyzer (ast, torch.fx, or torch.export).
  2. Send the normalized layer/graph report to Lean.
  3. Display the same kind of TorchLean skeleton plus trust-boundary diagnostics.

For the in-repo workflow, prefer #pytorch_translate_file "path/to/model.py". That command reads a real Python source file and renders the report in the Lean infoview. The lower-level #pytorch_translate_view someString command remains useful for tests and for future editor integrations that already have selected text in memory.

A layer shape recognized by the file-based PyTorch supported-subset analyzer.

The constructors intentionally describe semantic layer families, not exact Python AST nodes. For example, linear 784 128 can come from nn.Linear(784, 128) inside nn.Sequential, a field assignment such as self.fc = nn.Linear(784, 128), or a compact documentation snippet. This is what makes the widget useful for quick editor feedback, while still keeping it honest: anything outside this small layer vocabulary is represented as unsupported and shown to the user as a boundary.

  • linear (inDim outDim : Nat) : Layer

    Fully connected layer with numeric in_features and out_features.

  • conv2d (inC outC kernel stride padding : Nat) : Layer

    2D convolution with the fields this text-level assistant can reliably infer from common snippets.

    We do not generate executable TorchLean code directly from this constructor because a correct nn.conv term also needs the input contract: batch size, input height, and input width.

  • maxPool2d (kernel stride : Nat) : Layer

    2D max-pooling metadata; executable lowering likewise needs the surrounding image shape.

  • adaptiveAvgPool2d (out : Nat) : Layer

    Adaptive average pooling is detected as a named boundary item.

  • flatten : Layer

    Flatten layer; translated to nn.flatten in vector-style sequential skeletons.

  • relu : Layer

    Elementwise ReLU.

  • gelu : Layer

    Elementwise GELU.

  • sigmoid : Layer

    Elementwise sigmoid.

  • tanh : Layer

    Elementwise tanh.

  • dropout : Layer

    Dropout is recognized, but emitted as a boundary comment because mode/seed must be explicit.

  • unsupported (raw reason : String) : Layer

    A line that looks relevant to PyTorch but is outside the supported translator subset.
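
The constructor list above amounts to a small inductive type; a sketch reconstructed from the documented constructors (not the verbatim source) looks like:

  inductive Layer where
    | linear (inDim outDim : Nat)
    | conv2d (inC outC kernel stride padding : Nat)
    | maxPool2d (kernel stride : Nat)
    | adaptiveAvgPool2d (out : Nat)
    | flatten
    | relu
    | gelu
    | sigmoid
    | tanh
    | dropout
    | unsupported (raw reason : String)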

Instances For

Summary produced by the file-based supported-subset analyzer.

layers preserves the order in which relevant lines appear, including unsupported boundary items. translated counts only recognized layer-family rows; warnings are global advice such as "this CNN snippet needs an input image shape"; unsupported is a compact list for the red diagnostic section in the widget.

  • layers : Array Layer

    Ordered layer/boundary rows extracted from the snippet.

  • translated : Nat

    Count of rows recognized as part of the supported layer vocabulary.

  • warnings : Array String

    Human-facing warnings about missing shape contracts or mode choices.

  • unsupported : Array String

    Unsupported PyTorch-looking lines, each paired with the reason it was not translated.
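
Collected together, the documented fields give the structure this shape (a reconstruction from the field list above, not the verbatim source):

  structure Report where
    layers : Array Layer
    translated : Nat
    warnings : Array String
    unsupported : Array String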


Analyze PyTorch source text using the small supported layer subset.

The analyzer is order-preserving and fail-soft: one unsupported line does not prevent later lines from being recognized. That matters for editor UX, because users should still get a useful partial skeleton even when one layer needs manual handling.
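
The fail-soft, order-preserving behavior can be sketched as a fold over the input lines; here classifyLine is a hypothetical stand-in for the real per-line pattern matching:

  -- Hypothetical sketch: each line is classified independently, so one
  -- unsupported line never blocks recognition of later lines.
  def analyzeLines (classifyLine : String → Option Layer) (src : String) : Array Layer :=
    (src.splitOn "\n").foldl (init := #[]) fun acc line =>
      match classifyLine line.trim with
      | some l => acc.push l
      | none   => acc  -- not PyTorch-relevant: skip, do not fail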

def NN.Widgets.PyTorchTranslator.torchLeanSkeleton (r : Report) (name : String := "translatedModel") :

Generate a TorchLean skeleton from the recognized layer sequence.

The emitted code is meant to be a starting point, not a final theorem. It imports the public TorchLean umbrella, opens the user-facing API namespaces, emits direct sequential terms for the safe subset, and then appends boundary notes as Lean comments. The next intended step is to add a concrete shape contract and wrap the model in a train.Task / SeqTask.
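
For a snippet recognized as linear 784 128, relu, linear 128 10, the generated text might look roughly like the following; the umbrella import and the nn.* combinator names here are assumptions about the real output, not verbatim:

  import NN  -- public TorchLean umbrella (module name assumed)

  -- nn.Linear(784, 128) → nn.linear 784 128
  -- nn.ReLU()           → nn.relu
  -- nn.Linear(128, 10)  → nn.linear 128 10
  def translatedModel := sorry -- sequential term emitted here for the safe subset
  -- TODO: add a concrete shape contract and wrap in a train.Task / SeqTask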


Render the translator report as an infoview panel.

The panel has four sections:

  1. badges that summarize the number of recognized rows;
  2. a row-by-row layer table;
  3. warnings / unsupported diagnostics;
  4. a generated Lean skeleton plus a trust-boundary explanation.

That layout mirrors the intended editor-facing view: useful generated code beside an equally visible account of what has not been checked.


Low-level command frontend for already-selected source text.

This command accepts a Lean String term. It is mostly a hook for tests and future editor integrations that already have selected Python text in memory. For normal in-repo use, prefer #pytorch_translate_file, which reads a real .py file.

Usage:

  def snippet : String :=
    "nn.Linear(784, 128)\n" ++
    "nn.ReLU()\n"
  #pytorch_translate_view snippet

The argument is a Lean term of type String, so examples can define reusable snippets rather than putting large multi-line strings directly in the command.


Read a Python source file and render the translator report.

This is the practical in-repo workflow:

  #pytorch_translate_file "NN/Examples/Quickstart/pytorch_translator_mlp.py"

The command runs during elaboration, reads the file relative to the current Lake working directory, and displays the same report as #pytorch_translate_view. If the file is missing, the Lean build does not crash with an opaque IO exception; the widget renders an explicit file-error panel instead.
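
The fail-soft read can be sketched with a caught IO action (a minimal sketch under assumed names, not the implementation):

  def readSourceOrError (path : System.FilePath) : IO (Except String String) := do
    try
      return .ok (← IO.FS.readFile path)
    catch e =>
      -- A missing file becomes data for an explicit error panel,
      -- not an opaque build-breaking exception.
      return .error s!"could not read {path}: {e}"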


Command frontend that reads a .py file and renders the translator widget.

This is still not a proof about Python. It is the file-based translator widget over real source text. For checked model import, use the existing torch.export JSON bridge after the report tells you the model is close to the supported subset.
