TorchLean API

NN.Runtime.Autograd.TorchLean.Module

Module #

TorchLean module wrappers with PyTorch-style ergonomics.

TorchLean already provides the core training ingredients; this file adds a thin “nn.Module-style” wrapper on top of them, so users get a PyTorch-style define/instantiate/forward/backward/train workflow.

Important: dtype selection is handled in NN.API.DType (because it picks the Lean type α). The module definitions here are polymorphic in α, so the same module definition can be instantiated at different scalar backends (Float, IEEE32Exec, …).

Small helpers #

Casts a Float tensor to a backend scalar type α by mapping a scalar cast function over its elements.

This is mainly used to turn tensorND!-authored Float initializers into tensors over Float, IEEE32Exec, etc.
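A hedged sketch of the intended flow (the initializer names `w0`/`ps0` and the scalar cast `toF32 : Float → IEEE32Exec` are illustrative, not library names):

```lean
-- Hypothetical sketch: author initializers in Float with tensorND!, then
-- cast them to the backend scalar type before building a module.
def w32  := castTensor toF32 w0    -- cast a single tensor
def ps32 := castTList  toF32 ps0   -- cast a whole TList parameter bundle
```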

Instances For

def Runtime.Autograd.TorchLean.Module.castTList {α : Type} (cast : Float → α) {ss : List Spec.Shape} :
    TList Float ss → TList α ss

List-shaped version of castTensor for TorchLean's TList parameter bundles.

Instances For

Scalar-loss module (training) #

A scalar-loss module definition:

• initParams is stored as Float constants (easy to write with tensorND!),
• loss is polymorphic in the scalar backend (the same code works for Float/IEEE32Exec/…).

You can instantiate this definition as a ScalarModule under a chosen backend and dtype.

Instances For

Runtime module instance (the thing you "run").

This wraps Torch.ScalarTrainer but exposes a more Module-like set of methods.

Instances For

def Runtime.Autograd.TorchLean.Module.ScalarModule.create {α : Type} [Context α] [DecidableEq Spec.Shape] [Torch.Internal.CudaBridge.TensorConv α] {paramShapes inputShapes : List Spec.Shape} (opts : Options := { }) (initRequiresGrad : List Bool := List.replicate paramShapes.length true) (loss : {m : Type → Type} → [Monad m] → [inst : Ops m α] → CurriedRef (fun (s : Spec.Shape) => Torch.Ops.Ref m α s) (paramShapes ++ inputShapes) (m (Torch.Ops.Ref m α Spec.Shape.scalar))) (initParams : TList α paramShapes) :
    IO (ScalarModule α paramShapes inputShapes)

Create a runtime scalar-loss module from an explicit loss program and initial parameter values.

This is the low-level constructor; most users start from a ScalarModuleDef and call ScalarModuleDef.instantiate.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.forward {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (xs : TList α inputShapes) :

Run the forward pass and return the scalar loss value.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.backward {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (xs : TList α inputShapes) :
    IO (TList α paramShapes)

Run one forward/backward pass and return gradients for all parameters.

Instances For
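A minimal usage sketch of forward and backward together (the shape lists `ps`/`is` and the inputs are assumed to be in scope, and the loss value is assumed printable):

```lean
-- Sketch: one explicit forward/backward round trip on a Float-backed module.
def oneRound (m : ScalarModule Float ps is) (xs : TList Float is) : IO Unit := do
  let loss   ← m.forward xs   -- scalar loss at the current parameters
  let _grads ← m.backward xs  -- gradients, one tensor per parameter shape
  IO.println s!"loss = {loss}"
```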
def Runtime.Autograd.TorchLean.Module.ScalarModule.step {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (lr : α) (xs : TList α inputShapes) :

Convenience "one-step SGD": compute gradients and apply an SGD update with learning rate lr.

Instances For
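`step` composes naturally into a hand-rolled training loop; a hedged sketch assuming a module `m`, learning rate `lr`, and inputs `xs` already in scope:

```lean
-- Sketch: n iterations of plain SGD via the convenience `step`.
def sgdLoop (n : Nat) : IO Unit := do
  for _ in [0:n] do
    let _ ← m.step lr xs  -- gradients + SGD update with learning rate lr
```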
def Runtime.Autograd.TorchLean.Module.ScalarModule.initOptim {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (opt : Optim.Optimizer α paramShapes) :
    IO opt.State

Initialize an optimizer state for this module's parameters.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.stepWith {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (opt : Optim.Optimizer α paramShapes) (st : opt.State) (xs : TList α inputShapes) :
    IO opt.State

Run one optimizer step using an explicit optimizer + state.

This mirrors a PyTorch training step:

1. compute gradients (backwardT)
2. update parameters via opt.step and return the new optimizer state

Instances For
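Threading the optimizer state through a sample list gives a full training loop; a hedged sketch (α, the shape lists, and the required instances are assumed in scope):

```lean
-- Sketch: initOptim once, then stepWith per sample, returning the final state.
def trainLoop (m : ScalarModule α paramShapes inputShapes)
    (opt : Optim.Optimizer α paramShapes)
    (samples : List (TList α inputShapes)) : IO opt.State := do
  let mut st ← m.initOptim opt   -- fresh state for this module's parameters
  for xs in samples do
    st ← m.stepWith opt st xs    -- 1. gradients, 2. opt.step → new state
  return st
```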
def Runtime.Autograd.TorchLean.Module.ScalarModule.params {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) :
    IO (TList α paramShapes)

Fetch the current parameter values as a shape-indexed list.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.setParams {α : Type} [Context α] [DecidableEq Spec.Shape] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (ps : TList α paramShapes) :

Overwrite all parameter values.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.trainSGD {α : Type} [Context α] [DecidableEq Spec.Shape] [ToString α] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (lr : α) (steps : ℕ) (samples : List (TList α inputShapes)) (logEvery : ℕ := 1) :

Train with vanilla SGD for a fixed number of steps on a fixed list of samples.

Instances For
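A usage sketch, assuming a Float-backed module `m` and a prepared `samples` list (both names illustrative):

```lean
-- Sketch: 1000 vanilla-SGD steps over a fixed sample list, logging every 100.
def runTraining : IO Unit := do
  let _ ← m.trainSGD 0.01 1000 samples (logEvery := 100)
```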
def Runtime.Autograd.TorchLean.Module.ScalarModule.trainWith {α : Type} [Context α] [DecidableEq Spec.Shape] [ToString α] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (opt : Optim.Optimizer α paramShapes) (st0 : opt.State) (steps : ℕ) (samples : List (TList α inputShapes)) (logEvery : ℕ := 1) :
    IO opt.State

Like trainSGD, but with an explicit optimizer + mutable optimizer state.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModule.meanLoss {α : Type} [Context α] [DecidableEq Spec.Shape] [ToString α] [Add α] [Div α] [Zero α] [Coe ℕ α] {paramShapes inputShapes : List Spec.Shape} (m : ScalarModule α paramShapes inputShapes) (samples : List (TList α inputShapes)) :
    IO α

Compute the mean loss over a list of samples (no parameter updates).

Instances For
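Since meanLoss performs no updates, it pairs naturally with a training call as a before/after evaluation; a sketch assuming `m`, `lr`, and `samples` are in scope:

```lean
-- Sketch: measure mean loss before and after a short training run.
def evalRound : IO Unit := do
  let before ← m.meanLoss samples
  let _ ← m.trainSGD lr 100 samples
  let after ← m.meanLoss samples
  IO.println s!"mean loss: {before} → {after}"
```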
def Runtime.Autograd.TorchLean.Module.ScalarModuleDef.instantiateWith {α : Type} [Context α] [DecidableEq Spec.Shape] [Torch.Internal.CudaBridge.TensorConv α] {paramShapes inputShapes : List Spec.Shape} (d : ScalarModuleDef paramShapes inputShapes) (cast : Float → α) (opts : Options) :
    IO (ScalarModule α paramShapes inputShapes)

Instantiate a ScalarModuleDef by casting Float initializers to α and choosing Torch options.

This is the most general constructor; instantiate is a convenience wrapper that just selects a backend.

Instances For
def Runtime.Autograd.TorchLean.Module.ScalarModuleDef.instantiate {α : Type} [Context α] [DecidableEq Spec.Shape] [Torch.Internal.CudaBridge.TensorConv α] {paramShapes inputShapes : List Spec.Shape} (d : ScalarModuleDef paramShapes inputShapes) (cast : Float → α) (backend : Backend := Torch.Backend.eager) :
    IO (ScalarModule α paramShapes inputShapes)

Convenience instantiator that chooses only the backend (.eager or .compiled).

Instances For
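A closing sketch of the dtype-polymorphism story: one definition instantiated at two scalar backends (`myDef` and the cast `toF32` are hypothetical names):

```lean
-- Sketch: same ScalarModuleDef, two dtypes and two backends.
def m64 : IO (ScalarModule Float _ _) :=
  myDef.instantiate id                                  -- Float stays Float
def m32 : IO (ScalarModule IEEE32Exec _ _) :=
  myDef.instantiate toF32 (backend := Torch.Backend.compiled)
```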