Module #
TorchLean module wrappers with PyTorch-style ergonomics.
TorchLean already provides the core ingredients:
- a small `Ops` interface, so you write a model once and run it on different backends;
- a `ScalarTrainer`, which builds an eager or compiled training loop for scalar losses.
This file adds a thin "nn.Module-style" wrapper so users can:
- package initial parameters + a loss definition as a single object,
- instantiate it under a chosen backend (`.eager` / `.compiled`),
- call `forward` / `backward` / `step` / `params` with a small, consistent API.
Important: dtype selection is handled in `NN.API.DType` (because it picks the Lean type `α`). The module definitions here are polymorphic in `α`, so the same module can be instantiated at different scalar types (e.g. `Float` or `IEEE32Exec`) without rewriting the loss.
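To make the intended ergonomics concrete, here is a hedged usage sketch. The definition `myDef`, the value `inputs`, and the exact signatures and argument order are illustrative assumptions; only the method names follow the descriptions in this file.

```lean
-- Hedged sketch, not the library's verbatim API: `myDef` and `inputs` are
-- placeholders, and signatures are guesses; method names follow the docs below.
def demo : IO Unit := do
  let m ← myDef.instantiate .eager       -- pick a backend at instantiation time
  let l ← m.forward inputs               -- scalar loss value
  let _gs ← m.backward inputs            -- gradients for all parameters
  let _ ← m.step inputs (lr := 0.1)      -- convenience one-step SGD
  let _ps ← m.params                     -- current parameter values
  IO.println s!"loss = {l}"
```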
Small helpers #
Cast a `Float` tensor to a backend scalar type `α` by mapping a scalar cast function. This is mainly used to turn `tensorND!`-authored `Float` initializers into `Float` / `IEEE32Exec` / etc.
List-shaped version of `castTensor` for TorchLean's `TList` parameter bundles.
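Conceptually, both helpers just map a scalar cast over every element. A hedged sketch, where `Tensor`, `Tensor.map`, and the `TList` constructors are stand-ins for whatever concrete representation TorchLean uses:

```lean
-- Hedged sketch: only the shape of the idea matters, not the exact names.
def castTensorSketch (cast : Float → α) (t : Tensor Float s) : Tensor α s :=
  t.map cast  -- elementwise: every Float entry becomes an α

def castTListSketch (cast : Float → α) : TList Float shapes → TList α shapes
  | .nil       => .nil
  | .cons t ts => .cons (castTensorSketch cast t) (castTListSketch cast ts)
```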
Scalar-loss module (training) #
A scalar-loss module definition:
- `initParams` is stored as `Float` constants (easy to write with `tensorND!`),
- `loss` is polymorphic in the scalar backend (the same code works for `Float` / `IEEE32Exec` / …).
You can instantiate this definition as a `ScalarModule` under a chosen backend and dtype.
Fields:
- `initParams` — initial parameter values, stored as `Float` tensors and cast at instantiation time.
- `requiresGrad` — per-parameter `requiresGrad` flags aligned with `paramShapes`.
- `loss {α : Type} [Context α] [DecidableEq Spec.Shape] : Program α (paramShapes ++ inputShapes) Spec.Shape.scalar` — scalar loss program over `(params ++ inputs)`, polymorphic in the scalar backend.
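A hypothetical authoring example, to show where each field goes. The shape literals and the body of `loss` are schematic (`sorry` placeholders), since the concrete shape and op syntax depends on the surrounding API:

```lean
-- Hypothetical: a one-weight least-squares "model" with inputs (x, y).
-- Shape literals and op names are illustrative guesses, not the real syntax.
def lsqDef : ScalarModuleDef [scalarShape] [scalarShape, scalarShape] where
  initParams   := sorry  -- e.g. a single weight authored with tensorND!
  requiresGrad := [true]
  loss         := sorry  -- (w * x - y)^2 as a Program over (params ++ inputs)
```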
Runtime module instance (the thing you "run"). This wraps `Torch.ScalarTrainer`, but exposes a more Module-like set of methods.

Fields:
- `trainer : Torch.ScalarTrainer α paramShapes inputShapes`
Create a runtime scalar-loss module from an explicit loss program and initial parameter values. This is the low-level constructor; most users start from a `ScalarModuleDef` and call `ScalarModuleDef.instantiate`.
Run the forward pass and return the scalar loss value.
Run one forward/backward pass and return gradients for all parameters.
Convenience "one-step SGD": compute gradients and apply an SGD update with learning rate `lr`.
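For reference, the update applied by this step is plain stochastic gradient descent: for each trainable parameter $\theta_i$ and sample input $x$,

$$\theta_i \;\leftarrow\; \theta_i - \mathrm{lr} \cdot \frac{\partial L(\theta, x)}{\partial \theta_i}$$

Parameters whose `requiresGrad` flag is `false` are presumably left unchanged.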
Initialize an optimizer state for this module's parameters.
Run one optimizer step using an explicit optimizer + state. This mirrors a PyTorch training step:
- compute gradients (`backwardT`),
- update parameters via `opt.step` and return the new optimizer state.
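The intended loop shape is to thread the returned optimizer state through successive steps. A hedged sketch, where `Opt`, `OptState`, `Inputs`, and the method name `stepWith` are assumptions; only the state-threading pattern is the point:

```lean
-- Hedged sketch of one epoch with an explicit optimizer: each step does
-- backward + opt.step and hands back the next optimizer state.
def epoch (m : ScalarModule α ps is) (opt : Opt) (st0 : OptState)
    (samples : List (Inputs is)) : IO OptState := do
  let mut st := st0
  for x in samples do
    st ← m.stepWith opt st x
  return st
```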
Fetch the current parameter values as a shape-indexed list.
Overwrite all parameter values.
Train with vanilla SGD for a fixed number of steps on a fixed list of samples.
Like `trainSGD`, but with an explicit optimizer + mutable optimizer state.
Compute the mean loss over a list of samples (no parameter updates).
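A typical train/evaluate round trip with these helpers might look like the following. `myDef`, `samples`, the method name `meanLoss`, and the argument order are assumptions based on the descriptions above:

```lean
-- Hedged sketch: evaluate, train with vanilla SGD, evaluate again.
def roundTrip : IO Unit := do
  let m ← myDef.instantiate .compiled
  let before ← m.meanLoss samples                -- no parameter updates
  m.trainSGD (steps := 100) (lr := 0.05) samples -- fixed steps, fixed samples
  let after ← m.meanLoss samples
  IO.println s!"mean loss: {before} -> {after}"
```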
Instantiate a `ScalarModuleDef` by casting `Float` initializers to `α` and choosing Torch options. This is the most general constructor; `instantiate` is a convenience wrapper that just selects a backend.
Convenience instantiator that chooses only the backend (`.eager` or `.compiled`).