NN.API.Common #
Small, practical helpers used across TorchLean workflows and entrypoints: typed tensor constructors,
casting between scalar backends, and Except/IO glue.
Common Helpers For Workflows And Small Programs #
We keep this module small and practical: it contains helper functions that show up repeatedly in executable workflows and tutorials.
What belongs here:
- casting `Spec.Tensor Float` into an arbitrary scalar backend `α` (given `Float → α`)
- building typed tensors from raw lists (`tensorF`, `vecF`, `matF`)
- bridging `Except String` into `IO` (or `Throw`)
- running small examples under a chosen scalar backend via `--dtype` (see `NN.API.DType`)
What does not belong here:
- core semantics/proofs (those live under `NN`/`NN.Proofs`)
- serious CLI parsing (these helpers are for small runnable binaries)
PyTorch Mapping Notes #
If you're coming from PyTorch, this module plays the role of the small shared utility layer often written around:
- `torch.tensor([...], dtype=..., device=...)`
- choosing a dtype/backend based on CLI flags
- `try`/`except` wrappers for dataset loading and runnable workflow code

The main difference is that TorchLean tracks shapes in types, so the tensor constructors here return `Except String ...` rather than silently reshaping.
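As a rough sketch of that difference (only the names `tensorF` and `shapeOfDims` come from this page; the exact signature, including where the `Float → α` cast argument goes, is assumed):

```lean
-- Hypothetical sketch; the real `tensorF` signature may differ.
-- PyTorch:   x = torch.tensor([1., 2., 3., 4., 5., 6.]).reshape(2, 3)
-- TorchLean: the shape [2, 3] lives in the type, and a wrong-length list
-- surfaces as `Except.error` instead of a silent reshape.
def x : Except String (Tensor Float (shapeOfDims [2, 3])) :=
  tensorF [2, 3] id [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  -- `id`: Float backend, no cast
```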
Cast a tensor elementwise while preserving its type-level shape.
Fail with a tagged userError if a boolean condition is false.
This is a small convenience for examples that want short, readable checks:
`Common.check exeName "loss finite" (loss == loss)`.
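In a runnable binary this typically looks like the following sketch (the `Common.check` argument order is assumed from the example above):

```lean
-- Sketch: abort an example with a tagged `userError` when a check fails.
def main : IO Unit := do
  let loss : Float := 0.25
  Common.check "demo" "loss finite" (loss == loss)  -- NaN would fail `x == x`
  IO.println s!"loss = {loss}"
```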
Write a standard JSON training artifact for routines that record an initial and final loss.
This is a convenience wrapper around Runtime.Training.TrainLog.beforeAfterLoss and the stable
TrainLog JSON format. It is intentionally independent of any particular model, dataset, or
runtime backend.
Write a before/after loss log to an explicit logging destination.
`LogDestination.disabled` is a no-op, mirroring `wandb disabled` for quick smoke tests.
Write a one-series scalar curve as a standard TrainLog JSON artifact.
Write a one-series scalar curve to an explicit logging destination.
Common CLI result for training commands that accept --steps/--epochs and --log.
- steps : ℕ
Number of optimizer updates.
Logging destination. Use `--log false`, `--log off`, or `--log none` to disable.
- logPath : System.FilePath
Path where the JSON `TrainLog` should be written.
Parse common training flags: positive --steps/--epochs plus optional --log.
--log <path> writes the standard local JSON artifact. --log false, --log off, or
--log none disables artifact writing while preserving the parsed step count.
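A usage sketch of the intended pattern; the parser's declaration name is not shown on this page, so `parseLoggedTrainFlags` below is hypothetical, as is the assumed `Except String` return type:

```lean
-- Hypothetical names and signature; only the flag spellings and the
-- `LoggedTrainFlags` fields come from this page.
def main (args : List String) : IO Unit := do
  let flags ← IO.ofExcept (parseLoggedTrainFlags args)
  -- `--log false/off/none` disables the JSON artifact while preserving
  -- the parsed step count, per the description above.
  IO.println s!"running {flags.steps} optimizer steps"
```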
Training flags shared by the runnable model examples.
Most model commands should not each define their own parser for the same knobs. They can parse
model-specific data flags first, then call parseModelTrainFlags for the common optimizer loop:
- `--steps`/`--epochs`: how many optimizer passes the example should run;
- `--log`: where to write a `TrainLog` JSON artifact;
- `--lr`: Adam learning rate.
Special examples can still extend this record with extra fields, but the default path stays one
shared parser rather than one local TrainOptions clone per model.
- train : LoggedTrainFlags
Shared step/epoch count and logging destination.
- lr : Float
Learning rate for the default Adam optimizer used by examples.
Parse the standard model-training flags: --steps/--epochs, --log, and --lr.
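Usage sketch (`parseModelTrainFlags` and the `train`/`lr` field names come from this page; the `Except String` return type is assumed):

```lean
-- Sketch; the parser's exact signature is assumed.
def main (args : List String) : IO Unit := do
  let flags ← IO.ofExcept (parseModelTrainFlags args)
  IO.println s!"steps = {flags.train.steps}, lr = {flags.lr}"
```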
Default Adam optimizer constructor used by supervised and vision examples.
The reusable part is the optimizer convention, not the model. Individual examples still own their architecture and loss, while this helper keeps the Adam hyperparameter spelling identical across MLP, CNN, ResNet, ViT, and similar small demos.
Run an executable with the standard TorchLean runtime parser, using the polymorphic scalar path by
default and switching to the Float path when requested.
This is the common shape for public examples that support all executable scalar backends, but need
the Float path for CUDA bridges, decoded probes, or JSON artifacts whose metrics are stored as
Float.
Build an N-D tensor from a raw list of floats and cast it into the chosen scalar backend.
Fails if xs.length ≠ numel(dims).
Generate an N-D tensor by calling f for each element index, then cast into the chosen backend.
The function f is indexed by the flat element index 0..numel-1.
Generate an N-D tensor by calling f for each flat element index, with no failure case.
This is the “total” sibling of tensorFGen: since we generate exactly numel(dims) values,
the reshape cannot fail, so we avoid an Except.
When you want to build a deterministic constant tensor for an example, this is usually the
right tool.
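For instance, a constant tensor (the total generator's declaration name is not shown on this page, so `tensorGenTotal` below is hypothetical):

```lean
-- Hypothetical name and signature: exactly `numel [2, 3]` values are
-- generated, so the result needs no `Except` wrapper.
def ones : Tensor Float (shapeOfDims [2, 3]) :=
  tensorGenTotal [2, 3] (fun _flatIdx => 1.0)
```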
Generate a tensor of a known shape s by calling f for each flat element index, then cast into
the chosen backend.
This is a convenience wrapper used in examples to avoid the common boilerplate:
```lean
let xDyn : Tensor α (shapeOfDims s.toList) := ...
let x : Tensor α s := by simpa using xDyn
```
2D matrix tensor constructor specialized to shape Mat rows cols.
Fails if xs.length ≠ rows * cols.
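Sketch (assuming `matF` takes the two dimensions followed by the flat row-major list; the cast argument for non-`Float` backends is elided here):

```lean
-- Sketch; the exact `matF` signature is assumed. A list whose length is
-- not 4 * 2 would yield an `Except.error` rather than a reshape.
def xorInputs : Except String (Tensor Float (Mat 4 2)) :=
  matF 4 2 [0.0, 0.0,  0.0, 1.0,  1.0, 0.0,  1.0, 1.0]
```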
Generator variant of matF.
Run a workflow once under a dtype selected from args (via --dtype / --float32-mode).
This logs the chosen dtype and then calls k with a cast function Float → α for the selected
scalar backend.
In particular:
- `--dtype=float` selects Lean's builtin `Float` (trusted semantics, executable),
- `--dtype=float32` selects TorchLean's verified IEEE32 executable semantics,
- `--dtype=complex` selects TorchLean's parametric complex scalar over Float32,
- `--dtype=real` selects `ℝ` (proof-only; errors at runtime).
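A workflow entrypoint then typically has the following shape (the precise signature of `runWithDType`, in particular how the scalar type and its instances are threaded through the callback, is assumed; `tensorF` is used as sketched earlier on this page):

```lean
-- Sketch; only `runWithDType` and the `Float → α` cast argument are taken
-- from the description above.
def main (args : List String) : IO Unit :=
  runWithDType args fun cast => do
    let x ← IO.ofExcept (tensorF [2, 2] cast [1.0, 2.0, 3.0, 4.0])
    IO.println "built a 2×2 tensor under the selected backend"
```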
Like `runWithDType`, but also provides an `API.Runtime.Scalar α` instance.
Use this when your workflow uses numeric literals (`1.0`, `-3.5`, etc.) at runtime.
Convenience wrapper for runnable binaries that need runtime float-literal injection.