API Core #
Public convenience API core on top of TorchLean's spec + runtime entrypoints.
This module holds the small foundational surface used by the higher-level API modules:
- scalar-role separation (`Semantics` vs `Runtime`)
- CLI helpers
- dtype dispatch
- tensor constructor re-exports
Most end-user code should not import this file directly; it is re-exported through `NN.API.Public`
and via the top-level `NN` facade.
PyTorch Mapping #
`DType` here plays a role similar to `torch.dtype` selection in Python, but it also encodes a key
TorchLean distinction:
- some scalar types are executable (you can `#eval` / run demos), and
- some scalar types are proof-only (e.g. `ℝ` or a noncomputable float model).
Public name for TorchLean's generic scalar semantics contract.
Read this as:
- "the model/loss is allowed to do its math over `α`"
- not "this backend is executable"
That second role belongs to `API.Runtime.Scalar`.
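A minimal sketch of what this semantics-only contract might look like. The class name `Semantics` comes from the docs above, but the `extends` list is an assumption for illustration, not TorchLean's actual definition:

```lean
/-- Hedged sketch: `Semantics` only asserts that math over `α` is
well-defined; it says nothing about executability. -/
class Semantics (α : Type) extends Add α, Mul α, Neg α

-- `ℝ` can inhabit this role without being executable;
-- `Float` inhabits it and happens to be executable, which is
-- the separate `Runtime`-side concern.
```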
Host-side scalar ReLU for task definitions and synthetic targets.
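Concretely, a host-side scalar ReLU of this kind is a one-liner over Lean's builtin `Float` (the name `hostRelu` is illustrative, not TorchLean's actual identifier):

```lean
/-- Illustrative host-side ReLU: clamp negatives to zero. -/
def hostRelu (x : Float) : Float := max x 0.0

#eval hostRelu (-2.5)  -- clamps to zero
#eval hostRelu 3.0     -- passes through
```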
Runtime conversion from host `Float` constants into a TorchLean scalar type.
PyTorch analogy:
- in Python tutorials, users write float literals directly and tensors inherit a runtime dtype;
- in TorchLean, examples often start from host `Float` literals and inject them into the chosen
  runtime scalar backend with `Runtime.ofFloat`.
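One plausible shape for this injection interface, sketched with assumed names (`RuntimeOfFloat` and `constants` are illustrative; the real TorchLean class likely differs):

```lean
/-- Hedged sketch of a host-float injection class. -/
class RuntimeOfFloat (α : Type) where
  ofFloat : Float → α

-- Demo code can then stay generic over the scalar backend:
def constants [RuntimeOfFloat α] : List α :=
  [0.0, 0.5, 1.0].map RuntimeOfFloat.ofFloat
```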
Generic host-float injection for TorchLean scalar backends.
`Float` is already the host literal type, so injection is the identity.
Inject host Float literals into the executable IEEE-754 binary32 backend.
Inject host literals into the dual-number backend used by the runtime autograd engine.
We interpret a literal as a primal value with zero tangent/adjoint component.
Inject host literals into TorchLean's parametric complex scalar (imaginary part defaults to 0).
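A sketch of both of these injections, with stand-in structures (`Dual`, `Cplx`, and their field names are assumptions for illustration, not TorchLean's definitions):

```lean
/-- Stand-in dual number: primal value plus tangent component. -/
structure Dual (α : Type) where
  primal  : α
  tangent : α

/-- Stand-in complex number over a real backend. -/
structure Cplx (α : Type) where
  re : α
  im : α

-- A host literal becomes a primal value with zero tangent…
def Dual.ofFloat (x : Float) : Dual Float := ⟨x, 0.0⟩
-- …or a real part with zero imaginary part.
def Cplx.ofFloat (x : Float) : Cplx Float := ⟨x, 0.0⟩
```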
Allow numeric literals like `0.1` to elaborate to any TorchLean runtime scalar backend.
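One way this can work is to route Lean's `OfScientific` literal elaboration through the host-float injection, reusing `Float`'s own literal parsing. This is a hedged sketch using an assumed `RuntimeOfFloat` class rather than TorchLean's actual instance:

```lean
/-- Assumed injection class for this sketch. -/
class RuntimeOfFloat (α : Type) where
  ofFloat : Float → α

/-- Sketch: parse the literal as a `Float` first, then inject it,
so `(0.1 : α)` elaborates for any runtime backend. -/
instance [RuntimeOfFloat α] : OfScientific α where
  ofScientific m s e :=
    RuntimeOfFloat.ofFloat (OfScientific.ofScientific m s e : Float)
```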
Like `takeFlagValueOnce`, but return the value as a `System.FilePath`.
Common training flags that appear across many demos: `--epochs` and `--batch`.
Parse a `--steps` flag, but also accept `--epochs` as a synonym.
This is a small ergonomics helper for tutorials that use a step-based loop, while still letting
users pass `--epochs` out of habit from typical PyTorch training code.
If both `--steps` and `--epochs` are provided, this errors.
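The synonym logic can be sketched as follows (helper names are assumptions; the real helper also strips the consumed flag from the argument list, which this sketch omits):

```lean
/-- Find the value following a `--flag value` pair, if any. -/
def findFlagValue (flag : String) : List String → Option String
  | f :: v :: rest =>
      if f == flag then some v else findFlagValue flag (v :: rest)
  | _ => none

/-- Accept `--steps` or `--epochs` as synonyms; error if both appear. -/
def takeStepsOrEpochs (args : List String) : Except String (Option Nat) :=
  match findFlagValue "--steps" args, findFlagValue "--epochs" args with
  | some _, some _ => .error "pass either --steps or --epochs, not both"
  | some v, none
  | none, some v   => .ok v.toNat?
  | none, none     => .ok none
```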
Configuration for the float32 dtype option.
We support both:
- proof-only float32 semantics (`mode = .fp32`, noncomputable), and
- executable IEEE-754 float32 semantics (`mode = .ieee754Exec`).
- `mode : TorchLean.Floats.Float32Mode`
Which float32 semantics backend to use.
Scalar type choice for demos and small executables.
This is a runtime selection mechanism used by example programs; the core library itself is
parametric in the scalar type `α`.
PyTorch Mapping #
This corresponds loosely to choosing `dtype=` in PyTorch, but with additional "proof-only"
variants:
- `.float` uses Lean's builtin `Float` (executable, but its IEEE-754 behavior is trusted),
- `.float32` uses TorchLean's float32 model (either proof-only or executable),
- `.complex` uses TorchLean's parametric complex scalar over a float32 backend,
- `.real` uses `ℝ` (proof-only; not executable).
- `float : DType`
- `float32 (cfg : Float32Config) : DType`
- `complex (cfg : Float32Config) : DType`
- `real : DType`
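Putting the listed constructors together, a mini stand-in for the selector might look like this (the real definitions live in TorchLean; the config literal below is an assumption for illustration):

```lean
inductive Float32Mode
  | fp32         -- proof-only semantics
  | ieee754Exec  -- executable IEEE-754 semantics

structure Float32Config where
  mode : Float32Mode

/-- Mini stand-in mirroring the constructor listing above. -/
inductive DType
  | float
  | float32 (cfg : Float32Config)
  | complex (cfg : Float32Config)
  | real

-- Example: an executable float32 selection for a demo binary.
def demoDType : DType := .float32 { mode := .ieee754Exec }
```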
Whether this dtype can be used in an executable (`IO`/`#eval`) context.
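A plausible shape for this check over a mini stand-in (re-declared locally so the sketch is self-contained; the real predicate may differ): only `ℝ` is never executable, while the float32 and complex variants depend on the chosen mode.

```lean
inductive Float32Mode
  | fp32
  | ieee754Exec

structure Float32Config where
  mode : Float32Mode

inductive DType
  | float
  | float32 (cfg : Float32Config)
  | complex (cfg : Float32Config)
  | real

/-- Sketch: executability depends only on the variant and its mode. -/
def DType.isExecutable : DType → Bool
  | .float => true
  | .float32 cfg | .complex cfg =>
      match cfg.mode with
      | .ieee754Exec => true
      | .fp32 => false
  | .real => false
```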
Log a human-readable description of the chosen dtype to stdout.
Parse a float32 mode selector string into a `Float32Mode`.
Parse and remove dtype flags from CLI arguments, using `default` when no dtype flags are provided.
This is the same parsing logic as `parseAndStrip`, but it lets higher-level runners choose a
different default dtype depending on context (e.g. CUDA eager requires `Float`).
Run `k` under the scalar type selected by `dt`.
If `dt` is proof-only, this returns an error rather than trying to execute noncomputable code.
Run `k` under the scalar type selected by `dt`, passing an explicit cast function `Float → α`.
This is a convenient shape for small demos that want to construct tensors from `Float` lists.
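A hedged sketch of this cast-passing runner over a mini `DType` stand-in (signature and error messages assumed; only the builtin-`Float` paths actually run here, whereas the real version dispatches to each executable backend):

```lean
inductive Float32Mode
  | fp32
  | ieee754Exec

structure Float32Config where
  mode : Float32Mode

inductive DType
  | float
  | float32 (cfg : Float32Config)
  | complex (cfg : Float32Config)
  | real

/-- Sketch: run `k` with an explicit `Float → α` cast for the chosen dtype,
erroring on proof-only selections instead of executing them. -/
def DType.runWithCast (dt : DType)
    (k : (α : Type) → (Float → α) → IO Unit) : IO Unit :=
  match dt with
  | .float => k Float id
  | .float32 ⟨.ieee754Exec⟩ => k Float id  -- stand-in for the float32 backend
  | _ => throw <| IO.userError "selected dtype is proof-only; cannot execute"
```

A caller would pass a `k` that is generic in `α`, building its tensors from `Float` lists via the supplied cast.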