Float32 #
Unified Float32 entrypoint (TorchLean).
This file keeps several common meanings of "float32" separate and explicit: the trusted runtime
`Float`/`Float32` implementation, a proof-oriented rounding model (`FP32`), and an executable
bit-level model (`IEEE32Exec`).
What “32-bit precision” means here #
Throughout TorchLean, float32 refers to IEEE-754 binary32: a 32-bit floating-point format with
- 1 sign bit,
- 8 exponent bits,
- 23 fraction bits (24 bits of precision including the implicit leading 1 for normals).
This is a widely supported baseline dtype for ML workloads; other formats (bf16/fp16/tf32, etc.) can be added on top of the same structure.
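To make the field layout concrete, here is a minimal sketch in Lean; `fieldsOfBits` is a hypothetical helper, not part of TorchLean:

```lean
-- Hypothetical helper (not TorchLean API): split a raw binary32 bit
-- pattern into its sign / exponent / fraction fields.
def fieldsOfBits (bits : UInt32) : UInt32 × UInt32 × UInt32 :=
  (bits >>> 31,             -- 1 sign bit
   (bits >>> 23) &&& 0xFF,  -- 8 exponent bits (biased by 127)
   bits &&& 0x7FFFFF)       -- 23 fraction bits

-- 1.0 is encoded as 0x3F800000: sign 0, biased exponent 127, fraction 0.
#eval fieldsOfBits 0x3F800000  -- (0, 127, 0)
```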
The three meanings we support #
- The Lean runtime's `Float`/`Float32` are fast and convenient, but their arithmetic is implemented by external/runtime code. That behavior is not kernel-reducible, so we treat it as an explicit trust boundary.
- `FP32` is our proof-oriented float32 semantics: a finite-only "rounding-on-ℝ" model (in the style of Flocq) in which each primitive operation is specified as "compute in ℝ, then round to the float32 grid". Concretely, it fixes binary32-style parameters (radix 2, an exponent function for gradual underflow, round-to-nearest ties-to-even). It does not model NaN/Inf.
- `IEEE32Exec` is our execution-oriented float32 semantics: an executable, bit-level IEEE-754 binary32 kernel implemented in Lean (raw `UInt32` bits, with signed zeros, subnormals, NaNs/Infs, and IEEE rules for core arithmetic). Transcendentals are not specified by IEEE-754; we provide deterministic executable definitions, but we do not claim they match any particular hardware/libm. This gives a concrete meaning to "float32 execution" inside Lean, independent of a particular platform's runtime/libm.
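For concreteness, here is a minimal sketch of the "compute exactly in ℝ, then round" specification shape, assuming Mathlib's ℝ; the names (`RoundingModel`, `toReal`, `round`, `add_spec`) are illustrative stand-ins, not TorchLean's actual definitions:

```lean
import Mathlib.Data.Real.Basic

-- Sketch only: each primitive of a rounding-on-ℝ model is pinned down
-- by one equation of the form "operate exactly in ℝ, then round".
structure RoundingModel (F : Type) where
  toReal : F → ℝ     -- exact real value of a float32 datum
  round  : ℝ → F     -- e.g. round-to-nearest, ties-to-even
  add    : F → F → F -- the float32-level operation
  add_spec : ∀ x y, add x y = round (toReal x + toReal y)
```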
The intent is to let the rest of the codebase depend on a single name (Float32/F32) while
keeping the boundary easy to see and easy to swap:
- theorem statements and error bounds typically use `FP32`,
- runnable demos typically use `IEEE32Exec`,
- the runtime `Float32` is treated as an explicitly trusted/assumed implementation detail.
This design is described in Appendix C ("Numerical Semantics") of the TorchLean paper:
arXiv:2602.22631 (https://arxiv.org/abs/2602.22631).
Backend selection #
Selects which float32 semantics TorchLean should use.
.fp32 is the proof-oriented rounding-on-ℝ model.
.ieee754Exec is the executable, bit-level IEEE-754 binary32 model.
- `fp32 : Float32Mode`: finite float32 rounding model (`FP32`).
- `ieee754Exec : Float32Mode`: executable IEEE-754 binary32 kernel (`IEEE32Exec`), i.e. bit-level float32 with NaN/Inf.
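Reconstructed from the two constructors listed above, the selector could be written as follows (a sketch; the actual TorchLean declaration may differ in attributes or derived instances):

```lean
inductive Float32Mode where
  | fp32         -- proof-oriented rounding-on-ℝ model (FP32)
  | ieee754Exec  -- executable bit-level IEEE-754 binary32 kernel (IEEE32Exec)
  deriving Repr, DecidableEq
```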
`IEEE32Exec`: executable float32 backend (bit-level IEEE-754 binary32).
This is the scalar type to pick when you want runs inside Lean to have an explicit float32 meaning (including NaN/Inf and signed-zero behavior) rather than depending on the platform runtime.
`Float32`: TorchLean's "float32" surface with selectable semantics.
The default is `.ieee754Exec` because it is the closest to real float32 execution that can be
defined inside Lean. For theorem statements and compositional error reasoning, prefer `.fp32`.
`F32`: short alias for `Float32`, used in demos/docs.
Lightweight logging (CLI/examples) #
Human-readable summary of the selected float32 semantics.
Print a one-line summary of the selected float32 semantics.
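A sketch of what these helpers could look like over the `Float32Mode` sketch above; the names `summary` and `printSummary` are assumptions, not confirmed API:

```lean
-- Hypothetical names: one summary string per backend, plus an IO wrapper.
def Float32Mode.summary : Float32Mode → String
  | .fp32        => "float32 = FP32 (finite rounding-on-ℝ model, no NaN/Inf)"
  | .ieee754Exec => "float32 = IEEE32Exec (bit-level IEEE-754 binary32, NaN/Inf)"

def printSummary (m : Float32Mode) : IO Unit :=
  IO.println m.summary
```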