NF: a rounded scalar type (rounding-on-ℝ)
NeuralFloat (the record with a mantissa/exponent pair) is useful for talking about the grid and for
stating format predicates like FLT_format. In many places, though, we want something closer to a
“numeric scalar type” that we can plug into higher-level specs and examples.
NF β fexp rnd is that scalar:
- it stores a semantic value val : ℝ, and
- by construction, every primitive operation rounds back to the format using neural_round.
So when you write a + b in NF, what you get is:
val(a + b) = round( val(a) + val(b) )
This is the standard textbook model used for floating-point error analysis: compute in reals, then incur a rounding error at each step (Higham/Goldberg style).
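To make the shape concrete, here is a minimal self-contained sketch of the model, assuming Mathlib for ℝ. The primed names and the signature of neuralRoundStub are hypothetical stand-ins for this example only (the stub is the identity so the sketch compiles; the file's neural_round actually rounds onto the fexp grid):

```lean
import Mathlib

-- Stand-in for neural_round; its signature here is an assumption.
-- The real function rounds x onto the exponent grid chosen by fexp,
-- applying rnd to the scaled mantissa. We use the identity so the
-- sketch compiles without the file's definitions.
noncomputable def neuralRoundStub (β : ℕ) (fexp : ℤ → ℤ) (rnd : ℝ → ℤ) (x : ℝ) : ℝ :=
  x

-- Toy version of NF: a real value tagged with the format parameters.
structure NF' (β : ℕ) (fexp : ℤ → ℤ) (rnd : ℝ → ℤ) where
  val : ℝ

-- "Compute in ℝ, then round": val (a + b) = round (val a + val b).
noncomputable instance {β fexp rnd} : Add (NF' β fexp rnd) :=
  ⟨fun a b => ⟨neuralRoundStub β fexp rnd (a.val + b.val)⟩⟩
```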
Trust boundary:
- NF / NeuralFloat are proof-relevant Lean models of rounded arithmetic (built on ℝ).
- Instantiating NF with IEEE single-precision parameters + round-to-nearest-even models the finite fragment of IEEE-754 binary32 (subnormals/rounding via fexp, but no NaN/Inf).
- Correspondence to hardware float32 / Lean's builtin Float is not proved in this file; that connection is an external assumption/interface boundary (or requires a separate verified kernel).
Rounded scalar value at a given radix/format/rounding mode.
β is the radix (typically 2), fexp selects the exponent grid, and rnd rounds the scaled
mantissa to an integer.
- val : ℝ, the underlying semantic value.
The rounding operator associated with the format: roundR x = neural_round … x.
Inject a real into NF by rounding it onto the target grid.
Forgetful projection (semantic view): treat an NF as a real number.
toReal (ofReal x) is definitionally the rounded real roundR x.
A default inhabitant (rounded zero).
Coerce natural literals into NF by rounding (n : ℝ) onto the grid.
0 and 1 for NF are defined via ofReal, so they live on the target grid.
1 : NF is ofReal 1, i.e. the rounded real 1 on the target grid.
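Continuing the sketch above, the injection/projection pair and the grid-aligned literals could look as follows (primed names are hypothetical):

```lean
-- Inject by rounding onto the grid (sketch of ofReal).
noncomputable def ofReal' {β fexp rnd} (x : ℝ) : NF' β fexp rnd :=
  ⟨neuralRoundStub β fexp rnd x⟩

-- Forgetful projection (sketch of toReal).
def toReal' {β fexp rnd} (a : NF' β fexp rnd) : ℝ := a.val

-- 0 and 1 live on the grid because they go through ofReal'.
noncomputable instance {β fexp rnd} : OfNat (NF' β fexp rnd) 0 := ⟨ofReal' 0⟩
noncomputable instance {β fexp rnd} : OfNat (NF' β fexp rnd) 1 := ⟨ofReal' 1⟩

-- toReal (ofReal x) is the rounded real, definitionally:
example {β fexp rnd} (x : ℝ) :
    toReal' (ofReal' (β := β) (fexp := fexp) (rnd := rnd) x)
      = neuralRoundStub β fexp rnd x := rfl
```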
Arithmetic on NF is “compute in ℝ, then round”.
This is the key choice that makes many error bounds compositional: each primitive incurs at most
ulp/2 of rounding error (under round-to-nearest assumptions), so long compositions can be bounded
by accumulating per-op bounds.
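For instance, in the sketch above, a round-to-nearest hypothesis on the rounding function transfers to the lifted + with no work, because val (a + b) is definitionally the rounded real sum (the ulp parameter and the hypothesis h are assumptions of this sketch):

```lean
example {β fexp rnd} (a b : NF' β fexp rnd) (ulp : ℝ → ℝ)
    (h : ∀ x : ℝ, |neuralRoundStub β fexp rnd x - x| ≤ ulp x / 2) :
    |(a + b).val - (a.val + b.val)| ≤ ulp (a.val + b.val) / 2 :=
  h (a.val + b.val)
```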
Rounded multiplication: val(a * b) = roundR (val a * val b).
Boolean equality on NF values (semantic equality of reals).
This is not intended as a fast runtime check (it relies on classical decidability for ℝ), but
it is convenient for specs that want a BEq instance for logging or small examples.
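In the sketch's vocabulary, such an instance might be written like this (hypothetical; the file's actual BEq may be stated differently):

```lean
-- Boolean equality via classical decidability of equality on ℝ.
open Classical in
noncomputable instance {β fexp rnd} : BEq (NF' β fexp rnd) :=
  ⟨fun a b => decide (a.val = b.val)⟩
```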
Min/max in the semantic order on ℝ, lifted to NF.
max on NF, defined by comparing the underlying real values.
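A sketch of the lifted instances; note that min a.val b.val is always one of the two stored values, so no re-rounding is needed if the inputs already lie on the grid (whether the file re-rounds here is not shown in this excerpt):

```lean
noncomputable instance {β fexp rnd} : Min (NF' β fexp rnd) :=
  ⟨fun a b => ⟨min a.val b.val⟩⟩

noncomputable instance {β fexp rnd} : Max (NF' β fexp rnd) :=
  ⟨fun a b => ⟨max a.val b.val⟩⟩
```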
Exponentiation, rounded back to the grid.
We define a^b via exp(b * log a) when a > 0. If a ≤ 0 we return 0 to keep the operation
total in ℝ. (In practical ML code this is usually guarded or avoided; here we prefer a total
spec-level function over partiality.)
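A sketch of that choice (hypothetical names). For comparison, Mathlib's Real.rpow makes different choices at non-positive bases (e.g. 0 ^ 0 = 1), so this totalization is a deliberate spec-level decision:

```lean
-- Total spec-level power: exp (b * log a) for positive a, else 0.
noncomputable def rpowTotal (a b : ℝ) : ℝ :=
  if 0 < a then Real.exp (b * Real.log a) else 0

-- Lifted to NF': compute in ℝ, then round.
noncomputable instance {β fexp rnd} :
    HPow (NF' β fexp rnd) (NF' β fexp rnd) (NF' β fexp rnd) :=
  ⟨fun a b => ⟨neuralRoundStub β fexp rnd (rpowTotal a.val b.val)⟩⟩
```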
Common math functions lifted to NF by “evaluate in ℝ, then round”.
This matches the same modeling decision as Add/Mul: the spec says what real function we intend,
and the rounding model accounts for discretization.
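The lifting pattern is a single combinator; a sketch (the file's actual names for these functions are not shown in this excerpt):

```lean
-- Generic one-argument lift: evaluate in ℝ, then round.
noncomputable def lift1 {β fexp rnd} (f : ℝ → ℝ)
    (a : NF' β fexp rnd) : NF' β fexp rnd :=
  ⟨neuralRoundStub β fexp rnd (f a.val)⟩

noncomputable def nfExp {β fexp rnd} : NF' β fexp rnd → NF' β fexp rnd :=
  lift1 Real.exp

noncomputable def nfSqrt {β fexp rnd} : NF' β fexp rnd → NF' β fexp rnd :=
  lift1 Real.sqrt
```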
Numeric constants for NF via rounded reals.
Context instance used by TorchLean specs.
We provide a decidable > relation by classical reasoning on ℝ (noncomputable, but fine for the
spec layer).
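In the sketch, the order and its classical decidability could be set up as follows (Classical.propDecidable does the work; everything stays noncomputable):

```lean
instance {β fexp rnd} : LT (NF' β fexp rnd) :=
  ⟨fun a b => a.val < b.val⟩

open Classical in
noncomputable instance {β fexp rnd} (a b : NF' β fexp rnd) :
    Decidable (a > b) :=
  inferInstance
```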
Extract an approximate radix-β mantissa/exponent pair for debugging.
We compute:
- e := cexp(x) from the format (fexp),
- m := rnd(scaled_mantissa(x)),
so that x ≈ m · β^e (with the approximation coming from rounding).
This is meant for logs / human inspection; it is not used by the core proofs.
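A rough reconstruction in the sketch, using Mathlib's Int.log as a stand-in for the format's magnitude function (the file's cexp/scaled_mantissa are not shown in this excerpt, so the details here are assumptions):

```lean
-- Approximate (m, e) with x ≈ m * β^e (debugging only).
noncomputable def toMantissaExp' {β fexp rnd} (x : NF' β fexp rnd) : ℤ × ℤ :=
  let e := fexp (Int.log β |x.val| + 1)   -- cexp x = fexp (mag β x), roughly
  let m := rnd (x.val * (β : ℝ) ^ (-e))   -- rnd (scaled_mantissa x)
  (m, e)
```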
Format an integer in base 10.
Format an NF value as a radix-β scientific string "m * β^e".
Example (β = 2): "-123 * 2^7".
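Given the extraction above, the renderer is a one-liner in the sketch:

```lean
-- Render as "m * β^e", e.g. "-123 * 2^7" when β = 2.
noncomputable def formatRadix' {β fexp rnd} (x : NF' β fexp rnd) : String :=
  let (m, e) := toMantissaExp' x
  s!"{m} * {β}^{e}"
```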
Format an interval [lo, hi] for NF values using formatRadix.