# Flocq-style formats (FIX / FLX / FLT)
We are not defining the executable IEEE-754 layer here.
What we do here is the same separation used by Flocq:
- `Core.lean` gives us mantissa/exponent arithmetic on `ℝ` plus `bpow`, `mag`, and `cexp`;
- this file defines format families via exponent-selection functions `fexp : ℤ → ℤ`.
These `fexp` functions let us talk about “a fixed-point grid”, “an unbounded float grid”, or “a bounded,
IEEE-like float grid with gradual underflow” without committing to a concrete bit encoding.
In particular (each family is sketched in Lean right after this list):
- `FIX_*` models a fixed exponent (useful for quantization / fixed-point reasoning),
- `FLX_*` models an unbounded-exponent float (a convenient intermediate model),
- `FLT_*` is the IEEE-like finite model (bounded exponent, gradual underflow).
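To make the separation concrete, here is a minimal Lean sketch of the three exponent-selection functions in the Flocq style. The primed names and exact signatures are illustrative assumptions; the file's own `FIX_exp` / `FLX_exp` / `FLT_exp` may bundle their arguments differently.

```lean
import Mathlib

-- Sketch only: Flocq-style exponent-selection functions (illustrative names).
def FIX_exp' (emin : ℤ) : ℤ → ℤ := fun _ => emin              -- fixed grid
def FLX_exp' (prec : ℤ) : ℤ → ℤ := fun e => e - prec          -- unbounded float
def FLT_exp' (emin prec : ℤ) : ℤ → ℤ := fun e => max (e - prec) emin
-- FLT is FLX with the exponent clamped at emin; the clamp is gradual underflow.
```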
If you want NaN/Inf/signed zero and an executable kernel, that lives in `NN/Floats/IEEEExec/`.
For TorchLean-specific “which precision do we use in each phase?” configuration helpers, see
`NN/Floats/NeuralFloat/Metadata.lean`.
References:
- Flocq project: https://flocq.gitlabpages.inria.fr/flocq/
- S. Boldo, G. Melquiond, “Flocq: a unified Coq library for proving floating-point algorithms correct” (ARITH 2011), DOI: 10.1109/ARITH.2011.40
- IEEE Standard for Floating-Point Arithmetic (IEEE 754-2019)
`FIX_exp emin` is the simplest exponent-selection function: it always returns the same exponent.
This is the Flocq “FIX” family. It is useful when you want to reason about values living on a
single fixed grid `β^emin * ℤ` (think: fixed-point arithmetic / quantization).
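For example, with β = 10 and emin = -2 the grid is { m · 10⁻² | m ∈ ℤ }, so 3.14 sits on it with mantissa 314. A quick sanity check (numbers chosen purely for illustration):

```lean
import Mathlib

-- 3.14 lies on the fixed grid 10^(-2) * ℤ with mantissa m = 314.
example : (3.14 : ℝ) = (314 : ℝ) * (10 : ℝ) ^ (-2 : ℤ) := by norm_num
```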
`FIX_exp` satisfies the standard Flocq-style `Valid_exp` axioms (here: `NeuralValidExp`).
Even though the proof is trivial, having the instance is what lets later theorems reuse the same generic lemmas for FIX/FLX/FLT.
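For orientation, Flocq's `Valid_exp` conditions have the following shape (which `NeuralValidExp` presumably mirrors; this spelling is a hedged sketch, not the file's actual class), and the constant function satisfies them on the nose:

```lean
import Mathlib

-- Sketch of Flocq's Valid_exp conditions (hypothetical spelling).
structure ValidExp' (fexp : ℤ → ℤ) : Prop where
  large : ∀ k, fexp k < k → fexp (k + 1) ≤ k
  small : ∀ k, k ≤ fexp k →
    fexp (fexp k + 1) ≤ fexp k ∧ ∀ l, l ≤ fexp k → fexp l = fexp k

-- The constant function (FIX) satisfies both conditions trivially.
example (emin : ℤ) : ValidExp' (fun _ => emin) :=
  ⟨fun k h => show emin ≤ k from le_of_lt h,
   fun _ _ => ⟨le_refl emin, fun _ _ => rfl⟩⟩
```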
`FIX_format emin x` says “`x` is exactly representable on the fixed grid”.
This is phrased via an existential `NeuralFloat β` so that it composes smoothly with the rest of
the rounding model (`neural_to_real`, ULP bounds, etc.).
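Spelled out without the `NeuralFloat` packaging, the mathematical content of the predicate is just the following (hypothetical spelling):

```lean
import Mathlib

-- Sketch: "x is exactly representable on the fixed grid β^emin * ℤ".
def FIX_format' (β : ℕ) (emin : ℤ) (x : ℝ) : Prop :=
  ∃ m : ℤ, x = (m : ℝ) * (β : ℝ) ^ emin
```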
`FLX_exp prec` is the unbounded-exponent family.
This is Flocq’s “FLX” family: it models a floating-point format with no exponent bounds but with
a mantissa precision parameter `prec`. It is a convenient intermediate model for proofs because it
removes underflow/overflow corner cases while still tracking mantissa rounding.
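The effect of `e - prec` is easiest to see numerically: the canonical exponent follows the magnitude at a fixed offset, with no floor, so even tiny values keep full precision. A small illustration (prec = 24 is an arbitrary choice here):

```lean
import Mathlib

def FLX_exp' (prec : ℤ) : ℤ → ℤ := fun e => e - prec

#eval FLX_exp' 24 10      -- -14  : the exponent tracks the magnitude
#eval FLX_exp' 24 (-200)  -- -224 : no underflow floor in FLX
```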
`FLX_exp` satisfies `NeuralValidExp`.
The side condition `0 < prec` matches the standard assumption that precision is positive.
Exact representability predicate for FLX.
Informally: there exists a mantissa/exponent pair with the mantissa bounded by the precision, and
`x = m * β^e`.
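A hedged sketch of that reading, with hypothetical names (the file's version is phrased via `NeuralFloat`):

```lean
import Mathlib

-- Sketch: FLX representability, i.e. a prec-digit mantissa, any exponent.
def FLX_format' (β : ℕ) (prec : ℤ) (x : ℝ) : Prop :=
  ∃ m e : ℤ, |m| < (β : ℤ) ^ prec.toNat ∧ x = (m : ℝ) * (β : ℝ) ^ e
```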
`FLT_exp emin prec` is the bounded-exponent family with gradual underflow.
This is Flocq’s “FLT” family: it is the closest match to the finite fragment of IEEE-754
floating-point (no NaNs/infinities), where the exponent is bounded below by `emin` and the effective
precision is `prec`. Gradual underflow is captured by taking `max (e - prec) emin`.
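Numerically, the `max` switches between a normal regime (where FLT behaves exactly like FLX) and a subnormal regime where the available precision shrinks gradually instead of flushing to zero. An illustration with small, arbitrary parameters prec = 3 and emin = -5:

```lean
import Mathlib

def FLT_exp' (emin prec : ℤ) : ℤ → ℤ := fun e => max (e - prec) emin

#eval FLT_exp' (-5) 3 4      -- 1  : normal range, same as FLX (4 - 3)
#eval FLT_exp' (-5) 3 (-3)   -- -5 : clamped; only -3 - (-5) = 2 digits remain
#eval FLT_exp' (-5) 3 (-10)  -- -5 : below the grid entirely
```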
`FLT_exp` satisfies `NeuralValidExp`.
This is where most format-bridge lemmas live when connecting proofs to float32-style bounds
(e.g. via `NN/Floats/FP32`).
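For reference, the usual Flocq-style parameters for IEEE-754 binary32 are prec = 24 and emin = -149 (from emin = 3 - emax - prec with emax = 128), so the subnormal grid step is 2^(-149). Presumably `NN/Floats/FP32` pins these same constants; the sketch below just evaluates the resulting `fexp`:

```lean
import Mathlib

def FLT_exp' (emin prec : ℤ) : ℤ → ℤ := fun e => max (e - prec) emin

-- binary32-style parameters: 24 bits of precision, exponent floor -149.
def fexp32 : ℤ → ℤ := FLT_exp' (-149) 24

#eval fexp32 1       -- -23  : values in [1, 2) live on a 2^(-23) grid
#eval fexp32 (-126)  -- -149 : bottom of the normal range, subnormal grid
```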
Exact representability predicate for FLT.
This version includes:
- a mantissa size bound (precision), and
- a lower exponent bound `emin ≤ exponent` (no values smaller than the minimum normal/subnormal scale, depending on the choice of `emin` and rounding).
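Putting both conditions together, a hedged sketch of the predicate (again bypassing the `NeuralFloat` packaging):

```lean
import Mathlib

-- Sketch: FLT representability = FLX's mantissa bound + an exponent floor.
def FLT_format' (β : ℕ) (emin prec : ℤ) (x : ℝ) : Prop :=
  ∃ m e : ℤ, |m| < (β : ℤ) ^ prec.toNat ∧ emin ≤ e ∧
    x = (m : ℝ) * (β : ℝ) ^ e
```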