Rounding modes and a half-ULP error bound #
We use this file as the “rounding” half of the Flocq-style rounding-on-ℝ model:
- rounding modes `rnd : ℝ → ℤ` (floor/ceil/trunc/nearest-even),
- the validity typeclasses `NeuralValidRnd` and `NeuralValidRndToNearest`,
- the core rounding operator `neural_round`,
- the standard bound `|neural_round … x - x| ≤ ulp(x)/2` for round-to-nearest.
These definitions are used by NF (the rounded scalar type), by the FP32 model, and by the
bridge layer that connects proofs to executable IEEE-754 behavior.
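The four rounding modes can be pictured as plain functions `ℝ → ℤ`. The following Lean sketch is illustrative only; the names mirror the prose above, but the actual definitions in this file may differ in detail:

```lean
-- Illustrative sketch of the four modes as functions ℝ → ℤ.
-- Names mirror the prose; the real definitions may differ.
noncomputable def neural_floor_round (x : ℝ) : ℤ := ⌊x⌋   -- toward -∞
noncomputable def neural_ceil_round  (x : ℝ) : ℤ := ⌈x⌉   -- toward +∞
noncomputable def neural_trunc_round (x : ℝ) : ℤ :=
  if 0 ≤ x then ⌊x⌋ else ⌈x⌉                              -- toward 0
noncomputable def neural_nearest_even (x : ℝ) : ℤ :=
  let lo := ⌊x⌋
  let frac := x - lo
  if frac < 1/2 then lo
  else if 1/2 < frac then lo + 1
  else if lo % 2 = 0 then lo else lo + 1                  -- tie: pick the even one
```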
References #
- IEEE Std 754-2019, "IEEE Standard for Floating-Point Arithmetic".
- D. Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic", ACM Computing Surveys, 1991.
- N. J. Higham, "Accuracy and Stability of Numerical Algorithms", SIAM, 2nd ed., 2002.
- The Flocq Coq library is a classic example of axiomatizing "rounding on ℝ"; TorchLean's
`NeuralFloat` layer is inspired by that style (but is implemented natively in Lean).
Round toward negative infinity (floor)
Instances For
Round toward positive infinity (ceiling)
Instances For
Round toward zero (truncation)
Instances For
Round to nearest, ties to even
Instances For
Rounding modes with a half-unit error bound on the rounded integer.
This matches "round-to-nearest" style roundings (ties can be resolved arbitrarily):
|rnd x - x| ≤ 1/2 for all x.
Instances
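A hypothetical Lean rendering of the two validity typeclasses; the field names are assumptions, only the overall shape is taken from the prose:

```lean
-- Sketch of the validity typeclasses; field names are hypothetical.
class NeuralValidRnd (rnd : ℝ → ℤ) : Prop where
  monotone : ∀ x y : ℝ, x ≤ y → rnd x ≤ rnd y   -- monotone
  int_fix  : ∀ n : ℤ, rnd (n : ℝ) = n           -- fixes integers

-- Round-to-nearest adds the half-unit bound |rnd x - x| ≤ 1/2.
class NeuralValidRndToNearest (rnd : ℝ → ℤ) extends NeuralValidRnd rnd : Prop where
  half_err : ∀ x : ℝ, |(rnd x : ℝ) - x| ≤ 1/2
```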
neural_floor_round is a valid rounding mode (monotone and fixes integers).
neural_ceil_round is a valid rounding mode (monotone and fixes integers).
Basic bounds for nearest-even rounding.
In words: neural_nearest_even x always lands in {⌊x⌋, ⌊x⌋ + 1}.
This is the key fact used in the monotonicity proof and in IEEE-style interval bounds.
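Stated as a Lean lemma, the interval fact might look as follows (statement sketch only; the lemma name is hypothetical and the proof is omitted):

```lean
-- Sketch: nearest-even lands on the floor or the floor plus one.
theorem neural_nearest_even_eq_floor_or_succ (x : ℝ) :
    neural_nearest_even x = ⌊x⌋ ∨ neural_nearest_even x = ⌊x⌋ + 1 := by
  sorry  -- proof omitted in this sketch
```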
neural_nearest_even is a valid rounding mode (monotone and fixes integers).
Nearest-even is a “round-to-nearest” mode in the integer sense: |rnd(x) - x| ≤ 1/2.
This is the key property required by NeuralValidRndToNearest.
neural_nearest_even satisfies the half-unit error bound |rnd x - x| ≤ 1/2.
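As a statement sketch (the theorem name is an assumption, not the file's actual identifier):

```lean
-- Sketch: the half-unit bound for nearest-even; proof omitted.
theorem neural_nearest_even_half_err (x : ℝ) :
    |(neural_nearest_even x : ℝ) - x| ≤ 1/2 := by
  sorry
```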
Core rounding operator (“compute in ℝ, then round back to the grid”).
We build a NeuralFloat record and then interpret it via neural_to_real.
Only mantissa/exponent affect the value; the metadata fields are set to conventional defaults.
Instances For
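At the value level, "compute in ℝ, then round back to the grid" means scaling by the canonical exponent, rounding the scaled mantissa to ℤ, and scaling back. A sketch under assumed names (`neural_cexp` is hypothetical here; the real `neural_round` builds a `NeuralFloat` record and interprets it via `neural_to_real`):

```lean
-- Value-level sketch; the actual neural_round builds a NeuralFloat
-- record and interprets it via neural_to_real.
noncomputable def neural_round_val (β : ℕ) (neural_cexp : ℝ → ℤ)
    (rnd : ℝ → ℤ) (x : ℝ) : ℝ :=
  let e := neural_cexp x                 -- canonical exponent for x
  let m := rnd (x * (β : ℝ) ^ (-e))     -- scaled mantissa, rounded to ℤ
  (m : ℝ) * (β : ℝ) ^ e                 -- back on the grid
```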
Rounding preserves exactly-representable numbers.
In words: if x lies on the grid described by (β,fexp) (neural_generic_format), then
rounding it with any valid rnd is a no-op.
This is the Flocq-style “round_generic” lemma.
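A statement sketch of the lemma; the signatures are assumptions following the prose, and the proof is omitted:

```lean
-- Sketch of the Flocq-style round_generic lemma; proof omitted.
theorem neural_round_generic (β : ℕ) (fexp : ℤ → ℤ) (rnd : ℝ → ℤ)
    [NeuralValidRnd rnd] (x : ℝ)
    (hx : neural_generic_format β fexp x) :
    neural_round β fexp rnd x = x := by
  sorry
```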
Scaled mantissa times base power equals the original value.
In words: scaled_mantissa(x) * β^{cexp(x)} = x.
This is the algebraic identity that justifies the scaling used before rounding.
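Unfolding the definitions, the identity is just `x · β^{-cexp(x)} · β^{cexp(x)} = x`, valid because β ≠ 0. A statement sketch under assumed signatures:

```lean
-- Sketch: scaled mantissa times base power recovers x; proof omitted.
theorem neural_scaled_mantissa_mul_bpow (β : ℕ) (fexp : ℤ → ℤ) (x : ℝ) :
    neural_scaled_mantissa β fexp x * (β : ℝ) ^ neural_cexp β fexp x = x := by
  sorry
```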
Half-ULP error bound for neural_round under round-to-nearest.
This is the basic “one-step” bound used by most error propagation arguments:
neural_round deviates from x by at most half an ulp at the chosen exponent scale.
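A statement sketch of the bound; the signatures follow the prose and are assumptions, and the proof is omitted:

```lean
-- Sketch of the one-step half-ULP bound under round-to-nearest.
theorem neural_round_half_ulp (β : ℕ) (fexp : ℤ → ℤ) (rnd : ℝ → ℤ)
    [NeuralValidRndToNearest rnd] (x : ℝ) :
    |neural_round β fexp rnd x - x| ≤ neural_ulp β fexp x / 2 := by
  sorry
```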