Floating-Point Error Bounds #
We collect small, compositional error bounds for TorchLean’s Flocq-style rounding model.
The lowest-level rounding interface lives in NN/Floats/NeuralFloat/Rounding.lean: once you have a rounding
mode rnd : ℝ → ℤ that satisfies the usual “round-to-nearest” property, you get the familiar
half-ULP bound for a single rounding step.
Here we package that core lemma into a few helper theorems that come up frequently in proofs
(e.g. the fl(x) = x(1+δ) factorization), and a lightweight mixed-precision estimate based on
machine-epsilon style proxies.
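In symbols, the core single-step model described above reads as follows (writing fl for one application of the rounding function and ulp(x) for the unit in the last place at x; this notation is illustrative, not the file's exact identifiers):

```latex
% Core half-ULP bound for one round-to-nearest step:
|\,\mathrm{fl}(x) - x\,| \;\le\; \tfrac{1}{2}\,\mathrm{ulp}(x)
```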
More ambitious Higham/Goldberg-style results (dot products, matvec, backward stability, etc.) require a concrete definition of the computed kernels and additional format hypotheses; those belong in specialized files with explicit hypotheses.
References #
- N. J. Higham, "Accuracy and Stability of Numerical Algorithms", SIAM, 2nd ed., 2002.
- D. Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic", ACM Computing Surveys, 1991.
- IEEE Std 754-2019, "IEEE Standard for Floating-Point Arithmetic" (for the intended meaning of rounding modes and ULP terminology).
Single Operation Error Bounds #
Relative error as a scalar metric.
We define this as |computed - exact| / |exact|, with the convention relative_error 0 _ = 0.
This convention avoids a division-by-zero branch in downstream statements.
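A minimal Lean sketch of this definition, with the exact-value-first argument order implied by the `relative_error 0 _ = 0` convention (the actual definition in the file may differ):

```lean
/-- Sketch only: relative error of `computed` against `exact`,
    returning `0` when the exact value is `0` so that downstream
    statements need no division-by-zero side condition. -/
noncomputable def relative_error (exact computed : ℝ) : ℝ :=
  if exact = 0 then 0 else |computed - exact| / |exact|
```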
Relative error bound for a single neural_round step (ULP form).
This is the “divide the half-ULP absolute bound by |x|” version of the classic rounding model.
It is often the easiest lemma to use when a proof is naturally phrased in relative terms.
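Concretely, dividing the half-ULP absolute bound by |x| gives, for x ≠ 0 (same illustrative notation as above):

```latex
\frac{|\,\mathrm{fl}(x) - x\,|}{|x|} \;\le\; \frac{\mathrm{ulp}(x)}{2\,|x|}
```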
One-step bounds for common expressions #
Rounding error for a real addition.
This is just the core half-ULP bound instantiated at the expression x + y.
Rounding error for a real multiplication.
This is the core half-ULP bound instantiated at the expression x * y.
Rounding error for a real division.
Rounding error for a “fused multiply-add” style expression x*y + z (one rounding step).
Rounding error for Real.sqrt x (one rounding step).
Rounding error for 1 / Real.sqrt x (one rounding step).
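Each of the lemmas above is the same half-ULP bound with the exact real expression substituted for x; for example, the addition and fused multiply-add cases (one rounding step each) read:

```latex
|\,\mathrm{fl}(x + y) - (x + y)\,| \;\le\; \tfrac{1}{2}\,\mathrm{ulp}(x + y),
\qquad
|\,\mathrm{fl}(x y + z) - (x y + z)\,| \;\le\; \tfrac{1}{2}\,\mathrm{ulp}(x y + z)
```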
Relative error factorization for rounding, with a ULP-based bound.
This is the standard model fl(x) = x(1+δ), with |δ| bounded using the ULP at x.
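Written out, the factorization reads (with δ depending on x):

```latex
\mathrm{fl}(x) = x\,(1 + \delta),
\qquad
|\delta| \;\le\; \frac{\mathrm{ulp}(x)}{2\,|x|} \quad (x \ne 0)
```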
Mixed Precision Error Bounds #
When accumulating in higher precision (e.g., FP32 accumulation for BFloat16 inputs), the error is dominated by the input precision, not the accumulation precision.
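As a back-of-the-envelope illustration (the unit roundoffs below are the standard IEEE/bfloat16 values, not constants taken from the file): rounding the inputs at unit roundoff u_lo and then accumulating at u_hi composes the two relative perturbations as

```latex
(1 + \delta_{\mathrm{lo}})(1 + \delta_{\mathrm{hi}})
  = 1 + \delta_{\mathrm{lo}} + \delta_{\mathrm{hi}} + \delta_{\mathrm{lo}}\delta_{\mathrm{hi}},
\qquad
|\delta_{\mathrm{lo}}| \le u_{\mathrm{lo}},\;
|\delta_{\mathrm{hi}}| \le u_{\mathrm{hi}}
```

With BFloat16 inputs (u_lo = 2⁻⁸) and FP32 accumulation (u_hi = 2⁻²⁴), the combined relative bound u_lo + u_hi + u_lo·u_hi ≈ 2⁻⁸ is dominated by the input precision, as claimed above.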