TorchLean API

NN.Floats.NeuralFloat.ErrorBounds

Floating-Point Error Bounds #

We collect small, compositional error bounds for TorchLean’s Flocq-style rounding model.

The lowest-level rounding interface lives in NN/Floats/NeuralFloat/Rounding.lean: once you have a rounding mode rnd : ℝ → ℤ that satisfies the usual “round-to-nearest” property, you get the familiar half-ULP bound for a single rounding step.

Here we package that core lemma into a few helper theorems that come up frequently in proofs (e.g. the fl(x) = x(1+δ) factorization), and a lightweight mixed-precision estimate based on machine-epsilon style proxies.
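For orientation, the core lemma has roughly the following shape (a sketch only: the name `neural_round_half_ulp` is hypothetical, and the authoritative statement lives in NN/Floats/NeuralFloat/Rounding.lean):

```lean
-- Hypothetical restatement of the core half-ULP bound: one round-to-nearest
-- step moves a real x by at most half a unit in the last place at x.
theorem neural_round_half_ulp {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) :
    |neuralRound rnd x - x| ≤ neuralUlp β fexp x / 2 := by
  sorry -- the actual proof lives in NN/Floats/NeuralFloat/Rounding.lean
```

Every bound below is this estimate instantiated at a particular real-valued expression.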

More ambitious Higham/Goldberg-style results (dot products, matvec, backward stability, etc.) require a concrete definition of the computed kernels and additional format hypotheses; those belong in specialized files with explicit hypotheses.

Single Operation Error Bounds #

noncomputable def TorchLean.Floats.ErrorBounds.relativeError (exact computed : ℝ) : ℝ

Relative error as a scalar metric.

We define this as |computed - exact| / |exact|, with the convention relativeError 0 _ = 0. This convention avoids a division-by-zero branch in downstream statements.
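A minimal sketch of a definition meeting this specification (the source may phrase it differently):

```lean
-- Sketch: relative error with the `exact = 0` convention made explicit.
noncomputable def relativeError (exact computed : ℝ) : ℝ :=
  if exact = 0 then 0 else |computed - exact| / |exact|
```

Since real division in Mathlib returns 0 on a zero denominator, the branch-free formula |computed - exact| / |exact| would satisfy the same convention; the conditional merely makes it syntactically visible.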

theorem TorchLean.Floats.ErrorBounds.relative_error_round_ulp {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) (hx : x ≠ 0) :
relativeError x (neuralRound rnd x) ≤ neuralUlp β fexp x / (2 * |x|)

Relative error bound for a single neuralRound step (ULP form).

This is the “divide the half-ULP absolute bound by |x|” version of the classic rounding model. It is often the easiest lemma to use when a proof is naturally phrased in relative terms.
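As a usage sketch, the nonzero side condition is typically discharged by norm_num when the operand is concrete (the operand 2 here is purely illustrative):

```lean
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] :
    relativeError 2 (neuralRound rnd 2) ≤ neuralUlp β fexp 2 / (2 * |(2 : ℝ)|) :=
  relative_error_round_ulp rnd 2 (by norm_num)
```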

One-step bounds for common expressions #

theorem TorchLean.Floats.ErrorBounds.round_add_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x y : ℝ) :
|neuralRound rnd (x + y) - (x + y)| ≤ neuralUlp β fexp (x + y) / 2

Rounding error for a real addition.

This is just the core half-ULP bound instantiated at the expression x + y.
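For instance, the bound instantiates at arbitrary operands with no side conditions (a sketch, assuming this file's typeclass context):

```lean
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (a b : ℝ) :
    |neuralRound rnd (a + b) - (a + b)| ≤ neuralUlp β fexp (a + b) / 2 :=
  round_add_abs_error rnd a b
```

The multiplication and division lemmas below are used in exactly the same way.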

theorem TorchLean.Floats.ErrorBounds.round_mul_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x y : ℝ) :
|neuralRound rnd (x * y) - x * y| ≤ neuralUlp β fexp (x * y) / 2

Rounding error for a real multiplication.

This is the core half-ULP bound instantiated at the expression x * y.

theorem TorchLean.Floats.ErrorBounds.round_div_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x y : ℝ) :
|neuralRound rnd (x / y) - x / y| ≤ neuralUlp β fexp (x / y) / 2

Rounding error for a real division.

theorem TorchLean.Floats.ErrorBounds.round_fma_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x y z : ℝ) :
|neuralRound rnd (x * y + z) - (x * y + z)| ≤ neuralUlp β fexp (x * y + z) / 2

Rounding error for a “fused multiply-add” style expression x*y + z (one rounding step).
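By contrast, an unfused multiply-then-add rounds twice, so its error budget carries two half-ULP terms. A sketch of that comparison, assembled from round_mul_abs_error and round_add_abs_error via the triangle inequality (this combined statement is not itself a lemma of this file):

```lean
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x y z : ℝ) :
    |neuralRound rnd (neuralRound rnd (x * y) + z) - (x * y + z)| ≤
      neuralUlp β fexp (neuralRound rnd (x * y) + z) / 2
        + neuralUlp β fexp (x * y) / 2 := by
  -- one half-ULP for the outer addition, one for the inner product
  have h₁ := round_add_abs_error (β := β) (fexp := fexp) rnd (neuralRound rnd (x * y)) z
  have h₂ := round_mul_abs_error (β := β) (fexp := fexp) rnd x y
  calc |neuralRound rnd (neuralRound rnd (x * y) + z) - (x * y + z)|
      ≤ |neuralRound rnd (neuralRound rnd (x * y) + z) - (neuralRound rnd (x * y) + z)|
          + |neuralRound rnd (x * y) + z - (x * y + z)| := abs_sub_le _ _ _
    _ = |neuralRound rnd (neuralRound rnd (x * y) + z) - (neuralRound rnd (x * y) + z)|
          + |neuralRound rnd (x * y) - x * y| := by rw [add_sub_add_right_eq_sub]
    _ ≤ neuralUlp β fexp (neuralRound rnd (x * y) + z) / 2
          + neuralUlp β fexp (x * y) / 2 := add_le_add h₁ h₂
```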

theorem TorchLean.Floats.ErrorBounds.round_sqrt_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) :
|neuralRound rnd (Real.sqrt x) - Real.sqrt x| ≤ neuralUlp β fexp (Real.sqrt x) / 2

Rounding error for Real.sqrt x (one rounding step).

theorem TorchLean.Floats.ErrorBounds.round_rsqrt_abs_error {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) :
|neuralRound rnd (1 / Real.sqrt x) - 1 / Real.sqrt x| ≤ neuralUlp β fexp (1 / Real.sqrt x) / 2

Rounding error for 1 / Real.sqrt x (one rounding step).

theorem TorchLean.Floats.ErrorBounds.neural_round_relative_error_ulp {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) (hx : x ≠ 0) :
∃ (δ : ℝ), |δ| ≤ neuralUlp β fexp x / (2 * |x|) ∧ neuralRound rnd x = x * (1 + δ)

Relative error factorisation for rounding, with a ULP-based bound.

This is the standard model fl(x) = x(1+δ), with |δ| bounded using the ULP at x.
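In proofs one typically destructures the existential and rewrites with the factorisation; a sketch:

```lean
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) (hx : x ≠ 0) :
    ∃ δ : ℝ, neuralRound rnd x - x = x * δ ∧ |δ| ≤ neuralUlp β fexp x / (2 * |x|) := by
  obtain ⟨δ, hδ, hfl⟩ := neural_round_relative_error_ulp (β := β) (fexp := fexp) rnd x hx
  exact ⟨δ, by rw [hfl]; ring, hδ⟩
```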

Mixed Precision Error Bounds #

theorem TorchLean.Floats.ErrorBounds.mixed_precision_accumulation_benefit (n : ℕ) (input_prec accum_prec : NeuralPrecision) (h_n : n ≥ 1) (h_accum_better : accum_prec.machineEpsilon ≤ input_prec.machineEpsilon / ↑n) :
have error_with_accum := ↑n * input_prec.machineEpsilon + accum_prec.machineEpsilon;
have error_without_accum := ↑n * input_prec.machineEpsilon;
error_with_accum ≤ 2 * error_without_accum

When accumulating in higher precision (e.g., FP32 accumulation for BFloat16 inputs), the total error is dominated by the input precision, not by the accumulation precision.
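The arithmetic behind the bound is elementary: ε_acc ≤ ε_in / n ≤ ε_in for n ≥ 1, so n·ε_in + ε_acc ≤ (n + 1)·ε_in ≤ 2·(n·ε_in). A self-contained sketch over ℝ (variable names hypothetical; the nonnegativity hypothesis stands in for what a NeuralPrecision machine epsilon presumably satisfies):

```lean
import Mathlib

-- εin: machine epsilon of the input format; εacc: of the accumulator.
example (n εin εacc : ℝ) (hn : 1 ≤ n) (hin : 0 ≤ εin)
    (hacc : εacc ≤ εin / n) :
    n * εin + εacc ≤ 2 * (n * εin) := by
  -- εin / n ≤ εin because n ≥ 1
  have h1 : εin / n ≤ εin := div_le_self hin hn
  -- εin ≤ n * εin for the same reason
  have h2 : εin ≤ n * εin := le_mul_of_one_le_left hin hn
  linarith
```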