TorchLean API

NN.Proofs.RuntimeApprox.Rounding.RoundingApprox

Scalar Rounding Approximation #

Rounding-level approximation lemmas.

This module is the start of the runtime→spec bridge: it gives compositional error bounds for expressions evaluated under a neural_round model (e.g. NF / mixed-precision models).

These lemmas are scalar-level and can be lifted to tensors/graphs once a concrete set of ops is fixed (MLP first, then larger models).

PyTorch correspondence / citations #

In ordinary PyTorch execution, floating-point ops are performed in a chosen dtype (e.g. float32) with hardware/IEEE-754 rounding. In TorchLean, neuralRound/NF is a proof-relevant rounding model where each operation exposes an explicit ulp-style error bound suitable for composition. https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype

Scalar Approximation Predicate #

Absolute-error approximation for scalar real values.

scalarApprox x xhat eps means the rounded/interpreted value xhat is within absolute error eps of the ideal real value x.
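Concretely, the predicate can be sketched as follows. This is a hypothetical rendering, not the verbatim TorchLean definition, which may instead be phrased via the shared `ApproxTol.absOnly` predicate mentioned below:

```lean
import Mathlib

-- Hypothetical sketch: `xhat` is within absolute error `eps` of the ideal value `x`.
def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps
```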


Convert the scalar rounding predicate to the shared ApproxTol.absOnly predicate.

Exact equality is zero-error scalar approximation.

theorem Proofs.RuntimeRoundingApprox.scalarApprox_mono {x xhat eps₁ eps₂ : ℝ} (h : scalarApprox x xhat eps₁) (h' : eps₁ ≤ eps₂) :
    scalarApprox x xhat eps₂

Enlarging the error budget preserves scalar approximation.
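Assuming `scalarApprox` unfolds to an absolute-error inequality, monotonicity is a one-step transitivity argument. The following is a sketch under that assumption, not the actual TorchLean proof:

```lean
import Mathlib

-- Sketch assuming `scalarApprox x xhat eps` unfolds to `|xhat - x| ≤ eps`.
example {x xhat eps₁ eps₂ : ℝ} (h : |xhat - x| ≤ eps₁) (h' : eps₁ ≤ eps₂) :
    |xhat - x| ≤ eps₂ :=
  le_trans h h'
```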

Single-Step Rounding Bounds #

Interpret one rounded real operation as neuralRound applied to a real input.


One neuralRound step is within half an ulp of the exact real input.
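In symbols, this is the standard half-ulp property of round-to-nearest, stated here with the module's `neuralUlp` (the operator name `roundR` is taken from the rounded-operation descriptions below):

```latex
% Half-ulp bound for one round-to-nearest step at radix \beta
% with exponent function fexp:
\[
  \lvert \mathrm{roundR}(t) - t \rvert
    \;\le\; \tfrac{1}{2}\,\mathrm{ulp}_{\beta,\mathrm{fexp}}(t)
\]
```

i.e. `scalarApprox t (roundR t) (neuralUlp β fexp t / 2)`.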

Compositional Bounds for + and * #

Rounded addition: compute roundR (x + y).


Rounded multiplication: compute roundR (x * y).
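Both operations can be sketched as the exact real operation followed by a single rounding step (hypothetical definitions; `roundR` stands for the neuralRound-based rounding operator, abstracted here as a variable):

```lean
import Mathlib

variable (roundR : ℝ → ℝ)  -- the neuralRound-based rounding operator (assumed)

-- Each rounded op performs the exact real operation, then one `roundR` step,
-- contributing one half-ulp of fresh error.
def roundedAdd (x y : ℝ) : ℝ := roundR (x + y)
def roundedMul (x y : ℝ) : ℝ := roundR (x * y)
```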

theorem Proofs.RuntimeRoundingApprox.scalarApprox_roundedAdd {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y xhat yhat epsx epsy : ℝ} (hx : scalarApprox x xhat epsx) (hy : scalarApprox y yhat epsy) :
    scalarApprox (x + y) (roundedAdd xhat yhat) (epsx + epsy + TorchLean.Floats.neuralUlp β fexp (xhat + yhat) / 2)

Compositional absolute-error bound for rounded addition.

The output budget is the sum of the input budgets plus one fresh rounding term for the addition result.
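The bound follows from the triangle inequality, splitting the total error into the two input errors plus one fresh half-ulp rounding step:

```latex
\begin{align*}
\lvert \mathrm{roundR}(\hat{x}+\hat{y}) - (x+y) \rvert
  &\le \lvert \mathrm{roundR}(\hat{x}+\hat{y}) - (\hat{x}+\hat{y}) \rvert
     + \lvert \hat{x} - x \rvert + \lvert \hat{y} - y \rvert \\
  &\le \tfrac{1}{2}\,\mathrm{ulp}_{\beta,\mathrm{fexp}}(\hat{x}+\hat{y})
     + \varepsilon_x + \varepsilon_y.
\end{align*}
```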

theorem Proofs.RuntimeRoundingApprox.scalarApprox_roundedMul {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y xhat yhat epsx epsy : ℝ} (hx : scalarApprox x xhat epsx) (hy : scalarApprox y yhat epsy) :
    scalarApprox (x * y) (roundedMul xhat yhat) ((|xhat| + epsx) * epsy + (|yhat| + epsy) * epsx + TorchLean.Floats.neuralUlp β fexp (xhat * yhat) / 2)

Compositional absolute-error bound for rounded multiplication.

Besides the fresh rounding term for xhat * yhat, the budget includes the usual first-order product perturbation terms, expressed with the available magnitude/error bounds.
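The perturbation terms come from the identity x̂ŷ − xy = x̂(ŷ − y) + y(x̂ − x) together with |y| ≤ |ŷ| + εy; the final rounding step adds one half-ulp as before:

```latex
\begin{align*}
\lvert \hat{x}\hat{y} - xy \rvert
  &\le \lvert \hat{x} \rvert\,\varepsilon_y + \lvert y \rvert\,\varepsilon_x
   \;\le\; (\lvert \hat{x} \rvert + \varepsilon_x)\,\varepsilon_y
         + (\lvert \hat{y} \rvert + \varepsilon_y)\,\varepsilon_x, \\
\lvert \mathrm{roundR}(\hat{x}\hat{y}) - xy \rvert
  &\le \lvert \hat{x}\hat{y} - xy \rvert
     + \tfrac{1}{2}\,\mathrm{ulp}_{\beta,\mathrm{fexp}}(\hat{x}\hat{y}).
\end{align*}
```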