Scalar Rounding Approximation #
Rounding-level approximation lemmas.
This module is the start of the runtime→spec bridge: it gives compositional error bounds for
expressions evaluated under a neuralRound model (e.g. NF / mixed-precision models).
These lemmas are scalar-level and can be lifted to tensors and graphs once a concrete set of ops is fixed (MLP first, then larger models).
PyTorch correspondence / citations #
In ordinary PyTorch execution, floating-point ops are performed in a chosen dtype (e.g. float32)
with hardware/IEEE-754 rounding. In TorchLean, neuralRound/NF is a proof-relevant rounding model
where each operation exposes an explicit ulp-style error bound suitable for composition.
https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype
Scalar Approximation Predicate #
Absolute-error approximation for scalar real values.
scalarApprox x xhat eps means the rounded/interpreted value xhat is within absolute error eps
of the ideal real value x.
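A minimal Lean sketch of the predicate, assuming the straightforward absolute-error definition; the module's actual definition may be phrased differently (for instance via ApproxTol):

```lean
import Mathlib

/-- Sketch: `xhat` is within absolute error `eps` of the ideal value `x`. -/
def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps
```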
Convert the scalar rounding predicate to the shared ApproxTol.absOnly predicate.
Exact equality is zero-error scalar approximation.
Enlarging the error budget preserves scalar approximation.
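Hedged sketches of these two lemmas, stated against the definition assumed above; the lemma names here are illustrative, not necessarily the module's:

```lean
import Mathlib

def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps

/-- Exact equality is zero-error scalar approximation. -/
theorem scalarApprox_refl (x : ℝ) : scalarApprox x x 0 := by
  simp [scalarApprox]

/-- Enlarging the error budget preserves scalar approximation. -/
theorem scalarApprox_mono {x xhat eps eps' : ℝ}
    (h : scalarApprox x xhat eps) (hle : eps ≤ eps') :
    scalarApprox x xhat eps' :=
  le_trans h hle
```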
Single-Step Rounding Bounds #
Interpret one rounded real operation as neuralRound applied to a real input.
One neuralRound step is within half an ulp of the exact real input.
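Packaged as a scalarApprox fact, the single-step bound plausibly looks as follows. Here roundR is abstracted as an arbitrary function with a uniform rounding hypothesis, and halfUlp is a hypothetical name for the half-ulp bound, since the module's concrete rounding model is not shown on this page:

```lean
import Mathlib

def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps

-- `roundR` and `halfUlp` stand in for the module's rounding function
-- and its half-ulp bound (assumed names/signatures).
variable (roundR : ℝ → ℝ) (halfUlp : ℝ)

/-- Sketch: one rounding step approximates its exact real input
with budget `halfUlp`. -/
theorem roundR_scalarApprox
    (hround : ∀ z : ℝ, |roundR z - z| ≤ halfUlp) (x : ℝ) :
    scalarApprox x (roundR x) halfUlp :=
  hround x
```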
Compositional Bounds For + And * #
Rounded addition: compute roundR (x + y).
Rounded multiplication: compute roundR (x * y).
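A plausible shape for both definitions, assuming roundR : ℝ → ℝ is the module's rounding function; the def names here are illustrative:

```lean
import Mathlib

variable (roundR : ℝ → ℝ)

/-- Rounded addition: round the exact real sum. -/
def roundedAdd (x y : ℝ) : ℝ := roundR (x + y)

/-- Rounded multiplication: round the exact real product. -/
def roundedMul (x y : ℝ) : ℝ := roundR (x * y)
```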
Compositional absolute-error bound for rounded addition.
The output budget is the sum of the input budgets plus one fresh rounding term for the addition result.
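A self-contained sketch of the addition bound under the abstract rounding hypothesis used above, written directly in terms of roundR; the statement shape and name are assumptions:

```lean
import Mathlib

def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps

variable (roundR : ℝ → ℝ) (halfUlp : ℝ)

/-- Sketch: input budgets `ex + ey` plus one fresh rounding term. -/
theorem scalarApprox_roundedAdd
    (hround : ∀ z : ℝ, |roundR z - z| ≤ halfUlp)
    {x xhat y yhat ex ey : ℝ}
    (hx : scalarApprox x xhat ex) (hy : scalarApprox y yhat ey) :
    scalarApprox (x + y) (roundR (xhat + yhat)) (ex + ey + halfUlp) := by
  show |roundR (xhat + yhat) - (x + y)| ≤ ex + ey + halfUlp
  have hx' : |xhat - x| ≤ ex := hx
  have hy' : |yhat - y| ≤ ey := hy
  -- split into the fresh rounding error and the propagated input errors
  have hsplit : roundR (xhat + yhat) - (x + y)
      = (roundR (xhat + yhat) - (xhat + yhat)) + ((xhat - x) + (yhat - y)) := by
    ring
  have h1 := abs_add (roundR (xhat + yhat) - (xhat + yhat))
    ((xhat - x) + (yhat - y))
  have h2 := abs_add (xhat - x) (yhat - y)
  have h3 := hround (xhat + yhat)
  rw [hsplit]
  linarith
```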
Compositional absolute-error bound for rounded multiplication.
Besides the fresh rounding term for xhat * yhat, the budget includes the usual first-order
product-perturbation terms, one per factor, each built from the available magnitude and error bounds.
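A matching sketch for multiplication. The magnitude bounds Bx, By, the exact arrangement of the budget, and the second-order ex * ey cross term are assumptions about the statement, not a transcription of it:

```lean
import Mathlib

def scalarApprox (x xhat eps : ℝ) : Prop :=
  |xhat - x| ≤ eps

variable (roundR : ℝ → ℝ) (halfUlp : ℝ)

/-- Sketch: first-order perturbation terms, the second-order cross term,
and one fresh rounding term for the product. -/
theorem scalarApprox_roundedMul
    (hround : ∀ z : ℝ, |roundR z - z| ≤ halfUlp)
    {x xhat y yhat ex ey Bx By : ℝ}
    (hx : scalarApprox x xhat ex) (hy : scalarApprox y yhat ey)
    (hBx : |x| ≤ Bx) (hBy : |y| ≤ By) :
    scalarApprox (x * y) (roundR (xhat * yhat))
      (Bx * ey + ex * By + ex * ey + halfUlp) := by
  show |roundR (xhat * yhat) - x * y| ≤ Bx * ey + ex * By + ex * ey + halfUlp
  have hx' : |xhat - x| ≤ ex := hx
  have hy' : |yhat - y| ≤ ey := hy
  have hex : (0 : ℝ) ≤ ex := (abs_nonneg _).trans hx'
  -- perturbation bound on the exact product, before rounding
  have hprod : |xhat * yhat - x * y| ≤ Bx * ey + ex * By + ex * ey := by
    have hsplit : xhat * yhat - x * y
        = x * (yhat - y) + (xhat - x) * y + (xhat - x) * (yhat - y) := by ring
    have habs1 := abs_add (x * (yhat - y) + (xhat - x) * y)
      ((xhat - x) * (yhat - y))
    have habs2 := abs_add (x * (yhat - y)) ((xhat - x) * y)
    have t1 : |x * (yhat - y)| ≤ Bx * ey := by
      rw [abs_mul]
      exact mul_le_mul hBx hy' (abs_nonneg _) ((abs_nonneg _).trans hBx)
    have t2 : |(xhat - x) * y| ≤ ex * By := by
      rw [abs_mul]
      exact mul_le_mul hx' hBy (abs_nonneg _) hex
    have t3 : |(xhat - x) * (yhat - y)| ≤ ex * ey := by
      rw [abs_mul]
      exact mul_le_mul hx' hy' (abs_nonneg _) hex
    rw [hsplit]
    linarith
  -- add the fresh rounding term for `xhat * yhat`
  have hstep := hround (xhat * yhat)
  have hdecomp := abs_add (roundR (xhat * yhat) - xhat * yhat)
    (xhat * yhat - x * y)
  have hsplit2 : roundR (xhat * yhat) - x * y
      = (roundR (xhat * yhat) - xhat * yhat) + (xhat * yhat - x * y) := by ring
  rw [hsplit2]
  linarith
```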