Exact ReLU interpolation and FP32 error lifting #
This file has two jobs.
First, it proves an exact interpolation lemma on a uniform grid: given arbitrary target values at
the grid points, we construct a two-layer ReLU MLP that matches those values exactly under the
ideal ℝ semantics. This is the familiar hinge-basis construction behind one-dimensional
piecewise-linear approximation; see Pinkus for the approximation-theory background and Yarotsky
for quantitative ReLU-network rates.
Second, it states the FP32 lifting layer: once a real construction is fixed, explicit
rounding-error bounds turn it into an FP32 approximation theorem. This keeps the architecture
clean: real approximation, parameter rounding, and executable IEEE semantics are proved in
separate modules, matching the numerical-analysis separation used in Goldberg and Higham.
Exact interpolation on a uniform grid by a 2-layer ReLU MLP (over ℝ semantics).
Given arbitrary target values y₀, …, y_N at the uniform grid points
grid k = a + k * ((b - a) / N), this constructs a width-N hinge network that matches them
exactly at those points.
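Concretely, the classical hinge-basis construction behind such a lemma can be sketched as follows (the coefficient names s_k, c_k are illustrative, not this file's identifiers):

```latex
% Uniform grid t_k = a + k h with h = (b - a)/N; targets y_0, ..., y_N.
\hat f(x) \;=\; y_0 \;+\; \sum_{k=0}^{N-1} c_k \,\mathrm{ReLU}(x - t_k),
\qquad
c_k = s_k - s_{k-1}, \quad
s_k = \frac{y_{k+1} - y_k}{h}, \quad
s_{-1} = 0.
```

On each cell [t_m, t_{m+1}] exactly the first m + 1 hinges are active, so the local slope telescopes to s_m; evaluating at t_m, the sum telescopes to h·(s₀ + ⋯ + s_{m−1}) = y_m − y₀, which is why the network hits every grid value exactly.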
FP32 (rounding-on-ℝ) error propagation for the hinge construction #
This section does not claim that FP32 is equivalent to hardware float32.
It proves a pointwise bound for evaluating the same hinge network with rounded operations
(TorchLean.Floats.FP32) versus exact real arithmetic on the rounded inputs/weights.
Alias for the “round-after-each-primitive” real-valued FP32 model (TorchLean.Floats.FP32).
Extract scalar from a length-1 tensor (FP32).
Evaluate a 2-layer ReLU MLP on a scalar FP32 input (returns an FP32 scalar).
FP32 hinge layers and a pointwise error bound #
First FP32 hinge layer: hidden unit i computes x - tᵢ before ReLU.
Second FP32 hinge layer: sum hidden activations with coefficients cᵢ and bias b.
Per-neuron FP32 hinge-term error bound.
The bound has two pieces: one half-ulp term for the final multiplication and one subtraction
rounding term propagated through the 1-Lipschitz ReLU and scaled by |cᵢ|.
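Schematically, writing fl(·) for the round-to-nearest FP32 model, u for the unit roundoff, and x̂ for the rounded input, the two pieces have the following shape (a sketch of the structure described above, not the file's exact statement):

```latex
\bigl|\,\mathrm{fl}\bigl(c_i \cdot \mathrm{ReLU}(\mathrm{fl}(\hat x - t_i))\bigr)
      \;-\; c_i \cdot \mathrm{ReLU}(\hat x - t_i)\,\bigr|
\;\le\;
\underbrace{u \,\bigl|c_i \cdot \mathrm{ReLU}(\mathrm{fl}(\hat x - t_i))\bigr|}_{\text{final multiplication}}
\;+\;
\underbrace{|c_i|\,\bigl|\mathrm{fl}(\hat x - t_i) - (\hat x - t_i)\bigr|}_{\text{subtraction, through 1-Lipschitz ReLU}}.
```

The second term uses only |ReLU(u) − ReLU(v)| ≤ |u − v|, which is why no constant beyond |cᵢ| appears.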
The per-hinge error bound used by hinge_term_abs_error.
Named version of hinge_term_abs_error using hingeTermErrorBound.
Summation error propagation (FP32 hinge network) #
Fold state for summing hinge terms in FP32, while tracking:
- a real reference sum (computed from .val), and
- a provable error bound on the difference between them.
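An illustrative shape for such a fold state (the structure name and fields below are hypothetical; it assumes an FP32 type with a real-valued projection .val, as in TorchLean.Floats):

```lean
/-- Illustrative sketch only: names and field layout are hypothetical. -/
structure HingeSumState where
  acc : FP32                   -- rounded accumulator
  ref : ℝ                      -- real reference sum for the same prefix
  err : ℝ                      -- certified absolute-error budget
  inv : |acc.val - ref| ≤ err  -- invariant maintained at every fold step
```

Carrying the proof field inv through the fold is what makes the final bound a one-step consequence of the last state, rather than a separate induction over the summation order.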
One summation step: add a hinge term, and accumulate rounding+term error bounds.
Compute the hinge-term sum state over all Fin n in a fixed order (List.finRange).
FP32 value produced by folding all hinge terms in the fixed List.finRange order.
Real reference sum accumulated alongside hingeSumFp32.
Accumulated certified absolute-error budget for hingeSumFp32.
Fold invariant for FP32 hinge summation.
At every prefix of the fold, the rounded accumulator is within the tracked error budget of the real accumulator. The proof is deliberately order-sensitive because floating-point addition is not associative.
Certified absolute-error bound for the whole FP32 hinge-term sum.
Certified absolute-error bound for the complete FP32 hinge network.
This composes the fold invariant with the final rounded addition of the bias b, giving the main
FP32 rounding term used by the executable approximation theorems.
Real approximation + rounding combination (pointwise) #
Triangle bound combining real approximation error with FP32 rounding error.
The theorem is pointwise: it does not construct the approximating hinge parameters; it simply says
that once a real hinge network is close to f, the rounded FP32 network is close to f up to the
certified rounding budget.
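The combination is the plain triangle inequality, schematically (net_ℝ is the exact-real hinge network, net_FP32 its rounded evaluation, B the certified budget; notation illustrative):

```latex
|\mathrm{net}_{\mathrm{FP32}}(x) - f(x)|
\;\le\;
\underbrace{|\mathrm{net}_{\mathrm{FP32}}(x) - \mathrm{net}_{\mathbb R}(x)|}_{\le\, B \ \text{(certified rounding)}}
\;+\;
\underbrace{|\mathrm{net}_{\mathbb R}(x) - f(x)|}_{\le\, \varepsilon \ \text{(real approximation)}}.
```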
Strict version of hinge_fun_total_abs_error_le for use with < ε approximation statements.
The real reference accumulator is the ordinary finite sum of real hinge terms.
1D ReLU approximation with a pointwise FP32 rounding bound.
This combines:
- the constructive real-valued hinge-network approximation theorem
  (relu_universal_approximation_Icc_hinge), and
- the FP32 hinge-network rounding bound (hinge_fun_total_abs_error_lt).
The output network is evaluated on the FP32 model (NF rounding-on-ℝ); the additional
rounding error is given by hinge_fun_error_bound.
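The shape of the combined statement can be sketched as follows; every identifier here is a placeholder standing in for this file's actual evaluator, budget, and theorem names:

```lean
-- Schematic only: `Net`, `evalFp32`, and `roundingBudget` are hypothetical
-- stand-ins for this file's FP32 evaluator and certified error budget.
variable {Net : Type} (evalFp32 : Net → ℝ → ℝ) (roundingBudget : Net → ℝ)

/-- Shape of the conclusion: a network whose rounded evaluation is within
`ε` plus its certified rounding budget of `f` on `[a, b]`. -/
example (a b : ℝ) (f : ℝ → ℝ) (ε : ℝ) : Prop :=
  ∃ net : Net, ∀ x ∈ Set.Icc a b,
    |evalFp32 net x - f x| < ε + roundingBudget net
```

The real approximation theorem supplies the witness net with error < ε, and the rounding bound supplies the extra roundingBudget net term.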