FP32 MLP Approximation #
This module builds on `NN.Proofs.RuntimeApprox.FP32.Layers` and packages end-to-end error bounds
for small MLP patterns that show up frequently in examples and verification pipelines.
The theorems here are intentionally architecture-shaped rather than fully generic. They are readable bridge lemmas that downstream verification examples can cite: “this whole FP32 MLP is within an explicit real error budget of the corresponding real-spec MLP.”
Compositional FP32 approximation theorem for a 3-layer tanh MLP:
Linear → tanh → Linear → tanh → Linear.
Each parameter/input hypothesis is an `approxT` statement comparing the real-spec tensor with the
FP32 runtime tensor. The conclusion existentially packages the propagated output budget: the budget
is kept existential because its exact expression is produced by the NF backend combinators
(`matVecMulBoundTensor`, `tanhBoundTensor`, `addBoundTensor`) rather than hand-expanded at every
call site.
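
As a rough, hypothetical sketch of the statement's *shape* only (the names `T`, `fwdSpec`, `fwdFP`, and the ε binders below are stand-ins for the library's actual `Tensor` types and forward passes, which are not spelled out here):

```lean
import Mathlib.Data.Real.Basic

/- Hypothetical, self-contained stand-ins: a "tensor" is an abstract type
   `T`, and `approxT x xF ε` reads "the FP32 tensor `xF` is entrywise
   within `ε` of the real-spec tensor `x`". -/
variable {T : Type} (approxT : T → T → ℝ → Prop)

/- Abstract forward passes (real-spec and FP32 runtime), each taking the
   three weight matrices, three biases, and the input of the
   Linear → tanh → Linear → tanh → Linear network. -/
variable (fwdSpec fwdFP : T → T → T → T → T → T → T → T)

/-- Schematic statement only: one `approxT` hypothesis per parameter and
per input, and an existentially packaged output budget `ε`. -/
example (W₁ b₁ W₂ b₂ W₃ b₃ x W₁' b₁' W₂' b₂' W₃' b₃' x' : T)
    (εW₁ εb₁ εW₂ εb₂ εW₃ εb₃ εx : ℝ)
    (hW₁ : approxT W₁ W₁' εW₁) (hb₁ : approxT b₁ b₁' εb₁)
    (hW₂ : approxT W₂ W₂' εW₂) (hb₂ : approxT b₂ b₂' εb₂)
    (hW₃ : approxT W₃ W₃' εW₃) (hb₃ : approxT b₃ b₃' εb₃)
    (hx : approxT x x' εx) :
    ∃ ε : ℝ, approxT (fwdSpec W₁ b₁ W₂ b₂ W₃ b₃ x)
      (fwdFP W₁' b₁' W₂' b₂' W₃' b₃' x') ε := by
  sorry -- in the library, `ε` is built from the bound combinators
```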
2-layer ReLU MLP #
This is the FP32 analogue of the 2-layer ReLU MLP used by CROWN/IBP:
Linear → ReLU → Linear.
Note: the runtime ReLU here is the rounded variant `reluR` used by the `NFBackend` forward
approximation framework (apply `max · 0` in `ℝ`, then round once).
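
A minimal sketch of that recipe, assuming an abstract rounding map `rnd : ℝ → ℝ` in place of the library's single FP32 rounding step (the actual `reluR` may differ in signature):

```lean
import Mathlib.Data.Real.Basic

/- Minimal sketch, assuming an abstract rounding map `rnd` standing in
   for one FP32 rounding step. -/
def reluR (rnd : ℝ → ℝ) (x : ℝ) : ℝ :=
  rnd (max x 0)  -- exact real `max x 0`, then round once
```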
Compositional FP32 approximation theorem for a 2-layer ReLU MLP:
Linear → ReLU → Linear.
This is the network-level bound consumed by the CROWN/IBP integration in
`NN.Proofs.RuntimeApprox.FP32.CROWN`.
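
Schematically, reusing the hypothetical abbreviations from the tanh sketch above:

```lean
import Mathlib.Data.Real.Basic

/- Same hypothetical stand-ins as in the tanh sketch, specialized to
   Linear → ReLU → Linear. -/
variable {T : Type} (approxT : T → T → ℝ → Prop)
variable (fwdSpec fwdFP : T → T → T → T → T → T)  -- W₁ b₁ W₂ b₂ x ↦ output

example (W₁ b₁ W₂ b₂ x W₁' b₁' W₂' b₂' x' : T)
    (εW₁ εb₁ εW₂ εb₂ εx : ℝ)
    (hW₁ : approxT W₁ W₁' εW₁) (hb₁ : approxT b₁ b₁' εb₁)
    (hW₂ : approxT W₂ W₂' εW₂) (hb₂ : approxT b₂ b₂' εb₂)
    (hx : approxT x x' εx) :
    ∃ ε : ℝ, approxT (fwdSpec W₁ b₁ W₂ b₂ x) (fwdFP W₁' b₁' W₂' b₂' x') ε := by
  sorry -- this existential budget is what the CROWN/IBP integration cites
```

Only the arity changes relative to the tanh sketch; the existential packaging of the budget is the same.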