TorchLean API

NN.Proofs.RuntimeApprox.FP32.MLP

FP32 MLP Approximation #

This module builds on NN.Proofs.RuntimeApprox.FP32.Layers and packages end-to-end error bounds for small MLP patterns that show up frequently in examples and verification pipelines.

The theorems here are intentionally architecture-shaped rather than fully generic. They are the readable bridge lemmas that downstream verification examples can cite: “this whole FP32 MLP is within some explicit real error budget of the corresponding real-spec MLP.”

theorem NN.Proofs.RuntimeApprox.FP32.approxT_tanhMlp3_fp32
    {d0 d1 d2 d3 : ℕ}
    {L0S : Spec.LinearSpec d0 d1} {L1S : Spec.LinearSpec d1 d2}
    {L2S : Spec.LinearSpec d2 d3}
    {L0R : Spec.LinearSpec R d0 d1} {L1R : Spec.LinearSpec R d1 d2}
    {L2R : Spec.LinearSpec R d2 d3}
    {xS : Spec.SpecTensor (Spec.Shape.dim d0 Spec.Shape.scalar)}
    {xR : Spec.Tensor R (Spec.Shape.dim d0 Spec.Shape.scalar)}
    {e0W e0b e1W e1b e2W e2b ex : ℝ}
    (h0W : Proofs.RuntimeApprox.approxT toSpec L0S.weights L0R.weights e0W)
    (h0b : Proofs.RuntimeApprox.approxT toSpec L0S.bias L0R.bias e0b)
    (h1W : Proofs.RuntimeApprox.approxT toSpec L1S.weights L1R.weights e1W)
    (h1b : Proofs.RuntimeApprox.approxT toSpec L1S.bias L1R.bias e1b)
    (h2W : Proofs.RuntimeApprox.approxT toSpec L2S.weights L2R.weights e2W)
    (h2b : Proofs.RuntimeApprox.approxT toSpec L2S.bias L2R.bias e2b)
    (hx : Proofs.RuntimeApprox.approxT toSpec xS xR ex) :
    ∃ (eps : ℝ), Proofs.RuntimeApprox.approxT toSpec
      (have z0 := Spec.linearSpec L0S xS
       have a0 := Activation.tanhSpec z0
       have z1 := Spec.linearSpec L1S a0
       have a1 := Activation.tanhSpec z1
       Spec.linearSpec L2S a1)
      (have z0 := Spec.linearSpec L0R xR
       have a0 := Activation.tanhSpec z0
       have z1 := Spec.linearSpec L1R a0
       have a1 := Activation.tanhSpec z1
       Spec.linearSpec L2R a1)
      eps

Compositional FP32 approximation theorem for a 3-layer tanh MLP:

Linear → tanh → Linear → tanh → Linear.

Each parameter/input hypothesis is an approxT statement comparing the real-spec tensor with the FP32 runtime tensor. The conclusion existentially packages the propagated output budget. We keep the budget existential because the exact expression is intentionally produced by the NF backend combinators (matVecMulBoundTensor, tanhBoundTensor, addBoundTensor) rather than hand-expanded at every call site.
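For orientation, a downstream proof would typically eliminate the existential and keep the packaged fact. A minimal tactic-mode sketch, assuming hypotheses h0W … hx of the shapes above are already in context (the surrounding goal is illustrative, not taken from this module):

-- Sketch: unpack the output budget produced by the NF backend combinators.
obtain ⟨eps, hnet⟩ :=
  NN.Proofs.RuntimeApprox.FP32.approxT_tanhMlp3_fp32
    h0W h0b h1W h1b h2W h2b hx
-- `hnet` is the end-to-end approxT fact relating the two forward passes;
-- it can be chained with further combinators or weakened to a concrete
-- numeric budget at the call site.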

2-layer ReLU MLP #

This is the FP32 analogue of the 2-layer ReLU MLP used by CROWN/IBP: Linear → ReLU → Linear.

Note: the runtime ReLU here is the rounded variant reluR used by the NFBackend forward approximation framework: it computes max · 0 exactly in ℝ and then rounds once.
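To make the note concrete, here is a minimal sketch of the shape being described, not the module's actual definition. `roundF32` is a hypothetical stand-in for the framework's single FP32 rounding step, taken as a parameter so the sketch stays self-contained:

import Mathlib.Data.Real.Basic

/-- Sketch only (not the module's `reluR`): apply `max · 0` exactly in ℝ,
then round once. `roundF32` is a hypothetical rounding operator. -/
noncomputable def reluRSketch (roundF32 : ℝ → ℝ) (x : ℝ) : ℝ :=
  roundF32 (max x 0)

Because rounding happens after the exact real max, the activation contributes a single rounding error to the propagated budget rather than a composition of two.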

Compositional FP32 approximation theorem for a 2-layer ReLU MLP:

Linear → ReLU → Linear.

This is the network-level bound consumed by the CROWN/IBP integration in NN.Proofs.RuntimeApprox.FP32.CROWN.
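The 2-layer statement itself is not reproduced above. By analogy with approxT_tanhMlp3_fp32, its conclusion should have roughly the following shape; everything here is an illustrative sketch (the theorem name, the reluSpec spelling on each side, and the implicit binders are placeholders, not the module's actual declaration):

-- Illustrative shape only; see the module for the real declaration.
-- theorem approxT_reluMlp2_fp32 … :
--     ∃ (eps : ℝ), Proofs.RuntimeApprox.approxT toSpec
--       (Spec.linearSpec L1S (Activation.reluSpec (Spec.linearSpec L0S xS)))
--       (Spec.linearSpec L1R (Activation.reluSpec (Spec.linearSpec L0R xR)))
--       eps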