Soundness of affine relaxations (CROWN / LiRPA) #
This file proves basic inequalities and compositional lemmas used by affine-relaxation bound propagation (LiRPA) methods, in particular the CROWN/DeepPoly family.
The main result in this file is an end-to-end soundness statement for a small MLP, assembled from:
- soundness of affine images of scalar intervals,
- soundness of the ReLU triangular upper relaxation,
- soundness of affine composition through linear layers,
- an IBP-style fallback used by bound_affine (so the end-to-end theorem is sound but not tight).
For certificate-checking theorems over the graph dialect (the form used by TorchLean verification examples), see:
- NN.MLTheory.CROWN.Proofs.GraphCrownCertSoundness
- NN.MLTheory.CROWN.Proofs.GraphAlphaCrownTransferSoundness
References #
- CROWN: Zhang et al., Efficient Neural Network Robustness Certification with General Activation Functions, NeurIPS 2018. (arXiv:1811.00866)
- β-CROWN / α,β-CROWN: Wang et al., Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification, NeurIPS 2021. (arXiv:2103.06624)
- auto_LiRPA: https://github.com/Verified-Intelligence/auto_LiRPA
- alpha-beta-CROWN: https://github.com/Verified-Intelligence/alpha-beta-CROWN
Lemma: If x ∈ [lo, hi], then for any affine function f(x) = a·x + b, f(x) ∈ [min(a·lo + b, a·hi + b), max(a·lo + b, a·hi + b)].
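As a numeric sketch of this lemma (illustration only, not the Lean statement): since an affine map is monotone, its image of an interval is exactly the interval spanned by the two endpoint values.

```python
def affine_interval(a: float, b: float, lo: float, hi: float) -> tuple[float, float]:
    """Image of [lo, hi] under f(x) = a*x + b.

    f is monotone (increasing if a >= 0, decreasing otherwise), so the
    image is exactly the interval spanned by the two endpoint values.
    """
    c1, c2 = a * lo + b, a * hi + b
    return (min(c1, c2), max(c1, c2))

# f(x) = -2x + 1 on [0, 3]: endpoints map to 1 and -5, so the image is [-5, 1].
print(affine_interval(-2.0, 1.0, 0.0, 3.0))  # (-5.0, 1.0)
```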
Theorem: ReLU upper affine bound is sound. For ReLU(z) where z ∈ [l, u], the triangular relaxation provides an upper bound.
The CROWN relaxation uses:
- If l ≥ 0: slope = 1, bias = 0 (ReLU is identity)
- If u ≤ 0: slope = 0, bias = 0 (ReLU is zero)
- If l < 0 < u: slope = u/(u-l), bias = -l*u/(u-l) (linear upper envelope)
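The three cases above can be sketched numerically as follows (an illustration of the relaxation, not the Lean proof). The soundness claim, slope·z + bias ≥ ReLU(z) for all z ∈ [l, u], follows in the crossing case because ReLU is convex, so the chord through (l, 0) and (u, u) lies above it.

```python
def relu_upper(l: float, u: float) -> tuple[float, float]:
    """(slope, bias) of the CROWN upper relaxation of ReLU on [l, u]."""
    if l >= 0.0:
        return (1.0, 0.0)   # ReLU is the identity on [l, u]
    if u <= 0.0:
        return (0.0, 0.0)   # ReLU is zero on [l, u]
    s = u / (u - l)         # chord through (l, 0) and (u, u)
    return (s, -l * s)      # bias = -l*u/(u-l)

# Soundness check by sampling: slope*z + bias >= max(z, 0) on [l, u].
l, u = -1.0, 2.0
s, b = relu_upper(l, u)
for i in range(101):
    z = l + (u - l) * i / 100
    assert s * z + b >= max(z, 0.0) - 1e-12
```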
Main Theorem: CROWN affine bounds are sound for 2-layer MLPs. If x ∈ xB, then forward(net, x) ∈ bound_affine(net, xB).
This is the key soundness theorem for CROWN: it shows that the affine relaxation
computed by bound_affine is indeed an overapproximation of the true network output.
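A minimal numeric analogue of this end-to-end statement, using the IBP-style fallback mentioned above (the names `ibp_affine` and `forward` are illustrative, not the file's Lean definitions): propagate interval bounds through linear layer, ReLU, linear layer, then check that random inputs from the box stay inside the output bounds.

```python
import random

def ibp_affine(W, b, lo, hi):
    """Interval bounds of W @ x + b for x in the box [lo, hi] (componentwise)."""
    out_lo, out_hi = [], []
    for row, bj in zip(W, b):
        # A positive weight attains its extreme at the matching endpoint;
        # a negative weight at the opposite one.
        out_lo.append(bj + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row)))
        out_hi.append(bj + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row)))
    return out_lo, out_hi

def forward(W1, b1, W2, b2, x):
    """Exact forward pass of the 2-layer MLP: W2 @ ReLU(W1 @ x + b1) + b2."""
    h = [max(sum(w * xi for w, xi in zip(row, x)) + bj, 0.0) for row, bj in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + bj for row, bj in zip(W2, b2)]

# Tiny 2-layer MLP; check forward(x) stays inside the propagated bounds.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.1, -0.2]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = [-1.0, -1.0], [1.0, 1.0]
l1, u1 = ibp_affine(W1, b1, lo, hi)
l1, u1 = [max(v, 0.0) for v in l1], [max(v, 0.0) for v in u1]  # ReLU is monotone
l2, u2 = ibp_affine(W2, b2, l1, u1)
random.seed(0)
for _ in range(200):
    x = [random.uniform(a, c) for a, c in zip(lo, hi)]
    y = forward(W1, b1, W2, b2, x)
    assert all(a - 1e-9 <= v <= c + 1e-9 for v, a, c in zip(y, l2, u2))
```

This checks soundness, not tightness: the theorem guarantees containment, while the interval fallback can be much looser than the full CROWN backward substitution.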