TorchLean API

NN.MLTheory.CROWN.Proofs.SoundnessProofs

Soundness of affine relaxations (CROWN / LiRPA) #

This file proves basic inequalities and compositional lemmas used by affine-relaxation bound propagation (LiRPA) methods, in particular the CROWN/DeepPoly family.

The main result in this file is an end-to-end soundness statement for a small MLP, assembled from the component lemmas proved below.

For certificate-checking theorems over the graph dialect (the form used by the TorchLean verification examples), see the corresponding graph-dialect proof modules.


theorem NN.MLTheory.CROWN.Soundness.affine_scalar_interval_sound (a b x lo hi : ℝ) (hx : lo ≤ x ∧ x ≤ hi) :
min (a * lo + b) (a * hi + b) ≤ a * x + b ∧ a * x + b ≤ max (a * lo + b) (a * hi + b)

Lemma: If x ∈ [lo, hi], then for any affine function f(x) = a*x + b, f(x) ∈ [min(a*lo + b, a*hi + b), max(a*lo + b, a*hi + b)].
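A numeric illustration of the lemma (this is a Python sketch of the arithmetic, not the Lean proof; the function name `affine_interval` is hypothetical):

```python
def affine_interval(a, b, lo, hi):
    """Sound (min, max) bounds for f(x) = a*x + b with x in [lo, hi]."""
    return min(a * lo + b, a * hi + b), max(a * lo + b, a * hi + b)

# Example with a negative slope, where the endpoints swap roles:
# f(lo) = -2*(-1) + 1 = 3 becomes the upper bound, f(hi) = -5 the lower.
a, b, lo, hi = -2.0, 1.0, -1.0, 3.0
fmin, fmax = affine_interval(a, b, lo, hi)
for x in [lo, 0.0, 1.5, hi]:
    assert fmin <= a * x + b <= fmax
print(fmin, fmax)  # -5.0 3.0
```

Taking both min and max of the two endpoint values is what makes the bound sound regardless of the sign of `a`.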

theorem NN.MLTheory.CROWN.Soundness.relu_affine_upper_bound_sound (z l u : ℝ) (hz : l ≤ z ∧ z ≤ u) :
have relu_z := max 0 z; have slope := if l ≥ 0 then 1 else if u ≤ 0 then 0 else u / (u - l); have bias := if l ≥ 0 then 0 else if u ≤ 0 then 0 else -l * u / (u - l); relu_z ≤ slope * z + bias

Theorem: ReLU upper affine bound is sound. For ReLU(z) where z ∈ [l, u], the triangular relaxation provides an upper bound.

The CROWN relaxation uses:

  • If l ≥ 0: slope = 1, bias = 0 (ReLU is identity)
  • If u ≤ 0: slope = 0, bias = 0 (ReLU is zero)
  • If l < 0 < u: slope = u/(u-l), bias = -l*u/(u-l) (linear upper envelope)
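The three cases above can be sketched numerically (a Python illustration; `relu_upper` is a hypothetical helper mirroring the `slope`/`bias` definitions in the theorem):

```python
def relu_upper(l, u):
    """CROWN upper-bound (slope, bias) for ReLU(z) with z in [l, u]."""
    if l >= 0:
        return 1.0, 0.0        # ReLU is the identity on [l, u]
    if u <= 0:
        return 0.0, 0.0        # ReLU is zero on [l, u]
    s = u / (u - l)            # chord through (l, 0) and (u, u)
    return s, -l * s           # bias = -l*u/(u-l)

# Crossing case: the chord upper-bounds ReLU everywhere on [l, u].
l, u = -1.0, 3.0
slope, bias = relu_upper(l, u)
for z in [l, -0.5, 0.0, 2.0, u]:
    assert max(0.0, z) <= slope * z + bias + 1e-12
print(slope, bias)  # 0.75 0.75
```

In the crossing case the bound is tight at both endpoints: at z = l the chord gives 0 = ReLU(l), and at z = u it gives u = ReLU(u).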
theorem NN.MLTheory.CROWN.Soundness.crown_affine_mlp2_sound {inDim hidDim outDim : ℕ} (net : MLP2 inDim hidDim outDim) (xB : Box (Spec.Shape.dim inDim Spec.Shape.scalar)) (x : Spec.Tensor (Spec.Shape.dim inDim Spec.Shape.scalar)) (hx : xB.contains x) :
(boundAffine net xB).contains (forward net x)

Main Theorem: CROWN affine bounds are sound for 2-layer MLPs. If x ∈ xB, then forward(net, x) ∈ bound_affine(net, xB).

This is the key soundness theorem for CROWN: it shows that the affine relaxation computed by bound_affine is indeed an overapproximation of the true network output.
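The containment property can be illustrated with a simplified interval propagation (IBP) through a 2-layer MLP. This Python sketch uses plain interval bounds rather than the tighter affine relaxation `bound_affine` computes, but the soundness statement being checked is the same: every true output lies inside the propagated box. All names here are illustrative, not from the Lean development.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def interval_affine(W, b, lo, hi):
    """Sound interval bounds for W @ x + b with x in [lo, hi] (elementwise)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)  # split by sign of weights
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

# Propagate the input box through layer 1, ReLU (monotone), layer 2.
lo, hi = -np.ones(3), np.ones(3)
l1, u1 = interval_affine(W1, b1, lo, hi)
l2, u2 = interval_affine(W2, b2, np.maximum(l1, 0), np.maximum(u1, 0))

# Empirically check containment: forward(x) stays inside [l2, u2].
for _ in range(100):
    x = rng.uniform(lo, hi)
    y = forward(x)
    assert np.all(l2 - 1e-9 <= y) and np.all(y <= u2 + 1e-9)
print("all sampled outputs inside propagated box")
```

Sampling can only falsify soundness, never establish it; the point of `crown_affine_mlp2_sound` is that the containment holds for every `x` in `xB`, by proof rather than by testing.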