TorchLean API

NN.MLTheory.Proofs.Verification.Robustness.MlpRobustness

MLP robustness: basic analytic lemmas #

This file collects small real-analysis facts used by robustness statements for spec-level MLPs.

Contents:

- Basic Lipschitz lemmas for ReLU and for composing 1-Lipschitz functions
- Tensor helpers: non-zero tensors and linear-layer operator norms
- End-to-end MLP Lipschitz bounds and an adversarial robustness certificate

References #

Basic Lipschitz lemmas #

Fundamental property: ReLU satisfies |ReLU(x) - ReLU(y)| ≤ |x - y|
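This fact can be checked directly against Mathlib; a minimal sketch, assuming Mathlib's `abs_max_sub_max_le_abs : |max a c - max b c| ≤ |a - b|`:

```lean
import Mathlib

-- Sketch: ReLU (written as `max 0 ·`) is 1-Lipschitz.
example (x y : ℝ) : |max 0 x - max 0 y| ≤ |x - y| := by
  simpa [max_comm] using abs_max_sub_max_le_abs x y 0
```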

theorem NN.MLTheory.Proofs.comp_lipschitz (f g : ℝ → ℝ) (h_f : ∀ (x y : ℝ), |f x - f y| ≤ |x - y|) (h_g : ∀ (x y : ℝ), |g x - g y| ≤ |x - y|) (x y : ℝ) :
|(g ∘ f) x - (g ∘ f) y| ≤ |x - y|

Composition property for Lipschitz functions
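The composition bound is a one-step transitivity argument; a hedged sketch of the same statement in bare Mathlib terms:

```lean
import Mathlib

-- Sketch: a 1-Lipschitz bound for `g ∘ f` from the bounds for `f` and `g`.
example (f g : ℝ → ℝ) (h_f : ∀ x y, |f x - f y| ≤ |x - y|)
    (h_g : ∀ x y, |g x - g y| ≤ |x - y|) (x y : ℝ) :
    |(g ∘ f) x - (g ∘ f) y| ≤ |x - y| :=
  le_trans (h_g (f x) (f y)) (h_f x y)
```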

theorem NN.MLTheory.Proofs.relu_comp_preserves (f : ℝ → ℝ) (h : ∀ (x y : ℝ), |f x - f y| ≤ |x - y|) (x y : ℝ) :
|max 0 (f x) - max 0 (f y)| ≤ |x - y|

Main result: ReLU composition preserves unit Lipschitz property
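These two lemmas already suffice for a stacked two-layer ReLU composition; a sketch, assuming the signatures rendered on this page:

```lean
import Mathlib

-- Sketch: chaining `relu_comp_preserves` through two ReLU applications.
-- Assumes the `NN.MLTheory.Proofs` lemmas as stated above.
open NN.MLTheory.Proofs in
example (f : ℝ → ℝ) (h : ∀ x y, |f x - f y| ≤ |x - y|) (x y : ℝ) :
    |max 0 (f (max 0 (f x))) - max 0 (f (max 0 (f y)))| ≤ |x - y| :=
  relu_comp_preserves (fun z => f (max 0 (f z)))
    (fun a b => le_trans (h _ _) (relu_comp_preserves f h a b)) x y
```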

Tensor helpers #

Definition: A non-zero tensor has at least one non-zero entry

Instances For

A non-zero tensor has positive L2 norm.
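The general-purpose analogue of this fact is available in Mathlib; a minimal sketch:

```lean
import Mathlib

-- Mathlib analogue: a nonzero element of a normed group has positive norm.
example {E : Type*} [NormedAddCommGroup E] (v : E) (hv : v ≠ 0) : 0 < ‖v‖ :=
  norm_pos_iff.mpr hv
```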

noncomputable def NN.MLTheory.Proofs.linearLayerOperatorNorm {inDim outDim : ℕ} (layer : Spec.LinearSpec inDim outDim) : ℝ

Get the operator norm of the weight matrix in a linear layer.

Instances For
theorem NN.MLTheory.Proofs.linear_layer_lipschitz_bound {inDim outDim : ℕ} (layer : Spec.LinearSpec inDim outDim) (h_weights_nonzero : layer.weights ≠ Spec.fill 0 (Spec.Shape.dim outDim (Spec.Shape.dim inDim Spec.Shape.scalar))) :

Theorem: Linear layers are Lipschitz continuous with constant matrix_op_norm.
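The analytic content of this bound is Mathlib's operator-norm inequality for continuous linear maps; a sketch of the analogous general fact (not the `Spec`-level statement itself):

```lean
import Mathlib

-- Mathlib analogue: a continuous linear map is Lipschitz with constant
-- its operator norm (`ContinuousLinearMap.le_opNorm`).
example {E F : Type*} [NormedAddCommGroup E] [NormedSpace ℝ E]
    [NormedAddCommGroup F] [NormedSpace ℝ F] (W : E →L[ℝ] F) (x y : E) :
    ‖W x - W y‖ ≤ ‖W‖ * ‖x - y‖ := by
  simpa [map_sub] using W.le_opNorm (x - y)
```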

theorem NN.MLTheory.Proofs.mlp_lipschitz_complete_analysis {inDim hidDim outDim : ℕ} (l1 : Spec.LinearSpec inDim hidDim) (l2 : Spec.LinearSpec hidDim outDim) (h1_nonzero : l1.weights ≠ Spec.fill 0 (Spec.Shape.dim hidDim (Spec.Shape.dim inDim Spec.Shape.scalar))) (h2_nonzero : l2.weights ≠ Spec.fill 0 (Spec.Shape.dim outDim (Spec.Shape.dim hidDim Spec.Shape.scalar))) :
∃ lipschitz_constant > 0, ∀ (x y : Spec.Tensor (Spec.Shape.dim inDim Spec.Shape.scalar)), Proofs.tensorL2Dist (Examples.mlpForward l1 l2 x) (Examples.mlpForward l1 l2 y) ≤ lipschitz_constant * Proofs.tensorL2Dist x y
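The existence proof reduces to composing the per-layer bounds; the generic composition step (Lipschitz constants multiply) can be sketched in metric-space terms:

```lean
import Mathlib

-- Sketch: Lipschitz constants multiply under composition.
example {α : Type*} [PseudoMetricSpace α] (f g : α → α) (L₁ L₂ : ℝ)
    (hf : ∀ x y, dist (f x) (f y) ≤ L₁ * dist x y)
    (hg : ∀ x y, dist (g x) (g y) ≤ L₂ * dist x y)
    (hL₂ : 0 ≤ L₂) (x y : α) :
    dist (g (f x)) (g (f y)) ≤ L₂ * (L₁ * dist x y) :=
  le_trans (hg (f x) (f y)) (mul_le_mul_of_nonneg_left (hf x y) hL₂)
```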
theorem NN.MLTheory.Proofs.mlp_is_lipschitz_continuous_l2 {inDim hidDim outDim : ℕ} (l1 : Spec.LinearSpec inDim hidDim) (l2 : Spec.LinearSpec hidDim outDim) (h1_nonzero : l1.weights ≠ Spec.fill 0 (Spec.Shape.dim hidDim (Spec.Shape.dim inDim Spec.Shape.scalar))) (h2_nonzero : l2.weights ≠ Spec.fill 0 (Spec.Shape.dim outDim (Spec.Shape.dim hidDim Spec.Shape.scalar))) :

Repackage mlp_lipschitz_complete_analysis as a robustness-spec is_lipschitz_continuous fact.

This is the form expected by the certified-robustness lemmas in NN.MLTheory.Proofs.Verification.Robustness.LipschitzCertified.

theorem NN.MLTheory.Proofs.adversarial_robustness_certificate {inDim hidDim outDim : ℕ} (l1 : Spec.LinearSpec inDim hidDim) (l2 : Spec.LinearSpec hidDim outDim) (h1_nonzero : l1.weights ≠ Spec.fill 0 (Spec.Shape.dim hidDim (Spec.Shape.dim inDim Spec.Shape.scalar))) (h2_nonzero : l2.weights ≠ Spec.fill 0 (Spec.Shape.dim outDim (Spec.Shape.dim hidDim Spec.Shape.scalar))) (x₀ : Spec.Tensor (Spec.Shape.dim inDim Spec.Shape.scalar)) (perturbation_radius : ℝ) :
perturbation_radius > 0 → ∃ robustness_guarantee > 0, ∀ (x : Spec.Tensor (Spec.Shape.dim inDim Spec.Shape.scalar)), Proofs.tensorL2Dist x₀ x ≤ perturbation_radius → Proofs.tensorL2Dist (Examples.mlpForward l1 l2 x₀) (Examples.mlpForward l1 l2 x) ≤ robustness_guarantee * perturbation_radius
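A certificate of this shape follows from any global Lipschitz bound by monotonicity of multiplication; a sketch of the generic step:

```lean
import Mathlib

-- Sketch: a Lipschitz bound converts an input radius into an output radius.
example {α β : Type*} [PseudoMetricSpace α] [PseudoMetricSpace β]
    (f : α → β) (L ε : ℝ) (hL : 0 ≤ L)
    (hf : ∀ x y, dist (f x) (f y) ≤ L * dist x y)
    (x₀ x : α) (hx : dist x₀ x ≤ ε) :
    dist (f x₀) (f x) ≤ L * ε :=
  le_trans (hf x₀ x) (mul_le_mul_of_nonneg_left hx hL)
```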

Robustness statements for MLPs (ML theory layer).

This file collects robustness definitions and theorems specialized to multi-layer perceptrons, used as a bridge between learning-theory specifications and concrete model classes.