TorchLean API

NN.MLTheory.CROWN.Extras.AlphaConfig

α-CROWN configuration #

This file defines data structures for α-optimized CROWN bounds (as in α-CROWN / auto_LiRPA): per-neuron relaxation parameters that can be tuned externally to tighten affine relaxations.

This module is optional for the core bound-propagation development; it is grouped under NN/MLTheory/CROWN/Extras/ to keep the main entrypoints smaller.

The design mirrors the practical α-CROWN workflow: Lean stores the relaxation parameters and the resulting transfer functions, while a separate optimizer may tune those parameters before a certificate is replayed.

Per-neuron optimizable alpha parameters.

  • lower : α

    Lower-envelope α parameter. For ReLU this is the candidate lower slope.

  • upper : α

    Upper-envelope α parameter, used by smooth relaxations that interpolate upper candidates.

Instances For
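
To make the data concrete, here is a minimal Float-specialized sketch in the spirit of NeuronAlpha. The field names mirror the documentation, but the Float specialization, the name NeuronAlphaF, and the default values are illustrative assumptions rather than the library's actual definitions.

-- Illustrative sketch only: a Float-specialized analogue of NeuronAlpha.
-- The real structure is parameterized by a scalar type α with [Context α].
structure NeuronAlphaF where
  lower : Float  -- candidate lower slope (for ReLU)
  upper : Float  -- upper-envelope parameter (for smooth relaxations)
  deriving Repr

-- A conservative starting point: lower slope 0 is the envelope y ≥ 0;
-- the 0.5 for upper is a placeholder, not the library's default.
def NeuronAlphaF.init : NeuronAlphaF := { lower := 0.0, upper := 0.5 }

#eval NeuronAlphaF.init  -- { lower := 0.000000, upper := 0.500000 }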

Layer-wise alpha configuration.

Instances For

Full network alpha configuration.

  • numLayers : ℕ

    Number of activation layers (not counting input/output).

  • layers : Array (LayerAlpha α)

    Per-layer alpha values.

Instances For
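
For intuition about the layer and network levels, here is a matching Float sketch; the neurons field name and the widths-based constructor are assumptions, while numLayers and layers follow the fields listed above.

-- Illustrative Float sketch of the layer/network layout described above.
structure NeuronAlphaF where
  lower : Float
  upper : Float
  deriving Repr

structure LayerAlphaF where
  neurons : Array NeuronAlphaF   -- hypothetical field name; one entry per neuron
  deriving Repr

structure NetworkAlphaF where
  numLayers : Nat                -- activation layers only
  layers    : Array LayerAlphaF  -- per-layer alpha values
  deriving Repr

-- Build a network with conservative (zero) lower slopes for the given layer widths.
def zeroNet (widths : Array Nat) : NetworkAlphaF :=
  { numLayers := widths.size,
    layers := widths.map (fun w =>
      { neurons := Array.mkArray w ({ lower := 0.0, upper := 0.5 } : NeuronAlphaF) }) }

#eval (zeroNet #[3, 2]).numLayers  -- 2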

Neuron status based on pre-activation bounds.

Instances For

Determine neuron status from bounds.

Instances For
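
For intuition, a minimal Float sketch of the classification: with pre-activation bounds [l, u], a ReLU neuron is inactive when u ≤ 0, active when l ≥ 0, and crossing otherwise. The names NeuronStatusF and statusFromBoundsF are illustrative, not the library's.

-- Illustrative sketch of ReLU neuron status from pre-activation bounds [l, u].
inductive NeuronStatusF where
  | inactive  -- u ≤ 0: the output is identically 0, no relaxation needed
  | active    -- l ≥ 0: the output equals the input, no relaxation needed
  | crossing  -- l < 0 < u: the relaxation (and hence alpha) actually matters
  deriving Repr, DecidableEq

def statusFromBoundsF (l u : Float) : NeuronStatusF :=
  if u ≤ 0.0 then .inactive
  else if l ≥ 0.0 then .active
  else .crossing

#eval statusFromBoundsF (-1.0) 2.0  -- NeuronStatusF.crossing
#eval statusFromBoundsF 0.5 2.0     -- NeuronStatusF.active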

Initialize alpha for a crossing ReLU neuron.

The default lower slope is 0, i.e. the conservative lower envelope y ≥ 0.

Instances For

Initialize alpha for an active neuron (no relaxation needed).

Instances For

Initialize alpha for an inactive neuron.

Instances For

Initialize alpha based on pre-activation bounds [l, u].

Instances For

Initialize layer alpha from a box of pre-activation bounds.

Instances For

Project alpha to the valid range [0, 1] for ReLU.

Instances For
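
Tying the initializers together, a small Float sketch: choose a default alpha from the bounds, then clamp it into [0, 1]. The zero lower slope for crossing neurons matches the default stated above; the other values (including the 0.5 upper parameter) are assumptions.

-- Illustrative sketch: initialize a per-neuron alpha from bounds, then project it.
structure NeuronAlphaF where
  lower : Float
  upper : Float
  deriving Repr

def initFromBoundsF (l u : Float) : NeuronAlphaF :=
  if u ≤ 0.0 then { lower := 0.0, upper := 0.0 }       -- inactive: y = 0
  else if l ≥ 0.0 then { lower := 1.0, upper := 1.0 }  -- active: y = x
  else { lower := 0.0, upper := 0.5 }                  -- crossing: conservative default

-- Project an alpha value into the valid ReLU range [0, 1].
def clamp01 (a : Float) : Float :=
  if a < 0.0 then 0.0 else if a > 1.0 then 1.0 else a

def projectF (na : NeuronAlphaF) : NeuronAlphaF :=
  { lower := clamp01 na.lower, upper := clamp01 na.upper }

#eval projectF { lower := 1.7, upper := -0.2 }  -- { lower := 1.000000, upper := 0.000000 }
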
def NN.MLTheory.CROWN.alpha.reluLowerWithAlpha {α : Type} [Context α] (_l _u alphaLo : α) :
    α × α

Compute a ReLU lower-envelope candidate using an optimized alpha.

For a crossing neuron with bounds [l, u], the candidate is y = α * x. A sound replay theorem should pair this executable rule with the usual α-range condition for the chosen scalar backend.

Instances For
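
As a worked instance of the candidate y = α * x, a Float sketch mirroring the documented signature; the Float specialization and the name reluLowerWithAlphaF are assumptions.

-- Illustrative Float sketch of the α-parameterized ReLU lower candidate,
-- returned as (slope, bias) = (α, 0), i.e. the line y = α * x.
def reluLowerWithAlphaF (_l _u alphaLo : Float) : Float × Float :=
  (alphaLo, 0.0)

-- α = 0 recovers the conservative envelope y ≥ 0; α = 1 gives y ≥ x.
#eval reluLowerWithAlphaF (-1.0) 2.0 0.25  -- (0.250000, 0.000000)
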
def NN.MLTheory.CROWN.alpha.reluUpperFixed {α : Type} [Context α] (l u : α) :
    α × α

Compute the fixed triangular ReLU upper bound.

The line is y = (u / (u - l)) * x - (u * l) / (u - l) for a crossing interval [l, u].

Instances For
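
A Float sketch of the same line, showing that on a crossing interval it passes through (l, 0) and (u, u); the name reluUpperFixedF is illustrative.

-- Illustrative Float sketch of the triangular ReLU upper bound on [l, u]:
--   y ≤ (u / (u - l)) * x - (u * l) / (u - l)
def reluUpperFixedF (l u : Float) : Float × Float :=
  let slope := u / (u - l)
  let bias  := -(u * l) / (u - l)
  (slope, bias)

-- On [-1, 2]: slope = 2/3 and bias = 2/3, so the line hits (-1, 0) and (2, 2).
#eval reluUpperFixedF (-1.0) 2.0  -- (0.666667, 0.666667)
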
def NN.MLTheory.CROWN.alpha.reluWithAlpha {α : Type} [Context α] (l u : α) (alphas : NeuronAlpha α) :
    α × α × α × α

Apply the alpha-parameterized ReLU relaxation to get affine bounds. Returns (slope_lo, bias_lo, slope_hi, bias_hi).

Instances For
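
A Float sketch of the combined relaxation for a crossing neuron; for brevity it takes the lower alpha as a plain scalar instead of a NeuronAlpha record, which is a simplification of the documented signature.

-- Illustrative sketch returning (slope_lo, bias_lo, slope_hi, bias_hi):
-- lower envelope y ≥ α * x, upper envelope the fixed triangular line.
def reluWithAlphaF (l u alphaLo : Float) : Float × Float × Float × Float :=
  let slopeHi := u / (u - l)
  let biasHi  := -(u * l) / (u - l)
  (alphaLo, 0.0, slopeHi, biasHi)

#eval reluWithAlphaF (-1.0) 2.0 0.5  -- (0.500000, 0.000000, 0.666667, 0.666667)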

Gradient of the output bound w.r.t. alpha (for optimization). For ReLU, ∂bound/∂α = x for the lower bound (where y = αx).

  • grad_lower : α

    Gradient for the lower alpha.

  • grad_upper : α

    Gradient for the upper alpha.

Instances For

Layer-wise alpha gradients.

Instances For
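
A Float sketch of the gradient record for a ReLU lower candidate: since the candidate is y = α * x, its derivative in α at a point x is x. Which point is used (l, u, or a propagated value) depends on the enclosing objective, so the x argument below is an assumption for illustration.

-- Illustrative gradient sketch for the ReLU lower envelope y = α * x.
structure NeuronAlphaGradF where
  grad_lower : Float
  grad_upper : Float
  deriving Repr

def reluLowerGradF (x : Float) : NeuronAlphaGradF :=
  { grad_lower := x, grad_upper := 0.0 }  -- ReLU's upper envelope is fixed

#eval reluLowerGradF 1.5  -- { grad_lower := 1.500000, grad_upper := 0.000000 }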

Local sigmoid approximation: σ(x) = 1 / (1 + exp(-x)), using a polynomial approximation for the Context typeclass.

Instances For

def NN.MLTheory.CROWN.alpha.tanhApprox {α : Type} [Context α] (x : α) :
    α

Local tanh approximation: tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).

Instances For
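
Over Float the two activations can be written directly with Float.exp, which is a convenient way to check the relaxations numerically; the library's Context-generic versions instead use the polynomial approximation noted above.

-- Float versions of the two smooth activations, for numeric sanity checks.
def sigmoidF (x : Float) : Float := 1.0 / (1.0 + Float.exp (-x))

def tanhF (x : Float) : Float :=
  (Float.exp x - Float.exp (-x)) / (Float.exp x + Float.exp (-x))

#eval sigmoidF 0.0  -- 0.5
#eval tanhF 1.0     -- ≈ 0.761594
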
def NN.MLTheory.CROWN.alpha.sigmoidWithAlpha {α : Type} [Context α] (l u : α) (alphas : NeuronAlpha α) :
    α × α × α × α

Sigmoid relaxation with an interpolation alpha. α ∈ [0, 1] interpolates between tangent and secant candidates for the lower/upper envelopes.

Instances For

def NN.MLTheory.CROWN.alpha.tanhWithAlpha {α : Type} [Context α] (l u : α) (alphas : NeuronAlpha α) :
    α × α × α × α

Tanh relaxation with an interpolation alpha.

Instances For
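
The interpolation idea can be sketched over Float for the lower slope of a sigmoid on [l, u]: α = 0 uses the secant slope, α = 1 a tangent slope (here taken at the midpoint, which is an assumption). The library's actual candidate lines, and the accompanying soundness argument, may differ.

-- Rough sketch of slope interpolation for a smooth activation on [l, u].
def sigmoidF (x : Float) : Float := 1.0 / (1.0 + Float.exp (-x))

def sigmoidSlopeWithAlphaF (l u a : Float) : Float :=
  let secant  := (sigmoidF u - sigmoidF l) / (u - l)
  let m       := (l + u) / 2.0
  let tangent := sigmoidF m * (1.0 - sigmoidF m)  -- σ'(m) = σ(m) * (1 - σ(m))
  a * tangent + (1.0 - a) * secant

#eval sigmoidSlopeWithAlphaF (-1.0) 1.0 0.0  -- secant slope ≈ 0.231
#eval sigmoidSlopeWithAlphaF (-1.0) 1.0 1.0  -- tangent slope at 0 = 0.25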

Configuration for alpha optimization.

  • learningRate : Float

    Learning rate for alpha updates.

  • numIterations : ℕ

    Number of optimization iterations.

  • optimizeLower : Bool

    Whether to optimize lower alphas.

  • optimizeUpper : Bool

    Whether to optimize upper alphas (for smooth activations).

Instances For

Result of alpha optimization.

  • alphas : NetworkAlpha α

    Optimized network alpha configuration.

  • finalBound : α

    Final bound achieved.

  • iterations : ℕ

    Number of iterations used.

Instances For
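
To show how the two records above fit into a workflow, here is a toy Float loop for a single neuron: projected gradient ascent on the lower alpha, α ← clamp[0,1](α + lr · ∂bound/∂α), with the bound taken to be the lower line evaluated at a fixed point x. The objective, the names, and the Float specialization are all assumptions; only the update shape mirrors the α-CROWN workflow.

-- Toy optimization loop sketch for one ReLU lower alpha.
structure AlphaOptConfigF where
  learningRate  : Float
  numIterations : Nat
  deriving Repr

structure AlphaOptResultF where
  alpha      : Float
  finalBound : Float
  iterations : Nat
  deriving Repr

def clamp01 (a : Float) : Float :=
  if a < 0.0 then 0.0 else if a > 1.0 then 1.0 else a

def optimizeLowerAlphaF (cfg : AlphaOptConfigF) (x : Float) : AlphaOptResultF :=
  let rec go (i : Nat) (a : Float) : Float :=
    match i with
    | 0     => a
    | n + 1 => go n (clamp01 (a + cfg.learningRate * x))  -- ∂(α * x)/∂α = x
  let a := go cfg.numIterations 0.0
  { alpha := a, finalBound := a * x, iterations := cfg.numIterations }

#eval optimizeLowerAlphaF { learningRate := 0.2, numIterations := 10 } 1.5
-- α climbs to 1 (the gradient x = 1.5 is positive), so finalBound = 1.5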

ReLU lower with alpha=0 gives zero slope.

theorem NN.MLTheory.CROWN.alpha.Theorems.relu_lower_alpha_one {α : Type} [Context α] (l u : α) :
    match reluLowerWithAlpha l u Numbers.one with | (slope, snd) => slope = Numbers.one

ReLU lower with alpha=1 gives unit slope.

Init layer alpha preserves dimension.
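
In the same spirit as the theorem above, the α = 1 slope fact is immediate for a Float model of the lower candidate; this is only a sketch of the statement over the model, not the library's theorem about reluLowerWithAlpha.

-- Sketch: over a Float model of the lower candidate, α = 1 gives unit slope.
def reluLowerWithAlphaF (_l _u alphaLo : Float) : Float × Float :=
  (alphaLo, 0.0)

theorem reluLowerWithAlphaF_alpha_one (l u : Float) :
    (reluLowerWithAlphaF l u 1.0).1 = 1.0 := rfl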