NN.MLTheory.Robustness.Runtime #
Executable Float-specialized utilities for the robustness specifications in
NN.MLTheory.Robustness.Spec.
Robustness runtime utilities (Float) #
This file specializes the polymorphic spec in NN.MLTheory.Robustness.Spec to Float and adds a
few executable helpers used by experiments, examples, and command-line diagnostics.
It includes:
- deterministic specializations (norms/distances/balls), and
- small sampling-based helpers that compute empirical quantities (e.g. “max observed ratio” over a chosen sample set).
The sampling helpers return concrete values computed from finite samples. They do not produce
certificate checks or Prop-level guarantees.
Runtime L∞ norm, defined by specializing the polymorphic spec to Float.
Runtime L2 norm, defined by specializing the polymorphic spec to Float.
Internal helper: L2 norm specialization reused by runtime wrappers in this module.
Internal helper: L∞ norm specialization reused by runtime wrappers in this module.
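As an illustration, flat-list analogues of these Float norms might look as follows (a sketch only; the names are hypothetical, and the module's actual definitions operate on Spec.Tensor Float s rather than List Float):

```lean
-- Sketch: flat-list analogues of the runtime norms.
-- `linfNormFloat` / `l2NormFloat` are illustrative names, not the module's API.
def linfNormFloat (xs : List Float) : Float :=
  xs.foldl (fun acc x => max acc (Float.abs x)) 0.0

def l2NormFloat (xs : List Float) : Float :=
  Float.sqrt (xs.foldl (fun acc x => acc + x * x) 0.0)

#eval linfNormFloat [1.0, -3.5, 2.0]  -- 3.5
```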
L2 distance (specialization of Robustness.Spec.tensor_distance).
L∞ distance (specialization of Robustness.Spec.tensor_distance).
Decide whether t lies in the closed L2-ball of radius ε around center.
Decide whether t lies in the closed L∞-ball of radius ε around center.
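A minimal sketch of the closed-ball membership check, parameterized over an arbitrary runtime distance (illustrative; the module's versions fix the distance to the Float L2/L∞ specializations above):

```lean
-- Sketch: membership in the closed ball of radius ε around `center`,
-- for any point type α with a runtime distance into Float.
def inClosedBall {α : Type} (dist : α → α → Float)
    (center : α) (ε : Float) (t : α) : Bool :=
  decide (dist t center ≤ ε)
```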
Empirical (sampling-based) helpers #
Maximum observed L2 Lipschitz ratio over a given finite list of input pairs.
This computes:
max_{(x,y) in pairs} ‖f x - f y‖₂ / ‖x - y‖₂
with the convention that a pair with ‖x-y‖₂ = 0 contributes 0.
Note that this is a maximum over the provided pairs only; it is not a certified global bound.
Edge cases / conventions:
- If `pairs = []`, the result is `0` (the fold's initial value).
- If a particular pair satisfies `x = y` (so the input distance is `0`), that pair contributes `0`.
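The fold described above can be sketched as follows (illustrative names; the module's version fixes the norms and pointwise subtraction to the Float specializations):

```lean
-- Sketch: empirical max L2 Lipschitz ratio over a finite list of pairs.
-- `normIn`/`normOut` and `subIn`/`subOut` stand in for the runtime L2 norms
-- and pointwise subtraction on inputs and outputs, respectively.
def maxObservedRatio {α β : Type}
    (normIn : α → Float) (subIn : α → α → α)
    (normOut : β → Float) (subOut : β → β → β)
    (f : α → β) (pairs : List (α × α)) : Float :=
  pairs.foldl (fun acc p =>
    let d := normIn (subIn p.1 p.2)
    -- convention: a pair with zero input distance contributes 0
    let r := if d == 0.0 then 0.0 else normOut (subOut (f p.1) (f p.2)) / d
    max acc r) 0.0
```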
Sampling design (why these helpers look “complicated”) #
The spec tensor representation Spec.Tensor α s is shape-indexed and is represented
functionally (Fin n → ...). That is great for proofs, but it is not the easiest shape to work
with when you want to build concrete perturbations of a fixed length at runtime.
For sampling, we therefore go through a standard interop path:
- convert the type-level shape `s : Shape` into a runtime shape list `shapeList s : List Nat`,
- compute the number of scalar entries `numel s` as the product of those dimensions, then
- construct a flat list `xs : List Float` of length `numel s`,
- `unflatten` it back into a tensor of the appropriate shape.
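The first two steps of this interop path can be sketched with a toy shape type (illustrative only; `Shape`, `shapeList`, and `numel` here are stand-ins for the module's actual definitions):

```lean
-- Toy runtime view of a type-level shape and its element count.
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def shapeList : Shape → List Nat
  | .scalar  => []
  | .dim n s => n :: shapeList s

-- numel is the product of the runtime dimensions.
def numel (s : Shape) : Nat :=
  (shapeList s).foldl (· * ·) 1

#eval numel (.dim 2 (.dim 3 .scalar))  -- 6
```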
Correctness (what is and is not guaranteed):
- Every point returned by `sampleL2Ball` provably satisfies the predicate `in_l2_ball_float center ε`, because we filter candidates using that very predicate.
- The sampler is deterministic (no `IO` randomness). You control variability via the `seed` input (here derived from the loop index).
- The sampler is not intended to approximate a uniform distribution on the ball, and it does not attempt to be “complete” in any verification sense: it is purely an empirical exploration tool that produces inputs for downstream checks/counterexamples.
Runtime view of a type-level Shape (same convention as TensorBridge.shapeToList).
Number of scalar elements (“numel”) in a tensor of shape s.
This is the runtime analogue of Spec.Shape.size. We compute it via TensorArray.shapeProd on the
list view of the shape.
Cast a tensor with type-level shape s into the definitionally equal “list-shaped” view used by
TensorBridge.flatten/unflatten.
This cast is purely a type-level transport; it does not change the tensor values.
Inverse cast: transport a “list-shaped” tensor back to the original type-level shape s.
Deterministically generate a length-n direction vector from a seed.
This is not cryptographic and not intended to model a probabilistic distribution; it is a
deterministic way to generate reproducible, varied directions without introducing IO.
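One way such a deterministic generator might look, using a simple linear congruential step (a sketch under that assumption; this is not the module's actual generator):

```lean
-- Sketch: reproducible pseudo-random directions from a Nat seed,
-- via a linear congruential step (illustrative, not the module's generator).
def lcgStep (state : Nat) : Nat :=
  (state * 1664525 + 1013904223) % (2 ^ 32)

def directionFromSeed (n seed : Nat) : List Float :=
  go n (lcgStep (seed + 1))
where
  go : Nat → Nat → List Float
  | 0, _ => []
  | k + 1, st =>
    -- map the current state into [-1, 1]
    (Float.ofNat (st % 2001) / 1000.0 - 1.0) :: go k (lcgStep st)
```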
Euclidean norm of a flat list (helper for normalization).
Unflatten a flat list of length numel s into a Spec.Tensor Float s.
The length proof is part of the interface to avoid “silent truncation/padding”.
Build a perturbation tensor of (approximately) the given radius, deterministically from seed.
Construction:
- Make a flat “direction” list `base` of length `numel s`.
- Normalize it to unit norm (when nonzero).
- Scale by `radius`.
- Unflatten back into the tensor shape.
This ensures the perturbation has the right shape by construction.
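Over the flat list, the normalize-and-scale steps can be sketched as follows (unflattening into the shaped tensor is elided; all names here are illustrative):

```lean
-- Sketch: scale a flat direction list to the requested L2 radius.
def flatL2Norm (xs : List Float) : Float :=
  Float.sqrt (xs.foldl (fun acc x => acc + x * x) 0.0)

def perturbationOfRadius (radius : Float) (base : List Float) : List Float :=
  let n := flatL2Norm base
  if n == 0.0 then base            -- zero direction: nothing to normalize
  else base.map fun x => radius * (x / n)
```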
Generate candidate samples in the closed L2 ball around center (deterministic sampler).
We generate numSamples candidates, each constructed as:
center + δ_k where δ_k is a direction derived from k and then scaled to a radius in [0, ε].
We then filter candidates using in_l2_ball_float center ε to ensure that every returned
element satisfies the predicate under the same runtime distance function.
This “generate + filter” style is deliberate: it makes the only factual guarantee we claim extremely clear (“every returned element lies in the ball”), independent of the details of the direction generator.
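The generate-and-filter pattern, and the guarantee it buys, can be sketched generically (illustrative names; the module's sampler fixes `inBall` to in_l2_ball_float center ε):

```lean
-- Sketch: candidates come from an arbitrary deterministic generator;
-- the filter alone establishes the membership guarantee.
def generateAndFilter {α : Type} (inBall : α → Bool)
    (gen : Nat → α) (numSamples : Nat) : List α :=
  ((List.range numSamples).map gen).filter inBall

-- Every returned element satisfies `inBall`, purely by the definition of
-- `List.filter`, regardless of how `gen` behaves.
theorem generateAndFilter_mem {α : Type} (inBall : α → Bool)
    (gen : Nat → α) (n : Nat) :
    ∀ x ∈ generateAndFilter inBall gen n, inBall x = true := by
  intro x hx
  exact (List.mem_filter.mp hx).2
```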
Turn a nonempty list [x₀,x₁,…,x_{m-1}] into adjacent pairs
[(x₀,x₁),(x₁,x₂),…,(x_{m-1},x₀)].
This is a small combinator that is useful when you want to turn a sample list into a set of “nearby pairs” for empirical ratio computations.
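A sketch of such a cyclic pairing combinator (illustrative name):

```lean
-- Sketch: turn [x₀, x₁, …, x_{m-1}] into the cyclic adjacent pairs
-- [(x₀, x₁), (x₁, x₂), …, (x_{m-1}, x₀)].
def adjacentPairsCyclic {α : Type} : List α → List (α × α)
  | []      => []
  | x :: xs => (x :: xs).zip (xs ++ [x])

#eval adjacentPairsCyclic [0, 1, 2]  -- [(0, 1), (1, 2), (2, 0)]
```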
Empirical max “gain from the center” on a sampled L2-ball neighborhood.
This computes:
max_{x in samples} ‖f x - f x₀‖₂ / ‖x - x₀‖₂
where samples are generated by Sampling.sampleL2Ball. This is a maximum over those samples
only (not a certified global Lipschitz bound).