TorchLean API

NN.MLTheory.Stability.Spec #

Scalar-polymorphic stability definitions for discrete-time dynamical systems x_{t+1} = f x_t over shape-indexed tensors.

Stability specifications (discrete-time dynamical systems) #

This module defines standard stability notions for iterated maps f : Tensor α s → Tensor α s, phrased over TorchLean's shape-indexed tensors: Lyapunov, asymptotic, exponential, and global stability; input-to-state (ISS), BIBO, and incremental stability; finite-time and practical stability; and training- and generalization-stability predicates for learning setups.

The definitions are polymorphic in the scalar type α via [Context α]; for noncomputable quantities (e.g. the supremum defining a stability margin on ℝ), we expose the notion via a type class StabilityMarginComputable. TorchLean installs the real supremum instance globally and keeps the conservative 0 lower-bound instance behind an explicit opt-in scope for examples and tests.

References #

These are standard definitions in control theory and dynamical systems.

@[reducible, inline]
abbrev NN.MLTheory.Stability.Spec.iterate {α : Type} {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (n : ℕ) (x : Spec.Tensor α s) :
    Spec.Tensor α s

Iterate f for n steps: iterate f n x = f^[n] x.

Instances For
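As a point of reference, the recursion behind iterate can be sketched in plain Lean as follows (names here are illustrative; the module's actual definition may be stated via f^[n] directly):

```lean
-- Illustrative recursion: `iterSketch f n x` applies `f` to `x` `n` times,
-- matching the specification `iterate f n x = f^[n] x`.
def iterSketch {T : Type} (f : T → T) : Nat → T → T
  | 0,     x => x
  | n + 1, x => f (iterSketch f n x)

-- Two steps of iteration unfold to two applications of `f`.
example (f : Nat → Nat) (x : Nat) : iterSketch f 2 x = f (f x) := rfl
```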

A computable notion of stability margin, matching the "largest invariant ball" intuition: the largest r ≥ 0 such that the closed ball {x | dist(eq, x) ≤ r} is forward-invariant.

TorchLean only installs a real supremum-based instance globally for ℝ. Other scalar backends can opt into the conservative lower-bound instance below explicitly; this avoids silently reporting 0 as a semantic stability margin for arbitrary scalar types.

Instances

Named opt-in scope for the conservative stability-margin lower bound.

Use open scoped NN.MLTheory.Stability.ConservativeMargin only in examples/tests that deliberately want a total fallback. Production theorem statements should either use the ℝ instance or require an explicit StabilityMarginComputable α hypothesis.
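For example, a test file can opt into the fallback locally (a sketch; only the scope name is taken from this module, the surrounding declarations are illustrative):

```lean
-- Illustrative opt-in: the conservative lower-bound instance is only
-- visible after opening the scope, and only within this section.
section ConservativeExamples
open scoped NN.MLTheory.Stability.ConservativeMargin

-- ... examples/tests that accept the conservative `0` lower bound ...

end ConservativeExamples
```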

def NN.MLTheory.Stability.Spec.isLyapunovStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) :
    Prop

Lyapunov stability of equilibrium for the discrete-time system x_{t+1} = f x_t.

This is the usual ε/δ definition using the distance induced by norm.

Instances For
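Spelled out, the ε/δ condition has the following shape (a self-contained sketch: Float stands in for the scalar α, dist for the distance induced by norm, and iterL for iterate):

```lean
-- Local stand-in for `iterate`.
def iterL {T : Type} (f : T → T) : Nat → T → T
  | 0,     x => x
  | n + 1, x => f (iterL f n x)

-- Sketch of the ε/δ condition: for every ε-ball around the equilibrium
-- there is a δ-ball whose points have trajectories that never leave
-- the ε-ball.
def lyapunovSketch {T : Type} (f : T → T) (dist : T → T → Float) (eq : T) : Prop :=
  ∀ ε : Float, 0 < ε →
    ∃ δ : Float, 0 < δ ∧
      ∀ x : T, dist eq x < δ → ∀ n : Nat, dist eq (iterL f n x) < ε
```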
def NN.MLTheory.Stability.Spec.isAsymptoticallyStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) :
    Prop

Asymptotic stability: Lyapunov stability plus convergence to equilibrium for nearby initial conditions.

Instances For
def NN.MLTheory.Stability.Spec.isExponentiallyStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) (decay_rate M : α) :
    Prop

Exponential stability with decay parameters decay_rate and M.

This is a quantitative strengthening of asymptotic stability.

Instances For
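The bound this packages is, schematically, dist eq (f^[t] x) ≤ M · decay_rate^t · dist eq x. A self-contained sketch (Float stands in for α; side conditions such as decay_rate < 1 are illustrative and may differ from the actual definition):

```lean
-- Local stand-ins for `iterate` and natural-number powers of a scalar.
def iterE {T : Type} (f : T → T) : Nat → T → T
  | 0,     x => x
  | n + 1, x => f (iterE f n x)

def fpow (r : Float) : Nat → Float
  | 0     => 1.0
  | n + 1 => r * fpow r n

-- Sketch of the exponential-decay bound with rate `decay_rate` and gain `M`.
def expStableSketch {T : Type} (f : T → T) (dist : T → T → Float)
    (eq : T) (decay_rate M : Float) : Prop :=
  0 ≤ decay_rate ∧ decay_rate < 1 ∧ 0 < M ∧
    ∀ (x : T) (t : Nat), dist eq (iterE f t x) ≤ M * fpow decay_rate t * dist eq x
```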
def NN.MLTheory.Stability.Spec.isGloballyStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) :
    Prop

Global stability: every initial condition converges to equilibrium.

This is stated as convergence in the distance induced by norm.

Instances For
def NN.MLTheory.Stability.Spec.isInputToStateStable {α : Type} [Context α] {s₁ s₂ : Spec.Shape} (f : Spec.Tensor α s₁ → Spec.Tensor α s₂ → Spec.Tensor α s₁) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (β : α → α → α) (γ : α → α) :
    Prop

Input-to-state stability (ISS) for an input-driven system x_{t+1} = f x_t u_t.

This is the standard bound ‖x_t‖ ≤ β(‖x_0‖, t) + γ( sup_{k ≤ t} ‖u_k‖ ) packaged as a Prop.

Instances For
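A self-contained sketch of the bound (Float stands in for α, time is kept in ℕ rather than cast into the scalar, and a fold over List.range stands in for the sup; all names here are illustrative):

```lean
-- Input-driven iteration: x_{t+1} = f x_t u_t.
def iterU {X U : Type} (f : X → U → X) (u : Nat → U) : Nat → X → X
  | 0,     x => x
  | t + 1, x => f (iterU f u t x) (u t)

-- Max of ‖u_k‖ over k ≤ t, standing in for `sup_norm_over_time`.
def supNormSketch {U : Type} (nu : U → Float) (u : Nat → U) (t : Nat) : Float :=
  (List.range (t + 1)).foldl (fun acc k => if acc < nu (u k) then nu (u k) else acc) 0.0

-- Sketch of the ISS bound ‖x_t‖ ≤ β (‖x₀‖, t) + γ (sup_{k ≤ t} ‖u_k‖).
def issSketch {X U : Type} (f : X → U → X) (nx : X → Float) (nu : U → Float)
    (β : Float → Nat → Float) (γ : Float → Float) : Prop :=
  ∀ (x₀ : X) (u : Nat → U) (t : Nat),
    nx (iterU f u t x₀) ≤ β (nx x₀) t + γ (supNormSketch nu u t)
```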
def NN.MLTheory.Stability.Spec.isInputToStateStable.iterateWithInput {α : Type} {s₁ s₂ : Spec.Shape} (f : Spec.Tensor α s₁ → Spec.Tensor α s₂ → Spec.Tensor α s₁) (input_seq : ℕ → Spec.Tensor α s₂) :
    ℕ → Spec.Tensor α s₁ → Spec.Tensor α s₁

Instances For
def NN.MLTheory.Stability.Spec.isInputToStateStable.sup_norm_over_time {α : Type} [Context α] {s₂ : Spec.Shape} (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (input_seq : ℕ → Spec.Tensor α s₂) (t : ℕ) :
    α

Instances For
def NN.MLTheory.Stability.Spec.isBiboStable {α : Type} [Context α] {s₁ s₂ : Spec.Shape} (f : Spec.Tensor α s₁ → Spec.Tensor α s₂) (norm₁ norm₂ : {s : Spec.Shape} → Spec.Tensor α s → α) (bound : α) :
    Prop

Bounded-input bounded-output (BIBO) stability with respect to norm₁ and norm₂.

Instances For
def NN.MLTheory.Stability.Spec.isIncrementallyStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (contraction_factor : α) :
    Prop

Incremental stability: distances between trajectories contract by contraction_factor.

This is a discrete-time contraction condition phrased using tensor_distance.

Instances For
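Concretely, the contraction condition can be sketched as follows (Float stands in for α, dist for tensor_distance). Iterating it gives dist (f^[n] x) (f^[n] y) ≤ c^n · dist x y, so trajectories converge toward each other exponentially:

```lean
-- Sketch of the one-step contraction condition: any two states get
-- closer by at least the factor `c` after one application of `f`.
def incStableSketch {T : Type} (f : T → T) (dist : T → T → Float) (c : Float) : Prop :=
  0 ≤ c ∧ c < 1 ∧ ∀ x y : T, dist (f x) (f y) ≤ c * dist x y
```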
def NN.MLTheory.Stability.Spec.stabilityMargin {α : Type} [Context α] {s : Spec.Shape} [StabilityMarginComputable α] (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) :
    α

Stability margin: the largest forward-invariant ball radius around equilibrium (when computable).

Instances For
def NN.MLTheory.Stability.Spec.isFiniteTimeStable {α : Type} {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (_norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) (settling_time_steps : ℕ) :
    Prop

Finite-time stability: trajectories reach equilibrium exactly within a fixed step budget.

Instances For
def NN.MLTheory.Stability.Spec.isPracticallyStable {α : Type} [Context α] {s : Spec.Shape} (f : Spec.Tensor α s → Spec.Tensor α s) (norm : {s : Spec.Shape} → Spec.Tensor α s → α) (equilibrium : Spec.Tensor α s) (ultimate_bound : α) :
    Prop

Practical stability: trajectories eventually enter and remain in a fixed ultimate_bound ball.

Instances For
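The "enter and remain" condition can be sketched as follows (Float stands in for α, dist for the induced distance, iterP for iterate):

```lean
-- Local stand-in for `iterate`.
def iterP {T : Type} (f : T → T) : Nat → T → T
  | 0,     x => x
  | n + 1, x => f (iterP f n x)

-- Sketch: for every initial state there is a time `N` after which the
-- trajectory stays inside the ball of radius `b` around `eq`.
def practicallyStableSketch {T : Type} (f : T → T) (dist : T → T → Float)
    (eq : T) (b : Float) : Prop :=
  ∀ x : T, ∃ N : Nat, ∀ t : Nat, N ≤ t → dist eq (iterP f t x) ≤ b
```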
def NN.MLTheory.Stability.Spec.isTrainingStable {α : Type} [Context α] {s : Spec.Shape} (update_rule : Spec.Tensor α s → Spec.Tensor α s) (loss : Spec.Tensor α s → α) (parameters : Spec.Tensor α s) :
    Prop

One-step monotonicity of a training loss under an update rule.

This is the “training stability” predicate used as a spec for decreasing-loss update rules.

Instances For
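Unfolded, the predicate is a single inequality (a sketch; Float stands in for α, and whether the actual definition uses ≤ or a strict decrease is determined by the module, not by this sketch):

```lean
-- Sketch: one update step does not increase the training loss at `p`.
def trainingStableSketch {P : Type} (update : P → P) (loss : P → Float) (p : P) : Prop :=
  loss (update p) ≤ loss p
```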
def NN.MLTheory.Stability.Spec.isGeneralizationStable {α : Type} [Context α] {s₁ s₂ : Spec.Shape} (training_algorithm : List (Spec.Tensor α s₁ × Spec.Tensor α s₂) → Spec.Tensor α s₁ → Spec.Tensor α s₂) (norm₁ norm₂ : {s : Spec.Shape} → Spec.Tensor α s → α) (stability_constant : α) :
    Prop

Generalization stability of a learning algorithm: small dataset changes produce small prediction changes.

This is a generic stability-style specification; concrete instances typically choose a specific dataset metric and output norm.

Instances For
def NN.MLTheory.Stability.Spec.isGeneralizationStable.dataset_distance {α : Type} [Context α] {s₁ s₂ : Spec.Shape} (norm₁ norm₂ : {s : Spec.Shape} → Spec.Tensor α s → α) (d₁ d₂ : List (Spec.Tensor α s₁ × Spec.Tensor α s₂)) :
    α

Instances For
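One plausible shape for such a dataset metric, shown purely as an illustration (the actual definition may weight inputs and outputs differently or handle length mismatches another way):

```lean
-- Illustrative dataset distance: sum, over paired examples, of the
-- input distance plus the output distance (`Float` stands in for `α`;
-- unpaired trailing examples are ignored by `zip` in this sketch).
def datasetDistSketch {I O : Type} (dI : I → I → Float) (dO : O → O → Float)
    (d₁ d₂ : List (I × O)) : Float :=
  (List.zip d₁ d₂).foldl
    (fun acc p => acc + dI p.1.1 p.2.1 + dO p.1.2 p.2.2) 0.0
```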