TorchLean API

NN.MLTheory.LearningTheory.DifferentialPrivacy.Core

Differential privacy (learning theory) #

This file introduces a small, reusable vocabulary for differential privacy (DP) in TorchLean’s learning-theory layer.

We isolate the core event-wise definition of differential privacy and the closure properties that show up when DP is connected to learning theory, stability, and verification pipelines.

We keep the definition general by parameterizing over:

• an arbitrary input type α (e.g. a dataset type),
• an adjacency relation Adj : α → α → Prop saying which inputs count as “neighbors”, and
• a measurable output space β.

Then (ε, δ)-DP is the standard event-wise bound:

P[M a ∈ S] ≤ exp(ε) * P[M a' ∈ S] + δ for all adjacent a ~ a' and measurable events S.

We phrase this in mathlib using ProbabilityMeasure and Measure: a mechanism's output M a is a ProbabilityMeasure β, and the DP inequality is stated on the coercion (M a : Measure β), so it lives in ENNReal.

This makes the definition work uniformly for discrete and continuous outputs, while keeping the measurability side-conditions explicit.

We also include a couple of basic structural lemmas that are easy to reuse downstream (and that are often needed to compose DP facts through a larger construction):

• δ-monotonicity (differentialPrivacy_mono_delta), and
• preservation of DP under measurable post-processing (differentialPrivacy_postprocess).

Typical instantiations #

In this repository, the stability development (see NN.MLTheory.LearningTheory.Stability.Core) uses datasets of the form Fin n → Z for a fixed “sample size” n. For DP, one often uses a similar encoding and defines Adj in terms of replacing one coordinate.
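For concreteness, here is one way such a replace-one-coordinate adjacency could be written; AdjReplaceOne is a hypothetical name for illustration, not a declaration of this file:

```lean
import Mathlib

-- Hypothetical illustration (not part of this file): two fixed-size datasets
-- `a a' : Fin n → Z` are adjacent when they agree on all but at most one
-- coordinate, i.e. when one record may have been replaced.
def AdjReplaceOne {Z : Type} {n : ℕ} (a a' : Fin n → Z) : Prop :=
  ∃ i : Fin n, ∀ j, j ≠ i → a j = a' j
```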

Mechanisms #

@[reducible, inline]
abbrev NN.MLTheory.LearningTheory.Mechanism (α β : Type) [MeasurableSpace β] : Type

A randomized mechanism from inputs α to outputs β.

We use ProbabilityMeasure β so probability mass is total (μ univ = 1) and so that post-processing can be phrased using ProbabilityMeasure.map (pushforward along a measurable function).
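A minimal sketch of what this abbreviation plausibly unfolds to, assuming Mathlib's MeasureTheory.ProbabilityMeasure (an illustrative reconstruction, not the verbatim source):

```lean
import Mathlib

open MeasureTheory

-- Illustrative reconstruction: a mechanism assigns to every input a
-- probability measure on the (measurable) output space.
abbrev Mechanism (α β : Type) [MeasurableSpace β] : Type :=
  α → ProbabilityMeasure β
```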

Differential privacy #

def NN.MLTheory.LearningTheory.DifferentialPrivacy {α β : Type} (Adj : α → α → Prop) [MeasurableSpace β] (M : Mechanism α β) (ε : ℝ) (δ : ENNReal) : Prop

(ε, δ)-differential privacy with respect to an adjacency relation Adj.

This is the standard “event-wise” definition: for all adjacent inputs a ~ a' and all measurable events S,

P[M a ∈ S] ≤ exp(ε) * P[M a' ∈ S] + δ.

We write probabilities as measures, (M a : Measure β) S, so the inequality lives in ENNReal.
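A hedged sketch of how the definition can be stated, with Mechanism written out as α → ProbabilityMeasure β so the snippet stands alone; note the exp(ε) factor has to be coerced into ENNReal, e.g. via ENNReal.ofReal:

```lean
import Mathlib

open MeasureTheory

-- Illustrative sketch (not the verbatim source): the event-wise DP bound,
-- phrased on the coercions of the output ProbabilityMeasures to Measures,
-- so both sides of the inequality live in ENNReal.
def DifferentialPrivacySketch {α β : Type} (Adj : α → α → Prop)
    [MeasurableSpace β] (M : α → ProbabilityMeasure β) (ε : ℝ) (δ : ENNReal) :
    Prop :=
  ∀ a a', Adj a a' → ∀ S : Set β, MeasurableSet S →
    (M a : Measure β) S ≤ ENNReal.ofReal (Real.exp ε) * (M a' : Measure β) S + δ
```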

@[reducible, inline]
abbrev NN.MLTheory.LearningTheory.PureDP {α β : Type} (Adj : α → α → Prop) [MeasurableSpace β] (M : Mechanism α β) (ε : ℝ) : Prop

A common special case: δ = 0 (“pure DP”).
theorem NN.MLTheory.LearningTheory.differentialPrivacy_mono_delta {α β : Type} {Adj : α → α → Prop} [MeasurableSpace β] {M : Mechanism α β} {ε : ℝ} {δ₁ δ₂ : ENNReal} (h : δ₁ ≤ δ₂) :
    DifferentialPrivacy Adj M ε δ₁ → DifferentialPrivacy Adj M ε δ₂

δ-monotonicity: if a mechanism is (ε, δ₁)-DP and δ₁ ≤ δ₂, then it is also (ε, δ₂)-DP.

This small lemma is useful when you:

• prove DP with a “clean” bound, then
• want to reuse it under a slightly looser δ (e.g. after taking a sup, or adding a slack term).
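The ENNReal step at the heart of the proof is plain monotonicity of addition in the second summand; a self-contained sketch of that step:

```lean
import Mathlib

-- The arithmetic core of δ-monotonicity: if the probability p is bounded by
-- c * q + δ₁ and δ₁ ≤ δ₂, then p is also bounded by c * q + δ₂.
example (p q c δ₁ δ₂ : ENNReal) (hδ : δ₁ ≤ δ₂) (h : p ≤ c * q + δ₁) :
    p ≤ c * q + δ₂ :=
  h.trans (add_le_add_left hδ _)
```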

Post-processing #

noncomputable def NN.MLTheory.LearningTheory.postprocess {α β γ : Type} [MeasurableSpace β] [MeasurableSpace γ] (M : Mechanism α β) (f : β → γ) (hf : Measurable f) :
    Mechanism α γ

Post-process a mechanism by applying a measurable function to its output.

In DP folklore: if M is DP, then so is f ∘ M for any (measurable) f that does not look at the private input. This is called post-processing and is one of the core reasons DP composes well with downstream pipelines.

Formally, postprocess M f is the pushforward measure (M a).map f for each input a.
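A minimal sketch of such a pushforward mechanism, assuming Mathlib's ProbabilityMeasure.map (which takes an AEMeasurable witness); postprocessSketch is an illustrative name, not this file's declaration:

```lean
import Mathlib

open MeasureTheory

-- Illustrative sketch (not the verbatim source): push each output
-- distribution forward along the measurable map f.
noncomputable def postprocessSketch {α β γ : Type}
    [MeasurableSpace β] [MeasurableSpace γ]
    (M : α → ProbabilityMeasure β) (f : β → γ) (hf : Measurable f) :
    α → ProbabilityMeasure γ :=
  fun a => (M a).map hf.aemeasurable
```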

theorem NN.MLTheory.LearningTheory.differentialPrivacy_postprocess {α β γ : Type} {Adj : α → α → Prop} [MeasurableSpace β] [MeasurableSpace γ] {M : Mechanism α β} {ε : ℝ} {δ : ENNReal} {f : β → γ} (hf : Measurable f) :
    DifferentialPrivacy Adj M ε δ → DifferentialPrivacy Adj (postprocess M f hf) ε δ

Post-processing theorem: measurable mappings of outputs preserve DP.

Proof idea (the standard one):

• the probability of an event S under the mapped output is the probability of the preimage f ⁻¹' S under the original output;
• apply DP for M to the measurable set f ⁻¹' S.
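The first bullet is exactly Mathlib's Measure.map_apply, stated here as a self-contained example:

```lean
import Mathlib

open MeasureTheory

-- The pushforward of μ along a measurable f assigns to a measurable event S
-- the mass of its preimage f ⁻¹' S under μ.
example {β γ : Type} [MeasurableSpace β] [MeasurableSpace γ]
    (μ : Measure β) {f : β → γ} (hf : Measurable f)
    {S : Set γ} (hS : MeasurableSet S) :
    μ.map f S = μ (f ⁻¹' S) :=
  Measure.map_apply hf hS
```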