TorchLean API

NN.MLTheory.SelfSupervised.Masking

Masking primitives for self-supervised objectives

This file gives a small finite-index vocabulary for masked prediction objectives. It is deliberately independent of any particular image or transformer implementation: a patch/token collection is just Fin n → α, and a mask is a Boolean predicate on Fin n.

The goal is to make MAE/JEPA-style objectives precise enough to prove local invariants before we connect them to larger executable models.
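The vocabulary above can be sketched in a few lines of standalone Lean. This is a hedged reconstruction, not the file's actual source: the names `Mask`, `Mask.toPred`, and `Mask.compl` are assumptions made for illustration, matching the docstrings below.

```lean
-- Hedged sketch of the vocabulary described above (declaration names assumed):
-- a mask over n patches/tokens is a Boolean predicate on indices.
def Mask (n : ℕ) : Type := Fin n → Bool

-- Predicate form of a mask: the proposition "index i is selected".
def Mask.toPred {n : ℕ} (m : Mask n) : Fin n → Prop := fun i => m i = true

-- Complement a mask: selected indices become unselected and vice versa.
def Mask.compl {n : ℕ} (m : Mask n) : Mask n := fun i => !(m i)
```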

@[reducible, inline]

A finite mask over n patches/tokens. true means the index is selected.

Predicate form of a mask.

The all-visible/all-target mask.

The empty mask.

Complement a mask.
def NN.MLTheory.SelfSupervised.maskedLoss {n : ℕ} (idxs : List (Fin n)) (perPatchLoss : Fin n → ℕ) : ℕ

Generic masked loss over an explicit list of selected indices.

For theory files we keep the scalar loss as Nat; concrete runtime losses can instantiate this with squared-error bins, quantized patch losses, or any other executable per-patch score.
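The file does not show the definition body here, but the nil/cons simp lemmas below pin it down: any function satisfying them computes the same values. A minimal standalone sketch under that assumption:

```lean
-- Assumed definition sketch: masked loss as a sum of per-patch losses
-- over the explicitly selected indices, matching maskedLoss_nil/cons.
def maskedLoss {n : ℕ} (idxs : List (Fin n)) (perPatchLoss : Fin n → ℕ) : ℕ :=
  match idxs with
  | [] => 0
  | i :: rest => perPatchLoss i + maskedLoss rest perPatchLoss

-- Example: with n = 4 patches, selected indices {0, 2}, and a per-patch
-- loss equal to the index value, the masked loss is 0 + 2 = 2.
example : maskedLoss [(0 : Fin 4), 2] (fun i => i.val) = 2 := rfl
```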
@[simp]
theorem NN.MLTheory.SelfSupervised.maskedLoss_nil {n : ℕ} (perPatchLoss : Fin n → ℕ) :
maskedLoss [] perPatchLoss = 0

@[simp]
theorem NN.MLTheory.SelfSupervised.maskedLoss_cons {n : ℕ} (i : Fin n) (idxs : List (Fin n)) (perPatchLoss : Fin n → ℕ) :
maskedLoss (i :: idxs) perPatchLoss = perPatchLoss i + maskedLoss idxs perPatchLoss

theorem NN.MLTheory.SelfSupervised.maskedLoss_append {n : ℕ} (xs ys : List (Fin n)) (perPatchLoss : Fin n → ℕ) :
maskedLoss (xs ++ ys) perPatchLoss = maskedLoss xs perPatchLoss + maskedLoss ys perPatchLoss

theorem NN.MLTheory.SelfSupervised.maskedLoss_reverse {n : ℕ} (idxs : List (Fin n)) (perPatchLoss : Fin n → ℕ) :
maskedLoss idxs.reverse perPatchLoss = maskedLoss idxs perPatchLoss

theorem NN.MLTheory.SelfSupervised.maskedLoss_eq_zero_of_all_zero {n : ℕ} (idxs : List (Fin n)) (perPatchLoss : Fin n → ℕ) (h : ∀ (i : Fin n), i ∈ idxs → perPatchLoss i = 0) :
maskedLoss idxs perPatchLoss = 0
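As a usage sketch, the @[simp] lemmas let concrete masked losses reduce automatically. This assumes the file above is imported and its namespace opened; it is an illustration of the intended proof style, not code from the file itself.

```lean
-- Hedged usage sketch: unfolding a two-element index list with the
-- simp lemmas above leaves a plain sum of per-patch losses.
example {n : ℕ} (i j : Fin n) (f : Fin n → ℕ) :
    maskedLoss [i, j] f = f i + f j := by
  simp [maskedLoss_cons, maskedLoss_nil]
```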