TorchLean API

NN.Proofs.Autograd.Notation

Notation for analytic (Fréchet) derivatives in TorchLean autograd proofs

Mathlib already provides two very useful scoped notations in this area: the postfix † for the adjoint of a continuous linear map (ContinuousLinearMap.adjoint) and ∇ for the gradient of a scalar-valued function. Both appear in the statements below.

TorchLean’s autograd proofs are VJP-first: the central analytic object is the adjoint of the Fréchet derivative, (fderiv ℝ f x)†, which is exactly the vector-Jacobian product (Jacobian-transpose product) that PyTorch-style reverse mode computes.

This file defines a small scoped notation VJP[f, x] for that operator.
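As a minimal sketch of the intended usage (the scoped namespace and the := rfl proof are assumptions based on the @[reducible] abbreviations on this page, not verbatim from the file; a self-contained file would additionally need import Mathlib and this module):

variable {E F : Type} [NormedAddCommGroup E] [InnerProductSpace ℝ E] [CompleteSpace E]
  [NormedAddCommGroup F] [InnerProductSpace ℝ F] [CompleteSpace F]

open scoped Proofs.Autograd in
-- `VJP[f, x]` abbreviates `vjp f x`, the adjoint of the Fréchet derivative;
-- since the abbrev is reducible, the equation should hold definitionally.
example (f : E → F) (x : E) : VJP[f, x] = (fderiv ℝ f x).adjoint := rfl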

@[reducible, inline]
noncomputable abbrev Proofs.Autograd.jacobian {E F : Type} [NormedAddCommGroup E] [InnerProductSpace ℝ E] [NormedAddCommGroup F] [InnerProductSpace ℝ F] (f : E → F) (x : E) : E →L[ℝ] F

The Jacobian (Fréchet derivative) of f at x, packaged as a bounded linear map.

Instances For
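A quick sanity check, reusing the variable block from the sketch above (a sketch, assuming jacobian unfolds to fderiv ℝ f x, as its docstring states):

-- `jacobian` is `fderiv ℝ f x` under a domain-specific name; because the
-- abbrev is reducible, the equation is definitional.
example (f : E → F) (x : E) : Proofs.Autograd.jacobian f x = fderiv ℝ f x := rfl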

@[reducible, inline]
noncomputable abbrev Proofs.Autograd.vjp {E F : Type} [NormedAddCommGroup E] [InnerProductSpace ℝ E] [CompleteSpace E] [NormedAddCommGroup F] [InnerProductSpace ℝ F] [CompleteSpace F] (f : E → F) (x : E) : F →L[ℝ] E

The vector-Jacobian product operator (VJP) of f at x.

VJP[f, x] : F →L[ℝ] E is the adjoint of the Fréchet derivative (fderiv ℝ f x) : E →L[ℝ] F.

When f is a scalar loss (F = ℝ), the gradient is VJP[f, x] 1 (equivalently ∇ f x).

Instances For

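For the scalar-loss case, a hedged illustration: gradViaVJP below is a hypothetical helper (not part of this file) that seeds the VJP with 1, mirroring how PyTorch-style reverse mode turns a scalar loss into a gradient; by the docstring above it agrees with ∇ f x.

-- Hypothetical helper (illustration only, reusing the variable block above):
-- the gradient of a scalar loss f : E → ℝ is the VJP at x applied to the
-- seed 1, i.e. `VJP[f, x] 1`.
noncomputable def gradViaVJP (f : E → ℝ) (x : E) : E :=
  Proofs.Autograd.vjp f x 1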