# Notation for analytic (Fréchet) derivatives in TorchLean autograd proofs
Mathlib already provides two very useful scoped notations in this area:
* `open scoped InnerProduct` enables the postfix `†` for adjoints of bounded operators
  (`ContinuousLinearMap.adjoint`).
* `open scoped Gradient` enables `∇ f x` for the gradient of a scalar function
  (in Hilbert spaces).
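For instance, with the Hilbert-space setup below (the variables and instance assumptions are my own scaffolding, not part of this file), the two notations read:

```lean
import Mathlib.Analysis.InnerProductSpace.Adjoint
import Mathlib.Analysis.Calculus.Gradient.Basic

open scoped InnerProduct Gradient

variable {E F : Type*}
  [NormedAddCommGroup E] [InnerProductSpace ℝ E] [CompleteSpace E]
  [NormedAddCommGroup F] [InnerProductSpace ℝ F] [CompleteSpace F]

-- `A†` is the adjoint of a bounded operator between real Hilbert spaces.
example (A : E →L[ℝ] F) : F →L[ℝ] E := A†

-- `∇ f x` is the gradient of a scalar-valued function at `x`.
example (f : E → ℝ) (x : E) : E := ∇ f x
```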
TorchLean’s autograd proofs are primarily VJP-first: the central analytic object is the
adjoint of the Fréchet derivative, `(fderiv ℝ f x)†`, which is exactly the vector-Jacobian
product (Jacobian-transpose product) that PyTorch-style reverse-mode AD computes.
This file defines a small scoped notation `VJP[f, x]` for that operator.
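A minimal sketch of what such a declaration can look like; the `TorchLean` namespace and the exact right-hand side are assumptions about this file's source, not quotes from it:

```lean
import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.InnerProductSpace.Adjoint

namespace TorchLean

-- Hypothetical declaration: `VJP[f, x]` unfolds to the adjoint of the
-- Fréchet derivative of `f` at `x`.
scoped notation "VJP[" f ", " x "]" => ContinuousLinearMap.adjoint (fderiv ℝ f x)

end TorchLean

-- Usage: between real Hilbert spaces, `VJP[f, x]` runs backwards, `F →L[ℝ] E`.
open TorchLean in
example {E F : Type*}
    [NormedAddCommGroup E] [InnerProductSpace ℝ E] [CompleteSpace E]
    [NormedAddCommGroup F] [InnerProductSpace ℝ F] [CompleteSpace F]
    (f : E → F) (x : E) : F →L[ℝ] E :=
  VJP[f, x]
```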
The Jacobian (Fréchet derivative) of `f` at `x`, packaged as a bounded linear map.
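Concretely (a small illustration of mine; plain normed-space instances suffice for `fderiv` itself):

```lean
import Mathlib.Analysis.Calculus.FDeriv.Basic

variable {E F : Type*}
  [NormedAddCommGroup E] [NormedSpace ℝ E]
  [NormedAddCommGroup F] [NormedSpace ℝ F]

-- `fderiv ℝ f x : E →L[ℝ] F` is a bounded linear map, so it applies directly
-- to a tangent vector `v`; the result is the directional derivative, i.e. the
-- JVP that forward-mode AD computes.
example (f : E → F) (x v : E) : F := fderiv ℝ f x v
```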
The vector-Jacobian product operator (VJP) of `f` at `x`.

`VJP[f, x] : F →L[ℝ] E` is the adjoint of the Fréchet derivative `(fderiv ℝ f x) : E →L[ℝ] F`.
When `f` is a scalar loss (`F = ℝ`), the gradient is `VJP[f, x] 1` (equivalently `∇ f x`).
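A quick check of the scalar-loss claim (the setup below is mine; on `ℝ` the inner product is `⟪s, t⟫ = s * t`):

```lean
import Mathlib.Analysis.Calculus.FDeriv.Basic
import Mathlib.Analysis.InnerProductSpace.Adjoint

variable {E : Type*} [NormedAddCommGroup E] [InnerProductSpace ℝ E] [CompleteSpace E]

-- For a scalar loss the VJP runs backwards, `ℝ →L[ℝ] E`; seeding it with `1`
-- lands back in the input space.
example (f : E → ℝ) (x : E) : E := ContinuousLinearMap.adjoint (fderiv ℝ f x) 1

-- Why this is `∇ f x`: for every `v : E`, the defining property of the adjoint
-- and of the gradient give
--   ⟪(fderiv ℝ f x)† 1, v⟫ = ⟪1, fderiv ℝ f x v⟫ = fderiv ℝ f x v = ⟪∇ f x, v⟫,
-- and two vectors with the same inner product against every `v` are equal.
```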