TorchLean API

NN.Proofs.Models.Attention

Attention Model Proofs #

This is the proof-facing umbrella module for facts about the attention model.

The executable/spec definitions live under NN.Spec.Layers.Attention. This module collects theorems about those definitions:

- exact causal-mask semantics,
- attention-weight normalization (each row of the attention weights sums to 1),
- permutation equivariance of self-attention in the absence of positional encodings.
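As a rough sketch of the theorem shapes, using hypothetical names (`causalMask`, `softmaxRows`, `selfAttn`, `negInf` are illustrative; the actual identifiers live in NN.Spec.Layers.Attention and may differ):

```lean
-- Hypothetical signatures; actual names and types live in NN.Spec.Layers.Attention.

-- Causal-mask semantics: position (i, j) is unmasked exactly when j ≤ i.
theorem causalMask_spec (n : Nat) (i j : Fin n) :
    causalMask n i j = if (j : Nat) ≤ (i : Nat) then (0 : Float) else negInf := sorry

-- Normalization: each row of the softmax-ed attention weights sums to 1.
theorem attnWeights_row_sum (n : Nat) (A : Fin n → Fin n → Float) (i : Fin n) :
    ∑ j, softmaxRows A i j = 1 := sorry

-- Permutation equivariance (no positional encodings): permuting the input
-- rows permutes the output rows the same way.
theorem selfAttn_perm_equivariant (n d : Nat) (σ : Equiv.Perm (Fin n))
    (X : Fin n → Fin d → Float) :
    selfAttn (fun i => X (σ i)) = fun i => selfAttn X (σ i) := sorry
```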

Import this file when downstream proofs need the attention theorem bundle. Import the spec module directly when you only need the definitions.
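For example, in a downstream file (only the two module paths named above are taken from this page):

```lean
-- Downstream proof file: pull in the full attention theorem bundle.
import NN.Proofs.Models.Attention

-- Definition-only consumer: import the spec module directly instead.
-- import NN.Spec.Layers.Attention
```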