PositionalEncoding #
Module wrappers for spec-layer positional encodings.
This is the simplest learnable variant: a (seqLen, embedDim) parameter tensor is added to the input.
PyTorch equivalent: a "learnable positional embedding" that is added to the token embeddings. In practice
this is often implemented as nn.Embedding(seqLen, embedDim) indexed with an arange of positions; here we
treat the positional tensor itself as the parameter.
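A minimal PyTorch sketch of the two equivalent implementations mentioned above, for orientation only; it is not part of the Lean spec. The class names, zero initialization, and the (batch, seqLen, embedDim) input layout are illustrative assumptions.

```python
# Illustrative sketch only; names, shapes, and init are assumptions, not the spec.
import torch
import torch.nn as nn


class PositionalEncodingParam(nn.Module):
    """Variant matching this spec: a (seq_len, embed_dim) parameter tensor
    added directly to the token embeddings."""

    def __init__(self, seq_len: int, embed_dim: int):
        super().__init__()
        self.pe = nn.Parameter(torch.zeros(seq_len, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, embed_dim) or (batch, seq_len, embed_dim); pe broadcasts over batch.
        return x + self.pe


class PositionalEncodingEmbedding(nn.Module):
    """Common PyTorch idiom: nn.Embedding indexed by position ids."""

    def __init__(self, seq_len: int, embed_dim: int):
        super().__init__()
        self.pe = nn.Embedding(seq_len, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        positions = torch.arange(x.shape[-2], device=x.device)
        return x + self.pe(positions)


if __name__ == "__main__":
    x = torch.randn(2, 16, 32)  # (batch, seq_len, embed_dim)
    for mod in (PositionalEncodingParam(16, 32), PositionalEncodingEmbedding(16, 32)):
        assert mod(x).shape == x.shape
```

Both variants hold the same (seqLen, embedDim) learnable table; they differ only in whether it is stored as a raw parameter or behind an embedding lookup.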
def Spec.PositionalEncodingModuleSpec {α : Type} [Context α] {seqLen embedDim : ℕ}
    (pe : PositionalEncodingSpec seqLen embedDim α) :
    ModSpec.NNModuleSpec α
      (Shape.dim seqLen (Shape.dim embedDim Shape.scalar))
      (Shape.dim seqLen (Shape.dim embedDim Shape.scalar))
Learnable positional encoding wrapper (adds a (seqLen, embedDim) parameter tensor).