TorchLean API

NN.Spec.Core.Sequence

Sequence utilities (spec layer)

These helpers manipulate sequence tensors, typically of shape:

Tensor α (.dim seqLen (.dim featureDim .scalar)).

They are intentionally simple (structural recursion on Tensor.dim) and are used by the RNN/LSTM/GRU specs.


def Spec.mapSequenceSpec {α : Type} {seqLen inDim outDim : ℕ} (f : Tensor α (Shape.dim inDim Shape.scalar) → Tensor α (Shape.dim outDim Shape.scalar)) (seq : Tensor α (Shape.dim seqLen (Shape.dim inDim Shape.scalar))) :
Tensor α (Shape.dim seqLen (Shape.dim outDim Shape.scalar))

Map a per‑step function over a sequence (vector input → vector output).

Instances For
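For instance, a per-step function can be lifted to a whole sequence. A minimal sketch (here `f` is an arbitrary placeholder step function, not a library definition):

```lean
-- Sketch: lift an arbitrary per-step function `f` over every time step.
-- `f` stands in for any vector → vector map (e.g. an activation).
example {α : Type} {seqLen d : ℕ}
    (f : Tensor α (Shape.dim d Shape.scalar) → Tensor α (Shape.dim d Shape.scalar))
    (seq : Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar))) :
    Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar)) :=
  Spec.mapSequenceSpec f seq
```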
def Spec.mapSequenceVecScalarSpec {α : Type} {seqLen dim : ℕ} (f : Tensor α Shape.scalar → Tensor α (Shape.dim dim Shape.scalar)) (seq : Tensor α (Shape.dim seqLen Shape.scalar)) :
Tensor α (Shape.dim seqLen (Shape.dim dim Shape.scalar))

Map a per-step function over a scalar sequence, producing vectors.

Instances For
def Spec.map2SequenceSpec2 {α : Type} {seqLen dim1 dim2 : ℕ} (outShape : Shape) (f : Tensor α (Shape.dim dim1 Shape.scalar) → Tensor α (Shape.dim dim2 Shape.scalar) → Tensor α outShape) (seq1 : Tensor α (Shape.dim seqLen (Shape.dim dim1 Shape.scalar))) (seq2 : Tensor α (Shape.dim seqLen (Shape.dim dim2 Shape.scalar))) :
Tensor α (Shape.dim seqLen outShape)

Zip two sequences together, then apply a vector × vector function at each step.

We pass outShape explicitly so the result type is easy for Lean to elaborate at call sites. This avoids a lot of "stuck" metavariables in larger model definitions.

Instances For
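A sketch of the calling convention, with the explicit outShape argument (`g` is a placeholder combiner, not a library definition):

```lean
-- Sketch: combine two aligned sequences step by step; the output shape
-- is passed explicitly so elaboration does not get stuck on a metavariable.
example {α : Type} {seqLen d : ℕ}
    (g : Tensor α (Shape.dim d Shape.scalar) → Tensor α (Shape.dim d Shape.scalar) →
         Tensor α (Shape.dim d Shape.scalar))
    (xs ys : Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar))) :
    Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar)) :=
  Spec.map2SequenceSpec2 (Shape.dim d Shape.scalar) g xs ys
```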
def Spec.map2SequenceVecScalarSpec {α : Type} {seqLen dim : ℕ} (outShape : Shape) (f : Tensor α (Shape.dim dim Shape.scalar) → Tensor α Shape.scalar → Tensor α outShape) (seq1 : Tensor α (Shape.dim seqLen (Shape.dim dim Shape.scalar))) (seq2 : Tensor α (Shape.dim seqLen Shape.scalar)) :
Tensor α (Shape.dim seqLen outShape)

Zip a vector sequence and a scalar sequence, then apply a vector × scalar function per step.

Instances For
def Spec.reduceSumSequenceSpec {α : Type} [Context α] {seqLen dim : ℕ} (seq : Tensor α (Shape.dim seqLen (Shape.dim dim Shape.scalar))) (h : seqLen ≠ 0) :
Tensor α (Shape.dim dim Shape.scalar)

Reduce a sequence by summing along the time axis (axis = 0).

We ask for seqLen ≠ 0 because downstream uses often combine this with mean-like operations or want to avoid degenerate "empty sequence" cases in proofs. (PyTorch defines behavior for zero-length dims, but we prefer making those cases explicit in the spec layer.)

Instances For
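When the sequence length is a literal, the seqLen ≠ 0 side condition can be discharged automatically. A minimal sketch:

```lean
-- Sketch: summing over a concrete, nonempty time axis; the `seqLen ≠ 0`
-- obligation is closed by `decide` since the length is a literal.
example {α : Type} [Context α] {d : ℕ}
    (seq : Tensor α (Shape.dim 3 (Shape.dim d Shape.scalar))) :
    Tensor α (Shape.dim d Shape.scalar) :=
  Spec.reduceSumSequenceSpec seq (by decide)
```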
def Spec.reduceSumSequenceSpec2 {α : Type} [Context α] {seqLen outputSize hiddenSize : ℕ} (seq : Tensor α (Shape.dim seqLen (Shape.dim outputSize (Shape.dim hiddenSize Shape.scalar)))) (h : seqLen ≠ 0) :
Tensor α (Shape.dim outputSize (Shape.dim hiddenSize Shape.scalar))

Like reduceSumSequenceSpec, but for 3-D tensors (seqLen × outputSize × hiddenSize).

Instances For
def Spec.reverseSequenceSpec {α : Type} {seqLen dim : ℕ} (seq : Tensor α (Shape.dim seqLen (Shape.dim dim Shape.scalar))) :
Tensor α (Shape.dim seqLen (Shape.dim dim Shape.scalar))

Reverse a sequence along its time axis.

PyTorch analogy: seq.flip(dims=[0]).

Instances For
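Since reversal preserves the shape, reversals compose at the type level. A sketch:

```lean
-- Sketch: a double reverse has the same type as the input sequence
-- (that it *equals* `seq` would be a separate lemma to prove).
example {α : Type} {seqLen d : ℕ}
    (seq : Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar))) :
    Tensor α (Shape.dim seqLen (Shape.dim d Shape.scalar)) :=
  Spec.reverseSequenceSpec (Spec.reverseSequenceSpec seq)
```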
def Spec.concatSequenceSpec {α : Type} {seqLen dim1 dim2 : ℕ} (seq1 : Tensor α (Shape.dim seqLen (Shape.dim dim1 Shape.scalar))) (seq2 : Tensor α (Shape.dim seqLen (Shape.dim dim2 Shape.scalar))) :
Tensor α (Shape.dim seqLen (Shape.dim (dim1 + dim2) Shape.scalar))

Concatenate two sequences along the feature dimension.

PyTorch analogy: torch.cat([seq1, seq2], dim=1) when shapes are (seqLen, dim1) and (seqLen, dim2).

Do not confuse this with Spec.Tensor.concatSequenceSpec (defined in NN.Spec.Core.TensorReductionShape), which concatenates along the time axis (axis 0).

Instances For
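A sketch of a typical use, combining a forward sequence with a reversed one along the feature axis (bidirectional-RNN style; the names `fwd` and `bwd` are illustrative):

```lean
-- Sketch: concatenate a forward pass with a reversed sequence; the
-- feature dimension of the result is the sum h + h.
example {α : Type} {seqLen h : ℕ}
    (fwd bwd : Tensor α (Shape.dim seqLen (Shape.dim h Shape.scalar))) :
    Tensor α (Shape.dim seqLen (Shape.dim (h + h) Shape.scalar)) :=
  Spec.concatSequenceSpec fwd (Spec.reverseSequenceSpec bwd)
```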
def Spec.reduceSumVec {α : Type} [Add α] [Zero α] {inDim batch : ℕ} (axis : ℕ) (t : Tensor α (Shape.dim batch (Shape.dim inDim Shape.scalar))) :
axis = 0 → Tensor α (Shape.dim inDim Shape.scalar)

Reduce a batch of vectors by summing over the batch axis (explicit axis = 0).

This axis-explicit helper takes an axis argument plus a proof that it equals 0. Prefer reduceSumVec2 when the axis is fixed by the surrounding code.

Instances For
def Spec.reduceSumVec2 {α : Type} [Add α] [Zero α] {batch inDim : ℕ} (t : Tensor α (Shape.dim batch (Shape.dim inDim Shape.scalar))) :
Tensor α (Shape.dim inDim Shape.scalar)

Same as reduceSumVec, but avoids the explicit axis argument.

We define this with a small Nat loop so the recursion is obvious to Lean and doesn't depend on List.finRange. It's the same mathematical operation: sum all batch vectors pointwise.

Instances For
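The Nat-loop pattern described above can be sketched as follows (a hypothetical local definition, not the library's own; it assumes a pointwise Add instance on same-shape tensors):

```lean
-- Sketch: structural recursion on a Nat counter, folding row `i` of the
-- batch into the accumulator at each step. The `Add` instance on tensors
-- is an assumption of this sketch.
def sumLoopSketch {α : Type} {batch d : ℕ}
    [Add (Tensor α (Shape.dim d Shape.scalar))]
    (row : Fin batch → Tensor α (Shape.dim d Shape.scalar)) :
    ℕ → Tensor α (Shape.dim d Shape.scalar) → Tensor α (Shape.dim d Shape.scalar)
  | 0, acc => acc
  | i + 1, acc =>
    if h : i < batch then
      sumLoopSketch row i (acc + row ⟨i, h⟩)
    else
      sumLoopSketch row i acc
```

Because the counter decreases structurally, Lean accepts the recursion without any well-foundedness argument.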
@[irreducible]
def Spec.reduceSumVec2.loop {α : Type} [Add α] {batch inDim : ℕ} (batchFn : Fin batch → Tensor α (Shape.dim inDim Shape.scalar)) (i : ℕ) (acc : Tensor α (Shape.dim inDim Shape.scalar)) :
Tensor α (Shape.dim inDim Shape.scalar)

Instances For