TorchLean API

NN.Spec.Core.Tensor.Constructors

Tensor constructors (spec layer)

These are small, total constructors for building Spec.Tensor values directly.

They are used heavily inside the spec layer (models/layers) and in proofs, where we want totality and shapes that are fixed by the types.

If you want a PyTorch-style user experience for examples (dynamic dims, runtime errors, and Float-literal casting), prefer NN/Tensor/API.lean instead.

Design choice (why these are "total"): every constructor takes a total function or a length-checked array, so construction never fails at runtime; size mismatches surface as type errors or explicit proof obligations at call sites.

def Spec.fill {α : Type} (value : α) (s : Shape) :
    Tensor α s

Fill a tensor of shape s with a constant value.

PyTorch analogy: torch.full(shape, value).
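As a concrete sketch (assuming the signature above; the shape spelling follows the Shape.dim/Shape.scalar constructors used elsewhere on this page):

```lean
open Spec in
-- A 2 × 3 matrix of zeros, analogous to torch.full((2, 3), 0.0).
-- Shapes nest: Shape.dim 2 (Shape.dim 3 Shape.scalar) is "(2, 3)".
example : Tensor Float (Shape.dim 2 (Shape.dim 3 Shape.scalar)) :=
  fill 0.0 (Shape.dim 2 (Shape.dim 3 Shape.scalar))
```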

def Spec.scalarTensor {α : Type} (value : α) :
    Tensor α Shape.scalar

Scalar constructor (explicit name).

PyTorch analogy: a 0-dim tensor holding one value.

def Spec.vectorTensor {α : Type} {n : ℕ} (values : Fin n → α) :
    Tensor α (Shape.dim n Shape.scalar)

Vector constructor from a Fin n → α function.

PyTorch analogy: torch.tensor([...]) with shape (n,), but our input is a function, not a list.

def Spec.matrixTensor {α : Type} {m n : ℕ} (values : Fin m → Fin n → α) :
    Tensor α (Shape.dim m (Shape.dim n Shape.scalar))

Matrix constructor from a Fin m → Fin n → α function.

PyTorch analogy: torch.tensor([...]).reshape(m, n) (again, function input rather than a list).
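A sketch of both constructors at work (the result shapes are the natural (n,) and (m, n) spellings, assuming the conventions above):

```lean
open Spec in
-- A length-4 vector whose i-th entry is i, like torch.arange(4).
example : Tensor Nat (Shape.dim 4 Shape.scalar) :=
  vectorTensor fun i => i.val

open Spec in
-- A 2 × 3 matrix whose (i, j) entry is 3 * i + j,
-- like torch.arange(6).reshape(2, 3).
example : Tensor Nat (Shape.dim 2 (Shape.dim 3 Shape.scalar)) :=
  matrixTensor fun i j => 3 * i.val + j.val
```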

def Spec.nDArrayTensor {α : Type} {n : ℕ} {s : Shape} :
    (Fin n → Tensor α s) → Tensor α (Shape.dim n s)

Generic dim constructor (kept for clarity at call sites).

def Spec.vectorN {α : Type} [Zero α] (n : ℕ) (f : Fin n → α) :
    Tensor α (Shape.dim n Shape.scalar)

Generic vector creation that works with any type (handles n = 0).

Why this exists:

• For n = 0, a function Fin 0 → α is fine (it has no inputs), but it can be awkward at call sites. This helper makes the intent explicit and keeps patterns uniform.
• The n = 0 case is definitionally a tensor with an empty outer dimension (so the function body is never evaluated).
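The n = 0 case can be written with Fin.elim0, since the function is never evaluated (a sketch, assuming vectorN returns an (n,)-shaped tensor as above):

```lean
open Spec in
-- The empty vector: Fin.elim0 : Fin 0 → α is the canonical
-- "no inputs" function, and its body is never evaluated.
example : Tensor Float (Shape.dim 0 Shape.scalar) :=
  vectorN 0 Fin.elim0
```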
def Spec.matrixMN {α : Type} [Zero α] (m n : ℕ) (f : Fin m → Fin n → α) :
    Tensor α (Shape.dim m (Shape.dim n Shape.scalar))

Generic matrix creation that works with any type.

def Spec.generate {α : Type} (n : ℕ) (f : Fin n → Tensor α Shape.scalar) :
    Tensor α (Shape.dim n Shape.scalar)

Build a length-n vector from a Fin n → scalar function.

This is a small wrapper that reads nicely at call sites where we already have scalar tensors.

def Spec.singleton {α : Type} (x : α) :
    Tensor α (Shape.dim 1 Shape.scalar)

A singleton vector.

PyTorch analogy: x.unsqueeze(0) for a scalar x.

def Spec.padLeft {α : Type} [Context α] {n : ℕ} {s : Shape} (x : Tensor α s) :
    Tensor α (Shape.padLeft n s)

Pad a tensor with n leading dimensions of size 1.

This is the tensor-level companion of Shape.padLeft. It is useful for broadcasting-style normalization: if you need a tensor to have extra leading batch dimensions of size 1, this does so without changing any underlying values.

PyTorch analogy: repeated unsqueeze(0) (or viewing a tensor as having extra leading singleton dims).
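For example, normalizing a vector to carry two leading batch dimensions might look like this (a sketch: it assumes Shape.padLeft takes the count of new dimensions first, and passes the Context Float instance as a hypothesis rather than assuming it exists globally):

```lean
open Spec in
-- Shape (4,) viewed as shape (1, 1, 4), like v.unsqueeze(0).unsqueeze(0);
-- no underlying values change.
example [Context Float] (v : Tensor Float (Shape.dim 4 Shape.scalar)) :
    Tensor Float (Shape.padLeft 2 (Shape.dim 4 Shape.scalar)) :=
  padLeft v
```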

def Spec.Tensor.ofArray1D {α : Type} {n : ℕ} (xs : Array α) (h : n = xs.size) :
    Tensor α (Shape.dim n Shape.scalar)

Build a vector tensor from an array.

We require an explicit proof that the target length matches the array size. This keeps the spec construction honest and makes mismatches obvious at call sites.
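When both the target length and the array are literals, the proof is typically rfl, so a length mismatch is a compile-time error rather than a runtime failure (a sketch against the signature above):

```lean
open Spec in
-- 3 = #[1.0, 2.0, 3.0].size holds by rfl.
example : Tensor Float (Shape.dim 3 Shape.scalar) :=
  Tensor.ofArray1D #[1.0, 2.0, 3.0] rfl

-- A wrong length, e.g. Shape.dim 4 with the same array,
-- would fail to elaborate: rfl cannot prove 4 = 3.
```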

def Spec.Tensor.ofArray2D {α : Type} [Inhabited α] {m n : ℕ} (xss : Array α) :
    m * n = xss.size → Tensor α (Shape.dim m (Shape.dim n Shape.scalar))

Build a matrix tensor from a flat array (row-major).

PyTorch analogy: xss.reshape(m, n), assuming xss is laid out row-major.
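A row-major sketch (hedged as above; the proof obligation 2 * 3 = 6 discharges by rfl for literals):

```lean
open Spec in
-- Rows are [1, 2, 3] and [4, 5, 6] after the row-major reshape.
example : Tensor Float (Shape.dim 2 (Shape.dim 3 Shape.scalar)) :=
  Tensor.ofArray2D #[1.0, 2.0, 3.0, 4.0, 5.0, 6.0] rfl
```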

def Spec.Tensor.ofArrayDim {α : Type} {n : ℕ} {s : Shape} (xs : Array (Tensor α s)) (_h : n = xs.size) :
    Tensor α (Shape.dim n s)

Build a tensor with one leading dimension from an array of inner tensors.

This is the "tensor-of-tensors" analogue of Tensor.ofArray1D:

• input: Array (Tensor α s) of length n
• output: Tensor α (.dim n s)

We require an explicit proof that the array size matches n so callers do not silently drop/pad data.
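This makes the constructor read like a stack along a new leading dimension (a sketch, assuming the signature above):

```lean
open Spec in
-- Stacking two shape-s tensors into shape (2, s),
-- roughly torch.stack([a, b], dim=0).
example {s : Shape} (a b : Tensor Float s) : Tensor Float (Shape.dim 2 s) :=
  Tensor.ofArrayDim #[a, b] rfl
```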

View a length-n vector as an n × 1 "column" matrix.

PyTorch analogy: v.unsqueeze(-1) for a 1D tensor v (or v.reshape(n, 1)).
