TorchLean API

NN.Runtime.Autograd.TorchLean.Functional

Functional #

TorchLean functional helpers in the style of torch.* building blocks.

These are derived ops built from the small primitive TorchLean.Ops surface, so any backend that implements those primitives gets them for free.

The goal is to make losses readable without forcing users to call specialized ops like mse_loss directly.

PyTorch References #

AD References #

For background on reverse-mode AD (the idea behind tape-based autograd), see:

Elementwise helpers #

Safe list indexing helper used in the dynamic (String-parsed) einsum/permute code paths.

Instances For
def Runtime.Autograd.TorchLean.F.square {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) :
    m (RefTy m α s)

    Elementwise square: x ↦ x * x.

    PyTorch analogue: torch.square.
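
As a usage sketch (hedged: `Ops.sub` is a hypothetical name for the primitive elementwise subtraction), `square` composes with `mean` below into a readable MSE:

```lean
-- Sketch only: `Ops.sub` is an assumed primitive name; `F.mean` is the
-- full scalar reduction defined later on this page.
def mseSketch {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {s : Spec.Shape}
    (pred target : RefTy m α s) : m (RefTy m α Spec.Shape.scalar) := do
  let diff ← Ops.sub pred target   -- elementwise pred - target (assumed primitive)
  let sq   ← F.square diff         -- x ↦ x * x
  F.mean sq                        -- scalar mean over all elements
```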

    Instances For

      Checkpointing (semantics-first identity wrapper) #

def Runtime.Autograd.TorchLean.F.checkpoint {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s t : Spec.Shape} (f : RefTy m α s → m (RefTy m α t)) (x : RefTy m α s) :
      m (RefTy m α t)

      Checkpoint wrapper for API parity with PyTorch-style memory-saving patterns.

      In this codebase, checkpointing is a semantic identity wrapper (checkpoint f x = f x). Backends that implement recomputation can refine this hook without changing the mathematical meaning.
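
Because the wrapper is a semantic identity, the docstring's equation can be sketched as a lemma (hedged: `rfl` only closes the goal if `checkpoint` literally unfolds to `f x`, which is what the docstring states):

```lean
-- Sketch: the stated identity `checkpoint f x = f x`, assuming `checkpoint`
-- unfolds definitionally to its argument application.
example {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {s t : Spec.Shape}
    (f : RefTy m α s → m (RefTy m α t)) (x : RefTy m α s) :
    F.checkpoint f x = f x := rfl
```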

      Instances For

        Detach / stop-grad #

def Runtime.Autograd.TorchLean.F.detach {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) :
        m (RefTy m α s)

        Stop-gradient boundary (forward identity).

        Instances For
def Runtime.Autograd.TorchLean.F.stopGrad {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) :
          m (RefTy m α s)

          Alias for detach.

          Instances For

            Broadcasting helpers #

def Runtime.Autograd.TorchLean.F.addB {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s₁ s₂ t : Spec.Shape} [s₁.BroadcastTo t] [s₂.BroadcastTo t] (x : RefTy m α s₁) (y : RefTy m α s₂) :
            m (RefTy m α t)

            Broadcasting add: compute x + y after broadcasting both inputs to the target shape t.

            PyTorch analogue: torch.add (broadcasting semantics).
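
A usage sketch: the classic bias-add, broadcasting a per-feature vector `[n]` against a batch `[b, n]`. Both `BroadcastTo` instances are written as explicit hypotheses here; in real code they would presumably be found by instance search.

```lean
-- Sketch: broadcast-add a bias of shape [n] onto a batch of shape [b, n].
def biasAdd {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {b n : ℕ}
    [(Spec.Shape.dim n Spec.Shape.scalar).BroadcastTo
        (Spec.Shape.dim b (Spec.Shape.dim n Spec.Shape.scalar))]
    [(Spec.Shape.dim b (Spec.Shape.dim n Spec.Shape.scalar)).BroadcastTo
        (Spec.Shape.dim b (Spec.Shape.dim n Spec.Shape.scalar))]
    (x : RefTy m α (Spec.Shape.dim b (Spec.Shape.dim n Spec.Shape.scalar)))
    (bias : RefTy m α (Spec.Shape.dim n Spec.Shape.scalar)) :
    m (RefTy m α (Spec.Shape.dim b (Spec.Shape.dim n Spec.Shape.scalar))) :=
  F.addB x bias
```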

            Instances For
def Runtime.Autograd.TorchLean.F.mulB {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s₁ s₂ t : Spec.Shape} [s₁.BroadcastTo t] [s₂.BroadcastTo t] (x : RefTy m α s₁) (y : RefTy m α s₂) :
              m (RefTy m α t)

              Broadcasting multiply: compute x * y after broadcasting both inputs to the target shape t.

              PyTorch analogue: torch.mul (broadcasting semantics).

              Instances For

                Indexing helpers #

def Runtime.Autograd.TorchLean.F.embedding {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {vocab dim : ℕ} (w : RefTy m α (Spec.Shape.dim vocab (Spec.Shape.dim dim Spec.Shape.scalar))) (idx : Fin vocab) :
m (RefTy m α (Spec.Shape.dim dim Spec.Shape.scalar))

                Embedding lookup (gather one row of an embedding table).

                Given w : vocab × dim, return w[idx] : dim.

                PyTorch analogue: torch.nn.functional.embedding for a single index.
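
A usage sketch with concrete dimensions; since the index is a `Fin vocab`, an out-of-range lookup is a type error rather than a runtime failure:

```lean
-- Sketch: look up row 3 of a 10 × 4 embedding table.
def lookupRow3 {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α]
    (w : RefTy m α (Spec.Shape.dim 10 (Spec.Shape.dim 4 Spec.Shape.scalar))) :
    m (RefTy m α (Spec.Shape.dim 4 Spec.Shape.scalar)) :=
  F.embedding w (3 : Fin 10)   -- `(10 : Fin 10)` would not elaborate as intended
```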

                Instances For

                  Reductions #

def Runtime.Autograd.TorchLean.F.mean {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) :
m (RefTy m α Spec.Shape.scalar)

                  Mean reduction: mean(x) = sum(x) / numel(x).

                  PyTorch analogue: torch.mean.
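
The stated identity can be sketched directly. Everything named `Ops.*` and `s.numel` below is a hypothetical placeholder for whatever full-sum, scalar-division, and element-count operations the primitive surface actually provides:

```lean
-- Sketch only: `Ops.sum`, `Ops.divByNat`, and `s.numel` are assumed names,
-- not necessarily the real primitive spellings.
def meanSketch {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) :
    m (RefTy m α Spec.Shape.scalar) := do
  let total ← Ops.sum x          -- sum(x) over all elements
  Ops.divByNat total s.numel     -- … / numel(x)
```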

                  Instances For

                    Seeded RNG helpers #

def Runtime.Autograd.TorchLean.F.randUniform {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (seed : ℕ) :
                    m (RefTy m α s)

                    Deterministic U[0,1) tensor generator (seeded).

                    Instances For
def Runtime.Autograd.TorchLean.F.bernoulliMask {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (keepProb : RefTy m α Spec.Shape.scalar) (seed : ℕ) :
                      m (RefTy m α s)

                      Deterministic {0,1} mask generator (seeded) with scalar keep-probability input.

                      Instances For
def Runtime.Autograd.TorchLean.F.dropoutSeeded {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) (p : α) (seed : ℕ) (training : Bool := true) :
                        m (RefTy m α s)

                        Seeded dropout implemented as x * mask / keepProb where mask ∈ {0,1} is sampled from a deterministic PRNG keyed by seed.
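
A usage sketch emphasizing the determinism: the output is a pure function of `(x, p, seed)`, so re-running with the same seed reproduces the same mask.

```lean
-- Sketch: two calls with the same seed sample the same {0,1} mask.
def twoDrops {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) (p : α) :
    m (RefTy m α s × RefTy m α s) := do
  let y₁ ← F.dropoutSeeded x p 42   -- training := true by default
  let y₂ ← F.dropoutSeeded x p 42   -- same seed ⇒ same mask
  pure (y₁, y₂)
```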

                        Instances For
def Runtime.Autograd.TorchLean.F.dropoutRefSeeded {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (x : RefTy m α s) (p : RefTy m α Spec.Shape.scalar) (seed : ℕ) (training : Bool := true) :
                          m (RefTy m α s)

                          Seeded dropout where the probability is supplied as a scalar tensor ref.

                          This is useful in model builders where the layer definition stores p as data, avoiding an ad-hoc Float → α cast in polymorphic model code.

                          Instances For

                            Einsum-ish building blocks #

                            Matrix matmul: [m,n] × [n,p] → [m,p].

                            Instances For
def Runtime.Autograd.TorchLean.F.bmm {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {batch mDim nDim pDim : ℕ} (a : RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim mDim (Spec.Shape.dim nDim Spec.Shape.scalar)))) (b : RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim nDim (Spec.Shape.dim pDim Spec.Shape.scalar)))) :
m (RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim mDim (Spec.Shape.dim pDim Spec.Shape.scalar))))

                              Batched matmul: [batch,m,n] × [batch,n,p] → [batch,m,p].

                              Instances For

                                Typed einsum wrappers (fast, total) #

                                These are non-Option equivalents for the most common einsum contractions in ML code. They are intended to be used directly (no string parsing), and serve as the fast-path targets for einsumDyn.

                                einsum("ij,jk->ik", A, B) as a typed matmul.

                                Instances For

                                  einsum("bij,bjk->bik", A, B) as a typed batched matmul.

                                  Instances For

                                    Einsum pattern used in attention: bhid,bhjd -> bhij (batched Q·Kᵀ per head).

                                    Instances For
def Runtime.Autograd.TorchLean.F.einsumBhijBhjdBhid {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {batch heads iDim jDim dDim : ℕ} (attn : RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim heads (Spec.Shape.dim iDim (Spec.Shape.dim jDim Spec.Shape.scalar))))) (v : RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim heads (Spec.Shape.dim jDim (Spec.Shape.dim dDim Spec.Shape.scalar))))) :
m (RefTy m α (Spec.Shape.dim batch (Spec.Shape.dim heads (Spec.Shape.dim iDim (Spec.Shape.dim dDim Spec.Shape.scalar)))))

                                      Einsum pattern used in attention: bhij,bhjd -> bhid (batched Attn·V per head).
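
A usage sketch: applying attention weights to values through the typed wrapper, so the contraction is shape-checked at compile time with no string parsing. The return shape is taken from the `bhij,bhjd -> bhid` description above.

```lean
-- Sketch: typed Attn·V per head, shapes checked statically.
def applyAttn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {b h i j d : ℕ}
    (attn : RefTy m α (Spec.Shape.dim b (Spec.Shape.dim h
      (Spec.Shape.dim i (Spec.Shape.dim j Spec.Shape.scalar)))))
    (v : RefTy m α (Spec.Shape.dim b (Spec.Shape.dim h
      (Spec.Shape.dim j (Spec.Shape.dim d Spec.Shape.scalar))))) :
    m (RefTy m α (Spec.Shape.dim b (Spec.Shape.dim h
      (Spec.Shape.dim i (Spec.Shape.dim d Spec.Shape.scalar))))) :=
  F.einsumBhijBhjdBhid attn v
```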

                                      Instances For

                                        General einsum (PyTorch-style subscripts; runtime-checked) #

                                        Decidable instance for Shape.well_formed, used by the dynamic einsum lowering.

                                        Instances For
                                          @[implicit_reducible]

                                          Local decidability instance for Shape.well_formed (used by the dynamic einsum lowering).

                                          Label used by the dynamic einsum parser.

                                          chr c is a concrete axis label like 'i' or 'j', while ell k is a generated ellipsis label that stands for "some number of unnamed batch-like axes".

                                          Instances For

                                            One operand’s subscript, split around an optional ellipsis.

For example, parsing "ab...cd" yields the labels a, b before the ellipsis and c, d after it.

                                            Instances For

                                              Remove ASCII whitespace to simplify the hand-rolled parser.

                                              Instances For

                                                Parse a single operand subscript (with at most one ...).

                                                Instances For

                                                  Parsed einsum equation: input subscripts and an optional explicit output subscript.

                                                  Instances For

                                                    Parse an equation of the form "a,b->c" or "a,b" (implicit output).

                                                    Instances For

                                                      Detect whether a list of labels contains any duplicates (order-preserving scan).

                                                      Instances For

                                                        Find the first index of a label (like List.findIdx?, but returning an Option Nat).

                                                        Instances For

                                                          Swap adjacent elements at position d (used to implement permutations via adjacent swaps).

                                                          Instances For

                                                            Convert a permutation of axes into a sequence of adjacent swaps.

                                                            This mirrors the IR-side lowering strategy: represent a general permutation as a list of swaps at depths, then implement swaps with swapAdjacentAtDepth.
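
The idea can be sketched on plain lists (this is an illustration of the decomposition, not the file's actual implementation): an adjacent swap at depth `d` exchanges positions `d` and `d+1`, and a sequence of such swaps realizes a general permutation.

```lean
-- Plain-list sketch: adjacent swap at a depth, then fold a swap list.
def swapAdjacent {γ : Type} : ℕ → List γ → List γ
  | 0, x :: y :: rest => y :: x :: rest          -- swap the head pair
  | Nat.succ d, x :: rest => x :: swapAdjacent d rest  -- recurse past the head
  | _, l => l                                    -- out of bounds: no-op

def applySwaps {γ : Type} (l : List γ) (swaps : List ℕ) : List γ :=
  swaps.foldl (fun acc d => swapAdjacent d acc) l

-- [0, 1, 2] --depth 1--> [0, 2, 1] --depth 0--> [2, 0, 1]
example : applySwaps [0, 1, 2] [1, 0] = [2, 0, 1] := rfl
```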

                                                            Instances For
                                                              @[irreducible]
                                                              Instances For

                                                                Expand an input operand’s labels to a full label list matching the operand’s rank.

                                                                If the subscript contains an ellipsis, this inserts fresh Label.ell labels so that the total label count matches Shape.rank s.

                                                                Instances For

                                                                  Expand output labels, materializing the full ellipsis range [0..maxEll) when present.

                                                                  Instances For

                                                                    Small association-list helpers #

                                                                    To keep this file dependency-light, we represent maps as association lists and use small helpers instead of Std.HashMap.

                                                                    @[reducible, inline]

                                                                    Occurrence counts for labels, represented as an association list.

                                                                    Instances For

                                                                      Lookup a label count.

                                                                      Instances For

                                                                        Lookup with default for Counts.

                                                                        Instances For

                                                                          Increment a label’s count (inserting it if absent).

                                                                          Instances For

                                                                            Count all labels across a list of operands.

                                                                            Instances For

                                                                              Keep first occurrences of labels, preserving order.

                                                                              Instances For
                                                                                @[reducible, inline]

                                                                                Map each label to its concrete dimension size (association list).

                                                                                Instances For

                                                                                  Lookup a label’s dimension size.

                                                                                  Instances For

                                                                                    Lookup with default for DimMap.

                                                                                    Instances For

                                                                                      Insert/update a label’s dimension size in a DimMap.

                                                                                      Instances For

                                                                                        Infer a consistent label-to-dimension map from operand label lists and operand shapes.

                                                                                        This implements standard einsum broadcast rules: if a label is seen with both d and 1, we keep d; if two non-1 sizes disagree, we error.
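
The per-label merge rule described above can be sketched as a small total function (illustrative only; the file's inference works over whole operand lists, not single pairs):

```lean
-- Sketch: merge two observed sizes for one einsum label.
def mergeDim : ℕ → ℕ → Option ℕ
  | 1, d => some d                           -- size 1 broadcasts up to d
  | d, 1 => some d
  | d, e => if d = e then some d else none   -- disagreeing non-1 sizes: error

example : mergeDim 3 1 = some 3 := rfl
example : mergeDim 3 4 = none := rfl
```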

                                                                                        Instances For

                                                                                          Compute a Shape.CanBroadcastTo witness at runtime.

                                                                                          This mirrors the typeclass-based broadcasting used elsewhere, but returns an Option so the dynamic einsum lowering can fail gracefully.

                                                                                          Instances For
def Runtime.Autograd.TorchLean.F.Einsum.permuteBySwaps {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (x : (s : Spec.Shape) × RefTy m α s) (swaps : List ℕ) :
                                                                                            m ((s : Spec.Shape) × RefTy m α s)

                                                                                            Apply a permutation expressed as adjacent swap depths to an existentially-shaped tensor.

                                                                                            This is the runtime “apply swaps” primitive used by both .permute and the dynamic einsum output reordering.

                                                                                            Instances For

                                                                                              Reflexive broadcast witness constructor (s can always broadcast to itself).

                                                                                              Instances For

                                                                                                Remove the element at index n (0-based), leaving the list unchanged if out of bounds.

                                                                                                Instances For

                                                                                                  Compute a permutation that maps src to tgt when duplicates are present.

                                                                                                  This is used for the “diagonal embedding” case when output labels contain repeats: we temporarily expand the output with extra axes, then permute back to the requested (possibly-duplicated) order.

                                                                                                  Instances For

                                                                                                    Build a diagonal mask tensor (spec-level) for diagonal embedding/extraction.

                                                                                                    Given axes p and q, the resulting tensor is 1 when the indices along those axes agree, and 0 otherwise. The ip/iq parameters track the first-seen indices while recursing over dimensions.

                                                                                                    Instances For

                                                                                                      Convenience wrapper around diagMaskSpecAux with fresh index-tracking state.

                                                                                                      Instances For

                                                                                                        Shape.ofList is a left-inverse of Shape.toList.

                                                                                                        Specialize diagMaskSpec to a concrete Shape by rewriting through Shape.toList.

                                                                                                        Instances For

                                                                                                          Return the first duplicate label in xs, along with its original and duplicate positions.

                                                                                                          Instances For

                                                                                                            Shape.appendDim s 1 preserves Shape.size (used to justify reshape tricks).

                                                                                                            Permutation list that moves axis to the last position (keeping relative order of others).

                                                                                                            Instances For
def Runtime.Autograd.TorchLean.F.einsumDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (equation : String) (xs : List ((s : Spec.Shape) × RefTy m α s)) :
                                                                                                              m (Option ((s : Spec.Shape) × RefTy m α s))

                                                                                                              Runtime-checked einsum that returns an existential output shape.

                                                                                                              Supported:

                                                                                                              • multiple inputs, explicit/implicit output, and ellipsis (...).
                                                                                                              • repeated labels within an operand (diagonal extraction / trace semantics).
                                                                                                              • repeated labels in the output (diagonal embedding / zeroing off-diagonal entries).

                                                                                                              Currently unsupported (returns none):

                                                                                                              • non-broadcastable size mismatches.
                                                                                                              • any case that would require gather/scatter-style indexing (not in the verifier-friendly op set).

                                                                                                              This is implemented purely by reordering, reshaping, broadcasting, elementwise multiplication, and summing contracted axes.
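
A usage sketch: a plain matmul routed through the string interface. The operands are packed as shape/ref sigma pairs, and `none` signals that the equation or shapes were rejected at runtime.

```lean
-- Sketch: "ij,jk->ik" via the dynamic einsum; the fast path should hit the
-- typed matmul wrapper described earlier.
def matmulDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type}
    [Monad m] [Ops m α] {i j k : ℕ}
    (a : RefTy m α (Spec.Shape.dim i (Spec.Shape.dim j Spec.Shape.scalar)))
    (b : RefTy m α (Spec.Shape.dim j (Spec.Shape.dim k Spec.Shape.scalar))) :
    m (Option ((s : Spec.Shape) × RefTy m α s)) :=
  F.einsumDyn "ij,jk->ik" [⟨_, a⟩, ⟨_, b⟩]
```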

                                                                                                              Instances For
def Runtime.Autograd.TorchLean.F.einsumDyn.diagonalizeOperand {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (fuel : ℕ) (cur : (s : Spec.Shape) × RefTy m α s) (labs : List Einsum.Label) :
                                                                                                                OptionT m (((s : Spec.Shape) × RefTy m α s) × List Einsum.Label)
                                                                                                                Instances For
def Runtime.Autograd.TorchLean.F.einsumDyn.reduceContracted {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (n : ℕ) (cur : (s : Spec.Shape) × RefTy m α s) :
                                                                                                                  OptionT m ((s : Spec.Shape) × RefTy m α s)
                                                                                                                  Instances For
def Runtime.Autograd.TorchLean.F.einsum {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {sOut : Spec.Shape} (equation : String) (xs : List ((s : Spec.Shape) × RefTy m α s)) :
                                                                                                                    m (Option (RefTy m α sOut))

                                                                                                                    einsum with an expected output shape.

                                                                                                                    Instances For

                                                                                                                      Shape/axis helpers #

def Runtime.Autograd.TorchLean.F.swapAdjacentAtDepth {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (depth : ℕ) (x : RefTy m α s) :
                                                                                                                      m (RefTy m α (s.swapAdjacentAtDepth depth))

                                                                                                                      Swap two adjacent axes at a given nesting depth.

                                                                                                                      This is the primitive used to implement general permutations via a sequence of adjacent swaps. It corresponds to the backend op Torch.swapAdjacentAtDepth.

                                                                                                                      Instances For

                                                                                                                        Core tensor semantics (PyTorch-style) #

                                                                                                                        Detect duplicate Nats (used to validate axis lists at runtime).

                                                                                                                        Instances For

                                                                                                                          Insert x into a list kept in descending order.

                                                                                                                          Instances For

                                                                                                                            Sort a list of Nats in descending order (small insertion sort).

                                                                                                                            Instances For

                                                                                                                              Swap depths that move an axis to the last position (for “reduce along axis” lowering).

                                                                                                                              Instances For

                                                                                                                                Swap depths that move an axis to the front position.

                                                                                                                                Instances For

                                                                                                                                  Decidable Shape.well_formed for the dynamic reduction/slicing helpers.

                                                                                                                                  Instances For
                                                                                                                                    @[implicit_reducible]

                                                                                                                                    Local decidability instance for Shape.well_formed (used by dynamic reduction/slicing helpers).

                                                                                                                                    Shape.appendDim s 1 preserves size (used to justify reshape in unsqueeze/keepdim code).

def Runtime.Autograd.TorchLean.F.permuteDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axes : List ℕ) (x : RefTy m α s) :
                                                                                                                                    m (Option ((s' : Spec.Shape) × RefTy m α s'))

                                                                                                                                    Dynamic permutation: like permute, but returns an existential output shape.

                                                                                                                                    PyTorch analogue: torch.permute / Tensor.permute (with runtime checks).

                                                                                                                                    Instances For
def Runtime.Autograd.TorchLean.F.permute {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s sOut : Spec.Shape} (axes : List ℕ) (x : RefTy m α s) :
                                                                                                                                      m (Option (RefTy m α sOut))

                                                                                                                                      Permutation with an expected output shape.

                                                                                                                                      This is a thin wrapper over permuteDyn that checks the computed shape equals sOut.

                                                                                                                                      Instances For
def Runtime.Autograd.TorchLean.F.Internal.reduceAlongLastSum {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (x : (s : Spec.Shape) × RefTy m α s) :
                                                                                                                                        m (Option ((s' : Spec.Shape) × RefTy m α s'))

                                                                                                                                        Reduce along the last axis with sum, returning the new (existential) shape.

                                                                                                                                        This is the primitive step used by reduceDimsDynCore after it has permuted the requested axis to the last position.

                                                                                                                                        Instances For
def Runtime.Autograd.TorchLean.F.Internal.reduceAlongLastMean {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (x : (s : Spec.Shape) × RefTy m α s) :
                                                                                                                                          m (Option ((s' : Spec.Shape) × RefTy m α s'))

                                                                                                                                          Like reduceAlongLastSum, but using mean.

                                                                                                                                          Instances For
def Runtime.Autograd.TorchLean.F.Internal.reduceDimsDynCore {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (reduceLast : (s : Spec.Shape) × RefTy m α s → m (Option ((s' : Spec.Shape) × RefTy m α s'))) {s : Spec.Shape} (axes : List ℕ) (keepdim : Bool) (x : RefTy m α s) :
                                                                                                                                            m (Option ((s' : Spec.Shape) × RefTy m α s'))

                                                                                                                                            Core implementation for dynamic reductions over multiple axes.

                                                                                                                                            This lowers “reduce along axis k” to:

                                                                                                                                            1. permute axis k to the last position,
                                                                                                                                            2. call reduceLast, and
                                                                                                                                            3. optionally re-insert a singleton dimension when keepdim = true.

                                                                                                                                            reduce_sum_dimsDyn and reduce_mean_dimsDyn are just specializations.

                                                                                                                                            Instances For
def Runtime.Autograd.TorchLean.F.reduceSumDimsDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axes : List ℕ) (x : RefTy m α s) (keepdim : Bool := false) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic multi-axis sum reduction (like torch.sum(x, dim=axes, keepdim=...)).

Instances For
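A minimal usage sketch, assuming the ambient `Context`/`Ops` instances are in scope. The `open`/`variable` preamble and the definition name `sumKeep02` are illustrative assumptions, not part of this API:

```lean
open Runtime.Autograd.TorchLean

variable {α : Type} [Context α] [DecidableEq Spec.Shape]
variable {m : Type → Type} [Monad m] [Ops m α]

/-- Sum `x` over axes 0 and 2, keeping the reduced axes as singleton
dimensions, analogous to `torch.sum(x, dim=(0, 2), keepdim=True)`.
The output shape is existential, so the result is an `Option` sigma
rather than a tensor with a statically known shape. -/
def sumKeep02 {s : Spec.Shape} (x : RefTy m α s) :
    m (Option ((s' : Spec.Shape) × RefTy m α s')) :=
  F.reduceSumDimsDyn [0, 2] x (keepdim := true)
```

Callers pattern-match on the returned `Option ((s' : Spec.Shape) × RefTy m α s')` to recover the reduced tensor together with its runtime shape.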
def Runtime.Autograd.TorchLean.F.reduceMeanDimsDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axes : List ℕ) (x : RefTy m α s) (keepdim : Bool := false) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic multi-axis mean reduction (like torch.mean(x, dim=axes, keepdim=...)).

Instances For
def Runtime.Autograd.TorchLean.F.sliceRangeAxisDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis start len : ℕ) (x : RefTy m α s) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic slice on an arbitrary axis.

This lowers slice_range_axisDyn axis start len to:

1. permute axis to the front,
2. call the axis-0 slice primitive, then
3. permute back.

Instances For
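A usage sketch under the same assumptions as above (preamble and the name `middleSlice` are illustrative):

```lean
open Runtime.Autograd.TorchLean

variable {α : Type} [Context α] [DecidableEq Spec.Shape]
variable {m : Type → Type} [Monad m] [Ops m α]

/-- Take 3 elements starting at index 2 along axis 1, roughly `x[:, 2:5]`
in PyTorch indexing. The `Option` result presumably signals an invalid
axis or out-of-range slice at runtime. -/
def middleSlice {s : Spec.Shape} (x : RefTy m α s) :
    m (Option ((s' : Spec.Shape) × RefTy m α s')) :=
  F.sliceRangeAxisDyn 1 2 3 x
```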
def Runtime.Autograd.TorchLean.F.softmaxDimDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis : ℕ) (x : RefTy m α s) :
m (Option (RefTy m α s))

Dynamic softmax over an arbitrary axis (implemented by permuting to last, applying softmax, permuting back).

Instances For
def Runtime.Autograd.TorchLean.F.logSoftmaxDimDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis : ℕ) (x : RefTy m α s) (ε : α := Numbers.epsilon) :
m (Option (RefTy m α s))

Dynamic log_softmax over an arbitrary axis (with optional epsilon for numerical stability).

Instances For
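Note that, unlike the reductions above, the axis is preserved, so the output keeps the input shape `s`. A sketch (preamble and the name `classLogProbs` are illustrative):

```lean
open Runtime.Autograd.TorchLean

variable {α : Type} [Context α] [DecidableEq Spec.Shape]
variable {m : Type → Type} [Monad m] [Ops m α]

/-- Log-probabilities over the class axis of an NCHW activation:
log-softmax along axis 1, like `F.log_softmax(x, dim=1)` in PyTorch.
The default `ε := Numbers.epsilon` is used for numerical stability. -/
def classLogProbs {s : Spec.Shape} (x : RefTy m α s) :
    m (Option (RefTy m α s)) :=
  F.logSoftmaxDimDyn 1 x
```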
def Runtime.Autograd.TorchLean.F.unsqueezeDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis : ℕ) (x : RefTy m α s) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic unsqueeze: insert a singleton dimension at axis.

PyTorch analogue: torch.unsqueeze(x, dim=axis).

Instances For
def Runtime.Autograd.TorchLean.F.squeezeDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis : ℕ) (x : RefTy m α s) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic squeeze along a specific axis, requiring that axis to have size 1.

PyTorch analogue: torch.squeeze(x, dim=axis) (the dim-restricted variant).

Instances For
def Runtime.Autograd.TorchLean.F.catAxis2Dyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (axis : ℕ) (x y : (s : Spec.Shape) × RefTy m α s) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic concatenation of two tensors along axis (existential output shape).

This is the binary helper used by cat_axisDyn. It lowers to concat_dim0 by moving the requested axis to the front.

Instances For
def Runtime.Autograd.TorchLean.F.catAxisDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (axis : ℕ) (xs : List ((s : Spec.Shape) × RefTy m α s)) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic concatenation of a list of tensors along axis (folding cat_axis2Dyn).

Instances For
def Runtime.Autograd.TorchLean.F.stackAxisDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] (axis : ℕ) (xs : List ((s : Spec.Shape) × RefTy m α s)) :
m (Option ((s' : Spec.Shape) × RefTy m α s'))

Dynamic stack along a new axis.

PyTorch analogue: torch.stack(xs, dim=axis).

Implementation: unsqueeze each input at axis, then cat along the same axis.

Instances For
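A sketch of calling it with statically same-shaped inputs (preamble and the name `stack3` are illustrative):

```lean
open Runtime.Autograd.TorchLean

variable {α : Type} [Context α] [DecidableEq Spec.Shape]
variable {m : Type → Type} [Monad m] [Ops m α]

/-- Stack three same-shaped tensors along a new leading axis, like
`torch.stack([a, b, c], dim=0)`. Inputs are packed as sigma pairs
`⟨shape, tensor⟩` because the dynamic API erases shapes. -/
def stack3 {s : Spec.Shape} (a b c : RefTy m α s) :
    m (Option ((s' : Spec.Shape) × RefTy m α s')) :=
  F.stackAxisDyn 0 [⟨s, a⟩, ⟨s, b⟩, ⟨s, c⟩]
```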
def Runtime.Autograd.TorchLean.F.splitAxisDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis : ℕ) (splitSizes : List ℕ) (x : RefTy m α s) :
m (Option (List ((s' : Spec.Shape) × RefTy m α s')))

Dynamic split along an axis with explicit split sizes.

PyTorch analogue: torch.split(x, split_sizes, dim=axis).

Instances For
def Runtime.Autograd.TorchLean.F.chunkAxisDyn {α : Type} [Context α] [DecidableEq Spec.Shape] {m : Type → Type} [Monad m] [Ops m α] {s : Spec.Shape} (axis chunkSize : ℕ) (x : RefTy m α s) :
m (Option (List ((s' : Spec.Shape) × RefTy m α s')))

Dynamic chunk along an axis, given a desired chunk size.

PyTorch analogue: torch.split(x, chunkSize, dim=axis); note that torch.chunk is the count-based variant (it takes a number of chunks rather than a chunk size).

Instances For
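The two splitting styles can be contrasted in one sketch (preamble and the name `splitVsChunk` are illustrative):

```lean
open Runtime.Autograd.TorchLean

variable {α : Type} [Context α] [DecidableEq Spec.Shape]
variable {m : Type → Type} [Monad m] [Ops m α]

/-- Uneven split into pieces of sizes 2 and 3 along axis 0
(`torch.split(x, [2, 3], dim=0)`) next to fixed-size chunks of 2
(`torch.split(x, 2, dim=0)`). Each piece carries its runtime shape. -/
def splitVsChunk {s : Spec.Shape} (x : RefTy m α s) :
    m (Option (List ((s' : Spec.Shape) × RefTy m α s')) ×
       Option (List ((s' : Spec.Shape) × RefTy m α s'))) := do
  let parts ← F.splitAxisDyn 0 [2, 3] x
  let chunks ← F.chunkAxisDyn 0 2 x
  return (parts, chunks)
```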

NCHW → NHWC for 4D tensors, implemented via two adjacent swaps.

Instances For

NHWC → NCHW for 4D tensors, implemented via two adjacent swaps.

Instances For