TorchLean API

NN.Examples.BugZoo.RoPEPosition

BugZoo: RoPE position accounting

Studies of LLM inference-engine bugs report cache, RoPE, tokenizer/config, batching, and resource-boundary bugs across real serving stacks:

https://arxiv.org/abs/2506.09713

TorchLean does not model every production RoPE kernel, but it can make the position schedule explicit. A KV-cache or decode importer should not infer positions from ambient mutable state; it should hand over a schedule that Lean can inspect.

A position schedule gives the rotary/absolute position used for every token slot.
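As a sketch, such a schedule can be modeled as a single total map from token slots to positions. The field name `pos` below is an assumption for illustration, not necessarily the actual TorchLean definition:

```lean
/-- Illustrative sketch only: a position schedule assigns a
rotary/absolute position to each of the `seqLen` token slots. -/
structure PositionSchedule (seqLen : Nat) where
  pos : Fin seqLen → Nat
```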


    Append one decode step using the canonical next position `seqLen`.

    Existing positions are preserved, and the new final token is assigned the next sequence index.

      theorem NN.Examples.BugZoo.RoPEPosition.appendNextPosition_last {seqLen : Nat} (sched : PositionSchedule seqLen) :
      (appendNextPosition sched).pos seqLen = seqLen

      The newly appended token gets exactly the next position, not an off-by-one cache position.
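A self-contained Lean sketch of how the append step and this lemma could fit together, assuming a schedule is just a map from slots to positions. The definitions mirror the names above but are illustrative, not the actual TorchLean code:

```lean
/-- Illustrative stand-in for the schedule type (assumed shape). -/
structure PositionSchedule (seqLen : Nat) where
  pos : Fin seqLen → Nat

/-- Sketch of one decode step: old slots keep their positions and the
new final slot gets the canonical next position `seqLen`. -/
def appendNextPosition {seqLen : Nat} (sched : PositionSchedule seqLen) :
    PositionSchedule (seqLen + 1) where
  pos i := if h : i.val < seqLen then sched.pos ⟨i.val, h⟩ else seqLen

/-- The appended token sits at exactly position `seqLen`: the branch
for preserved slots cannot fire at index `seqLen`, since
`seqLen < seqLen` is irreflexive. -/
theorem appendNextPosition_last {seqLen : Nat} (sched : PositionSchedule seqLen) :
    (appendNextPosition sched).pos ⟨seqLen, Nat.lt_succ_self _⟩ = seqLen :=
  dif_neg (Nat.lt_irrefl _)
```

Making the position explicit in the type (`Fin seqLen → Nat`) is what lets Lean state and check the off-by-one property directly, rather than trusting a mutable cache counter.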