BugZoo: RoPE position accounting
Empirical studies of LLM inference-engine bugs report cache, RoPE, tokenizer/config, batching, and resource-boundary bugs across real serving stacks:
https://arxiv.org/abs/2506.09713
TorchLean does not model every production RoPE kernel, but it can make the position schedule explicit. A KV-cache or decode importer should not infer positions from ambient mutable state; it should hand Lean a schedule that Lean can inspect.
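For concreteness, one way such an inspectable schedule could be represented. This is a hypothetical sketch: the name `PositionSchedule` appears in the signatures below, but this field layout is an assumption, not the actual TorchLean definition.

```lean
/-- Hypothetical sketch: an explicit RoPE position schedule for
`seqLen` tokens. Each token index is mapped to the integer
position handed to the rotary embedding, so the schedule is a
plain function Lean can inspect, not ambient mutable state. -/
structure PositionSchedule (seqLen : Nat) where
  /-- Position assigned to each token index. -/
  positions : Fin seqLen → Nat
```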
def NN.Examples.BugZoo.RoPEPosition.appendNextPosition {seqLen : Nat}
    (sched : PositionSchedule seqLen) : PositionSchedule (seqLen + 1)

Append one decode step using the canonical next position seqLen. Existing positions are preserved, and the new final token is assigned the next sequence index.
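A minimal sketch of how such an append step could be realized, assuming `PositionSchedule` carries a `positions : Fin seqLen → Nat` field and using Mathlib's `Fin.snoc`; both are assumptions, and the structure is restated so the snippet compiles on its own.

```lean
import Mathlib.Data.Fin.Tuple.Basic

/-- Hypothetical schedule representation (an assumption, not the
actual TorchLean definition). -/
structure PositionSchedule (seqLen : Nat) where
  positions : Fin seqLen → Nat

/-- Sketch: append one decode step. `Fin.snoc` keeps every
existing position and places the canonical next position
`seqLen` at the new final index. -/
def appendNextPosition {seqLen : Nat} (sched : PositionSchedule seqLen) :
    PositionSchedule (seqLen + 1) where
  positions := Fin.snoc sched.positions seqLen

/-- Existing positions are preserved by the append. -/
theorem appendNextPosition_castSucc {seqLen : Nat}
    (sched : PositionSchedule seqLen) (i : Fin seqLen) :
    (appendNextPosition sched).positions i.castSucc = sched.positions i := by
  simp [appendNextPosition, Fin.snoc_castSucc]
```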
theorem NN.Examples.BugZoo.RoPEPosition.appendNextPosition_last {seqLen : Nat}
    (sched : PositionSchedule seqLen) :
The newly appended token gets exactly the next position, not an off-by-one cache position.
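A sketch of how the statement and proof could look, again under the hypothetical representation (field `positions`, Mathlib's `Fin.snoc`), restated so the snippet stands alone.

```lean
import Mathlib.Data.Fin.Tuple.Basic

structure PositionSchedule (seqLen : Nat) where
  positions : Fin seqLen → Nat

def appendNextPosition {seqLen : Nat} (sched : PositionSchedule seqLen) :
    PositionSchedule (seqLen + 1) where
  positions := Fin.snoc sched.positions seqLen

/-- Sketch of the statement: the appended token sits at exactly
the next position `seqLen`, not at an off-by-one cache position.
`Fin.snoc` places the new value at `Fin.last`, so the proof is
`Fin.snoc_last` after unfolding. -/
theorem appendNextPosition_last {seqLen : Nat}
    (sched : PositionSchedule seqLen) :
    (appendNextPosition sched).positions (Fin.last seqLen) = seqLen := by
  simp [appendNextPosition, Fin.snoc_last]
```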