Conv2D Forward Approximation #
NF (rounded) backend: Conv2D forward runtime→spec approximation.
This file proves soundness of the `NFBackend.conv2dPointBound` / `conv2dBoundTensor` bounds and
packages Conv2D as a `FwdNode` so it composes via `FwdGraph.eval_approx`.
PyTorch analogue: a forward Conv2D op (typically `torch.nn.functional.conv2d`) plus the standard
“stack over channels/spatial positions” tensor semantics.
https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html
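As a concrete (non-Lean) reference for those tensor semantics, a naive Conv2D forward pass with stride 1 and no padding can be sketched in plain Python; the function name and layout below are illustrative, not the library's API:

```python
# Illustrative sketch of the Conv2D spec semantics (stride 1, no padding),
# matching the layout of torch.nn.functional.conv2d:
#   x: (C_in, H, W), k: (C_out, C_in, KH, KW), b: (C_out,)
def conv2d_spec(x, k, b):
    c_in, h, w = len(x), len(x[0]), len(x[0][0])
    c_out, kh, kw = len(k), len(k[0][0]), len(k[0][0][0])
    # each output entry starts from the bias of its output channel
    out = [[[b[oc] for _ in range(w - kw + 1)] for _ in range(h - kh + 1)]
           for oc in range(c_out)]
    # accumulate kernel * input over the receptive field of each (oc, i, j)
    for oc in range(c_out):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                for ic in range(c_in):
                    for di in range(kh):
                        for dj in range(kw):
                            out[oc][i][j] += k[oc][ic][di][dj] * x[ic][i + di][j + dj]
    return out
```

For example, a 1×1 kernel of value 2 with bias 1 maps the input `[[[1, 2], [3, 4]]]` to `[[[3, 5], [7, 9]]]`.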
Map of this file #
- Small fold/indexing lemmas (`foldl_flatMap`, `entry_eq_scalar_get_at_or_zero3`) used to align the spec definition of convolution with the bound-generating replay in the runtime proof.
- `approx_conv2d_point`: elementwise forward error bound for a single `(out_ch, i, j)` output.
- `approxT_conv2d_spec`: tensor-level `approxT` statement obtained by lifting the pointwise bound.
- `conv2dNode`: packaging as a `FwdNode` so it composes inside larger graphs.
References #
- Dumoulin & Visin, A guide to convolution arithmetic for deep learning (indexing/stride/padding conventions).
- Goodfellow, Bengio, Courville, Deep Learning (MIT Press, 2016), Chapter on convolutional networks.
Pointwise forward soundness for Conv2D in the rounded NF backend.
Given spec/runtime approximations for the kernel, bias, and input, this bounds the absolute error
of a single output entry of `Spec.conv2dSpec` by `conv2dPointBound` (a replay-style bound
constructed from local per-term bounds).
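The shape of such a replay-style bound can be sketched numerically (in Python, not the Lean proof): for one `(out_ch, i, j)` entry, replay the accumulation and bound each product term via the triangle-inequality decomposition `k'·x' - k·x = k'·(x' - x) + (k' - k)·x`, then add the bias error. All function names below are illustrative assumptions, not the library's API:

```python
def conv2d_entry(k, b, x, oc, i, j):
    # one output entry of the naive Conv2D spec (stride 1, no padding)
    acc = b[oc]
    for ic in range(len(x)):
        for di in range(len(k[oc][ic])):
            for dj in range(len(k[oc][ic][di])):
                acc += k[oc][ic][di][dj] * x[ic][i + di][j + dj]
    return acc

def point_error_bound(k_run, k_spec, b_run, b_spec, x_run, x_spec, oc, i, j):
    # replay-style bound: for each term, |k'x' - kx| <= |k'||x' - x| + |k' - k||x|
    bound = abs(b_run[oc] - b_spec[oc])
    for ic in range(len(x_run)):
        for di in range(len(k_run[oc][ic])):
            for dj in range(len(k_run[oc][ic][di])):
                kr, ks = k_run[oc][ic][di][dj], k_spec[oc][ic][di][dj]
                xr, xs = x_run[ic][i + di][j + dj], x_spec[ic][i + di][j + dj]
                bound += abs(kr) * abs(xr - xs) + abs(kr - ks) * abs(xs)
    return bound
```

The soundness claim then says exactly that `|conv2d_entry(run) - conv2d_entry(spec)|` never exceeds this bound for any valid entry.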
Tensor-level forward approximation for Conv2D (`approxT`).
This lifts `approx_conv2d_point` to an `approxT` statement for the full output tensor, with a
global error bound `linfNorm (conv2dBoundTensor ...)` that upper-bounds every pointwise error entry.
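The lifting step itself is elementary and can be sketched numerically (illustrative Python, not the Lean statement): if every entry's error is within its pointwise bound, then the l∞ norm of the error tensor is within the sup of those bounds. The helper names here are assumptions standing in for the spec-side norm:

```python
def linf_norm(t):
    # l∞ norm of a (C, H, W) tensor given as nested lists
    return max(abs(v) for ch in t for row in ch for v in row)

def lift_pointwise_to_tensor(spec_out, run_out, point_bounds):
    err = [[[r - s for r, s in zip(run_row, spec_row)]
            for run_row, spec_row in zip(run_ch, spec_ch)]
           for run_ch, spec_ch in zip(run_out, spec_out)]
    # premise: each entry's error is within its pointwise bound
    assert all(abs(e) <= pb
               for ech, pch in zip(err, point_bounds)
               for erow, prow in zip(ech, pch)
               for e, pb in zip(erow, prow))
    # conclusion: the l∞ norm of the error is within the sup of the bounds
    global_bound = max(pb for pch in point_bounds for prow in pch for pb in prow)
    assert linf_norm(err) <= global_bound
    return global_bound
```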
Package Conv2D as a `FwdNode` for use in `FwdGraph.eval_approx`.
The node reads the kernel, bias, and input from the typed context `Γ`, runs the spec and runtime
forward passes, and uses `approxT_conv2d_spec` as its soundness proof.
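The packaging idea can be sketched as follows (all names hypothetical, not the library's API): a node bundles a spec forward pass, a runtime forward pass, and an error bound, each reading its operands from a context dict standing in for `Γ`, and its soundness obligation says the runtime output stays within the bound of the spec output:

```python
class FwdNodeSketch:
    # hypothetical stand-in for the FwdNode packaging, not the Lean structure
    def __init__(self, spec_fn, run_fn, bound_fn):
        self.spec_fn = spec_fn    # exact (spec-side) forward pass
        self.run_fn = run_fn      # rounded (runtime) forward pass
        self.bound_fn = bound_fn  # error bound this node claims to satisfy

    def sound_on(self, gamma_spec, gamma_run):
        # the soundness obligation a node's proof discharges
        s, r = self.spec_fn(gamma_spec), self.run_fn(gamma_run)
        return abs(r - s) <= self.bound_fn(gamma_spec, gamma_run)

# toy scalar "1x1 conv": out = k * x + b, with a triangle-inequality bound
node = FwdNodeSketch(
    spec_fn=lambda g: g["k"] * g["x"] + g["b"],
    run_fn=lambda g: g["k"] * g["x"] + g["b"],
    bound_fn=lambda gs, gr: (abs(gr["k"]) * abs(gr["x"] - gs["x"])
                             + abs(gr["k"] - gs["k"]) * abs(gs["x"])
                             + abs(gr["b"] - gs["b"])),
)
```

A graph evaluator in the style of `FwdGraph.eval_approx` would then chain such nodes, composing each node's bound with the approximation guarantees on its inputs.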