quant_layernorm
Registers torch.nn.LayerNorm with QuantInputBase.
Enables LayerNorm output quantizers to be honored during quantization. This is required for FP8 attention fusion: a single QDQ placed on the LayerNorm output is shared by all downstream Q/K/V/FC consumers (instead of repeating a QDQ on each consumer's input), which lets TRT fuse the DQ into the attention MatMul kernels.
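A minimal sketch of why the single output QDQ suffices. The `fake_quant` helper is hypothetical and uses a uniform grid as a stand-in for FP8 (whose real grid is non-uniform); the point is only that quantizing the LayerNorm output once yields the same tensor that repeating the identical QDQ on each Q/K/V/FC input would:

```python
def fake_quant(xs, scale, qmax=448.0):
    # Hypothetical QDQ stand-in: quantize to a uniform grid, clamp,
    # then dequantize. Real FP8 E4M3 has a non-uniform grid; this
    # sketch only models the "quantize then dequantize" round trip.
    return [max(-qmax, min(qmax, round(x / scale) * scale)) for x in xs]

ln_out = [0.3, -1.1, 2.0, -0.45]  # toy LayerNorm output
scale = 0.125                     # power-of-two scale for exact arithmetic

# Option A: one shared QDQ on the LayerNorm output.
shared = fake_quant(ln_out, scale)

# Option B: repeat the same QDQ on each consumer's input.
q_in, k_in, v_in, fc_in = (fake_quant(ln_out, scale) for _ in range(4))

# Same tensor either way, so the single output QDQ carries the same
# information while giving TRT one pattern to match for fusion.
assert shared == q_in == k_in == v_in == fc_in
```

The practical difference is graph shape, not numerics: one QDQ node on the LayerNorm output is the pattern TRT recognizes for fusing the DQ into the attention MatMuls.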