Integer overflow checks cost ~3-5% on typical workloads, but clang's sanitizer, with diagnostics enabled, generates poor enough code to inflate overhead to 28% on bzip2 compression.
Key Takeaways
Theoretical worst-case: a 2x penalty per add/sub; across a SPECint-like workload mix (40% load/store, 10% branches, 50% other), that works out to only ~3% total slowdown.
The gap is a compiler bug, not a fundamental hardware limit: clang picked %ebx, a callee-saved register, as the destination, forcing a push/pop pair and blocking further register-allocation optimizations.
Simple sum loops hit 4-6x slowdown under -fsanitize because the sanitizer prevents SSE vectorization.
clang 3.8+ and gcc 5+ fix the register allocation issue; gcc’s -ftrapv only checks signed overflow and has been broken since 2008.
Hacker News Comment Review
No substantive HN discussion yet; the resubmission was flagged as timely given recent Linux kernel integer overflow work (lwn.net/Articles/1065889).