Integer overflows
Integer types have a fixed size.
For x86-64 Linux:
| type | bits | min | max |
|------|------|-----|-----|
| signed char | 8 | $-2^{7}$ | $2^{7}-1$ |
| unsigned char | 8 | 0 | $2^{8}-1$ |
| short | 16 | $-2^{15}$ | $2^{15}-1$ |
| unsigned short | 16 | 0 | $2^{16}-1$ |
| int | 32 | $-2^{31}$ | $2^{31}-1$ |
| unsigned int | 32 | 0 | $2^{32}-1$ |
| long | 64 | $-2^{63}$ | $2^{63}-1$ |
| unsigned long | 64 | 0 | $2^{64}-1$ |
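These widths are ABI-dependent (e.g. `long` is 32 bits on 64-bit Windows). A minimal sketch to confirm them on your own machine using `sizeof` and `<limits.h>`, assuming a C compiler on x86-64 Linux:

```c
#include <limits.h>
#include <stdio.h>

/* Print the widths and ranges the compiler actually uses. */
int main(void) {
    printf("int:  %zu bits, [%d, %d]\n",
           sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
    printf("long: %zu bits, [%ld, %ld]\n",
           sizeof(long) * CHAR_BIT, LONG_MIN, LONG_MAX);
    printf("unsigned int max:  %u\n", UINT_MAX);
    printf("unsigned long max: %lu\n", ULONG_MAX);
    return 0;
}
```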
If a number doesn’t fit into its type, it overflows: the CPU discards the bits that don’t fit,
i.e. the result is computed modulo 2ⁿ (n = number of bits).
This leads to unexpected results in casts, computation, and comparison (see the sketch after this list):
- truncation: casting to a smaller type discards the high bits
- arithmetic overflow: the result wraps around
- signedness: a negative int is interpreted as a large unsigned value
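A compact sketch of all three cases, assuming GCC or Clang on x86-64 Linux. Note that signed overflow is undefined behavior in C; the wraparound behavior relied on here is what the hardware does, not a language guarantee.

```c
#include <stdio.h>

int main(void) {
    /* Truncation: casting to a smaller type keeps only the low bits. */
    int big = 0x12345678;
    unsigned char low = (unsigned char)big;        /* keeps only 0x78 = 120 */
    printf("truncation: 0x%x -> %u\n", (unsigned)big, (unsigned)low);

    /* Arithmetic overflow: unsigned arithmetic wraps modulo 2^n. */
    unsigned int u = 4294967295u;                  /* UINT_MAX = 2^32 - 1 */
    printf("wraparound: UINT_MAX + 1 = %u\n", u + 1u);   /* prints 0 */

    /* Signedness: in a mixed signed/unsigned comparison, the negative int
       is converted to unsigned and becomes a huge value. */
    int n = -1;
    unsigned int limit = 100;
    if ((unsigned int)n > limit)
        printf("signedness: (unsigned)-1 = %u > %u\n", (unsigned int)n, limit);
    return 0;
}
```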