How Compilers Skip Multiplication
Multiplication is a costly operation on CPUs. While simple operations like addition or bit shifting take only a single cycle, an integer multiply can take 3 or more cycles even on modern processors.
But compilers are packed with clever tricks to avoid multiplication when possible. They’re constantly scanning for places where a simple shift, add, or subtract can replace multiplication without losing accuracy.
Here’s a simple example: take 13, which in binary is 1101. Shifting this left by 2 gives the 6-bit pattern 110100, representing 13 × 4 = 52. So instead of multiplying by 4, we can simply shift left by 2.
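The identity is easy to check for yourself; a quick sketch in Python (the compiler does this rewrite on machine integers, but the bit-level math is the same):

```python
# Multiplying by a power of two is the same as shifting left:
# x * 4 == x << 2 for any integer x.
for x in range(1000):
    assert x << 2 == x * 4

print(13, bin(13))            # 13 0b1101
print(13 << 2, bin(13 << 2))  # 52 0b110100
```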
Now say you’re multiplying by 14. A direct x * 14 would do the trick, but compilers are smarter than that. Knowing that multiplication is more expensive, they’ll break it down by spotting that 14 = 2³ + 2² + 2¹, then rewrite x * 14 as (x << 3) + (x << 2) + (x << 1). So now, instead of one big multiplication, you get three quick shifts and two additions.
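Sketched in Python (the helper name times_14 is just for illustration, not something a compiler emits):

```python
def times_14(x):
    # 14 = 2**3 + 2**2 + 2**1, so x * 14 decomposes into
    # three shifts and two additions.
    return (x << 3) + (x << 2) + (x << 1)

# The rewrite is exact for every integer, positive or negative.
for x in range(-1000, 1000):
    assert times_14(x) == x * 14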
Alongside strength reduction (replacing multiplications with shifts, as shown above), compilers also use constant folding (computing products of known constants at compile time), reciprocal multiplication (replacing division by a constant with a multiply by a precomputed "magic" constant plus a shift, since integer division is even slower than multiplication), and Karatsuba multiplication (breaking large multiplications into smaller parts, mainly in big-integer libraries). All these tricks work together to cut down on cycle costs, so your code runs faster without you lifting a finger.
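As a taste of reciprocal multiplication, here is a sketch of the classic rewrite of unsigned 32-bit division by 3: multiply by the magic constant 0xAAAAAAAB (which is ⌈2³³ / 3⌉) and shift right by 33. The helper name div3 is hypothetical; real compilers pick the constant and shift per divisor and bit width.

```python
MAGIC = 0xAAAAAAAB  # ceil(2**33 / 3)

def div3(x):
    # Hypothetical helper mirroring the compiler's rewrite of x / 3
    # for unsigned 32-bit x: one multiply and one shift, no division.
    return (x * MAGIC) >> 33

# The identity holds for every 32-bit unsigned value.
for x in (0, 1, 2, 3, 100, 12345, 2**32 - 1):
    assert div3(x) == x // 3
```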