Why 0.1 + 0.2 = 0.30000000000000004
To understand floating-point representation, let’s start with how computers handle numbers.
Representing Numbers with Bits
Computers use bits (0s and 1s) to represent data. The simplest scheme is integers: an 8-bit unsigned integer can represent values from 0 to 255, while a signed 8-bit integer covers -128 to +127. But what if we want to represent fractional values, or numbers that are very small or very large?
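To make the unsigned/signed distinction concrete, here's a small Python sketch (illustrative only — the function names are my own) that interprets the same 8 bits both ways, using two's complement for the signed case:

```python
# Interpret the same 8 bits as unsigned vs. signed (two's complement).
def as_unsigned_8bit(bits: int) -> int:
    return bits & 0xFF

def as_signed_8bit(bits: int) -> int:
    value = bits & 0xFF
    # In two's complement, the top bit carries a weight of -128.
    return value - 256 if value >= 128 else value

print(as_unsigned_8bit(0b11111111))  # 255
print(as_signed_8bit(0b11111111))    # -1
print(as_signed_8bit(0b10000000))    # -128
print(as_unsigned_8bit(0b01111111))  # 127
```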
Fixed-Point Representation: Introducing Decimals
To represent decimals, we can use a fixed-point format, where some bits represent the integer part and others the fraction. This setup requires that the decimal (or binary) point be fixed in one specific place for every number.
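As a rough sketch (using a toy 16.16 format rather than any particular hardware convention), a fixed-point number is just an integer scaled by a fixed power of two — arithmetic on the stored values is plain integer arithmetic:

```python
# Toy fixed-point format: 16 integer bits, 16 fractional bits.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS  # 65536

def to_fixed(x: float) -> int:
    # Store x as an integer count of 1/65536 units.
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    return f / SCALE

a = to_fixed(3.25)        # stored as 3.25 * 65536 = 212992
b = to_fixed(1.5)
print(from_fixed(a + b))  # 4.75 -- adding fixed-point values is ordinary integer addition
```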

But this setup is highly inefficient. Let’s say we want to store a relatively large number like 4,000,000,000 in a 64-bit fixed-point format, with the point fixed in the middle. Here’s how it would look:
11101110011010110010100000000000.00000000000000000000000000000000
In this representation, the number uses all 32 bits of the integer part. But because 4,000,000,000 is a whole number, the entire fractional part consists of 32 wasted 0s. These bits add no value and are effectively just padding, consuming memory without contributing useful information. (Go much larger — say 8,000,000,000 — and the number no longer fits in the 32-bit integer part at all.)
Now imagine scaling this setup. Modern systems typically use 32 or 64 bits to store numbers. With a fixed-point system, we’d need a lot of extra bits to store truly large numbers, leading to significant memory waste. Floating-point representation was designed to avoid this by allowing the “point” to float, so we can represent a wide range of values without the padding waste.
This flexibility is key for day-to-day computing, where most applications don’t require exact precision, just a reasonable approximation. But understanding the trade-offs is essential: while floating-point saves space, it introduces small errors that can be critical in cases where precision matters.
Floating-Point Representation: The Scientific Notation Solution
Scientists faced a similar problem when needing to express both massive and minuscule values efficiently, so they developed scientific notation to represent numbers compactly. Floating-point uses this same principle but in binary. Bits are divided into three parts:
- Sign bit – Determines if a number is positive or negative.
- Exponent – Decides where the binary point “floats,” allowing representation of a vast range of values.
- Mantissa (or significand) – Holds the actual digits, capturing as much precision as possible.
The IEEE 754 floating-point standard uses this structure, allowing computers to represent numbers across a wide range using just 32 or 64 bits.
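We can actually peek at these three fields. This Python sketch unpacks a 64-bit double into its IEEE 754 pieces — 1 sign bit, 11 exponent bits, and 52 mantissa bits:

```python
import struct

def decompose(x: float):
    # Reinterpret the 8 bytes of a double as a 64-bit unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63                   # 1 bit
    exponent = (bits >> 52) & 0x7FF     # 11 bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)   # 52 bits (fraction after the implicit leading 1)
    return sign, exponent, mantissa

sign, exponent, mantissa = decompose(-6.0)
print(sign)             # 1 (negative)
print(exponent - 1023)  # 2 -- unbiased exponent: -6.0 is -1.5 * 2**2
print(hex(mantissa))    # 0x8000000000000 -- the fraction bits encoding the .5
```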

Why Doesn’t 0.1 + 0.2 Equal 0.3?
Certain decimal values, like 0.1 and 0.2, can’t be represented exactly in binary. For example, 0.1 in binary becomes a repeating fraction: 0.00011001100110011... and so on, similar to how 1/3 is 0.333333333… in decimal. Because computers have a finite number of bits to store each number, they have to cut off the fraction at a certain point, making 0.1 an approximation rather than an exact value.
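In Python, the decimal module can reveal the exact binary value that the literal 0.1 is actually stored as:

```python
from decimal import Decimal

# Constructing a Decimal from a float exposes the float's exact stored value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(format(0.1, '.20f'))  # 0.10000000000000000555
```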
This is similar to significant figures in scientific notation, where we decide on a certain level of precision. For instance, when measuring something to two significant figures, we accept that we’re only capturing an approximate value. In computing, the bits used in floating-point numbers set this limit, and when two approximated values like 0.1 and 0.2 are added, the result can accumulate tiny errors due to rounding. This is why 0.1 + 0.2 in many programming languages yields 0.30000000000000004 instead of exactly 0.3.
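You can see this in any Python REPL, along with the usual workaround: compare floats with a tolerance rather than exact equality.

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False -- exact equality fails

# Compare with a tolerance instead:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```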
Although memes make fun of JavaScript for this, it’s a result of IEEE 754 and affects almost every language. Floating-point arithmetic is designed to balance the need for range with a reasonable level of precision, but it does mean that some values are only close approximations.
Closing Thoughts
Floating-point representation is a clever system that lets computers efficiently handle a wide range of values, from extremely large to very small, with a limited number of bits. But the trade-off is precision: floating-point numbers are not exact, and tiny rounding errors are part of the package. For most practical purposes, this approximation is close enough. But knowing when precision matters (and understanding the quirks of floating-point math) can help avoid surprises in calculations for critical tasks (e.g. banking).
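For tasks like banking, one common option (shown here in Python, though most languages offer an equivalent) is a decimal type that does exact base-10 arithmetic, trading speed for correctness:

```python
from decimal import Decimal

# Exact base-10 arithmetic: build Decimals from strings, not floats.
price = Decimal('0.10')
tax = Decimal('0.20')
print(price + tax)                      # 0.30 -- exact
print(price + tax == Decimal('0.30'))   # True
```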
These notes are based on my own learning journey, with additional insights from these excellent resources:
- How Floating Point Works [YouTube]
- Floating Point Numbers – Computerphile [YouTube]
- Computer Systems: A Programmer’s Perspective [Book]