One of the strangest quirks in JavaScript (and many other programming languages) is this: if you
type 0.1 + 0.2 in the console, you don't get 0.3. Instead, you get
0.30000000000000004. Wait, what?
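You can check this in any browser console or Node REPL; standard JavaScript engines all print the same thing:

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false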
The Confusion
When students first encounter this in my class, they think JavaScript is broken. "How can a computer not do basic math correctly?" It's a valid question, and the answer reveals something fundamental about how computers work.
Binary Fractions
Remember that computers think in binary - 1s and 0s. While it's easy to represent whole numbers in binary (5 is 101, 10 is 1010), decimal fractions are trickier.
Binary can only represent a fraction exactly when its denominator is a power of 2. That's why 1/2 (0.1 in binary), 1/4 (0.01), and 1/8 (0.001) come out perfectly clean. But what about 1/10? Ten is not a power of 2.
Just like 1/3 becomes 0.333... (repeating forever) in decimal, 1/10 becomes a repeating fraction in binary. The computer has to round it off somewhere, and that's where the tiny error creeps in.
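You can see the repeating pattern yourself: toString(2), a standard Number method, prints a value in binary. A "clean" fraction like 0.625 (which is 1/2 + 1/8) has a short exact form, while 0.1 does not:

console.log((0.625).toString(2)); // "0.101" (exact)
console.log((0.1).toString(2));   // "0.000110011001100110011..." the 0011 group repeats
                                  // until the available bits run out and the value is rounded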
The Technical Details
JavaScript uses a format called IEEE 754 double-precision floating-point to store numbers. This format allocates 64 bits to store a number, with specific bits for the sign, exponent, and mantissa.
When you write 0.1, JavaScript converts it to the closest binary representation it can. This representation is very close to 0.1, but not exactly 0.1. The same happens with 0.2. When you add these two approximate values, the tiny errors combine, giving you 0.30000000000000004.
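You can make those hidden approximations visible by asking for more decimal places than JavaScript normally shows. toFixed(20) prints twenty digits after the decimal point, which is enough to reveal what a standard IEEE 754 engine actually stores:

console.log((0.1).toFixed(20));       // 0.10000000000000000555
console.log((0.2).toFixed(20));       // 0.20000000000000001110
console.log((0.3).toFixed(20));       // 0.29999999999999998890
console.log((0.1 + 0.2).toFixed(20)); // 0.30000000000000004441

// 0.1 and 0.2 are both stored slightly high, so their sum lands just
// above the closest double to 0.3, and the comparison with 0.3 fails.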
Is This a Problem?
For most everyday programming, these tiny errors don't matter. If you're building a game or a website, the difference between 0.3 and 0.30000000000000004 is negligible.
However, if you're working with money or other situations where precision is critical, you need to be careful. Never use floating-point numbers for financial calculations!
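The real danger is that the tiny errors accumulate. Here's a small illustration: adding ten 10-cent charges, represented as 0.1 dollars each, doesn't land exactly on one dollar:

let total = 0;
for (let i = 0; i < 10; i++) {
  total += 0.1; // ten dime-sized charges
}
console.log(total);       // 0.9999999999999999
console.log(total === 1); // false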
How to Handle It
There are several ways to deal with floating-point precision issues:
- Rounding: Use toFixed() to round to a specific number of decimal places (see the sketch after this list)
- Integer math: For money, work in cents (integers) instead of dollars (decimals)
- Comparison tolerance: Instead of checking if two numbers are exactly equal, check if they're close enough
- Special libraries: Use libraries designed for precise decimal math when needed
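Here's a rough sketch of the first two approaches. The helper names roundTo and addCents are just illustrative, not from any library:

// Rounding: toFixed() returns a string with a fixed number of decimals,
// handy right before you display a value.
function roundTo(value, places) {
  return Number(value.toFixed(places));
}
console.log(roundTo(0.1 + 0.2, 2)); // 0.3

// Integer math: keep money in whole cents so additions are exact
// (as long as you stay within Number.MAX_SAFE_INTEGER).
function addCents(aCents, bCents) {
  return aCents + bCents;
}
console.log(addCents(10, 20));       // 30 cents
console.log(addCents(10, 20) / 100); // 0.3, converted to dollars only for display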
A Practical Example
Here's how you might safely compare floating-point numbers:
// Don't do this:
if (0.1 + 0.2 === 0.3) {
  console.log("Equal!");
}

// Do this instead:
function areClose(a, b, tolerance = 0.0001) {
  return Math.abs(a - b) < tolerance;
}

if (areClose(0.1 + 0.2, 0.3)) {
  console.log("Close enough!");
}
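The right tolerance depends on the magnitude of the numbers you're comparing. For values near 1, JavaScript's built-in Number.EPSILON (about 2.22e-16, the gap between 1 and the next representable number) is a common starting point; for larger values you'd scale the tolerance up accordingly.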
It's Not Just JavaScript
This isn't a JavaScript problem; it's how computers handle decimal numbers. Python, Java, C++, and almost every other programming language behave the same way. It's a fundamental limitation of binary floating-point arithmetic.
The Takeaway
Understanding why 0.1 + 0.2 doesn't equal exactly 0.3 teaches us something important: computers are incredibly powerful, but they're not magic. They have limitations, and good programmers know how to work within those limitations.
When I teach this concept, I use it as an opportunity to discuss how computers represent data at a fundamental level. It's not just a quirk to memorize - it's a window into understanding how your code really runs.
So the next time you see 0.30000000000000004 in your console, don't panic.
JavaScript isn't broken. It's just being honest about the limitations of binary floating-point
arithmetic!