# Floating-Point Precision Errors in JavaScript

By Yisak Abraham (@YisakAbrahamK)
Adding numbers in JavaScript should be simple, right? Well, there’s a little twist that might surprise you. Let’s add it up together. JavaScript suffers from floating-point precision errors during arithmetic because of how it stores numbers in binary.
Hang on, I’m about to explain. First, it’s worth noting that this isn’t unique to JavaScript; other languages have these errors too. There is no fundamental difference in how floating-point errors arise between JavaScript and other programming languages: the underlying principles of floating-point arithmetic are the same across languages and platforms.

However, the specific implementation details may differ slightly between languages and compilers, which can change how the errors manifest themselves.
JavaScript uses a standard called IEEE 754[^1] (specifically, the 64-bit double-precision format) to represent numbers in binary. But not all decimal numbers can be represented exactly in binary, which leaves a small error in the stored value. These errors can accumulate over repeated operations, leading to unexpected behavior in your code. For example, 0.1 + 0.2 is not exactly equal to 0.3 in JavaScript.
```js
console.log(0.1 + 0.2) // Outputs: 0.30000000000000004
```
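To see that accumulation in action, here’s a small sketch: adding 0.1 ten times drifts away from the exact value 1.

```js
// Each addition carries a tiny representation error; after ten
// additions the running total no longer equals exactly 1.
let sum = 0
for (let i = 0; i < 10; i++) {
  sum += 0.1
}
console.log(sum)       // Outputs: 0.9999999999999999
console.log(sum === 1) // Outputs: false
```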
To delve a bit deeper: computers use the binary (base-2) number system, whereas we typically work with decimal (base-10) numbers, and only a finite number of bits are available to store each value. When we try to represent certain decimal numbers in binary, like 0.1 (1/10 in fraction form), the binary system can’t represent them precisely, because 1/10 cannot be written as a finite sum of negative powers of 2.

It’s akin to trying to express one-third in the decimal system—you end up with a repeating pattern of 3s after the decimal point. Similarly, in binary, 0.1 translates to an infinite repeating pattern: 0.00011001100110011…, and so on. Since computers have limited memory, they can’t store this infinite pattern; they must truncate it at some point, storing an approximation of the number rather than an exact representation. This approximation is the root of the floating-point precision errors we encounter in programming languages.
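You can peek at the approximation JavaScript actually stores by asking for more digits than the default number-to-string conversion prints:

```js
// toFixed(20) shows 20 digits after the decimal point,
// revealing the stored approximation behind each literal.
console.log((0.1).toFixed(20)) // Outputs: 0.10000000000000000555
console.log((0.3).toFixed(20)) // Outputs: 0.29999999999999998890
```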
To avoid these errors, you can use a library like Decimal.js that handles decimal arithmetic with precision. Alternatively, you can use an ‘epsilon’ value to compare two numbers and see if they’re close enough to be considered equal: check whether the absolute difference between the two numbers is less than a small tolerance, such as the built-in Number.EPSILON or an application-specific value like 0.0001. Both approaches are sketched below.
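Here’s a minimal sketch of the epsilon approach. Number.EPSILON is the gap between 1 and the next representable double, so it works as a tolerance for values near 1; for larger values you’d scale it, and the helper name here is just illustrative:

```js
// Treat two numbers as equal if they differ by less than a tolerance.
// Number.EPSILON suits values near 1; scale the tolerance for larger values.
function approximatelyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon
}

console.log(0.1 + 0.2 === 0.3)                  // Outputs: false
console.log(approximatelyEqual(0.1 + 0.2, 0.3)) // Outputs: true
```

And a quick sketch with Decimal.js (assuming the decimal.js package is installed from npm). Passing strings rather than number literals keeps the inputs from inheriting the binary approximation:

```js
const Decimal = require('decimal.js')

// String inputs are parsed as exact decimal values.
const sum = new Decimal('0.1').plus('0.2')
console.log(sum.toString())    // Outputs: 0.3
console.log(sum.equals('0.3')) // Outputs: true
```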
## Footnotes

[^1]: IEEE 754-2019, “IEEE Standard for Floating-Point Arithmetic,” IEEE Computer Society, published 22 July 2019.