Inspect IEEE 754 floating point values instantly
Floating point numbers look simple when you read them as text.
You type a value like 0.1, 3.14159, or 1.23e-10 and expect the computer to store exactly what you entered. In practice, that is not always what happens.
Most software systems store non-integer real numbers using IEEE 754 floating point representation. That format is fast and widely supported, but it introduces precision limits, rounding behavior, special values, and binary encodings that are not obvious from the decimal number alone.
This Floating Point Analyzer helps you inspect that representation clearly.
Use it to:
- analyze Float32 and Float64 values
- see the sign bit, exponent bits, and fraction bits
- inspect the binary layout and hex representation
- view the decoded exponent and reconstructed formula
- classify values as normal, subnormal, zero, infinity, or NaN
- compare the stored value with the decimal value you entered
- understand why common values such as 0.1 cannot be represented exactly
Everything runs live in your browser with no uploads.
What this Floating Point Analyzer shows
When you enter a value, the tool breaks it into the main parts of IEEE 754 representation:
- Sign bit: whether the value is positive or negative
- Exponent bits: the stored exponent field
- Fraction bits: the fractional part of the significand
- Exponent raw: the encoded exponent before bias is removed
- Exponent decoded: the actual exponent after applying the format bias
- Significand: the reconstructed binary significand used in the formula
- Binary layout: the full bit pattern for the selected precision
- Hex representation: the same stored value in hexadecimal form
- Classification: normal, subnormal, zero, infinity, or NaN
- Stored value: the actual value encoded in Float32 or Float64
- Difference from input: how far the stored value differs from the original decimal input
This gives you both a practical debugging view and a learning view of floating point behavior.
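Since JavaScript numbers are themselves IEEE 754 Float64 values, the same field breakdown can be sketched in a browser console with a `DataView`. This is an illustrative sketch, not the analyzer's actual implementation; the function name is made up for the example.

```javascript
// Split a Float64 into its IEEE 754 fields by reinterpreting
// the 8 stored bytes as two 32-bit unsigned integers.
function float64Fields(value) {
  const buf = new DataView(new ArrayBuffer(8));
  buf.setFloat64(0, value);               // big-endian by default
  const hi = buf.getUint32(0);            // bits 63..32
  const lo = buf.getUint32(4);            // bits 31..0
  const sign = hi >>> 31;                 // 1 sign bit
  const exponentRaw = (hi >>> 20) & 0x7ff; // 11 exponent bits
  // 52 fraction bits: 20 from the high word, 32 from the low word
  const fraction = (BigInt(hi & 0xfffff) << 32n) | BigInt(lo);
  return { sign, exponentRaw, fraction };
}

// -0.5 = (-1)^1 × 1.0 × 2^(-1), so the raw exponent is -1 + 1023 = 1022
console.log(float64Fields(-0.5)); // { sign: 1, exponentRaw: 1022, fraction: 0n }
```

The same approach works for Float32 with `setFloat32`, a single 32-bit word, and the masks shifted for 8 exponent and 23 fraction bits.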
Float32 vs Float64
The tool supports the two most common IEEE 754 binary formats.
Float32
Float32 uses 32 total bits:
- 1 sign bit
- 8 exponent bits
- 23 fraction bits
This is often called single precision. It is common in graphics, game engines, shaders, embedded systems, and memory-sensitive workloads.
Float64
Float64 uses 64 total bits:
- 1 sign bit
- 11 exponent bits
- 52 fraction bits
This is often called double precision. It is common in JavaScript numbers, scientific computing, finance, and many general-purpose systems where higher precision matters.
In general:
- Float32 is smaller and faster in some contexts, but less precise
- Float64 stores many decimal values more accurately and across a wider numeric range
This analyzer lets you switch between the two instantly so you can see the exact difference in representation.
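You can preview the same precision difference in plain JavaScript: `Math.fround` rounds a Float64 to the nearest Float32 value, so comparing the two shows how much precision the narrower format gives up.

```javascript
// 0.1 stored at double precision vs. rounded to single precision
const asFloat64 = 0.1;              // 52 fraction bits
const asFloat32 = Math.fround(0.1); // rounded to 23 fraction bits

console.log(asFloat32 === asFloat64); // false
console.log(asFloat64.toPrecision(20)); // 0.10000000000000000555
console.log(asFloat32.toPrecision(20)); // 0.10000000149011611938
```

The Float32 result sits roughly nine decimal digits from 0.1, while the Float64 result is good to about seventeen.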
Why floating point numbers can look wrong
One of the most common surprises in programming is that values like 0.1, 0.2, or 0.3 do not always behave the way people expect.
For example, in many systems:
0.1 + 0.2 !== 0.3
That does not mean the computer is broken. It means the decimal values you typed had to be converted into a binary fraction format with limited precision.
Some decimal fractions terminate cleanly in binary. Many do not.
For example:
- 0.5 is exact in binary
- 0.25 is exact in binary
- 0.1 is repeating in binary and must be rounded
This tool helps you see that rounding directly by showing the stored value and the difference from the input.
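You can reproduce the classic comparison in any JavaScript console. Both operands and the sum carry rounding error, so the strict comparison fails; a tolerance-based comparison is the usual workaround.

```javascript
console.log(0.1 + 0.2 === 0.3);           // false
console.log((0.1 + 0.2).toPrecision(17)); // 0.30000000000000004

// Compare within a tolerance instead of exactly:
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true
```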
How IEEE 754 representation works
A normal floating point value is usually described as:
(-1)^sign × significand × 2^exponent
In the tool, that appears as a decoded formula so you can see how the stored bits reconstruct the final value.
1. Sign bit
The sign bit determines whether the number is positive or negative.
- 0 means positive
- 1 means negative
2. Exponent field
The exponent bits do not directly store the final exponent. They store a biased exponent.
The analyzer shows both:
- Exponent raw: the stored exponent field
- Exponent decoded: the unbiased exponent after subtracting the bias
Typical biases are:
- 127 for Float32
- 1023 for Float64
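Both biases come from the same rule, 2^(exponent bits − 1) − 1, which is easy to verify directly:

```javascript
// Bias for an IEEE 754 binary format with the given exponent width
const bias = (exponentBits) => 2 ** (exponentBits - 1) - 1;

console.log(bias(8));  // 127  (Float32)
console.log(bias(11)); // 1023 (Float64)

// Example: 8.0 = 1.0 × 2^3, so its stored Float64 exponent
// field is 3 + 1023 = 1026.
```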
3. Fraction bits
The fraction bits store the fractional portion of the significand.
For normal numbers, IEEE 754 assumes an implicit leading 1, so the significand behaves like:
1.fraction_bits
For subnormal numbers, that implicit leading 1 is not used. The significand behaves like:
0.fraction_bits
This tool shows that directly in the Significand field.
Mantissa vs significand
Many people search for terms like mantissa, exponent, and sign bit. That is understandable, and “mantissa” is still very common in teaching material and developer discussions.
Strictly speaking, in IEEE 754 the more accurate term is significand, while the stored trailing field is usually called the fraction.
For practical purposes:
- people often say mantissa bits informally
- the standard terminology is closer to fraction and significand
This page uses the more accurate wording where possible while still matching the language many users search for.
Supported inputs
You can enter:
- plain decimal numbers like 12.5
- negative values like -42
- scientific notation like 1.23e-5
- special values such as Infinity, -Infinity, inf, and NaN
That makes the tool useful for both everyday debugging and edge-case analysis.
Classification types explained
The analyzer classifies the stored result into one of the standard IEEE 754 categories.
Normal
A standard finite floating point value whose exponent field is neither all zeros nor all ones. These are the values most programs work with in practice.
Subnormal
A very small non-zero value where the exponent field is all zeros but the fraction is not all zeros. Subnormal numbers fill the gap between zero and the smallest normal value. They allow gradual underflow instead of an abrupt jump straight to zero.
Zero
Both exponent and fraction bits are zero.
IEEE 754 also distinguishes between +0 and -0, depending on the sign bit.
Infinity
The exponent field is all ones and the fraction is all zeros. This represents positive or negative infinity.
NaN
The exponent field is all ones and the fraction is not all zeros.
This represents “Not a Number,” which appears in undefined or invalid operations such as 0/0.
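The classification rules above reduce to two checks on the exponent and fraction fields. Here is a sketch for Float64, where `exponentRaw` is the 11-bit exponent field and `fraction` the 52-bit fraction as a BigInt (names are illustrative, not the analyzer's internals):

```javascript
// Classify a Float64 from its decoded IEEE 754 fields.
function classify(exponentRaw, fraction) {
  if (exponentRaw === 0)     return fraction === 0n ? 'zero' : 'subnormal';
  if (exponentRaw === 0x7ff) return fraction === 0n ? 'infinity' : 'nan';
  return 'normal';
}

console.log(classify(1022, 0n));  // normal    (e.g. 0.5)
console.log(classify(0, 0n));     // zero
console.log(classify(0, 1n));     // subnormal (e.g. Number.MIN_VALUE)
console.log(classify(0x7ff, 0n)); // infinity
console.log(classify(0x7ff, 1n)); // nan
```

For Float32 the same logic applies with the all-ones exponent being 0xff instead of 0x7ff.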
How to use the tool
1. Choose precision
Select either:
- Float32
- Float64
This changes the field sizes, exponent bias, precision, and stored result.
2. Enter a value
Paste or type a number such as:
0.1
or:
1.23e-10
or:
Infinity
3. Review the analysis
The tool immediately shows:
- bit fields
- hex
- classification
- formula
- stored value
- input vs stored difference
4. Copy the report
Use Copy report, Copy bits, or Copy hex when you want to share or document the result.
Example: why 0.1 is famous
0.1 is one of the most common examples in floating point discussions because it cannot be represented exactly as a finite binary fraction.
When you enter 0.1, the stored IEEE 754 value is the nearest representable binary value for the selected precision.
That means:
- the stored value is extremely close to 0.1
- but it is not exactly equal to the decimal 0.1
- the difference from input reveals that tiny mismatch
This is why floating point rounding issues often show up in beginner programming examples and precision-sensitive calculations.
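You can see the mismatch in JavaScript itself: the default string conversion prints the shortest decimal that round-trips, but asking for more digits reveals the value actually stored.

```javascript
console.log((0.1).toString());      // 0.1 (shortest round-trip string)
console.log((0.1).toPrecision(25)); // 0.1000000000000000055511151
```

The trailing digits are the rounding error introduced when 0.1 was converted to the nearest Float64.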
Binary and hex views
The analyzer shows both binary and hexadecimal output because each is useful in a different workflow.
Binary view
Binary is best when you want to inspect the exact bit structure:
- sign
- exponent
- fraction
You can also group the bits visually for easier reading.
Hex view
Hex is more compact and easier to paste into documentation, debugging notes, code comments, or low-level tooling.
If you are comparing raw memory patterns, protocol payloads, or serialized values, hex is often the fastest representation to use.
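One way to produce that hex pattern yourself in JavaScript is to reinterpret the stored bytes as a 64-bit unsigned integer (an illustrative sketch, not necessarily how the analyzer does it):

```javascript
// Hex bit pattern of a Float64, as 16 hex digits.
function float64Hex(value) {
  const buf = new DataView(new ArrayBuffer(8));
  buf.setFloat64(0, value); // big-endian
  return buf.getBigUint64(0).toString(16).padStart(16, '0');
}

console.log(float64Hex(1.0));  // 3ff0000000000000
console.log(float64Hex(-2.0)); // c000000000000000
console.log(float64Hex(0.1));  // 3fb999999999999a
```

The repeating `9` digits in the pattern for 0.1 are the hex form of its repeating binary fraction.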
Reconstructed formula
One of the most helpful features here is the displayed formula.
For normal and subnormal values, the tool reconstructs the stored number in the form:
(-1)^sign × significand × 2^decoded_exponent
That helps you understand how the raw bits map back to the actual numeric value. It also makes the tool useful for students and technical explanations, not just debugging.
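The reconstruction can be written out directly from the decoded Float64 fields. This sketch uses illustrative names; note the two branches for normal and subnormal values described above:

```javascript
// Rebuild a Float64 from sign, raw exponent, and 52-bit fraction,
// following (-1)^sign × significand × 2^decoded_exponent.
function reconstruct(sign, exponentRaw, fraction /* BigInt */) {
  const bias = 1023;
  const frac = Number(fraction) / 2 ** 52; // fraction as 0.xxx (exact: < 2^53)
  if (exponentRaw === 0) {
    // subnormal: no implicit leading 1, exponent fixed at 1 - bias
    return (-1) ** sign * frac * 2 ** (1 - bias);
  }
  // normal: implicit leading 1
  return (-1) ** sign * (1 + frac) * 2 ** (exponentRaw - bias);
}

// 0.5 = (-1)^0 × 1.0 × 2^(-1): sign 0, raw exponent 1022, fraction 0
console.log(reconstruct(0, 1022, 0n)); // 0.5
console.log(reconstruct(0, 0, 1n) === Number.MIN_VALUE); // true (subnormal)
```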
Helpful use cases
Debug floating point issues in code
If a value looks slightly wrong in a program, inspect its Float32 or Float64 representation to see the real stored bits and the exact rounding result.
Compare precision levels
Switch between Float32 and Float64 to see how the stored value, fraction length, and rounding behavior change.
Teach or learn IEEE 754
This tool is useful for explaining sign bits, exponent bias, significands, subnormals, special values, and why decimal intuition can fail in binary storage.
Inspect special values
Analyze Infinity, -Infinity, and NaN without needing a separate low-level debugger.
Document raw numeric representations
Copy the report, bits, or hex form into notes, tutorials, bug reports, or technical docs.
Common examples to try
Paste any of these into the analyzer:
0.1
0.5
0.25
1.23e-10
16777217
Infinity
NaN
-0
These values help demonstrate different floating point behaviors:
- 0.5 and 0.25 are exact in binary
- 0.1 is not exact in binary
- 16777217 is useful when comparing integer precision limits in Float32
- Infinity and NaN show special IEEE 754 encodings
- -0 highlights signed zero behavior
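The 16777217 example is worth trying in JavaScript too: it is 2^24 + 1, just past the point where Float32's 24-bit significand (23 stored bits plus the implicit 1) can represent every integer.

```javascript
console.log(Math.fround(16777216)); // 16777216 (2^24, exact)
console.log(Math.fround(16777217)); // 16777216 (rounds to nearest even)
console.log(Math.fround(16777218)); // 16777218 (even, exact again)
```

Float64's 53-bit significand pushes the same cutoff out to 2^53 + 1, which is why `Number.MAX_SAFE_INTEGER` is 2^53 − 1.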
Why subnormal numbers matter
Subnormal numbers are easy to overlook, but they are important in numerical computing.
They allow the system to represent values smaller than the minimum normal number by giving up the implicit leading 1 in the significand. That creates a smoother transition toward zero and reduces abrupt underflow.
When you analyze a very tiny value, this tool will show when it has become subnormal instead of normal.
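In JavaScript, `Number.MIN_VALUE` (2^-1074) is itself a subnormal Float64, which makes gradual underflow easy to demonstrate:

```javascript
console.log(Number.MIN_VALUE);     // 5e-324, the smallest positive Float64
console.log(Number.MIN_VALUE / 2); // 0 (finally underflows to zero)

// The smallest *normal* Float64 is 2^-1022; halving it stays
// nonzero because subnormals fill the gap below it.
const minNormal = 2 ** -1022;
console.log(minNormal / 2 > 0);    // true
```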
Why signed zero matters
IEEE 754 has both +0 and -0.
They compare as equal in many contexts, but the sign can still matter in operations involving division, limits, branch behavior, and certain numeric functions.
This analyzer makes signed zero visible through the sign bit and classification details.
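A few lines of JavaScript show both sides of signed zero: equality hides the sign, but division and `Object.is` expose it.

```javascript
console.log(0 === -0);         // true  (comparison treats them as equal)
console.log(Object.is(0, -0)); // false (the bit patterns differ)
console.log(1 / 0);            // Infinity
console.log(1 / -0);           // -Infinity (the sign survives division)
```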
Tips for interpreting results
- Use Float32 when debugging single-precision storage, graphics, shaders, or low-memory systems.
- Use Float64 when analyzing JavaScript numbers or most standard double-precision workflows.
- If the stored value differs slightly from the input, that usually means the decimal number is not exactly representable in binary.
- If the class is subnormal, the value is extremely close to zero and uses a different significand rule.
- If the class is NaN or Infinity, inspect the exponent field and fraction bits to see why.
Troubleshooting
“Why does my stored value differ from the input?” Because IEEE 754 stores the nearest representable binary value, not always the exact decimal value you typed.
“Why does Float32 differ more than Float64?” Float32 has fewer exponent and fraction bits, so it has less precision and a smaller representable range.
“Why is 0.5 exact but 0.1 is not?”
Because 0.5 terminates in binary, while 0.1 repeats and must be rounded.
“Why do I see NaN or Infinity?” Those are standard IEEE 754 special values. The analyzer shows their raw bit structure and classification.
“What does the fraction field represent?” It stores the fractional portion of the significand. For normal values, the leading 1 is implicit; for subnormal values, it is not.
Perfect for
- developers debugging numeric precision issues
- students learning IEEE 754 representation
- teachers explaining sign, exponent, and fraction fields
- engineers inspecting Float32 and Float64 storage
- technical writers creating accurate examples
- anyone trying to understand why a decimal value behaves differently in code
If you need a fast, accurate, and practical IEEE 754 floating point analyzer, this tool lets you inspect the full stored representation of a value, compare Float32 and Float64, and understand the exact bit-level reason behind floating point precision behavior.