When I ask GDB to print a real in binary, I get this:
(gdb) p/t 5210887.5
$1 = 10011111000001100000111
According to this,
0 10010101 00111110000011000001111
is the expected value.
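A quick way to double-check that encoding outside GDB is to dump the float's raw bits from a small C program. This is just a verification sketch (not related to GDB itself), using memcpy for the type pun:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 5210887.5f;
    uint32_t bits;

    /* Copy the float's bytes into a 32-bit integer. */
    memcpy(&bits, &f, sizeof bits);

    /* Print the sign, biased exponent, and mantissa fields. */
    printf("sign     = %u\n", bits >> 31);
    printf("exponent = ");
    for (int i = 30; i >= 23; i--)
        putchar((bits >> i) & 1 ? '1' : '0');
    printf("\nmantissa = ");
    for (int i = 22; i >= 0; i--)
        putchar((bits >> i) & 1 ? '1' : '0');
    putchar('\n');
    return 0;
}

On an x86 box this prints sign 0, exponent 10010101, and mantissa 00111110000011000001111, matching the expected value above.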
Aligning them,
1 0011111000001100000111
0 10010101 00111110000011000001111
And it looks like GDB is giving me the binary digits of the value converted to an integer. It was doing the same thing with a variable at work, not just with a literal. Using a declaration in a C program -
int main(void)
{
float f = 5210887.5;
}
and debugging it -
$ gcc -g -O0 floatdec.c -o floatdec
$ gdb floatdec
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/ /floatdec...done.
(gdb) b main
Breakpoint 1 at 0x804839a: file floatdec.c, line 3.
(gdb) r
Starting program: /home/ /floatdec
Breakpoint 1, main () at floatdec.c:3
3 float f = 5210887.5;
(gdb) s
4 }
(gdb) p f
$1 = 5210887.5
(gdb) p/t f
$2 = 10011111000001100000111
Same thing: it is showing me the integer representation. Is there no way to have GDB show me the floating-point format?
The t format does indeed convert the value to an integer. From the GDB manual's Output Formats section:
t    Print as integer in binary. The letter ‘t’ stands for “two”.
This is backed up by the code in gdb's print_scalar_formatted:
if (options->format != 'f')
  val_long = unpack_long (type, valaddr);
where unpack_long converts various types of values, including floats, into a long.
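In other words, the float is truncated toward zero before the binary digits are printed. A minimal illustration in plain C (not GDB's actual code) of why the digits above correspond to 5210887:

#include <stdio.h>

int main(void)
{
    float f = 5210887.5f;

    /* Float-to-integer conversion in C truncates toward zero,
       so 5210887.5 becomes 5210887 -- the value whose binary
       digits p/t printed above. */
    long truncated = (long)f;
    printf("%ld\n", truncated);   /* prints 5210887 */
    return 0;
}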
A workaround is to take the address of the variable, cast it to an (int32_t *), and dereference that.
(gdb) p f
$2 = 5210887.5
(gdb) p/t *(int32_t *)&f
$3 = 1001010100111110000011000001111
(gdb) p/t {int32_t}&f
$4 = 1001010100111110000011000001111
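The same pointer-cast trick can be written in a C program too, though in portable C it technically violates strict aliasing (memcpy, as in the sketch near the top, is the safer form). A small example, for illustration only:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    float f = 5210887.5f;

    /* Reinterpret the float's storage as a 32-bit integer, the same
       idea as GDB's *(int32_t *)&f expression.  Strict aliasing makes
       this formally undefined in C; it is shown only to mirror the
       GDB expression. */
    uint32_t bits = *(uint32_t *)&f;
    printf("0x%08" PRIx32 "\n", bits);   /* prints 0x4a9f060f */
    return 0;
}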
Or, at the language-independent level, using the x command:
(gdb) x/tw &f
0xbffff3f4: 01001010100111110000011000001111
This was done on an x86. Other-endian systems may produce a different bit pattern.