Why do IDL defaultvalue values look rounded?

Tags: vba, com, vb6, idl

I have a COM object with a function with an optional last argument. The IDL is a bit like this:

interface ICWhatever : IDispatch
{
  [id(96)] HRESULT SomeFunction([in, defaultvalue(50.6)] float parameter);
};

This works fine: if I don't specify the parameter, 50.6 is filled in. But several development environments (Excel VBA, VB6) round the default value before displaying it. After typing the opening parenthesis, I see:

SomeFunction([parameter As Single = 51])

Does anyone know why this is? Is it a bug? This will confuse client programmers...

Michel de Ruiter asked Jun 09 '10

1 Answer

I was able to reproduce the problem you experienced (in VBA), and it does indeed appear to be a bug in the VB IDEs' treatment of the Single type specifically. Namely, the VB IDEs improperly cast the Single default value to int before printing it back out, as part of the method signature, as a truncated single-precision floating-point value.

This problem does not exist in the Microsoft Script Editor, nor in OleView.exe and similar tools.

To test, try the following Single default value: 18446744073709551615.0. In my case, this value was properly encoded in the TLB and properly displayed by OleView.exe and by the Microsoft Script Editor as 1.844674E+19. In the VB IDEs, however, it was displayed as -2.147484E+09. Indeed, casting the float value 18446744073709551615.0 to int produces -2147483648, which, displayed as a float, yields exactly the observed (incorrect) VB IDE output of -2.147484E+09.

Similarly, 50.6 gets converted to int, producing 51 (the conversion rounds rather than truncates), which is what the IDE then displays.
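To make the conversion chain concrete, here is a minimal C sketch (my own illustration, not the IDE's actual code) that reproduces the numbers above. Note that converting an out-of-range float to int is undefined behavior in standard C; the INT_MIN result shown is simply what x86 hardware produces, which matches the observed VB output:

#include <stdio.h>

int main(void)
{
    float big = 18446744073709551615.0f;  /* 2^64 - 1, rounds to 1.8446744E+19 as a float */
    float small = 50.6f;

    /* What OleView.exe and the Microsoft Script Editor display: the float itself. */
    printf("%E\n", big);                            /* 1.844674E+19 */

    /* What the VB IDEs appear to do: convert to int first, then display the result
       as a float again. Out-of-range float-to-int conversion is undefined in
       standard C; on x86 it yields INT_MIN, matching the observed output. */
    int wrapped = (int)big;
    printf("%d -> %E\n", wrapped, (float)wrapped);  /* -2147483648 -> -2.147484E+09 */

    /* VB-style conversion rounds 50.6 to 51 (a plain C cast would truncate to 50). */
    printf("%d\n", (int)(small + 0.5f));            /* 51 */
    return 0;
}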

To work around this issue, declare the parameter as double (VB's Double) instead of float (Single), e.g. [in, defaultvalue(50.6)] double parameter, as Double is converted and displayed properly by all the IDEs I was able to test.


On a tangent, you are probably already aware that certain floating-point values (such as 0.1) have no exact IEEE 754 representation and cannot be distinguished from nearby values (e.g. 0.1000000015). Thus a default double-precision value of 0.1 will be displayed in most IDEs as 0.100000001490116. One way to alleviate this precision issue is to choose a different scale for your parameters: switch from seconds to milliseconds, for example, and 0.1 seconds becomes 100 milliseconds, unambiguously representable as single-precision, double-precision, and integral values alike.
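For reference, those digits are exactly the single-precision approximation of 0.1 widened to double precision, as this small C sketch (again my own illustration) shows:

#include <stdio.h>

int main(void)
{
    /* 0.1 has no exact IEEE 754 representation; widening the
       single-precision approximation to double exposes the error. */
    printf("%.15f\n", (double)0.1f);    /* 0.100000001490116 */
    printf("%.15f\n", 0.1);             /* 0.100000000000000 */

    /* Rescaling sidesteps the issue: 100 (milliseconds) is exactly
       representable as float, double, and integer alike. */
    printf("%.1f\n", (double)100.0f);   /* 100.0 */
    return 0;
}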

vladr answered Nov 14 '22