
Is it dangerous converting from float to BigDecimal then back?

I'm in the process of making a program that generates test vectors to be used in a VHDL testbench. The testbench tests a piece of hardware that acts as a single-precision floating-point adder, so the vectors are going to conform to the IEEE 754 standard.

Anyway, my current plan for generation is to convert float values to BigDecimal, do the necessary arithmetic, then convert back to float. Is this dangerous? Will precision be lost, resulting in a potentially inaccurate result in the test vector? I want to convert to BigDecimal so I can avoid rounding issues.

So would this truncate the result?

BigDecimal repA = new BigDecimal(Float.toString(A));
BigDecimal repB = new BigDecimal(Float.toString(B));
BigDecimal repResult = repA.add(repB);
float result = repResult.floatValue();

Where A and B are floats.
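For reference, the plain round trip with no arithmetic in between is lossless: Float.toString is documented to produce enough decimal digits to uniquely distinguish the float, so parsing that string back recovers the identical value. A minimal check (class name is mine, for illustration):

```java
import java.math.BigDecimal;

public class RoundTripCheck {
    public static void main(String[] args) {
        // Float.toString gives the shortest decimal string that
        // uniquely identifies the float, so converting it back
        // yields the exact same float value.
        float[] samples = {0.1f, 1f / 3f, Float.MIN_VALUE, Float.MAX_VALUE, -2.5e-30f};
        for (float f : samples) {
            float back = new BigDecimal(Float.toString(f)).floatValue();
            System.out.println(f + " -> " + back + " (same: " + (back == f) + ")");
        }
    }
}
```

So any precision question is about the arithmetic step in between, not the conversions themselves.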

Franklin asked Mar 31 '12 18:03

1 Answer

If your goal is to have accurate 32-bit float vectors within the expected limitations of a float, then I like your approach. You first convert from 32-bit float to an object with higher precision, perform the necessary arithmetic steps, then convert back to 32-bit floating point. In the end, your rounding errors will likely be lower than if you had performed the same series of steps natively in 32-bit floats.
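One detail worth knowing here (a sketch, not part of your code): Float.toString gives the shortest decimal that round-trips, while passing the float straight to the BigDecimal constructor captures the exact binary value the hardware actually holds. Both recover the same float, but their decimal expansions differ, which matters if you want your BigDecimal arithmetic to mirror the adder's bit-exact operands:

```java
import java.math.BigDecimal;

public class ExactValueDemo {
    public static void main(String[] args) {
        float f = 0.1f;

        // Shortest decimal string that uniquely identifies the float:
        BigDecimal fromString = new BigDecimal(Float.toString(f));

        // Exact binary value stored in the float (widening float to
        // double is lossless, and BigDecimal(double) is exact):
        BigDecimal exact = new BigDecimal((double) f);

        System.out.println("shortest: " + fromString.toPlainString());
        // prints: shortest: 0.1
        System.out.println("exact:    " + exact.toPlainString());
        // prints: exact:    0.100000001490116119384765625

        // Yet both convert back to the same float:
        System.out.println(fromString.floatValue() == exact.floatValue()); // true
    }
}
```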

If your goal is to accurately simulate the expected results of a piece of hardware that is performing calculations natively using 32-bit floats, then you may run the risk of falsely reporting a test failure because your calculations are performed with more accuracy than the hardware being tested.
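To illustrate that risk with a contrived sketch (values chosen purely to trigger the effect): chaining additions in float rounds after every step, as the hardware would, while summing exactly in BigDecimal rounds only once at the end, and the two can land on different floats:

```java
import java.math.BigDecimal;

public class DoubleRoundingDemo {
    public static void main(String[] args) {
        float a = 1f, b = 4e-8f;

        // Hardware-style: round after every addition.
        // 4e-8 is below half an ulp of 1.0f (~5.96e-8), so each
        // add rounds back down to 1.0f.
        float chained = (a + b) + b;

        // Exact sum, rounded once at the end: 1.00000008 is past
        // the midpoint between 1.0f and the next float up, so the
        // single final rounding goes up.
        float once = new BigDecimal(Float.toString(a))
                .add(new BigDecimal(Float.toString(b)))
                .add(new BigDecimal(Float.toString(b)))
                .floatValue();

        System.out.println(chained);         // 1.0
        System.out.println(once);            // 1.0000001
        System.out.println(chained == once); // false
    }
}
```

A testbench comparing the exact-then-rounded value against the adder's chained result would flag this as a failure even though the hardware rounded every step correctly.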

phatfingers answered Nov 12 '22 01:11