I'm in the process of making a program that generates test vectors to be used in a VHDL testbench. The testbench essentially tests a piece of hardware that acts as a single-precision floating-point adder, so the vectors need to conform to the IEEE 754 standard.
Anyway, my current plan for generation is to convert float values to BigDecimal, do the necessary arithmetic, then convert back to float. Is this dangerous? Will precision be lost, resulting in a potentially inaccurate result in the test vector? I want to convert to BigDecimal so I can avoid rounding issues.
So would this truncate the result?
import java.math.BigDecimal;

// Parse each operand's decimal string form, add exactly, then round back to float
BigDecimal repA = new BigDecimal(Float.toString(A));
BigDecimal repB = new BigDecimal(Float.toString(B));
BigDecimal repResult = repA.add(repB);     // BigDecimal addition is exact
float result = repResult.floatValue();     // rounds back to the nearest float
Where A and B are floats.
If your goal is to have accurate 32-bit float vectors within the expected limitations of a float, then I like your approach. You're first converting from 32-bit float to an object with higher precision, performing your arithmetic in several steps, then converting back to 32-bit floating point. In the end, your rounding errors would likely be lower than if you had performed the same series of steps natively in 32-bit floats.
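For a single addition the two routes agree, but over a chain of operations they can diverge. Below is a minimal, self-contained sketch (values chosen purely for illustration) where two native float additions each lose a half-ulp to round-to-nearest-even, while the BigDecimal route rounds only once at the end. Note it uses new BigDecimal(a) rather than the question's Float.toString form: a float widens losslessly to double, so this constructor captures the operand's exact binary value rather than its shortest round-tripping decimal string.

import java.math.BigDecimal;

public class RoundingGapDemo {
    public static void main(String[] args) {
        float a = 1.0f;
        float h = 0x1p-24f; // exactly half an ulp of 1.0f

        // Native float: each addition rounds immediately. Both halfway
        // results round to the even significand, so h vanishes twice.
        float nativeSum = (a + h) + h;   // 1.0

        // BigDecimal: both additions are exact; only the final
        // conversion back to float rounds, once.
        float bigSum = new BigDecimal(a)
                .add(new BigDecimal(h))
                .add(new BigDecimal(h))
                .floatValue();           // 1.0000001 (one ulp above 1.0)

        System.out.println(nativeSum + " vs " + bigSum);
    }
}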
If your goal is to accurately simulate the expected results of a piece of hardware that is performing calculations natively using 32-bit floats, then you may run the risk of falsely reporting a test failure because your calculations are performed with more accuracy than the hardware being tested.
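If you want the reference model to match such hardware, one mitigation (a sketch under my own assumptions, not something the question's code does) is to round back to float after every simulated operation, so the reference rounds exactly where the adder rounds. The helper name hwAdd is hypothetical, and the sketch assumes round-to-nearest-even while sidestepping NaN, infinities, and signed zero, which BigDecimal cannot represent.

import java.math.BigDecimal;

public class ReferenceModel {
    // Hypothetical helper: models one pass through a correctly rounded
    // IEEE 754 single-precision adder by adding exactly, then letting
    // floatValue() round the exact sum back to a float.
    static float hwAdd(float a, float b) {
        return new BigDecimal(a).add(new BigDecimal(b)).floatValue();
    }

    public static void main(String[] args) {
        float a = 0.1f, b = 0.2f, c = 0.3f;
        float step1 = hwAdd(a, b);        // rounded where the hardware rounds
        float expected = hwAdd(step1, c); // reference value for the testbench
        System.out.println(expected);
    }
}

For a single finite addition this matches plain a + b in Java, since Java's float arithmetic is itself correctly rounded; the helper pays off on multi-step vectors, where the point at which rounding happens determines the expected bits.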