Will Decimal or Double work better for translations that need to be accurate up to .00001?

I'm an inspector at a machine shop. I have an HTML report generated by another inspector that has some problems I need to fix. This isn't the first time, so I need something better than PowerShell and regex. (Fear not, internet warriors: I know I shouldn't use regex for HTML. I'm using HtmlAgilityPack now.)

I'm aware there are a lot of similar discussions on SO and on the internet in general, but I didn't find anything quite this specific. I can write some small experiment apps to test some of this (and I plan to), but I want some idea of whether it will be future-safe before I implement all of it. Even though I'm not a programmer by trade, I have a good grasp of the concepts involved; don't worry about talking over my head.

Over a series of transformations, is it likely I'll accumulate more than .0001 of error? What about .00001? (A sketch of the kind of drift experiment I have in mind follows this list.)

- If a report's alignment is off, I may need to rotate and translate it multiple times.
- I've only implemented rotation and translation so far, but I plan on adding more transformations that may increase the number and complexity of operations.
- The integer component can go into the thousands.
- Our instruments are typically certified to .0001. Normal significant-digit rules for scientific measurements apply.
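A minimal drift experiment, assuming 2D rotation about the origin with doubles: apply a 1-degree rotation 360 times (a full turn, which should be the identity) and see how far the point lands from where it started. The coordinates and iteration count are arbitrary placeholders:

```csharp
using System;

// A throwaway experiment: rotate a point by 1 degree, 360 times.
// After a full turn the point should be exactly where it started,
// so the leftover difference is the accumulated rounding error.
class RotationDriftTest
{
    static void Main()
    {
        double x = 1234.5678, y = 987.6543;   // integer part in the thousands, like the reports
        double startX = x, startY = y;

        double theta = Math.PI / 180.0;       // 1 degree in radians
        double cos = Math.Cos(theta), sin = Math.Sin(theta);

        for (int i = 0; i < 360; i++)
        {
            double nx = x * cos - y * sin;    // standard 2D rotation about the origin
            double ny = x * sin + y * cos;
            x = nx;
            y = ny;
        }

        Console.WriteLine($"dx = {x - startX:E3}, dy = {y - startY:E3}");
        // On typical hardware the drift lands around 1e-10 or below,
        // comfortably inside a .00001 tolerance.
    }
}
```

Swapping in several random rotate/translate passes would mimic the real alignment fixes more closely, but the order of magnitude of the error shouldn't change much.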

Will the overhead of Decimal, plus writing the trig functions manually, be prohibitively time-consuming (edit: at runtime)? A rough timing sketch follows this list.
- Typically a report has 100 to 1,000 points. Each point is actually two points: Nominal (as modeled) and Actual (as measured).
- This is the easiest part to test, but I want to know before implementing math functions for Decimal.
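Something like this would give a ballpark for the timing question. DecimalSin is a hypothetical helper (decimal has no built-in trig), implemented here as a short Taylor series with no range reduction, so it's only a sketch:

```csharp
using System;
using System.Diagnostics;

class TrigTiming
{
    // Hypothetical decimal sine: Taylor series around 0. Fine for small angles;
    // real use would need range reduction and a termination check.
    static decimal DecimalSin(decimal x)
    {
        decimal term = x, sum = x;
        for (int n = 1; n <= 10; n++)
        {
            term *= -x * x / ((2 * n) * (2 * n + 1));  // next Taylor term
            sum += term;
        }
        return sum;
    }

    static void Main()
    {
        const int N = 1_000_000;

        var sw = Stopwatch.StartNew();
        double d = 0;
        for (int i = 0; i < N; i++) d += Math.Sin(0.5);
        Console.WriteLine($"double Math.Sin: {sw.ElapsedMilliseconds} ms ({d:F0})");

        sw.Restart();
        decimal m = 0;
        for (int i = 0; i < N; i++) m += DecimalSin(0.5m);
        Console.WriteLine($"decimal sketch:  {sw.ElapsedMilliseconds} ms ({m:F0})");
    }
}
```

Expect the decimal path to be one to two orders of magnitude slower per call; whether that matters for a few hundred points per report is doubtful.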

Side question:
I have a point class, Point3D, that holds X, Y, and Z. Since each data point is two of these (the Nominal and the Actual), I then have a class, MeasuredPoint, holding two Point3D instances. There has to be a better name than MeasuredPoint that isn't annoyingly long. A sketch of the layout is below.
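Roughly this shape (using double for now, since the decimal-vs-double choice is the whole question; the record syntax needs C# 9+):

```csharp
// The layout described above: a bare coordinate triple, and a pair of them
// tying the as-modeled point to the as-measured one.
public record Point3D(double X, double Y, double Z);

public record MeasuredPoint(Point3D Nominal, Point3D Actual);
```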

Oh yeah, this is C#/.Net. Thanks,

— asked by Josiah, Dec 29 '22

2 Answers

Don't implement trig functions with Decimal! There's a reason the standard library doesn't provide them: if you're doing trig, Decimal doesn't buy you anything.

Since you're going to be working in radians anyway, your values are defined as multiples/ratios of pi, which isn't exactly representable in any integer base. Forcing the representation to base ten is more likely to increase error than to decrease it.
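To make that concrete: decimal has no trig support at all, so a decimal-based pipeline ends up round-tripping through double anyway, and the extra base-10 digits are lost at the boundary. Illustrative only:

```csharp
using System;

class DecimalTrigRoundTrip
{
    static void Main()
    {
        // ~pi/6, truncated to decimal's 28-29 digits -- already inexact,
        // just as it would be in binary.
        decimal angle = 0.5235987755982988730771072305m;

        // No Math.Sin(decimal) exists, so we must convert...
        double sinViaDouble = Math.Sin((double)angle);

        // ...and converting back can't restore precision double never had.
        decimal result = (decimal)sinViaDouble;
        Console.WriteLine(result);  // ~0.5, accurate only to double precision
    }
}
```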

If precision (minimum error in ulps) is important for your application, then you must read What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. That article explains it far better than I could.

The upshot, however, is that a 64-bit double (IEEE-754 double precision) carries roughly 15-16 significant digits, which is ample headroom for values in the thousands held to five decimal places. A 32-bit float (IEEE-754 single precision) only carries about 7, so it gets marginal at those magnitudes. Either way, a 128-bit base-ten floating-point value is just performance-killing overkill, and almost certainly won't improve the precision of your results one iota.
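You can see the representable spacing (one ulp) near those magnitudes directly; Math.BitIncrement / MathF.BitIncrement require .NET Core 3.0 or later:

```csharp
using System;

class UlpDemo
{
    static void Main()
    {
        // Gap to the next representable value above 4000:
        Console.WriteLine(MathF.BitIncrement(4000f) - 4000f);   // ~2.4e-4: too coarse for .0001
        Console.WriteLine(Math.BitIncrement(4000.0) - 4000.0);  // ~4.5e-13: ample for .00001
    }
}
```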

— answered by Daniel Pryden, Dec 31 '22


If you need accuracy maintained over multiple operations, then you really ought to consider Decimal. A binary (IEEE-754) float may be fine for holding a number briefly, but it cannot keep decimal quantities exact as the number of operations applied grows. A quick illustration follows.
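This is the classic case where the distinction matters: a quantity like 0.1 is exact in base ten but not in base two, and the error compounds with every operation. (It doesn't help with trig-based transforms, where the inputs are irrational in either base.)

```csharp
using System;

class AccumulationDemo
{
    static void Main()
    {
        double d = 0;
        decimal m = 0;
        for (int i = 0; i < 10_000; i++)
        {
            d += 0.1;    // 0.1 has no finite binary representation
            m += 0.1m;   // exact in base ten
        }
        Console.WriteLine(d == 1000.0);  // False: binary rounding error has accumulated
        Console.WriteLine(m == 1000m);   // True: every addition was exact
    }
}
```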

— answered by Ignacio Vazquez-Abrams, Dec 31 '22