I have to add trailing zeros to a decimal value, not only for display (so Format is not an option) but in the actual underlying data, because the decimal precision is important in our application.
I tried:
decimal value = 1M;
decimal withPrecision = value + 0.000M;
Which works well in many cases, but strangely not in all. I debugged one case where the value in withPrecision was still 1M: at runtime I could see no difference between the value and the same hardcoded value in the immediate window, and decimal.GetBits showed no differences either.
I tried (as proposed here: Adjusting decimal precision, .net):
decimal value = 1M;
decimal withPrecision = value * 1.000M;
which works well, except when the value is zero. Then the result is 0M without any trailing zeros. I also don't trust this solution; it may fail in other cases as well.
Currently I'm using:
decimal value = 1M;
decimal withPrecision = (value * 1.000M) + 0.000M;
Which works in all cases I have found so far, but it doesn't look very trustworthy either. I could also implement a special case for zero.
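For reference, the combined workaround could be wrapped in a small helper (a sketch with a hypothetical name; the behaviour matches what I observed, not a documented contract):

```csharp
// Sketch: force a decimal to carry (at least) three extra decimal places.
// Multiplying by 1.000M raises the scale of non-zero values; adding 0.000M
// covers the zero case, where multiplication alone left the scale at 0.
static decimal WithThreeTrailingZeros(decimal value)
{
    return (value * 1.000M) + 0.000M;
}
```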
I think Format and Parse would work, but I don't like that much. It doesn't look very fast, and I don't understand why I should have to put the decimal into a string just to manipulate it.
I start to believe that there is no clean solution for such a simple task.
A decimal occupies 128 bits (16 bytes), of which 1 bit is used for the sign, 96 bits (12 bytes) are used for the actual value, and 5 bits are used to store the position of the decimal point.
When the C# compiler sees 1M, it parses it as {sign: 0, value: 1, point: 0}, while 1.0M is parsed as {sign: 0, value: 10, point: 1}. However, both represent the same value (1M == 1.0M returns true), and another parser could just as easily have mapped both 1M and 1.0M to {sign: 0, value: 1, point: 0}.
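You can observe these two representations with decimal.GetBits, which returns the four 32-bit words of the internal layout; the point position (scale) sits in bits 16-23 of the last word. A minimal sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        // 1M:   value = 1,  scale = 0
        // 1.0M: value = 10, scale = 1
        int[] one     = decimal.GetBits(1M);    // {1, 0, 0, 0x00000000}
        int[] onePtO  = decimal.GetBits(1.0M);  // {10, 0, 0, 0x00010000}

        Console.WriteLine((one[3] >> 16) & 0xFF);    // scale of 1M   -> 0
        Console.WriteLine((onePtO[3] >> 16) & 0xFF); // scale of 1.0M -> 1
        Console.WriteLine(1M == 1.0M);               // True: same value
    }
}
```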
What happens when you add 1M and 0.1M together? 1M is {sign: 0, value: 1, point: 0} and 0.1M is {sign: 0, value: 1, point: 1}, so we have two numbers with different precision. That's no problem, however: we can move the point in 1M by adding 1 to its point and multiplying its value by 10, giving {sign: 0, value: 10, point: 1}. Now that both numbers have the same point position, we can add them together by simply adding up their values, which results in {sign: 0, value: 11, point: 1}, corresponding to 1.1M.
So the internal representation of a decimal does not affect the precision of its operations: the decimal point position is moved (and the value adjusted) whenever this becomes necessary.*
However, if for some reason your decimals absolutely must have a certain point position (and from what you've posted so far, I see no compelling reason; formatting is purely a display issue), then the easiest approach is to use the decimal(int, int, int, bool, byte) constructor (or alternatively decimal(int[])). This lets you pass in the value (as three integers), the sign (as a boolean) and the point position (as a byte). You will have to scale the value yourself if you pass a point position higher than 0: 1.000M must be constructed as new decimal(1000, 0, 0, false, 3), not as new decimal(1, 0, 0, false, 3) (because that would give you 0.001M).
*The point position is limited to [0-28], so a decimal cannot represent numbers with more than 28 digits after the decimal point. Also, the value has to be split between the digits in front of the point and those behind it, so very large numbers put restrictions on the available precision, possibly cutting it down in favor of representing the digits in front of the point.
Probably not the answer you were hoping for, but it looks like you will have to use formatting with ToString(). I recommend you read the Remarks section in this MSDN link.
The last paragraph in Remarks states:
The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.
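In other words, the scale set by a literal survives into the default ToString output, and a format string can force a fixed number of decimals regardless of the stored scale. A short sketch:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        decimal value = 1.000M;

        // Default ToString reveals the trailing zeros carried by the scale.
        Console.WriteLine(value.ToString(CultureInfo.InvariantCulture)); // 1.000

        // "F3" forces three decimals even when the stored scale is 0.
        Console.WriteLine(1M.ToString("F3", CultureInfo.InvariantCulture)); // 1.000
        Console.WriteLine(0M.ToString("F3", CultureInfo.InvariantCulture)); // 0.000
    }
}
```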
As I understand from your comment, you want to avoid an additional field for storing the precision by encoding it in the decimal value itself. Don't do this. It abuses the framework, and even if you implement it successfully, it can stop working in another framework version, on Mono, etc. This kind of programming makes your code base unreadable and hard to debug.
Just use your own type:
struct DecimalEx
{
    public decimal Value;
    public byte Precision;
}
It's cool and fun to fit a couple of values into one simple data type, but if you're sharing code with others, try to avoid it, or you will easily earn your special place in hell for that.