Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Hi. I've got the following problem:
43.65 + 61.11 = 104.75999999999999
For decimal the result is correct:
(decimal)43.65 + (decimal)61.11 = 104.76
Why is the result for double wrong?
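Here's a minimal console repro (the printed form of the double result can vary by .NET runtime; the stored value is the same):

    using System;

    class Repro
    {
        static void Main()
        {
            double d = 43.65 + 61.11;    // binary floating point
            decimal m = 43.65m + 61.11m; // decimal floating point

            // "R" (round-trip) formatting shows the full stored value;
            // a plain Console.WriteLine(d) may round it to 104.76 on
            // older runtimes.
            Console.WriteLine(d.ToString("R")); // 104.75999999999999
            Console.WriteLine(m);               // 104.76
        }
    }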
This question and its answers are a wealth of info on this - Difference between decimal, float and double in .NET?
To quote:
For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
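As a rough illustration of that guidance (hypothetical values, not taken from the quoted answer):

    using System;

    class DecimalVsDouble
    {
        static void Main()
        {
            // Human-invented exact quantities (money, scores): decimal
            // keeps every decimal digit exactly.
            decimal price = 43.65m;
            decimal shipping = 61.11m;
            Console.WriteLine(price + shipping); // 104.76

            // Measured quantities (sensor readings, scientific data):
            // double is the usual choice; the measurement was never
            // decimally exact anyway, and binary floating point is faster.
            double reading = 21.137;
            double offset = 0.25;
            Console.WriteLine(reading + offset); // ~21.387
        }
    }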
Short answer: binary floating-point representation (such as double) is inherently inaccurate, because most decimal fractions have no exact binary form. decimal is inexact too (it can't represent 1/3, for instance), but it works in base 10, so its inaccuracy is of a different kind: values written in decimal, like 43.65, are stored exactly. Here's one short explanation: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm
You can google "floating point inaccuracy" for more.
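For example, formatting with "G17" (17 significant digits) exposes the nearest representable doubles that are actually being added, a small sketch assuming a current .NET runtime:

    using System;

    class ShowRepresentation
    {
        static void Main()
        {
            // Neither operand has an exact binary representation, so the
            // doubles actually stored are slightly below the written values.
            Console.WriteLine(43.65.ToString("G17"));           // 43.649999999999999
            Console.WriteLine(61.11.ToString("G17"));           // 61.109999999999999
            Console.WriteLine((43.65 + 61.11).ToString("G17")); // 104.75999999999999
        }
    }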