4.1.7 The decimal type
The decimal type is a 128-bit data type suitable for financial and monetary calculations. The decimal type can represent values ranging from 1.0 × 10⁻²⁸ to approximately 7.9 × 10²⁸ with 28-29 significant digits.
The finite set of values of type decimal are of the form (−1)ˢ × c × 10⁻ᵉ, where the sign s is 0 or 1, the coefficient c is given by 0 ≤ c < 2⁹⁶, and the scale e is such that 0 ≤ e ≤ 28. The decimal type does not support signed zeros, infinities, or NaNs. A
decimal is represented as a 96-bit integer scaled by a power of ten. For
decimals with an absolute value less than
1.0m, the value is exact to the 28th decimal place, but no further. For
decimals with an absolute value greater than or equal to
1.0m, the value is exact to 28 or 29 digits. In contrast to the double data type, decimal fractional numbers such as 0.1 can be represented exactly in the decimal representation. In the double representation, such numbers are often infinite repeating binary fractions, making that representation more prone to round-off errors.
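The difference is easy to observe by accumulating 0.1 repeatedly in each representation; a small sketch, assuming a .NET runtime:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // 0.1 has an exact representation in decimal, so repeated
        // addition stays exact.
        decimal dSum = 0m;
        for (int i = 0; i < 10; i++) dSum += 0.1m;
        Console.WriteLine(dSum == 1.0m);   // True

        // In double, 0.1 is an infinite repeating binary fraction,
        // so the same loop accumulates round-off error.
        double fSum = 0.0;
        for (int i = 0; i < 10; i++) fSum += 0.1;
        Console.WriteLine(fSum == 1.0);    // False
    }
}
```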
If one of the operands of a binary operator is of type
decimal, then the other operand must be of an integral type or of type
decimal. If an integral type operand is present, it is converted to
decimal before the operation is performed.
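For example, an int operand in a mixed expression is implicitly converted to decimal before the operator is applied (the variable names below are illustrative only):

```csharp
using System;

class MixedOperands
{
    static void Main()
    {
        decimal price = 19.95m;
        int quantity = 3;

        // quantity is converted to decimal before the
        // multiplication is performed; the result is decimal.
        decimal total = quantity * price;
        Console.WriteLine(total);   // 59.85
    }
}
```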
The result of an operation on values of type
decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as "banker's rounding"). A zero result always has a sign of 0 and a scale of 0.
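The same midpoint-to-even rule is also exposed by the standard library: Math.Round on a decimal defaults to MidpointRounding.ToEven, which makes banker's rounding easy to observe directly (an illustration of the rule, not of the operator-level rounding itself):

```csharp
using System;

class BankersRounding
{
    static void Main()
    {
        // A value exactly halfway between two candidates rounds to
        // the candidate whose least significant digit is even.
        Console.WriteLine(Math.Round(2.5m));     // 2
        Console.WriteLine(Math.Round(3.5m));     // 4
        Console.WriteLine(Math.Round(2.25m, 1)); // 2.2
        Console.WriteLine(Math.Round(2.35m, 1)); // 2.4
    }
}
```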
If a decimal arithmetic operation produces a value less than or equal to 5 × 10⁻²⁹ in absolute value, the result of the operation becomes zero. If a
decimal arithmetic operation produces a result that is too large for the
decimal format, a
System.OverflowException is thrown.
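Both behaviors can be demonstrated directly; a minimal sketch, assuming a .NET runtime:

```csharp
using System;

class DecimalLimits
{
    static void Main()
    {
        // A result too large for the decimal format throws.
        try
        {
            decimal big = decimal.MaxValue;
            decimal product = big * 2m;
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow");   // overflow
        }

        // A result at or below 5 × 10⁻²⁹ in absolute value
        // silently becomes zero instead.
        decimal tiny = 1e-28m;
        Console.WriteLine(tiny / 10m == 0m); // True
    }
}
```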
The decimal type has greater precision but smaller range than the floating-point types. Thus, conversions from the floating-point types to
decimal might produce overflow exceptions, and conversions from
decimal to the floating-point types might cause loss of precision. For these reasons, no implicit conversions exist between the floating-point types and
decimal, and without explicit casts, it is not possible to mix floating-point and
decimal operands in the same expression.
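A short illustration of the explicit casts required in each direction (the out-of-range value 1e30 is chosen arbitrarily for the example):

```csharp
using System;

class DecimalConversions
{
    static void Main()
    {
        double d = 0.1;
        decimal m = 0.1m;

        // decimal x = d + m;   // compile-time error: no implicit conversion

        decimal fromDouble = (decimal)d;  // explicit cast required
        double fromDecimal = (double)m;   // may lose precision

        Console.WriteLine(fromDouble == m);  // True

        // Converting a double outside decimal's range throws.
        try
        {
            decimal tooBig = (decimal)1e30;
        }
        catch (OverflowException)
        {
            Console.WriteLine("out of range for decimal");
        }
    }
}
```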