Decimal Operators
Assembly: mscorlib (in mscorlib.dll)
| Name | Description |
|---|---|
| Addition(Decimal, Decimal) | Adds two specified Decimal values. |
| Decrement(Decimal) | Decrements the Decimal operand by 1. |
| Division(Decimal, Decimal) | Divides two specified Decimal values. |
| Equality(Decimal, Decimal) | Returns a value that indicates whether two Decimal values are equal. |
| Explicit(Decimal to Byte) | Defines an explicit conversion of a Decimal to an 8-bit unsigned integer. |
| Explicit(Decimal to Char) | Defines an explicit conversion of a Decimal to a Unicode character. |
| Explicit(Decimal to Double) | Defines an explicit conversion of a Decimal to a double-precision floating-point number. |
| Explicit(Decimal to Int16) | Defines an explicit conversion of a Decimal to a 16-bit signed integer. |
| Explicit(Decimal to Int32) | Defines an explicit conversion of a Decimal to a 32-bit signed integer. |
| Explicit(Decimal to Int64) | Defines an explicit conversion of a Decimal to a 64-bit signed integer. |
| Explicit(Decimal to SByte) | Defines an explicit conversion of a Decimal to an 8-bit signed integer. |
| Explicit(Decimal to Single) | Defines an explicit conversion of a Decimal to a single-precision floating-point number. |
| Explicit(Decimal to UInt16) | Defines an explicit conversion of a Decimal to a 16-bit unsigned integer. |
| Explicit(Decimal to UInt32) | Defines an explicit conversion of a Decimal to a 32-bit unsigned integer. |
| Explicit(Decimal to UInt64) | Defines an explicit conversion of a Decimal to a 64-bit unsigned integer. |
| Explicit(Double to Decimal) | Defines an explicit conversion of a double-precision floating-point number to a Decimal. |
| Explicit(Single to Decimal) | Defines an explicit conversion of a single-precision floating-point number to a Decimal. |
| GreaterThan(Decimal, Decimal) | Returns a value indicating whether a specified Decimal is greater than another specified Decimal. |
| GreaterThanOrEqual(Decimal, Decimal) | Returns a value indicating whether a specified Decimal is greater than or equal to another specified Decimal. |
| Implicit(Byte to Decimal) | Defines an implicit conversion of an 8-bit unsigned integer to a Decimal. |
| Implicit(Char to Decimal) | Defines an implicit conversion of a Unicode character to a Decimal. |
| Implicit(Int16 to Decimal) | Defines an implicit conversion of a 16-bit signed integer to a Decimal. |
| Implicit(Int32 to Decimal) | Defines an implicit conversion of a 32-bit signed integer to a Decimal. |
| Implicit(Int64 to Decimal) | Defines an implicit conversion of a 64-bit signed integer to a Decimal. |
| Implicit(SByte to Decimal) | Defines an implicit conversion of an 8-bit signed integer to a Decimal. |
| Implicit(UInt16 to Decimal) | Defines an implicit conversion of a 16-bit unsigned integer to a Decimal. |
| Implicit(UInt32 to Decimal) | Defines an implicit conversion of a 32-bit unsigned integer to a Decimal. |
| Implicit(UInt64 to Decimal) | Defines an implicit conversion of a 64-bit unsigned integer to a Decimal. |
| Increment(Decimal) | Increments the Decimal operand by 1. |
| Inequality(Decimal, Decimal) | Returns a value that indicates whether two Decimal objects have different values. |
| LessThan(Decimal, Decimal) | Returns a value indicating whether a specified Decimal is less than another specified Decimal. |
| LessThanOrEqual(Decimal, Decimal) | Returns a value indicating whether a specified Decimal is less than or equal to another specified Decimal. |
| Modulus(Decimal, Decimal) | Returns the remainder resulting from dividing two specified Decimal values. |
| Multiply(Decimal, Decimal) | Multiplies two specified Decimal values. |
| Subtraction(Decimal, Decimal) | Subtracts two specified Decimal values. |
| UnaryNegation(Decimal) | Negates the value of the specified Decimal operand. |
| UnaryPlus(Decimal) | Returns the value of the Decimal operand (the sign of the operand is unchanged). |
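The operators above are what make `decimal` values usable with ordinary C# syntax. A minimal sketch of a few of them in action: the implicit widening conversion from `int`, the explicit (cast-required) narrowing conversions back to integer types, and exact decimal arithmetic. The class and variable names here are illustrative, not part of the API.

```csharp
using System;

class DecimalOperatorDemo
{
    static void Main()
    {
        // op_Addition / op_Equality: decimal arithmetic is exact for
        // these values, unlike binary floating point (0.1 + 0.2 != 0.3
        // when computed with double).
        decimal a = 0.1m;
        decimal b = 0.2m;
        Console.WriteLine(a + b == 0.3m);   // True

        // op_Implicit(Int32 to Decimal): no cast needed, the
        // conversion is widening and cannot lose information.
        decimal c = 42;
        Console.WriteLine(c);               // 42

        // op_Explicit(Decimal to Int32): narrowing requires a cast
        // and truncates the fractional part toward zero.
        int truncated = (int)7.9m;
        Console.WriteLine(truncated);       // 7

        // The explicit conversions throw OverflowException when the
        // value does not fit the target type.
        try
        {
            byte tooBig = (byte)300m;
        }
        catch (OverflowException)
        {
            Console.WriteLine("300m does not fit in a Byte");
        }
    }
}
```

Note the asymmetry by design: every integral type converts *to* `Decimal` implicitly (lossless), while every conversion *from* `Decimal` is explicit, because it can truncate or overflow.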

