Convert.ToUInt16 Method (String, Int32)
Converts the string representation of a number in a specified base to an equivalent 16-bit unsigned integer.
This API is not CLS-compliant.
Assembly: mscorlib (in mscorlib.dll)
[<CLSCompliantAttribute(false)>] static member ToUInt16 : value:string * fromBase:int -> uint16
Parameters

value
A string that contains the number to convert.

fromBase
The base of the number in value, which must be 2, 8, 10, or 16.

Return Value
Type: System.UInt16
A 16-bit unsigned integer that is equivalent to the number in value, or 0 (zero) if value is null.
Exceptions
An exception is thrown when any of the following is true:

fromBase is not 2, 8, 10, or 16.

value, which represents a non-base 10 unsigned number, is prefixed with a negative sign.

value is String.Empty.

value contains a character that is not a valid digit in the base specified by fromBase. The exception message indicates that there are no digits to convert if the first character in value is invalid; otherwise, the message indicates that value contains invalid trailing characters.
Remarks
If fromBase is 16, you can prefix the number specified by the value parameter with "0x" or "0X".
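This prefix handling can be illustrated with a Python analogue (Python's built-in int also accepts an optional "0x"/"0X" prefix when the base is 16, mirroring the behavior described above); this is a sketch of the documented behavior, not .NET code:

```python
# Like Convert.ToUInt16(value, 16), Python's int() accepts an optional
# "0x" or "0X" prefix when the base is 16. All three spellings below
# parse to the same unsigned 16-bit value.
for text in ("ff03", "0xFF03", "0XFF03"):
    value = int(text, 16)
    assert 0 <= value <= 0xFFFF  # fits in an unsigned 16-bit integer
    print(f"{text!r} -> {value}")
```

With any other fromBase (2, 8, or 10), the prefix is not recognized and the "x" would be rejected as an invalid digit.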
Because the UInt16 data type supports unsigned values only, the method assumes that value is expressed using unsigned binary representation. In other words, all 16 bits are used to represent the numeric value, and a sign bit is absent. As a result, it is possible to write code in which a signed integer value that is out of the range of the UInt16 data type is converted to a UInt16 value without the method throwing an exception. The following example converts Int16.MinValue to its hexadecimal string representation, and then calls the method. Instead of throwing an exception, the example displays the message, "0x8000 converts to 32768."
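The original example is not reproduced on this page; the following is a hedged Python sketch of the same arithmetic (Int16.MinValue is -32768, and masking to 16 bits plays the role of .NET's hexadecimal formatting of a signed 16-bit value):

```python
# Sketch of the documented example: Int16.MinValue (-32768), written as
# the hexadecimal string "0x8000", converts without error to the
# unsigned value 32768, because all 16 bits are read as magnitude.
int16_min = -32768
hex_string = format(int16_min & 0xFFFF, "#06x")  # the 16-bit pattern "0x8000"
unsigned = int(hex_string, 16)                   # no exception is thrown
print(f"{hex_string} converts to {unsigned}.")
```

The bit pattern 0x8000 means -32768 under two's complement but 32768 under unsigned interpretation; the method always chooses the unsigned reading.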
When performing binary operations or numeric conversions, it is always the responsibility of the developer to verify that a method or operator is using the appropriate numeric representation to interpret a particular value. The following example illustrates one technique for ensuring that the method does not inappropriately use binary representation to interpret a value that uses two's complement representation when converting a hexadecimal string to a UInt16 value. The example determines whether a value represents a signed or an unsigned integer while it is converting that value to its string representation. When the example converts the value to a UInt16 value, it checks whether the original value was a signed integer. If so, and if its high-order bit is set (which indicates that the original value was negative), the method throws an exception.
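The technique described above can be sketched in Python as follows. This is an illustrative analogue, not the original .NET example: the helper name to_uint16_checked is hypothetical, the signed/unsigned provenance is passed as a boolean flag rather than tracked alongside the string, and Python's OverflowError stands in for the exception the paragraph mentions.

```python
def to_uint16_checked(hex_string: str, was_signed: bool) -> int:
    """Parse a hexadecimal string as an unsigned 16-bit value, rejecting
    values whose high-order bit is set when the original value was a
    signed integer (a set sign bit means the original was negative)."""
    value = int(hex_string, 16)
    if not 0 <= value <= 0xFFFF:
        raise OverflowError(f"{hex_string} is out of range of UInt16")
    if was_signed and value & 0x8000:
        # High-order bit set on a value known to come from a signed
        # integer: the original value was negative, so refuse it.
        raise OverflowError(f"{hex_string} was a negative signed value")
    return value

print(to_uint16_checked("0x7FFF", was_signed=True))   # sign bit clear: accepted
print(to_uint16_checked("0x8000", was_signed=False))  # unsigned source: accepted
# to_uint16_checked("0x8000", was_signed=True) would raise OverflowError
```

The point is that the caller, not the conversion method, must carry the knowledge of whether the string was produced from a signed or an unsigned value.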
Version Information

Universal Windows Platform
Available since 4.5
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1