UTF7Encoding::GetChars Method (array<Byte>^, Int32, Int32, array<Char>^, Int32)
Decodes a sequence of bytes from the specified byte array into the specified character array.
Assembly: mscorlib (in mscorlib.dll)
public: virtual int GetChars( array<unsigned char>^ bytes, int byteIndex, int byteCount, array<wchar_t>^ chars, int charIndex ) override
Parameters

- bytes
  Type: array<System::Byte>^
  The byte array containing the sequence of bytes to decode.

- byteIndex
  Type: System::Int32
  The index of the first byte to decode.

- byteCount
  Type: System::Int32
  The number of bytes to decode.

- chars
  Type: array<System::Char>^
  The character array to contain the resulting set of characters.

- charIndex
  Type: System::Int32
  The index at which to start writing the resulting set of characters.
Exceptions

| Exception | Condition |
|---|---|
| ArgumentNullException | bytes is null (Nothing). -or- chars is null (Nothing). |
| ArgumentOutOfRangeException | byteIndex or byteCount or charIndex is less than zero. -or- byteIndex and byteCount do not denote a valid range in bytes. -or- charIndex is not a valid index in chars. |
| ArgumentException | chars does not have enough capacity from charIndex to the end of the array to accommodate the resulting characters. |
| DecoderFallbackException | A fallback occurred (see Character Encoding in the .NET Framework for a complete explanation) -and- DecoderFallback is set to DecoderExceptionFallback. |
To calculate the exact array size required by GetChars to store the resulting characters, use GetCharCount. To calculate the maximum array size, the application should use GetMaxCharCount. The GetCharCount method generally allows allocation of less memory, while the GetMaxCharCount method generally executes faster.
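As a minimal sketch of the two sizing strategies (the byte values below are simply the ASCII string "Hello" and are not taken from the original example), GetCharCount returns the exact allocation for a specific input, while GetMaxCharCount sizes the array from the byte count alone.

using namespace System;
using namespace System::Text;

int main()
{
   array<Byte>^ bytes = {72,101,108,108,111};   // "Hello" in UTF-7 (plain ASCII)
   UTF7Encoding^ utf7 = gcnew UTF7Encoding;

   // Exact size: usually allocates less memory, but requires a pass over the input.
   int exact = utf7->GetCharCount( bytes, 0, bytes->Length );
   array<Char>^ exactChars = gcnew array<Char>(exact);
   utf7->GetChars( bytes, 0, bytes->Length, exactChars, 0 );

   // Maximum size: may over-allocate, but avoids the counting pass.
   int max = utf7->GetMaxCharCount( bytes->Length );
   array<Char>^ maxChars = gcnew array<Char>(max);
   utf7->GetChars( bytes, 0, bytes->Length, maxChars, 0 );

   Console::WriteLine( "Exact: {0}  Maximum: {1}", exact, max );
}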
Data to be converted, such as data read from a stream, might be available only in sequential blocks. In this case, or if the amount of data is so large that it needs to be divided into smaller blocks, the application should use the Decoder or the Encoder provided by the GetDecoder method or the GetEncoder method, respectively.
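The following is a sketch of that pattern, assuming the input arrives in two illustrative blocks that split a UTF-7 shift sequence across the boundary; a single Decoder instance carries the partial state from one call to the next.

using namespace System;
using namespace System::Text;

int main()
{
   // Two chunks of a UTF-7 stream (contents chosen for illustration); the shift
   // sequence "+ACM-" (the character '#') is split across the block boundary.
   array<Byte>^ block1 = {43,65,67,77};      // "+ACM"
   array<Byte>^ block2 = {45,97,98,99};      // "-abc"

   UTF7Encoding^ utf7 = gcnew UTF7Encoding;
   Decoder^ decoder = utf7->GetDecoder();    // preserves state between calls

   array<Char>^ chars = gcnew array<Char>(
      utf7->GetMaxCharCount( block1->Length + block2->Length ) );

   // Decode each block with the same Decoder so the partial shift sequence
   // started in block1 is completed by block2.
   int written = decoder->GetChars( block1, 0, block1->Length, chars, 0 );
   written += decoder->GetChars( block2, 0, block2->Length, chars, written );

   Console::WriteLine( gcnew String( chars, 0, written ) );   // "#abc"
}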
| Note |
|---|
| UTF7Encoding does not provide error detection. When invalid bytes are encountered, UTF7Encoding generally emits the invalid bytes. If a byte is larger than hexadecimal 0x7F, the byte value is zero-extended into a Unicode character, the result is stored in the chars array, and any shift sequence is terminated. For example, if the byte to decode is hexadecimal 0x81, the resulting character is U+0081. For security reasons, it is recommended that your applications use UTF8Encoding, UnicodeEncoding, or UTF32Encoding and enable error detection. |
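A small sketch of the pass-through behavior described in the note, assuming an input that contains the single invalid byte 0x81 between two ASCII bytes:

using namespace System;
using namespace System::Text;

int main()
{
   // One byte above 0x7F (illustrative); UTF7Encoding passes it through
   // zero-extended rather than reporting an error.
   array<Byte>^ bytes = {97, 0x81, 98};                // 'a', invalid byte, 'b'
   UTF7Encoding^ utf7 = gcnew UTF7Encoding;

   array<Char>^ chars = gcnew array<Char>(
      utf7->GetCharCount( bytes, 0, bytes->Length ) );
   utf7->GetChars( bytes, 0, bytes->Length, chars, 0 );

   for each (Char c in chars)
      Console::Write( "U+{0:X4} ", (int)c );           // U+0061 U+0081 U+0062
   Console::WriteLine();
}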
The following code example demonstrates how to use the GetChars method to decode a range of elements in a byte array and store the result in a character array.
using namespace System;
using namespace System::Text;
using namespace System::Collections;

int main()
{
   array<Char>^ chars;
   array<Byte>^ bytes = {85,84,70,55,32,69,110,99,111,100,105,110,103,32,69,120,97,109,112,108,101};

   UTF7Encoding^ utf7 = gcnew UTF7Encoding;

   // Size the character array exactly for the eight bytes starting at index 2.
   int charCount = utf7->GetCharCount( bytes, 2, 8 );
   chars = gcnew array<Char>(charCount);

   // Decode those bytes into the character array, writing from index 0.
   int charsDecodedCount = utf7->GetChars( bytes, 2, 8, chars, 0 );
   Console::WriteLine( "{0} characters used to decode bytes.", charsDecodedCount );

   Console::Write( "Decoded chars: " );
   IEnumerator^ myEnum = chars->GetEnumerator();
   while ( myEnum->MoveNext() )
   {
      Char c = safe_cast<Char>(myEnum->Current);
      Console::Write( "[{0}]", c.ToString() );
   }
   Console::WriteLine();
}
Version Information
Universal Windows Platform
Available since 10
.NET Framework
Available since 1.1
