Encoding.GetChars Method (Byte*, Int32, Char*, Int32)
When overridden in a derived class, decodes a sequence of bytes starting at the specified byte pointer into a set of characters that are stored starting at the specified character pointer.
This API is not CLS-compliant.

Namespace: System.Text
Assembly: mscorlib (in mscorlib.dll)
[CLSCompliantAttribute(false)]
[ComVisibleAttribute(false)]
public virtual int GetChars(
    byte* bytes,
    int byteCount,
    char* chars,
    int charCount
)
Parameters

bytes
- Type: System.Byte*
A pointer to the first byte to decode.

byteCount
- Type: System.Int32
The number of bytes to decode.

chars
- Type: System.Char*
A pointer to the location at which to start writing the resulting set of characters.

charCount
- Type: System.Int32
The maximum number of characters to write.
Return Value
Type: System.Int32
The actual number of characters written at the location indicated by the chars parameter.
Exceptions

ArgumentNullException
bytes is null.
-or-
chars is null.

ArgumentOutOfRangeException
byteCount or charCount is less than zero.

ArgumentException
charCount is less than the resulting number of characters.

DecoderFallbackException
A fallback occurred (see Character Encoding in the .NET Framework for a complete explanation) and DecoderFallback is set to DecoderExceptionFallback.
To calculate the exact array size that GetChars requires to store the resulting characters, you should use the GetCharCount method. To calculate the maximum array size, use the GetMaxCharCount method. The GetCharCount method generally allows allocation of less memory, while the GetMaxCharCount method generally executes faster.
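As a sketch of that sizing trade-off, the following example compares the two approaches using the array-based overload (the sample string is illustrative; any Encoding-derived class behaves the same way):

```csharp
using System;
using System.Text;

class BufferSizing
{
    static void Main()
    {
        Encoding enc = Encoding.UTF8;
        byte[] input = enc.GetBytes("résumé"); // 8 bytes: each 'é' takes 2 bytes in UTF-8

        // Exact size: scans the input once, so the allocation is minimal.
        int exact = enc.GetCharCount(input, 0, input.Length);

        // Worst case: computed from byteCount alone, so it is faster but may over-allocate.
        int worstCase = enc.GetMaxCharCount(input.Length);

        char[] chars = new char[exact];
        int written = enc.GetChars(input, 0, input.Length, chars, 0);

        Console.WriteLine($"exact={exact}, worstCase={worstCase}, written={written}");
    }
}
```

For this input, exact is 6 while the worst-case estimate is larger, illustrating why GetCharCount saves memory at the cost of one extra pass over the data.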
Encoding.GetChars gets characters from an input byte sequence. Encoding.GetChars differs from Decoder.GetChars because Encoding expects discrete conversions, while Decoder is designed for multiple passes on a single input stream.
If the data to be converted is available only in sequential blocks (such as data read from a stream) or if the amount of data is so large that it needs to be divided into smaller blocks, you should use the Decoder or the Encoder object provided by the GetDecoder or the GetEncoder method, respectively, of a derived class.
Note This method is intended to operate on Unicode characters, not on arbitrary binary data, such as byte arrays. If you need to encode arbitrary binary data into text, you should use a protocol such as uuencode, which is implemented by methods such as Convert.ToBase64CharArray.
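For instance, arbitrary binary data can be represented as text with Convert.ToBase64CharArray; this is a sketch with arbitrary sample bytes:

```csharp
using System;

class Base64Sketch
{
    static void Main()
    {
        // Arbitrary binary data that is not valid text in any encoding.
        byte[] binary = { 0x01, 0xFF, 0x10, 0x80 };

        // Base64 output length: 4 characters for every 3 input bytes, rounded up.
        char[] chars = new char[((binary.Length + 2) / 3) * 4];
        int written = Convert.ToBase64CharArray(binary, 0, binary.Length, chars, 0);

        Console.WriteLine(new string(chars, 0, written)); // "Af8QgA=="
    }
}
```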
The GetCharCount method determines how many characters result in decoding a sequence of bytes, and the GetChars method performs the actual decoding. The Encoding.GetChars method expects discrete conversions, in contrast to the Decoder.GetChars method, which handles multiple passes on a single input stream.
Your app might need to decode multiple input bytes from a code page and process the bytes using multiple calls. In this case, you probably need to maintain state between calls, because byte sequences can be interrupted when processed in batches. (For example, part of an ISO-2022 shift sequence may end one GetChars call and continue at the beginning of the next GetChars call. Encoding.GetChars will call the fallback for those incomplete sequences, but Decoder will remember those sequences for the next call.)
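The difference can be sketched with a two-byte UTF-8 sequence split across calls (the chunk contents here are illustrative):

```csharp
using System;
using System.Text;

class SplitSequence
{
    static void Main()
    {
        // 'é' is 0xC3 0xA9 in UTF-8; the sequence is split across two chunks.
        byte[] part1 = { (byte)'a', 0xC3 };
        byte[] part2 = { 0xA9, (byte)'b' };

        // Encoding treats each call as a complete conversion, so the orphaned
        // bytes trigger the fallback (U+FFFD with the default replacement fallback).
        string viaEncoding = Encoding.UTF8.GetString(part1) + Encoding.UTF8.GetString(part2);

        // Decoder remembers the trailing 0xC3 and completes the character on the next call.
        Decoder decoder = Encoding.UTF8.GetDecoder();
        char[] buffer = new char[8];
        int n1 = decoder.GetChars(part1, 0, part1.Length, buffer, 0);
        int n2 = decoder.GetChars(part2, 0, part2.Length, buffer, n1);
        string viaDecoder = new string(buffer, 0, n1 + n2);

        Console.WriteLine($"Encoding: {viaEncoding}  Decoder: {viaDecoder}"); // Decoder yields "aéb"
    }
}
```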
If your app handles string outputs, the GetString method is recommended. Because it must check the string length and allocate a buffer, this method is slightly slower, but the resulting String type is preferable to a raw character array.
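A minimal sketch of the string-output path:

```csharp
using System;
using System.Text;

class StringOutput
{
    static void Main()
    {
        byte[] bytes = Encoding.UTF8.GetBytes("hello");

        // GetString sizes and allocates the result internally; no char[] to manage.
        string text = Encoding.UTF8.GetString(bytes);

        Console.WriteLine(text); // "hello"
    }
}
```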
The pointer version of GetChars(Byte*, Int32, Char*, Int32) allows some fast techniques, particularly with multiple calls to large buffers. Bear in mind, however, that this method version is unsafe, since pointers are required.
If your app must convert a large amount of data, it should reuse the output buffer. In this case, the GetChars(Byte[], Int32, Int32, Char[], Int32) version that supports output character buffers is the best choice.
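A sketch of reusing one output buffer across blocks (the blocks and their sizes are illustrative):

```csharp
using System;
using System.Text;

class ReusedBuffer
{
    static void Main()
    {
        Encoding enc = Encoding.UTF8;
        byte[][] blocks =
        {
            enc.GetBytes("first "),
            enc.GetBytes("second "),
            enc.GetBytes("third"),
        };

        // Size the buffer once for the largest block instead of allocating per call.
        int maxBlockBytes = 7; // "second " is the largest block here
        char[] buffer = new char[enc.GetMaxCharCount(maxBlockBytes)];
        var result = new StringBuilder();

        foreach (byte[] block in blocks)
        {
            int written = enc.GetChars(block, 0, block.Length, buffer, 0);
            result.Append(buffer, 0, written);
        }

        Console.WriteLine(result.ToString()); // "first second third"
    }
}
```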
Consider using the Decoder.Convert method instead of GetCharCount. The conversion method converts as much data as possible and throws an exception only if the output buffer is too small to hold even a single converted character. For continuous decoding of a stream, this method is often the best choice.
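A sketch of a Decoder.Convert loop with a deliberately undersized output buffer (the buffer size and input are illustrative):

```csharp
using System;
using System.Text;

class ConvertLoop
{
    static void Main()
    {
        byte[] input = Encoding.UTF8.GetBytes("déjà vu");
        Decoder decoder = Encoding.UTF8.GetDecoder();

        char[] output = new char[3]; // deliberately small to force multiple passes
        var result = new StringBuilder();
        int position = 0;
        bool completed = false;

        while (!completed)
        {
            // Convert reports how much input it consumed and output it produced,
            // and sets completed when all input has been converted.
            decoder.Convert(input, position, input.Length - position,
                            output, 0, output.Length,
                            flush: true,
                            out int bytesUsed, out int charsUsed, out completed);
            position += bytesUsed;
            result.Append(output, 0, charsUsed);
        }

        Console.WriteLine(result.ToString()); // "déjà vu"
    }
}
```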
Requires full trust for the immediate caller. This member cannot be used by partially trusted or transparent code.