This documentation is archived and is not being maintained.

Encoding::GetBytes Method (array<Char>, Int32, Int32, array<Byte>, Int32)

When overridden in a derived class, encodes a set of characters from the specified character array into the specified byte array.

Namespace:  System.Text
Assembly:  mscorlib (in mscorlib.dll)

virtual int GetBytes(
	array<wchar_t>^ chars, 
	int charIndex, 
	int charCount, 
	array<unsigned char>^ bytes, 
	int byteIndex
) abstract


Parameters

chars
Type: array<System::Char>
The character array containing the set of characters to encode.

charIndex
Type: System::Int32
The index of the first character to encode.

charCount
Type: System::Int32
The number of characters to encode.

bytes
Type: array<System::Byte>
The byte array to contain the resulting sequence of bytes.

byteIndex
Type: System::Int32
The index at which to start writing the resulting sequence of bytes.

Return Value

Type: System::Int32
The actual number of bytes written into bytes.


Exceptions

ArgumentNullException
chars is nullptr.
-or-
bytes is nullptr.

ArgumentOutOfRangeException
charIndex or charCount or byteIndex is less than zero.
-or-
charIndex and charCount do not denote a valid range in chars.
-or-
byteIndex is not a valid index in bytes.

ArgumentException
bytes does not have enough capacity from byteIndex to the end of the array to accommodate the resulting bytes.

EncoderFallbackException
A fallback occurred (see Understanding Encodings for a complete explanation)
-and-
EncoderFallback is set to EncoderExceptionFallback.

Remarks

To calculate the exact array size required by GetBytes to store the resulting bytes, the application should use GetByteCount. To calculate the maximum array size, the application should use GetMaxByteCount. The GetByteCount method generally allows allocation of less memory, while the GetMaxByteCount method generally executes faster.

If the data to be converted is available only in sequential blocks (such as data read from a stream) or if the amount of data is so large that it needs to be divided into smaller blocks, the application should use the Decoder or the Encoder provided by the GetDecoder method or the GetEncoder method, respectively, of a derived class.

The GetByteCount method determines how many bytes result in encoding a set of Unicode characters, and the GetBytes method performs the actual encoding. The Encoding::GetBytes method expects discrete conversions, in contrast to the Encoder::GetBytes method, which handles multiple conversions on a single input stream.

Several versions of GetByteCount and GetBytes are supported. The following are some programming considerations for use of these methods:

  • The application might need to encode many input characters to a code page and process the characters using multiple calls. In this case, your application probably needs to maintain state between calls, taking into account the state that is persisted by the Encoder object being used.

  • If the application handles string inputs, the string version, GetBytes(String), is recommended.

  • The Unicode character buffer version of GetBytes allows some fast techniques, particularly with multiple calls using the Encoder object or inserting into existing buffers. Bear in mind, however, that this method version is sometimes unsafe, since pointers are required.

  • If your application must convert a large amount of data, it should reuse the output buffer. In this case, the GetBytes version that supports byte arrays is the best choice.

  • Consider using the Encoder::Convert method instead of GetByteCount. The conversion method converts as much data as possible and throws an exception only if the output buffer is too small to hold even a single converted character. For continuous encoding of a stream, this method is often the best choice.

Examples

The following example determines the number of bytes required to encode three characters from a character array, encodes the characters, and displays the resulting bytes.

using namespace System;
using namespace System::Text;

void PrintCountsAndBytes( array<Char>^ chars, int index, int count, Encoding^ enc );
void PrintHexBytes( array<Byte>^ bytes );

int main()
{
   // The characters to encode: 
   //    Latin Small Letter Z (U+007A) 
   //    Latin Small Letter A (U+0061) 
   //    Combining Breve (U+0306) 
   //    Latin Small Letter AE With Acute (U+01FD) 
   //    Greek Small Letter Beta (U+03B2) 
   //    a high-surrogate value (U+D8FF) 
   //    a low-surrogate value (U+DCFF) 
   array<Char>^ myChars = gcnew array<Char> {
      L'z', L'a', L'\x0306', L'\x01FD', L'\x03B2', L'\xD8FF', L'\xDCFF'
   };

   // Get different encodings.
   Encoding^ u7    = Encoding::UTF7;
   Encoding^ u8    = Encoding::UTF8;
   Encoding^ u16LE = Encoding::Unicode;
   Encoding^ u16BE = Encoding::BigEndianUnicode;
   Encoding^ u32   = Encoding::UTF32;

   // Encode three characters starting at index 4, and print out the counts and the resulting bytes.
   PrintCountsAndBytes( myChars, 4, 3, u7 );
   PrintCountsAndBytes( myChars, 4, 3, u8 );
   PrintCountsAndBytes( myChars, 4, 3, u16LE );
   PrintCountsAndBytes( myChars, 4, 3, u16BE );
   PrintCountsAndBytes( myChars, 4, 3, u32 );
}

void PrintCountsAndBytes( array<Char>^ chars, int index, int count, Encoding^ enc )
{
   // Display the name of the encoding used.
   Console::Write( "{0,-30} :", enc );

   // Display the exact byte count. 
   int iBC = enc->GetByteCount( chars, index, count );
   Console::Write( " {0,-3}", iBC );

   // Display the maximum byte count. 
   int iMBC = enc->GetMaxByteCount( count );
   Console::Write( " {0,-3} :", iMBC );

   // Encode the array of chars. 
   array<Byte>^ bytes = enc->GetBytes( chars, index, count );

   // The following is an alternative way to encode the array of chars: 
   // array<Byte>^ bytes = gcnew array<Byte>(iBC); 
   // enc->GetBytes( chars, index, count, bytes, bytes->GetLowerBound( 0 ) ); 

   // Display all the encoded bytes.
   PrintHexBytes( bytes );
}

void PrintHexBytes( array<Byte>^ bytes )
{
   if ( (bytes == nullptr) || (bytes->Length == 0) )
      Console::WriteLine( "<none>" );
   else
   {
      for ( int i = 0; i < bytes->Length; i++ )
         Console::Write( "{0:X2} ", bytes[ i ] );
      Console::WriteLine();
   }
}

This code produces the following output.

System.Text.UTF7Encoding       : 10  11  :2B 41 37 4C 59 2F 39 7A 2F 2D
System.Text.UTF8Encoding       : 6   12  :CE B2 F1 8F B3 BF
System.Text.UnicodeEncoding    : 6   8   :B2 03 FF D8 FF DC
System.Text.UnicodeEncoding    : 6   8   :03 B2 D8 FF DC FF
System.Text.UTF32Encoding      : 8   16  :B2 03 00 00 FF FC 04 00


Platforms

Windows 7, Windows Vista, Windows XP SP2, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP Starter Edition, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003, Windows Server 2000 SP4, Windows Millennium Edition, Windows 98, Windows CE, Windows Mobile for Smartphone, Windows Mobile for Pocket PC, Xbox 360, Zune

The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.

Version Information

.NET Framework

Supported in: 3.5, 3.0, 2.0, 1.1, 1.0

.NET Compact Framework

Supported in: 3.5, 2.0, 1.0

XNA Framework

Supported in: 3.0, 2.0, 1.0