Encoding.GetMaxByteCount Method (Int32)


When overridden in a derived class, calculates the maximum number of bytes produced by encoding the specified number of characters.

Namespace:   System.Text
Assembly:  mscorlib (in mscorlib.dll)

abstract GetMaxByteCount : 
        charCount:int -> int


Parameters

charCount
Type: System.Int32

The number of characters to encode.

Return Value

Type: System.Int32

The maximum number of bytes produced by encoding the specified number of characters.

Exceptions

ArgumentOutOfRangeException
charCount is less than zero.

EncoderFallbackException
A fallback occurred (see Character Encoding in the .NET Framework for a complete explanation).
-and-
EncoderFallback is set to EncoderExceptionFallback.

Remarks

The charCount parameter specifies the number of Char objects that represent the Unicode characters to encode, because the .NET Framework internally uses UTF-16 to represent Unicode characters. Consequently, most Unicode characters can be represented by one Char object, but a Unicode character represented by a surrogate pair, for example, requires two Char objects.
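A minimal F# sketch of this point: a character outside the Basic Multiplane Multilingual Plane occupies two Char objects, so the string's Length counts UTF-16 code units rather than Unicode characters.

```fsharp
// U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the Basic Multilingual
// Plane, so UTF-16 stores it as a surrogate pair of two Char objects.
let s = "a\U0001D11E"

// Length counts Char objects (UTF-16 code units), not Unicode characters:
printfn "%d" s.Length   // 3, not 2
```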

To calculate the exact array size required by GetBytes to store the resulting bytes, you should use the GetByteCount method. To calculate the maximum array size, use the GetMaxByteCount method. The GetByteCount method generally allows allocation of less memory, while the GetMaxByteCount method generally executes faster.
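As an illustration of the difference, the following sketch contrasts the two methods (assuming UTF-8; the exact worst-case figure depends on the encoding and its fallback):

```fsharp
open System.Text

let text = "Hello, world"

// Exact number of bytes GetBytes would produce for this particular string:
let exact = Encoding.UTF8.GetByteCount text

// Worst-case number of bytes for any input of the same length, computed
// without examining the characters themselves:
let worstCase = Encoding.UTF8.GetMaxByteCount text.Length

printfn "exact = %d, worst case = %d" exact worstCase
```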

GetMaxByteCount returns a worst-case number, including the worst case for the currently selected EncoderFallback. If a fallback with a potentially long replacement string is chosen, GetMaxByteCount returns large values, particularly in cases where the worst case for the encoding involves switching modes for every character. For example, this can happen for ISO-2022-JP. For more information, see the blog entry "What's with Encoding.GetMaxByteCount() and Encoding.GetMaxCharCount()?" (http://blogs.msdn.com/shawnste/archive/2005/03/02/383903.aspx).

In most cases, this method returns reasonable values for small strings. For large strings, you might have to choose between using very large buffers and catching errors in the rare case when a more reasonable buffer is too small. You might also want to consider a different approach using GetByteCount or Encoder.Convert.

When using GetMaxByteCount, you should allocate the output buffer based on the maximum size of the input buffer. If the size of the output buffer is constrained, you might use the Convert method instead.
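For example, a buffer sized with GetMaxByteCount is always large enough for the corresponding GetBytes call. A minimal sketch, assuming UTF-16 (Encoding.Unicode):

```fsharp
open System.Text

let encoding = Encoding.Unicode
let chars = "sample text".ToCharArray()

// Allocate a buffer guaranteed to hold the encoded result:
let buffer = Array.zeroCreate<byte> (encoding.GetMaxByteCount chars.Length)

// GetBytes returns the number of bytes actually written, which may be
// smaller than the worst-case buffer size:
let written = encoding.GetBytes(chars, 0, chars.Length, buffer, 0)
printfn "wrote %d of %d bytes" written buffer.Length
```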

Note that GetMaxByteCount considers potential leftover surrogates from a previous encoder operation. Because of this, passing a value of 1 to the method returns 2 for a single-byte encoding, such as ASCII. If you need to know whether an encoding uses a single byte per character, use the IsSingleByte property.


GetMaxByteCount(N) is not necessarily the same value as N * GetMaxByteCount(1).
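The ASCII encoding illustrates this nonlinearity, because room for a leftover surrogate is reserved once per call rather than once per character. A minimal sketch:

```fsharp
open System.Text

// ASCIIEncoding reserves one extra byte per call, so the relationship
// between charCount and the worst-case byte count is not linear:
printfn "%d" (Encoding.ASCII.GetMaxByteCount 10)      // 11
printfn "%d" (10 * Encoding.ASCII.GetMaxByteCount 1)  // 20
```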

Notes to Implementers:

All Encoding implementations must guarantee that no buffer overflow exceptions occur if buffers are sized according to the results of this method's calculations.

The following example determines the number of bytes required to encode a character array, encodes the characters, and displays the resulting bytes.

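A minimal F# sketch along the lines described above: it determines the byte count for a character array, encodes the characters, and prints the resulting bytes.

```fsharp
open System.Text

let chars = [| 'z'; 'i'; 'g'; 'z'; 'a'; 'g' |]
let utf8 = Encoding.UTF8

// Determine the exact number of bytes needed, allocate, then encode:
let byteCount = utf8.GetByteCount(chars, 0, chars.Length)
let bytes = Array.zeroCreate<byte> byteCount
utf8.GetBytes(chars, 0, chars.Length, bytes, 0) |> ignore

// Display the encoded bytes in hexadecimal:
bytes |> Array.iter (printf "%02X ")
printfn ""
```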

Universal Windows Platform
Available since 8
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1