
Translation Guide: Moving Your Programs from Managed Extensions for C++ to C++/CLI

Visual Studio 2005
 

Stanley B. Lippman
Microsoft Corporation

August 2004

Applies to:
   C++/CLI Version 2
   ISO-C++

Summary: C++/CLI represents a dynamic programming paradigm extension to the ISO-C++ standard language. This document provides an enumerated listing of the Version 1 language features and their mapping to the Version 2 language, where such a mapping exists, and points out those constructs for which no mapping exists. (68 printed pages)

Contents

Introduction
1. Language Keywords
2. The Managed Types
3. Member Declarations Within a Class or Interface
4. Value Types and Their Behaviors
5. General Language Changes
Appendix: Motivating the Revised Language Design
Acknowledgement

Introduction

C++/CLI represents a dynamic programming paradigm extension to the ISO-C++ standard language. There are a number of significant weaknesses in the original language design (Version 1), which we feel are corrected in the revised language design (Version 2). This document provides an enumerated listing of the V1 language features and their mapping to the V2 language, where such a mapping exists and points out those constructs for which no mapping exists. For the interested reader, an Appendix provides an extended rationale for the new language design. In addition, a source to source translation tool (mscfront) is under development and may be provided with the release of the C++/CLI language for those who wish for help in automating the migration of their V1 code to the new language design.

The document is broken up into five sections plus an Appendix. Section 1 discusses the broad issue of language keywords, in particular the removal of the double underscore and the introduction of both contextual and spaced keywords. Section 2 looks at changes to the managed types—in particular, the managed reference type and the array. A detailed discussion of the semantics of deterministic finalization is found here. The changes involving class members such as properties, index properties, and operators are the focus of Section 3. Section 4 looks at changes in the syntax of CLI enums and of interior and pinning pointers. It also discusses a number of significant semantic changes, such as the introduction of implicit boxing, changes to CLI enums, and the removal of support for default constructors within value classes. Section 5 is something of a hodge-podge—the infamous misc. A discussion of cast notation, string literal behavior, and the parameter array is found here.

1. Language Keywords

One general transformation of the language between the original and revised language design is the removal of the double-underscore from all keywords. For example, a property is now declared as property, not __property. There were two primary reasons for using the double-underscore prefix in the original language design:

  1. It is the conformant method of providing local extensions to the ISO-C++ Standard. A primary goal of the original language design was to not introduce incompatibilities with the standard language, such as new keywords and tokens. It was this reason, in large part, which motivated the choice of pointer syntax for the declaration of objects of managed reference types.
  2. The use of the double-underscore, apart from its conformant aspect, is also a reasonable guarantee of being non-invasive with the existing code base of our users. This was a second primary goal of the original language design.

Why, then, did we remove the double-underscore (as well as introduce a number of new tokens)? No, it is not that we are no longer concerned with being conformant with the standard!

We remain committed to being conformant. However, we recognize that support for the CLI dynamic object model represents a new and powerful programming paradigm. Both our experience with the original language design and our experience with the design and evolution of the C++ language itself have convinced us that support of this new paradigm requires its own high-level keywords and tokens. We have sought to provide a first-class expression of this new paradigm while integrating it and supporting the standard language. We hope you agree that the revised language design provides a first class programming experience of these two disparate object models.

Similarly, we are very concerned with maximizing the non-invasive nature of these new language keywords. This has been accomplished with the use of contextual and spaced keywords. Before we look at the actual revised language syntax, let's try to make sense of these two special keyword flavors.

A contextual keyword has a special meaning within specific program contexts. Within the general program, for example, sealed is treated as an ordinary identifier. However, when it occurs within the declaration portion of a managed reference class type, it is treated as a keyword within the context of that class declaration. This minimizes the potential invasive impact of introducing a new keyword in the language, something that we feel is very important to users with an existing code base. At the same time, it allows users of the new functionality to have a first-class experience of the additional language feature—something we felt was missing from the original language design. We'll see an example of how sealed is used in Section 1.1.2.

A spaced keyword is a special case of a contextual keyword. It literally pairs an existing keyword with a contextual modifier separated by a space. The pair is treated as a single unit, such as value class (see Section 1.1 for an example), rather than as two separate keywords. In practical terms, this means that a macro redefinition of value, such as the following,

   #ifndef __cplusplus_cli
   #define value

does not blank out value from a class declaration. If one should wish to accomplish this, one would have to redefine the unit pair by writing

   #ifndef __cplusplus_cli
   #define value class class

For practical reasons, this is quite necessary; otherwise, it is possible to have existing #define transformations of the contextual keyword portion of the spaced keyword.

2. The Managed Types

The syntax for the declaration of managed types and the creation and use of objects of these types has been significantly altered to promote their integration within the ISO-C++ type system. These changes are presented in detail in the following subsections. The discussion of delegates is deferred until Section 3 in order to present them together with event members within a class, the general topic of that section. (For a more detailed discussion of the rationale behind the introduction of the tracking reference syntax and the general shift in design, see the Appendix, Motivating the Revised Language Design.)

2.1 Declaration of a Managed Class Type

In the original language definition, a reference class type is prefaced with the __gc keyword. In the revised language, the __gc keyword is replaced by one of two spaced keywords: ref class or ref struct. The choice of struct or class simply indicates the public (for struct) or private (for class) default access level of its members that are declared within an initial unlabeled portion of the body of the type.

Similarly, in the original language definition, a value class type is prefaced with the __value keyword. In the revised language, the __value keyword is replaced by one of two spaced keywords: value class or value struct.

An interface type, in the original language definition, was indicated with the keyword __interface. In the revised language, this is replaced with interface class.

For example, the following class declarations

   // original language syntax 
   public __gc class Block { ... };   // reference class
   public __value class Vector { ... };   // value class
   public __interface IMyFile { ... };   // interface class

under the revised language design are equivalently declared as follows:

   // revised language syntax 
   public ref class Block { ... };
   public value class Vector { ... };
   public interface class IMyFile { ... };

The choice of ref (for reference type) over gc (for garbage collected) is thought to better suggest the fundamental nature of the type.

2.1.1 Specifying the Class as Abstract

Under the original language definition, the keyword __abstract is placed before the type keyword (either before or after the __gc) to indicate that the class is incomplete and that objects of the class cannot be created within the program:

   public __gc __abstract class Shape {};
   public __gc __abstract class Shape2D: public Shape {};

Under the revised language design, the abstract contextual keyword is specified following the class name and before the class body, base class derivation list, or semicolon.

   public ref class Shape abstract {};
   public ref class Shape2D abstract : public Shape{};

Of course, the semantic meaning is unchanged.

2.1.2 Specifying the Class as Sealed

Under the original language definition, the keyword __sealed is placed before the class keyword (either before or after __gc) to indicate that the class cannot be derived from:

   public __gc __sealed class String {};

Under the V2 language design, the sealed contextual keyword is specified following the class name and before the class body, base class derivation list, or semicolon. (You can both derive a class and seal it. For example, the String class is implicitly derived from Object.) The benefit of sealing a class is that it allows the static resolution (that is, at compile-time) of all virtual function calls through the sealed reference class object. This is because the sealed specifier guarantees that the String tracking handle cannot refer to a subsequently derived class that might provide an overriding instance of the virtual method being invoked.

   public ref class String sealed {};

One can also specify a class as both abstract and sealed—this is a special condition that indicates a static class. This is described in the CLI documentation as follows:

A type that is both abstract and sealed should have only static members, and serves as what some languages call a namespace.

For example, here is a declaration of an abstract sealed class using the V1 syntax,

public __gc __sealed __abstract class State
{
public:
   static State();
   static bool inParamList();

private:
   static bool ms_inParam;
};

and here is this declaration translated into the revised language design,

public ref class State abstract sealed
{
public:
   static State();
   static bool inParamList();

private:
   static bool ms_inParam;
};

2.1.3 CLI Inheritance: Specifying the Base Class

Under the CLI object model, only public single inheritance is supported. However, the original language definition retained the ISO-C++ default interpretation of a base class without an access keyword as specifying a private derivation. This meant that each CLI inheritance declaration had to provide the public keyword simply to override the default interpretation. Many users felt this was a bit severe on the compiler's part.

// V1: error: defaults to private derivation
__gc class My : File{};

In the revised language definition, the absence of an access keyword defaults to a public derivation in a CLI inheritance definition. Thus, the public access keyword is no longer required, but optional. While this does not require any modification of V1 code, I list this change here for completeness.

// V2: ok: defaults to public derivation
ref class My : File{};

2.2 Declaration of a CLI Reference Class Object

In the original language definition, a reference class type object is declared using the ISO-C++ pointer syntax, with an optional use of the __gc keyword to the left of the star (*). For example, here is a variety of reference class type object declarations under the V1 syntax:

public __gc class Form1 : public System::Windows::Forms::Form {
private:
   System::ComponentModel::Container __gc *components;
   Button __gc *button1;
   DataGrid __gc *myDataGrid;
   DataSet __gc *myDataSet;

   void PrintValues( Array* myArr )
   {
      System::Collections::IEnumerator* myEnumerator =
         myArr->GetEnumerator();

      Array *localArray = myArr->Copy();
      // ...
   }
};

Under the revised language design, a reference class type object is declared using a new declarative token (^) referred to formally as a tracking handle and more informally as a hat. (The tracking adjective underscores the idea that a reference type sits within the CLI heap, and can therefore transparently move locations during garbage collection heap compaction; a tracking handle is transparently updated during runtime.) Two analogous concepts are (a) the tracking reference (%), and (b) the interior pointer (interior_ptr<>), discussed in Section 4.4.3.

There are two primary reasons to move the declarative syntax away from a reuse of the ISO-C++ pointer syntax:

  1. The use of the pointer syntax did not allow overloaded operators to be directly applied to a reference object; rather, one had to call the operator through its internal name, such as rV1->op_Addition(rV2) rather than the more intuitive rV1+rV2.
  2. There are a number of pointer operations, such as casting and pointer arithmetic, that are disallowed for objects stored on a garbage collected heap. We felt that the notion of a tracking handle better captures the nature of a CLI reference type.

The use of the __gc modifier on a tracking handle is unnecessary and is not supported. The use of the object itself is unchanged; members are still accessed through the pointer member selection operator (->). For example, here is the above V1 text translated into the new language syntax:

public ref class Form1 : public System::Windows::Forms::Form {
private:
   System::ComponentModel::Container^ components;
   Button^ button1;
   DataGrid^ myDataGrid;
   DataSet^ myDataSet;

   void PrintValues( Array^ myArr )
   {
      System::Collections::IEnumerator^ myEnumerator =
         myArr->GetEnumerator();

      Array ^localArray = myArr->Copy();
      // ...
   }
};

2.2.1 Dynamic Allocation of an Object on the CLI Heap

In the original language design, the existence of two new expressions to allocate between the native and managed heap was largely transparent. In nearly all instances, the compiler is able by context to correctly determine whether the native or managed heap is intended. For example,

   Button *button1 = new Button; // OK: managed heap
   int *pi1 = new int;           // OK: native heap
   Int32 *pi2 = new Int32;       // OK: managed heap

In cases in which the contextual heap allocation is not the intended instance, one could direct the compiler with either the __gc or __nogc keyword. In the revised language, the separate nature of the two new expressions is made explicit with the introduction of the gcnew keyword. For example, the above three declarations look as follows in the revised language:

Button^ button1 = gcnew Button;        // OK: managed heap
int * pi1 = new int;                   // OK: native heap
interior_ptr<Int32> pi2 = gcnew Int32; // OK: managed heap

(The interior_ptr is discussed in more detail in Section 4.4.3. Generally, it addresses an object that may, but need not, reside on the managed heap. If the object it addresses does reside on the managed heap, it is transparently updated should the object be relocated.)

Here is the V1 initialization of the Form1 members declared in the previous section:

void InitializeComponent() 
{
      components = new System::ComponentModel::Container();
      button1 = new System::Windows::Forms::Button();
      myDataGrid = new DataGrid();

      button1->Click += 
            new System::EventHandler(this, &Form1::button1_Click);

      // ...
}

Here is the same initialization recast to the revised language syntax. Note that the hat is not required for the reference type when it is the target of a gcnew expression.

   void InitializeComponent()
   {
      components = gcnew System::ComponentModel::Container;
      button1 = gcnew System::Windows::Forms::Button;
      myDataGrid = gcnew DataGrid;

      button1->Click += 
            gcnew System::EventHandler( this, &Form1::button1_Click );

      // ...
   }

2.2.2 A Tracking Reference to No Object

In the new language design, 0 no longer represents a null address but is simply treated as an integer, the same as 1, 10, or 100, and so we needed to introduce a special token to represent a null value for a tracking reference. For example, in the original language design, we initialize a reference type to address no object as follows,

// OK: we set obj to refer to no object
Object * obj = 0;

// Error: no implicit boxing ...
Object * obj2 = 1;

In the revised language, any initialization or assignment of a value type to an Object results in an implicit boxing of that value type. Here, both obj and obj2 are initialized to address boxed Int32 objects holding, respectively, the values 0 and 1. For example,

// causes the implicit boxing of both 0 and 1
Object ^ obj = 0;
Object ^ obj2 = 1;

Therefore, in order to allow the explicit initialization, assignment, and comparison of a tracking handle to null, we introduced a new keyword, nullptr. And so the correct revision of the V1 example looks as follows:

// OK: we set obj to refer to no object
Object ^ obj = nullptr;

// OK: we initialize obj2 to an Int32^
Object ^ obj2 = 1;

This complicates somewhat the porting of existing V1 code into the revised language design. For example, consider the following value class declaration:

__value struct Holder { // the original V1 syntax
   Holder( Continuation* c, Sexpr* v )
   {
      cont = c;
      value = v;
      args = 0;
      env = 0;
   }

private:
   Continuation* cont;
   Sexpr* value;
   Environment* env;
   Sexpr* args __gc [];
};

where both args and env are CLI reference types. The initialization of these two members to 0 in the constructor cannot remain unchanged in the transition to the new syntax. Rather, they must be changed to nullptr:

// the revised V2 syntax
value struct Holder
{
   Holder( Continuation^ c, Sexpr^ v )
   {
      cont = c;
      value = v;
      args = nullptr;
      env = nullptr;
   }

private:
   Continuation^ cont;
   Sexpr^ value;
   Environment^ env;
   array<Sexpr^>^ args;
};

Similarly, tests against those members comparing them to 0 must also be changed to compare the members to nullptr. Here is the original syntax,

// the original V1 syntax

Sexpr* Loop( Sexpr* input )
{
   value = 0;
   Holder holder = Interpret( this, input, env );

   while ( holder.cont != 0 )
   {
      if ( holder.env != 0 )
      {
         holder = Interpret( holder.cont, holder.value, holder.env );
      }
      else if ( holder.args != 0 )
      {
         holder = holder.value->closure()->apply( holder.cont, holder.args );
      }
   }

   return value;
}

and here is the revision, turning each 0 instance into a nullptr. (The translation tool helps in this transformation, automating many if not all of the occurrences, including use of the NULL macro.)

// the revised V2 syntax
Sexpr^ Loop( Sexpr^ input )
{
   value = nullptr;
   Holder holder = Interpret( this, input, env );

   while ( holder.cont != nullptr )
   {
      if ( holder.env != nullptr )
      {
         holder = Interpret( holder.cont, holder.value, holder.env );
      }
      else if ( holder.args != nullptr )
      {
         holder = holder.value->closure()->apply( holder.cont, holder.args );
      }
   }

   return value;
}

The nullptr converts to any pointer or tracking handle type, but it is not promoted to an integral type. For example, in the following set of initializations, nullptr is legal as an initial value only for the first two.

// OK: we set obj and pstr to refer to no object
Object^ obj = nullptr;
char*   pstr = nullptr; // 0 would also work here ...

// Error: no conversion of nullptr to 0 ...
int ival = nullptr;

Similarly, given an overloaded set of methods such as the following:

void f( Object^ ); // (1)
void f( char* );   // (2)
void f( int );     // (3)

An invocation with the nullptr literal, such as the following,

// Error: ambiguous: matches (1) and (2)
f(  nullptr );

is ambiguous because the nullptr matches both a tracking handle and a pointer and there is no preference given to one type over the other. (This requires an explicit cast in order to disambiguate.)

An invocation with 0 exactly matches instance (3):

// OK: matches (3)
f( 0 );

because 0 is of type int. Were f(int) not present, the call would unambiguously match f(char*) through a standard conversion. The matching rules give precedence to an exact match over a standard conversion and, in the absence of an exact match, to a standard conversion over an implicit boxing of a value type. This is why there is no ambiguity.

2.3 Declaration of a CLI Array

The declaration of a CLI array object in the original language design was a slightly non-intuitive extension of the standard array declaration in which a __gc keyword was placed between the name of the array object and its possibly comma-filled dimension, as in the following pair of examples,

// V1 syntax
void PrintValues( Object* myArr __gc[]);
void PrintValues( int myArr __gc[,,]);

This has been simplified in the revised language design, in which the declaration uses a template-like syntax that suggests the STL vector declaration. The first parameter indicates the element type. The second parameter specifies the array dimension (with a default value of 1, so only multiple dimensions require a second argument). The array object itself is a tracking handle and so must be given a hat. If the element type is also a reference type, it, too, must be so marked. For example, the above example, when expressed in the revised language, looks as follows:

// V2 syntax
void PrintValues( array<Object^>^ myArr );
void PrintValues( array<int,3>^ myArr );

Because a CLI array is a reference type manipulated through a tracking handle rather than an object, it is possible to specify a CLI array as the return type of a function. (It is not possible to specify a native array as the return type of a function.) The syntax for doing this in the original language design was somewhat non-intuitive. For example,

// V1 syntax
Int32 f() [];
int GetArray() __gc[];

In V2, the declaration is much simpler for the human reader to parse. For example,

// V2 syntax
array<Int32>^ f();
array<int>^ GetArray();

The shorthand initialization of a local managed array is supported in both versions of the language. For example,

// V1 syntax
int GetArray() __gc[]
{
   int a1 __gc[] = { 1, 2, 3, 4, 5 };
   Object* myObjArray __gc[] = {
      __box(26), __box(27), __box(28), __box(29), __box(30)
   };

   // ...
}
   

is considerably simplified in V2 (note that because boxing is implicit in the revised language design, the __box operator has been eliminated—see Section 3 for a discussion),

// V2 syntax
array<int>^ GetArray()
{
   array<int>^ a1 = { 1, 2, 3, 4, 5 };
   array<Object^>^ myObjArray = { 26, 27, 28, 29, 30 };

// ...
}

Because an array is a CLI reference type, the declaration of each array object is a tracking handle. Therefore, it must be allocated on the CLI heap. (The shorthand notation hides the managed heap allocation.) Here is the explicit form of an array object initialization under the original language design:

// V1 syntax
Object* myArray[] = new Object*[2];
String* myMat[,] = new String*[4,4];

Under the new language design, the new expression, recall, is replaced with gcnew. The dimension sizes are passed as parameters to the gcnew expression, as follows:

// V2 syntax
array<Object^>^ myArray = gcnew array<Object^>(2);
array<String^,2>^ myMat = gcnew array<String^,2>(4,4);

In the revised language, an explicit initialization list can follow the gcnew expression; this was not supported in the V1 language. For example,

// V2 syntax:
// an explicit initialization list following gcnew
// is not supported in V1

array<Object^>^ myArray =
      gcnew array<Object^>(4){ 1, 1, 2, 3 };

2.4 Changes in Destructor Semantics

In the original language design, a class destructor was permitted within a reference class but not within a value class. This has not changed in the revised V2 language design. However, the semantics of the class destructor have changed considerably. The 'what' and 'why' of that change (and how it impacts the translation of existing V1 code) is the topic of this section. This is probably the most complicated section of the text, so we'll try to go slowly. It is also probably the most important programmer-level change between the two versions of the language, and so it is worth the effort to walk through the material in a stepwise fashion.

2.4.1 Non-Deterministic Finalization

Before the memory associated with an object is reclaimed by the garbage collector, an associated Finalize() method, if present, is invoked. You can think of this method as a kind of super-destructor, since it is not tied to the program lifetime of the object. We refer to this as finalization. The timing of just when, or even whether, a Finalize() method is invoked is undefined. This is what is meant when we say that garbage collection exhibits non-deterministic finalization.

Non-deterministic finalization works well with dynamic memory management. When available memory gets sufficiently scarce, the garbage collector kicks in and things pretty much just work. Under a garbage collected environment, destructors to free memory are unnecessary. It's kind of spooky when you first implement an application and don't fret over each potential memory leak. Acclimation comes easy, however.

Non-deterministic finalization does not work well, however, when an object maintains a critical resource such as a database connection or a lock of some sort. In that case, we need to release the resource as soon as possible. In the native world, this is done through a constructor/destructor pairing. As soon as the lifetime of the object ends, either through the completion of the local block within which it is declared or through the unwinding of the stack because of a thrown exception, the destructor kicks in and the resource is automatically released. This works very well, and its absence under the original language design was sorely missed.

The solution provided by the CLI is for a class to implement the Dispose() method of the IDisposable interface. The problem here is that Dispose() requires an explicit invocation by the user. This is error-prone and therefore a step backwards. The C# language provides a modest form of automation through a special using statement. Our original language design, as I already mentioned, provided no special support at all.

2.4.2 In V1, Destructors go to Finalize()

In the original language, the destructor of a reference class is implemented through the following two steps:

  1. The user-supplied destructor is renamed internally to Finalize(). If the class has a base class (remember, under the CLI Object Model, only single inheritance is supported), the compiler injects a call of the base class finalizer following execution of the user-supplied code. For example, given the following trivial hierarchy taken from the V1 language specification,
    __gc class A {
    public:
       ~A() { Console::WriteLine(S"in ~A"); }
    };
       
    __gc class B : public A {
    public:
       ~B() { Console::WriteLine(S"in ~B");  }
    };
    

    both destructors are renamed Finalize(). B's Finalize() has an invocation of A's Finalize() method added, following the invocation of WriteLine(). This is what the garbage collector will invoke by default during finalization. Here is what this internal transformation might look like:

    // internal transformation of destructor under V1
    __gc class A {
    public:
       void Finalize() { Console::WriteLine(S"in ~A"); }
    };

    __gc class B : public A {
    public:
       void Finalize() {
          Console::WriteLine(S"in ~B");
          A::Finalize();
       }
    };
    
  2. In the second step, the compiler synthesizes a virtual destructor. This destructor is what our V1 user programs invoke either directly or through an application of the delete expression. It is never invoked by the garbage collector.

    What is placed within this synthesized destructor? Two statements. One is a call to GC::SuppressFinalize() to make sure there are no further invocations of Finalize(). The second is the actual invocation of Finalize(). This, recall, represents the user-supplied destructor for that class. Here is what this might look like:

    __gc class A {
    public:
       virtual ~A()
       {
          System::GC::SuppressFinalize(this);
          A::Finalize();
       }
    };

    __gc class B : public A {
    public:
       virtual ~B()
       {
          System::GC::SuppressFinalize(this);
          B::Finalize();
       }
    };
    

While this implementation allows the user to explicitly invoke the class Finalize() method now rather than whenever, it does not really tie in with the Dispose() method solution. This is changed in the revised language design.

2.4.3 In V2, Destructors go to Dispose()

In the revised language design, the destructor is renamed internally to the Dispose() method, and the reference class is automatically extended to implement the IDisposable interface. That is, under V2, our pair of classes is transformed as follows:

// internal transformation of destructor under V2
__gc class A : IDisposable {
public:
   void Dispose() {
      System::GC::SuppressFinalize(this);
      Console::WriteLine( "in ~A");
   }
};

__gc class B : public A {
public:
   void Dispose() {
      System::GC::SuppressFinalize(this);
      Console::WriteLine( "in ~B");
      A::Dispose();
   }
};

When a destructor is invoked explicitly under V2, or when delete is applied to a tracking handle, the underlying Dispose() method is invoked automatically. If it is a derived class, a call of the Dispose() method of the base class is inserted at the close of the synthesized method.

But this doesn't get us all the way to deterministic finalization. In order to reach that, we need the additional support of local reference objects. (This has no analogous support within the original language design, and so it is not a translation issue.)

2.4.4 Declaring a Reference Object

The revised language supports the declaration of an object of a reference class on the local stack or as a member of a class as if it were directly accessible (note that this is not available in the Beta1 release of Microsoft Visual Studio 2005). When combined with the association of the destructor with the Dispose() method as described in Section 2.4.3, the result is the automated invocation of finalization semantics on reference types. The fiery dragon of non-deterministic finalization that has bedeviled the CLI community has been tamed, at least for users of C++/CLI. Let's look and see what this means.

First, we define our reference class such that object creation functions as the acquisition of a resource through its class constructor. Second, within the class destructor, we release the resource acquired when the object was created.

public ref class R {
public:
   R()  { /* acquire expensive resource */ }
   ~R() { /* release expensive resource */ }

   // ... everything else ...
};

The object is declared locally using the type name but without the accompanying hat. All uses of the object, such as invoking a member function, are done through the member selection dot (.) rather than arrow (->). At the end of the block, the associated destructor, transformed into Dispose(), is invoked automatically:

void f()
{
    R r; 
    r.methodCall();

    // ...
    // r is automatically destructed here -
    // that is, r.Dispose() is invoked ... 
}

As with the using statement within C#, this is syntactic sugar rather than defiance of the underlying CLI constraint that all reference types must be allocated on the CLI heap. The underlying semantics remain unchanged. The user could equivalently have written the following (and this is likely the internal transformation carried out by the compiler):

// equivalent implementation ...
// except that it should be in a try/finally clause
void f()
{
    R^ r = gcnew R; 
    r->methodCall();
    // ...
    delete r;
}

In effect, under the revised language design, destructors are once again paired with constructors as an automated acquisition/release mechanism tied to a local object's lifetime. This is a significant and quite astonishing accomplishment and the language designers should be roundly applauded for this.

2.4.5 Declaring an Explicit Finalize() ( !R )

In the revised language, as we've seen, the destructor is synthesized into the Dispose() method. This means that in cases where the destructor is not explicitly invoked, the garbage collector, during finalization, will no longer find an associated Finalize() method for the object. In order to support both destruction and finalization, the revised language design has introduced a special syntax for providing a finalizer. For example:

public ref class R {
public:
   !R() { Console::WriteLine( "I am the R::finalizer()!" ); }
};

The ! prefix is meant to suggest the analogous tilde (~) that introduces a class destructor—that is, both post-lifetime methods have a token prefixing the name of the class. If the synthesized Finalize() method occurs within a derived class, an invocation of the base class Finalize() method is inserted at its end. If the destructor is explicitly invoked, the finalizer is suppressed. Here is what the transformation might look like:

// internal transformation under V2
public ref class R {
public:
   void Finalize()
   {
      Console::WriteLine( "I am the R::finalizer()!" );
   }
};

2.4.6 What This Means Going from V1 to V2

This means that the runtime behavior of a V1 program is silently changed when compiled under V2 whenever a reference class contains a non-trivial destructor. The required translation algorithm seems to be the following:

  1. If a destructor is present, rewrite that to be the class finalizer.
  2. If a Dispose() method is present, rewrite that into the class destructor.
  3. If a destructor is present but there is no Dispose() method, retain the destructor while carrying out item (1).

In moving your code from V1 to V2, it is possible to miss performing this transformation. If the application depended in some way on the execution of associated finalization methods, then the behavior of the application will silently differ.

3. Member Declarations Within a Class or Interface

The declaration of properties and operators has been extensively reworked in the revised language design, hiding the underlying implementation details that were exposed in the original design. In addition, event declarations have been modified as well.

Under the category of changes that have no V1 support, static constructors can now be defined out-of-line (they were required to be defined inline within V1), and the notion of a delegating constructor has been introduced.

3.1 Property Declaration

In the original language design, each set or get property accessor is specified as an independent member function. The declaration of each method is prefixed with the __property keyword. The method name begins with either set_ or get_ followed by the actual name of the property (as visible to the user). Thus, a Vector providing an x coordinate get property would name it get_x and the user would invoke it as x. This naming convention and separate specification of methods actually reflects the underlying runtime implementation of the property. For example, here is our Vector with a set of coordinate properties:

public __gc __sealed class Vector 
{
public:
   // ...

   __property double get_x(){ return _x; }
   __property double get_y(){ return _y; }
   __property double get_z(){ return _z; }

   __property void set_x( double newx ){ _x = newx; }
   __property void set_y( double newy ){ _y = newy; }
   __property void set_z( double newz ){ _z = newz; }
};

This was found to be confusing because it spreads out the functionality associated with a property and requires the user to lexically unify the associated sets and gets. Moreover, it is lexically verbose and feels inelegant. In the revised language design, which is more like that of C#, the property keyword is followed by the type of the property and its unadorned name. The set access and get access methods are placed within a block following the property name. Note that unlike C#, the signature of the access method is specified. For example, here is the code example above translated into the new language design.

public ref class Vector sealed
{ 
public:
   // ...

   property double x 
   {
      double get()
      {
         return _x;
      }

      void set( double newx )
      {
         _x = newx;
      }

   } // Note: no semicolon ...
};

If the access methods of the property reflect distinct access levels—such as a public get and a private or protected set—an explicit access label can be specified. By default, the access level of the property reflects that of the enclosing access level. For example, in the above definition of Vector, both the get and set methods are public. To make the set method protected or private, the definition would be revised as follows:

public ref class Vector sealed
{ 
public:
   property double x 
   {
      double get()
      {
         return _x;
      }

   private:
      void set( double newx )
      {
         _x = newx;
      }

   } // note: extent of private culminates here ...

   // note: dot is a public method of Vector ...
   double dot( const Vector^ wv );

// etc.

};

The extent of an access keyword within a property extends until either the closing brace of the property or the specification of an additional access keyword. It does not extend beyond the definition of the property to the enclosing access level within which the property is defined. In the above declaration, for example, Vector::dot() is a public member function.

Writing the set/get properties for the three Vector coordinates is a bit tedious given the canned nature of the implementation: (a) declare a private state member of the appropriate type, (b) return it when the user wishes to get its value, and (c) set it to whatever new value the user wishes to assign. In the revised language design, a shorthand property syntax is available which automates this usage pattern:

public ref class Vector sealed
{ 
public:
   // equivalent shorthand property syntax
   property double x; 
   property double y;
   property double z;
};

The interesting side effect of the shorthand property syntax is that, while the backing state member is generated automatically by the compiler, it is not accessible within the class except through the set/get accessors. Talk about the strict enforcement of data hiding!

3.2 Property Index Declaration

The primary shortcoming of the original language support for indexed properties is its inability to provide class-level subscripting; that is, all indexed properties must be given a name, so there is no way, for example, to provide a managed subscript operator that can be applied directly to a Vector or Matrix class object. A second, less significant, shortcoming is that it is visually difficult to distinguish a property from an indexed property (the number of parameters is the only indication). Finally, indexed properties suffer from the same problem as non-indexed properties: the accessors are not treated as an atomic unit, but are separated into individual methods. For example:

public __gc class Vector;
public __gc class Matrix
{
    float mat[,];

public: 
   __property void  set_Item( int r, int c, float value );
   __property float get_Item( int r, int c );

   __property void    set_Row( int r, Vector* value );
   __property Vector* get_Row( int r );
};

As you can see here, the indexers are distinguished only by the additional parameters specifying a two-dimensional or one-dimensional index. In the revised syntax, indexers are distinguished by the brackets ([,]) following the name of the indexer, indicating the number and type of each index:

public ref class Vector;
public ref class Matrix
{
private:
   array<float, 2>^ mat;

public:

   property float Item [int,int]
   {
      float get( int r, int c );
      void  set( int r, int c, float value );
   }

   property Vector^ Row [int]
   {
      Vector^ get( int r );
      void    set( int r, Vector^ value );
   }

};

To indicate a class level indexer that can be applied directly to objects of the class in the revised syntax, the default keyword is reused to substitute for an explicit name. For example:

public ref class Matrix
{
private:
   array<float, 2>^ mat;

public:

   // ok: class-level indexer now
   //
   //    Matrix mat ...
   //    mat[ 0, 0 ] = 1; 
   //
   // invokes the set accessor of the default indexer ...

   property float default [int,int]
   {
      float get( int r, int c );
      void  set( int r, int c, float value );
   }

   property Vector^ Row [int]
   {
      Vector^ get( int r );
      void    set( int r, Vector^ value );
   }

};

In the revised language syntax, when a default indexed property is specified, the following two names are reserved: get_Item and set_Item. This is because they are the underlying names generated for the default indexed property.

Note that there is no simple index syntax analogous to the simple property syntax.

3.3 Delegates and Events

The only change to the declarations of a delegate and a trivial event is the removal of the double underscore, as in the following sample. As these things go, this has proved to be completely non-controversial. That is, there have been no advocates for the retention of the double underscore, which everyone now seems to agree gave the original language a somewhat grotty feel.

// the original language (V1) 
__delegate void ClickEventHandler( int, double );
__delegate void DblClickEventHandler( String* );

__gc class EventSource {
   __event ClickEventHandler* OnClick;  
   __event DblClickEventHandler* OnDblClick;  

   // ...
};

// the revised language (V2) 
delegate void ClickEventHandler( int, double );
delegate void DblClickEventHandler( String^ );

ref class EventSource
{
   event ClickEventHandler^ OnClick; 
   event DblClickEventHandler^ OnDblClick; 
// ...
};

Events (and delegates) are reference types, which is more apparent in V2 due to the presence of the hat (^). Events support an explicit declaration syntax as well as the trivial form. In the explicit form, the user specifies the add(), raise(), and remove() methods associated with the event. (Only the add() and remove() methods are required; the raise() method is optional.)

Under the V1 design, if the user chooses to provide these methods, she does not provide an explicit event declaration; instead, she settles on a name for an event that is never itself declared. Each individual method is specified in the form add_EventName, raise_EventName, and remove_EventName, as in the following example taken from the V1 language specification:

// under the original V1 language
// explicit implementations of add, remove, raise ...

public __delegate void f(int);
public __gc struct E {
   f* _E;
public:
   E() { _E = 0; }

   // the member function registered with the event below
   void handler( int i ) { /* respond to the event */ }

   __event void add_E1(f* d) { _E += d; }

   static void Go() {
      E* pE = new E;
      pE->E1 += new f(pE, &E::handler);
      pE->E1(17); 
      pE->E1 -= new f(pE, &E::handler);
      pE->E1(17); 
   }

private:
   __event void raise_E1(int i) {
      if (_E)
         _E(i);
   }

protected:
   __event void remove_E1(f* d) {
      _E -= d;
   }
};

The problems with this design are largely cognitive rather than functional. Although the design supports adding these methods, it is not immediately clear from looking at the above sample exactly what is going on. As with the V1 property and indexed property, the methods are shot-gunned across the class declaration. Slightly more unnerving is the absence of the actual E1 event declaration. (Once again, the underlying details of the implementation penetrate up through the user-level syntax of the feature, adding to the apparent lexical complexity.) It simply labors too hard for something that is really not all that complex. The V2 design hugely simplifies the declaration, as the following translation demonstrates. An event specifies the two or three methods within a pair of curly braces following the declaration of the event and its associated delegate type, as follows:

// the revised V2 language design
delegate void f( int );
public ref struct E {
private:
   f^ _E; // yes, delegates are also reference types

public:
   E()
   {  // note the replacement of 0 with nullptr!
      _E = nullptr; 
   }

   // the member function registered with the event below
   void handler( int i ) { /* respond to the event */ }
   // the V2 aggregate syntax of an explicit event declaration
   event f^ E1
   {
   public:
      void add( f^ d )
      {
         _E += d;
      }

   protected:
      void remove( f^ d )
      {
         _E -= d;
      }

   private:
      void raise( int i )
      {
         if ( _E )
              _E( i );
      }

   }
   static void Go()
   {
      E^ pE = gcnew E;
      pE->E1 += gcnew f( pE, &E::handler );
      pE->E1( 17 ); 
      pE->E1 -= gcnew f( pE, &E::handler );
      pE->E1( 17 ); 
   }
};

Although people tend to discount syntax as non-glamorous and trivial in terms of language design, it actually has a significant if largely unconscious impact on the user's experience of the language. A confusing or inelegant syntax increases the hazardousness of the development process in much the same way that a dirty or fogged windshield increases the hazardousness of driving. In the revised design, we've tried to make the syntax as transparent as a highly polished, newly installed windshield.

3.4 Sealing a Virtual Function

The __sealed keyword is used in V1 to modify either a reference type, disallowing subsequent derivation from it—as we saw in Section 2.1.2—or to modify a virtual function, disallowing subsequent overriding of the method in a derived class. For example:

class base { public: virtual void f(); };
class derived : public base {
public:
   __sealed void f();
};

In this example, derived::f() overrides the base::f() instance based on the exact match of the function prototype. The __sealed keyword indicates that a subsequent class inherited from the derived class cannot provide an override of derived::f().

In the new language design, sealed is placed after the signature rather than being allowed to appear anywhere before the actual function prototype, as was allowed in V1. In addition, the use of sealed requires an explicit use of the virtual keyword as well. That is, the correct translation of derived, above, is as follows:

class derived: public base
{
public:
   virtual void f() sealed;
};

The absence of the virtual keyword in this instance results in an error. In V2, the contextual keyword abstract can be used in place of the =0 to indicate a pure virtual function. This was not supported within V1. For example:

class base { public: virtual void f()=0; };

can be rewritten as

class base { public: virtual void f() abstract; };

3.5 Overloaded Operators

Perhaps the most striking aspect of the original language design is its support for operator overloading—or rather, its effective absence. Within the declaration of a reference type, for example, rather than using the native operator+ syntax, one had to explicitly write out the underlying internal name of the operator, in this case op_Addition. More onerous, however, was that each operator had to be invoked explicitly through that name, precluding the two primary benefits of operator overloading: (a) the intuitive syntax, and (b) the ability to intermix new types with existing types. For example:

public __gc __sealed class Vector {
public:
  Vector( double x, double y, double z );
   
  static bool    op_Equality( const Vector*, const Vector* );
  static Vector* op_Division( const Vector*, double );
  static Vector* op_Addition( const Vector*, const Vector* );
  static Vector* op_Subtraction( const Vector*, const Vector* );
};

int main()
{
  Vector *pa = new Vector( 0.231, 2.4745, 0.023 );
  Vector *pb = new Vector( 1.475, 4.8916, -1.23 ); 

  Vector *pc1 = Vector::op_Addition( pa, pb );
  Vector *pc2 = Vector::op_Subtraction( pa, pc1 );
  Vector *pc3 = Vector::op_Division( pc1, pc2->x() );

  if ( Vector::op_Equality( pc1, pc2 )) 
    // ...
}

In the language revision, the usual expectations of a native C++ programmer are restored, both in the declaration and use of the static operators. Here is the Vector class translated into the V2 syntax:

public ref class Vector sealed {
public:
   Vector( double x, double y, double z );

   static bool    operator ==( const Vector^, const Vector^ );
   static Vector^ operator /( const Vector^, double );
   static Vector^ operator +( const Vector^, const Vector^ );
   static Vector^ operator -( const Vector^, const Vector^ );
};

int main()
{
   Vector^ pa = gcnew Vector( 0.231, 2.4745, 0.023 );
   Vector^ pb = gcnew Vector( 1.475, 4.8916, -1.23 );

   Vector^ pc1 = pa + pb;
   Vector^ pc2 = pa - pc1;
   Vector^ pc3 = pc1 / pc2->x();

   if ( pc1 == pc2 )
        // ...
}

3.6 Conversion Operators

Speaking of that grotty feel, having to write op_Implicit to specify a conversion just didn't feel like C++ in the V1 language design. For example, here is a definition of MyDouble taken from the V1 language specification:

__gc struct MyDouble 
{
   static MyDouble* op_Implicit( int i ); 
   static int op_Explicit( MyDouble* val );
   static String* op_Explicit( MyDouble* val ); 
};

This says that, given an integer, the algorithm for converting that integer into a MyDouble is provided by the op_Implicit operator. Moreover, that conversion will be carried out implicitly by the compiler. Similarly, given a MyDouble object, the two op_Explicit operators provide the respective algorithms for converting that object into either an integer or a managed String entity. However, the compiler will not carry out the conversion unless it is explicitly requested by the user.

In C#, this would look as follows:

class MyDouble 
{
   public static implicit operator MyDouble( int i ); 
   public static explicit operator int( MyDouble val );
   public static explicit operator string( MyDouble val ); 
};

And apart from the weirdness of the explicit public access label for each member, the C# code looks a lot more like C++ than the Managed Extensions for C++ does. So we had to fix that. But how should we do that?

On one hand, C++ programmers are left slightly reeling by the absence of a single-argument constructor being construed as a conversion operator. On the other hand, that design proved unruly enough that the ISO-C++ committee introduced a keyword, explicit, just to rein in its unintended consequences—for example, an Array class that takes a single integer argument as a dimension will implicitly convert any integer into an Array object, even when that is the very last thing one wants. Andy Koenig was the first person to bring this to my attention, when he explained a design idiom of giving a constructor a dummy second argument just to prevent such a bad thing from happening. So I don't regret the absence of single-argument constructor implicit conversion in C++/CLI.

On the other hand, it is not always a good idea to provide a conversion pair when designing a class type within C++. The best example of this is the standard string class. The implicit conversion is the single-argument constructor taking a C-style string. However, string does not provide the corresponding implicit conversion operator, from a string object to a C-style string, but instead requires the user to explicitly invoke a named function, in this case c_str().

So, associating an implicit/explicit behavior with a conversion operator (as well as encapsulating the set of conversions into a single form of declaration) appears to be an improvement on the original C++ support for conversion operators, which has been a public cautionary tale ever since 1988, when Robert Murray gave a Usenix C++ talk entitled Building Well-Behaved Type Relationships in C++, and which eventually led to the explicit keyword. The revised V2 language support for conversion operators looks as follows; it is slightly less verbose than the C# form because the operator's default behavior is to apply the conversion algorithm implicitly:

ref struct MyDouble
{
public:
   static operator MyDouble^ ( int i );
   static explicit operator int ( MyDouble^ val );
   static explicit operator String^ ( MyDouble^ val );
};

Another change between V1 and V2 is that a single-argument constructor in V2 is treated as if it were declared explicit. This means that an explicit cast is required to trigger its invocation. Note, however, that if an explicit conversion operator is defined, it, and not the single-argument constructor, is invoked.

3.7 Explicit Override of an Interface Member

It is often desirable to provide two instances of an interface member within a class that implements the interface—one that is used when class objects are manipulated through an interface handle, and one that is used when class objects are used through the class interface. For example:

public __gc class R : public ICloneable 
{
   // to be used through ICloneable ...
   Object* ICloneable::Clone();

   // to be used through an R ...
   R* Clone();
};

In V1, we do this by providing an explicit declaration of the interface method with the method's name qualified with the name of the interface. The class-specific instance is unqualified. This eliminates the need to downcast the return value of Clone(), in this example, when explicitly called through an instance of R.

In V2, a general overriding mechanism has been introduced that replaces the previous syntax. Our example would be rewritten as follows:

public ref class R : ICloneable 
{
   // to be used through ICloneable ...
   virtual Object^ InterfaceClone() = ICloneable::Clone;

   // to be used through an R ...
   virtual R^ Clone() new;
};

This revision requires that the interface member that is being explicitly overridden be given a unique name within the class. Here, I've provided the rather awkward name of InterfaceClone(). The behavior is still the same—an invocation through the ICloneable interface invokes the renamed InterfaceClone(), while a call through an object of type R invokes the second Clone() instance.

3.8 Private Virtual Functions

In V1, the access level of a virtual function does not constrain its ability to be overridden within a derived class. This has changed in V2. In V2, a virtual function cannot override a base class virtual function that it cannot access. For example:

__gc class My
{
      // inaccessible to a derived class ...
      virtual void g();
};
 
__gc class File : public My {
public:
      // in V1, ok: g() overrides My::g()
      // in V2, error: cannot override: My::g() is inaccessible ...
     void g();
};

There is no real mapping of this sort of design onto V2. One simply has to make the base class members accessible—that is, non-private. The inherited methods do not have to bear the same access. In this example, the least invasive change is to make the My member protected. This way the general program's access to the method through My is still prohibited.

ref class My {
protected:
      virtual void g();
};
 
ref class File : My {
public:
     void g();
};

Note that the absence of the explicit virtual keyword in the base class, under V2, generates a warning message.

3.9 Static Const Int Linkage is no Longer Literal

Although static const integral members are still supported, their linkage attribute has changed. Their former linkage attribute is now carried in a literal integral member. For example, consider the following V1 class:

public __gc class Constants {
public:
   static const int LOG_DEBUG = 4;
   // ...
};

This generates the following underlying CIL attributes for the field (note the literal attribute):

.field public static literal int32 
modopt([Microsoft.VisualC]Microsoft.VisualC.IsConstModifier) LOG_DEBUG = int32(0x00000004)
While this still compiles under the V2 syntax,

public ref class Constants {
public:
   static const int LOG_DEBUG = 4;
   // ...
};

it no longer emits the literal attribute, and therefore is not viewed as a constant by the CLI runtime.

.field public static int32 modopt([Microsoft.VisualC]Microsoft.VisualC.IsConstModifier) LOG_DEBUG = int32(0x00000004)

In order to have the same inter-language literal attribute, the declaration should be changed to the newly supported literal data member, as follows:

public ref class Constants {
public:
   literal int LOG_DEBUG = 4;
   // ...
};

4. Value Types and Their Behaviors

In this section, we look at the CLI enum type and the value class type, together with a look at boxing and access to the boxed instance on the CLI heap, as well as a look at interior and pinning pointers. There have been extensive language changes in this area.

4.1 CLI Enum Type

The original language CLI enum declaration is preceded by the __value keyword. The idea here is to distinguish the native enum from the CLI enum, which is derived from System::ValueType, while suggesting an analogous functionality. For example:

__value enum e1 { fail, pass };
public __value enum e2 : unsigned short { 
   not_ok = 1024, 
   maybe, 
   ok = 2048 
};

The revised language solves the problem of distinguishing native and CLI enums by emphasizing the class nature of the latter rather than its value type roots. As such, the __value keyword is discarded, replaced with the spaced keyword pair of enum class. This provides a paired keyword symmetry to the declarations of the reference, value, and interface classes:

enum class ec;
value class vc;
ref class rc;
interface class ic;

The translation of the enumeration pair e1 and e2 into the revised language design looks as follows:

enum class e1 { fail, pass };
public enum class e2 : unsigned short { 
   not_ok = 1024,
   maybe, 
   ok = 2048 
};

Apart from this small syntactic change, the behavior of the managed enum type has been changed in a number of ways:

  1. A forward declaration of a CLI enum is no longer supported in V2. There is no mapping. It is simply flagged as a compile-time error.
    __value enum status; // V1: ok
    enum class status;   // V2: error
    
  2. The overload resolution between the built-in arithmetic types and the Object class hierarchy has reversed between V1 and V2! As a side-effect, managed enums are no longer implicitly converted to arithmetic types in V2 as they were in V1.
  3. In V2, a managed enum maintains its own scope, which is not the case in V1. In V1, the enumerators are visible within the containing scope of the enum; in V2, the enumerators are encapsulated within the scope of the enum.

4.1.1 CLI Enums are a Kind of Object

For example, consider the following code fragment:

__value enum status { fail, pass };

void f( Object* ){ cout << "f(Object)\n"; }
void f( int ){ cout << "f(int)\n"; }

int main()
{
   status rslt;
   // ...
   f( rslt ); // which f is invoked?
}

For the native C++ programmer, the natural answer to the question of which instance of the overloaded f() is invoked is f(int). An enum is a symbolic integral constant, and it participates in the standard integral promotions, which take precedence in this case. And, in the original language design, this was indeed the instance to which the call resolved. This caused a number of surprises—not when we used enums in a native C++ frame of mind, but when we needed them to interact with the existing BCL (Base Class Library) framework, where an Enum is a class indirectly derived from Object. In the revised language design, the instance of f() invoked is f(Object^).

The way V2 has chosen to enforce this is to not support implicit conversions between a CLI enum type and the arithmetic types. This means that any assignment of an object of a managed enum type to an arithmetic type will require an explicit cast. So, for example, given

   void f( int );

as a non-overloaded method, in V1, the call

   f( rslt ); // ok: V1; error: V2

is OK, and the value contained within rslt is implicitly converted into an integer value. In V2, this call fails to compile. To translate it correctly, we must insert an explicit cast:

   f( safe_cast<int>( rslt )); // ok: V2

4.1.2 The Scope of the CLI Enum Type

One of the changes between the C and C++ languages was the addition in C++ of scope within the struct facility. In C, a struct is just a data aggregate without the support of either an interface or an associated scope. This was quite a radical change at the time and was a contentious issue for many new C++ users coming from the C language. The relationship between the native and CLI enum is analogous.

In the original language design, an attempt was made to define weakly injected names for the enumerators of a managed enum in order to simulate the absence of scope within the native enum. This did not prove successful. The problem is that this causes the enumerators to spill into the global namespace, resulting in difficult to manage name-collisions. In the revised language, we have conformed to the other CLI languages in supporting scopes within the managed enum.

This means that any unqualified use of an enumerator of a CLI enum will not be recognized by the revised language. Let's look at a real-world example.

// original language design supporting weak injection
__gc class XDCMake {
public:
  __value enum _recognizerEnum { 
     UNDEFINED,
     OPTION_USAGE, 
     XDC0001_ERR_PATH_DOES_NOT_EXIST = 1,
     XDC0002_ERR_CANNOT_WRITE_TO = 2,
     XDC0003_ERR_INCLUDE_TAGS_NOT_SUPPORTED = 3,
     XDC0004_WRN_XML_LOAD_FAILURE = 4,
     XDC0006_WRN_NONEXISTENT_FILES = 6,
  };

  ListDictionary* optionList;
  ListDictionary* itagList;

  XDCMake() 
  {
     optionList = new ListDictionary;

     // here are the problems ...
     optionList->Add(S"?", __box(OPTION_USAGE));   // (1)
     optionList->Add(S"help", __box(OPTION_USAGE));   // (2)

     itagList = new ListDictionary;
     itagList->Add(S"returns",           
                   __box(XDC0004_WRN_XML_LOAD_FAILURE)); // (3)
   }
};

Each of the three unqualified uses of the enumerator names ((1), (2), and (3)) will need to be qualified in the translation to the revised language syntax in order for the source code to compile. Here is a correct translation of the original source code:

ref class XDCMake
{
public:
  enum class _recognizerEnum
  {
     UNDEFINED, OPTION_USAGE, 
     XDC0001_ERR_PATH_DOES_NOT_EXIST = 1,
     XDC0002_ERR_CANNOT_WRITE_TO = 2,
     XDC0003_ERR_INCLUDE_TAGS_NOT_SUPPORTED = 3,
     XDC0004_WRN_XML_LOAD_FAILURE = 4,
     XDC0006_WRN_NONEXISTENT_FILES = 6
  };

  ListDictionary^ optionList;
  ListDictionary^ itagList;
  XDCMake()
  {
    optionList = gcnew ListDictionary;
    optionList->Add("?",_recognizerEnum::OPTION_USAGE); // (1)
    optionList->Add("help",_recognizerEnum::OPTION_USAGE); //(2)
    itagList = gcnew ListDictionary;
    itagList->Add( "returns", 
             _recognizerEnum::XDC0004_WRN_XML_LOAD_FAILURE); //(3)
  }
};

This changes the design strategy for native versus CLI enums. Because a CLI enum maintains an associated scope in V2, it is no longer necessary, or effective, to encapsulate the declaration of the enum within a class. This idiom evolved around the time of cfront 2.0 at Bell Laboratories, also as a way to solve the global name-pollution problem.

In the original beta release of the new iostream library by Jerry Schwarz at Bell Laboratories, Jerry did not encapsulate all the associated enums defined for the library, and the common enumerators such as read, write, append, and so on, made it nearly impossible for users to compile their existing code. One solution would have been to mangle the names, such as io_read, io_write, and so on. A second solution would have been to modify the language by adding scope to an enum, but this was not practicable at the time. (The middle solution was to encapsulate the enum within the class, or class hierarchy, where both the tag name and enumerators of the enum populate the enclosing class scope.) That is, the motivation for placing enums within classes, at least originally, was not philosophical, but a practical response to the global name-space pollution problem.

With the V2 CLI enum, there is no longer any compelling benefit to encapsulating an enum within a class. In fact, if you look at the System namespaces, you will see that enums, classes, and interfaces all inhabit the same declaration space.

4.2 Implicit Boxing

Ok, so we reversed ourselves. In politics, that would likely lose us an election. In language design, it means that we imposed a philosophical position in lieu of practical experience with the feature and, in practice, it was a mistake. As an analogy, in the original multiple inheritance language design, Stroustrup decided that a virtual base class sub-object could not be initialized within a derived class constructor, and therefore the language required that any class serving as a virtual base class must define a default constructor. It is that default constructor that would be invoked by any subsequent virtual derivation.

The problem with a virtual base class hierarchy is that responsibility for the initialization of the shared virtual sub-object shifts with each subsequent derivation. For example, if I define a base class for which initialization requires the allocation of a buffer, the user-specified size of that buffer might be passed as an argument to the constructor. If I then provide two subsequent virtual derivations, call them inputb and outputb, each provides a particular value to the base class constructor. Now, when I derive an in_out class from both inputb and outputb, neither of the values they supply to the shared virtual base class sub-object can sensibly be allowed to apply.

Therefore, in the original language design, Stroustrup disallowed the explicit initialization of a virtual base class within the member initialization list of the derived class constructor. While this solved the problem, in practice the inability to direct the initialization of the virtual base class proved impracticable. Keith Gorlen of the National Institutes of Health, who had implemented a freeware version of the SmallTalk collection library called nihcl, was a principal voice in convincing Bjarne that he had to come up with a more flexible language design.

A principle of Object-Oriented hierarchical design holds that a derived class should only concern itself with the non-private implementation of its immediate base classes. In order to support a flexible initialization design for virtual inheritance, Bjarne had to violate this principle. The most derived class in a hierarchy assumes responsibility for all virtual sub-object initialization regardless of how deep into the hierarchy it occurs. For example, inputb and outputb are both responsible for explicitly initializing their immediate virtual base class. When in_out derives from both inputb and outputb, in_out becomes responsible for the initialization of the once removed virtual base class, and the initialization made explicit within inputb and outputb is suppressed.

This provides the flexibility required by language developers, but at the cost of a complicated semantics. This burden of complication is stripped away if we restrict a virtual base class to be without state and simply allow it to specify an interface. This is a recommended design idiom within C++. Within C++/CLI, it is raised to policy with the Interface type.

Here is a real code sample doing something very simple—and in this case, the explicit boxing is mostly a lexical tax without representation.

   // original language requires explicit __box operation
   int my1DIntArray __gc[] = { 1, 2, 3, 4, 5 };
   Object* myObjArray __gc[] = { 
      __box(26), __box(27), __box(28), __box(29), __box(30)
   };

   Console::WriteLine( "{0}\t{1}\t{2}", __box(0),
      __box(my1DIntArray->GetLowerBound(0)),
      __box(my1DIntArray->GetUpperBound(0)) );

As you can see, there is a whole lot of boxing going on. Under V2, value type boxing is implicit:

   // revised language makes boxing implicit
   array<int>^ my1DIntArray = {1,2,3,4,5};
   array<Object^>^ myObjArray = {26,27,28,29,30};

   Console::WriteLine( "{0}\t{1}\t{2}", 0, 
      my1DIntArray->GetLowerBound( 0 ), 
      my1DIntArray->GetUpperBound( 0 ) );

4.3 A Tracking Handle to a Boxed Value

Boxing is a peculiarity of the CLI unified type system. Value types directly contain their state, while reference types are an implicit duple: the named entity is a handle to an unnamed object allocated on the managed heap. Any initialization or assignment of a value type to an Object, for example, requires that the value type be placed within the CLI heap—this is where the image of boxing arises—first by allocating the associated memory, then by copying the value type's state, and then returning the address of this anonymous Value/Reference hybrid. Thus, when one writes in C#

object o = 1024; // C# implicit boxing

there is a great deal more going on than is made apparent by the simplicity of the code. The design of C# hides the complexity not only of what operations are taking place under the hood, but also of the abstraction of boxing itself. V1, on the other hand, concerned that this would lead to a false sense of efficiency, puts it in the user's face by requiring an explicit instruction,

Object *o = __box( 1024 ); // V1 explicit boxing

as if in this case one had any choice. In my opinion, forcing the user to make an explicit request in these cases is at best the equivalent of one's mother repeatedly demanding as one is trying to leave the house, now you will be careful, won't you? On the one hand, at some point, one has to internalize the admonition; that's called maturation. On the other hand, at some point, one has to trust in the maturation of one's children. Substitute the language designer for one's mother, and the programmer for one's child, and that is why boxing is implicit under V2:

Object ^o = 1024; // V2 implicit boxing

The __box keyword serves a second more vital service within the original language design, one that is absent by design from languages such as C# and Microsoft Visual Basic .NET: it provides both a vocabulary and tracking handle for directly manipulating a boxed instance on the managed heap. For example, consider the following small program:

int main()
{
   double result = 3.14159;
   __box double* br = __box( result );

   result = 2.7; 
   *br = 2.17;   
   Object* o = br;

   Console::WriteLine( S"result :: {0}", result.ToString() );
   Console::WriteLine( S"result :: {0}", __box(result) );
   Console::WriteLine( S"result :: {0}", br );
}

The underlying code generated for the three invocations of WriteLine show the various costs of accessing the value of a boxed value type (thanks to Yves Dolce for pointing out these differences), where the lines in bold show the overhead associated with each invocation.

// Console::WriteLine( S"result :: {0}", result.ToString() ) ;
ldstr      "result :: {0}"
ldloca.s   result
call       instance string  [mscorlib]System.Double::ToString()
call       void [mscorlib]System.Console::WriteLine(string, object)
  
// Console::WriteLine( S"result :: {0}", __box(result) ) ;
ldstr    "result :: {0}"
ldloc.0
box     [mscorlib]System.Double
call    void [mscorlib]System.Console::WriteLine(string, object)


// Console::WriteLine( S"result :: {0}", br );
ldstr    "result :: {0}"
ldloc.0
call     void [mscorlib]System.Console::WriteLine(string, object)

Passing the boxed value type directly to Console::WriteLine eliminates both the boxing and the need to invoke ToString(). (Of course, there is the earlier boxing of result to initialize br, so we don't really gain anything unless we put br to work.)

In the revised language syntax, the support for boxed value types is considerably more elegant and integrated within the type system while retaining its power. For example, here is the translation of the earlier small program:

int main()
{
   double result = 3.14159;
   double^ br = result;
   result = 2.7;
   *br = 2.17;
   Object^ o = br;
   Console::WriteLine( S"result :: {0}", result.ToString() );
   Console::WriteLine( S"result :: {0}", result );
   Console::WriteLine( S"result :: {0}", br );
}

4.4 Value Type Semantics

Here is the canonical trivial value type used in the V1 language spec:

__value struct V { int i; };
__gc struct R { V vr; };

In V1, we can have four syntactic variants of a value type (where forms 2 and 3 are the same semantically):

V v = { 0 };        // Form (1)
V *pv = 0;          // Form (2)
V __gc *pvgc = 0;   // Form (3); Form (2) is an implicit form of (3)
__box V* pvbx = 0;  // Form (4); must be local

4.4.1 Invoking Inherited Virtual Methods

Form (1) is the canonical value object, and it is reasonably well understood, except when someone attempts to invoke an inherited virtual method such as ToString(). For example,

v.ToString(); // error!

In order to invoke this method, because it is not overridden in V, the compiler must have access to the associated virtual table of the base class. Because value types are in-state storage without the associated pointer to its virtual table (vptr), this requires that v be boxed. In the original language design, implicit boxing is not supported but must be explicitly specified by the programmer, as in

            __box( v )->ToString(); // V1: note the arrow

The primary motive behind this design is pedagogical: it wishes to make the underlying mechanism visible to the programmer so that she would understand the 'cost' of not providing an instance within her value type. Were V to contain an instance of ToString, the boxing would not be necessary.

The lexical complexity of explicitly boxing the object, but not the underlying cost of the boxing itself, is removed in the revised language design:

   v.ToString(); // V2

but at the cost of possibly misleading the class designer as to the cost of not having provided an explicit instance of the ToString method within V. The reason implicit boxing is preferred is that while there is usually just one class designer, there are an unlimited number of users, none of whom has the freedom to modify V to eliminate the possibly onerous explicit box.

The criteria to determine whether or not to provide an overriding instance of ToString within a value class should be the frequency and location of its uses. If it is called very rarely, there is of course little benefit in its definition. Similarly, if it is called in non-performant areas of the application, adding it will also not measurably add to the general performance of the application. Alternatively, one can keep a tracking handle to the boxed value, and calls through that handle would not require boxing.

4.4.2 There is No Longer a Value Class Default Constructor

Another difference with a value type between the original and revised language design is the removal of support for a default constructor. This is because there are occasions during execution in which the CLI can create an instance of the value type without invoking the associated default constructor. That is, the attempt under V1 to support a default constructor within a value type could not in practice be guaranteed. Given that absence of guarantee, it was felt to be better to drop the support altogether rather than have it be non-deterministic in its application.

This is not as bad as it might initially seem. This is because each object of a value type is zeroed out automatically (each member is initialized to its default value). That is, the members of a local instance are never undefined. In this sense, the loss of the ability to define a trivial default constructor is really not a loss at all, and in fact the zeroing is more efficient when performed by the CLI.

The problem is when a user of the original V1 language defines a non-trivial default constructor. This has no mapping to the revised V2 language design. The code within the constructor will need to be migrated into a named initialization method that would then need to be explicitly invoked by the user.

The declaration of a value type object within the revised V2 language design is otherwise unchanged. The downside of this is that value types are unsatisfactory for wrapping native types, for the following reasons:

  1. There is no support for a destructor within a value type. That is, there is no way to automate a set of actions triggered by the end of an object's lifetime.
  2. A native class can only be contained within a managed type as a pointer, which is then allocated on the native heap.

We would like to wrap a small native class in a value type rather than a reference type to avoid a double heap allocation: the native heap to hold the native type, and the CLI heap to hold the managed wrapper. Wrapping a native class within a value type allows you to avoid the managed heap, but provides no way to automate the reclamation of the native heap memory. Reference types are the only practicable managed type within which to wrap non-trivial native classes.

4.4.3 Interior Pointers

Forms (2) and (3) can address nearly anything in this world or the next (that is, anything managed or native). So, for example, all the following are permitted in the original language design:

// from Section 4.4
__value struct V { int i; };
__gc struct R { V vr; };

V v = { 0 };        // Form (1)
V *pv = 0;          // Form (2)
V __gc *pvgc = 0;   // Form (3); Form (2) is an implicit form of (3)
__box V* pvbx = 0;  // Form (4); must be local

R* r;

pv = &v;         // address a value type on the stack
pv = __nogc new V;  // address a value type on native heap
pv = pvgc;          // we are not sure what this addresses
pv = pvbx;        // address a boxed value type on managed heap
pv = &r->vr;        // an interior pointer to value type within a
                    //    reference type on the managed heap

So, a V* can address a location within a local block (and therefore can be dangling), at global scope, within the native heap (for example, if the object it addresses has already been deleted), within the CLI heap (and therefore will be tracked if it should be relocated during garbage collection), and within the interior of a reference object on the CLI heap (an interior pointer, as this is called, is also transparently tracked).

In the original language design, there is no way to separate out the native aspects of a V*; that is, it is treated at its most inclusive, which handles the possibility of it addressing an object or subobject on the managed heap.

In the revised language design, a value type pointer is factored into two types: V*, which is limited to non-CLI heap locations, and the interior pointer, interior_ptr<V>, which allows for but does not require an address within the managed heap.

// may not address within managed heap 
V *pv = 0; 

// may or may not address within managed heap
interior_ptr<V> pvgc = nullptr; 
 

Forms (2) and (3) of the original language map into interior_ptr<V>. Form (4) is a tracking handle. It addresses the whole object that has been boxed within the managed heap. It is translated in the revised language into a V^:

   V^ pvbx = nullptr; // __box V* pvbx = 0;  

The following declarations in the original language design all map to interior pointers in the revised language design. (They are value types within the System namespace.)

Int32 *pi;   => interior_ptr<Int32> pi;
Boolean *pb; => interior_ptr<Boolean> pb;
E *pe;       => interior_ptr<E> pe; // Enumeration
            

The built-in types are not considered managed types, although they do serve as aliases to the types within the System namespace. Thus the following mappings hold true between the original and revised languages:

int * pi;       => int* pi;
int __gc * pi;  => interior_ptr<int> pi;

When translating a V* in your existing V1 program, the most conservative strategy is to always turn it into an interior_ptr<V>. This is how it was treated under the original language. In the revised language, the programmer has the option of restricting a value type pointer to non-managed heap addresses by specifying V* rather than an interior pointer. If, on translating your program, you can do a transitive closure of all its uses and be sure that no assigned address is within the managed heap, then leaving it as V* is fine.

4.4.4 Pinning Pointers

The garbage collector may optionally move objects that reside on the CLI heap to different locations within the heap, usually during a compaction phase. This movement is not a problem for tracking handles, tracking references, and interior pointers, which are updated transparently. This movement is a problem, however, if the user has passed the address of an object on the CLI heap outside of the runtime environment. In this case, the volatile movement of the object is likely to cause a runtime failure. To exempt such objects from being moved, we must locally pin them to their location for the extent of their outside use.

In the original language design, a pinning pointer is declared by qualifying a pointer declaration with the __pin keyword. Here is an example that has been slightly modified from the original language specification:

__gc struct H { int j; };

int main() 
{
   H * h = new H;
   int __pin * k = & h -> j;
  
   // ...
}

In the new language design, a pinning pointer is declared with syntax analogous to that of an interior pointer.

ref struct H
{
public:
   int j;
};

int main()
{
   H^ h = gcnew H;
   pin_ptr<int> k = &h->j;

   // ...
}

A pinning pointer under the revised language is a special case of an interior pointer. The V1 constraints on a pinning pointer remain. For example, it cannot be used as a parameter or return type of a method; rather, it can only be declared on a local object. A number of additional constraints, however, have been added in the revised language design:

  1. The default value of a pinning pointer is nullptr, not 0. A pin_ptr<> cannot be initialized or assigned 0. All assignments of 0 in existing code will need to be changed to nullptr.
  2. A pinning pointer under V1 was permitted to address a whole object, as in the following example taken from the original language specification:

__gc struct H { int j; };

void f( G * g ) 
{
   H __pin * pH = new H;   
   g->incr( &pH->j );   
}

In the revised language, pinning the whole object returned by the new expression is not supported. Rather, the address of the interior member needs to be pinned. For example:

void f( G^ g )
{
   H ^ph = gcnew H;
   pin_ptr<int> pj = &ph->j;
   g->incr(  pj );
}

5. General Language Changes

The changes described in this section are a sort of language miscellany. The section includes a change in the handling of string literals, a change in the overload resolution between an ellipsis and the Param attribute, the change of typeof to typeid, and the introduction of a new cast notation, safe_cast.

5.1 String Literal

In the original language design, a managed string literal was indicated by prefacing the string literal with an S. For example:

   String *ps1 = "hello";
   String *ps2 = S"goodbye";

The difference in overhead between the two initializations turns out to be non-trivial, as the following CIL representation demonstrates as seen through ildasm:

// String *ps1 = "hello";
ldsflda    valuetype $ArrayType$0xd61117dd
     modopt([Microsoft.VisualC]Microsoft.VisualC.IsConstModifier) 
     '?A0xbdde7aca.unnamed-global-0'

newobj instance void [mscorlib]System.String::.ctor(int8*)
stloc.0

// String *ps2 = S"goodbye";
ldstr      "goodbye"
stloc.0

That's a pretty remarkable savings for just remembering [or learning] to prefix a literal string with an S. In the revised V2 language, the handling of string literals is made transparent, determined by the context of use. The S no longer needs to be specified.

What about cases in which we need to explicitly direct the compiler to one interpretation or another? In these cases, we apply an explicit cast. For example:

   f( safe_cast<String^>("ABC") );

Moreover, the string literal now matches a String with a trivial conversion rather than a standard conversion. While this may not sound like much, it changes the resolution of overloaded function sets that include a String and a const char* as competing formal parameters. A call that once resolved to the const char* instance is now flagged as ambiguous. For example:

void f(const char*);
void f(String^);


// v1: f( const char* );
// v2: error: ambiguous ...
f("ABC"); 

What is going on here? Why is there a difference? Since there is more than one instance named f that exists within the program, this requires the function overload resolution algorithm to be applied to the call. The formal resolution of an overload function involves three steps.

  1. The collection of the candidate functions. The candidate functions are those functions within scope that lexically match the name of the function being invoked. In our example, since f("ABC") is invoked at global scope, every visible function named f is a candidate; there are two, the two declarations of f above. A call fails during this phase if the candidate function set is empty.
  2. The set of viable functions from among the candidate functions. A viable function is one that can be invoked with the arguments specified in the call, given the number of arguments and their types. In our example, both candidate functions are also viable functions. A call fails during this phase if the viable function set is empty.
  3. Select the function that represents the best match of the call. This is done by ranking the conversions that are applied to transform the arguments to the types of the viable function parameters. This is relatively straightforward with a single-parameter function; it becomes somewhat more complex when there are multiple parameters. A call fails during this phase if there is no best match, that is, if the conversions necessary to transform the actual arguments into the formal parameter types of two viable functions are equally good. In that case, the call is flagged as ambiguous.

In the original language design, the resolution of this call invoked the const char* instance as the best match. In V2, the conversions necessary to match "ABC" to const char* and String^ are now equivalent—that is, equally good—and so the call is flagged as bad—that is, as ambiguous.

This leads us to two questions:

  1. What is the type of the actual argument, "ABC"?
  2. What is the algorithm for determining when one type conversion is better than another?

The type of the string literal "ABC" is const char[4]—remember, there is an implicit null terminating character at the end of every string literal.

The algorithm for determining when one type conversion is better than another involves placing the possible type conversions in a hierarchy. Here is my understanding of that hierarchy—all these conversions, of course, are implicit. Using an explicit cast notation overrides the hierarchy, similar to the way parentheses override the usual operator precedence of an expression.

  1. An exact match is best. Surprisingly, for an argument to be an exact match it does not need to exactly match the parameter type; it just needs to be close enough. This is the key to understanding what is going on in this example, and how the language has changed.
  2. A promotion is better than a standard conversion. For example, promoting a short int to an int is better than converting an int into a double.
  3. A standard conversion is better than a boxing conversion. For example, converting an int into a double is better than boxing an int into an Object.
  4. A boxing conversion is better than an implicit user-defined conversion. For example, boxing an int into an Object is better than applying a conversion operator of a SmallInt value class.
  5. An implicit user-defined conversion is better than no conversion at all. An implicit user-defined conversion is the last exit before Error (with the caveat that the formal signature might contain a param array or ellipsis at that position).

So, what does it mean to say that an exact match isn't necessarily exactly a match? For example, const char[4] does not exactly match either const char* or String^, and yet the ambiguity of our example is between two conflicting exact matches!

An exact match, as it happens, includes a number of trivial conversions. There are four trivial conversions under ISO-C++ that can be applied and still qualify as an exact match. Three are referred to as lvalue transformations. A fourth type is called a qualification conversion. The three lvalue transformations are treated as a better exact match than one requiring a qualification conversion.

One form of the lvalue transformation is the native-array-to-pointer conversion. This is what is involved in matching a const char[4] to a const char*. Therefore, the match of f("ABC") to f(const char*) is an exact match. In the earlier incarnations of our C++/CLI language, this was in fact the best match.

For the compiler to flag the call as ambiguous, therefore, requires that the conversion of a const char[4] to a String^ also be an exact match through a trivial conversion. This is the change that has been introduced in V2. And this is why the call is now flagged as ambiguous.

5.2 Param Array and Ellipsis

In both the original language design and in the V2 language to be released in Visual Studio 2005, there is no explicit support for the param array that C# and Visual Basic .NET support. Instead, one flags an ordinary array with an attribute, as follows:

void Trace1( String* format, [ParamArray]Object* args[] );
void Trace2( String* format, Object* args[] );

While these both look the same, the ParamArray attribute tags this for C# or other CLI languages as an array taking a variable number of elements with each invocation. The change in behavior in programs between the original and revised language is in the resolution of an overloaded function set in which one instance declares an ellipsis and a second declares a ParamArray attribute, as in the following example provided by Artur Laksberg.

int My(...); // 1
int My( [ParamArray] Int32 args[] ); // 2

In the original language design the ellipsis was given precedence over the attribute, which is reasonable since the attribute is not a formal aspect of the language. However, in V2, the param array is now supported directly within the language, and it is given precedence over the ellipsis because it is more strongly typed. Thus, in the original language, the call

My( 1, 2 );    

resolves to My(...), while in the revised language, it resolves to the ParamArray instance. On the off chance that your program behavior depends on the invocation of the ellipsis instance over that of the ParamArray, you will need to modify either the signature or the call.

5.3 typeof Goes to T::typeid

In the original language design, the __typeof() operator returns the associated Type* object when passed the name of a managed type. For example:

// Creates and initializes a new Array instance.
Array* myIntArray = 
       Array::CreateInstance( __typeof(Int32), 5 );

In the revised language design, __typeof has been replaced by an additional form of typeid that returns a Type^ when a managed type is specified.

// Creates and initializes a new Array instance.
Array^ myIntArray = 
 Array::CreateInstance( Int32::typeid, 5 );

5.4 Cast Notation and Introduction of safe_cast<>

Note that this is a somewhat wordy entry and those becoming impatient are urged to jump to its end for an illustration of the actual changes.

Modifying an existing structure is a much different and, in some sense, a more difficult experience than crafting the initial structure; there are fewer degrees of freedom, and the solution tends towards a compromise between an ideal restructuring and what is practicable given the existing structural dependencies. If you have ever typeset a book, for example, you know that making corrections to an existing page is constrained by the need to limit the reformatting to just that page; you cannot allow the text to spill over into subsequent pages, and so you cannot add or cut too much (or too little), and it too often feels as if the meaning of the correction is compromised in favor of its fit on the page.

Language extension is another example. Back in the early 1990s, as Object-Oriented programming became an important paradigm, the need for a type-safe downcast facility in C++ became pressing. Downcasting is the user-explicit conversion of a base-class pointer or reference into a pointer or reference of a derived class. Downcasting requires an explicit cast because, if the base class pointer does not actually address a derived class object, the program is likely to, well, do really bad things. The problem is that the actual type of the base class pointer is an aspect of the runtime; the compiler therefore cannot check it. Or, to rephrase that, a downcast facility, just like a virtual function call, requires some form of dynamic resolution. This raises two questions:

  1. Why should a downcast be necessary in the Object-Oriented paradigm? Isn't the virtual function mechanism sufficient in all cases? That is, why can't one claim that any need for a downcast (or a cast of any sort) is a design failure on the part of the programmer?
  2. Why should support of a downcast be a problem in C++? After all, it is not a problem in object-oriented languages such as Smalltalk (or, subsequently, Java and C#). What is it about C++ that makes supporting a downcast facility difficult?

A virtual function represents a type-dependent algorithm common to a family of types (I am not considering interfaces, which are not supported in ISO-C++ but are available in C++/CLI and which represent an interesting design alternative). The design of that family is typically represented by a class hierarchy in which there is an abstract base class declaring the common interface (the virtual functions) and a set of concrete derived classes which represent the actual family types in the application domain.

A Light hierarchy in a Computer Generated Imagery (CGI) application domain, for example, will have common attributes such as color, intensity, position, on, off, and so on. One can pepper one's world space with a fistful of lights, and control them through the common interface without worrying whether a particular light is a spotlight, a directional light, a non-directional light (think of the sun), or perhaps a barn-door light. In this case, downcasting to a particular light-type in order to exercise its virtual interface is unnecessary and, all things being equal, ill-advised. In a production environment, however, things are not always equal; in many cases, what matters is speed. One might choose to downcast and explicitly invoke each method if by doing so an inline execution of the calls can be exercised in place of going through the virtual mechanism.

So, one reason to downcast in C++ is to suppress the virtual mechanism in return for a significant gain in runtime performance. (Note that the automation of this manual optimization is an active area of research. However, it is more difficult to solve than replacing the explicit use of the register or inline keyword.)

A second reason to downcast falls out of the dual nature of polymorphism. One way to think of polymorphism is as a pair of forms, one passive and one dynamic.

A virtual invocation (and a downcast facility) represents a dynamic use of polymorphism: one is performing an action based on the actual type addressed by the base class pointer at that particular instant in the execution of the program.

Assigning a derived class object to its base class pointer, however, is a passive form of polymorphism; it is using the polymorphism as a transport mechanism. This is the main use of Object, for example, in the pre-generic CLI. When used passively, the base class pointer chosen for transport and storage typically offers an interface that is too abstract. Object, for example, provides roughly five methods through its interface; any more specific behavior requires an explicit downcast. For example, if we wish to adjust the angle of our spotlight or its rate of fall off, we would need to downcast explicitly. A virtual interface within a family of sub-types cannot practicably be a superset of all the possible methods of its many children, and so a downcast facility will always be needed within an object-oriented language.

If a safe downcast facility is needed in an object-oriented language, then why did it take C++ so long to add one? The problem is in how to make the information as to the run-time type of the pointer available. In the case of a virtual function, as most people know by now, the run-time information is set up in two parts by the compiler: (a) the class object contains an additional virtual table pointer member (either at the beginning or end of the class object; that placement has an interesting history in itself) that addresses the appropriate virtual table—so, for example, a spotlight object addresses a spotlight virtual table, a directional light a directional light virtual table, and so on; and (b) each virtual function has an associated fixed slot in the table, and the actual instance to invoke is represented by the address stored within the table. So, for example, the virtual Light destructor might be associated with slot 0, Color with slot 1, and so on. This is an efficient if inflexible strategy because it is set up at compile time and represents minimal overhead.

The problem, then, was how to make the type information available to the pointer without changing the size of C++ pointers, either by perhaps adding a second address or directly adding some sort of type encoding. This would not be acceptable to those programmers (and programs) who chose not to use the object-oriented paradigm—which was still the predominant user community. Another possibility was to introduce a special pointer for polymorphic class types, but this would be awfully confusing, and make it very difficult to inter-mix the two—particularly with issues of pointer arithmetic. Nor would it be acceptable to maintain a run-time table associating each pointer with its currently associated type, and dynamically updating it.

The problem then is a pair of user-communities which have different but legitimate programming aspirations. The solution needs to be a compromise between the two communities, allowing each not only their aspiration but the ability to interoperate. This means that the solutions offered by either side are likely to be infeasible, and the solution that is finally implemented is likely to be less than perfect. The actual resolution revolves around the definition of a polymorphic class: a polymorphic class is one that contains a virtual function. A polymorphic class supports a dynamic type-safe downcast. This solves the 'maintain the pointer as address' problem because all polymorphic classes contain that additional pointer member to their associated virtual table. The associated type information, therefore, can be stored in an expanded virtual table structure. The cost of the type-safe downcast is (almost) localized to users of the facility.

The next issue concerning the type-safe downcast was its syntax. Because it is a cast, the original proposal to the ISO-C++ committee used the unadorned cast syntax, so that one wrote, for example,

   spot = ( SpotLight* ) plight;

but this was rejected by the committee because it did not allow the user to control the cost of the cast. If the dynamic type-safe downcast had the same syntax as the existing (unsafe but static) cast notation, it became a silent substitution for it, and the user had no ability to suppress the runtime overhead in cases where it was unnecessary and perhaps too costly.

In general, in C++, there is always a mechanism to suppress compiler-supported functionality. For example, we can turn off the virtual mechanism by either using the class scope operator (Box::rotate(angle)) or by invoking the virtual method through a class object (rather than a pointer or reference of that class)—this latter suppression is not required by the language but is a quality of implementation issue—it's similar to the suppression of the construction of a temporary in a declaration of the form:

   // compilers are free to optimize away the temporary ...
   X x = X::X( 10 );

So the proposal was taken back for further consideration, and a number of alternative notations were considered, and the one brought back to the committee was of the form (?type), which indicated its undetermined—that is, dynamic—nature. This gave the user the ability to toggle between the two forms—static or dynamic—but no one was too pleased with it. So it was back to the drawing board. The third and successful notation is the now standard dynamic_cast<type>, which was generalized to a set of four new-style cast notations.

In ISO-C++, dynamic_cast returns 0 when applied to inappropriate pointer types, and throws a std::bad_cast exception when applied to a reference type. In the original language design, applying dynamic_cast to a managed reference type (because of its pointer representation) always returned 0. __try_cast<type> was introduced as an analog to the exception throwing variant of the dynamic_cast, except that it throws System::InvalidCastException if the cast fails.

public __gc class ItemVerb;
public __gc class ItemVerbCollection
{
public:
    ItemVerb* EnsureVerbArray() []
    {
        return __try_cast<ItemVerb*[]>(
            verbList->ToArray( __typeof( ItemVerb* )));
    }
};

In the revised language, __try_cast has been recast as safe_cast. Here is the same code fragment in the revised language:

using namespace stdcli::language;
public ref class ItemVerb;
public ref class ItemVerbCollection
{
public:
    array<ItemVerb^>^ EnsureVerbArray()
    {
        return safe_cast<array<ItemVerb^>^>(
            verbList->ToArray( ItemVerb::typeid ));
    }
};

In the managed world, it is important to allow for verifiable code by taming the ability of programmers to cast between types in ways that leave the code unverifiable. This is a critical aspect of the dynamic programming paradigm represented by C++/CLI. For this reason, instances of old-style casts are recast internally as run-time casts, so that, for example:

// internally recast into the
// equivalent safe_cast expression above
( array<ItemVerb^>^ ) verbList->ToArray( ItemVerb::typeid );

On the other hand, because polymorphism provides both an active and a passive mode, it is sometimes necessary to perform a downcast simply to gain access to the non-virtual API of a subtype. This can happen, for example, with the member(s) of a class that wish to address any type within the hierarchy (passive polymorphism as a transport mechanism) but for which the actual instance within a particular program context is known. In this case, the system programmer feels very strongly that having a run-time check of the cast is an unacceptable overhead. If C++/CLI is to serve as the managed systems programming language, it must provide some means of allowing a compile-time (that is, static) downcast. This is why, in the revised language, the application of the static_cast notation is allowed to remain a compile-time downcast:

// ok: cast performed at compile-time. 
// No run-time check for type correctness
static_cast< array<ItemVerb^>^>( 
             verbList->ToArray( ItemVerb::typeid )); 

The problem, of course, is that there is no way to guarantee that the programmer doing the static_cast is correct and well-intentioned; that is, there is no way to force managed code to be verifiable. This is a more urgent concern under the dynamic programming paradigm than under native code, but it is not sufficient reason, in a systems programming language, to deny the user the ability to toggle between a static and a run-time cast.

There is a C++/CLI performance trap and pitfall to be aware of, however. In native programming, there is no difference in performance between the old-style cast notation and the new-style static_cast notation. But in the new language design, the old-style cast notation is significantly more expensive than the new-style static_cast notation, because the compiler internally transforms the old-style notation into a run-time check that throws an exception. Moreover, it changes the execution profile of the code, because a failed cast now results in an uncaught exception bringing down the application—perhaps wisely, but the same error would not raise that exception if the static_cast notation were used. One might argue that this will help prod users into using the new-style notation. But only when it fails; otherwise, it will simply cause programs that use the old-style notation to run significantly slower with no visible explanation of why, similar to the following pitfalls for the C programmer moving to C++:

// pitfall # 1: 
// initialization can remove a temporary class object, 
// assignment cannot
Matrix m;     
m = another_matrix;  

// pitfall # 2: declaration of class objects far from their use
Matrix m( 2000, 2000 ), n( 2000, 2000 );
if ( ! mumble ) return;

Appendix: Motivating the Revised Language Design

Probably the most conspicuous and eyebrow-lifting change between the original and revised language design is the change in the declaration of a managed reference type:

// original language
Object * obj = 0;

// revised language
Object ^ obj = nullptr;

There are two primary questions that get asked when people see this: why the hat (as the caret (^) is called affectionately along the corridors here within Microsoft), but, more fundamentally, why any new syntax at all? Why couldn't the original language design be cleaned up with less invasiveness rather than the admittedly in-your-face strangeness of the revised C++/CLI language design?

C++ is built upon a machine-oriented systems view. Although it supports a high-level type system, there is always an escape mechanism, and those mechanisms always lead down into the machine. When push comes to shove, and the user is hard-pressed to pull a rabbit out of the hat, she tunnels under the program abstractions, picking apart types into addresses and offsets.

The CLI is a software abstraction layer that runs between the OS and our application. When push comes to shove, the user reflects upon the execution environment, querying, coding, and creating objects literally out of thin air. Instead of tunneling, one jumps over, but the experience can be unsettling to those used to having both feet on the ground.

For example, what does it mean when we write the following?

            T t; 

Well, in ISO-C++, regardless of the nature of T, we are certain of the following characteristics: (1) there is a compile-time memory commitment of bytes associated with t equal to sizeof(T); (2) this memory associated with t is independent of all other objects within the program during the extent of t; (3) the memory directly holds the state/values associated with t; and (4) this memory and state persists for the extent of t.

What are some of the consequences of these characteristics?

Item (1) tells us that t cannot be polymorphic; that is, it cannot represent a family of types related through an inheritance hierarchy. A polymorphic type cannot have a compile-time memory commitment except in the trivial case in which derived instances impose no additional memory requirements. This is true regardless of whether T is a primitive type or serves as a base class to a complex hierarchy.

A polymorphic type in C++ is possible only when the type is qualified as either a pointer (T*) or as a reference (T&)—that is, if the declaration only indirectly refers to an object of T. If I write

   Base b = *new Derived;

b does not address a Derived object stored on the native heap. The value b has no connection to the Derived object allocated through the new expression. Rather, the Base portion of the Derived object is sliced off and memberwise-copied into the independent stack-based instance of b. There is really no vocabulary to describe this within the CLI object model.

To delay resource commitment until run-time, two forms of indirection are explicitly supported in C++:

Pointers:   T *pt = 0; 
References: T &rt = *pt; 

Pointers conform to the C++ Object Model. In

            T *pt = 0; 

pt directly holds a value of type size_t that is of fixed size and extent. Lexical cues are used to toggle between the direct use of the pointer and the indirect use of the pointed-to object. It can be famously unclear at times which mode applies to what or when or how: *pt++;

References provide a syntactic relief from the seeming lexical complexity of pointers while retaining their efficiency:

Matrix operator+( const Matrix&, const Matrix& ); 
      Matrix m3 = m1 + m2;

References do not toggle between a direct and an indirect mode; rather they phase-shift between the two: (a) at initialization, they are directly manipulated, but (b) on all subsequent uses, they are transparent.

In a sense, a reference represents a quantum anomaly in the physics of the C++ Object Model: (a) they take up space but, except for temporary objects, they are immaterial; (b) they exhibit deep copy on assignment and shallow copy on initialization; and (c) unlike const objects, they really are immutable. While they are not all that useful within ISO-C++ except as function parameters, they turn out to be an inspirational pivot upon which the language revision pirouettes.

The C++.NET Design Challenge

Literally, for every aspect of the C++ extensions to support CLI, the question always reduces to "How do we integrate this (or that) aspect of the Common Language Infrastructure into C++ so that it (a) feels natural to the C++ programmer, and (b) feels like a first-class feature of the CLI itself?" By all accounts, this balance was not achieved with the original language design.

The Reader Language Design Challenge

So, to give you a flavor of the process, here is the challenge: How should we declare and use a CLI reference type? It differs significantly from the C++ Object Model: a different memory model (garbage collected), different copy semantics (shallow copy), different inheritance models (monolithic, rooted to Object, supporting single inheritance only with additional support for interfaces).

The Original Managed Extensions for C++ Design

The fundamental design choice in supporting the CLI reference type within C++ is to decide whether to remain within the existing language or to extend the language, thereby breaking with the existing standard.

How do you make that decision? Either choice is going to be criticized. The criterion boils down to whether one believes the additional language support represents a domain abstraction (think of concurrency and threads) or a paradigm shift (think of object-oriented type-subtype relationships and generics).

If you believe the additional language support simply represents yet another domain abstraction, you will choose to remain within the existing language. If you see the additional language support as representing a shift in programming paradigm, you will extend the language.

In a nutshell, the original language design saw the additional language support as simply a domain abstraction—which was awkwardly referred to as the managed extensions—and so the design choice followed logically: remain within the existing language.

Once we had committed ourselves to remaining within the existing language, only three alternative approaches were really feasible—remember, I've constrained our discussion to be simply how to represent a CLI reference type:

  1. Have the language support be transparent. The compiler will figure out the semantics contextually. Ambiguity results in an error, and the user will disambiguate the context through some special syntax (as an analogy, think of overload function resolution, with its hierarchy of precedence).
  2. Add support for the domain abstraction as a library (think of the standard template library as a possible model).
  3. Reuse some existing language element(s), qualifying the permissible usages and behavior based on the context of its use outlined in an accompanying specification (think of the initialization and downcast semantics of virtual base classes, or the multiple uses of the static keyword within a function, at file scope, and within a class declaration).

Everyone's first choice is #1. "It's just like anything else in the language, only different. Just let the compiler figure this out." The big win here is that everything is transparent to users in terms of existing code. You haul your existing application out, add an Object or two, compile it, and, ta-dah, it's done. No muss, no fuss. Complete interoperability both in terms of types and source code. No one argues that scenario as being the ideal, much as no one argues the ideal of a perpetual motion machine. In physics, the obstacle is the second law of thermodynamics, and the existence of entropy. In a multi-paradigm programming language, the laws are considerably different, but the disintegration of the system can be equally pronounced.

In a multi-paradigm language, things work reasonably well within each paradigm, but tend to fall apart when paradigms are incorrectly mixed, leading to either the program blowing up or, even worse, completing but generating incorrect results. We run into this most commonly between support for independent object-based and polymorphic object-oriented class programming. Slicing drives every newbie C++ programming nuts:

DerivedClass dc;    // an object
BaseClass &bc = dc; // ok: bc is really a dc
BaseClass bc2 = dc; // ok: but dc has been sliced to fit into bc2

So, the second law of language design, so to speak, is to make things that behave differently look different enough that the user will be reminded of it when he or she programs in order to avoid... well, screwing up. It used to take half an hour of a two-hour presentation to make any dent in the C programmer's understanding of the difference between a pointer and a reference, and a great many C++ programmers still cannot clearly articulate when to use a reference declaration and when a pointer, and why.

These confusions admittedly make programming more difficult, and there is always a significant trade-off between the simplicity of simply throwing such features out and the real-world power that their support provides. What makes the difference is the clarity of the design, that is, whether the features are usable or not, and usually that clarity is achieved through analogy. When pointers to class members were introduced into the language, the member selection operators were extended (-> to ->*, for example), and the pointer-to-function syntax was similarly extended (int (*pf)() to int (X::*pf)()). The same held true with the initialization of static class data members, and so on.

References were necessary for the support of operator overloading. You could get the intuitive syntax of

   Matrix c = a + b;  // Matrix operator+( Matrix lhs, Matrix rhs );
   c = a + b + c;

but that is hardly an efficient implementation. The C-language pointer alternative, while providing efficiency, broke apart with its non-intuitive syntax:

// Matrix operator+( const Matrix* lhs, const Matrix* rhs ); 
Matrix c = &a + &b;  
c = &( &a + &b ) + &c;

The introduction of a reference provided the efficiency of a pointer, but the lexical simplicity of a directly accessible value type. Its declaration is analogous to the pointer, and that was easy to internalize,

   // Matrix operator+( const Matrix& lhs, const Matrix& rhs );
   Matrix c = a + b;

but its semantic behavior proved confusing to those habituated to the pointer.

So, the question then is, how easily will it be for the C++ programmer, habituated to the static behavior of C++ objects, to understand and correctly use the managed reference type? And, of course, what is the best design possible to aid the programmer in that effort?

We felt that the differences between the two types were significant enough to warrant special handling, and therefore we eliminated choice #1. We stand by that choice, even in the language revision. Those that argue for it, and that includes most of us at one time or another, simply haven't sat down and worked through the problems sufficiently. It's not an accusation; it's just how things are. So, if you took the earlier design challenge and came up with a transparent design, I am going to assert that it is not in our experience a workable solution, and press on.

The second and third choices, that of resorting to either a library design, or reusing existing language elements, are both viable, and each have their strong proponents. The library solution became something of a litany within Bell Laboratories due to the easy accessibility of Stroustrup's cfront source. It was a case of, Here Comes Everybody (HCE), at one point. This person hacked on cfront to add concurrency, others hacked on cfront to add their pet domain extension, and each paraded their new Adjective-C++ language, and Stroustrup's correct response was, no, that is best handled by a library.

So, why didn't we choose a library solution? Well, in part, it is just a feeling. Just as we felt that the differences between the two types were significant enough to warrant special handling, we felt that the similarities between the two types were as significant to warrant analogous treatment. A library type behaves in many ways as if it were a type built into the language, but it is not, really. It is not a first class citizen of the language. We felt, as best as we could, we had to make the reference type a first class citizen of the language, and therefore, we chose not to employ a library solution. This remains controversial.

So, having discarded the transparent solution because of a feeling that the reference type and the existing type object model are too different, and having discarded the library solution because of a feeling that the reference type and the existing type object model need to be peers within the language, we are left with the problem of how to integrate the reference type into the existing language.

If we were starting from scratch, of course, we could do anything we wished to provide a unified type system, and—at least until we made changes to that type system—anything we did would have the shine of a spanking brand-new widget. This is what we do in manufacturing and technology in general. We are constrained, however, and that is both a blessing and a curse. We can't throw out the existing C++ object model, so anything we do must fit into it. In the original language design, we further constrained ourselves not to introduce any new tokens; therefore, we must make use of those we already have. This doesn't give us a lot of wiggle-room.

So, to cut to the chase, in the original design, given the constraints just enumerated (hopefully without too much confusion), the language designers felt that the only viable representation of the managed reference type was to reuse the existing pointer syntax—references were not flexible enough, since they cannot be reassigned and cannot refer to no object:

// the mother of all objects allocated on the managed heap...
Object * pobj = new Object;
 
// the standard string class allocated on the native heap...
string * pstr = new string; 
 

These pointers are significantly different, of course. For example, when the Object entity addressed by pobj is moved during a compaction sweep of the managed heap, pobj is transparently updated. No such notion of object tracking exists for the relationship between pstr and the entity it addresses. The entire C++ notion of a pointer as a toggle between a machine address and an indirect object reference doesn't exist. A handle to a reference type encapsulates the actual virtual address of the object in order to facilitate the runtime garbage collector, much as a private data member encapsulates the implementation of a class in order to facilitate extensibility and localization, except that the consequences of violating that encapsulation in a garbage-collected environment are considerably more severe.

So, while pobj looks like a pointer, many common pointerish things are prohibited, such as pointer arithmetic and casts that step outside the type system. We can make the distinction more explicit if we use the fully qualified syntax of declaring and allocating a managed reference type:

// ok, now these look different ...

Object __gc * pobj = __gc new Object; 
string * pstr = new string;  

At first blush, the pointer solution seemed reasonable. After all, it seems the natural target of a new expression, and both support shallow copy. One problem is that a pointer is not a type abstraction, but a machine representation (with a tag type recommendation as to how to interpret the extent and internal organization of the memory following the address of the first byte), and this falls short of the abstraction the software runtime imposes on memory and the automation and security one can extrapolate from that. This is a historical problem between object models that represent different paradigms.

A second problem is the (metaphor alert—a strained metaphor is about to be attempted—all weak-stomached readers are advised to hold on or jump to the next paragraph) necessary entropy of a closed language design that is constrained to reuse constructs that are both too similar and significantly different and result in a dissipation of the programmer's energy in the heat of a desert mirage. (Metaphor alert end.)

Reusing the pointer syntax turned out to be a source of cognitive noise for the programmer: you have to make too many distinctions between the native and managed pointers, and this interferes with the flow of coding, which is best managed at a higher level of abstraction. That is, there are times when we need to, as system programmers, go down a notch to squeeze some necessary performance, but we don't want to dwell at that level.

The success of the original language design is that it supported the unmodified recompilation of existing C++ programs, and provided support for the Wrapper pattern of publishing an existing interface into the new managed environment with a trivial amount of work. This could then add additional functionality in the managed environment, and, as time and experience dictated, one could port this or that portion of the existing application directly into the managed environment. This is a magnificent achievement for C++ programmers with an existing code base and an existing base of expertise. There is nothing of which we need to be ashamed in this.

However, there are significant weaknesses in the actual syntax and vision of the original language design. This is not due to inadequacies of the designers, but in the conservative nature of their fundamental design choice to remain within the existing language. And that resulted from a misapprehension that the managed support represented not a domain abstraction but an evolutionary programming paradigm that required a language extension similar to that introduced by Stroustrup to support Object-Oriented and generic programming. This is what the revised language design represents, and why it is both necessary and reasonable despite some of the grief it engenders for those who committed themselves to the original language design. This is the motivation behind both this guide and the translation tool.

The Revised C++/CLI Language Design

Once it became clear that support for the Common Language Infrastructure within C++ represented a distinct programming paradigm, it followed that the language needed to be extended to provide both a first class coding experience for the user, and an elegant design integration with the ISO-C++ standard in order to respect the sensibility of the larger C++ community and engage their commitment and assistance. It also followed that the diminutive name of the original language design, The Managed Extensions for C++, had to be replaced as well.

The flagship feature of the CLI is the reference type, and its integration within the existing C++ language represented a proof of concept.  What were the general criteria? We needed a way to represent the managed reference type that both set it apart and yet felt analogous to the existing type system. This would allow people to recognize the general category of form as familiar while also noting its unique features. The analogy is the introduction of the reference type by Stroustrup in the original invention of C++. So the general form becomes

      Type TypeModToken Id [ = init ];

where TypeModToken would be one of the recognized tokens of the language reused in a new context (again, similar to the introduction of the reference).

This was surprisingly controversial at first, and still remains a sore point with some users. The two most common initial responses I recall are (a) I can handle that with a typedef, wink, wink, and (b) it's really not so bad. (The latter reminds me of my response to the use of the left and right shift operators for input and output in the iostream library.)

The necessary behavioral characteristics are that it exhibit object semantics when operators are applied to it, something the original syntax was unable to support. I liked to call it a flexible reference, thinking in terms of its differences with the existing C++ reference (yes, the double use of the reference here—one referring to the managed reference type and the other referring to the "it's not a pointer, wink, wink" native C++ type—is unfortunate, much like the reuse of template in the Gang of Four Patterns book for one of my favorite design strategies):

  1. It would have to be able to refer to no object. The native reference, of course, cannot do that directly, although people are always showing me a reference being initialized to a reinterpret-cast of a 0. (The conventional way to have a reference refer to no object is to provide an explicit singleton representing by convention a null object which often serves as a default argument to a function parameter.)
  2. It would not require an initial value, but could begin life as referring to no object.
  3. It would be able to be reassigned to refer to another object.
  4. The assignment or initialization of one instance with another would exhibit shallow copy by default.

As a number of folks made clear to me, I was thinking of this puppy backwards. That is, I was referring to it by the qualities that distinguished it from the native reference, not by the qualities that distinguished it as a handle to a managed reference type.

We want to call the type a handle rather than a pointer or reference because both of these terms carry baggage from the native side. A handle is the preferred name because it is a pattern of encapsulation—someone named John Carolan first introduced me to this design under the lovely name of the Cheshire Cat since the substance of the object being manipulated can disappear out from under you without your knowledge.

In this case, the disappearing act results from the potential relocation of reference types during a sweep of the garbage collector. What happens is that this relocation is transparently tracked by the runtime, and the handle is updated to correctly point to the new location. This is why it is called a tracking handle.

So, the final item I wish to mention about the new tracking reference syntax is the member selection operator. To me, it seemed like a no-brainer to use the object syntax (.). Others felt the pointer syntax (->) was equally obvious, and we argued our position from different facets of a tracking reference's usage:

// the pointer no-brainer
T^ p = gcnew T;

// the object no-brainer
T^ c = a + b;

So, as with light in physics, a tracking reference behaves in certain program contexts like an object and in other situations like a pointer. The member selection operator used is the arrow, as in the original language design.

A Summary Digression on Keywords

Finally, an interesting question to ask is: why did Stroustrup add class to the C++ language design? There is no real necessity for its introduction, since the C-language struct is extended within C++ to support everything that is possible to do with a class. I have never asked Bjarne about this, so I have no special insight, but it is an interesting question and seems somewhat relevant given the number of keywords added to C++/CLI.

One possible answer—I call it the foot soldier shuffle—is to argue that, no, the introduction of class was absolutely necessary. After all, not only is the default member access different between the two keywords, but so is the access level of the derivation relationship as well. So of course how could we not have both?

But back then, of course, introducing a new keyword that was not only incompatible with the existing language but imported from a different branch of the language tree (Simula 67) risked offending the C-language community. Was the difference in implicit default access rules really the motivation? I can't convince myself of that.

For one thing, the language neither prevents nor warns if the designer using the class keyword makes the entire implementation public. There is no policy in the language itself with regard to public and private access, and so it hardly seems reasonable to suggest that the default unlabeled access permissions are considered an important property—that is, important enough to outweigh the cost of introducing an incompatibility.

Similarly, the wisdom of defaulting an unlabeled base class to private inheritance seems questionable as a design practice. It is both a more complex and less understood form of inheritance since it does not exhibit type/subtype behavior and thus violates the rules of substitutability. It represents a reuse not of interface but of implementation, and having private inheritance be the default is, I believe, mistaken.

Of course, I couldn't say that in public because in the language marketplace, one should never admit one iota of imperfection in the product, since that is providing fodder to the enemy who will be swift to seize on any competitive advantage to gain market share. Ridicule is particularly popular in the intellectual niche. Or, rather, one doesn't admit imperfection until the new, improved product is ready to be rolled out.

What other reason could there be for the introduction of the class incompatibility? The C-language conception of a struct is that of an abstract data type. The C++ conception of a class (well, of course, it did not originate with C++) is that of a Data Abstraction, with its accompanying ideas of encapsulation and interface contract. An abstract data type is just a contiguous chunk of data associated with an address—point to it, cast it about, pick it apart, and move on swiftly. A data abstraction is an entity with lifetime and behavior. It's of pedagogical significance, because words make a world of difference—at least within a language. This is another lesson the revised design takes to heart.

Why didn't C++ just drop struct altogether? It is inelegant to retain the one and introduce the other, and then literally minimize the difference between them. But what other choice was there? The struct keyword had to be retained, because C++ had to be as closely backward compatible with C as possible; otherwise, not only would it have been less popular with the existing programmer base, but it probably would not have been allowed out the door. (But that's another story for another time and place.)

Why is a struct by default public? Because otherwise, existing C programs would not compile. That would be a disaster in practice, although one would certainly never hear that mentioned in Advanced Principles of Language Design. The language could have imposed a policy such that the use of struct guarantees a public implementation whereas the use of class guarantees a private implementation and a public interface, but that would serve no practical purpose and would therefore be a bit too precious.

In fact, during testing of the release of the cfront 1.0 language compiler from Bell Laboratories, there was a minor debate within a small circle of language lawyers as to whether a forward declaration and subsequent definition (or any such combination) had to consistently use one or the other keyword, or whether they could be used interchangeably. If struct had any real significance, of course, the latter would not have been allowed.

Acknowledgement

I would like to thank the various members of the Visual C++ Team for their continued help and guidance in helping me understand the issues involved in the evolving migration from the original Managed Extensions for C++ to the revised C++/CLI language design. Special thanks go in particular to Arjun Bijanki and Artur Laksberg, both of whom endured great confusion on my part. Thanks go as well to Brandon Bray, Jonathan Caves, Siva Challa, Tanveer Gani, Mark Hall, Mahesh Hariharan, Jeff Peil, Andy Rich, Alvin Chardon, and Herb Sutter. All have been of incredible help and responsiveness. This document is a tribute to all their expertise.

Related Books

STL Tutorial and Reference Guide by David Musser, Gillmer Derge, and Atul Saini, Addison-Wesley, 2001

C++ Standard Library by Nicolai Josuttis, Addison-Wesley, 1999

C++ Primer by Stanley Lippman and Josee Lajoie, Addison-Wesley, 1998


About the author

Stanley Lippman, an Architect on the Visual C++ team at Microsoft, began working on C++ with its inventor Bjarne Stroustrup back in 1984 within Bell Laboratories. In between, he worked in Feature Animation at Disney and DreamWorks, and was a Software Technical Director on Fantasia 2000.
