Visual C++ .NET

Language Enhancements and Simplified GUI Development Enrich Your C++ Apps

Richard Grimes

Code download available at: VisualC++NET.exe (61 KB)

This article assumes you're familiar with C++


SUMMARY

Managed Extensions for C++ is the preferred programming language for developing Windows Services. Visual Studio .NET 2003 introduces C++ support for designers, providing all the RAD facilities that were available to other languages for developing forms, controls, components, and DataSets. Furthermore, support has been added for the creation of verifiable assemblies with C++.

In this article, the author reviews these additions as well as the new compiler and linker switches, demonstrating how C++ remains the premier systems language while becoming a powerful tool for .NET GUI development as well.

Contents

Designers
References
Windows Forms Designer and Satellites
Localizing Non-form Resources
Web References
Verifiable Library Assemblies
New Compiler Switches
New Linker Switches
Conclusion

When work started on the Microsoft® .NET Framework, the C++ team had the most difficult task of all of the language teams. Unlike the C# team, which had a clean slate, the C++ team had a mature language full of features—some useful, some not. Their job was clear: make sure that all code that compiled and ran as native code would also compile and run as intermediate language (IL). Although this sounds straightforward, it actually involved a lot of effort, and the result, It Just Works (IJW), is a great feature that deserves the accolades it has received. However, the work on IJW had a deleterious effect on Visual C++®: there simply was not enough time to add the bells and whistles that were present in the other Microsoft .NET-compliant languages.

With the Visual Studio® .NET 2003 release, the Visual C++ team had the opportunity to add some of the missing polish as well as other new features. Managed C++ has always been a first class common language runtime (CLR)-targeted language, but now it has the RAD features of the other CLR-targeted languages, which gives you the ease of use of designers together with the raw power of C++.

Designers

C++ is the language of choice for developing fast, efficient code for Windows® Services (formerly known as Windows NT® Services), COM+ components, and library code where efficiency is a priority. However, a sizable portion of C++ developers also use the language to create GUI applications, either by writing to the Win32® API directly or using MFC. With version 1.0 of the .NET Framework you could write a managed GUI application with C++, but one of the features that did not make the cut in the previous version of Visual Studio .NET was support for designers. The effect of this was that while developers working in C# and Visual Basic® .NET could use RAD features to drag and drop components and controls from the toolbox or the Server Explorer, the C++ developer had to do all this work by hand.

This has changed in Visual Studio .NET 2003, which now supports the Windows Forms Designer, the Controls Designer, the Components Designer, and the XML Schema (Dataset) Designer. Designers allow you to create forms and composite controls by dragging and dropping them in a WYSIWYG fashion. The composition is serialized to your project as code generated by a CodeDOM. Visual Studio .NET 2003 has a CodeDOM for C++ (in the MCppCodeDomProvider and MCppCodeDomParser assemblies), which means that the designers can parse and generate C++. Figure 1 shows a C++ Windows Forms project where a form is being edited using the Windows Forms Designer.

Figure 1 Editing with Windows Forms Designer


The C++ designers have been written to fit into the designer architecture used by C# and Visual Basic .NET in the first version of Visual Studio .NET; in those languages a class is implemented in a single file. As a consequence, the project wizards for C++ will put the code for the class in the header file, although they still generate a .cpp file. When the designer generates new methods for event handlers, it places the implementation in the header file. However, as a C++ developer you are used to separating implementation from interface, so you can remove the implementation of methods from the header file and place it in the associated .cpp file. The method you are most likely to do this with is InitializeComponent. As with C#, this method is used to perform initialization on the components that you added through the designer. If you move the implementation to the .cpp file, the designer will recognize this and continue to place initialization code in that method body.
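
As a minimal sketch of that arrangement (the class and member names here are hypothetical, not wizard output), the declaration stays in the header while the designer-maintained body lives in the .cpp file:

// MyForm.h (hypothetical names) - the declaration stays in the header
#using <mscorlib.dll>
#using <System.dll>
#using <System.Drawing.dll>
#using <System.Windows.Forms.dll>

public __gc class MyForm : public System::Windows::Forms::Form
{
public:
   MyForm() { InitializeComponent(); }
private:
   System::Windows::Forms::Button* okButton;
   void InitializeComponent();   // implementation moved to MyForm.cpp
};

// MyForm.cpp - #include "MyForm.h" here; the designer keeps maintaining this body
void MyForm::InitializeComponent()
{
   this->okButton = new System::Windows::Forms::Button();
   this->okButton->Text = S"OK";
   this->Controls->Add(this->okButton);
}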

References

The toolbox contains controls and components that can be dropped onto a designer's surface. These components will be implemented in many different assemblies, and they can even be COM components. When you drop a component onto a designer, the designer will change the project's settings to enable the compiler to access the type information of the component's assembly (or type library) and to enable the final output assembly to access the assembly (or COM server) at run time.

This is achieved through the References folder, which you can see in Solution Explorer. Managed C++ accesses an assembly's metadata through the #using statement. The C++ team decided not to employ #using for components dropped onto designers; they chose to use the command line equivalent, /FU, instead. The /FU switch can be used multiple times on the compiler command line and gives the compiler a path to each metadata file that will be used. All entries in the References folder will add the /FU switch to the project's default compiler command line. The References folder can contain assemblies that are in a project beneath the current solution, assemblies in another location on your hard disk, standard assemblies that are listed on the .NET tab of the Add Reference dialog, and type libraries. Each of these assemblies requires different handling.
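
For example, a couple of references added through the folder end up on the compiler command line along these lines (the paths and assembly names here are hypothetical):

cl /clr /FU"C:\MySolution\Debug\MyControls.dll" /FU"C:\MySolution\Debug\Interop.SomeComServer.dll" MyForm.cpp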

By default, Visual Studio .NET 2003 will set a configuration's output folder for each project in a solution to the same folder. This means that at run time all the project outputs will be in the same folder and there will be no problem locating the assemblies. If you select an assembly from another location, the Add References dialog will copy the assembly to the current configuration's output folder (this procedure will be repeated when you build the solution for another configuration) so that this assembly is in the same folder as the assemblies that use it. In both cases, the path used for /FU will be the path to the assembly in the output folder.

The Add Reference dialog will also list standard assemblies. These are typically the Framework assemblies which can be found in the Global Assembly Cache (GAC). The IDE actually looks for a key under the following location in the registry:

HKLM\Software\Microsoft\VisualStudio\7.1\AssemblyFolders

For the Enterprise Architect edition, use 7.1Exp rather than 7.1.

The default value of each child key is the location of the metadata files. The /FU entry for the assemblies added through this tab will have the path given in the key under AssemblyFolders. Framework assemblies are usually installed into the Global Assembly Cache and are often stored as native images with ngen. However, there will be an additional copy of these assemblies in the current framework folder under %systemroot%\Microsoft.NET, which will allow you access to its metadata. If you add such an assembly to the References folder, the path to the copy under %systemroot%\Microsoft.NET will be used in /FU.

Type libraries present a different issue. The /FU switch cannot be passed a type library; instead, the Add References dialog will use an interop assembly. For example, if you add a reference to the Microsoft.mshtml type library, the path for /FU will be:

C:\Winnt\assembly\Gac\Microsoft.mshtml\7.0.3300_b03f5f7f11d50a3a\Microsoft.mshtml.dll

This is the primary interop assembly generated with tlbimp, signed by Microsoft, and inserted into the GAC when Visual Studio is installed. If you add your own COM server, the dialog will use tlbimp to create an interop assembly, which will be copied to your project configuration's output folder. This means that even though C++ can access the COM server through IJW, the actual access will be performed through COM interop.

Note that types in modules (.netmodule) and .obj files can be accessed through #using. For a module, this is the same as using the linker /addmodule switch, and for an .obj file it is the same as adding the .obj file to the linker command line. However, the Add Reference dialog does not allow you to add a module or an .obj file, so it will only work with assemblies.
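
For example (the file names here are hypothetical), such references are made directly in source code rather than through the dialog:

#using "Helpers.netmodule"   // same effect as the linker /addmodule switch
#using "Utility.obj"         // same effect as adding Utility.obj to the linker command line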

Windows Forms Designer and Satellites

The Windows Forms Designer was always the most visible of the designers in the first version of Visual Studio. Managed C++ missed this first wave of tools for the .NET Framework, but to a large extent this was not a problem because all of the controls and their events were adequately documented in the MSDN® library. This meant that you could develop forms and controls without a designer, although it wasn't quite as quick a process.

One area where the Windows Forms Designer is particularly useful is in the development of satellite assemblies. A satellite contains localized resources, as opposed to the assembly that uses the satellite, which holds only locale-neutral resources. A satellite assembly is a library assembly and takes the short name of the neutral assembly appended with .resources. The culture part of the satellite name defines the name of the culture for which it holds resources, and since the Windows file system only recognizes the short name of the assembly, each satellite should be stored in a subfolder with the name of the culture. At run time the form code uses a ResourceManager object, which will search for and load the appropriate satellite for the current thread's CurrentUICulture setting and access the resources associated with the form.
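
The following sketch shows the kind of lookup the generated form code performs at run time; the de-DE culture and the "$this.Text" key used for the form caption are only examples:

#using <mscorlib.dll>
#using <System.dll>
#using <System.Drawing.dll>
#using <System.Windows.Forms.dll>
using namespace System;
using namespace System::Globalization;
using namespace System::Resources;
using namespace System::Threading;

void ApplyGermanCaption(System::Windows::Forms::Form* form)
{
   // Make the ResourceManager look for the de-DE satellite (example culture)
   Thread::CurrentThread->CurrentUICulture = new CultureInfo(S"de-DE");

   // The resource name is the form's full type name, so the manager can be
   // constructed from the form's type
   ResourceManager* resources = new ResourceManager(form->GetType());
   form->Text = resources->GetString(S"$this.Text");
}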

The Windows Forms Designer allows you to develop localized forms using satellite assemblies. Each form has a Language property, and when you change its value, the designer will load a .resx file associated with the specified culture. The .resx file is an XML file that contains the localized resources, and any binary resources are serialized first before being stored in the file. This means that if the binary resources (most likely icons) change, they have to be serialized again by adding the resource through the Property window. You will develop your satellites simply by changing the form's properties for each culture you support; the IDE does all of the heavy lifting for you at build time.

Behind the scenes, the build process uses resgen to compile each .resx file, naming the compiled resources after the .resx file (that is, the culture is part of the name of the compiled resources). The satellite assemblies are created using the assembly linker tool (al.exe), which is passed the compiled resources file as well as the culture that you used for the Language property of the form. Unfortunately, you have little control over the build process; the only build property you can change is the name of the compiled resources file by way of the property pages of the .resx file.

Indeed, this name is alluded to in the code generated by the New Project wizard, which warns you that if you change the name of the class (from Form1) you are responsible for editing the resource's filename property. My solution is to remove the Form1.h file from the project and use the Add New Item wizard to create a form with a more meaningful name (the full instructions are in the code download available at the link at the top of this article). I would have preferred a more creative name for the first form in a project, perhaps one based on the project name. Calling the first form Form1 just adds more work, which is not what a wizard should be doing.

Localizing Non-form Resources

Some resources are not associated with forms. For example, if you decide to follow the Microsoft recommendations and use the EventLog class to log messages to the Windows NT Event Log, you have the responsibility of localizing the messages that it logs. I recommend that instead you use ReportEvent through platform invoke or IJW. This is because the Event Log is designed to perform localization when the message is read (when the locale of the reader is known), but the EventLog class is designed for localization to be performed as the message is logged and when the reader of the message is unknown.
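
As a minimal sketch of that approach (the event source name and message ID below are hypothetical, and the localizable message text itself would live in a message resource registered for the source), ReportEvent can be reached through DllImport:

#using <mscorlib.dll>
using namespace System;
using namespace System::Runtime::InteropServices;

[DllImport("advapi32.dll", CharSet=CharSet::Auto)]
extern "C" IntPtr RegisterEventSource(String* serverName, String* sourceName);

[DllImport("advapi32.dll", CharSet=CharSet::Auto)]
extern "C" bool ReportEvent(IntPtr hEventLog, short type, short category,
   int eventID, IntPtr userSid, short numStrings, int dataSize,
   String* strings[], IntPtr rawData);

[DllImport("advapi32.dll")]
extern "C" bool DeregisterEventSource(IntPtr hEventLog);

void LogLocalizableEvent()
{
   // "MyService" is a hypothetical event source with a registered message file
   IntPtr hLog = RegisterEventSource(0, S"MyService");
   String* inserts[] = new String*[1];
   inserts[0] = S"replacement string for the message's %1 insert";
   // 1 == EVENTLOG_ERROR_TYPE; 0x1000 is a hypothetical message ID
   ReportEvent(hLog, 1, 0, 0x1000, IntPtr::Zero, 1, 0, inserts, IntPtr::Zero);
   DeregisterEventSource(hLog);
}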

To create localized resources, you should use the naming scheme of name.culture.resx, where name is the name of the resource that you pass to the constructor of ResourceManager and culture is the culture for which the resources are localized. For UK English, you would use en-GB, and for the neutral resource, you would omit this part of the name. The build process will assume that any .resx file with a culture in its name will be used to generate a satellite assembly, and any resource without a culture will be embedded as the neutral resource. However, at the time of writing (release candidate 1), there were a couple of problems with this, which derive from the fact that you don't get complete control over the command lines used to create the resources.

The first problem is trivial. When the Add New Item wizard adds a .resx file, it provides the following for the name of the compiled resource file, where culture is the culture identifier that you have used:

$(IntDir)/$(RootNamespace).$(SafeParentName).<culture>.resources

The problem is in the use of the two macros, $(RootNamespace) and $(SafeParentName), which are new with Visual Studio .NET 2003. $(RootNamespace) will be the namespace where a form is defined (for a non-form resource, the name of the project is used). $(SafeParentName) is described by the MSDN Library as the name of the immediate parent, which is usually the form name. However, for a file that you add through the Add New Item wizard, this macro will be the static string "ResourceFiles". As a consequence, the name that you pass to the constructor of ResourceManager should be a combination of the two. For example:

// project called Test
ResourceManager __gc* rm;
rm = __gc new ResourceManager(
   S"Test.ResourceFiles",
   Assembly::GetExecutingAssembly());

The solution is to edit the Resource File Name property and remove the $(SafeParentName) macro. Furthermore, since the name of the resource is always assumed to be the root namespace, if you have multiple resources in the same culture, they will each compile to the same file name—with each new compilation overwriting the last file.

Web References

The References folder context menu also gives you an option to add a Web Reference, but strangely, when you add one, it will not actually be added to the References folder. Instead, each Web Reference gets a node at the top level of Solution Explorer. The entries within this node are a .discomap file, which contains information about the discovery documents available for the Web Service, along with the .disco and .wsdl files.

The wsdl.exe tool is run on the .discomap file to generate a Web proxy for the service. In Visual Studio .NET 2002, a similar procedure was performed, but Web Services Description Language (WSDL) was requested to create a C# file that was compiled at build time to a .netmodule and linked to the assembly. As I said earlier, Visual Studio .NET 2003 has a CodeDOM for C++ and this is passed to the wsdl.exe tool through its /language switch so that wsdl.exe creates a managed C++ proxy. Curiously, the build process will compile this code into an assembly, giving it the same name as that of the generated proxy source file. Therefore, if you are accessing a Web Service on the local machine, the node generated in Solution Explorer will be called localhost, the proxy source file will be called localhost.h, and the proxy assembly will be called localhost.h.dll. You can change the name of the node, but you cannot directly access the name of the generated proxy assembly so that .h will always appear in its short name.
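
As a rough sketch of calling through such a proxy (Service1 and HelloWorld are hypothetical names that would come from the service's WSDL, and the namespace is assumed to follow the name of the Web reference node):

#using <mscorlib.dll>
#using <System.dll>
#using <System.Web.Services.dll>
#using "localhost.h.dll"      // the proxy assembly built for the Web reference
using namespace System;

int main()
{
   // The proxy class lives in a namespace named after the Solution Explorer node
   localhost::Service1* svc = new localhost::Service1();
   Console::WriteLine(svc->HelloWorld());
   return 0;
}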

Verifiable Library Assemblies

One of the restrictive aspects of the first version of Visual Studio .NET was the lack of verifiability of C++ assemblies. When the CLR loads an assembly, it performs certain tests to check that the assembly has not been corrupted or altered in some way that would adversely affect your machine. Part of this process involves an analysis of the code to ensure that the IL is valid and that the stack is set up correctly before each opcode executes; it also checks that the code is well formed, for example by making sure that jumps are only performed within a method. However, some valid code may perform actions that are unsafe and thus can only be performed in a trusted context. These trickier checks are called verification.

There are several steps involved in making a library verifiable; the most important involves writing code that will not fail verification. Thus, you cannot use __nogc types or unmanaged pointers. You should be wary of using interior pointers in your code because although you can assign and dereference them, if arithmetic is performed on them, the code will not be verifiable. Exceptions must be managed objects, so you cannot throw primitive types.

Of course, a verifiable assembly should not use unmanaged code of any kind. This means you should not be bringing in code through static libraries, though you can use platform invoke. This also discounts the use of the CRT so you have to use the linker switch /noentry. Also be wary of casts: you cannot use static_cast<> for downcasts between class types and you should never use reinterpret_cast<> because it will produce unverifiable code. Keywords like __asm, __try, and __except are used with unmanaged code, so they are not allowed and clearly #pragma unmanaged is also not allowed.
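
For instance (the class names here are hypothetical), a downcast in verifiable code has to go through dynamic_cast<>, and anything thrown has to be a managed object:

#using <mscorlib.dll>
using namespace System;

public __gc class Animal {};
public __gc class Dog : public Animal
{
public:
   void Bark() { Console::WriteLine(S"Woof"); }
};

void MakeNoise(Animal* a)
{
   // static_cast<Dog*>(a) would compile, but the IL it produces is unverifiable
   Dog* d = dynamic_cast<Dog*>(a);
   if (d == 0)
      throw new ArgumentException(S"not a Dog");   // managed object, not a primitive
   d->Bark();
}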

The C++ compiler can optimize code, and you will normally turn optimizations on for release configurations. However, the code the optimizer generates from otherwise verifiable code will actually fail the verification process, so in all configurations you should turn off optimizations with /Od. The verifier also does not like empty data sections in the portable executable (PE) file, so you should add a global variable in one of the source files so that you get the following in the initialized data section:

extern "C" int _dummy = 1;

The compiler adds a call to an internal C runtime (CRT) function called _check_commonlanguageruntime_version to check that version 1.1 of the CLR is present. However, since you have removed support for the CRT by using the /noentry linker switch, the linker will complain that it cannot find the symbol _main. The way to get around this problem is to link with the nochkclr.obj file provided in the Visual C++ .NET libraries folder or to compile with /clr:initialAppDomain, which is described in the next section.
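
Assuming a library called VerifiableLib (a hypothetical name) whose code has already been compiled with /clr, the link step would then look something like this:

link /dll /noentry nochkclr.obj VerifiableLib.obj /out:VerifiableLib.dll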

Of course, the purpose of an assembly being verifiable is that it can be verified! By default, the C++ compiler will add [SecurityPermissionAttribute] to your code to tell the runtime to skip verification, so you have to explicitly tell the runtime to perform the action with the assembly-level attribute:

[assembly: SecurityPermissionAttribute( SecurityAction::RequestMinimum, SkipVerification=false)];

The final step is to mark the assembly to indicate that it only contains IL. To do this, you have to alter the CLR header of the PE file and add the COMIMAGE_FLAGS_ILONLY flag. There is no mechanism in the IDE to do this, but Visual C++ .NET provides the source code for a tool called silo.exe (in the SetILOnly project).

New Compiler Switches

The compiler has one new switch: /clr:initialAppDomain. In order to understand the reason for this switch, consider the code in Figure 2 and the following snippet:

// Compile with /LD but without /clr so an unmanaged DLL is created
typedef void (*FUNC)(void);

extern "C" __declspec(dllexport) void ExternalFunc(FUNC f)
{
   f();
}

The function ExternalFunc is implemented in a native DLL and is used to simulate calling a native function while passing it a pointer to managed code in your assembly. ExternalFunc merely calls the function pointer, which then prints out the name of the application domain. Therefore, in the Proc method there is a managed-to-native transition followed by a native-to-managed transition. If you create an instance of ADClass in your main function and then call Proc, you should get the name of the default application domain, which is the same as the name of the process.

Figure 2 Reentering Managed Code

// Compile with /clr and link to the export library created by compiling
// the native DLL shown in the accompanying snippet
#using <mscorlib.dll>
using namespace System;
using namespace System::Reflection;   // needed for Assembly

typedef void (*FUNC)(void);
extern "C" void ExternalFunc(FUNC f);

void ShowAppDomain()
{
   AppDomain* ad = AppDomain::CurrentDomain;
   Console::WriteLine(ad->FriendlyName);
}

public __gc class ADClass : public MarshalByRefObject
{
public:
   void Proc()
   {
      ExternalFunc(::ShowAppDomain);
   }
};

void main()
{
   AppDomain* ad = AppDomain::CreateDomain(S"new domain");
   Assembly* assem = Assembly::GetExecutingAssembly();
   ADClass* a = static_cast<ADClass*>(
      ad->CreateInstanceAndUnwrap(assem->FullName, S"ADClass"));
   a->Proc();
}

Now consider the main function shown in Figure 2. This creates a new application domain and creates an instance of the ADClass in the new domain before calling Proc. Logically, you would expect this code to print "new domain" at the console. However, there is a problem in the first version of the .NET Framework: when the call is made to native code, the thunk code forgets the application domain where the call originated. This means that when the native code calls back into managed code, the runtime does not know which application domain to use. In the first version of the Framework, the runtime was consistent and ensured that the callback into managed code would go to the first application domain. Consequently, if you compile and call this code using version 1.0 of the Framework, you will see the name of the first application domain (the name of the process) printed on the console.

To allow native code to call managed code, the compiler creates a vtable. At compile time, this is populated with the metadata token of the appropriate function. At run time, the CLR performs a fixup and will read the value in the vtable and determine the address of the identified method. It then puts this address into the vtable slot. The PE file requires an actual address at compile time, so the compiler creates a simple thunk that jumps to whatever value is in the vtable. The compiler will pass the address of this thunk to the unmanaged function when you call ExternalFunc.

With version 1.1 of the .NET Framework, the vtable includes a flag that indicates to the runtime that if the call comes into the assembly from native code, the application domain that's used will be that of the last application domain used to call into native code. If you look at an assembly with ILDasm, you'll see that there is a directive to perform the fixups on the vtable called .vtfixup. For the new behavior, the .vtfixup directive will have the flag retainappdomain, which is the default when using the /clr switch.

Unfortunately, this flag is invalid for version 1.0 of the CLR. If you want to compile code that will run on version 1.0, you have to rely on /clr:initialAppDomain, which uses the fromunmanaged flag on the .vtfixup directive. There is no property for a managed project in Visual Studio .NET that enables you to use this switch. However, if you use both /clr and /clr:initialAppDomain on a command line, the compiler will use the latter. So if you are compiling with the IDE, you can put /clr:initialAppDomain in the Additional Options edit box on the C/C++ Command Line Project property page.
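
For example (the source and import library names are hypothetical), building the code in Figure 2 so that it also runs on version 1.0 of the runtime could use a command line like the following, with /clr:initialAppDomain overriding the plain /clr setting as described above:

cl /clr /clr:initialAppDomain AppDomainTest.cpp /link DomainDll.lib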

New Linker Switches

There are five new linker switches in three main groups: resources, assembly signing, and debugging information. Only one of these switches, /assemblydebug, is available through the IDE. If you want to use the others, you'll have to use the Linker Command Line Project property page.

As the name suggests, the /assemblylinkresource switch allows you to supply the name of a resource file that will be linked to the assembly. The default is to embed resource files so that they become part of the assembly's PE file. Linking is used when the resource file will be a separate file that you deploy with the other assembly files. If you want to link a compiled .resx resource file, don't allow the IDE to build the .resx. Instead, you should exclude the file from the build, add a separate custom prelink build step, and also use the Linker Command Line Project property page to specify the /assemblylinkresource switch.
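
For example (the file names are hypothetical), the prelink step would run resgen itself and the switch would then name the compiled file on the linker command line:

resgen MyStrings.resx MyStrings.resources
link /dll MyLib.obj /assemblylinkresource:MyStrings.resources /out:MyLib.dll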

The /keyfile and /keycontainer switches let you specify the name of a file or cryptographic container that holds the key used to sign an assembly. These switches allow you to provide this information on the linker command line rather than through custom attributes in a source file. Indeed, those custom attributes are merely instructions passed through to the linker, so the new switches represent no new behavior. Similarly, the /delaysign linker switch is equivalent to the [AssemblyDelaySign] attribute and indicates that the assembly will eventually be signed, but not right now.
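
For example (the key file names are hypothetical), signing and delay signing move from source-level attributes onto the linker command line:

link /dll MyLib.obj /keyfile:MyKeys.snk /out:MyLib.dll
link /dll MyLib.obj /keyfile:MyPublicKey.snk /delaysign /out:MyLib.dll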

The final linker switch, /assemblydebug, is interesting. This switch is available on the Linker Debugging Project property page through the Debuggable Assembly property. This switch is equivalent to the [Debuggable] attribute, which the compiler adds to indicate to the runtime that the assembly can be debugged. The values passed through the attribute tell the runtime to track information important to the debugger and to turn off just-in-time (JIT) optimization. When you build an assembly with Visual Studio .NET 2002 using the /Zi switch, the [Debuggable] attribute will be provided, as shown here:

[assembly: Debuggable(true, true)];

This is fine for a debug configuration, but if you use the /Zi switch to generate symbols for a release configuration, it means that the runtime will not use JIT optimizations. With Visual Studio .NET 2003 you have to explicitly specify the [Debuggable] attribute either with a custom attribute or through the /assemblydebug switch.

The /assemblydebug linker switch will specify [Debuggable(true, true)] and should be used for debug builds. If you want to generate symbols for a release configuration, you can change the Debuggable Assembly property on the Linker Debugging property page to explicitly specify that the assembly is not debugger-friendly. For example, use "No runtime tracking and enable optimizations (/ASSEMBLYDEBUG:DISABLE)".
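
In other words (the object and output file names below are hypothetical), the two settings correspond to linker command lines like these. For a debug configuration:

link MyApp.obj /assemblydebug /out:MyApp.exe

For a release configuration built with /Zi:

link MyApp.obj /assemblydebug:disable /out:MyApp.exe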

Conclusion

Visual Studio .NET 2003 has several new features for managed C++ that were missing in the first version, such as support for designers, linked managed resources, and a CodeDOM for C++. These features enhance the RAD facilities of the language and catch it up with the other popular CLR-targeted languages. Updates to how the code is generated mean that native calls into multi-application domain assemblies are handled better, release configuration assemblies can be created with symbols (without affecting assembly performance), and verifiable assemblies can be created. All in all, these improvements help C++ retain its reputation as the best language for writing systems code while pushing it further into the world of GUI development at the same time.

For related articles see:
Managed Extensions Bring .NET CLR Support to C++
Tips and Tricks to Bolster Your Managed C++ Code in Visual Studio .NET

For background information see:
https://msdn.microsoft.com/vstudio

Richard Grimes writes and speaks at conferences about the Microsoft .NET Framework. Richard is the author of Programming with Managed Extensions for Microsoft Visual C++ .NET, 2nd Edition (Microsoft Press, 2003), which has been updated for Visual Studio .NET 2003.