
Understanding Enterprise Services (COM+) in .NET

 

Shannon Pahl
Microsoft Corporation

April 2002

Summary: Provides technical details behind the integration of Microsoft .NET and COM+ services and describes the services available to managed code. (26 printed pages)

Contents

Introduction
Transactions
Deployment
Serviced Components
Object Lifetimes
Security
Remote Components
Conclusion

Introduction

This article requires some familiarity with the Microsoft® .NET Framework and COM+ services. Familiarity with Enterprise Services is not necessary but would be helpful. For a good background on these topics, refer to the .NET Framework and COM+ documentation.

COM provides one way to write component-based applications. It is well known that the plumbing work required to write COM components is significant and repetitive. COM+ is not so much a new version of COM; rather, COM+ provides a services infrastructure for components. Components are built and then installed in COM+ applications in order to build scalable server applications that achieve high throughput with ease of deployment. (If a component does not need to use any services, it should not be placed in a COM+ application.) Scalability and throughput are achieved by designing applications from the outset to make use of services such as transactions, object pooling, and activity semantics.

The .NET Framework provides another way to write component-based applications, and it has several advantages over the COM programming model: better tool support, the common language runtime (CLR), and a much easier coding syntax. The COM+ services infrastructure can be accessed from both managed and unmanaged code. Services in unmanaged code are known as COM+ services; in .NET, these services are referred to as Enterprise Services. Deriving a class from ServicedComponent indicates that services will be required for a component. (If a component does not need to use any services, it should not derive from ServicedComponent.) Tool support has improved to enable programmers to write server-based applications, yet scalability and throughput remain in the realm of good programming practices. The basic idea behind services is to design for throughput and scalability from the outset and to leverage Enterprise Services to easily implement those design patterns where appropriate.

It could be argued that the services infrastructure design actually has little to do with COM or even components: COM+ services can now be applied to COM components, to .NET components, and even to other entities that are not considered components, such as ASP pages or arbitrary code blocks (see the Services without Components COM+ feature on Microsoft Windows® XP).

All the COM+ services that are available today are available to both .NET and COM objects. These services include transactions, object pooling, construction strings, just-in-time (JIT) activation, synchronization, role-based security, Compensating Resource Managers (CRM), and Bring Your Own Transaction (BYOT). For a complete listing of the services on Microsoft Windows 2000, see Services Provided by COM+ in the Platform SDK. Microsoft Windows XP includes a new version of COM+, namely COM+ 1.5, which has additional services that can also be used with .NET components.

Transactions

In order to write managed applications that use services, classes requiring services must derive from ServicedComponent and use various custom attributes to specify the actual services required. This section introduces these concepts and how they affect writing managed code. A more detailed explanation is provided in later sections.

Suppose a class Account has been written (the actual code is listed later) and is located in the BankComponent assembly. This class could be used as follows:

BankComponent Client

using System;
using BankComponentServer;

namespace BankComponentClient
{
    class Client
    {
        public static int Main()
        {
            Account act = new Account();
            act.Post(5, 100);
            act.Dispose();
            return 0;
        }
    }
}

In order to build the client, a reference must be added to the server assembly that contains the Account class. In addition, a reference must be added for the System.EnterpriseServices assembly: the client calls Dispose() and the ServicedComponent constructor, which are defined in System.EnterpriseServices rather than in the server assembly. This is a general .NET requirement whenever a derived class does not override all of its base class's methods.

The BankComponent Server code shows the implementation of the Account class in .NET, which uses transactions. The class Account derives from System.EnterpriseServices.ServicedComponent. The Transaction attribute marks the class as requiring a transaction; the Synchronization and JIT services are configured automatically because the Transaction attribute is used. The AutoComplete attribute specifies that the runtime automatically calls the SetAbort function for the transaction if an unhandled exception is thrown during execution of the method; otherwise, SetComplete is called. The ApplicationName attribute associates this assembly with the COM+ application that stores the service configuration data for this application. The remaining modifications required for this class are shown in the code.

BankComponent Server

using System.EnterpriseServices;

[assembly: ApplicationName("BankComponent")]
[assembly: AssemblyKeyFileAttribute("Demos.snk")]

namespace BankComponentServer
{
    [Transaction(TransactionOption.Required)]
    public class Account : ServicedComponent
    {
        [AutoComplete]
        public bool Post(int accountNum, double amount)
        {
            // Update the database here; there is no need to call SetComplete.
            // SetComplete is called automatically if no exception is thrown.
            return true;
        }
    }
}

The code in the BankComponent Server namespace shows how easy it is to use COM+ services in .NET. A summary of the complete process from coding to deployment is listed below:

  1. Write the server assembly.
  2. Build the assembly:
    1. Sign the assembly. A key file need only be generated once per project, not for each compilation. A key can be created from the Microsoft .NET command prompt using sn.exe:
      sn -k Demos.snk
    2. Compile the code. A reference must be added for System.EnterpriseServices.
  3. Deploy the application.

    An assembly that uses serviced components must be registered with the COM+ catalog. The ServicedComponent class and the custom attributes are the two key concepts for gaining access to COM+ services from managed code. The configuration of the service is stored in the COM+ catalog. The objects reside and execute within the CLR. The managed object and its associated COM+ context is depicted in Figure 1 and will become clearer in the next two sections.

    Figure 1. Services associated with managed components

    With COM+ components the catalog must be configured manually, but with serviced components the catalog can be updated from the attributes in the code. An assembly can be registered explicitly using the command-line tool regsvcs.exe, or by writing scripts or code that access a managed API. More details are provided in the deployment section below. During development, XCopy deployment is provided as a convenience: simply copy the assembly into the application directory. Whenever a client application creates an instance of a class derived from ServicedComponent, the runtime detects whether the assembly has already been registered in a COM+ application. If it has not, the local directory is searched for the assembly; if it is found, all the serviced components in the assembly are registered in a COM+ application and the activation proceeds. This is known as lazy registration, but it does not work in all scenarios. For instance, assemblies marked as COM+ server applications require explicit registration (see below), and lazy registration does not work for unmanaged clients calling managed serviced components. Lazy registration is useful at development time; otherwise use scripts, code, or RegSvcs to register the assembly.

  4. Possibly place the assembly in the GAC. See the deployment section for more details.
  5. Run the client.
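As an alternative to regsvcs.exe, the registration step can be performed programmatically through the managed registration API. The following is a minimal sketch, assuming a server assembly named BankComponentServer.dll in the current directory; it requires Windows with COM+ installed and administrative rights, as discussed in the deployment section.

```csharp
using System;
using System.EnterpriseServices;

// Illustrative sketch: explicit registration through RegistrationHelper,
// the same component used by regsvcs.exe and lazy registration.
class RegisterServer
{
    static void Main()
    {
        RegistrationHelper helper = new RegistrationHelper();
        string application = null;  // resolved from ApplicationName if null
        string tlb = null;          // receives the generated type library path
        helper.InstallAssembly("BankComponentServer.dll",
            ref application, ref tlb,
            InstallationFlags.CreateTargetApplication);
        Console.WriteLine("Registered in COM+ application: " + application);
    }
}
```

When re-registering an assembly into an application that already exists, InstallationFlags.FindOrCreateTargetApplication can be used instead of CreateTargetApplication.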

Deployment

Custom attributes are one of the two key concepts in accessing COM+ services from managed code. Custom attributes specify the services that are required, such as the Transaction custom attribute in the previous code listing, and they store the configuration options for a service in the assembly metadata. At registration time, code loads the assembly, uses reflection to create instances of the attributes, and calls methods on each attribute to extract the service configuration stored in it. The information is then written to the COM+ catalog. The code that performs these and other steps is contained in System.EnterpriseServices.RegistrationHelper. To make the registration process easier, all forms of registration use this component, which is accessible both as a managed class and as a COM object.

Figure 2. Registering serviced components

Conceptually, RegistrationHelper performs the following steps:

  • Uses RegistrationServices.RegisterAssembly to register the assembly in the registry. Classes therefore appear in the registry as COM components written in managed code, with the InprocServer32 key pointing to mscoree.dll. If a managed class does not implement any interfaces, the class's public methods do not appear in the COM+ catalog unless the ClassInterfaceAttribute is used. This means that service configuration at the method level cannot be stored in the catalog. However, some COM+ services can be configured at the method level and require the component to expose an interface as viewed in the COM+ catalog. For example, configuring COM+ role-based security at the method level requires the component to implement an interface. This issue is discussed further in the security section.
  • Generates a COM type library from the assembly using TypeLibConverter.ConvertAssemblyToTypeLib.
  • Registers the type library. So far, this is very much the same as RegAsm.exe /tlb.
  • Finds or creates a COM+ application. The name is extracted from the ApplicationName attribute, the assembly name or the supplied application name/GUID.
  • Uses the type library to configure the COM+ application using the COM+ admin APIs.
  • Goes through all the custom attributes and uses IConfigurationAttribute to write configuration data for the particular service to the COM+ catalog.

RegistrationHelper attempts to perform these steps within a transaction using RegistrationHelperTx, a class in a COM+ application that is created when .NET is installed. Therefore, if registration fails, the COM+ catalog and registry are restored to their original state. However, the generated type libraries currently remain on disk (or in the GAC, if the assembly was in the GAC). If the assembly being registered references other assemblies that also use COM+ services, all assemblies in the dependency graph undergo the same steps as listed above.

Since RegistrationHelper accesses the COM+ catalog, it requires unmanaged code permissions and administrative rights on the machine. The same is therefore true for any client of RegistrationHelper, namely lazy registration, RegSvcs, or your own scripts and code. This also implies that code downloaded from the Internet or stored on a network share cannot be registered.

It is possible to code incompatible attribute combinations, such as requiring a transaction while setting synchronization to disabled. These combinations are currently detected at registration time, when the attributes are written to the COM+ catalog, not at compile time. Some attributes have implicit dependencies on other attributes; for instance, using the Transaction attribute alone is equivalent to using the Transaction, JustInTimeActivation, and Synchronization attributes together. When a managed component is registered, the COM+ catalog default values are used unless attributes overwrite these 'unconfigured' default values. For instance, if a component is registered without a Transaction attribute, the unconfigured default value for the transaction setting in the catalog is TransactionOption.Disabled. This approach allows a developer to remove an attribute from the code when the component no longer requires it; when the assembly is registered again, the catalog entry for transactions is appropriately reset. A detailed list of these unconfigured default values is provided in the online documentation. Default configured values are the default parameter values of an attribute; for instance, using just [Transaction] indicates TransactionOption.Required.
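To make the implicit dependency between attributes visible, the following sketch spells out the three services that using [Transaction] alone configures implicitly. The class name is illustrative, not from the original article.

```csharp
using System.EnterpriseServices;

// [Transaction] by itself implies JIT activation and synchronization;
// this class states all three explicitly. TransactionOption.Required
// is also the default parameter value of the Transaction attribute.
[Transaction(TransactionOption.Required)]
[JustInTimeActivation]
[Synchronization(SynchronizationOption.Required)]
public class ExplicitAccount : ServicedComponent
{
}
```

Registering either form writes the same transaction, JIT, and synchronization settings to the COM+ catalog.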

Since the configuration data for services on managed classes are stored in the COM+ catalog, certain catalog entries may also be modified administratively after registering an assembly. Some services should not be modified in this manner. For instance, disabling the transaction service in the catalog can cause the code to operate incorrectly. Deployment-specific settings like object construction strings and security roles can be manipulated post registration. XCopy deployment of assemblies containing serviced components may not be sufficient when post registration settings are made. The COM+ application import and export feature helps to distribute the current state of the application. Further information about import and export is provided in the remoting section.

In some cases the catalog is not consulted for configuration data; instead, the data is extracted purely from the assembly metadata. These cases are AutoComplete, JIT activation, object pooling (although the pool size is extracted from the catalog), and the SecureMethod attribute. More details are discussed in the sections describing those services.

The process of registering an assembly automatically generates the GUIDs that COM+ requires. If the assembly is not signed, the GUIDs are generated based only on type and namespace names, so non-unique GUIDs can result. A similar fate befalls .NET assemblies that do not use COM+ services but require unique type names. Therefore, assemblies that use COM+ services must be signed; registration fails if they are not. Registration also implies that a .NET class that uses COM+ services has one global configuration data store. Although it is possible to copy private assemblies to multiple application directories, all such applications ultimately refer to a single set of configuration data for a serviced component. Therefore, changing the configuration data in the COM+ catalog affects all applications that use the class. This is evident in a Microsoft ASP.NET configuration with multiple vroots that each contain a copy of the same assembly using serviced components. One way to have multiple configurations for the same COM+ application is to use COM+ partitions on Microsoft Windows .NET. To use the COM+ partitions service in .NET, do not use the ApplicationID attribute, because COM+ requires unique application IDs in order to install the same component in multiple partitions.

In general, the GAC is used whenever a client needs to access assemblies that are not in the client application directory, or when the assembly is loaded into another process that is not located in the client's directory. Conceptually, private assemblies that use serviced components are actually shared assemblies: their configuration data is shared. If ApplicationActivationOption is set to Library, it is possible to use transactions on a class in an assembly and use that assembly from one client, provided all assemblies are loaded from the same directory. When ApplicationActivationOption is set to Server, the assembly is loaded by dllhost.exe, which most likely is not in the same directory as the client. Assemblies that use serviced components in COM+ server applications should therefore be placed in the GAC. Assemblies that use serviced components in COM+ library applications may not need to be placed in the GAC (unless they are located in different directories). The one exception is when hosting with ASP.NET: those assemblies should not be placed in the GAC, so that shadow copy can operate correctly.

To remove a .NET application that uses serviced components, remove the assembly from the GAC (if it was registered there), de-register the assembly from COM+ using regsvcs.exe, and then delete the assembly and the associated type libraries.
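The de-registration step can also be performed programmatically, mirroring the InstallAssembly call shown earlier. A sketch, assuming the assembly and COM+ application names from the earlier listings:

```csharp
using System.EnterpriseServices;

// Illustrative sketch: programmatic removal via RegistrationHelper.
class UnregisterServer
{
    static void Main()
    {
        RegistrationHelper helper = new RegistrationHelper();
        helper.UninstallAssembly("BankComponentServer.dll", "BankComponent");
        // The assembly itself, its type library, and any GAC entry must
        // still be removed separately, as described above.
    }
}
```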

Versioning

It is possible to fix the GUIDs that COM+ requires by using a Guid attribute. However, it is advisable to use versioning instead of explicit GUIDs. Whenever a new method signature is created, or when classes are decorated with different service attributes, the major or minor version number of the assembly should be incremented. Registration should be performed once for a particular version. When a new version of the assembly is registered, new GUIDs are generated for that version and the components are registered in the same COM+ application under the same component names. The components therefore appear multiple times in the COM+ application, but each has unique GUIDs, and each instance refers to a particular version of the component. This is often noticed when building .NET applications with Microsoft Visual Studio® .NET: the environment adds the attribute [assembly: AssemblyVersion("1.0.*")] to a project, so each new build generates a new build number and hence new GUIDs when the assembly is re-registered. It may therefore be preferable to increment build numbers manually when appropriate. Clients bind to an assembly using CLR version policy, so the correct version of the class in the COM+ application is used. Some side-by-side scenarios for assemblies (managed servers) that use serviced components follow; some aspects of activation used below are described in the next section.

  • Managed client, managed server, no fixed GUIDs used in the assembly.
    • The client will load the assembly specified by version policy.
  • Managed client, managed server, fixed GUIDs used.
    • If the client activates a class and uses version policy to get to an old version of the assembly, the fixed GUID in the code will be used during activation to extract the service information from the catalog. Information from the last registered assembly using this GUID will therefore be used to create the object, which may in fact be the newer version; hence there would be a type cast exception when attempting to cast from the object actually created (v2) to the reference in the code (v1).
  • Managed client, managed server, no fixed GUIDs, change only the build number.
    • Although new GUIDs will be generated, the type library will still have the same version number, since type library versions have only two parts. This may still work, but if version 2 is installed over version 1 and version 1 is then uninstalled, the type library for version 2 will be unregistered. Solution 1: The next release of the .NET Framework (v1.1) addresses this issue by enabling the type library to be versioned independently of the assembly; when changing the assembly version number, the type library version should then be changed as well. Solution 2: Only make use of major and minor version numbers.
  • Unmanaged client, managed server, no fixed GUIDs used.
    • The client will use a GUID to create the component. Interop will resolve the GUID to a name then version policy gets applied. If version 1 and version 2 of an assembly are on a machine and policy is used to get to version 2, the unmanaged client will get version 2.
    • Install version 1, install version 2, uninstall version 1. Now the client cannot create the component unless there is version policy to redirect to version 2. In addition, registry entries must exist for version 1 registration information. One way to create registry information for an uninstalled version 1 is to use the COM+ aliasing feature on Windows XP.

Versioning applies to all the components in the same COM+ application, that is, there is no automatic way to version the application itself. For instance, the roles on the application cannot be versioned using version policy. Use the application name attribute to do application versioning.
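Following the advice above about incrementing build numbers manually, a project can pin the full four-part version instead of accepting the auto-incrementing default that Visual Studio .NET adds:

```csharp
using System.Reflection;

// Pinning all four parts prevents the auto-incrementing "1.0.*" form
// from producing a new build number, and hence new COM+ GUIDs, at every
// compile. Increment the version by hand when method signatures or
// service attributes change, then re-register the assembly once.
[assembly: AssemblyVersion("1.0.0.0")]
```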

Serviced Components

Activation

The Enterprise Services infrastructure is founded on the concept of a context. A context is an environment for objects with similar execution requirements. Services can be enforced during activation and/or during method call interception. Although COM+ services are written in unmanaged code, the integration of COM+ services with .NET is much deeper than simply using the COM interop technology in .NET. Without deriving from ServicedComponent, the registration process would not have the desired effect.

Serviced components can be activated and hosted in a variety of combinations. As depicted in Figure 3, this discussion refers to three cases: in-process (same app domain), cross app-domain (same process), and cross-process activations. The importance of these cases lies in the boundaries that are crossed when making calls on components. In-process activation gives rise to a potential cross-context boundary; the cross app-domain case has both cross-context and cross app-domain boundaries; and the cross-process case deals with cross-machine/cross-process and cross-context boundaries.

Figure 3. Activation hosts for serviced components

The implementation of serviced components relies on .NET remoting, which provides an extensible mechanism to plug in services written in unmanaged or managed code. Serviced components derive from ContextBoundObject and implement various interfaces, such as IDisposable. The activation chain in the CLR is easily customized using ProxyAttribute-derived custom attributes, and interception can be customized by writing custom real proxies. When a new instance of a ServicedComponent-derived class is required, the activation chain is customized so that the activation call actually calls a managed C++ wrapper for CoCreateInstance. This allows COM+ to set up the unmanaged contexts and services based on the information stored in the COM+ catalog from a previously registered assembly. This is also the stage where lazy registration is implemented. During registration of the assembly, the InprocServer32 key is pointed at mscoree.dll, thus redirecting the COM+ CreateInstance ultimately back to the runtime to create the real managed object. Therefore, during activation, a custom real proxy object is created. The in-process version of this proxy is known as the serviced component proxy, or SCP. This is depicted in Figure 4.

Figure 4. Activation path

The return path from the activation call marshals the managed references from managed code, through unmanaged COM+, and back into managed code (the reverse of the path labeled 1 in Figure 4). Depending on where the real object was created, the client side unmarshals the reference into the relevant form. In the case of in-process activation, as Figure 5 indicates, the reference is unmarshaled as a direct reference to the transparent proxy (TP). Cross app-domain references are unmarshaled as a .NET remoting proxy. Cross-process or cross-machine references (Figure 6) require more work to unmarshal: COM interop makes calls on IManagedObject, implemented by ServicedComponent, during activation and unmarshaling, and the remote serviced component proxy (RSCP) makes calls on IServicedComponentInfo during activation to obtain the URI of the server object, which implies that two remote calls are made during activation. When COM+ role-based security is required at the method level, these interfaces need to be associated with a role so that the unmarshaling succeeds when the infrastructure calls them. The security section discusses the implications that cross-process activation and marshaling have for configuring role-based security.

Figure 5. Infrastructure for in process calls

Figure 6. Infrastructure for out of process calls

The activation chain has therefore been customized to create a custom real proxy (used for interception) and to create the unmanaged contexts, leaving COM+ with only the context infrastructure that is needed to perform the semantics of the interception services. The COM+ context is now associated with a managed object, not a COM object.

Interception

Figure 7 depicts the in-process method call infrastructure. The custom proxy (SCP) enables managed calls to be intercepted. During activation, the COM+ context id is stored in the SCP. When one managed object calls a serviced component, the context id stored in the target SCP is compared against the context id of the current context. If the context ids are the same, the call is executed directly on the real object. When the context ids differ, the SCP makes a call to COM+ to switch contexts and render the service of entering the method call. For in-process calls, this is similar to AppDomain.DoCallBack, but with COM+ playing the role of the app domain. The 'DoCallBack' function first enters COM+ (step 2 in Figure 7), which switches the context and renders the service; the callback function then calls into the SCP, which performs the data marshaling and calls the method on the real object. When the method exits, the return path allows COM+ to render the semantics of leaving a method call (step 5 in Figure 7). COM+ is only used to render the service: data marshaling and the method call are performed within the .NET runtime, so converting types like String to BSTR is not required when calling methods. (Such data marshaling would be required if COM interop were used for in-process calls.) The call into unmanaged code to render the service is therefore not a COM interop call for in-process calls.

Figure 7. Infrastructure for in process calls

Calls on static methods are not forwarded to the transparent and real proxies. Therefore, static methods cannot make use of interception services; instead, they are called within the client's context. Internal methods are called within the correct context, meaning that a client calling an internal method on an object configured for a new transaction will take part in a new transaction. However, since method-level services require an interface in the COM+ catalog (more on this topic next, and in the security section), internal methods cannot be configured for method-level services. Services can apply to properties, but method-level attributes (like AutoComplete) must be placed on the getter and setter methods individually.

The AutoComplete attribute is a convenient way to use transactions without writing any code to access the service. Alternatively, ContextUtil.SetAbort or ContextUtil.SetComplete can be used. This service is configurable in the COM+ explorer by setting a checkbox on the properties of the method. However, managed objects do not need to implement interfaces, and this is also true for serviced components. When a method is not declared on an interface, the configuration for method-level services cannot be written to the catalog at registration; it can only be stored in metadata. When no interface exists for the method, context switching is done from the SCP using IRemoteDispatch.RemoteDispatchAutoDone when the AutoComplete attribute is present, and IRemoteDispatch.RemoteDispatchNotAutoDone when it is not. IRemoteDispatch is an interface implemented by ServicedComponent. Unmanaged clients can only call serviced components that do not expose interfaces using IDispatch (late bound), and the AutoComplete semantics therefore cannot be enforced in that case, due to the absence of a real proxy. Even when interfaces are used, the configuration of AutoComplete is still driven by the metadata for managed clients. The DCOM method call is made on RemoteDispatchAutoDone only in the out-of-process case: out-of-process components do not use the DoCallBack mechanism; instead, DCOM is used to deliver the call and render the service. If the method is on an interface, the interface method on the remote serviced component is called using DCOM; otherwise the call is dispatched to the IRemoteDispatch interface on ServicedComponent. This means that even calls like Dispose() go through DCOM, the implications of which are discussed later.
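As a sketch of the alternative to AutoComplete mentioned above, the transaction outcome can be voted on explicitly with ContextUtil. The class and method names here are illustrative, not from the original article.

```csharp
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class ManualAccount : ServicedComponent
{
    // Hand-written equivalent of [AutoComplete]: vote to commit on
    // success, vote to abort on failure, and rethrow the exception.
    public void Post(int accountNum, double amount)
    {
        try
        {
            // ... update the database ...
            ContextUtil.SetComplete();
        }
        catch
        {
            ContextUtil.SetAbort();
            throw;
        }
    }
}
```

Explicit voting is useful when a method needs to abort the transaction without throwing, or to perform cleanup before the vote is cast.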

Contexts

The ContextUtil class is used to access the associated COM+ object context and its properties, providing functionality similar to the object returned by CoGetObjectContext in unmanaged code. The managed object context associated with a serviced component serves a different purpose than the associated unmanaged object context. This can be demonstrated by writing three managed objects: one with Transactions required (acting as the root), and two that do not derive from ServicedComponent (acting as child objects exemplifying context-agile managed objects). The non-serviced components behave as if they were serviced components with Transactions supported; that is, they can make calls to resource managers and use ContextUtil.SetAbort if needed. When the root object is created, the associated unmanaged context is created and associated with the current thread. When a call to the child objects is made, no COM+ context change is needed since they are not associated with an unmanaged context, so the thread still carries the root's unmanaged context id. When a child object calls resource managers, the resource managers in turn extract the unmanaged context from the thread executing the child object, which is the root object's unmanaged context. Relying on this is dangerous: in future versions the unmanaged context may merge with the managed context, so the child objects would be associated with a potentially different managed context and the resource managers would not pick up the root object's context. Upgrading to a new version of .NET could therefore break code that depends on this type of behavior.
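The three-object experiment described above can be sketched as follows (class names are illustrative; as noted, depending on this behavior is dangerous):

```csharp
using System;
using System.EnterpriseServices;

namespace Demos
{
    // A plain managed class: context-agile, with no unmanaged COM+
    // context of its own.
    public class ChildWorker
    {
        public void Work()
        {
            // Runs on the caller's thread, which still carries the root's
            // unmanaged COM+ context, so resource managers enlisted here
            // join the root's transaction. ContextUtil.SetAbort() could
            // also be called here to vote against the outcome.
        }
    }

    [Transaction(TransactionOption.Required)]
    public class RootWorker : ServicedComponent
    {
        [AutoComplete]
        public void Run()
        {
            new ChildWorker().Work();   // no COM+ context switch occurs
        }
    }
}
```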

Performance Results

In this section, the performance of the managed client / managed server serviced component solution is compared with the unmanaged client/server solution. The in-process case is described in the following table. A ServicedComponent configured for transactions was written in C# with a single method that simply adds numbers, and a corresponding C++ implementation was used for comparison. This comparison shows the difference between the managed and unmanaged solutions without doing any real work. In-process activations are about 3.5 times slower in the managed solution, and method calls are about 2 times more expensive when there is a context switch. However, when comparing serviced component method calls that require context switching with those that don't, there is about a three orders of magnitude difference, indicating the success of the in-process serviced component interception infrastructure. For out-of-process solutions, activations are about 2 times more expensive and cross-context method calls about 3 times more expensive.

Table 1 shows scaled times for in-process activation and method calls using managed vs. unmanaged solutions.

Table 1. In-process activation and method calls

                                         Managed solution   Unmanaged solution
Activation                               35                 10
Cross-context do-nothing method call     2                  1
Cross-context do-work method call        200                100

Activations are about one order of magnitude more expensive than method calls to the 'do-nothing' method. Adding enough work to simply obtain a DTC transaction (but do nothing with it) brings the activation and method call times to the same order of magnitude. When the method call simply opens a pooled database connection, the method call work is then an order of magnitude greater than the activation and 'do-nothing' method call combined, showing that the overhead of the serviced component infrastructure becomes negligible once real work is added to the experiment.

Object Lifetimes

Just-In-Time Activation

The just-in-time (JIT) service is generally not used in isolation. It is used implicitly with the transaction service and most often with object pooling. However, this example helps to highlight some interesting topics. In the code below, a .NET class is written that uses only the JIT service.

using System;
using System.Reflection;
using System.EnterpriseServices;
[assembly: AssemblyKeyFile("Demos.snk")]
[assembly: ApplicationName("JITDemo")]

namespace Demos
{
    [JustInTimeActivation]
    public class TestJIT : ServicedComponent
    {
        public TestJIT()
        {   // First to get called
        }
        [AutoComplete]
        public void DoWork()
        {   // Show doneness using...
            // 1. The AutoComplete attribute, or
            // 2. ContextUtil.DeactivateOnReturn = true, or
            // 3. ContextUtil.SetComplete();
        }
        protected override void Dispose(bool disposing)
        {   // Optionally override this method and do your own custom
            // Dispose logic. If disposing==true, Dispose() was called
            // from the client; if false, the GC is cleaning up the object.
            base.Dispose(disposing);
        }
    }
}

The class derives from ServicedComponent and uses the JIT attribute to indicate the specific service required. In unmanaged code, overriding the Activate and Deactivate methods requires the class to implement the IObjectControl interface. The ServicedComponent class instead exposes virtual methods that may be overridden to handle the Activate and Deactivate events. However, neither ServicedComponent nor its real proxy, the SCP, implements IObjectControl. Instead, the SCP creates a proxy tearoff when the IObjectControl interface is requested by COM+; the calls by COM+ on the tearoff are then forwarded to the ServicedComponent's virtual methods. The DeactivateOnReturn bit is set using the AutoComplete attribute on the method, by calling ContextUtil.SetComplete() or ContextUtil.SetAbort(), or by setting ContextUtil.DeactivateOnReturn. Assuming the DeactivateOnReturn bit is set during each method call, the sequence of calls would be: the class's constructor, Activate, the actual method call, Deactivate, Dispose(true) and eventually the class's finalizer if one exists. The same sequence is repeated when another method call is made. A good design practice is to override the Activate and Deactivate methods only to learn when the object is being taken out of and put back into the object pool; the remaining activation and deactivation logic should be placed in the class's constructor and Dispose(bool) methods. The DeactivateOnReturn bit can be set using one of the following approaches:

  1. The client uses the object's state only for a single method call. On entry to the method, a new real object is created and attached to the SCP. On exiting the method, the real object is deactivated, first by a call to Dispose(true) followed by the real object's finalizer if one exists. However, the associated COM+ context, SCP and TP remain alive, and the client code still holds a reference to what it believes is an actual object (the transparent proxy). The next method call made by the client on the same reference results in a new real object being created and attached to the SCP in order to service the call (see the section on object pooling to remove the cost of creating a new object). To deactivate the real object, the real object needs to indicate doneness when a method call exits. This can be accomplished by using:
    1. the AutoComplete attribute on a method of the class
    2. either the DeactivateOnReturn property or the SetComplete method on the ContextUtil class
  2. The client makes multiple method calls on the same object without deactivating the object after each method call by setting the doneness bit to false before exiting the method. For example, scoping a serviced component that uses JIT on the form level and having two form buttons call methods on the same object instance by having the methods explicitly set the doneness bit to false. At some point, the doneness bit should be set to true. This approach implies a contract exists between the client and object. This can be done implicitly or explicitly by the client:
    1. The client knows to call a certain method on the object when it is done in order to deactivate the object. The method implementation uses the ideas in option 1. The object reference can be called again using the same calling sequence, implying a new real object will be created.
    2. The object is explicitly destroyed by the client when it calls the Dispose() method on the object. Dispose() is a method defined on ServicedComponent; it in turn calls Dispose(true) and the class's finalizer (if one exists) and then tears down the associated COM+ context. In this case, no further method calls can be made on the object reference; an exception will be thrown if this is attempted. If many clients are using the same object, Dispose() should be called only when the last client is done with the object. However, the stateless nature of JIT objects leads design practices toward a single instance per client model.
    3. The object never sets its doneness bit to true and the client never calls Dispose(). The real object, proxies and context get destroyed when garbage collection takes place. The method call order initiated by the GC would be Deactivate, Dispose(false) and then the class's finalizer (if one exists).
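A minimal client-side sketch of these patterns, using the TestJIT class above (single-call semantics via AutoComplete, then explicit disposal):

```csharp
using System;
using Demos;

class Client
{
    static void Main()
    {
        // The reference actually points at the transparent proxy; because
        // DoWork is marked [AutoComplete], a new real object is created
        // and attached to the SCP for each call.
        TestJIT obj = new TestJIT();
        obj.DoWork();   // create real object, call, deactivate
        obj.DoWork();   // a fresh real object services this call too

        // Explicitly tear down the proxies and the COM+ context;
        // further calls on obj would now throw.
        obj.Dispose();
    }
}
```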

All serviced components have an associated COM+ context, which is stored as a reference in the SCP (or RSCP in the remote case). The reference is released only when a GC takes place or when the client calls Dispose(). It is better not to rely on the GC to clean up the context: the COM+ context holds onto an OS handle and some memory, possibly delaying the release of these resources until a GC occurs. Also, although ServicedComponent does not have a finalizer, the SCP does implement one, meaning that the COM+ context reference will never get collected on a first collection. In fact, when the finalizer on the SCP eventually gets called, the context is still not destroyed by the finalizer thread; instead, the work of destroying contexts is moved off the finalizer thread and placed on an internal queue. This was done because the finalizer thread can be consumed by this work in certain stress environments where serviced components are rapidly created, used and dropped. An internal thread services the queue, destroying old contexts, and any application thread creating a new ServicedComponent will first attempt to take an item off the queue and destroy an old context. Therefore, calling Dispose() from the client tears down the COM+ context sooner, using the client thread, and releases the handle and memory resources that the context consumes. Sometimes Dispose() can throw exceptions. One case is if the object lives in a non-root transaction context that has aborted; the Dispose() call may observe a CONTEXT_E_ABORTED exception. Another case is explained in object pooling.

From a performance viewpoint, it is better not to implement a finalizer in a ServicedComponent derived class and instead place this logic in the Dispose(bool) method. Although the SCP does implement a finalizer, the real object's finalizer is called using reflection.

A good design practice for using JIT is to:

  • Place custom activation and finalization code in the constructor and Dispose(bool) methods, avoid implementing a finalizer, and use a single-call pattern by indicating doneness with the AutoComplete attribute on the method.
  • Call Dispose() from the client when the client is done with the object.

The discussion has assumed the client is managed and the component is in-process. When the component is out-of-process (more details are outlined in the remoting section):

  • The GC will only clean up the objects when their .NET remoting lease time has expired for client-activated objects.
  • As noted earlier, when calling methods on out-of-process components, DCOM is used to switch the context and deliver the method call. If the component has been deactivated by JIT earlier and then a call to Dispose() is made, the server context will be entered and the real object will be re-created to service the DCOM call and finally deactivated again. For in-process components, if the real object has been deactivated, no attempt is made to switch to the correct context before servicing the Dispose() call (which would re-activate the component), instead, only the context is destroyed.

Object Pooling

The basic premise of object pooling is object reuse. Object pooling is most often used with JIT. This is true for both pooled COM components and pooled .NET components.

using System;
using System.Reflection;
using System.EnterpriseServices;
[assembly: AssemblyKeyFile("Demos.snk")]
[assembly: ApplicationName("OPDemo")]

namespace Demos
{
    [ObjectPooling(MinPoolSize=2, MaxPoolSize=50, CreationTimeout=20)]
    [JustInTimeActivation]
    public class DbAccount : ServicedComponent
    {
        [AutoComplete]
        public bool Perform()
        {   // Do something
            return true;
        }
        public override void Activate()
        {   // .. handle the Activate message
        }
        public override void Deactivate()
        {   // .. handle the Deactivate message
        }
        public override bool CanBePooled()
        {   // .. handle the CanBePooled message
            // The base implementation returns false
            return true;
        }
    }
}

As is the case when using JIT, object pooling can be used in one of two ways:

  1. Single call pattern. The object is retrieved from the pool when the client makes a method call and is returned to the pool on exit from the single method call, assuming JIT is used with object pooling and the doneness bit is set to true during the call. The same single-call approaches to using JIT apply here too. The constructor is called only once, when the object is created and placed in the pool. The method call order when using JIT and pooled objects is: Activate, the method call, Deactivate, then CanBePooled. If CanBePooled returns true, the object is put back into the pool (although the context remains alive as discussed earlier). The same method call order is repeated for subsequent method calls (without calling the constructor again) after an arbitrary object is extracted from the pool (serviced components cannot make use of parameterized constructors). Finally, if the client calls Dispose() on the pooled object, only the context is destroyed in the in-process case. In the out-of-process case, as noted earlier, the call to Dispose() can re-activate the object; if the object is pooled, one must be obtained from the pool, meaning that Dispose() can throw a CO_E_ACTIVATION_TIMEOUT exception.
  2. Multi-call pattern. Using the multiple method call approaches highlighted in the JIT service, the object can be placed back into the pool only after a number of method calls on the object. However, if the client does not call Dispose() and JIT is not used, then there is no way to ensure that the pooled object's child objects that require finalization can be resurrected when the object gets put back into the pool by the GC. When the pooled object is garbage collected, there would also be no guarantee within Deactivate of which members are still valid. In the next release of the .NET Framework (V1.1), CanBePooled and Deactivate are not called in this case and the object is not put back into the pool. With this approach there is a more consistent model: in Deactivate, child objects are alive; in Dispose(), child objects are not guaranteed to be alive. Therefore, it is critical that Dispose() is called for pooled objects that do not make use of JIT, otherwise the object will not be returned to the pool.
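A client-side sketch using the DbAccount class above; DisposeObject is the static helper defined on ServicedComponent for disposing serviced components:

```csharp
using System;
using System.EnterpriseServices;
using Demos;

class PoolClient
{
    static void Main()
    {
        // Drawn from the pool when the first call is made; the sequence
        // per call is Activate, Perform, Deactivate, CanBePooled, and
        // the object returns to the pool when CanBePooled returns true.
        DbAccount acct = new DbAccount();
        acct.Perform();

        // For pooled objects that do not use JIT, explicit disposal is
        // what returns the object to the pool; relying on the GC means
        // the object may never be re-pooled.
        ServicedComponent.DisposeObject(acct);
    }
}
```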

An administrator may modify the pool size and timeouts after the assembly has been deployed and registered; the changes take effect when the process is restarted. On Windows XP or later, the pool size applies to each application domain within the process. On Windows 2000, the pool size is process wide, with pooled objects residing in the default application domain, which means that if a pooled object is required from another application domain within the same process, the client effectively communicates across application domains to reach the pooled object. One realization of this is using pooled .NET objects defined in a COM+ library application from within an ASP.NET application where each IIS vroot is housed in a separate application domain.

Serviced components cannot make use of parameterized constructors.

Security

Code Access Security (CAS)

.NET Framework security allows code to access resources only if it has permission to do so. To express this, the .NET Framework uses the concept of permissions, which represent the right for code to access protected resources. Code requests the permissions it needs; the .NET Framework provides code access permission classes, and custom permission classes can also be written. These permissions indicate to the .NET Framework what the code needs to be allowed to do and what the code's callers must be authorized to do.

Code access security in .NET is most useful in applications where code is downloaded from the web and the author may not be fully trusted. Typically, applications that use serviced components are fully trusted, require security to flow between multiple processes and enable the configuration of roles at deployment time. These are features exposed by COM+ role-based security.

Any code path through System.EnterpriseServices requires unmanaged code permissions. This implies the following:

  • Unmanaged code permission is required to activate and perform cross context calls on serviced components.
  • If a reference to a serviced component is passed to untrusted code, methods defined on ServicedComponent cannot be called from the untrusted code. However, custom methods defined on a class derived from ServicedComponent may be called from untrusted code in some circumstances: untrusted code can call those custom methods that do not require context switching or interception services, provided the implementation of the method does not call into members of System.EnterpriseServices.

In addition, in .NET version 1, the security stack is not copied when a thread switch is made, so custom security permissions should not be used within serviced components.

Role-Based Security (RBS)

System.EnterpriseServices provides security services to .NET objects that mirror the functionality of the COM+ security mechanisms. When a COM+ server application is used to host the components, the RBS features require that the DCOM transport protocol be used to activate the components from a remote client (more details about remoting are provided in the next section). The security call context and identity in COM+ are therefore available to managed code. In addition, the familiar calls CoImpersonateClient, CoInitializeSecurity and CoRevertToSelf are generally used on the server side, while CoSetProxyBlanket is generally used on the client side.
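The security call context can be consumed programmatically from a serviced component via the SecurityCallContext class. A sketch (the Payroll class and Manager role are illustrative; component-level access checks must be enabled for the call context to be available):

```csharp
using System;
using System.EnterpriseServices;

namespace Demos
{
    [ComponentAccessControl]
    public class Payroll : ServicedComponent
    {
        public void ApproveRaise()
        {
            // Programmatic RBS relies on the availability of the
            // security call context for the current call.
            SecurityCallContext ctx = SecurityCallContext.CurrentCall;
            if (!ctx.IsSecurityEnabled || !ctx.IsCallerInRole("Manager"))
                throw new UnauthorizedAccessException(
                    "Caller is not in the Manager role.");
            // ... perform the privileged operation ...
        }
    }
}
```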

Certain security settings are not stored in metadata using attributes, for instance, adding users to roles and setting the process security identity. However, assembly level attributes can be used to configure what appears in the security tab of the COM+ explorer for a COM+ server application:

  • Enabling authorization for the application (ApplicationAccessControlAttribute(bool)). This is required to be true to support RBS.
  • The security level (ApplicationAccessControlAttribute(AccessChecksLevelOption)). If set to AccessChecksLevelOption.Application, users assigned to roles in the application are added to the process security descriptor and fine-grained role checking at the component, method, and interface levels is turned off. Security checks are therefore performed only at the application level and library applications rely on the host process for process-level security. If the attribute is set to AccessChecksLevelOption.ApplicationComponent, then users assigned to roles in the application are added to the process security descriptor and role-based security checks are performed on the application. In addition, access checks must also be enabled for each component requiring RBS by applying the ComponentAccessControl attribute on the class. In a library application, role-based security checks are performed as if it was a server application. The security property is included on the context for all objects within the application and the security call context is available. If an object has a configuration incompatible with the context of its creator, it is activated in its own context. Programmatic role-based security relies on the availability of the security call context.

    For any meaningful access checking to work for COM+ library applications, choose to perform access checks at the process and component level.

  • The impersonation and authentication selections correspond to the ImpersonationLevel and Authentication properties of the ApplicationAccessControl attribute.

    The SecurityRole attribute can be applied to the assembly, class or method level. When applied at the assembly level, users in that role can activate any component in the application. When applied to the class level, users in that role can, in addition, call any method on the component. Application and class level roles can be configured in metadata, or administratively by accessing the COM+ catalog.

    Configuring RBS at the assembly level using metadata:

    [assembly: ApplicationAccessControl(true,
        AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent)]
    // adds NT AUTHORITY\Everyone to this role
    [assembly: SecurityRole("TestRole1", true)]
    // add users to roles administratively
    [assembly: SecurityRole("TestRole2")]
    
    

    Configuring RBS at the class level in metadata:

    [assembly: ApplicationAccessControl(true,
        AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent)]
    …
    [ComponentAccessControl]
    [SecurityRole("TestRole2")]
    public class Foo : ServicedComponent
    {
        public void Method1() {}
    }
    
    

    RBS at the assembly or class level can be configured administratively because those entities exist in the COM+ catalog after the assembly has been registered. However, as discussed earlier, class methods do not appear in the COM+ catalog. To configure RBS on methods, the class must implement methods of an interface and must use the SecureMethod attribute on the class level, or SecureMethod or SecurityRole at the method level. In addition, the attributes must appear on the class method implementation, not the interface method in the definition of the interface.

  • The easiest way of using RBS on methods is to apply the SecureMethod attribute on the class level and then configure roles (either administratively or by placing the SecurityRole attribute on methods).
    [assembly: ApplicationAccessControl(true,
        AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent)]
    public interface IFoo
    {
        void Method1();
        void Method2();
    }
    [ComponentAccessControl]
    [SecureMethod]
    public class Foo : ServicedComponent, IFoo
    {
        // Add roles to this method administratively
        public void Method1() {}
        // "RoleX" is added to the catalog for this method
        [SecurityRole("RoleX")]
        public void Method2() {}
    }
    
    

    Using SecureMethod on the class level allows all methods on all interfaces in the class to be configured with roles in the COM+ catalog administratively. If the class implements two interfaces, each with a method of the same name, and roles are configured administratively, then the roles must be configured on both methods as they appear in the COM+ catalog (unless the class explicitly implements the specific method, for instance, IFoo.Method1). However, if the SecurityRole attribute is used on the class method, then all methods with the same method name are automatically configured with that role when the assembly is registered.

  • The SecureMethod attribute can also be placed on the method level.
    [assembly: ApplicationAccessControl(true,
        AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent)]
    public interface IFoo
    {
        void Method1();
        void Method2();
    }
    [ComponentAccessControl]
    public class Foo : ServicedComponent, IFoo
    {
        // Add roles to this method administratively
        [SecureMethod]  // Or use SecurityRole (which implies SecureMethod)
        public void Method1() {}
        public void Method2() {}
    }
    
    

    In the example, IFoo and both methods appear in the COM+ catalog and therefore roles can be configured on either method administratively, however, method level RBS is only enforced on Method1. Use SecureMethod or SecurityRole on all methods that will be required to participate in method level RBS security or place SecureMethod on the class level as outlined previously.

Whenever RBS is configured on the method level, the Marshaller role is required. When method calls are made and RBS is not configured on methods, the serviced component infrastructure makes calls on IRemoteDispatch. When RBS is configured on methods (when the SecureMethod attribute is present), the method call is made over DCOM using the interface associated with the method; DCOM therefore guarantees that RBS is enforced at the method level. However, as discussed in the Activation and Interception sections, COM interop and the RSCP will then make calls on IManagedObject (in order to let remote activators marshal the reference into their space) and IServicedComponentInfo (to query the remote object). These interfaces are associated with serviced components. Since the component is configured to do method-level checks, a role must be associated with these interfaces if the infrastructure is to make these calls successfully.

Therefore, a Marshaller role is added to the application when the assembly is registered and users must then be added administratively to this role. Most often all users of the application are added to this role. This is somewhat different from unmanaged COM+ where configuring RBS on methods does not require this additional configuration step. Automatically adding 'Everyone' to this role during registration is a potential security hole since anyone would now be able to activate (but not call) components where before they might not have had the rights to activate them. The Marshaller role is also added to the IDisposable interface to allow clients to dispose the object. An alternative to the Marshaller role is for users to add the relevant roles to each of the three interfaces mentioned.

Remote Components

The class ServicedComponent contains MarshalByRefObject in its inheritance tree and therefore can be accessed from remote clients. There are many variations of how to expose serviced components remotely. Serviced components can be accessed remotely using:

  • The HTTP channel with serviced components called from or written in ASP.NET offers good security and encryption options, along with known scalability and performance. When used with SOAP, more interoperability options exist. Serviced components can be hosted in IIS/ASP.NET as a COM+ library application. If a COM+ server application is used, the IIS/ASP.NET host can access the components using DCOM.
  • An alternative way to expose a serviced component as a SOAP endpoint is discussed in COM+ Web Services: The Check-Box Route to XML Web Services.
  • DCOM when serviced components are hosted in Dllhost. This option offers optimal performance and security and the ability to pass service contexts cross-machine. The major design question when choosing a remoting technology should be whether or not the services need to flow across machines. For instance, within a server farm where a transaction is created on one machine and it is required for the transaction to continue on another machine, DCOM is the only protocol that can be used to achieve this. However, if clients need to simply call a remote ServicedComponent, then the HTTP channel or SOAP endpoint approaches are good alternatives.
  • A .NET remoting channel (for instance, a TCP or custom channel). To use the TCP channel you need a process listening on a socket. In general, a custom process is used to listen on the socket and then host serviced components as either a COM+ library or server application; alternatively, Dllhost can be used as the listener. These approaches are the least likely to be used, since they require writing a custom socket listener with proven performance, scalability and security. The ASP.NET or DCOM solutions are therefore the best approaches for most projects.

In order to access a serviced component hosted in Dllhost remotely using DCOM, first ensure the assembly is registered in a COM+ server application and placed in the GAC on the server machine. Then, use the COM+ application export facility to create an MSI file for an application proxy and install the application proxy on the client. The managed assembly is embedded in the application proxy; the installer also registers the assembly and places it in the GAC on the client machine. Therefore:

  • The .NET Framework is required to be installed on both the client and the server. This is required on the client machine even if only unmanaged clients will access remote serviced components. On Windows 2000 platforms, Service Pack 3 is also required.
  • After uninstalling the proxy, the assembly must also be removed from the GAC.
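The server-side setup described above might look like the following, using the .NET Framework SDK tools (the assembly and application names are illustrative):

```shell
# Install the assembly into the GAC on the server machine
gacutil /i Demos.dll

# Register the assembly in a COM+ server application named JITDemo
regsvcs /appname:JITDemo Demos.dll

# The application proxy MSI is then exported from Component Services
# (comexp.msc): right-click the application, choose Export..., and
# select "Application proxy". Installing that MSI on the client also
# registers the embedded assembly and places it in the client's GAC.
```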

The infrastructure after the server component has been activated in managed code from the client side is shown in figure 6.

Using DCOM implies the CLR is hosted in Dllhost, which means the application configuration file dllhost.exe.config resides in the system32 directory. This also implies that the configuration file applies to all Dllhost processes on the machine. In the next release of the .NET Framework (V1.1), the COM+ application root directory can be set on the COM+ application, and that directory is used to discover configuration files and assemblies for the application.

For client-activated objects, whenever an object's URI is requested, a lifetime lease is created for that object. As described earlier in the Activation section, a URI is requested by the remote serviced component proxy. This can also occur when an existing in-process serviced component is marshaled to a remote process—a URI is requested whenever an MBR object is marshaled by .NET outside an application domain. The URI is used to ensure object identities in .NET are unique and to prevent proxy chaining. Therefore, when a managed client activates a remote serviced component, lease times are used on the server object. Notice that unmanaged clients do not have a remote serviced component proxy on the client side and therefore do not request the object's URI. Instead, unmanaged clients use DCOM to ensure object identity. Therefore lease times on serviced components are not used when they are activated from unmanaged clients.

Where lease times are involved with serviced components, it is good practice to set the InitialLeaseTime and RenewOnCallTime values to a small value, possibly as small as 10 seconds. Serviced components are destroyed either by calling Dispose() or by having the GC clean up the objects. When Dispose() is called, the remote serviced component proxy releases the reference it has on the DCOM proxy and then makes itself available to the next GC. The server object processes the Dispose call (or a new server object is created to service the remote call to Dispose()), destroys the associated COM+ context and then makes itself available to the next GC, but only once the lease time has expired. When the client does not call Dispose(), the server must first wait for the client-side GC to release the reference to the DCOM proxy and only then makes itself and the COM+ context available to the next GC after the lease time has expired. Therefore, call Dispose() and, in addition, reduce the default lease time. Even if the client remains alive and the lease time expires, the DCOM reference to the server object will keep the server object alive. However, the DCOM reference is not always used to keep the serviced component alive: when the client accesses the object through a CLR remoting channel or COM+ SOAP services, only the strong reference due to the lease keeps the serviced component alive.
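One way to shorten the lease is to override InitializeLifetimeService (inherited from MarshalByRefObject) on the serviced component; the RemoteWorker class and the 10-second values below are illustrative:

```csharp
using System;
using System.EnterpriseServices;
using System.Runtime.Remoting.Lifetime;

namespace Demos
{
    [JustInTimeActivation]
    public class RemoteWorker : ServicedComponent
    {
        // Shorten the remoting lease so the server-side object and its
        // COM+ context are reclaimed soon after a client that never
        // calls Dispose() stops calling.
        public override object InitializeLifetimeService()
        {
            ILease lease = (ILease)base.InitializeLifetimeService();
            if (lease.CurrentState == LeaseState.Initial)
            {
                lease.InitialLeaseTime = TimeSpan.FromSeconds(10);
                lease.RenewOnCallTime  = TimeSpan.FromSeconds(10);
            }
            return lease;
        }
    }
}
```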

Conclusion

This article has discussed just some of the services available to managed code. All the COM+ services are available to managed code, such as transaction isolation levels, process initialization, services without components and process recycling. The .NET Framework now provides equal access to all COM+ services in a consistent and logical manner. Moreover, a number of innovative parts of the .NET Framework, such as ASP.NET, Microsoft ADO.NET and Messaging, integrate deeply with .NET Enterprise Services, making use of services such as transactions and object pooling. This integration provides for a consistent architecture and programming model. The System.EnterpriseServices namespace provides the programming model to add services to managed classes.
