Standard Marshaling Architecture

As mentioned previously in this chapter, COM uses the ORPC protocol for all cross-apartment access. This fact may be interesting architecturally, but few developers want to program low-level communications code. To take advantage of ORPC communications, COM objects need to do nothing beyond implement IUnknown to facilitate ORPC-based cross-apartment access. By default, when CoMarshalInterface is first called on an object, the object is asked whether it wishes to handle its own cross-apartment communications. This question comes in the form of a QueryInterface request for the IMarshal interface. Most objects do not implement the IMarshal interface and fail this QueryInterface request, indicating that they are perfectly happy to let COM handle all communications via ORPC calls. Objects that do implement the IMarshal interface are indicating that ORPC is inappropriate and that the object implementor would prefer to handle all cross-apartment communications via a custom proxy. When an object implements the IMarshal interface all references to the object will be custom marshaled. Custom marshaling is discussed later in this chapter. When an object does not implement the IMarshal interface, all references to the object will be standard marshaled. Most objects elect to use standard marshaling, and that is the focus of this section.

When CoMarshalInterface first determines that an object wishes to use standard marshaling, a special COM object called the stub manager is created. The stub manager acts as the network-wide identity of the object and is uniquely identified by an Object Identifier (OID) that represents the object's identity across all apartments. There is a one-to-one correspondence between stub managers and COM object identities: each stub manager refers to exactly one COM object, and each COM object that is using standard marshaling has exactly one stub manager. The stub manager holds at least one outstanding reference to the object, which keeps the object's resources in memory. In this respect, the stub manager is yet another in-process client of the object. The stub manager keeps track of the number of outstanding external references and will remain alive as long as there is at least one outstanding reference somewhere in the network. Most external references are simply proxies, although intermediate marshaled object references can also keep the stub alive, ensuring that the object is still running when the first proxy is created. When outstanding proxies/references are destroyed, the stub manager is notified and decrements its count of external references. When the last external reference to the stub manager is destroyed, the stub manager destroys itself, releasing its outstanding references to the actual object. This simulates the effect of having client-side references keep the object alive. Techniques for explicitly controlling the lifetime of the stub are discussed later in the chapter.

The stub manager simply acts as the network identity of the object and does not understand how to handle incoming ORPC requests that are destined for the object.8 To translate incoming ORPC requests into actual method invocations on the object, the stub manager requires a helper object that knows the details of the interface's method signatures. This helper object is called an interface stub and must properly unmarshal the [in] parameters that are present in the ORPC request message, call the method on the actual object, and marshal the HRESULT and any [out] parameters into the ORPC response message. Interface stubs are identified internally using Interface Pointer Identifiers (IPIDs) that are unique within an apartment. Like the stub manager, each interface stub holds a reference to the object; however, the interface held will be a typed interface, not simply IUnknown. Figure 5.3 shows the relationship between the stub manager, interface stubs, and the object. Note that some interface stubs know how to decode more than one interface type, whereas others understand only one interface.

Figure 5.3 Stub Architecture

When CoUnmarshalInterface unmarshals a standard marshaled object reference, it technically returns a pointer to the proxy manager. The proxy manager acts as the client-side identity of the object and, like the stub manager, does not have any a priori understanding of any COM interfaces. The proxy manager does, however, know how to implement the three methods of IUnknown. Any redundant calls to AddRef or Release simply increment or decrement an internal reference count in the proxy manager and are never transmitted using ORPC. The final Release on the proxy manager does destroy the proxy, sending a disconnect request to the object's apartment. QueryInterface requests on the proxy manager are handled somewhat differently. Like the stub manager, the proxy manager has no a priori knowledge of COM interfaces. Instead, the proxy manager must load interface proxies that expose the actual interface being remoted. The interface proxy translates method invocations into ORPC calls. Unlike the stub manager, the proxy manager is directly visible to programmers, and to maintain the correct identity relationships, the interface proxies are aggregated into the proxy manager's identity. This gives the client the illusion that all of the interfaces are exposed from a single COM object. Figure 5.4 shows the relationship between the proxy manager, interface proxies, and the stub.

As Figure 5.4 illustrates, the proxy communicates with the stub via a third object called the channel. The channel is a COM-supplied wrapper around the RPC runtime layer. The channel exposes the IRpcChannelBuffer interface:

Figure 5.4 Proxy Architecture

[ uuid(D5F56B60-593B-101A-B569-08002B2DBF7A),local,object ]
interface IRpcChannelBuffer : IUnknown {
// programmatic representation of ORPC message
   typedef struct tagRPCOLEMESSAGE {
      void   *reserved1;  
      unsigned long     dataRepresentation;   // endian/ebcdic
      void   *Buffer;   // payload goes here
      ULONG   cbBuffer;   // length of payload
      ULONG   iMethod;   // which method?
      void   *reserved2[5];
      ULONG   rpcFlags;
   } RPCOLEMESSAGE;

// allocate a transmission buffer
   HRESULT GetBuffer([in] RPCOLEMESSAGE *pMessage,
         [in] REFIID riid);
// send an ORPC request and receive an ORPC response
   HRESULT SendReceive([in,out] RPCOLEMESSAGE *pMessage,
         [out] ULONG *pStatus);
// deallocate a transmission buffer
   HRESULT FreeBuffer([in] RPCOLEMESSAGE *pMessage);
// get distance to destination for CoMarshalInterface
   HRESULT GetDestCtx([out] DWORD *pdwDestCtx,
         [out] void **ppvDestCtx);
// check for explicit disconnects
   HRESULT IsConnected(void);
}

Interface proxies use the SendReceive method on this interface to cause the channel to send ORPC request messages and receive ORPC response messages.

Interface proxies and stubs are simply COM in-process objects that are created by the proxy and stub managers using normal COM activation techniques. The interface stub must expose the IRpcStubBuffer interface:

[ uuid(D5F56AFC-593B-101A-B569-08002B2DBF7A),local,object ]
interface IRpcStubBuffer : IUnknown {
// called to connect stub to object
   HRESULT Connect([in] IUnknown *pUnkServer);
// called to inform stub to release object
   void   Disconnect(void);
// called when ORPC request arrives
   HRESULT Invoke([in] RPCOLEMESSAGE *pmsg,
         [in] IRpcChannelBuffer *pChannel);
// used to support multiple itf types per stub
   IRpcStubBuffer *IsIIDSupported([in] REFIID riid);
// used to support multiple itf types per stub
   ULONG   CountRefs(void);
// used by ORPC debugger to find pointer to object
   HRESULT   DebugServerQueryInterface(void **ppv);
// used by ORPC debugger to release pointer to object
   void   DebugServerRelease(void *pv);
}

The Invoke method will be called by the COM library when an ORPC request arrives for the object. On input, the RPCOLEMESSAGE will contain the marshaled [in] parameters, and on output, the stub must marshal the method's HRESULT and any [out] parameters that will be returned in the ORPC response message.

The interface proxy must expose the interface(s) it is responsible for remoting in addition to the IRpcProxyBuffer interface:

[ uuid(D5F56A34-593B-101A-B569-08002B2DBF7A),local,object ]
interface IRpcProxyBuffer : IUnknown {
   HRESULT   Connect([in] IRpcChannelBuffer *pChannelBuffer);
   void   Disconnect(void);
}

The IRpcProxyBuffer interface must be the nondelegating unknown of the interface proxy. All other interfaces the interface proxy exposes must delegate their IUnknown methods to the proxy manager. It is in the method implementations of these other interfaces that the interface proxy must use the channel to send ORPC requests to the interface stub's Invoke method, which then executes the method in the object's apartment.

Interface proxies and interface stubs are dynamically bound and share a single CLSID for both the proxy and stub. This bifurcated implementation is often called an interface marshaler. The class object of the interface marshaler exposes the IPSFactoryBuffer interface:

[ uuid(D5F569D0-593B-101A-B569-08002B2DBF7A),local,object ]
interface IPSFactoryBuffer : IUnknown {
   HRESULT CreateProxy(
      [in] IUnknown *pUnkOuter,   // ptr to proxy manager
      [in] REFIID riid,   // the requested itf to remote
      [out] IRpcProxyBuffer **ppProxy, // ptr. to proxy itf.
      [out] void **ppv   // ptr to remoting interface
   );
   HRESULT CreateStub(
      [in] REFIID riid,   // the requested itf to remote
      [in] IUnknown *pUnkServer, // ptr to actual object
      [out] IRpcStubBuffer **ppStub // ptr to stub on output
   );
}

The proxy manager calls the CreateProxy method to aggregate a new interface proxy. The stub manager calls the CreateStub method to create a new interface stub.

When a new interface is requested on an object, the proxy and stub managers must resolve the requested IID onto the CLSID of the interface marshaler. Under Windows NT 5.0, the class store maintains these mappings in the NT directory, and they are cached at each host machine in the local registry. The machine-wide IID-to-CLSID mappings are cached at

   HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Interface

and the per-user mappings are cached at

   HKEY_CURRENT_USER\SOFTWARE\Classes\Interface

One or both of these keys will contain a subkey for each known interface. Under Windows NT 4.0 or earlier, there is no class store and only the HKEY_CLASSES_ROOT\Interface area of the local registry is used.

If the interface has an interface marshaler installed, there will be an additional subkey (ProxyStubClsid32) that indicates the CLSID of the interface marshaler. The following illustrates the required registry keys for a marshalable interface:

[HKEY_CLASSES_ROOT\Interface\{1A3A29F0-D87E-11d0-8C4F-0080C73925BA}]
@="IRacer"

[HKEY_CLASSES_ROOT\Interface\{1A3A29F0-D87E-11d0-8C4F-0080C73925BA}\ProxyStubClsid32]
@="{1A3A29F3-D87E-11d0-8C4F-0080C73925BA}"

These registry entries state that there is an in-process server with a CLSID of {1A3A29F3-D87E-11d0-8C4F-0080C73925BA} that implements the interface proxy and stub for interface IRacer ({1A3A29F0-D87E-11d0-8C4F-0080C73925BA}). This implies that HKCR\CLSID will have a subkey for the interface marshaler mapping the CLSID onto the appropriate DLL filename. Again, under Windows NT 5.0, this mapping may exist in the class store, which can dynamically populate the local registry. Because interface marshalers must run in the same apartment as the proxy manager or the stub manager, they must use ThreadingModel="Both" to ensure that they can always load into the correct apartment.

Implementing Interface Marshalers

The previous section illustrated the four interfaces used by the standard marshaling architecture. Although it is possible to implement interface marshalers using manual C++ coding techniques, it is rarely done in practice. This is because the IDL compiler can automatically generate the C source code for an interface marshaler based on the IDL definition of an interface. MIDL-generated interface marshalers serialize method parameters using the Network Data Representation (NDR) protocol, which allows the parameters to be unmarshaled on a variety of host architectures. NDR takes into account differences in byte ordering, floating point formats, character sets, and alignment issues. NDR supports virtually all C-compatible data types. To support passing interface pointers as parameters, MIDL generates calls to CoMarshalInterface/CoUnmarshalInterface to marshal any interface pointer parameters. If the parameter is a statically typed interface pointer:

HRESULT Method([out] IRacer **ppRacer);

the generated marshaling code will marshal the ppRacer parameter by passing the IID of IRacer (IID_IRacer) to the CoMarshalInterface/ CoUnmarshalInterface calls. If instead the interface pointer is dynamically typed:

HRESULT Method([in] REFIID riid, 
            [out, iid_is(riid)] void **ppv);

then the generated marshaling code will marshal the interface using the IID passed at runtime in the first method parameter.

MIDL generates interface marshaler source code for every nonlocal interface defined outside the scope of the library statement. In the following pseudo-IDL:

// sports.idl
[local, object] interface IBoxer : IUnknown { ... }
[object] interface IRacer : IUnknown { ... }
[object] interface ISwimmer : IUnknown { ... }
[helpstring("Sports Lib")] 
library SportsLibrary {
   interface IRacer; // include def. of IRacer in TLB
   [object] interface IWrestler : IUnknown { ... }
}

only the IRacer and ISwimmer interfaces would have interface marshaler source code. MIDL would not generate marshaling code for IBoxer because the [local] attribute suppresses marshaling. MIDL also would not generate a marshaler for IWrestler because it is defined inside the scope of a library statement.

When presented with the IDL just shown, the MIDL compiler would generate five files. The file sports.h would contain the C/C++ definitions of the interfaces, sports_i.c would contain the definitions of the IIDs and LIBIDs, and sports.tlb would contain the tokenized IDL for IRacer and IWrestler suitable for use in COM-aware development environments. The file sports_p.c would contain the actual interface proxy and stub method implementations that perform the method call-to-NDR transformations. This file would also contain the C-based vtable definitions for the interface proxy and stub along with other MIDL-specific management code. Because interface marshalers are COM in-process servers, the standard four entry points (DllGetClassObject et al) must also be defined. These four methods are defined in the fifth file, dlldata.c.

All that is needed to build an interface marshaler from these generated files is to write a makefile that compiles the three C source files (sports_i.c, sports_p.c, dlldata.c) and links them together to build the DLL. The four standard COM entry points must be explicitly exported using either a module definition file or linker switches. Note that by default, dlldata.c contains only definitions of DllGetClassObject and DllCanUnloadNow. This is because the supporting RPC runtime library under Windows NT 3.50 supported only these two routines. If the interface marshaler will be used only under Windows NT 3.51 or later (or under Windows 95), the C preprocessor symbol REGISTER_PROXY_DLL should be defined when compiling the dlldata.c file to compile the standard self-registration entry points as well. Once the
interface marshaler is built, it should be installed into the local registry and/or the class store.
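Applied to the generated files above, such a makefile might look like the following sketch (nmake syntax; the sports.def file name and library list are assumptions, and exact switches vary by SDK release):

```
# sketch only: builds the sports.dll interface marshaler
CFLAGS = /c /DWIN32 /DREGISTER_PROXY_DLL
OBJS   = sports_i.obj sports_p.obj dlldata.obj

sports.dll : $(OBJS)
    link /dll /out:sports.dll /def:sports.def $(OBJS) \
        rpcrt4.lib uuid.lib kernel32.lib

.c.obj :
    cl $(CFLAGS) $<
```

Here sports.def would export DllGetClassObject, DllCanUnloadNow, DllRegisterServer, and DllUnregisterServer, with the latter two compiled in via the REGISTER_PROXY_DLL symbol as described above.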

The Windows NT 4.0 implementation of the COM library introduced support for fully interpretive marshaling. Depending on the interface, using the interpretive marshaler can vastly improve the performance of an application by reducing the working set size. The preinstalled interface marshalers for all COM standard interfaces use the interpretive marshaler. Microsoft Transaction Server requires interface marshalers to use the interpretive marshaler.9 To enable the interpretive marshaler, simply run the MIDL compiler using the /Oicf command-line switch:

midl.exe /Oicf sports.idl

At the time of this writing, the MIDL compiler would not overwrite a preexisting _p.c file, so this file must be deleted when changing this setting. Because /Oicf-based interface marshalers will not work on pre-Windows NT 4.0 versions of COM, the C preprocessor symbol _WIN32_WINNT must be defined to some integer greater than or equal to 0x400 when the marshaler's source code is compiled. The C compiler will enforce this at compile time.

A third technique for generating interface marshalers is supported for a limited class of interfaces. If an interface uses only the primitive data types supported by VARIANTs,10 the universal marshaler can be used. Adding the [oleautomation] attribute to an interface definition enables the universal marshaler:

import "oaidl.idl";

[ uuid(F99D19A3-D8BA-11d0-8C4F-0080C73925BA), version(1.0) ]
library SportsLib {
   [
      uuid(F99D1907-D8BA-11d0-8C4F-0080C73925BA), object,
      oleautomation   // use the universal marshaler
   ]
   interface IWrestler : IUnknown {
      HRESULT HalfNelson([in] double nmsec);
   }
}

The presence of the [oleautomation] attribute informs the RegisterTypeLib function to add the following additional registry entries when registering the type library:

[HKEY_CLASSES_ROOT\Interface\{F99D1907-D8BA-11d0-8C4F-0080C73925BA}]
@="IWrestler"

[HKEY_CLASSES_ROOT\Interface\{F99D1907-D8BA-11d0-8C4F-0080C73925BA}\ProxyStubClsid32]
@="{00020424-0000-0000-C000-000000000046}"

[HKEY_CLASSES_ROOT\Interface\{F99D1907-D8BA-11d0-8C4F-0080C73925BA}\TypeLib]
@="{F99D19A3-D8BA-11d0-8C4F-0080C73925BA}"

The CLSID {00020424-0000-0000-C000-000000000046} corresponds to the universal marshaler, which is preinstalled on all platforms that support COM, including 16-bit Windows.

The primary advantage of using the universal marshaler is that it is the only supported technique for standard marshaling between 16-bit and 32-bit applications. The universal marshaler is also compatible with Microsoft Transaction Server. A side benefit of the universal marshaler is that if the type library is installed on both the client and object host machines, no additional interface marshaler DLL will be needed. The primary disadvantage of using the universal marshaler is the limited support for parameter data types. This is exactly the same as the limitation imposed by dynamic invocation and scripting environments, but it is a serious limitation for designing low-level systems programming interfaces.11 Under Windows NT 4.0, the initial cost of calling CoMarshalInterface/CoUnmarshalInterface will be somewhat greater using the universal marshaler. However, once the interface proxy and stub are instantiated, the method invocation performance is identical to /Oicf-based marshalers.

Standard Marshaling, Threads, and Protocols

The details of how COM actually maps ORPC requests onto threads are undocumented and subject to change as the implementation of the COM library evolves. The descriptions contained in this section are accurate at the time of this writing, but certain implementation details may change in subsequent releases of COM.

When the first apartment is initialized in a process, COM enables the RPC runtime layer, turning the process into an RPC server. If the apartment type is an MTA or RTA, the ncalrpc RPC protocol sequence is used, which is a wrapper around Windows NT LPC ports. If the apartment type is an STA, a private protocol sequence is used that is based on Windows MSG queues. As objects that reside in the process are first accessed by off-host clients, additional network protocol sequences are registered in the process. When a process first begins to use protocols other than the Windows MSG protocol, the RPC thread cache is started. This thread cache begins with one thread that listens for incoming connection requests, RPC requests, or other protocol-specific activity. When any of these events happen, the RPC thread cache will dispatch a thread to service the request and continue to wait for additional activity. To avoid excessive thread creation/destruction overhead, these threads return to the thread cache, where they will wait for additional work. If no additional work arrives, the threads will destroy themselves after a predefined period of inactivity. The net effect is that the RPC thread cache grows and shrinks based on the busyness of the objects that are exported from the process' apartments. From a programming perspective, the important observation is that the RPC thread cache dynamically allocates threads based on ORPC requests that arrive on all protocols except the Windows MSG protocol, which will be discussed later in this section.

When an incoming ORPC request is dispatched to a thread from the cache, the thread extracts the IPID from the header of the ORPC call and finds the corresponding stub manager and interface stub. The thread determines the type of apartment the object resides in, and if the object is in the MTA or an RTA, the thread enters the object's apartment and calls the IRpcStubBuffer::Invoke method on the interface stub. If the apartment is an RTA, subsequent threads will be held at bay for the duration of the method call. If the apartment is the MTA, then subsequent threads may access the object concurrently. For intraprocess RTA/MTA communications, the channel is able to shortcut the RPC thread cache and reuse the client thread simply by entering the object's apartment temporarily. If MTAs and RTAs were the only types of apartment, this would be all that is required.

Figure 5.5 Singlethreaded Apartment Call Dispatching

Dispatching calls to an STA is more complex, simply because no other threads can enter an existing STA. Unfortunately, when ORPC requests arrive from off-host clients, they are dispatched using threads from the RPC thread cache, which by definition cannot execute in the object's STA. To enter the STA and dispatch the call to the STA's thread, the RPC thread uses the PostMessage API function to enqueue a message onto the STA thread's MSG queue as shown in Figure 5.5. This queue is the same FIFO queue used by the windowing system. This means that to finish dispatching the call, the STA thread must service the queue via some variation on the following code:

MSG msg;
while (GetMessage(&msg, 0, 0, 0))
   DispatchMessage(&msg);

This code implies that the STA thread has at least one window that can receive messages. When a thread enters a new STA by calling CoInitializeEx, COM creates a new invisible window by calling CreateWindowEx. This window is associated with a COM-registered window class whose WndProc looks for a predefined window message and services the corresponding ORPC request by calling the IRpcStubBuffer::Invoke method on the interface stub. Note that because windows, like STA-based objects, have thread affinity, the WndProc will execute in the apartment of the object. To avoid excessive thread switching, the Windows 95 release of COM introduced an RPC transport that bypasses the RPC thread cache and calls PostMessage from the thread of the caller. This transport is available only when the client is on the same host as the object, because the PostMessage API does not work over the network.

To help prevent deadlock, all COM apartment types support reentrancy.12 When a thread in an apartment makes a call through a proxy to an object outside the caller's apartment, incoming method requests can continue to be serviced while the caller's thread is waiting for the ORPC response from the original call. Without this support, it would be impossible to build systems based on collaborating objects. In the following code, assume that CLSID_Callback is an in-process server that supports the threading model of the calling thread and that CLSID_Object is a class that is configured to activate on a remote machine:

ICallback *pcb = 0;
HRESULT hr = CoCreateInstance(CLSID_Callback, 0, CLSCTX_ALL,
   IID_ICallback, (void**)&pcb);
assert(SUCCEEDED(hr)); // callback object lives in this apt.
IObject *po = 0;
hr = CoCreateInstance(CLSID_Object, 0, CLSCTX_REMOTE_SERVER,
   IID_IObject, (void**)&po);
assert(SUCCEEDED(hr)); // object lives in different apt.
// make a call to remote object, marshaling a reference to
// the callback object as an [in] parameter
hr = po->UseCallback(pcb);
// clean up resources
po->Release();
pcb->Release();

As shown in Figure 5.6, if the caller's apartment did not support reentrancy, then the following implementation of the UseCallback method would cause a deadlock:

STDMETHODIMP Object::UseCallback(ICallback *pcb) {
   HRESULT hr = pcb->GetBackToCallersApartment();
   return S_OK;
}

Figure 5.6 Interapartment Callbacks
Recall that when an [in] parameter is passed to the proxy's UseCallback method, the proxy calls CoMarshalInterface to marshal the ICallback interface pointer. Because the pointer refers to an object that resides in the caller's apartment, the caller's apartment becomes an object exporter and any cross-apartment calls on the callback object must be serviced in the caller's apartment. When the IObject interface stub unmarshals the ICallback interface, it creates a proxy to pass to the UseCallback method implementation. This proxy represents a transient connection to the callback object that lives for the duration of the call. The lifetime of this proxy/connection can exceed the scope of the call if the method implementation simply calls AddRef on the proxy:13

STDMETHODIMP Object::UseCallback(ICallback *pcb) {
   if (!pcb) return E_INVALIDARG;
// hold onto proxy for later use
   (m_pcbMyCaller = pcb)->AddRef();
   return S_OK;
}

The connection back to the client's apartment will last until the proxy is released by the object. Because all COM apartments can receive ORPC requests, the object can call back into the client's apartment whenever it chooses.

Reentrancy is implemented differently for each apartment type. The MTA implementation is the simplest, as MTAs make no concurrency guarantees, nor do they address which thread will service any given method call. When a reentrant call arrives while an MTA thread is blocked in the channel waiting for the ORPC response, the RPC thread that receives the reentrant request simply enters the MTA and services the call using the RPC thread. The fact that another thread in the apartment is blocked waiting for an ORPC response is not relevant to call dispatching. In the case of the RTA implementation, when a thread executing in an RTA makes a cross-apartment call through a proxy, the channel yields control of the apartment, releasing the RTA-wide lock and allowing incoming calls to be serviced. Again, because RTA-based objects do not have thread affinity, an RPC thread that receives an ORPC request can simply enter the RTA and service the call once the RTA-wide lock is acquired.

The implementation of reentrancy for STAs is more complex. Because STA-based objects have thread affinity, when a thread makes a cross-apartment call from an STA, COM cannot allow the thread to make a blocking call that would prevent incoming ORPC requests from being serviced. When the caller's thread enters the channel's SendReceive method to send the ORPC request and receive the ORPC response, the channel steals the caller's thread and places it in an internal windows MSG loop. This is not unlike what happens when a thread creates a modal dialog box. In both cases the caller's thread is needed to service certain classes of window messages while the operation is in progress. In the case of modal dialog boxes, the thread is needed to service basic window messages to ensure that the overall user interface does not appear frozen. In the case of a cross-apartment COM method call, the thread is needed to service not only normal user-interface window messages but also window messages that correspond to incoming ORPC requests. By default, the channel will allow all incoming ORPC calls to be serviced while the client thread waits for an ORPC response. This behavior is customizable by installing a custom message filter for the thread.

Message filters are unique to STAs. A message filter is a per-STA COM object that is used to decide whether or not to dispatch incoming ORPC requests. Message filters are also used to determine the disposition of pending user-interface messages when the STA's thread is waiting for an ORPC response inside the channel. Message filters expose the IMessageFilter interface:

[ uuid(00000016-0000-0000-C000-000000000046),local,object ]
interface IMessageFilter : IUnknown {
   typedef struct tagINTERFACEINFO {
      IUnknown   *pUnk;   // which object?
      IID   iid;   // which interface?
      WORD   wMethod;   // which method?
   } INTERFACEINFO;

// called when an incoming ORPC request arrives in an STA
   DWORD HandleInComingCall(
      [in] DWORD dwCallType,
      [in] HTASK dwThreadIdCaller,
      [in] DWORD dwTickCount,
      [in] INTERFACEINFO *pInterfaceInfo
   );

// called when another STA rejects or postpones
// an ORPC request
   DWORD RetryRejectedCall(
      [in] HTASK dwThreadIdCallee,
      [in] DWORD dwTickCount,
      [in] DWORD dwRejectType
   );

// called when a non-COM MSG arrives while the thread is
// awaiting an ORPC response
   DWORD MessagePending(
      [in] HTASK dwThreadIdCallee,
      [in] DWORD dwTickCount,
      [in] DWORD dwPendingType
   );
}

To install a custom message filter, COM provides the API function CoRegisterMessageFilter:

HRESULT CoRegisterMessageFilter([in] IMessageFilter *pmfNew,
    [out] IMessageFilter **ppmfOld);

CoRegisterMessageFilter associates the provided message filter with the current STA. The previous message filter is returned to allow the caller to restore the prior message filter at a later time.

Whenever an incoming ORPC request arrives on an STA thread, the message filter's HandleInComingCall method is called, giving the apartment an opportunity to accept, reject, or postpone the call. HandleInComingCall is used for both reentrant and non-reentrant calls. The dwCallType parameter indicates which type of call was received:

typedef enum tagCALLTYPE {
 CALLTYPE_TOPLEVEL, // STA not in outbound call
 CALLTYPE_NESTED,   // callback on behalf of outbound call 
 CALLTYPE_ASYNC,    // asynchronous call 
 CALLTYPE_TOPLEVEL_CALLPENDING, // new call while waiting 
 CALLTYPE_ASYNC_CALLPENDING   // async call while waiting
} CALLTYPE;

Nested (reentrant) and toplevel callpending (non-reentrant) calls occur while the thread is waiting in the channel for an ORPC response. Toplevel calls occur when there are no active calls in the apartment.

COM defines an enumeration that the implementation of HandleInComingCall must return to indicate the disposition of the call:

typedef enum tagSERVERCALL {
 SERVERCALL_ISHANDLED, // accept call and forward to stub
 SERVERCALL_REJECTED,  // tell caller that call is rejected
 SERVERCALL_RETRYLATER // tell caller that call is postponed
} SERVERCALL;

If the message filter's HandleInComingCall returns SERVERCALL_ISHANDLED, the call will be forwarded to the interface stub for unmarshaling. The default message filter always returns SERVERCALL_ISHANDLED. If HandleInComingCall returns SERVERCALL_REJECTED or SERVERCALL_RETRYLATER, then the caller's message filter will be informed of the disposition of the call and the ORPC request will be discarded.

When a message filter rejects or postpones a call, the caller's message filter is informed via the RetryRejectedCall method. This call happens in the context of the caller's apartment, and the message filter's implementation of RetryRejectedCall can decide whether or not to retry a postponed call. The dwRejectType parameter indicates whether the call was rejected or postponed. The caller's channel implementation will decide what action to take based on the value returned by RetryRejectedCall. If RetryRejectedCall returns -1, then the channel will assume that no retries are desired and will immediately cause the proxy to return the HRESULT RPC_E_CALL_REJECTED. The default message filter always returns -1. Any other value returned by RetryRejectedCall is interpreted as the number of milliseconds to wait until retrying the call. Because this negotiation happens inside the channel, the ORPC request does not need to be regenerated by the proxy. In fact, interface marshalers are completely oblivious to the activities of the message filter.

When an STA-based thread is blocked in the channel waiting for an ORPC response, non-COM-related window messages may arrive on the thread's MSG queue. When this occurs, the STA's message filter is notified via the MessagePending method. The default message filter allows certain window messages to be dispatched in order to keep the overall windowing system from appearing frozen; however, input events (e.g., mouse clicks, key presses) are discarded to prevent the end-user from beginning new user interactions. As has already been stated, message filters are unique to STA apartments and are not supported for RTAs or MTAs. Message filters simply allow better integration of COM with user-interface threads. This implies that all user-interface threads should run in singlethreaded apartments. Most user-interface threads will want to install a custom message filter to ensure that incoming calls are not dispatched while the application is in a critical phase where reentrancy could cause semantic errors. Message filters should not be used as a general-purpose flow control mechanism. The implementation of message filters is notoriously inefficient when calls are rejected or postponed, making them poorly suited as a flow control mechanism for high-throughput applications.

8 Logically, the stub manager handles remote calls to IUnknown methods; however, this task is actually implemented by the apartment object that exposes the IRemUnknown interface.

9 MTS also requires the marshaler to be built using a special runtime library that allows MTS to find out information about an interface based on its interpretive format.

10 Variants are a data type used by scripting environments and are described in Chapter 2.

11 It is likely that future implementations of the COM library will remove this restriction. Consult your local documentation for more details.

12 At the time of this writing, COM provided no non-reentrant apartment types. It is possible that future versions of COM could provide new apartment types that do not support reentrancy.

13 It is a common misconception that Connection Points are required to enable bidirectional communication or callbacks. As described in Chapter 7, Connection Points are required only for supporting event handlers in Visual Basic and scripting environments.

© 1998 by Addison Wesley Longman, Inc. All rights reserved.