COM+ and MTS, DCOM and MSMQ, Serialization in .NET
Developers frequently ask me for clarification on Microsoft's strategy for the future with regard to COM+, Microsoft® Transaction Server (MTS) with its features of JIT activation and object pooling, Microsoft Message Queuing (MSMQ), and DCOM. What's in store for Web farms versus app servers versus ASP and component integration? Since everybody's clamoring for answers, let's take these questions one at a time. First, I'll deal with the COM+ and MTS issue.
Using COM+ and MTS
COM+ is alive and well, so if you need features of COM+ or MTS, you can use the appropriate technology with your Microsoft .NET components. In my mind, components are part of any type of solution, distributed or not. A component in .NET looks like a COM component in that they are both DLLs containing classes that can be instantiated by another application. The main difference between them is the way they are implemented, which is outside the scope of this discussion.
When considering what components to use in an application, you have several options. Should you use COM+, MTS, or neither? Both COM+ and MTS are designed to work with COM components. Thus, a component built to run in either must conform to the COM binary standard and must be registered in the registry before it will work.
The .NET Framework (and thus Visual Basic .NET) supports COM+ and MTS through COM Interop services. A Windows-based application (either a COM component or other application that can call a COM component) can call a .NET component, and a .NET application can call a COM component. This two-way interoperability is quite powerful and allows you to mix technologies in your applications.
As I said, COM+ and MTS were designed to work with COM components. When you place a .NET component (assembly) in an MTS package or a COM+ application, the component can be called by a .NET application in the same way as if it were not an MTS or COM+ component.
One of the underlying considerations when using COM Interop is the overhead incurred. .NET and COM use different execution methods (.NET uses the Common Language Runtime; COM does not) and .NET assemblies and COM components are implemented differently (.NET uses a type standard while COM uses a binary standard). Calling from one environment to the other adds some amount of overhead, so only do it when necessary. In fact, there are approximately 20 to 30 CPU instructions executed for each Interop operation. When you call a method on a class that is hosted in COM+, this overhead is incurred during each call.
If you absolutely must have features that COM+ or MTS provide, then host your component in COM+ or MTS, but first make sure you really need all of that functionality. If your component performs transactions against a single database and will only ever work against that one database, you don't necessarily need COM+ to implement those transactions; you can implement them with ADO.NET. However, if you need object pooling or transaction support that spans multiple databases, then use COM+ or MTS.
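The single-database case can be handled entirely by the data provider itself, with no COM+ involvement. In the article's world that means ADO.NET, but the pattern is the same in any environment. Here is a minimal sketch in Python, with sqlite3 standing in for the data provider (the table and column names are made up for illustration):

```python
import sqlite3

# sqlite3 stands in for the ADO.NET connection/transaction objects;
# the point is that one database can commit or roll back on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO orders (qty) VALUES (?)", (3,))
        conn.execute("INSERT INTO orders (qty) VALUES (?)", (5,))
except sqlite3.Error:
    pass  # the with-block already rolled the transaction back

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # both inserts committed together
```

Only when a single transaction must span more than one database (or resource manager) do you need the distributed transaction coordination that COM+ provides.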
To implement a component in MTS or COM+, see the MSDN documentation for the .NET Framework and check out the information on COM+ (these services are exposed to .NET through the System.EnterpriseServices namespace). The docs show you how to implement the components and how to use various attributes to automate the process.
Remote Communications Using DCOM and MSMQ
Instead of DCOM, you can use either .NET Remoting or Web Services. What's the difference? Think about your applications and how you communicate between various application segments, and consider an application architecture that has an ASP.NET front end. You can put the code in the application in several places.
First, you can add the code into the ASP.NET code-behind page. This makes the page similar to most ASP pages, where the code is tied to one output page. In this scenario, the application is not really distributed at all; rather it's a two-tier application with an ASP.NET front end and a database for the back end.
Second, you could put the generic code in a module. A module file is a simple Visual Basic file in your project with a module definition. You can place both functions and global variables in the module, then you can use both of them from anywhere in the application. This lets the module take the place of include files for the purpose of storing common code.
Third, you can begin to migrate the common code into a set of classes. Classes are files or sections of a file that have a class definition and contain an interface (properties and methods). A class can be created in your ASP.NET project, which makes it part of the project, or you can create a separate project and implement the class in it. This latter approach is preferred for common code because you end up with a DLL just as you would in the COM world, and this DLL can be shared among many applications.
Let's assume that the class contains some data access code that performs transactions. To illustrate, I'll use the design in Figure 1. This figure shows two .aspx pages calling a business object to get the information for the page. Either the Customer, Product, or Order object then calls the database layer object, which performs some type of action that either retrieves data from or sends data (or update information) to the database.
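The layering in Figure 1 can be sketched in a few lines. The article's components would be .NET classes compiled into DLLs; this Python sketch just shows the shape of the calls, and the names (DatabaseLayer, Customer, get_by_id) are made up for illustration:

```python
class DatabaseLayer:
    """Stands in for the database layer object nearest the database."""
    _rows = {1: {"id": 1, "name": "Alice"}}  # fake table for the sketch

    def fetch(self, customer_id):
        return self._rows.get(customer_id)


class Customer:
    """Business object of the kind the .aspx pages would call."""
    def __init__(self, db):
        self._db = db

    def get_by_id(self, customer_id):
        row = self._db.fetch(customer_id)
        if row is None:
            raise KeyError(customer_id)
        return row


# An .aspx page would make a call like this to fill itself in.
page_data = Customer(DatabaseLayer()).get_by_id(1)
print(page_data["name"])  # Alice
```

The page never touches the database directly; it only sees the business object, which is what makes it possible to relocate the lower layers later.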
Figure 1 Order Architecture
This application does not necessarily need MSMQ, .NET Remoting, or Web Services because all of the components are on the same server. You could move the database layer component or the business components onto another server, but it's almost always faster to put all of the components on the same server as the ASP.NET application. Why? Because the components can run inside the same application domain (process) as the ASP.NET application, eliminating interprocess communication. This is the fastest way to access a component. Whenever an application calls out of process there is overhead involved, which can slow down your app.
You can also split things up and put components on another server when there is a business need for it. For instance, suppose the application allows users to enter data that is processed and then sent to a database. Assume that the system gets heavy traffic and bottlenecks are frequent. One way to add some zip to the application is to insert MSMQ into the process. Basically, your application can collect the information from the user, let the user know it is processing the information, then send the information over MSMQ. The application on the other end of the queue can pull the information off the queue and update the database. The performance in this type of application will be quite good because the user does not have to wait for the database update. This structure will only work, however, if the application can support an architecture where you can have this type of disconnected operation.
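The decoupling MSMQ gives you can be sketched in any language with a queue. Here, Python's queue.Queue stands in for MSMQ and a plain list stands in for the database; the point is that the front end returns to the user as soon as the message is enqueued, while a separate worker performs the slow database update later:

```python
import queue
import threading

work = queue.Queue()   # stands in for the MSMQ queue
database = []          # stands in for the real database

def worker():
    # The application on the other end of the queue.
    while True:
        item = work.get()
        if item is None:          # sentinel: shut down
            break
        database.append(item)     # the deferred "database update"
        work.task_done()

t = threading.Thread(target=worker)
t.start()

# Front end: accept the order, tell the user it's being processed,
# and return immediately without waiting on the database.
work.put({"order_id": 42, "qty": 3})

work.put(None)  # for the sketch, drain and stop the worker
t.join()
print(len(database))  # 1
```

In the real architecture the producer and consumer would be separate processes on separate machines, which is exactly the disconnected operation the paragraph describes.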
Of course, there are also many other times when you need a queuing system and MSMQ can be used in those scenarios as well. The System.Messaging namespace includes classes that allow you to add queuing support to your applications. The queuing features are also integrated in Visual Studio .NET to allow you to easily access these features from your application.
By moving the database layer to the database server, you could expose the database component via either a Web service or .NET Remoting. Using either of these technologies, you could also create a component server for each component. Which you choose depends on your needs. In one scenario, the Application Server hosts the ASP.NET application and the Component Server hosts the components used by the Application Server. There are two ways the application on the Application Server can communicate with the components on the other server.
First, you could simply create a Web service on the Component Server and have it provide methods that access the methods of the various components. This approach allows an application or component on the Application Server to access the components on the Component Server through the Web service. The advantage of this approach is that it's flexible and quite easy to do. The downside is that it requires you to explicitly create a Web service to expose the methods of the classes.
Second, the applications on the Application Server could access the components on the other server using .NET Remoting, which is quite flexible and has advantages over Web Services in many scenarios. .NET Remoting is simple to set up. It is built into the .NET Framework and provides the infrastructure you need for interprocess communication between components on different systems. It works by providing a transport channel that your applications can use to communicate over the network.
The .NET Remoting support extends to many different areas, such as built-in security; you also control the security of the data transmitted between the two components. One difference from Web Services is the ability to change the format of the payload transmitted between the endpoints of a call. Web Services are based on the industry-standard Simple Object Access Protocol (SOAP). Remoting can use SOAP as well, but it can also use a binary format. You choose the format type. You may prefer binary because it is more compact than the verbose XML format, and being harder to read on the wire, it adds a measure of security.
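The size difference between the two formats is easy to demonstrate. This Python sketch serializes the same made-up record set both ways, with ElementTree standing in for the XML/SOAP formatter and pickle standing in for the binary formatter (neither is the actual .NET serializer; they just illustrate the trade-off):

```python
import pickle
import xml.etree.ElementTree as ET

# A made-up record set to serialize in both formats.
records = [{"id": i, "name": "customer%d" % i} for i in range(100)]

# XML form: one element per record, like a verbose SOAP payload.
root = ET.Element("records")
for r in records:
    e = ET.SubElement(root, "record", id=str(r["id"]))
    ET.SubElement(e, "name").text = r["name"]
xml_bytes = ET.tostring(root)

# Binary form: the same data through a binary serializer.
binary_bytes = pickle.dumps(records)

print(len(xml_bytes), len(binary_bytes))  # the binary form is smaller
```

The exact byte counts are beside the point; what matters is that the binary encoding drops the repeated tag text, which is why it travels better over the network.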
Figure 2 LAN with Firewall
The architecture shown in Figure 2 is slightly more complex. There is now a minicomputer on the LAN and a Web server that is connected to the LAN through a firewall. The applications on these servers, and the clients that use them, can communicate using either Web Services or .NET Remoting, and the clients can connect with either. Because most Web servers can host Web Services, the .NET applications can all talk to the minicomputer through those services, and in turn the minicomputer's applications can call Web Services exposed on the .NET servers. .NET Remoting can also communicate with servers not running .NET, enabling .NET applications to communicate with servers and apps on different operating systems.
Of course, firewalls can be a problem. Since they frequently allow HTTP but no other protocols, what happens between the Web server and the other servers? Web Services will work in this scenario, since HTTP is their protocol. .NET Remoting also supports HTTP as the transmission protocol, but you can use a direct TCP channel instead for performance or security reasons.
Serialization in .NET
For the many readers who have asked about serialization in .NET, I'll provide a brief overview here. In general, a class provides a serialize method that writes the object's data out to a stream and a deserialize method that reads a stream and loads the data back into the class.
In the past, it was the programmer's job to handle this serialization by hand. Now this functionality is built into the .NET Framework. Think about a salesman who needs to download a set of data from a server to a laptop before going to see a client. He works with the data locally, then later reconnects to the server application to send the data back. The laptop application can connect to the server via a Web service, which calls a method in a class that retrieves the data. The data is serialized and the results are sent to the client, which can store them locally by saving the serialized XML stream. When the salesman restarts the application, it loads the XML stream back into a local copy of the class. Later still, the application sends the stream back to the server application, where it is deserialized and the database is updated.
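The round trip in the salesman scenario can be sketched in a few lines. This Python sketch uses ElementTree to play the role of the framework's XML serialization, and the CustomerData class and its fields are made up for illustration (in .NET, a DataSet does this work for you):

```python
import xml.etree.ElementTree as ET

class CustomerData:
    """Made-up stand-in for the class whose data travels to the laptop."""
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

    def serialize(self):
        # Write the object's data out as an XML stream.
        root = ET.Element("customer", id=str(self.customer_id))
        ET.SubElement(root, "name").text = self.name
        return ET.tostring(root)   # what the laptop saves locally

    @classmethod
    def deserialize(cls, stream):
        # Read the stream back into a fresh instance of the class.
        root = ET.fromstring(stream)
        return cls(int(root.get("id")), root.findtext("name"))

saved = CustomerData(7, "Contoso").serialize()   # before going offline
restored = CustomerData.deserialize(saved)       # after restarting the app
print(restored.customer_id, restored.name)       # 7 Contoso
```

The same stream that restores the local copy is what eventually travels back to the server to be deserialized and applied to the database.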
Other classes in .NET use serialization as well. For instance, the DataSet will automatically serialize and deserialize itself in and out of an XML stream. .NET Remoting and the queuing support use serialization, too.
Send your questions and comments for Ken to email@example.com.
Ken Spencer works for the 32X Tech Corporation (http://www.32X.com), which produces a line of developer courseware. Ken also spends much of his time consulting or teaching private courses.