Around the Horn

Engineer a Distributed System Using .NET Remoting for Process Intensive Analysis

Nate D'Anna

This article discusses:

  • Using .NET Remoting for distributed processing systems
  • Processor-intensive scientific analysis
  • Data acquisition fundamentals
  • Virtual instrumentation for measurement and automation
This article uses the following technologies:
C# and .NET Remoting

Code download available at: DistributedNETRemoting.exe (161 KB)

Contents

Use Case and Data Acquisition Fundamentals
Virtual Instrumentation
Is .NET the Right Choice?
Declaring and Publishing the Remote Analysis Object
Activating the Analysis Object
Passing Data Between the Client and Server
Using the Remote Object From a Client
Performance Improvements
A Natural Evolution
Conclusion

Imagine you're testing an F-16 Falcon fighter plane in a wind tunnel to identify structural defects. You are continuously acquiring 2.5 million measurements per second of time-sensitive wind tunnel data from this fighter plane into your PC. Yet this raw data is meaningless without processor-intensive analysis. With this analyzed data, you can pinpoint dangerous micro-fractures in the skin of a multimillion dollar plane. You need to perform this test in a wind tunnel that does not charge by the day or by the hour, but by the minute. To make an on-the-fly risk assessment of the plane, you need to be able to change the plane's pitch, yaw, and roll, and immediately see how these changes stress different parts of the plane. Because of the significant processing time required for analysis, processing the data locally can be too slow for the tester's needs.

In the past, CPU limitations required that all data had to be logged to file and then analyzed offline. If the analysis showed that changes needed to be made to the test, the entire system would need reworking and testing at significant cost. A better solution is to distribute the analysis functions to multiple servers, allowing nearly real-time computation and display of results.

Before the advent of the Microsoft® .NET Framework, creating a distributed cluster of computers to perform scientific analysis was often expensive in terms of hardware cost, programming and debugging time, and maintenance. You had to purchase expensive servers, spend time debugging network communication, design a distributed system completely different from a system deployed locally, and maintain a melting pot of error handling, data acquisition, networking, and analysis code. Now with .NET, you are promised the performance and power of C++ and DCOM, with the ease of programming in Visual Basic®.

In this article, I'll show you how I engineered a distributed computing system in C# to perform analysis of real-world data continuously acquired at high sampling rates. This architecture can be modified for other processor-intensive applications such as complicated database queries. First, I'll look at the real-world case of testing an F-16 for structural defects. Then, I'll discuss how and why .NET is a good fit for these systems. I'll implement a basic client-server system using .NET Remoting technology. Finally, I'll show you how to scale this system to use multiple servers. From there, you can use your imagination to add load balancing and more advanced distribution schemes.

Use Case and Data Acquisition Fundamentals

In the fighter plane example, 2.5 million samples per second (S/s) of sound data are acquired from an F-16 fighter plane in a wind tunnel. In a real-world situation, it wouldn't be just sound data from a single microphone, but from an entire array of sensors, like those shown in Figure 1. This data helps engineers identify critical aerodynamics and structural defects in the plane. For this example, the 2.5 million S/s are actually divided among 100 sensors, each sampled at 25,000 S/s. Each sample consists of an 8-byte double, which means a 20MB/s stream across the PCI bus, well under the 132MB/s maximum transfer rate of a 32-bit PCI bus. After streaming, the data arrives in PC RAM via direct memory access (DMA), completely bypassing the CPU.
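The bandwidth arithmetic above can be checked with a few lines of C# (a standalone sketch; all figures come straight from the numbers quoted in the text):

```csharp
using System;

// Verify the acquisition figures: 100 sensors, 25,000 S/s each,
// one 8-byte double per sample.
const int channels = 100;
const int samplesPerChannelPerSecond = 25000;
const int bytesPerSample = 8;

int totalSamplesPerSecond = channels * samplesPerChannelPerSecond;
int bytesPerSecond = totalSamplesPerSecond * bytesPerSample;

Console.WriteLine("Total rate: {0} S/s", totalSamplesPerSecond);  // 2500000
Console.WriteLine("PCI stream: {0} MB/s of the 132 MB/s bus maximum",
    bytesPerSecond / 1000000);                                    // 20
```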

Figure 1 Data Acquisition and Analysis


Before this data can be sent to RAM, it needs to be converted from analog to digital. A series of PC-based data acquisition (DAQ) modules digitize the analog signals into zeros and ones (see Figure 1). Each sensor is connected to a separate channel on the DAQ module, with samples digitized by the DAQ module's analog-to-digital (A/D) converter. A circular buffer in the RAM stores the raw acquired data. As each new block of data is transferred from the DAQ module via the PCI bus, it is copied to the next contiguous block of RAM. After this data has been transferred to the RAM, it still needs to be transferred from the DAQ driver memory to the memory allocated by application software. This second transfer is a software-timed operation initiated by the DAQ client application. After this operation, the raw data is available for analysis, data logging, and presentation. But if the CPU is busy doing other processing or handling interrupts, consecutive reads might be processed at unpredictable times. If consecutive transfers are delayed too much, previously acquired data is overwritten by new data before the app has transferred the original data to application memory. As a result, data is lost and the F-16 needs to be retested.
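The overwrite hazard described above can be sketched with a minimal circular buffer (an illustration only; a real DAQ driver's buffer management is more elaborate, and the class and member names here are my own):

```csharp
using System;

// Minimal circular buffer: the DAQ driver writes blocks in, the
// application reads blocks out. If the reader falls behind by more
// than the buffer's capacity, unread blocks are silently overwritten.
public class CircularBuffer
{
    private readonly double[][] _blocks;
    private long _written;  // blocks the DAQ side has written
    private long _read;     // blocks the application has read

    public CircularBuffer(int capacityInBlocks)
    {
        _blocks = new double[capacityInBlocks][];
    }

    // DAQ side: always succeeds, clobbering the oldest block if full.
    public void Write(double[] block)
    {
        _blocks[_written % _blocks.Length] = block;
        _written++;
    }

    // Application side: false means no new data, or data was lost.
    public bool TryRead(out double[] block)
    {
        block = null;
        if (_read == _written) return false;    // nothing new yet
        if (DataLost) return false;             // oldest unread block gone
        block = _blocks[_read % _blocks.Length];
        _read++;
        return true;
    }

    // True once the writer has lapped the reader.
    public bool DataLost
    {
        get { return _written - _read > _blocks.Length; }
    }
}

var buffer = new CircularBuffer(2);
buffer.Write(new double[] { 1.0 });
buffer.Write(new double[] { 2.0 });
buffer.Write(new double[] { 3.0 });  // reader never ran: block 1 is gone
Console.WriteLine(buffer.DataLost);  // True
```

In the article's scenario this is exactly the failure mode that forces a retest: the acquisition never pauses, so the application's reads must keep pace.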

If analysis is performed locally, it must fully execute between transfers, leaving enough time to transfer more data out of the circular buffer. Yet, for more complicated analysis routines, such as octave analysis, each analysis operation might take too long to complete—meaning the streaming DAQ data will overflow the circular buffer and data will be lost.

There are several options for preventing the loss of data:

Option 1 Buy a faster computer or more processors, but this is an expensive option, even as hardware costs decrease.

Option 2 Decrease the sampling rate of the data acquisition to reduce the amount of data to analyze, thus reducing the processor load. Unfortunately, this option is equivalent to losing data because acquiring too few data points results in less accurate measurements.

Option 3 Use the time between buffer transfers to pass the raw data to remote machines instead of performing the processor-intensive analysis locally. Each remote machine can then process the data locally, freeing processor time on the acquisition machine. As a result, the analysis algorithm's computational complexity can increase without requiring a correspondingly decreased sampling rate on the acquisition machine.

Virtual Instrumentation

The key component of the system discussed here is virtual instrumentation. With virtual instrumentation, powerful software technologies such as .NET are combined with modular, off-the-shelf hardware such as plug-in DAQ modules. This is drastically different from traditional measurement systems that consist of box instruments such as oscilloscopes and function generators. Traditional instrumentation systems limit you to the specifications of the vendor. If you need to implement a new analysis algorithm or more acquisition channels, you are forced to buy a completely new system—assuming that such a system exists.

With virtual instrumentation, you can use a standard PC or cluster of PCs to create measurement and automation applications. Virtual instrumentation empowers you to use software technologies, such as .NET, to define your hardware's behavior. As software and hardware technology continues to improve, you can change or expand to suit different application needs without entirely reimplementing your setup or buying a new system.

Another great feature of virtual instrumentation that makes it especially well-suited for use with high-channel count acquisitions is the ability to synchronize hardware channels within picoseconds (ps). You augment the PC with external buses like Real-Time System Integration (RTSI) and PCI Extensions for Instrumentation (PXI) to synchronize timing clocks between individual acquisition modules and different PC systems.

Is .NET the Right Choice?

The next question is whether .NET is the right option for processor-intensive analysis. In "Harness the Features of C# to Power Your Scientific Computing Projects," Fahad Gilani argued that managed code is ideal for scientific computing applications. Gilani noted that the common language runtime (CLR) provides fast performance, language interoperability, just-in-time (JIT) compilation, automatic memory management, object-oriented language constructs, and high-precision floating point operations that are perfectly suited for many scientific applications. For applications that require distributed computing, such as wind tunnel testing of F-16s, the following three major benefits make .NET Remoting and managed code a great choice.

Performance Despite the overhead of the CLR, C# and Visual Basic .NET stand up surprisingly well to C++, as shown by Gilani in his article. The .NET Framework also features several communication protocols. Because performance is crucial here, I will use a TCP channel as opposed to HTTP or Web services. However, changing from an efficient TCP channel to a more scalable HTTP protocol is as simple as changing a line in the configuration file.
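For illustration, that one-line channel swap might look like this in a remoting configuration file loaded via RemotingConfiguration.Configure (a hypothetical sketch; the article's own code registers its channel programmatically, and the port number is carried over from that code):

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <!-- Efficient binary transport for the intranet scenario -->
        <channel ref="tcp" port="4000" />
        <!-- For scalability/interoperability instead, swap the line
             above for: <channel ref="http" port="4000" /> -->
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```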

Ease of implementation A remotable object is implemented simply by inheriting from MarshalByRefObject. With this approach, it is easy to migrate code engineered for a single application domain on a single machine to code that can span multiple servers at once.

Flexibility and robustness The .NET Framework offers a wide variety of behaviors and programming paradigms, such as multiple activation models, events, asynchronous method calls, and one-way method calls.

Creating a distributed analysis system begins with a client and server. The client acquires DAQ data and displays the analyzed data. The server contains an object that performs the analysis and an application that hosts the remote object. Each time the DAQ client acquires new data, the client asynchronously invokes an analysis method on the server, passing the raw data in a serializable data packet. When the analysis completes, the server passes another serializable data packet containing the response analysis back to the client, as shown in Figure 2.

Figure 2 Distributed Analysis


For the scenario described in this article, analysis happens in an intranet environment rather than over the Internet, so for performance reasons I have not secured the network channel. If you do need security, you must ensure encryption and data integrity on the communication channel, possibly using the SSPI sample available in the MSDN® Library at .NET Remoting Authentication and Authorization Sample – Part II. In a real deployment such as testing an F-16 fighter, security would obviously be of prime importance.

Declaring and Publishing the Remote Analysis Object

Architecting a distributed analysis system begins with an object that needs to be available from other AppDomains. In this case, the AppDomains are running on separate computers, but remoting can also be used to communicate between AppDomains in the same process. In the F-16 testing use case, the object that performs remote scientific computing, AnalysisClass, is made remotable by inheriting from System.MarshalByRefObject (see the code in Figure 3). AnalysisClass performs a fast Fourier transform (FFT) to extract frequency domain data, but this class could just as easily be replaced with your own remotable class.

Figure 3 Remote Analysis Class

public class AnalysisClass : MarshalByRefObject
{
    public AnalyzedDataPacket RunAnalysis(RawDataPacket rawPacket)
    {
        // The frequency component between each sample
        // returned from the analysis function
        double df;

        // Perform FFT analysis on raw data
        double[] temp = Analyze(rawPacket.GetRawData(),
            1.0/rawPacket.SamplingFrequency, out df);

        // Create a data packet with the analyzed data
        AnalyzedDataPacket analyzedPacket =
            new AnalyzedDataPacket(rawPacket.TimeStamp, df, temp);

        // On the server, display packet information
        Console.Write("Time: {0}\n", analyzedPacket.TimeStamp.ToString());

        return analyzedPacket;
    }

    // Override InitializeLifetimeService to never expire
    public override object InitializeLifetimeService()
    {
        return null;
    }
}

When marshal by reference (MBR) is used, an instance of AnalysisClass is referenced on the server via a proxy of this object on the client's AppDomain. To the client, this proxy seems to be a local instance of the remote object, but it simply references the actual object on the server, as shown in Figure 2. The proxy communicates with the remote object, but it allows the programmer to be oblivious to this intermediate layer. Think of MBR as passing a pointer of the remotable object between the client and server. In reality, a pointer-like object called an ObjRef is passed between AppDomains. The ObjRef does not contain a memory address; instead it contains the server location (IP address or server name), the object that it references, and the registered channels.

To make AnalysisClass available to remote clients, this class is published by another application called AnalysisServer (see Figure 4). AnalysisServer listens for incoming calls and processes client requests. It's a simple console app that must be run on the server. To have it start automatically, place AnalysisServer.exe in the Startup folder so that it runs as soon as a user logs in. For further robustness, create AnalysisServer as a Windows® Service project, so that it runs without requiring a user to log in on the server machine.

Figure 4 Analysis Server

class AnalysisServer
{
    static void Main(string[] args)
    {
        // Create a channel to exchange data between the
        // server and client
        TcpChannel channel = new TcpChannel(4000);
        ChannelServices.RegisterChannel(channel);

        // On the server, create an instance of AnalysisClass
        AnalysisClass myRemoteObject = new AnalysisClass();

        // After the channel is registered, the object needs to be
        // registered with the remoting infrastructure, so
        // Marshal is called
        RemotingServices.Marshal(myRemoteObject, "AnalysisServer");

        Console.WriteLine("AnalysisClass has been published on the server\n");
        Console.WriteLine("Press enter to exit.");
        Console.ReadLine();

        RemotingServices.Disconnect(myRemoteObject);
        ChannelServices.UnregisterChannel(channel);
    }
}

Activating the Analysis Object

With MBR, there are three major choices for the behavior of the remote object. The two best known are server-activated objects (SAOs) and client-activated objects (CAOs). The activation model chosen can greatly affect the responsiveness of the system. If the analysis routine (RunAnalysis) is called synchronously from the client, the activation model chosen could also drastically affect data loss on the DAQ client. Server-activated objects are not instantiated on the server until the first time a method of the remote object is called, rather than when the new keyword is used. SAOs offer two unique modes: single call and singleton. With single call, a new object is created each time a client calls a method on the remote object. With singleton, one object services all requests from clients when the first client calls a method on the remote object.

Under both modes, the first time the RunAnalysis method is called, the server must deal with the overhead of creating a new instance of AnalysisClass. Using singleton, this overhead occurs only once, regardless of the number of clients. Under single call, this overhead is incurred during every method call. Knowing that RunAnalysis needs to be called multiple times per second, the added time consumed using single-call mode to instantiate AnalysisClass seems pointless and will reduce the responsiveness of the system. Singleton seems like the logical choice in that it only creates one instance of AnalysisClass for all clients, but the first method call will be slowed by the time required to instantiate the remote object. Regardless of whether you have chosen single call or singleton, these activation models aren't optimal, especially because SAOs do not allow activation of objects with non-default constructors.

An alternative to SAOs are CAOs, which do not wait to instantiate the remote object until the first method call. CAOs are instantiated when the client calls new to create the remote object. Thus, the first time a method is called, the remote object has already been instantiated. This means a shorter round-trip for the data to the server and back to the client. CAOs also allow you to use non-default constructors when instantiating remote objects. Another CAO characteristic is that a unique instance is activated by each client, which means that state information can be stored between successive calls to the remote object. This feature might be a requirement for some analysis routines, especially statistical analyses, which need to store results from previous analysis runs. Yet, for the analysis routines used in this example, such as FFTs and octave analyses, state information is not required. CAOs introduce unneeded memory and overhead for each additional client that is activated.

So you're probably wondering at this point how you can activate your remote object if neither server-activated nor client-activated objects are a perfect match? Luckily, a third technique exists that provides only one instance of the remote object for all clients, like a server-activated singleton combined with the performance of a CAO. As shown in Figure 4, an instance of the remote object is declared on the server. Then the object is made available to clients through marshaling to an ObjRef using the RemotingServices.Marshal method. Remember, an ObjRef is essentially a network pointer that can be passed across a registered channel to another AppDomain, even if it exists on a different computer.

Passing Data Between the Client and Server

In this example, I used MBR to declare AnalysisClass because the actual computation needed to occur remotely. However, simply passing references to the data between the client and the server is not sufficient—instead, the actual values, in addition to information that identifies each data packet, must be exchanged. In contrast to MBR, marshal by value (MBV) allows AppDomains to exchange actual data—simply add the [Serializable] attribute to the MBV class or structure. Thus, MBV fits perfectly for creating a DataPacket class to share data between the client and server.

The code in Figure 5 declares an abstract base class named DataPacket, which stores identifying information such as a time stamp. Two classes inherit from this base class, representing the actual data being passed between the client and server—RawDataPacket and AnalyzedDataPacket. These classes store information that uniquely identifies the DataPacket and an array of doubles that stores the sensor data. As shown in Figure 3, the RunAnalysis method accepts an object of type RawDataPacket and returns an object of type AnalyzedDataPacket to the caller.

Figure 5 Marshal by Value Data Packet Objects

[Serializable]
abstract public class DataPacket
{
    private DateTime _timeStamp;

    protected DataPacket(DateTime timeStamp)
    {
        _timeStamp = timeStamp;
    }

    public DateTime TimeStamp
    {
        get { return _timeStamp; }
    }
}

[Serializable]
public class RawDataPacket : DataPacket
{
    private double[] _rawData;
    private double _samplingFrequency;

    public RawDataPacket(DateTime timeStamp, double samplingFrequency,
        double[] data) : base(timeStamp)
    {
        _samplingFrequency = samplingFrequency;
        _rawData = data;
    }

    public double SamplingFrequency
    {
        get { return _samplingFrequency; }
    }

    public double[] GetRawData()
    {
        return _rawData;
    }
}

[Serializable]
public class AnalyzedDataPacket : DataPacket
{
    private double[] _analyzedData;
    private double _df;

    public AnalyzedDataPacket(DateTime timeStamp, double df, double[] data)
        : base(timeStamp)
    {
        _df = df;
        _analyzedData = data;
    }

    public double Df
    {
        get { return _df; }
    }

    public double[] GetAnalyzedData()
    {
        return _analyzedData;
    }
}

Using the Remote Object From a Client

The final step to engineering this distributed analysis system requires creating a client that continuously acquires sensor data from a DAQ module, passes this data to the remote server, then displays the analyzed data. You can begin by obtaining a proxy from the remote analysis server:

_analysisClass = (AnalysisClass)Activator.GetObject(
    typeof(AnalysisClass),
    String.Format("tcp://{0}:4000/AnalysisServer", computerName));

From the client application, _analysisClass can be treated as a local instance of AnalysisClass, but it is really just a proxy to the remote instance of this class on the server.

The next step is to configure the data acquisition. In Figure 6, the Configure method creates an instance of a DAQ module with the specified sampling rate and number of samples to read. The sampling rate (measured in samples per second) indicates how fast the DAQ module acquires sensor data and transfers it to the circular buffer. Samples to read indicates how many samples to transfer from the circular buffer to the application memory of the client. In order to transfer data from the circular buffer, I create an event callback called DAQBoard_SampleSetComplete that is invoked each time additional DAQ data is available. When this event is fired from the DAQ module, raw DAQ data is transferred into application memory. You can retrieve the DAQ data with a call to e.GetData, which is the equivalent of transferring data from the circular buffer to application memory.
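The timing constraint these two settings create is easy to quantify: a SampleSetComplete event arrives every samplesToRead/samplingRate seconds, and all per-callback work on the acquisition machine must fit inside that window (a standalone sketch; the 10 percent read size matches the setting used later in the performance comparison):

```csharp
using System;

// Time budget between DAQ callbacks. Everything done per callback on
// the acquisition machine must finish within this window, or the
// circular buffer eventually overflows and data is lost.
double CallbackPeriodMs(int samplingRate, int samplesToRead)
{
    return samplesToRead * 1000.0 / samplingRate;
}

int samplingRate = 25000;               // S/s for one channel
int samplesToRead = samplingRate / 10;  // read 10% of a second at a time

Console.WriteLine("Budget per callback: {0} ms",
    CallbackPeriodMs(samplingRate, samplesToRead));  // 100 ms
```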

Figure 6 Data Acquisition Client

public class DAQViewer : System.Windows.Forms.Form
{
    // Remote analysis class and delegate for remote analysis
    private AnalysisClass _analysisClass;
    private delegate AnalyzedDataPacket AnalysisDelegate(
        RawDataPacket rawDataPacket);
    private AnalysisDelegate _runAnalysisDelegate;

    // Delegates to marshal data to the UI thread
    private delegate void RawDataUIDelegate(double[] data, int counterSent,
        DAQSampleSetStatus daqStatus, double timeBetweenReads);
    private delegate void AnalyzedDataUIDelegate(double[] data,
        int counterReceived, double df);

    // DAQ settings
    private int _samplingRate, _samplesToRead;
    private DAQBoard _daqBoard;

    // Network packet counters
    private int _counterSent;
    private int _counterReceived;

    public DAQViewer()
    {
        InitializeComponent();
        _counterReceived = 0;
        _counterSent = 0;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (components != null) components.Dispose();
            if (_daqBoard != null) _daqBoard.Dispose();
            base.Dispose(disposing);
        }
    }

    // Configure remoting and the DAQ board
    public void Configure(string computerName, bool remotingEnabled,
        int samplingRate, int samplesToRead)
    {
        _samplesToRead = samplesToRead;
        _samplingRate = samplingRate;

        // If remoting is enabled, retrieve an ObjRef of an instance of
        // AnalysisClass on the analysis server. Else, create an
        // instance of AnalysisClass locally
        if (remotingEnabled)
        {
            _analysisClass = (AnalysisClass)Activator.GetObject(
                typeof(AnalysisClass),
                String.Format("tcp://{0}:4000/AnalysisServer",
                    computerName));
            this.Text = "Connected to " + computerName;
        }
        else
        {
            _analysisClass = new AnalysisClass();
            this.Text = "Connected to localhost";
        }

        // Create an instance of a DAQ board with the specified
        // sampling rate and samples to read
        _daqBoard = new DAQBoard(_samplingRate, _samplesToRead);

        // Configure the SampleSetComplete event to call
        // DAQBoard_SampleSetComplete when new DAQ data is
        // ready on the circular buffer
        _daqBoard.SampleSetComplete +=
            new DAQSampleSetCompleteEventHandler(
                DAQBoard_SampleSetComplete);

        // Configure a delegate to call the RunAnalysis
        // method on the remote machine
        _runAnalysisDelegate =
            new AnalysisDelegate(_analysisClass.RunAnalysis);
    }

    // This method is called when new DAQ data is ready.
    // It then sends the raw data to the analysis server
    private void DAQBoard_SampleSetComplete(object sender,
        DAQSampleSetCompleteEventArgs e)
    {
        // Get data from the DAQ board
        double[] data = e.GetData();

        // Safely update the number of packets sent
        System.Threading.Interlocked.Increment(ref _counterSent);

        // Create a data packet to send to the analysis server
        RawDataPacket rawDataPacket =
            new RawDataPacket(DateTime.Now, _samplingRate, data);

        // Asynchronously invoke the remote analysis
        // method called RunAnalysis on the remote machine
        _runAnalysisDelegate.BeginInvoke(rawDataPacket,
            new AsyncCallback(AnalysisComplete), null);

        // Marshal data to the UI thread
        object[] args = new object[] { e.GetData(), _counterSent,
            e.Status, e.TimeBetweenReads };
        this.Invoke(new RawDataUIDelegate(UpdateUIRawData), args);
    }

    // This method is called when analysis of the
    // data on the analysis server has completed
    private void AnalysisComplete(IAsyncResult ar)
    {
        // Retrieve a packet of analyzed data
        AnalyzedDataPacket analyzedDataPacket =
            _runAnalysisDelegate.EndInvoke(ar);

        // Safely update the number of packets received
        System.Threading.Interlocked.Increment(ref _counterReceived);

        // Marshal data to the UI thread
        object[] args = new object[] {
            analyzedDataPacket.GetAnalyzedData(),
            _counterReceived, analyzedDataPacket.Df };
        this.Invoke(
            new AnalyzedDataUIDelegate(UpdateUIAnalyzedData), args);
    }

    // Update UI for raw data display
    private void UpdateUIRawData(double[] data, int counterSent,
        DAQSampleSetStatus daqStatus, double timeBetweenReads)
    {
        // Display raw data on the raw graph
        rawDataCtrl.PlotY(data, 0.0, 1.0/_samplingRate);

        // Update the number of packets sent on the sent control
        sentCtrl.Value = (double)counterSent;
        if (daqStatus == DAQSampleSetStatus.Overflow)
            overflowCtrl.Value = true;

        // Update the Delay Between DAQ Reads control
        delayCtrl.Value = timeBetweenReads;
    }

    // Update UI for analyzed data display
    private void UpdateUIAnalyzedData(double[] data, int counterReceived,
        double df)
    {
        // Display analyzed data on the analyzed graph
        analyzedDataCtrl.PlotY(data, 0.0, df);

        // Update the number of packets received on the received control
        receivedCtrl.Value = (double)counterReceived;
    }

    // Start acquisition on the DAQ board
    public void Start()
    {
        _daqBoard.StartAcquisition();
    }

    // Stop acquisition on the DAQ board
    public void Stop()
    {
        _daqBoard.StopAcquisition();
    }
}

Within the DAQBoard_SampleSetComplete callback, the raw data is encapsulated, along with a time stamp, in an instance of RawDataPacket. The instance of RawDataPacket is sent to the remote server with a call to the _analysisClass.RunAnalysis method on the remote object. You could simply call this method synchronously with:

AnalyzedDataPacket analyzedDataPacket = myAnalysisClass.RunAnalysis(myRawDataPacket);

However, if this blocking call to the remote server takes too long to complete, data loss could result because the client machine will not be able to service subsequent DAQBoard_SampleSetComplete events. Thus, you will invoke the RunAnalysis method asynchronously, freeing the DAQ client to process subsequent events to prevent data loss. To accomplish this, use a delegate that asynchronously calls RunAnalysis with BeginInvoke:

_runAnalysisDelegate = new AnalysisDelegate(_analysisClass.RunAnalysis);
_runAnalysisDelegate.BeginInvoke(rawDataPacket,
    new AsyncCallback(AnalysisComplete), _runAnalysisDelegate);

When the analysis has completed on the analysis server, AnalysisComplete is called on the client. From this AnalysisComplete callback, you call EndInvoke to obtain the analyzed data:

AnalyzedDataPacket analyzedDataPacket = _runAnalysisDelegate.EndInvoke(ar);
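As an aside, delegate BeginInvoke/EndInvoke belongs to the .NET Framework remoting era; the same fire-and-continue idea can be sketched with Task.Run on later runtimes (a hypothetical analog, not the article's code; the Analyze method here is a local stand-in for the remote RunAnalysis call):

```csharp
using System;
using System.Threading.Tasks;

// Stand-in for the remote RunAnalysis method (placeholder transform,
// not the article's FFT).
double[] Analyze(double[] raw)
{
    var result = new double[raw.Length];
    for (int i = 0; i < raw.Length; i++)
        result[i] = raw[i] * 2.0;
    return result;
}

var rawData = new double[] { 1.0, 2.0, 3.0 };

// Hand the packet off so the acquisition thread stays free (~BeginInvoke)
Task<double[]> pending = Task.Run(() => Analyze(rawData));

// Collect the result when it is ready (~EndInvoke)
double[] analyzed = await pending;
Console.WriteLine(string.Join(", ", analyzed));
```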

AnalyzedDataPacket contains timing and identification information, in addition to the analyzed data that can then be displayed. The GetAnalyzedData method is called to obtain the analyzed data from the AnalyzedDataPacket:

double[] data = analyzedDataPacket.GetAnalyzedData();

Figure 7 Displaying the Data


With this array of analyzed data, you can now display the data however you see fit (see Figure 7). Remember though, that all user interface updates must happen from the UI thread, so you must marshal the analyzed data to the UI thread like so:

object[] args = new object[] { analyzedDataPacket.GetAnalyzedData(),
    _counterReceived, analyzedDataPacket.Df };
this.Invoke(new AnalyzedDataUIDelegate(UpdateUIAnalyzedData), args);

Figure 8 shows the architecture for the DAQ client.

Figure 8 DAQ Client Architecture


Performance Improvements

In this example, I used two 1.2GHz Pentium III machines, each with 256MB of RAM, running version 1.1 of the .NET Framework. Figure 9 shows the maximum number of samples per second that could be acquired before data loss resulted on the DAQ module. I compared the maximum sample rate when the analysis was performed locally versus remotely, and for consistency, the number of samples read was kept at 10 percent of the sampling rate. You can see that the benefits of remoting become clearer as the computational complexity increases.

Figure 9 Analysis of Performance


So far, I have only used one analysis server to distribute the load of time-consuming analysis. Extending this architecture to multiple analysis servers allows even faster sampling from the DAQ client and a more responsive display. Faster sampling allows you to ensure that the phenomena that you are measuring are more accurately represented. Increasing responsiveness allows analysis results to be viewed sooner as changes are made to the F-16.

To quickly scale this application to multiple servers, you can create a top-level application that instantiates one instance of DAQViewer for each available analysis server, as in Figure 10. DAQViewer is the class created earlier to represent the DAQ client (refer back to Figure 6). Each element in the array of DAQViewers represents a different DAQ module that passes its data to a separate server. From this top-level application, you pass the IP address of each analysis server to its DAQViewer.

Figure 10 Top-Level Application

public class MainForm : System.Windows.Forms.Form
{
    private System.Windows.Forms.TextBox remoteComputersCtrl;

    // Create an array of DAQViewers to represent each available
    // DAQ board and analysis server
    private DAQViewer[] _daqViewers;
    ...

    [STAThread]
    static void Main()
    {
        Application.Run(new MainForm());
    }

    private void startAcquisition_Click(object sender, System.EventArgs e)
    {
        int numComputers, samplingRate, samplesToRead;

        // Get form values for sampling rate, samples to read,
        // and number of computers
        samplingRate = Convert.ToInt32(samplingRateCtrl.Value);
        samplesToRead = Convert.ToInt32(samplesToReadCtrl.Value);
        numComputers = remoteComputersCtrl.Lines.Length;

        // For each analysis server name, create a new DAQViewer
        // to acquire, analyze, and display DAQ data
        _daqViewers = new DAQViewer[numComputers];
        for (int i = 0; i < numComputers; i++)
        {
            _daqViewers[i] = new DAQViewer();
            _daqViewers[i].Show();
            _daqViewers[i].Configure(remoteComputersCtrl.Lines[i],
                enableRemoting.Checked, samplingRate, samplesToRead);
            _daqViewers[i].Start();
        }
    }

    private void stop_Click(object sender, System.EventArgs e)
    {
        // Stop data acquisition for all instances of DAQViewer
        if (_daqViewers != null)
            foreach (DAQViewer daqViewer in _daqViewers)
                daqViewer.Stop();
    }
}

A Natural Evolution

Asynchronous method calls work well to pass raw data to the remote object and return the analyzed data to the same application. In this case, the client acquires the data and displays the analyzed results—meaning a user has to be logged on locally to view the results. In a wind tunnel or other harsh environment like a nuclear plant, you can imagine the need for an acquisition system that is physically separate from the data viewers. Also, some applications require multiple data viewers. For these cases, asynchronous method calls quickly reveal their limitations. Let's now expand this basic server-client architecture.

Figure 11 Multitier System


Events could replace asynchronous method calls. If you want the data to be displayed locally, you can register the AnalysisComplete event on the DAQ client. Or, the AnalysisComplete event could just as easily be registered on another machine. If multiple analysis servers were available, the AnalysisComplete events for each server could be registered to one display machine. This machine would be responsible for displaying and logging the results, thus leaving the acquisition machine free to acquire data faster. To create multiple viewers, use ASP.NET or a Web service on the server to publish the analyzed results.

Thus far, I have known the network address of all available machines in advance. Another logical progression would be to use a load-balancing server that receives all acquired DAQ data. This server would then dynamically poll for available servers, possibly using remoting, and pass raw data to available servers. Each server would then pass the analyzed data to a central display machine (see Figure 11).

Conclusion

Managed code promises to combine the power of C++ and DCOM with the ease of use of Visual Basic. I put C# and the .NET Framework to the test to see if it could back up this bold claim in a real-world application. C# and .NET successfully met this challenge, proving that off-the-shelf hardware combined with flexible software can save you programming time and money. In the future, I look forward to technologies like PCI Express, dual-core processors, and the .NET Framework 2.0, all of which will allow even faster sampling and greater processing speeds.

Nate D'Anna is a product manager for Measurement Studio at National Instruments in Austin, TX. Nate graduated with a B.S. in Computer Science from Vanderbilt University and can be reached at nate.danna@ni.com.