Architecting Disconnected Mobile Applications Using a Service Oriented Architecture
Windows Mobile-based devices
Windows Mobile 2003 Second Edition software for Pocket PCs
Windows Mobile 2003 Second Edition software for Smartphones
Microsoft® SQL Server™ CE
Microsoft .NET Compact Framework
Summary: The Windows Mobile platform, which includes Microsoft .NET Compact Framework and Microsoft SQL Server CE, encapsulates the complex tasks of communication management and data exchange while your device is in a disconnected state. (16 printed pages)
Mobile devices, such as the Windows Mobile-based Pocket PCs, have grown significantly popular over the last several years. In the car, on a plane, or out in the middle of nowhere, applications on the device can operate without a connection to any other computers, the Internet, or intranet. Devices can then partner with a desktop computer system by means of a modem, network card, or simply by placing the mobile device in a cradle. New or modified data on either the device or desktop computer system is then automatically migrated and synchronized.
Currently, the major disconnected applications for these devices are used for e-mail and contact management. More and more businesses, however, want to enable users with applications beyond basic e-mail and contact management. Businesses want to bring functionality from their enterprise applications to the mobile device.
There are many things a developer needs to take into account when identifying scenarios suitable for the mobile device, such as working with mobile form factor devices and designing and implementing mobile business functionality. The core functionality of these applications, communication management and data exchange between device and enterprise servers, can be the most difficult to implement because you are dealing with intermittent connections and data that can be modified offline. Communication issues can involve working with various protocols, socket programming, and managing communication peripherals such as modems and network cards. Data exchange issues can involve guaranteeing delivery of data, handling delivery failures, and effective delivery of data of varying types and sizes. This core functionality can be time consuming and expensive to implement and test.
Microsoft has significantly reduced the complexity of building mobile enterprise applications with the Windows Mobile platform including the Microsoft® .NET Compact Framework and Microsoft SQL Server™ CE. These technologies encapsulate the complex tasks of communication management and data exchange. With the Windows Mobile platform, enterprise developers can build mobile applications quicker, cheaper, and with development talent that is more widely available.
Most developers construct applications with the assumption that the connection to the server-side data store is always available to the application. If the connection becomes unavailable, the application might not function. For Web-based applications, if the connection is not available, the application is also unlikely to be available.
When developing disconnected applications, companies need to make a critical decision about how they want to enable enterprise applications on a mobile device. On the surface, it looks feasible to assume connectivity by means of network cards, modems, or telephone carrier networks. Wireless carrier coverage in many countries is nearly complete, and this is a big justification for some companies to retool their Web applications for rendering in a mobile browser.
But total coverage isn't a guarantee. Users are going to need to use applications when their modem or network card is unavailable or when their carrier network coverage is degraded or unavailable. What if the user is in an elevator or underground or the local network cells are down? Even if coverage is available, a roaming user might not want to pay the extra fees to connect. For these reasons, mobile applications should be designed not to rely on a network connection to provide functionality. Connectivity can be the best-case scenario for an application but should not be required for using the application.
Note Two typical and well-defined scenarios that are enabled by disconnected applications are Field Service and Sales Force Automation programs. Further information about implementing these scenarios can be found in the article Northwind Pocket Service: Field Service for Windows Mobile-based Pocket PCs.
Development models for mobile applications range from thin clients built by using Web technologies to fully autonomous smart clients that exchange data with the server when connectivity permits. Full autonomy is the most compelling case for business applications because of the unlikely availability of full network coverage at all times. Autonomous clients can be built by using technologies such as Web page caching and forwarding as well as fully functional applications that can use files or relational databases to store data.
The autonomous mobile application typically needs to interact with a server-side business application that needs to be mobilized. This is an important consideration for developers because this interaction must often resolve an impedance mismatch between the mobile application architecture and the existing business application architecture, which could be caused by one or more of the following factors:
- Mobile applications that contain a subset of the total business data
- Mobile applications that contain a subset of the total business rules
- Data schema and storage engine differences
Imagine a disconnected autonomous mobile application that requires a product catalog. A salesperson using this application offline might not have the most up-to-date prices on the company products. Similarly, it is unlikely that an application running on a Pocket PC device contains a comprehensive tax calculator for all regions of the world.
Data stored by most business applications does not make sense unless it is viewed through the business logic of that application. One way to look at mobile application development is that the mobile application is simply an agent for a server-side application and not the final authority for business processes. It initiates processes and submits data and then lets the server-side application complete and validate the input through the use of standard interfaces. This division of labor, where the server is the master, is not always possible, but it is valid for many scenarios. When the mobile device is used as the master for business data storage and execution of business rules, difficult requirements are imposed on the device, such as data durability and business logic processing.
A simple time clock application is an example: a complicated Enterprise Resource Planning (ERP) product supports employee time management functionality. This functionality needs to be accessed by an employee badge scanner application running on a low-powered device. While the ERP application cannot trust the badge scanner application to run critical business logic, such as vacation balance update, it does allow the application to tell it when the employee comes and goes to work. In this interaction, the ERP application is the master. It protects its business logic and data from the outside world but allows the outside world to submit input, which it then either accepts or rejects. A standard interface is needed to bridge the gap between the simple scanner application (the agent) and the complex ERP application (the service). This paradigm is known as a service-oriented architecture.
Service-oriented architecture (SOA) aims to solve the problem of distributed application development. A service can be described as an application that exposes a message-based (asynchronous) interface, encapsulates its data, and manages ACID (Atomic, Consistent, Isolated, Durable) transactions (using the two-phase commit protocol) within its data sources. Generally, SOA is defined as a set of service providers that expose their functionality through public interfaces. The interfaces exposed by the service providers can then be individually consumed or aggregated into composite service providers.
While the composition of services into aggregate services is a powerful concept in SOA, an equally powerful concept is the definition of individual service characteristics. These characteristics are what make SOA highly suitable for mobile applications, where client and server typically share very little in terms of hardware and software capabilities.
Services may also provide RPC-style interfaces if requirements dictate. However, request-response scenarios are typically avoided by service builders. Synchronous request-response scenarios make it difficult to decouple services from their clients and make it difficult for a service to have well-defined service level agreements, such as uptime and performance guarantees.
Services are typically constructed in the following four tiers:
- Interface
- Business process façade
- Business rules
- Data access
Note An additional tier, the presentation tier, is constructed independently and is not part of a service. Like other clients, the presentation tier communicates with the service by means of one of the service interfaces.
The interface is the bridge between clients of a service and the service's business process façade. A single service can have multiple interfaces, such as Web services, queuing systems, or simple file shares. Generally, a service provides a coarse stateless interface — for example, UpdateCustomer(Customer) instead of UpdateCustomerName(id, string).
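To make the contrast concrete, the following sketch (Python, with hypothetical names) shows a coarse, stateless UpdateCustomer-style call that takes the whole entity in one message, rather than exposing chatty per-field setters that would require the service to hold state between calls:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: int
    name: str
    email: str

class CustomerService:
    """Hypothetical service exposing a coarse, stateless interface."""
    def __init__(self):
        self._store = {}

    def update_customer(self, customer: Customer) -> None:
        # The whole entity travels in one call; no per-field round trips,
        # and no server-side session state is needed between calls.
        self._store[customer.customer_id] = customer

svc = CustomerService()
svc.update_customer(Customer(1, "Contoso Ltd.", "info@contoso.example"))
```

A granular alternative such as `update_customer_name(id, name)` would force the client into many small calls, which is exactly what a service interface avoids.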
The business process façade is called by means of the interfaces and then implements the controller pattern to combine discrete business rules into more coarse processes. The façade also provides the translation facilities to convert data in the private internal schema of a service to the service's public schema.
The business rules encapsulate atomic business operations that can be orchestrated by the façade. Correct factorization of the business rule layer is crucial for reuse of code within a service. An example of the difference between a business process and a business rule is the following: accepting an order document from a customer is a business process, whereas checking the availability of line items and calculating the sales tax are business rules. Services typically expose only coarse processes and not discrete rules.
The complete encapsulation of data access is one of the most important tenets of a service. Business processes and rules work with one or more data access APIs in a service. Services guard their data by enforcing all business and data integrity rules and do not allow outside clients to participate in any ACID transactions that would put a service at the mercy of the client in terms of possible concurrency and locks. (Consider a client that enters into a transaction with a service and then proceeds to display a message box.) Services are responsible for the state of data in their control; they are the final word for their data.
The cost associated with constructing services with coarse interfaces, fully encapsulated business logic, and data access layers is that building client applications becomes more difficult. Business logic is hidden from client applications. Client applications cannot query the state of data. A solution comes in the form of agents.
Agents can be defined as "smart proxies" that serve as intermediaries between a service and its clients. Agents can be simple proxy classes or full-fledged applications. Agents build on the location independence and availability provided by services. They also add functionality that makes interacting with services easier and sometimes more scalable. One example of agent functionality is the shopping basket, which gathers information to be submitted to a service. The shopping basket functionality in the agent shields the client from having to build a complicated order request with multiple line items for submittal to a service.
Users typically work with applications by doing multiple small operations. For example, to create an e-mail message, a user has to set the recipients, the subject, the body, and so on. The agent provides a granular interface to the application developer. It is the responsibility of the agent to validate each discrete operation and then submit the sum of the operations to the back-end service. An agent can have reference data that allows requests to be constructed and validated prior to submission. Such validation can reduce the number of failures inside the service.
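A minimal sketch of this pattern (hypothetical names, with a product catalog standing in for the agent's read-only reference data): each granular call is validated locally, and the sum of the operations becomes one coarse request document for the service:

```python
class OrderAgent:
    """Hypothetical agent: a granular, validating interface over a coarse service request."""
    def __init__(self, catalog):
        self.catalog = catalog  # read-only reference data: product -> price
        self.lines = []

    def add_line(self, product, qty):
        # Validate each discrete operation against reference data before
        # anything is submitted, reducing failures inside the service.
        if product not in self.catalog:
            raise ValueError("unknown product")
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append({"product": product, "qty": qty,
                           "price": self.catalog[product]})

    def build_request(self):
        # The sum of the granular operations becomes one coarse request.
        return {"lines": list(self.lines),
                "total": sum(l["qty"] * l["price"] for l in self.lines)}
```

The service still re-validates the submitted document; the agent's checks only improve the odds that it will be accepted.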
The agent is a facilitator that is not a part of the service and is not trusted by the service. All interaction between an agent and a service is authenticated, authorized, and validated by the service. A real-world example of an agent is an insurance broker. The broker provides a friendly interface to an insurance company (service). The agent knows how to create valid insurance quotes and policies. Most of the time, the agent's work is accepted by the insurance company, but the insurance company reserves the right to make final judgment.
Data management is another differentiator between an agent and a service. Services manage multi-user writable data and contain business logic that protects the integrity of this data. Because all users of the service must share the same writable data, it is usually difficult to scale out a service. A service is typically scaled up. The agent, on the other hand, deals with read-only reference data and single-user writable data. Because agents do not deal with writable data for all users, they can easily be scaled out. An agent supporting a thick client may have writable data for a single client that it maintains to support offline scenarios, while an agent in a Web form may have data for blocks of partitioned users. In both cases the agent constructs service requests by combining its read-only reference data and the per-user writable data into service requests.
Service-Oriented Architecture and Disconnected Mobile Applications
How does service-oriented architecture enable the building of disconnected mobile applications? The characteristics of disconnected mobile (autonomous) applications map remarkably well to services and agents. There is usually a back-office application that encapsulates data and business logic and has some of the characteristics of a service. There are one or more clients that interact with the service. If the clients are disconnected, they have facsimiles of business logic, data, and reference data, much like agents. Following the guidelines of service-oriented architecture makes factoring disconnected mobile applications a much easier task.
Most back-office applications provide some sort of a service interface and façade. The interface may be a queuing system, Web services, or a file share. The first task in building a disconnected mobile application is to either identify or build a service interface that is compatible with the mobile platform. For example, a file share or a queuing system may not be the appropriate way to communicate with a service from a public network.
The next step is to identify a subset of functions in the service interface that are expected to be available while the client is disconnected. For example, in a sales force automation application, users should be able to create contacts and capture opportunities while their devices are disconnected. (Conversely, if a product availability check is required, a synchronous connected function is more appropriate.) If there is a set of functionality that has to run on the client while disconnected, client-side requirements for business logic, offline data, and reference data must be identified. Typically it is not possible or prudent to replicate all business logic, data, and reference data to a mobile disconnected client. Subsets of each must be identified and used as inputs into the design of the service agent.
The following steps provide a summation of how to design a mobile business application by applying the service-oriented architecture:
- Identify the application to be mobilized.
- Build or identify a service interface that is compatible with the communication mechanisms available to the mobile device.
- Make sure the service interface provides the ability to retrieve the data and reference data needed by the client in disconnected mode.
- Identify which functions in the service interface make sense in a disconnected environment. Build these as one-way calls.
- Identify which functions in the service interface make sense only in a connected environment. Build these as request-response.
- For the functions that are available in the disconnected scenario, build an agent.
- Identify the subset of business logic, data, and reference data that the agent needs.
- Build a granular interface into the agent that hides the coarse service interface.
- Build a user interface that uses the granular interface provided by the agent to run business functions.
Figure 1 shows the layers in a disconnected mobile application.
Figure 1. Layers in a disconnected mobile application
Disconnected Business Logic
There are two roles for business logic in a disconnected application: to improve the user experience by simulating what would have happened if connectivity was present, and more importantly, to minimize errors when the real business logic is run by the server-side services. An example of the former role can be seen in a simple appointment-setting scenario. When an appointment is created, the duration is calculated and displayed to the user from the start and end time. Calculating the duration is not a critical operation but helps the user use the disconnected application. In the latter role, data validation and product serial number validation are examples of business logic that is typically run by disconnected clients to reduce later rejections of offline transactions by the server-side service.
How does a developer decide what business logic to build into an offline agent? Following are some basic considerations:
- Business logic related to the state of a record is typically easy to build offline. If an opportunity is closed, do not allow an appointment to be created. When an opportunity is closed, create a history record.
- Business logic related to the state of multiple records is difficult to build. Consider an operation to delete a customer record. The master business logic will not allow a customer record to be deleted if there are any open orders or invoices against it. The disconnected application can be assumed to have stale or missing data, so implementing this business logic will be troublesome. What if the orders or invoices are not replicated to the device?
- Data validation rules are good candidates for offline scenarios. Data validation can be based on two types of reference data: metadata that defines the data schema and business data such as product catalogs and price lists.
- Cleanup logic for offline scenarios, such as cascading delete operations, is also important.
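As an illustration of the data validation point, the following sketch (hypothetical schema and field names) validates a record against metadata that defines the data schema; the same pattern extends to business reference data such as product catalogs and price lists:

```python
# Hypothetical metadata describing the data schema for one record type.
SCHEMA = {"serial": {"required": True, "len": 8},
          "region": {"required": True, "choices": {"US", "EU"}}}

def validate(record, schema=SCHEMA):
    """Return a list of validation errors; empty means the record passes."""
    errors = []
    for field, rules in schema.items():
        value = record.get(field)
        if rules.get("required") and value is None:
            errors.append(f"{field}: required")
            continue
        if "len" in rules and len(value) != rules["len"]:
            errors.append(f"{field}: bad length")
        if "choices" in rules and value not in rules["choices"]:
            errors.append(f"{field}: not allowed")
    return errors
```

Running such checks offline reduces the number of transactions the server-side service later rejects.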
Transaction Replay, Data Concurrency, and Data Synchronization
Most data synchronization implementations depend on database replication, such as Microsoft SQL Server Merge Replication. Some problems with database replication in disconnected mobile applications are the following:
- Services guard their data sources. They do not want clients to access their databases directly.
- Services typically aggregate data in the business logic layer for presentation. If databases are directly accessed, the amount of business logic that must be replicated to the offline client is greatly increased.
- Offline clients have a variety of hardware and software capabilities. Database replication tightly binds client architecture to the server architecture.
- Replicating tables from the client to the server can bypass business logic. The classic example of this problem is the bank debit and credit transaction, where a minimum balance must be maintained. Replication schemes usually replicate the current values, so the history of transactions is lost.
- Service interfaces should provide mechanisms to manage optimistic concurrency.
For a disconnected application to work offline, it must record transactions that are created while offline and play them back against the service application. The transactions can be recorded in place in the data record or queued up. The service agent determines when a transaction should be created. Most disconnected applications are built with transaction queues. The user performs multiple actions against the user interface, which performs equivalent actions against the agent, which decides when the transaction document should be created and enqueued. An agent will typically enqueue a document when the user initiates a submit or save operation. It is the agent's job to execute the offline business logic, simulate updates in the offline store, and enqueue the transaction document for replay against the service interface.
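The flow just described can be sketched as follows (hypothetical names): the agent simulates each update in its offline store so the user interface stays consistent, and enqueues one transaction document per save operation for later replay against the service interface:

```python
from collections import deque

class OfflineAgent:
    """Sketch of an agent that simulates updates locally and queues them for replay."""
    def __init__(self):
        self.offline_store = {}   # simulated local data store
        self.outbox = deque()     # transaction documents awaiting replay

    def save_contact(self, contact_id, fields):
        # Simulate the update in the offline store so the UI reflects it...
        self.offline_store[contact_id] = dict(fields)
        # ...and enqueue one transaction document for later replay.
        self.outbox.append({"op": "save_contact", "id": contact_id,
                            "fields": dict(fields)})

    def replay(self, service):
        # Drain the queue once connectivity is available.
        while self.outbox:
            service.submit(self.outbox.popleft())
```

Because the outbox is a FIFO queue, replay also preserves the order in which the transactions were created.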
Transaction replay against the service interface should be reliable—that is, the system should either deliver the transaction to the server-side service or should report an error to the user who created the transaction. The easiest way to build such a system is with a queuing product. These products have reliability features built in. Reliable delivery can also be built on top of RPC-style interfaces as multi-step operations.
It is important to note that reliability is hard to achieve with synchronous interfaces, such as Web services. This issue will be addressed in the near future as the Web service stack evolves. Currently, the main problem with synchronous calls to the service interface is that if there is a failure, the client or server does not know whether the other was notified of the failure. For example, the transaction could be delivered successfully to the service but the client could get disconnected before receiving the confirmation. There are various schemes to avoid this problem; one is to have multiple calls. Submit the transaction with a known ID, and then confirm the ID as received in a subsequent call. Such a scheme allows clients to resubmit potentially failed transactions and allows the service to achieve idempotency (duplicate transactions can be detected).
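The multiple-call scheme can be sketched as follows (hypothetical names): the client assigns the transaction ID up front, so resubmitting after a lost acknowledgement cannot create a duplicate on the service, and the service achieves idempotency by remembering which IDs it has already applied:

```python
import uuid

class Service:
    """Hypothetical service that detects duplicate transactions by ID."""
    def __init__(self):
        self.applied = {}

    def submit(self, txn_id, payload):
        if txn_id in self.applied:       # resubmission of a known ID: ignore
            return "duplicate"
        self.applied[txn_id] = payload
        return "accepted"

    def confirm(self, txn_id):
        # Second call of the scheme: the client verifies the ID was received.
        return txn_id in self.applied

# Client side: the ID is chosen before the first submit attempt.
svc = Service()
txn_id = str(uuid.uuid4())
svc.submit(txn_id, {"debit": 100})
# Suppose the acknowledgement was lost: the client safely resubmits.
status = svc.submit(txn_id, {"debit": 100})
```

The transaction is applied exactly once even though it was submitted twice.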
Another important characteristic of the transaction store-and-forward mechanism is in-order delivery. The importance of this can clearly be seen with credit/debit transactions: if they are not replayed in the right order, the results are wrong.
In multi-user environments, there are two methods of enforcing data correctness: pessimistic concurrency and optimistic concurrency.
Pessimistic concurrency specifies that a client place a lock on all records that the client intends to update. For disconnected applications, this is not possible. It is also not prudent for a service to expose pessimistic concurrency functionality to its clients because a client may apply arbitrary locks and disappear.
The more appropriate model is optimistic concurrency. Under this method, the client submits the current and previous state of a record to the service. If the service determines that the data has changed since it was last downloaded by the client, it rejects the client transaction.
There are many cases where neither pessimistic nor optimistic concurrency methods are available; perhaps a service does not support them. These scenarios are typically classified as "last writer wins" because whoever submits the last transaction overwrites everyone else's work. Consider a scenario where a salesperson synchronizes customer records and stays offline for two weeks. In the meantime, an office worker updates the same customer records. If the salesperson submits some updates at the end of the two weeks, they will overwrite the office worker's changes. In this scenario, the approach taken by most applications is to minimize the damage by submitting only changed fields, in the hope that multiple users will not update the same field. This solution, combined with the fact that in real-life disconnected applications data is typically segregated by user, allows most applications to function in a reasonable manner.
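The changed-fields approach can be sketched in a few lines (hypothetical record fields): only the fields the offline user actually edited are submitted, so a concurrent edit to a different field survives:

```python
def changed_fields(snapshot, edited):
    """Diff the edited record against its downloaded snapshot."""
    return {k: v for k, v in edited.items() if snapshot.get(k) != v}

# The salesperson downloaded this snapshot, then edited only the phone field.
snapshot = {"name": "Contoso Ltd.", "phone": "555-0100"}
edited = {"name": "Contoso Ltd.", "phone": "555-0199"}
delta = changed_fields(snapshot, edited)

# Meanwhile, an office worker changed the name on the server.
server_record = {"name": "Contoso Limited", "phone": "555-0100"}
server_record.update(delta)   # only the changed field is overwritten
```

"Last writer wins" is thereby confined to individual fields rather than whole records.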
Data synchronization is typically a difficult problem. When you're developing disconnected mobile applications, data synchronization is usually a one-way operation from the server to the client. As noted earlier, data from the client goes back to the server as discrete transactions. The general model is that the client sends transactions up to the server and then pulls changes from the server as deltas against its current offline state. The server is the master or the system of record. The basic problem in synchronization is determining what deltas to send to the client. The deltas can be of three basic types:
- Creates (insert a new record)
- Updates (modify an existing record)
- Tombstones (delete a record)
A server must compute all the deltas of each type to send. There are many data-usage scenarios that can affect how deltas are calculated. Factors that affect delta detection strategies include the rate of change of the data and the visibility of the data from the client's perspective. The developer should consider the following questions when designing a synchronization scenario:
- Do all clients synchronize the same data?
- Does each client have a different view of the data?
- How often does the data change?
- How big is the data?
If all of the clients synchronize the same data, data can be versioned at the record level and a global version can be kept. Each record can be assigned create, update, and delete versions. The versions can be created and kept up-to-date by using the schema and business logic extensibility mechanisms available in most server application platforms.
The following is an example of version number–based replication where all devices synchronize the same data. The initial condition is that the system is started and the global version is equal to 0.
Global Version = 0
A record is created in the system. A system callout (trigger), which is fired for the record create operation, is used to start a transaction and increment the global version number. In the same transaction, the global version number is assigned to the newly created record.
Global Version = 1
Record-1: Create Version = 1, Update Version = 0, Delete Version = 0
A device that has never synchronized queries the server for records that have changed since it synchronized last. A version number stored on the device identifies when it synchronized last. In this example, this version number will be zero. The server can compare the device version (0) with the server version (1) and find all the records where create, update, or delete versions are greater than the device version. Based on this lookup, the server will send a delta representing a create operation for Record-1 to the device as well as the current global version (1).
Record-1 is now updated on the server. The versions are updated by a system callout and look as follows:
Global Version = 2
Record-1: Create Version = 1, Update Version = 2, Delete Version = 0
Now if a device comes to the server with a device version of zero (0), it should get a delta representing a create operation. If a device comes with a device version of one (1), it should get a delta representing an update.
Record-1 is now deleted. The versions are updated by a system callout and look as follows:
Global Version = 3
Record-1: Create Version = 1, Update Version = 2, Delete Version = 3
In this scenario, if the device comes with a version less than the create version or greater than or equal to the global version, it gets nothing. Otherwise, it gets a delta representing a tombstone for Record-1.
There are other ways to keep track of tombstones, such as separate tombstone tables if the record is hard deleted by the delete operation, but the basic premise is that because all devices synchronize the same data, a global version combined with record-level version numbers can be used to compute deltas.
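The version comparisons in the walkthrough above can be captured in one function (a sketch with hypothetical field names, not a complete replication engine):

```python
def compute_delta(record, device_version, global_version):
    """Classify one record's delta for a device, per the versioning scheme above."""
    create_v = record["create"]
    update_v = record["update"]
    delete_v = record["delete"]
    if delete_v:
        # Tombstone only if the device ever saw the record and is behind:
        # a device version below the create version, or at/above the global
        # version, gets nothing.
        if device_version < create_v or device_version >= global_version:
            return None
        return "tombstone"
    if device_version < create_v:
        return "create"
    if device_version < update_v:
        return "update"
    return None   # the device is already up to date for this record
```

Replaying the article's example: with the global version at 3 and Record-1 deleted, a device at version 0 gets nothing, while a device at version 1 or 2 gets a tombstone.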
There are scenarios where such a replication scheme is not feasible. One scenario is where the service does not support schema extensibility, so the record-level version fields cannot be added, or where transacted system callouts are not available to manage the version numbers. Because one global version number is used for synchronization, all devices have to synchronize the same set of data; devices cannot subscribe to subsets of data. This scheme is also better suited to reference data or other data that is not updated often, because every update must first lock and update the global version number, which can create a serious concurrency bottleneck.
Data synchronization by maintaining server copies of client data is one way to avoid the preceding problems. This mechanism can also be used when each client subscribes to a different set of data. The basic idea is to keep a copy on the server of either the data, or keys to the data, that each device has. The server can use this information to generate the three types of deltas: tombstones, updates, and creates. The server store is sometimes referred to as the client object tracker or simply the object tracker. Here is how the synchronization is done:
- Users define subscriptions for their clients. These subscriptions are queries that the service can process to return a set of data when the client requests synchronization.
- A client object tracker database is defined that contains a client key, a unique key for a record, and a tombstone bit.
- When the client requests synchronization, the saved queries are run and a data set is returned.
- Before processing begins, the tombstone bit for all records in the object tracker is set to one. It is assumed that all records are deleted.
- Each record in the returned data set is compared with the client object tracker representation of that record.
- If the unique key for a record in the returned dataset is not found in the object tracker, the key and the modified time are copied into the object tracker. A delta in the form of a create is sent to the client. The tombstone bit for the object tracker record is not set.
- If the unique key for the record is found in the object tracker, the modified time in the query result is compared with the object tracker record. If the modified time has not changed, no delta is sent to the client. If the modified time has changed, an update is sent to the client. In either case, the tombstone bit for the record is set to zero.
- After all the records have been processed, all the records that still have their tombstone bit set are sent as tombstones to the client. Calculating tombstones in this way takes into account when objects are deleted, when they move out of subscription (that is, the stored queries no longer select them), or when the client is no longer authorized to select them.
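The steps above can be sketched as a single synchronization pass (hypothetical structures: the tracker maps record keys to a modified timestamp and a tombstone bit, and the query result maps keys to records):

```python
def sync_pass(tracker, query_result):
    """One object-tracker synchronization pass; returns the deltas to send."""
    deltas = []
    # Assume every tracked record is deleted until the query proves otherwise.
    for entry in tracker.values():
        entry["tombstone"] = True
    for key, record in query_result.items():
        tracked = tracker.get(key)
        if tracked is None:
            # Unknown key: copy it into the tracker and send a create.
            tracker[key] = {"modified": record["modified"], "tombstone": False}
            deltas.append(("create", key))
        else:
            if tracked["modified"] != record["modified"]:
                tracked["modified"] = record["modified"]
                deltas.append(("update", key))
            tracked["tombstone"] = False
    # Anything still flagged fell out of the query result: deleted, out of
    # subscription, or no longer authorized for this client.
    for key in [k for k, e in tracker.items() if e["tombstone"]]:
        deltas.append(("tombstone", key))
        del tracker[key]
    return deltas
```

Note that the tombstone computation never needs to know why a record disappeared; falling out of the saved queries is enough.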
Overall, data synchronization is a bit more involved than the two simple mechanisms previously outlined. In either case, cancellation must be handled: what if the client requests a synchronization and then cancels it? In the global version number case, the result is fairly straightforward, because the next time the client requests a synchronization it sends up its current version and the correct deltas are computed. In the object tracker case, a similar scheme can be employed, where the client sends up an identifier for its current state and the server uses it to determine how to calculate deltas. In the object tracker approach, keeping two object trackers per client solves the cancellation problem.
The following is an example of the object tracker approach:
- The client requests synchronization.
- The server runs the client's saved queries and builds up an object tracker. The server sends the deltas computed from the object tracker with a unique synchronization point identifier. (Think of this as a GUID or other unique entity.)
- The client receives the deltas and the synchronization point identifier.
- The client applies the deltas and saves the synchronization point identifier.
- The client requests synchronization again, passing the last synchronization point identifier it received.
- The server uses the synchronization point identifier to determine the following:
- Did the client apply the deltas from the last object tracker? The synchronization point identifier last sent would have been returned.
- Is the client completely out of sync with the server? An unknown synchronization point identifier or no synchronization point identifier would have been returned.
- Did the client cancel the last sync? The synchronization point identifier before the last sync would have been returned.
In each listed scenario, if two object trackers are kept for each client, one for the current synchronization request and one for the previous synchronization request, then canceling and data loss scenarios can be handled.
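The three outcomes the server must distinguish from the reported synchronization point identifier can be sketched as a small decision function. The identifiers and return labels here are illustrative, not a prescribed protocol:

```python
def classify_sync_point(reported, current_id, previous_id):
    """Decide how to treat a client's reported synchronization point identifier.

    current_id:  identifier sent with the most recent batch of deltas
    previous_id: identifier sent with the batch before that
    """
    if reported == current_id:
        return "applied"      # client applied the deltas from the last object tracker
    if reported == previous_id:
        return "cancelled"    # client cancelled the last sync; recompute from the
                              # previous object tracker kept for this client
    return "out-of-sync"      # unknown or missing identifier: full resynchronization
```

Keeping the object tracker snapshots keyed by these two identifiers is what lets the server recompute correct deltas in the "cancelled" case instead of forcing a full resync.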
Another set of issues associated with data synchronization is how to apply deltas on the client. Typically all of the deltas must be applied atomically on the client if there are relationships between the data. For example, it would be grossly incorrect to apply all of the deltas associated with sales orders but fail to apply the deltas associated with customers. The approach generally taken here is to cache deltas on the client until the last one is received. At this point, all deltas are applied in a single transaction.
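Applying the cached deltas in a single transaction can be sketched as follows, using Python's `sqlite3` as a stand-in for SQL Server CE on the device. The delta tuple shape is an assumption for the sketch:

```python
import sqlite3

def apply_deltas_atomically(conn, deltas):
    """Apply all cached deltas in one transaction; roll back all on any failure.

    deltas: list of (op, table, key, fields) where fields is a column->value dict
    """
    try:
        with conn:  # sqlite3 commits the block on success, rolls back on exception
            for op, table, key, fields in deltas:
                if op == "create":
                    cols = ", ".join(["id"] + list(fields))
                    marks = ", ".join("?" * (len(fields) + 1))
                    conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                                 [key] + list(fields.values()))
                elif op == "update":
                    assigns = ", ".join(f"{c} = ?" for c in fields)
                    conn.execute(f"UPDATE {table} SET {assigns} WHERE id = ?",
                                 list(fields.values()) + [key])
                elif op == "delete":
                    conn.execute(f"DELETE FROM {table} WHERE id = ?", [key])
        return True
    except sqlite3.Error:
        return False  # nothing was applied; the client can request the sync again
```

Because the transaction covers the whole batch, a failure while inserting a sales order cannot leave the related customer records half-applied.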
When an interface is designed in a service-oriented architecture, one of the first things addressed is the schema of the messages that will be exchanged between the client and server. In most cases today, XML schemas are used. In the architecture discussed so far, messages from the client to the server must contain enough information to enable optimistic concurrency functionality. Messages from the server to the client are the data synchronization deltas and positive and negative acknowledgements. One basic rule covers both: they should be versioned.
In an environment where clients and servers can evolve separately and are likely written by different teams, it is very important that all communication between client and server be versioned. Beyond that, each scenario has differing requirements that lead to a concrete schema. Some patterns, such as Diff-Grams and property bags, are useful constructs to consider. Diff-Grams, defined in ADO.NET, enable optimistic concurrency, and property bags make programmatic manipulation of data easier.
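The version check on an incoming message can be sketched as follows. The version strings, field names, and acknowledgement shape are assumptions for the illustration (the real messages would be XML validated against a schema):

```python
SUPPORTED_VERSIONS = {"1.0", "1.1"}  # versions this server understands (illustrative)

def check_message(message):
    """Return a positive or negative acknowledgement for an incoming message.

    message: dict parsed from the message envelope; 'version' is required.
    """
    version = message.get("version")
    if version is None:
        return {"ack": "negative", "reason": "missing version"}
    if version not in SUPPORTED_VERSIONS:
        return {"ack": "negative", "reason": f"unsupported version {version}"}
    return {"ack": "positive"}
```

Rejecting unversioned messages outright is what lets client and server teams evolve their schemas independently without silent misinterpretation.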
Putting It All Together
Disconnected mobile applications can be built in ways that are different than those presented in this article. The reason to choose a service-oriented architecture and to put the effort into solving the problems that arise in that architecture is that in today's world of differing form factors and device capabilities, the service-oriented architecture defines a clear separation between client and server. Figure 2 sums up, at a very high level, how the architecture is defined so that it can be put into practice.
Figure 2. Service-oriented architecture
Figure 2 illustrates two services, an agent that talks to both services, and a disconnected client application that works with the agent. Communication between the services and between the agent and the services is by means of a well-defined public interface. The messages that flow between the entities are defined with XML schemas that can be versioned (carrying a version number at a minimum). The transport for the messages can be either reliable (like Microsoft Message Queue) or unreliable (like Web services), a combination of both, or maybe even a custom reliable transport built on Web services or other technologies like SQL Replication.
The service agent provides a granular programmatic interface to the disconnected application. It simulates some of the business logic that would have executed in the server-side business application, and it constructs and parses the potentially complex XML messages that are sent between it and the two services.
The disconnected application can range from an inventory reporting application in a vending machine to a fully functional user interface. It translates the user's actions into calls into the service agent. After the service agent has executed local business logic, the disconnected application displays the resulting offline state.
The true test of whether an architecture is feasible is building an application on it. One application that uses the service-oriented architecture is Microsoft Business Solutions CRM Mobile (CRM Mobile for short). CRM Mobile is a mobile client for Microsoft Business Solutions CRM (MSCRM). CRM Mobile is a .NET-based rich client and is targeted at Windows Mobile-based Pocket PCs.
Microsoft CRM Mobile has the following statistics:
- Completely developed in C#.
- Has 150K lines of code, ~40KB on Pocket PC with .NET Compact Framework.
- Includes metadata-driven business logic, user interface, and message schemas.
- Has 4 SQL CE databases, ~40 tables. Transactions coordinated across databases. Schemas updated on the fly when metadata changes.
- Uses Web services and asynchronous messaging built on SQL CE RDA Protocol.
CRM Mobile users have the ability to work either in a connected or disconnected mode. Like its desktop computer counterpart, CRM Mobile is easy to use, customize, and maintain. It offers users rich account and opportunity management functionality, as shown in Figure 3.
Starting at the server, there is the existing business application: MSCRM. This application exposes its business processes with a well-defined SDK but hides and protects its data like any good business application should. The SDK for MSCRM supports Web services and COM, but it does not currently support reliable delivery semantics.
The first work item for CRM Mobile was a reliable delivery service interface for MSCRM. This interface, called the Message Bus, is built using Microsoft SQL Server on the server and Microsoft SQL Server CE on the client. The RDA protocol available in SQL Server CE is used to transport bits reliably over HTTP. The layer on top of the Message Bus is the proxy/stub layer. Proxies and stubs are defined for each of the business process APIs defined by MSCRM. These proxies and stubs serve the same purpose as Web service proxies: they serialize a method invocation into the Message Bus format and then deserialize it back into a method invocation.
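The proxy/stub round trip can be sketched as follows. JSON stands in here for the actual Message Bus format, and the method name and handler are hypothetical, not part of the MSCRM SDK:

```python
import json

def make_proxy_message(method, args, version="1.0"):
    """Proxy side: serialize a business-process call into a bus payload."""
    return json.dumps({"version": version, "method": method, "args": args})

def dispatch_stub(payload, handlers):
    """Stub side: deserialize a bus payload and invoke the matching server API."""
    message = json.loads(payload)
    handler = handlers[message["method"]]   # look up the business-process API
    return handler(**message["args"])       # turn the payload back into a call
```

The proxy and stub agree only on the payload schema, so the device-side caller never needs a live connection at the moment it makes the call; the payload can sit in the reliable transport until the server is reachable.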
Diff-Grams are used to communicate before-and-after images of records when transactions are sent to the server, facilitating optimistic concurrency. MSCRM does not support optimistic concurrency, so the information available in the Diff-Gram is used to send only changed fields, minimizing data overwrites.
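The field-level comparison that the before-and-after images make possible can be sketched as follows. The record representation (plain dicts) and function names are assumptions for the illustration:

```python
def apply_with_optimistic_concurrency(server_row, before, after):
    """Apply only the fields the client changed; flag fields the server also changed.

    server_row: current record state on the server (mutated in place)
    before/after: the Diff-Gram-style images captured on the client
    """
    applied, conflicts = {}, {}
    for field, new_value in after.items():
        if before.get(field) == new_value:
            continue  # client did not change this field; never overwrite it
        if server_row.get(field) != before.get(field):
            conflicts[field] = server_row.get(field)  # someone else changed it too
        else:
            applied[field] = new_value
    server_row.update(applied)
    return applied, conflicts
```

Because only fields that actually differ between the images are written, two users editing different fields of the same record no longer clobber each other's work.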
There is a service agent on top of the proxy/stub layer that mimics some of the business logic implemented in MSCRM. Data validation rules are run on the client, as well as business rules that depend on single-record state. The agent maintains an offline store whose schema differs significantly from the MSCRM server schema. The store is implemented with SQL Server CE. Because the schemas on the client differ from the schemas on the server, and because MSCRM does not make its database publicly available, the SQL Server CE merge replication functionality was not used to replicate data.
A custom data synchronization service exists on the server. It uses both mechanisms described in the data synchronization discussion: global version–based synchronization and object tracker–based synchronization. Synchronization deltas are cached on the client until the last delta is received, and then all deltas are applied in one transaction. The synchronization service allows the client to create data subscriptions to fine-tune how much and what types of data are synchronized to the client. The messages exchanged between the client and server are versioned. The server sends negative acknowledgements if there is a version mismatch.
This description of CRM Mobile, while accurate, is cursory and does not do justice to the complexity of the application. MSCRM is a fully customizable application, and so is CRM Mobile. The user interface and the database schema are customizable by the user. To make this possible, the application is fully metadata driven. Metadata is synchronized just like reference data (using global version sync), and metadata changes cause user interface, business rule, and schema changes on the client in real time.
Many businesses are deploying disconnected mobile applications. Advances in hardware and software make the task of extending business application logic to mobile devices very feasible. The Windows Mobile platform plus the .NET Compact Framework and SQL Server CE, combined with the principles of service-oriented architecture, allow the development of remarkably complex and robust applications that scale from phones to laptops. The responsibility of the application developers and architects is to ensure that their applications are factored such that extending to each different form factor is an incremental task and not a complete rewrite. If the principles of a service-oriented architecture are followed, the task of factoring properly is made easier.