Applying Changes

After the destination provider obtains a change batch and handles conflicts, changes must be applied to the destination replica. This is typically done in the destination provider's ProcessChangeBatch method (for managed code) or IKnowledgeSyncProvider::ProcessChangeBatch method (for unmanaged code), and is most easily achieved by using a change applier object provided by Sync Framework.

How Changes Are Applied

After the destination provider has the batch of changes from the source provider, it applies the changes to the destination replica. A change applier object provided by Sync Framework can be obtained by creating a NotifyingChangeApplier object (for managed code) or by calling IProviderSyncServices::CreateChangeApplier (for unmanaged code). The ApplyChanges (for managed code) or ISynchronousNotifyingChangeApplier::ApplyChanges (for unmanaged code) method detects conflicts and calls methods on the destination provider to apply changes to the destination replica.
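
The following C# sketch shows one way a managed destination provider's ProcessChangeBatch method can delegate to a NotifyingChangeApplier object. It is a minimal outline, not a complete provider: the remaining KnowledgeSyncProvider and INotifyingChangeApplierTarget members are assumed to be implemented elsewhere, the GetDestinationKnowledge and GetForgottenKnowledge helpers are hypothetical placeholders for store-specific code, and the exact method overloads should be checked against the reference topics.

using Microsoft.Synchronization;

public partial class MyDestinationProvider : KnowledgeSyncProvider, INotifyingChangeApplierTarget
{
    private SyncSessionContext currentSessionContext;   // captured in BeginSession (provider-specific)

    public override void ProcessChangeBatch(
        ConflictResolutionPolicy resolutionPolicy,
        ChangeBatch sourceChanges,
        object changeDataRetriever,
        SyncCallbacks syncCallbacks,
        SyncSessionStatistics sessionStatistics)
    {
        // Load the destination replica's current knowledge (hypothetical store-specific helpers).
        SyncKnowledge destinationKnowledge = GetDestinationKnowledge();
        ForgottenKnowledge destinationForgottenKnowledge = GetForgottenKnowledge();

        // Create a change applier and let it detect conflicts and drive change application.
        // The applier calls back into this provider through the INotifyingChangeApplierTarget methods.
        NotifyingChangeApplier changeApplier = new NotifyingChangeApplier(IdFormats);
        changeApplier.ApplyChanges(
            resolutionPolicy,
            sourceChanges,
            changeDataRetriever as IChangeDataRetriever,
            destinationKnowledge,
            destinationForgottenKnowledge,
            this,                       // this provider implements INotifyingChangeApplierTarget
            currentSessionContext,
            syncCallbacks);
    }
}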

Processing a Change Item

To process a typical change item, the change applier first calls the LoadChangeData (for managed code) or ISynchronousDataRetriever::LoadChangeData (for unmanaged code) method on the source provider to start data transfer. This method returns an object (for managed code) or an IUnknown interface (for unmanaged code) that represents the data-transfer mechanism. The change applier then calls the SaveItemChange (for managed code) or ISynchronousNotifyingChangeApplierTarget::SaveChange (for unmanaged code) method on the destination provider, and passes the data-transfer object as part of the save change context. The destination provider can then transfer the data to the destination replica. Any failure to obtain data or to process a change is indicated by using the RecordRecoverableErrorForItem (for managed code) or ISaveChangeContext::SetRecoverableErrorOnChange (for unmanaged code) method. This method records a recoverable error for the item in the learned knowledge object that is contained in the change batch. Alternatively, when constraint conflicts are used, call RecordConstraintConflictForItem (for managed code) or ISaveChangeContext2::SetConstraintConflictOnChange (for unmanaged code) to report a constraint conflict. The constraint conflict can then be resolved as specified by the application or the provider.
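
As an illustration, the following C# sketch outlines a possible SaveItemChange implementation on a managed destination provider. The SaveToStore, DeleteFromStore, and UpdateVersionInStore helpers are hypothetical placeholders for store-specific code, and only a few SaveChangeAction values are handled; consult the reference topics for the complete set of actions and the exact signatures.

using System;
using Microsoft.Synchronization;

public partial class MyDestinationProvider : INotifyingChangeApplierTarget
{
    // Called by the change applier for each change that should be applied to the destination store.
    public void SaveItemChange(SaveChangeAction saveChangeAction, ItemChange change, SaveChangeContext context)
    {
        try
        {
            switch (saveChangeAction)
            {
                case SaveChangeAction.Create:
                case SaveChangeAction.UpdateVersionAndData:
                    // context.ChangeData is the data-transfer object that the change applier
                    // obtained from the source provider's LoadChangeData call.
                    SaveToStore(change, context.ChangeData);     // hypothetical store-specific helper
                    break;

                case SaveChangeAction.DeleteAndStoreTombstone:
                    DeleteFromStore(change.ItemId);              // hypothetical store-specific helper
                    break;

                case SaveChangeAction.UpdateVersionOnly:
                    UpdateVersionInStore(change);                // hypothetical store-specific helper
                    break;

                default:
                    throw new NotImplementedException("Save action not handled by this sketch.");
            }
        }
        catch (Exception ex)
        {
            // Record a recoverable error so this item is excluded from the learned knowledge
            // and can be retried during a later synchronization.
            context.RecordRecoverableErrorForItem(new RecoverableErrorData(ex));
        }
    }
}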

Updating Learned Knowledge

While the changes are being applied, the change applier updates the learned knowledge that is contained in the change batch. The learned knowledge is the knowledge of the source replica projected onto the changes in the change batch, and represents what the destination replica will learn when it applies all the changes in the change batch. The learned knowledge is updated in the following ways:

  • If only a subset of changes was applied because of an interruption or cancellation, the change applier uses the project operator to restrict the knowledge to only the set of changes that was applied.

  • If applying some changes failed, the change applier also excludes those changes from the knowledge.

Be aware that provider implementers do not have to perform these project, union, and exclude operations manually; the change applier performs them on behalf of the provider.

Saving Updated Destination Knowledge

After the learned knowledge has been updated, it is combined with the destination replica's knowledge. The destination provider must replace the knowledge of the destination replica with this combined knowledge atomically. This atomicity can be achieved by applying all changes in a batch within a single transaction and saving the updated knowledge only one time per batch. The change applier assists with this by calling the StoreKnowledgeForScope (for managed code) or ISynchronousNotifyingChangeApplierTarget::SaveKnowledge (for unmanaged code) method on the destination provider at the end of every batch. The knowledge passed to this method is the updated knowledge that is to be applied to the destination replica. Alternatively, the destination provider can call the GetUpdatedDestinationKnowledge (for managed code) or ISaveChangeContext::GetKnowledgeForScope (for unmanaged code) method to get the updated knowledge.
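
The following C# sketch shows one way a managed destination provider might implement StoreKnowledgeForScope so that the updated knowledge is saved in the same transaction that applied the batch's changes. The currentBatchTransaction field and the SaveKnowledgeToStore helper are hypothetical placeholders for store-specific code.

using System.Data;
using Microsoft.Synchronization;

public partial class MyDestinationProvider : INotifyingChangeApplierTarget
{
    private IDbTransaction currentBatchTransaction;   // transaction that covers all changes in the current batch

    // Called by the change applier at the end of each change batch. The knowledge passed in is the
    // updated knowledge that should replace the destination replica's knowledge.
    public void StoreKnowledgeForScope(SyncKnowledge knowledge, ForgottenKnowledge forgottenKnowledge)
    {
        // Save the knowledge within the same transaction that applied the batch's changes,
        // so that the replica's data and knowledge are updated atomically.
        SaveKnowledgeToStore(knowledge, forgottenKnowledge, currentBatchTransaction);   // hypothetical helper
        currentBatchTransaction.Commit();
    }
}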

When the providers use change units to represent subitems, there are some differences in the way that changes are applied. For more information, see Synchronizing Change Units.

Special Considerations for Hierarchical Replicas

Synchronization of hierarchical replicas can encounter complications caused by batching change sets from the source to the destination. Sync Framework has no concept of data hierarchy; therefore, it is entirely up to the providers to handle these situations correctly.

The most common complication is the order of parent and child relationships in update operations. For example, consider the following scenario:

  1. The source replica has created a new folder and a set of new items inside it.

  2. The destination provider requests changes from the source provider. The source provider sends its list of changes in two batches. However, the change batch that contains the creation of the parent folder arrives after the change batch that contains the child items.

  3. The destination provider must decide where to store the set of items that have arrived in the first batch, because the information on where to store them will not arrive until the second batch.

Reducing Hierarchical Updates

For updates, the simplest way to reduce parent/child relationship complications is to make the source provider responsible for ordering the global IDs so that parent items always arrive before their child items.
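
For illustration only, the following C# fragment sketches one hierarchical ID scheme of this kind: a child's ID is formed by appending a segment to its parent's ID, so ordinal ordering always places a parent before its children. The MakeChildId and OrderParentsFirst helpers are hypothetical and independent of the Sync Framework ID types.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustration only: because a parent ID is a strict prefix of each of its children's IDs,
// ordinal ordering always places the parent before its children, so changes enumerated in
// ID order arrive at the destination in parent-first order.
public static class HierarchicalIdExample
{
    public static string MakeChildId(string parentId, string childSegment)
    {
        return parentId + "/" + childSegment;
    }

    public static IEnumerable<string> OrderParentsFirst(IEnumerable<string> ids)
    {
        return ids.OrderBy(id => id, StringComparer.Ordinal);
    }
}

// For example, "docs" sorts before "docs/report.txt", which sorts before "docs/report.txt/notes",
// so a new folder is always transmitted before the new items that were created inside it.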

When destination providers handle updates made in a hierarchical store, they might receive parent/child changes out of order. Destination providers must be able to recover from this situation, either by dropping the out-of-order change and noting an exception in their knowledge, or by queuing the change to be applied later. Because items might be large, and queuing them requires buffering their data, dropping the change and noting a knowledge exception is typically the more effective approach.
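
The following C# sketch shows the drop-and-note-an-exception approach: when a change arrives before its parent container exists, the provider records a recoverable error so that the change is excluded from the learned knowledge and sent again in a later session. The GetParentId, ItemExistsInStore, and SaveToStore helpers are hypothetical placeholders for store-specific code.

using System;
using Microsoft.Synchronization;

public partial class MyDestinationProvider : INotifyingChangeApplierTarget
{
    // Fragment of hierarchical change handling: if a change arrives before its parent container
    // has been applied, drop it and note an exception in the knowledge so that the item is sent
    // again in a later session, after the parent has been created.
    private bool TryApplyHierarchicalChange(ItemChange change, SaveChangeContext context)
    {
        SyncId parentId = GetParentId(change);                    // hypothetical: read from item metadata
        if (parentId != null && !ItemExistsInStore(parentId))     // hypothetical store lookup
        {
            context.RecordRecoverableErrorForItem(
                new RecoverableErrorData(new InvalidOperationException("Parent container is not yet present.")));
            return false;
        }

        SaveToStore(change, context.ChangeData);                  // hypothetical helper (see the earlier sketch)
        return true;
    }
}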

Reducing Hierarchical Deletes

The destination provider determines whether an item is a container. Deletes of empty containers can be applied immediately. However, if a container holds items that have not yet been marked for deletion, the provider has the following options:

  • Queue the deletes for later processing. After all the children of the container are marked for deletion, the actual delete can be triggered.

  • Drop this request and set an exception in the knowledge to indicate the out-of-order receipt of an item.

To address scenarios in which a parent is deleted in the hierarchy and then a child is added, the following rule is observed: queued deletions are only valid until the end of a pair-wise session and are not persisted between the participants.
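
The following C# sketch outlines the queuing option: deletes of non-empty containers are held in a session-scoped queue, retried after the remaining changes have been processed, and then discarded along with the queue at the end of the session. The ContainerIsEmpty and DeleteFromStore helpers are hypothetical placeholders for store-specific code.

using System.Collections.Generic;
using Microsoft.Synchronization;

public partial class MyDestinationProvider : INotifyingChangeApplierTarget
{
    // Deletes of containers that still hold children, queued for later processing.
    // The queue lives only for the duration of the current pair-wise session.
    private readonly List<ItemChange> pendingContainerDeletes = new List<ItemChange>();

    private void HandleContainerDelete(ItemChange change, SaveChangeContext context)
    {
        if (ContainerIsEmpty(change.ItemId))                      // hypothetical store lookup
        {
            DeleteFromStore(change.ItemId);                       // hypothetical helper (see the earlier sketch)
        }
        else
        {
            // Option 1: queue the delete and retry it after the children have been processed.
            pendingContainerDeletes.Add(change);

            // Option 2 (not shown): drop the request and record a recoverable error so that the
            // out-of-order delete is excluded from the learned knowledge and retried later.
        }
    }

    // Called by the provider after the remaining changes have been processed: retry the queued
    // deletes, then discard the queue. Queued deletions are not persisted beyond the session.
    private void FlushPendingContainerDeletes()
    {
        foreach (ItemChange change in pendingContainerDeletes)
        {
            if (ContainerIsEmpty(change.ItemId))
            {
                DeleteFromStore(change.ItemId);
            }
        }
        pendingContainerDeletes.Clear();
    }
}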

See Also

Reference

ISynchronousNotifyingChangeApplier Interface
ISynchronousNotifyingChangeApplierTarget::SaveKnowledge
ISynchronousDataRetriever Interface
ISaveChangeContext Interface
NotifyingChangeApplier
StoreKnowledgeForScope
IChangeDataRetriever
SaveChangeContext

Concepts

Implementing a Standard Custom Provider