Wednesday, February 25, 2009

WCF Queued Dual HTTP Request Response Router

There are a lot of examples available of how to make WCF-based publish/subscribe messaging solutions, but not many that provide a simple queued dual HTTP request-response router. IDesign provides the queued Response Service from Juval Löwy's book Programming WCF Services, which offers a feature-rich set of WCF goodies. Recommended. I've used it in combination with Sasha Goldshtein's Generic Forwarding Router to create a dual channel queued router.

The router container uses HTTP endpoints externally to interact with consumers, and MSMQ queues internally to talk to the service providers. This gives fewer queues to manage and monitor, and the consumers need not know anything about MSMQ.

The message forwarding ChannelFactory is cached for best performance. Read more about Channel and ChannelFactory caching in Wenlong Dong's post Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices.
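As a minimal sketch of that caching pattern (not the actual router code): cache the expensive ChannelFactory once and create cheap channels per forwarded message. The contract and endpoint configuration names below are placeholders:

using System.ServiceModel;
using System.ServiceModel.Channels;

// Placeholder untyped forwarding contract, typical for a generic router.
[ServiceContract]
public interface IForwardingRouter
{
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message ProcessMessage(Message request);
}

public static class ForwardingChannelFactoryCache
{
    // Creating a ChannelFactory is expensive, so do it once and reuse it.
    private static readonly ChannelFactory<IForwardingRouter> Factory =
        new ChannelFactory<IForwardingRouter>("providerQueueEndpoint");

    // Channels are cheap; create one per forwarded message and close/abort it when done.
    public static IForwardingRouter CreateChannel()
    {
        return Factory.CreateChannel();
    }
}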

A few things are worth noting to make this router work:

Note the poorly documented requirement that the WAS-activated service endpoint path must be reflected in the MSMQ queue name:

<add key="targetQueueName" value=".\private$\ServiceModelSamples/service.svc" />

<endpoint address="net.msmq://localhost/private/servicemodelsamples/service.svc" ... />


Failure to adhere to this convention will prevent messages added to the queue from automatically activating the WAS-hosted WCF service. Also ensure that the WAS app-pool identity has access rights on the queues.
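As a hedged sketch (not part of the sample) of granting those rights with System.Messaging; the account name is a placeholder for whatever identity your app pool actually runs as:

using System.Messaging;

public static class QueuePermissionHelper
{
    // Grants the app-pool identity the rights needed to receive activated messages.
    public static void GrantReceive(string queuePath, string appPoolAccount)
    {
        using (var queue = new MessageQueue(queuePath))
        {
            queue.SetPermissions(appPoolAccount,
                MessageQueueAccessRights.ReceiveMessage |
                MessageQueueAccessRights.PeekMessage |
                MessageQueueAccessRights.GetQueueProperties);
        }
    }
}

// Example: QueuePermissionHelper.GrantReceive(@".\private$\ServiceModelSamples/service.svc", @"NT AUTHORITY\NETWORK SERVICE");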

Download the router example code here. The code is provided 'as-is' with no warranties and confers no rights. This example requires Windows Process Activation Service (Vista/WinServer2008), MSMQ 4.0 and .NET 3.0 WCF Non-HTTP Activation to be installed.

Friday, February 20, 2009

WCF: Message Headers and XmlSerializerFormat

Sometimes you need to use the classic XmlSerializer for interoperability reasons, or when doing schema-first development based on XSD contracts that contain e.g. XML attributes in complexTypes. I've used the [XmlSerializerFormat] switch on services many times without any problems, but recently I had to make use of a custom header - and that took me quite some time to get working.

This is the wire format of the header:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <h:CorrelationContext xmlns:h="urn:QueuedPubSub.CorrelationContext" xmlns="urn:QueuedPubSub.CorrelationContext" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CorrelationId>c4e03aae-9501-46e9-bbb8-a9ddf6c4fe15</CorrelationId>
      <FaultAddress>feil</FaultAddress>
      <Priority>0</Priority>
      <ResponseAddress>svar</ResponseAddress>
    </h:CorrelationContext>
    . . .

The WCF OperationContext provides access to the message headers:

CorrelationContext context = OperationContext.Current.IncomingMessageHeaders.GetHeader<CorrelationContext>(
    Constants.CorrelationContextHeaderName, Constants.CorrelationContextNamespace);

I had used [XmlElement] attributes to control the name and namespace of the [Serializable] class members, only to get this error:

'EndElement' 'CorrelationContext' from namespace 'urn:QueuedPubSub.CorrelationContext' is not expected. Expecting element 'CorrelationId'.

That really puzzled me. To make a long story short, the MessageHeaders class and the GetHeader<T> method only support serializers derived from XmlObjectSerializer, such as the DataContractSerializer - but not the XmlSerializer. To make this work, your header class must be implemented as a [DataContract] class even for [XmlSerializerFormat] services.

My working message header contracts look like this:

[DataContract(Name = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
public class CorrelationContext
{
    [DataMember]
    public Guid CorrelationId { get; set; }

    [DataMember]
    public string FaultAddress { get; set; }

    [DataMember]
    public int Priority { get; set; }

    [DataMember]
    public string ResponseAddress { get; set; }
}


[MessageContract(IsWrapped = true)]
[Serializable]
public abstract class MessageWithCorrelationHeaderBase
{
    [MessageHeader(MustUnderstand = true, Name = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
    [XmlElement(ElementName = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
    public CorrelationContext CorrelationContext { get; set; }
}
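For completeness, the header can also be added manually on the client side when the request is not derived from the [MessageContract] base class above. This is just a sketch; 'channel', IMyService and SomeOperation are placeholder names and not part of the example code:

// 'channel' is assumed to be created by a ChannelFactory<IMyService>.
using (new OperationContextScope((IContextChannel)channel))
{
    var context = new CorrelationContext
    {
        CorrelationId = Guid.NewGuid(),
        Priority = 0,
        ResponseAddress = "svar",
        FaultAddress = "feil"
    };

    // MessageHeader.CreateHeader uses the DataContractSerializer by default,
    // which matches what GetHeader<CorrelationContext> expects on the receiving side.
    MessageHeader header = MessageHeader.CreateHeader(
        Constants.CorrelationContextHeaderName,
        Constants.CorrelationContextNamespace,
        context);
    OperationContext.Current.OutgoingMessageHeaders.Add(header);

    channel.SomeOperation();
}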

This code was made and tested on .NET 3.5 SP1.

Thursday, February 12, 2009

SOA CIM: Common Information Model

As I've written about before, there is a lot of confusion and there are different interpretations of what a "Common Information Model" (CIM) is. The most common interpretation is that CIM is the same concept as described in chapter 4, "XML: The Foundation for Business Data Integration", of David Chappell's seminal book Enterprise Service Bus from 2004. Chappell describes the need for a common XML data format for "expressing data in messages as it flows through an enterprise across the ESB", but he never uses the term CIM or describes any modeling approach. This common format will naturally comprise any kind of XML data used in the service bus, including messages, business entities and event data.

The CIM concept in relation to SOA was described by Mike Rosen and Eric Roch in 2006: it defines a common representation of business entity objects that provides a canonical format and unified semantics for a business domain.

The above figure is from Mike Rosen's article Business Architecture and SOA. As can be seen in the figure, the CIM comprises a shared information model that is the basis for the documents that flow through the business processes built using the capabilities provided by the services.

Mike explains the role of CIM like this:

[The SOA Business Model] has to manage the sharing of services and information across processes. In other words, it needs to eliminate redundancy, overlap, and gaps between services so that each business capability is implemented once, by the organizational unit that is responsible for that capability. And that those services are used by all the different processes needing those capabilities. In addition, all of the information shared between services must be identified in the common information model. In other words, all services that are related to the same business concepts must use the same information to describe those concepts. Finally, the SOA Business Model must ensure that all of the information passed into and out of the business services (mostly in the form of documents) is defined in the [shared semantic] information model.

I hope this clarifies the definition of what a CIM is: it is a model that comprises shared information entities such as "Customer", but not actual messages like "CustomerHasMoved", which are defined in the business process information model (BPIM). The message types such as "AddressChange" (document) are part of that logical model as projections of the shared entities.

The "Canonical Schemas" used in messages are part of the service model's standardized service contracts. Note that these schemas are generated from, but are not part of CIM. Generating the canonical schemas from CIM makes enforcing "Schema Centralization" unnecessary, as they already share a common basis. This helps alleviate the negative effects of a one true schema approach; e.g. the versioning ripple effect through the canonical data model that affects all related service contracts - even if not intended.

Do not confuse the CIM with the centralized SOA logical data model (LDM) approach. The services speak "CIM" natively; service messages are not transformed to/from the CIM by a message broker as in the LDM schema bus approach. Still, the message broker approach is handy for including non-conformant services in your SOA solutions. Enforcing an LDM for your information will cause the same problems as the traditional EAI hub-and-spoke canonical data model (CDM).

I recommend reading chapter 5, "Service Context and Common Semantics", in the book Applied SOA: Service-Oriented Architecture and Design Strategies by Michael Rosen et al. Figures 5-12, 5-14 and 5-15 in the book show how a graphical projection DSL for documents (message types) might look.

Tuesday, February 10, 2009

SOA: Business Event Message Models

InfoQ has published an article about SOA Message Type Architecture by Jean-Jacques Dubray. The article shows how to model 'message type' artifacts based on an enterprise common data model using a DSL, and also outlines how the modeled artifacts can be used to generate XML schemas for use in your service contracts. The message type DSL is not for modeling messages; it is just for modeling the types used as message payloads. In WCF terms, a message type is a [DataContract].
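To make the distinction concrete, here is a minimal, purely illustrative WCF sketch; the names and namespace are made up and not from the article:

// Message type: the payload data, modeled/generated from the common data model.
[DataContract(Namespace = "urn:example:ordering:v1")]
public class OrderConfirmation
{
    [DataMember] public string OrderNumber { get; set; }
    [DataMember] public DateTime ConfirmedAt { get; set; }
}

// The business event/interaction that carries the message type is modeled separately,
// e.g. as an operation on a service contract.
[ServiceContract(Namespace = "urn:example:ordering:v1")]
public interface IOrderingNotifications
{
    [OperationContract(IsOneWay = true)]
    void OrderConfirmed(OrderConfirmation confirmation);
}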

Note that even if the message type model contains a set of standardized verbs, the model does not cover business process aspects such as flow and actions. You will still need to analyze the business processes that pass those message types around to model the action, query and notification events that drive the processes. I've written several times about creating such a model that comprises both the business events and the message types: a business process information model (BPIM). I like JJ's approach; I just think that we also need to model the business capabilities and interactions that utilize the message types to get a complete set of artifacts for service contracts.

One difference is that I prefer using a common information model (CIM) as the basis for modeling the message types, rather than an enterprise data model (EDM). It is a lot of effort to create an EDM that covers all information in all systems-of-record in a company; and the moment you have completed the all-encompassing model, your CxO will inform you that parts of the business have been outsourced or that a new business will have to be incorporated, or even just that the CRM system is to be replaced. Change is the only constant. Thus I prefer starting small by creating a CIM that covers only the business entities comprised by the business processes that are about to be service-oriented. As there will be multiple resource domains in your architecture, there will be multiple CIM models as your SOA grows. Federate these domain models by creating context maps for cross-domain capabilities and logic only.

Create a CIM according to Domain-Driven Design (DDD) principles and use the domain entity objects when implementing the business logic underlying your services. Design the BPIM based on the CIM, ensuring that the model is canonical for each process domain. As the BPIM is a projection of the CIM, the service interface messages will have an unambiguous mapping to/from the underlying domain resources (entities and aggregates). Mapping data between the message types and the domain objects will be required in the service implementation (provider container).
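A purely illustrative sketch of such a mapping, assuming a hypothetical Customer domain entity and a CustomerData message type:

// All names here are hypothetical, for illustration only.
public class Customer                          // domain entity (CIM side)
{
    public string Number { get; set; }
    public string Name { get; set; }
}

[DataContract]
public class CustomerData                      // message type (BPIM side)
{
    [DataMember] public string CustomerNumber { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class CustomerMapper
{
    // The service implementation (provider container) owns this projection.
    public static CustomerData ToMessage(Customer entity)
    {
        return new CustomerData
        {
            CustomerNumber = entity.Number,
            Name = entity.Name
        };
    }
}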


The 'message type' artifacts should also be partitioned into bounded contexts according to business area, as in Domain-Driven Design, where each service domain comprises a set of cohesive services based on the same underlying CIM, delivering a BPIM for the specific business domain. Having clear domain boundaries makes it easier to analyze, model, design and version the artifacts; it also aids discoverability by giving consumers a clear business context for the provided services.

Your SOA solution will, as it grows, encompass multiple service domains, each with its own BPIM. Composite services that compose business capabilities across two or more domains will require mediation between the message models. Even if "transformation avoidance" is considered a SOA best practice, it is unrealistic that you will be able to avoid mediation completely in your service bus (composition container). In addition, you cannot expect to enforce your model upon the extended enterprise; think of e.g. third-party and outsourced capabilities.


The message types exposed by a service domain implicitly become the "canonical schema". They enable better service discoverability, reuse and composition as they all share the same underlying data model - which also ensures that the message payloads have common semantics within the service domain. In combination with business functions, the schemas provide a complete standardized service contract.

As you have seen, the BPIM and CIM approach fits well with DDD. It also fits well with the middle-out SOA approach - including the recommendation to start small and think big picture, not big bang.

Sunday, February 01, 2009

Service Compatibility - A Primer

A comment on the InfoQ article Contract Versioning, Compatibility and Composability, regarding my definition of service forwards and backwards compatibility, acknowledges the problem of talking about compatibility of services as opposed to the established definition of schema compatibility.

The "problem" is that a service version that is compatible with the specifications of older versions of the service, can be achieved using both backwards and forwards compatible schemas. That is correct, but doesn't preclude having a definition of forwards and backwards compatibility for service providers, a.k.a "services". Service compatibility is based on ability to validate a message, it is not based on using wildcards in the schema definition.
For a definition of the three types of forward compatibility, see my post on schema, service and routing compatibility.

Seen only from the service provider perspective, how it handles incoming messages is what defines whether a service is forwards or backwards compatible (or both). How consumers handle messages sent by the service is really not of any concern to the provider - wait, read on.

Thinking about this within SOAP 1.x constraints, where a WSDL operation has a request message and a response message with fixed schema definitions/versions (unilateral contracts), will lead to the conclusion that operations cannot be classified as forwards or backwards compatible, only the message schemas. This is a limitation of SOAP, but not of messaging in general.

In the following examples, the v1.2 service provider is backwards compatible and interacts with a v1.1 consumer. However, the schemas are not designed to be forwards compatible - they do not support XML extensibility (schema wildcards). In this scenario, the consumer can either do XSD validation of response messages against the v1.1 schema, or do 'validation by projection' of response messages - i.e. do "ignore unknown" validation. Doing 'validation by projection' is a recommended practice for compatibility and really simplifies building SOA solutions - this is also how WCF validates messages. So how do we handle consumers that only do XSD validation, without relying on schema wildcards?
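As a small illustration of 'validation by projection' on the WCF side: the DataContractSerializer simply ignores elements it does not recognize, and implementing IExtensibleDataObject additionally preserves the unknown data for round-tripping. A sketch with hypothetical names:

// A v1.1 consumer-side data contract: unknown v1.2 elements in the response are
// projected away (ignored) during deserialization instead of failing validation.
[DataContract(Namespace = "urn:example:orders")]
public class OrderReply : IExtensibleDataObject
{
    [DataMember] public string OrderNumber { get; set; }
    [DataMember] public string Status { get; set; }

    // Optional: keeps any unrecognized (e.g. v1.2) elements so they survive round-tripping.
    public ExtensionDataObject ExtensionData { get; set; }
}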

In REST, the consumer can put an "accept formats" header in the v1.1 request message, and the service provider can then respond with a v1.1 schema even if the service version is v1.2. The service provider adheres to its obligations by being backwards compatible, and the consumer is allowed to express its version expectation - the service has bilateral contracts.
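As an illustration only (the media type names are made up), such content negotiation could look like this on the wire:

GET /customers/42 HTTP/1.1
Host: example.org
Accept: application/vnd.example.customer-v1.1+xml

HTTP/1.1 200 OK
Content-Type: application/vnd.example.customer-v1.1+xml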

Service Virtualization is a mechanism that can help with service versioning. The task of such an abstract endpoint intermediary is to handle versioning through both service compatibility and schema compatibility. A virtual service supports multiple versions of the service on the same endpoint, and must be capable of processing older requests. The virtual endpoint must have a mechanism that allows for the latest major v1.2 provider to handle v1.1 consumers. The intermediary mediates between the schema versions by transforming or projecting/enriching the messages as needed.

Back to the example, the service provider v1.2 response message can be stripped down to a v1.1 message by the intermediary as it is sent back to the consumer. The net effect is that the service has virtual bilateral contracts.

In messaging in general, by definition there are no duplex channels, only one-way channels (see Enterprise Integration Patterns by Hohpe/Woolf). On top of this, you can have a logical two-way channel for message exchange patterns such as request-response, specified using a "reply-to" address and a "reply-format" (bilateral contracts). The message compatibility is defined by the schema constructs, but just as in REST, the version of the incoming message does not dictate the version of the response message. It is the implementation of the endpoint that processes the messages that defines the compatibility policy of the endpoint, not the schemas.

So, service compatibility does not require using forwards compatible schemas in addition to backwards compatible schemas. The message validation policy is what defines service compatibility.

It is of course much simpler to just have a service compatibility policy that requires the schemas used in the services to be both forwards and backwards compatible - as shown in the "SOAP-style unilateral contracts" service compatibility figures.




This way, the service provider or consumer platform need not handle "request-format" and "reply-format" that have different versions. In such a unilateral schema compatibility policy world, services are just intrinsically compatible through schema compatibility.