Friday, December 22, 2006

How to: Expose WCF service also as ASMX web-service

In my last post, I wrote about my endeavors to consume a WCF service from an ASMX client to support non-WCF clients such as .NET 2 clients. The WCF service is exposed using a BasicHttpBinding endpoint and our main consumer is a Flex 2 web-application.

As the support for WS-I Basic Profile 1.x compliant web-services is quite limited in Flex, I had to make ASMX-style clients work. What made it worse is that there is a bug in Flex regarding the support for xsd:import: Flex will repeatedly request the XSD files on .LoadWSDL(), causing bad performance for both the web-service and the client. Use Fiddler to watch all the HTTP GETs issued by Flex. The solution is to ensure that the generated WSDL is a single file, without any xsd:import, and this is just what a classic .NET 2 ASMX web-service provides.

WCF has good support for migrating ASMX web-services to WCF, and this feature can also be used to actually expose a native WCF service also as an ASMX web-service. Start by adding the classic [WebService] and [WebMethod] attributes to your service interface:

[ServiceContractAttribute(Namespace = "http://kjellsj.blogspot.com/2006/11",
Name = "IProjectDocumentServices")]
[ServiceKnownTypeAttribute(
typeof(DNVS.DNVX.eApproval.FaultContracts.DefaultFaultContract))]
[WebService(Name = "ProjectDocumentServices")]
[WebServiceBinding(Name = "ProjectDocumentServices",
ConformsTo = WsiProfiles.BasicProfile1_1, EmitConformanceClaims = true)]
public interface IProjectDocumentServices
{
[OperationContractAttribute(Action = "GetDocumentCard")]
[FaultContractAttribute(
typeof(DNVS.DNVX.eApproval.FaultContracts.DefaultFaultContract))]
[WebMethod]
DocumentCardResponse GetDocumentCard(KeyCriteriaMessage request);
. . .

Note that there is no need to add [XmlSerializerFormat] to the [ServiceContract] attribute; just leave the WCF service as-is. Note also that you must specify the web-service binding Name parameter to avoid getting duplicate wsdl:port elements in the generated WSDL file. You should also apply the [WebService] attribute, with the same Name and Namespace parameters as the [WebServiceBinding], to the class implementing the service. If not, you might get a wsdl:include and an xsd:import in the generated WSDL file.
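A minimal sketch of how the implementing class could look, assuming the class name from this example solution; the exact Namespace value and the method body are assumptions for illustration only:

[WebService(Name = "ProjectDocumentServices", Namespace = "http://kjellsj.blogspot.com/2006/11")]
[WebServiceBinding(Name = "ProjectDocumentServices",
ConformsTo = WsiProfiles.BasicProfile1_1, EmitConformanceClaims = true)]
public class ProjectDocumentServices : IProjectDocumentServices
{
public DocumentCardResponse GetDocumentCard(KeyCriteriaMessage request)
{
//delegate to the existing WCF service implementation / business logic here
return new DocumentCardResponse();
}
//...the remaining IProjectDocumentServices operations
}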

Then add a new .ASMX file to the IIS host used for your WCF service, using the service implementation assembly to provide the web-services. I added a file called "ProjectDocumentWebService.asmx" with this content:


<%@ WebService Language="C#" Class="DNVS.DNVX.eApproval.ServiceImplementation.ProjectDocumentServices, DNVS.DNVX.eApproval.ServiceImplementation" %>

That's it. You now have a classic ASMX web-service at your disposal. WCF supports running WCF services and ASMX web-services side-by-side in the same IIS web-application: Hosting WCF Side-by-Side with ASP.NET.

Most likely you must also apply [XmlElement], [XmlArray] and [XmlIgnore] at relevant places in your message and data contracts to ensure correct serialization (XmlSerializer) and consistent naming for both WCF and ASMX clients/proxies (see the sketch below). Remember that the XmlSerializer works only with public members and that it does not support read-only properties. Do not mark message and data contract classes as [Serializable], as this changes the wire-format of the XML, which might cause problems in Flex 2.

Note that the ASMX-style WSDL will not respect the [DataMember(IsRequired=true)] setting or any other WCF settings, because it uses the XmlSerializer and not the DataContractSerializer. You will get minOccurs="0" in the generated WSDL for string fields, as string is a reference type: the XmlSerializer by default makes reference types optional, while value types are by default mandatory. Read more about XmlSerializer 'MinOccurs Attribute Binding Support' at MSDN.
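A minimal sketch of what this could look like, assuming a hypothetical Title property and a client-side IsDirty flag on the DocumentCard contract (the member names are illustrative only):

public partial class DocumentCard
{
private string _fieldTitle;
private bool _isDirty;

[DataMember(Name = "Title", Order = 1)]
[XmlElement("Title")] //same element name for the DataContractSerializer (WCF) and the XmlSerializer (ASMX)
public string Title
{
get { return _fieldTitle; }
set { _fieldTitle = value; } //public setter, the XmlSerializer cannot handle read-only properties
}

[XmlIgnore] //client-side only value, keep it out of the ASMX serialization
public bool IsDirty
{
get { return _isDirty; }
set { _isDirty = value; }
}
}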

Note the difference between optional elements (minOccurs="0") and nullable data (nillable="true") in XSD schemas. This is especially useful in query specification objects: make all the elements optional and apply only the specified elements as criteria - even "is null" criteria.


The unsolved WSDL problem described in my last post is no longer an issue for ASMX clients, as the WSDL generated by the classic ASMX web-service naturally supports .NET 2 proxies generated by WSDL.EXE. The issue with the Name parameter of the [CollectionDataContract] attribute is still a problem; the parameter must still be removed.

Thanks to Mattavi for the "Migrating .NET Remoting to WCF (and even ASMX!)" article that got me started.

Thursday, December 21, 2006

Consume WCF BasicHttpBinding Services from ASMX Clients

The interoperability of WCF services has been touted quite a bit: just expose an endpoint using BasicHttpBinding and you will have a WS-I Basic Profile adherent web-service. This is of course backed by some simple examples of math services, all RPC-style and none of them messaging-style services.

We have for some time been consuming our BasicHttpBinding WCF services from a Flex 2 client and a WinForms smart client using a WCF-generated client proxy (svcutil.exe). Just to be certain that our partners would have no problem setting up an application-to-application integration, I decided to test our services ASMX-style using "Add web reference" in Visual Studio (wsdl.exe). This is to ensure that the services are consumable from non-WCF systems, i.e. systems without the .NET 3 run-time installed.

Well, surprise, surprise; it wasn't as straightforward as shown in the examples. There are several details that you need to be aware of, and some bugs in the service/message/data contract serializing mechanism.


I started by creating a new WinForms application and just used "Add web reference" to reference my "projectdocumentservices.svc?WSDL" file and WSDL.EXE generated an ASMX-style proxy for me. So far this looks good. I then added code to call my HelloWorld method on the proxy, which gave me this run-time error:

System.InvalidOperationException: Method ProjectDocumentServices.GetDocumentRequirement can not be reflected. ---> System.InvalidOperationException: The XML element 'KeyCriteriaMessage' from namespace 'http://DNVS.DNVX.eApproval.ServiceContracts/2006/11' references a method and a type. Change the method's message name using WebMethodAttribute or change the type's root element using the XmlRootAttribute.

Note that the Flex 2 client has no such problems with the BasicHttpBinding web-service, thus it must be related to how the generated proxy interprets the WSDL.

My service uses the specified message in several operations:

[System.ServiceModel.OperationContractAttribute(Action = "GetDocumentCard")]
DocumentCardResponse GetDocumentCard(KeyCriteriaMessage request);

[System.ServiceModel.OperationContractAttribute(Action = "GetDocumentCategory")]
DocumentCategoryResponse GetDocumentCategory(KeyCriteriaMessage request);

[System.ServiceModel.OperationContractAttribute(Action = "GetDocumentRequirement")]
DocumentRequirementResponse GetDocumentRequirement(KeyCriteriaMessage request);

I turned to MSDN for more info and found the article 'ASMX Client with a WCF Service' (note the RPC-style math services), which led me to 'Using the XmlSerializer'. So, accordingly, I added the [XmlSerializerFormat] attribute to the [ServiceContract] attribute in my service interface. Then I compiled and inspected the service, and got a series of different errors when trying to view the generated WSDL:

[SEHException (0x80004005): External component has thrown an exception.]

[CustomAttributeFormatException: Binary format of the specified custom attribute was invalid.]

[InvalidOperationException: There was an error reflecting type 'DNVS.DNVX.eApproval.DataContracts.DocumentCard'.]

Applying the well-known XML serialization attributes [Serializable] and [XmlIgnore] at relevant places in the data and message contracts helped me to isolate the problem to the use of collections and the WCF [CollectionDataContract] attribute.



To make a long story short, there is a bug in the WCF framework that affects .asmx but not .svc when using the [CollectionDataContract(Name="...")] attribute parameter:


"This happens only with Asmx pages but not for Svc due to a known Whidbey issue. Removing the "Name" parameter setting would work. If it is possible, you could also have your client instead use svc only."

Removing the Name parameter from all my [CollectionDataContract] attributes made all the WSDL reflection errors disappear, and I was again able to view the generated WSDL file. This time for a service that used the [XmlSerializerFormat] attribute.

Full of hope, I updated the web reference in my test client and ran it. Calling the test method led me straight back to my original problem: the message contract used in multiple operations. Note that removing all but one of the operations makes the problem go away.


According to 'Message WSDL Considerations' WCF should have done this when generating the WSDL:

"When using the same message contract in multiple operations, multiple message types are generated in the WSDL document. The names are made unique by adding the numbers "2", "3", and so on for subsequent uses. When importing back the WSDL, multiple message contract types are created and are identical except for their names."

Inspecting the WSDL showed me that the multiple message types are not generated. Certainly another WCF issue/bug. The problem can be solved by deriving the message class into new classes with unique names and using the derived classes in the operations (see the sketch below). But when you have a message that is used ubiquitously, e.g. DefaultOperationResponse, this is not a viable solution.
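A minimal sketch of the derivation workaround, assuming hypothetical names for the derived messages and that KeyCriteriaMessage is a [DataContract] class (if it is a [MessageContract], apply that attribute instead); each operation gets its own uniquely named, but otherwise empty, subclass of the shared message:

[DataContract(Namespace = "http://DNVS.DNVX.eApproval.ServiceContracts/2006/11")]
public class DocumentCardKeyCriteriaMessage : KeyCriteriaMessage { }

[DataContract(Namespace = "http://DNVS.DNVX.eApproval.ServiceContracts/2006/11")]
public class DocumentCategoryKeyCriteriaMessage : KeyCriteriaMessage { }

//the operations then use the uniquely named messages:
[OperationContract(Action = "GetDocumentCard")]
DocumentCardResponse GetDocumentCard(DocumentCardKeyCriteriaMessage request);

[OperationContract(Action = "GetDocumentCategory")]
DocumentCategoryResponse GetDocumentCategory(DocumentCategoryKeyCriteriaMessage request);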

As of now, our BasicHttpBinding WCF service cannot be consumed by ASMX clients. So much for interoperability...

[UPDATE] Read my new post about how to expose a WCF service also as an ASMX web-service.

Thursday, December 07, 2006

JSR235: Service Data Objects

In my last post, I wrote about how the SOA 'boundaries are explicit' and 'services are autonomous' tenets should make you design service operations that work on an entity as a unity rather than on the bits and pieces making up the unity. This is the well-known 'paper document' message metaphor.

The most common reason for developers breaking the "unity" rule is for optimizing transport performance, i.e. to make the amount of data passed across the wire as small as possible. The rationale behind some of the design decisions of e.g. the Amazon Web Services shopping cart follows this thinking: "Amazon has to be prepared for the one customer in 60 million with 20,000 items in their shopping cart" (Werner Vogels, vice president, world-wide architecture and CTO at Amazon.com). How would you implement an operation for changing a few of the 20,000 cart items?

In my opinion, this kind of transport optimization should not surface in the service operations; it should be handled by the service framework, just as e.g. encoding and encryption are handled by the framework.

The Java community is working on the 'Service Data Objects' specification (JSR 235) to create a better framework for handling entities in service-oriented architectures. The goals of SDO are to help simplify and unify data access across different data source types. Data transport is one aspect of SDO. Another aspect is the change tracking mechanism, which is closely related to data transport: unchanged values of the original tree are omitted when serializing the data graph.

Change tracking applies to any kind of data object, including what is called 'complex data objects' in the SDO spec - i.e. aggregate root objects/entities such as orders. This is a feature that anyone who has implemented an order service would wish for: having automagical support for knowing which changes have been made to the list of order items, simplifying e.g. altering the order in the database.


Service Data Objects has several interesting aspects, and I recommend that you read this IBM article and some of the resources referenced in it.

While you're at it, also read this Architecture Journal article 'Autonomous Services and Enterprise Entity Aggregation'. It discusses how the explicit/autonomous tenets influence the use of core system entities in a service-oriented event-driven design.

Friday, December 01, 2006

Service Architecture: JBOWS, SOA or WOA

A few days ago, I got into a discussion of whether the Amazon Web Services are true SOA or just a programming model exposed using web-services. The discussion started when I said that CRUD style operations are not according to SOA best practices; and that operating on parts of a domain object of type aggregate root should be avoided, favoring actions on the complete entity to be exposed as service operations.

Before making my SOA-vs-JBOWS case, a short introduction to WOA: as adhering to all the SOA service design principles and the different WS-* technologies can be quite daunting and complex, a lot of service-oriented solutions have emerged that take a simpler approach (REST, SOAP, POX, etc). Gartner has coined the acronym WOA (Web-Oriented Architecture) for these kinds of solutions that implement and expose services using more pragmatic techniques following the WOA tenets.

First the "what's in an operation name" issue:

It is not best practice to use CRUDy style operation names that convey only that the domain object state is going to change in the repository (e.g. UpdateCustomerAddress). You should rather use operation names that reveal the event in the business process that caused the action (e.g. CustomerHasMoved). The point is that your service will most likely perform some business logic on the domain object in conjunction with storing it in the database. The operation name should convey the fact that the state of the real-world entity represented by the domain object has changed.


Note that you need not change the operation names very much, sometimes renaming UpdateXxx to ChangeXxx can be sufficient to convey the correct semantics. Focus on creating business process driven services rather than data-driven services.

The discussion I had was about whether names such as AddXxx, ModifyXxx and DeleteXxx are CRUDy style names or not. I think they are too closely named after what the code is going to do with the domain object repository rather than reflecting changes to the real-life entity, i.e. it is more a programming model (WOA) than a SOA operation.


Note that CRUDy operations and names can be quite ok, especially for information services, which will still be needed to manage data in your repositories. There will still be a need for creating new customers, even in SOA.

Then on to the operations on different types of domain objects issue:

A common metaphor for designing SOA operation and data contracts is to think of paper forms being passed around to clerks to fulfil a business process. Another metaphor is mail orders; the point is that the 'document' contains all data needed to perform the business process.

Documents can be simple domain objects or aggregate root objects. An order is an example of an aggregate root object - it contains general order data and a set of order items. The order items belong to the order, i.e. it is an identifying relationship and not just a relation. As a general rule, an operation should process a whole document and never mess directly with 'identified items' within the document. Such operations would be very CRUDy style and break the "boundaries are explicit" tenet.


Note that it is perfectly legal to operate on normal relations, such as adding orders to a customer. It is the nature of the domain and its business processes that decide if a relation is identifying or not, thus there is no hard rule to help you decide whether incremental operations are ok or not.

Using the 'document' metaphor, you would implement a 'RegisterOrder' operation to add new orders. You should, however, not implement operations for changing a subset of the order items; you must rather implement a 'ChangeOrder' or a 'CancelOrder' operation, as sketched below.
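A minimal sketch of such a document-style contract; the operation and message names are illustrative assumptions, not an actual contract:

[ServiceContract]
public interface IOrderServices
{
//the whole order document is registered in one go
[OperationContract]
RegisterOrderResponse RegisterOrder(RegisterOrderRequest request);

//changes are expressed as a new version of the whole order document
[OperationContract]
ChangeOrderResponse ChangeOrder(ChangeOrderRequest request);

[OperationContract]
CancelOrderResponse CancelOrder(CancelOrderRequest request);

//no AddOrderItem/ModifyOrderItem/DeleteOrderItem operations on the identified items
}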

In my opinion, the Amazon shopping cart is analogous to the above order domain object. Thus, according to SOA best practices, they should not expose operations such as CartModify that allow you to do incremental changes to the items in the cart. I understand that they do this for simplicity and performance reasons, and that the shopping cart is really not registered until it is submitted; but some developers take the AWS as general best practices for SOA and apply the same programming model blindly.

I am not saying that the Amazon Web Services are poorly designed; they are just WOA rather than SOA.

A closing note: there is no "one or the other" between SOA and WOA; in fact, they are both central in web 2.0. Read more about it in this report from the Web 2.0 Summit.

Friday, November 24, 2006

WCF Exception Shielding: Providing FaultContract Details using Generics

The WCF design guidelines recommend using the exception shielding pattern to sanitize unsafe exceptions and expose only a controlled set of exceptions using fault contracts. Thus, for every operation that you implement in your service adapter (facade), your method must have a try-catch block, code to create a fault contract object containing relevant exception information, and finally code to create and throw a System.ServiceModel.FaultException. This code will look the same in all your operation methods, and you will soon conclude that the code needs to be centralized into a static utility method to achieve better design and maintainability. In addition, a single static method that creates and provides a ready-to-throw fault exception object is also a good candidate for adding e.g. tracing and logging to your service facade.

I have used a combination of custom exceptions and generics to design a flexible exception handling mechanism that implements the exception shielding pattern for our WCF services. The design involves a generic method that returns FaultException<T> and constrains T to the type FaultContracts.DefaultFaultContract (where T : DefaultFaultContract), which allows us to create a fault exception containing any fault contract type that we add to our service. Our fault contracts do of course all derive from DefaultFaultContract.

The design also involves a set of custom exceptions derived from System.ApplicationException. This allows us to throw exceptions from any component used by the service facade and catch them in the service operation methods, and still be able to extract data from the exception for use as information in the fault contract. The exception handling mechanism relies on this exception polymorphism to be able to handle any type of exception the same way, using a single method.
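A minimal sketch of such a custom exception, assuming the EApprovalErrorCode enum used by the fault contract further down; the constructor signature is illustrative only:

[Serializable]
public class EApprovalException : ApplicationException
{
private EApprovalErrorCode _errorCode;

public EApprovalException(EApprovalErrorCode errorCode, string message)
: base(message)
{
_errorCode = errorCode;
}

//the fault contract reads this code in SetFaultDetails(EApprovalException ex)
public EApprovalErrorCode ErrorCode
{
get { return _errorCode; }
}
}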

This is what the exception handler code looks like:

public static FaultException<T> NewFaultException<T>(Exception ex)
where T : FaultContracts.DefaultFaultContract, new()
{
StackFrame stackFrame = new StackFrame(1);
string methodName = stackFrame.GetMethod().Name;

T fault = new T();
//fault.SetFaultDetails(ex);
Type type = fault.GetType();
type.InvokeMember("SetFaultDetails", BindingFlags.Public | BindingFlags.Instance | BindingFlags.InvokeMethod, null, fault, new object[] { ex });

//add tracing as applicable
//add logging as applicable

return new FaultException<T>(fault, new FaultReason(methodName + " failed, check detailed fault information."));
}

As you can see, the code is quite simple as it delegates to the object <T> to actually fill the relevant exception data and other details into the fault contract.

Note the use of reflection to call the SetFaultDetails method of the fault contract object. This is needed because .NET does not resolve the overloaded versions of a method on a generic type based on the run-time type; it will just call the overload that matches the compile-time type of the parameter. Thus, it would always call the SetFaultDetails(Exception) method, even if the run-time type is a derived exception type. Using InvokeMember circumvents this problem.


Also note the use of the StackFrame object to get the name of the method that caught the exception (i.e. the method that calls the NewFaultException exception handler).

As which info is relevant will vary with the type of fault contract and the type of exception, the DefaultFaultContract class has virtual (overridable) methods for any exception that needs special treatment, in addition to a method for System.Exception that other exceptions default to:

[System.Runtime.Serialization.DataContractAttribute(Namespace = "http://kjellsj.blogspot.com/2006/11/DefaultFaultContract", Name = "DefaultFaultContract")]
public class DefaultFaultContract
{
. . .

public virtual void SetFaultDetails(Exception ex)
{
//default type
this.errorId = (int) EApprovalErrorCode.Undefined;
this.errorMessage = ex.Message;
}

public virtual void SetFaultDetails(EApprovalException ex)
{
this.errorId = (int)ex.ErrorCode;
this.errorMessage = ex.Message;
}


}

Note that you should not expose implementation details in your fault contract data. The use of the exception message in the above code is just for illustration purposes. This is definitely not recommended, as the goal of using the exception shielding pattern is to sanitize the exposed information to avoid providing the service consumer with data that can be used e.g. for attacking and exploiting your system.
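A minimal sketch of a more shielded override, assuming the errorId/errorMessage fields are accessible to the override and that a generic error text is acceptable to the consumers:

public override void SetFaultDetails(EApprovalException ex)
{
this.errorId = (int)ex.ErrorCode;
//do not leak ex.Message; return a sanitized, generic text instead
this.errorMessage = "The request could not be processed. Contact support and refer to error code " + this.errorId + ".";
}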

Finally, this is what all our service operation exception handlers look like now:

try
{
. . .
}
catch (Exception ex)
{
throw MessageUtilities.NewFaultException<FaultContracts.DefaultFaultContract>(ex);
}

The type of fault contract <T> used will of course vary with the operation. Remember that all our fault contracts inherit the DefaultFaultContract class and that the virtual SetFaultDetails method ensures that the different exceptions get the correct shielding and handling.

This exception shielding mechanism is designed to be flexible and simple, and using a one-liner for handling errors in the service operation is as easy as it gets.

[UPDATE] More details about WCF faults in this 11 part article by Nicholas Allen.

Wednesday, November 22, 2006

WCF Contracts: Separating Structure from Data

In my last post about designing WCF data contracts, I showed how to make a simple, straightforward hierarchical contract. In this post, I will refine the contract to make the building blocks of the data contract more reusable, and show how to adhere to WS-I Basic Profile best practices with regard to naming of collections.

The data contract design of my last post is well suited for getting all data about a domain object structure in one go. It can also be used to add/modify nodes to/in the structure; but as each node data contract contains a combination of node metadata and collections of related items, it is not well suited for operations that insert or update items in the structure. The XSD schema of the data contract has no way of conveying what will happen with any collection items contained in the data when inserting a new ProjectDocuments node. The service might just ignore them (the 'easy' way), but this adds concealed semantics to the operations and violates the "share contract, not class/implementation" tenet of SOA. It would be better if the contract was unambiguous and easily understandable for the service consumers (the 'simple' way).

So to refactor the 'project documents' data contract into something that is well suited for both retrieving and modifying data, separating structure from data is needed. The metadata of each node in the hierarchy must be extracted into a new data contract, that becomes just another element in the node along with the collections. The new contract for the metadata is what we needed to make the schemas for insert and update operations simpler to use. In addition, the service will now have less subtle semantics.

The new ProjectDocuments contract using the new DocumentCategory data contract looks like this:

The takeaway is that you should "normalize" your data contracts to some extent to make them more reusable across data contracts, messages and operations.
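Since the schema figure is not shown here, a minimal sketch of what the refactored node could look like, assuming a hypothetical DocumentCategory metadata contract holding the former node-level fields (Id, Code, Description):

[DataContract(Namespace = "http://kjellsj.blogspot.com/2006/11/ProjectDocuments", Name = "ProjectDocuments")]
public partial class ProjectDocuments
{
//the node metadata (Id, Code, Description, ...) is now a separate, reusable contract
[DataMember(IsRequired = true, Name = "Category", Order = 1)]
private DocumentCategory _fieldCategory;

[DataMember(IsRequired = false, Name = "ChildNodes", Order = 2)]
private Collection<ProjectDocuments> _fieldChildNodes;

[DataMember(IsRequired = false, Name = "DocumentCardList", Order = 3)]
private Collection<DocumentCard> _fieldDocumentCardList;

public DocumentCategory Category
{
get { return _fieldCategory; }
set { _fieldCategory = value; }
}

public Collection<ProjectDocuments> ChildNodes
{
get { return _fieldChildNodes; }
}

public Collection<DocumentCard> DocumentCardList
{
get { return _fieldDocumentCardList; }
}
}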

My colleague Anders Norås made me aware that naming collections ArrayOfXxx is not according to the WS-I Basic Profile recommendations. WCF supports specifying both the collection element name and the collection item element name using the [CollectionDataContract] attribute. This attribute cannot, however, be applied to class members, only to a whole class. Thus, you will need a separate class for each type of collection in your data contracts:

namespace KjellSJ.Sample.Application.DataContracts
{
[CollectionDataContract(Namespace = "http://kjellsj.blogspot.com/2006/11/DocumentCard", Name = "DocumentCardCollection", ItemName = "DocumentCard")]
public class DocumentCardCollection : Collection<DocumentCard>{}
}


The use of the custom named collection is straightforward:

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = true, Name = "DocumentCardList", Order = 3)]
private DocumentCardCollection _fieldDocumentCardList;

public DocumentCardCollection DocumentCardList
{
get { return _fieldDocumentCardList; }
}


The number of XSDs in your WSDL will increase significantly when applying custom names to collections. I recommend that you group related DataContract classes into a single namespace, e.g. the contract classes DocumentCategory, DocumentCard and DocumentCardCollection should have the same namespace. All contracts in a specific namespace must be versioned as one, so group your contracts wisely.

If you have code that uses the MetadataExchangeClient object to do MEX stuff, you must ensure that the MaximumResolvedReferences setting has room for all the new metadata. I use the MetaData Explorer from IDesign to inspect the service contract, and had to increase the resolve limit beyond the default 10.
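A minimal sketch of such code, assuming an HTTP-GET metadata endpoint at a hypothetical address:

MetadataExchangeClient mexClient = new MetadataExchangeClient(
    new Uri("http://localhost/eApproval/ProjectDocumentServices.svc?wsdl"),
    MetadataExchangeClientMode.HttpGet);
//the default limit of 10 resolved references is soon exceeded when every collection gets its own schema
mexClient.MaximumResolvedReferences = 100;
MetadataSet metadata = mexClient.GetMetadata();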

Friday, November 17, 2006

WCF: Designing a Hierarchical Data Contract

We have now started designing the data contracts of our WCF services using the WSSF (Web Service Software Factory). All samples and demos I have seen so far shows simple entity objects such as a customer with a few basic value type fields. In this post I will show how to design a data contract for hierarchical data that uses value types, enums, entity objects and collections of entities.

The WSSF wizards require you to design the contracts bottom-up, as any object must exist before it can be added to the contract. Note also that a wizard cannot be rerun to expand data contracts, it can only create new contracts. This limitation is especially daft when it comes to service interfaces: you cannot add another operation to an existing service contract using the wizard. Thus, you must start by defining the leaf nodes of the structure and work your way up to the root element of the hierarchy.

I still recommend that you do a little design up front: start by identifying and writing down the use cases, identify the services (logical groups of operations) that you need to support the use cases, and finally write down a list of the logical operations that you think are sufficient to implement the business processes of the use cases. Use high level terms (verbs and nouns) to outline the set of operations (list, get, register, fulfill, etc; customer, offer, order, invoice, etc). Ensure that the operations in fact are business services, and not an RPC/CRUDy style programming model exposed as web-services.

The data in this example is a list of documents organized into a categorizing hierarchy. Each document again consists of metadata (the document card) and a list of files making up the document (classical DM stuff). Each file has to have an associated file type (enum).

I now prefer using the contract attributes over creating XSD schemas first, even if I still am a big fan of the contract first approach. I always extract the XSD schemas and inspect them with XmlSpy to ensure that the generated schemas look the way I want them to.

This is how the data contract schema should look (split into two for readability):


The file type is an XSD enum:
The data contract wizard makes it easy to add value types, enums and objects to the contract. It is not, however, possible to add a System.Collections.ObjectModel Collection<T> using the wizard. To add a collection, first add just the type (T) to the contract, then generate the contract class and modify it like this:

using System;
using System.Runtime.Serialization;
using System.Collections.ObjectModel;

namespace KjellSJ.Sample.Application.DataContracts
{

/// <summary>
/// Data Contract Class - ProjectDocuments
/// </summary>

[System.Runtime.Serialization.DataContractAttribute(Namespace = "http://kjellsj.blogspot.com/2006/11/ProjectDocuments", Name = "ProjectDocuments")]
public partial class ProjectDocuments
{
public ProjectDocuments()
{
_fieldChildNodes = new Collection<ProjectDocuments>();
_fieldDocumentCardList = new Collection<DocumentCard>();
}

private System.String _fieldId;

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = true, Name = "Id", Order = 1)]
public System.String Id
{
get { return _fieldId; }
set { _fieldId = value; }
}

private System.String _fieldCode;

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = true, Name = "Code", Order = 2)]
public System.String Code
{
get { return _fieldCode; }
set { _fieldCode = value; }
}

private System.String _fieldDescription;

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = false, Name = "Description", Order = 3)]
public System.String Description
{
get { return _fieldDescription; }
set { _fieldDescription = value; }
}

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = false, Name = "ChildNodes", Order = 4)]
private Collection<ProjectDocuments> _fieldChildNodes;

public Collection<ProjectDocuments> ChildNodes
{
get { return _fieldChildNodes; }
}

[System.Runtime.Serialization.DataMemberAttribute(IsRequired = false, Name = "DocumentCardList", Order = 5)]
private Collection<DocumentCard> _fieldDocumentCardList;

public Collection<DocumentCard> DocumentCardList
{
get { return _fieldDocumentCardList; }
}

}
}

As you can see, I have made the DocumentCardList a Collection<DocumentCard> and moved the DataMember attribute to the private member instead of the public property. The reason for moving the attribute is that I made the collection property read-only, a best practice, and the data contract serializer does not support read-only properties.

Note also the self-reference to ProjectDocuments in the ChildNodes collection that makes this schema a hierarchy. The collection becomes an XSD array:

<xs:complexType name="ArrayOfProjectDocuments">
<xs:sequence>
<xs:element name="ProjectDocuments" type="tns:ProjectDocuments" nillable="true" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>

Note that to adhere to the WS-I basic profile best practices, you should customize the collection element name using the [CollectionDataContract] attribute. Read more about using collections in the data contract here at MSDN.

I prefer not to mark as DataContract any enum that is referenced by a DataMember in the schemas. If you apply DataContract to the enum, it will cause the fields using the enum to be interpreted as an XSD string data type instead of an enum data type (xs:enumeration). Just leave the enum class as-is and use it as any other datatype in the DataContract wizard (see the sketch below). Read more about using enums in data contracts here at MSDN.
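A minimal sketch, assuming a hypothetical FileType enum for the document files; the enum itself carries no serialization attributes and is simply referenced by a data member:

public enum FileType
{
Native = 0,
Rendition = 1,
Markup = 2
}

[DataContract(Namespace = "http://kjellsj.blogspot.com/2006/11/DocumentFile", Name = "DocumentFile")]
public partial class DocumentFile
{
private FileType _fieldFileType;

[DataMember(IsRequired = true, Name = "FileType", Order = 1)]
public FileType FileType
{
get { return _fieldFileType; }
set { _fieldFileType = value; }
}
}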

Wednesday, November 01, 2006

Getting Started with WCF (Indigo)

These days I am at a new project at a new customer, after more than one year of rearchitecting and functionally porting an old VB6 solution to .NET 2.0 WinForms, and at the same time centralizing 88 client-server distributions into a single three-tier application server farm distribution.

The new project is about creating an e-biz platform for providing services to partners and other systems; you know the buzz words: SOA and ESB. The solution will use Windows Communication Foundation (WCF) to expose and host the services implemented using .NET.

I have found these resources to be useful when getting started with WCF:


[UPDATE] Some more useful links:

You should also download the WCF/ASMX Service Factory toolkit and guidelines ([UPDATE] v2 des-06); these add-ins to Visual Studio are invaluable to get you started. The toolkit automates lots of the plumbing by providing wizards that generate code following the design guidelines.

Btw, I will be at the DevConnections in Vegas next week. See you there =D

Tuesday, October 17, 2006

Enter Supervising Presenter, exit MVP

This June I wrote about how we used model reference data and the Model-View-Presenter pattern in a WinForms container control with pluggable views. What I did not say was that we did not implement pure MVP, but rather a modified version that fitted better with WinForms data binding and object data sources.

Just about the same time in June, Martin Fowler retired the MVP pattern for just about the same reasons, and the new data binding friendly pattern is the Supervising Controller. The MVP pattern has actually been split in two, the other pattern being the Passive View.

So if you're still using MVP, be sure to catch up on the new patterns.

Wednesday, October 04, 2006

.NET deep clone - IsClone<T> using CloneFormatter

In some previous posts, I have written about how to implement an entity row state mechanism using .NET deep clone and an IsDirty method based on comparing two objects to see if they are clones of each other. The IsClone<T> method was based on comparing the byte streams created by the BinaryFormatter, just as the Clone<T> method uses the BinaryFormatter to do deep cloning.

[UPDATE] A simple way to implement IsDirty when using INotifyPropertyChanged in data-binding enabled entity objects.

As I wrote in my last post, using the BinaryFormatter and the [NonSerialized] attribute in the IsClone<T> method is OK as long as you do not interfere with other stuff that uses the BinaryFormatter, such as .NET remoting. The same applies to using the XmlFormatter, which will affect e.g. web-services. Thus, a separate formatter was needed to support the 'is clone' logic and fix the null/nothing string and the decimal problems. Enter the CloneFormatter, based on the code in this article about Serialization Formatters.

I have made these changes to create the CloneFormatter:

private void WriteSerializableMembers(object obj, long objId)
{
System.Reflection.MemberInfo[] mi = FormatterServices.GetSerializableMembers(obj.GetType());
if (mi.Length > 0)
{
object[] od = FormatterServices.GetObjectData(obj, mi);
for (int i = 0; i < mi.Length; ++i)
{
System.Reflection.MemberInfo member = mi[i];
object data = od[i];
//KJELLSJ: do not serialize when marked with "CloneNonSerializedAttribute"
if (IsMarkedCloneNonSerialized(member) == true) continue;


if (member.MemberType == MemberTypes.Field)
{
FieldInfo fi = (FieldInfo)member;
if (data == null)
{
//KJELLSJ: ensure that null/nothing string gets "normalized"
if (fi.FieldType == typeof(string)) data = String.Empty;
}
else
{
//KJELLSJ: ensure that decimal gets "normalized"
if (fi.FieldType == typeof(decimal)) data = Convert.ToDecimal(Convert.ToDouble(data));
}
}
WriteMember(member.Name, data);
}
}
}



// Is the type attributed with [CloneNonSerialized]?
private bool IsMarkedCloneNonSerialized(MemberInfo mi)
{
object[] attributes = mi.GetCustomAttributes(typeof(CloneNonSerializedAttribute), false);
return attributes.Length > 0;
}


void ReadMember(long oid, MemberInfo mi, object o, SerializationInfo info)
{
//KJELLSJ: do not deserialize when marked with "CloneNonSerializedAttribute"
if (IsMarkedCloneNonSerialized(mi) == true) return;

// Read member name.
. . .
}

In addition, a new attribute was needed to be able to exclude class members from the 'is clone' logic:

[AttributeUsage(AttributeTargets.Field, AllowMultiple=false)]
public class CloneNonSerializedAttribute : System.Attribute
{
public CloneNonSerializedAttribute()
{
//only purpose is for marking fields as not-serialized by the CloneFormatter
}
}


Apply the [CloneNonSerialized] attribute to any class member that should not be part of the 'is clone' comparison.
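A minimal sketch of the usage, assuming a hypothetical entity with a cached field that should not influence the comparison:

[Serializable]
public class OrderEntity
{
private string _orderNumber;

//recalculated on demand, must not make two otherwise equal objects look different
[CloneNonSerialized]
private decimal _cachedTotalAmount;

public string OrderNumber
{
get { return _orderNumber; }
set { _orderNumber = value; }
}
}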

The new IsClone<T> looks like this:

public static bool IsClone<T>(T sourceA, T sourceB)
{
if (sourceA == null || sourceB == null)
return false;

MemoryStream streamA = new MemoryStream();
MemoryStream streamB = new MemoryStream();

IFormatter formatter = new CloneFormatter();
formatter.Serialize(streamA, sourceA);
//must create new formatter to reset the ObjectIDGenerator counter
formatter = new CloneFormatter();
formatter.Serialize(streamB, sourceB);

if (streamA.Length != streamB.Length) return false;

byte[] hashA = streamA.GetBuffer();
byte[] hashB = streamB.GetBuffer();

if (hashA.Length != hashB.Length) return false;

for (int i = 0; i < hashA.Length; i++)
{
if (hashA[i] != hashB[i]) return false;
}

//if here, objects are equal
return true;
}


Note that I have dropped the MD5 hashing in favor of plain byte-by-byte comparison, as suggested by Nuri in a comment to the previous version. This should perform better, even if the for loop now gets longer than max 16 bytes.

Note also that the Clone<T> method still uses the BinaryFormatter to do the cloning. Use the CloneFormatter only for the IsClone<T> logic.
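For reference, a minimal sketch of a BinaryFormatter-based deep clone; the actual Clone<T> implementation from the previous posts may differ in details:

public static T Clone<T>(T source)
{
if (source == null) return default(T);

using (MemoryStream stream = new MemoryStream())
{
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(stream, source);
stream.Position = 0;
return (T)formatter.Deserialize(stream);
}
}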

Note: serializing is not the most performant way to do cloning, read more at Anders Norås' blog.

Thursday, September 28, 2006

.NET Serialization: Custom Formatter, IFormatter

This spring I wrote about how to use cloning and serialization to implement an entity row-state mechanism. We used the BinaryFormatter in the 'clone' and 'is dirty' methods as web-services was the protocol between our tiers (XML serialization), and .NET remoting was not to be used (binary serialization).

Of course, this "absolute requirement" has now changed and using the BinaryFormatter and the NonSerialized attribute is no longer an option as all properties/fields must now be available through remoting. It was time for a custom formatter as the BinaryFormatter class is sealed/NotInheritable.

As the help topics on IFormatter contain only API level details, I thought I should share this excellent documentation and working code with you: Serialization Formatters (published by Universität Karlsruhe).

The article also explains the little known ISerializationSurrogate mechanism; this allows you to provide your own serialization mechanism for specific classes/object types/value types, even when using one of the standard formatters. This is typically used to serialize classes that are not marked as [Serializable]. It can also be used to provide specific handling of null/nothing strings and for normalizing the serialization of decimal values. The former is a problem for the .IsDirty mechanism because a null/nothing string is different from a ""/String.Empty string, while the latter is a problem because the decimal value 1 might be different from the value 1.0 as the binary representation might change.
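A minimal sketch of a surrogate that "normalizes" null strings, assuming a hypothetical Customer class with a single name field:

public class Customer
{
public string Name;
}

public class CustomerCloneSurrogate : ISerializationSurrogate
{
public void GetObjectData(object obj, SerializationInfo info, StreamingContext context)
{
Customer customer = (Customer)obj;
//treat null/Nothing and String.Empty as the same value when serializing
info.AddValue("Name", customer.Name ?? String.Empty);
}

public object SetObjectData(object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
{
Customer customer = (Customer)obj;
customer.Name = info.GetString("Name");
return customer;
}
}

//register the surrogate with the formatter:
IFormatter formatter = new BinaryFormatter();
SurrogateSelector selector = new SurrogateSelector();
selector.AddSurrogate(typeof(Customer), new StreamingContext(StreamingContextStates.All), new CustomerCloneSurrogate());
formatter.SurrogateSelector = selector;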

You might also find this Advanced Serialization MSDN article useful.

[UPDATE] Serializing object graphs that comprise inheritance requires some extra work. If you need to serialize inherited composite objects, the XmlInclude(typeof(DerivedClass)) class attribute is the best solution when the derived classes are known at compile-time. If you need to serialize "unknown" derived objects, the CodeProject XmlSerializer and 'not expected' Inherited Types article provides a really simple solution (the comments section includes a Generics NodeSerializer<T>).
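A minimal sketch of the compile-time approach, using hypothetical base/derived classes:

[XmlInclude(typeof(SpecialOrderItem))] //tells the XmlSerializer about the derived type up front
public class OrderItem
{
public string ItemNumber;
}

public class SpecialOrderItem : OrderItem
{
public string SpecialInstructions;
}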

Some of the MSDN help topics lack working, real-life examples of how to implement an interface or abstract class, which is too bad and makes developers waste a lot of time in "the desert of desperation". I had the same problem last year looking for a working example of how to implement a SoapExtension. I wish Microsoft could do a little more to help developers fall into "the pit of success".

Monday, September 25, 2006

MSCRM 3: Move contact, retain history

My Objectware colleague Bjarne Gram has published an article that shows how to implement a pure client-side solution to move a contact to a new company while retaining all historical info, by linking back to the former employer and a deactivated version of the contact.

Read the article over at Barney's blog.

Monday, September 11, 2006

Intention Revealing Interfaces, Bad Naming and Maintenance

As an architect, I am trying to convey to the other programmers on my current project that the design of our components is very important for the reusability of our solution. Some bad naming of methods and properties makes me think of the famous "How to write unmaintainable code" article, and to make things worse, this especially applies to some core common objects (which are of course reused from another project group :).

The naming focus is on intention revealing interfaces, but still, new methods/properties/class members with creative misspellings, abstract names such as PerformDataFunction, and abbreviations such as IntAccount (integer, internal, international?) pop up in the design now and then.

What triggered me to write this post is misleading name reuse. One of the domain objects is the AccountNumber, and an account number either belongs to the bank or it does not. In addition, an account belonging to the bank can be an internal account for e.g. book-keeping, currency exchange, etc. Looking through the .IsXxx properties of the object I found .IsInternal, but no property to tell me if the account belongs to the bank or not. As the object did not reveal its intention to me, I had to find and ask the developer. And surprise, surprise; the domain term 'is internal' had been used as the 'own bank' indicator as it "seemed natural at the time".

"So why did you not change it when you discovered the misleading term?" I asked, knowing the answer: "I am afraid that it will break the other project. Also, it might be a lot of work to make it build after the refactoring". I know that the "other project" has low unit tests coverage and a complicated build configuration, but none of the solutions using the common component is in production yet.

This leads me to the difference between public and published: it is the published parts of your object model/API you cannot change (they are the contract of your service). The public parts are subject to change without notice. As none of the components are published yet, we decided to refactor the design of the object model to comply with best practices, now rather than later.

The main goal of the refactoring is to comply with some of the central design goals of our components: "power of sameness" and "tools for communication (revealing intention)". As it is now, coding against the common core components is like being an archeologist; piecing together fragments left behind by long gone developers.

Being complex is easy, being simple is hard - Brad Abrams

Monday, August 28, 2006

Using Excel to generate picklist XML for MSCRM

In a recent post, Mitch Milam asked me to provide an Excel based solution to generating the customization XML for MSCRM picklist values. So, here it is: it uses a small VBA macro to loop through a named range of cells to generate the XML (see figure).

The range name is entered into A2 ("CountryList" in the example), and this defines the set of picklist item values. The option offset number to start with is entered into B2 ("10" in the example); it becomes the value of the first generated option. Use one as the 'Start option number' when adding items to an empty picklist.

I have added a button control to run the GenerateCustomizationXml method that generates the customization XML:

Sub GenerateCustomizationXml()

Dim namedRange As range
Dim outputCell As range
Dim listName As String
Dim xml As String
Dim ctr As Integer
Dim listItem As range

listName = range("A2").Text
ctr = range("B2").Text

Set namedRange = range(listName)

xml = "<options nextvalue=""" & ctr + namedRange.Cells.count & """>"

For Each listItem In namedRange.Cells

xml = xml & "<option value=""" & ctr & """><labels><label description=""" & listItem.Text & """ languagecode=""1033"" /></labels></option>"

ctr = ctr + 1

Next

xml = xml & "</options>"

Set outputCell = range("B4")
outputCell.Value = xml

End Sub

The macro generates the XML with the correct option value numbers, calculates the nextvalue number, and puts the result into B4. Mark the B4 cell and click CTRL-C to copy it to the clipboard. Follow the steps outlined in Mitch's blog to import the new picklist values into MSCRM. Refer to the SDK for more details.

The macro is generic and can be used for any named range of cells in the worksheet. Just enter the range name in A2 and run the macro. Naming a range of cells is as easy as selecting the range of cells and typing in the name in the 'name box' in the upper left corner of the worksheet.

Tuesday, August 15, 2006

MSCRM 3 country field: apply a standard picklist

In a previous post, I referred to the DHTML solution by Michael Höhne that uses JavaScript to convert the country field from a text box to a picklist. The drawback of that solution is that the list of countries is provided by an inline JavaScript array, and is thus not very easily maintained.

I prefer that an MSCRM super user can maintain the content of the country picklist like any other picklist in the system. This is actually easy to achieve by modifying the script a little bit: instead of using an array, just copy the options from a hidden custom picklist field that contains all countries. In addition, I will show how to make it work with 'quick create'.

Start by adding a new field (attribute) of type 'picklist' to e.g. the account entity. Give the new field the name 'adminCountry' and add the list of countries. Save and close the added field.

Then open the account form and add the new 'adminCountry' field to the 'Address' section. Make the field read-only and hide the label.


Then click on 'Form Properties', and select 'OnLoad' in the 'Event list' and click 'Edit'. Turn on 'Event is enabled' and use this JavaScript to copy the values from the 'new_adminCountry' picklist:

//************************************************************
//Original author: Michael Höhne
//source: http://www.stunnware.com/crm2/topic.aspx?id=JS1
//************************************************************
//The lookup field to change. You can use this code for any field you like.
var fieldName = "address1_country";

//I'm saving the current field value to set it as the default in the created combobox.
var defaultValue = crmForm.all.item(fieldName).DataValue;

//This is the TD element containing the text box control. We will replace the entire innerHTML to replace
//the input type="text" element with a select element.
//KJELLSJ: replace the INPUT itself

//var table = crmForm.all.item(fieldName + "_d");
var input = crmForm.all.item(fieldName);

//This is the beginning of our new combobox. It's a standard HTML declaration and all we need to do is to
//fill the appropriate options. You should check the original HTML code to get the appropriate values for
//req (field required level) and the tab index.
var select = "<select req='0' id='" + fieldName + "' name='" + fieldName + "' defaultSelected='' class='selectBox' tabindex='1170'>";
//KJELLSJ: build options separately
var options = "";

//KJELLSJ: hide the 'new_adminCountry' picklist
var picklist = crmForm.all.item("new_adminCountry");
picklist.style.display = "none";
//KJELLSJ: inject countries from hidden 'new_adminCountry' picklist
options = picklist.innerHTML;
options = options.replace(/selected/i, ""); //remove selection

options = options.replace(/value=\d+>/g, ">"); //remove numeric values
options = options.replace(/>([\w ]+)</g, "value='$1'>$1<"); //use name as value
var defaultValueFound = false;


//Here's the part that ensures that an existing entity will always display the stored value of the
//country field, no matter if it is included in the option list or not. If it is set and it was not found
//in the previous loop, then defaultValueFound will still be false and we have to add it as a separate
//option, which is also SELECTED.
if ((defaultValue != null) && (defaultValue.length > 0) && !defaultValueFound) {
//KJELLSJ: add selected country as first option
options = "<option value='" + defaultValue + "' SELECTED>" + defaultValue + "</option>" + options;
}

//Close the open select element.
//KJELLSJ: concatenate the select and options
select = select + options + "</select>";

//Finally, I replace the entire definition of the text box with the newly constructed combobox. IE is very
//smart and will instantly update the window. You now have a combobox with a list of all available countries
//in the world and it will be saved directly to the address1_country field.
//KJELLSJ: replace the INPUT itself
//table.innerHTML = select;
input.outerHTML = select;


I have changed the script to use .outerHTML on the <INPUT> element as the 'quick create' form mode does not have a named <TD> element.
Note also the use of regex to transform the standard list of options into a picklist that will store the name of the country instead of the picklist value number.

Save the modified account form and publish the customizations (Actions-Publish). Test the script by opening an existing account and by creating a new account. Also remember to test that the customization works correctly in 'quick create' form mode for the entity. Use Fiddler to see the source of a 'quick create' web-dialog.

I have made a simplification to the 'selected country' logic by just adding the existing country as the first item in the picklist. I prefer that the current value is the top value in a picklist, the same way I prefer that the most used values are at the top of the list.

It is rather simple to extend the script to find the correct country in the options string: use .indexOf() to find the correct option element, then use .replace() to inject the "SELECTED" text into the options string.

This solution combines the best of the DHTML approach with the ease of the replacement-country-picklist approach, while avoiding the need for an OnSave/OnLoad script to keep the standard country field in sync with the selected item in the picklist.

Manually entering all the countries in the world into the picklist is not fun, but this tool at Mitch Milam's blog should make things simpler. Alternatively, it should be rather trivial to use Excel to generate the XML from a range of cells. Most customers provide the set of picklists as Excel worksheets, after all.

Friday, August 04, 2006

BBoM: Golden Hammer, Hammer-Nail Metaphor

The Golden Hammer and "When you have a big enough hammer, everything starts to look like a nail" are two metaphors that are related to the Big Ball of Mud pattern. A central part of BBoM is that developers (coders) stick to the tools and technologies that they know, independent of the actual problem at hand.

This kind of thinking is more common than you might believe. A while ago, I did some work assisting on an integration project that exported data from a RDBMS to an FTP server. As this was a typical ETL process, I recommended using DTS on one of the client's existing SQL Server 2000 servers. DTS is perfect for fetching data, transforming and writing it to files, and then transferring them using FTP.

One of the developers, however, was not familiar with DTS; but he knew how to write Windows services in VB6. And so he implemented his part of the ETL as a service - running on his PC (on his own, doing his "thing"). In addition, there is no 'deployment & configuration' documentation (also very typical for BBoM solutions). The 'truck factor' for his solution is one, and now he is gone/unavailable for six months; and there is a huge note on his PC: "DO NOT LOG OFF OR TURN OFF THIS COMPUTER. PRODUCTION CODE IS RUNNING HERE". Say no more . . .

Another of my favorite phrases is "Assumption is the mother of all fuck-ups" (from the Steven Seagal movie Dark Territory). I apply it any time a coder tells me that something need not be tested/checked/verified.

Wednesday, July 05, 2006

MSCRM 3: Convert text boxes to picklists using DHTML

All of our MSCRM customers are quite annoyed that the address country field is just a text box and not a drop down list of all existing countries. This makes e.g. reporting by country hard to do. There are several suggested workarounds out there, but Michael Höhne has come up with a very neat trick using DHTML and JavaScript in the OnLoad event of the MSCRM form.

The script replaces the country field with a picklist with the same name/id as the standard text field. This technique can of course be applied to any text field that you would rather see as a picklist.

A nice extension to Michael's script would be to replace the hardcoded picklist values with dynamic fetching of the values using a web-service, AJAX style. This would make maintenance of the picklist content simpler, providing the super user with a centralized location for picklist management. After all, the country field is used in several places in MSCRM, and the list of countries seems to change every week these days.

Arash Ghanaie-Sichanie's excellent article "Accessing Web Services From CRM Forms" shows how to implement dynamic lookup of values. Note that using a web-service might not be feasible for the MSCRM laptop client (offline).

The full ISO 3166 country list can be found here.

Monday, June 26, 2006

MSCRM 3 Laptop Client – Delete Contact Synchronization/Tracking

The laptop client of MSCRM 3 synchronizes a user’s MSCRM contacts with the default Outlook contacts. By default this comprises the local data group ‘My contacts’, i.e. all contacts owned by the user.

This synchronization works fine and allows a tracked CRM contact to be modified either through the MSCRM contact form or through the Outlook contact form. However, when deleting a contact in either location, users can get confused by the result, as the outcome of the synchronization mechanism varies: a contact might continue to exist either in Outlook or in MSCRM, but never in both places.

Central to the synchronization mechanism is the Outlook-MSCRM link. This link is what relates an MSCRM contact to an Outlook contact, and defines that a contact shall be updated during synchronization. This link can be OK or broken, and this is what defines whether an Outlook contact is tracked or not. It is the user's synchronization data groups that define which contacts will be synced (created, updated).

[UPDATE] This article on the MS CRM Team Blog shows the complete set of rules regarding deleted item synchronization.

The result of a contact deletion varies depending on the owner of the contact and whether it was deleted in MSCRM or in Outlook; and the result might be a deletion of the contact in one or both places:



All testing behind these rules has been run using ‘CRM-Synchronize Outlook with CRM’ after each create/delete/change owner action. Remember the “My Active Contact” filter when testing these rules.

The term “tracking removed” means that the Outlook-MSCRM link is broken. The term “syncing removed” means that not only is the link broken, it is removed and the contact is no longer comprised by the synchronization. The latter means that you can create a new Outlook contact with the same data as the deleted one and apply tracking, causing a duplicate MSCRM contact to be created at the next synchronization. The lack of a (broken) tracking link prevents the syncing mechanism from detecting the contact duplication.

The easiest way to check whether an Outlook contact is tracked or not is to open the contact and see if the CRM toolbar says “Track in CRM” (not tracked, no link or broken link) or “View in CRM” (tracked, link is OK). Note that if the ownership changes after a contact has been synced to Outlook, then the Outlook contact will behave like an untracked contact with a broken link.


The second thing about CRM contact deletion that confuses users is the behavior of the tracking mechanism when one of their contacts was deleted in MSCRM and then recreated, but fails to synchronize again. The MSCRM laptop client will give you a warning if you try to re-apply tracking of an Outlook contact that has been deleted in MSCRM (i.e. was previously linked), telling you that it is no longer synchronized, and asking if you want to create a new record in MSCRM. Note that if you delete and recreate in Outlook, there will be no warning.

At this point in the synchronization/tracking adventure, the user will try to re-link Outlook and MSCRM as best they can. It is now very likely that duplication of a re-created contact will happen.

When an owned contact was deleted in MSCRM (will exist in Outlook), this is what typically happens when users try to re-apply tracking:

  1. Create the contact again in MSCRM and synchronize
  2. You will now have two versions of the contact in the Outlook contact folder: a new tracked/synced contact, and the old Outlook contact

When an owned contact was deleted in Outlook (will exist in MSCRM), this is what typically happens when users try to re-apply tracking:
  1. Create the contact again in Outlook, track it and synchronize
  2. You will now have two versions of the contact in MSCRM: a new tracked/synced contact, and the old MSCRM contact

So, do not apply any of the above steps to fix the broken links and re-enable tracking and synchronization for deleted MSCRM contacts. This is the correct routine to re-apply synchronization tracking:
  1. Open the ‘Set Personal Options’ dialog using the ‘CRM-Options’ menu
  2. Turn off syncing of contacts and click OK
  3. Use ‘CRM-Synchronize Outlook with CRM’ to run the sync; this clears the deletion tracking (broken link tracking)
  4. Turn syncing back on and rerun the sync; this will automatically re-create Outlook contacts for the existing MSCRM contacts (as the deletion tracking is gone)

Re-creating MSCRM contacts for the existing Outlook contacts requires that you apply these steps to each of the contacts:
  1. Open the Outlook contact and click "Track in CRM"
  2. Click save to get the warning question "Would you like to create a new record in CRM?"; optionally use "View existing record in CRM" to verify that the contact does not exist
  3. Answer 'yes' and this will create a new, tracked MSCRM contact
  4. Repeat the steps for all deleted MSCRM contacts
NOTE: it is better to recreate the deleted MSCRM contacts first, as you will then have fewer Outlook contacts to process. After all, recreating the deleted Outlook contacts is an automatic process.

In addition, I recommend that you create a new contact folder in Outlook for your private contacts, as this makes it really simple to keep the biz contacts separate from your wife and grandmom.

The confusion around delete synchronization and tracking is one of the top issues in the MSCRM newsgroup. Read this deletion tracking explanation at the MSCRM team blog.


Note that the MSCRM desktop client is always online, and therefore no synchronization is needed.

Thursday, June 22, 2006

Integrating team-sites into MSCRM (part II)

The MSCRM team has published a step-by-step guide for removing the chrome from SharePoint team-sites integrated into MSCRM. This was one of the three tasks that I outlined in my previous post on this topic. Note that I recommend hiding the chrome rather than deleting it from the web-part-page. I will add some details about the other tasks regarding the view columns and the view toolbar in this post.

What you will soon find out when integrating a team-site into an MSCRM <iframe> is that in order to keep the integrated appearance, you need to control all navigation options in the team-site. This applies both to standard hyperlinks and to JavaScript onclick links. The navigation options allow a user to open other web-pages inside the <iframe>. As you cannot easily control the navigation options in these other pages, navigating away from the tailored view can lead to a less integrated appearance.

I recommend that you tailor the MSCRM view of the WSS team-site to show only the necessary information to the user and provide only a few action options in the view, plus links to open the standard SharePoint team-site or doc-lib in a new browser for full access to the collaboration features of WSS. This method is similar to the view/actions functionality of the new MOSS 2007 Business Data Connector (not to mention IBF...). Check out the video about MOSS 2007 including BDC at Channel 9.

You need to review the navigation options in the team-site web-page to ensure that users will not be able to stray off-limits. As I explained in part I, you will not be able to customize all navigation options this way, as most of the content of the team-site is dynamically emitted by the web-parts on the page. Thus, you need to control the navigation dynamically, using JavaScript and DHTML, after the page has completed loading. I use a script that modifies all applicable hyperlinks to open in a new browser window, and that removes all onclick link actions except on list sorting links.

This script was left to you as an exercise in part I, but for those of you that prefer copy-paste coding, here it is:

<script language="javascript">

function OnLoadSetHyperlinkTarget()
{
    var links = document.getElementsByTagName('a');
    //alert('Number of hyperlinks: ' + links.length);

    for (var i = 0; i < links.length; i++)
    {
        var link = links[i];

        //preserve links explicitly marked to stay inside the <iframe>
        if (link.id == 'EXCLUDE') continue;

        //make ordinary hyperlinks open in a new browser window
        link.target = '_blank';

        //remove onclick handlers, except on the list sorting links
        if (link.onclick != null && link.href != 'javascript:')
        {
            //alert('<A> onclick, id: ' + link.id);
            link.onclick = null;
        }

        if (link.href != '')
        {
            if (link.href == 'javascript:')
            {
                //leave sorting links as-is
            }
            else if (link.href.substring(0, 10) == 'javascript')
            {
                //disable other javascript: navigation links
                //alert('<A> href javascript, id: ' + link.id);
                link.href = '';
                link.onclick = null;
            }
        }
    }
}
</script>


As you can see by examining the script, you can preserve hyperlinks by setting their id = 'EXCLUDE'. This is useful when adding your own hyperlinks that should not navigate out of the <iframe>. Note that the script removes all onclick handlers, thus it should not be used in combination with the full toolbar of lists and doc-libs.
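
As an illustration, this is how you could add your own link that stays inside the <iframe>; the link target and text are just assumptions:

//hypothetical example: add a link that the clean-up script leaves alone
//because of the EXCLUDE id, so it still navigates inside the <iframe>
var homeLink = document.createElement('a');
homeLink.id = 'EXCLUDE';
homeLink.href = 'default.aspx'; //assumed page inside the team-site
homeLink.innerText = 'Team-site home'; //innerText is fine, MSIE only
document.body.appendChild(homeLink);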

Add this to the very bottom of the page to run the script:

</body>
<script language="javascript">

OnLoadSetHyperlinkTarget();
</script>
</html>


Note that the running of the script at the end of page load can be refined by using an onload JavaScript handler that waits for the page readyState to become "complete" before running it. Just running the script right after the closing </body> tag, as shown above, should work well enough in MSIE.
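
A possible (untested) refinement along those lines, replacing the direct call at the bottom of the page with a simple polling wrapper:

<script language="javascript">
function WaitForPageComplete()
{
    //run the link clean-up only when the page has finished loading
    if (document.readyState == 'complete')
        OnLoadSetHyperlinkTarget();
    else
        window.setTimeout('WaitForPageComplete()', 100); //try again in 100 ms
}
WaitForPageComplete();
</script>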

You must also review which column types you include in the MSCRM view of the document library. The reason for this is that some of the column types provide options that cause navigation. E.g. use "Name (linked to document)" as the document link column, rather than the column type that provides the drop-down edit menu. Use "Modify settings and columns-Views-Edit view" to customize the MSCRM view to contain only the most basic meta-data and options, and provide a link to open the standard SharePoint view in a new browser with full doc-lib features.


Regarding the document library toolbar type, it is safer to use the 'Summary' toolbar in the new view rather than the full toolbar. Alas, the functionality of the full toolbar might be more important than controlling the navigation options. If this applies to you, then use the full toolbar - just remember to modify the above script accordingly.

Finally, I want to refresh your memory on these two WSS tips from part I:

Do not hesitate to remove the SharePoint "Modify shared page" menu link. The toolbox can always be summoned using this querystring: ?mode=edit&PageView=Shared&toolpaneview=2 (see SharePoint tweaks).

Also try this nifty little SharePoint querystring trick: ?contents=1