Wednesday, December 21, 2005

VS2005 Design-time set DataSource error III

Design-time data-binding in .NET 2.0 WinForms is a real productivity booster that I use extensively. I have, however, had some problems with using the new object binding source mechanism (error I, error II) and today yet another problem. I began designing a new user-control a few days ago, and being a lazy developer, I copied some data bound comboboxes that I needed from another user-control.

I should have known better: copying controls was a recipe for immediate disaster in the VS2005 betas; you could be sure that the Visual Studio designer would fail when you later re-opened the user-control. The same applied to renaming a control after modifying some of its properties. These problems have been fixed in the release version of VS, although opening a form still randomly causes the cursor to go into a blink frenzy for quite some time while re-syncing with the .Designer file.

Copying those combos kept some of the data binding properties (ValueMember, DisplayMember), but did not copy the binding sources (more on this below). I added the object binding sources in the 'Data Sources' explorer, and after some refreshing of the project data sources (error II), object binding seemed to work properly.

Today, I added another object binding source to the 'Data Sources' explorer of the user-control. If I then tried to use the DataSource property at design-time, I got this error:

Object reference not set to an instance of an object

Setting the data binding at run-time works fine. So does copying the original data source binding code from the hidden .Designer file; it even shows correctly in the control properties explorer, but it cannot be changed due to the above error. This is not how I want to do it, though; I wanted design-time binding to work. Any attempt to modify data binding on any control in the form would cause the design-time error.

In an attempt to fix the binding problems, I used an old trick when working with software and computers: I restarted Visual Studio, opened the solution and cleaned it. Then I clicked the DataSource property of my failing combo. Now I got this error, which anyone who has parsed XML will recognize:

Root element is missing

After Visual Studio had shown this puzzling error (as usual, no intelligible info provided in the message), all design-time data binding worked properly again and the combos were filled at run-time. Oh joy! . . . But the object binding source that I had added last was gone from the 'Data Sources' explorer. As long as the design-time binding kind of worked again, I could live with that.

I suspected that there was a problem with one of the XML files in the solution and checked the various .RESX files and some other files with XML content, but I was not able to find any error. When I had just about given up, I noticed that the three listed data sources in the 'Data Sources' explorer also happened to be the first three .datasource files in the \My Project\DataSources\ folder (use 'Show all files' in the 'Solution Explorer'). Thus, I opened the fourth .datasource file, and voila, it was empty! Visual Studio clearly just stops looping through the data sources when it encounters an error.

I excluded the empty file from the project, reloaded the solution, and then all the project data sources were listed in the 'Data Sources' explorer. Design-time binding now works as expected again. Oh, even more joy!

I am still skeptical about copying object bound controls from one WinForms user-control to another. The VS designer might punish you. Maybe I am just being paranoid.

Monday, December 19, 2005

VS2005 Crash on binding to data source II

Today I got a problem with VSTS 8.0.50727.42 that caused Visual Studio to just die, sometimes even without opening the error reporting dialog. This happened when trying to bind a DataGridView to a newly added object data source by using the popup toolbox of the grid. VS would vanish when I clicked the 'Choose data source' combo. No error message.

As I have had problems with the VS2005 data source mechanism before (data source crash I), I have some experience in troubleshooting data sources. I started out checking for invalid, zombie data sources in the \My Project\DataSources\ folder (VB.NET), and removed an old, unused data source. This did not solve the problem, but I prefer to diagnose a clean solution.

When I made the new business entity object that I was adding as the new binding source, I also refactored some of the other entity objects to remove some obsolete properties and changed some property names to reflect gained knowledge about the domain.

I found out that VS2005 is not fond of such changes in the assemblies used as object binding sources. It is however, quite easy to make the data sources reflect the changes:
  • Select the project in the solution explorer
  • Click the 'Show all files' button
  • Navigate to 'My Project' and expand it
  • Navigate to the 'DataSources' child node and expand it
  • Select each of the data sources in turn
  • Right-click the data source and select 'Refresh'
Note that there is no refresh option on the 'DataSources' node in solution explorer, you have to manually refresh every single data source in your solution.

After refreshing the data source definition cache as described, VS2005 no longer performs harakiri in response to refactoring objects used as data sources.

Friday, December 09, 2005

.NET deep clone - IsDirty check using IsClone<T>

Those of you that read my colleague Anders Norås' blog may have tried the deep clone using serialization method (see also article at MSDN Mag). This is really useful e.g. when implementing undo, transactions, and other mechanisms that need to make copies of objects.

[UPDATE] Refer to this post for a faster IsDirty check and more reliable IsClone method: IsClone using custom formatter.

Another useful appliance of the clone method is for implementing an IsDirty property in your business entity objects or in other areas of your application. I have implemented an IsClone method that checks whether two objects are identical or not:

public static bool IsClone<T>(T sourceA, T sourceB)
{
    IFormatter formatter = new BinaryFormatter();
    Stream streamA = new MemoryStream();
    Stream streamB = new MemoryStream();

    formatter.Serialize(streamA, sourceA);
    formatter.Serialize(streamB, sourceB);

    if (streamA.Length != streamB.Length) return false;

    streamA.Seek(0, SeekOrigin.Begin);
    streamB.Seek(0, SeekOrigin.Begin);

    byte[] hashA = new System.Security.Cryptography.MD5CryptoServiceProvider().ComputeHash(streamA);
    byte[] hashB = new System.Security.Cryptography.MD5CryptoServiceProvider().ComputeHash(streamB);

    for (int i = 0; i < 16; i++)
    {
        if (hashA[i] != hashB[i]) return false;
    }

    //if here, the objects have the same hash = are equal
    return true;
}


The "source" objects must of course be deep serializable. The method uses hashing to achieve good performance. The compare using MD5 is based on code by Corrado Cavalli for fast comparison of byte arrays.

Wednesday, December 07, 2005

TFS server restored - I miss SourceSafe's checkout options

The TFS server (beta 3 refresh) of our project had to be re-installed due to database problems, but I continued working offline (without TFVC) on the components that I am solely responsible for. I have done this many times before when SourceSafe has been unavailable due to no access to the "database" file share. After all, as a consultant I am often required to make changes to a previous project deliverable after moving on to the next customer. Sometimes I even have to fix bugs (a rare experience for me).

Today, the TF Version Control system was operational again. The TFVC responsible rebuilt the project from his last known good copy of the source files. Time for me to merge in my "offline" changes. I opened the 'Source Control Explorer', browsed to the project and selected 'Get latest version (recursive)'. TFS dutifully discovers and lists the source files that I have made changes to, and prompts me to resolve the conflicts (conflict type is 'Writable file').

TFVC provides you with three options for resolving writable file conflicts:

  • Check out and auto merge: nice when it works, but it never did for me
  • Overwrite local files/folder: does exactly what it says, just remember to make copies of your "offline" work
  • Ignore conflicts: a pointless option, as TFVC will fail later on if you try to check out a writable file

Note that if you try to check out a writable file, you will get this error: "Checkout error or user cancellation - File was not checked out".

Where did the useful SourceSafe checkout options for writable files go? As I am sure that sometime in the future I will have to make offline changes to files again, I miss these two options in TFVC:

  • Check out and replace: yes, I made some changes, but I want to discard them now, and then continue working on the file to implement the needed changes
  • Check out and leave: yes, let me keep my changes - AND - make it easy for me to add my changes to this file to TFVC

Please, can I at least have the 'check out and leave local file' option back? Pleeease, TFVC team!

TFVC is really cumbersome when merging "offline" changes back into the version control system:

  1. Make a backup of your offline files using Windows Explorer before even thinking about using TFVC
  2. Perform 'Get latest version' to see which 'Writable files' conflicts there are, and remember the list of files
  3. Resolve all conflicts with 'overwrite local files/folder', unless the 'auto merge' option works for you; the 'ignore' option takes you nowhere
  4. Perform 'Check out for edit' on the files; note that this will only work if you have first "resolved" the conflict by overwriting your offline changes
  5. Replace the old versions of the changed files with the "offline" files using Windows Explorer
  6. Finally check in the 'pending changes' set of source files

You will soon find out that the above should be done with a small set of files at a time, as it is easy to lose track of which source items have been merged and which have not.

I wonder what kind of usability tests they do at Microsoft to think that TFVC will always be available/online, or to decide that options that were available in SourceSafe for a good reason are no longer needed in the brave new team foundation world. Maybe it is true that the TB manager is so oblivious to the community that he had never heard of CruiseControl.NET.

Tuesday, December 06, 2005

TFS server down - continue working in VSTS

Yesterday our TFS server (beta 3 refresh) started behaving strangely; if I checked in a file and then immediately viewed the file through 'Source Control Explorer' history, it would open the correct file (correct file name in the title bar), but it contained MSBuild XML content instead of the original code. This did of course cause all server-side builds to fail. The project's TFS gurus are working on restoring the different TFS databases in the correct order, but this has turned out to be non-trivial. TFS seems to work fine for a while, then the problems are back.

Thus, we are not able to use TFS as of now, but I need to continue working locally on my PC. The default source control settings in VS2005 are not very useful when a connection to TFS cannot be made; it will try to check out files and then just fail, not giving any fallback options. I had to make these changes to 'Checked-in items' in the VS2005 options to be able to 1) edit files and 2) save them locally:


Note that you must clear the 'read only' attribute of a file to be able to save it. Just select the file to overwrite in the 'Save as' dialog, then use 'alt-Enter' to change the properties of the file.

The old SourceSafe 'overwrite' option popup is not available when editing and saving a file that is under source control. You need to change the TFVC options before you can edit and save the file.

The reason for trying to restore the databases is that one of the developers could not delete a test project from TFS using the TFSDeleteProject tool, and then manually deleted some records in the database. The gurus have just given up the restore activities, and are now reinstalling TFS from scratch...

The moral: don't mess with stuff that works.

Tuesday, November 29, 2005

MSCRM 3.0 in a multi AD forest infrastructure

MSCRM 3.0 by default supports a single AD domain (really a single AD forest) and a single Exchange 'organization'. The full spectrum of MSCRM functionality will be available to your users when your infrastructure adheres to these requirements. I will call the AD domain into which you install MSCRM, SQL Server 2000/2005 and SRS, the native domain. The same term is used for the Exchange organization of the native AD domain.

These are the infrastructure challenges you will most likely encounter outside the native domain:

  • Outlook desktop client: access to MSCRM platform services
  • Outlook laptop client: desktop + go offline and online
  • Exchange: Routing of incoming e-mails to MSCRM users and queues
  • SQL Server Reporting Services: access to SRS services for reporting
First, an overview of what should work and what should not, depending on some 'unsupported' infrastructure scenarios:

If you have multiple AD forests without explicit trusts, then the users not in the native domain will get only basic MSCRM functionality: the web client over HTTPS with basic authentication. These users will be able to use neither the online nor the offline Outlook client (MSCRM desktop / laptop client), as they are not logged on to the domain. Note that such users will not get full reporting functionality with SQL Server Reporting Services (SRS) in this scenario.

If you have multiple Exchange organizations without explicit trusts, then the users not in the native organization (forest) will get only basic send e-mail functionality; the 'e-mail router' will not be able to automatically route incoming e-mails, as the mailboxes are not in the native organization. In addition, the router cannot access the native AD domain when it is not explicitly trusted.

If you have users in an NT4 domain with a one-way trust from the native domain, these users will be able to use both the web and desktop client. They will not be able to use the laptop client as they cannot go off/online, incoming e-mail will not be routed to them, and they will not get full reporting functionality.


Then an overview of how a multi AD forest and Exchange organization infrastructure can be configured to support full MSCRM functionality:

First of all, forget getting full MSCRM functionality for NT4 domains. Microsoft does not support NT4 anymore, so you're on your own.

The good news is that your users across several AD forests will be able to get the full spectrum of functionality available in MSCRM 3.0. This will just require some configuration of your infrastructure.

The most important aspect is that you have to add at least one-way trusts from the native MSCRM domain to the other domains. Trusting requires a LAN, WAN, or VPN connection between your domains. Support for full MSCRM over plain HTTPS is not possible.

The MSCRM Outlook client requires Windows Authentication / Kerberos against the native AD domain and usage of the default security credentials on the client PC. Thus, by adding one-way trusts, your users will be able to use both the MSCRM desktop client and the laptop client. Sending e-mail will of course work, while routing of incoming e-mails to users and queues will require some more configuration (see below).

Note that basic SRS reporting functionality will be available with just one-way trusts. For full reporting functionality, two-way trusts are needed between the AD forests. Alternatively, you need to configure a fixed identity on the clients for accessing the SRS reports (KB article to be published).

MSCRM 3.0 now supports having multiple Exchange servers in your native Exchange 'organization', including Exchange clusters. It is no longer required that you have a single Exchange server handling all incoming internet e-mails for your 'organization', as the functionality of the MSCRM e-mail router has changed in v3.0.

The v1.2 router had this limitation, which made it impossible to have one common MSCRM database in a company with multiple Exchange organizations (mail domains). E.g. I work in a company with several subsidiaries and thus several mail domains (itera.no, objectware.no, gazette.no, etc). This meant that with v1.2 we could not get full mail functionality in MSCRM. With MSCRM 3.0, we finally can.

The router no longer inspects all incoming mail messages, but rather a specific MSCRM mailbox.

The inspection of all incoming mails has been replaced by Exchange rules that must be deployed to each Exchange server that contains one or more mailboxes of MSCRM users and queues.

The Exchange rules, the MSCRM mailbox and the E-mail Router by default require mailboxes to be in the native domain and native 'organization', as the router must be able to access the MSCRM platform services to do its work.

You can deploy the mail routing rules and components to other Exchange organizations, provided that you configure the routing service to use an identity that has access to the MSCRM platform services. This will of course require that you have at least a one-way trust between the AD domains.

Wednesday, November 23, 2005

MSCRM 3.0 added fields - row size limitation

MSCRM 1.2 had an undocumented limit to the number of (actually, the combined size of) fields you could add to an entity. At least the MBS marketing department did not know of any limit; their line was "you can add as many custom fields as you like". This limit is imposed by the SQL Server 2000 maximum row size of 8KB, minus some overhead for replication. In addition, v1.2 used updatable SQL views with 'instead of' triggers, which further limited the available size. Some of the entities in v1.2 are quite large to begin with, e.g. the Contact entity, so you would soon hit the roof.

In v3.0, they have raised the limit by providing an extra full row for custom fields, i.e. a separate table for each entity that holds the added fields. In addition, SQL replication is gone, and so are the updatable views. Thus you will be able to exploit the full range of bytes in a row as you please. The new tables are named *ExtensionBase, e.g. AccountExtensionBase. All text is of course still Unicode, thus each character will take up two bytes in the database row.

Note that all new custom fields are added to the *ExtensionBase table, custom fields are no longer injected into the native table of an entity.

SharePoint has a similar mechanism for custom metadata on lists; the metadata fields all share the same database table. Although this limitation exists, it rarely imposes practical restrictions in our solutions, and I think that the same will apply to MSCRM custom fields.

Saturday, November 19, 2005

Noogen.Validation - WinForms validation made easy

After doing mostly ASP.NET and SharePoint solutions, I was quite pleased with the validation mechanisms of ASP.NET. I was very surprised and disappointed when I moved to developing WinForms solutions, and had to downgrade from the ASP.NET validator and validation summary controls to the WinForms stuff.

Gone were the validators, and I had to use an ErrorProvider on my forms, as if I need something to provide me with errors. The worst was the need for iterating recursively over all controls on a form when clicking OK, to ensure that all errors had been corrected and that the validation events returned success, before e.g. calling my biz-logic to save changes. I just wanted to have my Page.IsValid property back.

At my current project we agreed that the standard validation mechanism was too awkward for us, and one of the developers did some research to find a component that would make WinForms validation as simple as WebForms validation. This led us to Noogen.Validation at CodeProject, a control that we have been using for some time now.

I just love the simplicity and flexibility of the Noogen.Validation component, and I strongly recommend it. It is the best add-on component I have used since the Farpoint Spread control for VB6. Thanks, Noogen!

Wednesday, November 16, 2005

System.Transactions LTM "limitation"

I have used the 'transaction context' aspect of component services, such as [Transaction(TransactionOption.Required)] in .NET and MTSTransactionMode in VB6, in a lot of components I have implemented since the first release of MTS/COM+. I just love having declarative transactions through the context, as this gives maximum flexibility in the ways components can be instantiated, mixed and used. EnterpriseServices does, however, incur some performance overhead due to e.g. using the DTC (Microsoft Distributed Transaction Coordinator).

ADO.NET 2.0 provides a promising new mechanism that is similar to the COM+ transaction context, through the System.Transactions namespace (MSDN Mag intro article). System.Transactions provides a new Lightweight Transaction Manager (LTM) that in combination with SQL Server 2005 is capable of providing transaction context through the TransactionScope class, without the overhead of DTC. The LTM is the starting point of a SS2K5 transaction, and it can be promoted into using DTC when other resource managers are involved in a transaction.

I was very disappointed when I implemented my first nested TransactionScope and ran my unit test on the biz logic method. The code gave this exception:

"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."

This was caused by code that involved two TableAdapters within a single transaction. Each TableAdapter contains its own SqlConnection that it opens and closes when appropriate. All connections are identical and use the same SS2K5 database (resource manager). Still, the LTM decides that because two connections are opened, a full DTC transaction is needed. Microsoft has confirmed this "limitation", which I would rather call a design error in System.Transactions. It is after all the same resource (database) within the same resource manager (SS2K5). Quote MSDN: "The LTM is used to manage a transaction inside a single app domain that involves at most a single durable resource."
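
A minimal sketch of the offending pattern, assuming two generated TableAdapters for an orders dataset (the adapter, dataset and table names are made up for illustration); the second adapter opens its own connection inside the scope and forces promotion from the LTM to a full DTC transaction:

//requires a reference to System.Transactions.dll and 'using System.Transactions;'
using (TransactionScope scope = new TransactionScope())
{
    //first connection opened and closed by the adapter: handled by the LTM
    ordersAdapter.Update(ordersDataSet.Orders);

    //second connection against the very same SS2K5 database: the transaction
    //is promoted to MSDTC, and fails if DTC network access is disabled
    orderLinesAdapter.Update(ordersDataSet.OrderLines);

    scope.Complete();
}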

What is the point of using System.Transactions instead of System.EnterpriseServices when even the simplest real life scenario with multiple TableAdapters causes DTC to be required? This scenario occurs e.g. when updating an order and its order lines (one DataSet, two DataTables, two TableAdapters). Microsoft should really start providing samples that go beyond single connection, single class, single component, single assembly applications.

I recognize that System.Transactions is a lightweight framework as opposed to System.EnterpriseServices, but it should not cause "bad" component interface design such as passing open SqlConnection objects around as parameters to avoid DTC for a single resource.

[UPDATE] If you need to stay LTM, you should consider using the DbConnectionScope provided by the ADO.NET team. Note that this will use a single connection for the duration of the scope; staying LTM, but also keeping the connection resource open longer - which counters connection pooling advantages.

[UPDATE] Read about the LTM improvement in .NET 3.5 and SQL Server 2008 and some less known System.Transactions gotchas here.

Monday, November 14, 2005

VSTS/TFS - xcopy to latest build; assembly references

The Team Build system of the Team Foundation Server builds a solution to a drop folder $(DropLocation) with a new sub-folder for each successful build $(BuildNumber). Using a dynamic folder as the source of a project's referenced assemblies in Visual Studio is not supported, thus a post build action is needed to copy the built assemblies to a fixed location.

The task of copying the generated assemblies to a fixed 'latest build' folder is called 'Publish' in TFS. How to configure this custom action to xcopy *.* is, however, not bleeding obvious when setting up your build; in addition, the documentation seems to be incorrect. We have used this custom action configuration (see last reply) to publish to our \latestbuild\ folder. The "workaround" is to use <CreateItem> instead of an <ItemGroup>.
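
For reference, a rough sketch of what such a publish step could look like in the team build project file; the target name, item name and the UNC path below are assumptions, and the exact hook point may differ from our actual configuration:

<Target Name="AfterDropBuild">
  <!-- an ItemGroup would be evaluated too early, so CreateItem is used to pick up the freshly dropped files -->
  <CreateItem Include="$(DropLocation)\$(BuildNumber)\**\*.*">
    <Output TaskParameter="Include" ItemName="LatestBuildFiles" />
  </CreateItem>
  <Copy SourceFiles="@(LatestBuildFiles)"
        DestinationFolder="\\server\latestbuild\%(RecursiveDir)" />
</Target>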

I really think that the Team Build wizard should include an option to specify a latest build location in the Location step.

These MSDN blog posts 'Part I' and 'Part II' explain how assembly references are resolved in Team Build. Note how the recommendations are different for intra-solution and cross-solution references:
  • Intra solution: use project references, not file references to your assemblies
  • Cross solution: use file references to your assemblies, and add an AfterBuild custom post build step in each of the assembly projects to copy the generated assemblies to the common 'binaries' location
Note that the 'post build step' custom action must be added to the assembly project, not the team build project. Ensure that you scroll down to see all the text of Manish Agarwal's part II posting on assembly references.

'Part III' is about references to a set of common assemblies, and this is where the 'xcopy' team build custom action comes in handy, e.g. to the shared location \Objectware.ShipBroker.Application\latestbuild\

Friday, November 11, 2005

VS2005 Add new data source wizard crash

Today I had a strange problem with VSTS 8.0.50727.42: I could not add a new data source of type 'object' to my Windows control library project. Adding a database or web-service data source worked fine.

The wizard would crash when trying to open the 'Select the object you wish to bind to' page of the wizard. The error message was this:

An unexpected error has occured.
Error message: Object reference not set to an instance of an object.


If I made a new Windows control library project and added classes from my business entity assembly as object data sources, the wizard worked as expected (kind of stupid that several classes cannot be added in one go, though). After several hours of experimenting with project references, structure, and even names, I finally saw the pattern of when the wizard worked and when it crashed. The wizard is dependent on your project having at least one public class in the root folder of your project.

In my Windows control library project I have structured the source code into several folders with no classes in the root. VS by default generates a class called 'UserControl1' in the root, and if you delete this class, the wizard will fail.

I now use a dummy class 'XDummyClassForDataBinding' in my project.

Thursday, November 10, 2005

VSTS - test project location and output

As a seasoned developer, I have a legacy of project folder structure preferences. Among other things, I like to keep non-source code stuff such as solutions and setup projects separate from the actual source code. The structure typically looks like this:

\source
  \sln
    \app1
  \src
    \app1
  \test
    \app1.test
  \setup
  \latestbuild
  \references

\test
  \app1.test
    \testresults

Adding a unit test using VSTS (right click method name - "Create unit test"), however, creates the test project folder as a subfolder at the location of your .SLN file. It is easy to move the generated test project. Just remove it from the solution, move the project files and folders with Explorer, then add the test project to the solution from the new location (Add-Existing project). You should also move the localtestrun.testrunconfig file to the applicable test project folder. Note that the Test Manager file (.VSMDI) of a solution cannot be moved.

The bottom \test\ folder in the above list is the target for all output and reports created when running unit tests. I use a folder outside the \source\ folder to keep this stuff separate from the source code of the unit tests and the application itself.

VSTS produces a 'run details' file each time you run a unit test, and the test results are stored as .TRX files in a TestResults folder (yes) at the location of your .SLN file. The location of the test output was configurable in 'Edit test run configurations - Local test run - Deployment' in the VSTS betas, but this setting is now visually gone. Fear not, the setting is still in the .testrunconfig file, which is plain XML.

Open your .testrunconfig file and edit these elements:

<userDeploymentRoot type="System.String">..\..\test\app1.test\testresults\</userDeploymentRoot>
<useDefaultDeploymentRoot type="System.Boolean">False</useDefaultDeploymentRoot>


Note that the last setting must be false, not true as someone has posted on forums.microsoft.com.

With these modifications to the default VSTS unit testing structure, my solution is now the way I like it. Maybe I am fighting the Visual Studio system too much; after all, Microsoft may have done usability studies to decide that their structure is the best...

Wednesday, November 09, 2005

MSCRM: issues with one-way trusts between domains

At one of our customers we had to set up a new Active Directory domain for MSCRM 1.2, as their existing domain was NT4. Thus, all the users and their mailboxes stayed in the NT4 domain, while MSCRM and SQL Server were installed in the new AD domain. This deployment is "supported" by Microsoft, but beware of the small print and omissions.

First of all, "go offline" in Sales for Outlook (SFO) does not work when the users recide in a trusted NT4 domain. We never got to test "go online" for obvious reasons. This might be due to v1.2 using SQL Replication, which in v3.0 has been replaced by the good, old BCP tool. Note that v3.0 still uses MSDE as the offline database and not SQL Express. Both SQL Server 2000 and 2005 are supported by MSCRM 3.0 as the master database.

Then the famous "E-mail Router": setting up routing of incoming e-mails as shown in the implementation guide works, sort of. Install a new Exchange Server in the AD domain and use either a CRM subdomain or forwarding of non-CRM e-mails to the original Exchange Server. Beware of the small print, however! Only e-mails to mailboxes registered in the native AD domain of MSCRM will be processed by the router. Thus, mails to a user will not be routed, even when a reply to a MSCRM e-mail, as they are in the NT4 domain. The only AD mailboxes we had were for queues (support@myco.com, etc), and routing of incoming e-mails to these queues works like a breeze.

We are currently deploying MSCRM 3.0 in a similar scenario, this time with five customer divisions, each with its own AD domain, and the domains are not within a single, common AD forest. Each domain (customer division) has its own Exchange server. I will post our experiences with the limitations of this infrastructure later on.

Note that an Exchange 'organization' cannot span AD forests, and that MSCRM is limited to one Exchange 'organization'. This restricts MSCRM with full Exchange e-mail functionality to a single AD forest.

Tuesday, November 01, 2005

Configuration of c360 "My workplace" add-in

Anyone that implements professional MSCRM solutions will at some point need one or more c360 add-ins. We have used their SearchPak, Email to Case (just love it), and this week the "My Workplace" add-in. The workplace add-in allows users to personalize the view of queues: selecting activity and case columns, specifying sorting, etc. The standard MSCRM queue view sorts alphabetically on subject, while sorting on received/due date is what is normally requested.

The installation went quite OK. As usual we had to replace our customized isv.config file with our backup copy of the original, otherwise the setup kit would not be able to modify the file. This is a bit annoying, as a diff-merge is then needed to merge the new changes into the working copy of our customized isv.config file. Make sure that every added <NavBarItem> element is on one line only, as line breaks will prevent MSCRM from running and will lock the config file. Use iisreset.exe to release the file lock if you get syntax problems.

The added "My Workplace" module (QueueManager) would not load, responding with this error message: The request failed with HTTP status 400: Bad Request. The offending code was easily located after adding a new web.config file in the \custom\c360\ folder, then setting <customerrors mode="off" /> and <compilation debug="true" /> to see the actual error. It was the c360 license provider that was not able to call the MSCRM web-service to get details about the authenticated user. The call to .WhoAmI() method of the MSCRM platform proxy object resulted in a SOAP error.

In addition, c360 code is well behaved and writes info to the Windows application event log on the MSCRM server. The event source is "c360.Toolkit" and provides you with data such as the page URL and the URL of the web-service. The web-service URL shown in the event was wrong, using the server name instead of the IIS site name.

This is the "undocumented" way to configure the exact URL of the web-service:

  1. Open the \custom\c360\config\c360.config file
  2. Add this key to the <appSettings> element:
    <add key="WebServicesUrl" value="http://server/MSCRMServices" />
Note that for some reason it is not the c360.QueueManager.config that must be changed.

You might need to use the IP-address of your MSCRM site instead of the host name to get things working. Authentication between IIS sites on the same server can sometimes be hard to diagnose when using e.g. host headers, but using the specific IP-address of a web-service has always worked for me. This MSDN article on IIS authentication and credentials is recommended reading.

The need for some of the c360 add-ins will decrease with MSCRM 3.0, but c360 will no doubt continue to provide products that will complement and enhance the standard MSCRM functionality. They have announced support for v3.0 for all their add-ins within two weeks of MSCRM v3.0 RTM.

Objectware is the Norwegian c360 partner.

Tuesday, October 25, 2005

TableAdapter with multiple related DataTables

In my ongoing VB6-to-VS2005 project, we have started using .NET 2.0 TableAdapters in the data access layer instead of SqlDataReaders, to get strongly typed parameters and data when using our stored procedures. The main reason for this is that the customer has change management problems in the existing solution due to the untyped nature of DataReaders and the "late bound" style of accessing column values using .GetString(columnIdx).

When the customer has to change the result set of a stored procedure, this type of data access provides no compile time verification when getting a column value, and introduced bugs will not show up until run-time. Using TableAdapters "can dramatically increase your development and stabilization phase productivity by leveraging compile time verification for column changes or additions" [VB.NET team]. Compile time catching of bugs is way better than run-time exceptions, logging and debugging efforts.

The downside is of course lower performance, as a TableAdapter uses the DataSet/DataTable classes, but our result sets are generally small and should thus not incur too much overhead. In addition, the number of required stored procedure calls might go down when retrieving 1:M related data from several related tables (e.g. master-details, multi-select picklists, etc). More on this below.

Read more about using TableAdapters in the VB.NET team blog and at MSDN. What is not covered in details in these articles, however, is how to work with hierarchical data; i.e. how to use TableAdapters to work with a DataSet that contains several related DataTables (e.g. order+order lines, person+favorite colors+favorite cars, etc).

Before you start working with TableAdapters, you should recognize that a TableAdapter is not a new class in the .NET 2.0 framework. A TableAdapter is just a 'code generated' class that internally uses the classic DataAdapter, DataSet and DataTable classes. The generated code provides you with a typed dataset based on the SQL query / stored procedure you specify in the TableAdapter wizard. The generated class may also provide you with insert, update, and delete methods (see details at MSDN2). Note that the DataTable must have a defined key column before the insert, update, and delete SQL statements can be generated.

There are two main ways to create a set of TableAdapters. Start by selecting 'Data-Data Sources' from the VS2005 main menu; this opens the 'Data Sources' explorer. Then either create a new data source and select multiple stored procedures in the checkbox list, which will generate one TableAdapter for each selected "sproc"; or open a dataset in designer mode, right-click and select 'Add-TableAdapter', which will add another TableAdapter to your dataset. Both ways will create multiple DataTables within a DataSet, however without any DataRelations.

Use the dataset designer to add the relations you need to be able to navigate between the DataTables at run-time. Adding relations allows you e.g. to use .GetChildRows(relationName) to read data related to the current master record.

Note that the relations are added to the .Relations collection of the DataSet, not as .ChildRelations of the "parent" DataTable. You must use the .Fill() method of each TableAdapter to populate all the DataTables and their DataRelations at run-time. Note that the .GetData() method returns a single DataTable without populating any relations.
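
As a sketch (the dataset, adapter, relation and column names below are hypothetical), filling and navigating two related DataTables looks roughly like this:

//fill both DataTables of the typed dataset; the DataRelation added in the
//designer is then available for navigation at run-time
OrdersDataSet ds = new OrdersDataSet();
new OrdersTableAdapter().Fill(ds.Orders);
new OrderLinesTableAdapter().Fill(ds.OrderLines);

foreach (OrdersDataSet.OrdersRow order in ds.Orders)
{
    //no extra stored procedure call per master record here
    foreach (DataRow line in order.GetChildRows("FK_Orders_OrderLines"))
    {
        Console.WriteLine("{0}: {1}", order.OrderId, line["ProductName"]);
    }
}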

Using TableAdapters has allowed us to reduce the number of stored procedure calls (round-trips to SQL Server) dramatically. The SqlDataReader code first read all the master records, then did three extra stored procedure calls for each of the master records to get the 1:M related data. E.g. with 20 master records, 61 round-trips to the SQL Server were needed. With TableAdapters, only 4 stored procedure calls are needed to fill the four related DataTables. Reading data for all master records is now as simple as doing a foreach() on the root DataTable and using .GetChildRows(relationName) to get the related data. This reduction should make up for most of the raw performance difference between DataReaders and TableAdapters, or even come out ahead.

How to update related DataTables using TableAdapters is documented here at MSDN2.

Wednesday, October 19, 2005

Rebuild VSTO-O add-in on .NET2.0 RC1

Last week our VSTO-O add-in would not load when installed on new client PCs. This was caused by the automatic download and installation of .NET2.0 RC1 performed by the VSTO setup project, while the add-in was built with .NET2.0 beta 2. A rebuild of the add-in and the setup project was needed to make things work. Thus, I had to uninstall all beta 2 stuff, including .NET, Visual Studio 2005, the VSTO runtime, SQL Express, etc. After that, I installed VS2005 RC1 on my PC, which now includes the VSTO project templates by default.

The existing VSTO-O project compiled without problems in VS2005 RC1, so I didn't need to create a new project and move the code manually as I had to when going from VSTO alpha to VSTO beta. You might not be that lucky (see the article referenced below).

Some modifications were needed for the setup project. In beta 2, you had to manually add several VSTO assemblies (DLLs) and install them to your target folder. I removed all those VSTO assemblies and did a 'refresh dependencies', which in RC1 is able to correctly detect and add the required VSTO assemblies to the setup kit.

The add-in would, however, still not load in Outlook. As I know the details of how to get a VSTO-O add-in to load, I quickly turned to the registry to check the CLSID InprocServer32 setting for my add-in. In VSTO-O beta 2, the add-in loader was named VSTAddin.DLL, while in RC1 it has got the documented, correct name AddinLoader.DLL. Use 'View-Registry' in the VS2005 setup project, navigate to the HKCU\Software\Classes\CLSID\{guid}\InprocServer32\ key and change the name of the VSTO loader stored in the (default) setting.

With these changes to my setup project, the add-in now installs, loads and works correctly on RC1.

Note that modifying the setup project is not needed if you choose to start with a new VSTO-O project and move the code manually from the old beta 2 project.

Mads Nissen has posted an article that describes how to get your VSTO-O add-in to work with .NET2.0 RC1, with details about making a fully automated setup kit using a custom prerequisite with the setup project. He has also updated the CAS custom installer action to support uninstall.

Thursday, October 13, 2005

.NET2.0 ObjectDataSource and BindingList<T> VS typed datasets

I am currently at a project where some VB6 and .NET1.1 WinForms applications are to be ported and upgraded to .NET2.0. The current architecture is layered and employs (custom) business entity objects modelled on the domain of the customer. The solution is distributed over several tiers, and the communications technology of the architecture (.NET remoting) is also up for review. In addition, it is a design goal to enable the solution to provide services and entity data to external systems, both within the company and to e.g. partners.

Yesterday some of the developers started to argue for replacing the existing business entity objects with typed datasets. Their main reason for this was to get "maximum" developer productivity through two-way WinForms data binding in the user controls. This is a typical way of thinking for developers that have worked mainly with databases and ASP.NET solutions. In addition, scrapping the domain entity objects would be a step in the wrong direction, away from 'domain driven design' (DDD) and from 'service orientation' enabling the solution. This discussion is not a new one; refer to this MSDN article by Dino Esposito for the pros & cons of datasets vs business entity objects. I also recommend reading the articles referenced at the end of the 'Cutting Edge' article. Note how they all agree that exposing datasets in a service is a bad idea.

.NET2.0 provides several new mechanisms aimed at bringing the ease of dataset binding to business entity objects, allowing the developers to use design time tools and wizards to bind their GUI to entity objects. The ObjectDataSource (BindingSource) and the generic BindingList<T> are the main enabling object binding mechanisms in .NET2.0. Note that it is possible to provide object binding in .NET1.1 as well, as shown in this MSDN article by Paul Ballard, but it will require more coding and will not give full design time support for setting up the binding.

What you will find out is that BindingList<T> has some 'last mile' problems when compared to the binding mechanisms provided by the DataView/DataSet combination. In our prototype/spike, the lack of a built-in implementation for sorting and filtering immediately surfaced. These members are defined in the IBindingList interface, but are not implemented by BindingList<T>. A bit of googling led me to this blog entry and this GotDotNet workspace. Andrew Davey's BindingListView<T> provides a data bindable view for a BindingList, in the same way that a DataView provides a bindable view of a DataTable. This component implements much of the stuff described in Paul Ballard's article. One less argument for the typed dataset clan :-)
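
To illustrate the kind of 'last mile' work needed if you do not use BindingListView<T>, here is a rough sketch of adding sort support to BindingList<T> yourself by overriding the *Core members; the class name and the comparison approach are my own, not code from Andrew Davey or Paul Ballard:

using System.Collections.Generic;
using System.ComponentModel;

public class SortableBindingList<T> : BindingList<T>
{
    private bool _isSorted;
    private PropertyDescriptor _sortProperty;
    private ListSortDirection _sortDirection;

    protected override bool SupportsSortingCore
    {
        get { return true; }
    }

    protected override bool IsSortedCore
    {
        get { return _isSorted; }
    }

    protected override PropertyDescriptor SortPropertyCore
    {
        get { return _sortProperty; }
    }

    protected override ListSortDirection SortDirectionCore
    {
        get { return _sortDirection; }
    }

    protected override void ApplySortCore(PropertyDescriptor prop, ListSortDirection direction)
    {
        //assumes the default constructor was used, so the inner list is a List<T>
        List<T> items = (List<T>)this.Items;
        items.Sort(delegate(T a, T b)
        {
            int result = Comparer<object>.Default.Compare(prop.GetValue(a), prop.GetValue(b));
            return direction == ListSortDirection.Ascending ? result : -result;
        });

        _sortProperty = prop;
        _sortDirection = direction;
        _isSorted = true;
        OnListChanged(new ListChangedEventArgs(ListChangedType.Reset, -1));
    }

    protected override void RemoveSortCore()
    {
        _isSorted = false;
    }
}

Filtering requires a similar amount of plumbing, which is exactly why a ready-made component like BindingListView<T> is attractive.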

Typed datasets still have the upper hand when it comes to batch updates, optimistic concurrency, etc, when compared to custom business entity objects and collections. What the solution might need is an ORM framework that handles all the object-to-database stuff (the DDD repository pattern). I might be in for a classic turf war with the dataset-all-the-way clansmen!

Tuesday, October 11, 2005

VS2005 and VSTO-O reinstall blues

Due to some problems with my Visual Studio 2005 installation, I had to reinstall VSTS on my PC. This caused my VSTO Outlook projects to stop working in VSTS with the error message "Unable to load project file <file name>". No further details were given. I then reinstalled VSTO-O beta 2, the VSTO beta 2 run-time (VSTOR.EXE), and the Office 2003 PIAs. Still no luck. I then reinstalled Windows Script 5.6, as this had solved a VS.NET problem for me before. My project would still not load.

I then tried to create a new VSTO-O project, and this time I got the famous "class not registered" error. As usual, no details about which class.

It was time for RegMon from Sysinternals, a company that provides high quality freeware tools for "under the hood" monitoring during deployment and troubleshooting. I opened VSTS, selected 'File-New-Project' and browsed to the "Outlook Add-in" project template. Then I captured all registry access done while I clicked OK in the VSTS 'New Project' dialog. RegMon captured a looong list of trace entries, but by filtering on process name "devenv*" the list was shortened to only a few hundred entries.

Starting from the bottom of the list, I soon found two "OpenKey" events that resulted in "NOT FOUND", with no further attempts to read info about the CLSID. Double-clicking the entry in RegMon opened the standard Registry Editor, in which I could see for myself that the specified CLSID was nowhere to be found in the registry.

I then used another PC with a working copy of VSTS and used the standard Registry Editor to search for the CLSID (GUID), and it turned out to be MSXML6.DLL that was missing on my PC. You will find this DLL in the System32 folder.

I think that all this was caused by uninstalling SQL Express from my PC. It would not be the first time Microsoft installers do not mark shared DLLs as "permanent".

Saturday, October 01, 2005

Will they ever learn ?

I have this last week been involved in deploying an InfoPath+SharePoint (WSS) solution developed by my colleagues Mads Nissen and Rolf Inge Kirkeng. Things have gone quite well, except for annoyances caused by the language versions of WSS being different. That is, the WSS version used to develop and host the InfoPath forms in a form library is the English version, while this customer had the Norwegian version.

I had saved the WSS team-site, including the form libraries, as a template (1033), and needed to convert it into language code 1044. Normally, you can rename the .STP file to .CAB, extract the files, perform an 'edit-replace all' on the LCID instances, build a new .CAB file, and finally rename the file back to .STP. Unfortunately, this did not work this time, so I had to set up the WSS team-site manually and then publish the InfoPath forms to the site, which automatically creates the form libraries with all the applicable metadata columns from the schema. End of my involvement.

Today, Rolf Inge finished the deployment of the InfoPath solution, and ran into some problems with the scripts of the forms. The forms use several secondary data connections to retrieve pick list data, etc. from SharePoint lists. The problem turned out to be the naming of the standard columns of a form library. E.g. the forms expected 'Title', but in the Norwegian version this column has the name 'Tittel'. Such a low level of abstraction (internal name = display name) makes a solution really hard to standardize and deploy.

Has Microsoft not learned anything from the infamous experiences with localized WordBasic and Office VBA in the late 1990s? It was a nightmare to deploy solutions (templates and macros), as each customer would most likely have a different object model "language", language literally meaning a national language. "ActiveWindow" would be "AktivtVindu" in the Norwegian Office versions. Even within a company you would most likely find several different installations of Office. I worked a lot for Det Norske Veritas, which has offices around the world, and it was impossible to make a single, standard set of Office templates. Microsoft did in fact realize that this was not very productive, and these days the Office object model is all English.

I wish Microsoft would soon make every product use English internal names for programming, and separate, localized display names for the user interface. The same goes for the SharePoint central admin tools; please let us have the option to run e.g. the English admin tools, even on a Norwegian installation. The localized admin tools make it hard to move between customers and quickly find the stuff you need to configure SharePoint. The strange terms sometimes used in the translations do not make it any easier.

Thursday, September 15, 2005

MSCRM case: worldwide customer evidence video!

A lot of the themes that I have blogged about here stem from an MSCRM and Exchange based solution that utilizes SharePoint, Office and Outlook as the front-end. I worked as the lead developer on the project and did a lot of cool MSCRM, AD, Outlook and Exchange stuff, plus some VSTO-O alpha development. Mads Nissen did the SharePoint stuff and lately an extra VSTO-O add-in.

This shipbroker solution was chosen by Microsoft to become one of a few worldwide customer evidence videos! Watch the video here. I am (unfortunately?) not in the video.

The project would not have succeeded without the effort of a bunch of other people not mentioned here, but then no one gets forgotten.


PS! If the video won't start, download the launcher and start it using Media Player (after all, it is a Microsoft video).

Wednesday, August 31, 2005

Web-services: interoperability & encoding

I am currently implementing MSCRM at a customer that is also implementing an enterprise portal using SiteVision delivered by Objectware's Java department. The need for providing the SiteVision portal with information about MSCRM accounts made me implement a few .NET XML web-services to be used by the Java guys. The web-service provides basic read, create and update services for accounts and contacts. The 'data transfer object' was simply implemented as a string that would contain XML according to a predefined XSD schema.

The enterprise application integration server used is BIE (Business Integration Engine; Java open source), chosen by the Java lead developer on this project. Their use of my web-services went along quite smoothly, as was to be expected since .NET web-services comply with WS-I BP1.0 (intro article here). I had even used the WS-I compliance tools to check my web-services.

What caused a major halt in the integration implementation was when we started testing with data containing Norwegian characters (ÆØÅ). To be more precise, this had been tested and found to be OK by using MSIE as the test-client, and by using XmlSpy to retrieve data about e.g. an account, modifying the data, and finally updating the account. I strongly recommend the XmlSpy SOAP Debugger tool both for testing and debugging web-services.

The problem was that although BIE supports web-services using UTF-8, it does not differentiate between UTF-8 as a text encoding of the web-service message, UTF-8 encoded XML content, and the W3C character encoding rules that dictate that a Unicode character must be encoded as a #xHHHH numeric character reference (i.e. 16-bit Unicode code points/units; UTF-16 NCR). For more details see 'Can XML use non-Latin characters?'.

The MSCRM 1.2 object model supports and returns XML using the #xHHHH character encoding. All strings in .NET are Unicode.

The character encoding went through these phases (examples for Æ and æ):

  • from our MSCRM web-service: Æ encoded as &#x00C6;
  • back from BIE on create/update: Æ "encoded" as &#195;† (not even valid UTF-8 code units)
  • data in MSCRM after operation: Æ
  • from our MSCRM web-service: æ encoded as &#x00E6;
  • back from BIE on create/update: æ encoded as &#195;¦ (which are valid UTF-8 code units)
  • data in MSCRM after operation: æ
To make the problem even worse, by running the same BIE test case (workflow route) over and over, these 'funny' characters would double each time, as each XML encoded character from our service was sent back as two UTF-8 code units from BIE and not as XML UTF-16 code units (the W3C character encoding standard).

A nice unicode character encoding test tool is available here.

The Java lead developer had run into these encoding problems in BIE before and had modified the portal code to do some character replacing (Æ to Æ, etc) on all text to/from BIE on their side, and I asked them to change their integration implementation to comply with the WS-I and W3C standards. Unfortunately, this lead developer is more interested in hailing the glory of the application architecture & design and the open source movement than in complying with standards "invented by Microsoft" :-)

Thus, I had to write a SoapExtension to modify the incoming SOAP message before it was deserialized, to change the "wrong" UTF-8 encoding into correct XML W3C UTF-8 encoding. I used this GotDotNet sample as the basis for my code and this is how I modify the incoming and outgoing SOAP messages:

public override void ProcessMessage(SoapMessage message)
{
    switch (message.Stage)
    {
        //INCOMING
        case SoapMessageStage.BeforeDeserialize:
            this.ChangeIncomingEncoding();
            break;
        case SoapMessageStage.AfterDeserialize:
            _isPostDeserialize = true;
            break;

        //OUTGOING
        case SoapMessageStage.BeforeSerialize:
            break;
        case SoapMessageStage.AfterSerialize:
            this.ChangeOutgoingEncoding();
            break;
    }
}

public override Stream ChainStream(Stream stream)
{
    //http://hyperthink.net/blog/CommentView,guid,eafeef67-c240-44cc-8550-974f5d378a8f.aspx

    if (!_isPostDeserialize)
    {
        //INCOMING: keep the original request stream, hand our own stream to the framework
        _inputStream = stream;
        _outputStream = new MemoryStream();
        return _outputStream;
    }
    else
    {
        //OUTGOING
        _outputStream = stream;
        _inputStream = new MemoryStream();
        return _inputStream;
    }
}

public void ChangeIncomingEncoding()
{
    //at BeforeDeserialize
    if (_inputStream.CanSeek)
        _inputStream.Position = 0L;

    TextReader reader = new StreamReader(_inputStream);
    TextWriter writer = new StreamWriter(_outputStream);

    string line;
    while ((line = reader.ReadLine()) != null)
    {
        writer.WriteLine(Utilities.FixBieEncoding(line));
    }
    writer.Flush();

    //reset the new stream to ensure that AfterDeserialize is called
    if (_outputStream.CanSeek)
        _outputStream.Position = 0L;
}

public void ChangeOutgoingEncoding()
{
    //at AfterSerialize
    if (_inputStream.CanSeek)
        _inputStream.Position = 0L;

    Regex regex = new Regex("utf-8", RegexOptions.IgnoreCase);
    TextReader reader = new StreamReader(_inputStream);
    //HACK: TextWriter writer = new StreamWriter(_outputStream);
    TextWriter writer = new StreamWriter(_outputStream, System.Text.Encoding.GetEncoding(_encoding));

    string line;
    while ((line = reader.ReadLine()) != null)
    {
        //change the encoding only if needed
        if (_encoding != null && !_encoding.Equals("utf-8"))
            line = regex.Replace(line, _encoding);
        writer.WriteLine(line);
    }
    writer.Flush();
}

The central method here is ChangeIncomingEncoding, which converts all the "wrong" UTF-8 encodings from BIE into correct XML W3C NCRs. Note the resetting of the position to zero on the output stream after modifying the message; this is important, as forgetting to do so will cause the AfterDeserialize step of the SoapMessageStage in ProcessMessage not to be called.
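
The Utilities.FixBieEncoding helper itself is not shown in this post; a minimal, table-driven sketch of what it could look like, based only on the Æ/æ examples listed above (the method body and the mappings are assumptions):

internal static class Utilities
{
    public static string FixBieEncoding(string line)
    {
        //map the mis-encoded character sequences coming from BIE back to
        //the W3C numeric character references expected by MSCRM
        line = line.Replace("&#195;†", "&#x00C6;"); //Æ
        line = line.Replace("&#195;¦", "&#x00E6;"); //æ
        //...add mappings for the remaining Norwegian characters (Ø, ø, Å, å)
        return line;
    }
}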

After deploying the modified web-service and testing it with XmlSpy, it was time to test it with the BIE workflow dashboard. The BIE developer had set up some test cases for me, and they all worked as expected. No more funny farm in the MSCRM database.

What an illustrious victory for Java-.NET web-service interoperability!

Note that you cannot test/debug your SoapExtension code by using MSIE as the test-client, as HTTP POST/GET will not trigger the SoapExtension. I used XmlSpy to test and debug my code; just set some breakpoints, start your web-service in debug mode and leave it running, then trigger the SoapExtension by making a SOAP request using XmlSpy.

Friday, August 26, 2005

What's new in MSCRM 3.0

Microsoft has finally made public some white papers that describe the new features and new customization options of MSCRM 3.0, of which:

  • campaigns & marketing
  • creating new business entities; with offline support
  • adding new relationships to entities (not just new attributes)
  • client-side validators and scripting support
  • customizing activities
  • workflow for activities and custom entities
  • better CRM e-mail integration with Outlook 'inbox' and 'sent items'
  • separate tables for custom entity attributes/relations
are the most needed improvements based on my MSCRM experience at several customers.

The use of a separate custom attribute/relation table for each entity extends the number of fields you can add to an entity in version 3.0, thus improving on the current limitation, which is kind of an "official secret". MSCRM 1.2 is limited by the SQL Server 8K row-size restriction because all fields are added to the entity table (actually less because of replication overhead). This is especially hurtful for the contact entity, as it is 80% full out-of-the-box in version 1.2.

To make a long story short, Matthew Wittemann has published a nice summary of the white papers, with several screenshots, which is available here. Recommended reading!

The Microsoft MSCRM 3.0 white papers can be downloaded here (feature overview) and here (discloses new customization options).

Friday, August 12, 2005

Outlook recipients: AD contact postal address data

For our MSCRM customers we have made a small service that monitors the MSCRM database for changes to accounts and contacts, and maintains a specific Active Directory (AD) container holding AD contacts that shadow the MSCRM accounts and contacts and their e-mail addresses. All these AD contacts are made available to Outlook through a new address book entry configured in Exchange System Manager. This ensures that all users have access to the e-mail information stored in MSCRM, even those who do not use Sales for Outlook. They can use the Outlook address book to pick or search for e-mail addresses and see details about an e-mail address, such as contact name, company, postal address, etc.

The AD container is also used to hold AD distribution lists (mailing lists) built from the AD contacts. These distribution lists are also made available to Outlook through Exchange 2003. In addition, we have made a .NET add-in for Outlook that lets the users preview the members of a list before sending e.g. the weekly newsletter e-mail (using an add-in toolbar in the e-mail Inspector window). The add-in shows the extra information stored in AD and lets the users remove those members that should not get a mail this time. The users (ship brokers) typically decide that the e-mail should be sent to all members except those in Greece, then use the add-in to sort by country, multi-select the applicable members in a WinForms checkbox ListView and finally remove the selected members. This 'explodes' the mailing list into individual recipients, in addition to moving them to BCC to ensure that recipients do not see who else got the e-mail.

The code that maintains the AD contacts and the AD distribution lists runs on the Exchange Server 2003 to be able to modify data in both AD and Exchange (details here). Adding and maintaining AD contacts with .NET C# is quite easy, there are just a few pitfalls to be aware of when using AD contacts in combination with Exchange (details here).
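As an illustration of how little code the AD part needs, a stripped-down sketch of creating a shadow contact with System.DirectoryServices could look like the following. The LDAP path is an example only, escaping of special characters in the CN is omitted, and the Exchange-specific attributes are covered by the linked details rather than shown here.

//Sketch only: create an AD contact in the MSCRM shadow container.
//Requires a reference to System.DirectoryServices; the LDAP path is an example.
DirectoryEntry container = new DirectoryEntry("LDAP://OU=MSCRM Contacts,DC=example,DC=com");
DirectoryEntry adContact = container.Children.Add("CN=" + displayName, "contact");
adContact.Properties["displayName"].Value = displayName;
adContact.Properties["mail"].Value = mailAddress;
adContact.CommitChanges();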

The users recently requested the possibility to see the country of the MSCRM account/contact in the Outlook address book and when removing distribution list members. "No problem", I responded quickly and started checking postal address fields in Outlook and AD. I added full address information for an AD contact I knew was in the Outlook address book, waited for the RUS (Recipient Update Service), and found the entered data in Outlook. I was ready to start coding!

First I added a 'country' column to my list view and then inspected the AddressEntry properties for access to postal address data. Unfortunately, Microsoft chose not to expose these properties in the Outlook object model. Fortunately, we were already using Redemption for other purposes in our add-in:

Redemption.MAPIUtils mapiUtils = new Redemption.MAPIUtils();
const int PR_COUNTRY = 0x3A26001E;
const int PR_EMAIL = 0x39FE001E;
foreach (Outlook.AddressEntry entry in list.Members)
{
    string country = "";
    object mapiCountry = mapiUtils.HrGetOneProp(entry.MAPIOBJECT, PR_COUNTRY);
    if (mapiCountry != null) country = mapiCountry.ToString();

    string smtp = entry.Address;
    //check if Exchange address
    if (smtp.StartsWith("/o="))
    {
        //get SMTP address from MAPI
        smtp = mapiUtils.HrGetOneProp(entry.MAPIOBJECT, PR_EMAIL).ToString();
    }

    //add to listview
    string[] items = new string[] { entry.Name, smtp, country };
    ListViewItem itemX = new ListViewItem(items);
    //keep name and address for later matching against recipients
    itemX.Text = entry.Name;
    itemX.Tag = entry.Address;
    lstRecipients.Items.Add(itemX);
}
mapiUtils.Cleanup();


You will find a list of relevant MAPI property keys at OutlookCode.com.

Then I modified our AD updater service to read the country of each MSCRM account/contact to be able to set it on each AD contact. I used the ADSIEdit MMC snap-in to inspect the properties of my 'guinea pig' AD contact to see which property was used for storing the country. The 'co' property contained the country name, and the property 'countryCode' also seemed to have something to do with a contact's country. Some googling led me to MSDN, which revealed that these three properties must be set according to ISO 3166:

  1. c: two letter country designation
  2. co: country name
  3. countryCode: country code (integer)

You will find the ISO 3166 list here. Copy and save the list to a .TXT file, then open it in Excel as a 'fixed width' file to convert the text into a useful .XLS table. Use SQL Server 2000 DTS to import the .XLS into a new table in your database, enabling you to look up MSCRM country names in the ISO 3166 country list.
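A lookup against that imported table could look something like the sketch below, which builds the semicolon-separated "name;code;number" string that my AD updater expects (see the note further down). The table and column names (Iso3166, CountryName, Alpha2, NumericCode) are just examples of what the DTS import might create, not the actual schema.

//Sketch only: look up an MSCRM country name in the imported ISO 3166 table.
//Requires System.Data.SqlClient; table/column names are examples.
private string LookupCountry(string countryName, string connectionString)
{
    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Alpha2, NumericCode FROM Iso3166 WHERE CountryName = @name", con))
    {
        cmd.Parameters.AddWithValue("@name", countryName);
        con.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                //format: name;code;number
                return countryName + ";" + reader[0] + ";" + reader[1];
            }
        }
    }
    return countryName; //no match; pass the name through unchanged
}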

The code that sets the MSCRM data on the AD contact properties looks like this:

//set properties
adContact.Properties["DisplayName"].Value = displayName;
adContact.Properties["mail"].Value = mailAddress;
//NOTE: AD fails on empty string, "invalid attribute syntax"
if (firstName.Length != 0) adContact.Properties["givenName"].Value = firstName;
if (lastName.Length != 0) adContact.Properties["sn"].Value = lastName;
if (company.Length != 0) adContact.Properties["company"].Value = company;
if (department.Length != 0) adContact.Properties["department"].Value = department;

if (country.IndexOf(";") > 0)
{
    //format: name;code;number
    country += ";;;"; //just to be sure
    string[] parts = country.Split(new char[] { ';' });
    string countryName = parts[0];
    string countryA2 = parts[1];
    string countryCode = parts[2];
    if (countryName.Length != 0) adContact.Properties["co"].Value = countryName;
    if (countryA2.Length != 0) adContact.Properties["c"].Value = countryA2;
    if (countryCode.Length != 0) adContact.Properties["countryCode"].Value = Convert.ToInt32(countryCode);
}

// Flush to the directory
adContact.CommitChanges();

Note that I chose to require the country information to be provided as a semicolon-separated string just for "simplicity", as I have not used entity objects (data transfer objects) in my AD service. I think a refactoring of my service interface is needed (see the sketch below); soon the users will want "just one more field" to be added...
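If I get around to that refactoring, a small data transfer object could replace the semicolon convention. Something along these lines, purely hypothetical and not part of the current service:

//Hypothetical DTO that could replace the "name;code;number" string convention.
public class CountryInfo
{
    private string _name;
    private string _alpha2;
    private int _numericCode;

    public CountryInfo(string name, string alpha2, int numericCode)
    {
        _name = name;
        _alpha2 = alpha2;
        _numericCode = numericCode;
    }

    public string Name { get { return _name; } }              //maps to the AD 'co' property
    public string Alpha2 { get { return _alpha2; } }          //maps to 'c'
    public int NumericCode { get { return _numericCode; } }   //maps to 'countryCode'
}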

Programming tip: use a reverse for-loop (i--) when removing entries from the Outlook.MailItem.Recipients collection, because the collection is renumbered when a recipient is removed, and this messes up the iteration unless you go from the end to the beginning (see the sketch below).
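A minimal sketch of that pattern, assuming a MailItem reference and your own filtering logic (ShouldBeRemoved here is just a placeholder):

//Sketch: remove unwanted recipients by iterating backwards.
//The Recipients collection is 1-based and renumbers itself on Remove,
//so a forward loop would skip entries.
Outlook.Recipients recipients = mailItem.Recipients;
for (int i = recipients.Count; i >= 1; i--)
{
    Outlook.Recipient recipient = recipients[i];
    if (ShouldBeRemoved(recipient)) //placeholder for your own filter, e.g. the listview selection
    {
        recipients.Remove(i);
    }
}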

Wednesday, August 10, 2005

Use a toolbar in multiple Outlook 2003 inspectors with VSTO

Making .NET add-ins for Outlook 2003 has become straightforward with VSTO 2005 (VSTO-O), especially if you only add your menus, toolbars and event handlers to the main window of Outlook (Outlook.Application.ActiveExplorer). Most of the samples available with VSTO-O do just this.

Adding toolbars and event handlers to the popup windows (Outlook.Application.Inspectors) that Outlook uses to show details about a mail, an appointment, a task, etc., will at first seem trivial, but there are some pitfalls. Things might work fine with a single inspector, but you need to test your Inspector add-in code by opening multiple inspectors at once and clicking your toolbar in each of them to exercise your event handler. Typically, only the first inspector will trigger the event, or your event will trigger once for each open inspector. Not to forget the event working for a while and then silently stopping, due to the well-known 'garbage collector ate my event handler' mistake.

What you need to make your code support Outlook inspectors correctly is an inspector wrapper class that gives your code one custom object per open inspector at run-time. The wrapper lets you add code that ensures that you handle only inspectors for specific item types; e.g. mails, but not contacts, tasks and appointments (check the Outlook.OlItemType of the Inspector.CurrentItem). The wrapper also ensures that the correct instance of your event handler code gets called when your toolbar is clicked in one of the multiple open inspectors. Finally, the wrapper keeps a reference to your toolbar and event handler for the lifetime of each inspector, solving the garbage collector problem.
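To show the idea, here is a minimal sketch of the bookkeeping side: a NewInspector handler that creates one wrapper per mail inspector and keeps it alive until the inspector closes. The XMailItem constructor and the Item_Closed delegate signature are assumptions inferred from the wrapper code further down; Helmut's code (linked below) is the real thing.

//Sketch only: keep one wrapper object per open mail inspector.
//Requires System.Collections.Generic; assumes an XMailItem wrapper like the one shown further down.
private Outlook.Inspectors _inspectors;
private Dictionary<int, XMailItem> _wrappedItems = new Dictionary<int, XMailItem>();

private void HookInspectors(Outlook.Application application)
{
    //keep a reference to the Inspectors collection so the event subscription survives GC
    _inspectors = application.Inspectors;
    _inspectors.NewInspector += new Outlook.InspectorsEvents_NewInspectorEventHandler(Inspectors_NewInspector);
}

private void Inspectors_NewInspector(Outlook.Inspector inspector)
{
    //only wrap mail items; ignore contacts, tasks, appointments, etc.
    Outlook.MailItem mailItem = inspector.CurrentItem as Outlook.MailItem;
    if (mailItem == null) return;

    XMailItem wrapper = new XMailItem(mailItem); //assumed constructor
    wrapper.Item_Closed += new XMailItem.ItemClosedEventHandler(Wrapper_Item_Closed); //assumed delegate
    _wrappedItems[mailItem.GetHashCode()] = wrapper; //keep the wrapper (and its toolbar/events) alive
}

private void Wrapper_Item_Closed(object sender, XEventArgs e)
{
    //assumes XEventArgs exposes the hash code passed to it by the wrapper
    _wrappedItems.Remove(e.ItemHashCode);
}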

I have used the inspector wrapper code written by Helmut Obertanner, which is available here at OutlookCode.com. The code is for .NET C# pre VSTO-O, but will work with a few modifications. Refer to the related discussions in the forum for how to solve diverse add-in problems.

[UPDATE] Helmut has provided an updated version of the explorer and inspector wrapper at his site, including applicable Marshal.ReleaseComObject() calls: download the X4UTools.

[UPDATE] If Outlook hangs around in the background when closed, then you have missed calling Marshal.ReleaseComObject() for some Outlook objects created or referenced by your add-in. This can also be the cause of the "The operation failed due to network or other communication problems. Check your connections and try again." message; be sure to release all Outlook (COM) objects you create.

I have modified the code slightly to work with "temporary" Outlook toolbars and to ensure that multiple inspectors function correctly:


public class XMailItem
{
    private const string _TOOL_EDITMAILINGLIST = "OW_EDITMAILINGLIST";
    private const string _BTN_EDITMAILLIST = "Choose mailing list members";
    private DateTime _createdDts = DateTime.Now;
    private Office.CommandBar _toolBar;
    private Office.CommandBarButton _btnEditMailingList;

    . . .

    private void MailItem_Open(ref bool Cancel)
    {
#if DEBUG
        DateTime tmp = _createdDts; //inspect to check which run-time inspector object this is
#endif
        // event isn't needed anymore
        _mailItem.Open -= new Microsoft.Office.Interop.Outlook.ItemEvents_10_OpenEventHandler(MailItem_Open);

        // get the Inspector here
        _inspector = (Outlook.InspectorClass)_mailItem.GetInspector;

        // register for the Inspector events
        _inspector.InspectorEvents_Event_Close += new Microsoft.Office.Interop.Outlook.InspectorEvents_CloseEventHandler(Inspector_InspectorEvents_Close);

        //create the toolbar
        this.InitializeMailToolbar();
    }

    private void InitializeMailToolbar()
    {
        try
        {
            if (_mailItem is Outlook.MailItem)
            {
                //find existing toolbar (same toolbar in all inspectors), even when temporary
                try
                {
                    _toolBar = _inspector.CommandBars[_TOOL_EDITMAILINGLIST];
                }
                catch (Exception)
                {
                    //add toolbar
                    _toolBar = _inspector.CommandBars.Add(_TOOL_EDITMAILINGLIST, Office.MsoBarPosition.msoBarTop, false, true);
                }

                //find existing toolbar button
                try
                {
                    _btnEditMailingList = (Office.CommandBarButton)_inspector.CommandBars[_TOOL_EDITMAILINGLIST].Controls[_BTN_EDITMAILLIST];
                }
                catch (Exception)
                {
                    //add button
                    _btnEditMailingList = (Office.CommandBarButton)_toolBar.Controls.Add(Office.MsoControlType.msoControlButton, Type.Missing, Type.Missing, 1, true);
                    _btnEditMailingList.Caption = _BTN_EDITMAILLIST;
                    _btnEditMailingList.Style = Office.MsoButtonStyle.msoButtonCaption;
                }

                _toolBar.Visible = true;
                _btnEditMailingList.Visible = true;
                //add event handler to button; each open inspector adds itself to the event handler chain (+=)
                _btnEditMailingList.Click += new Microsoft.Office.Core._CommandBarButtonEvents_ClickEventHandler(_btnEditMailingList_Click);
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("An unexpected error occurred during toolbar init: " + ex.Message, CONST.MSGBOX_TITLE, MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
        }
    }

    void _btnEditMailingList_Click(Microsoft.Office.Core.CommandBarButton Ctrl, ref bool CancelDefault)
    {
#if DEBUG
        DateTime tmp = _createdDts; //inspect to check which run-time inspector object this is
#endif
        this.ShowEditMailingListDialog();
    }

    private void Inspector_InspectorEvents_Close()
    {
#if DEBUG
        DateTime tmp = _createdDts; //inspect to check which run-time inspector object this is
#endif
        try
        {
            //raise event, to remove us from active items collection
            if (Item_Closed != null)
            {
                Item_Closed(this, new XEventArgs(_mailItem.GetHashCode()));
            }

            //cleanup resources; remove this from event handler chains
            _btnEditMailingList.Click -= new Microsoft.Office.Core._CommandBarButtonEvents_ClickEventHandler(_btnEditMailingList_Click);
            _inspector.InspectorEvents_Event_Close -= new Microsoft.Office.Interop.Outlook.InspectorEvents_CloseEventHandler(Inspector_InspectorEvents_Close);

            //release Outlook COM objects as applicable
            Marshal.ReleaseComObject(_inspector);
            Marshal.ReleaseComObject(_mailItem);
        }
        catch (System.Exception ex)
        {
            MessageBox.Show("An unexpected error occurred during inspector close: " + ex.Message, CONST.MSGBOX_TITLE, MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
        }
    }

    . . .

} //XMailItem

It is important to know that the key of the Controls[] collection is actually the Caption of the button. Failure to use the same text for the Caption and the Controls[] index will cause multiple buttons to be added to the toolbar in some circumstances (e.g. using the next and previous mail buttons a couple of times to move back and forth, using the move to folder button, etc.).

Note how the toolbar button click event handler for each wrapper is added (+=) to the button's event delegate chain. With no further code than this, Outlook (.NET) will be able to call only the event handler in the wrapper object of the inspector that triggered the click event of the common toolbar. Remember to remove your event handler from the event chain when the inspector closes.

Note that the toolbar is shared between all open inspectors, thus you must never delete it when an inspector closes, as this would remove the toolbar from all open inspectors. Still, it is recommended to remove the toolbar when the add-in unloads, as a best practice, should the "temporary" flag not apply to your Outlook configuration (temporary has no effect when Word is used as the mail editor). A sketch of such cleanup follows below.
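The cleanup itself could be as simple as the sketch below; how you hook it into the add-in's shutdown depends on your VSTO version, and the toolbar name is of course the same constant used above.

//Sketch: delete the shared toolbar when the add-in unloads, in case the
//"temporary" flag did not take effect (e.g. Word used as the mail editor).
private void RemoveMailingListToolbar(Outlook.Application application)
{
    try
    {
        Outlook.Inspector inspector = application.ActiveInspector();
        if (inspector == null) return; //no inspector open, nothing to clean up

        Office.CommandBar toolBar = inspector.CommandBars["OW_EDITMAILINGLIST"];
        toolBar.Delete();
    }
    catch (Exception)
    {
        //toolbar not found; nothing to do
    }
}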

If your toolbar opens dialog boxes, I strongly suggest making them modal, .ShowDialog(), to avoid confusing your users about which dialog belongs to which inspector.

[UPDATE] Ken Slovak has published example code including explorer and inspector wrappers for Outlook 2007: templates from Professional Outlook 2007 Programming.