Monday, January 30, 2006

System.Transactions LTM - promoting to DTC

In an earlier post I wrote about how the ADO.NET 2.0 System.Transactions lightweight transaction (LTM) for SQL Server 2005 gets promoted to a full DTC transaction when more than one connection is opened against the same SS2K5 database (the resource). This is bound to happen when using TableAdapters, as they open and close their own SQL connections using the configured common connection string.
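
To make the promotion trigger concrete, here is a minimal sketch; the connection string setting and the CustomersTableAdapter are placeholders for your own configuration and generated TableAdapter:

Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions

Public Sub UpdateCustomer()
    Using scope As New TransactionScope()
        ' The first connection enlists in the lightweight LTM transaction.
        Using conn As New SqlConnection(My.Settings.MainConnectionString)
            conn.Open()
            ' ... execute commands on conn ...
        End Using

        ' The TableAdapter opens its own, second connection against the same
        ' database, and the transaction is promoted to a full DTC transaction.
        Dim adapter As New CustomersTableAdapter()
        Dim customers As DataTable = adapter.GetData()

        scope.Complete()
    End Using
End Sub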

One possible solution is to open a single, common SQL connection and pass it around to all methods in the biz-logic and data access logic (a data session object), ensuring that all SQL commands and TableAdapters use a single connection and stay within the LTM limitation (i.e. avoid promotion to DTC). The drawback is that this runs contrary to "acquire late, release early" use of SQL connections, and it also introduces explicit code to open and close the connection. Thus, it re-introduces the negative aspects of explicit transactions, making it hard to reuse the components, e.g. in higher-level processes and services.
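
A rough sketch of such a data session object, assuming the TableAdapter's ConnectionModifier is set to Public in the dataset designer (or the calling code lives in the same assembly) so that its Connection property can be assigned; OrdersTableAdapter and the connection string setting are placeholders:

Imports System.Data
Imports System.Data.SqlClient

Public Class OrderDataSession
    Implements IDisposable

    ' The single shared connection used by all SQL commands and TableAdapters.
    Private ReadOnly _connection As SqlConnection

    Public Sub New()
        _connection = New SqlConnection(My.Settings.MainConnectionString)
        _connection.Open()
    End Sub

    Public Function GetOrders() As DataTable
        ' Reuse the open connection instead of letting the TableAdapter
        ' open its own, which would promote the transaction to DTC.
        Dim adapter As New OrdersTableAdapter()
        adapter.Connection = _connection
        Return adapter.GetData()
    End Function

    Public Sub Dispose() Implements IDisposable.Dispose
        _connection.Dispose()
    End Sub
End Class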

An alternative, simpler solution is available when you use the TableAdapter only for reading data: use the TransactionScopeOption.Suppress option when setting up the ambient transaction context of your method. This ensures that the current LTM transaction is not affected by the TableAdapter read/fill/get operation. It does not matter that the TableAdapter automatically opens a second SQL connection; promotion to DTC will not happen, as the code is using two different transaction contexts:

Using transxScope As TransactionScope = New TransactionScope(TransactionScopeOption.Suppress)
    ' Code that uses the TableAdapter for reading data goes here; it runs
    ' outside the ambient transaction and will not cause promotion to DTC.
    transxScope.Complete()
End Using


You must review your usage of TableAdapters and ensure that they can be used outside transactions. If you use them for updating the database, or if you depend on reads being inside a transaction (e.g. for row/table locking purposes or transaction isolation level), then you are out of luck with LTM. These scenarios require the use of DTC or, when you require LTM, the passing-an-open-connection-around solution. The latter may be forced on you when the customer's operations department does not allow DTC.

Friday, January 20, 2006

Unit testing is not result testing (only)

Unit testing has been around for quite some time now in .NET, thanks to tools such as NUnit and TestDriven.NET. The idea of unit testing, however, still has not convinced the majority of developers to change their ways. With the release of VS2005 Team Suite (VSTS), unit testing is now on the agenda of most IT managers, and thus even the most ignorant programmers may soon have to deal with unit testing.

I have participated in several VSTS unit testing introductions at different customers over the last year, and the initial response is the same everywhere: “this is nice in theory and in a small demo, but it will be too much extra work to implement unit tests that cover our code”. Add in the “how do we test databases with inserts, updates, deletes?” challenge, and you know that the attendees are only human, skeptical of change and of ideas from management.

The reason for this typical response, I believe, is the typical way unit testing is demoed: the math library. Write a test to add two numbers and make it pass. The assert phase of the test is always Assert.AreEqual, checking that the result is the sum of the two numbers. I bet almost every introduction to unit testing that you have read or seen is in fact result testing.

People are being brainwashed into believing that unit testing means testing for exact results and nothing else. Result testing is of course a very common type of unit testing, but if exactness of returned values/data/content is your perception of unit testing, you will soon find out that writing tests for anything but a math library is not trivial.

My point is that writing unit tests is much simpler when asserting that the returned data falls within an expected range, is not null, did not fail, or failed as expected, etc. Think of unit testing more like doing calculations in your head when shopping, where a rough amount is adequate, rather than trying to get the correct answer to two decimals.
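
As an illustration, here is a small VSTS test sketch that asserts on ranges and shape rather than exact values; OrderLogic.GetCustomerOrders, the column name, and the bounds are made up for the example:

Imports System.Data
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class OrderLogicTests

    <TestMethod()> _
    Public Sub GetCustomerOrders_TypicalCustomer_ReturnsReasonableResult()
        Dim orders As DataTable = OrderLogic.GetCustomerOrders(42)

        ' Assert on the shape and range of the result, not on exact values.
        Assert.IsNotNull(orders, "Expected a result table, even an empty one")
        Assert.IsTrue(orders.Rows.Count <= 1000, "Suspiciously many orders returned")
        For Each row As DataRow In orders.Rows
            Assert.IsTrue(CDec(row("Amount")) >= 0, "Order amounts should never be negative")
        Next
    End Sub
End Class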

Another point on the ‘exact result testing’ mindset: when introducing unit testing, someone will always point out that it will require a lot of work to write tests that assert every possible outcome of a real-life method. This is correct. But again, what is sufficient to start with? The best is the enemy of the good. Start with a few tests that assert the typical outcome of the method. Always add a new test when a bug is encountered: let it fail at first, then fix the bug and make the test pass. Always add a new test when adding to a class or changing the way it works. Given enough time, this strategy will ensure that you get good enough unit test coverage of your code.

One last typical dialog to illustrate the above point:
Developer: “The logic behind creating a new customer/account is quite complicated, and the combination of possible input to the business logic will be impossible to test, so why bother with writing extra code for unit testing?”
Me: “Ok. How do you test the biz-logic today? You do test it, do you not?”
Developer: “Of course! I have a test form with plenty of input fields, a couple of buttons, and a datagrid to show the results.”
Me: “How do you ensure that your testing covers all aspects of your biz-logic?”
Developer: “I enter some typical values, push the buttons and check that the grid gets filled…”
Me: “So, do you just check that the returned result seems to be as expected, or do you closely examine each and every value in the grid?”
Developer: “No… That is not necessary… As long as the result is not weird, I assume that the biz-logic is correct…”
Me: “Then why do you insist that the unit tests must be ‘exact result’ tests, and that they should cover all combinations of input? Are you persistent enough to always do all your ‘push button’ tests each time you change your code? Wouldn’t an automated test regime that is at least as cautious about ‘output falls within range’ as your manual testing be an improvement?”

At this point in the discussion, most developers admit that they are bored stiff by the test forms, and that writing unit tests instead of test forms seems far better.

The thing that convinces most developers to start employing unit testing is that it makes refactoring their application much safer and lets them change the architecture and design of their software with more confidence. Add in that automated unit testing also makes regression testing an application a breeze; no more multi-button test forms to fill out in a frenzy just before the deadline.


Read this MSDN Mag article to learn more about VSTS unit testing (note the math library examples).

The next step on the path to becoming a true test believer is to embrace Test Driven Development (TDD). Check out Scott Bellware’s blog to learn more about TDD.

Sunday, January 15, 2006

VB8: Friend setters, but no friend assemblies

The new version of Visual Basic (VB8, a.k.a. VB.NET 2) released with VS2005 has several new language features, such as generics and friend property setters. The latter is a feature I was looking forward to employing in my current project. The need to easily restrict which code is allowed to modify the content of business entity objects (i.e. setting property values) seemed to be a perfect match for friend setters (internal in C#).

The scenario is that we have the typical set of components, objects, and layers in our application: business entities (BE), business logic (BL), data access logic (DAL), and also a client application (PL) that consumes the services provided by the business layer. The PL code should not be able to modify some of the entity object properties, such as the .Id (primary key) of the entity. The main task of the DAL is to transform the entities to and from the database; thus the DAL must be able to set all of the properties of a BE object. Just what friend setters are for:

Private _id As Integer

Public Property Id() As Integer
    Get
        Return _id
    End Get
    ' Friend setter: only code in the same assembly can assign the Id.
    Friend Set(ByVal value As Integer)
        _id = value
    End Set
End Property


One object model design task remains, and the plan was to use another new VB8 feature promoted on several MSDN blogs throughout the Whidbey beta period: friend assemblies. First, let me explain why friend assemblies are needed as part of the application's design.

As this is a distributed solution, we want to deploy a minimum set of components to the clients, both for operational and for security reasons. Thus, PL has access only to BE objects and a set of service proxies; the BL and DAL objects are not deployed to the client. In fact, PL, BL, BE, and DAL are four different assemblies (VB projects). As the DAL and the BE components are separate assemblies, and some of the property setters cannot be public if the PL (and other future components and service consumers) should not be able to change those properties, the need for the InternalsVisibleTo mechanism arose.
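
For reference, this is how the friend assembly relationship was planned to be declared in the BE project's AssemblyInfo.vb; the assembly name is a placeholder, and strong-named assemblies must also include the full public key:

Imports System.Runtime.CompilerServices

' Grant the DAL assembly access to the Friend members of the BE assembly,
' e.g. the Friend property setters on the business entities.
<Assembly: InternalsVisibleTo("MyCompany.MyApp.DAL")>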

To my disappointment, VB8 does not support friend assemblies (well, actually it is the VB8 compiler that does not). I guess the "beta information - subject to change without notice" disclaimer has been applied to this feature.

Monday, January 02, 2006

Outlook recipient autocomplete, AD contacts sharing SMTP address

We have implemented an extension to MSCRM that replicates accounts and contacts to Active Directory as AD contacts for use as an Outlook address book (see this post). It is not very common, but some accounts and contacts do actually share the same SMTP mail address. E.g. several departments of a large corporation might share the same mailbox, and your user expects to find each department in the MSCRM address book in Outlook as separate contacts.

A few days ago, the operations department at one of our customers raised a concern about the support in Exchange Server 2003 for resolving mail recipients against the set of AD contacts when some of them share the same SMTP address. "This is not allowed in AD, and you should fix your code to not add the same SMTP address multiple times to AD", he stated. What? As "proof" he sent me a screenshot of the error he got when trying to add a new contact with a duplicate SMTP address using the AD Users and Computers admin tool:

This e-mail address already exists in this organization (c10312e7)

This is a known limitation of the Active Directory Users and Computers snap-in. To resolve it, do not create the contact with AD Users and Computers; use the ADSI Edit MMC snap-in or the ADSI components from your code to add multiple AD contacts that must share the same SMTP address. The AD attributes to set are "mail", "mailNickname", "proxyAddresses", and "targetAddress". These attributes can be set to an SMTP address that already exists for other contacts without error (more info at MSDN).
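
Here is a rough sketch of creating such a contact from code with System.DirectoryServices; the OU path, names, and addresses are placeholders, and error handling is omitted:

Imports System.DirectoryServices

Public Sub CreateMailContact(ByVal displayName As String, ByVal smtpAddress As String)
    ' Bind to the OU that holds the replicated contacts (placeholder path).
    Using ou As New DirectoryEntry("LDAP://OU=CRM Contacts,DC=example,DC=com")
        Dim contact As DirectoryEntry = ou.Children.Add("CN=" & displayName, "contact")

        ' Mail-related attributes; a duplicate of an existing SMTP address is
        ' accepted here, unlike in the Users and Computers snap-in.
        contact.Properties("mail").Value = smtpAddress
        contact.Properties("mailNickname").Value = displayName.Replace(" ", "")
        contact.Properties("proxyAddresses").Add("SMTP:" & smtpAddress)
        contact.Properties("targetAddress").Value = "SMTP:" & smtpAddress

        contact.CommitChanges()
    End Using
End Sub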

Note that you should not create Exchange mailboxes for any external AD contact, and never create AD contacts duplicating the mail addresses of internal users. This will lead to NDRs during Exchange recipient resolution and mail delivery. Also ensure that you use the correct settings for the Exchange Recipient Update Service (RUS) on your AD contacts, as RUS will automatically create Exchange entries for your AD contacts unless disabled where applicable. Read more in this post.

A problem related to Exchange NDRs caused by unreliable AD contact data is the Outlook 2003 recipient cache. This autocomplete mechanism is heaven-sent when adding recipients that you send e-mail to often, but it is a p.i.t.a. when your favorite recipients change their e-mail address or their data changes in AD (the source of Outlook contact groups, a.k.a. Exchange Global Address Lists). E.g. removing contacts from AD might lead to errors when Exchange tries to resolve cached recipients.

It is not easy to change the cached recipient list from within Outlook. The only option you have is to delete cached entries, but not many users have figured out how to do that or how to clear the cache. It is actually quite simple, but not very intuitive: just select the entry you want to remove in the autocomplete suggestion list and press 'Delete' on your keyboard. The whole list of cached recipients is stored in an .NK2 file in your application data folder. Delete this file to clear the cache completely.

Read this article at Outlook Exchange for more info about the Outlook recipient autocomplete cache (Outlook .NK2 file). Also check out the Ingressor NK2 Management tool for editing and managing the .NK2 file.