Thursday, January 25, 2007

WCF Security: wsHttpBinding with brokered Kerberos

Now that I have implemented a separate ASMX endpoint for interoperability with Flex 2, it was time to switch from basicHttpBinding to wsHttpBinding for better security (authentication, encryption, signing). Using this binding provides message-level security, preventing e.g. HTTP sniffers on the client machine from snooping on the data exchange with our services.

Editing the WCF client and server configuration files is quite a daunting task, but luckily the
new version of WSSF offers nice wizards for configuring security for your services. The 'WCF Security' menu allows you to easily add support for the most common providers: client X.509 certificates, Kerberos, ADAM, SQL Server, Active Directory, server certificate.


Among the available providers, the Kerberos provider is the simplest to use if you want to use neither a certificate nor HTTPS/SSL, or if you want or have to use Cassini (the VS2005 developer web-server), as shown in the figure (click to enlarge):


The wizard is straightforward, just remember to include metadata (MEX) in your service if you want to expose a WSDL file. Adding a list of authorized clients is not mandatory for this security mode. I recommend that you delete all content in the <system.serviceModel> element in the service config file before using the 'WCF Security' menu. This ensures a clean configuration settings section afterwards.
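
For the curious, enabling metadata in code when self-hosting amounts to something like this minimal sketch; the service type and base address are placeholders borrowed from my solution, the rest is standard System.ServiceModel.Description API:

// Requires: using System; using System.ServiceModel; using System.ServiceModel.Description;
ServiceHost host = new ServiceHost(typeof(ProjectDocumentServices),
    new Uri("http://localhost:1508/eApproval.Host"));

// Expose the WSDL over HTTP GET and add a standard MEX endpoint.
ServiceMetadataBehavior metadata = new ServiceMetadataBehavior();
metadata.HttpGetEnabled = true;
host.Description.Behaviors.Add(metadata);
host.AddServiceEndpoint(typeof(IMetadataExchange),
    MetadataExchangeBindings.CreateMexHttpBinding(), "mex");

host.Open();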

I then updated the service reference of our SmartClient application to get the correct settings for the client config file. This step is important, as the common configuration settings must be equal on both client and server for WCF communication to work.

With all the configuration finished, it was time for some testing. I got this error when invoking a WCF operation:

System.ServiceModel.Security.MessageSecurityException: The token provider cannot get tokens for target 'http://localhost.:1508/eApproval.Host/ProjectDocumentServices.svc'. ---> System.IdentityModel.Tokens.SecurityTokenValidationException: The NetworkCredentials provided were unable to create a Kerberos credential, see inner exception for details. ---> System.IdentityModel.Tokens.SecurityTokenException: InitializeSecurityContext failed. Ensure the service principal name is correct. ---> System.ComponentModel.Win32Exception: No credentials are available in the security package
--- End of inner exception stack trace ---


My client config file contained these security settings, which to me looked fine and dandy:

<security mode="Message">
  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
  <message clientCredentialType="Windows" negotiateServiceCredential="false" algorithmSuite="Default" establishSecurityContext="false" />
</security>

My suspects were the extra <message> element attributes not covered in Keith Brown's "Security in WCF" article: negotiateServiceCredential and establishSecurityContext. As usual with new Microsoft technology, the online help is rather lame, so some research was required.

Googling led me to this excellent article by Michèle Leroux Bustamante: "Fundamentals of WCF Security" (6 pages). The 'Service Credentials and Negotiation' section contained the explanation for my error: "When negotiation is disabled for Windows client credentials, a Kerberos domain must exist". By turning service identity negotiation on in both the client and the server config files, my client can now communicate securely with our WCF service. I also recommend turning on the 'secure session' context for better performance, unless your service is rarely used by the client.
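
If you build the binding in code instead of config, the two attributes map directly to properties on WSHttpBinding; a minimal sketch of the working combination:

// Requires: using System.ServiceModel;
WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
// Negotiate the service credential instead of requiring a Kerberos domain up front.
binding.Security.Message.NegotiateServiceCredential = true;
// 'Secure session': establish a security context token once and reuse it for later calls.
binding.Security.Message.EstablishSecurityContext = true;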

A confusing part of the client side authentication settings for the 'Windows' security mode is the <endpoint/identity> element, which will typically contain your "user principal name" (UPN) in the <userPrincipalName> element when hosting with Cassini or self-hosting. The identity setting is not for specifying who the client user is, but for identifying and authenticating the service itself. If you use IIS as the host, the identity must contain the "service principal name" (SPN). Note that if your IIS app pool identity is not NETWORK SERVICE, you must typically create the SPN explicitly.

The <identity> element is only used with NTLM or negotiated security. If you use non-negotiated security such as direct Kerberos, this element is not supported.
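
The code equivalent of the <identity> element is an EndpointIdentity attached to the endpoint address; a sketch, where the UPN and SPN strings are placeholders for your own accounts:

// Requires: using System; using System.ServiceModel;
// Cassini or self-hosting: identify the service by the account it runs under (UPN).
EndpointIdentity upn = EndpointIdentity.CreateUpnIdentity("serviceaccount@mydomain.local");

// IIS hosting: identify the service by its SPN instead.
// EndpointIdentity spn = EndpointIdentity.CreateSpnIdentity("HTTP/myserver.mydomain.local");

EndpointAddress address = new EndpointAddress(
    new Uri("http://localhost:1508/eApproval.Host/ProjectDocumentServices.svc"), upn);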

The security config now looks like this (server side settings):

<bindings>
  <wsHttpBinding>
    <binding name="BrokeredAuthenticationKerberosBinding">
      <security mode="Message">
        <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>

Checking the on-the-wire format with Fiddler shows that the request and response messages are now encrypted, without using HTTPS or SSL transport-level security:

The system-provided bindings and the WSSF wizards do a good job of guiding you through the WCF settings jungle; but when your automated guide gets lost, you'd better have some survival skills. Knowing the inner workings of WCF is a must for professional service developers, and I strongly recommend reading
Michèle's article. The same goes for her blogs and the WCF stuff at IDesign.

Fiddler is highly recommended and can be downloaded from http://www.fiddlertool.com/fiddler/.

Friday, January 19, 2007

WCF: Core categories of data contracts

One of the famous SOA tenets is "services share contract, not class/implementation", meaning that it is the schema of your contract that is the main conveyor of how to consume the operations provided by your service. This has a huge impact on how you should design your contracts to provide for clear, understandable and comprehensive semantics, and also to minimize ambiguity in how to use your service. Contracts that have subtle or vague semantics are just more difficult to use and are thus more error prone. The same applies to contracts that are too flexible.

This post is about how to design data contracts that are simple to use, rather than easy to implement (simple vs easy), while at the same time keeping the number of data contracts to a minimum. The latter is important both for the consumers of your service and for the maintainability of your service. It is also important wrt SOA governance: the less stuff you have to govern, the better. Fewer schemas, less semantics, less maintenance, less governance.

Data contracts belong to one of two groupings: altering state and querying information. Generally speaking, operations that modify your system need to comply with stricter requirements and rules than operations that read data from your system. This is because operations that can leave your system in an invalid state have greater technical impact on your business than operations that just return information. Of course, if you disclose incorrect information, your business could still be in serious legal trouble.

The two data contract groupings can be further refined into several categories based on the different needs for expressing contract semantics and for being unambiguous. These five data contract core categories have manifested themselves through several more or less service-oriented solutions that I have implemented:
  • Insert/update contracts: Typically one contract per domain object. Optional contained data contracts must be avoided or specifically handled.
  • Delete contracts: Typically one contract per domain object.
  • Specification/criteria contracts: Typically one contract per result contract, but it is not uncommon that a single specification can relate to multiple result contracts. Optional members are perfectly standard; the same applies to nullable criteria. Composite specifications are normal.
  • Read/query result contracts: One or more contracts per domain object. Optional contained data contracts are allowed for flexibility and this is a key mechanism for keeping the number of result contracts to a minimum. Composite contracts are also allowed for the same reasons.
  • Batch update/import contracts: Typically one contract per domain object batch operation type. Composite contracts are normal. Optional composite or contained contracts must be specifically handled.
These are core data contract categories for entity/core services. You will need to have more than just these core data contracts to provide good, event-driven, specialized business process services (EDA) in different contexts (sales, support, accounting, logistics, partners, suppliers, customers, etc).

The term ‘domain object’ also comprises complex objects (aggregate root objects) such as an order or a document card. The term ‘contained’ is used for complex objects. The term ‘composite’ is used for contracts that consist of several domain objects. The term ‘batch update’ includes insert, update and delete actions, or a combination of these actions.


Note that I use CRUDy terms in the categories for simplicity (easier for me), to cover any real-life event that affects the state of a domain object. E.g. the “customer has moved” event falls into the “update” category.

A result contract will typically contain a composite structure of domain objects, defined by exactly the same unambiguous data contracts used for insert/update actions. The main reason for defining data contracts in the first place is to promote standardization and reuse across services and operations. To be able to support both the rigid insert/update data contract requirements and the flexible result contract requirements, it becomes a must to separate structure from data, isolating the structure/composition to the result set data contracts. Structural elements in a data contract implicitly impose subtle semantics: how will the service handle the omission of composite/contained domain objects?

Insert/Update Contracts

It is important that insert/update contracts have little room for ambiguity, especially for complex domain objects. E.g. if the customer data contract contains a collection of addresses, what will happen if a customer update action is performed and no addresses are provided: does it mean that the customer no longer has any addresses, or does it just mean that you can update a customer's phone number without having to specify the addresses?

Such contained objects must be either A) required or B) specifically handled and by default optional/ignored. Controlled optional elements can be handled the way that the .NET 1.x XmlSerializer handled optional elements: using an extra property to indicate the state of the optional element. The XmlSerializer uses a Boolean XxxSpecified property for each optional element, e.g. OrderShippedDateSpecified.

Rather than using just a Boolean for the contained optional object, I recommend using an enumeration that contains Ignore (default value) and then some other applicable actions; much like cascading actions in SQL Server. The customer data contract should contain both an AddressList collection and an AddressListAction enumeration with e.g. the values Ignore, Replace, Alter, Purge. The point is that the user has to explicitly assign an action on the contained collection, rather than the service just assuming that an empty collection means deletion of the existing children. Assumptions are semantic coupling, and that is something you should strive to avoid.
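
A sketch of what such a customer contract could look like; all type and member names here (CollectionAction, AddressContract, etc.) are made up for illustration, not taken from any framework:

// Requires: using System.Runtime.Serialization;
[DataContract]
public enum CollectionAction
{
    [EnumMember] Ignore = 0,  // default: the service leaves existing children untouched
    [EnumMember] Replace,
    [EnumMember] Alter,
    [EnumMember] Purge
}

[DataContract]
public class CustomerContract
{
    [DataMember(IsRequired = true)]
    public string CustomerKey;

    [DataMember]
    public string PhoneNumber;

    [DataMember]
    public AddressContract[] AddressList;

    // The consumer must state explicitly what to do with AddressList; no assumptions.
    [DataMember]
    public CollectionAction AddressListAction;
}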

Note that these 'insert/update' contract recommendations apply to entity/core services, which are not the services you want to expose publicly. Your public services need to reflect the events of your service-oriented business processes, and these "published" services belong to the 'application to application services' category. By layering your services according to the four service categories,
you will be able to expose more specialized operations with smaller contracts. Large contracts imply stronger coupling to the service, and as large contracts are more likely to change, your service will be more subject to breaking changes. Small contracts are simpler contracts, and simple contracts are important for the reusability, reliability, quality and robustness of your service (more about this in "Patterns for High-Integrity Data Consumption and Composition" by Dion Hinchcliffe).

You can still provide a very specific business operation that builds on the core services. E.g. the "customer has moved" event can be supported by a composite operation that takes only the customer key and the new postal address; internally it gets the complete customer, alters the address, and then stores the customer, in a single transaction using the core services.
Services at the A2AS layer allow you to be "liberal in what you accept", as they shield the consumers from the details of the core services.
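
A sketch of such a composite operation, reusing the hypothetical contracts from the sketch above; the core service interface is assumed for illustration, not an existing API:

// Requires: using System.ServiceModel; using System.Transactions;
[ServiceContract]
public interface ICustomerProcessService
{
    // Deliberately small contract: just the key and the new address.
    [OperationContract]
    void CustomerHasMoved(string customerKey, AddressContract newAddress);
}

public class CustomerProcessService : ICustomerProcessService
{
    private ICustomerCoreService coreService; // hypothetical entity/core service

    public void CustomerHasMoved(string customerKey, AddressContract newAddress)
    {
        using (TransactionScope scope = new TransactionScope())
        {
            CustomerContract customer = coreService.GetCustomer(customerKey);
            customer.AddressList = new AddressContract[] { newAddress };
            customer.AddressListAction = CollectionAction.Replace;
            coreService.UpdateCustomer(customer);
            scope.Complete(); // single transaction across the core service calls
        }
    }
}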

Read/Query Result Contracts

Result data contracts should be able to fit multiple needs and support several views of domain objects and composite result sets. At the same time, a consumer should be able to control how much information gets returned from the service. E.g. one consumer might not be interested in address information when fetching customer data. Thus, a result data contract will most likely comprise optional elements, and consumers will not fail if some data is not present in the result set.

An empty collection does not normally imply the same ambiguity for reads as it does for insert/update contracts. A consumer will typically assume that if a fetched complex object contains no elements for a contained data contract, then the object does not have any such children; e.g. that a customer has no addresses if the customer AddressList collection is empty. An extra metadata property could be added to the result data contract as an indication of whether an optional element actually contains data even if not returned due to the processed query specification.
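
A result contract along these lines could look like the following sketch; the HasAddresses member is the kind of metadata indication described above (all names hypothetical):

// Requires: using System.Runtime.Serialization;
[DataContract]
public class CustomerResult
{
    [DataMember(IsRequired = true)]
    public string CustomerKey;

    // Optional: may be omitted depending on the processed query specification.
    [DataMember]
    public AddressContract[] AddressList;

    // Metadata hint: the customer does have addresses, even if AddressList was not requested.
    [DataMember]
    public bool HasAddresses;
}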

Note that ‘not present’ in the result set is not the same as ‘missing’ from the result set, which is clearly an error and should have caused a service fault.

Batch Update/Import Contracts

Batch update contracts are typically used to alter the state of a set of (related) domain objects. E.g. to update the TaskList collection of a project by sending a message that contains the tasks to add, modify and remove as one batch. Batch operations are a good way to avoid having to expose transactions outside your service; package all domain objects that must be altered in a transaction into a single message and perform the update using a single transacted operation.

Note that each data contract must still follow the rules described for ‘insert/update contracts’ even when used as part of a batch contract.

To be ideal objects for batch operations, domain objects should expose a “row-state” property; if they don’t, you need something like ‘Service Data Objects’ to make your batch contracts really simple to use. A message with one collection per action should be the last alternative.
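
A rough sketch of a batch contract built around a "row-state" member; again, all type names are made up for illustration:

// Requires: using System.Runtime.Serialization; using System.ServiceModel;
[DataContract]
public enum RowState
{
    [EnumMember] Unchanged = 0,
    [EnumMember] Added,
    [EnumMember] Modified,
    [EnumMember] Deleted
}

[DataContract]
public class TaskBatchItem
{
    [DataMember(IsRequired = true)]
    public RowState State;

    [DataMember(IsRequired = true)]
    public TaskContract Task; // must still follow the insert/update contract rules
}

[ServiceContract]
public interface IProjectService
{
    // One message, one transacted operation: the whole batch succeeds or fails as a unit.
    [OperationContract]
    void UpdateTaskList(string projectKey, TaskBatchItem[] batch);
}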

Monday, January 15, 2007

SOA Darwinism - Natural selection of agile services

The LEGO block analogy for SOA is well-known, and the message is that a service should be like a LEGO block: well-defined interfaces, reusability, easy to assemble into new compositions, orchestrations and mashups. But all the different services floating around also pose a governance problem; and even if it is easy to compose new structures from LEGO blocks, it might not be that simple to change LEGO systems.

Another analogy that is useful when discussing service-orientation is Darwinism (Charles Darwin) - more specifically that specialization of a species to a specific habitat makes the species less adaptable and more vulnerable to changes in their environment. If you think of a service as a 'species' and of the 'habitat' as the service context, you'll see that a service that has a very high coupling to its context is not very well suited to being reused in another context. The service is just not agile. The service is just not ready for SOA bliss.

Just as a moose relocated to Africa would die, as it is specialized to the climate and the type of food (birch/pine) of its habitat, a service that depends on e.g. the employee directory cannot easily be reused on your company's public web-site.

I think that the agility of a service is a good indication of whether your services are truly SOA rather than plain JBOWS. Providing good SOA services is more difficult than you think; making just a web-service is far too easy. This is the hardest message to get across to JBOWS developers claiming to provide SOA services. After all, a web-service that is specialized to fit perfectly to e.g. the current intranet application will become extinct when the business environment changes. And nothing changes faster than business. A moose in Africa stands a better chance.

Evolve, as natural selection will see to it that web-services repeating the DCOM/CORBA failure of distributed systems have no future in SOA.

Btw, Dion Hinchcliffe has a good article about SOA, WOA and SaaS in 2007, referring to the web as a Darwinistic 'petri dish' software experiment that will show us what works.

Friday, January 12, 2007

Test smell == Design smell

One of the goals of Test Driven Development is to make sure that your services/object model meets the loose coupling/high cohesion goal of software design. If it is hard to write a unit test for one of your methods, the method has low "testability". Refer to the 'Testability' section of this article at Jeremy Miller's blog for more info about the relationship between testability and design.

Testability and reusability are two closely related aspects of a service/object model. If an operation is not easy to test, it is not easy to reuse. And if it is not easy to reuse, the operation is not well suited for use as part of a composable system. Thus a service with low testability will most likely be hard to reuse in a service-oriented architecture.

Services and operations need to be highly reusable, as a major benefit of truly service-oriented systems is agility: your boss has heard that you have e.g. a currency conversion service based on live exchange rates, and he needs just this functionality right now on the company web-site. If your operation has too many dependencies and cannot easily be reused stand-alone, i.e. the operation requires a complicated, smelly unit-test, the operation is just not ready for real-life SOA.

Low testability is easy to spot: the test method contains a lot of context setup code, session stuff, for-loops and if-else/switch statements, excessive reference data lookups, calls to several other classes or operations, etc. I think of these test anti-patterns as 'test smells', after the term 'code smell' coined by Kent Beck. A test smell is usually an indication of bad design in the tested classes, thus the term 'design smell' comes to mind. A design smell is any anti-pattern to the existing software design best practices and patterns.

Note that test smells and design smells need not be code smells. E.g. for-loops in a test are smelly, while for-loops in code are normal; methods with several parameters are OK in code, but are a design smell in service-oriented operations. And the other way around: most code smells regarding the relationship between classes (feature envy, intimacy, tell-don't-ask, etc) are also test smells.


These days I am writing some unit-tests to get to know some legacy services. The services were written by someone else, and this is a good way to review their design and reusability. This unit-test code is an example of "the easiest way" to add a comment to an activity using the legacy services, including my "smelly" notes:

//TEST SMELL: CANNOT CONTROL TRANSACTION FROM TEST
//MAJOR DESIGN SMELL: DECLARATIVE TRANSACTIONS NOT SUPPORTED
using (TransactionScope transx = new TransactionScope(TransactionScopeOption.Suppress))
{
    //TEST SMELL: FOR-LOOP FOR LOOKING UP REFERENCE DATA
    string activityTypeId = null;
    foreach (TableValue activityType in referenceData.activityTypeList)
    {
        //DESIGN SMELL: WHY ISN'T "TYPE" EXPOSED IN THE ACTIVITY OBJECT
        if (String.Compare(activityType.name, activity.name) == 0)
        {
            activityTypeId = activityType.Id;
            break;
        }
    }

    Assert.IsTrue(activityTypeId != null, "activityTypeId not found in referenceData.activityTypeList collection.");

    //DESIGN SMELL: WHY ISN'T "ACTIVITYID" EXPOSED IN THE ACTIVITY OBJECT
    string activityId = Constants.NodeId;
    string responsibleId = activity.responsible;

    //MAJOR DESIGN SMELL: EVERYTHING IS A STRING
    //CODE SMELL: PRIMITIVE OBSESSION
    string text = "unit-test text";
    . . .

    string regDate = System.DateTime.Now.ToString(Constants.FORMAT_DATE);
    string archive = "false";
    string enumerate = "false";

    //DESIGN SMELL: RETURNING ENTITIES FROM ADD OPERATION IMPOSES TWO-WAY OPERATION
    //DESIGN SMELL: NOT A MESSAGE-BASED SERVICE OPERATION
    Comment[] inserted = target.addComment(Constants.VesselGuid, path, activityTypeId, status, severity, id, text, responsibleId, regDate, commentTypeGuid, archive, enumerate);

    Assert.IsTrue(inserted.Length > 0, "addComment did not return the expected value.");

    //transx.Complete(); //no commit == rollback
    . . .
}

The example also shows one of the subtle test smells: the test cannot apply a transaction to the business operation, as this would cause the test to fail. As this is due to the implementation of the object model, it is a major design smell. This kind of logical design error can go undetected for a long time and cause hard-to-diagnose errors, as the code will not cause run-time errors until someone tries to create a new transacted operation by composing existing "explicit transaction" operations.

Unit-tests serve many purposes, and if for no other reason, you should write unit-tests to assess the testability of your code, and thus the design quality, reusability and agility of your services.

PS! I have updated all my postings with the new tag/label mechanism of Blogger, so my RSS feed will be a little crazy due to the updates.

Thursday, January 11, 2007

WSSF v2 released: WCF contract first, contract versioning

Normally I wouldn't post just some links, but the release of the WCF web-service software factory version 2 merits an aggregation post.

Read about WSSF v2 details such as contract-first support (WSCF), versioning guidance, etc, at Don Smith's blog; and download it from MSDN (not GotDotNet). The complete web-service versioning emerging guidance article is a must-read for anyone implementing "published" services.

Boy, am I excited to check out the WSCF stuff; I just loved the WSCF tool provided by Christian Weyer/thinktecture a few years ago.


Note that the data contract wizard still assigns Order property values incrementally, and not according to the best practices recommended by Microsoft:

"The Order property on the DataMemberAttribute should be used to make sure that all of the newly added data members appear after the existing data members. The recommended way of doing this is as follows: None of the data members in the first version of the data contract should have their Order property set. All of the data members added in version 2 of the data contract should have their Order property set to 2. All of the data members added in version 3 of the data contract should have their Order set to 3, and so on. It is okay to have more than one data member set to the same Order number."


Read Aaron Skonnard's Service Station article 'The Service Station for WCF' at MSDN for an introduction to the WCF WSSF.

Note that if you do not use VSTS, some of the new features such as the "WCF semantic code analysis" tool will not work (not even if you have VS2005 Pro + FxCop 1.35).

Wednesday, January 10, 2007

MSCRM 3: Filtered view of SharePoint document library

It has been a long time since my last Microsoft Dynamics CRM related post, so I thought I should share a little JavaScript tip for filtering a SharePoint document library from MSCRM. The simplest way to add a document archive to MSCRM is to use a single, shared SharePoint document library for all MSCRM accounts and then use filtered views from MSCRM to make it look like each account has its own document library. This technique of course implies that you have no need for applying access control to the doc-libs on an account-by-account basis.

Add this JavaScript to the 'OnLoad' event of the account form to add a new button to the account toolbar:

var baseUrl = "http://companyweb/General%20Documents/Forms/AllItems.aspx?";
var urlSuffix = "";

var fieldMapping = new Array();
fieldMapping[0] = new Array();
fieldMapping[0][0] = "AccountName";
fieldMapping[0][1] = "name";
fieldMapping[1] = new Array();
fieldMapping[1][0] = "AccountPhone";
fieldMapping[1][1] = "telephone1";
fieldMapping[2] = new Array();
fieldMapping[2][0] = "AccountNumber";
fieldMapping[2][1] = "accountnumber";

for (var i = 0; i < fieldMapping.length; i++)
{
    var idx = i + 1;
    var value = crmForm.all.item(fieldMapping[i][1]).DataValue;
    urlSuffix += "&FilterField" + idx + "=" + fieldMapping[i][0];
    urlSuffix += "&FilterValue" + idx + "=" + value;
}

var url = baseUrl + urlSuffix.substring(1);
//alert(url);

var button = document.getElementById('New_1_315_Web Only');
if (button != null)
{
    button.outerHTML = button.outerHTML +
        '<SPAN class=menu id=_btnAccountDocs hideFocus title="Click to view account documents" style="PADDING-RIGHT: 3px; PADDING-LEFT: 3px; PADDING-BOTTOM: 0px; PADDING-TOP: 3px" onclick=window.execScript(action) action="window.open(\'' + url + '\');" tabIndex=0 pr="3" pl="3"><DIV class=mnuBtn><IMG class=mnuBtn src="/_imgs/ico_18_1.gif">Account Documents</DIV></SPAN>';
}


The script is based on the MSCRM 3 demo VPC, and adds the new button next to the "Web only" button added in the demo ISV.config file. Note: the filter string added to the outerHTML must be a single line; the linebreaks in the script shown here are just for readability.

The added "Account Documents" button will open a filtered view of the SharePoint doc-lib in a new window as shown in the figure (click to enlarge):



You can add the button (or any other HTML stuff you like) next to any HTML element in the MSCRM web-page; all you need to do is find the ID of the element you want to use as the injection point. Search through the DHTML source to find the applicable location and look for the nearest HTML element containing an ID attribute. This is your injection point.

Using "View-Source" to view the HTML behind the page will not show changes made to the page after it has loaded in the browser. Use this litte JavaScript directly in the MSIE address field (after using CTRL-N to make the full browser appear) to view the actual, current HTML of the MSCRM web-page:


javascript:'<xmp>' + window.document.documentElement.outerHTML + '</xmp>';

By modifying the action script, it is quite easy to view the filtered SharePoint doc-lib in an IFRAME inside the account form (using the .location of the .form[] collection in the DHTML DOM). You will then need to make a customized view of the doc-lib suitable for inlining in the MSCRM form, see my post about removing the SharePoint chrome, controlling navigation, etc, for further details.

Tuesday, January 09, 2007

NUnit: deployment items with ReSharper/TestDriven.NET

I like my solutions to be self-contained, that is: by just checking out all files from the solution root in the source control management (SCM) system, the solution should be able to compile and run. This includes storing the referenced assemblies and their dependencies in the SCM: "everything you need to do a build should be in there including: test scripts, properties files, database schema, install scripts, and third party libraries" [Fowler: Continuous Integration].

And as the unit tests should be treated as first-class citizens in a solution, the SCM best practices must apply to the test project as well. Adding the referenced assemblies to a NUnit test project is straightforward, but how do you add the dependencies of the references (i.e. the assemblies referenced by the referenced assemblies, and so on)? For those using a native NUnit project, this can be configured using the "Assemblies" tab in the "Project Editor".

For those using ReSharper and/or TestDriven.NET, just adding both the referenced assemblies and all their dependencies to the Visual Studio class library "NUnit" project will work. The downside of this is that you will get more object models/namespaces to choose from when coding your tests, and even if ReSharper somewhat helps you pick and create "using" statements, it can be confusing and thus a source of errors.

VSTS has a nice solution to this "extra binaries" problem, the deployment item mechanism, which I have used in other projects. So this is how I implemented support for using a \DeploymentItems\ folder in my NUnit test project (note that my build output folder is just \bin\ without debug or release):

// Requires: using System.IO; using NUnit.Framework;
[TestFixtureSetUp]
public void TestFixtureSetup()
{
    foreach (string file in Directory.GetFiles(@"..\DeploymentItems\", "*.dll"))
    {
        string newFile = Path.Combine(@"..\bin\", Path.GetFileName(file));
        if (File.Exists(newFile)) File.Delete(newFile);
        File.Copy(file, newFile);
        // Clear the read-only flag so the copy can be deleted and replaced on the next run.
        File.SetAttributes(newFile, FileAttributes.Normal);
    }
}

Note that the file attributes are set to normal after copying; this removes any read-only attribute on the assembly to ensure that it can be overwritten the next time the tests are run. After all, the deployment items must also be source-controlled and will thus be read-only when you do a "get latest version" from your SCM.

The DeploymentItems folder is not limited to assemblies, just change the GetFiles filter to copy other items to the test execution location as well.
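
For example, to copy every file type instead of just assemblies, widen the filter in the GetFiles call:

foreach (string file in Directory.GetFiles(@"..\DeploymentItems\", "*.*"))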


The solution is inspired by Scott Hanselman's post about unit testing with Cassini.

Thursday, January 04, 2007

ClickOnce deployment of WCF client (.NET 3)

Setting up ClickOnce deployment of a WCF client application is quite easy with Visual Studio 2005 and .NET 2 SmartClient functionality. The only configuration of the publish mechanism that is needed is to add the .NET 3 run-time to the prerequisites and make it available for installation by your users.

Download the signed .NET 3 redistributables (not only the bootstrapper).

Note that the .NET 3 packages must be copied to this directory on the developer PC:
\Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages\NETFX30

Then change the 'Publish' configuration of your WCF client (menu: Project > Properties > Publish > Prerequisites) to include ".NET Framework 3.0" as a prerequisite and check the "Create setup program to install prerequisite components" option as shown in the figure:



Note that ClickOnce will download and install the .NET 3 run-time when added as a prerequisite as shown in the figure, provided that you have a recent version of ClickOnce. Note also that the user must be 'local admin' on the target PC to be able to install the .NET 3 run-time.

Set the applicable check-for-updates options using the 'Application Updates' dialog before you publish the WCF client.

Finally, set the publish location and installation mode to "offline" and click the 'Publish Now' button. Alternatively, use the publish wizard, which is also accessible from the Build menu or by right-clicking the project and selecting 'Publish'. Wait for the publish build process to complete, then test the ClickOnce deployment from the installation page that was generated at the selected publish location.

The WCF client must have "full trust" to run. Configure this using the 'Security' tab in the project properties as shown in the figure:


Read more about ClickOnce 'Deployment Prerequisites' at MSDN.

[UPDATE] Read troubleshooting details at Joe Wirtley's blog.