Pogo69's Blog

May 7, 2011

The Missing LINQ – CRM 4.0 SDK – Advanced Developer Extensions

Filed under: .NET, C#, CRM, Cutting Code — pogo69 @ 09:01

In my inaugural post, I made brief mention of the newly introduced capabilities of the LINQ provider included as part of the most recent revision of the CRM 4.0 SDK – the Advanced Developer Extensions.  In this post, I’ll go into more detail about the available features and how the provider can increase developer productivity by reducing lines of code and improving the testability and maintainability of that code.

The article wouldn’t be complete, however, without mentioning some of the current downsides; you can find the caveats later in the post.

NB: I started writing this post last year before CRM 2011 had been released but got distracted (not in the least bit unusual for me).  Most of what I’ve written is still applicable to the CRM 2011 LINQ provider as the underlying query mechanism (Query Expressions) is the same.

LINQ

One of the most exciting new features of the .NET framework is LINQ (Language Integrated Query).  Using an applicable provider, LINQ provides SQL-like access to an underlying object model, via some compiler magic and a collection of extension methods (static methods whose first parameter carries the “this” modifier, allowing them to be called as though they were instance methods).
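By way of a minimal illustration (the class and method here are invented for the example – they’re not part of any SDK), an extension method is nothing more than a static method, in a static class, whose first parameter carries the “this” modifier:

public static class StringExtensions
{
	// the "this" modifier on the first parameter makes the method callable
	// as though it were an instance method of System.String
	public static string Backwards(this string value)
	{
		char[] chars = value.ToCharArray();
		Array.Reverse(chars);
		return new string(chars);
	}
}

// usage - the compiler rewrites this to StringExtensions.Backwards("CRM")
string backwards = "CRM".Backwards();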

LINQ allows strongly typed queries against, and strongly typed results from, an arbitrary underlying data model.  The Advanced Developer Extensions supply such a provider – the xRM LINQ provider – through which CRM entities and attributes can be queried.

Advanced Developer Extensions

CrmSvcUtil.exe

In order to be able to write LINQ queries against the CRM, you must first generate the following:

  • DataContext class – allows a “connection” to the CRM and provides access to both collections of CRM entities and CRUD methods
  • Data Transfer Objects – a mapping between a CRM entity instance and a .NET object with which your source code can interact

If you wish to keep the model consistent and in sync, the CrmSvcUtil.exe utility must be re-run each time you make metadata changes to the corresponding CRM installation.

xRM LINQ Queries

So, let’s look at some simple queries:

DataContext xrm = new DataContext("Xrm.ConnectionString");

Guid contactId = new Guid("{12341234-1234-1234-1234-123412341234}");

List<incident> myCases =
(
	from cs in xrm.incidents
	join c in xrm.contacts on cs.customerid.Value equals c.contactid
	where c.contactid == contactId
	select cs
).ToList();

This simple “join” query returns all Case entities whose Customer is the Contact referenced by the Guid “contactId”.  To write the equivalent query in QueryExpression syntax would consume many more lines of code, and would largely obfuscate the intention of the query.

With LINQ, the simplified syntax allows us to see:

  • Selection criteria
  • Source entities
  • Link criteria
  • Output attributes
  • Ordering

and more, all in one simple statement.
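For example (a sketch reusing the entity and attribute names from the query above; note that both the ‘orderby’ and ‘select’ clauses reference the one entity), criteria, link, ordering and output can all live in a single statement:

List<incident> recentCases =
	(
		from cs in xrm.incidents
		join c in xrm.contacts on cs.customerid.Value equals c.contactid  // link criteria
		where c.contactid == contactId                                    // selection criteria
		orderby cs.createdon descending                                   // ordering
		select cs                                                         // output
	).ToList();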

Returning Custom Classes

The previous query returns entire CRM entities (via the corresponding Data Transfer Object); it is also possible to return only a selection of the entity’s attributes, thus:

var myCases =
	(
		from cs in xrm.incidents
		join c in xrm.contacts on cs.customerid.Value equals c.contactid
		where c.contactid == contactId
		select new
			{
				cs.incidentid,
				cs.title
			}
	);

The keyword “var” refers to an anonymous type; anonymous types allow the dynamic creation of an object, without having to explicitly declare the type of the resultant object.  Anonymous types may only contain read-only properties; if you wish to subsequently manipulate the content of the returned results, you must declare and instantiate instances of explicitly declared types.

The following provides an (almost real world, albeit abbreviated) example:

protected class AppointmentDataItem
{
	// appointment attributes
	public Guid AppointmentId { get; set; }
	public DateTime ScheduledStart { get; set; }
	public DateTime ScheduledEnd { get; set; }
	public DateTime Booked { get; set; }

	// case (job) attributes
	public string JobTicketNumber { get; set; }
	public Guid JobId { get; set; }

	// contact (customer) attributes
	public string CustomerName { get; set; }
	public string CustomerTelephone { get; set; }
	public string CustomerAddress { get; set; }
	public string CustomerCity { get; set; }
	public string CustomerState { get; set; }
	public string CustomerPostcode { get; set; }
	public string CalendarDisplayTitle { get; set; }
}

...

List<AppointmentDataItem> appointments =
	(
		from app in xrm.appointments
		join cs in xrm.incidents on app.regardingobjectid.Value equals cs.incidentid
		where cs.msa_partnercontactid.Value == contactId
		select new AppointmentDataItem
			{
				AppointmentId = app.activityid,
				ScheduledStart = app.scheduledstart.GetValueOrDefault().ToLocalTime(),
				ScheduledEnd = app.scheduledend.GetValueOrDefault().ToLocalTime(),
				Booked = app.createdon.GetValueOrDefault().ToLocalTime()
			}
	).ToList();

...

foreach (AppointmentDataItem appointment in appointments)
{
	// get the Case object to which the Appointment is attached
	var job =
	(
		from cs in xrm.incidents
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cs
	).Single();

	appointment.JobId = job.incidentid;
	appointment.JobTicketNumber = job.ticketnumber;

	// get the Customer Contact to whom the Case is attached
	var customer =
	(
		from cust in xrm.contacts
		join cs in xrm.incidents on cust.contactid equals cs.customerid.Value
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cust
	).Single();

	appointment.CustomerName = customer.fullname;
	appointment.CustomerAddress = customer.address1_line1;
	appointment.CustomerCity = customer.address1_city;
	appointment.CustomerState = customer.address1_stateorprovince;
	appointment.CustomerPostcode = customer.address1_postalcode;
	appointment.CustomerTelephone = customer.telephone2;
	appointment.CalendarDisplayTitle = customer.fullname + " " + FormatTelephone(customer.telephone2) + "<br />" + FormatAddress(customer);
}

So, we:

  • Declare our new custom type
  • Query for, and return a List of instances of our new type
  • Update the properties of our object instances at will

Pretty cool; and we’re still “talking” to CRM in terms of entities and attributes using a relatively natural (to those of us with SQL skills anyway) language.

Restrictions (some of those Caveats I mentioned)

There are limitations to the queries that can be written using the xRM LINQ provider.  These restrictions are due to the underlying query mechanism to which the LINQ provider translates its queries – Query Expression.

One Entity’s Attributes per Query – The impact of Query Expression

Using Query Expressions, we can only return attribute values for a single entity at a time.  A corollary of this is that an xRM LINQ Query can only return attribute values for a single entity at a time.  For instance, the following is not valid and will error:

var contact =
	(
		from c in xrm.contacts
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

While the code will compile and run (it *is* valid C# and valid LINQ syntax), it will generate the following error at run-time as soon as you attempt to iterate the resultant collection:

The ‘select’ and ‘orderBy’ calls may only reference a single common entity type.

If you wish to retrieve more than one entity’s attributes in a single query, you are forced to “project” the CRM Entity’s Data Transfer Object(s) onto a local object collection via the ToList() method or similar.  Unfortunately, this brings up another caveat.

Projection of Data Transfer Objects onto Local Object collections

The following revised query (NB: the ToList() projection in the ‘from’ clause):

var contact =
	(
		from c in xrm.contacts.ToList()
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

will no longer error as the ToList() projection turns our Contact entity Data Transfer Objects into a local object collection.

What this means, however, is that the LINQ provider will return ALL CRM Contact entities BEFORE it applies either your predicates (where clause) or attribute selection.

In other words, each and every attribute of each and every entity instance is returned, and subsequently whittled down in memory by the remainder of your LINQ query.  This can be very expensive in terms of speed and bandwidth depending on the number and size of the entities in your query.

If you find this to be an issue, you will need to revise your code to use a mechanism such as that described above, wherein you create custom classes to retrieve the relevant data one entity type at a time.

Restricted Predicates – The impact of Condition Expressions

A Condition Expression takes the following form:

ConditionExpression condition = new ConditionExpression("<logicalattributename>", ConditionOperator, object);

This means that the predicates in your CRM LINQ queries must follow a similar format:

where <entityinstance>.<attribute> == "<value>"
  • The left-hand side of the predicate MUST be an entity attribute
  • The right-hand side of the predicate MUST be a variable or literal

This prevents us from comparing attribute values between entities or even from comparing attribute values from the same entity instance.  For instance, the following is NOT valid:

var contacts =
	(
		from c in xrm.contacts
		where c.modifiedon.Value > c.createdon.Value
		select c
	).ToList();
resulting in the following run-time error (because again, it *is* valid C# and valid LINQ):
Lambda Parameter not in scope

Of course, you can “solve” this problem by projecting onto a local object collection as per above, but you risk the same performance issues previously described.
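As a sketch of that trade-off (assuming, as per the earlier examples, that the generated contact class exposes nullable createdon/modifiedon properties), any predicate of the supported shape stays server-side, and the attribute-to-attribute comparison happens in memory via LINQ to Objects once the results have been projected:

List<contact> editedContacts =
	(
		from c in xrm.contacts
		where c.createdon != null   // supported predicate shape - evaluated server-side
		select c
	)
	.ToList()   // projection onto a local object collection
	.Where(c => c.modifiedon.GetValueOrDefault() > c.createdon.GetValueOrDefault())
	.ToList();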

Many to Many Queries

I’m sure you can find examples somewhere in the SDK, but I figured I’d make mention of many-to-many queries, as they require some “unusual” syntax, once again due to their relationship to the underlying Query Expressions.

In a recent project my data model included a many-to-many relationship between the Category entity and the Module entity.  To query for each Module attached to a particular Category, my query looks like the following:

List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

The first thing you will probably notice is that I used ToList() to project the Modules onto a local object collection.  This is so that I could populate my custom Module class with the Category to which it was attached.  In this case, I know that the number of Categories/Modules/Subjects in the system will never be prohibitively large and there are only 2-3 custom attributes on each, so I can live with the overhead.

The “unusual” thing about the query is that we have 2 “from” clauses – this won’t make any sense to SQL aficionados, but it is necessary due to Query Expression and the way that many-to-many relationships are represented via the CRM SDK – the Moniker.

More Cool Stuff

Custom Comparators (this is more C# than CRM, but it’ll get you thinking)

In the same project described above, I needed to build lists of Modules and Subjects that a user requires to complete a selected qualification (Category).

First, the user selects their desired Category, from which a list of required Modules and Subjects is constructed:

// obtain desired modules/subjects
List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

List<Subject> desiredSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in desiredModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
			{
				ModuleId = mod.ModuleId,

				SubjectId = sub.ame_subjectid,
				SubjectNo = sub.ame_name,
				SubjectTitle = sub.ame_title
			}
	).ToList();

Next, the user selects each Category that they currently hold, from which a list of currently held Modules and Subjects is constructed:

// obtain currently held modules/subjects
List<Module> heldModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join held in heldCategories on mod_cat.ame_categoryid equals held
		where mod.statuscode.Value == 1
		select new Module
		{
			CategoryId = held,

			ModuleId = mod.ame_moduleid,
			ModuleNo = mod.ame_name,
			ModuleTitle = mod.ame_title
		}
	).ToList();

List<Subject> heldSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in heldModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
		{
			ModuleId = mod.ModuleId,

			SubjectId = sub.ame_subjectid,
			SubjectNo = sub.ame_name,
			SubjectTitle = sub.ame_title
		}
	).ToList();

Next, I needed to calculate the difference.  That is, remove all Modules and Subjects currently held from the list of required Modules and Subjects.

I started with the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects).ToList();

Unfortunately, it doesn’t work correctly.  Except() compares class instances using the default equality comparer, and for a class that does not override Equals() that means reference equality – two separately constructed Module objects will never compare equal.  A property-wise comparison wouldn’t help here either: as the relationship between each entity is many-to-many and I am storing the parent identifier in my local Module and Subject classes, the “held” Modules/Subjects never carry the same parent as the “required” Modules/Subjects – for the Modules, the CategoryId differs and for the Subjects, the ModuleId differs:

internal class Module
{
	internal Guid CategoryId { get; set; }

	internal Guid ModuleId { get; set; }
	internal string ModuleNo { get; set; }
	internal string ModuleTitle { get; set; }
}

internal class Subject
{
	internal Guid ModuleId { get; set; }

	internal Guid SubjectId { get; set; }
	internal string SubjectNo { get; set; }
	internal string SubjectTitle { get; set; }
}

Enter, the Custom Comparator.  A custom comparator can be used to supply comparison rules for classes whose default comparison (reference equality) doesn’t meet your needs.  Your custom comparator must implement IEqualityComparer<T>.  For simplicity I’ll show only the Module comparator:

class ModuleComparer : IEqualityComparer<Module>
{
	public bool Equals(Module x, Module y)
	{
		//Check whether the compared objects reference the same data.
		if (Object.ReferenceEquals(x, y)) return true;

		//Check whether any of the compared objects is null.
		if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;

		//Check whether the Modules' properties are equal.
		return x.ModuleId == y.ModuleId;
	}

	// If Equals() returns true for a pair of objects
	// then GetHashCode() must return the same value for these objects.
	public int GetHashCode(Module module)
	{
		//Check whether the object is null
		if (Object.ReferenceEquals(module, null)) return 0;

		return module.ModuleId.GetHashCode();
	}
}

So, my basis for comparison is only the ModuleId:

//Check whether the Modules' properties are equal.
return x.ModuleId == y.ModuleId;

and I use the ModuleId’s hash code as the Module’s hash code:

return module.ModuleId.GetHashCode();

So, I can now alter my code to the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules, new ModuleComparer()).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects, new SubjectComparer()).ToList();

and it returns the correct results.

April 19, 2011

Debugging Plugins in CRM Online

Filed under: C#, CRM — pogo69 @ 16:10

Plugin development in CRM Online offers a unique challenge, in that we cannot interactively debug our code.

In the ideal world, you would test your plugin in a development/testing environment (VPC or equivalent) before moving your code to the production Online deployment.  However, if you either have no testing environment available or are experiencing issues specific to the production deployment (data dependent errors, for instance), a few lines of code can help ease the pain:

	using System;
	using System.Linq;
	using System.ServiceModel;
	using Microsoft.Xrm.Sdk;

	public class Plugin : IPlugin
	{
		public void Execute(IServiceProvider serviceProvider)
		{
			IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

			// Obtain the tracing service (remove this if you don't require tracing)
			ITracingService trace = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

			Entity entity = null;

			// Check if the InputParameters property bag contains a target
			// of the current operation and that the target is of type Entity.
			if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
			{
				// Obtain the target business entity from the input parameters.
				entity = (Entity)context.InputParameters["Target"];

				// TODO Test for an entity type and message supported by your plug-in.
				if (context.PrimaryEntityName != "<logicalentityname>") { return; }
				if (context.MessageName != "<message>") { return; }
			}
			else
			{
				return;
			}
			try
			{
				IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
				IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

				...
			}
			catch (FaultException<OrganizationServiceFault> ex)
			{
				System.Text.StringBuilder errors = ex.Detail.ErrorDetails.Aggregate(
					new System.Text.StringBuilder(), (list, error) =>
					{
						list.Append(string.Format("[{0}:{1}]", error.Key, error.Value));
						return list;
					});

				#if TRACE == true
				trace.Trace(
					@"ErrorCode: {0}
					ErrorDetails: {1}
					Message: {2}
					TraceText: {3}",
					ex.Detail.ErrorCode,
					errors.ToString(),
					ex.Detail.Message,
					ex.Detail.TraceText
				);
				#endif

				throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
			}
		}
	}

With your exception handler in place, when/if an exception is thrown, the details of the Exception will be written to the Trace Log, which is available for download via the popup Error dialog.  Anything written to the Tracing service will be sent to the CRM Web Application client in this way, but only if the plugin throws an exception.
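One low-tech technique that follows from this (a sketch only; it reuses the context, entity and trace variables from the listing above, and the messages are purely illustrative) is to leave trace “breadcrumbs” through the plugin body, so that when an exception does occur, the downloaded log shows how far execution progressed before the failure:

trace.Trace("Entered plugin: message={0}, entity={1}", context.MessageName, context.PrimaryEntityName);
trace.Trace("Target: {0}", entity.Id);

// ... do work ...

trace.Trace("About to call the Organization Service");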

April 15, 2011

CRM 2011 – Visual Studio Plugin Templates

Filed under: C#, CRM — pogo69 @ 15:27

I really liked the Visual Studio templates that shipped with the CRM 4.0 SDK.  They provide a handy blank workspace in which to create a new plugin or custom workflow activity without having to remember each time how to code them from scratch.

I was rather dismayed to find no trace of similar templates in the CRM 2011 SDK.  So I created my own.  I’m offering them here so that other developers can benefit from the same.  Let me know if you discover any issues; I’ll update the source and refresh the links.

The Templates

Custom Workflow Activity

http://www.mediafire.com/?5ai40s51qjqv24o

Plugin

http://www.mediafire.com/?ip9avvd5wnc87f0

How to Install

Simply copy the zip files (do not extract them – just copy the archives directly) into the folder:

<My Documents>\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\CRM 2011\

I called my new folder ‘CRM 2011’ – you can call it whatever you wish.  The name you choose will appear as a new category when you create a new Visual Studio project.

Automatically Referencing the SDK Libraries

I had an interesting query from Gonzalo Ruiz about whether we could have the template include references to the SDK libraries.  If you’ve used these templates, you will have noticed that there are comments at the top of each about which libraries require referencing before your SDK object references will resolve and compile.

The answer is yes… and no.

Why I Can’t Put the References in the Generic Template

The CRM SDK libraries are not installed in the GAC on a developer’s workstation.  Nor are they installed to a well-known location.  The template therefore cannot know to what path the library references should point.  I’ll include the plugin template’s project file so that you can see what the existing assembly references look like:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.50727</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{BCC9080F-3C19-4D40-B487-1E874F6D2BD1}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>$safeprojectname$</RootNamespace>
    <AssemblyName>$safeprojectname$</AssemblyName>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SignAssembly>true</SignAssembly>
    <AssemblyOriginatorKeyFile>PluginKey.snk</AssemblyOriginatorKeyFile>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>bin\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Core">
      <RequiredTargetFramework>4.0</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data.Services" />
    <Reference Include="System.Data.Services.Client" />
    <Reference Include="System.Runtime.Serialization" />
    <Reference Include="System.ServiceModel" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="plugin.cs" />
    <Compile Include="Properties\AssemblyInfo.cs" />
  </ItemGroup>
  <ItemGroup>
    <None Include="PluginKey.snk" />
  </ItemGroup>
  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
  <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
       Other similar extension points exist, see Microsoft.Common.targets.
  <Target Name="BeforeBuild">
  </Target>
  <Target Name="AfterBuild">
  </Target>
  -->
</Project>

For example, the LINQ library is referenced using:

    <Reference Include="System.Xml.Linq" />

because it is installed in the GAC and therefore requires no path information.

What You Can Do to Make Hard-Coded References Work for You and Your Team

  1. Install the latest version of the SDK in a well-known location – preferably a network location so that the entire development team can reliably load the same projects
  2. Update the template to include “hint” information in a reference to each required CRM SDK assembly

Plugin Template

I’ll step through it for the Plugin template – you can figure out from there what to do to the Custom Workflow Activity template.

Expand Plugin.zip

In the resultant folder, you will find a file named ‘CRM 2011 Plug-in Test Template.csproj’ – edit this file in notepad (or similar).  The contents are exactly as I have posted above – an XML file that tells Visual Studio which resources to load for your project.

Edit ‘CRM 2011 Plug-in Test Template.csproj’ to Add SDK Assembly References

Add two references in the <ItemGroup> node that contains the existing assembly references.  It should look like the following when you’re done:

  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Core">
      <RequiredTargetFramework>4.0</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data.Services" />
    <Reference Include="System.Data.Services.Client" />
    <Reference Include="System.Runtime.Serialization" />
    <Reference Include="System.ServiceModel" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
    <Reference Include="microsoft.xrm.client">
      <HintPath>\\<server>\<share>\<path to sdk>\bin\microsoft.xrm.client.dll</HintPath>
    </Reference>
    <Reference Include="microsoft.xrm.sdk">
      <HintPath>\\<server>\<share>\<path to sdk>\bin\microsoft.xrm.sdk.dll</HintPath>
    </Reference>
  </ItemGroup>

Much the same as the other references, but with “hints” to the location(s).

Package and Deploy

  1. Save the project file.
  2. Zip up the contents of the entire folder structure you extracted into an archive named Plugin.zip.
  3. Overwrite the existing Plugin.zip

Your new Plugin projects should now contain resolved references to CRM SDK libraries.


March 31, 2011

Microsoft.Crm.Sdk.dll v4.0 vs v5.0 – CRM 4.0 SDK Library Redirection to CRM 2011

Filed under: .NET, C#, CRM, Cutting Code — pogo69 @ 11:12

NB: This issue has been fixed in CRM 2011 Rollup 1.

We completed the migration of our internal CRM 4.0 system to CRM 2011 last week.  It all went reasonably smoothly, with only what appear to be a few lingering JavaScript customisations that don’t play nicely with the new system.

We also run an externally available Customer Support Portal using the Customer Portal Accelerator from CodePlex.  This web application leverages the XRM libraries in the CRM 4.0 SDK to provide a reference implementation for Customer Support, upon which developers with the requisite skills can build additional functionality.

While it would be nice to move to the new CRM 2011 specific versions of the Portal Accelerators, I had to get the existing version up and running ASAP, so that our customers didn’t experience undue difficulties with the transition; let’s face it, as CRM Developers, who ever has time to work on their internal systems when there’s client work to do?  In order to get it all working on the new system, I had to change a few things.  In case anyone else needs to do the same, the following describes the process.

Web.config

CRM 2011 utilises the .NET framework v4.0.  Many of the Configuration Sections in a Web.config generated for a .NET 3.5 web application are no longer necessary in .NET 4.0 as they are specified in the Machine.config and are therefore inherited by each Web Application.

I made the following modifications to my Web.Release.config (Config Transform for Release Builds) to remove the offending sections when I compiled a Release build:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <configSections>
    <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
        <section name="scriptResourceHandler" xdt:Locator="Match(name)" xdt:Transform="Remove"/>
        <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <section name="jsonSerialization" xdt:Locator="Match(name)" xdt:Transform="Remove"/>
          <section name="profileService" xdt:Locator="Match(name)" xdt:Transform="Remove"/>
          <section name="authenticationService" xdt:Locator="Match(name)" xdt:Transform="Remove"/>
          <section name="roleService" xdt:Locator="Match(name)" xdt:Transform="Remove"/>
        </sectionGroup>
      </sectionGroup>
    </sectionGroup>
  </configSections>
...
</configuration>

CRM SDK Library – Redirect from v4.0 to v5.0

This was a tricky one.  It seems that the installation of CRM 2011 places a machine-wide redirection for the Microsoft.Crm.Sdk.dll from the v4.0 to the v5.0 version.

This means that any application running on the CRM Server that relies on the Microsoft.Crm.Sdk.dll assembly will load the v5.0 version even if the v4.0 version exists in the application’s /bin folder!  I verified this by enabling Fusion (assembly binding) logging.

In order to be able to successfully load the v4.0 version of the assembly and execute your legacy v4.0 code, you have to override the <publisherPolicy /> set at the machine level, with the following addition to the application configuration file:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  ...
  <runtime>
    ...
    <assemblyBinding appliesTo="v4.0.30319" xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Crm.Sdk" publicKeyToken="31bf3856ad364e35"/>
        <publisherPolicy apply="no"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
  ...
</configuration>

February 21, 2011

SQL Server Database Maintenance for the CRM Developer

Filed under: CRM, DBMS Management — pogo69 @ 14:45

Overview

SQL Server has always been relatively easy to install and configure.  The GUI tools provided make “part-time DBA” duties a reality for most developers working with Microsoft products; I’m sure this applies, at least on occasion, to almost every developer working with Dynamics CRM.

Unfortunately, the foisting of DBA duties upon unsuspecting developers is not usually accompanied by commensurate training.  The ability to “point-n-click” administer a SQL Server database rarely translates into optimum performance and can often help bring a database to a standstill.

The one area in which I have found every single production CRM database lacking is index maintenance.

Index Maintenance

In order to allow you to efficiently interact with your data (CRUD operations), the SQL Server query optimiser uses index statistics to formulate an execution plan.  Indexes are stored, just like the data to which they point, in pages.  When a page is filled (or filled to a certain level dependent upon the “fill factor” which I will discuss briefly in the following section) further additions to the index will be placed in a subsequent page.

Over time the statistics pertaining to an index become stale; the data within an index becomes fragmented.  Regular Index Maintenance can be used to help alleviate these issues.

Fill Factor

When creating an Index, you are given the option of specifying the “fill factor”.  The fill factor defines “how full each leaf level page is when the index is created”.  Choosing a fill factor appropriate to the row size of the index can influence the prevalence of page splits, and how correspondingly fragmented your index will become over time.
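For instance (a generic sketch – the table, column and index names here are illustrative only), the fill factor is specified when an index is created or rebuilt:

-- leave 20% free space in each leaf-level page, to absorb inserts
-- before a page split becomes necessary
CREATE NONCLUSTERED INDEX IX_ContactBase_LastName
	ON dbo.ContactBase (LastName)
	WITH (FILLFACTOR = 80);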

So, while the main focus of this post is to look at how we can “de-fragment” our indexes, intelligently defining them from the outset can help ensure that the maintenance required is minimal.

For further information, explained in terms that most of us should be able to understand, and with a great deal more clarity than I am likely to be able to deliver, see the following blog posting:

Index Fill Factor & Performance Considerations

Maintenance Plans

SQL Server Management Studio (and SQL Server Enterprise Manager in SQL 2000) provides GUI tools to visually define “Maintenance Plans” that allow the scheduled maintenance of your SQL Server databases.  There are a number of built-in tasks available, including:

  • Rebuild Index Task
  • Reorganize Index Task

Reorganising an Index attempts to defragment the data within the Index, much like the Windows Disk Defragmenter does with your file system.  Rebuilding an Index creates the Index anew.

These tasks allow you to rebuild the Indexes for any or all of your databases, and any or all tables within each database.

At the very least, you should schedule one or both of these tasks to regularly maintain the indexes within your CRM database <OrgName_MSCRM>.  You could, for instance, schedule a nightly full reorganisation and a weekly full rebuild.  I have yet to come across a CRM implementation where even this bare minimum level of maintenance was being performed.

Index Fragmentation

Unfortunately, these tasks allow you only to indiscriminately reorganise/rebuild indexes.  To efficiently maintain your database, indexes should be reorganised and/or rebuilt only when they require it.  Best practice advises:

  • Index fragmentation <= 30% – REORGANIZE
  • Index fragmentation > 30% – REBUILD

To REORGANIZE an Index:

ALTER INDEX <index_name> ON <table_name> REORGANIZE;

To REBUILD an Index:

ALTER INDEX <index_name> ON <table_name> REBUILD WITH (ONLINE = ON);

NB: The reorganisation of an Index is always an online operation; that is, the reorganisation does not cause database users to be unable to access the Index during query execution.  By default, however, Index rebuilds take the Index offline, and as such users will be unable to access the Index for the duration of the rebuild.  The (ONLINE = ON) option allows the online rebuilding of an Index, but is only available in the Enterprise edition of SQL Server.

To discover the current level of fragmentation of an Index:

DBCC SHOWCONTIG ('<table_name>', '<index_name>');
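NB: on SQL Server 2005 and later, DBCC SHOWCONTIG is deprecated; the sys.dm_db_index_physical_stats DMV returns the same information in a more query-friendly form:

-- average fragmentation per index for a single table; 'LIMITED' mode
-- scans only the upper index levels, making it the cheapest option
SELECT i.name AS index_name,
	ips.avg_fragmentation_in_percent,
	ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('<table_name>'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
	ON i.[object_id] = ips.[object_id] AND i.index_id = ips.index_id;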

A Home-grown Index Maintenance Plan

It was while trying to fix an issue with “out of control” SQL Server log file growth that I came across an outstanding library of Transact-SQL that can be used to programmatically and intelligently reorganise and/or rebuild your indexes dependent upon their current level of fragmentation:

Index Defrag Script v4.0 – by the SQL Fool, Michelle Ufford

By default, the script will reorganise every Index with a level of fragmentation greater than 10% and rebuild every Index with greater than 30% fragmentation.

You can use this script, as is, and it will be a vast improvement on the built-in Maintenance Plan tasks.

Log File Growth and Indiscriminate Index Rebuilds

We were experiencing the above-mentioned log file growth due to the use of the Maintenance Plan ‘Rebuild Index Task’ and its indiscriminate rebuilding of every index in our database.  There were some very large indexes in the database in question, but many of them rarely changed and thus did not require rebuilding.  Index rebuilds are a logged operation and were therefore causing enormous growth of the log file; before I began my crusade to eradicate the rampant growth, the log file for the database in question regularly blew out to over 100GB (for a data file of approximately 120GB).

After the implementation of the new Maintenance schedule, we were able to keep the log file consistently under 5GB.

The following steps describe the sequence of events in the new schedule:

  1. Backup the LOG
  2. Set the database to ‘Bulk Logged’ mode
  3. Run the Index Defrag Script
  4. Set the database back to ‘Full’ recovery mode
  5. Backup the LOG again
  6. Shrink the LOG

We back up the LOG file both pre- and post-maintenance, due to the pre-existing hourly LOG backup schedule.  This was in place so that we could restore to within one hour of any point in time, should anything catastrophic happen to the database.

If you would like to implement something similar in your own CRM database(s), you will need to obtain a copy of the Index Defrag Script v4.0 from SQL Fool (I use it as is from within my own scripts).  You will also need copies of the following:

Optimisation Script

This script is the “Meta Script” that calls and controls the entire maintenance process; it takes only two parameters:

  • @DATABASE – name of the database you wish to maintain
  • @logsize in MB – target size of the resultant database LOG file – you’ll have to play around a little bit to determine the optimum size for your DB, as the SHRINK operation will fail if you choose too small a number

NB: I’ve hardcoded the directory into which the LOG backup files are placed; I really shouldn’t have, but… I had to leave something for others to do.

CREATE PROCEDURE [dbo].[dba_optimiseDB]
(
 @DATABASE varchar(128),
 @logsize int = 2048
)
AS
 DECLARE @RC int;
 DECLARE @minFragmentation float;
 DECLARE @rebuildThreshold float;
 DECLARE @executeSQL bit;
 DECLARE @defragOrderColumn nvarchar(20);
 DECLARE @defragSortOrder nvarchar(4);
 DECLARE @timeLimit int;
 DECLARE @tableName varchar(4000);
 DECLARE @forceRescan bit;
 DECLARE @scanMode varchar(10);
 DECLARE @minPageCount int;
 DECLARE @maxPageCount int;
 DECLARE @excludeMaxPartition bit;
 DECLARE @onlineRebuild bit;
 DECLARE @sortInTempDB bit;
 DECLARE @maxDopRestriction tinyint;
 DECLARE @printCommands bit;
 DECLARE @printFragmentation bit;
 DECLARE @defragDelay char(8);
 DECLARE @debugMode bit;
 SET @minFragmentation = 10;      -- 10%
 SET @rebuildThreshold = 30;      -- 30%
 SET @executeSQL = 1;
 SET @defragOrderColumn = 'range_scan_count';
 SET @defragSortOrder = 'DESC';
 SET @timeLimit = 120;       -- 2hrs
 SET @tableName = NULL;       -- all tables
 SET @forceRescan = 1;
 SET @scanMode = 'LIMITED';      -- LIMITED / SAMPLED / DETAILED
 SET @minPageCount = 8;
 SET @maxPageCount = NULL;
 SET @excludeMaxPartition = 0;
 SET @onlineRebuild = 1;
 SET @sortInTempDB = 1;
 SET @maxDopRestriction = NULL;
 SET @printCommands = 1;
 SET @printFragmentation = 1;
 SET @defragDelay = '00:00:05';
 SET @debugMode = 0;
 -- take pre-optimise log backup
 declare @folder nvarchar(max);
 declare @file nvarchar(max);
 set @folder = N'\\data\backups\Database Maintenance\MSCRM Database Backup\CRM Optimise\';
 set @file = @DATABASE + N'_' + dbo.fFormatDateTime(GETDATE(), 'TIMESTAMP');
 EXEC dba_backupLog @DATABASE, @folder, @file;
 -- switch to bulk logged mode
 EXEC('ALTER DATABASE ' + @DATABASE + ' SET RECOVERY BULK_LOGGED');
 -- re-index your little heart out...
 EXECUTE @RC = [dba].[dbo].[dba_indexDefrag_sp]
    @minFragmentation
   ,@rebuildThreshold
   ,@executeSQL
   ,@defragOrderColumn
   ,@defragSortOrder
   ,@timeLimit
   ,@DATABASE
   ,@tableName
   ,@forceRescan
   ,@scanMode
   ,@minPageCount
   ,@maxPageCount
   ,@excludeMaxPartition
   ,@onlineRebuild
   ,@sortInTempDB
   ,@maxDopRestriction
   ,@printCommands
   ,@printFragmentation
   ,@defragDelay
   ,@debugMode;
 --switch back to full recovery mode
 EXEC('ALTER DATABASE ' + @DATABASE + ' SET RECOVERY FULL');
 -- take post-optimise log backup
 set @file = @DATABASE + N'_' + dbo.fFormatDateTime(GETDATE(), 'TIMESTAMP');
 EXEC dba_backupLog @DATABASE, @folder, @file;
 -- shrink it
 EXEC dba_shrinkLog @DATABASE, @logsize;
GO

LOG Backup Script

CREATE PROCEDURE [dbo].[dba_backupLog]
(
 @DATABASE VARCHAR(128),
 @folder VARCHAR(MAX),
 @file VARCHAR(MAX)
)
AS

 DECLARE @logpath VARCHAR(MAX);
 SELECT @logpath = @folder + @file + '.TRN';

 BACKUP LOG @DATABASE
  TO DISK = @logpath WITH NOFORMAT,
  NOINIT,
  NAME = @file,
  SKIP,
  REWIND,
  NOUNLOAD,
  STATS = 10;

 declare @backupSetId as int;
 select
  @backupSetId = position
 from
  msdb..backupset
 where
  database_name = @DATABASE
 and
  backup_set_id =
   (
    select
     max(backup_set_id)
    from
     msdb..backupset
    where
     database_name = @DATABASE
   );
 
if @backupSetId is null
  begin
   declare @error varchar(max);
   set @error = N'Verify failed. Backup information for database ''' + @DATABASE + ''' not found.';
   raiserror(@error, 16, 1);
  end;

 RESTORE VERIFYONLY
  FROM DISK = @logpath WITH FILE = @backupSetId,
  NOUNLOAD,
  NOREWIND;

GO

Shrink LOG Script

CREATE PROCEDURE [dbo].[dba_shrinkLog]
(
 @DATABASE nvarchar(128),
 @size int = 2048
)
AS
 EXEC
 (
  'USE ' + @DATABASE + ';' +
  'declare @log_name varchar(max);' +
  'select @log_name = [name] from sys.database_files where type_desc = ''LOG'';' +
  'DBCC SHRINKFILE(@log_name, ' + CAST(@size AS nvarchar(16)) + ');'
 );

GO 

Summary

I created a standalone database, ‘DBA’, to house these and other database maintenance objects.  The results of the Index Defrag Script are stored in tables created as part of the installation of the scripts you can download from SQL Fool.  It can be educational to see just which Indexes are being affected, and in what way.

November 5, 2010

Caching Revisited – CRM 4.0 SDK – Advanced Developer Extensions

Filed under: C#, CRM — pogo69 @ 14:07

I was in the middle of a posting on some more advanced concepts regarding the querying of CRM via the new LINQ provider (Microsoft.Xrm.Client) when I discovered another hidden gem.  I’ve now proven the concept and it is so useful, I had to post about it immediately.

Cache Invalidation Plugin

After the release of the new CRM 4.0 SDK, there was a lot of talk generated about the “all or nothing” caching mechanism.  Much of that talk can be found on the Codeplex CRM Accelerator site, due to the Accelerator’s heavy reliance on the xRM client and portal libraries.

There is a hint in a couple of the postings (and somewhere in the xRM documentation I believe) of a Cache Invalidation Plugin; however, no specifics are available and it was subsequently announced that it didn’t get shipped.  Well, guess what?  IT WAS SHIPPED!!  And it’s living inside the same Microsoft.Xrm.Client.dll that we’re all using to help write our fancy new LINQ queries!!

I found it when browsing the assembly with the truly magnificent .NET Reflector tool.

How It Works

In order to use the plugin, you must register it in the CRM using the Plugin Registration tool or similar.  Once the assembly has been registered, you can register steps for whichever entities and messages you wish.  In my test, I chose to register steps for the Create, Update and Delete messages on the “incident” (Case) entity; mostly because our clients have all required some form of access to Cases via the Partner Portal.

If you dig hard enough in the documentation and in postings around the web, you will find sufficient reference to the Portal framework’s Cache Invalidation Handler.  It is mapped to:

http://<portalwebsite>/Cache.axd

So, the next question is, “how does the plugin know how to find the cache invalidation handler?”.

The Portal framework’s CMS entities contain (among many others) a Web Site entity; that entity has an attribute called ‘Cache Invalidation Handler URL’.  Naturally, I initially thought it worked by looking at this attribute, but… it does not.

.NET Reflector to the rescue again!!  When I updated Case entities, there was a corresponding error in the log from the plugin telling me that the parameter “configurationXml” cannot be null.  Which is basically telling us that we need to supply a configuration XML when we register a step for our plugin assembly.  Digging into Reflector again, I found the answer.

So… we must supply a “Secure Configuration”, but what should it look like?  Back to Reflector.

So, the Configuration constructor throws the “configurationXml” null argument exception that I found in the CRM error log.  It requires that the configuration XML contain a “configuration” root node, and it builds a name/value collection from the subnodes.  Getting there…

So, we need a Configuration element with the name “invalidateCacheUrl”.  OK, so the final secure configuration XML looks like:

<configuration>
 <invalidateCacheUrl>http://[websiteurl]/Cache.axd</invalidateCacheUrl>
</configuration>

When I registered the steps, I modified the Description so that I could easily identify them in the CRM System Jobs view.

I’ve only done some preliminary testing with the Case (incident) entity, but I have no reason to believe that it won’t work with any or all of the entities; it is the same mechanism that many of us have been using, albeit in a rather more heavy-handed fashion.  But now, we can invalidate the cache per entity instance.

Enjoy!

October 29, 2010

To Cache or Not to Cache – CRM 4.0 SDK – Advanced Developer Extensions

Filed under: CRM — pogo69 @ 00:05

Let’s face it; I’m a bit lucky with respect to my relatively recent introduction to Microsoft Dynamics CRM. I’ve entered the game to find a robust (most of the time AKA when it behaves) and mature product, without many of the restrictions and idiosyncrasies of earlier versions. I’ve still never even seen a CRM 3.0 or earlier system; and between you, me and the world, I’d like to keep it that way. Soon, we’ll see the introduction of CRM2011, which appears to be a developer’s dream – more customisation, packaged “solutions”, more in-depth integration, a more mature client and server object model.

But back to CRM 4.0. The biggest change brought by the most recent release of the CRM 4.0 SDK was the Advanced Developer Extensions (Microsoft xRM *). It allows developers to access CRM entity instances via LINQ – a set of extension methods and language features that allow SQL-like querying of an appropriately mapped object model. So, by ditching the old unwieldy QueryExpression syntax, what would formerly have taken dozens of lines of code can now be expressed as its far more compact (and far more maintainable) equivalent. Something like:

List<Xrm.mrc_coursesubject> subjects =
	(
		from s in DataContext.mrc_coursesubjects
		join sa in DataContext.mrc_coursesubjectallocations on s.mrc_coursesubjectid equals sa.mrc_subjectid.Value
		join cs in DataContext.mrc_courseschedules on sa.mrc_courseid.Value equals cs.mrc_courseid.Value
		where cs.mrc_coursescheduleid == this._courseScheduleId
		select s
	).ToList();

subjects.ForEach(subject =>
		this.ddlSubject.Items.Add(new ListItem(subject.mrc_name, subject.mrc_coursesubjectid.ToString()))
	);

I won’t go further into it in this posting, but keep an eye out for a subsequent posting on some of the idiosyncrasies of CRM specific LINQ queries.

So… caching.

The xRM libraries ship with an inbuilt caching mechanism; also recently released were new versions of what have been termed “Accelerators”.  These complete reference websites are built around a CRM-driven CMS * system.  Because almost everything you see in the Accelerator websites is dynamically generated from entity instances in the CRM, it is very important that the data is cached appropriately.

This, however, has created a number of issues for developers hoping to use the xRM libraries to dynamically display CRM content on their 3rd party websites. Newly created/updated/deleted CRM data is not immediately viewable as previously obtained results are cached.  The documentation makes some reference to the caching mechanisms involved, but not enough to make it immediately obvious how to turn it off, and/or control it to meet your requirements.  While I don’t profess to understand every intricacy of the xRM caching framework, the following outlines what I did to meet our clients’ needs and it has certainly worked for us.

Basically, we pre-process every incoming page request in the ASP.NET Application File, Global.asax; empty functions elided for simplicity and clarity:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.SessionState;

namespace Namespace.Of.The.Week
{
	public static class Extensions
	{
		public static void RemoveAll(this Microsoft.Xrm.Client.Caching.BaseCache cache)
		{
			// snapshot the entries first, so that we don't remove items from
			// the collection while enumerating it
			foreach (KeyValuePair<string, object> pair in ((IEnumerable<KeyValuePair<string, object>>)cache).ToList())
			{
				cache.Remove(pair.Key);
			}
		}
	}

	public class Global : System.Web.HttpApplication
	{
		#region Helper Routines
		private static void ClearCache(string entityName)
		{
			var dependency = string.Format("adxdependency:crm:entity:{0}", entityName).ToLower();

			var cache = Microsoft.Xrm.Client.Caching.CacheManager.GetBaseCache();
			cache.Remove(dependency);
		}
		private static void ClearCache()
		{
			Microsoft.Xrm.Client.Caching.CacheManager.GetBaseCache().RemoveAll();
		}
		#endregion

		protected void Application_BeginRequest(object sender, EventArgs e)
		{
			string cacheRemoval = System.Configuration.ConfigurationManager.AppSettings["Cache.Removal"];

			// the appSetting may be absent; default to no cache removal
			switch ((cacheRemoval ?? string.Empty).ToLower())
			{
				case "all":
					// clear all cache items for the following entities
					ClearCache();
					break;

				case "entity":
					string[] entities = System.Configuration.ConfigurationManager.AppSettings["Cache.Removal.Entities"].Split(new char[] { ',' });
					foreach (string entity in entities)
					{
						ClearCache(entity);
					}
					break;
			}
		}

		...
	}
}
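For reference, the two appSettings that the code above reads might look like the following in Web.config (the entity list is illustrative only):

<appSettings>
	<!-- "all" clears the entire cache; "entity" clears only the listed entities -->
	<add key="Cache.Removal" value="entity" />
	<add key="Cache.Removal.Entities" value="incident,contact" />
</appSettings>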

* xRM – Anything (x) Relationship Management

* CMS – Content Management System
