Pogo69's Blog

January 9, 2013

Converting Queries/Charts/Dashboards from User to System

Filed under: C#, CRM, Cutting Code — pogo69 @ 09:48

Overview

If you have created your own Saved Queries, Charts or Dashboards, you may have had occasion to share those objects with other CRM Users.  If you find that they become universally reusable to the point of wanting to “promote” them to System objects, there is no mechanism in the CRM Web Application to do so.

Code to the Rescue

You can, however, convert such items using some relatively simple C# (or VB, if you must) code.

Charts

// chart
var chart =
  (
    from uqv in xrm.UserQueryVisualizationSet
    where uqv.Name == "Pogo's Chart"
    select uqv
  ).FirstOrDefault();

Xrm2011.SavedQueryVisualization newChart = new Xrm2011.SavedQueryVisualization
{
  Name = chart.Name,
  DataDescription = chart.DataDescription,
  Description = chart.Description,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  PresentationDescription = chart.PresentationDescription,
  PrimaryEntityTypeCode = chart.PrimaryEntityTypeCode,
  WebResourceId = chart.WebResourceId
};

xrm.Create(newChart);

Views

// view
var view =
  (
    from uq in xrm.UserQuerySet
    where uq.Name == "Pogo's View"
    select uq
  ).FirstOrDefault();

Xrm2011.SavedQuery newView = new Xrm2011.SavedQuery
{
  Name = view.Name,
  AdvancedGroupBy = view.AdvancedGroupBy,
  CanBeDeleted = new BooleanManagedProperty(true),
  ColumnSetXml = view.ColumnSetXml,
  ConditionalFormatting = view.ConditionalFormatting,
  Description = view.Description,
  FetchXml = view.FetchXml,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  LayoutXml = view.LayoutXml,
  QueryType = view.QueryType,
  ReturnedTypeCode = view.ReturnedTypeCode
};

xrm.Create(newView);

Dashboards

// dashboard
var dashboard =
  (
    from d in xrm.UserFormSet
    where d.Name == "Pogo's Dashboard"
    select d
  ).FirstOrDefault();

Xrm2011.SystemForm newDashboard = new Xrm2011.SystemForm
{
  Name = dashboard.Name,
  CanBeDeleted = new BooleanManagedProperty(true),
  Description = dashboard.Description,
  FormXml = dashboard.FormXml,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  ObjectTypeCode = dashboard.ObjectTypeCode,
  Type = dashboard.Type
};

newDashboard.Id = xrm.Create(newDashboard);
PublishXmlRequest reqPublish = new PublishXmlRequest
{
  ParameterXml = string.Format(
    @"
      <importexportxml>
        <dashboards>
          <dashboard>{0}</dashboard>
        </dashboards>
      </importexportxml>",
    newDashboard.Id)
};
PublishXmlResponse respPublish = (PublishXmlResponse)xrm.Execute(reqPublish);

I had issues with privileges on the newly migrated Dashboard the first time I tried it.  The addition of the PublishXmlRequest fixed it.  I had no such issue with the Views or Charts.

Reference Implementation

You can download a reference implementation of the code described above from the following:

http://sdrv.ms/W1z1Ra

It contains a CRM 2011 Managed Solution that allows Users to migrate User objects (Views, Charts, Dashboards) to System equivalents.  I am in the process of uploading the source code to Codeplex; upon completion you will be able to download and modify as you see fit.

NB: The current implementation will not migrate User Dashboards that contain User components.  A future version will migrate embedded components in addition to the Dashboard that houses them.

Update

You can now download the Managed Solution and/or all source code from the following Codeplex site:

http://userobjectmigration.codeplex.com

December 2, 2012

Adventures in Multi-Threaded CRM 2011 Client Development

Filed under: .NET, C#, CRM — pogo69 @ 12:52

Overview

I’ve recently completed development of a process to auto-magically batch import data on a weekly basis for a client.

I have no control over the format in which the data will be received; the client (or their technical representatives) have decided that I will receive the entire set of Active Associations (data representing a link between a Contact and Account) each import.  I will not, however, receive any indication of Inactive (that is, deactivated) associations; as such, the import process will “clean up” prior to each import, deleting all active associations and updating each affected Contact (setting a custom Status attribute to Inactive).

I will discuss the import process in another blog posting; the focus of this post is the “clean up”; specifically the development process of deleting and updating approximately 100,000 records (each) as efficiently as possible.

Early Bound Classes, Large Record Sets and Memory Consumption

The initial iteration of my clean-up program simply queried for the set of all Contacts currently set to CustomStatus == Active.  It then looped through the records, setting each one, in turn, to Inactive.  In pseudo-code:

var contactIds =
  (
    from c in xrm.ContactSet
    where c.new_status == 1
    select c.Id
  ).ToList();

foreach (Guid contactId in contactIds)
{
  Contact c = new Contact
  {
    Id = contactId,
    new_status = 2
  };

  xrm.Update(c);
}

This caused the console program to consume huge amounts of memory to the point that the program stopped working.  I thought it may have to do with the ServiceContext caching objects, so I turned caching off completely:

xrm.MergeOption = MergeOption.NoTracking;

It had no effect (on memory); so I started “batching” my updates.

Enter the ThreadPool

I decided to use System.Threading.ThreadPool to operate simultaneously on batches of records.  Initially, my code selected 1,000 records at a time:

var contactIds =
  (
    from c in xrm.ContactSet
    where c.new_status == 1
    select c.Id
  ).Take(1000).ToList();

foreach (Guid contactId in contactIds)
{
  ThreadPool.QueueUserWorkItem(state =>
  {
    Contact c = new Contact
    {
      Id = (Guid)state,  // the state parameter arrives as Object, so cast back to Guid
      new_status = 2
    };

    xrm.Update(c);
  }, contactId);
}

This would run for a little while, before erroring.  Trawling through the CRM Trace Log files indicated that I was encountering Timeout errors.  WCF Services have a throttling mechanism that allows only a certain number of simultaneous connections.  So I throttled my client code:

ThreadPool.SetMaxThreads(10, 10);

The first parameter restricts the ThreadPool to 10 concurrently running worker threads (the second does the same for asynchronous I/O threads).  All other requests must wait for a spare thread to become available, effectively throttling my multi-threaded client code.

But it still errored; this time with intermittent, seemingly random SQL deadlocks.  It appeared that I still had threads performing updates while the next batch of candidate Contacts was being selected.  So I had to separate the select and update operations by “waiting” for each batch of updates to complete before going back and selecting the next batch.

In order to wait for a batch of threads, I needed to set up an array of WaitHandles.  The maximum number of handles that WaitHandle.WaitAll can wait on simultaneously is 64, so that became my batch size:

class Program
{
  #region Local Members
  private static int BATCH_SIZE = 64;
  #endregion

  #region Thread Worker Classes
  private class ContactUpdater
  {
    XrmServiceContext _xrm;
    Guid _contactId;
    ManualResetEvent _doneEvent;

    public ContactUpdater(XrmServiceContext xrm, Guid contactId, ManualResetEvent doneEvent)
    {
      this._contactId = contactId;
      this._doneEvent = doneEvent;
      this._xrm = xrm;
    }
    public void UpdateContact(Object o)
    {
      // de-activate contact
      Contact contact = new Contact
      {
        Id = this._contactId,
        new_Status = (int)Contactnew_Status.Inactive
      };

      // update
      this._xrm.Update(contact);

      // signal that we're done
      this._doneEvent.Set();
    }
  }
  #endregion

  static void Main(string[] args)
  {
    ThreadPool.SetMaxThreads(10, 10);

    using (XrmServiceContext xrm = new XrmServiceContext("Xrm"))
    {
      xrm.MergeOption = Microsoft.Xrm.Sdk.Client.MergeOption.NoTracking;

      // update ALL current active contacts
      while (true)
      {
        // retrieve batch
        List<Guid> contactIds =
          (
            from c in xrm.ContactSet
            where c.new_Status.GetValueOrDefault() == (int)Contactnew_Status.Active
            select c.Id
          ).Take(BATCH_SIZE).ToList();

        if (contactIds.Count == 0) { break; }

        ManualResetEvent[] doneEvents = new ManualResetEvent[contactIds.Count];

        // spawn processing threads
        for (int i = 0; i < contactIds.Count; i++)
        {
          ContactUpdater contactUpdater = new ContactUpdater(xrm, contactIds[i], (doneEvents[i] = new ManualResetEvent(false)));

          ThreadPool.QueueUserWorkItem(contactUpdater.UpdateContact);
        }

        // wait for all threads to complete before processing next batch
        WaitHandle.WaitAll(doneEvents);
      }
    }
  }
}

Conclusion

This provides a framework for the multi-threaded updating of a large number of CRM Entity Instances.  With a little bit of tinkering you could get it to fit most batch update situations.

You could probably also play with the ThreadPool maximum thread count; I haven’t, but 10 seemed like a conservatively good number.
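If you do want to experiment, the throttling behaviour is easy to observe outside of CRM.  The following is a minimal, self-contained console sketch of my own (not CRM code); note that SetMaxThreads refuses values below the machine's processor count, so on a many-core box the cap may end up higher than requested:

```csharp
using System;
using System.Threading;

class ThrottleDemo
{
    static int _current = 0;   // work items currently executing
    static int _peak = 0;      // highest concurrency observed

    static void Main()
    {
        // Returns false (and changes nothing) if 10 is below the processor count.
        bool capped = ThreadPool.SetMaxThreads(10, 10);
        Console.WriteLine("Cap applied: " + capped);

        using (CountdownEvent done = new CountdownEvent(100))
        {
            for (int i = 0; i < 100; i++)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    int now = Interlocked.Increment(ref _current);

                    // record the peak concurrency seen so far
                    int snapshot;
                    while (now > (snapshot = _peak))
                    {
                        Interlocked.CompareExchange(ref _peak, now, snapshot);
                    }

                    Thread.Sleep(50);   // simulate a slow web service call

                    Interlocked.Decrement(ref _current);
                    done.Signal();
                });
            }

            done.Wait();   // wait for all 100 work items to finish
        }

        Console.WriteLine("Peak concurrency: " + _peak);
    }
}
```

With the cap applied, the peak concurrency printed should never exceed 10, whatever number of work items you queue.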

December 1, 2012

CRM 2011 OData, REST and the QueryString

Filed under: CRM, Javascript — pogo69 @ 12:06

Introduction

Like most CRM developers, I have loved the introduction of the OData endpoint in CRM 2011.  It provides a simple, yet highly powerful mechanism to perform basic CRUD operations against a CRM 2011 Organisation.

Zen, and the Art of OData Query Construction

What makes working with REST so easy from client applications (Javascript and Silverlight, in the case of a CRM 2011 developer) is that operations are performed by constructing an appropriate address via an HTTP Request’s QueryString.

A corollary of the way a RESTful web service is implemented is that we can “play” with our queries directly in Internet Explorer.  I did this recently (actually I do it all the time; but I digress) to demonstrate to a fellow developer how to construct an OData query; the following is a reconstruction of that demonstration.

The Endpoint

In order to address the OData Endpoint, you must navigate to the following address:

http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc

Authentication

NB: The OData endpoint is not supported for use outside of Javascript and Silverlight within the CRM deployment.  Thus, in order to address the OData endpoint from the Internet Explorer QueryString, you must first successfully authenticate with the CRM Organisation.

If you are connecting from within the internal network on which the CRM is deployed, it is likely that you will be able to connect automatically due to Windows Integrated Security.  Otherwise, you may first have to open CRM in another tab.

EntitySet

Navigation to the root of the OData endpoint results in a collection of EntitySets being returned.  The collection is a list of the Entity Types on which OData operations may be performed.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<service xmlns="http://www.w3.org/2007/app" xmlns:app="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" xml:base="http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/">
    <workspace>
        <atom:title>Default</atom:title>
        <collection href="OpportunityCompetitorsSet">
            <atom:title>OpportunityCompetitorsSet</atom:title>
        </collection>
        <collection href="SystemFormSet">
            <atom:title>SystemFormSet</atom:title>
        </collection>
        <collection href="RoleSet">
            <atom:title>RoleSet</atom:title>
        </collection>
        ...
    </workspace>
</service>

OpportunitySet

The Query that we were building can be defined as follows:

“When was the most recently Won Opportunity attached to the current Account?”

So, we need to interrogate the OpportunitySet.  We do this by simply appending a forward-slash and the name of the EntitySet to the end of the address:

http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc/OpportunitySet

This will return a collection of every Opportunity in the system (with ALL of each Opportunity’s attributes):

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<feed xml:base="http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
  <title type="text">OpportunitySet</title>
  <id>http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc/OpportunitySet</id>
  <updated>2012-11-30T21:44:25Z</updated>
  <link rel="self" title="OpportunitySet" href="OpportunitySet" />
  <entry>
    <id>http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')</id>
    <title type="text">Test</title>
    <updated>2012-11-30T21:44:25Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Opportunity" href="OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')" />
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/opportunitycompetitors_association" type="application/atom+xml;type=feed" title="opportunitycompetitors_association" href="OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')/opportunitycompetitors_association" />
    ...
    <category term="Microsoft.Crm.Sdk.Data.Services.Opportunity" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:ParticipatesInWorkflow m:type="Edm.Boolean">false</d:ParticipatesInWorkflow>
        <d:new_IsParentOngoingOpp m:type="Edm.Boolean">false</d:new_IsParentOngoingOpp>
        <d:TimeZoneRuleVersionNumber m:type="Edm.Int32" m:null="true" />
        ...
      </m:properties>
    </content>
  </entry>
</feed>

<entry>

Each <entry> node represents an Entity instance; in this case, an Opportunity.

<link>

The <link> nodes beneath <entry> represent child Entity Collections attached to the individual Entity instance.

<content><m:properties><d:AttributeSchemaName /></m:properties></content>

Each property is represented by a child node of <content><m:properties>.  The attribute nodes are named with the SchemaName of the attribute as defined in your CRM Customisations; the data type is represented by the m:type attribute.

OData Operators

$filter

The $filter operator provides a mechanism to “filter” the returned result set; much like the where clause in SQL.

“When was the most recently Won Opportunity attached to the current Account?”

So, we need to return only Won Opportunities for the current Account.  Thus, our filter will look like the following:

/OpportunitySet?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'00000000-0000-0000-0000-000000000000')

In other words, return all Opportunities with a StatusCode of 3 (Won) and a CustomerId GUID identifier matching that specified.  See the CRM SDK and the OData Specification for additional details regarding the $filter syntax.

$top and $orderby

“When was the most recently Won Opportunity attached to the current Account?”

We need the most recently Won Opportunity.  For the purposes of our query, most recently Won is defined as the most recent Actual Close Date.  Thus we need to order our result set by Actual Close Date descending; and return only the top 1 entry:

/OpportunitySet?$orderby=ActualCloseDate desc&$top=1

$select

“When was the most recently Won Opportunity attached to the current Account?”

In order to determine when the most recently Won Opportunity occurred, we only care about the Actual Close Date.  With the $select operator, we can choose which attributes get returned in the result set:

/OpportunitySet?$select=ActualCloseDate

Consolidated Query

If we put the query operators together, we end up with:

/OpportunitySet?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'00000000-0000-0000-0000-000000000000')&$top=1&$orderby=ActualCloseDate desc&$select=ActualCloseDate

The result set will look something like:

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<feed xml:base="http://dev.mrcrm.com.au:8888/MrCRM/XRMServices/2011/OrganizationData.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
  <title type="text">OpportunitySet</title>
  <id>http://dev.mrcrm.com.au:8888/MrCRM/XrmServices/2011/OrganizationData.svc/OpportunitySet</id>
  <updated>2012-12-01T01:29:33Z</updated>
  <link rel="self" title="OpportunitySet" href="OpportunitySet" />
  <entry>
    <id>http://dev.mrcrm.com.au:8888/MrCRM/XRMServices/2011/OrganizationData.svc/OpportunitySet(guid'3b225dd4-28e3-e011-8abe-001517418134')</id>
    <title type="text">5 User Hosted</title>
    <updated>2012-12-01T01:29:33Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Opportunity" href="OpportunitySet(guid'3b225dd4-28e3-e011-8abe-001517418134')" />
    <category term="Microsoft.Crm.Sdk.Data.Services.Opportunity" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:ActualCloseDate m:type="Edm.DateTime">2012-03-18T14:00:00Z</d:ActualCloseDate>
      </m:properties>
    </content>
  </entry>
</feed>

If the result set is empty, there were NO Won Opportunities attached to the current Account.  Otherwise, the <d:ActualCloseDate /> node contains our answer.

Javascript – How to Use our Query

In the interest of completeness, I will quickly show how to use our query from a Javascript Web Resource (the code assumes the existence and availability of the JSON and jQuery libraries).

var serverURL = Xrm.Page.context.getServerUrl();
var accountId = Xrm.Page.data.entity.getId().replace(/[\{\}]/g, "");
var query = serverURL + "/XRMServices/2011/OrganizationData.svc/OpportunitySet?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'" + accountId + "')&$top=1&$orderby=ActualCloseDate desc&$select=ActualCloseDate";
$.ajax({
  type: "GET",
  contentType: "application/json; charset=utf-8",
  dataType: "json",
  url: query,
  beforeSend: function (XMLHttpRequest) {
    XMLHttpRequest.setRequestHeader("Accept", "application/json");
  },
  success: function (data, textStatus, XmlHttpRequest) {
    var results = data.d.results;

    if (results.length == 0) {
      alert("No Won Opportunities found!!");
    }
    else {
      var dateResult = results[0].ActualCloseDate;
    }
  },
  error: function (XmlHttpRequest, textStatus, errorThrown) {
    alert("OData Select Failed: " + query);
  }
});

See the SDK for more information on the idiosyncrasies of date processing from OData:

http://msdn.microsoft.com/en-us/library/gg328025.aspx#BKMK_WorkingWithDates

April 5, 2012

CRM 2011 – CRUD vs Execute – Which is Faster?

Filed under: C#, CRM, Cutting Code — pogo69 @ 12:35

Overview

An interesting discussion began on the MSDN Support Forums for CRM Development recently, regarding the performance of the various Web Service CRUD methods compared to their Execute-wrapped equivalents.

According to the SDK:

http://msdn.microsoft.com/en-us/library/42dcebf5-a624-45b9-b719-20e5882d5ca2(MSDN.10)#Performance

We are encouraged to use the CRUD methods as they are faster:

Use Common Methods

The common methods are faster than using the Execute method together with the corresponding message. The following table lists these methods and messages.

  Method             Message
  Create             CreateRequest
  Delete             DeleteRequest
  Update             UpdateRequest
  Retrieve           RetrieveRequest
  RetrieveMultiple   RetrieveMultipleRequest

The obvious question was raised – why?

I gave an initial hypothesis which, while it seemed quite valid, was admittedly an educated guess at best.  Upon being challenged, I decided to dig deeper.  You can see the results of that discussion in the originating thread:

http://social.msdn.microsoft.com/Forums/en/crmdevelopment/thread/8d679824-8cd6-4a27-ad4d-11974dc5403b

Decompiling is Your Friend

Although peeking inside the inner workings of the CRM is largely discouraged, I find it an extremely useful exercise from time to time.  Not only do I occasionally learn something about the way the CRM implements certain mechanisms; it has, on occasion, allowed me to see a solution that I might otherwise never have found, or found only by trial and error.  This is particularly the case for areas in which the SDK documentation describes only the relevant API (the what and, to some extent, the how) and not the where, when and why.

In this case, I ripped apart only enough of the relevant assemblies to verify that, as hypothesised in another thread linked from the one above, the “Create” message and its other CRUD siblings are “wrapped” in a call to Execute.  As I mention in my final reply, however, the Execute web service call carries its own overhead in that the inner message must be extracted.  Without digging too deep (or going nuts, decompiling the entire application and running some profiling software over it), it appeared to me that things were about even with respect to performance.

Reality (The Results are In!!)

In the end, it doesn’t really matter what the SDK says, and it doesn’t really matter what a code analysis says; all that matters is your real world experience (THE END RESULT) when you attempt to use these mechanisms.  So I wrote some code and came to my own conclusions, with the code and results as follows:

System.Diagnostics.Stopwatch stopWatchCreate = new System.Diagnostics.Stopwatch();
System.Diagnostics.Stopwatch stopWatchExecute = new System.Diagnostics.Stopwatch();

using (var xrm = new Xrm2011.XrmServiceContext("MrCRM.Dev"))
{
	xrm.MergeOption = Microsoft.Xrm.Sdk.Client.MergeOption.NoTracking;

	WhoAmIRequest reqWhoAmI = new WhoAmIRequest();
	WhoAmIResponse respWhoAmI = (WhoAmIResponse)xrm.Execute(reqWhoAmI);

	for (int i = 0; i < 10000; i++)
	{
		// execute
		stopWatchExecute.Start();

		{
			CreateRequest req = new CreateRequest
			{
				Target = new Entity
				{
					LogicalName = "contact"
				}
			};
			req.Target.Attributes.AddRange(
				new KeyValuePair<string, object>("firstname", string.Format("Person{0}", i)),
				new KeyValuePair<string, object>("lastname", string.Format("Name{0}", i)),
				new KeyValuePair<string, object>("telephone1", string.Format("555-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone2", string.Format("554-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone3", string.Format("553-{0:0000}", i)),
				new KeyValuePair<string, object>("address1_line1", string.Format("{0} Place Street", i)),
				new KeyValuePair<string, object>("address1_city", string.Format("CityVille{0}", i)),
				new KeyValuePair<string, object>("address1_stateorprovince", string.Format("State{0}", i)),
				new KeyValuePair<string, object>("address1_postalcode", string.Format("553-{0:0000}", i))
			);

			CreateResponse resp = (CreateResponse)xrm.Execute(req);
		}

		stopWatchExecute.Stop();

		// create
		stopWatchCreate.Start();

		{
			Entity target = new Entity
			{
				LogicalName = "contact"
			};
			target.Attributes.AddRange(
				new KeyValuePair<string, object>("firstname", string.Format("Person{0}", i)),
				new KeyValuePair<string, object>("lastname", string.Format("Name{0}", i)),
				new KeyValuePair<string, object>("telephone1", string.Format("555-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone2", string.Format("554-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone3", string.Format("553-{0:0000}", i)),
				new KeyValuePair<string, object>("address1_line1", string.Format("{0} Place Street", i)),
				new KeyValuePair<string, object>("address1_city", string.Format("CityVille{0}", i)),
				new KeyValuePair<string, object>("address1_stateorprovince", string.Format("State{0}", i)),
				new KeyValuePair<string, object>("address1_postalcode", string.Format("553-{0:0000}", i))
			);
			xrm.Create(target);
		}

		stopWatchCreate.Stop();
	}
}

System.Diagnostics.Debug.Print("Create:\t{0} milliseconds", stopWatchCreate.ElapsedMilliseconds);
System.Diagnostics.Debug.Print("Execute:\t{0} milliseconds", stopWatchExecute.ElapsedMilliseconds);

I tried to make it as “fair” as possible by:

  • Preceding the real code with a WhoAmI request to initialise the connection
  • Interspersing the calls
  • Running the entire thing twice; the 1st iteration placed Create 1st and Execute 2nd, and the 2nd iteration reversed the order

The results?

Create wins!!  But only marginally; given we’re creating 10,000 records, the difference is negligible.

Create: 382873 milliseconds
Execute: 384767 milliseconds

Create: 404256 milliseconds
Execute: 406978 milliseconds

Conclusion

So, I would say that all in all:

  1. Unless you NEED to scrounge every last millisecond of performance out of your application, code whichever way you prefer
  2. Don’t blindly believe everything you read in the SDK – test and confirm

Thanks to Laughing John for the inspiration.

March 30, 2012

Case Management – How to Implement Block Increment Rules in Time Based Contracts

Filed under: C#, CRM, Cutting Code — pogo69 @ 10:47

For our clients with Time-based Service Contracts, we have some very specific Business Rule requirements to implement.  This ensures that adequate time is allocated to a Case, no matter how trivial, to cater for the administrative overheads incurred.

The Requirements

  1. Each Case must consume a minimum of 60 mins allotment
  2. Each Case must consume a multiple of 15 mins after the initial 60 mins

For example:

  Accrued Billable Time   Total Time upon Case Resolution
  23 mins                 60 mins
  47 mins                 60 mins
  1 hr 13 mins            1 hr 15 mins
  2 hrs 47 mins           3 hrs

The first two examples meet Rule #1 – minimum of 60 mins.

The second two examples meet Rule #2 – multiple of 15 mins after the initial minimum 60 mins – that is, rounded up to the nearest 15 min interval.
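These rules can be captured in a small helper method; the following is a minimal sketch (the class, method and constant names are my own illustration, not part of the CRM SDK):

```csharp
// Applies the block-increment rules to an accrued billable time:
//   1. a minimum of 60 minutes per Case
//   2. beyond 60 minutes, round up to the next 15-minute multiple
public static class BlockIncrement
{
    private const int MinimumMinutes = 60;
    private const int IncrementMinutes = 15;

    public static int RoundUp(int accruedMinutes)
    {
        if (accruedMinutes <= MinimumMinutes) { return MinimumMinutes; }

        // round up to the nearest 15-minute boundary
        int remainder = accruedMinutes % IncrementMinutes;
        return remainder == 0
            ? accruedMinutes
            : accruedMinutes + (IncrementMinutes - remainder);
    }
}
```

For example, RoundUp(73) yields 75 (1 hr 15 mins) and RoundUp(167) yields 180 (3 hrs), matching the table above.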

Overview

Time-Based Contracts -> Accrued Time -> Sum of ActualDurationMinutes in Closed Activities

Time-based Contracts manage allotment in accrued minutes.

When a Case attached to a Time-based Contract is resolved, the CRM aggregates the sum of all ActualDurationMinutes values in Closed Activities attached to that Case.  This includes the in-built Activity Types such as Email, Task etc; and also includes Custom Activity Types.  I make a point of mentioning the inclusion of Custom Activity Types here, as it is not clear in any of the documentation that I have read and there appears to be some subsequent confusion in the CRM Developer community.

In order to Close a Case (Incident), one must create an IncidentResolution entity instance and pass it, along with the required Status (usually 5 == ‘Problem Solved’), to a CloseIncidentRequest.  The IncidentResolution entity must contain two attributes of interest to the Case Resolution process:

  • IncidentId – the GUID identifier of the Case that we wish to Resolve (Close)
  • TimeSpent – the total number of accrued minutes pertaining to the Case being resolved.  This is the amount of allotment that will be decremented from the Contract Line assigned to the Case

TimeSpent is calculated by (the equivalent of) the following:

int timeSpent =
  (
    from a in xrm.ActivityPointerSet
    where a.RegardingObjectId.Id == caseId
      && a.StateCode == 0
      && a.IsBilled.GetValueOrDefault() == false
    select a.ActualDurationMinutes.GetValueOrDefault()
  ).Sum();

Thus, the Total Time Spent on a Case is calculated as the sum of the Durations of each un-billed, Closed Activity attached to that Case.  So, in order to influence this process to support our Business Rules, we need to add time to that calculation.

There are two ways to achieve this:

  1. Attach additional activities to the Case prior to the Case Resolution process (when the Case Resolution dialog box pops up, the calculation has already been done); thus, to influence the Time Spent at this stage, you must create the additional activities before the User attempts to Resolve the Case
  2. Change the value of TimeSpent in the CaseResolution entity instance prior to the Case Resolution process taking place

In order to keep things neat and tidy; and also to ensure that I reconcile the calculations at the appropriate time, I have chosen to implement a combination of both.

Implementation

Time Sheets – A Quick Diversion

While not strictly relevant to this article, I will quickly mention our Time Sheeting process in CRM.  Prior to the introduction of Time-based contracts (we previously only dealt with ‘Number of Cases’ allotment types) we introduced Time Sheeting, implemented as a Custom entity attached to the Case.  The sum of durations in all attached Time Sheets contributed to the number of Cases decremented from the client’s allotment (2 hrs per Case).

With Time-based Contracts, we needed to move to an activity-based mechanism.

With 20-20 hindsight, I would have created the Time Sheet entity as a Custom activity, however:

  1. I didn’t create (or specify) it
  2. Hindsight was not available to those involved at the time
  3. What good does that do us now?

So, in order to:

  1. Ensure that the data entry process is the same regardless of Time-based or Number-of-Case-based Cases
  2. Enable the new activity based calculations to work as expected

I created a Custom activity type (Time Sheet Activity) and wrote a plugin to (re-)generate Closed activities for each Time Sheet attached to the Case every time a Time Sheet is Created, Updated or Deleted.  This allows the times entered into the Time Sheets (that we were already entering anyway) to be reflected (or mirrored) in the total time assigned to activities.

NB: I only create the Time Sheet Activity entity instances for Cases to which a Time-based Contract is attached.

Incident Resolution Process – Timing

Before I move onto the good stuff, I will also quickly mention the Case Resolution process, to give a better understanding of what we are doing and the why of the when.

int decrement = 0;
decrement = this.CalculateAllotmentToDecrement(originalIncident, incidentResolution, contract, context);
if (decrement != 0)
{
 this.DecrementAllotment((Guid)originalIncident["contractdetailid"], decrement, context);
}
this.CreateIncidentResolutionActivity(incidentResolution, context);
this.BillIncidentResolutionActivities(businessEntityMoniker.Id, 1, context);
base.SetState(businessEntityMoniker, 1, status, context);

So, the Case Resolution process is as follows:

  1. Calculate Allotment to decrement.  NB: For Time-based Contracts, this is taken to be the value of TimeSpent within the IncidentResolution entity instance passed in from the Case Resolution dialog.  It is NOT re-calculated as part of the Case Resolution process
  2. Decrement the Allotment
  3. Create IncidentResolution entity instance (up to this point it has been constructed, but NOT Created within the CRM)
  4. Bill the Activities attached to the Case
  5. Close the Case

I originally attempted to influence this process by intercepting the pre-Create message for IncidentResolution; however, as can be seen from the above, this is too late!!  Hence, the final solution is implemented by intercepting the pre-Close message for Incident (Case).

Activity Duration Reconciliation and Re-calculation of TimeSpent

The basic steps of the process (that I chose to implement) are:

  1. Check that the requested Status is ‘Problem Solved’; if not, bail.  This only applies to successfully resolved Cases
  2. Find the Contract Template attached to the Contract, which is in turn attached to the Incident (identified in the IncidentResolution passed to the CloseIncidentRequest)
  3. Check that the Contract uses a Time-based allotment; if not, bail
  4. Sum the ActualDurationMinutes of each un-billed, Closed Activity attached to the Case (yes, this is theoretically the same as the current value of TimeSpent in the IncidentResolution, but I like to be thorough)
  5. Round up TotalTime to 60 mins if it is currently lower
  6. Round up TotalTime to the nearest 15 min increment if it currently exceeds 60 mins
  7. If there is a deficit between the TotalTime calculated and that passed in, continue
  8. Create a “Dummy” Time Sheet activity for the deficit – this is not strictly necessary, as the system calculations have already been made, however I like it, as it makes it more visually obvious in the CRM front-end what is going on
  9. Modify TimeSpent in the IncidentResolution entity being passed through via the Close message.  NB: it is this step that influences the allotment quantity that will be applied to the relevant Contract Line
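The rounding in steps 5–7 is worth pinning down before reading the plugin.  Here is a quick sketch of the arithmetic (in Python purely for illustration; the plugin below does the same thing in C#):

```python
import math

MINIMUM_MINUTES = 60    # step 5: every resolved Case bills at least one hour
INCREMENT_MINUTES = 15  # step 6: beyond the first hour, bill in 15-minute blocks

def billable_minutes(expired_minutes):
    """Total time to charge against the Contract allotment."""
    total = max(expired_minutes, MINIMUM_MINUTES)
    blocks = math.ceil((total - MINIMUM_MINUTES) / INCREMENT_MINUTES)
    return MINIMUM_MINUTES + blocks * INCREMENT_MINUTES

def deficit_minutes(expired_minutes):
    """Steps 7-8: the gap the 'dummy' Time Sheet activity must cover."""
    return billable_minutes(expired_minutes) - expired_minutes
```

So 45 minutes of recorded activity bills as 60, and 70 minutes bills as 75, with dummy activities of 15 and 5 minutes respectively.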
public class MyPlugin : IPlugin
{
 public void Execute(IServiceProvider serviceProvider)
 {
  IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

  if (context.PrimaryEntityName != Incident.EntityLogicalName) { return; }
  if (context.MessageName != "Close") { return; }

  try
  {
   IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
   IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

   // retrieve input parameters
   Entity entityIncidentResolution = (Entity)context.InputParameters["IncidentResolution"];

   int status = ((OptionSetValue)context.InputParameters["Status"]).Value;

   // if the problem was not solved, bail
   if (status != 5) { return; }

   IncidentResolution caseResolution = entityIncidentResolution.ToEntity<IncidentResolution>();

   using (XrmServiceContext xrm = new XrmServiceContext(service))
   {
    // extract case and contract details
    ContractTemplate template =
     (
      from t in xrm.ContractTemplateSet
      join c in xrm.ContractSet on t.ContractTemplateId.Value equals c.ContractTemplateId.Id
      join i in xrm.IncidentSet on c.ContractId.Value equals i.ContractId.Id
      where i.IncidentId.Value == caseResolution.IncidentId.Id
      select t
     ).SingleOrDefault();

    if (template == null) { return; }

    // only for time based contracts
    if (template.AllotmentTypeCode == 2 /* time */)
    {
     // obtain list of all currently attached activities (Closed and NOT Billed)
     List<ActivityPointer> activities =
      (
       from act in xrm.ActivityPointerSet
       where act.RegardingObjectId.Id == caseResolution.IncidentId.Id
       && act.IsBilled == false
       && act.StateCode == 0
       select act
      ).ToList();

     // minimum 1 hr (60 mins)
     int expiredTime = activities.Sum(act => act.ActualDurationMinutes.GetValueOrDefault());

     int totalTime = Math.Max(expiredTime, 60);

     // 15 mins increment
     totalTime = 60 + (int)(Math.Ceiling((decimal)(totalTime - 60) / 15) * 15);

     int deficitTime = totalTime - expiredTime;

     if (deficitTime > 0)
     {
      // create a dummy activity that makes up the time difference
      mrcrm_timesheetactivity timesheetActivity = new mrcrm_timesheetactivity();
      timesheetActivity.ActualDurationMinutes = deficitTime;
      timesheetActivity.RegardingObjectId = new Microsoft.Xrm.Client.CrmEntityReference(Incident.EntityLogicalName, caseResolution.IncidentId.Id);
      timesheetActivity.Subject = "<DEFICIT>";
      timesheetActivity.IsBilled = false;

      timesheetActivity.Id = service.Create(timesheetActivity);

      SetStateRequest reqSetState = new SetStateRequest();
      reqSetState.EntityMoniker = timesheetActivity.ToEntityReference();
      reqSetState.State = new OptionSetValue(1); // completed
      reqSetState.Status = new OptionSetValue(2); // completed

      SetStateResponse respSetState = (SetStateResponse)service.Execute(reqSetState);

      // update case resolution time to make up the deficit
      entityIncidentResolution["timespent"] = totalTime;
     }
    }
   }
  }
  catch (FaultException<OrganizationServiceFault> ex)
  {
   throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
  }
 }
}

I am doing all of this in the pre-Close stage, as I need to influence the TimeSpent before the CRM has a chance to decrement allotment.

Conclusion

Not much else to say, except that one should always remember the “other” messages available to us in plugins.  Create, Update, Delete, Retrieve etc. are great, but sometimes… they’re just not enough.

 

March 28, 2012

CRM SSRS Reports, SQL CLR and External Data – A Case Study

Filed under: .NET, C#, CRM, Cutting Code, SQL Server, Transact SQL — pogo69 @ 14:21

I met some interesting requirements recently, wherein I needed to produce a CRM Report that joins standard CRM data (via the FilteredEntityName views) with data provided by a 3rd-party DLL.

Given the complexities of the resultant solution, I thought it would make for an interesting Case Study.  Hopefully, others can learn something from my experiences and perhaps broaden their minds to the possibilities of CRM Integration.

Architectural Overview

A picture speaks a thousand words, so I’ll shut up for a moment and:

In summary:

  1. My SSRS Report (uploaded to CRM) renders data in a single table provided by;
  2. An SQL query that joins data retrieved from;
  3. A CRM Database FilteredEntity View and;
  4. A Table Valued Function wrapping an;
  5. SQL-CLR Table Valued Function that retrieves the required data from;
  6. A Web Service that calculates Fatigue ratings using;
  7. A 3rd-party 32-bit DLL

Now, if that seems a little bit convoluted, keep reading for the reasoning behind this spider-web of data manipulation and retrieval.

I will start backwards, as the final piece of the puzzle is actually the beginning of the story.

How (and Why) It All Works

32-bit DLLs and CRM 2011

In contrast to CRM 4.0 that preceded it, CRM 2011 is available only as a 64-bit installation.  This means that if you ever have a need to access 32-bit DLLs (legacy code, 3rd-party functionality, pretty much every ODBC driver ever written) you are unable to do so unless you write a “thunking” layer, providing access to the 32-bit functionality in a separate memory space in order to maintain compatibility.

I chose to wrap my 32-bit 3rd-party DLL in a WCF Service.  I have exposed several endpoints, including a basicHttpBinding that allows interoperability with older SOAP based clients – this is necessary for interaction from an SQL-CLR object as the requisite assemblies are not available to create a more modern WCF client.

As touched upon briefly above, I have implemented the same sort of idea to interoperate with ODBC drivers for legacy database applications that provide no other interface.

P/Invoke – The Murky World of Managed/Unmanaged Interoperability

I will just briefly mention a tool called the ‘P/Invoke Interop Assistant’ that I found invaluable when building a C# wrapper for the above-mentioned 32-bit DLL.  The DLL was written in C and exposed only a legacy C-based interface that cannot easily be consumed by managed code; which is where P/Invoke comes in.

By adding your legacy 32-bit DLL to your managed project and marking it as Content, it will be copied to your output folder.  You must then create DllImport wrappers for any functions that you wish to access, e.g. (names changed to protect the innocent):

[DllImport("xxxx.DLL", EntryPoint = "_xxx_yyyy@nn")]
internal static extern int xxx_yyyy(
 int p1,
 ref int p2,
 int p3,
 ref int p4,
 ref int p5,
 ref int p6,
 ref double p7,
 ref double p8,
 int p9,
 ref int p10,
 ref double p11
);

I won’t go into the nightmare that was trying to figure out exactly what that signature should look like, nor the corresponding nightmare that was figuring out how to properly talk to it.  However, I can say that the P/Invoke Interop Assistant made life somewhat easier.

SQL-CLR Table-Valued Functions are awesome!

I got so excited about SQL-CLR when we first came in contact with each other.  In fact, I was all over a whole stack of new “toys” that became available with SQL Server 2005 (I will mention another, Common Table Expressions or CTEs a little later).

SQL-CLR allows you to write a managed assembly (.NET code) that can perform tasks not ordinarily available via Transact SQL and access that functionality via a Stored Procedure or Function wrapper.  My first attempt at such an assembly was to expose some simple .NET Regular Expression pattern matching/replacing functionality, which became invaluable (and surprisingly fast) for a data cleansing project on which I was working.

In this case, however, I wrote my first SQL-CLR Table Valued function, which does exactly what it says – returns data from a CLR managed assembly that is represented and consumed in Transact SQL as a table.

So, what does an SQL-CLR Table Valued function look like?  I’m glad you asked.  I will let MSDN explain the details, but basically, you have to implement two methods:

  1. One method does the actual data retrieval/collection/however you obtain the data – this method must return the interface System.Collections.IEnumerable (or something that implements same) and will be marked with metadata describing various SQL-related properties and the name of the method that will be used to “feed” the data back to SQL Server
  2. The second method does the feeding – taking a generic Object (which are individual elements of the datatype returned from the first method), ripping it apart, and populating out parameters for each column in the resultset.  This allows SQL Server to stream the data back to the query that calls your assembly
Example as follows:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Collections.Generic;
using System.Linq;
using System.Xml;
using Microsoft.SqlServer.Server;
using BMP.FAID.SqlClr.WebServiceWrapper;
public partial class CalculateFatigue
{
 [SqlFunction(FillRowMethodName = "FillRow", DataAccess = DataAccessKind.None, TableDefinition = "Start datetime, Fatigue float", IsDeterministic = true, IsPrecise = false)]
 public static System.Collections.IEnumerable InitMethod(SqlXml rosterPeriods, string faidServiceUrl)
 {
  return WebService.Process(rosterPeriods.Value, faidServiceUrl);
 }

 public static void FillRow(Object obj, out SqlDateTime start, out SqlDouble fatigue)
 {
  HourlyFatigue hourBlock = (HourlyFatigue)obj;
  start = hourBlock.Start;
  fatigue = hourBlock.Fatigue;
 }
}

NB: I implemented my Web Service Wrapper in another assembly so that I could unit test it outside of SQL Server.

As per the in-depth explanation of SQL-CLR Table Valued functions, I will leave the details of the deployment of these managed assemblies to your MSDN searching capabilities – in fact, I think I may have gone through the process in another blog posting.

The Transact SQL Query

Apologies for the verbosity, but there are several elements of the query that I would like to explain for those who are not familiar with the concepts within.

--declarations and setup (hard-coded for demonstrative purposes)
declare
 @pilotId uniqueidentifier = 'A4D1BAF3-BC41-E111-B9ED-00012E244939',
 @year int = 2012,
 @month int = 2;

declare
 @monthBegin datetime = (select cast(@year as nvarchar) + '-' + cast(@month as nvarchar) + '-01');
declare
 @monthEnd datetime = DATEADD(SECOND,-1,DATEADD(Month,1,@monthBegin));

declare
 @daysInMonth int = (select DAY(@monthEnd));

declare
 @faidServiceUrl nvarchar(255) = (select top 1 mrcrm_faidserviceurl from Filteredmrcrm_rosterandavailabilitysettings);

create table #HourBlock
(
 [Start] [datetime] NULL,
 [Fatigue] [float] NULL
);

-- find roster items
declare
 @rosterPeriods xml =
 (
  select
   rsi.mrcrm_Start, rsi.mrcrm_End, rt.mrcrm_RiskLevel
  from
   mrcrm_rosterscheduleitem rsi
  join
   mrcrm_rostertype rt on rt.mrcrm_rostertypeId = rsi.mrcrm_RosterTypeId
  where
   rsi.mrcrm_EmployeeId = @pilotId
  and
   rt.mrcrm_RiskLevel > 0
  and
   rsi.mrcrm_Start <= @monthEnd
  and
   rsi.mrcrm_End >= DATEADD(DAY, -14, @monthBegin)
  and
   rsi.statecode = 0
  order by
   rsi.mrcrm_Start
  for xml path('period'), root('roster')
 );

-- use faid service to calculate hourly fatigue
insert
 #HourBlock
select
 Start, Fatigue
from
 [BMP_Functions].[dbo].[InitMethod](@rosterPeriods, @faidServiceUrl);

;with
-- obtain list of all hours in the day
Hour_CTE (Hour)
as
(
 select 0

 union all

 select Hour + 1 from Hour_CTE where Hour < 23
),
-- obtain list of all days in the month
Day_CTE (Day)
as
(
 select @monthBegin

 union all

 select DATEADD(Day, 1, Day) from Day_CTE where Day < @monthEnd
)
-- join it all up together
select
 Day_CTE.Day,
 Hour_CTE.Hour,
 ISNULL(h.Fatigue, 0.0) as Fatigue
from
 Hour_CTE
cross join
 Day_CTE
left join
 #HourBlock h on DATEPART(Hour, h.Start) = Hour_CTE.Hour and Day(h.Start) = Day(Day_CTE.Day);

Again, apologies for the verbosity, but this is as simple as I could make it while still demonstrating the technique (yes, the real query is more complicated than this).  Now, to break it down.

Configuration Data

I store system-wide configuration data in a “Settings” entity instance in the CRM.  This allows me to set variables such as Web Service URLs centrally and access them from anywhere that has access to CRM Data.  CRM components access the Fatigue Web Service that I show here in exactly the same way via plugins etc.

declare
 @faidServiceUrl nvarchar(255) = (select top 1 mrcrm_faidserviceurl from Filteredmrcrm_rosterandavailabilitysettings);

Table Inputs into SQL-CLR Functions (are not allowed) – ‘FOR XML PATH’

In order to calculate Fatigue ratings, the 3rd-party component I was using requires a “table” of input data.  Unfortunately, although you can output a table of data from an SQL-CLR function, you cannot input such data.  So I decided to convert my input data into XML and feed it into my managed assembly function via an SQL XML datatype.

This is another of the “new for SQL Server 2005” Transact SQL enhancements that I was very excited about upon first use.  While the “FOR XML …” syntax was available prior to SQL Server 2005, the introduction of “FOR XML PATH …” made the structured formatting of XML data so much cleaner.

-- find roster items
declare
 @rosterPeriods xml =
 (
  select
   rsi.mrcrm_Start, rsi.mrcrm_End, rt.mrcrm_RiskLevel
  from
   mrcrm_rosterscheduleitem rsi
  join
   mrcrm_rostertype rt on rt.mrcrm_rostertypeId = rsi.mrcrm_RosterTypeId
  where
   rsi.mrcrm_EmployeeId = @pilotId
  and
   rt.mrcrm_RiskLevel > 0
  and
   rsi.mrcrm_Start <= @monthEnd
  and
   rsi.mrcrm_End >= DATEADD(DAY, -14, @monthBegin)
  and
   rsi.statecode = 0
  order by
   rsi.mrcrm_Start
  for xml path('period'), root('roster')
 );

This query outputs data in the following format:

<roster>
 <period>
  <mrcrm_Start>2012-02-06T12:00:00</mrcrm_Start>
  <mrcrm_End>2012-02-06T13:30:00</mrcrm_End>
  <mrcrm_RiskLevel>3</mrcrm_RiskLevel>
 </period>
</roster>

Such data can then be processed in C# using the usual System.Xml provided mechanisms.
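For instance, a minimal parse of that payload looks like the following (sketched in Python for brevity; the real service wrapper does the equivalent with System.Xml in C#):

```python
import xml.etree.ElementTree as ET

roster_xml = """
<roster>
 <period>
  <mrcrm_Start>2012-02-06T12:00:00</mrcrm_Start>
  <mrcrm_End>2012-02-06T13:30:00</mrcrm_End>
  <mrcrm_RiskLevel>3</mrcrm_RiskLevel>
 </period>
</roster>
"""

# one dict per <period> element under the <roster> root
periods = [
    {
        "start": period.findtext("mrcrm_Start"),
        "end": period.findtext("mrcrm_End"),
        "risk_level": int(period.findtext("mrcrm_RiskLevel")),
    }
    for period in ET.fromstring(roster_xml.strip()).findall("period")
]
```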

Using the SQL-CLR Table Valued Function

Using the SQL-CLR Table Valued Function is as simple as referencing its “wrapper” user-defined function, passing in any required parameters (in this case, the XML data and web service URL) and selecting from the resultant table.

-- use faid service to calculate hourly fatigue
insert
 #HourBlock
select
 Start, Fatigue
from
 [BMP_Functions].[dbo].[InitMethod](@rosterPeriods, @faidServiceUrl);

Common Table Expressions or CTEs – Outputting the Data

Also newly introduced in SQL Server 2005, is the concept of Common Table Expressions.  CTEs provide the equivalent of an inline function, which can be very handy when building reports for Dynamics CRM given that user-defined elements such as custom stored procedures and user-defined functions are disallowed within the CRM database.

NB: in the case of my SQL-CLR function, I stored the managed assemblies and function in a separate database.

In addition to providing a mechanism for an “inline” function, CTEs allow the definition of a recursive function as per my Hour_CTE and Day_CTE below; in this case, I am building a list of:

  1. Each Hour of the Day (0-23)
  2. Each Day in the Month (1-however many days are in the current month)
The results of the CTEs and the SQL-CLR function are then combined in a final select statement – CROSS JOINs are used to ensure that a value is produced for each combination of Day and Hour.
;with
-- obtain list of all hours in the day
Hour_CTE (Hour)
as
(
 select 0

 union all

 select Hour + 1 from Hour_CTE where Hour < 23
),
-- obtain list of all days in the month
Day_CTE (Day)
as
(
 select @monthBegin

 union all

 select DATEADD(Day, 1, Day) from Day_CTE where Day < @monthEnd
)
-- join it all up together
select
 Day_CTE.Day,
 Hour_CTE.Hour,
 ISNULL(h.Fatigue, 0.0) as Fatigue
from
 Hour_CTE
cross join
 Day_CTE
left join
 #HourBlock h on DATEPART(Hour, h.Start) = Hour_CTE.Hour and Day(h.Start) = Day(Day_CTE.Day);
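To make the shape of the final resultset concrete, here is the same Day × Hour grid built in miniature (a Python sketch, with a made-up fatigue value standing in for the #HourBlock rows):

```python
from datetime import date, timedelta

month_begin = date(2012, 2, 1)
days_in_month = 29  # February 2012, matching the hard-coded demo parameters

# stand-in for the #HourBlock temp table: (day, hour) -> fatigue rating
hour_block = {(date(2012, 2, 6), 12): 42.5}

# cross join of every day with every hour, defaulting Fatigue to 0.0
grid = [
    (month_begin + timedelta(days=d), hour,
     hour_block.get((month_begin + timedelta(days=d), hour), 0.0))
    for d in range(days_in_month)   # Day_CTE
    for hour in range(24)           # Hour_CTE
]
```

Exactly one (day, hour) cell carries a non-zero rating; the other 695 default to 0.0, just as the ISNULL/LEFT JOIN combination does in the query above.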

SSRS and SQL-CLR Functions – A Difficulty

One final word on my experiences with this process.

There is a well-known issue with SSRS Reports, wherein the rendering of large quantities of data can cause excess memory consumption and the resultant failure of a Report to complete.  I encountered this issue, however the supposed cause of such issues did not apply to me – 25 columns and 31 rows of data (in the Production Report) does not constitute a large quantity of data.  Nor does the single, lone table rendering that data constitute a complex visual display.

While I cannot prove causality, I was able to overcome the memory consumption issue by moving my table and the query defined above into a Sub-Report.

My hypothesis is that something in my query could not cope with another DataSet being defined in the same report (I was using other DataSets to provide data for the Year, Month and Pilot parameters).  This hypothesis is supported by the fact that removing the additional DataSets and hard-coding the parameters allowed the Report to render successfully.

I asked for assistance in the SSRS MSDN Support Forums, but was unable to elicit any useful information.  When time permits, I will follow up with this new discovery (I have only just stumbled upon the Sub-Report workaround) and see if it sparks any further ideas.

 

March 22, 2012

CRM 2011 LINQ Provider – FormattedValues Collection

Filed under: C#, CRM — pogo69 @ 09:06

I just re-discovered an idiosyncrasy of the CRM 2011 LINQ Provider that I thought I should share here, as it may help others who try and fail to retrieve formatted values for attributes such as OptionSets.

The FormattedValues collection will contain entries:

  • Only for those attributes that require it (e.g. text attributes format as the text they represent – no formatting required)
  • Only for those attributes for which the values have been requested in your query

So, if you write a query such as the following:

var contacts =
	(
		from c in xrm.ContactSet
		join a in xrm.AccountSet on c.ParentCustomerId.Id equals a.Id
		where a.Name == "Acme Pty Ltd"
		select new
		{
			Name = c.FullName,
			DOB = c.BirthDate,
			Gender = (c.FormattedValues.Contains("gendercode") ? c.FormattedValues["gendercode"] : "Ambiguous")
		}
	);

the Gender of each Contact will ALWAYS be “Ambiguous”, as we have not queried for the GenderCode attribute.

Alter the query to the following:

var contacts =
	(
		from c in xrm.ContactSet
		join a in xrm.AccountSet on c.ParentCustomerId.Id equals a.Id
		where a.Name == "Acme Pty Ltd"
		select new
		{
			Name = c.FullName,
			DOB = c.BirthDate,
			GenderCode = c.GenderCode,
			Gender = (c.FormattedValues.Contains("gendercode") ? c.FormattedValues["gendercode"] : "Ambiguous")
		}
	);

only those Contacts whose Gender truly is ambiguous, will return “Ambiguous”.

Thanks to ms_crm for the inspiration!

May 10, 2011

CRM Web Services from SQL Server Transact SQL (SQLCLR)

Filed under: C#, CRM, SQL Server, Transact SQL — pogo69 @ 17:53

Let me preface this post by saying “bugga!!”.  Sometime last year, I assisted one of my fellow CRM Community Forum members through the process of building a SQL CLR assembly that communicates with CRM via web services.  This mechanism allows us to synchronise SQL Server updates with CRM in a supported manner by calling our assembly via Create/Update triggers in our 3rd party SQL Server database.  I created the shell for this post quite some time ago, but never got around to starting it, let alone finishing.

Now that CRM 2011 has been released, I was hoping to update the process to describe the steps necessary to create a SQLCLR assembly that communicates with the new CRM 2011 OrganizationService.  Unfortunately, this is not possible, as SQL Server 2005 and SQL Server 2008 (up to and including R2) do not support the .NET Framework v4.0.

So… back to the CRM 4.0 SOAP endpoint it is!!

Creating the SQLCLR Assembly

The example I’m going to describe will allow the creation of a new Contact record from within an SQLCLR Stored Procedure.

New Visual Studio Project

In Visual Studio:

  1. Click the ‘File’ menu
  2. Click ‘New’
  3. Click ‘Project…’
  4. In the Project window, select the ‘Visual C# SQL CLR Database Project’ in the Database / SQL Server category:

In the ‘New Database Reference’ dialog, choose or enter the name of the target SQL Server instance; then select the target database:

Choose ‘No’ for SQL/CLR debugging (or ‘Yes’ if you wish, but I’m not going to go into that here):

New Stored Procedure class

Next, we create the shell of our new SQLCLR Stored Procedure.

  1. Right-click on the Project
  2. Click ‘Add’
  3. Click ‘Stored Procedure…’

In the ‘Add New Item’ dialog, enter the name of your SQLCLR Stored Procedure:

Your newly created Stored Procedure class will look like the following:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact()
	{
		// Put your code here
	}
};

CRM Web Reference

Now, we need to add a Web Reference to the CRM 4.0 SOAP endpoint.  It is via this interface that we will communicate with the CRM.

  1. Right-click on the ‘References’ folder
  2. Click ‘Add Service Reference…’

Click on ‘Advanced…’:

Click on ‘Add Web Reference…’:

In the ‘Add Web Reference’ dialog, enter the fully qualified URL path to your CRM Service; then click on Green arrow icon next to the URL:

The CRM Web Service URL takes the following format:

http://<server>:<port>/MSCRMServices/2007/CrmService.asmx

Visual Studio will send a request to the CRM Service, retrieve the WSDL and build a Web Service proxy, which you must now name in the ‘Web reference name’ field:

Click on ‘Add Reference’ and you will see your newly added Web Reference in the Project’s ‘Web References’ folder:

Using the CRM Service to Create a Contact

Stored Procedure Signature

Next, we’ll add some parameters that can be used to create our new Contact.  Our contacts will each have:

  • First Name
  • Last Name
  • Email Address
public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
{
	// Put your code here
}

Each of the SQL Server datatypes is represented by a SQLCLR “wrapper” object that encapsulates the underlying .NET object.  This sample uses only SqlString, which encapsulates the .NET “string” datatype and represents a SQL Server NVARCHAR.

CrmSdk Namespace

I’ve added a “using” directive, so that we can use the CRM Web Service objects and methods directly:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

using SqlServerProject1.CrmSdk;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
	{
	}
};

CRM Web Service Connection and Interactivity

At this point, we can write CRM Web Service code just like we always have in our plugins, 3rd party applications and whatever we decide to implement.

  1. Obtain an Authentication Token
  2. Create a Web Service connection
  3. “Do stuff”
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

using SqlServerProject1.CrmSdk;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
	{
		CrmAuthenticationToken token = new CrmAuthenticationToken();
		token.AuthenticationType = 0;
		token.OrganizationName = "<uniqueorgname>";

		using (CrmService crmService = new CrmService())
		{
			crmService.Url = "http://<server>:<port>/MSCRMServices/2007/CrmService.asmx";
			crmService.Credentials = new System.Net.NetworkCredential("<username>", "<password>", "<domain>");
			crmService.CrmAuthenticationTokenValue = token;
			crmService.UnsafeAuthenticatedConnectionSharing = true;
			crmService.PreAuthenticate = true;

			contact contact = new contact();
			contact.firstname = firstName.Value;
			contact.lastname = lastName.Value;
			contact.emailaddress1 = email.Value;

			crmService.Create(contact);
		}
	}
};

Side Issues

Exception Handling

The code above has no exception handling.  Your production code should.  But I’m leaving that as an exercise for the reader, as it is largely an application-dependent architectural issue.

Web Service Connection Parameters

The Web Service connection parameters (credentials, URL, Organisation Name, etc.) are hard-coded.  Again, configurable alternatives are left as an exercise for the reader, although I may come back to this issue at a later time.  For now, some options to consider include:

  • Passing in Connection data as additional parameters
  • Storing Connection data in the Database
  • Storing Connection data in the Registry or File System

XML Serialisation Assembly

One of the restrictions of SQLCLR is that you cannot make use of any .NET Assembly that has not explicitly been loaded into the target SQL Server database, other than a very restricted set of .NET Framework assemblies that have been deemed “safe”.

A side-effect of this is that an assembly such as ours that utilises Web Services will not work unless we manually generate an XML Serialisation Assembly.

The .NET runtime uses a Serialisation assembly to package Web Service requests into XML packets for transfer via HTTP to the Web Service host.  Ordinarily, serialisation assemblies are generated “on the fly” by the Framework, however we cannot make use of this mechanism as there is no way to dynamically load the resultant assembly into SQL Server.

To generate a Serialisation assembly from within Visual Studio:

  1. Right-click on the Project
  2. Click ‘Properties’
  3. Navigate to the ‘Build’ tab
  4. Change ‘Generate serialization assembly’ to ‘On’

Using sgen.exe from the Command Line

If you experience any issues with the generation of serialisation assemblies from within Visual Studio, you can use sgen.exe directly from the command line.  I do this for all of my serialisation code because:

  • There is no way to specify sgen.exe parameters from within the Visual Studio GUI
  • It takes a ridiculously long time for sgen.exe to complete using whatever parameters Visual Studio chooses to throw at it

I create a post-build task that runs the command from within the build process; the command line I use is as follows:

"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\sgen.exe" /a:"<drive>:\<path to solution>\<project folder>\bin\Debug\SqlServerProject1.dll" /p /force

I then wrap the sgen.exe command up in a batch file.  You can configure the Post Build Event in the Project properties as follows:

Integration with SQL Server

Now, build your Project and locate the resultant assemblies.  The names will reflect the name you gave your assembly, but mine are:

  • SqlServerProject1.dll
  • SqlServerProject1.XmlSerializers.dll

Enabling CLR in SQL Server

By default, CLR interoperability is not enabled in SQL Server.  To enable it, run the following in SQL Server Management Studio:

USE master;
GO
EXEC sp_configure 'CLR Enabled', 1;
go
RECONFIGURE WITH OVERRIDE
GO

It’s a Matter of Trust

For security reasons, SQL CLR assemblies cannot by default be loaded into a SQL Server database.  In order to enable the uploading of our assemblies, there are two alternatives.

Setting TRUSTWORTHY ON for the Target Database

ALTER DATABASE <DBNAME> SET TRUSTWORTHY ON

This is useful for development and/or testing because it’s quick and easy.  However, it does open up the database to the possibility of malicious assemblies being loaded as all assemblies will now be trusted.  For production environments you should instead give “trust” only to your assemblies.

Trusting Individual Assemblies

CREATE ASYMMETRIC KEY CrmKey
FROM EXECUTABLE FILE = '<full path to your assembly>\<assemblyname>.dll'
GO

Now create a login associated with the CrmKey and grant it External Access rights.

CREATE LOGIN [CrmLogin] FROM ASYMMETRIC KEY [CrmKey];
GO
GRANT EXTERNAL ACCESS ASSEMBLY TO [CrmLogin];
GO

Finally add a user for your login in your database:

USE <your database>
GO
CREATE USER [CrmLogin] FOR LOGIN [CrmLogin];
GO

Deployment

Project Database Properties

When creating the SQLCLR Project in Visual Studio, we created a connection to the target SQL Server database.  On the ‘Database’ tab in the Project Properties, we can influence the deployment of our SQLCLR assembly.

Set the Permission level to ‘External’ – we need this, as our assembly will require access outside of SQL Server to communicate with CRM Web Services.

Set the Assembly Owner to either:

  • dbo – if you set TRUSTWORTHY ON in the database
  • CrmLogin – if you are assigning trust only to your assemblies and created the CrmLogin as per the instructions above

Deploying the Serialisation Assembly

The deployment process will only automatically deploy your SQLCLR assembly.  To deploy the serialisation assembly, we create a Post-Deployment script in the PostDeployScript.sql file:

CREATE ASSEMBLY [SqlServerProject1.XmlSerializers]
AUTHORIZATION [dbo] FROM '<solution path>\<project folder>\bin\Debug\SqlServerProject1.XmlSerializers.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO

NB: Because I am deploying my assemblies to a SQL Server on another machine, I had to configure the post deployment script above to use a UNC path to the serialisation assembly.  This is necessary, as the SQL Server must be able to gain file system access to the file.

Build and Deploy

Now… build your Project.  Assuming it builds successfully, you can Deploy it (the Deploy process actually does a build before it Deploys, but it can’t hurt to test the build first).

To deploy:

  1. Right-click on the Project
  2. Click ‘Deploy’

If it all goes as expected, your assemblies will be uploaded into the target database and you will be able to find them in SQL Server Management Studio by:

  1. Expanding the target database
  2. Expanding ‘Programmability’
  3. Expanding ‘Assemblies’

As you can see in the image above, the Deployment process also creates a “wrapper” Stored Procedure for the SQLCLR Stored Procedure implemented in my assembly.  The definition of that Stored Procedure is as follows:

CREATE PROCEDURE [dbo].[CreateContact]
	@firstName [nvarchar](4000),
	@lastName [nvarchar](4000),
	@email [nvarchar](4000)
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SqlServerProject1].[StoredProcedures].[CreateContact]
GO

EXEC sys.sp_addextendedproperty @name=N'SqlAssemblyFile', @value=N'CreateContact.cs' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'PROCEDURE',@level1name=N'CreateContact'
GO

EXEC sys.sp_addextendedproperty @name=N'SqlAssemblyFileLine', @value=N'13' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'PROCEDURE',@level1name=N'CreateContact'
GO

To call our new Stored Procedure, simply use the following:

DECLARE	@return_value int

EXEC	@return_value = [dbo].[CreateContact]
		@firstName = N'Testy',
		@lastName = N'Tester',
		@email = N'testiest@test.com'

SELECT	'Return Value' = @return_value
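The wrapper procedure can equally be called from application code via plain ADO.NET.  A minimal sketch follows; the connection string and database name are placeholders, not values from the deployment above:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class CreateContactCaller
{
	static void Main()
	{
		// Placeholder connection string - point it at the database into
		// which the SQLCLR assembly was deployed.
		const string connectionString =
			@"Server=.;Database=CrmClrDemo;Integrated Security=true";

		using (SqlConnection connection = new SqlConnection(connectionString))
		using (SqlCommand command = new SqlCommand("dbo.CreateContact", connection))
		{
			// Call the wrapper Stored Procedure with named parameters,
			// exactly as the T-SQL example does.
			command.CommandType = CommandType.StoredProcedure;
			command.Parameters.AddWithValue("@firstName", "Testy");
			command.Parameters.AddWithValue("@lastName", "Tester");
			command.Parameters.AddWithValue("@email", "testiest@test.com");

			connection.Open();
			command.ExecuteNonQuery();
		}
	}
}
```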

With the following result:

Conclusion

I hope this post makes sense.  It’s rather long, rather involved, and I’ve had a really long day.  So, although I’ve read through it to make sure it isn’t obviously unintelligible, I can’t guarantee that it is without issues.  If anyone spots anything that requires clarification and/or modification, let me know.

Enjoy.

May 7, 2011

The Missing LINQ – CRM 4.0 SDK – Advanced Developer Extensions

Filed under: .NET, C#, CRM, Cutting Code — Tags: , , , — pogo69 @ 09:01

In my inaugural post, I made brief mention of the newly introduced capabilities of the LINQ provider included as part of the most recent revision of the CRM 4.0 SDK – the Advanced Developer Extensions.  In this post, I’ll go into more detail about the features available and how it can increase developer productivity by reducing lines of code and increasing the testability and maintainability of the code.

The article wouldn’t be complete, however, without mentioning some of the current downsides; you can find the caveats later in the post.

NB: I started writing this post last year before CRM 2011 had been released but got distracted (not in the least bit unusual for me).  Most of what I’ve written is still applicable to the CRM 2011 LINQ provider as the underlying query mechanism (Query Expressions) is the same.

LINQ

One of the most exciting new features of the .NET Framework is LINQ (Language Integrated Query).  Using an applicable provider, LINQ provides SQL-like access to an underlying object model, via some compiler magic and a collection of extension methods (static methods, declared in static classes, whose first parameter carries the “this” modifier).

LINQ allows strongly typed queries and results against an arbitrary underlying data model.  The Advanced Developer Extensions include an xRM LINQ provider that exposes CRM entities and attributes in exactly this way.

Advanced Developer Extensions

CrmSvcUtil.exe

In order to be able to write LINQ queries against the CRM, you must first generate the following:

  • DataContext class – allows a “connection” to the CRM and provides access to both collections of CRM entities and CRUD methods
  • Data Transfer Objects – a mapping between a CRM entity instance and a .NET object with which your source code can interact

If you wish to keep the model consistent and in sync, the CrmSvcUtil.exe utility must be re-run each time you make metadata changes to the corresponding CRM installation.

xRM LINQ Queries

So, let’s look at some simple queries:

DataContext xrm = new DataContext("Xrm.ConnectionString");

Guid contactId = new Guid("{12341234-1234-1234-1234-123412341234}");

List<incident> myCases =
(
	from cs in xrm.incidents
	join c in xrm.contacts on cs.customerid.Value equals c.contactid
	where c.contactid == contactId
	select cs
).ToList();

This simple “join” query returns all Case entities assigned to the Contact referenced by the Guid “contactId”.  To write the equivalent query in QueryExpression syntax would consume many more lines of code, and largely obfuscate the intention of the query.
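To illustrate the difference, here is a sketch of roughly the same query in QueryExpression syntax (CRM 2011 SDK types; `service` is assumed to be an `IOrganizationService` instance — the CRM 4.0 equivalent is similar again, but uses `CrmService` and `AllColumns`):

```csharp
// Retrieve all incidents...
QueryExpression query = new QueryExpression("incident")
{
	ColumnSet = new ColumnSet(true)
};

// ...joined to contact on incident.customerid -> contact.contactid...
LinkEntity link = new LinkEntity(
	"incident", "contact", "customerid", "contactid", JoinOperator.Inner);

// ...filtered on the joined contact's id.
link.LinkCriteria.AddCondition("contactid", ConditionOperator.Equal, contactId);
query.LinkEntities.Add(link);

EntityCollection myCases = service.RetrieveMultiple(query);
```

Even this abbreviated form is noticeably longer, and the intent of the query is spread across several statements rather than one.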

With LINQ, the simplified syntax allows us to see:

  • Selection criteria
  • Source entities
  • Link criteria
  • Output attributes
  • Ordering

and more, all in one simple statement.

Returning Custom Classes

The previous query returns entire CRM entities (via the corresponding Data Transfer Object); it is also possible to return only a selection of the entity’s attributes, thus:

var myCases =
	(
		from cs in xrm.incidents
		join c in xrm.contacts on cs.customerid.Value equals c.contactid
		where c.contactid == contactId
		select new
			{
				cs.incidentid,
				cs.title
			}
	);

The keyword “var” lets the compiler infer the declared type; it is required here because the query returns instances of an anonymous type, which has no name you could write out.  Anonymous types may only contain read-only properties; if you wish to subsequently manipulate the content of the returned results, you must declare and instantiate instances of explicitly declared types.
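A trivial, self-contained illustration of the read-only nature of anonymous type properties (the names used are arbitrary):

```csharp
using System;

class AnonymousTypeDemo
{
	static void Main()
	{
		// The compiler generates a type with read-only properties.
		var item = new { Name = "Pogo", Count = 3 };

		Console.WriteLine(item.Name);    // reading is fine
		// item.Count = 4;               // compile error: property is read-only
	}
}
```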

The following provides an (almost real world, albeit abbreviated) example:

protected class AppointmentDataItem
{
	// appointment attributes
	public Guid AppointmentId { get; set; }
	public DateTime ScheduledStart { get; set; }
	public DateTime ScheduledEnd { get; set; }
	public DateTime Booked { get; set; }

	// case (job) attributes
	public string JobTicketNumber { get; set; }
	public Guid JobId { get; set; }

	// contact (customer) attributes
	public string CustomerName { get; set; }
	public string CustomerTelephone { get; set; }
	public string CustomerAddress { get; set; }
	public string CustomerCity { get; set; }
	public string CustomerState { get; set; }
	public string CustomerPostcode { get; set; }
	public string CalendarDisplayTitle { get; set; }
}

...

List<AppointmentDataItem> appointments =
	(
		from app in xrm.appointments
		join cs in xrm.incidents on app.regardingobjectid.Value equals cs.incidentid
		where cs.msa_partnercontactid.Value == contactId
		select new AppointmentDataItem
			{
				AppointmentId = app.activityid,
				ScheduledStart = app.scheduledstart.GetValueOrDefault().ToLocalTime(),
				ScheduledEnd = app.scheduledend.GetValueOrDefault().ToLocalTime(),
				Booked = app.createdon.GetValueOrDefault().ToLocalTime()
			}
	).ToList();

...

foreach (AppointmentDataItem appointment in appointments)
{
	// get the Case object to which the Appointment is attached
	var job =
	(
		from cs in xrm.incidents
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cs
	).Single();

	appointment.JobId = job.incidentid;
	appointment.JobTicketNumber = job.ticketnumber;

	// get the Customer Contact to whom the Case is attached
	var customer =
	(
		from cust in xrm.contacts
		join cs in xrm.incidents on cust.contactid equals cs.customerid.Value
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cust
	).Single();

	appointment.CustomerName = customer.fullname;
	appointment.CustomerAddress = customer.address1_line1;
	appointment.CustomerCity = customer.address1_city;
	appointment.CustomerState = customer.address1_stateorprovince;
	appointment.CustomerPostcode = customer.address1_postalcode;
	appointment.CustomerTelephone = customer.telephone2;
	appointment.CalendarDisplayTitle = customer.fullname + " " + FormatTelephone(customer.telephone2) + "<br />" + FormatAddress(customer);
}

So, we:

  • Declare our new custom type
  • Query for, and return a List of instances of our new type
  • Update the properties of our object instances at will

Pretty cool; and we’re still “talking” to CRM in terms of entities and attributes using a relatively natural (to those of us with SQL skills anyway) language.

Restrictions (some of those Caveats I mentioned)

There are limitations to the queries that can be written using the xRM LINQ provider.  These restrictions are due to the underlying query mechanism to which the LINQ provider translates its queries – Query Expression.

One Entity’s Attributes per Query – The impact of Query Expression

Using Query Expressions, we can only return attribute values for a single entity at a time.  A corollary of this is that an xRM LINQ Query can only return attribute values for a single entity at a time.  For instance, the following is not valid and will error:

var contact =
	(
		from c in xrm.contacts
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

While the code will compile and run (it *is* valid C# and valid LINQ syntax), it will generate the following error at run-time as soon as you attempt to iterate the resultant collection:

The ‘select’ and ‘orderBy’ calls may only reference a single common entity type.

If you wish to retrieve more than one entity’s attributes in a single query, you are forced to “project” the CRM Entity’s Data Transfer Object(s) onto a local object collection via the ToList() method or similar.  Unfortunately, this brings up another caveat.

Projection of Data Transfer Objects onto Local Object collections

The following revised query (NB: the ToList() projection in the ‘from’ clause):

var contact =
	(
		from c in xrm.contacts.ToList()
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

will no longer error as the ToList() projection turns our Contact entity Data Transfer Objects into a local object collection.

What this means however, is that the LINQ provider will return ALL CRM Contact entities, BEFORE it applies either your predicates (where clause) or attribute selection.

In other words, each and every attribute of each and every entity instance is returned, and subsequently whittled down in memory by the remainder of your LINQ query.  This can be very expensive in terms of speed and bandwidth depending on the number and size of the entities in your query.

If you find this to be an issue, you will need to revise your code to use a mechanism such as that described above, wherein you create custom classes to retrieve the relevant data one entity type at a time.

Restricted Predicates – The impact of Condition Expressions

A Condition Expression takes the following form:

ConditionExpression condition = new ConditionExpression("<logicalattributename>", ConditionOperator, object);

This means that the predicates in your CRM LINQ queries must follow a similar format:

where <entityinstance>.<attribute> == "<value>"

  • The left-hand side of the predicate MUST be an entity attribute
  • The right-hand side of the predicate MUST be a variable or literal

This prevents us from comparing attribute values between entities or even from comparing attribute values from the same entity instance.  For instance, the following is NOT valid:

var contacts =
	(
		from c in xrm.contacts
		where c.modifiedon.Value > c.createdon.Value
		select c
	).ToList();

resulting in the following run-time error (because again, it *is* valid C# and valid LINQ):

Lambda Parameter not in scope

Of course, you can “solve” this problem by projecting onto a local object collection as per above, but you risk the same performance issues previously described.
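One way to limit the damage is to project only the attributes the predicate needs before dropping into LINQ-to-Objects.  This is a sketch only, against the same generated `xrm` context as above:

```csharp
// Select just the attributes required (a single-entity projection, so it
// is legal and is translated into a server-side column selection), then
// apply the attribute-to-attribute comparison in memory.  Every contact
// still comes back, but carrying only three attributes rather than all
// of them.
var recentlyModified =
	(
		from c in xrm.contacts
		select new { c.contactid, c.createdon, c.modifiedon }
	)
	.ToList()	// from here on we are in LINQ-to-Objects
	.Where(c => c.modifiedon.GetValueOrDefault() > c.createdon.GetValueOrDefault())
	.ToList();
```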

Many to Many Queries

I’m sure you can find examples somewhere in the SDK, but I figured I’d make mention of many-to-many queries, as they require some “unusual” syntax, once again due to their relationship to the underlying Query Expressions.

In a recent project my data model included a many-to-many relationship between the Category entity and the Module entity.  To query for each Module attached to a particular Category, my query looks like the following:

List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categoryame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

The first thing you will probably notice is that I used ToList() to project the Modules onto a local object collection.  This is so that I could populate my custom Module class with the Category to which it was attached.  In this case, I know that the number of Categories/Modules/Subjects in the system will never be prohibitively large and there are only 2-3 custom attributes on each, so I can live with the overhead.

The “unusual” thing about the query is that we have 2 “from” clauses – this won’t make any sense to SQL aficionados, but it is necessary due to Query Expression and the way that many-to-many relationships are represented via the CRM SDK – the Moniker.

More Cool Stuff

Custom Comparators (this is more C# than CRM, but it’ll get you thinking)

In the same project described above, I need to build lists of Modules and Subjects that a user requires to complete a selected qualification (Category).

First, the user selects their desired Category, from which a list of required Modules and Subjects is constructed:

// obtain desired modules/subjects
List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categoryame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

List<Subject> desiredSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in desiredModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
			{
				ModuleId = mod.ModuleId,

				SubjectId = sub.ame_subjectid,
				SubjectNo = sub.ame_name,
				SubjectTitle = sub.ame_title
			}
	).ToList();

Next, the user selects each Category that they currently hold, from which a list of currently held Modules and Subjects is constructed:

// obtain currently held modules/subjects
List<Module> heldModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join held in heldCategories on mod_cat.ame_categoryid equals held
		where mod.statuscode.Value == 1
		select new Module
		{
			CategoryId = held,

			ModuleId = mod.ame_moduleid,
			ModuleNo = mod.ame_name,
			ModuleTitle = mod.ame_title
		}
	).ToList();

List<Subject> heldSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in heldModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
		{
			ModuleId = mod.ModuleId,

			SubjectId = sub.ame_subjectid,
			SubjectNo = sub.ame_name,
			SubjectTitle = sub.ame_title
		}
	).ToList();

Next, I needed to calculate the difference.  That is, remove all Modules and Subjects currently held from the list of required Modules and Subjects.

I started with the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects).ToList();

Unfortunately, it doesn’t work correctly.  As the relationship between each entity is many-to-many and I am storing the parent identifier in my local Module and Subject classes, the “held” Modules/Subjects never matched the “required” Modules/Subjects – for the Modules, the CategoryId didn’t match and for the Subjects, the ModuleId didn’t match:

internal class Module
{
	internal Guid CategoryId { get; set; }

	internal Guid ModuleId { get; set; }
	internal string ModuleNo { get; set; }
	internal string ModuleTitle { get; set; }
}

internal class Subject
{
	internal Guid ModuleId { get; set; }

	internal Guid SubjectId { get; set; }
	internal string SubjectNo { get; set; }
	internal string SubjectTitle { get; set; }
}

Enter, the Custom Comparator.  A custom comparator can be used to provide comparison rules for classes that shouldn’t be compared using the default (reference) equality.  Your custom comparator must implement IEqualityComparer<T>.  For simplicity I’ll show only the Module comparator:

class ModuleComparer : IEqualityComparer<Module>
{
	public bool Equals(Module x, Module y)
	{
		//Check whether the compared objects reference the same data.
		if (Object.ReferenceEquals(x, y)) return true;

		//Check whether any of the compared objects is null.
		if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;

		//Check whether the Modules' properties are equal.
		return x.ModuleId == y.ModuleId;
	}

	// If Equals() returns true for a pair of objects
	// then GetHashCode() must return the same value for these objects.
	public int GetHashCode(Module module)
	{
		//Check whether the object is null
		if (Object.ReferenceEquals(module, null)) return 0;

		return module.ModuleId.GetHashCode();
	}
}

So, my basis for comparison is only the ModuleId:

//Check whether the Modules' properties are equal.
return x.ModuleId == y.ModuleId;

and I use the ModuleId’s hash code as the Module’s hash code:

return module.ModuleId.GetHashCode();

So, I can now alter my code to the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules, new ModuleComparer()).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects, new SubjectComparer()).ToList();

and it returns the correct results.
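The whole pattern can be demonstrated without CRM at all.  Below is a self-contained, runnable sketch using a simplified `Module` class and made-up GUIDs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Module
{
	public Guid CategoryId { get; set; }
	public Guid ModuleId { get; set; }
}

class ModuleComparer : IEqualityComparer<Module>
{
	public bool Equals(Module x, Module y)
	{
		if (Object.ReferenceEquals(x, y)) return true;
		if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
		return x.ModuleId == y.ModuleId;
	}

	public int GetHashCode(Module module)
	{
		if (Object.ReferenceEquals(module, null)) return 0;
		return module.ModuleId.GetHashCode();
	}
}

class ComparerDemo
{
	static void Main()
	{
		Guid shared = Guid.NewGuid();

		// The same Module (by ModuleId) appears in both lists, but with
		// different CategoryIds - exactly the situation that defeats the
		// parameterless Except().
		var desired = new List<Module>
		{
			new Module { CategoryId = Guid.NewGuid(), ModuleId = shared },
			new Module { CategoryId = Guid.NewGuid(), ModuleId = Guid.NewGuid() }
		};
		var held = new List<Module>
		{
			new Module { CategoryId = Guid.NewGuid(), ModuleId = shared }
		};

		// Without the comparator, no instances match (reference equality),
		// so both desired modules survive.
		Console.WriteLine(desired.Except(held).Count());                        // 2

		// With it, the shared module is correctly removed.
		Console.WriteLine(desired.Except(held, new ModuleComparer()).Count());  // 1
	}
}
```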

April 19, 2011

Debugging Plugins in CRM Online

Filed under: C#, CRM — Tags: , , , , — pogo69 @ 16:10

Plugin development in CRM Online offers a unique challenge, in that we cannot interactively debug our code.

In the ideal world, you would test your plugin in a development/testing environment (VPC or equivalent) before moving your code to the production Online deployment.  However, if you either have no testing environment available or are experiencing issues specific to the production deployment (data dependent errors, for instance), a few lines of code can help ease the pain:

	public class Plugin : IPlugin
	{
		public void Execute(IServiceProvider serviceProvider)
		{
			IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

			// Obtain the tracing service, so that diagnostics can be written to the trace log
			ITracingService trace = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

			Entity entity = null;

			// Check if the InputParameters property bag contains a target
			// of the current operation and that target is of type Entity.
			if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
			{
				// Obtain the target entity from the input parameters.
				entity = (Entity)context.InputParameters["Target"];

				// TODO Test for an entity type and message supported by your plug-in.
				if (context.PrimaryEntityName != "<logicalentityname>") { return; }
				if (context.MessageName != "<message>") { return; }
			}
			else
			{
				return;
			}
			try
			{
				IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
				IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

				...
			}
			catch (FaultException<OrganizationServiceFault> ex)
			{
				System.Text.StringBuilder errors = ex.Detail.ErrorDetails.Aggregate(
					new System.Text.StringBuilder(), (list, error) =>
					{
						list.Append(string.Format("[{0}:{1}]", error.Key, error.Value));
						return list;
					});

				#if TRACE == true
				trace.Trace(
					@"ErrorCode: {0}
					ErrorDetails: {1}
					Message: {2}
					TraceText: {3}",
					ex.Detail.ErrorCode,
					errors.ToString(),
					ex.Detail.Message,
					ex.Detail.TraceText
				);
				#endif

				throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
			}
		}
	}

With your exception handler in place, when/if an exception is thrown, the details of the Exception will be written to the Trace Log which is available for download via the popup Error dialog.  Anything written to the Tracing service will be sent to the CRM Web Application client in this way, but only if the plugin throws an exception.
