Pogo69's Blog

March 30, 2012

Case Management – How to Implement Block Increment Rules in Time Based Contracts

Filed under: C#, CRM, Cutting Code — pogo69 [Pat Janes] @ 10:47

For our clients with Time-based Service Contracts, we have some very specific Business Rules to implement.  These ensure that adequate time is allocated to a Case, no matter how trivial, to cater for the administrative overheads incurred.

The Requirements

  1. Each Case must consume a minimum of 60 mins allotment
  2. Each Case must consume a multiple of 15 mins after the initial 60 mins

For example:

Accrued Billable Time    Total Time upon Case Resolution
23 mins                  60 mins
47 mins                  60 mins
1 hr 13 mins             1 hr 15 mins
2 hrs 47 mins            3 hrs

The first two examples meet Rule #1 – minimum of 60 mins.

The last two examples meet Rule #2 – a multiple of 15 mins after the initial minimum 60 mins – that is, rounded up to the nearest 15 min interval.
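The two rules amount to a simple rounding function.  Here is a sketch of the arithmetic in Python, purely for illustration (the actual implementation is a C# plugin, shown later in this post):

```python
import math

def billable_minutes(accrued_minutes: int) -> int:
    """Apply the block-increment rules: a minimum of 60 mins,
    then round up to the nearest 15-min increment."""
    total = max(accrued_minutes, 60)                 # Rule #1: 60 min minimum
    total = 60 + math.ceil((total - 60) / 15) * 15   # Rule #2: 15 min blocks
    return total

# The examples from the table above:
# billable_minutes(23)  -> 60
# billable_minutes(47)  -> 60
# billable_minutes(73)  -> 75   (1 hr 13 mins -> 1 hr 15 mins)
# billable_minutes(167) -> 180  (2 hrs 47 mins -> 3 hrs)
```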

Overview

Time-Based Contracts -> Accrued Time -> Sum of ActualDurationMinutes in Closed Activities

Time-based Contracts manage allotment in accrued minutes.

When a Case attached to a Time-based Contract is resolved, the CRM aggregates the sum of all ActualDurationMinutes values in Closed Activities attached to that Case.  This includes the in-built Activity Types such as Email, Task etc; and also includes Custom Activity Types.  I make a point of mentioning the inclusion of Custom Activity Types here, as it is not clear in any of the documentation that I have read and there appears to be some subsequent confusion in the CRM Developer community.

In order to Close a Case (incident), one must create an IncidentResolution entity instance and pass it, and the required Status (usually 5 == ‘Problem Solved’) to a CloseIncidentRequest.  The IncidentResolution entity must contain two attributes of interest to the Case Resolution process:

  • IncidentId – the GUID identifier of the Case that we wish to Resolve (Close)
  • TimeSpent – the total number of accrued minutes pertaining to the Case being resolved.  This is the amount of allotment that will be decremented from the Contract Line assigned to the Case

TimeSpent is calculated by (the equivalent of) the following:

int timeSpent =
 (
  from a in xrm.ActivityPointerSet
  where a.RegardingObjectId.Id == caseId
  && a.StateCode == 0
  && a.IsBilled.GetValueOrDefault() == false
  select a.ActualDurationMinutes.GetValueOrDefault()
 ).Sum();

Thus, the Total Time Spent in a Case is calculated to be the sum of the Duration of each un-billed Closed Activity attached to that Case.  So; in order to influence this process to support our Business Rules, we need to add time to that calculation.

There are two ways to achieve this:

  1. Attach additional activities to the Case prior to the Case Resolution process (when the Case Resolution dialog box pops up, the calculation has already been done – thus, to influence the Time Spent at this stage, you must create the additional activities before the User attempts to Resolve the Case)
  2. Change the value of TimeSpent in the CaseResolution entity instance prior to the Case Resolution process taking place

In order to keep things neat and tidy; and also to ensure that I reconcile the calculations at the appropriate time, I have chosen to implement a combination of both.

Implementation

Time Sheets – A Quick Diversion

While not strictly relevant to this article, I will quickly mention our Time Sheeting process in CRM.  Prior to the introduction of Time-based contracts (we previously only dealt with ‘Number of Cases’ allotment types) we introduced Time Sheeting, implemented as a Custom entity attached to the Case.  The sum of durations in all attached Time Sheets contributed to the number of Cases decremented from the client’s allotment (2 hrs per Case).

With Time-based Contracts, we needed to move to an activity-based mechanism.

With 20-20 hindsight, I would have created the Time Sheet entity as a Custom activity, however:

  1. I didn’t create (or specify) it
  2. Hindsight was not available to those involved at the time
  3. What good does that do us now?

So, in order to:

  1. Ensure that the data entry process is the same regardless of Time-based or Number-of-Case-based Cases
  2. Enable the new activity based calculations to work as expected

I created a Custom activity type (Time Sheet Activity) and wrote a plugin to (re-)generate Closed activities for each Time Sheet attached to the Case every time a Time Sheet is Created, Updated or Deleted.  This allows the times entered into the Time Sheets (that we were already entering anyway) to be reflected (or mirrored) in the total time assigned to activities.

NB: I only create the Time Sheet Activity entity instances for Cases to which a Time-based Contract is attached.

Incident Resolution Process – Timing

Before I move onto the good stuff, I will also quickly mention the Case Resolution process, to give a better understanding of what we are doing and the why of the when.

int decrement = 0;
decrement = this.CalculateAllotmentToDecrement(originalIncident, incidentResolution, contract, context);
if (decrement != 0)
{
 this.DecrementAllotment((Guid)originalIncident["contractdetailid"], decrement, context);
}
this.CreateIncidentResolutionActivity(incidentResolution, context);
this.BillIncidentResolutionActivities(businessEntityMoniker.Id, 1, context);
base.SetState(businessEntityMoniker, 1, status, context);

So, the Case Resolution process is as follows:

  1. Calculate Allotment to decrement.  NB: For Time-based Contracts, this is taken to be the value of TimeSpent within the IncidentResolution entity instance passed in from the Case Resolution dialog.  It is NOT re-calculated as part of the Case Resolution process
  2. Decrement the Allotment
  3. Create IncidentResolution entity instance (up to this point it has been constructed, but NOT Created within the CRM)
  4. Bill the Activities attached to the Case
  5. Close the Case

I originally attempted to influence this process by intercepting the pre-Create message for IncidentResolution; however, as can be seen from the above, this is too late!  Hence, the final solution is implemented by intercepting the pre-Close message for Incident (Case).

Activity Duration Reconciliation and Re-calculation of TimeSpent

The basic steps of the process (that I chose to implement) are:

  1. Check that the requested Status is ‘Problem Solved’; if not, bail.  This only applies to successfully resolved Cases
  2. Find Contract Template; attached to the Contract; attached to the Incident (identified in IncidentResolution passed to the CloseIncidentRequest)
  3. Check that Contract is Time-based allotment; if not, bail
  4. Sum the ActualDurationMinutes of each un-billed, Closed Activity attached to the Case (yes, this is theoretically the same as the current value of TimeSpent in the IncidentResolution, but I like to be thorough)
  5. Round up TotalTime to 60 mins if it is currently lower
  6. Round up TotalTime to the nearest 15 min increment, as necessary, if it currently exceeds 60 mins
  7. If there is a deficit between the TotalTime calculated and that passed in, continue
  8. Create a “Dummy” Time Sheet activity for the deficit – this is not strictly necessary, as the system calculations have already been made, however I like it, as it makes it more visually obvious in the CRM front-end what is going on
  9. Modify TimeSpent in the IncidentResolution entity being passed through via the Close message.  NB: it is this step that influences the allotment quantity that will be applied to the relevant Contract Line

public class MyPlugin : IPlugin
{
 public void Execute(IServiceProvider serviceProvider)
 {
  IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

  if (context.PrimaryEntityName != Incident.EntityLogicalName) { return; }
  if (context.MessageName != "Close") { return; }

  try
  {
   IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
   IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

   // retrieve input parameters
   Entity entityIncidentResolution = (Entity)context.InputParameters["IncidentResolution"];

   int status = ((OptionSetValue)context.InputParameters["Status"]).Value;

   // if the problem was not solved, bail
   if (status != 5) { return; }

   IncidentResolution caseResolution = entityIncidentResolution.ToEntity<IncidentResolution>();

   using (XrmServiceContext xrm = new XrmServiceContext(service))
   {
    // extract case and contract details
    ContractTemplate template =
     (
      from t in xrm.ContractTemplateSet
      join c in xrm.ContractSet on t.ContractTemplateId.Value equals c.ContractTemplateId.Id
      join i in xrm.IncidentSet on c.ContractId.Value equals i.ContractId.Id
      where i.IncidentId.Value == caseResolution.IncidentId.Id
      select t
     ).SingleOrDefault();

    if (template == null) { return; }

    // only for time based contracts
     if (template.AllotmentTypeCode != null && template.AllotmentTypeCode.Value == 2 /* time */)
    {
     // obtain list of all currently attached activities (Closed and NOT Billed)
     List<ActivityPointer> activities =
      (
       from act in xrm.ActivityPointerSet
       where act.RegardingObjectId.Id == caseResolution.IncidentId.Id
       && act.IsBilled == false
       && act.StateCode == 0
       select act
      ).ToList();

     // minimum 1 hr (60 mins)
     int expiredTime = activities.Sum(act => act.ActualDurationMinutes.GetValueOrDefault());

     int totalTime = Math.Max(expiredTime, 60);

     // 15 mins increment
     totalTime = 60 + (int)(Math.Ceiling((decimal)(totalTime - 60) / 15) * 15);

     int deficitTime = totalTime - expiredTime;

     if (deficitTime > 0)
     {
      // create a dummy activity that makes up the time difference
      mrcrm_timesheetactivity timesheetActivity = new mrcrm_timesheetactivity();
      timesheetActivity.ActualDurationMinutes = deficitTime;
      timesheetActivity.RegardingObjectId = new Microsoft.Xrm.Client.CrmEntityReference(Incident.EntityLogicalName, caseResolution.IncidentId.Id);
      timesheetActivity.Subject = "<DEFICIT>";
      timesheetActivity.IsBilled = false;

      timesheetActivity.Id = service.Create(timesheetActivity);

      SetStateRequest reqSetState = new SetStateRequest();
      reqSetState.EntityMoniker = timesheetActivity.ToEntityReference();
      reqSetState.State = new OptionSetValue(1); // completed
      reqSetState.Status = new OptionSetValue(2); // completed

      SetStateResponse respSetState = (SetStateResponse)service.Execute(reqSetState);

      // update case resolution time to make up the deficit
      entityIncidentResolution["timespent"] = totalTime;
     }
    }
   }
  }
  catch (FaultException<OrganizationServiceFault> ex)
  {
   throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
  }
 }
}

I am doing all of this in the pre-Close stage, as I need to influence the TimeSpent before the CRM has a chance to decrement allotment.

Conclusion

Not much else to say, but one should always remember the “other” messages available to us in plugins.  Create, Update, Delete, Retrieve etc. are great, but sometimes… they’re just not enough.

 

March 28, 2012

CRM SSRS Reports, SQL CLR and External Data – A Case Study

Filed under: .NET, C#, CRM, Cutting Code, SQL Server, Transact SQL — pogo69 [Pat Janes] @ 14:21

I met some interesting requirements recently, wherein I needed to produce a CRM Report that joined standard CRM Data using the FilteredEntityName views with data provided by a 3rd party dll.

Given the complexities of the resultant solution, I thought it would make for an interesting Case Study.  Hopefully, others can learn something from my experiences and perhaps broaden their minds to the possibilities of CRM Integration.

Architectural Overview

A picture speaks a thousand words, so I’ll shut up for a moment and let the architecture diagram do the talking.

In summary:

  1. My SSRS Report (uploaded to CRM) renders data in a single table provided by;
  2. An SQL query that joins data retrieved from;
  3. A CRM Database FilteredEntity View and;
  4. A Table Valued Function wrapping an;
  5. SQL-CLR Table Valued Function that retrieves the required data from;
  6. A Web Service that calculates Fatigue ratings using;
  7. A 3rd-party 32-bit DLL

Now, if that seems a little bit convoluted, keep reading for the reasoning behind this spider-web of data manipulation and retrieval.

I will start backwards, as the final piece of the puzzle is actually the beginning of the story.

How (and Why) It All Works

32-bit DLLs and CRM 2011

In contrast to CRM 4.0, which preceded it, CRM 2011 is available only as a 64-bit installation.  This means that if you ever have a need to access 32-bit DLLs (legacy code, 3rd-party functionality, pretty much every ODBC driver ever written) you are unable to do so unless you write a “thunking” layer, providing access to the 32-bit functionality in a separate memory space in order to maintain compatibility.

I chose to wrap my 32-bit 3rd-party DLL in a WCF Service.  I have exposed several endpoints, including a basicHttpBinding that allows interoperability with older SOAP based clients – this is necessary for interaction from an SQL-CLR object as the requisite assemblies are not available to create a more modern WCF client.

As touched upon briefly above, I have implemented the same sort of idea to interoperate with ODBC drivers for legacy database applications that provide no other interface.

P/Invoke – The Murky World of Managed/Unmanaged Interoperability

I will just briefly mention a tool called the ‘P/Invoke Interop Assistant’ that I found invaluable when building a C# wrapper for the above-mentioned 32-bit DLL.  The DLL was written in C and exposed only a legacy C-based interface that cannot be easily consumed by managed code; which is where P/Invoke comes in.

By adding your legacy 32-bit DLL to your managed project and marking it as content, it will be copied to your output folder.  You must then create DllImport wrappers for any functions that you wish to access, e.g. (names changed to protect the innocent):

[DllImport("xxxx.DLL", EntryPoint="_xxx_yyyy@nn")]
 internal static extern int xxx_yyyy(
 int p1,
 ref int p2,
 int p3,
 ref int p4,
 ref int p5,
 ref int p6,
 ref double p7,
 ref double p8,
 int p9,
 ref int p10,
 ref double p11
);

I won’t go into the nightmare that was trying to figure out exactly what that signature should look like, nor the corresponding nightmare that was figuring out how to properly talk to it.  However, I can say that the P/Invoke Interop Assistant made life somewhat easier.

SQL-CLR Table-Valued Functions are awesome!

I got so excited about SQL-CLR when we first came in contact with each other.  In fact, I was all over a whole stack of new “toys” that became available with SQL Server 2005 (I will mention another, Common Table Expressions or CTEs a little later).

SQL-CLR allows you to write a managed assembly (.NET code) that can perform tasks not ordinarily available via Transact SQL and access that functionality via a Stored Procedure or Function wrapper.  My first attempt at such an assembly was to expose some simple .NET Regular Expression pattern matching/replacing functionality, which became invaluable (and surprisingly fast) for a data cleansing project on which I was working.

In this case, however, I wrote my first SQL-CLR Table Valued function; which does exactly what it says – returns data from a CLR managed assembly that is represented and consumed in Transact SQL as a Table.

So; what does an SQL-CLR Table Valued function look like?  I’m glad you asked.  I will let MSDN explain in detail, but basically, you have to implement two methods:

  1. One method does the actual data retrieval/collection/however you obtain the data – this method must return the interface System.Collections.IEnumerable (or something that implements same) and will be marked with metadata describing various SQL related properties and the name of the method that will be used to “feed” the data back to SQL Server
  2. The second method does the feeding – taking a generic Object (each an individual element of the collection returned from the first method), ripping it apart, and populating out parameters for each column in the resultset.  This allows SQL Server to stream the data back to the query that calls your assembly

Example as follows:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Collections.Generic;
using System.Linq;
using System.Xml;
using Microsoft.SqlServer.Server;
using BMP.FAID.SqlClr.WebServiceWrapper;

public partial class CalculateFatigue
{
 [SqlFunction(FillRowMethodName = "FillRow", DataAccess = DataAccessKind.None, TableDefinition = "Start datetime, Fatigue float", IsDeterministic = true, IsPrecise = false)]
 public static System.Collections.IEnumerable InitMethod(SqlXml rosterPeriods, string faidServiceUrl)
 {
  return WebService.Process(rosterPeriods.Value, faidServiceUrl);
 }

 public static void FillRow(Object obj, out SqlDateTime start, out SqlDouble fatigue)
 {
  HourlyFatigue hourBlock = (HourlyFatigue)obj;
  start = hourBlock.Start;
  fatigue = hourBlock.Fatigue;
 }
}

NB: I implemented my Web Service Wrapper in another assembly so that I could unit test it outside of SQL Server.

As with the in-depth explanation of SQL-CLR Table Valued functions, I will leave the details of deploying these managed assemblies to your MSDN searching capabilities – in fact, I think I may have gone through the process in another blog posting.

The Transact SQL Query

Apologies for the verbosity, but there are several elements of the query that I would like to explain for those who are not familiar with the concepts within.

--declarations and setup (hard-coded for demonstrative purposes)
declare
 @pilotId uniqueidentifier = 'A4D1BAF3-BC41-E111-B9ED-00012E244939',
 @year int = 2012,
 @month int = 2;

declare
 @monthBegin datetime = (select cast(@year as nvarchar) + '-' + cast(@month as nvarchar) + '-01');
declare
 @monthEnd datetime = DATEADD(SECOND,-1,DATEADD(Month,1,@monthBegin));

declare
 @daysInMonth int = (select DAY(@monthEnd));

declare
 @faidServiceUrl nvarchar(255) = (select top 1 mrcrm_faidserviceurl from Filteredmrcrm_rosterandavailabilitysettings);

create table #HourBlock
(
 [Start] [datetime] NULL,
 [Fatigue] [float] NULL
);

-- find roster items
declare
 @rosterPeriods xml =
 (
  select
   rsi.mrcrm_Start, rsi.mrcrm_End, rt.mrcrm_RiskLevel
  from
   mrcrm_rosterscheduleitem rsi
  join
   mrcrm_rostertype rt on rt.mrcrm_rostertypeId = rsi.mrcrm_RosterTypeId
  where
   rsi.mrcrm_EmployeeId = @pilotId
  and
   rt.mrcrm_RiskLevel > 0
  and
   rsi.mrcrm_Start <= @monthEnd
  and
   rsi.mrcrm_End >= DATEADD(DAY, -14, @monthBegin)
  and
   rsi.statecode = 0
  order by
   rsi.mrcrm_Start
  for xml path('period'), root('roster')
 );

-- use faid service to calculate hourly fatigue
insert
 #HourBlock
select
 Start, Fatigue
from
 [BMP_Functions].[dbo].[InitMethod](@rosterPeriods, @faidServiceUrl);

;with
-- obtain list of all hours in the day
Hour_CTE (Hour)
as
(
 select 0

 union all

 select Hour + 1 from Hour_CTE where Hour < 23
),
-- obtain list of all days in the month
Day_CTE (Day)
as
(
 select @monthBegin

 union all

 select DATEADD(Day, 1, Day) from Day_CTE where Day < @monthEnd
)
-- join it all up together
select
 Day_CTE.Day,
 Hour_CTE.Hour,
 ISNULL(h.Fatigue, 0.0) as Fatigue
from
 Hour_CTE
cross join
 Day_CTE
left join
 #HourBlock h on DATEPART(Hour, h.Start) = Hour_CTE.Hour and Day(h.Start) = Day(Day_CTE.Day);

Again, apologies for the verbosity, but this is as simple as I could make it while still demonstrating what I had hoped to (yes, the production version is more complicated than that).  Now, to break it down.

Configuration Data

I store system-wide configuration data in a “Settings” entity instance in the CRM.  This allows me to set variables such as Web Service URLs centrally and access them from anywhere that has access to CRM Data.  CRM components access the Fatigue Web Service that I show here in exactly the same way via plugins etc.

declare
 @faidServiceUrl nvarchar(255) = (select top 1 mrcrm_faidserviceurl from Filteredmrcrm_rosterandavailabilitysettings);

Table Inputs into SQL-CLR Functions (are not allowed) – ‘FOR XML PATH’

In order to calculate Fatigue ratings, the 3rd-party component I was using requires a “table” of input data.  Unfortunately, although you can output a table of data from an SQL-CLR function, you cannot input such data.  So I decided to convert my input data into XML and feed it into my managed assembly function via an SQL XML datatype.

This is another of the “new for SQL Server 2005” Transact SQL enhancements that I was very excited about upon first use.  While the “FOR XML …” syntax was available prior to SQL Server 2005, the introduction of “FOR XML PATH…” made the structured formatting of XML data so much cleaner.

-- find roster items
declare
 @rosterPeriods xml =
 (
  select
   rsi.mrcrm_Start, rsi.mrcrm_End, rt.mrcrm_RiskLevel
  from
   mrcrm_rosterscheduleitem rsi
  join
   mrcrm_rostertype rt on rt.mrcrm_rostertypeId = rsi.mrcrm_RosterTypeId
  where
   rsi.mrcrm_EmployeeId = @pilotId
  and
   rt.mrcrm_RiskLevel > 0
  and
   rsi.mrcrm_Start <= @monthEnd
  and
   rsi.mrcrm_End >= DATEADD(DAY, -14, @monthBegin)
  and
   rsi.statecode = 0
  order by
   rsi.mrcrm_Start
  for xml path('period'), root('roster')
 );

This query outputs data in the following format:

<roster>
 <period>
  <mrcrm_Start>2012-02-06T12:00:00</mrcrm_Start>
  <mrcrm_End>2012-02-06T13:30:00</mrcrm_End>
  <mrcrm_RiskLevel>3</mrcrm_RiskLevel>
 </period>
</roster>

Such data can then be processed in C# using the usual System.Xml mechanisms.
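For illustration, the same XML can be pulled apart in a few lines of Python (a sketch only – the real service wrapper does this in C# with System.Xml):

```python
import xml.etree.ElementTree as ET

# The roster format produced by the FOR XML PATH query above
roster_xml = """
<roster>
 <period>
  <mrcrm_Start>2012-02-06T12:00:00</mrcrm_Start>
  <mrcrm_End>2012-02-06T13:30:00</mrcrm_End>
  <mrcrm_RiskLevel>3</mrcrm_RiskLevel>
 </period>
</roster>
"""

# Extract each roster period into a (start, end, risk) tuple
periods = [
    (p.findtext("mrcrm_Start"),
     p.findtext("mrcrm_End"),
     int(p.findtext("mrcrm_RiskLevel")))
    for p in ET.fromstring(roster_xml).iter("period")
]
# periods -> [("2012-02-06T12:00:00", "2012-02-06T13:30:00", 3)]
```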

Using the SQL-CLR Table Valued Function

Using the SQL-CLR Table Valued Function is as simple as referencing its “wrapper” user-defined function, passing in any required parameters (in this case, the XML data and web service URL) and selecting from the resultant table.

-- use faid service to calculate hourly fatigue
insert
 #HourBlock
select
 Start, Fatigue
from
 [BMP_Functions].[dbo].[InitMethod](@rosterPeriods, @faidServiceUrl);

Common Table Expressions or CTEs – Outputting the Data

Also newly introduced in SQL Server 2005, is the concept of Common Table Expressions.  CTEs provide the equivalent of an inline function, which can be very handy when building reports for Dynamics CRM given that user-defined elements such as custom stored procedures and user-defined functions are disallowed within the CRM database.

NB: in the case of my SQL-CLR function, I stored the managed assemblies and function in a separate database.

In addition to providing a mechanism for an “inline” function, CTEs allow the definition of a recursive function as per my Hour_CTE and Day_CTE below; in this case, I am building a list of:

  1. Each Hour of the Day (0-23)
  2. Each Day in the Month (1 to however many days are in the current month)

Using the results of the CTEs and the SQL-CLR function, the final resultset is produced by a select statement – a CROSS JOIN ensures that a value is produced for each combination of Day and Hour.

;with
-- obtain list of all hours in the day
Hour_CTE (Hour)
as
(
 select 0

 union all

 select Hour + 1 from Hour_CTE where Hour < 23
),
-- obtain list of all days in the month
Day_CTE (Day)
as
(
 select @monthBegin

 union all

 select DATEADD(Day, 1, Day) from Day_CTE where Day < @monthEnd
)
-- join it all up together
select
 Day_CTE.Day,
 Hour_CTE.Hour,
 ISNULL(h.Fatigue, 0.0) as Fatigue
from
 Hour_CTE
cross join
 Day_CTE
left join
 #HourBlock h on DATEPART(Hour, h.Start) = Hour_CTE.Hour and Day(h.Start) = Day(Day_CTE.Day);
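The day-by-hour grid that the two recursive CTEs and the CROSS JOIN produce can be sketched in Python (illustrative only; the report itself uses the T-SQL above):

```python
from datetime import date
import calendar

year, month = 2012, 2
days_in_month = calendar.monthrange(year, month)[1]  # February 2012 is a leap month

# Equivalent of Day_CTE CROSS JOIN Hour_CTE: one row per (day, hour).
# 'fatigue' stands in for the #HourBlock table; empty here, so every
# cell falls back to 0.0 just as ISNULL(h.Fatigue, 0.0) does.
fatigue = {}  # (day, hour) -> fatigue value from the service
grid = [
    (date(year, month, d), hour, fatigue.get((d, hour), 0.0))
    for d in range(1, days_in_month + 1)
    for hour in range(24)
]
# len(grid) == days_in_month * 24
```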

SSRS and SQL-CLR Functions – A Difficulty

One final word on my experiences with this process.

There is a well-known issue with SSRS Reports, wherein the rendering of large quantities of data can cause excess memory consumption and the resultant failure of a Report to complete.  I encountered this issue, however the supposed cause of such issues did not apply to me – 25 columns and 31 rows of data (in the Production Report) does not constitute a large quantity of data.  Nor does the single, lone table rendering that data constitute a complex visual display.

While I cannot prove causality, I was able to overcome the memory consumption issue by moving my table and query defined above into a Sub-Report.

My hypothesis is that something in my query could not deal with another DataSet being defined in the same report (I was using other DataSets to provide data for the Year, Month and Pilot parameters).  This hypothesis is supported by the fact that removing the additional DataSets and hard-coding the parameters allowed the Report to render successfully.

I asked for assistance in the SSRS MSDN Support Forums, but was unable to elicit any useful information.  When time permits, I will follow up with this new discovery (I have only just stumbled upon the Sub-Report workaround) and see if it sparks any further ideas.

 

March 22, 2012

CRM 2011 LINQ Provider – FormattedValues Collection

Filed under: C#, CRM — pogo69 [Pat Janes] @ 09:06

I just re-discovered an idiosyncrasy of the CRM 2011 LINQ Provider that I thought I should share in here, as it may help others who try and fail to retrieve formatted values for attributes such as OptionSets.

The FormattedValues collection will contain entries:

  • Only for those attributes that require it (e.g. text attributes format as the text they represent – no formatting required)
  • Only for those attributes for which the values have been requested in your query

So, if you write a query such as the following:

var contacts =
	(
		from c in xrm.ContactSet
		join a in xrm.AccountSet on c.ParentCustomerId.Id equals a.Id
		where a.Name == "Acme Pty Ltd"
		select new
		{
			Name = c.FullName,
			DOB = c.BirthDate,
			Gender = (c.FormattedValues.Contains("gendercode") ? c.FormattedValues["gendercode"] : "Ambiguous")
		}
	);

the Gender of each Contact will ALWAYS be “Ambiguous”, as we have not queried for the GenderCode attribute.

Alter the query to the following:

var contacts =
	(
		from c in xrm.ContactSet
		join a in xrm.AccountSet on c.ParentCustomerId.Id equals a.Id
		where a.Name == "Acme Pty Ltd"
		select new
		{
			Name = c.FullName,
			DOB = c.BirthDate,
			GenderCode = c.GenderCode,
			Gender = (c.FormattedValues.Contains("gendercode") ? c.FormattedValues["gendercode"] : "Ambiguous")
		}
	);

only those Contacts whose Gender truly is ambiguous will return “Ambiguous”.

Thanks to ms_crm for the inspiration!

May 20, 2011

CRM 2011 LINQ – Entity Instance Tracking

Filed under: Cutting Code — pogo69 [Pat Janes] @ 10:11

An interesting query came up in the CRM Development Forum on MSDN recently:

http://social.msdn.microsoft.com/Forums/en-US/crmdevelopment/thread/83d49d47-1fc0-4181-99c6-229b658c99f3

which I will repeat here for context:

I currently have a plugin that triggers anytime a user creates, updates, or deletes a record out of a custom entity named Conflict of Interest. The plugin will update a flag on the contact record that lets a user know if there is an active conflict of interest. When I try to run the linq update I get an error “The context is not currently tracking the ‘contact’ entity“.

This sets up the IOrganizationService:

IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

// This function creates the linq query to update the contact record
public void SetActor_ConflictOfInterest(IOrganizationService service, Guid ActorGUID)
    {
      Utilities util = new Utilities();
      Guid BillingHeader = Guid.Empty;
      var xrm = new XrmServiceContext(service);

      var Actor = new Xrm.Contact
      {
        ContactId = ActorGUID,
        new_ConflictofInterest = true,
      };
      xrm.UpdateObject(Actor);
      xrm.SaveChanges();
    }

I’ve tried updating the record with a sql query as well, but CRM is locking the contact record for some reason. Any ideas on the linq error and why CRM is locking the contact record?

I’d never seen the error before, so I set off to see if I could replicate it.  It was not difficult:

var xrm = new Xrm2011.XrmServiceContext("Xrm");

Guid ActorGUID = (from c in xrm.ContactSet where c.FullName == "aa test 2" select c.Id).FirstOrDefault();

xrm.ClearChanges();

Xrm2011.Contact Actor = new Xrm2011.Contact
{
	Id = ActorGUID,
	DoNotEMail = true
};

xrm.UpdateObject(Actor);
xrm.SaveChanges();

And bingo!!  As soon as I execute the .UpdateObject() method, I receive the error:

The context is not currently tracking the ‘contact’ entity.

According to my reading through the SDK, we’re being told that the XRM LINQ context is not currently keeping track of the changes in our ‘contact’ entity instance (identified by the GUID that we have assigned to our new Xrm2011.Contact object).

Ordinarily when interacting with the CRM via the XRM context, we follow a pattern:

  1. Retrieve entity instance(s)
  2. Update entity instance(s)
  3. Save

OrganizationServiceContext.Attach()

But in this case we’re trying to update an entity instance by creating its representation from scratch; so the context knows nothing about it.  The solution is the context.Attach() method:

Guid ActorGUID = (from c in xrm.ContactSet where c.FullName == "aa test 2" select c.Id).FirstOrDefault();

Xrm2011.Contact Actor = new Xrm2011.Contact
{
	Id = ActorGUID,
	DoNotEMail = true
};

xrm.Attach(Actor);
xrm.UpdateObject(Actor);
xrm.SaveChanges();

which attaches our ‘contact’ entity instance to the context, so that it can track the changes.  So we run the code and we’re greeted with this:

The context is already tracking a different ‘contact’ entity with the same identity.

What’s going on?  First, we’re not tracking the ‘contact’ and now we are?  But not?

What happens is this:

  1. Our initial LINQ query retrieves the GUID identifier for the ‘contact’ entity instance we’re interested in.  This tracks the ‘contact’ in the context so that the context can keep up to date with any changes.  However, because we are only extracting the GUID, we’re throwing away our reference to the ‘contact’ entity instance that is being tracked.
  2. We construct our ‘contact’ entity instance and set whichever attributes are of relevance.
  3. We attempt to attach our ‘contact’ entity instance to the context.  However, this entity refers to the same entity instance (that is, identified by the same GUID) that was already added to context by our query in (1).  Hence, the error.

So, what do we do now?

There are two solutions in this case:

  1. Keep hold of the original entity instance reference; return the instance and not just the GUID.
  2. Remove the original entity instance reference from context.  We can do this with either the .Detach() method or .ClearChanges() method.

OrganizationServiceContext.Detach()

This method will remove an entity instance from the context and stop tracking changes.  To use this method, you must have a reference to the entity instance.  So this would require a combination of options (1) and (2) above:

  1. Hold onto the original entity instance reference
  2. .Detach() it
  3. Create your new entity instance reference and .Attach()
  4. Update
  5. Save

OrganizationServiceContext.ClearChanges()

This method will remove ALL entity instances from the context and stop tracking changes for ALL entity instances.  This allows us to .Attach() and .UpdateObject() without holding onto the original entity instance reference:

Guid ActorGUID = (from c in xrm.ContactSet where c.FullName == "aa test 2" select c.Id).FirstOrDefault();

xrm.ClearChanges();

Xrm2011.Contact Actor = new Xrm2011.Contact
{
	Id = ActorGUID,
	DoNotEMail = true
};

xrm.Attach(Actor);
xrm.UpdateObject(Actor);
xrm.SaveChanges();

Conclusion

Which combination of tracking methods you use in your application will depend entirely upon what you need to accomplish, but armed with this knowledge at least you’ll know which tools you have at your disposal.

May 11, 2011

CRM Hidden Gems – #1 – Business Closures

Filed under: Cutting Code — pogo69 [Pat Janes] @ 14:18

From time to time I come across functionality in the CRM that isn’t obviously accessible to us as CRM Developers.  A large part of my job is to make the CRM do things that it isn’t supposed to be able to do.  So I get to dig around and hunt for undocumented features.

I’ve decided to start posting some of my findings.  I won’t be posting complete solutions that have been developed as a result of my findings, but hopefully some of these ideas can prevent others from having to do quite as much digging as I have when trying to implement some of the more “esoteric” ideas that customers request.

How are Business Closures implemented?

Calendars

Business Closures are modelled in the CRM as a shared calendar.  There is one and only one calendar with the name ‘Business Closure Calendar’.  You can verify this in SQL Server Management Studio with the following query:

SELECT
	CalendarId, Description, Name, IsShared
FROM
	Calendar
WHERE
	Name = 'Business Closure Calendar';

The results of which should look something like:

CalendarId Description Name IsShared
C6F75360-E122-DD11-892E-0003FF6D78E5 Calendar for Business Closure Business Closure Calendar 1

Calendar Rules

Now, for every Business Closure added to the CRM Organisation, a Calendar Rule is created and attached to the ‘Business Closure Calendar’ Calendar.  Verify this with the following query:

SELECT
	Description, Name, EffectiveIntervalStart, EffectiveIntervalEnd
FROM
	CalendarRuleBase
WHERE
	CalendarId = 'C6F75360-E122-DD11-892E-0003FF6D78E5';

using the same CalendarId that we obtained from the previous query.  The results will look something like:

Description Name EffectiveIntervalStart EffectiveIntervalEnd
Holiday Rule Pogo Day Moved 2011-05-12 00:00:00.000 2011-05-13 00:00:00.000
Holiday Rule Pogo Week 2011-05-23 00:00:00.000 2011-05-28 00:00:00.000
Holiday Rule Pogo Rocks 2011-07-01 00:00:00.000 2011-08-01 00:00:00.000
Holiday Rule Next Year 2012-03-01 00:00:00.000 2012-03-02 00:00:00.000

How Do We Gain Access to Business Closures?

This is where it started to get tricky.  The CRM SDK does not provide direct access to the Calendar Rule entity – for ANY messages.  From the SDK:

The class for this entity is calendarrule. The calendar rule entity is accessed by using the messages on the calendar entity. See Calendar Entity Capabilities.

So… back to the Calendar entity again.  The Calendar class in the SDK is documented as having a “calendarrules” attribute defined as an array of “calendarrule” entities:

public calendarrule[] calendarrules {get; set;}

Awesome!?!?  Well… don’t get too excited just yet.  If we explicitly ask for the “calendarrules” attribute, CRM tells us that the ‘Calendar’ entity contains no such attribute!!

QueryByAttribute query = new QueryByAttribute();
query.EntityName = EntityName.calendar.ToString();
query.Attributes = new string[] { "name" };
query.Values = new object[] { "Business Closure Calendar" };
query.ColumnSet = new ColumnSet(new string[] { "name", "calendarrules" });

BusinessEntityCollection calendars = crmService.RetrieveMultiple(query);

So, does it exist or does it not?  If, instead, we specify that we wish all columns to be returned:

QueryByAttribute query = new QueryByAttribute();
query.EntityName = EntityName.calendar.ToString();
query.Attributes = new string[] { "name" };
query.Values = new object[] { "Business Closure Calendar" };
query.ColumnSet = new AllColumns();

BusinessEntityCollection calendars = crmService.RetrieveMultiple(query);

and we inspect the resultant Calendar object(s):

Turns out, it really *does* exist!!  So we can gain programmatic access to the Calendar Rules via the Calendar class, but we have to specify that ALL columns are returned and must NOT specify the column explicitly.

So What’s Going On?

At this point, I’m devolving into pure speculation.

My guess is that because “calendarrules” isn’t a “real” attribute (it is, in fact, a collection of entity instances related to their parent entity via the “calendar_calendar_rules” relationship), it cannot be accessed by specifying it as an attribute.  The CRM Web Services return it as an attribute because that is the only mechanism available in the results of a RetrieveMultiple message.

Lessons Learned

  • Not everything in the CRM is as it seems
  • Even if it is documented in the SDK, things are not necessarily as the documentation makes them seem
  • Some things just aren’t documented at all
  • If you really want to achieve something “un” or “under” documented within the CRM, keep digging until you’ve exhausted all possibilities because there are all sorts of hidden gems awaiting the dedicated (stubborn) explorer

	

May 10, 2011

CRM Web Services from SQL Server Transact SQL (SQLCLR)

Filed under: C#, CRM, SQL Server, Transact SQL — pogo69 [Pat Janes] @ 17:53

Let me preface this post by saying “bugga!!”.  Sometime last year, I assisted one of my fellow CRM Community Forum members through the process of building a SQL CLR assembly that communicates with CRM via web services.  This mechanism allows us to synchronise SQL Server updates with CRM in a supported manner by calling our assembly via Create/Update triggers in our 3rd party SQL Server database.  I created the shell for this post quite some time ago, but never got around to starting it, let alone finishing.

Now that CRM 2011 has been released, I was hoping to update the process to describe the steps necessary to create a SQLCLR assembly that communicates with the new CRM 2011 OrganizationService.  Unfortunately, this is not possible, as SQL Server 2005 and SQL Server 2008 (up to and including R2) do not support the .NET Framework v4.0.

So… back to the CRM 4.0 SOAP endpoint it is!!

Creating the SQLCLR Assembly

The example I’m going to describe will allow the creation of a new Contact record from within an SQLCLR Stored Procedure.

New Visual Studio Project

In Visual Studio:

  1. Click the ‘File’ menu
  2. Click ‘New’
  3. Click ‘Project…’
  4. In the Project window, select the ‘Visual C# SQL CLR Database Project’ in the Database / SQL Server category:

In the ‘New Database Reference’ dialog, choose or enter the name of the target SQL Server instance; then select the target database:

Choose ‘No’ for SQL/CLR debugging (or ‘Yes’ if you wish, but I’m not going to go into that here):

New Stored Procedure class

Next, we create the shell of our new SQLCLR Stored Procedure.

  1. Right-click on the Project
  2. Click ‘Add’
  3. Click ‘Stored Procedure…’

In the ‘Add New Item’ dialog, enter the name of your SQLCLR Stored Procedure:

Your newly created Stored Procedure class will look like the following:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact()
	{
		// Put your code here
	}
};

CRM Web Reference

Now, we need to add a Web Reference to the CRM 4.0 SOAP endpoint.  It is via this interface that we will communicate with the CRM.

  1. Right-click on the ‘References’ folder
  2. Click ‘Add Service Reference…’

Click on ‘Advanced…’:

Click on ‘Add Web Reference…’:

In the ‘Add Web Reference’ dialog, enter the fully qualified URL path to your CRM Service; then click on Green arrow icon next to the URL:

The CRM Web Service URL takes the following format:

http://<server>:<port>/MSCRMServices/2007/CrmService.asmx

Visual Studio will send a request to the CRM Service, retrieve the WSDL and build a Web Service proxy, which you must now name in the ‘Web reference name’ field:

Click on ‘Add Reference’ and you will see your newly added Web Reference in the Project’s ‘Web References’ folder:

Using the CRM Service to Create a Contact

Stored Procedure Signature

Next, we’ll add some parameters that can be used to create our new Contact.  Our contacts will each have:

  • First Name
  • Last Name
  • Email Address

public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
{
	// Put your code here
}

Each of the SQL Server datatypes is represented by a SQLCLR “wrapper” object that encapsulates the underlying .NET object.  This sample uses only SqlString, which encapsulates the .NET “string” datatype and represents a SQL Server NVARCHAR.
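
A quick illustration of the wrapper semantics (a small sketch; the values are arbitrary):

```csharp
// SqlString wraps a .NET string and adds database NULL semantics
SqlString name = new SqlString("Testy");

string plain = name.Value;       // unwraps to the underlying .NET string
bool hasValue = !name.IsNull;    // true - this instance carries a value

// Accessing .Value on a NULL wrapper throws SqlNullValueException,
// so guard before unwrapping:
SqlString missing = SqlString.Null;
string safe = missing.IsNull ? null : missing.Value;
```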

CrmSdk Namespace

I’ve added a “using” directive, so that we can use the CRM Web Service objects and methods directly:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

using SqlServerProject1.CrmSdk;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
	{
	}
};

CRM Web Service Connection and Interactivity

At this point, we can write CRM Web Service code just like we always have in our plugins, 3rd party applications and whatever we decide to implement.

  1. Obtain an Authentication Token
  2. Create a Web Service connection
  3. “Do stuff”

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

using SqlServerProject1.CrmSdk;

public partial class StoredProcedures
{
	[Microsoft.SqlServer.Server.SqlProcedure]
	public static void CreateContact(SqlString firstName, SqlString lastName, SqlString email)
	{
		CrmAuthenticationToken token = new CrmAuthenticationToken();
		token.AuthenticationType = 0;
		token.OrganizationName = "<uniqueorgname>";

		using (CrmService crmService = new CrmService())
		{
			crmService.Url = "http://<server>:<port>/MSCRMServices/2007/CrmService.asmx";
			crmService.Credentials = new System.Net.NetworkCredential("<username>", "<password>", "<domain>");
			crmService.CrmAuthenticationTokenValue = token;
			crmService.UnsafeAuthenticatedConnectionSharing = true;
			crmService.PreAuthenticate = true;

			contact contact = new contact();
			contact.firstname = firstName.Value;
			contact.lastname = lastName.Value;
			contact.emailaddress1 = email.Value;

			crmService.Create(contact);
		}
	}
};

Side Issues

Exception Handling

The code above has no exception handling.  Your production code should.  But I’m leaving that as an exercise for the reader, as it is largely an application-dependent architectural issue.

Web Service Connection Parameters

The Web Service connection parameters (credentials, URL, Organisation Name) etc are hard-coded.  Again, configurable alternatives are left as an exercise for the reader, although I may come back to this issue at a later time.  For now, some options to consider include:

  • Passing in Connection data as additional parameters
  • Storing Connection data in the Database
  • Storing Connection data in the Registry or File System

XML Serialisation Assembly

One of the restrictions of SQLCLR is that you cannot make use of any .NET Assembly that has not explicitly been loaded into the target SQL Server database, other than a very restricted set of .NET Framework assemblies that have been deemed “safe”.

A side-effect of this, is that an assembly such as ours that utilises Web Services will not work unless we manually generate an XML Serialisation Assembly.

The .NET runtime uses a Serialisation assembly to package Web Service requests into XML packets for transfer via HTTP to the Web Service host.  Ordinarily, serialisation assemblies are generated “on the fly” by the Framework, however we cannot make use of this mechanism as there is no way to dynamically load the resultant assembly into SQL Server.

To generate a Serialisation assembly from within Visual Studio:

  1. Right-click on the Project
  2. Click ‘Properties’
  3. Navigate to the ‘Build’ tab
  4. Change ‘Generate serialization assembly’ to ‘On’

Using sgen.exe from the Command Line

If you experience any issues with the generation of serialisation assemblies from within Visual Studio, you can use sgen.exe directly from the command line.  I do this for all of my serialisation code because:

  • There is no way to specify sgen.exe parameters from within the Visual Studio GUI
  • It takes a ridiculously long time for sgen.exe to complete using whatever parameters Visual Studio chooses to throw at it

I create a post-build task that runs the command from within the build process; the command line I use is as follows:

"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\sgen.exe" /a:"<drive>:\<path to solution>\<project folder>\bin\Debug\SqlServerProject1.dll" /p /force

I then wrap the sgen.exe command up in a batch file.  You can configure the Post Build Event in the Project properties as follows:
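
For example, the batch file might be as simple as the following (the file name and paths are assumptions; adapt them to your own environment):

```
rem sgen.bat - wraps the sgen.exe command so the Post Build Event stays short
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\sgen.exe" /a:"%~1" /p /force
```

with a Post Build Event along the lines of: call "$(ProjectDir)sgen.bat" "$(TargetPath)".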

Integration with SQL Server

Now, build your Project and locate the resultant assemblies.  The names will reflect the name you gave your assembly, but mine are:

  • SqlServerProject1.dll
  • SqlServerProject1.XmlSerializers.dll

Enabling CLR in SQL Server

By default, CLR interoperability is not enabled in SQL Server.  To enable it, run the following in SQL Server Management Studio:

USE master;
GO
EXEC sp_configure 'CLR Enabled', 1;
go
RECONFIGURE WITH OVERRIDE
GO

It’s a Matter of Trust

For security reasons, SQL CLR assemblies cannot by default be loaded into a SQL Server database.  In order to enable the uploading of our assemblies, there are two alternatives.

Setting TRUSTWORTHY ON for the Target Database

ALTER DATABASE <DBNAME> SET TRUSTWORTHY ON

This is useful for development and/or testing because it’s quick and easy.  However, it does open up the database to the possibility of malicious assemblies being loaded as all assemblies will now be trusted.  For production environments you should instead give “trust” only to your assemblies.

Trusting Individual Assemblies

CREATE ASYMMETRIC KEY CrmKey
FROM EXECUTABLE FILE = '<full path to your assembly>\<assemblyname>.dll'
GO

Now create a login associated with the CrmKey and grant it External Access rights.

CREATE LOGIN [CrmLogin] FROM ASYMMETRIC KEY [CrmKey];
GO
GRANT EXTERNAL ACCESS ASSEMBLY TO [CrmLogin];
GO

Finally add a user for your login in your database:

USE <your database>
GO
CREATE USER [CrmLogin] FOR LOGIN [CrmLogin];
GO

Deployment

Project Database Properties

When creating the SQLCLR Project in Visual Studio, we created a connection to the target SQL Server database.  On the ‘Database’ tab in the Project Properties, we can influence the deployment of our SQLCLR assembly.

Set the Permission level to ‘External’ – we need this, as our assembly will require access outside of SQL Server to communicate with CRM Web Services.

Set the Assembly Owner to either:

  • dbo – if you set TRUSTWORTHY ON in the database
  • CrmLogin – if you are assigning trust only to your assemblies and created the CrmLogin as per the instructions above

Deploying the Serialisation Assembly

The deployment process will only automatically deploy your SQLCLR assembly.  To deploy the serialisation assembly, we create a Post-Deployment script in the PostDeployScript.sql file:

CREATE ASSEMBLY [SqlServerProject1.XmlSerializers]
AUTHORIZATION [dbo] FROM '<solution path>\<project folder>\bin\Debug\SqlServerProject1.XmlSerializers.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO

NB: Because I am deploying my assemblies to a SQL Server on another machine, I had to configure the post deployment script above to use a UNC path to the serialisation assembly.  This is necessary, as the SQL Server must be able to gain file system access to the file.

Build and Deploy

Now… build your Project.  Assuming it builds successfully, you can Deploy it (the Deploy process actually does a build before it Deploys, but it can’t hurt to test the build first).

To deploy:

  1. Right-click on the Project
  2. Click ‘Deploy’

If it all goes as expected, your assemblies will be uploaded into the target database and you will be able to find them in SQL Server Management Studio by:

  1. Expanding the target database
  2. Expanding ‘Programmability’
  3. Expanding ‘Assemblies’

As you can see in the image above, the Deployment process also creates a “wrapper” Stored Procedure for the SQLCLR Stored Procedure implemented in my assembly.  The definition of that Stored Procedure is as follows:

CREATE PROCEDURE [dbo].[CreateContact]
	@firstName [nvarchar](4000),
	@lastName [nvarchar](4000),
	@email [nvarchar](4000)
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SqlServerProject1].[StoredProcedures].[CreateContact]
GO

EXEC sys.sp_addextendedproperty @name=N'SqlAssemblyFile', @value=N'CreateContact.cs' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'PROCEDURE',@level1name=N'CreateContact'
GO

EXEC sys.sp_addextendedproperty @name=N'SqlAssemblyFileLine', @value=N'13' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'PROCEDURE',@level1name=N'CreateContact'
GO

To call our new Stored Procedure, simply use the following:

DECLARE	@return_value int

EXEC	@return_value = [dbo].[CreateContact]
		@firstName = N'Testy',
		@lastName = N'Tester',
		@email = N'testiest@test.com'

SELECT	'Return Value' = @return_value

With the following result:

Conclusion

I hope this post makes sense.  It’s rather long, rather involved and I’ve had a really long day.  So, although I’ve read through it to make sure it isn’t obviously unintelligible I can’t guarantee that it is without issue(s).  If anyone spots anything that requires clarification and/or modification, let me know.

Enjoy.

May 7, 2011

The Missing LINQ – CRM 4.0 SDK – Advanced Developer Extensions

Filed under: .NET, C#, CRM, Cutting Code — Tags: , , , — pogo69 [Pat Janes] @ 09:01

In my inaugural post, I made brief mention of the newly introduced capabilities of the LINQ provider included as part of the most recent revision of the CRM 4.0 SDK – the Advanced Developer Extensions.  In this post, I’ll go into more detail about the features available and how it can increase developer productivity by reducing lines of code and increasing the testability and maintainability of the code.

The article wouldn’t be complete, however, without mentioning some of the current downsides; you can find the caveats later in the post.

NB: I started writing this post last year before CRM 2011 had been released but got distracted (not in the least bit unusual for me).  Most of what I’ve written is still applicable to the CRM 2011 LINQ provider as the underlying query mechanism (Query Expressions) is the same.

LINQ

One of the most exciting new features of the .NET framework is LINQ (Microsoft Language Integrated Query).  Using an applicable provider, LINQ provides SQL-like access to an underlying object model, via some compiler magic and a collection of extension methods (global static methods that take a “this” pointer as their first parameter).

LINQ allows strongly typed queries and results to an arbitrary underlying data model.  The Advanced Developer Extensions makes available a LINQ provider to access CRM entities and attributes via the xRM LINQ provider.

Advanced Developer Extensions

CrmSvcUtil.exe

In order to be able to write LINQ queries against the CRM, you must first generate the following:

  • DataContext class – allows a “connection” to the CRM and provides access to both collections of CRM entities and CRUD methods
  • Data Transfer Objects – a mapping between a CRM entity instance and a .NET object with which your source code can interact

If you wish to keep the model consistent and in sync, the CrmSvcUtil.exe utility must be re-run each time you make metadata changes to the corresponding CRM installation.

xRM LINQ Queries

So, let’s look at some simple queries:

DataContext xrm = new DataContext("Xrm.ConnectionString");

Guid contactId = new Guid("{12341234-1234-1234-1234-123412341234}");

List<incident> myCases =
(
	from cs in xrm.incidents
	join c in xrm.contacts on cs.customerid.Value equals c.contactid
	where c.contactid == contactId
	select cs
).ToList();

This simple “join” query returns all Case entities assigned to the Contact referenced by the Guid “contactId”.  To write the equivalent query in QueryExpression syntax would consume many more lines of code, and largely obfuscate the intention of the query.
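
For comparison, a rough sketch of what the same query might look like using the CRM 4.0 QueryExpression classes (an outline based on the SDK types, not production code):

```csharp
// Retrieve all incidents linked to a given contact - QueryExpression style
QueryExpression query = new QueryExpression();
query.EntityName = EntityName.incident.ToString();
query.ColumnSet = new AllColumns();

// join incident.customerid -> contact.contactid
LinkEntity link = new LinkEntity();
link.LinkFromEntityName = EntityName.incident.ToString();
link.LinkFromAttributeName = "customerid";
link.LinkToEntityName = EntityName.contact.ToString();
link.LinkToAttributeName = "contactid";

// where contact.contactid == contactId
ConditionExpression condition = new ConditionExpression();
condition.AttributeName = "contactid";
condition.Operator = ConditionOperator.Equal;
condition.Values = new object[] { contactId };

link.LinkCriteria = new FilterExpression();
link.LinkCriteria.Conditions = new ConditionExpression[] { condition };
query.LinkEntities = new LinkEntity[] { link };

BusinessEntityCollection cases = crmService.RetrieveMultiple(query);
```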

With LINQ, the simplified syntax allows us to see:

  • Selection criteria
  • Source entities
  • Link criteria
  • Output attributes
  • Ordering

and more, all in one simple statement.

Returning Custom Classes

The previous query returns entire CRM entities (via the corresponding Data Transfer Object); it is also possible to return only a selection of the entity’s attributes, thus:

var myCases =
	(
		from cs in xrm.incidents
		join c in xrm.contacts on cs.customerid.Value equals c.contactid
		where c.contactid == contactId
		select new
			{
				cs.incidentid,
				cs.title
			}
	);

The keyword “var” refers to an anonymous type; anonymous types allow the dynamic creation of an object, without having to explicitly declare the type of the resultant object.  Anonymous types may only contain read-only properties; if you wish to subsequently manipulate the content of the returned results, you must declare and instantiate instances of explicitly declared types.
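
The read-only nature of anonymous type members is enforced at compile time:

```csharp
// Anonymous type members cannot be reassigned - this is why a named
// class is needed when you want to mutate the results later.
var caseSummary = new { IncidentId = Guid.NewGuid(), Title = "Sample Case" };

string title = caseSummary.Title;   // reading is fine
// caseSummary.Title = "Changed";   // compile-time error: property is read-only
```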

The following provides an (almost real world, albeit abbreviated) example:

protected class AppointmentDataItem
{
	// appointment attributes
	public Guid AppointmentId { get; set; }
	public DateTime ScheduledStart { get; set; }
	public DateTime ScheduledEnd { get; set; }
	public DateTime Booked { get; set; }

	// case (job) attributes
	public string JobTicketNumber { get; set; }
	public Guid JobId { get; set; }

	// contact (customer) attributes
	public string CustomerName { get; set; }
	public string CustomerTelephone { get; set; }
	public string CustomerAddress { get; set; }
	public string CustomerCity { get; set; }
	public string CustomerState { get; set; }
	public string CustomerPostcode { get; set; }
	public string CalendarDisplayTitle { get; set; }
}

...

List<AppointmentDataItem> appointments =
	(
		from app in xrm.appointments
		join cs in xrm.incidents on app.regardingobjectid.Value equals cs.incidentid
		where cs.msa_partnercontactid.Value == contactId
		select new AppointmentDataItem
			{
				AppointmentId = app.activityid,
				ScheduledStart = app.scheduledstart.GetValueOrDefault().ToLocalTime(),
				ScheduledEnd = app.scheduledend.GetValueOrDefault().ToLocalTime(),
				Booked = app.createdon.GetValueOrDefault().ToLocalTime()
			}
	).ToList();

...

foreach (AppointmentDataItem appointment in appointments)
{
	// get the Case object to which the Appointment is attached
	var job =
	(
		from cs in xrm.incidents
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cs
	).Single();

	appointment.JobId = job.incidentid;
	appointment.JobTicketNumber = job.ticketnumber;

	// get the Customer Contact to whom the Case is attached
	var customer =
	(
		from cust in xrm.contacts
		join cs in xrm.incidents on cust.contactid equals cs.customerid.Value
		join app in xrm.appointments on cs.incidentid equals app.regardingobjectid.Value
		where app.activityid == appointment.AppointmentId
		select cust
	).Single();

	appointment.CustomerName = customer.fullname;
	appointment.CustomerAddress = customer.address1_line1;
	appointment.CustomerCity = customer.address1_city;
	appointment.CustomerState = customer.address1_stateorprovince;
	appointment.CustomerPostcode = customer.address1_postalcode;
	appointment.CustomerTelephone = customer.telephone2;
	appointment.CalendarDisplayTitle = customer.fullname + " " + FormatTelephone(customer.telephone2) + "<br />" + FormatAddress(customer);
}

So, we:

  • Declare our new custom type
  • Query for, and return a List of instances of our new type
  • Update the properties of our object instances at will

Pretty cool; and we’re still “talking” to CRM in terms of entities and attributes using a relatively natural (to those of us with SQL skills anyway) language.

Restrictions (some of those Caveats I mentioned)

There are limitations to the queries that can be written using the xRM LINQ provider.  These restrictions are due to the underlying query mechanism to which the LINQ provider translates its queries – Query Expression.

One Entity’s Attributes per Query – The impact of Query Expression

Using Query Expressions, we can only return attribute values for a single entity at a time.  A corollary of this is that an xRM LINQ Query can only return attribute values for a single entity at a time.  For instance, the following is not valid and will error:

var contact =
	(
		from c in xrm.contacts
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

While the code will compile and run (it *is* valid C# and valid LINQ syntax), it will generate the following error at run-time as soon as you attempt to iterate the resultant collection:

The ‘select’ and ‘orderBy’ calls may only reference a single common entity type.

If you wish to retrieve more than one entity’s attributes in a single query, you are forced to “project” the CRM Entity’s Data Transfer Object(s) onto a local object collection via the ToList() method or similar.  Unfortunately, this brings up another caveat.

Projection of Data Transfer Objects onto Local Object collections

The following revised query (NB: the ToList() projection in the ‘from’ clause):

var contact =
	(
		from c in xrm.contacts.ToList()
		join a in xrm.accounts on c.parentcustomerid.Value equals a.accountid
		where c.emailaddress1 != null
		select new
			{
				ContactName = c.fullname,
				AccountName = a.name
			}
		);

will no longer error as the ToList() projection turns our Contact entity Data Transfer Objects into a local object collection.

What this means however, is that the LINQ provider will return ALL CRM Contact entities, BEFORE it applies either your predicates (where clause) or attribute selection.

In other words, each and every attribute of each and every entity instance is returned, and subsequently whittled down in memory by the remainder of your LINQ query.  This can be very expensive in terms of speed and bandwidth depending on the number and size of the entities in your query.

If you find this to be an issue, you will need to revise your code to use a mechanism such as that described above, wherein you create custom classes to retrieve the relevant data one entity type at a time.
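
The effect can be demonstrated with plain LINQ-to-Objects, no CRM required (the counter is purely illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

class Program
{
	static int fetched = 0;

	// Stands in for the remote entity set; counts how many elements
	// are actually pulled from the "server"
	static IEnumerable<int> Source()
	{
		for (int i = 0; i < 1000; i++)
		{
			fetched++;
			yield return i;
		}
	}

	static void Main()
	{
		var results =
			(
				from n in Source().ToList()   // materialises all 1000 elements
				where n < 5                   // filtered afterwards, in memory
				select n
			).ToList();

		// fetched is now 1000, even though results contains only 5 items
	}
}
```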

Restricted Predicates – The impact of Condition Expressions

A Condition Expression takes the following form:

ConditionExpression condition = new ConditionExpression("<logicalattributename>", ConditionOperator, object);

This means that the predicates in your CRM LINQ queries must follow a similar format:

where <entityinstance>.<attribute> == "<value>"

  • The left-hand side of the predicate MUST be an entity attribute
  • The right-hand side of the predicate MUST be a variable or literal

This prevents us from comparing attribute values between entities or even from comparing attribute values from the same entity instance.  For instance, the following is NOT valid:

var contacts =
	(
		from c in xrm.contacts
		where c.modifiedon.Value > c.createdon.Value
		select c
	).ToList();

resulting in the following run-time error (because again, it *is* valid C# and valid LINQ):

Lambda Parameter not in scope

Of course, you can “solve” this problem by projecting onto a local object collection as per above, but you risk the same performance issues previously described.

Many to Many Queries

I’m sure you can find examples somewhere in the SDK, but I figured I’d make mention of many-to-many queries, as they require some “unusual” syntax, once again due to their relationship to the underlying Query Expressions.

In a recent project my data model included a many-to-many relationship between the Category entity and the Module entity.  To query for each Module attached to a particular Category, my query looks like the following:

List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categoryame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

The first thing you will probably notice is that I used ToList() to project the Modules onto a local object collection.  This is so that I could populate my custom Module class with the Category to which it was attached.  In this case, I know that the number of Categories/Modules/Subjects in the system will never be prohibitively large and there are only 2-3 custom attributes on each, so I can live with the overhead.

The “unusual” thing about the query is that we have 2 “from” clauses – this won’t make any sense to SQL aficionados, but it is necessary due to Query Expression and the way that many-to-many relationships are represented via the CRM SDK – the Moniker.

More Cool Stuff

Custom Comparators (this is more C# than CRM, but it’ll get you thinking)

In the same project described above, I need to build lists of Modules and Subjects that a user requires to complete a selected qualification (Category).

First, the user selects their desired Category, from which a list of required Modules and Subjects is constructed:

// obtain desired modules/subjects
List<Module> desiredModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join cat in xrm.ame_categoryame_categories on mod_cat.ame_categoryid equals cat.ame_categoryid
		where cat.ame_categoryid == desiredCategoryId
		where mod.statuscode.Value == 1
		select new Module
			{
				CategoryId = cat.ame_categoryid,

				ModuleId = mod.ame_moduleid,
				ModuleNo = mod.ame_name,
				ModuleTitle = mod.ame_title
			}
	).ToList();

List<Subject> desiredSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in desiredModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
			{
				ModuleId = mod.ModuleId,

				SubjectId = sub.ame_subjectid,
				SubjectNo = sub.ame_name,
				SubjectTitle = sub.ame_title
			}
	).ToList();

Next, the user selects each Category that they currently hold, from which a list of currently held Modules and Subjects is constructed:

// obtain currently held modules/subjects
List<Module> heldModules =
	(
		from mod in xrm.ame_modules.ToList()
		from mod_cat in mod.ame_ame_category_ame_module
		join held in heldCategories on mod_cat.ame_categoryid equals held
		where mod.statuscode.Value == 1
		select new Module
		{
			CategoryId = held,

			ModuleId = mod.ame_moduleid,
			ModuleNo = mod.ame_name,
			ModuleTitle = mod.ame_title
		}
	).ToList();

List<Subject> heldSubjects =
	(
		from sub in xrm.ame_subjects.ToList()
		from mod_sub in sub.ame_ame_module_ame_subject
		join mod in heldModules on mod_sub.ame_moduleid equals mod.ModuleId
		where sub.statuscode.Value == 1
		select new Subject
		{
			ModuleId = mod.ModuleId,

			SubjectId = sub.ame_subjectid,
			SubjectNo = sub.ame_name,
			SubjectTitle = sub.ame_title
		}
	).ToList();

Next, I needed to calculate the difference; that is, remove every currently held Module and Subject from the desired lists, leaving only those still required.

I started with the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects).ToList();

Unfortunately, this doesn’t work correctly.  The relationship between each pair of entities is many-to-many, and I store the parent identifier in my local Module and Subject classes, so the “held” Modules/Subjects never match the “required” Modules/Subjects – for the Modules the CategoryId differs, and for the Subjects the ModuleId differs:

internal class Module
{
	internal Guid CategoryId { get; set; }

	internal Guid ModuleId { get; set; }
	internal string ModuleNo { get; set; }
	internal string ModuleTitle { get; set; }
}

internal class Subject
{
	internal Guid ModuleId { get; set; }

	internal Guid SubjectId { get; set; }
	internal string SubjectNo { get; set; }
	internal string SubjectTitle { get; set; }
}

Enter the custom comparator.  A custom comparator supplies the comparison rules for classes whose default equality semantics (reference equality, for classes such as these) don’t suit your needs.  It must implement IEqualityComparer&lt;T&gt;.  For simplicity, I’ll show only the Module comparator:

class ModuleComparer : IEqualityComparer<Module>
{
	public bool Equals(Module x, Module y)
	{
		//Check whether the compared objects reference the same data.
		if (Object.ReferenceEquals(x, y)) return true;

		//Check whether any of the compared objects is null.
		if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;

		//Check whether the Modules' properties are equal.
		return x.ModuleId == y.ModuleId;
	}

	// If Equals() returns true for a pair of objects
	// then GetHashCode() must return the same value for these objects.
	public int GetHashCode(Module module)
	{
		//Check whether the object is null
		if (Object.ReferenceEquals(module, null)) return 0;

		return module.ModuleId.GetHashCode();
	}
}

So, my basis for comparison is only the ModuleId:

//Check whether the Modules' properties are equal.
return x.ModuleId == y.ModuleId;

and I use the ModuleId’s hash code as the Module’s hash code:

return module.ModuleId.GetHashCode();

So, I can now alter my code to the following:

// calculate difference
List<Module> requiredModules = desiredModules.Except(heldModules, new ModuleComparer()).ToList();
List<Subject> requiredSubjects = desiredSubjects.Except(heldSubjects, new SubjectComparer()).ToList();

and it returns the correct results.
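For readers coming from outside .NET, the same idea can be sketched as a key-based set difference.  The following JavaScript is an illustrative analogue only (the names and data are mine, not from the project): it removes from the “desired” list every item whose key also appears in the “held” list, just as Except() does when driven by the ModuleId-based comparer.

```javascript
// Illustrative analogue of Except() with a custom comparer:
// a set difference keyed by a caller-supplied key function.
function exceptBy(desired, held, keyFn) {
	// Collect the key of every held item...
	var heldKeys = new Set(held.map(keyFn));
	// ...and keep only the desired items whose key is not held.
	return desired.filter(function (item) {
		return !heldKeys.has(keyFn(item));
	});
}

// Hypothetical data: the same module held under a different parent Category.
var desired = [
	{ moduleId: "M1", categoryId: "C1" },
	{ moduleId: "M2", categoryId: "C1" }
];
var held = [
	{ moduleId: "M2", categoryId: "C9" }
];

var required = exceptBy(desired, held, function (m) { return m.moduleId; });
// required contains only M1 - M2 is excluded despite the differing categoryId
```

Because the key function looks only at moduleId, the mismatched parent identifiers are ignored – exactly the behaviour the ModuleComparer above restores for Except().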

May 5, 2011

The Beginnings of a Date Library

Filed under: Cutting Code, Javascript — pogo69 [Pat Janes] @ 09:34

Recently in one of the CRM Forums, there was a request for some javascript to find the differences between two dates.  I’d started to work up a JScript Date Library that does exactly that, so I figured I’d share it here for all:

if (typeof DateLib == 'undefined') {
	DateLib = {};
}

// difference in milliseconds
DateLib.Diff = function (from, to) {
	return to - from;
};

// difference in (possibly fractional) minutes
DateLib.DiffMinutes = function (from, to) {
	return DateLib.Diff(from, to) / (1000 * 60);
};

// difference broken down into days, hours and minutes,
// with a ready-made display string
DateLib.DiffParts = function (from, to) {
	var diff = DateLib.DiffMinutes(from, to);

	var parts = new Object();

	parts.Days = Math.floor(diff / (60 * 24));
	diff -= (parts.Days * 60 * 24);
	parts.Hours = Math.floor(diff / 60);
	diff -= (parts.Hours * 60);
	parts.Minutes = Math.floor(diff);

	parts.Display =
			(parts.Days > 0 ? parts.Days + " days, " : "") +
			(parts.Hours > 0 ? parts.Hours + " hrs, " : "") +
			(parts.Minutes + " mins");

	return parts;
};
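A quick usage sketch (the library is repeated here so the snippet stands alone, and the two dates are purely illustrative):

```javascript
var DateLib = {};

DateLib.Diff = function (from, to) {
	return to - from; // milliseconds
};
DateLib.DiffMinutes = function (from, to) {
	return DateLib.Diff(from, to) / (1000 * 60);
};
DateLib.DiffParts = function (from, to) {
	var diff = DateLib.DiffMinutes(from, to);
	var parts = {};
	parts.Days = Math.floor(diff / (60 * 24));
	diff -= (parts.Days * 60 * 24);
	parts.Hours = Math.floor(diff / 60);
	diff -= (parts.Hours * 60);
	parts.Minutes = Math.floor(diff);
	parts.Display =
		(parts.Days > 0 ? parts.Days + " days, " : "") +
		(parts.Hours > 0 ? parts.Hours + " hrs, " : "") +
		(parts.Minutes + " mins");
	return parts;
};

// From 1 May 2011 09:00 to 3 May 2011 10:30 local time
// (note: the Date constructor's month argument is zero-based).
var from = new Date(2011, 4, 1, 9, 0);
var to = new Date(2011, 4, 3, 10, 30);

var parts = DateLib.DiffParts(from, to);
// parts.Display -> "2 days, 1 hrs, 30 mins"
```

Note that the library doesn’t singularise units (“1 hrs”) – an easy refinement if you need polished output.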

April 19, 2011

Debugging Plugins in CRM Online

Filed under: C#, CRM — Tags: , , , , — pogo69 [Pat Janes] @ 16:10

Plugin development in CRM Online offers a unique challenge, in that we cannot interactively debug our code.

In the ideal world, you would test your plugin in a development/testing environment (VPC or equivalent) before moving your code to the production Online deployment.  However, if you either have no testing environment available or are experiencing issues specific to the production deployment (data dependent errors, for instance), a few lines of code can help ease the pain:

	public class Plugin : IPlugin
	{
		public void Execute(IServiceProvider serviceProvider)
		{
			IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

			// Obtain the tracing service; anything written to it will appear
			// in the downloadable trace log if the plugin throws an exception.
			ITracingService trace = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

			Entity entity = null;

			// Check that the InputParameters property bag contains a target
			// for the current operation and that the target is of type Entity.
			if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
			{
				// Obtain the target entity from the input parameters.
				entity = (Entity)context.InputParameters["Target"];

				// TODO Test for an entity type and message supported by your plug-in.
				if (context.PrimaryEntityName != "<logicalentityname>") { return; }
				if (context.MessageName != "<message>") { return; }
			}
			else
			{
				return;
			}
			try
			{
				IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
				IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

				// ... your core plugin logic goes here ...
			}
			catch (FaultException<OrganizationServiceFault> ex)
			{
				System.Text.StringBuilder errors = ex.Detail.ErrorDetails.Aggregate(
					new System.Text.StringBuilder(), (list, error) =>
					{
						list.Append(string.Format("[{0}:{1}]", error.Key, error.Value));
						return list;
					});

				#if TRACE == true
				trace.Trace(
					@"ErrorCode: {0}
					ErrorDetails: {1}
					Message: {2}
					TraceText: {3}",
					ex.Detail.ErrorCode,
					errors.ToString(),
					ex.Detail.Message,
					ex.Detail.TraceText
				);
				#endif

				throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
			}
		}
	}

With this exception handler in place, the details of any exception thrown are written to the Trace Log, which is available for download via the popup error dialog.  Anything written to the Tracing service is sent to the CRM Web Application client in this way – but only if the plugin throws an exception.

April 15, 2011

CRM 2011 – Visual Studio Plugin Templates

Filed under: C#, CRM — Tags: , , , , — pogo69 [Pat Janes] @ 15:27

I really liked the Visual Studio templates that shipped with the CRM 4.0 SDK.  They provide a handy blank workspace in which to create a new plugin or custom workflow activity without having to remember each time how to code them from scratch.

I was rather dismayed to find no trace of similar templates in the CRM 2011 SDK, so I created my own.  I’m offering them here so that other developers can benefit as well.  Let me know if you discover any issues; I’ll update the source and refresh the links.

The Templates

Custom Workflow Activity

http://www.mediafire.com/?5ai40s51qjqv24o

Plugin

http://www.mediafire.com/?ip9avvd5wnc87f0

How to Install

Simply copy the zip files (do not extract them – just copy the archives directly) into the folder:

<My Documents>\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\CRM 2011\

I called my new folder ‘CRM 2011’ – you can call it whatever you wish.  The name you choose will appear as a new category when you create a new Visual Studio project.

Automatically Referencing the SDK Libraries

I had an interesting query from Gonzalo Ruiz about whether the templates could include references to the SDK libraries.  If you’ve used these templates, you will have noticed the comments at the top of each listing the libraries that must be referenced before your SDK object references will resolve and compile.

The answer is yes… and no.

Why I Can’t Put the References in the Generic Template

The CRM SDK libraries are not installed in the GAC on a developer’s workstation, nor are they installed to a well-known location, so the template cannot know what path the library references should point to.  I’ll include the plugin template’s project file so that you can see what the existing assembly references look like:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.50727</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{BCC9080F-3C19-4D40-B487-1E874F6D2BD1}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>$safeprojectname$</RootNamespace>
    <AssemblyName>$safeprojectname$</AssemblyName>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SignAssembly>true</SignAssembly>
    <AssemblyOriginatorKeyFile>PluginKey.snk</AssemblyOriginatorKeyFile>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>bin\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Core">
      <RequiredTargetFramework>4.0</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data.Services" />
    <Reference Include="System.Data.Services.Client" />
    <Reference Include="System.Runtime.Serialization" />
    <Reference Include="System.ServiceModel" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="plugin.cs" />
    <Compile Include="Properties\AssemblyInfo.cs" />
  </ItemGroup>
  <ItemGroup>
    <None Include="PluginKey.snk" />
  </ItemGroup>
  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
  <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
       Other similar extension points exist, see Microsoft.Common.targets.
  <Target Name="BeforeBuild">
  </Target>
  <Target Name="AfterBuild">
  </Target>
  -->
</Project>

For example, the LINQ to XML library is referenced using:

    <Reference Include="System.Xml.Linq" />

because it is installed in the GAC and therefore requires no path information.

What You Can Do to Make Hard-Coded References Work for You and Your Team

  1. Install the latest version of the SDK in a well-known location – preferably a network location so that the entire development team can reliably load the same projects
  2. Update the template to include “hint” information in a reference to each required CRM SDK assembly

Plugin Template

I’ll step through it for the Plugin template – you can figure out from there what to do to the Custom Workflow Assembly template.

Expand Plugin.zip

In the resultant folder, you will find a file named ‘CRM 2011 Plug-in Test Template.csproj’ – edit this file in notepad (or similar).  The contents are exactly as I have posted above – an XML file that tells Visual Studio which resources to load for your project.

Edit ‘CRM 2011 Plug-in Test Template.csproj’ to Add SDK Assembly References

Add two references in the <ItemGroup> node that contains the existing assembly references.  It should look like the following when you’re done:

  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Core">
      <RequiredTargetFramework>4.0</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data.Services" />
    <Reference Include="System.Data.Services.Client" />
    <Reference Include="System.Runtime.Serialization" />
    <Reference Include="System.ServiceModel" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
    <Reference Include="microsoft.xrm.client">
      <HintPath>\\<server>\<share>\<path to sdk>\bin\microsoft.xrm.client.dll</HintPath>
    </Reference>
    <Reference Include="microsoft.xrm.sdk">
      <HintPath>\\<server>\<share>\<path to sdk>\bin\microsoft.xrm.sdk.dll</HintPath>
    </Reference>
  </ItemGroup>

Much the same as the other references, but with “hints” to the location(s).

Package and Deploy

  1. Save the project file.
  2. Zip up the contents of the entire extracted folder structure into an archive named Plugin.zip.
  3. Overwrite the existing Plugin.zip with your new archive.

Your new Plugin projects should now contain resolved references to CRM SDK libraries.
