Pogo69's Blog

January 9, 2013

Converting Queries/Charts/Dashboards from User to System

Filed under: C#, CRM, Cutting Code — pogo69 [Pat Janes] @ 09:48

Overview

If you have created your own Saved Queries, Charts or Dashboards, you may have had occasion to share those objects with other CRM Users.  If you find that they become universally reusable to the point of wanting to “promote” them to System objects, there is no mechanism in the CRM Web Application to do so.

Code to the Rescue

You can, however, convert such items using some relatively simple C# (or VB if you must) code.
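
The snippets that follow assume an early-bound service context named xrm, generated by CrmSvcUtil with the Developer Extensions (Microsoft.Xrm.Client) code customisation, as used elsewhere on this blog.  A minimal sketch of that setup, with illustrative names only, looks something like this:

using Microsoft.Xrm.Sdk;            // BooleanManagedProperty
using Microsoft.Crm.Sdk.Messages;   // PublishXmlRequest / PublishXmlResponse

// "Xrm" is the name of a CRM connection string in the .config file and "Xrm2011"
// is the namespace the early-bound classes were generated into (both illustrative only)
using (var xrm = new Xrm2011.XrmServiceContext("Xrm"))
{
  // the Chart/View/Dashboard conversion code below runs inside this scope
}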

Charts

// chart
var chart =
  (
    from uqv in xrm.UserQueryVisualizationSet
    where uqv.Name == "Pogo's Chart"
    select uqv
  ).FirstOrDefault();

Xrm2011.SavedQueryVisualization newChart = new Xrm2011.SavedQueryVisualization
{
  Name = chart.Name,
  DataDescription = chart.DataDescription,
  Description = chart.Description,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  PresentationDescription = chart.PresentationDescription,
  PrimaryEntityTypeCode = chart.PrimaryEntityTypeCode,
  WebResourceId = chart.WebResourceId
};

xrm.Create(newChart);

Views

// view
var view =
  (
    from uq in xrm.UserQuerySet
    where uq.Name == "Pogo's View"
    select uq
  ).FirstOrDefault();

Xrm2011.SavedQuery newView = new Xrm2011.SavedQuery
{
  Name = view.Name,
  AdvancedGroupBy = view.AdvancedGroupBy,
  CanBeDeleted = new BooleanManagedProperty(true),
  ColumnSetXml = view.ColumnSetXml,
  ConditionalFormatting = view.ConditionalFormatting,
  Description = view.Description,
  FetchXml = view.FetchXml,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  LayoutXml = view.LayoutXml,
  QueryType = view.QueryType,
  ReturnedTypeCode = view.ReturnedTypeCode
};

xrm.Create(newView);

Dashboards

// dashboard
var dashboard =
  (
    from d in xrm.UserFormSet
    where d.Name == "Pogo's Dashboard"
    select d
  ).FirstOrDefault();

Xrm2011.SystemForm newDashboard = new Xrm2011.SystemForm
{
  Name = dashboard.Name,
  CanBeDeleted = new BooleanManagedProperty(true),
  Description = dashboard.Description,
  FormXml = dashboard.FormXml,
  IsCustomizable = new BooleanManagedProperty(true),
  IsDefault = false,
  IsManaged = false,
  ObjectTypeCode = dashboard.ObjectTypeCode,
  Type = dashboard.Type
};

newDashboard.Id = xrm.Create(newDashboard);
PublishXmlRequest reqPublish = new PublishXmlRequest
{
  ParameterXml = string.Format(
    @"
      <importexportxml>
        <dashboards>
          <dashboard>{0}</dashboard>
        </dashboards>
      </importexportxml>",
    newDashboard.Id)
};
PublishXmlResponse respPublish = (PublishXmlResponse)xrm.Execute(reqPublish);

I had issues with privileges on the newly migrated Dashboard the first time I tried it.  The addition of the PublishXmlRequest fixed it.  I had no such issue with the Views or Charts.

Reference Implementation

You can download a reference implementation of the code described above from the following:

http://sdrv.ms/W1z1Ra

It contains a CRM 2011 Managed Solution that allows Users to migrate User objects (Views, Charts, Dashboards) to System equivalents.  I am in the process of uploading the source code to Codeplex; upon completion you will be able to download and modify as you see fit.

NB: The current implementation will not migrate User Dashboards that contain User components.  A future version will migrate embedded components in addition to the Dashboard that houses them.

Update

You can now download the Managed Solution and/or all source code from the following Codeplex site:

http://userobjectmigration.codeplex.com

January 6, 2013

Using PreEntityImages and PostEntityImages in Custom Workflow Activities

Filed under: Cutting Code — pogo69 [Pat Janes] @ 08:03

Overview

As with many of my blog postings, the inspiration for this post originated with a discussion on the Dynamics CRM Forums.  In the following forum posting, Mostafa El Moatassem bellah asked how to use PreEntityImages and/or PostEntityImages within a Custom Workflow Activity to obtain the values of attributes before/after the operation that triggered a Workflow:

http://social.microsoft.com/Forums/is/crmdevelopment/thread/bd40aecb-c0a1-4e50-89bf-8cb2a11a67b5

My initial response (and understanding) was that you are unable to do so.  Images (PreEntity and PostEntity) are registered against Steps, which are in turn, registered against Plugins.  No such facility exists for Custom Workflow Activities as you cannot register Steps.  The “triggers” for Workflows are instead controlled via the Workflow customisation UI.

How the CRM Implements Images in Custom Workflow Activities

After writing a short diagnostic Custom Workflow Activity, I was able to verify that the CRM utilises PreEntityImages and PostEntityImages internally to store copies of the Entity attributes before and after the operation that triggered the Workflow.

NB: While the findings in this blog post are verifiable and repeatable, it is NOT documented in the SDK and is therefore considered to be unsupported and subject to change at any time.  Use at your own risk.

Accessing the Image Collections

// attach note to primary entity instance
Entity note = new Entity("annotation");
note.Attributes.Add("objectid", new EntityReference(context.PrimaryEntityName, context.PrimaryEntityId));
note.Attributes.Add("objecttypecode", context.PrimaryEntityName);

StringBuilder sb = new StringBuilder();
sb.AppendFormat("********************************************\r\n");
sb.AppendFormat("PreEntityImages\r\n");
sb.AppendFormat("--------------------------------------------\r\n");
foreach (string imageName in context.PreEntityImages.Keys)
{
  sb.AppendFormat("[{0}]\r\n", imageName);

  Entity image = (Entity)context.PreEntityImages[imageName];

  foreach (string attributeName in image.Attributes.Keys)
  {
    sb.AppendFormat("- {0}\r\n", attributeName);
  }
}
sb.AppendFormat("********************************************\r\n");
sb.AppendFormat("PostEntityImages\r\n");
sb.AppendFormat("--------------------------------------------\r\n");
foreach (string imageName in context.PostEntityImages.Keys)
{
  sb.AppendFormat("[{0}]\r\n", imageName);

  Entity image = (Entity)context.PostEntityImages[imageName];

  foreach (string attributeName in image.Attributes.Keys)
  {
    sb.AppendFormat("- {0}\r\n", attributeName);
  }
}

note.Attributes.Add("notetext", sb.ToString());
service.Create(note);

In effect, the code creates a Note and attaches it to the Primary Entity for which the Workflow was triggered.  The Note text contains a list of every PreEntityImage and PostEntityImage with the names of their respective attributes.
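
For reference, the snippet assumes it is running inside the Execute method of a Custom Workflow Activity, with context and service obtained in the usual (supported) way.  A minimal sketch of that wrapper might look like the following (the class name is illustrative only):

using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

public class ImageDiagnostics : CodeActivity
{
  protected override void Execute(CodeActivityContext executionContext)
  {
    // the workflow execution context exposes PreEntityImages/PostEntityImages
    IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();

    // organisation service used to create the diagnostic Note
    IOrganizationServiceFactory serviceFactory = executionContext.GetExtension<IOrganizationServiceFactory>();
    IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

    // ... the diagnostic code above goes here ...
  }
}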

Image Collection Names

I constructed a simple Workflow on the Account entity and triggered an update with the following results:

********************************************
PreEntityImages
--------------------------------------------
[account]
- accountclassificationcode
- accountid

[PreBusinessEntity]
- accountclassificationcode
- accountid

********************************************
PostEntityImages
--------------------------------------------
[account]
- accountclassificationcode
- accountid

[PostBusinessEntity]
- accountclassificationcode
- accountid

Thus, the PreEntityImages collection contains 2 Entity Images:

  1. The first shares the Logical Entity name of the Primary Entity
  2. The second is called PreBusinessEntity

And the PostEntityImages collection contains 2 Entity Images:

  1. The first shares the Logical Entity name of the Primary Entity
  2. The second is called PostBusinessEntity

Both PreEntity Images appear to contain the same data; likewise for the PostEntity Images.

Image Content

Update Trigger

As with the Images supplied via the Plugin architecture and/or via the Target Entity in Update Plugin operations, the Pre/Post Entity Images described above contain entries in the Attributes collection only for attributes that hold values in the CRM.  For example, if the Account email address is missing prior to the Update, the emailaddress1 attribute will exist only in the PostEntity Images.
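
Because an attribute appears in an image only when it holds a value, code that reads these images should check for the attribute before using it.  A minimal sketch (the image name matches the behaviour observed above but, given that this mechanism is undocumented, treat it as illustrative):

// the pre-image keyed by the Primary Entity's logical name, if it exists
Entity preImage = context.PreEntityImages.Contains(context.PrimaryEntityName)
  ? context.PreEntityImages[context.PrimaryEntityName]
  : null;

// emailaddress1 is absent from the image if the Account had no email address
string oldEmail = (preImage != null && preImage.Contains("emailaddress1"))
  ? preImage.GetAttributeValue<string>("emailaddress1")
  : null;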

Create Trigger

If a Workflow is triggered by a Create operation, only the PostEntity Images are created (PreEntityImages collection is empty).

On Demand Trigger

If a Workflow is triggered On Demand, only the PreEntity Images are created (PostEntityImages collection is empty).

Conclusion

As per my warning above, the usage of PreEntity and PostEntity Images in Custom Workflow Activities is undocumented and as such, unsupported.

It appears to provide a convenient mechanism to access the values of an Entity instance before and after the operation that triggered the Workflow.  However, its exclusion from the SDK means that the implementation could change at any time, causing code based on anecdotal assumption such as these to fail.

Thanks to Mostafa for the inspiration.

December 21, 2012

Restricting Active Directory Domain Access in a Hosted CRM 2011 Deployment

Filed under: Cutting Code — pogo69 [Pat Janes] @ 10:21

Overview

Being primarily responsible for our hosted CRM 2011 environment, it has bothered me for some time that the ‘New Multiple Users’ button in CRM 2011 provides our hosted Users access to our Active Directory Domain structure.  This potentially includes our hosted client list AND the names of the Users in those Organisations.

[Screenshot: the New Multiple Users button]

In CRM 4.0, it was possible to update a setting to disallow the addition of Users in a specific Organisation.  Although an equivalent mechanism exists in CRM 2011, it is accessible only via a direct database update (I am the supported mechanism nazi around here, so I’d rather not go there) – even the Microsoft CRM SDK Team were unable to assist.

Light at the End of the Tunnel

An update to a post on the Dynamics CRM forums this morning pointed me in the direction of a supported mechanism to ensure that our hosted Users will have access only to the specific AD OU in which their CRM Users reside.  The post indicates how to effect such an update via a direct database update:

http://social.microsoft.com/Forums/en-US/crm/thread/848ea8e6-3bd3-44d5-8363-cdb95a34c0d1

Supported Mechanism to Set a CRM Organisation's Active Directory UserRootPath

The same can be achieved via the following supported code:

Microsoft.Xrm.Sdk.Deployment.DeploymentServiceClient deploymentClient =
  Microsoft.Xrm.Sdk.Deployment.Proxy.ProxyClientHelper.CreateClient(
    new Uri("http://<server>:<port>/XRMDeployment/2011/Deployment.svc"));
deploymentClient.ClientCredentials.Windows.ClientCredential =
  new System.Net.NetworkCredential("<username>", "<password>", "<domain>");
// find org
var organizations = deploymentClient.RetrieveAll(Microsoft.Xrm.Sdk.Deployment.DeploymentEntityType.Organization);
var org = organizations.Where(o => o.Name.Equals("<orgname>")).SingleOrDefault();
// update UserRootPath setting
Microsoft.Xrm.Sdk.Deployment.ConfigurationEntity orgSettings = new Microsoft.Xrm.Sdk.Deployment.ConfigurationEntity
{
  Id = org.Id,
  LogicalName = "Organization"
};
orgSettings.Attributes = new Microsoft.Xrm.Sdk.Deployment.AttributeCollection();
orgSettings.Attributes.Add(new KeyValuePair<string, object>("UserRootPath", "LDAP://<domain>.<tld>/OU=<ou4>,OU=<ou3>,OU=<ou2>,OU=<ou1>,DC=<domain>,DC=<tld>"));
Microsoft.Xrm.Sdk.Deployment.UpdateAdvancedSettingsRequest reqUpdateSettings = new Microsoft.Xrm.Sdk.Deployment.UpdateAdvancedSettingsRequest
{
  Entity = orgSettings
};
Microsoft.Xrm.Sdk.Deployment.UpdateAdvancedSettingsResponse respUpdateSettings = (Microsoft.Xrm.Sdk.Deployment.UpdateAdvancedSettingsResponse)deploymentClient.Execute(reqUpdateSettings);

If you’re not sure what the distinguishedName of the target OU is, you can find it in the Active Directory Users and Computers tool:

Enable Advanced Features

[Screenshot: enabling Advanced Features]

 

Use Attribute Editor to Locate distinguishedName

[Screenshot: the distinguishedName attribute in the Attribute Editor]
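
If you would rather look the value up in code than via the ADUC UI, something along the following lines will also return the distinguishedName (purely illustrative; it uses System.DirectoryServices against the current domain, and <ou4> is a placeholder for the OU name):

using System;
using System.DirectoryServices;

// bind to the current domain and search for the target OU by name
using (DirectoryEntry rootDse = new DirectoryEntry("LDAP://RootDSE"))
using (DirectoryEntry domain = new DirectoryEntry("LDAP://" + rootDse.Properties["defaultNamingContext"].Value))
using (DirectorySearcher searcher = new DirectorySearcher(domain, "(&(objectClass=organizationalUnit)(ou=<ou4>))"))
{
  searcher.PropertiesToLoad.Add("distinguishedName");

  SearchResult result = searcher.FindOne();
  if (result != null)
  {
    Console.WriteLine(result.Properties["distinguishedName"][0]);
  }
}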

Thanks to Neil McD for the inspiration!

December 2, 2012

Adventures in Multi-Threaded CRM 2011 Client Development

Filed under: .NET, C#, CRM — pogo69 [Pat Janes] @ 12:52

Overview

I’ve recently completed development of a process to auto-magically batch import data on a weekly basis for a client.

I have no control over the format in which the data will be received; the client (or their technical representatives) have decided that I will receive the entire set of Active Associations (data representing a link between a Contact and an Account) with each import.  I will not, however, receive any indication of Inactive (that is, deactivated) associations; as such, the import process will “clean up” prior to each import, deleting all active associations and updating each affected Contact (setting a custom Status attribute to Inactive).

I will discuss the import process in another blog posting; the focus of this post is the “clean up”; specifically, the process of deleting and updating approximately 100,000 records (each) as efficiently as possible.

Early Bound Classes, Large Record Sets and Memory Consumption

The initial iteration of my clean up program simply queried for the set of all Contacts currently set to CustomStatus == Active.  It then looped through the records, setting each one to Inactive.  In pseudo-code:

var contactIds =
  (
    from c in xrm.ContactSet
    where c.new_status == 1
    select c.Id
  ).ToList();

foreach (Guid contactId in contactIds)
{
  Contact c = new Contact
  {
    Id = contactId,
    new_status = 2
  };

  xrm.Update(c);
}

This caused the console program to consume huge amounts of memory to the point that the program stopped working.  I thought it may have to do with the ServiceContext caching objects, so I turned caching off completely:

xrm.MergeOption = MergeOption.NoTracking;

It had no effect (on memory), so I started “batching” my updates.

Enter the ThreadPool

I decided to use System.Threading.ThreadPool to operate simultaneously on batches of records.  Initially, my code selected 1,000 records at a time:

var contactIds =
  (
    from c in xrm.ContactSet
    where c.new_status == 1
    select c.Id
  ).Take(1000).ToList();

foreach (Guid contactId in contactIds)
{
  ThreadPool.QueueUserWorkItem(id =>
  {
    Contact c = new Contact
    {
      Id = (Guid)id,   // the state passed to QueueUserWorkItem arrives as object
      new_status = 2
    };

    xrm.Update(c);
  }, contactId);
}

This would run for a little while, before erroring.  Trawling through the CRM Trace Log files indicated that I was encountering Timeout errors.  WCF Services have a throttling mechanism that allows only a certain number of simultaneous connections.  So I throttled my client code:

ThreadPool.SetMaxThreads(10, 10);

The first parameter limits the ThreadPool to 10 concurrently running worker threads (the second limits asynchronous I/O threads).  All other queued work items must wait for a spare thread to become available, effectively throttling my multi-threaded client code.

But it still errored; this time, I started generating SQL Deadlock errors.  They were intermittent and random.  It appeared that I still had threads performing updates while the next batch of candidate Contacts was being selected.  So I had to separate the select and update operations, “waiting” for each batch of updates to complete before going back and selecting the next batch.

In order to wait for a batch of threads, I needed to set up an array of WaitHandles.  The maximum number of handles that WaitHandle.WaitAll can wait on simultaneously is 64, so that became my batch size:

class Program
{
  #region Local Members
  private static int BATCH_SIZE = 64;
  #endregion

  #region Thread Worker Classes
  private class ContactUpdater
  {
    XrmServiceContext _xrm;
    Guid _contactId;
    ManualResetEvent _doneEvent;

    public ContactUpdater(XrmServiceContext xrm, Guid contactId, ManualResetEvent doneEvent)
    {
      this._contactId = contactId;
      this._doneEvent = doneEvent;
      this._xrm = xrm;
    }
    public void UpdateContact(Object o)
    {
      // de-activate contact
      Contact contact = new Contact
      {
        Id = this._contactId,
        new_Status = (int)Contactnew_Status.Inactive
      };

      // update
      this._xrm.Update(contact);

      // signal that we're done
      this._doneEvent.Set();
    }
  }
  #endregion

  static void Main(string[] args)
  {
    ThreadPool.SetMaxThreads(10, 10);

    using (XrmServiceContext xrm = new XrmServiceContext("Xrm"))
    {
      xrm.MergeOption = Microsoft.Xrm.Sdk.Client.MergeOption.NoTracking;

      // update ALL current active contacts
      while (true)
      {
        // retrieve batch
        List<Guid> contactIds =
          (
            from c in xrm.ContactSet
            where c.new_Status.GetValueOrDefault() == (int)Contactnew_Status.Active
            select c.Id
          ).Take(BATCH_SIZE).ToList();

        if (contactIds.Count == 0) { break; }

        ManualResetEvent[] doneEvents = new ManualResetEvent[contactIds.Count];

        // spawn processing threads
        for (int i = 0; i < contactIds.Count; i++)
        {
          ContactUpdater contactUpdater = new ContactUpdater(xrm, contactIds[i], (doneEvents[i] = new ManualResetEvent(false)));

          ThreadPool.QueueUserWorkItem(contactUpdater.UpdateContact);
        }

        // wait for all threads to complete before processing next batch
        WaitHandle.WaitAll(doneEvents);
      }
    }
  }
}

Conclusion

This provides a framework for the multi-threaded updating of a large number of CRM Entity Instances.  With a little bit of tinkering you could get it to fit most batch update situations.

You could probably also play with the ThreadPool maximum thread count; I haven’t, but 10 seemed like a conservatively good number.

December 1, 2012

CRM 2011 OData, REST and the QueryString

Filed under: CRM, Javascript — pogo69 [Pat Janes] @ 12:06

Introduction

Like most CRM developers, I have loved the newly introduced OData endpoint in CRM 2011.  It provides a simple, yet highly powerful mechanism to perform basic CRUD operations against a CRM 2011 Organisation.

Zen, and the Art of OData Query Construction

What makes working with REST so easy from client applications (Javascript and Silverlight in the case of a CRM 2011 developer) is that operations are performed by constructing an appropriate address via an HTTP Request’s QueryString.

A corollary of the implementation of a “RESTful” web service is that we can “play” with our queries directly in Internet Explorer.  I did this recently (actually I do it all the time; but I digress) to demonstrate to a fellow developer how to construct an OData query; the following is a reconstruction of that demonstration.

The Endpoint

In order to address the OData Endpoint, you must navigate to the following address:

http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc

Authentication

NB: The OData endpoint is not supported for use outside of Javascript and Silverlight within the CRM deployment.  Thus, in order to address the OData endpoint from the Internet Explorer QueryString, you must first successfully authenticate with the CRM Organisation.

If you are connecting from within the internal network on which the CRM is deployed, it is likely that you will be able to connect automatically due to Windows Integrated Security.  Otherwise, you may first have to open CRM in another tab.

EntitySet

Navigation to the root of the OData endpoint results in a collection of EntitySets being returned.  The collection is a list of the Entity Types on which OData operations may be performed.

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<service xmlns="http://www.w3.org/2007/app" xmlns:app="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" xml:base="http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/">
    <workspace>
        <atom:title>Default</atom:title>
        <collection href="OpportunityCompetitorsSet">
            <atom:title>OpportunityCompetitorsSet</atom:title>
        </collection>
        <collection href="SystemFormSet">
            <atom:title>SystemFormSet</atom:title>
        </collection>
        <collection href="RoleSet">
            <atom:title>RoleSet</atom:title>
        </collection>
        ...
    </workspace>
</service>

OpportunitySet

The Query that we were building can be defined as follows:

“When was the most recently Won Opportunity attached to the current Account?”

So, we need to interrogate the OpportunitySet.  We do this by simply appending a forward-slash and the name of the EntitySet to the end of the address:

http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc/OpportunitySet

This will return a collection of every Opportunity in the system (with ALL of each Opportunity’s attributes):

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<feed xml:base="http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
  <title type="text">OpportunitySet</title>
  <id>http://<OrganisationRoot>/XrmServices/2011/OrganizationData.svc/OpportunitySet</id>
  <updated>2012-11-30T21:44:25Z</updated>
  <link rel="self" title="OpportunitySet" href="OpportunitySet" />
  <entry>
    <id>http://<OrganisationRoot>/XRMServices/2011/OrganizationData.svc/OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')</id>
    <title type="text">Test</title>
    <updated>2012-11-30T21:44:25Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Opportunity" href="OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')" />
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/opportunitycompetitors_association" type="application/atom+xml;type=feed" title="opportunitycompetitors_association" href="OpportunitySet(guid'9cdd7cfd-6a34-e211-bea2-00012e244939')/opportunitycompetitors_association" />
    ...
    <category term="Microsoft.Crm.Sdk.Data.Services.Opportunity" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:ParticipatesInWorkflow m:type="Edm.Boolean">false</d:ParticipatesInWorkflow>
        <d:new_IsParentOngoingOpp m:type="Edm.Boolean">false</d:new_IsParentOngoingOpp>
        <d:TimeZoneRuleVersionNumber m:type="Edm.Int32" m:null="true" />
        ...
      </m:properties>
    </content>
  </entry>
</feed>

<entry>

Each <entry> node represents an Entity instance; in this case, an Opportunity.

<link>

The <link> nodes beneath <entry> represent child Entity Collections attached to the individual Entity instance.

<content><m:properties><d:AttributeSchemaName /></m:properties></content>

Each property is represented by a child node of <content><m:properties>.  The attribute nodes are named with the SchemaName of the attribute as defined in your CRM Customisations; the data type is represented by the m:type attribute.

OData Operators

$filter

The $filter operator provides a mechanism to “filter” the returned result set; much like the where clause in SQL.

“When was the most recently Won Opportunity attached to the current Account?”

So, we need to return only Won Opportunities for the current Account.  Thus, our filter will look like the following:

/OpportunitySet?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'00000000-0000-0000-0000-000000000000')

In other words, return all Opportunities with a StatusCode of 3 (Won) and a CustomerId GUID identifier matching that specified.  See the CRM SDK and the OData Specification for additional details regarding the $filter syntax.

$top and $orderby

“When was the most recently Won Opportunity attached to the current Account?”

We need the most recently Won Opportunity.  For the purposes of our query, most recently Won is defined as the most recent Actual Close Date.  Thus we need to order our result set by Actual Close Date descending; and return only the top 1 entry:

/OpportunitySet?$orderby=ActualCloseDate desc&$top=1

$select

“When was the most recently Won Opportunity attached to the current Account?”

In order to determine when the most recently Won Opportunity occurred, we only care about the Actual Close Date.  With the $select operator, we can choose which attributes get returned in the result set:

/OpportunitySet?$select=ActualCloseDate

Consolidated Query

If we put the query operators together, we end up with:

/OpportunitySet?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'00000000-0000-0000-0000-000000000000')&$top=1&$orderby=ActualCloseDate desc&$select=ActualCloseDate

The result set will look something like:

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<feed xml:base="http://dev.mrcrm.com.au:8888/MrCRM/XRMServices/2011/OrganizationData.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
  <title type="text">OpportunitySet</title>
  <id>http://dev.mrcrm.com.au:8888/MrCRM/XrmServices/2011/OrganizationData.svc/OpportunitySet</id>
  <updated>2012-12-01T01:29:33Z</updated>
  <link rel="self" title="OpportunitySet" href="OpportunitySet" />
  <entry>
    <id>http://dev.mrcrm.com.au:8888/MrCRM/XRMServices/2011/OrganizationData.svc/OpportunitySet(guid'3b225dd4-28e3-e011-8abe-001517418134')</id>
    <title type="text">5 User Hosted</title>
    <updated>2012-12-01T01:29:33Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Opportunity" href="OpportunitySet(guid'3b225dd4-28e3-e011-8abe-001517418134')" />
    <category term="Microsoft.Crm.Sdk.Data.Services.Opportunity" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:ActualCloseDate m:type="Edm.DateTime">2012-03-18T14:00:00Z</d:ActualCloseDate>
      </m:properties>
    </content>
  </entry>
</feed>

If the result set is empty, there were NO Won Opportunities attached to the current Account.  Otherwise, the <d:ActualCloseDate /> node contains our answer.

Javascript – How to Use our Query

In the interest of completeness, I will quickly show how to use our query from a Javascript Web Resource (the code assumes the existence and availability of the JSON and jQuery libraries).

var serverURL = Xrm.Page.context.getServerUrl();
var accountId = Xrm.Page.data.entity.getId().replace(/[\{\}]/g, "");
// getServerUrl() returns the Organisation root, so append the OData service path
var query = serverURL + "/XRMServices/2011/OrganizationData.svc/OpportunitySet" +
  "?$filter=StatusCode/Value eq 3 and CustomerId/Id eq (guid'" + accountId + "')" +
  "&$top=1&$orderby=ActualCloseDate desc&$select=ActualCloseDate";
$.ajax({
  type: "GET",
  contentType: "application/json; charset=utf-8",
  dataType: "json",
  url: query,
  beforeSend: function (XMLHttpRequest) {
    XMLHttpRequest.setRequestHeader("Accept", "application/json");
  },
  success: function (data, textStatus, XmlHttpRequest) {
    var results = data.d.results;

    if (results.length == 0) {
      alert("No Won Opportunities found!!");
    }
    else {
      var dateResult = results[0].ActualCloseDate;
    }
  },
  error: function (XmlHttpRequest, textStatus, errorThrown) {
    alert('OData Select Failed: ' + query);
  }
});

See the SDK for more information on the idiosyncrasies of date processing from OData:

http://msdn.microsoft.com/en-us/library/gg328025.aspx#BKMK_WorkingWithDates

SSRS Reports in CRM 2011 with SQL Server 2008 (pre-R2) – Implications for Managed Solutions

Filed under: Cutting Code — pogo69 [Pat Janes] @ 05:53

Overview

I am writing this blog posting as a warning to all regarding the deployment of SSRS Reports into a CRM 2011 Organisation where the SQL Server instance is older than SQL Server 2008 R2.

SSRS 2008 “Compatibility” in BIDS

If your SSRS Report is created in SQL Server 2008 R2 BIDS (while CRM 2011 supports the use of SQL Server 2008, most CRM 2011 deployments are, to my knowledge and certainly in my experience, running SQL Server 2008 R2), you will have an option when creating the Report to set “compatibility” to:

  • SQL 2008 R2
  • SQL 2008
  • SQL 2005

This forces BIDS to store your report using the corresponding XML Schema.

You could reasonably assume that selecting SQL 2008 would produce a report that will upload without error into a SQL Server 2008 Reporting Services deployment.  Unfortunately, if you add a Chart to your SSRS Report, such is not the case.  Uploading the report will fail with an “index out of bounds” error.

Implications

I first discovered this issue while attempting to import the PowerMailChimp solution into a client’s CRM 2011 Organisation.  The client has an OnPremise installation using SQL Server 2008 (pre-R2).

I checked (and re-checked) the schema definition in the offending Report and confirmed that the Report conformed to the SSRS 2008 schema (it did).  But it still would not import.  If I removed the report from the solution, the solution imported without error.

After a very frustrating session of Googling and experimentation, I confirmed and reliably reproduced the above error.  If you create a Report in BIDS 2008 R2, setting compatibility to SQL 2008 is not sufficient to ensure true compatibility if your Report contains a Chart.

Conclusion and Recommendation

If you are packaging a Managed Solution that contains SSRS Reports for distribution to arbitrary CRM 2011 deployments, you must either:

  1. Create your Reports in an SSRS 2008 (pre-R2) deployment; or
  2. Highlight to potential Users that the solution is compatible only with SQL Server 2008 R2+

July 12, 2012

SQL Server (2005+) Ranking Functions – A Better Mailout Boxing Algorithm

Filed under: Cutting Code, SQL Server, Transact SQL — pogo69 [Pat Janes] @ 15:41

Introduction

I’d been wanting to post about the joys of Transact-SQL’s ranking functions (new to SQL Server 2005) since learning of them.  I wrote this article originally as an explanation to my fellow development team members of what I had done and why I did it.

It then morphed into a blog/article posting that never found a home.  It was lost forever until this morning when I resurrected a copy of it from a hard drive that had been sitting in a long dead notebook.

Overview

As a developer, there is so much to love about SQL Server 2005.

In my current role, our primary responsibility, an Intranet, is largely AJAX based, so we’ve taken great pleasure in converting such things as the formerly clunky FOR XML EXPLICIT syntax to the far more elegant FOR XML PATH syntax, allowing us to generate XML documents in a largely “native” manner.

That specific need aside, the new feature that has afforded us the greatest improvements in code clarity and performance is the set of SQL Server ranking functions.

This article will describe a recently encountered problem and how it was solved using those ranking functions to improve:

  • the performance of the code;
  • the readability of the code; and
  • the maintainability of the code

Boxing Algorithms for Mailouts

In about three weeks’ time, I’ll be starting a new job; as such, I am, in my current role, starting to wind down. This means that the majority of my time has been allocated to finishing up current projects, documenting what I know and transferring knowledge to other team members.

In between times, I’ve been devoting my “spare” time to those areas of the system that I’ve “always wanted to improve, but never had the time”.

A large sub-system of our Intranet is devoted to the selection of “contact” data and the subsequent mailout of invitations to those contacts. A couple of years ago, the company expanded its operations into New Zealand; this brought with it some unique issues, not least of which were some very specific requirements from NZ Post with respect to the “boxing” of bulk mail in order to receive bulk discounts.

In New Zealand, postcodes are allocated in groups to “Lodgement Centres”. The mail for each lodgement centre must be “boxed” into groups of 300, sorted by descending postcode.

Ever since we upgraded our system from SQL Server 2000 to SQL Server 2005 and I was introduced to T-SQL’s new ranking functions, I could see that this problem was crying out to be solved using them; but I never had the time to re-write the existing code. Yesterday, I finally got around to doing so.

NB: You’ll see the use of the new XML datatype in both the old and new versions of the code; this is because that portion of the code had already been converted after the SQL Server 2005 upgrade.

The Old Solution (Cursors – ewww)

The former solution uses a cursor to iterate through the contact/invitation records, such that we can deal with them per lodgement centre:

        -- @xmlSeminars is actually passed into a stored procedure, but is explicitly declared for clarity in this example
        DECLARE @xmlSeminars XML;
        SELECT @xmlSeminars = '<seminars><seminar id="123" /><seminar id="456" /></seminars>';

        CREATE TABLE #invite
        (
            intSeminarContactID INT,
            intLodgementCentreID INT,
            strPostCode VARCHAR(6)
        );

        CREATE TABLE #lodgement_centre
        (
            intOrder INT IDENTITY(0, 1),
            intSeminarContactID INT,
            strPostCode VARCHAR(6)
        );

        CREATE TABLE #invite_lodgement_centre
        (
            intOrder INT,
            intSeminarContactID INT,
            intLodgementCentreID INT,
            strPostCode VARCHAR(6),
            intBox INT
        );

        INSERT
            #invite
        SELECT
            sc.intSeminarContactID,
            lcb.intLodgementCentreID,
            a.strPostCode
        FROM
            tblSeminarContact sc
        JOIN
            tblContact c ON c.intContactID = sc.intContactID
        JOIN
            tblAddress a ON a.intAddressID = c.intAddressID
        JOIN
            tblLodgementCentreBand lcb ON a.strPostCode BETWEEN lcb.strPostCode_From AND lcb.strPostCode_To
        JOIN
            @xmlSeminars.nodes('seminars/seminar') AS seminars(seminar) ON seminars.seminar.value('@id', 'INT') = sc.intSeminarID;

        DECLARE @intLodgementCentreID INT;

        DECLARE curLodgementCentre CURSOR FOR
            SELECT DISTINCT
                #invite.intLodgementCentreID
            FROM
                #invite;

        OPEN curLodgementCentre;

        FETCH NEXT FROM curLodgementCentre
        INTO @intLodgementCentreID;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- start each lodgement centre with an empty working table (TRUNCATE also resets the IDENTITY ordinal)
            TRUNCATE TABLE #lodgement_centre;

            INSERT
                #lodgement_centre
            SELECT
                #invite.intSeminarContactID,
                #invite.strPostCode
            FROM
                #invite
            WHERE
                #invite.intLodgementCentreID = @intLodgementCentreID
            ORDER BY
                #invite.strPostCode;

            INSERT
                #invite_lodgement_centre
            SELECT
                #lodgement_centre.intOrder,
                #lodgement_centre.intSeminarContactID,
                @intLodgementCentreID,
                #lodgement_centre.strPostCode,
                #lodgement_centre.intOrder / 300
            FROM
                #lodgement_centre;

            FETCH NEXT FROM curLodgementCentre
            INTO @intLodgementCentreID;
        END

        CLOSE curLodgementCentre;
        DEALLOCATE curLodgementCentre;

This causes the following problems:

  • it hampers performance, due to:
    • using programmatic iterative operations instead of set based operations, which is what SQL Server is good at
    • the overhead of cursors
  • it obfuscates the intention of the boxing algorithm

The New Solution (Ranking Functions – ROW_NUMBER and DENSE_RANK)

The new code solves all of these problems in two simply elegant, set based operations.

1. Ordering and Partitioning

        -- @xmlSeminars is actually passed into a stored procedure, but is explicitly declared for clarity in this example
        DECLARE @xmlSeminars XML;
        SELECT @xmlSeminars = '<seminars><seminar id="123" /><seminar id="456" /></seminars>';

        CREATE TABLE #invite
        (
            intSeminarContactID INT,
            intLodgementCentreID INT,
            strPostCode VARCHAR(6),
            intOrder INT
        );

        CREATE TABLE #invite_lodgement_centre
        (
            intSeminarContactID INT,
            intLodgementCentreID INT,
            strPostCode VARCHAR(6),
            intOrder INT,
            intBox INT
        );

        INSERT
            #invite
        SELECT
            sc.intSeminarContactID,
            lcb.intLodgementCentreID,
            a.strPostCode,
            (ROW_NUMBER() OVER (PARTITION BY lcb.intLodgementCentreID ORDER BY a.strPostCode DESC)) - 1      -- (1)
        FROM
            tblSeminarContact sc
        JOIN
            tblContact c ON c.intContactID = sc.intContactID
        JOIN
            tblAddress a ON a.intAddressID = c.intAddressID
        JOIN
            tblLodgementCentreBand lcb ON a.strPostCode BETWEEN lcb.strPostCode_From AND lcb.strPostCode_To
        JOIN
            @xmlSeminars.nodes('seminars/seminar') AS seminars(seminar) ON seminars.seminar.value('@id', 'INT') = sc.intSeminarID;

(1)
This initial INSERT statement makes use of the new ROW_NUMBER() ranking function that allows us to generate an ordinal number (ORDER BY) for each record in the set, optionally partitioning those ordinal numbers into groups (PARTITION BY); the results generated look something like the following:

intSeminarContactID  intLodgementCentreID  strPostCode  intOrder
123                  1                     9999         0
234                  1                     9999         1
345                  1                     9998         2
…elided…
456                  1                     9996         299
567                  1                     9996         300
678                  1                     9996         301
789                  1                     9995         302
4234                 2                     6666         0
1234                 2                     6666         1
1235                 2                     6666         2
1236                 2                     6665         3
1237                 2                     6665         4
1238                 2                     6665         5
1239                 2                     6664         6
1211                 2                     6664         7
…elided…

NB: I decrement the ordinal number returned by ROW_NUMBER() by one in (1) above, to allow a simple integer division calculation in the following statement (the first 300 rows of a lodgement centre become ordinals 0 to 299, which integer-divide by 300 to give 0; the next 300 give 1; and so on):

2. Boxing

        INSERT
            #invite_lodgement_centre
        SELECT
            #invite.intSeminarContactID,
            #invite.intLodgementCentreID,
            #invite.strPostCode,
            #invite.intOrder,
            #invite.intOrder / 300 + DENSE_RANK() OVER (ORDER BY #invite.intLodgementCentreID)    -- (2)
        FROM
            #invite;

(2)
The first part of statement (2), #invite.intOrder / 300, divides our invitations into boxes of 300.

The second part of the statement “corrects” the boxing such that each lodgement centre has a distinct set of boxes; this works because DENSE_RANK() “ranks” the dataset according to the content of the ORDER BY clause. We use DENSE_RANK() instead of RANK() so that the intBox box identifiers are kept contiguous (RANK() would jump by the number of rows in each preceding lodgement centre, leaving gaps in the box numbers).

So now, our new dataset has each invitation record, allocated to a “box” of 300 or fewer, where each box contains records from only a single lodgement centre.

intSeminarContactID  intLodgementCentreID  strPostCode  intOrder  intBox
123                  1                     9999         0         1
234                  1                     9999         1         1
345                  1                     9998         2         1
…elided…
456                  1                     9996         299       1
567                  1                     9996         300       2
678                  1                     9996         301       2
789                  1                     9995         302       2
4234                 2                     6666         0         3
1234                 2                     6666         1         3
1235                 2                     6666         2         3
1236                 2                     6665         3         3
1237                 2                     6665         4         3
1238                 2                     6665         5         3
1239                 2                     6664         6         3
1211                 2                     6664         7         3
…elided…

Conclusion

As you can see, the ranking functions clean up the code quite considerably; in addition to being far simpler, the code is now far easier to maintain, with the original intention of the code no longer obfuscated behind a wall of crazy cursor logic.

A DBMS will always be far happier working with set based operations; the introduction of ranking functions in SQL Server 2005 is one more way in which we can keep SQL Server, and those of us who work with it every day, happy.

July 2, 2012

CRM 2011 – Custom Code Validation Tool – UR9 (AKA R8) Preparation

Filed under: Cutting Code — pogo69 [Pat Janes] @ 15:51

Introduction

I’m making this post in an effort to alleviate my fellow CRM Developers’ concerns over the output from the recently released ‘Microsoft Dynamics CRM 2011 Custom Code Validation Tool’:

http://www.microsoft.com/en-us/download/details.aspx?id=30151

I participated in the following forum thread last week:

http://social.msdn.microsoft.com/Forums/en-US/crmdevelopment/thread/3bdd0926-b66f-4c69-9e5c-bd43e400bf08

wherein the Tool picked up code that was tagged as “creating problems in browsers other than IE”.  The offending code is as follows:

Xrm.Page.getAttribute("attributeName").getSelectedOption().text

which is supported via the published Xrm.Page object model documented in the CRM SDK.

False Positives

I thought it odd that supported code would be deprecated in the upcoming Rollup, regardless of the browser.  After seeing the Code Validation Tool in action (and decompiling it so that I knew exactly what it was doing), I am convinced that this, and possibly MANY other such instances of coding violations, will be "false positives" – that is, such code will be flagged as potentially unsupported, but will in fact continue to work on ALL browsers.

The release notes for the tool mention the possibility of such “false positives” but I think the impact of such may be missed by many users.

How the Tool Works

The Tool contains a list of “offending” code phrases; amongst which are snippets such as:

.Text
.InnerText
.Value
.selectSingleNode("...")

The tool uses these phrases to perform string pattern matching on your code.  If it finds a match, the code is highlighted as “offending”.

Obviously, if your code is using .Value or .Text to interrogate and/or manipulate the DOM of a CRM Form, it is unsupported and should quite correctly be flagged as such.  However, the tool cannot interrogate the semantics of your code – merely the syntax.  If, for instance, you are using .Value to build a Money attribute for submission via OData/REST, or if you use .selectSingleNode() to process the results of a SOAP query, your code is still supported, but will be flagged otherwise.

As such, I’m reasonably certain that the instance in the link above remains valid and was merely a “false positive”.

Conclusion

If you’re in the habit of using unsupported code mechanisms, use the Tool and find out where you will almost certainly have to make changes.  But use it with caution and make sure that you don’t end up fixing things that weren’t broken.

April 5, 2012

CRM LINQ Provider – Converting Expressions to QueryExpression and/or FetchXml

Filed under: Cutting Code — pogo69 [Pat Janes] @ 16:59

Overview

Yet another post for which the impetus was a question posed on the MSDN Support Forums for CRM.

This time, someone wanted to know if a CRM LINQ query could be converted to its QueryExpression and/or its FetchXml equivalent:

http://social.msdn.microsoft.com/Forums/en/crm/thread/8c5fe0fd-d55b-41ea-a0b2-a703eccf06f1

As always, I love a challenge, so I opened up my favourite decompiler and got to work trying to figure out:

  1. How the SDK converts an Expression (the internal representation of a LINQ Query and its constituent parts) to a QueryExpression
  2. How I might convince said SDK method(s) to do the same thing for me

QueryProvider

The results of my investigations turned up the QueryProvider class, which exposes the very method I required (Translate):

namespace Microsoft.Xrm.Sdk.Linq
{
 internal class QueryProvider : IQueryProvider
{
  public QueryExpression Translate(Expression expression)
  {
   Projection projection = null;
   bool flag;
   bool flag1;
   NavigationSource navigationSource = null;
   List<LinkLookup> linkLookups = null;
   return this.GetQueryExpression(expression, out flag, out flag1, out projection, ref navigationSource, ref linkLookups);
  }
 }
}

Awesome!!  Except…

…the QueryProvider class is marked internal.  So I can’t get to it.

One can get at a CRM LINQ query’s QueryProvider via the Provider property:

var contacts =
 (
  from c in xrm.ContactSet
  where c.LastName.StartsWith("B")
  select new Xrm2011.Contact
  {
   FirstName = c.FirstName,
   LastName = c.LastName,
   ParentCustomerId = c.ParentCustomerId
  }
 );
IQueryProvider queryProvider = contacts.Provider;

As you can see, the Provider property is exposed via its interface (IQueryProvider) rather than the underlying QueryProvider object – it has to be, in fact, as QueryProvider is unavailable outside of the Microsoft.Xrm.Sdk.dll assembly.

Reflection to the Rescue!!

NB: WARNING – this is highly unsupported and may break at any time in the future without prior warning!!

Luckily, provided your code has sufficient privileges, reflection allows us to obtain access to fields, properties and methods that are otherwise hidden from us by access modifiers such as internal.

Thus, to convert your CRM LINQ queries to QueryExpression and/or FetchXml, you may use the following:

var contacts =
 (
  from c in xrm.ContactSet
  where c.LastName.StartsWith("B")
  select new Xrm2011.Contact
  {
   FirstName = c.FirstName,
   LastName = c.LastName,
   ParentCustomerId = c.ParentCustomerId
  }
 );

IQueryProvider queryProvider = contacts.Provider;

MethodInfo translateMethodInfo = queryProvider.GetType().GetMethod("Translate");
QueryExpression query = (QueryExpression)translateMethodInfo.Invoke(queryProvider, new object[] { contacts.Expression });

QueryExpressionToFetchXmlRequest reqConvertToFetchXml = new QueryExpressionToFetchXmlRequest { Query = query };
QueryExpressionToFetchXmlResponse respConvertToFetchXml = (QueryExpressionToFetchXmlResponse)xrm.Execute(reqConvertToFetchXml);

System.Diagnostics.Debug.Print(respConvertToFetchXml.FetchXml);

Conclusion

Enjoy!!  But be careful…

Thanks to sebastian.mayer.67 for the inspiration.

 

CRM 2011 – CRUD vs Execute – Which is Faster?

Filed under: C#, CRM, Cutting Code — pogo69 [Pat Janes] @ 12:35

Overview

An interesting discussion began on the MSDN Support Forums for CRM Development recently, regarding the performance of the various Web Service CRUD methods compared to their Execute-wrapped equivalents.

According to the SDK:

http://msdn.microsoft.com/en-us/library/42dcebf5-a624-45b9-b719-20e5882d5ca2(MSDN.10)#Performance

We are encouraged to use the CRUD methods as they are faster:

Use Common Methods

The common methods are faster than using the Execute method together with the corresponding message. The following table lists these methods and messages.

Method            Message
Create            CreateRequest
Delete            DeleteRequest
Update            UpdateRequest
Retrieve          RetrieveRequest
RetrieveMultiple  RetrieveMultipleRequest

The obvious question was raised – why?

I gave an initial hypothesis which, while it seemed quite valid, was admittedly an educated guess at best.  Upon being challenged, I decided to dig deeper.  You can see the results of that discussion in the originating thread:

http://social.msdn.microsoft.com/Forums/en/crmdevelopment/thread/8d679824-8cd6-4a27-ad4d-11974dc5403b

Decompiling is Your Friend

Although peeking inside the inner workings of the CRM is largely discouraged, I find it an extremely useful exercise from time to time.  Not only do I occasionally learn something about the way the CRM implements certain mechanisms, it has, on occasion, allowed me to see a solution that I may otherwise never have found, or found only by trial and error.  This is particularly the case for areas in which the SDK documentation describes only the relevant API (the what and, to some extent, the how) and not the where, when and why.

In this case, I ripped apart only enough of the relevant assemblies to verify that, as per the hypothesis in another thread linked from the one discussed here, the "Create" message and its other CRUD siblings are "wrapped" in a call to Execute.  As I mention in my final reply, however, the Execute web service call carries its own overhead in that the inner message must be extracted.  Without digging too deep (or going nuts, decompiling the entire application and running some profiling software over it), it appeared to me that things were about even with respect to performance.

Reality (The Results are In!!)

In the end, it doesn’t really matter what the SDK says; it doesn’t really matter what a code analysis says.  It matters only what your real-world experience is (THE END RESULT) when you attempt to use these mechanisms.  So I wrote some code and came to my own conclusions, with the code and results as follows:

System.Diagnostics.Stopwatch stopWatchCreate = new System.Diagnostics.Stopwatch();
System.Diagnostics.Stopwatch stopWatchExecute = new System.Diagnostics.Stopwatch();

using (var xrm = new Xrm2011.XrmServiceContext("MrCRM.Dev"))
{
	xrm.MergeOption = Microsoft.Xrm.Sdk.Client.MergeOption.NoTracking;

	WhoAmIRequest reqWhoAmI = new WhoAmIRequest();
	WhoAmIResponse respWhoAmI = (WhoAmIResponse)xrm.Execute(reqWhoAmI);

	for (int i = 0; i < 10000; i++)
	{
		// execute
		stopWatchExecute.Start();

		{
			CreateRequest req = new CreateRequest
			{
				Target = new Entity
				{
					LogicalName = "contact"
				}
			};
			req.Target.Attributes.AddRange(
				new KeyValuePair<string, object>("firstname", string.Format("Person{0}", i)),
				new KeyValuePair<string, object>("lastname", string.Format("Name{0}", i)),
				new KeyValuePair<string, object>("telephone1", string.Format("555-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone2", string.Format("554-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone3", string.Format("553-{0:0000}", i)),
				new KeyValuePair<string, object>("address1_line1", string.Format("{0} Place Street", i)),
				new KeyValuePair<string, object>("address1_city", string.Format("CityVille{0}", i)),
				new KeyValuePair<string, object>("address1_stateorprovince", string.Format("State{0}", i)),
				new KeyValuePair<string, object>("address1_postalcode", string.Format("553-{0:0000}", i))
			);

			CreateResponse resp = (CreateResponse)xrm.Execute(req);
		}

		stopWatchExecute.Stop();

		// create
		stopWatchCreate.Start();

		{
			Entity target = new Entity
			{
				LogicalName = "contact"
			};
			target.Attributes.AddRange(
				new KeyValuePair<string, object>("firstname", string.Format("Person{0}", i)),
				new KeyValuePair<string, object>("lastname", string.Format("Name{0}", i)),
				new KeyValuePair<string, object>("telephone1", string.Format("555-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone2", string.Format("554-{0:0000}", i)),
				new KeyValuePair<string, object>("telephone3", string.Format("553-{0:0000}", i)),
				new KeyValuePair<string, object>("address1_line1", string.Format("{0} Place Street", i)),
				new KeyValuePair<string, object>("address1_city", string.Format("CityVille{0}", i)),
				new KeyValuePair<string, object>("address1_stateorprovince", string.Format("State{0}", i)),
				new KeyValuePair<string, object>("address1_postalcode", string.Format("553-{0:0000}", i))
			);
			xrm.Create(target);
		}

		stopWatchCreate.Stop();
	}
}

System.Diagnostics.Debug.Print("Create:\t{0} milliseconds", stopWatchCreate.ElapsedMilliseconds);
System.Diagnostics.Debug.Print("Execute:\t{0} milliseconds", stopWatchExecute.ElapsedMilliseconds);

I tried to make it as “fair” as possible by:

  • Preceding the real code with a WhoAmI request to initialise the connection
  • Interspersing the calls
  • Running the entire thing twice, wherein the 1st iteration placed the Create 1st, Execute 2nd; and the 2nd iteration reversed as above

The results?

Create wins!!  But only marginally, particularly given we’re creating 10,000 records and the difference was quite small.

Create: 382873 milliseconds
Execute: 384767 milliseconds

Create: 404256 milliseconds
Execute: 406978 milliseconds

Conclusion

So, I would say that all in all:

  1. Unless you NEED to scrounge every last millisecond of performance out of your application, code whichever way you prefer
  2. Don’t blindly believe everything you read in the SDK – test and confirm

Thanks to Laughing John for the inspiration.
