Pogo69's Blog

February 21, 2011

SQL Server Database Maintenance for the CRM Developer

Filed under: CRM, DBMS Management — pogo69 [Pat Janes] @ 14:45

Overview

SQL Server has always been relatively easy to install and configure.  The GUI tools provided make “part-time DBA” duties a reality for most developers working with Microsoft products; I’m sure this applies, at least on occasion, to almost every developer working with Dynamics CRM.

Unfortunately, the foisting of DBA duties upon unsuspecting developers is not usually accompanied by commensurate training.  The ability to “point-n-click” administer a SQL Server database rarely translates into optimum performance, and can just as easily bring a database to a standstill.

The one area in which I have found every single production CRM database lacking is index maintenance.

Index Maintenance

To allow you to efficiently interact with your data (CRUD operations), the SQL Server query optimiser uses index statistics to formulate an execution plan.  Indexes are stored, just like the data to which they point, in pages.  When a page is filled (or filled to a certain level, dependent upon the “fill factor”, which I will discuss briefly in the following section), further additions to the index will be placed in a subsequent page.

Over time the statistics pertaining to an index become stale; the data within an index becomes fragmented.  Regular Index Maintenance can be used to help alleviate these issues.
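Where stale statistics (rather than physical fragmentation) are the immediate problem, they can also be refreshed directly.  A minimal sketch; the table name is illustrative only:

-- refresh statistics for a single table (illustrative CRM base table)
UPDATE STATISTICS dbo.ContactBase WITH FULLSCAN;

-- or refresh statistics across the entire database
EXEC sp_updatestats;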

Fill Factor

When creating an Index, you are given the option of specifying the “fill factor”.  The fill factor defines “how full each leaf level page is when the index is created”.  Choosing a fill factor appropriate to the row size of the index can influence the prevalence of page splits, and correspondingly how fragmented your index will become over time.

So, while the main focus of this post is to look at how we can “de-fragment” our indexes, intelligently defining them from the outset can help ensure that the maintenance required is minimal.
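As a sketch of what that looks like in practice, the fill factor can be specified when an index is created or rebuilt; the index and table names below are illustrative only:

-- leave 20% free space in each leaf-level page to absorb future inserts
CREATE NONCLUSTERED INDEX IX_ContactBase_LastName
 ON dbo.ContactBase (LastName)
 WITH (FILLFACTOR = 80);

-- the same option can be applied when rebuilding an existing index
ALTER INDEX IX_ContactBase_LastName ON dbo.ContactBase
 REBUILD WITH (FILLFACTOR = 80);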

For further information, explained in terms that most of us should be able to understand, and with a great deal more clarity than I am likely to be able to deliver, see the following blog posting:

Index Fill Factor & Performance Considerations

Maintenance Plans

SQL Server Management Studio (and SQL Server Enterprise Manager in SQL 2000) provides GUI tools to visually define “Maintenance Plans” that allow scheduled maintenance of your SQL Server databases.  There are a number of built-in tasks available, including:

  • Rebuild Index Task
  • Reorganize Index Task

Reorganising an Index attempts to defragment the data within the Index, much as the Windows disk defragmentation tool does with your file system.  Rebuilding an Index drops and re-creates it from scratch.

These tasks allow you to rebuild or reorganise the Indexes for any or all of your databases, and for any or all tables within each database.

At the very least, you should schedule one or both of these tasks to regularly maintain the indexes within your CRM database <OrgName_MSCRM>.  You could, for instance, schedule a nightly full reorganisation and a weekly full rebuild.  I have yet to come across a CRM implementation where even this bare minimum level of maintenance is being performed.
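If you prefer to script the equivalent of those two tasks rather than use the GUI, the per-table statements look like this (the table name is illustrative only):

-- reorganise every index on a table
ALTER INDEX ALL ON dbo.ContactBase REORGANIZE;

-- rebuild every index on a table
ALTER INDEX ALL ON dbo.ContactBase REBUILD;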

Index Fragmentation

Unfortunately, these tasks only allow you to indiscriminately reorganise/rebuild indexes.  To maintain your database efficiently, indexes should be reorganised and/or rebuilt only when they require it.  Best practice advises:

  • Index fragmentation <= 30% – REORGANIZE
  • Index fragmentation > 30% – REBUILD

To REORGANIZE an Index:

ALTER INDEX <index_name> ON <table_name> REORGANIZE;

To REBUILD an Index:

ALTER INDEX <index_name> ON <table_name> REBUILD WITH (ONLINE = ON);

NB: The reorganisation of an Index is always an online operation; that is, the reorganisation does not prevent database users from accessing the Index during query execution.  By default, however, Index rebuilds take the Index offline, and users will be unable to access the Index for the duration of the rebuild.  The (ONLINE = ON) option allows the online rebuilding of an Index, but is only available in the Enterprise Edition of SQL Server.
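If you’re not sure which edition you’re running, you can check before relying on (ONLINE = ON):

-- returns e.g. 'Enterprise Edition (64-bit)'
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;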

To discover the current level of fragmentation of an Index:

DBCC SHOWCONTIG ('<table_name>', '<index_name>');
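DBCC SHOWCONTIG still works, but has been deprecated since SQL Server 2005; its replacement is the sys.dm_db_index_physical_stats DMV.  A sketch that lists the fragmentation of every index in the current database:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;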

A Home-grown Index Maintenance Plan

It was while trying to fix an issue with “out of control” SQL Server log file growth that I came across an outstanding library of Transact-SQL that can be used to programmatically and intelligently reorganise and/or rebuild your indexes depending upon their current level of fragmentation:

Index Defrag Script v4.0 – by the SQL Fool, Michelle Ufford

By default, the script will reorganise every Index with a level of fragmentation greater than 10% and rebuild every Index with greater than 30% fragmentation.

You can use this script, as is, and it will be a vast improvement on the built-in Maintenance Plan tasks.

Log File Growth and Indiscriminate Index Rebuilds

We were experiencing the above-mentioned log file growth due to the Maintenance Plan ‘Rebuild Index Task’ and its indiscriminate rebuilding of every index in our database.  There were some very large indexes in the database in question, but many of them rarely changed and thus did not require rebuilding.  Index rebuilds are a logged operation and were therefore causing enormous growth of the log file; before I began my crusade to eradicate the rampant growth, the log file for the database in question regularly blew out to over 100GB (for a data file of approximately 120GB).

After the implementation of the new Maintenance schedule, we were able to keep the log file consistently under 5GB.

The following steps describe the sequence of events in the new schedule:

  1. Backup the LOG
  2. Set the database to ‘Bulk Logged’ mode
  3. Run the Index Defrag Script
  4. Set the database back to ‘Full’ recovery mode
  5. Backup the LOG again
  6. Shrink the LOG

We back up the LOG file both pre- and post-maintenance because of the pre-existing hourly LOG backup schedule.  This was in place so that we could restore to within one hour of any point in time, should anything catastrophic happen to the database.

If you would like to implement something similar in your own CRM database(s), you will need to obtain a copy of the Index Defrag Script v4.0 from SQL Fool (I use it as is from within my own scripts).  You will also need copies of the following:

Optimisation Script

This script is the “Meta Script” that calls and controls the entire maintenance process; it takes only two parameters:

  • @DATABASE – name of the database you wish to maintain
  • @logsize – target size, in MB, of the resultant database LOG file; you’ll have to play around a little to determine the optimum size for your DB, as the SHRINK operation will fail if you choose too small a number

NB: I’ve hardcoded the directory into which the LOG backup files are placed; I really shouldn’t have, but… I had to leave something for others to do.

CREATE PROCEDURE [dbo].[dba_optimiseDB]
(
 @DATABASE varchar(128),
 @logsize int = 2048
)
AS
 DECLARE @RC int;
 DECLARE @minFragmentation float;
 DECLARE @rebuildThreshold float;
 DECLARE @executeSQL bit;
 DECLARE @defragOrderColumn nvarchar(20);
 DECLARE @defragSortOrder nvarchar(4);
 DECLARE @timeLimit int;
 DECLARE @tableName varchar(4000);
 DECLARE @forceRescan bit;
 DECLARE @scanMode varchar(10);
 DECLARE @minPageCount int;
 DECLARE @maxPageCount int;
 DECLARE @excludeMaxPartition bit;
 DECLARE @onlineRebuild bit;
 DECLARE @sortInTempDB bit;
 DECLARE @maxDopRestriction tinyint;
 DECLARE @printCommands bit;
 DECLARE @printFragmentation bit;
 DECLARE @defragDelay char(8);
 DECLARE @debugMode bit;
 SET @minFragmentation = 10;      -- 10%
 SET @rebuildThreshold = 30;      -- 30%
 SET @executeSQL = 1;
 SET @defragOrderColumn = 'range_scan_count';
 SET @defragSortOrder = 'DESC';
 SET @timeLimit = 120;       -- 2hrs
 SET @tableName = NULL;       -- all tables
 SET @forceRescan = 1;
 SET @scanMode = 'LIMITED';      -- LIMITED / SAMPLED / DETAILED
 SET @minPageCount = 8;
 SET @maxPageCount = NULL;
 SET @excludeMaxPartition = 0;
 SET @onlineRebuild = 1;
 SET @sortInTempDB = 1;
 SET @maxDopRestriction = NULL;
 SET @printCommands = 1;
 SET @printFragmentation = 1;
 SET @defragDelay = '00:00:05';
 SET @debugMode = 0;
 -- take pre-optimise log backup
 declare @folder nvarchar(max);
 declare @file nvarchar(max);
 set @folder = N'\\data\backups\Database Maintenance\MSCRM Database Backup\CRM Optimise\';
 set @file = @DATABASE + N'_' + dbo.fFormatDateTime(GETDATE(), 'TIMESTAMP');
 EXEC dba_backupLog @DATABASE, @folder, @file;
 -- switch to bulk logged mode
 EXEC('ALTER DATABASE ' + @DATABASE + ' SET RECOVERY BULK_LOGGED');
 -- re-index your little heart out...
 EXECUTE @RC = [dba].[dbo].[dba_indexDefrag_sp]
    @minFragmentation
   ,@rebuildThreshold
   ,@executeSQL
   ,@defragOrderColumn
   ,@defragSortOrder
   ,@timeLimit
   ,@DATABASE
   ,@tableName
   ,@forceRescan
   ,@scanMode
   ,@minPageCount
   ,@maxPageCount
   ,@excludeMaxPartition
   ,@onlineRebuild
   ,@sortInTempDB
   ,@maxDopRestriction
   ,@printCommands
   ,@printFragmentation
   ,@defragDelay
   ,@debugMode;
 --switch back to full recovery mode
 EXEC('ALTER DATABASE ' + @DATABASE + ' SET RECOVERY FULL');
 -- take post-optimise log backup
 set @file = @DATABASE + N'_' + dbo.fFormatDateTime(GETDATE(), 'TIMESTAMP');
 EXEC dba_backupLog @DATABASE, @folder, @file;
 -- shrink it
 EXEC dba_shrinkLog @DATABASE, @logsize;
GO
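A hypothetical invocation, assuming an organisation database named Contoso_MSCRM and a 4GB target log size:

-- run the full optimise cycle: log backup, bulk-logged re-index, log backup, shrink
EXEC dbo.dba_optimiseDB @DATABASE = 'Contoso_MSCRM', @logsize = 4096;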

LOG Backup Script

CREATE PROCEDURE [dbo].[dba_backupLog]
(
 @DATABASE VARCHAR(128),
 @folder VARCHAR(MAX),
 @file VARCHAR(MAX)
)
AS

 DECLARE @logpath VARCHAR(MAX);
 SELECT @logpath = @folder + @file + '.TRN';

 BACKUP LOG @DATABASE
  TO DISK = @logpath WITH NOFORMAT,
  NOINIT,
  NAME = @file,
  SKIP,
  REWIND,
  NOUNLOAD,
  STATS = 10;

 declare @backupSetId as int;
 select
  @backupSetId = position
 from
  msdb..backupset
 where
  database_name = @DATABASE
 and
  backup_set_id =
   (
    select
     max(backup_set_id)
    from
     msdb..backupset
    where
     database_name = @DATABASE
   );
 
if @backupSetId is null
  begin
   declare @error varchar(max);
   set @error = N'Verify failed. Backup information for database ''' + @DATABASE + ''' not found.';
   raiserror(@error, 16, 1);
  end;

 RESTORE VERIFYONLY
  FROM DISK = @logpath WITH FILE = @backupSetId,
  NOUNLOAD,
  NOREWIND;

GO

Shrink LOG Script

CREATE PROCEDURE [dbo].[dba_shrinkLog]
(
 @DATABASE nvarchar(128),
 @size int = 2048
)
AS
 -- build the command in a variable first: concatenating the int @size directly
 -- with string literals fails (the strings would be converted to int), and CAST
 -- cannot be called inside the EXEC() argument list
 DECLARE @sql nvarchar(max);
 SET @sql =
  'USE ' + @DATABASE + ';' +
  'declare @log_name varchar(max);' +
  'select @log_name = [name] from sys.database_files where type_desc = ''LOG'';' +
  'DBCC SHRINKFILE(@log_name, ' + CAST(@size AS nvarchar(20)) + ');';
 EXEC (@sql);

GO

Summary

I create a standalone database ‘DBA’ to house these and other database maintenance objects.  The results of the Index Defrag Script are stored in tables created as part of the installation of the scripts you can download from SQL Fool.  It can be educational to see just which Indexes are being affected, and in what way.
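To run all of this unattended, the optimise procedure can be wrapped in a SQL Server Agent job.  A minimal sketch, assuming the ‘DBA’ database described above and an illustrative job name, organisation database and schedule:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'CRM Index Maintenance';
EXEC dbo.sp_add_jobstep
  @job_name = N'CRM Index Maintenance',
  @step_name = N'Optimise Contoso_MSCRM',
  @subsystem = N'TSQL',
  @database_name = N'DBA',
  @command = N'EXEC dbo.dba_optimiseDB @DATABASE = ''Contoso_MSCRM'', @logsize = 4096;';
EXEC dbo.sp_add_jobschedule
  @job_name = N'CRM Index Maintenance',
  @name = N'Nightly',
  @freq_type = 4,               -- daily
  @freq_interval = 1,           -- every day
  @active_start_time = 10000;   -- 01:00:00
EXEC dbo.sp_add_jobserver @job_name = N'CRM Index Maintenance', @server_name = N'(local)';
GO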
