ConfigMgr 2012: determining whether there is a performance problem

The number one reason that small sites like this have performance issues is disk I/O. If this is a virtual machine, ensure that you are following the vendor recommendations for virtualizing SQL (don't over-commit processors/cores, ensure SQL data files, tempdb, and logs are on separate spindles, format SQL drives with 64KB block sizes, and split SQL data files).
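To confirm a disk I/O bottleneck from inside SQL Server itself, you can sample per-file latency from the sys.dm_io_virtual_file_stats DMV. This is a quick sketch, not a full diagnosis; sustained average read/write latencies well above 20-30 ms on the data or log drives are a common warning sign, though the right threshold depends on your storage.

```sql
-- Average read/write latency (ms) per database file since SQL Server last started
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC
```

These numbers are cumulative since the last restart, so a second sample after a busy period gives a better picture than a single snapshot.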

CPU does not matter too much; disk I/O is more important.
An MP replica might take some load off of an MP, but it also adds extra load because of SQL replication.
Content transfer can be scripted (in case you did not use a DP group).

Are you experiencing current perf issues with all roles on the site server itself? Can you describe these perf issues?

Have you set up a DB re-indexing and statistics rebuild agent task?

Have you run perfmon to baseline the perf of the system?
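If you prefer to capture part of that baseline without leaving SQL Server, some of the same counters perfmon exposes can be sampled from the sys.dm_os_performance_counters DMV. A minimal sketch (the counter names below are standard SQL Server ones; the list is just a suggested starting set):

```sql
-- Sample a few headline SQL Server counters (same data perfmon exposes)
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Batch Requests/sec',
                       'SQL Compilations/sec',
                       'Buffer cache hit ratio')
```

Note that the "/sec" counters are cumulative; sample twice and take the delta over the interval to get an actual rate.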

Is the system a VM on an over-committed host, and/or is it sharing spindles with other applications?

System Center 2012 Configuration Manager Best Practices

Taking Your Server’s Pulse


From the above blog

SQL is an everyday part of a ConfigMgr admin's life. Below is a small set of scripts we have used in production on ConfigMgr 2007 for quite a while. Standard disclaimer: test everything first.


SQL Server uses statistics to keep track of the values in an index and to determine when and how to use that particular index while processing a query. This is a horribly simplified definition (because I barely understand it), but basically it means that statistics are a way for SQL to find the best index to use. By default, when you create a database in SQL 2005 (such as the ConfigMgr database), the Auto Update Statistics option is turned on. You can check it by opening SQL Management Studio, right-clicking on the database, selecting Properties, then selecting the Options page.
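If you would rather check this setting with a query than through the GUI, the sys.databases catalog view exposes it (shown here for a database named ConfigMgr; substitute your own database name):

```sql
-- 1 = option enabled, 0 = disabled
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'ConfigMgr'
```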

Now that you know what they are, it's important to know when to manually kick off an update to the statistics. There are times when the key values in an index will change significantly – especially in the ConfigMgr database. Patch Tuesday, for example: there is a lot of new data flooding the ConfigMgr and WSUS databases as clients scan and report back patch status. Large distributions also change a large amount of data in the indexes (status from distributions and advertisements).

Auto-Update of Statistics will catch these changes, but there will be times when you want your queries to execute at their fastest without waiting for the system task to kick off. There are also times when the system task will take a lower priority than other tasks, effectively keeping your statistics out of date. When you need to update the stats on an index manually, use the following command:

UPDATE STATISTICS TABLENAME -- replace TABLENAME with the appropriate table

This works great on a single table, but who wants to do that for an entire database? Use the built-in stored procedure to update all statistics on all indexes in your database. Be aware that this can take some time, and if you don’t have Async Auto Update Statistics on, could cause queries to time-out while it’s running.

/******Code Below Here******/

USE ConfigMgr -- change to the name of your database

EXEC sp_updatestats

/******Code Above Here******/

We use this on a set schedule, every 12 hours, to keep our stats up to date, and avoid any priority problems with the auto-update process. This does have an impact on indexes, so be sure you test accordingly.
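To see whether a schedule like this is keeping up, you can check when each statistic was last updated with the STATS_DATE function. A quick sketch, to be run in the ConfigMgr database:

```sql
-- When was each statistic on the user tables last updated?
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated ASC
```

Statistics that have never been updated show NULL and sort first.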

From MS

EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')'

EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'


If you suspect that you have fragmented SQL indexes in your ConfigMgr database then what are your options? You could wait for your “Rebuild Indexes” ConfigMgr maintenance task to come around again, or you could just go ahead and rebuild those indexes quickly from SQL Management Studio.

If you want to rebuild all indexes in the ConfigMgr database, you can run this script. Keep in mind that this is a lengthy, resource-intensive operation, so it's best to do it off hours. Expect it to take quite a while to complete.

/********************CODE BELOW HERE*************/

USE SCCM -- enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

-- Cursor over all user tables in the database
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Rebuild all indexes on this table with a 90 percent fill factor
    DBCC DBREINDEX(@TableName, ' ', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor


/********************CODE ABOVE HERE*************/
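Rebuilding every index regardless of its condition is heavy-handed. A gentler alternative is to let ALTER INDEX touch only the indexes that need it, reorganizing the lightly fragmented ones and rebuilding the heavily fragmented ones. The sketch below uses the commonly cited thresholds (reorganize above roughly 5% fragmentation, rebuild above 30%, skip tiny indexes); treat those numbers as assumptions to tune for your environment:

```sql
-- Run in the ConfigMgr database. Reorganize lightly fragmented indexes,
-- rebuild heavily fragmented ones, skip small indexes (< 100 pages).
DECLARE @sql nvarchar(max)

DECLARE FixCursor CURSOR FOR
    SELECT CASE WHEN ps.avg_fragmentation_in_percent > 30
                THEN 'ALTER INDEX [' + i.name + '] ON [' + OBJECT_NAME(ps.object_id) + '] REBUILD'
                ELSE 'ALTER INDEX [' + i.name + '] ON [' + OBJECT_NAME(ps.object_id) + '] REORGANIZE'
           END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 5
      AND ps.page_count > 100
      AND i.name IS NOT NULL  -- skip heaps

OPEN FixCursor
FETCH NEXT FROM FixCursor INTO @sql
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql
    FETCH NEXT FROM FixCursor INTO @sql
END
CLOSE FixCursor
DEALLOCATE FixCursor
```

REORGANIZE is always online and can be interrupted safely, which makes this easier to run inside a maintenance window than a full rebuild of everything.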

But what if you just want to rebuild a single index? First, you need to know the name of the index. You can find that out a variety of ways, including just looking directly at the table in SQL Management Studio – there is a sub-folder per table for indexes.

Once you have the index name (and the table it belongs to), you can run this quick statement to rebuild just that particular index:

/********************CODE BELOW HERE*************/

DBCC DBREINDEX('TABLE_NAME_GOES_HERE', 'INDEX_NAME_GOES_HERE', 90) -- table name first, then index name

/********************CODE ABOVE HERE*************/

BTW, the 90 in both queries is a fill factor. Typically you won't have to change that.

While working on a performance problem with a couple of very talented SQL gurus I was handed this script. It checks, among other things, the fragmentation of the indexes in the ConfigMgr database. This will help tell you if your rebuild indexes task is being run often enough, or if you need to target specific indexes more often with an additional SQL Task.

/**************CODE BELOW HERE********************/

SELECT * FROM sys.dm_db_index_physical_stats
    (DB_ID('ConfigMgr'), NULL, NULL, NULL, NULL)
ORDER BY 9 DESC;

/*************CODE ABOVE HERE**********************/

Be sure you change the “ConfigMgr” above to the name of your database!!

This is going to return quite a few indexes, and if you check the 9th column (avg_fragmentation_in_percent), you can see how badly they are torn up. Now, before you get too upset that most of them read 100%, keep in mind the Page_Count column. If an index only has 5 pages, and it shows 100% fragmentation, then that is not really that big of a deal. It just means that those 5 pages aren’t in order. If, however, you see an index with 20,000 pages and it shows a high fragmentation percentage….well, then you can be sure that you aren’t getting all of the performance you can from your SQL database.

If you need to find out which index has the high fragmentation, check out the 2nd column, object_id. Note the object_id and run this query:

/*********CODE BELOW HERE***************/

SELECT name
FROM sys.objects -- run this in the ConfigMgr database
WHERE object_id = OBJECT_ID

/*********CODE ABOVE HERE****************/

Be sure you change the “OBJECT_ID” above to the appropriate ID you want to query!!

This will return the ‘common’ name for the index, and should give you a good idea what table it’s attached to.

So, keep in mind that the 9th column – avg_fragmentation_in_percent – will show 100% for quite a few indexes….but the page count on those indexes should be low. If you find an index with a high number of pages, and high fragmentation percent, then consider running your Rebuild Indexes task more often, or target specific indexes with a SQL task.
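The two steps above (find the fragmented indexes, then look up their names) can be combined into a single query. This sketch joins the DMV to sys.indexes so the output already includes table and index names, and filters out small indexes per the advice above (the 1,000-page cutoff is an arbitrary choice; adjust to taste):

```sql
-- Run in the ConfigMgr database: worst fragmented indexes, names included
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.page_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.page_count > 1000 -- ignore small indexes
ORDER BY ps.avg_fragmentation_in_percent DESC
```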

Generic Space:

Ever wonder what is taking up all the space in your ConfigMgr database? This SQL query will show row count, reserved/used data size, and reserved/used index size.

This code works for any database, not just ConfigMgr. Enjoy!

/**********************CODE BELOW HERE*************************/

declare @id int
declare @pages int

create table #spt_space
(
    objid int null,
    rows int null,
    reserved dec(15) null,
    data dec(15) null,
    indexp dec(15) null,
    unused dec(15) null
)

set nocount on

-- Create a cursor to loop through the user tables
declare c_tables cursor for
select id
from sysobjects
where xtype = 'U'

open c_tables

fetch next from c_tables into @id

while @@fetch_status = 0
begin

    /* Code from sp_spaceused */
    insert into #spt_space (objid, reserved)
    select objid = @id, sum(reserved)
    from sysindexes
    where indid in (0, 1, 255)
    and id = @id

    select @pages = sum(dpages)
    from sysindexes
    where indid < 2
    and id = @id

    select @pages = @pages + isnull(sum(used), 0)
    from sysindexes
    where indid = 255
    and id = @id

    update #spt_space
    set data = @pages
    where objid = @id

    /* index: sum(used) where indid in (0, 1, 255) - data */
    update #spt_space
    set indexp = (select sum(used)
                  from sysindexes
                  where indid in (0, 1, 255)
                  and id = @id)
                 - data
    where objid = @id

    /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
    update #spt_space
    set unused = reserved
                 - (select sum(used)
                    from sysindexes
                    where indid in (0, 1, 255)
                    and id = @id)
    where objid = @id

    update #spt_space
    set rows = i.rows
    from sysindexes i
    where i.indid < 2
    and i.id = @id
    and objid = @id

    fetch next from c_tables into @id

end

close c_tables
deallocate c_tables

select top 25
    Table_Name = (select left(name,30) from sysobjects where id = objid),
    rows = convert(char(11), rows),
    reserved_KB = ltrim(str(reserved * d.low / 1024.,15,0) + ' ' + 'KB'),
    data_KB = ltrim(str(data * d.low / 1024.,15,0) + ' ' + 'KB'),
    index_size_KB = ltrim(str(indexp * d.low / 1024.,15,0) + ' ' + 'KB'),
    unused_KB = ltrim(str(unused * d.low / 1024.,15,0) + ' ' + 'KB')
from #spt_space, master.dbo.spt_values d
where d.number = 1
and d.type = 'E'
order by reserved desc

drop table #spt_space

/**********************CODE ABOVE HERE*************************/
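For a quick spot check, the built-in sp_spaceused procedure returns the same style of numbers without any temp-table machinery:

```sql
-- Whole-database size summary
EXEC sp_spaceused

-- Size detail for one table (replace with the table you are interested in)
EXEC sp_spaceused 'TABLE_NAME_GOES_HERE'
```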

Open Transactions:

Open transactions – transactions that run too long or are hung – can wreak havoc on the ConfigMgr database. Noticing a backlog of files (DDRs or MIFs)? Slow processing in general? Collections having a problem updating? You might want to give this a quick check.

First, let's find out what the oldest transaction on the ConfigMgr database is. Open SQL Management Studio and start a new query. Change the focus to your SCCM database, and run this command first:

DBCC OPENTRAN


Did any transactions come back? If so, check their start time and make sure they aren’t too long in the past. Anything past a couple of minutes, except for the largest of queries, would be unacceptable. Note the Process ID – we will use that next.

So now we know what the oldest transaction is, but what do we do with this info? Let's see what that transaction is doing. Run this command next:

DBCC INPUTBUFFER(SPID) -- replace SPID with the Process ID you noted


You will see a snippet of the code that the process is running. Does this help track down what the open transaction is? Perhaps a long-running query rule for a collection, or a site maintenance task that is hung. Typically you can get a decent idea what it is by examine the output of Inputbuffer.

Now that you know what is causing the problem, how do you deal with it? Well, if you are sure that you want to stop this transaction, you can do it easily with one more command. Use it with caution!

KILL SPID -- replace SPID with the Process ID of the offending transaction


Note that it is sometimes helpful to do these same steps on the tempdb of the SQL server the ConfigMgr database sits on, especially for long-running transactions.
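On SQL 2005 and later you can also get the same picture from the transaction DMVs, which list every active transaction with its start time and owning session in one result set. A sketch:

```sql
-- Active transactions, oldest first, with the session that owns them
SELECT at.transaction_id,
       at.name,
       at.transaction_begin_time,
       st.session_id
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
  ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time ASC
```

The session_id column is the same Process ID you would feed to the Inputbuffer and kill steps above.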