Wednesday, January 26, 2011

UNIQUEIDENTIFIER and a primary key constraint

I would like to share a tip which can affect performance in a big way.
 
This is about the UNIQUEIDENTIFIER data type... although I suspect it won't be a favourite for many after reading this.
 
When the column you choose as your clustered index key is variable length, you are adding 4 bytes to every row (2 bytes for the variable-column count plus 2 bytes for that column's entry in the offset array), which is pure overhead.
 
Now, the interesting point is that UNIQUEIDENTIFIER is a 16-byte data type, four times as wide as an INT, and because the clustered key is carried in every nonclustered index row as well, you pay for that width again and again.
 
Now you can imagine what I am trying to tell you :) ...... of course you are performing unnecessary extra IO by dragging those extra bytes through every read and write.
 
Also, UNIQUEIDENTIFIER values generated by NEWID() are not sequential, which means SQL Server has to work much harder when you make one the clustered index key (a clustered index works best when new keys arrive in order); random keys land on random pages and cause page splits.
 
In a nutshell, UNIQUEIDENTIFIER is a very big data type, and when you are inserting a large number of rows the random key values cause huge fragmentation; consequently the size of your clustered index grows huge as well.
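If you want to see this for yourself, a quick query against sys.dm_db_index_physical_stats will show it; the database and table names below are just placeholders for your own:

SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('MyDb'),              -- placeholder database name
         OBJECT_ID('dbo.Orders'),    -- placeholder table name
         1,                          -- index_id 1 = the clustered index
         NULL, 'LIMITED');

Run it after a burst of inserts into a table clustered on NEWID() values and avg_fragmentation_in_percent will typically be very high.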
 
Please use a data type like INT instead, and keep your clustered index key narrow, sequential (UNIQUEIDENTIFIER is non-sequential) and fixed in size.
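Here is a minimal sketch of what I mean, with made-up table and column names:

-- Narrow, sequential, fixed-size clustered key: a 4-byte INT IDENTITY
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) NOT NULL,
    OrderGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(), -- keep the GUID as an ordinary column if the application needs one
    OrderDate DATETIME NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- If the key really must be a GUID, NEWSEQUENTIALID() (allowed only as a
-- column default) at least generates ordered values and avoids random splits:
--   OrderGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID()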
 
Believe me, this will make your life much easier :)
 

Does a backup record changes happening while the backup job is running?

I was at one of my client sites when a very simple though very smart question was thrown at me by one of the developers:
The question was "Does a backup record any changes happening while the backup process is running?"

The answer is "Yes". You can perform a small exercise: insert many rows into a single table, kick off the database backup process, then open another session and run a few DDL statements against the same database.

Once the backup finishes, try restoring the database from the .bak file you just took and you will see that those new objects (tables, in my experiment) were captured by the process, even though I ran those statements after kicking off the backup. This works because a full backup also includes the portion of the transaction log generated while the backup was running, so the restore can replay activity right up to the end of the backup.
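Here is roughly how I ran the exercise; the database name, paths and logical file names are just examples, so adjust them to your environment:

-- Session 1: start a full backup (use a database big enough that this takes a while)
BACKUP DATABASE MyTestDb
TO DISK = 'C:\Backups\MyTestDb.bak'
WITH INIT;

-- Session 2: while the backup above is still running
CREATE TABLE dbo.CreatedDuringBackup (ID INT);

-- Afterwards: restore to a new name and look for the table
RESTORE DATABASE MyTestDb_Restored
FROM DISK = 'C:\Backups\MyTestDb.bak'
WITH MOVE 'MyTestDb'     TO 'C:\Data\MyTestDb_Restored.mdf',
     MOVE 'MyTestDb_log' TO 'C:\Data\MyTestDb_Restored_log.ldf';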

Try it yourself :) :)

Wednesday, January 12, 2011

Is recompilation good or bad?

I am assuming you all know what recompilation means here. In simple words, for those who don't: SQL Server recompiles a statement (statement-level recompilation, in SQL Server 2008) or a procedure when it realises that the plan sitting in the cache is no longer beneficial, or the plan has aged out, for specific reasons.
Recompilation is useful when a new execution strategy is required to process the query, but since recompilation burns extra CPU cycles, too-frequent recompilation of a stored procedure can be expensive.
You have to make sure you know the cause of the recompilation of a stored procedure. There can be various reasons behind it; a few of them are listed here (a small sketch of two of them follows the list):
· Schema changes to the temporary tables defined inside the body of the procedure
· General DDL changes to the local tables
· A change in the table's statistics due to lots of insertions or a shift in the distribution of data
· Forced recompilation via the RECOMPILE hint
· Changing a SET option
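To make a couple of these concrete, here is a small sketch with made-up object names, showing the temp-table DDL case and the explicit recompile hints:

CREATE PROCEDURE dbo.usp_RecompileDemo
AS
BEGIN
    CREATE TABLE #Work (ID INT);
    INSERT INTO #Work (ID) VALUES (1);
    -- DDL on the temp table after it is already in use forces a recompile
    CREATE INDEX IX_Work ON #Work (ID);
    SELECT ID FROM #Work;
END;
GO

EXEC dbo.usp_RecompileDemo WITH RECOMPILE;   -- forced, procedure level
-- SELECT ... OPTION (RECOMPILE);            -- forced, statement level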

If you know the cause of too much recompilation (I say "too much" because occasional recompilation is not the concern here) then you should look to rectify that cause. For example, if the frequent recompilation is happening due to a change in the distribution of data or a statistics change, you can use the KEEPFIXED PLAN hint to avoid it.
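For example, assuming a hypothetical dbo.Orders table that takes heavy inserts, the hint is applied per statement like this:

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20110101'
OPTION (KEEPFIXED PLAN);  -- do not recompile just because statistics changed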
If you are not dealing with large data sets, use table variables instead of temporary tables. Unlike temporary tables, table variables have no statistics created on them, hence you avoid the unnecessary recompilations that temporary tables can cause.
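A quick sketch of the swap, again with made-up names:

-- Table variable: no statistics are created on it, so no statistics-driven recompiles
DECLARE @Work TABLE (ID INT PRIMARY KEY, Amount MONEY);

INSERT INTO @Work (ID, Amount)
SELECT OrderID, TotalAmount
FROM dbo.Orders
WHERE OrderDate >= '20110101';

SELECT ID, Amount FROM @Work;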
Using a brilliant tool like Profiler, you can capture the statements causing recompilation (the SP:Recompile and SQL:StmtRecompile events, whose EventSubClass column even tells you the reason). Once you know which statements are causing recompilation, you can dig into the cause and work on it.

So next time your trace shows a lot of recompiles, you know what to look for :)

Tuesday, January 11, 2011

Why is my buffer cache hit ratio high but page life expectancy very low?

Buffer cache hit ratio and page life expectancy are two important counters which help to identify SQL Server memory issues. I have often seen scenarios where these two counters contradict each other.

Recently I was monitoring a production SQL Server using Spotlight and was constantly getting the page life expectancy alert, but when I sneaked a look at the buffer cache hit ratio counter, it was at 99.6%. This happens especially on 32-bit SQL Server.

So the question that came to my mind was: "Is there really a memory bottleneck?"

The answer is "Yes". What this means is that logical reads are still being served from the buffer cache, but the pages are not staying there long enough. The hit ratio stays high largely because read-ahead brings pages into the buffer pool just before a query touches them, so reads still count as cache hits even while the pool is churning.

I would trust the page life expectancy alert over the buffer cache hit ratio. The buffer cache hit ratio alone is simply not enough to conclude that there is a memory bottleneck.
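If you do not have a monitoring tool handy, both counters can be read straight from sys.dm_os_performance_counters; note that the hit ratio only makes sense after dividing by its base counter:

-- Page life expectancy: seconds a page is expected to stay in the buffer pool
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Buffer cache hit ratio = ratio counter / base counter
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters r
JOIN sys.dm_os_performance_counters b
  ON  b.object_name  = r.object_name
  AND b.counter_name = 'Buffer cache hit ratio base'
WHERE r.counter_name = 'Buffer cache hit ratio'
  AND r.object_name LIKE '%Buffer Manager%';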

In this kind of scenario I would also look at the IO waits, since a buffer pool under pressure shows up directly as time spent waiting on physical reads.
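A quick look at the PAGEIOLATCH waits tells you how much time sessions spend waiting for pages to come back from disk:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;

If those wait times are climbing together with a low page life expectancy, the buffer pool is being churned and pages are constantly being re-read from disk.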