Tuesday, December 18, 2012

SQL Server audit and Lock request time out period exceeded. (.Net SqlClient Data Provider) error


Have you ever noticed the "PREEMPTIVE_XE_CALLBACKEXECUTE" wait type on one of your SPIDs, blocking you from performing any operation on the SQL Server instance?

I have seen this wait type when using SQL Server auditing; the transaction appears stuck and blocks metadata access. The exclusive lock was placed on sysclsobj in the master database.

If you look at the corresponding OS thread (KPID), you will notice that the thread actually belongs to a SQL Server Agent task.

Below are some additional findings:

 The SPID for the server audit is blocking all other instance-level operations
 Stopping and restarting the SQL Server Agent job does nothing
 "PREEMPTIVE_XE_CALLBACKEXECUTE" indicates that the SQL thread is waiting on an OS-level resource
 Eventually there is no wait resource
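For reference, a quick way to spot sessions stuck on this wait, and to map the SPID to its OS thread (KPID) as mentioned above, is a query along these lines (a diagnostic sketch, not tied to any specific fix):

```sql
-- Find requests currently stuck on the audit-related preemptive wait
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = N'PREEMPTIVE_XE_CALLBACKEXECUTE';

-- Map the SPID to its OS thread (KPID) to confirm it belongs to an Agent task
SELECT spid, kpid, lastwaittype, program_name
FROM sys.sysprocesses
WHERE lastwaittype = 'PREEMPTIVE_XE_CALLBACKEXECUTE';
```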

This is a bug in SQL Server 2008 R2 SP1 and is fixed in CU6 :)
Puneet

Sunday, December 9, 2012

Update statistics: some key points!

As DBAs, we all know that updating statistics is one of the keys that lets the SQL optimizer decide the best (best means fastest :)) way to retrieve query results.
We also know that the best practice is not just to let statistics update automatically, but to have some kind of manual maintenance plan in place as well.
Although you can choose the best time to manually update statistics on every server in your farm, realistically you don't control when data in the tables gets modified, as that is driven by the application/users/business.
So the best you can do is make sure that statistics are up to date most of the time!
You can leverage "sys.dm_db_stats_properties" to monitor the statistics on your objects, especially large ones.
For small to medium tables it's not a big concern, as auto-update stats combined with a manual plan can handle most situations, but with large tables (millions of rows) it can be a bit tricky!
I would not rely on the default automatic update of statistics for large objects.
Tables with millions of rows mean you have to wait longer before modifications reach the 20% (plus 500 rows) threshold.
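A sketch of how to check statistics freshness with this DMF; "dbo.BigTable" is just a placeholder for one of your own large tables:

```sql
-- Stats freshness for every statistic on a given table.
-- modification_counter = rows modified since the stats were last updated.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.BigTable');  -- replace with your table
```

A large modification_counter relative to rows is your cue to update statistics manually rather than waiting for the automatic threshold.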
Thankfully, SQL Server 2008 R2 SP1 introduced trace flag 2371, which makes this threshold dynamic.
What this means is: the larger the number of rows in the table, the smaller the threshold percentage gets, and the sooner the automatic update can happen.
Having said all this, make sure you test this trace flag properly with your application before making a decision.
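If you decide to try it, the trace flag can be enabled for the running instance like this (to make it survive restarts you would add -T2371 as a startup parameter instead):

```sql
-- Enable trace flag 2371 globally for the current instance
DBCC TRACEON (2371, -1);

-- Verify it is active
DBCC TRACESTATUS (2371);
```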

Tuesday, September 25, 2012

Memory architecture changed in SQL Server 2012

So far, as DBAs, we were always trained to configure memory on a 64-bit OS and follow practices like Lock Pages in Memory, setting max and min server memory, etc.

With the release of SQL Server 2012, some of these things need reconsideration, as the memory structure has changed a bit (for good, of course :))

In SQL Server 2012, CLR and MPA allocations are now part of the buffer pool itself.

What does this mean?

This means requests for CLR allocations and multi-page allocations (MPA) will be handled by the buffer pool, and no longer sit outside it as they did in SQL Server 2008 R2 and previous versions.

Now, when configuring max server memory and min server memory, we have to keep in mind that allocations of pages larger than 8 KB, like those for CLR and MPA, are also controlled by the max and min settings.

So these settings need to be revisited whenever you upgrade to SQL Server 2012.
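Revisiting the setting is the usual sp_configure routine; the 24 GB value below is purely illustrative, so size it for your own host with the CLR/MPA allocations in mind:

```sql
-- Review and adjust max server memory after upgrading to SQL Server 2012
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: cap SQL Server at 24 GB (value is in MB)
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;
```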

SQL Server 2012 now accounts for more memory allocations (even pages > 8 KB) from the buffer pool.

Also, for 32-bit SQL Server, the AWE feature is no longer supported.
 

Thursday, September 20, 2012

Set-based window functions should remove the need for cursors completely

Today I was refreshing myself on the new window functions available in SQL Server 2012. It all started back in SQL Server 2005, when we saw heavy usage of CTEs, OVER(), RANK, DENSE_RANK, etc. to turn a lot of complex queries into simple code.

With SQL Server 2012, this list has been extended with enhanced OVER clause support (frames), LEAD, LAG, FIRST_VALUE, LAST_VALUE, PERCENT_RANK, PERCENTILE_DISC, etc.

As a developer, you should leverage more of these set-based functions, and as a DBA reviewing code, make sure you see these functions used more often :)
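As a small illustration of the point, here is a classic "compare each row to the previous one" task, which used to call for a cursor, done set-based with LAG; the dbo.Orders table and its columns are hypothetical:

```sql
-- Difference between each order amount and the customer's previous order,
-- set-based with LAG (SQL Server 2012+); no cursor needed
SELECT CustomerID,
       OrderDate,
       Amount,
       Amount - LAG(Amount, 1, 0) OVER (PARTITION BY CustomerID
                                        ORDER BY OrderDate) AS DiffFromPrev
FROM dbo.Orders;
```

The third argument to LAG (the default, 0 here) handles each customer's first row, which a cursor version would need an explicit branch for.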

Thursday, May 10, 2012

Effect of Transactional Replication on your Publisher database!

This is one of those things that you can only quantify once you have performed some kind of test.

Have you ever wondered if transactional replication has any overhead on your production database? And if yes, how much really?

As transactional replication relies on reading the transaction log, the answer is "YES" :)

You will notice degraded write performance if you have transactional replication configured!

In a scenario where you host the distribution database on a separate server, I have seen write performance degrade by as much as 30%!! Consider a scenario where you are inserting hundreds of thousands of rows as part of a batch process. You will be better off pausing replication if you want to speed up that operation.