SQL Performance Tuning – Overview

2012/11/17

SQL performance tuning is a huge topic, and you need to know many behavioral details that vary across OS and SQL Server versions. In these posts I will share my general approach to this kind of work.

Before Tuning

The first thing to keep in mind is that end users don't care about SQL; they care about the overall application response time. You have to know your application, and sometimes you need to tune how the application calls the database. One example: serve your blog posts as cached static pages instead of dynamic pages that query the database on every request.

The second thing is that database performance is driven by application behavior (queries), and application behavior can change over time, so tuning is not a one-time action. You should create a performance baseline and keep monitoring your environment. The purpose of the baseline is to tell you how SQL Server runs when application response time is acceptable, which shortens the time needed to spot what's abnormal. Unlike products such as AD or Exchange, SQL Server and IIS are platforms that run your applications, and each instance has its own performance profile. You can't simply rely on one general baseline; you should create one for each critical application. You can tune SQL Server without a baseline, but it will take you longer to find out what's abnormal.

The baseline should cover:
1. OS-level performance counters like CPU, memory, disk and network.
2. SQL instance-level performance counters like buffer pool, access methods etc.
3. SQL instance-level DMV info like top N databases, top N wait stats and top N queries (see the sketch after this list).
4. (optional) Application-level key workload counts.
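
A minimal sketch of item 3 for wait stats, assuming you snapshot into a baseline table on a schedule; the table name BaselineWaitStats and the short benign-wait exclusion list are my own placeholders:

SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    GETDATE() AS capture_time
INTO dbo.BaselineWaitStats   -- first run creates the table; switch to INSERT...SELECT afterwards
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'SQLTRACE_BUFFER_FLUSH')   -- a partial benign-wait list
ORDER BY wait_time_ms DESC;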

Load testing before RTM is very important; you can build the initial baseline there. Although it's hard to simulate every end-user query pattern, the baseline is useful later for spotting the differences between the load test and the real production load. We will discuss the meaning of the performance counters in the 3rd post.

The third thing is to set your tuning target and prioritize the findings. My view is that there is no need to change code immediately if the overall application response time is acceptable; you can put the finding into a potential-issue list. But you need to keep monitoring the performance data and plan your staffing to fix these potential issues before they become real problems. One example: you find that the full-scan count is high, but the database is small and physical memory is sufficient, so most reads are logical rather than physical. There is no performance issue at this moment. But you should know that logical reads will increase as the tables grow, and logical reads will turn into physical reads once the database becomes larger than the buffer pool.
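A quick way to watch the counters in this example, assuming a default instance (a named instance shows MSSQL$InstanceName instead of SQLServer in object_name):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%:Access Methods%' AND counter_name = 'Full Scans/sec')
   OR (object_name LIKE '%:Buffer Manager%' AND counter_name = 'Page life expectancy');
-- 'Full Scans/sec' is exposed as a cumulative raw value here: sample it twice
-- and divide the delta by the elapsed seconds to get the real per-second rate.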

Tuning Workflow
A typical tuning workflow looks like this:
1. Data collection. Several kinds of information can be captured with different tools such as msinfo32, eventvwr, the SQL error log, perfmon and Profiler. I will share the tips in the 2nd post.
2. Perfmon analysis – OS. Find out whether any hardware resource is a bottleneck.
3. Perfmon analysis – SQL. Find out how SQL Server consumes the hardware resources.
4. Top N databases by bottleneck resource. Use the Profiler output or DMVs to find out how the databases consume that resource; move them to different instances or tune their top N queries.
5. Top N queries by bottleneck resource. Use the Profiler output or DMVs to find the top N queries, then tune the single-execution cost or reduce the execution count.
6. Wait stats. SQL query duration = CPU time + wait time. Both duration and CPU time are available from the Profiler output and DMVs, so you can calculate the wait time (see the sketch after this list). If wait time is high, no doubt you should dig into the wait stats.
7. Check the top N queries' execution plans and tune them case by case. There are two performance tuning scenarios: the first is a single slow query, the second is concurrency related. This step handles the single-slow-query scenario; the latter is more difficult to tune. I plan to cover wait stats and concurrency in the 4th post of this tuning series.
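
For step 6, a hedged sketch of the wait-time calculation from the DMVs; both time columns are in microseconds, and the TOP 10 and millisecond scaling are arbitrary choices:

SELECT TOP (10)
    (qs.total_elapsed_time - qs.total_worker_time) / 1000 AS total_wait_ms,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_wait_ms DESC;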


Support Policy of WSS_LOGGING Database Has Changed

2012/11/17

The SharePoint logging database (default name WSS_Logging) is created by SharePoint usage and health data collection. Its purpose is to store different logs (event log, ULS, etc.), and you used to be allowed to query it directly and create your own views. But the support policy changed recently: you may only query the pre-defined views and are not allowed to change the schema.

What does this mean to us? The change is blunt, meant to avoid potential performance issues, but it also means you can't fully benefit from the log-analysis power of the logging database. What we should do:
1) Pay attention to the size of WSS_Logging (a quick check follows this list). Although table partitioning is applied to this database if you are using Enterprise Edition, it can still become huge.
2) Watch which instance hosts this database; don't put it in the same instance as your critical content databases. Alternatively, restore a backup from the PROD environment to UAT and query it there instead.
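
A minimal size check for point 1, assuming the default database name:

SELECT name AS file_name,
       size * 8 / 1024 AS size_mb   -- size is stored in 8 KB pages
FROM sys.master_files
WHERE database_id = DB_ID(N'WSS_Logging');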

Ref:

http://technet.microsoft.com/en-us/library/hh769360(v=office.14).aspx

http://support.microsoft.com/kb/841057


SQL Server 2008 Microsoft Certified Master

2012/11/09

SQL Microsoft Certified Master: what's that? To me it marks a milestone in my SQL Server journey.

I began working with SQL Server on SQL 2000, as a dev in an internal .Net-based project. It was mid-2005, but we still used SQL 2000 in that project. We were the only small team in the company that adopted Microsoft technology; everyone else used Java + Oracle. My tech lead Ric Zhou asked me to design the database schema and write the stored procedures. I still remember that the first task of my SQL journey was to write a stored procedure template for error handling and nested transactions. Do you remember

select @@error

 

in the old days, and that nested-transaction rollback behavior? (Don't begin a nested transaction; use a savepoint!) During this project, Ric joined Microsoft as a Sr. Premier Field Engineer. After the project I wrote Java + Oracle PL/SQL and many Oracle Reports (version 6.0, buggy but good for growing your patience), and at the same time I acted as DBA maintaining the earlier .Net project.
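
For readers who never fought that battle, a minimal sketch of the savepoint pattern, assuming SQL 2005+ TRY/CATCH; the procedure and table names are made up:

CREATE PROCEDURE dbo.usp_SavepointDemo
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @tranCount int;
    SET @tranCount = @@TRANCOUNT;
    IF @tranCount > 0
        SAVE TRANSACTION ProcSave;   -- already inside a tran: mark a savepoint instead of nesting
    ELSE
        BEGIN TRANSACTION;           -- outermost scope owns the real transaction
    BEGIN TRY
        UPDATE dbo.SomeTable SET SomeCol = SomeCol;   -- placeholder work
        IF @tranCount = 0
            COMMIT TRANSACTION;      -- only the outermost scope commits
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> -1 AND @tranCount > 0
            ROLLBACK TRANSACTION ProcSave;   -- undo only this proc's work
        ELSE IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;            -- doomed or outermost: undo everything
        RAISERROR('usp_SavepointDemo failed.', 16, 1);   -- re-signal to the caller
    END CATCH
END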

In mid-2007, I joined Microsoft as a vendor Premier Field Engineer as well. Ric acted as my mentor and gave me a list of 20+ learning items on day one. I was under pressure as I stepped out of my comfort zone, but after 3 months of hard work I got used to it. In Sep 2007 I joined another project that was in the middle of a death march; it ended up over budget. We used SQL 2005. My first role was as a dev rewriting the schema and stored procedures of one critical module, where the requirement was high performance. Weeks later I became module owner, leading 3 devs rewriting the .Net code, and later I also owned migrating data from the previous system to the new platform. Before RTM I additionally acted as DBA maintaining the Dev/UAT/PROD environments and owned DB version control and releases. I worked more than 5 months straight without a break, in a team of dedicated and passionate people, and we finally delivered the first version on time. I learned precious things from this project; you always learn more from failure than from success. Specific to SQL, I used denormalization to make queries run faster, implemented a hierarchy solution (there was no HierarchyID in SQL 2005), a paging function, the new error handling, and a template to generate T-SQL from the schema to cope with rapid change, plus data ETL and many other DBA skills such as configuring a cluster.

After this project, I became a full-time employee supporting SQL Server for Premier customers in South China and HK. If you lead an IT department, you should buy a Microsoft Premier Support contract; it is worth the price. If you love technology and enjoy challenges, you should join Premier Field Engineering: it's a place to learn and to apply your knowledge to real customer cases every day. (BTW, forget about work/life balance; if you are a geek, you will love those challenges like I do.) We cover both proactive and reactive requests: from storage capacity planning to replacing a broken SAN, from health checks to performance tuning, from HA/DR solution design to fixing corrupted databases, from teaching workshops to schema design reviews. Every time I hit a new problem, I learn more about SQL Server by fixing it.

Microsoft also encourages people to share what they learn and to learn from others. My SQL teammate Ernest Ho has taught me A LOT over these years. This crazily hardworking SQL guru was the first SQL MCM in our region, and he inspired me to consider trying the SQL MCM program. But the MCM program was costly at that time: 3 weeks of training in Redmond! A year ago the SQL MCM program changed a bit, and you can now take the exam without attending the training. I still recommend joining the training from SQLskills.com run by Paul Randal if you have the time and budget.

I visited Seattle in Feb 2012 for an internal meeting, where a free MCM knowledge exam was offered. (One more reason to join us, isn't it?) On the last day of the meeting I told myself: why not give it a try if I'm confident in my SQL skills? So I booked the exam and rushed into it without preparation. It was a TOUGH 4 hours and I was 100% exhausted afterwards. Weeks later I found out I had passed, while checking something else on the MCP website. WOW! I quickly booked the remote lab exam in March, before Robert Davis' last working day; Robert was the PM of the SQL MCM program at that time. It was another TOUGH 5.5 hours! The feeling was that I wished time would never end so I could solve all the problems, yet I also wished it would pass quickly so I could escape from the mire. No surprise: I failed the lab exam. I realized that what the SQL MCM tests was different from what I had assumed, and that there were many things I still needed to learn about SQL. So before my second attempt in Oct 2012 I watched all the free videos on the SQL MCM website and set up my own lab. I learned a lot from this, which no one can take away even if I never become an MCM. With help from boB Taylor, the current PM of the SQL MCM program, I took the second attempt and passed. Now I am a SQL Server 2008 Microsoft Certified Master.


Read IP from TMG Log

2012/04/23

You can use SQL Server as the TMG log destination. You can also use BCP to export the data to a local CSV file and open it in Excel. The only problem is that the ClientIP, DestHostIP and NATAddress columns are of type uniqueidentifier, so a record looks like "C0A84C4A-FFFF-0000-0000-000000000000". The script below converts them to readable IPv4 format. Enjoy.

P.S. C0A84C4A == 192.168.76.74 (0xC0 = 192, 0xA8 = 168, 0x4C = 76, 0x4A = 74): each pair of hex characters at the start of the GUID string is one IPv4 octet.

SELECT *,
    CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), ClientIP), 1, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), ClientIP), 3, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), ClientIP), 5, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), ClientIP), 7, 2), 1)))
    AS ClientIP_New,
    CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), DestHostIP), 1, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), DestHostIP), 3, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), DestHostIP), 5, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), DestHostIP), 7, 2), 1)))
    AS DestHostIP_New,
    CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), NATAddress), 1, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), NATAddress), 3, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), NATAddress), 5, 2), 1)))
    + '.' + CONVERT(varchar(3), CONVERT(int, CONVERT(varbinary, '0x' + SUBSTRING(CONVERT(varchar(max), NATAddress), 7, 2), 1)))
    AS NATAddress_New
FROM dbo.WebProxyLog
 

Faster DBCC CHECKDB

2011/12/21

Nowadays it's common for DBCC CHECKDB to take a long time on TB-level databases. Now there is a way to trade more I/O for a faster DBCC CHECKDB, using trace flags 2562 and 2549. For now this is only available in SQL 2008 R2 CU11.

My 2 cents: pay attention to tempdb usage when enabling these flags, as the trade-off is tempdb size and I/O. And don't set them as startup parameters; just enable them at the session level when you are ready for that trade-off.
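
What the session-level approach could look like; the database name is a placeholder, and per the KB below the gain applies to the PHYSICAL_ONLY check:

DBCC TRACEON (2549);   -- treat each database file as if it sits on its own disk
DBCC TRACEON (2562);   -- run checks as a single batch (faster, but heavier tempdb usage)
DBCC CHECKDB (N'YourBigDatabase') WITH PHYSICAL_ONLY;
DBCC TRACEOFF (2549);
DBCC TRACEOFF (2562);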

http://support.microsoft.com/kb/2634571


GUI Error Message is Not Enough

2011/12/18

One of my customers was migrating their databases from one SQL 2005 instance to another last night: more powerful hardware, same SQL Server version. They didn't restore the system databases; they just moved the logins/users/jobs/linked servers and the user databases to the new instance. Then they tried to create a transactional publication on the new instance via the GUI wizard, but it failed. The same transactional publication had been created on the previous instance without error.

The error message from the GUI read: "Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159 This database is not enabled for publication." We tried restoring the database again with the KEEP_REPLICATION option, as the error message's help link told us to; it failed again. So what I did next was use the GUI wizard to generate the creation scripts first and execute them one by one to see which step failed. The error message I got this time was different from the GUI one: it said the connection to linked server repl_distributor timed out. After confirming with the customer, it turned out they had created this repl_distributor linked server themselves when moving the system objects. Unluckily, this name is reserved for replication, and the linked server is created by the replication initialization process. I then used sp_dropdistributor with @no_checks=1 and @ignore_distributor=1 to remove it, as shown below. After that we could create the new publication.
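
The cleanup call described above, for reference:

-- Force-remove the leftover distributor configuration so the replication
-- initialization process can recreate repl_distributor itself.
EXEC sp_dropdistributor @no_checks = 1, @ignore_distributor = 1;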

What impresses me again in this case is how to troubleshoot SQL Server problems. The error message from the application is not enough (yes, the GUI wizard here is an application sending T-SQL to SQL Server); we should capture what is sent to the back end and run it step by step to narrow down the problem.


Chinese Character Unicode Conversion – 1

2011/12/16

There are 2 ways to do the conversion: in-place and side-by-side. In-place means altering the column types directly; side-by-side means creating a new database with the new schema and then using bcp to export/import the data. It's hard to say which one is better; it depends. If most of the data in the database will be converted, I will use side-by-side. If not, I will use in-place.
 

This SQLCAT post discusses the side-by-side conversion http://blogs.msdn.com/b/mssqlisv/archive/2006/07/07/659374.aspx; you can take it as a good start on this topic. In my posts, I will discuss how to convert char/varchar/text columns that store Chinese characters to Unicode types using the in-place method. To narrow the scope, I assume the collation of the database is already Unicode-ready, like Chinese_Hong_Kong_Stroke_90, and that the SQL Server is 2005/2008, which has the nvarchar(max) data type. I will cover the real-world tasks: finding the columns that store Chinese characters, planning the extra storage, validating the converted result, and handling constraints.
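
To give a taste of the in-place method before the detailed posts, a minimal example; the table, column and length are made up:

ALTER TABLE dbo.Posts
    ALTER COLUMN Title nvarchar(200) NOT NULL;   -- was varchar(200); restate the
    -- nullability, or the column falls back to the database default nullability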

 


Aggregation Design Wizard – "Next" in "Specify Object Counts" is Disabled

2011/12/11

Last month I hit this issue at a customer site: I was using the Aggregation Design Wizard to do a usage-based optimization design, but the "Next" button on the "Specify Object Counts" page stayed greyed out even after I pressed the "Count" button.

What you need to do if you hit the same problem: look through the dimension list for any dimension with a red underline, then open it to find the attribute that is red-underlined. Enter any number to update the count of that attribute and the red underline will disappear. The "Next" button can then be pressed to continue!

Hope this can save your time.


User DB Last Backup Time

2011/09/30

With the script below you can check the last backup time of each user database. Pay attention to the ones that have never been backed up!

SELECT db.Name AS DatabaseName,
       COALESCE(CONVERT(VARCHAR(50), MAX(bs.backup_finish_date), 120), '') AS LastBackUpTime,
       CASE ISNULL(bs.type, 'X')
            WHEN 'D' THEN 'Full backup'
            WHEN 'L' THEN 'Log backup'
            WHEN 'X' THEN 'No backup'
            WHEN 'I' THEN 'Differential database backup'
            WHEN 'F' THEN 'File or filegroup backup'
            WHEN 'G' THEN 'Differential file backup'
            WHEN 'P' THEN 'Partial backup'
            WHEN 'Q' THEN 'Differential partial backup'
       END AS [Type]
FROM sys.sysdatabases db
LEFT OUTER JOIN msdb.dbo.backupset bs ON bs.database_name = db.name
WHERE db.dbid > 4
GROUP BY db.Name, bs.type

Good luck.
 
 


Long Running Transaction

2011/09/27

On SQL 2000, the only way to find long-running transactions is to run DBCC OPENTRAN.

Beginning with SQL 2005, you can query the DMVs with the script below:

SET NOCOUNT ON
SELECT DB_NAME(tr.database_id) AS dbname,
       tr.transaction_id,
       ses.session_id,
       tr.database_transaction_begin_time
FROM sys.dm_tran_database_transactions AS tr
JOIN sys.dm_tran_session_transactions AS ses
    ON tr.transaction_id = ses.transaction_id
WHERE ses.is_user_transaction = 1
  AND tr.database_transaction_begin_time < (GETDATE() - '00:05:00')

It will help you find the transactions that have been running for longer than 5 minutes.