When you title your blog entry Indifferent Performance, everything you write suddenly sticks. The title may be catchy, but the story it tells is a classic battle of good vs. bad.
We find this very thing in SQL database performance. There are going to be scans that are either good or bad, and how the database was designed determines which type of scan is the best scenario. We need to run a trace to see which one is appropriate. Please take a second to read about each type of scan.
We need to select a few different events and run a few different traces to identify the tables or indexes causing the performance problems. Once we have a baseline set of traces, we will be able to investigate where our performance hits are.
The first event you need to capture in any of these traces is the Scan:Started event. This event is captured whenever a table or index scan starts, and it will be the big hitter in identifying excessive scanning. As always, it will not be the only event you capture. Here is a list of all the players and their locations in Trace Properties > Events Selection.
- Scan:Started: Scans
- Scan:Stopped: Scans
- SQL:BatchStarting: TSQL
- SQL:BatchCompleted: TSQL
- SP:Completed: Stored Procedures
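If you prefer a server-side trace to the Profiler GUI, the same events can be registered with `sp_trace_setevent`. Below is a minimal sketch; the file path and max file size are placeholders you would change for your server, and the event/column IDs are the standard SQL Trace values (you can confirm them in `sys.trace_events` and `sys.trace_columns`).

```sql
-- Sketch of a server-side trace capturing the events listed above.
-- N'C:\Traces\ScanTrace' is a placeholder path; adjust for your server.
DECLARE @TraceID INT, @MaxFileSize BIGINT = 50, @On BIT = 1;

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\ScanTrace', @MaxFileSize;

-- Event IDs:  51 = Scan:Started, 52 = Scan:Stopped,
--             13 = SQL:BatchStarting, 12 = SQL:BatchCompleted, 43 = SP:Completed
-- Column IDs: 1 = TextData, 13 = Duration, 16 = Reads, 18 = CPU,
--             22 = ObjectID, 35 = DatabaseName
EXEC sp_trace_setevent @TraceID, 51, 22, @On;  -- Scan:Started, ObjectID
EXEC sp_trace_setevent @TraceID, 51, 35, @On;  -- Scan:Started, DatabaseName
EXEC sp_trace_setevent @TraceID, 52, 22, @On;  -- Scan:Stopped, ObjectID
EXEC sp_trace_setevent @TraceID, 13, 1,  @On;  -- SQL:BatchStarting, TextData
EXEC sp_trace_setevent @TraceID, 12, 1,  @On;  -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @On;  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 12, 18, @On;  -- SQL:BatchCompleted, CPU
EXEC sp_trace_setevent @TraceID, 12, 16, @On;  -- SQL:BatchCompleted, Reads
EXEC sp_trace_setevent @TraceID, 43, 22, @On;  -- SP:Completed, ObjectID

EXEC sp_trace_setstatus @TraceID, 1;  -- status 1 = start the trace
```

Stopping and closing the trace later is `sp_trace_setstatus @TraceID, 0` followed by `sp_trace_setstatus @TraceID, 2`.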
When you select the data columns, you will have to pull more than you believe is necessary. Some of these events lack useful and needed properties, so you have to collect extra columns, which will obviously take more out of your server. For that reason I recommend running this trace for only an hour at a time; you will gather a significant amount of data even in that window. Be sure to select the following data columns where available.
In the past I have filtered on EventClass. For this trace I'm going to filter on DatabaseName, which lets me look at performance on just that database. It's also a good idea to group by ObjectID; this reduces the amount of data on the page and allows for easy analysis.
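If you are running this as a server-side trace instead of in the Profiler GUI, the same DatabaseName filter can be set with `sp_trace_setfilter`. This is just a sketch: `@TraceID` is the ID returned by `sp_trace_create`, and `N'MyDatabase'` is a placeholder name.

```sql
-- Filter the trace to a single database.
-- Column 35 = DatabaseName, logical operator 0 = AND, comparison 6 = LIKE.
-- N'MyDatabase' is a placeholder; use your database name.
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyDatabase';
```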
Ready to Run the Trace!
As I have discussed in previous blogs, it's always a good idea to run a baseline trace. Maybe your first trace can be during normal business hours; then run another trace during high-usage hours.
Breaking It Down
Here is an example of a trace that ran for 1 hour.
This could be Indifferent Performance. At this point these are just numbers. As you can see, the highlighted row has 396,246 scans in one hour. Again, it's just a number right now until we know whether it's a problem. Time for some good old investigation.
Let's first look at Object ID 2117582582. We can get the name of the object and dig even further:

```sql
SELECT name, type_desc
FROM sys.objects
WHERE object_id = 2117582582;
```
This tells me the name of the table behind the object. Query that table and find out how many rows it holds, so we know whether it's small or large. If you're dealing with a large table, you should look at the indexes on it. Are they clustered or not? A clustered index is more efficient on columns that are searched for ranges of values, so a large table without a clustered index is a red flag. At that point I would recommend creating a clustered index. Also run your query against the table with the Actual Execution Plan turned on and see whether it uses Table Scans, Index Scans, or Index Seeks. A Table Scan on a huge table is another red flag; switch it to an Index Seek to speed things up.
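A quick way to answer both questions at once, how big the table is and whether it has a clustered index, is to query the catalog views. This is a sketch using the example Object ID from above: `sys.partitions` gives an approximate row count, and `type_desc` from `sys.indexes` tells you whether the table is a heap or has a clustered index.

```sql
-- Approximate row count and index structure for the suspect object.
-- 2117582582 is the example Object ID from the trace above.
SELECT i.index_id,
       i.name      AS index_name,
       i.type_desc,                 -- HEAP, CLUSTERED, or NONCLUSTERED
       SUM(p.rows) AS approx_rows
FROM sys.indexes i
JOIN sys.partitions p
  ON p.object_id = i.object_id
 AND p.index_id  = i.index_id
WHERE i.object_id = 2117582582
GROUP BY i.index_id, i.name, i.type_desc;
```

A `HEAP` row here (index_id 0) combined with a large `approx_rows` is exactly the red-flag combination described above.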
Of course, the table could already have a clustered index and the index could still be in need of care. When dealing with an object like this, it's a good idea to view the code in context and find out whether it's a stored procedure. In your results window, disaggregate the data by selecting View > Aggregated View (unchecked). Open a find window and search on your Object ID to see when this object was first scanned. Look for the SP:StmtCompleted event in conjunction with the SQL:BatchCompleted event. With these two events you can see the code within SQL:BatchCompleted along with the CPU, Duration, and Reads, and you will see whether it's a stored procedure or not. If those numbers are high, you have a red flag; if not, you need to evaluate the code even closer. This blog would be mammoth if I discussed code reviews, but I would recommend regular code-review calls between your DBA and software programmers. You may determine it's possible to add indexes to the table and turn Table Scans into Index Seeks.
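If you want a head start on finding which stored procedures touch the table without waiting for the trace, you can also search the module definitions. This is a rough sketch: the `LIKE` search on `sys.sql_modules` is a blunt instrument (it will also match the name inside comments or strings), and the table name comes from `OBJECT_NAME` on the example ID above.

```sql
-- Rough search for stored procedures whose text references the table.
-- Note: LIKE also matches the name inside comments or string literals.
SELECT OBJECT_NAME(m.object_id) AS proc_name
FROM sys.sql_modules m
JOIN sys.objects o
  ON o.object_id = m.object_id
WHERE o.type = 'P'                                   -- stored procedures only
  AND m.definition LIKE '%' + OBJECT_NAME(2117582582) + '%';
```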
We covered a lot of ground, and no, I did not get into every option for finding the performance hit. Keep running traces during both high-usage and low-usage hours, but run them for only an hour at a time, as the traces themselves can cause performance issues. Try to find the objects that are scanned the most and determine whether they can be better optimized through clean indexes and Index Seeks. Once you find a few objects that are heavy performance hitters, you can run a trace on the duration of that specific object. Read my blog on SQL Server Monitoring Pt. 2 – Run a SQL Profiler trace during high usage hours for more information. Once that trace is set up, use the Column Filters to watch the ObjectIDs you found are scanned the most. This will give you more detail on how long those objects run.
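For a server-side version of that follow-up duration trace, the ObjectID filter can be sketched with `sp_trace_setfilter` as well. This assumes `@TraceID` comes from your follow-up trace's `sp_trace_create`, and reuses the example Object ID from this post.

```sql
-- Restrict the follow-up duration trace to the heavy hitters you found.
-- Column 22 = ObjectID, logical operator 0 = AND, comparison 0 = Equal.
EXEC sp_trace_setfilter @TraceID, 22, 0, 0, 2117582582;

-- Additional objects can be added with logical operator 1 = OR:
-- EXEC sp_trace_setfilter @TraceID, 22, 1, 0, <another ObjectID>;
```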