Showing posts with label performance. Show all posts

Wednesday, February 18, 2015

Performance Monitor: How to run relog to process performance monitor data

Run relog from the directory where the data collector file is.

Type the following at the command prompt:

cd C:\PerfLogs\Admin\FromTemplate\000001

(the above is the path where my file is)

Running relog:

C:\PerfLogs\Admin\FromTemplate\000001> relog ProdSample.blg -f SQL -o SQL:BaselineData!ProdSample.blg


ProdSample.blg is my perflog file
-f is the format in which you want the output
-o is the output destination
SQL:BaselineData is the ODBC connection (DSN) that I had created, pointing to the database where I wanted the data to be dumped

Once I ran the above command, relog transferred the data and created the following tables in the database connected through ODBC (BaselineData in this case):

DisplayToID
CounterData
CounterDetails
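
Once those tables exist, the counter values can be queried back out by joining them on CounterID and GUID. A sketch, assuming the standard column names relog creates for the perfmon SQL schema (run it against the BaselineData database):

```sql
-- Pull logged counter values back out with their machine/object/counter names.
SELECT d.DisplayString     AS LogSet,
       cd.CounterDateTime,
       det.MachineName,
       det.ObjectName,
       det.CounterName,
       det.InstanceName,
       cd.CounterValue
FROM CounterData cd
JOIN CounterDetails det ON det.CounterID = cd.CounterID
JOIN DisplayToID d      ON d.GUID = cd.GUID
ORDER BY cd.CounterDateTime;
```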

Tuesday, January 13, 2015

Perfmon helpful counters

Buffer cache hit ratio :

It is the percentage of page requests the server could satisfy from the cache without having to read from disk.
Buffer cache hit ratio should be more than 90% for a SQL Server.

SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS BufferCacheHitRatio
FROM sys.dm_os_performance_counters a
JOIN (SELECT cntr_value, OBJECT_NAME
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Buffer cache hit ratio base'
      AND OBJECT_NAME = 'SQLServer:Buffer Manager') b
  ON a.OBJECT_NAME = b.OBJECT_NAME
WHERE a.counter_name = 'Buffer cache hit ratio'
AND a.OBJECT_NAME = 'SQLServer:Buffer Manager'




Page Life Expectancy (PLE) :

It is the number of seconds a page is expected to stay in the cache. A common baseline is 300: if the value is below 300, or hovering between 300 and 400, the server will require more memory.


SELECT *, cntr_value AS [PLE in secs],
       cntr_value / 60 AS [PLE in mins],
       cntr_value / 3600 AS [PLE in hours],
       cntr_value / 86400 AS [PLE in days]
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
AND OBJECT_NAME = 'SQLServer:Buffer Manager'








Friday, January 6, 2012

Guidelines for index fragmentation


Some starting points to keep in mind for index fragmentation
  • If an index has fewer than 1000 pages and is in memory, don't bother removing fragmentation
  • If the index has:
    • less than 5% logical fragmentation, don't do anything
    • between 5% and 30% logical fragmentation, reorganize it (using DBCC INDEXDEFRAG or ALTER INDEX ... REORGANIZE)
    • more than 30% logical fragmentation, rebuild it (using DBCC DBREINDEX or ALTER INDEX ... REBUILD)
 The guidelines are taken from 
http://sqlskills.com/BLOGS/PAUL/post/Where-do-the-Books-Online-index-fragmentation-thresholds-come-from.aspx
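
On SQL Server 2005 and later, the thresholds above can be sketched as a query over sys.dm_db_index_physical_stats (the Suggestion column is purely illustrative; nothing is executed):

```sql
-- For each index in the current database, report fragmentation and
-- suggest an action based on the thresholds above.
SELECT OBJECT_NAME(ps.object_id)      AS TableName,
       i.name                         AS IndexName,
       ps.page_count,
       ps.avg_fragmentation_in_percent,
       CASE
         WHEN ps.page_count < 1000                  THEN 'leave alone (small index)'
         WHEN ps.avg_fragmentation_in_percent < 5   THEN 'do nothing'
         WHEN ps.avg_fragmentation_in_percent <= 30 THEN 'REORGANIZE'
         ELSE 'REBUILD'
       END                            AS Suggestion
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.index_id > 0;  -- skip heaps
```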

Monday, May 23, 2011

Top 10 SQL Server 2008 Features for the Database Administrator (DBA)

We migrated to the new version, SQL Server 2008 R2, early this year, but I am afraid we have not been able to utilize all of the "awesome" features that SQL 2008 comes with. We have used SSIS, DBMail, etc., but that was on the development side. Here is a list (yes, of course, the top 10). More blog posts to follow on the research and hands-on experience with each of the following features.

1. Activity Monitor
2. SQL Server Audit
3. Backup Compression (I love this feature, considering how many databases we back up daily and monthly)
4. Central Management Servers
5. Data Collector and Management Data Warehouse
6. Data Compression
7. Policy-Based Management (as the team grows this will be helpful for enforcing standards such as naming conventions)
8. Predictable Performance and Concurrency
9. Resource Governor
10. Transparent Data Encryption (TDE)

Tuesday, December 7, 2010

SQL Server - performance

http://www.sql-server-performance.com/articles/dba/dt_dbcc_showcontig_p2.aspx

The Results Explained
The results from the previous command will look something like the following:
DBCC SHOWCONTIG scanning 'MyTable1' table...
Table: 'MyTable1' (1556968673); index ID: 1, database ID: 16
TABLE level scan performed.
- Pages Scanned................................: 18986
- Extents Scanned..............................: 2443
- Extent Switches..............................: 9238
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 25.70% [2374:9239]
- Logical Scan Fragmentation ..................: 44.58%
- Extent Scan Fragmentation ...................: 87.07%
- Avg. Bytes Free per Page.....................: 1658.7
- Avg. Page Density (full).....................: 79.51%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'MyTable2' table...
Table: 'MyTable2' (183984032); index ID: 1, database ID: 16
TABLE level scan performed.
- Pages Scanned................................: 28980
- Extents Scanned..............................: 3687
- Extent Switches..............................: 22565
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 16.06% [3623:22566]
- Logical Scan Fragmentation ..................: 83.05%
- Extent Scan Fragmentation ...................: 87.44%
- Avg. Bytes Free per Page.....................: 3151.1
- Avg. Page Density (full).....................: 61.07%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

In the first table, MyTable1, we see that 18,986 pages were examined to create the report. Those pages existed within 2,443 extents, indicating that the table consumed approximately 97% (7.8 pages per extent on average) of the extents allocated for it. We then see that while examining the pages for fragmentation, the server had to switch extent locations 9,238 times. The Scan Density restates this by indicating the percentage of pages across all extents that were contiguous. In an ideal environment, the density displayed would be close to 100. The Logical Scan Fragmentation and Extent Scan Fragmentation are indications of how well the indexes are stored within the system when a clustered index is present (and should be ignored for tables that do not have a clustered index). In both cases, a number close to 0 is preferable. There is another anomaly displayed here that is a little difficult to explain: SQL Server allows multiple tables to exist within a single extent (though multiple tables may not exist within a single page), which further explains the 7.8 pages per extent.
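
The Scan Density arithmetic can be checked directly from the report above: the best count is the page count divided by 8 (the maximum pages per extent), rounded up, and the density is best over actual. As a quick sketch using MyTable1's numbers:

```sql
-- Best Count   = CEILING(18986 pages / 8 pages per extent) = 2374
-- Scan Density = 2374 / 9239 (actual count) * 100, which is about 25.70%
SELECT CEILING(18986 / 8.0)                AS BestCount,
       100.0 * CEILING(18986 / 8.0) / 9239 AS ScanDensityPct;
```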

The next items discuss the somewhat more mundane but important issue of page utilization. Again using the first table as the example, there is an average of 1,659 bytes free per page; in other words, each page is 79.51% utilized. The closer that number gets to 100, the faster the database can read in records, since more records exist on a single page. However, this must be balanced against the cost of writing to the table: a page split will occur if a write is required on a page that is full, and the overhead can be tremendous. This is exacerbated when using RAID 5 disk subsystems, since RAID 5 has a considerably slower write time compared to its read time. To account for this, we can tell SQL Server to leave each page a certain percentage full (the fill factor).
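
As a sketch (MyTable1 as in the reports here; the 90 is an arbitrary illustration, not a recommendation), the fill factor can be set when rebuilding an index:

```sql
-- Rebuild all indexes on MyTable1, leaving each leaf page 90% full
-- (i.e., 10% free space to absorb future inserts without page splits).
ALTER INDEX ALL ON MyTable1 REBUILD WITH (FILLFACTOR = 90);
```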

DBCC DBREINDEX is a related tool that will reorganize your database information in much the same way Norton Defrag works on your hard drive (see Books Online for information on how to use DBCC DBREINDEX). The following report displays the differences in the data after we defragmented it using DBCC DBREINDEX.
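
For reference, a minimal invocation might look like the following (table name from the report below; the trailing 90 is an optional fill factor):

```sql
-- Rebuild all indexes on MyTable1 with a 90 fill factor.
-- DBCC DBREINDEX is the older syntax; ALTER INDEX ... REBUILD supersedes it.
DBCC DBREINDEX ('MyTable1', '', 90);
```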

DBCC SHOWCONTIG scanning 'MyTable1' table...
Table: 'MyTable1' (1556968673); index ID: 1, database ID: 16
TABLE level scan performed.
- Pages Scanned................................: 15492
- Extents Scanned..............................: 1945
- Extent Switches..............................: 2363
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 81.94% [1937:2364]
- Logical Scan Fragmentation ..................: 15.43%
- Extent Scan Fragmentation ...................: 20.15%
- Avg. Bytes Free per Page.....................: 159.8
- Avg. Page Density (full).....................: 98.03%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'MyTable2' table...
Table: 'MyTable2' (183984032); index ID: 1, database ID: 16
TABLE level scan performed.
- Pages Scanned................................: 35270
- Extents Scanned..............................: 4415
- Extent Switches..............................: 4437
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.35% [4409:4438]
- Logical Scan Fragmentation ..................: 0.11%
- Extent Scan Fragmentation ...................: 0.66%
- Avg. Bytes Free per Page.....................: 3940.1
- Avg. Page Density (full).....................: 51.32%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Here, we can see several key improvements and some examples of how proper indexing can be very important. The most glaring item for us is how much we were able to increase the scan density. Again, using the MyTable1 table as a reference, we can see that out of 1,945 extents, there were only 2,363 extent switches. Notice that the number of extent switches is now lower than the original number of extents. This is due to the more efficient allocation of the data. And, since there is a significant reduction in the number of extent switches, searches for large quantities of contiguous data will be fulfilled much more quickly.