Having highlighted the main topics around problematic coding patterns, we can now drill down into indexing enhancements.
Indexing is a broad subject, and it takes a skilled hand to get it right.
We will therefore sketch a simple approach for simple circumstances, to serve as a solid basis for more challenging ones:
1- The query execution plan is usually the best reference for finding missing indexes and for estimating the improvement percentage of expensive queries, even without running them.
2- The Database Engine Tuning Advisor (DTA) can also be used to suggest candidate indexes, but its suggestions need to be reviewed and validated against the current workload.
3- You have to look for problematic parts in either the estimated or the actual query execution plan, such as:
- Index Scans
- Table Scans
- TVF scans (table-valued function scans)
- Index scans/seeks with high I/O or CPU cost (particularly those larger than 1)
- Key lookups or bookmark lookups.
4- Then we can address each one and apply the appropriate workable solutions, as follows:
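As a complement to reading plans by hand, the missing-index suggestions that SQL Server collects for point 1 can also be queried directly from the missing-index DMVs; a minimal sketch:

```sql
-- List missing-index suggestions recorded by the optimizer since the last restart,
-- ordered by a rough estimate of their potential benefit.
SELECT TOP (20)
       mid.statement                 AS table_name,
       mid.equality_columns,         -- candidates for the index key columns
       mid.inequality_columns,       -- also key candidates (range predicates)
       mid.included_columns,         -- candidates for the INCLUDE list
       migs.user_seeks,
       migs.avg_user_impact          -- estimated % improvement of affected queries
FROM   sys.dm_db_missing_index_details     AS mid
JOIN   sys.dm_db_missing_index_groups      AS mig
       ON mig.index_handle = mid.index_handle
JOIN   sys.dm_db_missing_index_group_stats AS migs
       ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```

Like the DTA output, these suggestions are raw candidates and must be validated against the workload before creating anything.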
A- Index Scans:
This is the area that deserves the most emphasis, so we have to create the appropriate indexes by following the steps below:
- Create non-clustered indexes on the tables, either as covering compound indexes or as multiple single-column indexes.
- Covering compound indexes suit databases with OLAP workloads, while multiple single-column indexes suit databases with intensive OLTP workloads.
- Assuming an OLAP workload, where expensive queries are more probable, what is a covering compound index, and how is it configured?
It is an index that contains all parts of the query, where those parts can be placed as follows:
- Join columns and WHERE-condition columns are the main candidates for the key columns.
- ORDER BY and GROUP BY columns can also go in the key columns if they matter significantly for sorting; otherwise it is enough to place them in the INCLUDE column list.
- All relevant SELECT columns should go in the INCLUDE column list, to ensure the index truly covers the query.
- This way we can essentially guarantee that the query optimizer will build a plan that seeks these indexes, ending up with healthy index seeks instead of index scans.
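To illustrate the placement rules above (table and column names here are hypothetical), a covering compound index for a typical join/filter/select query might look like this:

```sql
-- Hypothetical query to cover:
--   SELECT o.OrderID, o.OrderDate, o.TotalAmount
--   FROM   dbo.Orders o
--   JOIN   dbo.Customers c ON c.CustomerID = o.CustomerID
--   WHERE  o.Status = 'Shipped'
--   ORDER BY o.OrderDate;

-- Key columns: the join column, the WHERE column, and the ORDER BY column
-- (sorting on OrderDate matters here, so it earns a key-column slot).
-- INCLUDE columns: the remaining SELECT columns, so the index fully covers the query.
CREATE NONCLUSTERED INDEX IX_Orders_Covering
ON dbo.Orders (CustomerID, Status, OrderDate)
INCLUDE (OrderID, TotalAmount);
```

With every referenced column present in the index, the optimizer can satisfy the query with an index seek and no base-table access.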
B- Table Scans
This basically happens when a table is missing a primary key, so PKs have to be created on the appropriate columns of the relevant tables.
However, if you are stuck identifying the appropriate columns due to a shortage of business information, you can add an identity column and make it the PK, which can also significantly improve OLTP transactions.
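A minimal sketch of that fallback (the table name is hypothetical): add a surrogate identity column to the heap and promote it to a clustered primary key:

```sql
-- dbo.AuditLog is a heap with no obvious natural key, so add an identity column...
ALTER TABLE dbo.AuditLog
ADD AuditLogID INT IDENTITY(1, 1) NOT NULL;

-- ...and make it the primary key. By default this also creates the clustered index,
-- so the table is no longer a heap and plain table scans go away.
ALTER TABLE dbo.AuditLog
ADD CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditLogID);
```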
C- TVF Scans (Table-Valued Function Scans)
This part can be resolved by the coding enhancements for TVFs explained in the fourth part.
D- Index Scans/Seeks with High I/O or CPU Cost (particularly those larger than 1)
Even though you may have reached the optimal index design for the key and INCLUDE columns, the query optimizer may still, unfortunately, fail to seek this index because of its high I/O or CPU cost.
Or the optimizer may have no better choice than these operators, yet their I/O and CPU costs still end up being a performance culprit.
Therefore the new indexing techniques of SQL Server 2008 can be a good way out here, such as:
- Filtered indexes (new in 2008), which narrow down the seek scope of the index, yielding the lowest possible I/O cost.
For example:
DECLARE @employename nvarchar(50) = 'Jone'
DECLARE @employid int = 2

SELECT *
FROM Employees s
WHERE s.empname = @employename
  AND s.empid = @employid
  AND s.salary < 10000
  AND s.salary > 5000

CREATE NONCLUSTERED INDEX [Employee_index1] ON [dbo].[Employees]
(
    [empid] ASC,
    [empname] ASC,
    [salary] ASC
)
WHERE ([salary] < (10000) AND [salary] > (5000))
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Please bear in mind that:
I. The filter criteria must use static string values or literals, together with simple operators such as <, >, <>, =; complex operators are not allowed, and even the BETWEEN operator cannot be used there.
II. The filter criteria should use the same syntax as the query itself; for example, BETWEEN operators should be switched to < and > operators, as above.
III. The filter criteria cannot use any derived function; for example, AVG([salary]) < 100 cannot be used.
IV. From the three points above we can briefly conclude that the filtered index of 2008 is very beneficial, but restricted in how it can be used.
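To make point II concrete (the index name here is hypothetical), a predicate written with BETWEEN would be rewritten with plain comparison operators before it can appear in a filtered index definition:

```sql
-- WHERE salary BETWEEN 5000 AND 10000   -- not allowed in a filtered index
-- Equivalent inclusive rewrite using simple comparison operators:
CREATE NONCLUSTERED INDEX IX_Employees_MidSalary
ON dbo.Employees (empid)
WHERE (salary >= 5000 AND salary <= 10000);
```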
- Data compression (new in 2008), particularly page compression, which can bring a significant reduction in I/O cost.
(Row compression can also be used, according to the results of the USP_Compression_Savings stored procedure, which gives clear estimates of the compression percentage for both page and row compression.)
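If that stored procedure is not available in your environment, SQL Server 2008 ships a built-in equivalent, sp_estimate_data_compression_savings; a minimal sketch (the table name is hypothetical):

```sql
-- Estimate how much space PAGE compression would save on dbo.Employees
-- (run again with 'ROW' as the last argument to compare both types).
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Employees',
     @index_id         = NULL,    -- all indexes
     @partition_number = NULL,    -- all partitions
     @data_compression = 'PAGE';

-- If the estimate looks worthwhile, enable page compression on the table:
ALTER TABLE dbo.Employees REBUILD WITH (DATA_COMPRESSION = PAGE);
```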
E- RID Lookup or Bookmark Lookup (Key Lookup):
1- Bookmark (key) lookups indicate missing columns in the relevant indexes; those columns should be added either to the key columns or to the INCLUDE columns, according to their position in the query, as explained above.
2- RID (row identifier) lookups indicate a missing clustered index, which comes with the PK by default.
Therefore both cases can be overcome with the same workable solutions discussed in the previous points.
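A minimal sketch of the key-lookup fix (names are hypothetical): when a plan shows an index seek followed by a key lookup fetching a couple of output columns, adding those columns to the INCLUDE list removes the lookup:

```sql
-- The plan showed: Index Seek on IX_Orders_Status + Key Lookup
-- fetching OrderDate and TotalAmount from the clustered index.
-- Recreate the index so it covers the query and the lookup disappears.
CREATE NONCLUSTERED INDEX IX_Orders_Status
ON dbo.Orders (Status)
INCLUDE (OrderDate, TotalAmount)
WITH (DROP_EXISTING = ON);
```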
Irrespective of that, you have to keep two DB maintenance jobs in place:
- An online index rebuild to overcome index fragmentation, particularly when fragmentation exceeds 30%.
- This can be done on a daily or weekly basis, according to three parameters:
- Data entity size
- Available off-peak times
- Index fragmentation percentage
- An update-statistics job, particularly for index statistics (much preferred to run at least daily).
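The two jobs above can be sketched as follows (the table name is hypothetical, and ONLINE = ON requires Enterprise edition):

```sql
-- 1) Find indexes in the current database fragmented beyond 30%.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30;

-- Rebuild the offending indexes online, e.g. per table:
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);

-- 2) Refresh statistics (at least daily), particularly index statistics:
UPDATE STATISTICS dbo.Orders;
```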