Thursday, September 11, 2008

Differences between SQL Server temporary tables and table variables

Temporary tables are created in tempdb. The name "temporary" is slightly misleading: even though the tables are instantiated in tempdb, they are backed by physical disk, and changes to them are recorded in the transaction log. They act like regular tables in that you can query their data via SELECT statements and modify it via UPDATE, INSERT, and DELETE statements. If created inside a stored procedure, they are destroyed when the stored procedure completes. Furthermore, the scope of any particular temporary table is the session in which it is created, meaning it is visible only to the current session. Multiple users could each create a temp table named #TableX, and any queries run simultaneously would not affect one another; they would remain autonomous transactions and the tables would remain autonomous objects. You may notice that my sample temporary table name started with a "#" sign. This prefix tells SQL Server that it is dealing with a temporary table.
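As a quick illustration (the table and column names here are made up for the example):

CREATE TABLE #TableX (id INT, name VARCHAR(50))

INSERT INTO #TableX (id, name) VALUES (1, 'test')

SELECT id, name FROM #TableX

-- destroyed automatically when the session (or stored procedure) ends,
-- but it can also be dropped explicitly
DROP TABLE #TableX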

Table Variables

The syntax for creating table variables is quite similar to that for creating regular or temporary tables. The only differences are the naming convention unique to variables in general and the need to declare the table variable as you would any other local variable in Transact-SQL.
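A minimal sketch of the declaration syntax (again with made-up names):

DECLARE @TableY TABLE (id INT, name VARCHAR(50))

INSERT INTO @TableY (id, name) VALUES (1, 'test')

SELECT id, name FROM @TableY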

Table variables have certain clear limitations:

Table variables cannot have non-clustered indexes (the only indexes on a table variable are those created by a PRIMARY KEY or UNIQUE constraint declared inline)
Constraints cannot be added to a table variable after it is declared; they can only be defined inline in the DECLARE statement
Likewise, default values for table variable columns can only be defined inline, not added later
Statistics are not created or maintained for table variables


Similarities with temporary tables include:

Instantiated in tempdb
Clustered indexes can be created on table variables and temporary tables
Both are logged in the transaction log
Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) operations against a table variable: SELECT, INSERT, UPDATE, and DELETE.

Deleting Data in SQL Server with TRUNCATE vs DELETE commands

Question
There are two main keywords used for deleting data from a table: TRUNCATE and DELETE. Although both can achieve the same end result, the methods they employ differ vastly. There are advantages, limitations, and consequences of each that you should consider when deciding which method to use.

Answer

Deleting Data Using TRUNCATE TABLE

TRUNCATE TABLE is a statement that quickly deletes all records in a table by deallocating the data pages used by the table. This reduces the resource overhead of logging the deletions, as well as the number of locks acquired; however, only the page deallocations are recorded in the transaction log, not the individual rows, so the removed records cannot be recovered from the log (although a TRUNCATE executed inside an explicit transaction can still be rolled back). You cannot specify a WHERE clause in a TRUNCATE TABLE statement; it is all or nothing. The advantage of using TRUNCATE TABLE is that in addition to removing all rows from the table, it resets the IDENTITY value back to the seed, and the deallocated pages are returned to the system for use in other areas.
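A small demonstration of the identity reset, using a hypothetical demo table:

CREATE TABLE dbo.DemoTable (id INT IDENTITY(1,1), val VARCHAR(10))

INSERT INTO dbo.DemoTable (val) VALUES ('a')
INSERT INTO dbo.DemoTable (val) VALUES ('b')

TRUNCATE TABLE dbo.DemoTable

INSERT INTO dbo.DemoTable (val) VALUES ('c')

SELECT id, val FROM dbo.DemoTable -- id starts over at the seed value of 1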

In addition, TRUNCATE TABLE statements cannot be used for tables involved in replication or log shipping, since both depend on the transaction log to keep remote databases consistent.
TRUNCATE TABLE also cannot be used when a foreign key references the table to be truncated. Note as well that TRUNCATE statements do not fire triggers, which could result in inconsistent data because ON DELETE/UPDATE triggers would not run. If all rows need to be deleted from a table that is referenced by a foreign key, you must drop the foreign key constraint and recreate it after the truncate. If a TRUNCATE TABLE statement is issued against a table that has foreign key references, an error is returned stating that the table cannot be truncated because it is referenced by a FOREIGN KEY constraint.

Deleting Data Using DELETE FROM Statement

DELETE statements remove rows one at a time, logging each row in the transaction log, as well as maintaining log sequence number (LSN) information. Although this consumes more database resources and locks, these transactions can be rolled back if necessary. You can also specify a WHERE clause to narrow down the rows to be deleted. When you delete a large number of rows using a DELETE FROM statement, the table may hold on to the emptied pages, which may require a manual release using DBCC SHRINKDATABASE (db_name).
When large tables require that all records be deleted and TRUNCATE TABLE cannot be used, the following statements can be used to achieve the same result as TRUNCATE TABLE:

DELETE from "table_name"
DBCC CHECKIDENT("table_name", RESEED, "reseed_value")
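For example, to empty the hypothetical demo table from above and have the next inserted row receive identity value 1 again (with a seed and increment of 1):

DELETE FROM dbo.DemoTable
DBCC CHECKIDENT ('dbo.DemoTable', RESEED, 0)
-- the next INSERT is assigned 0 + 1 = 1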

Script to create commands to disable, enable, drop and recreate Foreign Key constraints in SQL Server


Problem
Foreign keys (FK) are designed to maintain referential integrity within your database. When used properly, FKs let you be sure that your data is intact and that there are no orphaned records. The flip side is that FKs become an obstacle when you need to change table structures or temporarily modify data in a way that would violate the constraint. Other tips have been written that show you how to identify your FKs and why you should use them, but what is the best approach for manipulating FK constraints to make structure or data changes?

Solution
As mentioned already, other tips have been written that show you how to find the foreign keys within your database and why you should use them. You can refer to these tips for this information:

The Importance of Foreign Keys
Identify all of your foreign keys in a database

--------------------------------------------------------------------------------

Below is a script that can be used to find all foreign keys that reference the primary table that you wish to work with. In this script you provide the table name and the schema name (object owner). The script will then return a list of statements that can be copied and pasted into a query window to make these changes.

The script also takes one of three different parameter values, depending on the action you want to take:

DISABLE - this will create the command to disable all FK constraints that reference the table you are working with
ENABLE - this will create the command to enable all FK constraints that reference the table you are working with
DROP - this will create commands to drop and then recreate all FK constraints that reference the table you are working with
The values below use a table in the AdventureWorks database, so you can just copy and paste this code and run this sample against that database.

-- Enable, Disable, Drop and Recreate FKs based on Primary Key table
-- Written 2007-11-18
-- Edgewood Solutions / MSSQLTips.com
-- Works for SQL Server 2005

SET NOCOUNT ON

DECLARE @operation VARCHAR(10)
DECLARE @tableName sysname
DECLARE @schemaName sysname

SET @operation = 'DROP' --ENABLE, DISABLE, DROP
SET @tableName = 'SpecialOfferProduct'
SET @schemaName = 'Sales'

DECLARE @cmd NVARCHAR(1000)

DECLARE
@FK_NAME sysname,
@FK_OBJECTID INT,
@FK_DISABLED INT,
@FK_NOT_FOR_REPLICATION INT,
@DELETE_RULE smallint,
@UPDATE_RULE smallint,
@FKTABLE_NAME sysname,
@FKTABLE_OWNER sysname,
@PKTABLE_NAME sysname,
@PKTABLE_OWNER sysname,
@FKCOLUMN_NAME sysname,
@PKCOLUMN_NAME sysname,
@CONSTRAINT_COLID INT


DECLARE cursor_fkeys CURSOR FOR
SELECT Fk.name,
Fk.OBJECT_ID,
Fk.is_disabled,
Fk.is_not_for_replication,
Fk.delete_referential_action,
Fk.update_referential_action,
OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
schema_name(Fk.schema_id) AS Fk_table_schema,
TbR.name AS Pk_table_name,
schema_name(TbR.schema_id) Pk_table_schema
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName

OPEN cursor_fkeys

FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER

WHILE @@FETCH_STATUS = 0
BEGIN

-- create statement for enabling FK
IF @operation = 'ENABLE'
BEGIN
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] CHECK CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd
END

-- create statement for disabling FK
IF @operation = 'DISABLE'
BEGIN
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] NOCHECK CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd
END

-- create statement for dropping FK and also for recreating FK
IF @operation = 'DROP'
BEGIN

-- drop statement
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] DROP CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd

-- create process
DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT

-- create cursor to get FK columns
DECLARE cursor_fkeyCols CURSOR FOR
SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN
sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName
AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
ORDER BY Fk_Cl.constraint_column_id

OPEN cursor_fkeyCols

FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME

SET @COUNTER = 1
SET @FKCOLUMNS = ''
SET @PKCOLUMNS = ''

WHILE @@FETCH_STATUS = 0
BEGIN

IF @COUNTER > 1
BEGIN
SET @FKCOLUMNS = @FKCOLUMNS + ','
SET @PKCOLUMNS = @PKCOLUMNS + ','
END

SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'

SET @COUNTER = @COUNTER + 1

FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
END

CLOSE cursor_fkeyCols
DEALLOCATE cursor_fkeyCols

-- generate create FK statement
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] WITH ' +
CASE @FK_DISABLED
WHEN 0 THEN ' CHECK '
WHEN 1 THEN ' NOCHECK '
END + ' ADD CONSTRAINT [' + @FK_NAME
+ '] FOREIGN KEY (' + @FKCOLUMNS
+ ') REFERENCES [' + @PKTABLE_OWNER + '].[' + @PKTABLE_NAME + '] ('
+ @PKCOLUMNS + ') ON UPDATE ' +
CASE @UPDATE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
WHEN 3 THEN ' SET DEFAULT '
END + ' ON DELETE ' +
CASE @DELETE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
WHEN 3 THEN ' SET DEFAULT '
END + '' +
CASE @FK_NOT_FOR_REPLICATION
WHEN 0 THEN ''
WHEN 1 THEN ' NOT FOR REPLICATION '
END

PRINT @cmd

END

FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
END

CLOSE cursor_fkeys
DEALLOCATE cursor_fkeys
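As a representative sketch, for the AdventureWorks example above with @operation = 'DROP', the script prints pairs of statements along these lines (the constraint and columns shown are illustrative; the actual output depends on the foreign keys in your database):

ALTER TABLE [Sales].[SalesOrderDetail] DROP CONSTRAINT [FK_SalesOrderDetail_SpecialOfferProduct_SpecialOfferIDProductID]
ALTER TABLE [Sales].[SalesOrderDetail] WITH CHECK ADD CONSTRAINT [FK_SalesOrderDetail_SpecialOfferProduct_SpecialOfferIDProductID] FOREIGN KEY ([SpecialOfferID],[ProductID]) REFERENCES [Sales].[SpecialOfferProduct] ([SpecialOfferID],[ProductID]) ON UPDATE NO ACTION ON DELETE NO ACTION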

How to create a case-sensitive instance of SQL Server 2000 and How can I make my SQL queries case sensitive?

1. How to create a case-sensitive instance of SQL Server 2000
To create a case-sensitive instance of SQL Server 2000, follow the steps below while running SQL Server Setup:

1.1 Run SQL Server Setup to install SQL Server 2000 Components, select Install Database Server, and then click Next at the Welcome screen of the SQL Server Installation Wizard.

1.2 In the Computer Name dialog box, Local Computer is the default option and the local computer name appears in the edit box. Click Next.

1.3 In the Installation Selection dialog box, click Create a new instance of SQL Server, or install Client Tools, and then click Next.

1.4 Follow the directions on the User Information and related screens.

1.5 In the Installation Definition dialog box, click Server and Client Tools, and then click Next.

1.6 In the Instance Name dialog box:
To create a case-sensitive default instance, accept the Default check box and click Next.

1.7 To create a case-sensitive named instance, clear the Default check box and type an instance name.
In the Setup Type dialog box, click Custom, and click Next.

1.8 In the Select Components, Services Accounts, and Authentication Mode dialog boxes, change or accept the default settings, and then click Next.

1.9 Security note: when possible, use Windows Authentication.

1.10 In the Collation Settings dialog box, you have two options:
To make a Windows Locale collation case-sensitive, select Collation designator and then select the correct collation designator from the list. Clear the Binary check box, and then select the Case-sensitive check box.

1.11 To make a SQL collation case-sensitive, select SQL Collations, and then select the correct collation name.
For more information about collation options, click Help. When you finish setting the options, click Next.

1.12 In subsequent dialog boxes, change or accept the default settings, and then click Next.

1.13 When you are finished specifying options, click Next in the Start Copying Files dialog box.

1.14 In the Choose Licensing Mode dialog box, make selections according to your license agreement, and click Continue to begin the installation.

1.15 Click Help for information about licensing, or see your system administrator.

2. How can I make my SQL queries case sensitive?

If you installed SQL Server with the default collation options, you might find that the following queries return the same results:

CREATE TABLE mytable
(
mycolumn VARCHAR(10)
)
GO

SET NOCOUNT ON

INSERT mytable VALUES('Case')
GO

SELECT mycolumn FROM mytable WHERE mycolumn='Case'
SELECT mycolumn FROM mytable WHERE mycolumn='caSE'
SELECT mycolumn FROM mytable WHERE mycolumn='case'

You can alter your query by forcing collation at the column level:

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'caSE'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'case'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'Case'

-- if myColumn has an index, you will likely benefit by adding
-- AND myColumn = 'case'

If you want to do this in a more global way, instead of modifying each individual query, you can force the collation at the database level or at the column level, using the ALTER DATABASE and ALTER TABLE commands, respectively. You can see the current collation on the properties tab of the database server through Enterprise Manager (if you're going to change this setting, MAKE NOTE OF THIS VALUE).

And you can see the description from running the following query:

SELECT DATABASEPROPERTYEX('your_database_name', 'Collation') -- substitute your database name
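For the database-level change mentioned above, the command is ALTER DATABASE; a minimal sketch, using a hypothetical database name (the database needs exclusive access while its collation is changed):

ALTER DATABASE mydatabase COLLATE Latin1_General_CS_AS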

As changing this setting can impact applications and SQL queries, I would isolate this test first. In SQL Server 2000, you can easily run an ALTER TABLE statement to change the sort order of a specific column, forcing it to be case sensitive. First, execute the following query to determine what you need to change it back to:

EXEC sp_help 'mytable'

The second recordset should contain the following information, in a default scenario:

Column_Name Collation
----------- ----------------------------------------------
mycolumn SQL_Latin1_General_CP1_CI_AS

Whatever the 'Collation' column returns, you now know what you need to change it back to after you make the following change, which will force case sensitivity:

ALTER TABLE mytable
ALTER COLUMN mycolumn VARCHAR(10)
COLLATE Latin1_General_CS_AS
GO

SELECT mycolumn FROM mytable WHERE mycolumn='Case'
SELECT mycolumn FROM mytable WHERE mycolumn='caSE'
SELECT mycolumn FROM mytable WHERE mycolumn='case'

If this screws things up, you can change it back, simply by issuing a new ALTER TABLE statement (be sure to replace my COLLATE identifier with the one you found previously):

ALTER TABLE mytable
ALTER COLUMN mycolumn VARCHAR(10)
COLLATE SQL_Latin1_General_CP1_CI_AS

If you are stuck with SQL Server 7.0, you can try this workaround, which might be a little more of a performance hit (you should only get a result for the FIRST match):

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('Case' AS VARBINARY(10))

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('caSE' AS VARBINARY(10))

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('case' AS VARBINARY(10))


SQL Server script to rebuild all indexes for all tables and all databases

Problem
One of the main functions of a DBA is to maintain database indexes. There have been several tips written about different commands to use for both index rebuilds and index defrags as well as the differences between index maintenance with SQL Server 2000 and SQL Server 2005. In addition, other tips have been written about using maintenance plans to maintain indexes on all databases. One of the issues with maintenance plans is that they don't always seem to be as reliable as you would hope and you also sometimes get false feedback on whether the task actually was successful or not. What other options are there to rebuild indexes on all databases besides using a maintenance plan?

Solution
The one nice thing about maintenance plans is that it works across multiple databases and therefore you can push out one task to handle the same activity across all of your databases. The problem that I have seen with maintenance plans though is that sometimes they do not work as expected, therefore here is another approach.

The script below allows you to rebuild indexes for all databases and all tables within a database. This could be further tweaked to handle only indexes that need maintenance as well as doing either index defrags or index rebuilds.

The script uses two cursors one for the databases and another cursor for the tables within the database. In addition, it uses the INFORMATION_SCHEMA.TABLES view to list all of the tables within a database.

Because we need to change from database to database we also need to create dynamic SQL code for the queries. For the DBCC DBREINDEX option we can just pass in the parameters, but for the ALTER INDEX statement we need to again build the query dynamically.

DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb','distribution')
ORDER BY 1

OPEN DatabaseCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN

SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' + table_name + '']'' as tableName
FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''

-- create table cursor
EXEC (@cmd)
OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @Table
WHILE @@FETCH_STATUS = 0
BEGIN

-- SQL 2000 command
--DBCC DBREINDEX(@Table,' ',@fillfactor)

-- SQL 2005 command
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)

FETCH NEXT FROM TableCursor INTO @Table
END

CLOSE TableCursor
DEALLOCATE TableCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor

The DBCC DBREINDEX command will work for both SQL 2000 and SQL 2005, but the new syntax that you should use for SQL Server 2005 is the ALTER INDEX command, so based on this you could do the following:

SQL 2000 - uncomment this line:

DBCC DBREINDEX(@Table,' ',@fillfactor)

and comment out these lines:

SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)

SQL 2005 - the script is already set up this way: the DBCC DBREINDEX line is commented out and the ALTER INDEX lines are active.

Also, I have excluded the system databases, so you can include these or add other databases to exclude from your index maintenance routines.

How to create a log file in ASP.NET

The sample below, written in VB.NET, writes a log file from the Page_Load handler whenever the page receives a POST:

Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
    'Put user code to initialize the page here
    'Note: the late-bound FileSystemObject calls below require Option Strict Off
    Dim fs, fname, path, timestamp

    'Only log posted form data
    If Request.ServerVariables("request_method") = "POST" Then

        fs = Server.CreateObject("Scripting.FileSystemObject")

        path = "c:\" 'set your log file directory path here
        'Build a timestamp such as 11-9-2008--14-30-05 for the file name
        timestamp = Day(Date.Now) & "-" & Month(Date.Now) & "-" & Year(Date.Now)
        timestamp = timestamp & "--" & Hour(Now) & "-" & Minute(Now) & "-" & Second(Now)

        'Create the log file, named after the timestamp and the posted order id
        fname = fs.CreateTextFile(path & timestamp & "-" & Request.Form("oid") & ".log", True)

        fname.WriteLine("OrderID - " & Request.Form("oid"))
        fname.WriteLine("Transaction Status - " & Request.Form("transactionstatus"))
        fname.WriteLine("Total - " & Request.Form("total"))
        fname.WriteLine("ClientID - " & Request.Form("clientid"))
        fname.WriteLine("Transaction Time Stamp - " & Request.Form("datetime"))
        fname.Close()

        fname = Nothing
        fs = Nothing
    End If
End Sub

Read Form Controls on Either the Server Side or the Client Side

1. To read form controls on the server side, use the code below. Request.Form.Count returns the number of posted form fields:

for (int i = 0; i < Request.Form.Count; i++)
{
    // write out each posted field name and its value
    Response.Write(Request.Form.GetKey(i) + ": " + Request.Form.Get(i) + "<br/>");
}


2. To read form controls on the client side, use the code below, which counts how many checkboxes on the first form are checked:

var iChk = 0;
for (var i = 0; i < document.forms[0].elements.length; i++)
{
    var e = document.forms[0].elements[i];
    if (e.type == "checkbox")
    {
        if (e.checked)
        {
            iChk++;
        }
    }
}

Avoid Index Redundancy in SQL Server Tables

Problem
I'm trying to design indexes on a new Customer table, and I have three separate query searches to satisfy. One search is by last name; the second is by last name and first name; the last is by last name and an active flag. How should I create my indexes?

Solution
I've come across more instances than I'd like of data models where the original database developer would solve this problem by creating three separate indexes as follows:

create index ix_customer_lastname on dbo.Customer(lastname)
GO
create index ix_customer_lastname_firstname on dbo.Customer(lastname, firstname)
GO
create index ix_customer_lastname_activesw on dbo.Customer(lastname, activesw)
GO

The reality is, in this case you only need one!

The following queries

select * from dbo.Customer where lastname = 'Washington'
GO
select * from dbo.Customer where lastname = 'Washington' and activesw = 1
GO
select * from dbo.Customer where firstname = 'George' and lastname = 'Washington'
GO
select * from dbo.Customer
where firstname = 'George' and lastname = 'Washington' and activesw = 1
GO

can all be satisfied using the following single index

create index ix_customer_lastname on dbo.Customer(lastname, firstname, activesw)
GO


The lead column of each index is lastname, and this is the most important one. SQL Server keeps frequency distribution statistics in the form of a histogram on the lead column of every index. It also keeps secondary selectivity (density) information for the additional index column combinations.
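You can inspect these statistics with DBCC SHOW_STATISTICS; for example, against the single compound index created above:

dbcc show_statistics ('dbo.Customer', ix_customer_lastname)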

Having redundant indexes in your database wastes SQL Server system resources because the database engine has to maintain more data structures than it needs to. Consider a new Customer being added to the database. The new last name inserted would have to be accounted for in the logical ordering of each index resulting in unnecessary overhead to maintain each of these indexes. Furthermore, redundant indexes waste disk space.

Let's create a Customer table and some indexes to illustrate these concepts. This script will create the table and 10,000 customer rows.

set nocount on
go
create table dbo.Customer (customerid int identity primary key,
firstname char(40), lastname char(40),
address1 char(500), city char(40), state char(20), activesw bit)

declare @counter int, @id varchar(4)
select @counter = 1
while (@counter <= 10000)
begin
select @id = cast(@counter as varchar)
insert into customer(firstname, lastname, address1, city, activesw)
select 'f'+@id, 'l'+@id, 'a'+@id, 'c'+@id, 1

select @counter = @counter + 1
end

create index ix_customer_lastname on dbo.Customer(lastname)
create index ix_customer_lastname_firstname on dbo.Customer(lastname, firstname)
create index ix_customer_lastname_activesw on dbo.Customer(lastname, activesw)
go


Now let's examine the space used by the table's indexes.
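sp_spaceused reports the table's reserved, data, and index sizes:

exec sp_spaceused 'Customer'

From the index_size column of the output we can see how much disk space the three redundant indexes consume.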


Now let's run the following queries and examine the optimizer's index selections

set showplan_text on
go
select customerid from dbo.Customer where lastname = 'l22'
select customerid from dbo.Customer where lastname = 'l22' and activesw = 1
select customerid from dbo.Customer where firstname = 'f22' and lastname = 'l22'
select customerid from dbo.Customer where firstname = 'f22' and lastname = 'l22' and activesw = 1

Now these are interesting. The first and second queries used index ix_customer_lastname_activesw. The third query used index ix_customer_lastname_firstname. For the last query, the optimizer decided to use ix_customer_lastname_firstname as well and decided to do a lookup to get the activesw value. If you haven't noticed yet, index ix_customer_lastname was not used to satisfy any of these queries. It's just sitting around, literally taking up space.

Now let's drop these indexes and add a single index to satisfy the same four queries

drop index dbo.Customer.ix_customer_lastname
drop index dbo.Customer.ix_customer_lastname_firstname
drop index dbo.Customer.ix_customer_lastname_activesw

create index ix_customer_lastname on dbo.Customer(lastname, firstname, activesw)


If we now re-run sp_spaceused, we'll see that the index space used by the table has been reduced substantially, since one compound index has replaced three overlapping ones.

Now let's re-run the queries we ran earlier and examine the new execution plans

As we now see, the single index satisfies all four queries. In addition, the last query that performed a lookup for the activesw now reads it directly from the index and does not resort to searching the clustered table for the value.

By carefully defining your indexes you can save the database engine some work, save some space on your disks, and still have high performing queries.

Understanding SQL Server Indexing (clustered and nonclustered)

Problem
With so many aspects of SQL Server to cover and to write about, some of the basic principles are often overlooked. Several people have asked questions about indexing, along with a general overview of the differences between clustered and non-clustered indexes. Based on the number of questions that we have received, this tip will discuss the differences between indexes and some general guidelines around indexing.

Solution
From a simple standpoint, SQL Server offers two types of indexes: clustered and non-clustered. In its simplest definition, a clustered index is an index that stores the actual data, and a non-clustered index is just a pointer to the data. A table can only have one clustered index and up to 249 non-clustered indexes. If a table does not have a clustered index it is referred to as a heap. So what does this actually mean?

To further clarify this, let's take a look at what indexes do and why they are important. The primary reason indexes are built is to provide faster access to the specific data your query is trying to retrieve. This could be either a clustered or non-clustered index. Without an index, SQL Server would need to read through all of the data in order to find the rows that satisfy the query. If you have ever looked at a query plan, the difference would be an Index Seek vs. a Table Scan, as well as some other operations depending on the data selected.

Here are some examples of queries that were run. These were run against table dbo.contact, which has about 20,000 rows of data. Each query was run with no index as well as with clustered and non-clustered indexes. To show the impact, a graphical query plan was examined for each query. Such a plan can be created by highlighting the query and pressing Control-L (Ctrl-L) in the query window.
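Based on the descriptions that follow, the test query and indexes presumably looked something like this (the index names are hypothetical):

-- the query used in examples 1, 2, 3, and 5
SELECT * FROM dbo.contact WHERE lastname = 'Adams'

-- the covering query used in example 4
SELECT lastname FROM dbo.contact WHERE lastname = 'Adams'

-- non-clustered index on lastname
CREATE NONCLUSTERED INDEX IX_contact_lastname ON dbo.contact (lastname)

-- clustered index on lastname
CREATE CLUSTERED INDEX CIX_contact_lastname ON dbo.contact (lastname)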


1 - Table with no indexes
When the query runs, since there are no indexes, SQL Server does a Table Scan against the table to look through every row to determine if any of the records have a lastname of "Adams". This query has an Estimated Subtree Cost of 0.437103. This is the cost to SQL Server to execute the query. The lower the number the less resource intensive for SQL Server.

2- Table with non-clustered index on lastname column
When this query runs, SQL Server uses the index to do an Index Seek, and then it needs to do a RID Lookup to get the actual data. You can see from the Estimated Subtree Cost of 0.263888 that this is faster than the above query.

3- Table with clustered index on lastname column
When this query runs, SQL Server does an Index Seek and since the index points to the actual data pages, the Estimated Subtree Cost is only 0.0044572. This is by far the fastest access method for this type of query.

4- Table with non-clustered index on lastname column
In this query we are only requesting the lastname column. Since this query can be satisfied by the non-clustered index alone (a covering index), SQL Server does not need to access the actual data pages. Based on this, the Estimated Subtree Cost is only 0.0033766. As you can see, this is even better than example #3.

To take this a step further, the below output is based on having a clustered index on lastname and no non-clustered index. You can see that the subtree cost is still the same as returning all of the columns even though we are only selecting one column. So the non-clustered index performs better.

5- Table with clustered index on contactId and non-clustered on lastname column
For this query we now have two indexes: a clustered and a non-clustered. The query that is run is the same as example 2. From this output you can see that the RID Lookup has been replaced with a Clustered Index Seek. Overall it is the same type of operation, except that it uses the clustered index. The subtree cost is 0.264017, essentially the same as example 2.

So based on these examples you can see the benefits of using indexes. This example table only had 20,000 rows of data, which is quite small compared to most database tables, so you can imagine the impact these choices would have on very large tables. The first idea that comes to mind is to use all clustered indexes, but because a clustered index is where the actual data is stored, a table can only have one. The second thought may be to index every column. Although this may be helpful when querying the data, there is also the overhead of maintaining all of these indexes every time you do an INSERT, UPDATE, or DELETE.

Split the String values with a special character in MS Flow to convert this into Array

Many times we have a requirement to prepare the mailing address for some of the documents; suppose there are Address Line1, Address Line2, ...