Thursday, September 11, 2008

Differences between SQL Server temporary tables and table variables

Temporary tables are created in tempdb. The name "temporary" is slightly misleading: even though the tables are instantiated in tempdb, they are backed by physical disk and their changes are even recorded in tempdb's transaction log. They act like regular tables in that you can query their data with SELECT statements and modify it with UPDATE, INSERT, and DELETE statements. If created inside a stored procedure, they are destroyed upon completion of the stored procedure. Furthermore, the scope of any particular temporary table is the session in which it is created, meaning it is only visible to the current connection. Multiple users could each create a temp table named #TableX, and any queries run simultaneously would not affect one another - they would remain autonomous transactions and the tables would remain autonomous objects. You may notice that my sample temporary table name starts with a "#" sign; this prefix is what tells SQL Server it is dealing with a temporary table.
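
A minimal sketch of working with a temporary table (the table and column names are made up for illustration):

-- Create a session-scoped temporary table (note the leading #)
CREATE TABLE #TableX
(
    id INT IDENTITY(1,1) PRIMARY KEY,
    name VARCHAR(50)
)

INSERT INTO #TableX (name) VALUES ('example row')

SELECT id, name FROM #TableX

-- Drop it explicitly when finished (it would also be dropped automatically
-- when the session or the creating stored procedure ends)
DROP TABLE #TableX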

Table Variables

The syntax for creating a table variable is quite similar to creating either a regular or a temporary table. The only differences are the naming convention unique to variables in general and the need to declare the table variable with a DECLARE statement, as you would any other local variable in Transact-SQL.
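
For example, a table variable is declared rather than created (again, the names are illustrative only):

-- Declare the table variable; any constraints and defaults must be defined inline
DECLARE @TableX TABLE
(
    id INT IDENTITY(1,1) PRIMARY KEY,
    name VARCHAR(50) DEFAULT 'unknown'
)

INSERT INTO @TableX (name) VALUES ('example row')

SELECT id, name FROM @TableX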

Table variables have certain clear limitations:
Non-clustered indexes cannot be created on table variables (apart from the index that backs a PRIMARY KEY or UNIQUE constraint declared inline)
Constraints cannot be added after the table variable is declared; they must be defined inline in the DECLARE statement
Default values cannot be added to table variable columns after declaration; they too must be defined inline
Statistics are not created or maintained on table variables


Similarities with temporary tables include:

Instantiated in tempdb
Clustered indexes can be created on temporary tables and on table variables (on a table variable, only by declaring a PRIMARY KEY or UNIQUE constraint inline)
Both are logged in the transaction log
Just as with temporary and regular tables, users can run all of the Data Manipulation Language (DML) statements against a table variable: SELECT, INSERT, UPDATE, and DELETE.

Deleting Data in SQL Server with TRUNCATE vs DELETE commands

Question
There are two main keywords used for deleting data from a table: TRUNCATE and DELETE. Although each achieves the same result, the methods employed for each vastly differ. There are advantages, limitations, and consequences of each that you should consider when deciding which method to use.

Answer

Deleting Data Using TRUNCATE TABLE

TRUNCATE TABLE is a statement that quickly deletes all records in a table by deallocating the data pages used by the table. This reduces the resource overhead of logging the deletions, as well as the number of locks acquired. The operation is minimally logged: only the page deallocations are recorded in the transaction log, not the individual rows, so the removed records cannot be recovered from the log (although a TRUNCATE issued inside an explicit transaction can still be rolled back before it is committed). You cannot specify a WHERE clause in a TRUNCATE TABLE statement - it is all or nothing. The advantage of using TRUNCATE TABLE is that, in addition to removing all rows from the table, it resets the IDENTITY value back to the SEED, and the deallocated pages are returned to the system for use in other areas.
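
A quick sketch of the identity-reset behaviour, using a hypothetical table:

CREATE TABLE dbo.TruncateDemo (id INT IDENTITY(1,1), col1 VARCHAR(10))

INSERT INTO dbo.TruncateDemo (col1) VALUES ('a')
INSERT INTO dbo.TruncateDemo (col1) VALUES ('b')

TRUNCATE TABLE dbo.TruncateDemo   -- removes all rows and deallocates the pages

INSERT INTO dbo.TruncateDemo (col1) VALUES ('c')

SELECT id, col1 FROM dbo.TruncateDemo   -- id starts again at the seed value (1)

DROP TABLE dbo.TruncateDemo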

In addition, TRUNCATE TABLE statements cannot be used for tables involved in replication or log shipping, since both depend on the transaction log to keep remote databases consistent.
TRUNCATE TABLE also cannot be used when a foreign key references the table to be truncated. Furthermore, TRUNCATE statements do not fire DELETE triggers, which could result in inconsistent data if your integrity logic relies on them. If all rows need to be removed from a table that is referenced by a foreign key, you must either drop the foreign key constraint and recreate it afterwards, or fall back to DELETE. If a TRUNCATE TABLE statement is issued against a table that has foreign key references, an error is returned stating that the table cannot be truncated because it is referenced by a FOREIGN KEY constraint.

Deleting Data Using DELETE FROM Statement

DELETE statements remove rows one at a time, logging each row in the transaction log, as well as maintaining log sequence number (LSN) information. Although this consumes more database resources and locks, these transactions can be rolled back if necessary. You can also specify a WHERE clause to narrow down the rows to be deleted. When you delete a large number of rows using a DELETE FROM statement, the table may hold on to the emptied pages, which may require a manual release using DBCC SHRINKDATABASE (db_name).
When large tables require that all records be deleted and TRUNCATE TABLE cannot be used, the following statements can be used to achieve the same result as TRUNCATE TABLE:

DELETE FROM table_name
DBCC CHECKIDENT ('table_name', RESEED, reseed_value)
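
For example, for a hypothetical dbo.Orders table whose identity column uses the default seed and increment of 1:

DELETE FROM dbo.Orders

-- Reseed to 0 so the next inserted row receives identity value 1
DBCC CHECKIDENT ('dbo.Orders', RESEED, 0)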

Script to create commands to disable, enable, drop and recreate Foreign Key constraints in SQL Server


Problem
Foreign keys (FK) are designed to maintain referential integrity within your database. When used properly, FKs allow you to be sure that your data is intact and that there are no orphaned records. On the flip side, FKs can also become an issue when you need to change table structures or temporarily modify data in a way that would violate the foreign key constraint. Other tips have been written that show you how to identify your FKs and why you should use them, but what is the best approach for manipulating FK constraints to make structure or data changes?

Solution
As mentioned already other tips have been written that show you how to find your foreign keys within your database and why you should use foreign keys. You can refer to these tips for this information:

The Importance of Foreign Keys
Identify all of your foreign keys in a database


Below is a script that can be used to find all foreign keys that reference the primary table that you wish to work with. In this script you provide the table name and the schema name (object owner). The script will then return a list of statements that can be copied and pasted into a query window to make these changes.

The script's operation parameter takes one of three values depending on the action you want to take:

DISABLE - this will create the command to disable all FK constraints that reference the table you are working with
ENABLE - this will create the command to enable all FK constraints that reference the table you are working with
DROP - this will create a command to drop all FK constraints and a command to recreate all FK constraints that reference the table you are working with
The values below use a table in the AdventureWorks database, so you can just copy and paste this code and run this sample against that database.

-- Enable, Disable, Drop and Recreate FKs based on Primary Key table
-- Written 2007-11-18
-- Edgewood Solutions / MSSQLTips.com
-- Works for SQL Server 2005

SET NOCOUNT ON

DECLARE @operation VARCHAR(10)
DECLARE @tableName sysname
DECLARE @schemaName sysname

SET @operation = 'DROP' --ENABLE, DISABLE, DROP
SET @tableName = 'SpecialOfferProduct'
SET @schemaName = 'Sales'

DECLARE @cmd NVARCHAR(1000)

DECLARE
@FK_NAME sysname,
@FK_OBJECTID INT,
@FK_DISABLED INT,
@FK_NOT_FOR_REPLICATION INT,
@DELETE_RULE smallint,
@UPDATE_RULE smallint,
@FKTABLE_NAME sysname,
@FKTABLE_OWNER sysname,
@PKTABLE_NAME sysname,
@PKTABLE_OWNER sysname,
@FKCOLUMN_NAME sysname,
@PKCOLUMN_NAME sysname,
@CONSTRAINT_COLID INT


DECLARE cursor_fkeys CURSOR FOR
SELECT Fk.name,
Fk.OBJECT_ID,
Fk.is_disabled,
Fk.is_not_for_replication,
Fk.delete_referential_action,
Fk.update_referential_action,
OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
schema_name(Fk.schema_id) AS Fk_table_schema,
TbR.name AS Pk_table_name,
schema_name(TbR.schema_id) Pk_table_schema
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName

OPEN cursor_fkeys

FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER

WHILE @@FETCH_STATUS = 0
BEGIN

-- create statement for enabling FK
IF @operation = 'ENABLE'
BEGIN
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] CHECK CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd
END

-- create statement for disabling FK
IF @operation = 'DISABLE'
BEGIN
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] NOCHECK CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd
END

-- create statement for dropping FK and also for recreating FK
IF @operation = 'DROP'
BEGIN

-- drop statement
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] DROP CONSTRAINT [' + @FK_NAME + ']'

PRINT @cmd

-- create process
DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT

-- create cursor to get FK columns
DECLARE cursor_fkeyCols CURSOR FOR
SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN
sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName
AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
ORDER BY Fk_Cl.constraint_column_id

OPEN cursor_fkeyCols

FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME

SET @COUNTER = 1
SET @FKCOLUMNS = ''
SET @PKCOLUMNS = ''

WHILE @@FETCH_STATUS = 0
BEGIN

IF @COUNTER > 1
BEGIN
SET @FKCOLUMNS = @FKCOLUMNS + ','
SET @PKCOLUMNS = @PKCOLUMNS + ','
END

SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'

SET @COUNTER = @COUNTER + 1

FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
END

CLOSE cursor_fkeyCols
DEALLOCATE cursor_fkeyCols

-- generate create FK statement
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] WITH ' +
CASE @FK_DISABLED
WHEN 0 THEN ' CHECK '
WHEN 1 THEN ' NOCHECK '
END + ' ADD CONSTRAINT [' + @FK_NAME
+ '] FOREIGN KEY (' + @FKCOLUMNS
+ ') REFERENCES [' + @PKTABLE_OWNER + '].[' + @PKTABLE_NAME + '] ('
+ @PKCOLUMNS + ') ON UPDATE ' +
CASE @UPDATE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
WHEN 3 THEN ' SET DEFAULT '
END + ' ON DELETE ' +
CASE @DELETE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
WHEN 3 THEN ' SET DEFAULT '
END + '' +
CASE @FK_NOT_FOR_REPLICATION
WHEN 0 THEN ''
WHEN 1 THEN ' NOT FOR REPLICATION '
END

PRINT @cmd

END

FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
END

CLOSE cursor_fkeys
DEALLOCATE cursor_fkeys
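
The commands are PRINTed rather than executed, so you can review them before running anything. With @operation = 'DROP', the output will look something like the following (the referencing table and constraint names here are made up for illustration; the real output lists the actual AdventureWorks constraints):

ALTER TABLE [Sales].[SomeReferencingTable] DROP CONSTRAINT [FK_SomeReferencingTable_SpecialOfferProduct]
ALTER TABLE [Sales].[SomeReferencingTable] WITH CHECK ADD CONSTRAINT [FK_SomeReferencingTable_SpecialOfferProduct] FOREIGN KEY ([SpecialOfferID],[ProductID]) REFERENCES [Sales].[SpecialOfferProduct] ([SpecialOfferID],[ProductID]) ON UPDATE NO ACTION ON DELETE NO ACTION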

How to create a case-sensitive instance of SQL Server 2000 and How can I make my SQL queries case sensitive?


1. How to create a case-sensitive instance of SQL Server 2000
To create a case-sensitive instance of SQL Server 2000, follow the steps below when running SQL Server Setup:

1.1 Run SQL Server Setup to install SQL Server 2000 Components, select Install Database Server, and then click Next at the Welcome screen of the SQL Server Installation Wizard.

1.2 In the Computer Name dialog box, Local Computer is the default option and the local computer name appears in the edit box. Click Next.

1.3 In the Installation Selection dialog box, click Create a new instance of SQL Server, or install Client Tools, and then click Next.

1.4 Follow the directions on the User Information and related screens.

1.5 In the Installation Definition dialog box, click Server and Client Tools, and then click Next.

1.6 In the Instance Name dialog box:
To create a case-sensitive default instance, accept the Default check box and click Next.

1.7 To create a case-sensitive named instance, clear the Default check box and type an instance name.
In the Setup Type dialog box, click Custom, and click Next.

1.8 In the Select Components, Services Accounts, and Authentication Mode dialog boxes, change or accept the default settings, and then click Next.

1.9 Security note: when possible, use Windows Authentication.

1.10 In the Collation Settings dialog box, you have two options:
To make a Windows Locale collation case-sensitive, select Collation designator and then select the correct collation designator from the list. Clear the Binary check box, and then select the Case-sensitive check box.

1.11 To make a SQL collation case-sensitive, select SQL Collations, and then select the correct collation name.
For more information about collation options, click Help. When you finish setting the options, click Next.

1.12 In subsequent dialog boxes, change or accept the default settings, and then click Next.

1.13 When you are finished specifying options, click Next in the Start Copying Files dialog box.

1.14 In the Choose Licensing Mode dialog box, make selections according to your license agreement, and click Continue to begin the installation.

1.15 Click Help for information about licensing, or see your system administrator.

2. How can I make my SQL queries case sensitive?

If you installed SQL Server with the default collation options, you might find that the following queries return the same results:

CREATE TABLE mytable
(
mycolumn VARCHAR(10)
)
GO

SET NOCOUNT ON

INSERT mytable VALUES('Case')
GO

SELECT mycolumn FROM mytable WHERE mycolumn='Case'
SELECT mycolumn FROM mytable WHERE mycolumn='caSE'
SELECT mycolumn FROM mytable WHERE mycolumn='case'

You can alter your query by forcing collation at the column level:

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'caSE'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'case'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'Case'

-- if myColumn has an index, you will likely benefit by adding
-- AND myColumn = 'case'

If you want to do this in a more global way, instead of modifying each individual query, you can force the collation at the database level, or at the column level, using the ALTER DATABASE and ALTER TABLE commands, respectively. You can see the current collation level on the properties tab of the database server, through Enterprise Manager (if you're going to change this setting, MAKE NOTE OF THIS VALUE).

And you can see the description for the current database by running the following query:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation')
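
To actually change the database default, the ALTER DATABASE syntax looks like this (MyDatabase is a hypothetical name, and the database must not be in use while the change runs):

ALTER DATABASE MyDatabase
COLLATE Latin1_General_CS_AS

Note that this changes only the database default; existing table columns keep the collation they were created with unless each column is altered individually.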

As changing this setting can impact applications and SQL queries, I would isolate this test first. In SQL Server 2000, you can easily run an ALTER TABLE statement to change the sort order of a specific column, forcing it to be case sensitive. First, execute the following query to determine what you need to change it back to:

EXEC sp_help 'mytable'

The second recordset should contain the following information, in a default scenario:

Column_Name Collation
----------- ----------------------------------------------
mycolumn SQL_Latin1_General_CP1_CI_AS

Whatever the 'Collation' column returns, you now know what you need to change it back to after you make the following change, which will force case sensitivity:

ALTER TABLE mytable
ALTER COLUMN mycolumn VARCHAR(10)
COLLATE Latin1_General_CS_AS
GO

SELECT mycolumn FROM mytable WHERE mycolumn='Case'
SELECT mycolumn FROM mytable WHERE mycolumn='caSE'
SELECT mycolumn FROM mytable WHERE mycolumn='case'

If this screws things up, you can change it back, simply by issuing a new ALTER TABLE statement (be sure to replace my COLLATE identifier with the one you found previously):

ALTER TABLE mytable
ALTER COLUMN mycolumn VARCHAR(10)
COLLATE SQL_Latin1_General_CP1_CI_AS

If you are stuck with SQL Server 7.0, you can try this workaround, which might be a little more of a performance hit (you should only get a result for the FIRST match):

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('Case' AS VARBINARY(10))

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('caSE' AS VARBINARY(10))

SELECT mycolumn FROM mytable WHERE
mycolumn = 'case' AND
CAST(mycolumn AS VARBINARY(10)) = CAST('case' AS VARBINARY(10))

-- if myColumn has an index, you will likely benefit by adding
-- AND myColumn = 'case'

SQL Server script to rebuild all indexes for all tables and all databases

Problem
One of the main functions of a DBA is to maintain database indexes. There have been several tips written about different commands to use for both index rebuilds and index defrags as well as the differences between index maintenance with SQL Server 2000 and SQL Server 2005. In addition, other tips have been written about using maintenance plans to maintain indexes on all databases. One of the issues with maintenance plans is that they don't always seem to be as reliable as you would hope and you also sometimes get false feedback on whether the task actually was successful or not. What other options are there to rebuild indexes on all databases besides using a maintenance plan?

Solution
The one nice thing about maintenance plans is that they work across multiple databases, so you can push out one task to handle the same activity across all of your databases. The problem that I have seen with maintenance plans, though, is that sometimes they do not work as expected, so here is another approach.

The script below allows you to rebuild indexes for all databases and all tables within a database. This could be further tweaked to handle only indexes that need maintenance as well as doing either index defrags or index rebuilds.

The script uses two cursors: one for the databases, and another for the tables within each database. In addition, it uses the INFORMATION_SCHEMA.TABLES view to list all of the tables within a database.

Because we need to change from database to database we also need to create dynamic SQL code for the queries. For the DBCC DBREINDEX option we can just pass in the parameters, but for the ALTER INDEX statement we need to again build the query dynamically.

DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb','distribution')
ORDER BY 1

OPEN DatabaseCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN

SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT table_catalog + ''.'' + table_schema + ''.'' + table_name as tableName
FROM ' + @Database + '.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''

-- create table cursor
EXEC (@cmd)
OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @Table
WHILE @@FETCH_STATUS = 0
BEGIN

-- SQL 2000 command
--DBCC DBREINDEX(@Table,' ',@fillfactor)

-- SQL 2005 command
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)

FETCH NEXT FROM TableCursor INTO @Table
END

CLOSE TableCursor
DEALLOCATE TableCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor

The DBCC DBREINDEX command will work for both SQL 2000 and SQL 2005, but the new syntax that you should use for SQL Server 2005 is the ALTER INDEX command, so based on this you could do the following:

For SQL 2000, uncomment this line:
DBCC DBREINDEX(@Table,' ',@fillfactor)
and comment these lines:
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)

For SQL 2005, comment this line:
DBCC DBREINDEX(@Table,' ',@fillfactor)
and uncomment these lines:
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)
Also, I have excluded the system databases, so you can include these or add other databases to exclude from your index maintenance routines.
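
As mentioned above, the script could also be tweaked to touch only the indexes that actually need maintenance. Below is a minimal sketch of that idea for the current database on SQL Server 2005, using sys.dm_db_index_physical_stats (the 10% and 30% thresholds are common rules of thumb, not fixed requirements):

DECLARE @cmd NVARCHAR(1000)

DECLARE IndexCursor CURSOR FOR
SELECT 'ALTER INDEX [' + i.name + '] ON [' + s.name + '].[' + o.name + '] '
     + CASE WHEN ps.avg_fragmentation_in_percent > 30
            THEN 'REBUILD'
            ELSE 'REORGANIZE'
       END
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
INNER JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
INNER JOIN sys.objects o ON o.object_id = ps.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 10
  AND ps.index_id > 0          -- skip heaps
  AND o.is_ms_shipped = 0      -- skip system objects

OPEN IndexCursor
FETCH NEXT FROM IndexCursor INTO @cmd
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @cmd
    EXEC (@cmd)
    FETCH NEXT FROM IndexCursor INTO @cmd
END
CLOSE IndexCursor
DEALLOCATE IndexCursor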

How to create a log file in asp.net using c#


Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
    'Put user code to initialize the page here
    'Note: this sample is VB.NET and uses the COM Scripting.FileSystemObject via late binding;
    'System.IO.StreamWriter is the more idiomatic .NET alternative.
    Dim fs, fname, path, timestamp

    If Request.ServerVariables("request_method") = "POST" Then

        fs = Server.CreateObject("Scripting.FileSystemObject")

        path = "c:\" 'set your logfile directory path here
        timestamp = Day(Date.Now) & "-" & Month(Date.Now) & "-" & Year(Date.Now)
        timestamp = timestamp & "--" & Hour(Now) & "-" & Minute(Now) & "-" & Second(Now)

        'Create one log file per order, named with the timestamp and order id
        fname = fs.CreateTextFile(path & timestamp & "-" & Request.Form("oid") & ".log", True)

        'Write the posted form values into the log file
        fname.WriteLine("OrderID - " & Request.Form("oid"))
        fname.WriteLine("Transaction Status - " & Request.Form("transactionstatus"))
        fname.WriteLine("Total - " & Request.Form("total"))
        fname.WriteLine("ClientID - " & Request.Form("clientid"))
        fname.WriteLine("Transaction Time Stamp - " & Request.Form("datetime"))
        fname.Close()

        fname = Nothing
        fs = Nothing
    End If
End Sub

Read Form Controls Either in Server Side or Client Side


1. To read form controls from the server side, we use the below-mentioned code:

Request.Form.Count -> returns the number of posted form fields

for (int i = 0; i < Request.Form.Count; i++)
{
    // write out each posted field name and its value
    Response.Write(Request.Form.GetKey(i) + ": " + Request.Form.Get(i) + "<br />");
}


2. To read form controls from the client side, we use the below-mentioned code:

var iChk = 0; // number of checked checkboxes
for (var i = 0; i < document.forms[0].elements.length; i++)
{
    var e = document.forms[0].elements[i];
    if (e.type == "checkbox")
    {
        if (e.checked)
        {
            iChk++;
        }
    }
}

Avoid Index Redundancy in SQL Server Tables

Problem
I'm trying to design indexes on a new Customer table and I have three separate query searches to satisfy. One query search is by lastname. The 2nd query search is by last name and first name. The last query search is by lastname and an active flag. How should I create my indexes?

Solution
I've come across more instances than I'd like of data models where the original database developer would solve this problem by creating three separate indexes as follows:

create index ix_customer_lastname on dbo.Customer(lastname)
GO
create index ix_customer_lastname_firstname on dbo.Customer(lastname, firstname)
GO
create index ix_customer_lastname_activesw on dbo.Customer(lastname, activesw)
GO

The reality is, in this case you only need one!

The following queries

select * from dbo.Customer where lastname = 'Washington'
GO
select * from dbo.Customer where lastname = 'Washington' and activesw = 1
GO
select * from dbo.Customer where firstname = 'George' and lastname = 'Washington'
GO
select * from dbo.Customer
where firstname = 'George' and lastname = 'Washington' and activesw = 1
GO

can all be satisfied using the following single index

create index ix_customer_lastname on dbo.Customer(lastname, firstname, activesw)
GO


The lead column of each index is lastname, and this is the most important one. SQL Server keeps frequency distribution statistics in the form of a histogram on the lead column of every index. It also keeps density (selectivity) information for the additional index column combinations.
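
You can inspect that histogram and the density information for the composite index defined above yourself, for example:

-- Returns the statistics header, the density vector for the leading column
-- combinations, and the histogram built on the lead column (lastname)
DBCC SHOW_STATISTICS ('dbo.Customer', ix_customer_lastname)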

Having redundant indexes in your database wastes SQL Server system resources because the database engine has to maintain more data structures than it needs to. Consider a new Customer being added to the database. The new last name inserted would have to be accounted for in the logical ordering of each index resulting in unnecessary overhead to maintain each of these indexes. Furthermore, redundant indexes waste disk space.

Let's create a Customer table and some indexes to illustrate these concepts. This script will create the table and 10,000 customer rows.

set nocount on
go
create table dbo.Customer (customerid int identity primary key,
firstname char(40), lastname char(40),
address1 char(500), city char(40), state char(20), activesw bit)

declare @counter int, @id varchar(4)
select @counter = 1
while (@counter <= 10000)
begin
select @id = cast(@counter as varchar)
insert into customer(firstname, lastname, address1, city, activesw)
select 'f'+@id, 'l'+@id, 'a'+@id, 'c'+@id, 1

select @counter = @counter + 1
end

create index ix_customer_lastname on dbo.Customer(lastname)
create index ix_customer_lastname_firstname on dbo.Customer(lastname, firstname)
create index ix_customer_lastname_activesw on dbo.Customer(lastname, activesw)
go


Now let's examine the space used by the table's indexes
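
This can be done with sp_spaceused, which reports the data and index sizes for the table:

EXEC sp_spaceused 'dbo.Customer'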

From the index_size column we can see how much disk space the three indexes take up


Now let's run the following queries and examine the optimizer's index selections

set showplan_text on
go
select customerid from dbo.Customer where lastname = 'l22'
select customerid from dbo.Customer where lastname = 'l22' and activesw = 1
select customerid from dbo.Customer where firstname = 'f22' and lastname = 'l22'
select customerid from dbo.Customer where firstname = 'f22' and lastname = 'l22' and activesw = 1

Now these are interesting. The first and second queries used index ix_customer_lastname_activesw. The third query used index ix_customer_lastname_firstname. For the last query, the optimizer decided to use ix_customer_lastname_firstname as well and decided to do a lookup to get the activesw value. If you haven't noticed yet, index ix_customer_lastname was not used to satisfy any of these queries. It's just sitting around, literally taking up space.

Now let's drop these indexes and add a single index to satisfy the same four queries

drop index dbo.Customer.ix_customer_lastname
drop index dbo.Customer.ix_customer_lastname_firstname
drop index dbo.Customer.ix_customer_lastname_activesw

create index ix_customer_lastname on dbo.Customer(lastname, firstname, activesw)


If we now re-run sp_spaceused, we'll see that the index space used by the table has dropped to less than half of what it was

Now let's re-run the queries we ran earlier and examine the new execution plans

As we now see, the single index satisfies all four queries. In addition, the last query that performed a lookup for the activesw now reads it directly from the index and does not resort to searching the clustered table for the value.

By carefully defining your indexes you can save the database engine some work, save some space on your disks, and still have high performing queries.

Understanding SQL Server Indexing (clustered and nonclustered)

Problem
With so many aspects of SQL Server to cover and to write about, some of the basic principles are often overlooked. Several people have asked questions about indexing along with a general overview of the differences between clustered and non-clustered indexes. Based on the number of questions that we have received, this tip will discuss the differences between these indexes and some general guidelines around indexing.

Solution
From a simple standpoint, SQL Server offers two types of indexes: clustered and non-clustered. In its simplest definition, a clustered index is an index that stores the actual data and a non-clustered index is just a pointer to the data. A table can only have one clustered index and up to 249 non-clustered indexes. If a table does not have a clustered index it is referred to as a heap. So what does this actually mean?

To further clarify this, let's take a look at what indexes do and why they are important. The primary reason indexes are built is to provide faster data access to the specific data your query is trying to retrieve. This could be either a clustered or non-clustered index. Without an index, SQL Server would need to read through all of the data in order to find the rows that satisfy the query. If you have ever looked at a query plan, the difference would be an Index Seek vs. a Table Scan, as well as some other operations depending on the data selected.

Here are some examples of queries that were run. These were run against table dbo.contact, which has about 20,000 rows of data. Each of these queries was run with no index, as well as with a clustered index and with a non-clustered index. To show the impact, a graphical query plan has been provided for each. This can be created by highlighting the query and pressing Control-L (Ctrl-L) in the query window.
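
The tip does not reproduce the table definition, but the test query and indexes were of the following general form (the dbo.contact table and its lastname column come from the text; the index names are assumed for illustration):

-- Search used in the examples below
SELECT * FROM dbo.contact WHERE lastname = 'Adams'

-- Examples 2 and 5: non-clustered index on the search column
CREATE NONCLUSTERED INDEX IX_contact_lastname ON dbo.contact (lastname)

-- Example 3: clustered index on the search column instead
-- (a table can have only one clustered index)
-- CREATE CLUSTERED INDEX CIX_contact_lastname ON dbo.contact (lastname)

-- Example 4: covering query - only the indexed column is returned
SELECT lastname FROM dbo.contact WHERE lastname = 'Adams'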


1 - Table with no indexes
When the query runs, since there are no indexes, SQL Server does a Table Scan against the table to look through every row to determine if any of the records have a lastname of "Adams". This query has an Estimated Subtree Cost of 0.437103. This is the cost to SQL Server to execute the query; the lower the number, the less resource intensive the query is for SQL Server.

2- Table with non-clustered index on lastname column
When this query runs, SQL Server uses the index to do an Index Seek and then it needs to do a RID Lookup to get the actual data. You can see from the Estimated Subtree Cost of 0.263888 that this is faster than the above query.

3- Table with clustered index on lastname column
When this query runs, SQL Server does an Index Seek and since the index points to the actual data pages, the Estimated Subtree Cost is only 0.0044572. This is by far the fastest access method for this type of query.

4- Table with non-clustered index on lastname column
In this query we are only requesting the lastname column. Since this query can be satisfied by the non-clustered index alone (a covering index), SQL Server does not need to access the actual data pages. Based on this, the Estimated Subtree Cost is only 0.0033766. As you can see, this is even better than example #3.

To take this a step further, the below output is based on having a clustered index on lastname and no non-clustered index. You can see that the subtree cost is still the same as returning all of the columns even though we are only selecting one column. So the non-clustered index performs better.

5- Table with clustered index on contactId and non-clustered on lastname column
For this query we now have two indexes: a clustered and a non-clustered. The query that is run is the same as in example 2. From this output you can see that the RID Lookup has been replaced with a Clustered Index Seek. Overall it is the same type of operation, except using the Clustered Index. The subtree cost is 0.264017. This is a little better than example 2.

So based on these examples you can see the benefits of using indexes. This example table only had 20,000 rows of data, which is quite small compared to most database tables, so you can imagine the impact this would have on very large tables. The first idea that might come to mind is to use all clustered indexes, but because a clustered index is where the actual data is stored, a table can only have one. The second thought may be to index every column. Although this may be helpful when querying the data, there is also the overhead of maintaining all of these indexes every time you do an INSERT, UPDATE, or DELETE.

Wednesday, September 10, 2008

Advanced SQL Injection In SQL Server Applications

This document discusses in detail the common 'SQL injection' technique, as it applies to the popular Microsoft Internet Information Server/Active Server Pages/SQL Server platform. It discusses the various ways in which SQL can be 'injected' into the application and addresses some of the data validation and database lockdown issues that are related to this class of attack.
The paper is intended to be read by both developers of web applications which communicate with databases and by security professionals whose role includes auditing these web applications.

[Introduction]

Structured Query Language ('SQL') is a textual language used to interact with relational databases. There are many varieties of SQL; most dialects that are in common use at the moment are loosely based around SQL-92, the most recent ANSI standard. The typical unit of execution of SQL is the 'query', which is a collection of statements that typically return a single 'result set'. SQL statements can modify the structure of databases (using Data Definition Language statements, or 'DDL') and manipulate the contents of databases (using Data Manipulation Language statements, or 'DML'). In this paper, we will be specifically discussing Transact-SQL, the dialect of SQL used by Microsoft SQL Server.
SQL Injection occurs when an attacker is able to insert a series of SQL statements into a 'query' by manipulating data input into an application.
A typical SQL statement looks like this:
select id, forename, surname from authors
This statement will retrieve the 'id', 'forename' and 'surname' columns from the 'authors' table, returning all rows in the table. The 'result set' could be restricted to a specific 'author' like this:
select id, forename, surname from authors where forename = 'john' and surname = 'smith'
An important point to note here is that the string literals 'john' and 'smith' are delimited with single quotes. Presuming that the 'forename' and 'surname' fields are being gathered from user-supplied input, an attacker might be able to 'inject' some SQL into this query, by inputting values into the application like this:
Forename: jo'hn
Surname: smith
The 'query string' becomes this:
select id, forename, surname from authors where forename = 'jo'hn' and surname = 'smith'
When the database attempts to run this query, it is likely to return an error:
Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near 'hn'.
The reason for this is that the insertion of the 'single quote' character 'breaks out' of the single-quote delimited data. The database then tried to execute 'hn' and failed. If the attacker specified input like this:
Forename: jo'; drop table authors--
Surname:
…the authors table would be deleted, for reasons that we will go into later.


[Stored Procedures]

Traditional wisdom holds that if an ASP application uses stored procedures in the database, SQL injection is not possible. This is a half-truth, and it depends on the manner in which the stored procedure is called from the ASP script.
Essentially, if a parameterised query is run, and the user-supplied parameters are passed safely to the query, then SQL injection is typically impossible. However, if the attacker can exert any influence over the non-data parts of the query string that is run, it is likely that they will be able to control the database.
Good general rules are:
• If the ASP script creates a SQL query string that is submitted to the server, it is vulnerable to SQL injection, *even if* it uses stored procedures
• If the ASP script uses a procedure object that wraps the assignment of parameters to a stored procedure (such as the ADO command object, used with the Parameters collection) then it is generally safe, though this depends on the object's implementation.
Obviously, best practice is still to validate all user supplied input, since new attack techniques are being discovered all the time.
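
As a T-SQL illustration of the same principle, sp_executesql lets the user-supplied value travel as a typed parameter rather than as part of the statement text (a minimal sketch; the authors table comes from the earlier example and @surname stands in for the user input):

DECLARE @surname NVARCHAR(50)
SET @surname = N'jo''; drop table authors--'  -- hostile input, but it is only ever treated as data

-- The value is bound to @surname; it is never concatenated into the SQL text
EXEC sp_executesql
    N'select id, forename, surname from authors where surname = @surname',
    N'@surname nvarchar(50)',
    @surname = @surname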
To illustrate the stored procedure query injection point, execute the following SQL string:
sp_who '1' select * from sysobjects
or
sp_who '1'; select * from sysobjects

Tuesday, September 9, 2008

Importing text files into tables, and Creating Text Files using BCP

[Importing text files into tables]

Using the 'bulk insert' statement, it is possible to insert a text file into a temporary table. Simply create the table like this:

create table foo( line varchar(8000) )

…and then run a bulk insert to insert the data from the file, like this:

bulk insert foo from 'c:\inetpub\wwwroot\process_login.asp'

…the data can then be retrieved using any of the above error message techniques, or by a 'union' select, combining the data in the text file with the data that is normally returned by the application. This is useful for obtaining the source code of scripts stored on the database server, or possibly the source of ASP scripts.

[Creating Text Files using BCP]

It is fairly easy to create arbitrary text files using the 'opposite' technique to the 'bulk insert'. Unfortunately, this requires a command line tool, 'bcp', the 'bulk copy program'.

Since bcp accesses the database from outside the SQL Server process, it requires a login. This is typically not difficult to obtain, since the attacker can probably create one anyway, or take advantage of 'integrated' security mode, if the server is configured to use it.
The command line format is as follows:

bcp "SELECT * FROM test..foo" queryout c:\inetpub\wwwroot\runcommand.asp -c -Slocalhost -Usa -Pfoobar

The 'S' parameter is the server on which to run the query, the 'U' is the username and the 'P' is the password, in this case 'foobar'.

Thursday, September 4, 2008

Identifying Queries Running Slower Than Normal with SQL Server 2005

Introduction
As part of your normal monitoring it makes sense to ensure that the runtime duration of your queries does not deviate significantly from the expected norm. Identifying these queries will allow you to target your efforts and correct any performance problems in a timely manner.

A query can start to run slower for many reasons, including: an increased number of users (resulting in more waiting on resources), increased volume of data to process, too much context switching for parallel queries, and slower hardware.

When a query is run, its query plan is cached to improve subsequent runs of the query. Many interesting details relating to the query are recorded, and these details can be accessed via SQL Server’s Dynamic Management Views (DMVs). In particular, the DMV sys.dm_exec_query_stats contains performance statistics for cached query plans. This DMV has everything we need to determine if a query is running slower than normal.

Details of the sys.dm_exec_query_stats columns relevant to us are described in figure 1 (edited and taken from SQL Server 2005 Books Online).

Column name Data type Description
statement_start_offset int Indicates, in bytes, beginning with 0, the starting position of the query that the row describes within the text of its batch or persisted object.
statement_end_offset int Indicates, in bytes, starting with 0, the ending position of the query that the row describes within the text of its batch or persisted object. A value of -1 indicates the end of the batch.
plan_handle varbinary(64) A token that refers to the compiled plan that the query is part of. This value can be passed to the sys.dm_exec_query_plan dynamic management function to obtain the query plan.
execution_count bigint Number of times that the plan has been executed since it was last compiled.
total_worker_time bigint Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
last_worker_time bigint CPU time, in microseconds, that was consumed the last time the plan was executed.
total_physical_reads bigint Total number of physical reads performed by executions of this plan since it was compiled.
last_physical_reads bigint Number of physical reads performed the last time the plan was executed.
total_logical_writes bigint Total number of logical writes performed by executions of this plan since it was compiled.
last_logical_writes bigint Number of logical writes performed the last time the plan was executed.
total_logical_reads bigint Total number of logical reads performed by executions of this plan since it was compiled.
last_logical_reads bigint Number of logical reads performed the last time the plan was executed.
total_elapsed_time bigint Total elapsed time, in microseconds, for completed executions of this plan.
last_elapsed_time bigint Elapsed time, in microseconds, for the most recently completed execution of this plan.

Figure 1 shows the sys.dm_exec_query_stats fields used by the Query Time Delta utility

The _elapsed_time columns record how long the query took, taking into account any waiting on resources.

The _worker_time columns record how long the query took, ignoring any waiting on resources, instead it is concerned only with the time spent using the CPU.

In this article, we will look only at queries that are slower based on the _worker_time columns. This will allow us to ignore many of the variable factors that affect query duration, allowing us instead to create and discuss a simplified utility to quickly determine if a query is running slower than normal, based on the amount of CPU it has used.

Query Time Delta Utility
To determine if a query is running slower than normal, we need to calculate the average duration of the query and compare it to its last run value, adjusted for the amount of data it has processed.

The number of runs (execution_count) and the total worker time (total_worker_time) both include data relating to the last run, thus in order to create a more representative average, the number of runs is reduced by 1, and the total worker time has the last run time subtracted from it.

The average duration of the query is calculated by subtracting the last_worker_time from the total_worker_time and dividing the result by execution_count – 1.

The algorithm used to determine if a query is running slower than average is:

%Slower = (LAST – AVG) x 100 / AVG where LAST represents the duration of last query, and AVG represents the average duration of the query

For example, if the average query duration (AVG) is 40ms, and the last query duration (LAST) is 160ms, then %Slower = (160 – 40) x 100 / 40 = 300%

The duration of a query is affected by the amount of data it processes; this volume of data is reflected in the various IO columns of sys.dm_exec_query_stats. This utility calculates the slowness of a query (Impedance) by combining the duration of the query with the amount of IO performed. Again, the ‘total’ IO values include the last run values, and thus need to be adjusted.

To see the SQL text of the individual query statement together with its parent query, we call the Dynamic Management Function (DMF) sys.dm_exec_sql_text, passing it the query’s plan_handle as the function parameter. We can extract the individual query using the statement start and end offset values that are part of the DMV sys.dm_exec_query_stats. The DMF sys.dm_exec_sql_text also contains the id of the database the query relates to, and this is shown as well. Note that the WHERE clause only selects queries that have been executed at least twice.

For maintainability, the calculation of “% IO deviation” is separated from the main query. The data is sorted to show the TOP 100 slowest queries by Impedance, which is a reflection of time deviation taking IO deviation into account.

Running the utility on my SQL Server gives the results given in Figure 2.



Figure 2 Output from the Query Time Delta utility.

The results show the individual statements, within a stored procedure or batch, that are running slower than normal. Ad hoc or prepared queries are not associated with a database, hence the database name column is set to null.

Discussion
The utility described in this article allows you to quickly identify which queries are running slower than expected, after taking into account the volume of data processed.

Since this utility concerns itself with _worker_time rather than _elapsed_time, we can ignore many of the reasons why the query might be running slower (e.g. concurrency and blocking), and concentrate on why the query is using more CPU (i.e. worker_time) to fulfill its needs.

Perhaps the most common reason why a query starts to run slower (often suddenly) relates to “parameter sniffing”. Here the query plan is optimised based on the value of the parameters when the query is first used. You can imagine that a parameter that retrieves only a few rows would produce a different plan than a parameter that would retrieve many rows.

Various solutions exist to create cached plans that are more optimal for the passed parameters, including:

1. Execute the stored procedure with the recompile option; this does not replace the query’s current cached plan.

e.g. EXEC sprocName WITH RECOMPILE

2. Add WITH RECOMPILE to the body of the stored procedure signature; this causes the stored procedure to be recompiled on every use.

e.g. CREATE PROC sprocName WITH RECOMPILE

3. Use the optimizer hint RECOMPILE on an individual query within a stored procedure.

e.g. SELECT col3 FROM table1 WHERE col3 = @param1

OPTION (RECOMPILE)

4. Use the optimizer hint OPTIMIZE FOR. This allows you to create a plan based on more typical parameter values.

e.g. SELECT col3 FROM table1 WHERE col3 = @param1

OPTION (OPTIMIZE FOR (@param1=71))

5. Use Plan Guides; these provide a degree of stability by allowing you to use an existing optimised query plan.

Each of these proposed solutions has advantages and disadvantages; investigating them further is left as an exercise for the reader.

Further work
If you’re interested in the effect of changing concurrency patterns, blocking, waits, etc., you might want to look at elapsed time rather than worker time. You can see the effects of these by replacing the _worker_time columns with the _elapsed_time columns in the utility script.

Various changes (e.g. adding indexes, updating statistics) can cause a query to be recompiled, resulting in a new cached plan, thus making comparisons of a query over time difficult. In this case, you should keep a history for comparison purposes. Keeping a history is also useful because DMV data is transient, often being cleared due to server restarts or memory pressure.
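
A minimal sketch of keeping such a history is to snapshot the DMV into a permanent table on a schedule (the table name and schedule are your own choices; this is illustrative only):

-- One-off: create the history table (hypothetical name)
CREATE TABLE dbo.QueryStatsHistory
(
    capture_time DATETIME NOT NULL DEFAULT GETDATE(),
    sql_handle VARBINARY(64),
    plan_handle VARBINARY(64),
    statement_start_offset INT,
    statement_end_offset INT,
    execution_count BIGINT,
    total_worker_time BIGINT,
    last_worker_time BIGINT,
    total_logical_reads BIGINT,
    total_logical_writes BIGINT,
    total_physical_reads BIGINT
)

-- Run periodically (e.g. from a SQL Agent job) to capture a snapshot
INSERT INTO dbo.QueryStatsHistory
    (sql_handle, plan_handle, statement_start_offset, statement_end_offset,
     execution_count, total_worker_time, last_worker_time,
     total_logical_reads, total_logical_writes, total_physical_reads)
SELECT qs.sql_handle, qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset,
       qs.execution_count, qs.total_worker_time, qs.last_worker_time,
       qs.total_logical_reads, qs.total_logical_writes, qs.total_physical_reads
FROM sys.dm_exec_query_stats qs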

Conclusion
The utility described in this article will help identify easily and quickly those queries that have taken longer to run than normal, and should prove valuable in the everyday work of the SQL Server DBA/developer.

Credits
Ian Stirk has been working in IT as a developer, designer, and architect since 1987. He holds the following qualifications: M.Sc., MCSD.NET, MCDBA, and SCJP. He is a freelance consultant working with Microsoft technologies in London England. He can be contacted at Ian_Stirk@yahoo.com.

Code:
CREATE PROC [dbo].[dba_QueryTimeDelta]
AS
/*----------------------------------------------------------------------

Purpose: Identify queries that are running slower than normal
, when taking into account IO volumes.
------------------------------------------------------------------------

Parameters: None.

Revision History:
13/01/2008 Ian_Stirk@yahoo.com Initial version

Example Usage:
1. exec YourServerName.master.dbo.dba_QueryTimeDelta

----------------------------------------------------------------------*/

BEGIN

-- Do not lock anything, and do not get held up by any locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- Identify queries running slower than normal.
SELECT TOP 100
[Runs] = qs.execution_count
--, [Total time] = qs.total_worker_time - qs.last_worker_time
, [Avg time] = (qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)
, [Last time] = qs.last_worker_time
, [Time Deviation] = (qs.last_worker_time - ((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)))
, [% Time Deviation] =
CASE WHEN qs.last_worker_time = 0
THEN 100
ELSE (qs.last_worker_time - ((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1))) * 100
END
/ (((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)))
, [Last IO] = last_logical_reads + last_logical_writes + last_physical_reads
, [Avg IO] = ((total_logical_reads + total_logical_writes + total_physical_reads)
- (last_logical_reads + last_logical_writes + last_physical_reads))
/ (qs.execution_count - 1)
, [Individual Query] = SUBSTRING (qt.text,qs.statement_start_offset/2,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)
, [Parent Query] = qt.text
, [DatabaseName] = DB_NAME(qt.dbid)
INTO #SlowQueries
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) qt
WHERE qs.execution_count > 1
ORDER BY [% Time Deviation] DESC

-- Calculate the [IO Deviation] and [% IO Deviation].
-- Negative values means we did less I/O than average.
SELECT TOP 100 [Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [IO Deviation] = [Last IO] - [Avg IO]
, [% IO Deviation] =
CASE WHEN [Avg IO] = 0
THEN 0
ELSE ([Last IO]- [Avg IO]) * 100 / [Avg IO]
END
, [Individual Query]
, [Parent Query]
, [DatabaseName]
INTO #SlowQueriesByIO
FROM #SlowQueries
ORDER BY [% Time Deviation] DESC

-- Extract items where [% Time deviation] less [% IO deviation] is 'large'
-- These queries are slow running, even when we take into account IO deviation.
SELECT TOP 100 [Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [IO Deviation]
, [% IO Deviation]
, [Impedance] = [% Time Deviation] - [% IO Deviation]
, [Individual Query]
, [Parent Query]
, [DatabaseName]
FROM #SlowQueriesByIO
WHERE [% Time Deviation] - [% IO Deviation] > 20
ORDER BY [Impedance] DESC

-- Tidy up.
DROP TABLE #SlowQueries
DROP TABLE #SlowQueriesByIO
END

Identifying Queries Running Slower Than Normal with SQL Server 2005

Introduction
As part of your normal monitoring it makes sense to ensure the runtime duration your queries does not deviate significantly from the expected norm. Identifying these queries will allow you to target your efforts and correct any performance problems in a timely manner.

A query can start to run slower for many reasons, including: an increased number of users (resulting in more waiting on resources), increased volume of data to process, too much context switching for parallel queries, and slower hardware.

When a query is run, its query plan is cached to improve subsequent run of the query. Many interesting details relating to the query are recorded, these details can be accessed via SQL Server’s Dynamic Management Views (DMVs). In particular the DMV sys.dm_exec_query_stats contains performance statistics for cached query plans. This DMV has everything we need to determine if a query is running slower than normal.

Details of the sys.dm_exec_query_stats columns relevant to us, are described in figure 1 (edited and taken from SQL Server 2005 Books online).

Column name Data type Description
statement_start_offset int Indicates, in bytes, beginning with 0, the starting position of the query that the row describes within the text of its batch or persisted object.
statement_end_offset int Indicates, in bytes, starting with 0, the ending position of the query that the row describes within the text of its batch or persisted object. A value of -1 indicates the end of the batch.
plan_handle varbinary(64) A token that refers to the compiled plan that the query is part of. This value can be passed to the sys.dm_exec_query_plan dynamic management function to obtain the query plan.
execution_count bigint Number of times that the plan has been executed since it was last compiled.
total_worker_time bigint Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
last_worker_time bigint CPU time, in microseconds, that was consumed the last time the plan was executed.
total_physical_reads bigint Total number of physical reads performed by executions of this plan since it was compiled.
last_physical_reads bigint Number of physical reads performed the last time the plan was executed.
total_logical_writes bigint Total number of logical writes performed by executions of this plan since it was compiled.
last_logical_writes bigint Number of logical writes performed the last time the plan was executed.
total_logical_reads bigint Total number of logical reads performed by executions of this plan since it was compiled.
last_logical_reads bigint Number of logical reads performed the last time the plan was executed.
total_elapsed_time bigint Total elapsed time, in microseconds, for completed executions of this plan.
last_elapsed_time bigint Elapsed time, in microseconds, for the most recently completed execution of this plan.

Figure 1 shows the sys.dm_exec_query_stats fields used by the Query Time Delta utility

The _elapsed_time columns record how long the query took, taking into account any waiting on resources.

The _worker_time columns record how long the query took, ignoring any waiting on resources, instead it is concerned only with the time spent using the CPU.

In this article, we will look only at queries that are slower based on the _worker_time columns. This will allow us to ignore many of the variable factors that affect query duration, allowing us instead to create and discuss a simplified utility to quickly determine if a query is running slower than normal, based on the amount of CPU it has used.

Query Time Delta Utility
To determine if a query is running slower than normal, we need to calculate the average duration of the query and compare it to its last run value, adjusted for the amount of data it has processed.

The number of runs (execution_count) and the total worker time (total_worker_time) both include data relating to the last run, thus in order to create a more representative average, the number of runs is reduced by 1, and the total worker time has the last run time subtracted from it.

The average duration of the query is calculated by subtracting the last_worker_time from the total_worker_time and dividing the result by execution_count – 1.

The algorithm used to determine if a query is running slower than average is:

%Slower = (LAST – AVG) x 100 / AVG where LAST represents the duration of last query, and AVG represents the average duration of the query

For example, if the average query duration (AVG) is 40ms, and the last query duration (LAST) is 160ms, then %Slower = (160 – 40) x 100 / 40 = 300%

The duration of a query is affected by the amount of data it processes, this volume of data is reflected in the various IO columns of sys.dm_exec_query_stats. This utility calculates the slowness of a query (Impedance) by combining the duration of the query with the amount of IO performed. Again the ‘total’ IO values include the last run values, and thus need to be adjusted.

To see the SQL text of the individual query statement together with its parent query we call the Dynamic Management Function (DMF) sys.dm_exec_sql_textpassing it the query’s plan_handle as the function parameter. We can extract the individual query using the statement start and end offset values that are part of the DMV sys.dm_exec_query_stats.The DMF sys.dm_exec_sql_textcontains the id of the database the query related to, this is also shown. Note the WHERE clause only selects queries that have been executed at least twice.

For maintainability, the calculation of “% IO deviation” is separated from the main query. The data is sorted to show the TOP 100 slowest queries, by Impedance, this is a reflection of time deviation taking into account IO deviation.

Running the utility on my SQL Server gives the results given in Figure 2.



Figure 2 Output from the Query Time Delta utility.

The results show the individual statements, within a stored procedure or batch, that are running slower than normal. Adhoc or prepared queries are not associated with a database, hence the database name column is set to null.

Discussion
The utility described in this article allows you to quickly identify which queries are running slower than expected, after taking into account the volume of data processed.

Since this utility concerns itself with _worker_time rather than _elapsed_time, we can ignore many of the reasons why the query might be running slower (e.g. concurrency and blocking), and concentrate on why the query is using more CPU (i.e. worker_time) to fulfill its needs.

Perhaps the most common reason why a query starts to run slower (often suddenly) relates to “parameters sniffing”. Here the query plan is optimised based on the value of the parameters when the query is first used. You can imagine a parameter that retrieves only a few rows would produce a different plan than a parameter that would retrieve many rows.

Various solutions exist to create cached plans that are more optimal for the passed parameters, including:

1. Execute the stored procedure with the recompile option, this does not replace the query’s current cached plan.

e.g. EXEC sprocName WITH RECOMPILE

2. Add WITH RECOMPILE to the body of the stored procedure signature, this causes the stored procedure to be recompiled on every usage.

e.g. CREATE PROC sprocName WITH RECOMPILE

3. Use the optimizer hint RECOMPILE on an individual query within a stored procedure.

e.g. SELECT col3 FROM table1 WHERE col3 = @param1

OPTION (RECOMPILE)

4. Use the optimizer hint OPTIMIZE FOR. This allows you to create a plan based on more typical parameter values.

e.g. SELECT col3 FROM table1 WHERE col3 = @param1
OPTION (OPTIMIZE FOR (@param1=71))

5. Use Plan Guides; these provide a degree of stability by allowing you to use an existing optimised query plan (see the sketch after this list).

Each of these proposed solutions has advantages and disadvantages; investigating them further is left as an exercise for the reader.
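
As an illustration of option 5, here is a minimal sketch of a plan guide that attaches the OPTIMIZE FOR hint from option 4 to a statement inside a stored procedure (SQL Server 2005 or later; the procedure name dbo.sprocName and the statement text are the hypothetical ones used in the examples above):

EXEC sp_create_plan_guide
    @name = N'Guide_sprocName_col3',
    @stmt = N'SELECT col3 FROM table1 WHERE col3 = @param1', -- must match the statement text in the procedure exactly
    @type = N'OBJECT',
    @module_or_batch = N'dbo.sprocName',
    @params = NULL,
    @hints = N'OPTION (OPTIMIZE FOR (@param1 = 71))'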

Further work
If you're interested in the effect of changing concurrency patterns, blocking, waits, etc., you might want to look at elapsed time rather than worker time. You can see the effects of these by replacing the _worker_time columns with the _elapsed_time columns in the utility script.
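
For example, a minimal sketch of the substitution in the utility's column list (sys.dm_exec_query_stats exposes matching total_elapsed_time and last_elapsed_time columns):

-- Elapsed-time version of the average and last-run columns.
, [Avg time] = (qs.total_elapsed_time - qs.last_elapsed_time) / (qs.execution_count - 1)
, [Last time] = qs.last_elapsed_time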

Various changes (e.g. adding indexes, updating statistics) can cause a query to be recompiled, resulting in a new cached plan and making comparisons of a query over time difficult. In this case, you should keep a history for comparison purposes. Keeping a history is also useful because DMV data is transient, often being cleared by server restarts or memory pressure.
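
One simple way to keep such a history is to snapshot the DMV into a permanent table on a schedule; a minimal sketch (the table name dbo.QueryStatsHistory is illustrative):

-- First run: create the history table from the DMV.
SELECT GETDATE() AS capture_time, qs.*
INTO dbo.QueryStatsHistory
FROM sys.dm_exec_query_stats qs

-- Subsequent runs: append a new snapshot.
INSERT INTO dbo.QueryStatsHistory
SELECT GETDATE(), qs.*
FROM sys.dm_exec_query_stats qs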

Conclusion
The utility described in this article will help you quickly and easily identify those queries that have taken longer to run than normal, and should prove valuable in the everyday work of the SQL Server DBA/developer.

Credits
Ian Stirk has been working in IT as a developer, designer, and architect since 1987. He holds the following qualifications: M.Sc., MCSD.NET, MCDBA, and SCJP. He is a freelance consultant working with Microsoft technologies in London England. He can be contacted at Ian_Stirk@yahoo.com.

Code:
CREATE PROC [dbo].[dba_QueryTimeDelta]
AS
/*----------------------------------------------------------------------

Purpose: Identify queries that are running slower than normal
, when taking into account IO volumes.
------------------------------------------------------------------------

Parameters: None.

Revision History:
13/01/2008 Ian_Stirk@yahoo.com Initial version

Example Usage:
1. exec YourServerName.master.dbo.dba_QueryTimeDelta

----------------------------------------------------------------------*/

BEGIN

-- Do not lock anything, and do not get held up by any locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- Identify queries running slower than normal.
SELECT TOP 100
[Runs] = qs.execution_count
--, [Total time] = qs.total_worker_time - qs.last_worker_time
, [Avg time] = (qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)
, [Last time] = qs.last_worker_time
, [Time Deviation] = (qs.last_worker_time - ((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)))
, [% Time Deviation] =
CASE WHEN qs.last_worker_time = 0
THEN 100
ELSE (qs.last_worker_time - ((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1))) * 100
END
/ (((qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)))
, [Last IO] = last_logical_reads + last_logical_writes + last_physical_reads
, [Avg IO] = ((total_logical_reads + total_logical_writes + total_physical_reads)
- (last_logical_reads + last_logical_writes + last_physical_reads))
/ (qs.execution_count - 1)
-- Offsets are zero-based byte positions; add 1 to convert them for the one-based SUBSTRING function.
, [Individual Query] = SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1)
, [Parent Query] = qt.text
, [DatabaseName] = DB_NAME(qt.dbid)
INTO #SlowQueries
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) qt
WHERE qs.execution_count > 1
ORDER BY [% Time Deviation] DESC

-- Calculate the [IO Deviation] and [% IO Deviation].
-- Negative values mean we did less I/O than average.
SELECT TOP 100 [Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [IO Deviation] = [Last IO] - [Avg IO]
, [% IO Deviation] =
CASE WHEN [Avg IO] = 0
THEN 0
ELSE ([Last IO]- [Avg IO]) * 100 / [Avg IO]
END
, [Individual Query]
, [Parent Query]
, [DatabaseName]
INTO #SlowQueriesByIO
FROM #SlowQueries
ORDER BY [% Time Deviation] DESC

-- Extract items where [% Time deviation] less [% IO deviation] is 'large'
-- These queries are slow running, even when we take into account IO deviation.
SELECT TOP 100 [Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [IO Deviation]
, [% IO Deviation]
, [Impedance] = [% Time Deviation] - [% IO Deviation]
, [Individual Query]
, [Parent Query]
, [DatabaseName]
FROM #SlowQueriesByIO
WHERE [% Time Deviation] - [% IO Deviation] > 20
ORDER BY [Impedance] DESC

-- Tidy up.
DROP TABLE #SlowQueries
DROP TABLE #SlowQueriesByIO
END

Monday, September 1, 2008

Cross Tab queries, Converting Rows to Columns

Hi all,

While browsing the net I found a really good article by Jeff Moden on Cross Tab queries, converting rows to columns. Hats off to Jeff for this beautiful article. I am reproducing the parts of it that I found most helpful here for my future reference, and I recommend you read the complete series here.

Cross Tab queries, Converting Rows to Columns
Sometimes it is necessary to rotate results so that [the data in] columns are presented horizontally and [the data in] rows are presented vertically. This is known as creating a cross-tab report, or rotating data.

In other words, you can use a Cross Tab or Pivot to convert or transpose information from rows to columns.
A simple introduction to Cross Tabs:

The Cross Tab Report example from Books Online is very simple and easy to understand. I've shamelessly borrowed from it to explain this first shot at a Cross Tab.

The Test Data

Basically, the table and data looks like this...

--===== Sample data #1 (#SomeTable1)
--===== Create a test table and some data
CREATE TABLE #SomeTable1
(
Year SMALLINT,
Quarter TINYINT,
Amount DECIMAL(2,1)
)
GO
INSERT INTO #SomeTable1
(Year, Quarter, Amount)
SELECT 2006, 1, 1.1 UNION ALL
SELECT 2006, 2, 1.2 UNION ALL
SELECT 2006, 3, 1.3 UNION ALL
SELECT 2006, 4, 1.4 UNION ALL
SELECT 2007, 1, 2.1 UNION ALL
SELECT 2007, 2, 2.2 UNION ALL
SELECT 2007, 3, 2.3 UNION ALL
SELECT 2007, 4, 2.4 UNION ALL
SELECT 2008, 1, 1.5 UNION ALL
SELECT 2008, 3, 2.3 UNION ALL
SELECT 2008, 4, 1.9
GO

Every row in the code above is unique in that each row contains ALL the information for a given quarter of a given year. Unique data is NOT a requirement for doing Cross Tabs... it just happens to be the condition that the data is in. Also, notice that the 2nd quarter for 2008 is missing.
The goal is to make the data look more like what you would find in a spreadsheet... 1 row for each year with the amounts laid out in columns for each quarter with a grand total for the year. Kind of like this...

... and, notice, we've plugged in a "0" for the missing 2nd quarter of 2008.

Year 1st Qtr 2nd Qtr 3rd Qtr 4th Qtr Total
------ ------- ------- ------- ------- -----
2006 1.1 1.2 1.3 1.4 5.0
2007 2.1 2.2 2.3 2.4 9.0
2008 1.5 0.0 2.3 1.9 5.7

The KEY to Cross Tabs!
Let's start out with the most obvious... we want a Total for each year. This isn't required for Cross Tabs, but it will help demonstrate what the key to making a Cross Tab is.
To make the Total, we need to use the SUM aggregate and a GROUP BY
... like this...

--===== Simple sum/total for each year
SELECT Year,
SUM(Amount) AS Total
FROM #SomeTable1
GROUP BY Year
ORDER BY Year


And, that returns the following...

Year Total
------ ----------------------------------------
2006 5.0
2007 9.0
2008 5.7


Not so difficult and really nothing new there. So, how do we "pivot" the data for the Quarter?
Let's do this by the numbers...
1. How many quarters are there per year? Correct, 4.
2. How many columns do we need to show the 4 quarters per year? Correct, 4.
3. How many times do we need the Quarter column to appear in the SELECT list to make it show up 4 times per year? Correct, 4.
4. Now, look at the total column... it gives the GRAND total for each year. What would we have to do to get it to give us, say, the total just for the first quarter for each year? Correct... we need a CASE statement inside the SUM.
Number 4 above is the KEY to doing this Cross Tab... It should be a SUM and it MUST have a CASE to identify the quarter even though each quarter only has 1 value. Yes, if each quarter had more than 1 value, this would still work! If any given quarter is missing, a zero will be substituted.

To emphasize, each column for each quarter is just like the Total column, but it has a CASE statement to trap info only for the correct data for each quarter's column. Here's the code...

--===== Each quarter is just like the total except it has a CASE
-- statement to isolate the amount for each quarter.
SELECT Year,
SUM(CASE WHEN Quarter = 1 THEN Amount ELSE 0 END) AS [1st Qtr],
SUM(CASE WHEN Quarter = 2 THEN Amount ELSE 0 END) AS [2nd Qtr],
SUM(CASE WHEN Quarter = 3 THEN Amount ELSE 0 END) AS [3rd Qtr],
SUM(CASE WHEN Quarter = 4 THEN Amount ELSE 0 END) AS [4th Qtr],
SUM(Amount) AS Total
FROM #SomeTable1
GROUP BY Year

... and that gives us the following result in the text mode (modified so it will fit here)...
Year 1st Qtr 2nd Qtr 3rd Qtr 4th Qtr Total
------ ------- ------- ------- ------- -----
2006 1.1 1.2 1.3 1.4 5.0
2007 2.1 2.2 2.3 2.4 9.0
2008 1.5 .0 2.3 1.9 5.7

Also notice... because there is only one value for each quarter, we could have gotten away with using MAX instead of SUM.
For most applications, that's good enough. If it's supposed to represent the final output, we might want to make it a little prettier. The STR function inherently right justifies, so we can use that to make the output a little prettier. Please, no hate mail here! I'll be one of the first to agree that formatting of this nature is supposed to be done in the GUI!

--===== We can use the STR function to right justify data and make it prettier.
-- Note that this should really be done by the GUI or Reporting Tool and
-- not in T-SQL
SELECT Year,
STR(SUM(CASE WHEN Quarter = 1 THEN Amount ELSE 0 END),5,1) AS [1st Qtr],
STR(SUM(CASE WHEN Quarter = 2 THEN Amount ELSE 0 END),5,1) AS [2nd Qtr],
STR(SUM(CASE WHEN Quarter = 3 THEN Amount ELSE 0 END),5,1) AS [3rd Qtr],
STR(SUM(CASE WHEN Quarter = 4 THEN Amount ELSE 0 END),5,1) AS [4th Qtr],
STR(SUM(Amount),5,1) AS Total
FROM #SomeTable1
GROUP BY Year


The code above gives us the final result we were looking for...
Year 1st Qtr 2nd Qtr 3rd Qtr 4th Qtr Total
------ ------- ------- ------- ------- -----
2006 1.1 1.2 1.3 1.4 5.0
2007 2.1 2.2 2.3 2.4 9.0
2008 1.5 0.0 2.3 1.9 5.7
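
For comparison, the same rotation can also be written with the PIVOT operator introduced in SQL Server 2005; a minimal sketch against the same #SomeTable1 test data (the column aliases are mine):

--===== The same cross tab using PIVOT (SQL Server 2005 and later)
SELECT Year,
ISNULL([1],0) AS [1st Qtr],
ISNULL([2],0) AS [2nd Qtr],
ISNULL([3],0) AS [3rd Qtr],
ISNULL([4],0) AS [4th Qtr],
ISNULL([1],0) + ISNULL([2],0) + ISNULL([3],0) + ISNULL([4],0) AS Total
FROM (SELECT Year, Quarter, Amount FROM #SomeTable1) AS src
PIVOT (SUM(Amount) FOR Quarter IN ([1],[2],[3],[4])) AS pvt
ORDER BY Year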

Wednesday, August 13, 2008

What are the main differences between .net 1.1 and .net 2.0

ASP.NET 2.0 introduces a lot of new features. Some of these features aim to simplify the problems faced when using the earlier versions, and some are introduced to provide a lot of new facilities.
The most important features that are incorporated in ASP.NET 2.0 are:

(a)Master Pages
Master pages are introduced to remove one of the most important deficiencies of the earlier versions of ASP.NET: the lack of an architecture for applying a consistent look and feel. In earlier versions of ASP.NET, whenever a developer wanted to replicate common functionality of a web page on other pages, the most likely option was to create a user control and then replicate that functionality on the other pages.

ASP.NET 2.0 aims to solve this problem by introducing the concept of master pages. First the developer needs to define a master page containing the content that he wants to appear on other pages, and then use ContentPlaceHolder controls to define the locations where the sub pages can plug in content of their own. Then he has to build the sub pages - .aspx pages - that reference the master using directives like this one:

<%@ Page MasterPageFile="~/MyMasterPage.master" %>

In addition, an application can designate a default Master Page in web.config as shown here:

<configuration>
  <system.web>
    <pages masterPageFile="~/MyMasterPage.master" />
  </system.web>
</configuration>

(b)PreCompilation

By default, ASP.NET web pages and code files are compiled dynamically when the first request is made to the page. After the initial compilation, the compiled page is cached; the cache is used to satisfy subsequent requests for the same page. Even though this approach is flexible, when the page is requested for the first time it requires a bit of extra time to compile the code. You can avoid this overhead by leveraging a new feature known as precompilation; by using this feature, you can compile an ASP.NET web site before making the web site available to users.

(c)Sharing code in the application

In earlier versions of ASP.NET, if you were to reference a reusable component from your .NET application, you had to compile the assembly and place it in the bin folder (or place it in the GAC) of the web application. But now with ASP.NET 2.0, creating a reusable component is very simple and straightforward. All you need to do is create the component in the predefined App_Code subdirectory. Any component placed in this directory will be automatically compiled at runtime into a single assembly. This assembly is automatically referenced and will be available to all the pages in the site.

(d)Themes and Skins

ASP.NET 2.0 introduces the concepts of Themes and Skins by means of which the look and feel of the web pages can be enhanced to a great extent to make them visually catchy and attractive.

A skin is a set of visual attributes applied to a control type. A theme is a collection of skins. There are a lot of predefined themes in ASP.NET 2.0. You can apply one by using the following line of code:

<%@ Page Theme="SmokeAndGlass" %>

The Page directive's Theme attribute declaratively applies a theme to a page. Themes can also be applied programmatically using the Page class's Theme property.

Themes and Skins in ASP.NET 2.0

When you build a Web application, it usually has a similar look and feel across all its pages. Not too many applications are designed with each page dramatically different from the next. Generally, for your applications you use similar fonts, colors, and server control styles across all the pages.

You can apply these common styles individually to each and every server control or object on each page, or you can use a new capability provided by ASP.NET 2.0 to centrally specify these styles. All pages or parts of pages in the application can then access them.

You can read more about this topic..

How to apply themes
Themes can be applied in a variety of ways. The following examples show how you can apply a theme named "SmokeAndGlass" in your application using different methods:

<%@ Page Theme="SmokeAndGlass" Language="C#"%>
Themes can also be set programmatically in the Page_PreInit event handler. They must be set in this event handler because the theme's looks are applied to the page's server controls before the page loads. Here is how you can set a theme programmatically:


protected void Page_PreInit(object sender, EventArgs e)
{
    // Read the theme name posted back from the page, if any.
    string theme = "";
    if (Page.Request.Form.Count > 0 && Page.Request["Themes"] != null)
    {
        theme = Page.Request["Themes"].ToString();
        if (theme == "Default")
        {
            theme = "";
        }
    }
    // The theme must be assigned here, before the page's controls are created.
    this.Theme = theme;
}

Tuesday, August 12, 2008

Set up SQL Server Mode

In this tutorial we are going to focus on setting up the SQL Server method for maintaining session state. Here are the steps for setting this method up on your SQL server:
• Connect to your SQL server using Query Analyzer
• Open the SQL script InstallSqlState.sql that is normally located in C:\Windows\Microsoft.Net\Framework\[YourVersion]. Replace C: with the drive letter you have yours installed on. This script will create the ASPState database, tables, and stored procedures needed for this method of maintaining session state.
• If using a trusted connection for connecting to your SQL server, you must change ownership of the ASPState database to the 'sa' account. To do this, run the following in Query Analyzer:
SQL
use ASPState
exec sp_changedbowner 'sa', 'true'

The sp_changedbowner stored procedure is a system stored procedure that is used to, you guessed it, change the owner of a database.
• If you decide to use SQL Authentication instead of Windows Authentication, you need to create a user account for the process. This account must have, at the very minimum, execute rights for all the stored procedures in the ASPState database (a sketch follows below).
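As a minimal sketch of that last step, assuming SQL Server 2000 and a hypothetical login named SessionUser:
SQL
use ASPState
go
-- Create the SQL login and give it access to the ASPState database.
exec sp_addlogin 'SessionUser', 'Str0ngPassword!'
exec sp_grantdbaccess 'SessionUser'
go
-- Then grant execute on each of the ASPState stored procedures, for example:
-- GRANT EXECUTE ON dbo.SomeAspStateProcedure TO SessionUser
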
Now that we have your SQL Server configured for this mode of state management, we need to configure your Web application to look at your newly created ASPState database. Fortunately, doing this takes far fewer steps than getting the database ready. These steps are:
• Set the mode of the sessionState element in your web.config to SQLServer.
• Set the sqlConnectionString attribute of the same element to the connection string for your SQL server. The sessionState element in web.config should now look like:
CODE
<sessionState mode="SQLServer"
sqlConnectionString="data source=YourServer;user id=YourUsername;password=YourPassword"
cookieless="false" timeout="20" />
Now you have your SQL Server ready and your Web application ready, so you would think you're done now, right? Not necessarily. Let's say you're running SQL Server 2000 with Service Pack 3; if that's the case, the first time you run your Web application against your ASPState database you're going to get errors such as:
CODE
SELECT permission denied on object 'ASPStateTempApplications', database 'tempdb', owner 'dbo'.
Why is this, you might ask? Well, I'm glad you asked, because I have the answer. Service Pack 3 for SQL Server 2000 adds a new security feature which, as you can imagine from the above error message, is disabled by default. This new security feature is Cross DB Ownership Chaining. How nice of Microsoft to inform you of this before you went through all this, isn't it?

Don't be discouraged; fortunately this can be enabled by running the sp_configure and sp_dboption system stored procedures. First you run the sp_configure stored procedure, like so:
SQL
use master
go
EXEC sp_configure 'Cross DB Ownership Chaining', '0';
RECONFIGURE
GO
Then reboot your SQL Server. Once your server is rebooted you will then need to run the sp_dboption stored procedure, like this:
SQL
use master
go
EXEC sp_dboption 'ASPState', 'db chaining', 'true'
go
Once you have completed this final step, your SQL Server is ready to handle session state management. There you have it: setting up session management using SQL Server 2000. In the next tutorial we will look at setting up a state server for session management; until then, I hope you found this tutorial informative and useful. Thanks for reading and happy coding!

SQL SERVER 2000 - EXEC master..xp_cmdshell permissions

[xp_cmdshell]
Extended stored procedures are essentially compiled Dynamic Link Libraries (DLLs) that use a SQL Server specific calling convention to run exported functions. They allow SQL Server applications to have access to the full power of C/C++, and are an extremely useful feature. A number of extended stored procedures are built in to SQL Server, and perform various functions such as sending email and interacting with the registry.
xp_cmdshell is a built-in extended stored procedure that allows the execution of arbitrary command lines. For example:
exec master..xp_cmdshell 'dir'
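
By default, only members of the sysadmin role can run xp_cmdshell in SQL Server 2000. A minimal sketch of granting execute permission to another user (the user name WebAppUser is hypothetical and must already exist in the master database):

use master
go
-- Allow a non-sysadmin database user to call xp_cmdshell directly.
GRANT EXECUTE ON xp_cmdshell TO WebAppUser
go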

To execute any EXE from your stored procedure, you will need to use the xp_cmdshell command. It will successfully execute the exe.

Syntax of cmd shell (assuming @param1 and @param2 are parameters of your stored procedure):

DECLARE @CMDPATH VARCHAR(500)
SET @CMDPATH = 'exepath.exe ' + CONVERT(VARCHAR, @param1) + ' ' + CONVERT(VARCHAR, @param2)
EXEC master..xp_cmdshell @CMDPATH

Just try it....

Monday, August 4, 2008

Select only one radio button in a DataList in ASP.NET

ASPX Page Code--------->

javascript tag start from here---
function CheckOnes(chkControl)
{
    // Find the actual radio button (the control itself or its first child).
    var theBox = (chkControl.type == "radio") ? chkControl : chkControl.children.item[0];
    var elm = theBox.form.elements;

    // Uncheck every other radio button on the form so only one stays selected.
    for (var i = 0; i < elm.length; i++)
    {
        if (elm[i].type == "radio" && elm[i].id != theBox.id)
        {
            elm[i].checked = false;
        }
    }
    theBox.checked = true;
}

Data list code------------->
Within the DataList use a server-side radio button:

<asp:DataList ID="lstTemplates" runat="server" OnItemDataBound="lstTemplates_ItemDataBound">
    <ItemTemplate>
        <asp:RadioButton ID="rdbTemplate" runat="server" />
    </ItemTemplate>
</asp:DataList>

Server side code for DataList-------------->

protected void lstTemplates_ItemDataBound(object sender, DataListItemEventArgs e)
{
    RadioButton rdb;
    rdb = (RadioButton)e.Item.FindControl("rdbTemplate");
    if (rdb != null)
    {
        rdb.Attributes.Add("onclick", "CheckOnes(this);");
    }
}

Sunday, August 3, 2008

Check and Uncheck All Checkboxes in a DataList using ASP.NET and C#

Here are the client-side javascript functions in the aspx page


function CheckAll(checkAllBox, fieldName)
{
var frm = document.Form1;
var actVar = checkAllBox.checked;
for(i = 0; i < frm.length; i++)
{
e = frm.elements[i];
if (e.type == 'checkbox' && e.name.indexOf(fieldName) != -1)
e.checked = actVar ;
}
}


function UnCheckAll(fieldName)
{
var frm = document.frmAdsKeyword;
for(i = 0; i < frm.length; i++)
{
e = frm.elements[i];
if (e.type == 'checkbox' && e.id.indexOf(fieldName) != -1)
e.checked = false ;
}
return false;
}

Thursday, July 24, 2008

No Security tab exists in the properties of a folder in Windows XP.


Missing or No Security Tab Found in Windows XP Professional.

The Security tab in the properties of files or folders in Windows XP Professional is an important feature that enables administrators and users to define security permissions and rights for a particular user or group on the computer's resources. By default, Windows XP Professional follows the recommended setting of enabling simple file sharing, which hides the Security tab, leaving you with only the General, Sharing, Web Sharing and Customize tabs, as in the Simple File Sharing UI.

So to see and unhide the Security tab, just use the following steps:

Launch Windows Explorer or My Computer.
Click on the Tools at the menu bar, then click on Folder Options.
Click on View tab.
In the Advanced Settings section at the bottom of the list, uncheck (clear the tick on) the “Use simple file sharing (Recommended)” check box.
Click OK.
The Security tab is available only to Administrators or users with administrative rights, so make sure you log in as one. Also, security can only be set on an NTFS partition. If you’re still having problems revealing or displaying the Security tab on file or folder properties, check out the following registry hack and set the value to 0, or simply delete the key:

Hive: HKEY_CURRENT_USER
Key: Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Name: NoSecurityTab
Type: REG_DWORD
Value: 1

If you’re using a Windows XP Professional system that is installed in a Workgroup, the Security tab is also hidden by default, because in Windows XP Home Edition, and in Windows XP Professional when joined to a workgroup, network logons are forced to use the Guest account. Follow the instructions and information in the Microsoft Knowledge Base, where you need to set the value of the ForceGuest registry key.

Split the String values with a special character in MS Flow to convert this into Array

 Many times we have a requirement to prepare the Mailing address for some of the documents, suppose there are Address Line1, Address Line2, ...