SQL Server 2008 DBA


This covers roughly the first half of the topics; I think it will help you.


Microsoft SQL Server Database Administrator

What Is A Device?
The terms device and database are often confused. The basic storage container for Microsoft SQL Server is a device, which is an operating system file that resides on the physical disk, or hard drive, of the server. A device is the container that allocates space to Microsoft SQL Server on the server's hard drive. Microsoft SQL Server does not acquire disk space on the server dynamically; you must specify the amount of disk to set aside for it to use. This allocation is accomplished through the device.

A device carries with it a file extension of .DAT. This is important to know if you are in a multiple-programmer environment and are using the data server for file services as well as data services. For example, in File Manager or Windows NT Explorer, note the physical file C:\MSSQL\Data\master.dat. You can highlight this file, hit the Delete key, and if it is not currently being used by Microsoft SQL Server, it will be deleted like any other file. If it is in use, Microsoft SQL Server and the operating system will not allow it to be deleted. This prevents an accidental delete.

The only acceptable way to recover the space given to a device is to drop the device and re-create it with a smaller size. When you drop a device, ensure that you go to the file system and delete the physical file. If you do not remove the device file, you will receive an error message when you re-create the device with the same name. Once you remove the file, you use the Enterprise Manager to re-create the device with a smaller size. You can then restore any contents of the old device to the new device, provided all the objects fit in the new space. Try to avoid creating one big device that takes up the whole hard drive. Doing so will not give you the flexibility you need from the server. You will be very limited in your options down the road and will have to jump through some fairly complicated hoops to change this configuration on a production machine.

What, Then, Is A Database?
Databases are also considered containers. They hold the objects that make up your server's purpose in life. Tables, views, indexes, and stored procedures are all objects that reside in your database. You can, and often will, have multiple user-defined databases residing on your server. These databases are where the production information and code reside. Other databases are installed on your server to give it the intelligence it needs to function; I will cover these databases in a few different areas throughout the book. However, our focus will be on setting up a production system, not on the inner workings of Microsoft SQL Server.

One of the most common mistakes new users make is to confuse the device and the database. You place your databases within your devices. To understand this, think of a database as a division within your company. For instance, Human Resources deals with very specific kinds of information, so you would logically put all of that type of information in a container for centralized management and access control. Accounting is an area that often requires more security than others, and the information generated from this area would justly be placed in a separate container for security reasons. You would not scatter information for the Human Resources department throughout all the offices; instead, you would put all those functions and resources in one place. The same applies to databases and good database design.

An interesting point for all PC-based database programmers is that Microsoft SQL Server does not store the information or data in the database. Remember, the database is a container. Instead, the server stores your data in a table. The index you create for fast access to data is not stored in the table with the raw data; it is stored as another object within the database. A database is a collection of objects. This concept is not hard to follow, but it is different enough from the organization of other database programs that it is sometimes a stumbling block for the small-system programmer. An MIS department accustomed to dBASE or Microsoft FoxPro databases will struggle with this at first. Since this structure is common to most large database systems today, you should become familiar with it.

What Are Character Sets And Sort Orders?
Another preinstallation issue is choosing a character set and sort order. A character set is the basic text and symbols that are loaded in your system. Regardless of the character set you choose, the first 128 characters are the same. The extended characters, including language-specific characters, reside in the remaining half of the character set. Your decision depends on whether you are doing business overseas or in other languages and need to store text and special characters. In most cases, the default is fine and should provide you with what you need to function.

You should make this determination prior to installation. Changing character sets can be a daunting task with many system ramifications. If your company is concerned about character sets, chances are you are experienced in these issues and this feature should be nothing new to you. Another interesting issue concerns sort orders. Sort orders determine the way the data is organized when stored by Microsoft SQL Server. The default sort order for Microsoft SQL Server is dictionary order and case-insensitive. This is fine and probably the best default setting. It is not, however, the fastest setting you can use on your system.

The fastest sort order is binary. The use of this setting has some impact on how you perform certain tasks down the road, so choose it carefully. It will change all of your SQL scripts, stored procedures, and client pass-through code to be case-sensitive. If you type a statement and use a different case than was specified when the table was created, you will get an error message. Say, for instance, you have a table called "My Table" on your system. To access it, you type "my table". An "Object Not Found" error is returned.

Another consideration in choosing a character set and a sort order is whether you are setting up a distributed server environment. If you are, you must use compatible character sets and sort orders among your servers. If you are going to share, replicate, or distribute data, use a common character set and sort order throughout your enterprise. Do not forget that in business today we must occasionally share data with other companies. If your system interacts with another company's system, again make sure the character sets and sort orders are compatible.


What Is The Recommended System Configuration?
Let me first comment on the Microsoft recommendations for your system and what I have found to be a more realistic configuration for your server. Microsoft's recommendations should be taken with a grain of salt and applied with care to each environment. Likewise, my recommendations (or anyone else's, for that matter) should not be followed blindly. Recommendations are intended to give you an idea of where to start and should not be considered the end solution or setting for your system. The system requirements for installing Microsoft SQL Server are actually very easy to meet, often leading the administrator into a false sense of security with regard to how well the server will perform.

Microsoft system requirements for an Intel-based system:

• CPU: 80486
• RAM: minimum 16MB (minimum 32MB required for replication)
• Hard disk: minimum 60MB, plus an additional 15MB for Microsoft SQL Server Books Online
• File system: FAT or NTFS
• OS: Windows NT Server 3.51 or higher

Where Should The Microsoft SQL Server Be Installed?
Primary domain controllers (PDCs) have the useful role of logging people on and off your Microsoft network. They also handle synchronization with backup domain controllers (BDCs) on your network. Any type of domain controller is not the optimal location to install Microsoft SQL Server. Gateway Services for NetWare is another of the services you should consider moving off your Microsoft SQL Server. This service allows NetWare files to be shared through Microsoft shares on your server. Although this is often a convenient way to get to your files, putting these files on your database server adds to the overhead of that machine. You should strive to install your server on as clean a machine as possible, one that will only be used for database services. This means that you should not set up file shares or extra services such as these on your database server.

Prior to installing Microsoft SQL Server, you should create a domain or local user account under which the SQL Executive service will perform its tasks.

What's Stored In The Master Database?
The server's system catalog and all the environmental information is stored in the master database, which is contained within the master device. The master database is the brains of your server. Great care should be taken when modifying any information contained in the master database. You should get in the habit of backing up your master database whenever you make environmental changes to your server, including changing the sizes of databases or adding users. The following items should trigger a backup of the master database:

• CREATE, ALTER, or DROP statements (SQL)
• DISK statements (SQL)
• Altering a transaction log
• Adding or removing a mirrored device
• Adding or dropping remote servers
• Adding or dropping a login ID
• Any change in server configuration

The size of the master device is another important consideration. By default in current versions of Microsoft SQL Server, the master is set to 25MB.

This value is totally dependent on the system that it must support. Many things affect the size of the master device. For most production systems, you must alter the size of the master device when adding major components to the server. Most end up in the 30MB range unless they need an abnormally large Tempdb. Upon installation, I usually change this setting to 30MB to avoid having to resize it a few weeks down the road. The additional 5MB of disk space will not hurt the server and provides more flexibility right off the bat. Keep in mind, however, that the size of the master device can be increased after installation.
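Returning to the earlier advice to back up the master database after any environmental change, the following is a minimal Transact-SQL sketch of that backup; the target path is only an assumption and should be adjusted for your environment:

    -- Back up master after any environmental change (sizes, logins, configuration)
    BACKUP DATABASE master
    TO DISK = 'C:\MSSQL\BACKUP\master.bak'   -- example path, not a requirement
    WITH INIT                                -- overwrite any previous backup in this file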

The Master Database
System tables and environmental information are stored in the master database. Tables such as Sysdatabases, Syslocks, Sysprocesses, and Sysusages store critical information about your server. Other tables, such as Sysobjects, keep track of the objects that reside in each database on your server; each database has a copy of these tables. The server will allow you to edit these and other important tables through raw SQL; however, I strongly recommend that you do not modify data in any of the tables in the master through SQL commands. Such modifications should be attempted only when absolutely necessary and only by someone with an intimate understanding of Microsoft SQL Server. Plenty of tools are available in Microsoft SQL Server to protect you from yourself. Use these tools at your disposal to make server changes.

This is not to say that you cannot check these tables for information needed to run your client-server applications effectively. I have often used information in system tables to find certain server-side permission or relation information. You can read data all day long without making direct modifications to these tables. By default all users of a database will have some kind of permission to access the system tables for that database. This is a requirement for the system to run well and cannot be avoided. To clarify, let's look at this kind of information in a different light. You probably have committed to memory the layout of all the furniture in your house or apartment. If you woke up in the middle of the night and made a trip to the kitchen to get a drink of milk, you would probably make that trip fairly well even without the lights on. The system tables store the information you take for granted, similar to the location and size of the coffee table, the doors, and so on. Incorrectly changing these stored values by hand would in effect move the furniture on you. This would not lend itself to a good environment for getting to your data. It could in some cases crash your server, rendering it useless.

The Pubs Database
In a production environment, Pubs does you no good and should probably be removed. This database is used as a learning tool and for testing the basics of your installation. Once your production machine is up and running, you can remove this database from the master device.

The Model Database
The model database is like a stencil for creating new user-defined databases. This stencil gives you a starting point for your CREATE DATABASE statements. The system tables for user-defined databases are stored in the model. Any stored procedures or users that need to exist in all your user databases should be placed in the model database. By placing them in the model, they will be copied to each successive database that is created. Be careful when placing things in the model. This action will increase the minimum size of your databases and may add unnecessary objects to databases.

Tempdb
Many things can affect the space required for Tempdb. This database is part of the master device by default and resides on disk. This "scratch pad" is shared by all the users on the server for worktable space and to resolve join issues in processing your queries. If you have many users on your system, you might need a bigger Tempdb. You might also need a bigger Tempdb if your users have the ability to write their own ad hoc queries or reports, or if a query returns a large number of rows.

The Msdb Database
The Msdb database is perhaps the most versatile piece of your server. This is basically your server's to-do list. You can add tasks to this database that will be performed on a scheduled, recurring basis. You can also view the history of the defined tasks and their execution results. The Msdb database is the component that allows you to proactively manage your data server. Used primarily by the SQL Executive service, the Msdb is created on two separate devices: one for your data and one for the transaction log.

Be Careful With Memory
Microsoft SQL Server should not be installed with a memory footprint larger than available memory. This configuration option can be set to more than the system has installed. (Microsoft SQL Server will try to start with that setting, too.) In some situations the server will start and run very slowly, and in others it will appear to hang. You can fix this memory setting by starting the server with the -f switch. This starts the server in a basic configuration and then allows you to go in and change the memory setting. This memory setting is configured after installation. You should pick a setting, make the changes on your server, and then monitor the impact of that setting. Never assume that your math is correct or that what you heard someone else has done is right for your situation.

Test it first. To set or configure the memory for your server, do the following:
1. Start the SQL Enterprise Manager.
2. Select the Server menu option.
3. Select the Configure menu option.
4. When the Server Configuration dialog box appears, select the Configure tab.
5. Scroll to the Memory option and modify the memory settings.

Hardware (RAM)    SQL Server memory    2K Setting
16MB              4MB (-)              2048 (-)
24MB              8MB (-)              4096 (-)
32MB              16MB (18)            8192 (9216)
48MB              28MB (34)            14336 (17408)
64MB              40MB (46)            20480 (23552)
128MB             100MB (108)          51200 (55296)
256MB             216MB (226)          110592 (115712)
512MB             464MB (472)          237568 (241664)

6. Make the changes based on your hardware configuration.
7. Stop and start the MSSQLServer service to let the changes take effect.

This memory setting is in 2K units and can be a little confusing. You must convert the MB value of the RAM on the machine to the equivalent number of kilobytes. This is done by multiplying the MB value by 1024. Subtract the amount of memory that Microsoft Windows NT needs (at least 12MB), then divide that number by 2. This result will give the amount of memory in 2K units that you should give Microsoft SQL Server, provided no other services or applications are running on the server.
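As a worked example of this arithmetic (the 64MB of RAM and the 12MB reserved for Windows NT are illustrative assumptions), the calculation can be run as a simple Transact-SQL query:

    -- 64MB of physical RAM expressed in KB, minus 12MB reserved for Windows NT,
    -- divided by 2 to convert KB into 2K units for the memory option
    SELECT ((64 * 1024) - (12 * 1024)) / 2 AS memory_in_2K_units
    -- Result: 26624

Note that the table above leaves more than the 12MB minimum for Windows NT (a setting of 20480 units hands SQL Server only 40MB of a 64MB machine), which is why its suggested values are lower than this raw calculation.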

What Security Model Will Be Used?
In Microsoft SQL Server, you have three choices when it comes to data security:
1) Standard security
2) Integrated security
3) Mixed security

Standard Security (Default)
The standard security setting requires each user to supply a valid login ID and password to attach to the server. This validation is separate from the network login scheme. This setting supports connections by non-Windows NT validated users accessing your data server.

Integrated Security
Integrated security allows you to use the network login and password supplied to Microsoft Windows NT as a security mechanism for access to your data server. If users are validated with a login and password by Microsoft Windows NT, they can connect to the server. This provides you with the one login, one password scenario that many companies are looking for. Keep in mind that just because a user can connect to the server does not mean he or she has access to your database.

Mixed Security
Mixed security is used when you want your Microsoft Windows NT users to supply only one login and password to be on the network and connect to your server. This method would also allow other network users to connect to the server as long as they can provide a valid database login and password. In the mixed-legacy environments of today's businesses, this is a very popular method of implementing security.

Microsoft SQL Server uses an interesting security model that has two levels of security. First, you must be allowed to connect to the server. Then, for each database you are granted access to, you are granted rights and permissions on a case-by-case basis. To explain this concept using our office example, say that you have been given a key to get into the office building. This key gives you the right to enter and walk through any hallways and public areas to function and find your way around. Then, for access to certain areas in the building, you need an access card (or permission) to get into each office or room (database) you do business in. If you are not granted access to, say, the Human Resources department, you simply cannot access this area (database). By assigning security on a departmental level, you can give your users freedom to do their jobs while protecting sensitive data from people who should not see it.

This model is very good for a few reasons. In a lower-budget design, you can have both the production databases and training or development databases coexist on the same server. You don't have to worry about adding to an existing system and having users gain rights by association to other databases. Users are restricted by default and granted access by the owner of the database to do what they need. No one except the SA (system administrator) has rights in a database unless they own it.

Spring Cleaning
As mentioned, in a production environment, Pubs does you no good and should probably be removed. Likewise, you should periodically look for things like Pubs within your system: tables, copies of tables, and stored procedures that are left to sit until no one remembers what they are or what they do. In over half the systems I have worked on (even in the one I am developing right now), I have made a copy of something and left the original in place, changed my copy until it was just the way I wanted it, and forgotten to remove the original or the test copy I ran to see if the system was faster. Keep your system as clean as possible and you will have less garbage to clean up later. Each object you define in your system takes up resources of some kind.

Protocols
You might have to support multiple protocols on your network. Keep in mind that the default, Named Pipes, is slower than IPX/SPX or TCP/IP. You should try to use one of the latter two for client connections because they connect faster and transfer results better. Use as few protocols as necessary to reduce network traffic.

Microsoft SQL Server allows for multiple protocols to be supported and used simultaneously. Obviously, the number of protocols you are trying to support will have an impact on performance. Keep the list as small as possible, and you will be just fine. You can change your network support at any time after installation by rerunning Setup and selecting the Change Network Support radio button.

• Named Pipes: SQL Server default protocol.
• Multi-Protocol: Required to use integrated security. Supports encryption.
• NWLink IPX/SPX: Allows Novell IPX/SPX clients to connect.
• TCP/IP Sockets: Allows TCP/IP clients to connect. Uses port 1433.
• Banyan VINES: (Check SQL Books Online or the Banyan documentation for configuration issues.)
• AppleTalk ADSP: Allows Apple Macintosh-based clients to connect.
• DECnet: Allows PATHWORKS connectivity. (Check SQL Books Online or the DEC documentation.)

Note: Microsoft SQL Server always listens on Named Pipes by default.

You may drop support for Named Pipes altogether. Before doing this, however, make sure you have another protocol installed for client connections to your server. Also ensure that the client configuration utility is installed and returns the expected values on the server. All software that runs on your server runs as a client. Administrators often take this for granted and have the perception that the Enterprise Manager, for example, is really the server. It is just a client and must connect like any other.

Services
As mentioned, try not to have a lot of extra services running on your machine. Each of these services takes up processor time and resources. Administrators often forget that these services run all the time and automatically unless they are changed.

What About The SQL Mail Client?
Having your Microsoft SQL Server send you a mail message or report automatically is a great feature. I have found this to be a tremendous benefit in setting up a new system. Microsoft SQL Server will interact with a number of mail clients through MAPI (Mail Application Programming Interface). Good step-by-step setup instructions are given in the Microsoft SQL Server Books Online. Perform a search on Mail, and look up your particular mail system and how to configure it to run with Microsoft SQL Server. Do this early in the process, and it will help keep you informed of just what your server is doing. Keep in mind, however, that too much of a good thing will slow processes down. Making the call to the external stored procedure for mail does take time.

Should I Use The Default Location For My Devices?
Whether to use the default location for devices depends on whether you have a disk configuration that will better support a separate area for your data. In most single-disk situations, the default directory is fine. If you are installing on a machine with a multiple-disk subsystem or RAID system installed, then putting the data files on high-performance disks will improve performance and should be done at installation.

Any server purchase you make today will be outdated by the time you unpack the box. Hardware changes on a daily basis, which is very frustrating. I like to buy servers with a good expansion path. A lot of potential expansion allows me to keep up with changes in the industry better. I buy brand-name servers because I don't like to invest money in machines that have poor technical support and might not be supported next year. I always check the hardware compatibility list for Windows NT Server. This is a must. I check each component, from CPU to disk controller, when needed. This ensures that I will not have an operating-system problem with the server I am configuring.

I like to configure my servers with a RAID disk subsystem for my data. When reliable access to the data is critical, I require some sort of RAID configuration for the data to reside on. With the ability of Microsoft Windows NT to implement RAID at the operating-system level, this is easily accomplished with even a limited budget. I try to keep the operating system and program files separate from the data. I usually place these files on a separate disk and controller from the data, and I mirror the disk and controller when budget allows. This provides the maximum amount of protection from hard drive failures while keeping performance at the highest-possible levels. The number of disks in the RAID array can be as small as three and as many as the disk subsystem will support.

Default Port
When SQL Server is installed as a default instance, SQL Server will by default use port 1433 to accept user connections. The following steps need to be followed by a DBA to add a Windows Firewall exception for a SQL Server instance that is running on the default port 1433. 1. Click Start | Run and type FIREWALL.CPL; this will open Windows Firewall:

By default, the Microsoft Windows XP Service Pack 2 and later, Windows Server 2003 Service Pack 1, Windows Vista, and Windows Server 2008 operating systems enable Windows Firewall, which closes port 1433 to prevent internet computers from connecting to a default instance of SQL Server on your computer. 2. In the Windows Firewall dialog box, click the Exceptions tab, and then click Add Port...:

3. In the Add a Port... dialog box, specify the SQL Server <Instance Name> in the Name text box and also specify the Port Number as 1433, which will be the port number used by the Database Engine for the default instance of SQL Server:

4. Verify that TCP is selected and then click OK. 5. To open the port to expose the SQL Server Browser service, click Add Port... In the Add a Port dialog box, type SQL Server Browser in the Name text box, type 1434 in the Port Number text box, select UDP, and finally click OK to save:

The SQL Server Browser service lets SQL Server users connect to an instance of the Database Engine that is not listening on port 1433. If the SQL Server Browser service is running, SQL Server users can connect without knowing the port number. To use the SQL Server Browser service, a DBA must open UDP (User Datagram Protocol) port 1434. To promote the most secure environment, leave the SQL Server Browser service stopped and configure clients to connect using the port number. 6. To allow named pipes access through the firewall, a DBA needs to enable File and Printer Sharing through the firewall. 7. To close the Windows Firewall dialog box, click OK.

Physical database architecture
Microsoft SQL Server 2005 data is stored in databases. The data in a database is organized into the logical components visible to users. A database is also physically implemented as two or more files on disk. When using a database, you work primarily with the logical components such as tables, views, procedures, and users. The physical implementation of files is largely transparent. Typically, only the database administrator needs to work with the physical implementation.

Each instance of SQL Server has four system databases (master, model, tempdb, and msdb) and one or more user databases. Some organizations have only one user database, containing all the data for their organization. Some organizations have different databases for each group in their organization, and sometimes a database is used by a single application. For example, an organization could have one database for sales, one for payroll, one for a document management application, and so on. Sometimes an application uses only one database; other applications may access several databases.

It is not necessary to run multiple copies of the SQL Server database engine to allow multiple users to access the databases on a server. An instance of the SQL Server Standard or Enterprise Edition is capable of handling thousands of users working in multiple databases at the same time. Each instance of SQL Server makes all databases in the instance available to all users that connect to the instance, subject to the defined security permissions.

When connecting to an instance of SQL Server, your connection is associated with a particular database on the server. This database is called the current database. You are usually connected to a database defined as your default database by the system administrator, although you can use connection options in the database APIs to specify another database. You can switch from one database to another using either the Transact-SQL USE database_name statement, or an API function that changes your current database context.
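As a small illustration, assuming the pubs sample database discussed earlier is still present, the following switches the current database and confirms the change:

    USE pubs            -- change the current database context to pubs
    SELECT DB_NAME()    -- returns the name of the current database, in this case pubs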

SQL Server 2005 allows you to detach databases from an instance of SQL Server, then reattach them to another instance, or even attach the database back to the same instance. If you have a SQL Server database file, you can tell SQL Server when you connect to attach that database file with a specific database name.
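A minimal sketch of detaching and reattaching a database; the database name Sales and the file path are assumptions for illustration:

    -- Detach the database from the current instance
    EXEC sp_detach_db 'Sales'

    -- Reattach it on this (or another) instance by pointing at its primary data file
    EXEC sp_attach_db 'Sales', 'C:\MSSQL\Data\Sales.mdf'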

Files and Filegroups Architecture
SQL Server maps a database over a set of operating-system files. Data and log information are never mixed in the same file, and individual files are used only by one database. Filegroups are named collections of files and are used to help with data placement and administrative tasks such as backup and restore operations.

SQL Server databases have three types of files:

1) Primary data files: The primary data file is the starting point of the database and points to the other files in the database. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.

2) Secondary data files: Secondary data files make up all the data files other than the primary data file. Some databases may not have any secondary data files, while others have several secondary data files. The recommended file name extension for secondary data files is .ndf.

3) Log files: Log files hold all the log information that is used to recover the database. There must be at least one log file for each database, although there can be more than one. The recommended file name extension for log files is .ldf.

SQL Server does not enforce the .mdf, .ndf, and .ldf file name extensions, but these extensions help you identify the different kinds of files and their use.
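To make the three file types concrete, here is a sketch of a CREATE DATABASE statement that uses all of them; the database name, logical names, paths, and sizes are illustrative assumptions:

    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = Sales_data,  FILENAME = 'C:\MSSQL\Data\Sales.mdf',  SIZE = 10MB),   -- primary data file
        (NAME = Sales_data2, FILENAME = 'D:\MSSQL\Data\Sales2.ndf', SIZE = 10MB)    -- secondary data file
    LOG ON
        (NAME = Sales_log,   FILENAME = 'E:\MSSQL\Log\Sales.ldf',   SIZE = 5MB)     -- log file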

Transaction Log
When SQL Server is functioning and operating, the database engine keeps track of almost every change that takes place within the database by making entries into the transaction log so that they can be used later if needed. The location of the SQL Server transaction log is configured at the same time the database is created. When creating a database, the location of the SQL Server transaction log is specified, along with other options associated with the transaction log. The database creation dialog exposes the SQL Server transaction log options that can be set during the creation of the database.

The options allow you to specify the location of the SQL Server transaction log files that are used by the database that you are creating. These transaction log files are stored just like the data files used in SQL Server, and they can be configured just like the data files in SQL Server. For example, along with the file location you can specify a minimum size for the SQL Server transaction log file to start out with. This size is just the starting point, because as the database is used the transaction log will grow.

The growth must be planned for, and the options must be configured to handle the growth accordingly, or error messages relating to the transaction log being full can occur. The rate at which the transaction log files grow can be specified by a size in megabytes or by a percentage. This setting tells SQL Server that when the transaction log reaches a specified point, it should automatically grow the file by the amount of growth specified in order to accommodate future transactions.

The other option that can be set is the maximum size of the transaction log files. They can be set to have unrestricted file growth, or they can be set to occupy only a specific amount of space in megabytes. One thing to keep in mind is that the transaction logs can be used in a backup situation, so putting them on a disk other than the one occupied by the primary data files may be a good idea for future use. To elaborate on the idea of using the SQL Server transaction log as part of the backup and storing it on a separate drive: the transaction log can be backed up and used to recover transactions since your last backup.
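The growth and maximum-size settings can also be changed after the database exists; this sketch reuses the hypothetical Sales database and Sales_log file from the earlier example:

    -- Grow the log file in 10MB increments, but never let it exceed 500MB
    ALTER DATABASE Sales
    MODIFY FILE (NAME = Sales_log, FILEGROWTH = 10MB, MAXSIZE = 500MB)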

The last entries you have made will be stored in the transaction log and can be replayed against the database to give a better database restoration by minimizing the amount of work lost since the last backup. So what exactly occurs during the logging of a transaction? When a transaction is logged in the database, it can occur in different manners based on the statement that is being logged. In essence, however, all transactions log a copy of the old data and the new data in the transaction log. Some transactions will log a copy of the entire row, and other transactions will log the bytes that have changed during the transaction. On many occasions it is not necessary to know exactly what is occurring in the transaction log as long as it is used correctly when programming with it.

How can the SQL Server transaction log be used when developing stored procedures, database objects, or interactions with the database in order to ensure that proper recovery methods can be implemented during the development of these objects or segments of code? When using Transact-SQL to interact with the database engine, effective use of certain statements within the code will allow transactions and recovery options to be implemented in case something goes wrong in the code we create. These statements are Begin Tran, Rollback Tran, Commit Tran, and Save Tran. The Begin Tran statement instructs the database engine to begin a transaction block within the database so that the work can be handled explicitly in the code. For example, if you wanted to insert a group of records into a specified table only if a certain condition was true, you could begin the transaction, insert the records, and check the condition to see if it was met.

If the condition was met, you could then issue the Commit Tran command to commit the block of transactions since the last Save Tran or Begin Tran was encountered. If the condition was not met, you could instead issue the Rollback Tran command to stop the transaction and roll back all changes to the database since the last Save Tran or Begin Tran was issued. The Save Tran command is issued to mark a save point within the transaction handling. For example, you could create a save point every so often during a large operation so that the rollback or commit does not have to handle as many records when it is performed. One thing to keep in mind is that these statements have an impact on performance during execution, as they are database operations just like the other statements you execute.
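A short sketch of the pattern just described, using a hypothetical Orders table and treating an error as the failing condition:

    BEGIN TRAN
        INSERT INTO Orders (OrderID, CustomerID) VALUES (1001, 42)
        SAVE TRAN AfterFirstInsert                 -- optional save point
        INSERT INTO Orders (OrderID, CustomerID) VALUES (1002, 42)

        IF @@ERROR <> 0
            ROLLBACK TRAN                          -- undo everything since BEGIN TRAN
        ELSE
            COMMIT TRAN                            -- make both inserts permanent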

We have examined the transaction log and how it may prove valuable to us when working with SQL Server and have found some good uses for the transaction log and the ways to interact with it.

Pages and Extents
The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages. Extents are a collection of eight physically contiguous pages and are used to efficiently manage the pages. All pages are stored in extents.

Pages
In SQL Server, the page size is 8 KB. This means SQL Server databases have 128 pages per megabyte. Each page begins with a 96-byte header that is used to store system information about the page. This information includes the page number, page type, the amount of free space on the page, and the allocation unit ID of the object that owns the page.

The page types and their contents are as follows:

• Data: Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data, when text in row is set to ON.
• Index: Index entries.
• Text/Image: Large object data types (text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data), and variable-length columns when the data row exceeds 8 KB (varchar, nvarchar, varbinary, and sql_variant).
• Global Allocation Map, Shared Global Allocation Map: Information about whether extents are allocated.
• Page Free Space: Information about page allocation and free space available on pages.
• Index Allocation Map: Information about extents used by a table or index per allocation unit.
• Bulk Changed Map: Information about extents modified by bulk operations since the last BACKUP LOG statement per allocation unit.
• Differential Changed Map: Information about extents that have changed since the last BACKUP DATABASE statement per allocation unit.

Note: Log files do not contain pages; they contain a series of log records.
Data rows are put on the page serially, starting immediately after the header. A row offset table starts at the end of the page, and each row offset table contains one entry for each row on the page. Each entry records how far the first byte of the row is from the start of the page. The entries in the row offset table are in reverse sequence from the sequence of the rows on the page.

Large Row Support
Rows cannot span pages; however, portions of the row may be moved off the row's page so that the row can actually be very large. The maximum amount of data and overhead that is contained in a single row on a page is 8,060 bytes (8 KB). However, this does not include the data stored in the Text/Image page type. This restriction is relaxed for tables that contain varchar, nvarchar, varbinary, or sql_variant columns. When the total row size of all fixed and variable columns in a table exceeds the 8,060-byte limitation, SQL Server dynamically moves one or more variable-length columns to pages in the ROW_OVERFLOW_DATA allocation unit, starting with the column with the largest width. This is done whenever an insert or update operation increases the total size of the row beyond the 8,060-byte limit. When a column is moved to a page in the ROW_OVERFLOW_DATA allocation unit, a 24-byte pointer on the original page in the IN_ROW_DATA allocation unit is maintained. If a subsequent operation reduces the row size, SQL Server dynamically moves the columns back to the original data page.

Extents
Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte. To make its space allocation efficient, SQL Server does not allocate whole extents to tables with small amounts of data. SQL Server has two types of extents:

• Uniform extents are owned by a single object; all eight pages in the extent can only be used by the owning object.
• Mixed extents are shared by up to eight objects. Each of the eight pages in the extent can be owned by a different object.

A new table or index is generally allocated pages from mixed extents. When the table or index grows to the point that it has eight pages, it then switches to use uniform extents for subsequent allocations. If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents.

Setting Database Options
A number of database-level options that determine the characteristics of the database can be set for each database. Only the system administrator, the database owner, and members of the sysadmin and dbcreator fixed server roles and the db_owner fixed database role can modify these options. These options are unique to each database and do not affect other databases. The database options can be set by using the SET clause of the ALTER DATABASE statement, the sp_dboption system stored procedure, or, in some cases, SQL Server Enterprise Manager.

Note: Server-wide settings are set using the sp_configure system stored procedure or SQL Server Enterprise Manager. Connection-level settings are specified by using SET statements.

After you set a database option, a checkpoint is automatically issued that causes the modification to take effect immediately. To change the default values for any of the database options for newly created databases, change the appropriate database option in the model database. For example, if you want the default setting of the AUTO_SHRINK database option to be ON for any new databases subsequently created, set the AUTO_SHRINK option for model to ON.
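For example, the AUTO_SHRINK default described above could be changed with either mechanism (shown against the model database; use one, not both):

    -- ALTER DATABASE syntax
    ALTER DATABASE model SET AUTO_SHRINK ON

    -- or the older sp_dboption system stored procedure
    EXEC sp_dboption 'model', 'autoshrink', 'true'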

There are five categories of database options:
• Auto options
• Cursor options
• Recovery options
• SQL options
• State options

Auto Options
Auto options control certain automatic behaviors.

AUTO_CLOSE
When set to ON, the database is closed and shut down cleanly when the last user of the database exits and all processes in the database complete, thereby freeing any resources. By default, this option is set to ON for all databases when using Microsoft SQL Server 2000 Desktop Engine (MSDE 2000), and OFF for all other editions, regardless of operating system. The database reopens automatically when a user tries to use the database again. If the database was shut down cleanly, the database is not reopened until a user tries to use the database the next time SQL Server is restarted. When set to OFF, the database remains open even if no users are currently using the database.

AUTO_CREATE_STATISTICS

When set to ON, statistics are automatically created on columns used in a predicate. Adding statistics improves query performance because the SQL Server query optimizer can better determine how to evaluate a query. If the statistics are not used, SQL Server automatically deletes them. When set to OFF, statistics are not automatically created by SQL Server; instead, statistics can be manually created. By default, AUTO_CREATE_STATISTICS is ON. The status of this option can be determined by examining the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
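For instance, the property mentioned above can be checked as follows (the database name is only an example):

    SELECT DATABASEPROPERTYEX('pubs', 'IsAutoCreateStatistics') AS AutoCreateStats
    -- returns 1 when AUTO_CREATE_STATISTICS is ON, 0 when it is OFF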

AUTO_SHRINK
When set to ON, the database files are candidates for periodic shrinking. Both data files and log files can be shrunk automatically by SQL Server. When set to OFF, the database files are not automatically shrunk during periodic checks for unused space. By default, this option is set to ON for all databases when using SQL Server Personal Edition, and OFF for all other editions, regardless of operating system. AUTO_SHRINK only reduces the size of the transaction log if the database is set to the SIMPLE recovery model or if the log is backed up. The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it was created, whichever is greater. It is not possible to shrink a read-only database. The status of this option can be determined by examining the IsAutoShrink property of the DATABASEPROPERTYEX function.

CURSOR_CLOSE_ON_COMMIT
When set to ON, any open cursors are closed automatically (in compliance with SQL-92) when a transaction is committed. By default, this setting is OFF and cursors remain open across transaction boundaries, closing only when the connection is closed or when they are explicitly closed. Connection-level settings (set using the SET statement) override the default database setting for CURSOR_CLOSE_ON_COMMIT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to SQL Server.

The status of this option can be determined by examining the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX function.

CURSOR_DEFAULT LOCAL | GLOBAL
When CURSOR_DEFAULT LOCAL is set, and a cursor is not defined as GLOBAL when it is created, the scope of the cursor is local to the batch, stored procedure, or trigger in which the cursor was created. The cursor name is valid only within this scope. The cursor can be referenced by local cursor variables in the batch, stored procedure, or trigger, or by a stored procedure OUTPUT parameter. The cursor is implicitly deallocated when the batch, stored procedure, or trigger terminates, unless it was passed back in an OUTPUT parameter. If it is passed back in an OUTPUT parameter, the cursor is deallocated when the last variable referencing it is deallocated or goes out of scope. When CURSOR_DEFAULT GLOBAL is set, and a cursor is not defined as LOCAL when created, the scope of the cursor is global to the connection. The cursor name can be referenced in any stored procedure or batch executed by the connection. The cursor is implicitly deallocated only at disconnect. CURSOR_DEFAULT GLOBAL is the default setting.

The status of this option can be determined by examining the IsLocalCursorsDefault property of the DATABASEPROPERTYEX function.
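As a sketch, a cursor can also be declared explicitly LOCAL regardless of the database default; this example assumes the authors table from the pubs sample database:

    DECLARE au_cursor CURSOR LOCAL FOR
        SELECT au_lname FROM authors
    OPEN au_cursor
    FETCH NEXT FROM au_cursor       -- returns the first author's last name
    CLOSE au_cursor
    DEALLOCATE au_cursor            -- explicit cleanup; a LOCAL cursor would also be
                                    -- deallocated automatically when the batch ends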
Recovery Options
Recovery options control the recovery model for the database.

RECOVERY FULL | BULK_LOGGED | SIMPLE
When FULL is specified, database backups and transaction log backups are used to provide full recoverability from media failure. All operations, including bulk operations such as SELECT INTO, CREATE INDEX, and bulk loading data, are fully logged. When BULK_LOGGED is specified, logging for all SELECT INTO, CREATE INDEX, and bulk loading data operations is minimal and therefore requires less log space. In exchange for better performance and less log space usage, the risk of exposure to data loss in the event of media failure is greater than with the FULL model.

SQL Options

ANSI_NULL_DEFAULT
Allows the user to control the database default nullability. When NULL or NOT NULL is not specified explicitly, a user-defined data type or a column definition uses the default setting for nullability. Nullability is determined by session and database settings. Microsoft SQL Server 2000 defaults to NOT NULL. For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database default to NULL. When this option is set to ON, all user-defined data types or columns that are not explicitly defined as NOT NULL during a CREATE TABLE or ALTER TABLE statement default to allowing null values. Columns that are defined with constraints follow constraint rules regardless of this setting.

Connection-level settings (set using the SET statement) override the default database-level setting for ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting ANSI_NULL_DEFAULT to ON for the session when connecting to SQL Server. The status of this option can be determined by examining the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.

ANSI_NULLS
When set to ON, all comparisons to a null value evaluate to NULL (unknown). When set to OFF, comparisons of non-Unicode values to a null value evaluate to TRUE if both values are NULL. By default, the ANSI_NULLS database option is OFF. Connection-level settings (set using the SET statement) override the default database setting for ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting ANSI_NULLS to ON for the session when connecting to SQL Server. SET ANSI_NULLS also must be set to ON when you create or manipulate indexes on computed columns or indexed views. The status of this option can be determined by examining the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
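A short illustration of the comparison behavior, again assuming the authors table from the pubs sample database:

    SET ANSI_NULLS ON
    SELECT * FROM authors WHERE state = NULL    -- returns no rows; the comparison is unknown
    SELECT * FROM authors WHERE state IS NULL   -- the reliable way to test for NULL

    SET ANSI_NULLS OFF
    SELECT * FROM authors WHERE state = NULL    -- now returns rows whose state column is NULL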

ANSI_PADDING
When set to ON, trailing blanks in character values inserted into varchar columns and trailing zeros in binary values inserted into varbinary columns are not trimmed. Values are not padded to the length of the column. When set to OFF, the trailing blanks (for varchar) and zeros (for varbinary) are trimmed. This setting affects only the definition of new columns. Char(n) and binary(n) columns that allow nulls are padded to the length of the column when SET ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when SET ANSI_PADDING is OFF. Char(n) and binary(n) columns that do not allow nulls are always padded to the length of the column. Important: It is recommended that ANSI_PADDING always be set to ON. SET ANSI_PADDING must be ON when creating or manipulating indexes on computed columns or indexed views. The status of this option can be determined by examining the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.

ANSI_WARNINGS

When set to ON, errors or warnings are issued when conditions such as "divide by zero" occur or null values appear in aggregate functions. When set to OFF, no warnings are raised when null values appear in aggregate functions, and null values are returned when conditions such as "divide by zero" occur. By default, ANSI_WARNINGS is OFF. SET ANSI_WARNINGS must be set to ON when you create or manipulate indexes on computed columns or indexed views. Connection-level settings (set using the SET statement) override the default database setting for ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting ANSI_WARNINGS to ON for the session when connecting to SQL Server. The status of this option can be determined by examining the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.

ARITHABORT

When set to ON, an overflow or divide-by-zero error causes the query or batch to terminate. If the error occurs in a transaction, the transaction is rolled back. When set to OFF, a warning message is displayed if one of these errors occurs, but the query, batch, or transaction continues to process as if no error occurred. SET ARITHABORT must be set to ON when you create or manipulate indexes on computed columns or indexed views. The status of this option can be determined by examining the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.

NUMERIC_ROUNDABORT
If set to ON, an error is generated when loss of precision occurs in an expression. When set to OFF, losses of precision do not generate error messages and the result is rounded to the precision of the column or variable storing the result. SET NUMERIC_ROUNDABORT must be set to OFF when you create or manipulate indexes on computed columns or indexed views. The status of this option can be determined by examining the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX function.

CONCAT_NULL_YIELDS_NULL
When set to ON, if one of the operands in a concatenation operation is NULL, the result of the operation is NULL. For example, concatenating the character string "This is" and NULL results in the value NULL, rather than the value "This is". When set to OFF, concatenating a null value with a character string yields the character string as the result; the null value is treated as an empty character string. By default, CONCAT_NULL_YIELDS_NULL is OFF. SET CONCAT_NULL_YIELDS_NULL must be set to ON when you create or manipulate indexes on computed columns or indexed views. Connection-level settings (set using the SET statement) override the default database setting for CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to SQL Server. The status of this option can be determined by examining the IsNullConcat property of the DATABASEPROPERTYEX function.
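The difference can be seen in a single expression:

    SET CONCAT_NULL_YIELDS_NULL ON
    SELECT 'This is' + NULL     -- returns NULL

    SET CONCAT_NULL_YIELDS_NULL OFF
    SELECT 'This is' + NULL     -- returns 'This is'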

QUOTED_IDENTIFIER
When set to ON (default), identifiers can be delimited by double quotation marks and literals must be delimited by single quotation marks. All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be represented by double quotation marks ("). When set to OFF, identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be delimited by either single or double quotation marks. SQL Server also allows identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always be used, regardless of the setting of QUOTED_IDENTIFIER.

SET QUOTED_IDENTIFIER must be set to ON when you create or manipulate indexes on computed columns or indexed views. When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the table's metadata, even if the option is set to OFF when the table is created. Connection-level settings (set using the SET statement) override the default database setting for QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting QUOTED_IDENTIFIER to ON when connecting to SQL Server. The status of this option can be determined by examining the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX function.
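A brief sketch of the two behaviors; the table and column names are hypothetical:

    SET QUOTED_IDENTIFIER ON
    CREATE TABLE "My Table" ("my id" int)    -- double quotes delimit identifiers
    SELECT "my id" FROM "My Table"

    SET QUOTED_IDENTIFIER OFF
    SELECT [my id] FROM [My Table]           -- bracketed identifiers always work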

RECURSIVE_TRIGGERS
When set to ON, triggers are allowed to fire recursively. When set to OFF (default), triggers cannot be fired recursively. Note: Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also set the nested triggers server option to 0. The status of this option can be determined by examining the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX function.

State Options
State options control whether the database is online or offline, who can connect to the database, and whether the database is in read-only mode. A termination clause can be used to control how connections are terminated when the database is transitioned from one state to another.

OFFLINE | ONLINE
When OFFLINE is specified, the database is closed and shut down cleanly and marked offline. The database cannot be modified while the database is offline. When ONLINE is specified, the database is open and available for use. ONLINE is the default setting. The status of this option can be determined by examining the Status property of the DATABASEPROPERTYEX function.
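For example, using the hypothetical Sales database from earlier sketches:

    ALTER DATABASE Sales SET OFFLINE    -- close the database and mark it offline
    ALTER DATABASE Sales SET ONLINE     -- bring it back into use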

READ_ONLY | READ_WRITE
When READ_ONLY is specified, the database is in read-only mode. Users can retrieve data from the database but cannot modify the data. Because a read-only database does not allow data modifications:

• Automatic recovery is skipped at system startup.
• Shrinking the database is not possible.
• No locking takes place in read-only databases, which can result in faster query performance.

When READ_WRITE is specified, users can retrieve and modify data. READ_WRITE is the default setting. The status of this option can be determined by examining the Updateability property of the DATABASEPROPERTYEX function.

SINGLE_USER | RESTRICTED_USER | MULTI_USER
SINGLE_USER allows one user at a time to connect to the database. All other user connections are broken. The timeframe for breaking the connection is controlled by the termination clause of the ALTER DATABASE statement. New connection attempts are refused. The database remains in SINGLE_USER mode even if the user who set the option logs off. At that point, a different user (but only one) can connect to the database. To allow multiple connections, the database must be changed to RESTRICTED_USER or MULTI_USER mode.

RESTRICTED_USER allows only members of the db_owner fixed database role and the dbcreator and sysadmin fixed server roles to connect to the database, but it does not limit their number. Users who are not members of these roles are disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement, and new connection attempts by unqualified users are refused.

MULTI_USER allows all users with the appropriate permissions to connect to the database. MULTI_USER is the default setting. The status of this option can be determined by examining the UserAccess property of the DATABASEPROPERTYEX function.

WITH <termination>
The termination clause of the ALTER DATABASE statement specifies how to terminate incomplete transactions when the database is to be transitioned from one state to another. Transactions are terminated by breaking their connections to the database. If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely until the transactions commit or roll back on their own.

ROLLBACK AFTER integer [SECONDS]
ROLLBACK AFTER integer SECONDS waits for the specified number of seconds and then breaks unqualified connections. Incomplete transactions are rolled back. When the transition is to SINGLE_USER mode, unqualified connections are all connections except the one issuing the ALTER DATABASE statement. When the transition is to RESTRICTED_USER mode, unqualified connections are connections for users who are not members of the db_owner fixed database role and dbcreator and sysadmin fixed server roles.

ROLLBACK IMMEDIATE
ROLLBACK IMMEDIATE breaks unqualified connections immediately. All incomplete transactions are rolled back. Unqualified connections are the same as those described for ROLLBACK AFTER integer SECONDS.

NO_WAIT
NO_WAIT checks for connections before attempting to change the database state and causes the ALTER DATABASE statement to fail if certain connections exist. When the transition is to SINGLE_USER mode, the ALTER DATABASE statement fails if any other connections exist. When the transition is to RESTRICTED_USER mode, the ALTER DATABASE statement fails if any unqualified connections exist.
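
Putting the access and termination options together, a hedged sketch (database name illustrative):

-- break other connections immediately and take exclusive access for maintenance
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ...perform maintenance...
ALTER DATABASE AdventureWorks SET MULTI_USER;
-- or give open transactions 60 seconds before restricting access
ALTER DATABASE AdventureWorks SET RESTRICTED_USER WITH ROLLBACK AFTER 60 SECONDS;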

Collation sequences
 SQL Server Collation Fundamentals
 Microsoft® SQL Server™ 2000 supports several collations. A collation encodes the rules governing the proper use of characters for either a language, such as Greek or Polish, or an alphabet, such as Latin1_General (the Latin alphabet used by western European languages).
 Each SQL Server collation specifies three properties:
 The sort order to use for Unicode data types (nchar, nvarchar, and ntext). A sort order defines the sequence in which characters are sorted, and the way characters are evaluated in comparison operations.
 The sort order to use for non-Unicode character data types (char, varchar, and text).
 The code page used to store non-Unicode character data.
Note: You cannot specify the equivalent of a code page for the Unicode data types (nchar, nvarchar, and ntext). The double-byte bit patterns used for Unicode characters are defined by the Unicode standard and cannot be changed.
SQL Server 2000 collations can be specified at many levels. When you install an instance of SQL Server 2000, you specify the default collation for that instance. Each time you create a database, you can specify the default collation used for the database. If you do not specify a collation, the default collation for the database is the default collation for the instance. Whenever you define a character column, you can specify its collation. If you do not specify a collation, the column is created with the default collation of the database. You cannot specify a collation for character variables and parameters; they are always created with the default collation of the database.
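
A hedged sketch of specifying collations at the database and column level (the database, table, and column names are illustrative):

CREATE DATABASE SalesFR COLLATE French_CI_AS;
GO
CREATE TABLE dbo.Customers (
    CustomerID int NOT NULL,
    NameDefault nvarchar(100),                        -- uses the database default collation
    NameGreek nvarchar(100) COLLATE Greek_CI_AS       -- explicit column-level collation
);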

If all of the users of your instance of SQL Server speak the same language, you should pick the collation that supports that language. For example, if all of the users speak French, choose the French collation. If the users of your instance of SQL Server speak multiple languages, you should pick a collation that best supports the requirements of the various languages. For example, if the users generally speak western European languages, choose the Latin1_General collation. When you support users who speak multiple languages, it is most important to use the Unicode data types, nchar, nvarchar, and ntext, for all character data. Unicode was designed to eliminate the code page conversion difficulties of the non-Unicode char, varchar, and text data types. Collation still makes a difference when you implement all columns using Unicode data types because it defines the sort order for comparisons and sorts of Unicode characters. Even when you store your character data using Unicode data types you should pick a collation that supports most of the users in case a column or

A SQL Server collation defines how the database engine stores and operates on character and Unicode data. After data has been moved into an application, however, character sorts and comparisons done in the application are controlled by the Windows locale selected on the computer. The collation used for character data by applications is one of the items controlled by the Windows locale (a locale also defines other items, such as number, time, date, and currency formats). For Microsoft Windows NT® 4.0, Microsoft Windows® 98, and Microsoft Windows 95, the Windows locale is specified using the Regional Settings application in Control Panel. For Microsoft Windows 2000, the locale is specified using the Regional Options application in Control Panel.

Multiple collations can use the same code page for nonUnicode data. For example, the 1251 code page defines a set of Cyrillic characters. This code page is used by several collations, such as Cyrillic_General, Ukrainian, and Russian. Although all of these collations use the same set of bits to represent non-Unicode character data, the sorting and comparison rules they apply are slightly different to handle the dictionary definitions of the correct sequence of characters in the language or alphabet associated with the collation.

Because SQL Server 2000 collations control both the Unicode and non-Unicode sort orders, you do not encounter problems caused by specifying different sorting rules for Unicode and non-Unicode data. In earlier versions of SQL Server, the code page number, the character sort order, and the Unicode collation are specified separately. Earlier versions of SQL Server also support varying numbers of sort orders for each code page, and for some code pages support sort orders not available in Windows locales. In SQL Server 7.0, it is also possible to specify a Unicode sort order that is different from the sort order chosen for non-Unicode data. This can cause ordering and comparison operations to return different results when working with Unicode data as opposed to non-Unicode data.

Index architecture
 Table and Index Architecture: Objects in a Microsoft® SQL Server™ 2005 database are stored as a collection of 8-KB pages. This topic describes the way the pages for tables and indexes are organized.
 SQL Server 2000 supports indexes on views. The first index allowed on a view is a clustered index. At the time a CREATE INDEX statement is executed on a view, the result set for the view is materialized and stored in the database with the same structure as a table that has a clustered index. The result set that is stored is the same as that produced by this statement:

SELECT * FROM ViewName

The data rows for each table or indexed view are stored in a collection of 8-KB data pages. Each data page has a 96-byte header containing system information such as the identifier (ID) of the table that owns the page. The page header also includes pointers to the next and previous pages that are used if the pages are linked in a list. A row offset table is at the end of the page. Data rows fill the rest of the page.

Organization of Data Pages
SQL Server 2000 tables use one of two methods to organize their data pages:
 Clustered tables are tables that have a clustered index. The data rows are stored in order based on the clustered index key. The index is implemented as a B-tree index structure that supports fast retrieval of the rows based on their clustered index key values. The pages in each level of the index, including the data pages in the leaf level, are linked in a doubly-linked list, but navigation from one level to another is done using key values.
 Heaps are tables that have no clustered index. The data rows are not stored in any particular order, and there is no particular order to the sequence of the data pages. The data pages are not linked in a linked list.

Indexed views have the same storage structure as clustered tables. SQL Server also supports up to 249 nonclustered indexes on each table or indexed view. The nonclustered indexes have a B-tree index structure similar to the one in clustered indexes. The difference is that nonclustered indexes have no effect on the order of the data rows. Clustered tables and indexed views keep their data rows in order based on the clustered index key. The collection of data pages for a heap is not affected if nonclustered indexes are defined for the table. The data pages remain in a heap unless a clustered index is defined. The pages holding text, ntext, and image data are managed as a single unit for each table. All of the text, ntext, and image data for a table is stored in one collection of pages. All of the page collections for tables, indexes, and indexed views are anchored by page pointers in the sysindexes table. Every table and indexed view has one collection of data pages, plus additional collections of pages to implement each index defined for the table or view.

Each table, index, and indexed view has a row in sysindexes uniquely identified by the combination of the object identifier (id) column and the index identifier (indid) column. The allocation of pages to tables, indexes, and indexed views is managed by a chain of IAM pages. The column sysindexes.FirstIAM points to the first IAM page in the chain of IAM pages managing the space allocated to the table, index, or indexed view. Each table has a set of rows in sysindexes: A heap has a row in sysindexes with indid = 0. The FirstIAM column points to the IAM chain for the collection of data pages for the table. The server uses the IAM pages to find the pages in the data page collection because they are not linked together.
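
As a hedged illustration (the table name is a placeholder), these sysindexes rows can be inspected directly:

SELECT id, indid, root, FirstIAM, name
FROM sysindexes
WHERE id = OBJECT_ID('dbo.MyTable');
-- indid 0 = heap, 1 = clustered index, 2 through 250 = nonclustered indexes, 255 = text/ntext/image pages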

A clustered index on a table or view has a row in sysindexes with indid = 1. The root column points to the top of the clustered index B-tree. The server uses the index B-tree to find the data pages. Each nonclustered index created for a table or view has a row in sysindexes. The values for indid in the rows for each nonclustered index range from 2 through 250. The root column points to the top of the nonclustered index B-tree. Each table that has at least one text, ntext, or image column also has a row in sysindexes with indid = 255. The column FirstIAM points to the chain of IAM pages that manage the text, ntext, and image pages. In SQL Server version 6.5 and earlier, sysindexes.first always points to the start of a heap, the start of the leaf level of an index, or the start of a chain of text and image pages. In SQL Server version 7.0 and later, sysindexes.first is largely unused. In SQL Server version 6.5 and earlier, sysindexes.root in a row with indid = 0 points to the last page in a heap. In SQL Server version

Clustered Index & Non Clustered index
 Indexes in SQL Server are similar to the indexes in books: they help SQL Server retrieve the data more quickly. Indexes are of two types, clustered indexes and non-clustered indexes. When you create a clustered index on a table, all the rows in the table are stored in the order of the clustered index key, so there can be only one clustered index per table. Non-clustered indexes have their own storage separate from the table data storage. Non-clustered indexes are stored as B-tree structures (as are clustered indexes), with the leaf-level nodes holding the index key and its row locator. The row locator is either the RID or the clustered index key, depending on the absence or presence of a clustered index on the table.

If you create an index on each column of a table, it improves query performance, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (such as INSERT, UPDATE, and DELETE) become slower, as every time data changes in the table, all the indexes need to be updated. Another disadvantage is that indexes need disk space; the more indexes you have, the more disk space is used.

Index Options : Fill Factor
 FILLFACTOR specifies a percentage that indicates how full the Database Engine should make each index page during index creation or rebuild.
 Fill-factor is always an integer value from 1 to 100. The fill-factor option is designed for improving index performance and data storage. By setting the fill-factor value, you specify the percentage of space on each page to be filled with data, reserving free space on each page for future table growth.

Specifying a fill-factor value of 70 implies that 30 percent of each page will be left empty, providing space for index expansion as data is added to the underlying table. The empty space is reserved between the index rows rather than at the end of the index. The fill-factor setting applies only when the index is created or rebuilt; the SQL Server Database Engine does not keep the specified percentage of empty space in the pages after the index is created. Trying to maintain the extra space on the data pages would be counterproductive because the Database Engine would have to perform page splits to maintain the percentage of free space specified by the fill factor on each page as data is entered. What this means for you as a SQL DBA is that when you set FILLFACTOR to 80 and create the index with that setting, your index has 20 percent free space to grow into without the page splits that hurt performance.

You can avoid serious performance issues by providing extra space for index expansion when data is added to the underlying table. Usually, when a new row is added to a full index page, the Database Engine moves approximately half the rows to a new page to make room for the new row. This is known as a page split. While making room for new records, a page split takes time to perform; it is a resource-intensive operation. It can also cause fragmentation that leads to increased I/O operations. When frequent page splits occur, it is advisable to rebuild the index with an appropriate fill-factor value to redistribute the data. Although a low fill-factor value (greater than 0) may reduce page splits as the index grows, the index will require more storage space. The engine will keep creating new, "half-empty" pages, and this can eventually impair the performance of your "SELECTs". For example, a fill-factor value of 50 can cause database read performance to decrease by a factor of two, because the index contains more pages, increasing the disk I/O operations required.

Points to remember while using the FILLFACTOR argument:
 1. If fill-factor is set to 100 or 0, the Database Engine fills pages to their capacity while creating indexes.
 2. The server-wide default FILLFACTOR is set to 0.
 3. To modify the server-wide default value, use the sp_configure system stored procedure.
 4. To view the fill-factor value of one or more indexes, use sys.indexes.
 5. To modify or set the fill-factor value for individual indexes, use the CREATE INDEX or ALTER INDEX statements.
 6. Creating a clustered index with a FILLFACTOR < 100 may significantly increase the amount of space the data occupies because the Database Engine physically reallocates the data while building the clustered index.
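
A hedged sketch tying these points together (the table and index names are illustrative):

-- points 2 and 3: change the server-wide default (advanced option; takes effect after a service restart)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor', 90;
RECONFIGURE;
-- point 5: set the fill factor for an individual index
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 80, PAD_INDEX = ON);
-- point 4: view the fill factor of existing indexes
SELECT name, fill_factor FROM sys.indexes WHERE object_id = OBJECT_ID('dbo.Orders');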

Maintaining Indexes
 Maintaining SQL Server indexes is an uncommon practice. If a query stops using indexes, oftentimes a new non-clustered index is created that simply holds a different combination of columns, or the same columns.
 Maintaining indexes in a SQL Server 2005 database covers: dealing with fragmented indexes, reorganizing an index, rebuilding an index, disabling non-clustered indexes to reduce disk space during rebuild operations, rebuilding large indexes, setting index options, page splits and performance considerations, max degree of parallelism, online index operations, partition index operations, statistical information, asynchronous statistics updates, disabling automatic statistics, statistics after upgrading a database to SQL Server 2005, and bulk copy options and index operation logging.

Tuning Indexes
 Most database administrators are familiar with the potential performance benefits they can gain through the judicious use of indexes on database tables. Indexes allow you to speed query performance on commonly used columns and improve the overall processing speed of your database.
 SQL Server provides a wonderful facility known as the Index Tuning Wizard, which greatly enhances the index selection process. To use this tool, first use SQL Profiler to capture a trace of the activity for which you wish to optimize performance. You may wish to run the trace for an extended period of time to capture a wide range of activity. Then, using Enterprise Manager, start the Index Tuning Wizard and instruct it to recommend indexes based upon the captured trace. It will not only suggest appropriate columns for queries but also provide you with an estimate of the performance increase you'll experience after making those changes.

SQL Server Index Tuning Wizard Tips
 The Index Tuning Wizard is a powerful tool designed to help you identify existing indexes that aren't being used, along with recommending new indexes that can be used to help speed up queries. It uses the actual queries you are running in your database, so its recommendations are based on how your database is really being used. The queries it needs for analysis come from the SQL Server Profiler traces you create.
 Use the SQL Server Index Wizard and an appropriate Profiler trace file to help identify potential indexed views (SQL Server 2005 Enterprise is required). When the Index Wizard runs, it automatically looks for potential indexed views and recommends any that it finds. But don't rely on this tool as the only way to identify indexed views.

If you don't need the full power of the Index Tuning Wizard, but still want assistance when creating indexes for a table, you can use the "Perform Index Analysis" option located under the "Query" menu of the Query Analyzer in SQL Server 7. For SQL Server 2000, select "Index Tuning Wizard" from the "Query" menu of the Query Analyzer. Instead of using a Profiler trace file to perform the analysis, it uses the query found in the Query window. While not as comprehensive an analysis as you get from using the Index Tuning Wizard, it is a good starting point when analyzing the performance of specific queries in your database. When creating the Profiler trace that is used to base the Index Tuning Wizard index analysis on, select a time of day that is representative of typical transactions run in your SQL Server application. Since the Index Tuning Wizard bases its recommendations on actual queries, you want the queries in your trace file to be truly representative of how your users use your application.

The longer you run the Profiler trace, the more accurate the Index Tuning Wizard recommendations will be, because you will capture more of the types of typical queries run by your application. But keep in mind that there is an upper limit of 32,767 queries the Index Tuning Wizard can analyze at one time, and that the longer the trace, the longer the analysis will take (it could take hours). If you find that the Index Tuning Wizard takes more time to run than you have, consider setting the "Maximum columns per index" option, under the "Advanced" settings, from the default of 16 columns to a smaller number. This reduces the number of possible indexes the Index Tuning Wizard will evaluate, saving some time, although not a lot. In most cases, especially for OLTP applications, you won't want an index with a lot of columns in it. Don't capture more in your Profiler trace than you need. For example, only collect data for a single database, not all of the databases on your server. Also, don't collect more events or data columns than you need to use the Index Wizard. The only events and data columns required by the Index Wizard are the SQL:BatchCompleted and RPC:Completed events in the TSQL category, and the EventClass and Text data columns. The fewer the events and data columns you capture, the smaller the load Profiler puts on your server when collecting data.

When creating a trace for the Index Tuning Wizard, consider using the "Create Trace Wizard" to create a "Find the worst performing queries" trace. You can set this trace to only capture queries that run longer than a specified amount of time, such as 1000 milliseconds. Generally, I like to set it at 5 milliseconds, and let it run all day (periodically). Then I use this trace to feed into the Index Tuning Wizard. My goal here, of course, is to limit the number of queries being tuned to those that are performing the worst. When doing this, be sure you don't have the Index Tuning Wizard evaluate potential indexes to be dropped. This is because you are not collecting performance data on every query, only the slow ones. If you do evaluate for dropping indexes, you might drop an index that is needed because the queries that use it run faster than the time specified above. Don't run the SQL Server Profiler or the Index Tuning Wizard on your production SQL Servers. Both tools use SQL Server resources that are best left to your users. Ideally, run them on a workstation connected to the server via your network. In one instance, I was running the Index Wizard on a database with over 800 tables. When the Index Wizard was running, it used over 1GB of virtual memory, greatly slowing my computer. Fortunately, I was using a desktop for the analysis, not my SQL Server. If I had run this same analysis on my production server, my users would have complained loudly.

Even if you run the Index Tuning Wizard from a computer other than the one where the database you are analyzing resides, running the Index Tuning Wizard still puts a load on your production server. Because of this, you should only run the Index Tuning Wizard when your production database is less busy. Another option is to restore the production database to a non-production server, and then run the Index Tuning Wizard against the copy on the non-production server. Once you have tuned your indexes using the Index Tuning Wizard, don't assume that you are set for life. The types of queries, along with the data, often change over time, and you should periodically rerun the Index Tuning Wizard to see if it recommends any new changes based on the mix of queries that change over time. Don't blindly accept every recommendation made by the Index Tuning Wizard. Personally review each recommendation and, based on your knowledge of the database and how it is used, either accept or reject it on a recommendation-by-recommendation basis. For example, the Index Tuning Wizard might recommend adding an index to a table that you know is subject to a tremendous number of INSERTs and UPDATEs. Adding an index to such a table may or may not be a good idea.

Also, before blindly taking the Index Tuning Wizard's recommendations, review the queries that hit the table for which the Index Tuning Wizard is recommending an index, and see if perhaps the queries themselves are the problem. Perhaps instead of needing a new index, you really need to rewrite one or more queries. The Index Tuning Wizard may also recommend you drop one or more indexes. Always carefully review this recommendation before removing any indexes. Remember, the Index Tuning Wizard makes its recommendations based on the trace data you provided it. It is very possible that the trace that was used may not include all relevant data. For example, perhaps you run long reports at night that need certain indexes, and this information was not captured in the trace you created. If you were to delete an index needed for these reports, these reports may then take forever to run because they are missing their needed indexes. Furthermore, don't rely on the Index Tuning Wizard to recommend all of your table's indexes. You should make the original selection of indexes for your tables based on the types of queries you expect to be run against your data. Only use the Index Tuning Wizard as an adjunct to your original work in order to help fine-tune it. Sometimes, the Index Tuning Wizard will not recommend an index, even if you know that one is needed. This can happen if the queries are complex, or they are part of a larger stored procedure. If you run into this situation, consider breaking up the complex query or stored procedure into smaller queries, and then run these individually through the Index Tuning Wizard.

When the Index Tuning Wizard runs, it creates what are called hypothetical indexes in the sysindexes table. The names of these indexes start with "hind_%". These indexes are used by the Index Tuning Wizard to help determine if new indexes should be added to your tables. Normally, these hypothetical indexes are deleted when the Index Wizard is completed, but if the Index Wizard is interrupted before it is completed, it may leave these hypothetical indexes in the sysindexes table. In some cases, the existence of these indexes can lead to an unusual performance problem. What can happen is that some stored procedures may be forced to recompile every time they run, even if they should not be recompiled. When you run the Index Tuning Wizard, you have the option to choose whether you want a fast, medium, or thorough analysis (only fast or thorough in 7.0). When analyzing large Profiler traces, your choice makes a significant difference in how fast the analysis is done. But don't try to cut corners here; you should always choose a thorough analysis. After all, you want the optimum indexes for your database, so why would you want to do a less thorough analysis and have a less than optimum set of indexes for your database? In most cases, using the GUI interface for the Index Tuning Wizard is the best way to perform index analysis. But if you are running a lot of Index Tuning Wizard analyses, you may want to automate this task by using the itwiz command line utility. You can use this utility, along with the proper command line options, to complete any analysis you want, just as with the GUI interface.

When using the Index Tuning Wizard to identify potential new indexes, it ignores triggers on your tables when they are executed. While the Profiler has the ability to capture trigger code using the SQL:StmtCompleted event, the Index Tuning Wizard is not able to use it for index analysis on trigger code. This means that you must manually tune all indexes used by your triggers.

Dealing with Fragmented indexes
 Though the SQL Server 2005 database engine automatically maintains indexes whatever operations are made on the database, the modifications can cause degradation of the index over time. This in turn will degrade query performance. To help the DBA overcome the problems of fragmented indexes, SQL Server 2005 provides the option of reorganizing or rebuilding the index. This option can be used on whole indexes or on partitions of an index.
 Fragmentation of an index can be recognized by analyzing it with the sys.dm_db_index_physical_stats function. This function detects fragmentation of a particular index or of all indexes in the database. For partitioned indexes, the information is provided for each partition. The fragmentation level is reported in the avg_fragmentation_in_percent column, and the avg_fragment_size_in_pages column describes the average number of pages in one fragment of the index. Once the fragmentation value is known, it has to be evaluated against the correction thresholds suggested for SQL Server 2005: if the fragmentation value is <= 30%, the corrective action is ALTER INDEX REORGANIZE; otherwise it is ALTER INDEX REBUILD WITH (ONLINE = ON). The first option is always done online, while the second option can be performed online or offline.

The syntax would be as under:

USE Exforsys;
GO
SELECT object_id, index_id, avg_fragmentation_in_percent, avg_fragment_size_in_pages
FROM sys.dm_db_index_physical_stats (DB_ID(N'Exforsys'), OBJECT_ID(N'dbo.Employeetransact'), DEFAULT, DEFAULT, N'DETAILED');
GO
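
Building on that query, a hedged sketch (table name illustrative) of applying the 30 percent rule described above:

DECLARE @frag float;
SELECT @frag = avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'dbo.Employeetransact'), 1, NULL, N'LIMITED');   -- index_id 1 = clustered index
IF @frag <= 30
    ALTER INDEX ALL ON dbo.Employeetransact REORGANIZE;
ELSE
    ALTER INDEX ALL ON dbo.Employeetransact REBUILD WITH (ONLINE = ON);   -- ONLINE requires Enterprise Edition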

Reorganizing an Index
 In SQL Server 2005, the ALTER INDEX REORGANIZE statement has replaced the DBCC INDEXDEFRAG statement. A single partition of a partitioned index can be reorganized using this statement.
 When an index is reorganized, the leaf level of the clustered and non-clustered indexes on tables and views is reorganized and reordered to match the logical order, i.e. left to right, of the leaf nodes. The index is organized within the already allocated pages, and if the pages span more than one file they are reorganized one file at a time. No pages are migrated between files. Moreover, pages are compacted, and empty pages created as a consequence are removed and the disk space released. The compaction is determined by the fill factor value in the sys.indexes catalog view. Large object data types contained in the clustered index or underlying table are also compacted by default; this behaviour is controlled by the LOB_COMPACTION clause.

The good news is that the reorganize process is economical on the system resources and is automatically performed online. There are no long term blocking locks which jam up the works! DBAs are advised to reorganize the index when it is minimally fragmented. Heavily fragmented indexes will require rebuilding.
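
A minimal sketch (index and table names illustrative) of a reorganize that also compacts LOB data:

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE WITH (LOB_COMPACTION = ON);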

Rebuilding an Index
 When an index is rebuilt, it is dropped and a new one is created. In the process, fragmentation is removed and disk space is reclaimed, and the fill factor setting is used to repack the pages in sequential order. Performance is improved and the number of page reads is reduced. The following methods are used to drop and rebuild the index:
 1. ALTER INDEX with the REBUILD clause.
 2. CREATE INDEX with the DROP_EXISTING clause.

Each of these methods has its own advantages and disadvantages.

Disabling Non-clustered Indexes to Reduce Disk Space During Rebuild Operations
When a rebuild operation is performed, it is a best practice to disable the non-clustered indexes. Disabling a non-clustered index implies that its data rows are deleted but the definition is retained in metadata. The index is enabled again after it is rebuilt.

Rebuilding Large Indexes
Indexes which have more than 128 extents are rebuilt in two phases: the logical and the physical. In the logical phase, the allocation units of the index are marked for deallocation, and the data rows are copied and sorted before being moved to new allocation units in the rebuilt index. The physical phase involves dropping the allocation units marked for deallocation in short transactions without long-held locks.

Setting Index Options
When reorganizing an index, the index options cannot be specified as a rule. However, ALTER INDEX REBUILD and CREATE INDEX WITH DROP_EXISTING allow users to set options such as PAD_INDEX, FILLFACTOR, SORT_IN_TEMPDB, IGNORE_DUP_KEY and STATISTICS_NORECOMPUTE. Additionally, the ALTER INDEX statement allows the specification of ALLOW_PAGE_LOCKS and ALLOW_ROW_LOCKS.
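
An illustrative sketch (index and table names are assumptions) of disabling a non-clustered index and later rebuilding it with explicit index options:

-- disable the non-clustered index so its pages are freed before the rebuild
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders DISABLE;
-- re-enable it by rebuilding with the desired options
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80, PAD_INDEX = ON, SORT_IN_TEMPDB = ON);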

Page Splits and Performance Considerations
A page split is a process by which, when a new row is added to a full page, some of the rows on that page are moved to a new page to make room for the new record. Page splits are resource intensive and cause fragmentation. To reduce the risk, the fill factor has to be correctly selected; otherwise the index will have to be rebuilt frequently.

Max Degree of Parallelism
When several processors are used to perform the scan and sort operations of an index operation, the number of processors used is determined by the max degree of parallelism configuration option and by the current workload. This option limits the number of processors that can be used in parallel, and parallel index operations are available only in the Enterprise Edition. When the DBA wants to manually configure the number of processors used to run an index statement, the MAXDOP index option is used. This limits the number of processors used during the index operation and overrides the max degree of parallelism option for that statement. The MAXDOP index option cannot be specified for the ALTER INDEX REORGANIZE statement.

Online Index Operations
Concurrent access to the underlying table can continue during online index operations. The MAXDOP option can be used to control the number of processors dedicated to online index operations.
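
A hedged example (names illustrative) of limiting parallelism while keeping the table available during a rebuild:

ALTER INDEX PK_Orders ON dbo.Orders REBUILD WITH (ONLINE = ON, MAXDOP = 4);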

Partition Index Operations
Partition index operations can be memory-intensive if the query optimizer applies degrees of parallelism.

Statistical Information
Statistical information can be created about the distribution of values in a column. This is used by the query optimizer to determine the optimal query plan by estimating the cost of using an index to evaluate a query. The values are sorted by the database engine on which the statistics are being built, and a histogram is created over a maximum of 200 values separated by intervals. Additional information is kept for statistics created on char, varchar, varchar(max), nchar, nvarchar, nvarchar(max), text and ntext columns. This is known as a string summary. The string summary helps the query optimizer estimate the selectivity of query predicates on string patterns, which makes for more accurate estimates of result set sizes and frequently better query plans. When the query optimizer is configured to automatically create statistical information, statistics are also generated automatically on non-indexed columns that are used in a predicate.

Asynchronous Statistics Updates
The AUTO_UPDATE_STATISTICS_ASYNC option can be used so that the query optimizer is not forced to wait for out-of-date statistics to be updated and compiled before returning a result set. The out-of-date statistics are queued for updating by a worker thread in a background process, and the query and any concurrent queries compile immediately. This option is set at the database level and determines the update method for all statistics in the database.

Disabling Automatic Statistics
Automatic statistics can be disabled for a particular column or index by using the sp_autostats system stored procedure or the STATISTICS_NORECOMPUTE clause of the CREATE INDEX statement. There are other clauses, such as NORECOMPUTE on UPDATE STATISTICS, that can also be used to prevent the automatic updating of statistics. Statistics can still be created manually using the sp_createstats system stored procedure.

Statistics after Upgrading a Database to SQL Server 2005
When the user upgrades the version of SQL Server, the statistics from the earlier version are treated as out of date. On first use, the statistics are updated, provided the AUTO_UPDATE_STATISTICS database option is on.

Bulk Copy Options and Index Operation Logging
Bulk copy options are useful for copying data into a table without non-clustered indexes. Logging the index operations minimally during this process makes it more efficient and reduces the possibility of the index operation filling the log. However, the amount of logging depends on whether the table is indexed and whether the table is empty. If the table is empty, both data and index pages are minimally logged. If the table has no clustered index, data pages are always minimally logged. If the table is empty, the index pages are minimally logged; in non-empty tables, index pages are fully logged.
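
A hedged sketch (database and table names illustrative) of the statistics options just described:

-- let out-of-date statistics be refreshed in the background instead of blocking compilation
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;
-- disable automatic recomputation of statistics for one table
EXEC sp_autostats 'dbo.Orders', 'OFF';
-- create single-column statistics manually for eligible columns
EXEC sp_createstats;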

Security
 SQL Server 2005 supports Windows and mixed authentication modes and is closely integrated with Windows. In Windows authentication mode, access is granted based on a security token assigned during a successful domain logon by a Windows account, which subsequently requests access to SQL Server. The precondition is that both must belong to the same Windows environment. An Active Directory domain environment provides the additional protection of the Kerberos protocol, which governs the behaviour of the Windows authentication mechanism. In mixed mode, SQL Server authentication can also be used; the credentials are verified against the repository maintained by SQL Server itself. The tighter Windows integration reduces the need to maintain a separate set of accounts, and SQL Server logins have been improved with encryption based on SQL Server-generated certificates for communications involving MDAC-based client software and the .NET provider.

A very significant enhancement in SQL Server 2005 is the ability to manage account password and lockout properties. This can be done within the local and domain-based group policies. The DBA can impose restrictions on password complexity, password expiration and account lockout. The following requirements can be imposed: the password must be at least 6 characters long; the password should contain uppercase characters, lowercase characters, numbers and non-alphanumeric characters; and the password cannot be "Admin", "Administrator", "Password", etc. Password expiration is determined by the value of "Maximum password age", and lockout behaviour is determined by "Account lockout duration", "Account lockout threshold" and "Reset account lockout counter after". The ALTER LOGIN T-SQL statement can be used to unlock a locked login. The DBA uses the CHECK_EXPIRATION and CHECK_POLICY clauses when creating new logins with the CREATE LOGIN T-SQL statement. While CHECK_EXPIRATION controls password expiration, CHECK_POLICY controls the account lockout settings. Both have to be set ON or OFF; other combinations are not supported. The syntax would be as under:

CREATE LOGIN xxx WITH PASSWORD = 'CHANGEPASS' MUST_CHANGE, CHECK_EXPIRATION = ON, CHECK_POLICY = ON

The enforcement of the password policy for existing logins can be verified by the DBA from the catalog view outputs, or in the graphical user interface of SQL Server Management Studio.

The endpoints in SQL Server 2005 are versatile, with different transport and payload protocols, listening ports, authentication modes and permissions. When creating or modifying HTTP endpoints using the CREATE ENDPOINT and ALTER ENDPOINT statements, the preferred login type is designated by the LOGIN_TYPE option (which can take the WINDOWS or MIXED values). While WINDOWS is the default, the MIXED mode has to be configured to operate over a Secure Sockets Layer channel, and the login credentials must be specified in the Web Services Security headers preceding the SOAP requests of the client application. The HTTP authentication mechanism can be assigned an INTEGRATED, DIGEST or BASIC value if the communication is SOAP based. The INTEGRATED mechanism applies the Windows-based Kerberos or NTLM authentication protocol when establishing the HTTP communication between the client and server; the SQL Server service account must be associated with a Service Principal Name for mutual Kerberos authentication to work. DIGEST applies a hashing algorithm to the user's Windows credentials on the client side, and the result is compared with the result of the same algorithm applied on the server side. BASIC compares the Base64-encoded Windows credentials on the client and server side.
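
For illustration only, a sketch of an HTTP/SOAP endpoint using integrated authentication; the endpoint, web method, stored procedure, and database names are assumptions, and the exact clause list should be verified against Books Online:

CREATE ENDPOINT sql_Employees
STATE = STARTED
AS HTTP (
    PATH = '/sql/employees',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR)
)
FOR SOAP (
    WEBMETHOD 'GetEmployees' (NAME = 'AdventureWorks.dbo.GetEmployees'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks',
    LOGIN_TYPE = WINDOWS
);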

Surface area configuration
 After you install SQL Server 2005, you can select Microsoft SQL Server 2005, Configuration Tools, SQL Server 2005 Surface Area Configuration to launch the SQL Server Surface Area Configuration tools. The initial screen provides a brief explanation of the surface area concept and offers hyperlinks that let you start the individual surface area configuration tools. You'll almost certainly want to run this tool when you finish installing SQL Server 2005. That's because SQL Server 2005 defaults to installing with most features disabled in the interest of security. It's up to you to enable the ones that you really need.

Logins
 Logins are what SQL Server uses to determine whether a given person or application has the right to connect to SQL Server. If you are familiar with any of the versions of SQL Server prior to 2005, the concept of logins hasn't changed, though new login types have been added.
 Types of Logins
 SQL Server 2005 maintains the 3 types of logins from earlier versions of SQL Server. It also adds logins mapped to certificates and asymmetric keys. A breakdown of the login types is:

Login types from earlier editions of SQL Server:
 SQL Server login
 Windows user login
 Windows group login
New login types for SQL Server 2005:
 Login mapped to a certificate
 Login mapped to an asymmetric key
SQL Server 2005's support of encryption within SQL Server itself allows for the two new login options. However, the original 3 login types, corresponding to SQL Server logins and the two Windows-based ones, are and will likely continue to be the ones most commonly used.

SQL Server Logins
This is the traditional method of logging in to SQL Server. SQL Server stores both the username and the password (actually a hash of the password) in the master database and verifies a login attempt that uses a SQL Server login internally. Because some 3rd party products still do not use Windows authentication, and clients from other operating systems may not be able to connect using Windows authentication, SQL Server logins have remained. Also, SQL Server logins allow connections from a Windows system in an untrusted domain or workgroup, because in such cases Windows-based logins would fail (with a notable exception which is beyond the scope of this article). While SQL Server logins are the same as they were in previous versions of SQL Server, the options we have with SQL Server logins have been increased with respect to password handling. One of the knocks against SQL Server logins was that there were security weaknesses in how SQL Server handled the administration of passwords:

A member of the sysadmin fixed server role created a SQL Server login and set the password. However, there was no requirement for the user to change the password; nothing within SQL Server forced this. Windows has a flag which indicates that a user must change the password at the next successful login, but SQL Server didn't have this. This is a security weakness: there is no reason for the administrator to know the user's password. SQL Server 2005 has an option, MUST_CHANGE, which forces the user to change the password upon first successful use of the login. There were no password complexity requirements on the password for a SQL Server login. About the only thing a DBA could check for was whether or not there was a blank password, without resorting to some password cracking tool. With SQL Server 2005 running on Windows Server 2003, the password complexity requirements set for the operating system (whether by local or Group Policy) can now be applied to SQL Server logins. There was no password expiration, so a password could stay in existence as long as the SQL Server did. A DBA could check the updatedate column in the syslogins system table to see when a password changed; however, this wasn't always a valid check. This column gets changed whenever anything about the login gets changed, including the default database or language, so it could reflect a change even if it's unrelated to the password. Also, even if a DBA flagged a particular login, there was no automated mechanism within SQL Server to disable the account; a DBA would have to execute the system stored procedure sp_denylogin manually to stop the login from being used. If installed on Windows Server 2003, SQL Server 2005 can now handle password expiration according to the requirements set by the operating system (again through the local security policy or Group Policy).

All of these changes were made to enhance the security of SQL Server 2005. SQL Server 2005 doesn't require you to use any of these options. Furthermore, it's not an all or nothing choice; you can use these options on some logins and not on others. A case where you wouldn't use these options is when a 3rd party product has a hard-coded login and password. Obviously, the password complexity check and expiration policy do you no good because you wouldn't be able to change the password within the application. With that said, whenever you do have the choice to use the new password options, it is recommended that you do so.

New Syntax for Managing Logins
Given the new options for logins in SQL Server 2005, changes to the syntax for managing logins were necessary. Previously, the following stored procedures were used:
 sp_addlogin
 sp_denylogin
 sp_droplogin
 sp_grantlogin
One way to handle the new features would be to expand these stored procedures. However, Microsoft took a different approach. If logins are thought of as objects within SQL Server, just as tables, views, stored procedures, functions, etc. are, then there should be specific T-SQL commands to handle their management. As a result, there is now CREATE LOGIN, ALTER LOGIN, and DROP LOGIN. Do the old stored procedures still work? Yes, they do, and their syntax is the same as it was in previous versions of SQL Server. The stored procedure

SQL Server Logins
These options are for dealing with SQL Server logins, and they correspond to some of the parameters available in the old sp_addlogin system stored procedure (default database, default language, and SID) as well as new parameters based on the enhancements to password management. The options are:

PASSWORD = 'password' [HASHED] [MUST_CHANGE] [, additional options [, ...] ]

When creating a SQL Server login, the PASSWORD must be set. A blank password can be given by not specifying anything between the single quotes, but obviously this is recommended against from a security perspective. When specifying the password, two optional arguments may be specified:
 HASHED tells SQL Server that what is being specified is already in the form of a hash (the password is already "encrypted"). This is the same as specifying 'skip_encryption' as the value of the @encryptopt parameter for sp_addlogin.
 MUST_CHANGE tells SQL Server to prompt the user to change the password on the first successful login. However, if you choose this option you must also choose to turn on policy checking and password expiration (more on those later in the article). Also, this option is only supported on Windows Server 2003. If you attempt to choose this option on another operating system (Windows 2000 or XP), you'll receive the following error:
Msg 15195, Level 16, State 1, Line 1
The MUST_CHANGE option is not supported by this version of Microsoft Windows.
In order to turn on policy checking and password expiration, additional options must be specified. Those additional options are:
 SID = SID
 DEFAULT_DATABASE = default database

CHECK_EXPIRATION and CHECK_POLICY tell SQL Server to enforce the password settings found in the computer's effective local security policy. Since Group Policy overrides the local security policy, the effective setting may actually come from a Group Policy. When CHECK_POLICY is on, SQL Server 2005 will get the password policies and enforce them. However, CHECK_EXPIRATION can still be turned off, even if you want to ensure password complexity, password history, and account lockout settings are observed and enforced. If you set CHECK_POLICY on, though, CHECK_EXPIRATION will also be on unless you explicitly turn it off. CHECK_POLICY is only fully enforced on Windows Server 2003. It can be turned on in Windows 2000, but only the password complexity is checked; even then, the password complexity check in Windows 2000 just verifies the password isn't any of the following: null or empty, the name of the computer, the name of the login, 'password', 'admin', 'administrator', 'sa', or 'sysadmin'. CREDENTIAL is an option which associates the login with what SQL Server 2005 calls a credential. A credential contains the authentication information (such as username and password) needed to connect to a resource outside of SQL Server. Since this is a basic article on logins, I won't go into any more detail on credentials. Putting this all together, an example CREATE LOGIN command for a SQL Server login would be:

CREATE LOGIN TestLogin WITH PASSWORD = 'Ch4ng3M3!' MUST_CHANGE,
    DEFAULT_DATABASE = AdventureWorks,
    CHECK_EXPIRATION = ON,

Windows Logins
The FROM source is for Windows-based logins, certificates, and asymmetric keys. However, I'll only cover Windows-based logins in this article. Given that, the syntax is:

FROM WINDOWS [WITH Windows options [, ...] ]

FROM WINDOWS covers both Windows user accounts and Windows security groups. The Windows options are:
 DEFAULT_DATABASE = default database
 DEFAULT_LANGUAGE = default language
Again, DEFAULT_DATABASE and DEFAULT_LANGUAGE are self-explanatory. However, the ability to set these two options at the same time the login is created is new. Such functionality did not and does not exist with sp_grantlogin. An example CREATE LOGIN command for a Windows account would be:

CREATE LOGIN [BUILTIN\Users] FROM WINDOWS WITH DEFAULT_DATABASE = AdventureWorks

Deleting Logins
Getting rid of a login is extremely easy:

DROP LOGIN name

A word of caution with this command, however. SQL Server 2005 will allow you to drop the login even if the login has been mapped into one or more databases as a user. Therefore, be sure to verify the login does not exist as a user in all databases before dropping the login.

Modifying Logins
There are two ways to modify logins. The first is to enable or disable the login. The second is to make changes to the login's properties. Both ways begin with ALTER LOGIN:

ALTER LOGIN name { status | WITH option [, ...] }

Enabling and Disabling Logins
A login may be set to one of two statuses: ENABLE or DISABLE. ENABLE means the login can be used to connect to SQL Server. DISABLE toggles the login so it cannot be used to connect. To disable the login TestLogin, we'd execute the following command:

ALTER LOGIN TestLogin DISABLE

Notice that I've disabled a SQL Server login. This represents new functionality in SQL Server 2005. Previously, I could deny a Windows login from connecting by executing the sp_denylogin system stored procedure. However, there was no way to temporarily prevent a SQL Server login from connecting. With SQL Server 2005, any login can be disabled. This is perfect for situations where a given application connects using a SQL Server login and you need to perform some sort of database maintenance. The login can be disabled until your maintenance is complete.

Setting Options
There are several options which can be set using the ALTER LOGIN statement. All of these apply to the WITH option [, ...] portion of the ALTER LOGIN. They are:
 Resetting the password on the login
 Setting the default database
 Setting the default language
 Changing the login name itself (renaming the login)
 Setting whether or not to check the password policy
 Setting whether or not to check password expiration
 Setting a credential for the login (or unsetting a credential)
These options can be stacked together. Let's look at each of them in turn, with the exception of credentials.

Resetting the Password
The password options are:

PASSWORD = 'new password' [ OLD_PASSWORD = 'old password' | secadmin password option [ secadmin password option ] ]

The two secadmin password options are MUST_CHANGE and UNLOCK. The first forces the user to change the password upon first login. The second unlocks a login which has been locked due to too many failed login attempts. If MUST_CHANGE is set, password policy and expiration must also be set (see below). An example of changing the password on a locked account is:

ALTER LOGIN TestLogin WITH PASSWORD = 'MyNewP4ssw0rd!' UNLOCK

Changing the Default Database or Language
Changing the default database and language are similar:
 DEFAULT_DATABASE = database
 DEFAULT_LANGUAGE = language
An example where both options are set (and an example of stacking options) is:

ALTER LOGIN TestLogin WITH DEFAULT_DATABASE = master, DEFAULT_LANGUAGE = us_english

Renaming the Login
Renaming the login is new to SQL Server 2005. The syntax is the following:

NAME = new login

Here we can rename TestLogin to TestNewLogin:

ALTER LOGIN TestLogin WITH NAME = TestNewLogin

Checking Password Policy and Expiration
The settings to check password policy and expiration are:
 CHECK_POLICY = { ON | OFF }
 CHECK_EXPIRATION = { ON | OFF }
If CHECK_EXPIRATION is set to ON, CHECK_POLICY must also be set to ON. Otherwise, the following error will be returned:
Msg 15122, Level 16, State 1, Line 1
The CHECK_EXPIRATION option cannot be used when CHECK_POLICY is OFF.
Putting this together with a password reset, we could execute the following:

ALTER LOGIN TestLogin WITH PASSWORD = 'MyNewP4ssw0rd!' MUST_CHANGE,
    CHECK_POLICY = ON,

User Instance Limitations

The unique User Instance architecture introduces some functional limitations, as follows:

Only local connections are allowed.
Replication does not work with user instances.
Distributed queries do not work to remote databases.
User instances only work in the Express Edition of SQL Server 2005.

Common Issues

The User Instance architecture sometimes leads to confusion when databases don't behave the way we are accustomed to. Most of these issues are related to the database files that get attached to the user instance and how they are handled. Following are the more common issues.

The user instance cannot attach the database because the user does not have the required permissions. The user instance executes in the context of the user who opened the connection, not the normal SQL Server service account. The user who opened the user instance connection must have write permissions on the .mdf and .ldf files that are specified in the AttachDbFilename option of the connection string.

One common issue occurs when working with the Visual Web Designer. The application connects to a user instance database from the Visual Studio integrated development environment (IDE) and then fails to connect when the database is opened by the Web page. When the ASP page opens the database it is generally running as ASPNET. If ASPNET does not have write permissions on the database files, the connection fails.

Another common issue arises when a database file opens successfully while it is attached to the SQL Server Express instance, but fails when you try to open it through a user instance connection.

A variation of this issue is when the user that opens the user instance connection has read permissions on the database files but does not have write permissions. In this case, SQL Server attaches the database as a READ_ONLY database. If you get a message saying that the database is opened as read only, you need to change the permissions on the database file.

The other main issue with user instances occurs because SQL Server opens database files with exclusive access. This is necessary because SQL Server manages the locking of the database data in its memory. Thus, if more than one SQL Server instance has the same file open, there is the potential for data corruption. If two different user instances use the same database file, one instance must close the file before the other instance can open it. There are two common ways to close database files, as follows.

User instance databases have the Auto Close option set so that if there are no connections to a database for 8-10 minutes, the database shuts down and the file is closed. This happens automatically, but it can take a while, especially if connection pooling is enabled for your connections.

Detaching the database from the instance by calling sp_detach_db will close the file. This is the method Visual Studio uses to ensure that the database file is closed when the IDE switches between user instances. For example, you are using the IDE to design a data-enabled Web page. You press F5 to run the application. The IDE detaches the database so that ASP.NET can open the database files. If you leave the database attached to the IDE and try to run the ASP page from your browser, ASP.NET cannot open the database because the file is still in use by the IDE.
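For reference, detaching a database by hand works the same way the IDE does it. A minimal sketch, where the database name is only a placeholder:

USE master
GO
-- Closes the .mdf/.ldf files so another instance (or a user instance) can open them.
EXEC sp_detach_db @dbname = N'MyUserInstanceDB'
GO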

Configuring Services and Protocols

Your first stop in configuration should be the Surface Area Configuration for Services and Protocols tool. This is where you can turn on (or off) the broadest swathes of SQL Server 2005 functionality.

One thing you'll discover right off the bat when you install SQL Server 2005 is that you can't talk to it from across the network. While this does enhance security by preventing remote attacks, it may not be the most useful configuration for a shared database server! The protocols node of this tool lets you enable TCP/IP or named pipes connections so that other machines on your network can access the new server. Stick to TCP/IP unless you've got a known requirement for named pipes because TCP/IP doesn't require opening as many ports in your firewall.

The other node in this tool lets you selectively enable or disable the various services that collectively make up SQL Server 2005. Depending on which edition of SQL Server you installed, and which installation options you selected, you can enable or disable any of these services here:

Analysis Services
Database Engine
Full-Text Search Service
Integration Services Service
MSSQLServerADHelper Service
Notification Services Service
Reporting Services Service
SQL Server Agent Service
SQL Server Browser Service
SQL Server Writer Service

Configuring Features
After you've decided which services to enable, you can proceed to finer-grained configuration by turning individual features on or off. As with many other things in the software world, SQL Server offers tradeoffs between power and danger.

The Surface Area Configuration for Features tool, shown in Figure 3, lets you enable and disable individual features. Depending on which services you have installed, you'll see different selections in this tool. Here's a summary of the features that you can manage with this tool.

Analysis Services Features

Ad-hoc Data Mining Queries allows Analysis Services to use external data sources via OPENROWSET.
Anonymous Connections allows unauthenticated users to connect to Analysis Services.
Linked Objects enables linking dimensions and measures between instances of Analysis Services.
User-Defined Functions allows loading user-defined functions from COM objects.

Database Engine Features

Ad-hoc Remote Queries allows using OPENROWSET and OPENDATASOURCE.
CLR Integration allows using stored procedures and other code written using the .NET Common Language Runtime.
Database Mail lets you use the new Database Mail system to send e-mail from SQL Server.

HTTP Access enables HTTP endpoints to allow SQL Server to accept HTTP connections.
OLE Automation enables the OLE Automation extended stored procedures.
Service Broker enables Service Broker endpoints.
SMO and DMO turns on Server Management Objects and Distributed Management Objects.
SQL Mail lets you use the older SQL Mail syntax for sending e-mail from SQL Server.
Web Assistant enables the Web Assistant for automatic output to Web pages.
xp_cmdshell turns on the xp_cmdshell extended stored procedure.

Reporting Services Features

HTTP and Web Service Requests allows Reporting Services to deliver reports via HTTP.
Scheduled Events and Report Delivery enables "push" delivery of reports.

As you can see, there is a fairly wide variety of features that you can turn on or off in the features configuration tool. In new SQL Server 2005 installations, you'll find that the bulk of these features are disabled by default. This is a radical change from SQL Server 2000, where just about everything was enabled right out of the box.
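Many of the Database Engine features listed above map to ordinary sp_configure options, so they can also be toggled from a query window instead of the Surface Area Configuration tool. A minimal sketch, using the xp_cmdshell option as the example:

-- Surface area options such as xp_cmdshell are advanced options.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- 1 = enable, 0 = disable.
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO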

SQL Server 2005 Encryption types
 Encryption is the key for data security. Sensitive data such as Social Security numbers, credit card numbers, and passwords should be protected from hacking. In SQL Server 2000, you had to create your own user-defined functions to encrypt the data or use external DLLs to do it. In SQL Server 2005, these functions and methods are available by default. SQL Server 2005 provides the following mechanisms for encrypting data:

ENCRYPTION by passphrase
ENCRYPTION by symmetric keys
ENCRYPTION by asymmetric keys
ENCRYPTION by certificates

SQL Server 2005 provides two functions regarding encryption: one for encrypting and another for decrypting. "ENCRYPTION by passphrase" is basically encrypting the data using a password; the data can be decrypted using the same password. Let us try to encrypt the data and decrypt it using the ENCRYPTION by passphrase mechanism.

select EncryptedData = EncryptByPassPhrase('MAK', '123456789')

Result

EncryptedData
0x0100000000214F5A73054F3AB954DD23571154019F3EFC031ABFCCD258FD22ED69A48002

Now let us execute the above EncryptByPassPhrase function three times as shown below.

declare @count int
declare @SocialSecurityNumber varchar(500)
declare @password varchar(12)
set @count = 1
while @count <= 3
begin
set @SocialSecurityNumber = '123456789'
set @Password = 'MAK'
select EncryptedData = EncryptByPassPhrase(@password, @SocialSecurityNumber)
set @count = @count + 1
end

Result

EncryptedData
0x01000000CBB7EE45B5C1460D6996B149CE16B76C7F7CD598DC56364D106B05D47B930093
(1 row(s) affected)
EncryptedData
0x010000005E884D30C8FF7E4723D4E70A03B0B07F877667BAF1DA9BE1E116434842D11B99
(1 row(s) affected)
EncryptedData
0x01000000C508FB0C4FC7734B47B414D2602A71A338417DD685229173684D319334A084CD

Note: Here "123456789" is the simulated data of a social security number and "MAK" is the password. The result of EncryptByPassPhrase is different every time;

however, when you decrypt the data it will decrypt correctly. Now let us try to decrypt the above-encrypted data using the DecryptByPassPhrase function.

select convert(varchar(100), DecryptByPassPhrase('MAK', 0x01000000CBB7EE45B5C1460D6996B149CE16B76C7F7CD598DC56364D106B05D47B930093))
select convert(varchar(100), DecryptByPassPhrase('MAK', 0x010000005E884D30C8FF7E4723D4E70A03B0B07F877667BAF1DA9BE1E116434842D11B99))
select convert(varchar(100), DecryptByPassPhrase('MAK', 0x01000000C508FB0C4FC7734B47B414D2602A71A338417DD685229173684D319334A084CD))

Result

123456789
(1 row(s) affected)
123456789
(1 row(s) affected)
123456789
(1 row(s) affected)

Now let us try to decrypt the encrypted data using a different password. Execute the following command.

select convert(varchar(100), DecryptByPassPhrase('test', 0x01000000C508FB0C4FC7734B47B414D2602A71A338417DD685229173684D319334A084CD))

Result

NULL
(1 row(s) affected)

As you can see, SQL Server returns NULL when the password is wrong.

Now let's create a table with a few rows of credit card numbers and social security numbers and then encrypt the data permanently with a passphrase.

USE [master]
GO
/****** Object: Database Script Date: 11/25/2007 10:50:47 ******/
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Customer DB')
DROP DATABASE [Customer DB]
go
create database [Customer DB]
go
use [Customer DB]
go
create table [Customer data](
[customer id] int,
[Credit Card Number] bigint,
[Social Security Number] bigint)
go
insert into [Customer data] values (1, 1234567812345678, 123451234)
insert into [Customer data] values (2, 1234567812345378, 323451234)
insert into [Customer data] values (3, 1234567812335678, 133451234)
insert into [Customer data] values (4, 1234567813345678, 123351234)
insert into [Customer data] values (5, 1234563812345678, 123431234)
go

Now let us create two columns to hold the encrypted data.

use [Customer DB]
go
alter table [Customer Data] add [Encrypted Credit Card Number] varbinary(MAX)
go
alter table [Customer Data] add [Encrypted Social Security Number] varbinary(MAX)
go

Let's update the two columns with the encrypted data as shown below.

use [Customer DB]
go
update [Customer Data] set [Encrypted Credit Card Number] =
EncryptByPassPhrase('Credit Card', convert(varchar(100), [Credit Card Number]))
go
update [Customer Data] set [Encrypted Social Security Number] =
EncryptByPassPhrase('Social Security', convert(varchar(100), [Social Security Number]))
go

Query the table as shown below. [Refer Fig 1.0]

use [Customer DB]
go
select * from [customer data]
go

Result

Let's remove the columns that have clear text data.

use [Customer DB]
go
alter table [Customer Data] drop column [Credit Card Number]
go
alter table [Customer Data] drop column [Social Security Number]
go

Query the table as shown below. [Refer Fig 1.2]

use [Customer DB]
go
select * from [customer data]
go

Result

Let's decrypt the data in the table using the DecryptByPassPhrase function as shown below. [Refer Fig 1.3]

use [Customer DB]
go
select [customer id],
convert(bigint, convert(varchar(100), decryptbypassphrase('Credit Card', [Encrypted Credit Card Number]))) as [Credit Card Number],
convert(bigint, convert(varchar(100), decryptbypassphrase('Social Security', [Encrypted Social Security Number]))) as [Social Security Number]
from [customer data]
go

Result

customer id, Credit Card Number, Social Security Number
1, 1234567812345678, 123451234
2, 1234567812345378, 323451234
3, 1234567812335678, 133451234
4, 1234567813345678, 123351234
5, 1234563812345678, 123431234
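The examples above cover only the passphrase mechanism. For completeness, here is a minimal sketch of the symmetric key mechanism listed earlier. The certificate name, key name, algorithm, and password are assumptions for illustration, not part of the original example.

use [Customer DB]
go
-- A database master key protects the certificate; the password here is only a placeholder.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
go
-- The certificate in turn protects the symmetric key.
CREATE CERTIFICATE CustomerCert WITH SUBJECT = 'Customer data encryption'
go
CREATE SYMMETRIC KEY CustomerKey
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY CERTIFICATE CustomerCert
go
-- The key must be opened before EncryptByKey/DecryptByKey can use it.
OPEN SYMMETRIC KEY CustomerKey DECRYPTION BY CERTIFICATE CustomerCert
SELECT EncryptedData = EncryptByKey(Key_GUID('CustomerKey'), '123456789')
CLOSE SYMMETRIC KEY CustomerKey
go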

Dedicated Administrator Connection in SQL Server
 As a Database Administrator, you may come across a scenario

where you are unable to access a SQL Server instance, especially when the CPU or memory utilization on the server is very high. To allow database administrators to troubleshoot such scenarios, Microsoft introduced the Dedicated Administrator Connection (DAC) in SQL Server 2005; this is a special diagnostic connection for database administrators when standard connections to SQL Server are not possible. SQL Server will make every possible attempt to connect successfully using the DAC feature; however, in some extreme conditions it may not be successful. The Dedicated Administrator Connection feature is also available in SQL Server 2008. Database administrators need to keep in mind that only one DAC connection can be established to a SQL Server instance. Once the connection is established using DAC, you can access SQL Server and execute queries to troubleshoot performance issues. In this article you will see how to configure and enable a Remote Dedicated Administrator Connection, and you will also see how you can use DAC with SQL Server Management Studio and the SQLCMD command line utility.

Enabling the Remote Dedicated Administrator Connection
 By default, the DAC can only be run on the server. A Remote DAC is not possible until it is configured by the database administrator using the sp_configure system stored procedure with the remote admin connections option. To enable a Remote DAC, execute the code below:

USE master
GO
sp_configure 'show advanced options', '1'
GO
RECONFIGURE WITH OVERRIDE
GO
/* 0 = Allow Local Connection, 1 = Allow Remote Connections */
sp_configure 'remote admin connections', '1'
GO
RECONFIGURE WITH OVERRIDE
GO

In SQL Server 2005 you can also enable remote computers to access a DAC by using the SQL Server Surface Area Configuration tool. However, in SQL Server 2008, the SQL Server Surface Area Configuration tool is not available. To access the Surface Area Configuration tool in SQL Server 2005, select Start | All Programs | Microsoft SQL Server 2005 | Configuration Tools | SQL Server Surface Area Configuration.

In SQL Server 2005, the Surface Area Configuration screen you need to select is the Surface Area Configuration for Features. This will open the screen below:

Click on DAC under Database Engine and select the Enable remote DAC option and click OK to save the configuration changes.
TCP/IP Port Used by Dedicated Administrator Connection

The Dedicated Administrator Connection requires a dedicated TCP/IP port, which gets assigned dynamically when the Database Engine starts up. By default the DAC listener accepts connections only on the local port; for a default instance of SQL Server, DAC uses TCP/IP port 1434. Once the remote administrator connection is configured, the DAC listener is enabled without requiring a restart of the SQL Server service. You can check the port which was assigned to the DAC in the SQL Server error log.
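A quick way to find that port is to search the current error log for the DAC listener startup message. The sketch below uses the undocumented xp_readerrorlog procedure, so treat it as an assumption; its behavior and parameters can vary between builds.

-- Dump the current SQL Server error log and look for the line that reports
-- "Dedicated admin connection support was established for listening ... on port ...".
EXEC master.dbo.xp_readerrorlog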

Using DAC with SQLCMD Command Line Utility

It is advised to use the SQLCMD command line utility with the DAC feature, especially when SQL Server is facing high CPU or memory utilization issues. The reason to use SQLCMD is that it is a lightweight command line utility and it uses very few server resources, such as memory and CPU, when connected locally or remotely. In scenarios when the server is not responding to standard SQL Server connections, this is the best approach. You need to be a member of the sysadmin fixed server role to connect and use the DAC. The -A switch is the special administrator switch which needs to be used when connecting to a SQL Server 2005 or a SQL Server 2008 instance using DAC with the SQLCMD command line utility.

SQLCMD -S AKMEHTA -U UserA -P UserA$ -A

Explanation of the command line options used with SQLCMD:

-S <SQL Server instance>
-U <user name>
-P <password>
-A Logs in to SQL Server with a Dedicated Administrator Connection (DAC).

Once connected using SQLCMD, database administrators can use SQL Server diagnostic queries to troubleshoot performance issues:

Query Dynamic Management Views (DMVs) like sys.dm_tran_locks, sys.dm_exec_sessions, sys.dm_exec_requests, etc.
Execute basic DBCC commands like DBCC SQLPERF(LOGSPACE), DBCC DROPCLEANBUFFERS, etc.
Run KILL <SPID>, etc.

Using DAC with SQL Server Management Studio

1. In SQL Server Management Studio, press CTRL+N or click Database Engine Query.
2. In the Connect to Server dialog box, type ADMIN: followed by the name of the SQL Server instance in the Server name textbox. In the example below, to connect to a SQL Server instance named AKMEHTA, we have provided the Server name value as ADMIN:AKMEHTA.
3. In the Authentication drop down list, there will be two options, namely Windows Authentication and SQL Server Authentication. In this example I will be using SQL Server Authentication. I have provided the credentials of a member of the sysadmin group and then clicked Connect to establish the connection using DAC.

4. If there is no other dedicated administrator connection in use, then the attempt will be successful. Otherwise the connection will fail with an error indicating it cannot establish the connection.
5. In the new query window which has opened up, you can type the queries below, which will help you quickly diagnose performance issues.

SELECT * FROM sys.dm_tran_locks
SELECT * FROM sys.dm_exec_sessions
SELECT * FROM sys.dm_exec_requests

Limitations when using DAC
 Only one DAC connection is allowed per instance of SQL Server. This

limitation is there in both SQL Server 2005 and SQL Server 2008.  You will receive the error below when a user tries to connect using DAC and another DAC connection is active:  Could not connect because the maximum number of dedicated administrator connections already exists. Before a new connection can be made, the existing dedicated administrator connection must be dropped, either by logging off or ending the process.
 It is not possible to take a database backup or restore a database when

you have connected using a DAC.  It is advised not to run resource-intensive queries when connected to a SQL Server instance using a DAC.  You need to be a member of the sysadmin fixed server role to use a DAC.  It is likely that if your database engine is running, you will be able to access the master database and then diagnose the performance issues on the SQL Server.

Introduction and Explanation to SYNONYM
 DBAs have been referencing database objects

in four-part names. SQL Server 2005 introduces the concept of a synonym. A synonym is a single-part name which can replace a multi-part name in a SQL statement. Use of synonyms cuts down the typing of long multi-part object names by replacing them with one synonym. It also provides an abstraction layer which protects SQL statements that use synonyms from changes in the underlying objects (tables, etc.).



Create a synonym:

USE AdventureWorks;
GO
CREATE SYNONYM MyLocation FOR AdventureWorks.Production.Location;
GO

Use the synonym:

USE AdventureWorks;
GO
SELECT TOP 5 * FROM MyLocation;
GO

Drop the synonym:

USE AdventureWorks;
GO
DROP SYNONYM MyLocation;
GO

Synonyms can be created on only the following objects:

Assembly (CLR) Stored Procedure
Assembly (CLR) Table-valued Function
Assembly (CLR) Scalar Function
Assembly (CLR) Aggregate Function
Replication-filter-procedure
Extended Stored Procedure
SQL Scalar Function
SQL Table-valued Function
SQL Inline-table-valued Function
SQL Stored Procedure
View
Table (User-defined)

Additionally, synonyms can be used only to work with the data of an object, not to change the schema of the object. Synonyms can be used only with the SELECT, UPDATE, INSERT, DELETE, and EXECUTE commands.

 An example of the usefulness of this might be if you

had a stored procedure in a Users database that needed to access a Clients table on another production server. Assuming you created the stored procedure in the database Users, you might want to set up a synonym such as the following:

USE Users;
GO
CREATE SYNONYM Clients FOR Offsite01.Production.dbo.Clients;
GO

 Now when writing the stored procedure

instead of having to write out that entire four-part name every time you access the table, you can just use the synonym Clients. Furthermore, if you ever change the location or the name of the production database, all you need to do is modify one synonym instead of having to modify all of the stored procedures which reference the old server.

Backups
 SQL Server backups can be created in a number of

ways and can incorporate all or some of the data, as well as some part of the transaction log.

Recovery Models
 In order to begin working on backups, the business needs to define a database recovery model. In essence, a recovery model defines what you're going to do with the transaction log data. There are three recovery models: Full, Simple and Bulk Logged. These are pretty easy to define:

Simple: in simple recovery mode, the transaction log is not backed up, so you can only recover to the most recent full or differential backup.
Full: in full recovery mode, you back up the database and the transaction log so that you can recover the database to any point in time.
Bulk Logged: in bulk logged mode, most transactions are stored in the transaction log, but some bulk operations such as bulk loads or index creation are not logged.

The two most commonly used modes are Simple and Full. Don't assume, of course, that you always need to use Full recovery to protect your data. It is a business decision. The business is going to tell you if you need to recover to a point in time or if you simply need the last full backup. It's going to define whether your data is recoverable by other means, such as manual entry, or whether you have to protect as much as possible as it comes across the wire.

You use Simple recovery if you can afford to lose the data stored since the last full or differential backup and/or you just don't need recovery to a point in time. In Simple mode, you must restore all secondary read/write filegroups when you restore the primary. You use Simple mostly on secondary databases that are not an absolutely vital part of the enterprise, or on reporting systems with read-only access where there isn't a transaction log to worry about anyway.

You use Full if every bit of the data is vital, if you need to recover to a point in time or, usually in the case of very large databases (VLDB), if you need to restore individual files and filegroups independently of other files and filegroups.

With both the Simple and Full recovery models, you can now run a copy-only backup, which allows you to copy the database to a backup file without affecting the log, the differential backup schedule, or recovery to a point in time.

Working with Simple Recovery
 To set it to simple recovery:

ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE

 Your simplest backup strategy is to run, at regular intervals, the following SQL Server backup command, which will perform a full backup of the database:

BACKUP DATABASE AdventureWorks TO DISK = 'C:\Backups\AdventureWorks.BAK'

 What's with all the typing, you ask? Don't we have GUI tools to handle the work for us? Yes, most simple backups can be performed using SQL Server Management Studio. However, if you want to learn and understand what Management Studio is doing for you, or if you want some fine-grained control over what is backed up, how and where, then you're going to have to break out the keyboard and put away the mouse.

Copy-only backups
 Normally, backing up a database affects other backup and

restore processes. For example, after running the previous command, any differential backups (a backup that only copies data changed since the last backup) would use this as the starting point for data changes, not the backup you ran last night. As noted earlier, SQL 2005 introduces a new concept to backups, COPY_ONLY backups, which allow us to keep from interrupting the cycle:

BACKUP DATABASE AdventureWorks TO DISK = 'C:\Backups\AdventureWorks.bak' WITH COPY_ONLY;

 Already we've found one of those more granular moments when Management Studio wouldn't help you. If you want a copy-only backup, you have to use the command line.

Differential backups
 Let's assume for a moment that we're still in simple recovery, but we're dealing with a larger database, say something above 100 GB in size. Full backups can actually start to slow down the process a bit. Instead, after consultation with the business, we've decided to do a weekly full backup and daily differential backups. Differential backups only back up the data pages that have changed since the last full backup. Following is the SQL backup command to perform a differential backup:

BACKUP DATABASE AdventureWorks TO DISK = 'C:\backups\AdventureWorks.bak' WITH DIFFERENTIAL;

Now, if we had to restore this database, we'd first go to the last full backup, restore that, and then restore the differential backups in order (more on that later).

BACKUP DATABASE AdventureWorks TO DISK = 'C:\backups\AdventureWorks.bak' WITH INIT;

The WITH INIT option shown above overwrites any existing backup sets in the file rather than appending a new one. There are a number of other backup options that I won't be detailing here. Read Books Online to see details on BLOCKSIZE, EXPIREDATE, RETAINDAYS, PASSWORD, NAME, STATS, and so on. You can also run a statement that will check the integrity of a database backup. It doesn't check the integrity of the data within a backup, but it does verify that the backup is formatted correctly and accessible.

RESTORE VERIFYONLY FROM DISK = 'C:\backups\AdventureWorks.bak'

Full recovery and log backups
 We've primarily been working on a database that was in Simple

recovery mode (this used to be called Truncate Log on Checkpoint). In this mode, we do not backup the transaction logs for later recovery. Every backup under this mechanism is a database backup. Log backups are simply not possible.  However, you've only protected the data as of the last good backup, either full or differential. Let's change our assumptions. Now we're dealing with a large, mission critical application and database. We want to be able to recover this database up to the latest minute. This is a very important point. In theory, since the log entries are being stored and backed up, we're protected up to the point of any failure. However, some failures can cause corruption of the log, making recovery to a point in time impossible. So, we have to determine what the reasonable minimum time between log backups will be. In this case we can live with no more than 15 minutes worth of lost data.  So, let's start by putting our database in FULL recovery mode:

ALTER DATABASE AdventureWorks SET RECOVERY FULL

Then, on a scheduled basis, in this case every 15 minutes, we'll run the SQL backup command for the transaction log:

BACKUP LOG AdventureWorks TO DISK = 'C:\backups\AdventureWorks_Log.bak';

This script will back up committed transactions from the transaction log. It has markers in the file that show the start and stop time. It will truncate the log when it successfully completes, cleaning out from the transaction log the committed transactions that have been written to the backup file. If necessary, you can use the WITH NO_TRUNCATE statement to capture data from the transaction log regardless of the state of the database, assuming it's online and not in an EMERGENCY status. This is for emergencies only. Note that we are not using the INIT statement in this case, but you can do so if you choose. When doing log backups, you've got options:

Run all the backups to a single file, where they'll stack, and all you have to do on restore (covered later) is cycle through them.
Name the backups uniquely, probably using date and time in the string.

In that latter case, safety says use INIT, because you're exercising maximum control over what gets backed up where, and you'll be able to know exactly what a backup is, when it was taken and from where, based on the name. This is yet another place where operating backups from the command line gives you more control than the GUI. We've used both approaches in our systems for different reasons. You can decide what is best for your technology and business requirements.

Most of the options available to the database backup are included in log backups, including COPY_ONLY. This would allow you to capture a set of transaction data without affecting the log or the next scheduled log backup. This would be handy for taking production data to another system for troubleshooting, etc.
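As a rough illustration of the second approach, here is a minimal sketch that builds a date-and-time-stamped file name before running the log backup. The folder path and naming pattern are assumptions for illustration only.

DECLARE @file nvarchar(260)
-- Produces a name like C:\backups\AdventureWorks_Log_20061023_1430.bak
SET @file = 'C:\backups\AdventureWorks_Log_'
          + CONVERT(nvarchar(8), GETDATE(), 112) + '_'
          + REPLACE(CONVERT(nvarchar(5), GETDATE(), 108), ':', '') + '.bak'
BACKUP LOG AdventureWorks TO DISK = @file WITH INIT;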

If you have your database set to FULL Recovery, you need to run log backups. Sometimes, people forget and the transaction log grows to the point that it fills up the disk drive. In this case, you can run: BACKUP LOG Adventureworks WITH NO_LOG; Attaching NO_LOG to the log backup, and not specifying a location for the log, causes the inactive part of the log to be removed and it does this without a log entry itself, thus defeating the full disk drive. This is absolutely not recommended because it breaks the log chain, the series of log backups from which you would recover your database to a point in time. Microsoft recommends running a full backup immediately after using this statement.

Restoring Databases
 As important as SQL Server backups are,

and they are vital, they are useless without the ability to restore the database.
Restoring a full database backup

Restoring a full database backup is as simple as it was to create:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backup\AdventureWorks.bak';

It's really that simple, unless, as in our example, we are backing up everything to a single file as if it were a backup device. In that case, you'll need to specify which file within the "device" you're accessing. If you don't know which file, you'll need to generate a list:

RESTORE HEADERONLY FROM DISK = 'C:\Backup\Adventureworks.bak';

This will give you the same list as I showed above from Management Studio. So now, if we wanted to restore the second file in the group, the COPY_ONLY backup, you would issue the following command:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 2;

Unfortunately, if you're following along, you may find that you just generated this error:

Msg 3159, Level 16, State 1, Line 1
The tail of the log for the database "AdventureWorks" has not been backed up. Use BACKUP LOG WITH NORECOVERY to backup the log if it contains work you do not want to lose. Use the WITH REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of the log.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

What this means is that your database is in full recovery mode, but you haven't backed up the "tail of the log", meaning the transactions entered since the last time you ran a backup. You can override this requirement if you change the previous syntax to:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 2, REPLACE;

That's the first time we've stacked the WITH clauses (WITH FILE = 2 and WITH REPLACE are represented as WITH FILE = 2, REPLACE), but it won't be the last. Read through Books Online; most of the WITH clause statements can be used in combination with the others.

What happens if we want to restore to a different database than the original? For example, we want to make a copy of our database from a separate backup. Maybe we want to move it down to a production support server where we are going to do some work on it, separate from the production copy of the database. If we take the simple approach, well, try this:

RESTORE DATABASE AdventureWorks_2 FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 2;

In this case, you should see a whole series of errors relating to files not being overwritten. You really can create new databases from backups, but if you're doing it on a server with the existing database, you'll need to change the location of the physical files using the logical names. In order to know the logical names of the files for a given database, run this prior to attempting to move the files:

RESTORE FILELISTONLY FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 2;

This can then be used to identify the appropriate logical names in order to generate this script:

RESTORE DATABASE AdventureWorks_2 FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 2,
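The final command above is cut off in the source. As a rough sketch of how it is typically completed, assuming the logical file names reported by RESTORE FILELISTONLY are AdventureWorks_Data and AdventureWorks_Log (an assumption, not something the article states), the restore might look like this:

RESTORE DATABASE AdventureWorks_2
FROM DISK = 'C:\Backups\Adventureworks.bak'
WITH FILE = 2,
-- Relocate each logical file so it doesn't collide with the original database's files.
MOVE 'AdventureWorks_Data' TO 'C:\Data\AdventureWorks_2.mdf',
MOVE 'AdventureWorks_Log' TO 'C:\Data\AdventureWorks_2_log.ldf';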

Restoring a differential backup

The last method is to apply the differential backup. This requires two steps. First we'll restore the database, but with a twist, and then we'll apply the differential backup:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 1, NORECOVERY, REPLACE;
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\AdventureWorks.bak' WITH FILE = 3;

Most of this is probably self-explanatory based on what we've already covered. The one wrinkle is the inclusion of the NORECOVERY keyword. Very simply, during a restore, transactions may have started during the backup process. Some of them complete and some don't. At the end of a restore, completed transactions are rolled forward into the database and incomplete transactions are rolled back. Setting NORECOVERY keeps transactions open. This allows for the next set of transactions to be picked up from the next backup in order. We're mainly dealing with simple backups and restores in this article, but a more advanced restore in 2005 allows secondary filegroups to be restored while the database is online. Its primary filegroup must be online during the operation. This will be more helpful for very large database systems.

Restoring SQL Server databases to a point in time

Restoring logs is not much more difficult than the differential database restore that we just completed. There's just quite a bit more involved in restoring to a moment in time. Assuming you're backing up your logs to a single file or device:

RESTORE HEADERONLY FROM DISK = 'C:\Backups\Adventureworks_log.bak';

Otherwise, you simply go and get the file names you need. First run the database restore, taking care to leave it in a non-recovered state. Follow this up with a series of log restores to a point in time.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\Adventureworks.bak' WITH FILE = 1, NORECOVERY, REPLACE, STOPAT = 'Oct 23, 2006 14:30:29.000';

RESTORE LOG AdventureWorks FROM DISK = 'C:\Backups\Adventureworks_log.bak' WITH FILE = 1, NORECOVERY, STOPAT = 'Oct 23, 2006 14:30:29.000';
RESTORE LOG AdventureWorks FROM DISK = 'C:\Backups\Adventureworks_log.bak' WITH FILE = 2, NORECOVERY, STOPAT = 'Oct 23, 2006 14:30:29.000';
RESTORE LOG AdventureWorks FROM DISK = 'C:\Backups\Adventureworks_log.bak' WITH FILE = 3, NORECOVERY, STOPAT = 'Oct 23, 2006 14:30:29.000';
RESTORE LOG AdventureWorks FROM DISK = 'C:\Backups\Adventureworks_log.bak' WITH FILE = 4, STOPAT = 'Oct 23, 2006 14:30:29.000';

Now what we have is a database that is up to the exact, last committed transaction at 14:30:29 on the 23rd of October. Remember, during multi-step restores such as this, you have to leave the database in a recovering status. That means appending NORECOVERY to each statement until you've completed the restore process. If for some reason you've added

Automating Administrative Tasks
 Microsoft SQL Server allows you to automate

administrative tasks. To automate administration, you define predictable administrative tasks and then specify the conditions under which each task occurs. Using automated administration to handle routine tasks and events frees your time to perform other administrative functions.

About SQL Server Agent
 SQL Server Agent is a Microsoft Windows service

that executes scheduled administrative tasks, which are called jobs. SQL Server Agent uses SQL Server to store job information. Jobs contain one or more job steps. Each step contains its own task, for example, backing up a database. SQL Server Agent can run a job on a schedule, in response to a specific event, or on demand. For example, if you want to back up all the company servers every weekday after hours, you can automate this task. Schedule the backup to run after 22:00 Monday through Friday; if the backup encounters a problem, SQL Server Agent can record the event and notify you.

To automate administration, follow these steps:

Establish which administrative tasks or server events occur regularly and whether these tasks or events can be administered programmatically. A task is a good candidate for automation if it involves a predictable sequence of steps and occurs at a specific time or in response to a specific event.
Define a set of jobs, schedules, alerts, and operators by using SQL Server Management Studio, Transact-SQL scripts, or SQL Server Management Objects (SMO).
Run the SQL Server Agent jobs you have defined.

Note: For the default instance of SQL Server, the SQL Server Agent service is named SQLSERVERAGENT. For named instances, the SQL Server Agent service is named SQLAgent$instancename. If you are running multiple instances of SQL Server, you can use multiserver administration to automate tasks across those instances.

Creating Jobs
 A job is a specified series of operations performed sequentially by

SQL Server Agent. A job can perform a wide range of activities, including running Transact-SQL scripts, command prompt applications, Microsoft ActiveX scripts, Integration Services packages, Analysis Services commands and queries, or Replication tasks. Jobs can run repetitive or schedulable tasks, and they can automatically notify users of job status by generating alerts, thereby greatly simplifying SQL Server administration.  To create a job, a user must be a member of one of the SQL Server Agent fixed database roles or the sysadmin fixed server role. A job can be edited only by its owner or members of the sysadmin role. For more information about the SQL Server Agent fixed database roles, see SQL Server Agent Fixed Database Roles.  Jobs can be written to run on the local instance of SQL Server or on multiple instances across an enterprise. To run jobs on multiple servers, you must set up at least one master server and one or more target servers. For more information about master and target servers, see Automating Administration Across an Enterprise
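Since this section describes jobs mainly in terms of the tools used to create them, here is a minimal Transact-SQL sketch of creating and scheduling a simple backup job with the msdb stored procedures. The job name, command, and 22:00 weekday schedule are only illustrative, not values taken from the article.

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Nightly AdventureWorks Backup'
EXEC dbo.sp_add_jobstep @job_name = N'Nightly AdventureWorks Backup',
    @step_name = N'Full backup',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE AdventureWorks TO DISK = ''C:\Backups\AdventureWorks.bak'''
-- freq_type 8 = weekly; freq_interval 62 = Monday through Friday; start at 22:00.
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly AdventureWorks Backup',
    @name = N'Weekdays 22:00',
    @freq_type = 8,
    @freq_interval = 62,
    @freq_recurrence_factor = 1,
    @active_start_time = 220000
-- Associate the job with the local server so SQL Server Agent will run it.
EXEC dbo.sp_add_jobserver @job_name = N'Nightly AdventureWorks Backup', @server_name = N'(local)'
GO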

To create a job:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

To give others ownership of a job:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

Organizing Jobs

Job categories help you organize your jobs for easy filtering and grouping. For example, you can organize all your database backup jobs in the Database Maintenance category. You can also create your own job categories. Multiserver categories exist only on a master server. There is only one default job category available on a master server: [Uncategorized (MultiServer)]. When a multiserver job is downloaded, its category is changed to Jobs From MSX at the target server.

To create a job category:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

To delete a job category:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

To assign a job to a job category:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

To change the membership of a job category:
SQL Server Management Studio
Transact-SQL
SQL Server Management Objects (SMO)

To list category information:
Transact-SQL
SQL Server Management Objects (SMO)

Job Ownership

For security reasons, only the job owner or a member of the sysadmin role can change the definition of the job. Members of the sysadmin role can assign job ownership to other users, and they can run any job, regardless of the job owner.

Creating Job Steps A job step is an action that the job takes on a database or a server. Every job must have at least one job step. Job steps can be: Executable programs and operating system commands. Transact-SQL statements, including stored procedures and extended stored procedures. Microsoft ActiveX scripts. Replication tasks. Analysis Services tasks. Integration Services packages.

Every job step runs in a specific security context. If the job step specifies a proxy, the job step runs in the security context of the credential for the proxy. If a job step does not specify a proxy, the job step runs in the context of the SQL Server Agent service account. Only members of the sysadmin fixed server role can create jobs that do not explicitly specify a proxy. Because job steps run in the context of a specific Microsoft Windows user, that user must have the permissions and configuration necessary for the job step to execute. For example, if you create a job that requires a drive letter or a Universal Naming Convention (UNC) path, the job steps may run under your Microsoft Windows user account while testing the tasks. However, the Windows user for the job step must also have the necessary permissions, drive letter configurations, or access to the required drive. Otherwise, the job step fails. To prevent this problem, ensure that the proxy for each job step has the necessary permissions for the task that the job step performs. For more information, see Security Considerations for SQL Server. Job Step Logs SQL Server Agent can write output from some job steps either to an operating system file or to the sysjobstepslogs table in the msdb database. The following job step types can write output to both destinations:

Executable programs and operating system commands.
Transact-SQL statements.
Analysis Services tasks.

Only job steps that are executed by users who are members of the sysadmin fixed server role can write job step output to operating system files. If job steps are executed by users who are members of the SQLAgentUserRole, SQLAgentReaderRole, or the SQLAgentOperatorRole fixed database roles in the msdb database, then the output from these job steps can be written only to the sysjobstepslogs table. Job step logs are automatically deleted when jobs or job steps are deleted.

Note: Replication task and Integration Services package job step logging is handled by their respective subsystems. You cannot use SQL Server Agent to configure job step logging for these types of job steps.

Executable Programs and Operating System Commands as Job Steps

Executable programs and operating system commands can be used as job steps. These files may have .bat, .cmd, .com, or .exe file extensions. When you use an executable program or an operating system command as a job step, you must specify:

The process exit code returned if the command was successful.
The command to execute. To execute an operating system command, this is simply the command itself. For an external program, this is the name of the program and its parameters.
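To make those two required settings concrete, here is a minimal sketch of adding an operating system command as a job step with sp_add_jobstep. The job name and command are illustrative only, and the sketch assumes the job itself already exists.

USE msdb
GO
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly AdventureWorks Backup',    -- existing job (assumed)
    @step_name = N'Copy backup file off the server',
    @subsystem = N'CmdExec',                          -- operating system command subsystem
    @command = N'copy C:\Backups\AdventureWorks.bak \\server02\dbbackup\',
    @cmdexec_success_code = 0                         -- process exit code that means success
GO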

Creating a SMO application
1. Click Start, point to All Programs, point to Microsoft Visual Studio 2005, and then click Microsoft Visual Studio 2005.
2. On the File menu, point to New, and then click Project.
3. In the Project types list, expand Other Languages, and then click Visual Basic. In the Templates list, click Windows Application. Change the name of the project to SMOPractice and then click OK.
4. On the Project menu, click Add Reference.
5. On the .NET tab, use CTRL+click to select the following libraries:

Microsoft.SqlServer.ConnectionInfo
Microsoft.SqlServer.Smo

6) If the Toolbox is not visible, on the View menu, click Toolbox.
7) In the Toolbox, double-click Button1 to view the Button1_Click code.
9) At the top of the code window (above the Public Class Form1 declaration), add the following code to import the SMO namespaces:

Imports Microsoft.SqlServer.Management.Smo
Imports Microsoft.SqlServer.Management.Common

10) In the Button1_Click event handler, add the following code to connect to your SQL Server:

Dim myServer As New Server()
Dim conn As ServerConnection = myServer.ConnectionContext
conn.ServerInstance = "localhost"
conn.Connect()
MessageBox.Show("Connected to localhost")

11) On the File menu, click Save All. Use the default names and location.
12) On the Debug menu, click Start. When the Windows form appears, click Button1.
13) When the message box appears, click OK and close the Windows Forms application.

Retrieve server information

1)Add the following code immediately before the End Sub:

MessageBox.Show("Edition: " & myServer.Information.Edition)
MessageBox.Show("Language: " & myServer.Information.Language)
MessageBox.Show("OSVersion: " & myServer.Information.OSVersion)
MessageBox.Show("Platform: " & myServer.Information.Platform)
MessageBox.Show("Product: " & myServer.Information.Product)
MessageBox.Show("Version: " & myServer.Information.VersionString)

On the File menu, click Save All. On the Debug menu, click Start. When the Windows form appears, click Button1.

Creating Backup Jobs in SQL Server 2005
 The SQL Server 2005 Maintenance Plan

feature has been significantly modified in comparison with SQL 2000: now it utilizes new Integration Services. Also, creating database and transaction log backups is not as clear as it was in SQL 2000. This article does not describe all available SQL Server 2005 backup features or provide some tricks dealing with them; instead, it offers solutions for the most commonly used backup jobs.

Using SQL Server 2000 Backup Job Scripts
 If you created backup maintenance plans in SQL

2000, you probably noticed that the key element of the backup job was the xp_sqlmaint extended stored procedure, which used the sqlmaint utility. Despite the fact that Microsoft has deprecated both sqlmaint and xp_sqlmaint, and is planning to remove them from future versions of SQL Server, they are still here and they work well. So, you can take your existing SQL 2000 backup jobs, modify the server and database names, the backup files folders, the output and report files, etc., and run those scripts on your SQL 2005 server.

 In case you do not have those scripts, here is an

example of a database backup job that uses the xp_sqlmaint procedure. It runs a full database backup of the AdventureWorks database on the dba02\sql2005 instance to the shared dbbackup folder on the server02 server, deletes backup files older than 4 days, and stores a report into the C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG folder on the local database server. Below is the code fragment that utilizes the xp_sqlmaint procedure

 EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep

@job_id = @JobID,
@step_id = 1,
@step_name = N'Step 1',
@command = N'EXECUTE master.dbo.xp_sqlmaint N''-S "dba02\sql2005" -D "AdventureWorks" -Rpt "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\backup_aw.txt" -WriteHistory -VrfyBackup -BkUpMedia DISK -BkUpDB "\\server02\dbbackup\sql2005" -CrBkSubDir -DelBkUps 4days -BkExt "BAK"''',
@database_name = N'master',
@server = N'',
@database_user_name = N'',
@subsystem = N'TSQL',
@cmdexec_success_code = 0,
@flags = 4, -- Overwrite output file
@retry_attempts = 0,
@retry_interval = 0,
@output_file_name = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\aw_backup.log',
@on_success_step_id = 0,
@on_success_action = 1, -- (default) Quit with success
@on_fail_step_id = 0,
@on_fail_action = 2 -- (default) Quit with failure



Creating a SQL Server 2005 Maintenance Plan
 If you are going to use new Maintenance Plan

features, I strongly recommend installing SQL Server 2005 Service Pack 1 first. Among the new features and improvements that are included in SQL Server 2005 Service Pack 1, there is a fix for the previously existing discrepancy between Back up database task, which allowed storing database backups in separate folders, and Maintenance Cleanup Task, which could not delete backup files from those subfolders.

 In order to create a new maintenance plan in Management

Studio, first connect to the target server using Windows Authentication, then right-click on the Maintenance Plan folder in Object Explorer, select New Maintenance Plan, and enter the plan name. As a result, a Maintenance Plan design panel will appear on the right, and a toolbox with available Maintenance Plan Tasks will be displayed on the left. Click on the Connection button to verify that the current connection uses Windows Authentication (recommended by Microsoft). Currently connected Windows user will become the owner of the job created by this maintenance plan.  The first step in creating a database or transaction log backup is to drag and drop Back up database task from the toolbox to the design panel. Then double-click on that item to set the following necessary properties:

Databases: Click on the dropdown field to bring up the database selection window. For this example, I chose Northwind and Pubs as Figure 1 shows:

Backup type: choose Full.
Destination parameters: Back up to: choose Disk. Make sure that the Create a backup file for every database option is selected and the Create a sub-directory for each database box is checked. You can use the default destination folder or specify your own. For this example, the network folder \\server02\dbbackup\sql2005 has been selected.
Backup file extension: make sure that its value is bak without a leading dot.
Check the Verify backup integrity box.

When you are done, the Back up database task properties window should look like the one shown on Figure 2.

Here is the code to "fix" the existing jobs:

SET NOCOUNT ON
SELECT IDENTITY(int, 1, 1) AS agentJobId, name AS agentJobName
INTO #agentJob
FROM msdb.dbo.sysjobs
ORDER BY name

DECLARE @agentJobName sysname, @agentJobId int, @job_id uniqueidentifier
SET @agentJobId = 1
SELECT @agentJobName = agentJobName FROM #agentJob WHERE agentJobId = @agentJobId
WHILE @@ROWCOUNT <> 0
BEGIN
    EXEC msdb.dbo.sp_verify_job_identifiers '@job_name', '@job_id', @agentJobName OUTPUT, @job_id OUTPUT
    EXEC msdb.dbo.sp_update_job @job_id, @notify_level_eventlog = 2
    SELECT @agentJobId = @agentJobId + 1, @job_id = NULL
    SELECT @agentJobName = agentJobName FROM #agentJob WHERE agentJobId = @agentJobId
END
DROP TABLE #agentJob

How to schedule a database backup operation by using SQL Server Management Studio in SQL Server 2005
 To schedule a database backup operation by using SQL Server Management Studio in SQL Server 2005, follow these steps:

Start SQL Server Management Studio.
In the Connect to Server dialog box, click the appropriate values in the Server type list, in the Server name list, and in the Authentication list.
Click Connect.
In Object Explorer, expand Databases.
Right-click the database that you want to back up, click Tasks, and then click Back Up.
In the Back Up Database - DatabaseName dialog box, type the name of the backup set in the Name box, and then click Add under Destination.

 In the Select Backup Destination dialog box, type a path and a

file name in the Destinations on disk box, and then click OK.  In the Script list, click Script Action to Job.  In the New Job dialog box, click Steps under Select a page, and then click Edit if you want to change the job parameters.

Note In the Job Step Properties - 1 dialog box, you can see the backup command.  Under Select a page, click Schedules, and then click New.  In the New Job Schedule dialog box, type the job name in the Name box, specify the job schedule, and then click OK.

 Note If you want to configure alerts or

notifications, you can click Alerts or Notifications under Select a page.  Click OK two times.

You receive the following message:  Note To verify the backup job, expand SQL Server Agent, and then expand Jobs. When you do this, the SQL Server Agent service must be running.

How to: Schedule a Job (SQL Server Management Studio)
 To create and attach a schedule to a job:

In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that instance.
Expand SQL Server Agent, expand Jobs, right-click the job you want to schedule, and click Properties.
Select the Schedules page, and then click New.
In the Name box, type a name for the new schedule.
Clear the Enabled check box if you do not want the schedule to take effect immediately following its creation.

 For Schedule Type, select one of the following:


Click Start automatically when SQL Server Agent starts to start the job when the SQL Server Agent service is started.
Click Start whenever the CPUs become idle to start the job when the CPUs reach an idle condition.
Click Recurring if you want a schedule to run repeatedly. To set the recurring schedule, complete the Frequency, Daily Frequency, and Duration groups on the dialog.
Click One time if you want the schedule to run only once. To set the One time schedule, complete the One-time occurrence group on the dialog.
 To attach a schedule to a job:

In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that instance.
Expand SQL Server Agent, expand Jobs, right-click the job that you want to schedule, and click Properties.
Select the Schedules page, and then click Pick.
Select the schedule that you want to attach, and then click OK.
In the Job Properties dialog box, double-click the attached schedule.
Verify that Start date is set correctly. If it is not, set the date when you want the schedule to start, and then click OK.
In the Job Properties dialog box, click OK.

E-Mail Functionality in SQL Server 2005
 Sending an e-mail has become very

important in any system for purposes such as sending notifications. SQL Server database has an integrated mailing system. With the arrival of SQL Server 2005, users now have the new functionality of Database Mail, which is different from SQL Server 2000 SQL Mail. The purpose of this article is to introduce Database Mail and highlight the advantages of using it over legacy SQL Mail.

Issues With SQL Mail
 If you have experience in SQL Server 2000 SQL

Mail, you will know the headaches of SQL Mail. Personally, I have not used SQL Mail much recently due to the implementation difficulties. Outlook installations, Messaging Application Programming Interface (MAPI) profiles, third party Simple Mail Transfer Protocol (SMTP) connector, and extended stored procedures are all needed for SQL Mail. More importantly, SQL Mail will degrade SQL Server performance.  Check out KB article 315886 for common SQL Mail problems. Due to these, users were forced to look for other means such as stored procedures with CDO to send mail from SQL Server.

Features of Database Mail
 Before going into the detail about configuring Database Mail, it

is worth highlighting the main features:  Database Mail can be configured with multiple profiles and multiple SMTP accounts, which can be on several SMTP servers. In the case of failure of one SMTP server, the next available server will take up the task of sending e-mails. This increases the reliability of the mailing system.  SQL Server continues to queue messages when the external mailing process fails. Whenever the process is successful, it starts to send queued messages.  Mailing is an external process so it does not decrease your database performance. This external process is handled by an executable called DatabaseMail90.Exe located in the MSSQL\Binn directory.

 Availability of an auditing facility is a major enhancement in

Database Mail. Previously, DBAs could not verify whether the system had sent an e-mail. All mail events are logged so that DBAs can easily view the mail history. In addition, DBAs can view the errors to fix SMTP-related issues. Plus, there is the capability to send HTML messages.  Database Mail has the option of limiting file sizes to prevent sending large files that would degrade mail server performance. In addition, you have the option of limiting files by their extensions; for example, .exe and .com files can be prevented from being sent from the database server.


Enabling Database Mail
 In SQL Server 2005, Database Mail is disabled by

default. So you have to enable it after installation. I believe it is not provided at installation because of security reasons. There are several ways of enabling it.  One way is from the SQL Server Surface Area Configuration (SSSAC), which is located under Configuration Tools of SQL Server 2005 installation. Run SSSAC and select Surface Area Configuration for Features, select Database Mail from the SQL Server instance you need and then select Enable Database Mail stored procedure option. This means that Database Mail is enabled for a particular SQL Server instance.

Another option is selecting from SQL Server Management Studio (SSMS).

 By right-clicking Database Mail and selecting

the Configure Database Mail option, you will be prompted to enable this feature if it was not already enabled. This is probably the easiest of all the available options.
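Once Database Mail is enabled, a profile and an SMTP account still have to be created before mail can be sent. As a minimal sketch, the whole setup can also be done in Transact-SQL; the profile name, account name, SMTP server, and addresses below are placeholders only.

USE msdb
GO
-- Enable the Database Mail extended stored procedures (same effect as the GUI option).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Database Mail XPs', 1
RECONFIGURE
GO
-- Create an SMTP account and a profile, then link them together.
EXEC dbo.sysmail_add_account_sp
    @account_name = 'DBA Account',
    @email_address = 'dba@example.com',
    @display_name = 'SQL Server DBA',
    @mailserver_name = 'smtp.example.com'
EXEC dbo.sysmail_add_profile_sp @profile_name = 'DBA Profile'
EXEC dbo.sysmail_add_profileaccount_sp
    @profile_name = 'DBA Profile',
    @account_name = 'DBA Account',
    @sequence_number = 1
GO
-- Send a test message.
EXEC dbo.sp_send_dbmail
    @profile_name = 'DBA Profile',
    @recipients = 'dba@example.com',
    @subject = 'Database Mail test',
    @body = 'Database Mail is configured.'
GO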
