
SQL Server Security


SQL Server 2000 Security - Part 1 (Net Libraries)
April 13, 2004


One of the most interesting phenomena in the computing arena over the last few years is the growing concern for security, which is now treated on a par with efficiency and cost (in fact, security has become one of the primary factors in evaluating any technology-based solution). The main driving factors behind this trend are the increasing number of computer worms and viruses (combined with the ubiquity of the Internet), the financial impact of their destructive nature, as well as mounting international political tensions. Microsoft's Trustworthy Computing initiative, launched in early 2002, triggered major programming efforts geared towards securing the Windows Server 2003 operating system code; however, signs of this tendency were present in a number of other, previously released products. In this series of articles, we will focus on security features in SQL Server 2000, starting with the most basic configuration settings and their security implications. In particular, we will look into authentication-related issues, affected by the choice of Net Libraries, authentication mode, and security context of the SQL Server service account.
Net Libraries

From the OSI application layer perspective, commands between SQL Server 2000 and its clients are exchanged in the form of Tabular Data Stream (TDS) packets. TDS packets, which are created and read by the SQL Server OLE DB provider, ODBC driver, or DB-Library DLL, are then handed over to Net Library protocol DLLs. Net Libraries facilitate interprocess communication (IPC) by providing a mechanism to exchange TDS packets between SQL Server and its clients, so they require (in most cases, with the exception of such IPC methods as shared memory or Local Procedure Calls) a network protocol to carry this communication over a network. This is the reason why the majority of Net Libraries are named after underlying network protocols, such as TCP/IP, IPX/SPX (or rather its Microsoft implementation called NWLink), AppleTalk (for communication with Macintosh systems), or Banyan Vines (used in legacy Banyan Vines environments). There are, however, a few exceptions, such as the Named Pipes and Multiprotocol Net Libraries, which can operate using any of the most popular network/transport protocols offered by Microsoft (such as TCP/IP, NWLink, or NetBEUI) or via shared memory (for local server access). In addition, SQL Server 2000 introduces the new SuperSocket Net Library, unique in its characteristics because it functions strictly as a helper to other Net Libraries rather than operating on its own (its sole purpose is to encrypt data using the Secure Sockets Layer mechanism). We will discuss its features in more detail later in this article.

As you can expect, a server and its clients need to be set up with at least one matching Net Library (with equivalent parameters) in order to be able to communicate. Server Net Library configuration is done with the SQL Server Network Utility; its client-based counterpart is called the SQL Server Client Network Utility. They both contain "General" and "Network Libraries" tabs, where the "General" tab allows enabling (or disabling) individual Net Library protocols and modifying their parameters, and the "Network Libraries" tab lists details of the DLL files implementing each of the Net Libraries. The Client Network Utility has two additional tabs - "Alias," which allows creating aliases for connecting to different servers with distinct connection settings, and "DB-Library Options," listing DB-Library DLL information and providing limited access to its options.

Even though both client and server can have multiple Net Libraries enabled (making it possible for differently configured clients and servers to communicate), it is beneficial from a security point of view to limit the total number of different Net Libraries used in your environment (since each one constitutes a potential vulnerability). Limiting the number of Net Libraries on a client to only those used by its server also speeds up the initial connection, since it allows the first attempt to succeed.
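
The payoff of trimming the client's library list can be pictured as a simple ordered-matching exercise. The Python sketch below is purely illustrative (there is no such client API; the protocol names are just labels); it counts how many attempts a client makes before hitting a Net Library the server also has enabled:

```python
# Sketch: a client walks its enabled Net Libraries in configured order and
# connects on the first one the server also has enabled. A leaner client
# list means the first attempt succeeds.

def attempts_until_connect(client_libs, server_libs):
    """Return the number of connection attempts made (0 = no common library)."""
    server_set = set(server_libs)
    for attempt, lib in enumerate(client_libs, start=1):
        if lib in server_set:
            return attempt
    return 0

# A client configured with only the server's library connects immediately:
print(attempts_until_connect(["TCP/IP"], ["TCP/IP"]))                            # 1
# A client carrying extra libraries wastes attempts before succeeding:
print(attempts_until_connect(["Named Pipes", "NWLink", "TCP/IP"], ["TCP/IP"]))   # 3
```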

In order to determine which Net Library you should use, consider the following criteria:

* performance: TCP/IP and IPX/SPX Net Libraries are slightly faster than their counterparts due to their implementation (they bypass Net Library Router internal core component, utilized by other types of Net Libraries).
* ability to support named instances of SQL Server (you can install multiple instances of SQL Server 2000 on the same physical computer): not provided in the Multiprotocol, AppleTalk, and Banyan Vines Net Libraries.
* authentication:
o dependency on a secure channel between client and server computers: the Named Pipes and Multiprotocol Net Libraries require that a Windows-authenticated connection already exist between the client and server computers before authentication can be attempted on the SQL Server level. This means that while these two Net Libraries will work without any problems in single- or trusted multi-domain environments (assuming a user connecting to a SQL Server is logged on with appropriate domain credentials), they will fail in a situation where no trusted connection exists between the two systems. Note that, on one hand, this complicates connecting to a SQL Server in a distributed environment; on the other, it provides an additional level of security by preventing access from rogue computers.
o support for delegation: available only with the TCP/IP Net Library (due to the fact that delegation is based on having a Service Principal Name assigned to the SQL Service account, linked to a specific IP address and port combination). Delegation is necessary in order for a local server to impersonate logged-on users when they run distributed queries against linked servers (we will describe delegation and its impact on linked server functionality in more detail in our next article).
* support for encryption: while the Multiprotocol Net Library can still be used to encrypt data transmitted between SQL Server 2000 and its clients (just as in previous versions of SQL Server), it is no longer the only option providing this capability. In SQL Server 2000, you can also encrypt data by configuring Net Libraries with Secure Sockets Layer (SSL), which takes advantage of the previously mentioned SuperSocket Net Library.

The Multiprotocol Net Library utilizes the Windows encryption API and can be turned on by simply selecting the "Enable encryption" checkbox in the Multiprotocol Net Library Properties dialog box, once you enable the Multiprotocol Net Library using the SQL Server Network Utility (obviously, your clients need to have the Multiprotocol Net Library enabled as well). The Secure Sockets Layer option offers several advantages, such as stronger encryption, message integrity, and server authentication, but it is also more complex to configure. More specifically, it requires installing a certificate issued by a Certificate Authority (CA) on the Windows system hosting SQL Server 2000 and ensuring that clients are configured to trust the issuing CA. To implement SSL-based encryption for communication between SQL Server 2000 and its clients, you need to follow these general steps (for more detailed instructions, refer to Microsoft Knowledge Base articles Q276553 and Q316898, which describe two different implementation methods):

o decide which type of CA is appropriate for your environment. You can choose between the Microsoft CA server, available at no additional cost on the Windows 2000 and 2003 platforms, and commercial, third-party certificates issued by CA companies such as VeriSign or Thawte. The first option is ideal if your clients are members of the same or a trusted domain, since this greatly simplifies certificate deployment (and clients can be configured to automatically trust the CA that issued the certificate). The latter is suitable for situations where the client population includes systems outside of your management scope (since it is likely they will already trust the third-party, commercial CA). Its main drawback is the fact that you need to purchase certificates (which are free with a Microsoft-based CA).
o obtain a certificate for server authentication purposes (the server name on the certificate needs to match the fully qualified DNS name of your SQL Server) and install it on the Windows server hosting SQL Server 2000. When using a Windows-based CA, you can simplify this step by requesting the certificate after logging on to the SQL Server with the MSSQLServer service account and connecting to your CA server from Internet Explorer (using the http://CAServerName/certsrv URL path, where CAServerName is the name of your CA server).
o restart the SQL Server service (the certificate is read during every service startup in order to become available)
o configure the issuing CA as the Trusted Root Authority on each client. This can be done by exporting from the SQL Server computer the Trusted Root Certificate Authority of the newly installed server certificate, and then, importing it to client computers.
o if the Windows system hosting SQL Server 2000 has multiple certificates installed, you will need to specify which one is to be used by SQL Server. This can be done with the SETCERT.EXE utility from the SQL Server 2000 Resource Kit. Alternatively, under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\SuperSocketNetLib, you can create a value (of REG_BINARY data type) and populate it with the thumbprint of the certificate.
o select the "Force protocol encryption" check box in the Server Network Utility if you want to encrypt all incoming server connections, or use the same checkbox in the Client Network Utility on your client computers if you want to ensure that encryption is used only for the connections they initiate.

Note that, in addition to encryption, you can further secure packets sent between SQL Server and its clients with IP Security (IPSec), which protects communication down to the network layer (SSL encryption operates at the session layer of the OSI model, so it does not secure the network information portion of the packet) and provides protection against certain types of exploits that SSL does not eliminate (such as man-in-the-middle attacks). On the other hand, keep in mind that every level of encryption has a negative impact on performance, since each requires extra processing power.

In general, you should try to use the TCP/IP Net Library whenever possible, due to its optimum speed and security characteristics. With the TCP/IP Net Library, you can also change the default port (from TCP 1433) and choose the "Hide server" option, which switches the port to TCP 2433. Note that this also requires SQL clients to switch to the same port using the Client Network Utility (e.g., by creating an alias). Unfortunately, using the "Hide server" option also has serious drawbacks (check Microsoft Knowledge Base articles 308091 and 814064 for details) and is not recommended by Microsoft. Another consideration applicable to the TCP/IP Net Library is blocking UDP port 1434 (the one exploited by the notorious Slammer worm in January 2003) on your company's network perimeter.
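
The alias mechanism mentioned above amounts to a small client-side lookup table mapping a friendly name to a server and port. The Python sketch below (server and alias names are hypothetical) illustrates how a client configured with an alias reaches a server listening on the non-default port:

```python
DEFAULT_PORT = 1433   # standard SQL Server TCP port
HIDDEN_PORT = 2433    # port used when "Hide server" is selected

# Client-side aliases, as they might be created in the Client Network
# Utility: alias name -> (server, port). Names here are made up.
aliases = {
    "SalesDB": ("sqlsrv01", HIDDEN_PORT),   # server with "Hide server" enabled
    "HRDB": ("sqlsrv02", DEFAULT_PORT),
}

def resolve(name):
    """Return (server, port); unaliased names fall back to the default port."""
    return aliases.get(name, (name, DEFAULT_PORT))

print(resolve("SalesDB"))   # ('sqlsrv01', 2433)
print(resolve("sqlsrv03"))  # ('sqlsrv03', 1433)
```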

In this article, we presented the Net Library protocols available in SQL Server 2000 from a security perspective. We will follow with a discussion of the authentication options and their security implications.

SQL Server 2000 Security - Part 2 (Authentication)
By Marcin Policht

Access to SQL Server resources is controlled by two separate mechanisms. The first one is authentication, which determines the identity of a user attempting to connect, based on a verifiable identifier. The second one is authorization, which establishes the level of privileges granted to the login associated with the logged-on user. In this article, we will focus on authentication (authorization will be covered later in the series).

In the case of SQL Server, authentication is handled by comparing credentials provided by a connecting user with a set of entries stored in the sysxlogins table of the SQL Server master database (to get familiar with its content, you might want to use the syslogins view, which presents the data in a friendlier format). The way these credentials are provided depends on the authentication mode, which can be set to one of the following:

* SQL Server authentication - in this case, SQL Server requires that a user specify a name and password at the time the connection is attempted. The login name and a one-way hash of the password need to match one of the entries in the sysxlogins table for the login attempt to be successful.
* Windows authentication - in this case, SQL Server does not prompt a user for credentials, but instead uses an access token assigned at the time the user logged on with a Windows account. This can be a token generated by the Windows operating system on which SQL Server was installed (in the case of a local user account) or by a Windows NT 4.0 or Active Directory domain (if a domain user account is used). The token contains a security identifier (SID), which uniquely identifies the user, as well as the SIDs of local (or domain) groups that the user is a member of. SQL Server compares all of the SIDs stored in the token against entries in the sysxlogins table and, depending on the outcome, grants or denies login privileges. The presence of a matching entry is not sufficient, though, for logon to be allowed, since with Windows authentication it is possible not only to grant login privileges to a specific SID but also to deny them (denying rights always takes precedence over granting them). This means that you can, for example, grant login privileges to a Windows group, but still deny access to the SQL Server to some of its members.
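
The deny-over-grant evaluation described above can be sketched as follows. This is a simplified model, not the actual sysxlogins schema, and the SID values are placeholders:

```python
def windows_login_allowed(token_sids, grants, denies):
    """Grant login only if some token SID is granted and none is denied."""
    token = set(token_sids)
    if token & set(denies):       # an explicit deny always wins
        return False
    return bool(token & set(grants))

# A user in a granted group still fails to log in if their own SID is denied:
group_sid, user_sid = "S-1-5-32-545", "S-1-5-21-1001"
assert windows_login_allowed([user_sid, group_sid], grants=[group_sid], denies=[])
assert not windows_login_allowed([user_sid, group_sid], grants=[group_sid], denies=[user_sid])
```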

SQL Server creates two default logins (entries in the sysxlogins table) at installation time, both granted the highest possible level of administrative privileges. The first one is the sa login, allowing access using SQL Server authentication; the second is BUILTIN\Administrators, representing the SID of the local Administrators Windows group on the SQL Server computer and allowing every member of this group to access the server via Windows authentication. As part of the installation procedure, you are prompted to assign a password to the sa account. Make sure to choose a complex one (the Spida worm took advantage of SQL Servers where sa was assigned a blank password). SQL Server login passwords can be up to 128 characters long and contain any type of ASCII characters; however, you should note that password strength is affected by case sensitivity settings. Case-insensitive installations store hashes of passwords in the sysxlogins table after converting all characters to upper case first, which makes them more vulnerable to brute-force attacks and defeats the purpose of creating mixed-case passwords. As an additional security measure, you might consider modifying the default Windows authentication configuration by removing the BUILTIN\Administrators login, replacing it with another group or user account, and assigning it the sysadmin fixed server role. This is recommended in environments where Windows server and SQL Server administration are handled by separate teams. For more information on this procedure, including possible side effects and cluster-specific caveats, refer to the Microsoft Knowledge Base article 263712.
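
The weakness of case-insensitive password storage can be illustrated with an ordinary hash function. In the sketch below, SHA-1 merely stands in for SQL Server's undocumented pwdencrypt() function (which also salts its output); the point is only that uppercasing before hashing collapses all case variants of a password into one value:

```python
import hashlib

def ci_hash(password):
    # Per the article, case-insensitive installations upper-case the
    # password before hashing, so mixed-case variants become identical.
    return hashlib.sha1(password.upper().encode("utf-16-le")).hexdigest()

def cs_hash(password):
    # Case-sensitive storage preserves the full keyspace.
    return hashlib.sha1(password.encode("utf-16-le")).hexdigest()

assert ci_hash("MyPassWord") == ci_hash("mypassword")   # indistinguishable
assert cs_hash("MyPassWord") != cs_hash("mypassword")   # distinct hashes
```

A brute-force attacker against the case-insensitive hash only needs to try one case variant per candidate password, which is exactly why mixed-case passwords buy nothing there.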

At installation time, you also need to decide on the authentication mode, although you can change it afterwards from the Security tab of the SQL Server Properties dialog box in the SQL Enterprise Manager console (the change requires that the SQL Server service be restarted). The two available options are "SQL Server and Windows" and "Windows only," which means that you cannot disable Windows authentication, regardless of your choice. In either case, you can create new logins from the Logins node in the Security folder of SQL Server Enterprise Manager or with system stored procedures (sp_addlogin for standard SQL logins and sp_grantlogin for Windows-based logins). When creating a new login in the New Login dialog box, you can either select an existing user or group account from any available Windows domain or the local Windows account store on the SQL Server computer (subject to Windows authentication), or type in a new name and password (subject to SQL Server authentication). As mentioned before, when selecting accounts for Windows authentication, you can not only grant, but also explicitly deny access. In the same dialog box, you can assign logins to fixed server roles and set individual database access (using the "Server Roles" and "Database Access" tabs, respectively). We will describe these configuration options when discussing authorization later in this series.

Windows authentication is inherently more secure than SQL Server authentication (and is therefore recommended by Microsoft). Windows credentials are delivered to SQL Server without passing the actual password, while SQL authentication sends the login name and password protected only by a fairly simple obfuscation algorithm, involving conversion to Unicode, bit swapping, and a couple of XOR operations with a constant value (a script reversing the obfuscation, which recovers the original password, can be downloaded from http://www.sqlsecurity.com/Portals/57ad7180-c5e7-49f5-b282-c6475cdb7ee7/decrypt_ODBC_sql.txt). This means that anyone who can capture network traffic carrying client authentication information can easily retrieve it. If you are forced to resort to SQL authentication (when your clients are running legacy operating systems such as Windows 98 or Me, or are not part of the same or a trusted domain), you should always encrypt communication between the SQL Server and its clients with the built-in feature of the Multiprotocol Net Library or by implementing SSL (for details, refer to the first article of this series), to protect passwords from being easily captured and exposed.
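
To see why this obfuscation offers no real protection, consider the following sketch. It follows the commonly published description of the TDS password scramble (swap each byte's nibbles, then XOR with the constant 0xA5); exact byte-level details may vary by TDS version, but the key observation holds: no secret is involved, so the transform is trivially reversible by anyone who captures the packet:

```python
def obfuscate(password):
    """Scramble a password the way a TDS client is commonly described to:
    convert to Unicode (UTF-16LE), swap each byte's nibbles, XOR with 0xA5."""
    out = bytearray()
    for b in password.encode("utf-16-le"):
        swapped = ((b << 4) | (b >> 4)) & 0xFF
        out.append(swapped ^ 0xA5)
    return bytes(out)

def deobfuscate(data):
    """Reverse the scramble - since there is no key, an eavesdropper can
    recover the password from captured traffic just as easily."""
    chars = bytearray()
    for b in data:
        b ^= 0xA5
        chars.append(((b << 4) | (b >> 4)) & 0xFF)
    return chars.decode("utf-16-le")

assert deobfuscate(obfuscate("s3cret!")) == "s3cret!"
```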

A Windows domain environment provides a number of additional advantages over the SQL Server 2000 authentication mechanism. For both local and domain Windows accounts, you can implement account policies, enforcing password complexity, maximum and minimum age, history, as well as lockout settings. This reduces the effectiveness of brute-force attacks. Note, however, that the highest level of security is provided by Kerberos authentication, which requires that login accounts reside in a Windows 2000 or 2003 native-mode Active Directory domain and that client computers run Windows 2000 or later. The NTLM authentication protocol used in legacy operating systems is inherently less secure and can be exploited using a number of popular hacking utilities (such as L0phtCrack, currently available as LC4). If you cannot switch to Kerberos, you should implement NTLMv2 (much more resilient than its predecessor), following the instructions in the Microsoft Knowledge Base article 239869.

Another benefit of operating in the native Active Directory environment is the ability to use delegation. In order to understand its functionality, you first need to get familiar with another, closely related feature called impersonation, which is much more prevalent in Windows operating systems. Its primary purpose is to allow system processes and applications to run tasks on behalf of a user. Impersonation is employed whenever a user starts a new process or application, so their tasks execute in the user's security context and their security boundaries reflect the user's privileges. This way, when a user launches a Command Prompt or Windows Explorer, the capabilities of each program are limited by what the user account is allowed to do. However, impersonation has its limitations. While it works fine for processes operating on the same system on which a user is logged on, remote processes (processes created on systems remote to the user) can take advantage of impersonation only when accessing their local resources (resources residing on the same remote computer). In other words, if a user X logged on to computer A launches a process impersonating this user on computer B, then this process is not capable of accessing resources on computer C in the security context of the same user (i.e., user X).

In the context of SQL Server operations, this creates a problem with linked servers (for more information on linked servers, refer to the Books Online) where local and linked servers are set up with Windows Authentication mode. Ideally, in a scenario where all SQL Servers authenticate against the same Active Directory domain (or two Active Directory domains linked by trust relationships) and a user logged on to the first server wants to run a distributed query against a linked server, the same user's credentials should be used to connect to it. Unfortunately, such a configuration is not possible with impersonation. In order to accomplish this goal, you need to resort to delegation.

Several important conditions need to be satisfied in order for delegation to work properly. First of all, since delegation is a function of the Kerberos authentication protocol, your server and user accounts need to reside in a native-mode Active Directory domain or trusted domains (Windows 2000 or 2003). User accounts that will be connecting to linked servers need to be trusted for delegation (this is enabled from the Account tab of the user account's Properties dialog box in the Active Directory Users and Computers console). Accounts of SQL Servers involved in distributed queries must be trusted for delegation (this is done by checking the "Trust computer for delegation" checkbox on the General tab of each computer account's Properties dialog box in the Active Directory Users and Computers console). You also need to ensure that a Service Principal Name (SPN) is associated with each SQL Server instance (with a unique port for each instance, if multiple instances are installed on the same computer). If the SQL Server service runs in the security context of the Local System account, a temporary SPN is created every time the service starts, so no additional configuration is needed. However, if the service uses a local or domain user account, you need to create one with the Windows 2000 Resource Kit utility SETSPN.EXE (downloadable from the Microsoft Web site) and execute it from the Command Prompt using the following syntax:

SETSPN -A MSSQLSvc/SQLServerName:SQLPort SQLServiceAccount

The command listed above generates a new SPN for the SQL Server SQLServerName (which has to be in the form of the fully qualified domain name) listening on TCP port SQLPort and operating in the security context of SQLServiceAccount. In addition, you need to ensure that servers communicate using the TCP/IP Net Library, since it is the only one that supports Kerberos-based authentication.
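
The SPN argument itself is just a fixed-format string. The sketch below assembles it from its parts (server name, port, and account are hypothetical examples), which can help avoid typos when scripting the registration:

```python
def sql_spn(server_fqdn, port=1433):
    """Build the MSSQLSvc SPN string in the format shown above."""
    return f"MSSQLSvc/{server_fqdn}:{port}"

# Example values - substitute your own server FQDN, port, and account:
spn = sql_spn("sqlsrv01.corp.example.com", 1433)
assert spn == "MSSQLSvc/sqlsrv01.corp.example.com:1433"
# Resulting command line: SETSPN -A MSSQLSvc/sqlsrv01.corp.example.com:1433 CORP\sqlsvc
```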

This concludes our coverage of authentication-related topics. In our next article, we will look more closely at the configuration of SQL Server service accounts and their impact on server security.

SQL Server 2000 Security - Part 3 (Service Accounts)
By Marcin Policht

In the previous article of this series, we looked into the authentication mechanisms used to determine the validity of credentials specified by a user connecting to SQL Server 2000. Now, we are going to explore another topic related to authentication, but on a different level - SQL Server service accounts - which determine the security context in which SQL Server processes operate. In particular, we will review the available configuration options and the security implications of each for the following services:

* SQL Server Engine Service (MSSQLServer) - providing the core features necessary for SQL Server to operate in a normal fashion. In the case of a multi-instance configuration, each instance uses a separate service, named MSSQL$InstanceName, where InstanceName is the one assigned during its installation.
* SQL Server Agent Service (SQLServerAgent) - responsible for auxiliary functionality such as monitoring, firing alerts, job management (including multiserver environments), replication, SQLAgentMail, and execution of the xp_cmdshell extended stored procedure as well as ActiveX script and CmdExec-based jobs owned by non-sysadmin accounts. As with the SQL Server Engine Service, in a multi-instance configuration, each instance has its own service, named SQLAgent$InstanceName, where InstanceName is the one assigned during its installation.
* SQL Server Active Directory Helper (MSSQLServerADHelper) - assisting with registering SQL Server and its objects (as well as Analysis Server) in Active Directory. This service serves all instances sharing the same physical computer.
* Full-Text Search Service (MSSearch) - facilitating full-text searches, including creation and management of index catalogs. As with SQL Server Active Directory Helper, there is only one such service per physical computer, regardless of the number of SQL Server instances.
* Microsoft Distributed Transaction Coordinator (MSDTC) - responsible for managing distributed transactions - also shared among all instances on a single physical computer.

Each of these services (just like any other Windows service) is associated with a Windows account, in whose security context it operates. The capabilities of a service are determined by the rights and permissions granted to this account. From a security perspective, you want to limit them to only those that are absolutely necessary, since compromising SQL Server might allow a hacker to take advantage of privileges assigned to the underlying service account.

The SQL Server Engine Service and SQL Server Agent Service accounts are first configured during SQL Server 2000 setup (you can easily alter the original configuration after the installation completes). At that time, you are prompted to choose whether the services will start automatically and which account will be used for each (you can assign a different account to each service). Regarding the latter, you have two options - the Local System account or a Windows user account.

In general, it is not recommended to use the Local System account, as this configuration, if compromised, grants unlimited access to all operating system resources. In addition, since the Local System account is recognized only on the computer where SQL Server 2000 is installed, any attempt by SQL services to connect to remote systems will fail. This effectively prevents the use of any SQL distributed features such as MS Exchange integration (via SQL Mail or SQL Agent Mail) or replication.

A Windows user account can be either a Windows domain account (from a legacy Windows NT 4.0 or a Windows 2000/2003 Active Directory domain) or a local SAM database account (of the local Windows installation where SQL Server 2000 resides). Note that using local Windows accounts (just as with the Local System account) also affects the ability to access remote resources over the network. In effect, domain user accounts are used most commonly (in addition, management of local accounts in larger environments tends to be cumbersome).

When selecting service accounts and their configuration, you should keep in mind the following guidelines:

Set each account's password to never expire (using the Local Users and Groups or Active Directory Users and Computers MMC snap-in, depending on the type of account). Note that if you intend to change the password, you will also need to type the new one into SQL Enterprise Manager or the Services MMC snap-in (the Services MMC snap-in can be used if SQL Server is not running).

Always use SQL Server Enterprise Manager to designate a new account for the SQL Server or SQL Server Agent services (from the Security tab of the SQL Server Properties dialog box and the General tab of the SQL Server Agent Properties dialog box, respectively). This way, the account will automatically be granted the proper user rights, permissions on the relevant folders (hosting the SQL Server 2000 installation directory and databases), and registry entries. If for some reason this is not possible (e.g., in the case of MSDE), you will need to perform the following steps manually in order to obtain the same results (a detailed description of this process is published in the Microsoft Knowledge Base article Q283811):

* Select an existing (or create a new) user account which will be used by the SQL Server Service and/or SQL Server Agent Service.
* Assign it to the SQL Server and/or SQL Server Agent services using Services MMC snap-in (part of the Administrative Tools menu) - but do not start these services yet.
* Grant the service account the appropriate user rights - with an Active Directory based or local group policy, depending on the account type. After you launch the Group Policy Editor, containing either local or Active Directory container-specific settings, drill down in the Computer Configuration portion of the policy to the Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment folder. Double-click each of the relevant user rights (outlined in the KB article Q283811) in the details pane and add the service account name to the list of privileged accounts.
* Grant the service account permissions to the registry keys outlined in the KB article Q283811, with the help of the Registry Editor.
* Grant the service account permissions to the folders and their content outlined in the KB article Q283811 (assuming that you followed Microsoft recommendations and installed SQL Server on an NTFS-formatted partition).
* You might need to add the service account to the SQL Server 2000 sysadmin fixed server role. You can handle this either with SQL Enterprise Manager (e.g., by launching the Create Login Wizard) or with Query Analyzer (by executing the sp_grantlogin and sp_addsrvrolemember stored procedures, as demonstrated in the KB article Q283811).

Last but not least - avoid adding the account to local privileged groups (such as Administrators or Power Users), unless absolutely necessary. This membership is not required unless you are planning on performing the following actions:

* publishing the server or any of its objects (e.g. databases or replication articles) in Active Directory. For more information on this functionality, refer to one of our earlier articles. In this case, you need to ensure that the SQL Server Service account is a member of the local Administrators or Power Users group (in order to be able to start SQL Server Active Directory Helper Service, which takes care of the registration process).
* executing the xp_cmdshell extended stored procedure or ActiveX scripting and CmdExec jobs owned by users who are not part of the sysadmin fixed server role (we will discuss server and database roles in our next article). In this case, you need to ensure that the SQL Server Agent Service account has the "Act as Part of the Operating System" and "Replace a Process Level Token" privileges, in order for jobs to execute in the security context of their owner's account. It also needs to be a member of the local Administrators group, in order to be able to retrieve the SQL Agent proxy account credentials (defined on the Job System tab of the SQL Server Agent Properties dialog box in SQL Server Enterprise Manager), which are stored locally in the form of LSA secrets (a Windows-specific secure mechanism for storing credentials, which makes them accessible only to members of the local Administrators group).
* using the AutoRestart feature of SQL Server Agent account (available from the Advanced tab of the SQL Server Agent Properties dialog box in the SQL Server Enterprise Manager). In this case, the SQL Server Agent service account needs to be a member of the local Administrators group.
* applying "Start whenever the CPU(s) become idle" setting when scheduling SQL Server Agent jobs (available from the job schedule Properties dialog box). In this case, the SQL Server Agent service account needs to be a member of the local Administrators group.
* using replication with the default snapshot folder and remote Distribution and Merge Agents. By default, the snapshot folder is set to C:\Program Files\Microsoft SQL Server\MSSQL\Repldata on the Distributor computer and is accessed via the C$ administrative share (the drive letter might differ, depending on the installation directory of the SQL Server 2000 instance). Since Distribution and Merge Agents operate in the security context of the SQL Server Agent account, that account needs to be a member of the local Administrators group on the Distributor in order to access this share.
* implementing multiserver administration, which provides the ability to manage SQL Server Agent jobs across environments consisting of multiple SQL Servers from a single computer, known as the master server (MSX). A master server functions as the source of jobs, which are copied to all target (TSX) servers and executed. The same master server functions also as a repository for jobs status for all of its target servers. The procedure of authenticating the connection between target servers and their master changed with SQL Server 2000 SP3.
Prior to the release of SQL Server 2000 SP3, a pre-defined TSX SQL login was auto generated and SQL Authentication enforced during MultiServer Setup Wizard (which is the primary method to configure the MSX environment). In this case, the SQL Server Agent Service account needs to be a member of the local Administrators group. This requirement results from the fact that when SQL Authentication is used, the name and password of TSX login are stored locally on the master server in the form of an LSA secret (just as previously mentioned SQL Agent proxy account) and can be retrieved only by members of the local Administrators group. Since SQL Server Agent Service is responsible for managing MSX operations, it has to be a member of the local Administrators group in order to retrieve TSX login authentication information.
Starting with SQL Server 2000 SP3, SQL Server Authentication is no longer enforced when running MultiServer Setup Wizard and the TSX account is no longer auto-generated. Instead, you have an option of selecting either SQL or Windows Authentication. In the case of Windows Authentication, SQL Server Agent Service account is used to communicate between Master and Target servers. An additional benefit (besides increased security resulting from eliminating inherently less-secure SQL Authentication) is the fact that the SQL Server Agent Service account no longer needs to be a member of the local Administrators group on the master server.

As far as the three remaining services we listed at the beginning of this article are concerned, you should enable them only if you rely on their functionality. SQL Server Active Directory Helper (MSSQLServerADHelper) and Full-Text Search Services operate in the security context of the Local System account. Distributed Transaction Coordinator Service can be configured to use the Network Service account, which gives it sufficient network access and, at the same time, substantially limits its local privileges (and potential exposure to new vulnerabilities).

In our next article, we will discuss the process of authorization, which follows successful authentication and determines the level of access granted to SQL Server and its objects.

SQL Server 2000 Security - Part 4 (Authorization)
By Marcin Policht

In our series so far, we have been discussing topics related to authentication, which establishes the identity of a login attempting to access SQL Server. Starting with this article, we shift our focus to authorization, which takes place once the submitted credentials (in the form of a login name and password or a Windows access token, depending on the authentication mechanism) are validated. The authorization process determines the level of privileges granted over SQL Server, its databases, and all of their manageable properties and objects. This determination is based on checking associations between a login name and server roles for SQL Server-level privileges, and between a login name and database users and roles for database-level privileges (there are also application roles, which we will cover separately). The first of these two processes occurs as soon as a successful login takes place; the second is triggered for every database accessed afterwards. Let's analyze server roles in more detail (we will look into database users, database roles, and application roles next).

Server roles are used to control delegation of server-wide management features and settings. SQL Server designers decided to limit room for potential misconfiguration mistakes and disallowed the creation of custom roles. Instead, they looked into common tasks that are performed routinely as part of server administration and defined corresponding fixed (non-modifiable) server roles:

* sysadmin - grants its members complete control over the SQL Server, its databases, and all of their objects. The role initially contains two logins - the sa SQL login and the local BUILTIN\Administrators Windows login. You can assign additional logins to it (both SQL and Windows), and you can also remove the local BUILTIN\Administrators Windows group from it (if you intend to separate Windows and SQL Server administration); however, sa login membership cannot be altered (and the account cannot be deleted, disabled, or renamed).
* serveradmin - intended for users responsible for the configuration of SQL Server. This typically consists of modifying server-wide settings and options, such as, the amount of memory or processor time allocated to the SQL Server or query governor behavior (in essence, all operations which can be performed with sp_configure stored procedure). Members of this role can also modify table options (covered by sp_tableoption stored procedure).
* setupadmin - gives its members the power to control configuration settings for linked servers and stored procedures to be executed at startup.
* securityadmin - provides the ability to manage security related settings, such as changing authentication mode, creating logins or database users, and granting, denying, or revoking permissions to create databases (execute CREATE DATABASE statement).
* processadmin - limited to terminating processes with the KILL command (from T-SQL) or via graphical interface in SQL Server Enterprise Manager.
* dbcreator - permits its members to create, drop, and modify databases (execute CREATE DATABASE, DROP DATABASE, and ALTER DATABASE statements).
* diskadmin - exists strictly for backward compatibility purposes - allowing its members to manage disk devices created in SQL Server 6.5.
* bulkadmin - grants permissions to execute the BULK INSERT command, used to import large quantities of data into SQL Server.

You can assign a login to any of the fixed server roles listed above using either SQL Enterprise Manager (by expanding the Security folder, double-clicking the appropriate login listed under the Logins node, selecting the Server Roles tab of the Login Properties dialog box, and checking the box next to the server role to which that login should be assigned) or from a Query Analyzer window (by running the sp_addsrvrolemember stored procedure).
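For example, the Query Analyzer equivalent of the Enterprise Manager steps above might look as follows (the Windows login name is hypothetical):

```sql
-- Add a hypothetical Windows login to the dbcreator fixed server role
EXEC sp_addsrvrolemember 'MYDOMAIN\JaneDoe', 'dbcreator'

-- Verify the assignment by listing the current members of the role
EXEC sp_helpsrvrolemember 'dbcreator'
```

The assignment can be reversed with the sp_dropsrvrolemember stored procedure, which takes the same two parameters.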

Access rights to each database are determined based on a number of factors. The first one is the association between a login used during initial authentication and the name of a database user. When a database is created, the only account defined in it by default is dbo (of course, this can be changed by including additional users in the model database), which maps to the members of the sysadmin fixed server role and grants unlimited access to the database and all of its objects. For non-sysadmin logins, access can be granted by mapping them to the dbo user (only if such a login requires full management rights to the database), by creating a separate user account (by selecting the New Database User option from the context-sensitive menu of the Users container under the database node in Enterprise Manager) and an appropriate mapping (associating the login name and user name in the Database User Properties dialog box), or by creating a guest account. The last option should be avoided, since it grants access to the database to anyone with login rights to the server (for logins without an explicit mapping to a database user, database-level rights are equivalent to those associated with the guest account).
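As a sketch of the recommended approach (login, user, and database names are hypothetical), the same mapping can be created from Query Analyzer:

```sql
-- Create a SQL Server login for an existing Windows account
EXEC sp_grantlogin 'MYDOMAIN\JaneDoe'

-- Switch to the target database and map the login to a database user
USE Northwind
EXEC sp_grantdbaccess 'MYDOMAIN\JaneDoe', 'JaneDoe'
```

The second parameter of sp_grantdbaccess specifies the database user name; if it is omitted, the login name is used.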

This process of determining which database user should be used is performed by comparing the login stored in the sysxlogins table of the master database with a user name contained in the sysusers table, residing in each database. In the case of a match, the login is granted access to the database with the rights assigned to the matching user account. Otherwise, access is denied, unless the guest account exists in the target database, in which case its access rights are used.

The next factor determining access rights to a database takes into consideration database roles. Here, SQL Server designers granted more flexibility than with server roles, by allowing two different role types - standard and application. You also have an option of defining your own custom roles, in addition to fixed (predefined and non-modifiable) ones, which include:

* db_accessadmin - intended for administrators responsible for granting and revoking access to the database (which also implies the ability to create or drop users).
* db_backupoperator - provides the ability to back up a database (but not restore it).
* db_datareader - allows reading all database tables and views (executing SELECT statement against them).
* db_datawriter - allows modifying the content of all database tables and views (executing INSERT, UPDATE and DELETE statements against them). Since the functionality provided by this role includes the ability to delete all data, you should control its membership very carefully.
* db_ddladmin - grants its members the ability to execute any Data Definition Language (DDL) command (which results in the creation of database objects, such as tables, triggers, stored procedures, etc.). Members of this role who issue CREATE statements automatically become the owners of the resulting objects, which, in turn, means they have full control over them.
* db_denydatareader - serves the reverse function of db_datareader - it denies read access to all tables and views (through DENY SELECT permissions). Since the impact of membership in this role cannot be overridden by granting permissions on individual objects, it serves as a convenient mechanism to secure them against particular users or groups of users.
* db_denydatawriter - serves the reverse function of db_datawriter - it denies write access to all tables and views (through DENY INSERT, DENY UPDATE and DENY DELETE permissions). Just as with the db_denydatareader role, the impact of membership in this role cannot be overridden by granting permissions on individual objects, so it can also be conveniently used to secure them against particular users or groups of users.
* db_owner - the most powerful role on the database level (equivalent to the sysadmin on the SQL server level) with full administrative control over all database objects and operations. The role initially contains a single user dbo, but additional user accounts can be added to it.
* db_securityadmin - its members have power to grant, revoke and deny permissions on every object in the database, in addition to managing membership of fixed and custom roles.
* public - contains all database users and roles (resembling the Windows Everyone group) and its membership cannot be altered. This is important to remember, since it means that all users who are allowed to access a particular database are automatically granted all permissions assigned to the public role. In general, you should avoid granting permissions to the public role.

In order to define your own custom database roles, you can use either Enterprise Manager (by selecting the New Database Role option from the context sensitive menu of the Roles subnode listed under respective database node) or Query Analyzer (by executing sp_addrole stored procedure). Once the roles are defined, you add the users to them either from the Database Role Properties dialog box or with sp_addrolemember stored procedure (this applies to both fixed and custom standard database roles).
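To illustrate the Query Analyzer approach (role, user, and table names are hypothetical):

```sql
-- Create a custom standard database role in the current database
EXEC sp_addrole 'SalesReaders'

-- Add an existing database user to the new role
EXEC sp_addrolemember 'SalesReaders', 'JaneDoe'

-- Grant permissions to the role, rather than to individual users
GRANT SELECT ON Orders TO SalesReaders
```

Managing permissions at the role level, as in the last statement, simplifies maintenance when users join or leave.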

In our next article, we will continue our discussion of database roles and ways to manage permissions to database objects and operations.

SQL Server 2000 Security - Part 5 (Application roles)
By Marcin Policht

In our previous article, we started our discussion about SQL Server 2000 authorization, focusing initially on permissions assigned to standard fixed server and database roles. We also explained that on the database level, SQL Server 2000 provides more flexibility, since it allows you to define your own custom roles, with the help of the sp_addrole stored procedure (or via the graphical interface of SQL Server Enterprise Manager). In addition to these standard roles, which control access to SQL Server and database objects on a per-login and per-user basis, respectively, you also have available database-level application roles, which control access to database objects and operations on a per-application basis. This is most commonly used in client-server scenarios, where a client application is invoked by multiple clients and direct database access (outside of the application) by the same clients needs to be prevented.

Similar to the standard database roles previously described, application roles are used to secure access rights on the database level, and therefore they have to be created separately for each database in which they are needed. What makes them unique is that their membership is not explicitly specified. Instead, after the initial login to the SQL Server, a client who wants to use a specific application role must connect to the target database and activate the role by running the sp_setapprole stored procedure (this client is typically the application itself, with the SQL login, connection to a database, and invocation of the stored procedure included in its code). At that point, all user permissions to the database are replaced with the permissions granted to the application role. This remains in effect as long as the role remains active (until the client connection closes). Note that, if during this time, access to another database is required, it can only be granted via a guest user account (assuming, of course, that this account exists in the other database). To prevent unauthorized use of application roles, they are secured with a password assigned during their creation, which needs to be provided every time sp_setapprole is invoked. Even though you have an option of obfuscating this password, the algorithm used in such cases is identical to the one used to secure login credentials with SQL Server authentication, and as such, it can be fairly easily decrypted with utilities downloadable from the Internet (for details, refer to one of our previous articles). In order to protect credentials from being discovered while they are transferred over a network, you should use Secure Sockets Layer or built-in encryption (depending on the Net Library selected). You should also try to avoid hardcoding passwords into applications, since this introduces a potential vulnerability and makes it difficult to alter them (it is a good practice to change passwords on a regular basis). One way to accomplish this is to store the encrypted password outside of the application (e.g. in the registry of the SQL Server computer) and use a decryption mechanism, activated every time the application is launched.

To create an application role, connect to a target database and execute the sp_addapprole stored procedure (note that this is distinct from sp_addrole, which creates standard roles and does not accept a password). sp_addapprole has two parameters - @rolename and @password, intended to contain the name and password of the role, respectively. Alternatively, from SQL Server Enterprise Manager, bring up the context-sensitive menu of the Roles node of the target database and select the New Role option. In the Database Role Properties dialog box, choose the Application role type and fill out the Name and Password text boxes.

To activate an application role, execute the sp_setapprole stored procedure. sp_setapprole takes three parameters - @rolename, @password, and @encrypt, where the last one indicates whether encryption should take place (by default, this is not the case). If you decide to use encryption, set the @encrypt parameter to the string 'Odbc' and the @password parameter to {Encrypt N'password'}. This converts the application password ('password' in this example) to Unicode and calls the Encrypt function to encrypt it. Note that this function cannot be used when connecting to SQL Server 2000 via DB-Library - only via the ODBC driver or the OLE DB provider.
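Putting these pieces together, a minimal sketch of creating and activating an application role might look as follows (the role name, password, and table name are hypothetical):

```sql
-- In the target database, create the application role with a password
EXEC sp_addapprole 'OrderEntryApp', 'P@ssw0rd'

-- Grant permissions to the role as needed
GRANT SELECT, INSERT ON Orders TO OrderEntryApp

-- At run time, the client application activates the role on its connection,
-- obfuscating the password with the ODBC Encrypt escape sequence
EXEC sp_setapprole @rolename = 'OrderEntryApp',
                   @password = {Encrypt N'P@ssw0rd'},
                   @encrypt = 'odbc'
```

Once sp_setapprole succeeds, the connection operates with the role's permissions only, until it is closed.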

Now that we have an understanding of different authorization mechanisms available in SQL Server 2000, let's look into recommendations regarding their use and management:

* Monitor existing SQL Server logins and database users by analyzing mappings between them. This task can be simplified with the help of the sp_change_users_login stored procedure, whose outcome depends on the @Action input parameter, which can take one of the following values:
o Auto_Fix - fixes missing mappings by comparing entries in the sysxlogins table of the master database and the sysusers table in the current database. Since this can potentially grant an unintended level of database access to existing logins, based simply on a coincidental match between the two, it should be used with caution.
o Report - provides a listing of all users in the current database with no mapping to a SQL Server login. This might happen when transferring databases between SQL Servers with attach/detach operations or as a result of running the sp_grantdbaccess stored procedure (the graphical interface in SQL Server Enterprise Manager has a built-in safety feature, which prevents the creation of user accounts without linking them at the same time to an existing SQL Server login).
o Update_One - creates a mapping between a user in the current database and a SQL Server login (provided as the @UserNamePattern and @LoginName input parameters).
* Assign permissions to and ownership of database objects using database roles, rather than database users (the same way Windows groups are used to control the level of privileges). This simplifies maintenance tasks when one user needs to be replaced by another or when a user needs to be dropped.
* Keep track of membership in privileged fixed server and database roles, and of the individuals who belong to them. This might involve checking the membership of Windows groups, if they (instead of individual Windows users) are mapped to the SQL logins or database users assigned to the roles.
* Make sure you fully understand the set of privileges granted to fixed server and database roles (and keep a record of permissions granted to your custom database roles). Take advantage of the sp_dbfixedrolepermission stored procedure, which provides a comprehensive listing of permissions for all fixed database roles.
* Be aware of intricacies involving security context of database access with Windows authentication. In particular, when a Windows user is allowed to login to SQL Server based on the group membership (i.e. user's Windows group, rather than user's Windows account, is listed in the sysxlogins master database table), connection to a target database will be established using one of the following methods:
o with a database user account that is mapped directly to a Windows user account (rather than the Windows group that this user is a member of), providing that such database user account exists. Note that creating database users without having corresponding SQL Server logins defined (for Windows accounts) is not possible using SQL Server Enterprise Manager, however you can accomplish it by running sp_grantdbaccess stored procedure (and specify an appropriate Windows account as its only parameter).
o if a database user account matching a Windows user account does not exist, SQL Server searches for a database user account mapped to the corresponding Windows group login. Providing that such mapping exists, a Windows user is connected with this user account to the database.
o finally, if both methods fail (i.e. the target database does not contain a database user mapped to a Windows user or group account), then connection is possible only if the guest account exists in the target database.

Note that this situation is even more complex if there are multiple Windows groups through which login to SQL Server is permitted (i.e. there are multiple SQL Server logins mapped to Windows groups that a Windows user is a member of). In this case, if there is no defined database user account matching the Windows user account, but both Windows groups have explicit database access (each have a corresponding user account in a target database), a database user account that will be used to connect to the database is difficult to predict, since it depends purely on the order in which SQL Server reads entries in the Windows access token, associated with the Windows user.

* Avoid granting permissions to the public role (the same way you should avoid granting permissions to the Windows built-in Everyone group), since it contains all database users, including the guest account.
* Limit to an absolute minimum the use of guest accounts in your databases. Note, however, that, by default, this account exists in all system databases with the exception of model.
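Several of the checks recommended above can be carried out directly from Query Analyzer; for example (the role names shown are fixed roles, so these calls involve no hypothetical objects):

```sql
-- List members of a privileged fixed server role
EXEC sp_helpsrvrolemember 'sysadmin'

-- List permissions granted to a fixed database role
EXEC sp_dbfixedrolepermission 'db_securityadmin'

-- Report users in the current database with no matching SQL Server login
EXEC sp_change_users_login 'Report'
```

Running such checks on a regular basis makes it easier to spot unintended role membership or orphaned users early.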

In our next article, we will discuss permissions of individual database objects and operations.

SQL Server 2000 Security - Part 6 - Ownership and Object Permissions
By Marcin Policht

So far, we have described authorization based on predefined fixed server roles (determining a set of SQL Server-wide privileges) and fixed database roles (applying to database objects and activities). We have also discussed application roles, which make the level of permissions independent of those assigned to a SQL Server login or a database user account. Now it is time to look into permissions from the point of view of database objects. There are two main factors that play a role in determining how access rights to them are evaluated - their ownership and custom permissions. We will discuss the first of these topics in this article and continue with the other in the next installment of this series.

Just as with NTFS files and directories, ownership of a database object implies full management rights to it, which include the ability to grant or deny arbitrarily chosen permissions to any other database user or Windows users and groups, as well as to execute any of the relevant Transact-SQL statements (such as INSERT, UPDATE, DELETE, SELECT, or EXECUTE). In addition, ownership of an object is granted by default to its creator, although it is possible to manually alter this behavior or transfer ownership to another user (as far as our file system analogy goes, this last option became natively available only recently, with the advent of Windows 2003). As far as databases are concerned, they can be created by members of the sysadmin and dbcreator roles. Their SQL logins are automatically mapped to the dbo user account of the database, which is also the first member of the db_owner fixed database role. Changing existing database ownership can be handled with the help of the sp_changedbowner system stored procedure (starting with SQL Server 2000 Service Pack 3, access to it is limited to members of the sysadmin fixed server role). The stored procedure takes the @loginame parameter, referring to the SQL login or Windows account of the new database owner (you can obtain a list of valid logins with the sp_helplogins stored procedure). Note that if this account is already mapped to an existing alias or user account within the database, you will have to remove that mapping first. sp_changedbowner also comes in handy in situations where the dbo user has been orphaned, which can happen as the result of deleting the corresponding Windows account or transferring a database from one Windows domain to another without a trust relationship between them.
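For example (hypothetical database and login names), changing the database owner from Query Analyzer could be sketched as:

```sql
-- Review valid logins first
EXEC sp_helplogins

-- In the target database, make the specified login the new owner (dbo)
USE Northwind
EXEC sp_changedbowner 'MYDOMAIN\JaneDoe'
```

Remember that, as noted above, the call fails if the specified login is already mapped to a user or alias in the database.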

There is also a similar sp_changeobjectowner system stored procedure, which is intended for changing the ownership of database objects. The stored procedure takes two parameters: @objectname, indicating an existing table, view, user-defined function, or stored procedure in the current database, and @newowner, which is a valid database user, database role, or Windows user or group. Keep in mind that sp_changeobjectowner removes all existing permissions from the object, so you might want to store them first, in the form of a script, so they can be reapplied afterwards. This can be done from the SQL Enterprise Manager interface, by selecting the All Tasks option from the database context-sensitive menu, then choosing Generate SQL Script... from the secondary menu, picking the object to be scripted, and checking the Script object-level permissions checkbox on the Options tab. Execution of this stored procedure is limited to members of the sysadmin fixed server role, as well as the db_owner, db_ddladmin, and db_securityadmin fixed database roles.
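A minimal sketch (hypothetical object and owner names), keeping in mind the caveat about dropped permissions:

```sql
-- Transfer ownership of a table from user JaneDoe to dbo;
-- script object-level permissions beforehand, as this call removes them
EXEC sp_changeobjectowner 'JaneDoe.Orders', 'dbo'
```

Note that the @objectname value must be qualified with the current owner's name, as shown.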

It is generally recommended to use the dbo user account for object ownership. This simplifies references to objects (qualifying them with the owner's name is no longer required, although doing so is still considered a good practice) and simplifies the handling of ownership chaining (discussed next). If this is not feasible in your environment, use fixed database roles or define your own custom ones, and assign ownership to them. Try to avoid granting ownership to individual database users; otherwise, you will have to deal with complications if these users need to be dropped.

Ownership not only affects the privileges of those to whom it is assigned, but also has a potential impact on how permissions for other database users are determined. This phenomenon is called ownership chaining and takes place in situations where access to objects in a database is granted via a view, a stored procedure, or a user-defined function. Since this involves two or more objects (for example, in cases where a stored procedure references another stored procedure or a view), the way effective permissions are evaluated depends on whether all of these objects are owned by the same user. If this is not the case, the ownership chain is considered broken, and permissions on each of the objects constituting its links need to be analyzed separately (i.e. for a user to access the top-level object, appropriate permissions need to be granted explicitly for every object where the chain is broken). This not only has some impact on performance but, more importantly, also complicates the management and troubleshooting of permissions. If the ownership chain is intact (i.e. each of the objects in the chain has the same owner), then all that is required is a permission check on the object directly accessed by the user. Another benefit of such a configuration is the ability to prevent direct access to each subsequent object in the chain, forcing clients to use views, stored procedures, or user-defined functions instead.
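The intact-chain scenario can be sketched as follows (the table, view, and role names are hypothetical; both objects are owned by dbo):

```sql
-- dbo creates a view over a table it also owns
CREATE VIEW dbo.CustomerSummary
AS
SELECT CustomerID, CompanyName FROM dbo.Customers
GO

-- Since dbo owns both objects, the ownership chain is intact:
-- granting SELECT on the view is sufficient, and members of SalesReaders
-- need no permissions on the underlying dbo.Customers table
GRANT SELECT ON dbo.CustomerSummary TO SalesReaders
```

This is also how direct table access can be withheld while still exposing selected data through the view.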

Ownership chaining, however, also has its drawbacks. In particular, it can introduce a potential vulnerability, especially when used across databases. You might be surprised to find out that this is possible, since the set of users is separate for each database, but in this case, what matters are login accounts to which users from various databases are mapped. This means that if objects in two databases are owned by users mapped to the same login stored in sysxlogins table in the master database, then the ownership chain is considered unbroken. Based on the same principle, if the objects are owned by users corresponding to distinct logins, then the ownership chain is broken and permissions on each object need to be evaluated separately. Another case where ownership chain might play a role is a situation where ownership of database objects is assigned directly to Windows users and groups, which creates the possibility of the same Windows account owning objects in two separate databases. As long as the cross-database ownership chaining is enabled, this permits controlling permissions on objects residing in one database via a view, a stored procedure, or a user-defined function located in another.

Starting with SQL Server 2000 Service Pack 3, this feature is configurable and turned off by default (it is turned on in earlier versions of the product). You are first prompted to confirm its setting during the installation of SQL Server 2000 Service Pack 3 (via the "Enable cross-database ownership chaining for all databases" option). You can also change it at any point afterwards by running the sp_configure stored procedure using the following syntax:

EXEC sp_configure 'Cross DB Ownership Chaining', 1
RECONFIGURE

Alternatively, you can use the "Allow cross-database ownership chaining" option on the Security tab of the SQL Server Properties dialog box in the SQL Server Enterprise Manager. Either one of these actions effectively enables cross-database ownership chaining for all databases in the current instance of the SQL Server 2000. If the server-wide cross-database ownership chaining is disabled, you have an option of enabling it for individual databases with sp_dboption stored procedure by executing the following statement:

EXEC sp_dboption 'DBName', 'db chaining', 'true'

where DBName is the name of a target database (you need to run this statement for both databases between which chaining is to be allowed). You can also use the same stored procedure to determine which databases had the default modified.
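You can also verify the current setting for a particular database by omitting the value argument (the database name below is hypothetical):

```sql
-- Report the current 'db chaining' setting for the specified database
EXEC sp_dboption 'Northwind', 'db chaining'
```

Running sp_dboption with no parameters at all lists every settable database option.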

Note, however, that you should avoid using cross-database ownership chaining whenever possible, as it creates a potential security vulnerability. For example, since members of the db_ddladmin and db_owner fixed database roles can create objects owned by other users (or simply transfer their ownership), they can use this ability as a back door, granting access to objects owned by the same users in another database. The same applies to any logins with CREATE DATABASE permissions, who automatically (by virtue of being mapped to the dbo user) become members of the db_owner fixed database role.

This concludes discussion on object ownership. In our next article, we will cover management of permissions on database objects and operations.

SQL Server 2000 Security - Part 7 - Statement and Object Permissions
By Marcin Policht

SQL Server 2000 offers a number of security mechanisms that impose predefined levels of control over database objects and operations. We have discussed the most common of them, such as server and database roles, in the previous articles of this series. While they are convenient and easy to manage, their main drawback is a lack of flexibility. The best recourse for this limitation is the use of the much more granular and flexible object and statement permissions.

Statement permissions restrict access to statements that result in the creation of databases and their objects, as well as to database and transaction log backups (i.e. executing any of the CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, CREATE VIEW, BACKUP DATABASE, or BACKUP LOG statements via T-SQL or the SQL Server Enterprise Manager interface). Object permissions control operations involving various database objects - in particular:

* DELETE - required to be able to delete data rows from a table or a view on which permissions are set,
* INSERT - required to insert data rows into a table or a view on which permissions are set,
* SELECT - required to view data rows (or individual columns) in a table or a view on which permissions are set,
* UPDATE - required to modify data rows (or individual columns) in a table or a view on which permissions are set,
* EXECUTE - required to execute a stored procedure or a scalar-valued user-defined function on which permissions are set,
* REFERENCES - required to be able to reference a primary key in a table on which permissions are set by another table's foreign key or to be able to create a user-defined function or a view that includes WITH SCHEMABINDING clause referencing an object on which permissions are set.

Object permissions, unlike statement permissions, are set on a specific object (i.e. for two database objects of the same type, a particular user might have different object permissions), while statement permissions are database-wide (or server-wide in the case of the CREATE DATABASE statement). Another significant difference between the two relates to who controls them. Statement permissions can be assigned either by members of the sysadmin fixed server role or by members of the db_owner and db_securityadmin fixed database roles. Object permissions can also (in addition to the same roles as statement permissions) be managed by owners of the objects themselves, and their management can be delegated to other users, database roles, or Windows accounts using the WITH GRANT OPTION clause (more about this shortly).

As far as similarities are concerned, changes to both statement and object permissions are recorded in the syspermissions table (and, for backward compatibility reasons, the sysprotects table) in the target database. In addition, for both statement and object permissions, it is possible to perform one of three actions - grant, deny, and revoke:

grant - gives a target database user, role, or Windows account the ability to execute a particular type of statement or to access a specific database object. When working with database objects using SQL Server Enterprise Manager, the "granted permissions" status is indicated by the green checkmark in the relevant statement column (in the Object properties dialog box, under the specific permission type).

For example, the following T-SQL statement grants CREATE TABLE statement permission to the user JohnDoe in the current database:

GRANT CREATE TABLE TO JohnDoe

Similarly, the following T-SQL statement grants SELECT permissions on the ShipperID and CompanyName columns of the Shippers table to the same user:

GRANT SELECT (ShipperID, CompanyName) ON Shippers TO JohnDoe

deny - prevents a target database user, role, or a Windows account from executing a particular type of statement or accessing a specific database object. Deny permissions always take precedence over grant permissions. When working with database objects using SQL Server Enterprise Manager, the "denied permissions" status is indicated by the red cross in the relevant statement column (in the Object properties dialog box, under the specific permission type).

For example, the following T-SQL statement denies CREATE TABLE statement permission to the user JohnDoe in the current database:

DENY CREATE TABLE TO JohnDoe

Similarly, the following T-SQL statement denies SELECT permissions on the ShipperID and CompanyName columns of the Shippers table to the same user:

DENY SELECT (ShipperID, CompanyName) ON Shippers TO JohnDoe

revoke - removes a previously assigned permission (grant or deny) from a target database user, role, or Windows account. Revoking might effectively eliminate the impact of either granting or denying permissions, depending on a number of factors (e.g. which one has been assigned most recently, or whether permissions were granted or denied to an individual user or to a role the user is a member of). Revoking is done by removing the relevant entries from the syspermissions table (entries which are added by both grant and deny actions). When working with database objects using SQL Server Enterprise Manager, the "revoked permissions" status is indicated by an empty checkbox in the relevant statement column (in the Object properties dialog box, under the specific permission type).

For example, the following T-SQL statement revokes CREATE TABLE statement permission from the user JohnDoe in the current database:

REVOKE CREATE TABLE FROM JohnDoe

Similarly, the following T-SQL statement revokes SELECT permissions on the ShipperID and CompanyName columns of the Shippers table from the same user:

REVOKE SELECT (ShipperID, CompanyName) ON Shippers FROM JohnDoe

Note that you can grant, deny, or revoke object permissions specifying the individual columns of a table or view to which the permissions apply.

While managing statement and object permissions can be handled both with the graphical interface of SQL Server Enterprise Manager and with T-SQL statements, the latter offers more functionality. In particular, it allows you to delegate object permissions to database users, roles, or Windows accounts using the WITH GRANT OPTION clause. Here is a sample statement that grants SELECT permissions on the Shippers table to the user JohnDoe using WITH GRANT OPTION:

GRANT SELECT ON Shippers TO JohnDoe WITH GRANT OPTION

This allows JohnDoe to subsequently use the GRANT SELECT statement to assign permissions on the same object to another database user, role or Windows account. You should be careful when applying WITH GRANT OPTION, since it allows a target user to grant permissions to other accounts without your knowledge. Make sure you use it in a controlled manner and monitor the number of users with elevated privileges. Interestingly, if you delegate granting permissions to a database role (rather than a user), users who are its members will need to use the AS clause when granting permissions to others:

GRANT SELECT ON Shippers TO JaneDoe AS DoeRole

In this example, the assumption is that the statement

GRANT SELECT ON Shippers TO DoeRole WITH GRANT OPTION

has been executed previously, granting an option to delegate SELECT object permissions on the Shippers table to members of the DoeRole user-defined database role. As you can see, when a role member executes the GRANT SELECT statement, it is necessary to specify that the execution takes place in the context of the DoeRole role (hence the AS DoeRole clause).

In general, you should use Windows groups or database roles to control permissions. This simplifies management when assigning the same level of access to multiple database or Windows users. Note that you can also easily deal with exceptions, where a user who belongs to a Windows group or a database role needs to have different permissions than the rest of its members. In such cases, you can take advantage of the fact that the most restrictive permissions (i.e. deny) have priority when multiple levels of permissions come into play (so, for example, denying a permission to a specific user that is, at the same time, granted to a role the user is a member of will take precedence). This also means that if you want to ensure that a particular database or Windows user does not have a specific type of permission, you can simply create a role or a Windows group, place the user in it, and set the deny permissions on the role or group level.
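To illustrate this exception-handling pattern (the role and user names below are hypothetical), the following sketch grants SELECT on the Shippers table to a role while denying it to one of the role's members; the deny prevails:

```sql
-- Hypothetical names; assumes JohnDoe is a member of SalesRole
GRANT SELECT ON Shippers TO SalesRole
DENY SELECT ON Shippers TO JohnDoe
-- JohnDoe can no longer SELECT from Shippers, despite the role-level grant
```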

Since it might be difficult to determine cumulative permissions, given the different levels at which they can be applied, you can use the sp_helprotect system stored procedure to determine:

all permissions assigned to a specific user within a database with the following sample syntax (in this example, all permissions assigned to JohnDoe in the current database):

sp_helprotect NULL, 'JohnDoe'

all permissions assigned by a specific user within a database with the following sample syntax (in this example, all permissions assigned by dbo of the current database):

sp_helprotect NULL, NULL, 'dbo'

all permissions assigned to a specific database object with the following sample syntax (in this example, all object permissions assigned to Shippers table):

sp_helprotect Shippers

all users with a specific permission type assigned to them. The value of the fourth parameter indicates the permission type - 's' for statement permissions and 'o' for object permissions. For example, to list all users with object permissions in the current database, you would use the following syntax:

sp_helprotect NULL, NULL, NULL, 'o'

As you can see, SQL Server 2000 provides plenty of options for controlling statement and object permissions on a very granular level. In our next article, we will discuss replication-related security.

SQL Server 2000 Security - Part 8 - Introduction to Replication Security
By Marcin Policht

So far in our series of articles, we have been discussing relatively generic SQL Server 2000 security-related issues that every database administrator is likely to encounter on a daily basis. Now it is time to look into less common scenarios dealing with one specific area of management - database replication. We will start by presenting its architecture and implementation options, and follow with their security implications.

The main purpose of replication is data distribution. Placing multiple copies of the same data at different locations allows you to load balance or reduce network traffic generated by remote database clients, merge data from multiple sources into one comprehensive target, or separate databases based on their purpose (e.g. OLTP access can be isolated from read-only queries by creating two data sources, with the read-only one being periodically updated). Copying critical data to a secure location also frequently serves as the basis for disaster recovery or high availability solutions.

There are different variations of replication, which differ in the scope of data (from stored procedures or user-defined functions to multiple tables), the direction of data transfer (one- or two-directional, with merge functionality and conflict-resolution provisions), the propagation method (push or pull), and the update mechanism (more about this shortly). In order to make the principles of replication easier to understand, SQL Server designers decided to apply terminology derived from the publishing industry when naming its main components:

* publisher - the SQL Server instance that serves as the source of replicated data,
* distributor - the SQL Server instance that manages and monitors data flow from a publisher to subscribers, using the distribution database created specifically for this purpose,
* subscriber - a SQL Server instance functioning as the target for replicated data,
* article - the smallest unit of replicated data: a table; a subset of a table formed by selected columns (filtered vertically), selected rows (filtered horizontally), or a combination of both; a view; a user-defined function; or a stored procedure,
* publication - a collection of one or more articles grouped in order to be replicated together.

Keep in mind that the three architectural elements of replication - publisher, distributor and subscriber - are, in essence, roles that a particular SQL Server can serve. These roles are not mutually exclusive - there are situations where the same system functions as a publisher and a distributor or as a publisher and a subscriber within the same instance of replication. Similarly, with multiple replication instances, a system can be a subscriber for one, and a distributor for another.

There are three types of replication:

* snapshot - the simplest type from the conceptual and implementation perspective. It involves capturing the content of a publication at a point in time and copying it to subscribers. This content consists of scripts that create the database objects included in the publication (subsequently executed on the subscriber), as well as the data residing in these objects. Its main benefits are simplified maintenance and troubleshooting, as well as a decreased load on replication participants, since they are not responsible for keeping track of database changes in between snapshots. On the other hand, the lack of transactional support is also the main drawback of snapshot replication. Since incremental changes on the publisher are not reflected on the subscriber until the next snapshot, data between the two tends to be out of sync most of the time. In addition, the amount of replicated data equals the full size of the publication every time the snapshot gets replicated, affecting network bandwidth and processor utilization across all replication participants. This is the main reason that snapshot replication is typically used with small and relatively static publications. There is no mechanism to propagate updates on subscribers back to the publisher.
* transactional - overcomes the limitations of snapshot replication by keeping track of incremental changes to a publication (taking advantage of database transaction logs) and propagating them to the distributor, which in turn distributes them to subscribers shortly after they take effect (note that transactional replication starts with a single snapshot, which provides the initial synchronization between publisher and subscribers). This ensures that subscribers represent an up-to-date state of the publisher (the degree of synchronization between the two is configurable). At the same time, though, this places an additional load on publisher, distributor, and subscribers, and increases the complexity of configuration and management. This type of replication is used in scenarios where data on subscribers needs to remain consistent with the original publication. Subscribers are typically treated as read only, although there are (relatively limited) configuration options (immediate updating subscribers and queued updating subscribers) allowing updates applied to subscribers to replicate back to the publisher. In general, though, it is recommended to handle updateable subscribers by configuring merge replication (presented next).
* merge - intended for scenarios where the content of a publication can be updated at multiple locations. An initial snapshot is used to synchronize remote subscribers with a publisher, but from this point on, the distinction between them becomes less clear. Changes can occur at multiple locations (all subscribers and the publisher), which introduces a number of issues that need to be properly addressed, such as mutual synchronization and conflict resolution. In addition, merge replication offers some unique enhancements, such as dynamic filtering (which filters replicated data based on parameters provided dynamically by subscribers). This type of replication is used in situations that involve remote offices or mobile users working with subsets of a publication, which are rolled back into the publisher.

The actual work involved in replication is performed by a number of automated processes, which are implemented as SQL Server Agent jobs, created automatically when you set up replication via SQL Server Enterprise Manager. This implies that they run in the security context of the SQL Server Agent account (on Windows NT and later operating systems) or of the logged-on user (on Windows 9x computers). These jobs fall into several distinct categories, known as replication agents, each based on a separate executable from the Program Files\Microsoft SQL Server\80\COM folder or on a stored procedure from the distribution database, and each delivering a different type of functionality. The primary ones are as follows:

* snapshot agent (SNAPSHOT.EXE) - running on the distributor, it is responsible for creating a snapshot of a publication. The snapshot is placed in a shared folder on the distributor, from which it is subsequently copied to subscribers with the help of the distribution agent (described next). This agent is used by every type of replication, since each one involves the creation of a snapshot at least once.
* distribution agent (DISTRIB.EXE) - copies and applies the initial snapshot created by the snapshot agent and, if appropriate, the transactional changes prepared by the log reader agent (described next), from the distributor to subscribers. It is used by snapshot and transactional replication. You can configure either one shared agent serving all subscribers, running on the distributor (push subscriptions), or individual agents operating locally on each subscriber (pull subscriptions).
* log reader agent (LOGREAD.EXE) - running on the distributor, the log reader agent pulls entries from the transaction log on the publisher (the ones marked for replication), making them available to the distribution agent. It is used only in transactional replication.
* merge agent (REPLMERG.EXE) - specific to merge replication, it synchronizes publication content between the publisher and its subscribers. Note that, in addition to the snapshot agent, this is the only agent that merge replication relies on (providing the combined functionality of the log reader and distribution agents used with transactional replication). The merge agent can reside either on the publisher (push subscriptions) or on subscribers (pull subscriptions).
* queue reader agent (QRDRSVC.EXE) - optional in transactional replication (and not used by the other two types), applicable only when queued updating subscribers are configured (in cases where transactional changes on subscribers need to be replicated back to the publisher).

There are also several other replication agents whose roles are secondary, performing mostly cleanup and monitoring tasks (such as the agent history clean up agent, distribution clean up agent, expired subscription clean up agent, reinitialize subscriptions having data validation failures agent, and replication agent checkup agent). They are not relevant to our security-focused discussion, so we will not go beyond listing them here.

This introductory information will serve as the basis for a discussion on replication security, in our next article.

SQL Server 2000 Security - Part 9 - Replication Security
By Marcin Policht

The previous article of our series served as an introduction to replication security, which is the topic of this article. Last time, we discussed basic terminology, replication types, and their agent-based operations. You should appreciate, at this point, the level of complexity associated with designing, configuring, and managing the replication process. As you can imagine, dealing with replication security is also a challenging task that needs to be carefully planned and implemented.

Let's start by noting that the creation and administration of publishers, distributors, and subscribers (including enabling a database for replication) is limited to members of the sysadmin fixed server role. Once replication participants are defined, only members of the sysadmin role and of the db_owner fixed database role in databases enabled for replication can create and configure publications. Monitoring replication is restricted to members of the sysadmin server role and of the replmonitor fixed database role, created automatically in the distribution database on the distributor.

The distributor is designated on the publisher, typically using the "Configure Publishing, Subscribers and Distribution..." option in the Tools -> Replication menu in SQL Server Enterprise Manager. You can have publisher and distributor reside on the same computer, or you can use a remote distributor. In the latter case, the publisher must be enabled on the target distributor first. This is done on the Publishers tab of the Publisher and Distributor Properties dialog box, which is invoked by selecting the same "Configure Publishing, Subscribers and Distribution..." option. From there, you can select the publishers (from the list of the servers registered with SQL Server Enterprise Manager) that are allowed to use the currently managed server as a remote distributor. On the Distributor tab of the same dialog box, you can specify the Administrative link password that needs to be provided by publishers when connecting to the local distributor in order to perform replication-related administrative operations. On the same tabs, you have the option to create multiple distribution databases dedicated to designated sets of publishers (which increases security in environments where multiple publishers share the same distributor). Enabling publishers requires that you specify a login that replication agents will be using to access the publisher. Similarly, when enabling subscribers from the Subscribers tab of the Publisher and Distributor Properties dialog box, you will need to set the security context for agent connections to them. Since replication agents are created as SQL Server Agent jobs, they operate by default in the security context of the SQL Server Agent account. You can either accept the default or configure them to connect to SQL Server via an arbitrarily assigned SQL Server login, provided that SQL authentication is enabled. When defining publications, you are given a choice to allow anonymous subscriptions. While this simplifies the management of a publication, especially when the number of subscribers is large, it negatively affects the security of the published data, so consider its use very carefully.

Now, let's review security requirements from the point of view of the different replication types. We will start by looking at the commonalities between them. One of these is the snapshot functionality, which, as we explained in the introductory article, takes place in each type of replication at least once. Snapshots are handled by the Snapshot Agent running on the distributor, which has several security implications:

The Snapshot Agent account requires a valid login on the publisher server and SELECT permissions on the replicated publication in the source database. The best way to accomplish this (in cases where the publisher and distributor are hosted on separate computers) is to assign a non-privileged domain account to the SQL Server Agent service on the distributor (in whose security context the Snapshot Agent operates on Windows NT 4.0, 2000, and later systems), create a corresponding Windows login on the SQL Server hosting the publisher, and grant it the appropriate permissions within the database containing the publication.

The Snapshot Agent copies files with replicated data, together with scripts defining the publication articles created on the publisher, to a network share designated as the snapshot repository on the distributor. This location can be set in a number of ways. If the publisher and distributor reside on the same system, you are prompted for it on the Specify Snapshot Folder page of the Configure Publishing and Distribution Wizard. If they are located on separate computers, then you need to specify the snapshot folder location in the Publisher Properties dialog box (in addition to the security context of the replication agent connections) when enabling publishers on the target distributor (as explained previously). In the same dialog box, you can decide whether the publisher will require a password to establish a link to the distributor. This password is assigned to the distributor_admin SQL login account (which is a member of the sysadmin role). This account is created automatically when replication is first configured. You should ensure that the password meets basic complexity requirements. If you intend to change it, use the sp_changedistributor_password system stored procedure or the option on the Distributor tab of the Publisher and Distributor Properties dialog box. While it is possible to configure the publisher as trusted, which allows connections without setting a password, such a configuration is strongly discouraged.
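For example, the administrative link password could be changed with a statement along these lines (the password value shown is, of course, only a placeholder):

```sql
-- Execute on the distributor; substitute a suitably complex password
EXEC sp_changedistributor_password @password = 'C0mpl3x!P@ssw0rd'
```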

You can also configure properties of the distributor and publisher when creating a publication using the Create Publication Wizard, or when managing it from the Create and Manage Publications dialog box. Both are available from the Tools -> Wizards menu (in the Replication container), or by right-clicking the Replication container and selecting the Configure Publishing, Subscribers and Distribution option. You can also modify the settings afterwards from the Snapshot Location tab of the Publication Properties dialog box. Regardless of the method, the folder points by default to Program Files\Microsoft SQL Server\MSSQL\Repldata on the distributor.

The security-related problem results from the fact that, by default, this folder is accessed via the hidden C$ administrative share (i.e. \\distributor_server\C$\Program Files\Microsoft SQL Server\MSSQL\Repldata), which is restricted to members of the local Administrators group on the distributor system. This, in turn, requires that the SQL Server Agent service account be added to this group and, in effect, operate with elevated privileges, which constitutes a potential vulnerability. You can avoid it by configuring SQL Server Agent to run in the security context of a non-privileged account (for more information on this topic, refer to one of the earlier articles in this series) and altering the default location and share designation for the snapshot folder (by creating a target folder and share on the distribution server and pointing to it from the Snapshot Location tab of the Publication Properties dialog box). Make sure that access to this folder is restricted to the account under which the SQL Server Agent on the distributor operates. Note also that each publisher can be configured with a different snapshot folder (configurable from the Publisher Properties dialog box on the distributor).

New in SQL Server 2000 is the ability to specify an alternate snapshot location. This can be done by clicking on the Properties and Subscriptions button after selecting an individual publication in the Create and Manage Publications dialog box (accessible via the Tools -> Replication menu). In the resulting Publication Properties dialog box, on the Snapshot Location tab, you can define a primary (default) or an alternative location for the snapshot. This can be any location accessible to the Snapshot Agent and subscribers, including an FTP folder (login information is defined on the same page).

The snapshot folder also needs to be accessible by the SQL Server Agent on subscribers when pull subscriptions are used. Its account needs Read permissions on the share and the underlying folder (at the NTFS level), as well as SELECT permissions on a couple of tables in the distribution database (MSrepl_commands and MSrepl_transactions).
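As a sketch (the Windows account name below is hypothetical, and the statements assume a corresponding database user already exists in the distribution database), the required SELECT permissions could be granted as follows:

```sql
-- Run in the distribution database on the distributor;
-- [CORPDOMAIN\SqlAgentSub] is a hypothetical subscriber agent account
USE distribution
GRANT SELECT ON MSrepl_commands TO [CORPDOMAIN\SqlAgentSub]
GRANT SELECT ON MSrepl_transactions TO [CORPDOMAIN\SqlAgentSub]
```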

In the case of snapshot replication with push subscriptions, the SQL Server Agent account on the distributor must have sufficient access to the subscribers - in particular, CREATE and DROP permissions on the target database, as well as SELECT, INSERT, EXECUTE, and DELETE permissions on each of the publication articles.

Requirements concerning the Snapshot Agent's activities are the same regardless of replication type. However, when using transactional or merge replication, you need to keep in mind additional considerations. They relate to the activities of the Distribution and Log Reader Agents, specific to transactional replication, or of the Merge Agent, in the case of merge replication.

The Log Reader Agent operates on the distributor. Its purpose is to read transaction log entries marked for replication from the publication database and propagate them to the distribution database. It operates in the security context of the distributor's SQL Server Agent. The work involved in applying changes to subscribers is performed by the Distribution Agent, which, in addition to access to the distribution database, also needs INSERT, UPDATE, and DELETE permissions on the target database on subscribers, and Read permissions on the snapshot folder on the distributor.

The Merge Agent maintains synchronization between publisher and subscribers in merge replication by monitoring changes on both sides, resolving possible conflicts, recording outcome and error information in the distribution database, and applying the results to the publisher. This requires SELECT, DELETE, INSERT, and UPDATE permissions on the publisher database. The changes on the publisher are subsequently applied to subscribers. The exact mechanism used for updating subscribers depends on whether push or pull subscriptions are used. In the first case, the Merge Agent, running on the distributor, connects to subscribers and propagates changes to them, which requires INSERT, DELETE, and UPDATE permissions. In the second case, the Merge Agent, running on subscribers, connects to the distribution database, retrieves information detailing the publisher's updates, and applies them to the subscribers. Note that with pull subscriptions, there are separate instances of the Merge Agent on the distributor and on each subscriber. You can take advantage of this fact and configure them with different accounts, effectively limiting the level of access that subscribers have to the distribution database (and server). Note that regardless of its location, the Merge Agent needs Read permissions on the snapshot folder (by default Program Files\Microsoft SQL Server\MSSQL\Repldata) on the distributor.

When operating in an Active Directory environment, you have the option of publishing SQL Server instances, their databases, and replication articles as domain objects. On one hand, this makes them easier for legitimate subscribers to locate; on the other, it exposes them to potentially unauthorized audiences.

As you can see, the security aspects of database replication in SQL Server 2000 are fairly complex. Fortunately, dealing with them is significantly simplified through the use of wizards, notifications triggered during configuration, and default settings that satisfy the most common security requirements.

SQL Server 2000 Security - Part 10 - Auditing
By Marcin Policht

So far in our series, we have been focusing on configuration changes that increase the default security level of SQL Server 2000 installations. However, no matter how much time and attention you dedicate to this process, it is likely that sooner or later you will have to face an unauthorized (intentional or accidental) change of data or object definitions, an exploit of a security vulnerability, or even a malicious modification by an unethical system administrator. You need to be prepared for such possibilities and ensure you can track their source and prevent them from happening in the future. This is the purpose of auditing, which is the topic of this article.

Auditing can be performed using several methods, with varying levels of detail, configuration settings, and monitoring tools (and some degree of overlap between them):

built-in default Windows event auditing - Windows Event logs reside in the %systemroot%\system32\config folder in the form of several *.evt files (at a minimum AppEvent.Evt, SecEvent.Evt, and SysEvent.Evt, corresponding to the Application, Security, and System Event logs - the number of logs can be larger depending on the role of the server and any additional software installed on it). The easiest way to view the logs is by running the Event Viewer MMC snap-in (present in the Administrative Tools menu). The size and retention behavior of the logs are controlled from each log's Properties dialog box. From here, you can set the maximum size of each log and specify what happens when that size is reached (the three available options are: overwrite events as needed, overwrite events older than a specified number of days, and do not overwrite events).

SQL Server related events are recorded in the Application log. By default, this includes major activities that potentially affect the operating system, such as SQL Server startup and shutdown, backups, restores, and changes of configuration options.

Login auditing - four option buttons - None, Success, Failure, and All - under the "Audit level" heading on the Security tab of the SQL Server Properties dialog box in SQL Server Enterprise Manager control the level of SQL Server login auditing. These choices correspond to the values 0, 1, 2, and 3 of the AuditLevel entry (of DWORD data type) in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer registry key. This value determines whether login attempts with the specified outcome (or any at all) will be recorded in the Windows Application Event log and the SQL Server error log.
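The same setting can also be changed without Enterprise Manager, for example via the undocumented xp_instance_regwrite extended stored procedure (treat this as a sketch; a restart of the SQL Server service is required for the change to take effect):

```sql
-- Set AuditLevel to 2 (record failed login attempts only)
EXEC master..xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'AuditLevel',
    N'REG_DWORD',
    2
```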

default SQL Server auditing - SQL Server stores up to seven subsequently created logs (although this number is configurable using the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\NumErrorLogs registry entry of DWORD data type), named ErrorLog, ErrorLog.1, ErrorLog.2, etc., in the folder C:\Program Files\Microsoft SQL Server\MSSQL\LOG. You can review their content from the SQL Server Logs node under the Management folder for the target SQL Server in SQL Server Enterprise Manager. Each entry consists of a timestamp, a source (either the server or one of the user processes, indicated by its process id), and a message. Most of them correspond to the entries you can find in the Windows Application Event log.

C2 auditing - the US Department of Defense established a set of ratings applicable to security levels of computer systems, based on their capabilities in regard to auditing and discretionary access control. SQL Server 2000 was determined to be compliant with the C2 rating in August 2000 by the National Computer Security Center (more information about the C2 evaluation process is available on the Microsoft Web site at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/prodtech/dbsql/sqlc2.asp). This compliance is relevant for companies that need to secure their computing operations according to US Government requirements (which applies to most government agencies and contractors).

C2 auditing records information that goes beyond server-level events (such as shutdown or restart, or successful and failed login attempts), extending auditing to successful and failed use of permissions when accessing individual database objects and executing all Data Definition, Data Access Control, and Data Manipulation Language statements. The audit information contains the timestamp, the identifier of the account that triggered the event, the target server name, the event type, its outcome (success or failure), the name of the user's application, and the server process id of the user's connection.

Audit logs are stored in the C:\Program Files\Microsoft SQL Server\MSSQL\Data folder as audittrace_yyyymmddhhmmss.trc, where the second part of the name indicates the date and time when the log file was created. The size of each log is limited to 200MB, but new ones are generated automatically whenever the old one fills up, as long as there is available disk space. Otherwise, a shutdown of SQL Server is initiated. Ensure that you have sufficient space on your hard drive, as the volume of recorded information is significant. In emergency situations, where no space can be immediately freed for new log files, you can restart SQL Server with the -f flag, which starts it with a minimal configuration and disregards auditing settings.
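As a sketch of this emergency procedure, you could stop the MSSQLServer service and then launch the server executable manually from its Binn folder (the path below assumes a default instance in the default installation location):

cd "C:\Program Files\Microsoft SQL Server\MSSQL\Binn"
sqlservr -c -f

where -c starts SQL Server independently of the Service Control Manager and -f starts it with a minimal configuration.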

The content of the audit files can be viewed using SQL Server Profiler (you can also import them into a new or an existing table). Alternatively, you can use the built-in function fn_trace_gettable for this purpose, which displays the content of a trace file in table format (the following sample T-SQL command can be executed from the SQL Query Analyzer):

SELECT * FROM ::fn_trace_gettable
   ('C:\Program Files\Microsoft SQL Server\MSSQL\Data\audittrace_20040822191554.trc', default)
GO

where the first parameter defines the location and name of the first log file and the second (the default option) specifies that all rollover files should be read as well.
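If you intend to query the audit data repeatedly, you might prefer to import it into a table instead (AuditTraceArchive below is a hypothetical table name, and the file name should match one of your actual trace files):

SELECT * INTO AuditTraceArchive
FROM ::fn_trace_gettable
   ('C:\Program Files\Microsoft SQL Server\MSSQL\Data\audittrace_20040822191554.trc', default)
GO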

To enable C2 auditing, use the sp_configure stored procedure with the 'c2 audit mode' parameter. Assigning it a value of 1 enables auditing; 0 reverts it back to the default. Since this is an advanced option, you will need to turn on the "show advanced options" setting first. In addition, changing this setting requires a restart of SQL Server. Permissions to perform these steps are limited to members of the sysadmin fixed server role. The following sequence of T-SQL commands can be executed from the SQL Query Analyzer in order to enable C2 auditing (you will need to restart SQL Server afterwards):

USE master

EXEC sp_configure 'show advanced options', 1

RECONFIGURE



EXEC sp_configure 'c2 audit mode', 1

RECONFIGURE

Note that if you have enabled C2 auditing, you might want to disable login auditing, configurable via the Security tab of the SQL Server Properties dialog box in SQL Server Enterprise Manager (described previously); otherwise, you will record the same type of event twice, unnecessarily degrading server performance.
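Since login auditing is stored in the AuditLevel registry entry described earlier, it could also be disabled programmatically with the undocumented xp_instance_regwrite extended stored procedure (a sketch only - the same change can be made more safely from the Security tab, and a restart of SQL Server is required either way):

EXEC master..xp_instance_regwrite
	N'HKEY_LOCAL_MACHINE',
	N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
	N'AuditLevel',
	N'REG_DWORD',
	0
GO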

There are also other, less common mechanisms, which either allow custom audits or enhance the standard procedures described above. For example, you can monitor changes to SQL Server via SQL Server Alerts, which watch for particular events or performance conditions and trigger notifications or corrective actions when these take place. Similarly, you can define triggers on tables and views that fire when specific modifications take place (although in this case, you are limited to INSERT, DELETE, and UPDATE T-SQL statements on target objects). There are also a number of auditing tools, both freeware/shareware and commercial, which allow you to determine the degree of security of your SQL Server installation. For example, free password auditing utilities (which, in essence, attempt brute force attacks on SQL passwords to verify their complexity level) can be obtained from www.nextgenss.com/software/ngssqlcrack.html (NGSSQLCrack) or www.sqlsecurity.com/uploads/sqldict.zip (SQLdict).
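To illustrate the trigger-based approach, the following sketch records every modification of a hypothetical Customers table in an equally hypothetical CustomerAudit table (both names are assumptions made for the sake of the example):

CREATE TABLE CustomerAudit (
	AuditID		INT IDENTITY PRIMARY KEY,
	EventType	VARCHAR(10),
	EventLogin	SYSNAME DEFAULT SYSTEM_USER,
	EventTime	DATETIME DEFAULT GETDATE()
)
GO

CREATE TRIGGER trCustomersAudit ON Customers
FOR INSERT, UPDATE, DELETE
AS
	INSERT INTO CustomerAudit (EventType)
	SELECT CASE
		WHEN EXISTS (SELECT * FROM inserted)
		 AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
		WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
		ELSE 'DELETE'
	END
GO

Note that the trigger relies on the inserted and deleted pseudo-tables to distinguish between the three statement types.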

Keep in mind that auditing serves its purpose only if the logs are regularly reviewed (the level of detail depends on the security requirements you need to satisfy).

SQL Server 2000 Security - Part 11 - DTS Security
By Marcin Policht

In this article of our series presenting SQL Server 2000 Security, we turn our attention towards Data Transformation Services. If you are not familiar with this topic, it might be helpful to first review its coverage on the DatabaseJournal Web site. The referenced material gives a fairly comprehensive discussion of its principles and the majority of its features - here we will focus strictly on the security related aspects of this technology.

The primary purpose of Data Transformation Services is to simplify the transfer of data between different types of data stores, while at the same time accommodating potential modifications of that data. The number of possible configurations supported in the SQL Server 2000 DTS implementation is impressive, covering any OLE DB or ODBC compliant data store and a large number of customizable transformations, including execution of external programs and of T-SQL or VBScript based scripts. All of the components constituting a single logical unit of work (such as connections to data stores or tasks performed during transformation) can be combined into a DTS package and saved for later execution or modification in one of four different locations (SQL Server, Structured Storage File, Visual Basic File, or Meta Data Services), each with a unique set of characteristics.

In the first case, an entry corresponding to each new package (or a new version of a modified package) is created in the sysdtspackages table of the MSDB database, which contains such information as, for example, package name, unique identifier, owner information, creation date, and package parameters in binary format. (There are also other associated tables, such as sysdtspackagelog, sysdtssteplog, sysdtscategories, or sysdtstasklog, which store information on package execution, provided that the "Log package execution to SQL Server" property has been set - this is done from the Logging tab of the DTS package Properties dialog box.) Information stored in the DTS related tables provides the ability to access, modify, and execute the package from within the SQL Server Enterprise Manager window (using the Data Transformation Services node). If, besides these capabilities, you want to be able to transfer a package easily between servers, you should consider saving your package into a Structured Storage file. This results in the creation of a .DTS file, which can subsequently be copied to another computer and accessed with the Open Package... option from the Data Transformation Services node (although you can also save your package to another SQL Server directly using the Save DTS Package dialog box from the DTS Designer interface, as long as the target server is available at that moment). Using the Visual Basic File format allows you to manage packages in the form of a .BAS file within the Visual Basic IDE (Integrated Development Environment) and to enhance their functionality by manipulating the DTS object model programmatically. In order to save a package to Meta Data Services, you need to first enable this option. This is done from the Package Properties dialog box, displayed after selecting the Properties item from the context sensitive menu of the Data Transformation Services node in SQL Server Enterprise Manager.
From there, you need to mark the checkbox labeled "Enable Save To Meta Data Services". This will activate another option in the Location listbox when saving a DTS package. The benefit of storing packages this way is the ability to keep track of metadata information, such as version and lineage, for each package. We will explore this functionality in more detail in our next article.

The most common way to create DTS packages is with the DTS Designer graphical interface, accessible from SQL Server Enterprise Manager (under the Data Transformation Services node). Permissions to carry out this task are granted by default to all logins, regardless of database level permissions (although ultimately, the level of privileges required to execute a package depends on the way in which execution takes place and the degree of protection on the resources that need to be accessed). This results from the fact that all stored procedures involved in package creation reside in the MSDB database with Execute permissions granted to the public database role. Since the MSDB database contains the Guest account, anyone logged on to SQL Server can take advantage of it. This also means that, without extra precautions, all logins are allowed to open existing packages, examine their content, modify them, and, if desired, save changes under a new name (the ability to overwrite existing packages is limited to members of the SysAdmin server role and the original package creator).

If you want to alter this default behavior and restrict the ability to create new or modify existing packages, you can modify permissions on the sp_add_dtspackage, sp_enum_dtspackages, and sp_get_dtspackage stored procedures in the MSDB database. The first one contains the T-SQL code for creating DTS packages, the second provides the ability to list them, and the third is used for retrieving them. By removing the public role's permissions to run them, you can restrict these activities to specific fixed or custom database roles.
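A sketch of this approach (DTSDevelopers below is a hypothetical custom database role, which you would need to create beforehand):

USE msdb
REVOKE EXECUTE ON sp_add_dtspackage FROM public
REVOKE EXECUTE ON sp_enum_dtspackages FROM public
REVOKE EXECUTE ON sp_get_dtspackage FROM public
GRANT EXECUTE ON sp_add_dtspackage TO DTSDevelopers
GRANT EXECUTE ON sp_enum_dtspackages TO DTSDevelopers
GRANT EXECUTE ON sp_get_dtspackage TO DTSDevelopers
GO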

Another way to secure packages saved to SQL Server or a Structured Storage file (but not to Meta Data Services or a Visual Basic file) is by assigning owner and user passwords. This is done from the DTS Designer interface, by selecting the Save As... item from the Package top level menu. In the resulting Save DTS Package dialog box, you can type in the owner and user passwords (note that you cannot set a user password without defining the owner password). An owner password protects against opening and modifying the content of a package; knowledge of the user password allows its execution (you are prompted for them whenever you try to execute or open a package to which they were assigned). Keep in mind that there is no direct way to change these passwords. The only way to accomplish this is to save the package using a different name, then delete the original package, and finally rename the new one back (this is clearly much more complicated than it should be).

Each package has an owner, which is the login that created it (or saved an existing one to a new server, when transferring packages between computers). This identifying information is recorded in the sysdtspackages table of the MSDB database, in the owner and owner_sid columns. Only package owners (and members of the Sysadmin fixed server role) have the ability to alter packages and save them under the same name (they are still, however, required to provide passwords in order to open or execute their packages).
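The ownership information is easy to review with a simple query (a sketch - note that each saved version of a package appears as a separate row):

USE msdb
SELECT name, owner, createdate
FROM sysdtspackages
ORDER BY createdate DESC
GO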

Separate security-related considerations come into play when executing packages. This can be done either directly from the DTS Designer interface in SQL Server Enterprise Manager (using the Execute item from the top level Package menu or with the F5 key), with the DTSRun command line utility (or its graphical counterpart DTSRunUI), or by running compiled Visual Basic code. Regardless of the method, permissions required to complete execution successfully are evaluated at run time. In general, the outcome of this evaluation depends on the type of authentication method designated for accessing the various package components (for example, in order to extract or store data, it is necessary to provide either a Windows token or other, data source-specific credentials). When a package is launched interactively, it runs in the security context of the logged-on account. If Windows integrated authentication is implemented, the permissions granted to this account determine whether the package will complete or fail (and are not affected in any way by the level of privileges held by the package creator). However, if data store connection information has been recorded within a package (which might be the case with SQL Server Authentication), this could allow less-privileged accounts to query or manipulate objects to which they typically have no access. You need to consider carefully which type of authentication suits your environment and security requirements.
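For reference, a DTSRun invocation of a package stored on SQL Server might look as follows (MYSERVER and MyPackage are hypothetical names; the /E switch requests a trusted connection, so the package runs in the security context of the logged-on Windows account):

DTSRun /S MYSERVER /E /N "MyPackage"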

You can also schedule execution of DTS packages using SQL Server Agent jobs. This ability is granted by default to the public role, based on permissions to execute the sp_add_job and sp_add_jobschedule stored procedures in the MSDB database (this applies to all SQL Server Agent jobs, not just the ones that invoke DTS packages). Typically, it is recommended to restrict access to them to a limited number of privileged logins. While jobs typically execute in the security context of their owner, in the case of DTS packages this might be different. The reason is that a job step that invokes a DTS package accomplishes this by running the DTSRun command line utility, which involves, in turn, the use of CmdExec. If you limit permissions to launch CmdExec jobs to members of the SysAdmin fixed server role (which is the default behavior), jobs will still execute in the same security context as their owners. However, you can change this default in the SQL Server Agent Properties dialog box, accessible by selecting the Properties option from the SQL Server Agent node under the Management folder in the SQL Server Enterprise Manager console. On the Job System tab, you need to clear the checkbox next to "Only users with SysAdmin privileges can execute CmdExec and ActiveScripting job steps" and provide a proxy account name and password. The proxy account provides the security context for execution of DTS packages, so ensure that it has access to all relevant data stores, file system paths, or COM components that might be used by ActiveX scripting tasks. Remember, however, that at the same time you are allowing users without SysAdmin privileges to run potentially dangerous ActiveScripting jobs.
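Restricting job creation follows the same pattern as restricting the DTS stored procedures discussed earlier - a sketch:

USE msdb
REVOKE EXECUTE ON sp_add_job FROM public
REVOKE EXECUTE ON sp_add_jobschedule FROM public
GO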

What further complicates scheduling DTS packages is the fact that the content of the corresponding SQL Server Agent jobs includes the SID of the SQL Server Agent account as part of the encrypted string passed to the DTSRun command. This creates a problem if the password of the SQL Server Agent account is changed, since it forces you to recreate each package (existing packages will fail due to the password mismatch). Also, keep in mind that the SQL Server Agent service needs to be running in order for scheduled jobs to be launched.

In the next article of this series, we will delve deeper into one of the topics mentioned earlier - Meta Data Services and its auditing capabilities.

SQL Server 2000 Security - Part 12 - Auditing with Meta Data Services
By Marcin Policht

In our previous two articles of this series, we discussed auditing and security related aspects of SQL Server 2000 Data Transformation Services. Now, we will combine these topics by presenting DTS Meta Data Services, which provides the ability to audit creation, execution, and modification of DTS packages, as well as track associated data changes. In particular, you can track package version history or data lineage. This information is stored according to the rules of the DTS Information Model, which can be accessed and manipulated both from SQL Server Enterprise Manager and with programmatic methods (for a review of SQL Server 2000 DTS, refer to its coverage on the DatabaseJournal Web site).

As we mentioned in our previous article, the ability to save packages to Meta Data Services is disabled by default. This is intentional, since its use is associated with increased resource utilization and affects package execution speed. In order to alter the default setting, in the Package Properties dialog box (which appears after selecting the Properties entry from the context sensitive menu of the Data Transformation Services node in SQL Server Enterprise Manager), you need to mark the checkbox labeled "Enable Save To Meta Data Services". This will make a new "Meta Data Services" option available in the Location listbox of the Save DTS Package dialog box, displayed when saving packages from the DTS Designer interface. When you choose this option, you will also have a chance to specify the target server where the package meta data will be stored, authentication information for connecting to it, and scanning settings. These settings determine whether and to what extent information about the databases and objects referenced by DTS packages will be scanned and written into equivalent meta data structures. The main check box determines whether such meta data should be created to begin with. Once you enable it, you also need to decide whether you intend to:

* rescan all information about data stores referenced in the package into Meta Data Services, even if such information is already present from previous scans (corresponding to the "Scan all referenced catalogs into repository" option), or use existing information (corresponding to "Use scanned catalogs if already present in repository"), which saves time and resources but should be used only if the data structure has not changed between scans.
* perform the activity defined by the previous selection for all data structures referenced in the package (corresponding to "Scan catalog always") or limit it only to the ones which have not yet been scanned (corresponding to "Scan catalog if not already present in Meta Data Services").

You also have the ability to import meta data for a particular database (rather than for all databases referenced by a DTS package). This can be accomplished by selecting the "Import Metadata..." item from the context sensitive menu of the Meta Data Services Packages and Meta Data nodes under the Data Transformation Services folder in SQL Server Enterprise Manager. Importing databases allows you to view their information by clicking on the Meta Data node. The display is divided into three categories, grouped under the Browse, Lineage, and Package headings:

The Browse section provides a listing of Database Meta Data, including the database (its name, SQL Server instance name, and version), schema (in the case of SQL Server, this refers to the database owner), tables, their columns, and the characteristics of each (datatype, length, scale, precision, and nullability). For each table and column, you can find the DTS packages that use it as their data store connection, as well as related tables and columns that function as a source or a destination for its data. This is done by selecting the Package and Source/Destination items from the context sensitive menu of the icon representing individual tables and columns. For this type of information to be available, you need to import the relevant package information into the meta data as well (which we will discuss next).

The second section deals with lineage, which provides the ability to track changes to data. In general, SQL Server 2000 Meta Data Services offers two types of lineage:

o row-level data lineage - identifies the source and destination of a particular piece of data (a table row), as well as all transformations that were applied to it (including DTS package execution information). This is accomplished by assigning a globally unique identifier (GUID) to each row, which is used for tracking purposes.

o column-level data lineage - maintains a record of the columns involved in transformations (without tracking individual values in each). This allows correlating references between packages that might be using the same columns as the source or destination of their transformations. It also provides package revision and execution history, recording who creates, runs, and modifies packages, as well as where and when.

In order to help you understand this concept, we will demonstrate implementation of both types of lineage using a sample DTS package based on a template called Template Demo.dtt, stored in the Program Files\Microsoft SQL Server\80\Tools\Templates\DTS folder. To create a package, right click on the Data Transformation Services node in SQL Server Enterprise Manager and select the All Tasks -> Open Template option from its context sensitive menu. Point to the target file in the Select File dialog box, click on the Open command button, and choose the New Package entry in the Select Package dialog box. This will present you with the DTS Designer interface, containing a fairly rudimentary sample package, using a single, undefined Transform Data Task and two OLE DB Provider for SQL Server connections, one pointing to the Northwind database as the source and the other referencing tempdb as the destination. To define transformation properties, double click on the Transform Data Task arrow, and from the Properties dialog box, select the Shippers table as its source (on the Source tab). You can click on the Preview button to verify that the source table exists and has some data in it (three rows by default). Next, switch to the Destination tab. This will generate a Package Error informing you that the table does not exist at the destination, which makes sense, since we have not created it yet. Acknowledge the error, click on the Create button, and verify that the T-SQL code presented there will create the Shippers table at the destination. Switch to the Transformations tab and note that, without any customization, the Transform Data Task simply copies data from Source to Destination. Click on OK to close the Properties dialog box. If you execute the package at this point, you should receive confirmation of its successful completion. You can also easily verify that tempdb now contains the Shippers table, with content identical to Shippers in the Northwind database.

Note, however, that information about the package execution has not been recorded into Meta Data Services and no lineage statistics are available. In order to be able to track row-level data lineage, we will need to implement the following steps:

1. Import the source and target databases into the Meta Data Services.

2. Switch to the DTS Designer interface and select the Properties option from the Package top-level menu. On the Advanced tab in the DTS Package Properties dialog box, mark the "Show lineage variables as source columns" and "Write lineage to repository" checkboxes, click on OK to confirm your choice, and return to the DTS Designer.

3. Double click on the Transform Data task arrow to display the Transform Data Task Properties dialog box. Switch to the Transformations tab. You will notice that the list of columns in the Source has changed and now includes DTSLineage_Full and DTSLineage_Short. The first entry will store the globally unique identifier (GUID) representing the package version; the second is intended for a version checksum (unlike the GUID, its uniqueness is not guaranteed - although still very likely).

4. Now, you need to create identical columns at the destination. To accomplish this, switch to the Destination tab, and click on the Create command button. You will notice that the T-SQL statement now contains two additional fields, Lineage_Full (of type UNIQUEIDENTIFIER) and Lineage_Short (of type INT). Rename the table identifier in the CREATE TABLE statement of the Create Destination Table dialog box to ShippersMDS and click on the OK button. This will create an appropriately structured table at the destination. (Note that we could have done this at the very beginning, as long as the appropriate options were selected on the Advanced tab of the package Properties dialog box - we went through the initial steps purely to demonstrate the changes that take place as the result of activating these options.)

5. Now you need to modify transform properties from the Transformations tab. To change the mapping between source and destination columns, first click on Select All and then on Delete All. This will remove existing mappings. To create a new one, make sure that all Source and Destination columns are highlighted and click on the New button. From the "Create New Transformation" dialog box, select Copy Column entry, and click on OK twice to return to the Transformations tab.

6. Click on OK to close the Transform Data Task Properties dialog box and return to the DTS Designer window.

7. Assign to the package a meaningful name and save it to Meta Data Services using the Save As item from the Package menu. Ensure you select the Meta Data Services as the target location.

8. Execute the package.

At this point, all relevant row-level lineage information should be available. For example, in the Lineage section of the Meta Data node of SQL Server Enterprise Manager, you can type either the Lineage Full or Lineage Short value (you can extract them from the ShippersMDS table in tempdb) into the corresponding text boxes and click on the Find Lineage command button. This will display information about the package responsible for the creation of the corresponding table entry. Similarly, you can obtain information about its source using the Source/Destination context sensitive menu option.
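You can retrieve the lineage values needed for this lookup directly from the destination table created in step 4 above:

SELECT ShipperID, CompanyName, Lineage_Full, Lineage_Short
FROM tempdb..ShippersMDS
GO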

To implement column-level data lineage, all that is necessary is to ensure that you mark the "Resolve package references to scanned catalog meta data" checkbox in the Scanning Options dialog box when saving the package into Meta Data Services. In our case, you can choose "Scan all referenced catalogs into repository" and "Scan catalog always" options.

The Package section displays information about packages saved to Meta Data Services, including, for example, their name, version, author, creation time, lineage, and execution statistics (the account used to launch the package and the launch time). You should be able at this point to see a reference to the package we just saved and executed.

Note that when saving to Meta Data Services, you lose the ability to assign owner and user passwords. While this makes your packages less secure, you gain the ability to audit their use. In addition, keep in mind that this functionality is intended primarily for troubleshooting more complex data warehousing scenarios, where keeping track of transformations via DTS packages would otherwise be fairly difficult.
