SQL Server with Mr. Denny


July 30, 2012  4:00 PM

Why is SQL Injection still a problem?

Denny Cherry

SQL Injection is probably the most popular attack vector for hackers when they attempt to break into databases.  The reason for this is that it is so easy for an attacker to gain access to the system, and typically to get pretty high-level permissions within the database engine, so that they can then export some or all of the data from it.

The really sad thing about this is that it is very easy for software developers to protect against SQL Injection attacks.  The way to protect the application from SQL Injection is to use parameterized queries instead of the older, and usually easier, technique of simply building the database query by concatenating variables in the application code.

One of the reasons that I think SQL Injection is still such a big problem is the separation of duties that we have at most companies.  The software developers that build the applications never have to deal with the cleanup from a SQL Injection attack.  Many developers, probably because they don't work all that closely with database administrators, see SQL Injection as a SQL Server problem, not an application problem.  This thinking is wrong, as the only way to prevent SQL Injection problems is to protect the data at the application layer by using coding best practices such as the parameterized query shown below, taken from Chapter 8 of my book Securing SQL Server (this sample is in VB.NET; the book includes examples in C# as well).

Private Sub MySub()
    Dim Connection As SqlConnection
    Dim Results As DataSet
    Dim SQLda As SqlDataAdapter
    Dim SQLcmd As SqlCommand
    SQLcmd = New SqlCommand
    SQLcmd.CommandText = "sp_help_job"
    SQLcmd.CommandType = CommandType.StoredProcedure
    'The job name is passed as a typed parameter, never as part of the command text.
    SQLcmd.Parameters.Add("@job_name", SqlDbType.VarChar, 50)
    SQLcmd.Parameters.Item("@job_name").Value = "test"
    Connection = New SqlConnection("Data Source=localhost;Initial Catalog=msdb;Integrated Security=SSPI;")
    Using Connection
        Connection.Open()
        SQLcmd.Connection = Connection
        SQLda = New SqlDataAdapter(SQLcmd)
        Results = New DataSet()
        SQLda.Fill(Results)
    End Using
    'Do something with the results from the Results variable here.
    'The Using block has already closed and disposed the connection.
    SQLcmd.Dispose()
    SQLda.Dispose()
    Results.Dispose()
End Sub

Now I freely admit that coding the .NET code this way takes more effort than using dynamic SQL, which is shown below.

Private Sub MySub()
    Dim Connection As SqlConnection
    Dim Results As DataSet
    Dim SQLda As SqlDataAdapter
    Dim SQLcmd As SqlCommand
    SQLcmd = New SqlCommand
    'The user-supplied value is concatenated straight into the command text,
    'which is exactly what makes this version vulnerable to SQL Injection.
    SQLcmd.CommandText = "exec sp_help_job @job_name='" + MyVBNetVariableWithTheJobName + "'"
    SQLcmd.CommandType = CommandType.Text
    Connection = New SqlConnection("Data Source=localhost;Initial Catalog=msdb;Integrated Security=SSPI;")
    Using Connection
        Connection.Open()
        SQLcmd.Connection = Connection
        SQLda = New SqlDataAdapter(SQLcmd)
        Results = New DataSet()
        SQLda.Fill(Results)
    End Using
    'Do something with the results from the Results variable here.
    'The Using block has already closed and disposed the connection.
    SQLcmd.Dispose()
    SQLda.Dispose()
    Results.Dispose()
End Sub
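
To see just how dangerous that concatenation is, consider what the CommandText ends up being if a user types something other than a job name into the field.  The value below is purely a made-up example; any statement the attacker wants could be injected the same way.

-- What the developer expected the command to be:
exec sp_help_job @job_name='test'
-- What actually gets sent if the user supplies the value: test'; DROP TABLE Customers;--
exec sp_help_job @job_name='test'; DROP TABLE Customers;--'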

The problem that I have with application developers taking the easy, shorter way out is that their job isn't to take the easy way out.  Their job is to build the application securely and robustly, not in such a way that the application is as easy as possible to write.  This problem can probably be traced back to the specifications which were written for the application, which probably don't mention security anywhere.  Because security isn't a primary concern for the business unit it is left as an afterthought, an afterthought which is typically ignored until after there has been a breach.

Another reason that I think SQL Injection is a problem is that we trust that our users wouldn't want to do anything to hurt our applications or their data, as they have a vested interest in keeping the system working correctly.  And this is true to some extent.  However, when you publish an application on the public Internet not only will your customers be using it, but others will be attempting to hit the forms within the application.  Because of this, we can't trust any input that the application user passes in, even if the value comes from a hidden field or has been validated by the front end.  If the value hasn't been validated and properly scrubbed by the back end then it shouldn't be trusted, and the only way to fully validate and scrub the value is to use the parameterized query technique which I showed above; no other technique, no matter how clever, will be as successful.
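
For the times when dynamic SQL genuinely is needed on the database side, the same parameterization rule applies there as well.  The sketch below uses sp_executesql so that the user-supplied value is handed over as a typed parameter rather than being concatenated into the statement text; the procedure name here is just a placeholder for the example.

CREATE PROCEDURE dbo.FindJob
    @job_name sysname
AS
BEGIN
    DECLARE @sql nvarchar(max)
    -- The statement text never contains the user-supplied value
    SET @sql = N'SELECT job_id, name FROM msdb.dbo.sysjobs WHERE name = @name'
    -- The value is passed to sp_executesql as a parameter, so it cannot change the statement
    EXEC sp_executesql @sql, N'@name sysname', @name = @job_name
END
GO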

I’ve been working in the IT space for about 15 years now, and I’ve worked on dozens of application development projects over the years at companies large and small, and I can’t recall a single application design specification which included security of the data as a component of the application development.  As the production DBA for companies I’ve forced the issue when I would find problems early enough in the development cycle, but often I wouldn’t find out about the application that was being built until it was time to deploy it to production.  At that point it is too late to make the kinds of major changes which need to be made, and because security doesn’t add value to the application or to the business unit, security isn’t given the developer or QA resources which are needed to make the changes required to properly secure the data from potential attackers.

I urge everyone that reads this, developers and administrators alike, to look at how the applications within your environment connect to the database engine (it doesn’t matter which database engine you use, they can all be broken into via SQL Injection, and yes, MySQL is included in this) and, if dynamic SQL is being used and isn’t being properly parameterized, talk to upper management about this problem.  Explain to them that while this won’t be something which adds features to the application and won’t necessarily add value to the business, it is something which absolutely needs to be resolved.

Suffering a SQL Injection breach will have a negative impact on the company and the IT department in several ways.  From the company side of things, customers will lose confidence in the company, which means that they will stop purchasing your product or using your service.  This means that the company will make less money.

Internally the business unit will lose faith in the IT staff as they can’t properly secure their applications from attackers.  The business unit will then lose faith in the developers as the IT staff explains that the only way to protect 100% against this sort of attack is to fix the application, which means lots of time (possibly hundreds or thousands of man hours) spent just fixing database access code and not adding functionality to the application.  The business unit will then assume that the developers aren’t good developers and may request that new developers be brought in, that the application development be outsourced, or that a third party application be purchased.  All of which mean that you and/or your coworkers could easily be out on the street looking for more work.

If you are working on a new development project and security isn’t included in the specification, push to have it added.  Yes, it will slow the delivery of the application down, but it will reduce the risk of a data breach, or worse, a total network breach (where the attacker is able to get into the company network and take control of internal resources like domain controllers, file servers, etc.), which would be a major disaster for any company, no matter how large or small.

I hope that you take this to heart and fix any applications in your environment which have SQL Injection issues, so that we can all stop reading about these data breaches, which are coming all too often.

Denny

July 26, 2012  2:00 PM

Security Sessions at SQL PASS 2012

Denny Cherry

The session list for the SQL PASS 2012 Summit has been released.  This year there are 192 sessions being presented at the summit, but only 4 of them are on SQL Server security.  While this appears to be a bit better than before (if I remember correctly there were 3 at the 2011 summit), based on the number of large scale data breaches this year we need to be talking more about SQL Server security, and most importantly people need to actually listen.

  • Where Should I Be Encrypting My Data, being presented by me
  • SQL Server 2012 Security for Developers, being presented by Andreas Wolter
  • The Evolution of Security in SQL Server 2012, being presented by Don Kiely
  • SQL Injection: From Website to SQL Server, being presented by Mladen Prajdić

There are a couple of other sessions that mention security in the abstract, but based on these abstracts I’m guessing that security won’t be mentioned very much during the actual sessions.

Denny

P.S. Don’t forget about my free SQL PASS 2012 First Timer’s webcast coming up on October 17th, 2012 at 1pm Pacific / 4pm Eastern.  You do need to sign up for the session, which is FREE, so get signed up.  Even if you have attended the SQL PASS summit before, this is worth it as there are some big changes in how PASS will be laid out at the convention center this year.


July 23, 2012  2:00 PM

SQL Saturday 147 Precon (#sqlsat147)

Denny Cherry

If you are planning on attending SQL Saturday 147 down in Recife, Brazil you should check out my pre-con session being held on August 24th.  In this session I’ll be talking about all of the high availability options which are available for SQL Server 2012.  I will be talking about AlwaysOn Availability Groups, Clustering, Database Mirroring, and Transaction Log Shipping.  Most importantly, we’ll be talking about how you should decide which of these technologies to use.

The cost for this pre-con is $95 (US) plus an additional $6.22 (US) in Eventbrite fees, for a total of $101.22.

I urge you to sign up for this great all-day pre-con session today, as seating is limited.

Denny

P.S. Don’t forget this session will be held in English.


July 19, 2012  2:00 PM

Making Technical Writing Easier

Denny Cherry

Those of us that have worked on books have probably all run across the same problem at one time or another.  When you are going through the book writing process, you always end up going back and wanting to add a code example, or a screenshot somewhere in the middle of the chapter, and you then have to go through and renumber all of the other code examples and images that come after that.  Needless to say this is a pain to do, and can take forever if you’ve got a lot of images.

When I was working on the second edition of my book Securing SQL Server this was a major problem for me on several of the chapters in the book as I wanted to put new screenshots for SQL Server 2012 into the middle of the chapter, or there was some new feature that I was going to be talking about in the middle of a chapter, so new code samples were needed.

As I was going through the chapter I was thinking, “wouldn’t it be nice if Word would handle all this for me?”  I was able to find a way to do this, using the native bullet point feature within Word.  It took a little tweaking to my Word docs to make this work, but here’s what I did in case you run into this same problem.

In my case I needed three different numbering systems.  One for Images, one for Examples and one for Tables.  To do this I had to create three new list styles within Word.  To create a new list style you first need to click on the multilevel list button in the ribbon at the top of the document and then select “Define New List Style…” from the menu as shown below.

This will open the new list style editor.  From here you can give the style a name, "Figures" in my case.  Then click the Format button at the bottom and select "Numbers" from the menu that opens.  On this new page, shown below, you can set up the formatting that will be used for the list.  Enter the numbering format in the "Enter formatting for number" section like I've done in the screenshot below, and don't forget to set the "Number style for this level" option.  In the level list you'll see that I'm only changing level 1, as we won't be using the child levels.  In my numbering I've got it set up for Chapter 3, which you can see because the first figure number will be "Figure 3.1".

Click OK on this screen and then OK on the next screen as well.  Now when you click on the multilevel list option in the ribbon you'll see the Figures style as an option as shown below.

I then repeat this exact same process for Examples and Tables.  Now when I’m doing my writing I can simply use the native feature to handle the numbering with the official number next to the caption.  The end result looks like this.

To setup the reference to the figure we use the Cross Reference feature which can be found on the “Insert” tab of the ribbon.  Simply put the cursor where you want to reference the number then click the cross reference button.  This will pop up a list of all the Figures, Examples and Tables which exist within the document.  Select the correct one and click the “Insert” button then the “Close” button.  This will insert the reference into the document.

Once all your figures, examples, and tables are set up this way, anything you put into the middle of the document will number itself correctly.  If you need to insert a figure, example or table into the middle of the chapter all of the caption numbers will be updated automatically; however, the cross references will not update automatically, so you'll need to update them.  Thankfully this isn't all that hard to do.  Simply do a select all (CTRL+A), then right click somewhere and select "Update Field" from the context menu.

Now if your publisher is like mine, they want [CAPTION] in front of the actual caption, but not within the actual chapter text, so just before I send in the chapter I'll edit the style and put the [CAPTION] tag in front of the Figure, Example and Table numbers.  This is done by bringing up the menu shown in the first screenshot above, then right clicking on the style to edit.  Then select the "Format" button at the bottom right and select "Numbers" from the menu that opens.  Just put the change needed in front of the caption so that instead of reading "Figure 3.1" it reads "[CAPTION] Figure 3.1:".  When you OK out of these windows the actual image captions will be updated, but the references won't be.  As long as you don't do an update field (from the prior paragraph) everything will look like you expect it to.

Hopefully you can use this trick when you are doing your writing.

Denny


July 16, 2012  2:00 PM

Stop Using SQL Logins

Denny Cherry

Hey vendors, consultants, clients, etc.: STOP USING SQL LOGINS.  Now, if the SQL Server you are using isn’t attached to a Windows domain then fine, odds are you’ll need a SQL Authentication login.  However, if the machine is a member of a Windows domain then log in to SQL Server using Windows Authentication.

The other day I was connecting to a client’s SQL Server.  I had to log onto a different server to run SSMS, so far so good.  But then they gave me the connection information for the SQL Server, which had a SQL Auth username and password.  I’ve already got a domain account, so for the love of god why did I have to have another stupid username and password instead of just connecting via the Windows account I had JUST USED to log into Windows?

Vendor apps are just as bad.  They’ll insist that the Windows services run under a domain account, usually so that they can access network shares or something, but then they require a SQL Auth account to be created to log into the SQL Server database.  This means that someone needs to track another username and password, and given that SQL Auth accounts are easier to break into than Windows Auth accounts, it’s less secure overall on top of that.
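
If the vendor’s service is already running under a domain account, letting that account into the database instead of creating yet another SQL Auth login is only a couple of statements.  The account and database names below are made up for the example; the roles you grant would obviously depend on what the application actually needs.

CREATE LOGIN [MyDomain\VendorAppSvc] FROM WINDOWS
GO
USE VendorAppDB
GO
CREATE USER [MyDomain\VendorAppSvc] FROM LOGIN [MyDomain\VendorAppSvc]
GO
-- Grant whatever database roles the application needs to the domain account
EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'MyDomain\VendorAppSvc'
GO
EXEC sp_addrolemember @rolename = 'db_datawriter', @membername = 'MyDomain\VendorAppSvc'
GO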

In 15+ years of managing SQL Servers for people I’ve found only a few software vendors that were even willing to try running their software under a domain account so that we could use Windows auth to connect to the SQL Server.  And one of those was the company that I worked for, where I forced the developers to make it an option so that the DBA and sysadmin could install the software under domain accounts and use those domain accounts to connect to the SQL Server.  The developer didn’t understand why I cared about this, but eventually I got my way, mostly, I think, so that I’d shut up about it.

In my mind there are only a few times when it is truly acceptable to use a SQL Authentication Login at this point.

  1. The clients are not running Windows.
  2. The clients are not on the same domain, and the two domains aren’t trusted, and the user doesn’t have a Windows login in the same domain as the SQL Server.
  3. The SQL Server is running version 4.2 or earlier.
  4. The client application is a Windows service and it isn’t on the domain or the domains aren’t trusted.

Now you’ll notice that for #2 I was pretty specific.  That’s because if the client is running Windows and the user has a domain login in the same domain as the SQL Server, then the client application can be run as the user’s account in that domain (this includes SQL Server Management Studio).

In summary, in case you didn’t get my point yet, STOP USING SQL AUTHENTICATION LOGINS.

Denny


July 12, 2012  2:00 PM

PBM, Replication and Stored Procedure Names

Denny Cherry

One of the things that Policy Based Management is really good at is making sure that people don’t create stored procedures that start with “sp_”.  However, when the server that you are deploying the policies to is also being used for SQL Server replication this gets a little more complex, seeing how all of the replication stored procedures start with “sp_” by default and they aren’t marked as system stored procedures.

I came across this problem when setting this up on a client’s system.  The way that I was able to get around this was to set up the policy to evaluate against a condition which filtered out the stored procedures that are used for SQL Server replication.  I did this by setting up a condition, which I called “User Defined Stored Procedures”, which had two values in the expression.  The first looks at the Schema field and excludes anything in the “sys” schema (which takes care of all the system objects), and the second looks at the Name field and excludes everything that matches “sp_MS%”.  You can see this condition below.

Now, to ensure that this only ran against the user databases I created another condition against the Database facet which looked at the IsSystemObject field and made sure that it was False, as shown below.  That way I could put procedures like sp_whoisactive and sp_who3 into the master database and not have a problem with them.

The actual check condition of the policy was set up easily enough, simply checking that the stored procedure name isn’t like “sp%”, as shown below.
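
If you want a quick way to see what the policy is going to flag before you schedule it, the same logic is easy to express as a plain query.  This is just a sanity check I would run by hand in a user database, not part of the policy itself, and it mirrors the conditions above: user stored procedures starting with sp, excluding the sys schema and the sp_MS% replication procedures.

-- List user stored procedures whose names would trip the policy
SELECT SCHEMA_NAME(p.schema_id) AS [schema], p.name
FROM sys.procedures p
WHERE p.is_ms_shipped = 0
  AND SCHEMA_NAME(p.schema_id) <> 'sys'
  AND p.name LIKE 'sp%'
  AND p.name NOT LIKE 'sp[_]MS%'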

Bringing this all together is the actual policy, which uses the check condition and filters the objects being checked against the two other conditions, which helps to limit the amount of time that the policy takes to execute, as shown below.  As this is a SQL Server 2008 R2 instance in this example, I had to use a schedule to verify everything nightly, but that’ll do.

Hopefully if you run across this situation this will help you get everything set up faster than I was able to.

Denny


July 9, 2012  6:20 PM

Windows 2012 Dates Announced

Denny Cherry

It appears that Microsoft has announced the dates for RTM (Release To Manufacturing) and GA (General Availability) for both the Windows 2012 server and client operating systems.  Both the client and server OSes will RTM the first week in August, with the server OS becoming available in September and the client OS in October.  You can read more about the announcements and the schedules in the server and client teams’ posts on their respective blogs.

Denny


July 5, 2012  6:31 PM

VMworld needs to provide more feedback to session submitters

Denny Cherry

I’ve submitted a lot of sessions to a lot of conferences over the years.  And while the feedback from some of the conferences is a little cryptic, thanks to the review team only having access to canned responses, it’ll do.  Now personally I’d love to get specific feedback saying exactly why my session wasn’t selected, but I’m realistic in that there are probably hundreds or thousands of sessions submitted for every major conference, so canned responses are as good as it is going to get.

However VMworld hits the bottom rung of the ladder.  The VMworld content team simply provides this feedback:

Here is a list of the most common reasons why sessions were declined:

  • There were too many submissions with similar topics.
  • The submission was too basic, not enough information was provided in the abstract.
  • The session was too single vendor product focused and likely to have a commercial nature.

Having only the most common reasons why sessions weren’t picked isn’t exactly helpful.  Given that my abstracts have been accepted at conferences like TechEd and the SQL PASS Summit I can assume that the submission wasn’t too basic.  While my sessions are SQL Server specific, virtualizing SQL Server is a major push for VMware, so I’m assuming that I don’t fall into the last bucket.  That leaves that there were just too many submissions, which I can’t believe, as currently there are only three sessions on the schedule that talk about SQL Server (as of my writing this).  The first is about virtualizing SQL Server, the second is about how General Mills succeeded in virtualizing SQL Server using VMware, and the third appears to be a repeat of a SQL virtualization session from last year.

The first session is called “Virtualizing SQL High Availability”, but the abstract doesn’t talk about SQL Server high availability; it only talks about SQL Server performance tuning within a virtual machine, which, let’s be honest, from a SQL Server perspective is identical to tuning SQL Server on a physical machine.  Make sure the disks are fast enough and that you have enough memory and CPU, then do the normal SQL performance tuning.  Looking at the external speaker’s company website he appears to be knowledgeable about databases in general, but he doesn’t list anything more current than SQL Server 7 in his publications list for SQL Server, and most of his publications are for Oracle.

The second session, called “Virtualizing SQL 2012 at General Mills”, looks basically like a fluff piece being led by a Technical Account Manager from VMware (Sales) and a VMware Architect.  In other words, they’ll have no one around who can actually answer any SQL Server questions, or talk about SQL Server in depth.

The third session is called “Virtualizing SQL Best Practices”, which I’m pretty sure I saw at VMworld last year.  When I saw the session by this same presenter last year it was full of incorrect and very outdated information, much of which was based on SQL Server 2000 but was being presented as applying to SQL Server 2008 and SQL Server 2008 R2.  If this is the same session as last year, I REALLY hope for the attendees’ sake that it has been updated and the incorrect information corrected.

Now, am I writing this because I think I should have been handed a speaking slot at VMworld?  No.  I’m writing this because I think that VMworld needs to provide a better service to their potential speakers, so that the speakers can improve their abstracts based on VMworld’s requirements (as every conference has different criteria that they use when evaluating session abstracts).  I also feel that VMworld needs to do a better job providing SQL Server specific content to their attendees.  By comparison, VMworld has 9 Oracle sessions listed (granted, I think that only 7 of them are actually Oracle specific).

Hopefully this blog post won’t fall on deaf ears at VMworld and they will do something about their lack of SQL Server content, as out of the three sessions listed I personally don’t have much hope for any of them being very useful to the general VMworld attendee.

Before writing this blog post I emailed VMware asking for more information about why my sessions weren’t selected.  I got a pretty generic response back which doesn’t say anything about why my specific sessions weren’t selected:

  • Take the time to specify some of the details in your session description.  The title, abstract, outline, and session takeaways are key factors in determining whether your session is selected and can affect session attendance.
  • Be sure your perspective or story is unique.
  • Attendees are there to learn, so educate rather than pitch your product.  Your proposal submission should clearly reflect your intention.
  • Make sure your topic is relevant to the audience you’re targeting. Review the content topics before submitting a session.
  • Session selections are based on the content described in the submission, and the speaker(s). Speakers should have extensive experience presenting at conferences of a similar nature. If a speaker has presented at VMworld in the past, we will look at speaker scores and take them into consideration during selection.

Again, not exactly the most useful information for me as an abstract writer to improve on.

Denny


July 5, 2012  2:00 PM

Prepping the security for a data warehouse

Denny Cherry

So in an earlier blog post I talked about how I had to grant some users the ability to create indexes on a reporting server.  A couple of people have asked me how I created the domain groups and ensured that the permissions were set up correctly.  In this case I did it with two SQL scripts.  The first generates command line commands which can then be run on a domain controller in order to create the domain groups that the users will be put into.  This script simply compares the groups which exist as logins against the list of databases on the SQL Server.  In this case I’m filtering the databases down by prefix, as I only want to deal with databases that start with rpt or secure.  There are 5 different domain groups created for each database.  I take the output of this first query and run it from a command line window on an Active Directory domain controller and the domain groups are created.  Once they are created I simply drop them into the correct folder in AD and the help desk can start dropping users into the groups.


select 'net group EDW1-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_SHOWPLAN_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_INDEX_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_VIEW_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_PROC-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
GO

The second script that I’ve got creates the logins for each of these domain groups, again by simply looking at the ones that don’t exist in sys.server_principals.  This script generates T-SQL code which I then just copy and paste into another query window and execute.  The only thing that I have to remember with this script is that there are line breaks in the output, so I need to ensure that I run it with the results going to text, not to the default grid.


select 'use master
GO
CREATE LOGIN [MyDomain\EDW1-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1-' + name + '] FROM LOGIN [MyDomain\EDW1-' + name + ']
GO
EXEC sp_addrolemember @rolename=''db_datareader'', @membername=''MyDomain\EDW1-' + name + '''
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)
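
To give you an idea of what ends up in the text results, here is roughly what the first branch of this script prints for a hypothetical database named rptSales; this is the block I copy into a new query window and execute.

use master
GO
CREATE LOGIN [MyDomain\EDW1-rptSales] FROM WINDOWS
GO
use rptSales
GO
CREATE USER [MyDomain\EDW1-rptSales] FROM LOGIN [MyDomain\EDW1-rptSales]
GO
EXEC sp_addrolemember @rolename='db_datareader', @membername='MyDomain\EDW1-rptSales'
GO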

The great thing about these scripts is that they produce a consistent set of domain groups for each new database that is added to the reporting server, so the help desk can quickly and easily figure out which groups users need to be placed into.  Could I have done this in PowerShell?  Yeah, I’m sure that I could have.  But I don’t know the PowerShell cmdlets that would have been needed to create the domain accounts, so honestly I didn’t bother even trying.  I knew the SQL commands to do this off the top of my head.  While someone else probably could have written this in PowerShell in 10 minutes, I did it in T-SQL in 10 minutes and it’s reproducible.

(Don’t forget that for the index domain groups I’ve got a SQL Agent job, which runs every night, that grants the rights to those groups.)

If I need to add another set of domain groups into the script, I can simply copy and paste an existing one and make the few needed changes.

Denny


July 2, 2012  2:00 PM

Giving users the ability to create indexes

Denny Cherry

So I ran across a pretty interesting requirement when setting up a new reporting database server.  The end users who run queries against the databases on this server are pretty good at performance tuning their own queries, so they wanted the ability to create indexes on the tables on the reporting server.  I figured no problem, I’d just grant them the ALTER ANY INDEX right and be done with it.  The only problem is that there isn’t any such right.

There’s also no right at the table level which can be granted that will give the user the ability to create indexes without giving them other rights on the object.

Some Background

So I started doing a little bit of hacking around the SQL Server and I came up with a solution that’ll work.

A little background on how security is set up on this SQL Server to begin with.  All the rights on this SQL Server are controlled via Active Directory domain groups.  There are four rights which we are concerned with: being a member of the db_owner fixed database role, being able to query any table or view in the database, being able to create stored procedures, and being able to create indexes on any table in the database.  To handle this there are 4 domain groups for each database on the reporting server, named {ServerName}-{Privilege}-{DatabaseName}, so for example there is a domain group named “EDW1_CREATE_PROC-optin” which grants the users of the domain group the CREATE PROCEDURE right within the optin database on the SQL Server EDW1.

Like the create procedure domain group, I’ve also got groups named CREATE_INDEX, CREATE_VIEW, etc. to grant all the needed rights.

The Workaround

To work around there not being a right to create any index that you want, I granted the CREATE_INDEX group the right to ALTER any table.  This allows the user to change the table, create indexes, and drop the table.  In case new tables are added to the system I set up a job which simply loops through all the databases and grants the ALTER right on each table to the domain group that maps to that specific database, using the T-SQL code below.  This job is simply configured to run nightly, and the SLA with the business users is that when new tables are added they simply won’t be able to create indexes on the new tables until the next day.  As this data is simply being replicated over from production there shouldn’t be many new tables being added very often.

EXEC sp_MSforeachdb '
USE [?]
declare @schema sysname, @table sysname, @database sysname
declare tables CURSOR for select name, schema_name(schema_id) from sys.tables WHERE is_ms_shipped = 0
declare @cmd nvarchar(4000)
open tables
fetch next from tables into @table, @schema
while @@fetch_status = 0
begin
set @cmd = ''GRANT ALTER ON ['' + @schema + ''].['' + @table + ''] TO [DomainName\EDW1_CREATE_INDEX_'' + db_name()  + '']''
print @cmd
if exists (select * from sys.database_principals WHERE name = ''DomainName\EDW1_CREATE_INDEX_'' + db_name())
exec (@cmd)
fetch next from tables into @table, @schema
end
close tables
deallocate tables'

You’ll notice within the cursor above that I’m also checking whether the domain group exists or not, so that if it hasn’t been created for some reason the script won’t throw an error message and stop.

Now, as the users are being granted the ALTER right on each table, this is giving them way more rights than they actually need on the table.  To resolve this I created a DDL trigger for the server which checks whether the user is a member of the db_owner fixed database role, and if they aren’t it rolls back the command and tells them to call IT, as shown in the T-SQL below.


CREATE TRIGGER PreventSchemaChange
ON ALL SERVER
FOR ALTER_TABLE, DROP_TABLE
AS

SET NOCOUNT ON

create table #groups (account sysname, type sysname, privilege sysname, mapped sysname, path sysname)

declare @login sysname = suser_sname(), @database sysname, @event_data XML = EVENTDATA()

SET @database = @event_data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname')

insert into #groups
exec xp_logininfo @login, 'all'

-- Allow members of the database's dbowner group and sysadmins; roll back everyone else
if not exists (select * from #groups where [path] like '%' + @database + '%dbowner') and IS_SRVROLEMEMBER('sysadmin', suser_sname()) = 0
begin
rollback
raiserror('Access to changing or dropping objects is denied.  Please contact IT to make this change.', 16, 1)

end

GO

You’ll notice that I had to do some hacking around within that DDL trigger to make this work.  That’s because DDL triggers that are server scoped (which mine is) always fire in the master database.  Because of this I couldn’t use the normal IS_ROLEMEMBER function that I wanted to use.  The quickest solution that I could come up with was to get the database name from the EVENTDATA() function and use that in combination with the xp_logininfo system stored procedure to see if the user is a member of the correct domain group.  This works because of the domain group naming convention that I had set up earlier.

Now I’ve put in a Connect item to make all of this easier to deal with, as I can’t be the only one who has this sort of requirement.  Hopefully my request will get some traction, as I wouldn’t think that this would be all that hard to implement.  Granted, even if they do add the feature I’m asking for it probably won’t be until the next major release, and probably won’t be back-ported to SQL Server 2012 or SQL Server 2008 R2 (this system is SQL Server 2008 R2), but hopefully we can get it in a future release so complex hacks like this aren’t required.

Denny

