The SQL PASS organization has taken a different approach to session selection for the SQL Rally than it normally does for the annual Summit. Instead of a full program committee selecting sessions and then announcing them, they are leaving session selection up to the community via a vote. This week's voting is for the Database/Application Developer track. Within that track, sessions are broken down into 11 categories. As a former SQL PASS Summit speaker, my session goes into the Summit Spotlight track, which is probably the toughest track to be in, as everyone else in the Summit Spotlight track is also a former Summit speaker.
The session which I’ve submitted as a Database/Application Developer session is called “Where should I be encrypting my data?”. In this session we’ll be looking at all the various places within the application stack where data can be encrypted. This includes the application layer, the database layer, encrypting over the wire, transparent data encryption, encrypting using your MPIO driver, and offloading encryption to your HBAs. We will look at why and how you encrypt data at each of these levels, and learn when to use each one. As data encryption is a must for any company doing business in the United States, this session is a must for any database or application developer.
Voting for this round of the SQL PASS sessions is open only during the week of January 17th–23rd, 2011, so you’ll need to vote soon for your vote to count. If you would like to be emailed when the next rounds of voting happen, you can register for that as well.
The way the session selection will work (as I understand it) is that the number of sessions has already been decided. The top vote-getters from each group will be given spots at the SQL Rally event, so getting the word out is very important, as is making sure people vote for the sessions that they want to see.
I gave two talks at the recent SQL Saturday 62 in Tampa, FL, and I had a blast doing it. As promised, I’ve posted them on Monday (it is still Monday my time, so that counts, right?).
Thanks everyone for coming to my session. Hopefully I’ll see you at the Tampa SQL Saturday next year.
The short answer is as much as possible.
The more exact answer is that you typically can’t allocate cache to a specific LUN. You can usually carve the total amount of cache up into read cache and write cache. You want as much write cache as possible so that the database writes directly to the disks as little as possible.
There is no number I can give saying that you need X amount of write cache for a 500 GB database, because it doesn’t depend on the database size. It depends on the amount of data change per minute, how fast the data can be destaged from the write cache to the disks, and how much read load is being placed on the disks.
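Since the database size doesn’t tell you anything, one way to get a rough feel for your own write rate is to look at SQL Server’s cumulative file I/O counters. This is just a sketch: the numbers are cumulative since the instance last started, so to get a per-minute rate you would sample twice and subtract.

```sql
-- Rough sketch: how much each database has written to disk since SQL Server
-- started. Sample this twice over a known interval to estimate MB/minute
-- of data change, which is what actually drives write cache sizing.
SELECT DB_NAME(vfs.database_id)                    AS database_name,
       SUM(vfs.num_of_bytes_written) / 1048576.0   AS mb_written_since_startup
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY mb_written_since_startup DESC;
```

The delta between two samples, divided by the minutes between them, gives you a data-change rate you can take to your storage admin instead of a raw database size.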
You’ve really got two questions here. The first is whether I like iSCSI for SQL Server, and the second is whether I like NAS for SQL Server. NAS doesn’t support iSCSI, as iSCSI is a SAN technology.
iSCSI is just fine for SQL Server as long as the server and the storage are on the same subnet, and the network has enough bandwidth available to support the data transfer.
NAS isn’t supported for SQL Server, unless the NAS has a SAN head and the SAN head is used for access to the storage.
This great phrase (which is way more clever than anything I could come up with) was tweeted by Daniel Taylor (Twitter) last week while talking about the pre-cons which are coming up this week in Tampa. And today is the last day to sign up for $99 (plus a few bucks for fees). After today the price goes up to $109 per person.
There is a better deal available if you have a few people that want to go. If you buy two pre-con tickets for the full price of $109 you get a third ticket for free. If you and a friend were already planning on coming, you may as well bring a third person for basically nothing.
Pre-registration for the pre-con sessions is required so that the restaurant knows how many people to expect for lunch. The same is requested for the main SQL Saturday event as well.
One of the all-too-common configuration mistakes made when a website begins to scale to multiple web servers is to put the session state within SQL Server.
For those who don’t know what session state is, let me take a step back and fill you in. Session state allows the web developer to store values in variables up on the web server, but in a shared location, so that no matter which web server the end user connects to, the values are all there and the web servers understand each other's session information. This is very important for shopping websites, or pretty much any site with a login prompt and a load balancer that is configured to send the user's connection to the next available web server.
What is the problem?
The problem is that when most people need to scale out session state, they opt to put the session state information into SQL Server, because that is a nice easy repository that is probably already up and running, not to mention that the session state code in ASP.NET easily supports it. The issue with putting the session state information into SQL Server is that you don’t give a crap about persisting the session state information to disk. The information doesn’t need to live more than an hour or so, and if the information within the database is lost, the only impact is that any values in session variables are lost.
So what are the options?
There are a few options besides using SQL Server for session state.
- Keep using the in-process option.
- Use the ASP.NET Session State Service
The easiest option is to keep using the in-process option in IIS for session information. This means that you will need to configure what are called sticky sessions on the load balancer so that the user always goes to the same web server every time. In the event that a web server fails, all the session information for those users would be lost, and they would need to start over by logging back into the site, or they would have an empty shopping cart, or whatever the site does.
The ASP.NET Session State Service is a Windows service that provides a memory-only session state repository, so session information isn’t ever written to disk and it isn’t kept on the web server. You can either stand up a dedicated machine for this, or set up a couple and cluster the service manually so that you have an HA solution for your session state service. If you have an existing SQL Server cluster, you can even use that cluster for it, if you don’t have anywhere better to put it. Just configure the cluster so that SQL Server runs on one node as its preferred node, and the session state service runs on another node as its preferred node. This way the services won’t ever run on the same node unless the other node is offline. The session state service doesn’t take a lot of CPU power, and the amount of RAM it needs is completely dependent on the amount of information that you are stuffing into session variables on the web servers.
Either solution is better than putting the session state information into a SQL Server database. The information doesn’t need to be written to disk, ever. The information that is written into the session state database is a binary blob, not any sort of relational form, so you can’t really do anything with it anyway.
How do I know if session state is in SQL?
That is an easy one. You’ll have some funky database, probably in simple recovery mode, usually named aspnetdb (maybe with a prefix and/or a suffix).
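If you don’t want to eyeball database names, you can also hunt for the table the ASP.NET SQL session state setup scripts create. This is a sketch that relies on the undocumented (but widely used) sp_MSforeachdb procedure and on the standard table name ASPStateTempSessions; if the scripts were customized, the names may differ.

```sql
-- Sketch: check every database for the standard ASP.NET session state table.
-- The database name varies (aspnetdb, ASPState, prefixed/suffixed variants),
-- but the table name from the stock setup scripts is consistent.
EXEC sp_MSforeachdb N'
IF EXISTS (SELECT 1 FROM [?].sys.tables WHERE name = N''ASPStateTempSessions'')
    PRINT ''Session state tables found in database: ?'';';
```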
What other problems can session state cause?
Well, first it’ll take away buffer pool resources from your other databases. Because the session state database is hit very hard, its data will always be in memory. However much data is in the session state database, that’s how much buffer pool space you are missing for your other databases. Because of the way session state works, every single time a page is clicked on the website, at least one call is made to the session state database, pretty much forcing the database server to keep that database's data in the buffer pool.
Another problem that you can see is ghost records. Paul Randal describes ghost records perfectly in his blog post Ghost cleanup in depth.
When a record is deleted, apart from it being marked as a ghost record, the page that the record is on is also marked as having ghost records in one of the allocation maps – the PFS page (post coming soon!) – and in its page header. Marking a page as having ghost records in a PFS page also changes the database state to indicate that there are some ghost records to cleanup – somewhere. Nothing tells the ghost cleanup task to clean the specific page that the delete happened on – yet. That only happens when the next scan operation reads the page and notices that the page has ghost records.
He also talks about how to fix the problem in the same blog post, so I’ll leave you to click through for all that information.
Another problem goes back to the fact that the SQL Server will be persisting everything to disk, and with the information in this database changing all the time, as the website grows you’ll need faster and faster disks under the session state database just to keep up. As the site grows more popular you will end up spending more and more money on faster and faster disks, just to keep the session state working, much less anything else on the SQL Server. You may even get to the point where you actually need a dedicated SQL Server just to run the session state database.
Another problem is good old locking and blocking. SQL Server likes to take page-level locks when it does insert, update, and delete operations. Unless each page on disk only has a single row in it, you are going to have processes blocked for short periods of time as other users' session state information is updated. You can work around this to some extent by hacking the session state database’s stored procedures and forcing row locks, but now you are taking more locks (and using more memory for locks), etc.
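As an alternative to hacking the stored procedures, you can tell SQL Server not to take page locks on the session table's indexes at all, which forces it down to row locks. This is a sketch that assumes the stock ASPStateTempSessions table name from the standard setup scripts, and it has the same trade-off: more individual locks and more lock memory.

```sql
-- Sketch: disallow page locks on all indexes of the session state table,
-- forcing row-level locks instead. Assumes the standard ASP.NET session
-- state object names; adjust if your setup scripts were customized.
ALTER INDEX ALL ON dbo.ASPStateTempSessions
    SET (ALLOW_PAGE_LOCKS = OFF);
```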
How do I change to the session state server?
First you need to install the service on the server that will be your session state server. On Windows 2008 just install all the .NET components and that should do the trick. You’ll probably want to start the service as well. When you bring up the list of services it is the one called “ASP.NET State Service”.
Then on each web server you’ll want to change the session state information. You can either use IIS Manager to do this, or change the web.config. However you configured session state the first time, that’s how you’ll want to change it this time.
From there select “State Server” from the list and change “localhost” to the server you’ll be using for session state. If you have changed the TCP port number from 42424 to something else you can adjust that here as well.
To set the session state setting via the web.config file find the existing session state information and edit it, or add in the session state information. Set the “mode” to “StateServer” and set the “stateConnectionString” to the same value that goes in the IIS config setting.
<configuration>
  <system.web>
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=SampleStateServer:42424"
                  cookieless="false"
                  timeout="20"/>
  </system.web>
</configuration>
Hopefully I’ve convinced you to move your session state information out of SQL Server and into a repository where it actually belongs.
So at the upcoming SQL Saturday 62 I have managed to get my hands on a couple of cool giveaways which I’ll be doing a drawing for after the session. The first prize will be a signed copy of my new book “Securing SQL Server”, and the second giveaway will be a copy of Windows 7 Ultimate Edition.
The only way to be entered for these great prizes is to attend my pre-con, so be sure to register today.
Have you ever had this conversation with your storage admin?
[kml_flashembed movie="http://www.youtube.com/v/yZlbLkD46Kc" width="425" height="350" wmode="transparent" /]
If you have, then do I have a session for you to attend. On January 14, 2011 in Tampa, FL, I’ll be presenting one of the two SQL Saturday #62 pre-con sessions during the “Day of Data” on Friday.
Hopefully I’ll see you at the precon.
So the other day I got a ticket opened by my company's Managed Service Provider. There was a backup failure on one of our databases. The response from the hosting provider was:
This failed due to a "Error: 3041, Severity: 16, State: 1." error. This is because you must perform a full database backup before you back up the transaction log for a database in SQL Server. If you require any further assistance or have any queries regarding the above please do not hesitate to contact us at any time or update this ticket.
Now the database in question has been on the server for months, if not longer. And quickly looking at the database properties would tell you that a full database backup has been done, so this response was pretty much BS. Looking at the ERRORLOG for that time shows three error messages.
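If you’d rather not trust the database properties dialog, msdb records every backup ever taken, so disproving the “no full backup has been done” theory takes one query. A quick sketch:

```sql
-- When did each database last have a full backup?
-- backupset.type = 'D' means a full database backup; a non-NULL date here
-- disproves the "you must take a full backup first" explanation.
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'
GROUP BY d.name
ORDER BY last_full_backup;
```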
Error: 3041, Severity: 16, State: 1.
BACKUP failed to complete the command BACKUP DATABASE prod_phreesia_print. Check the backup application log for detailed messages.
Error: 18210, Severity: 16, State: 1.
BackupMedium::ReportIoError: write failure on backup device '920a8f3f-be50-4e54-9b0f-f1bfeddea12a'. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).
Error: 18210, Severity: 16, State: 1.
BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device '920a8f3f-be50-4e54-9b0f-f1bfeddea12a'. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).
Looking at these three error messages (which were all logged during the same second) gives a pretty good idea of what happened: the tape backup system threw an error, causing the full or differential backup to fail. Without digging through the system any deeper, I know this had to be either the full or the differential backup, because the full and differential backups go directly to tape while the log backups go to disk. Since the backup device is listed as a GUID, that tells me it is a backup going directly to tape via the tape backup solution. Since the tape backups are managed by the MSP, one would think they would be able to quickly match the two errors together.
But apparently I’m expecting too much from a company which my company pays over $40k a month.
Moving data into the cloud is a huge thing with Microsoft and Amazon (among others). However for those in countries that aren’t the US there are some major road blocks to putting your data into the cloud.
The first issue is that when the cloud provider is a US company (which most of them are at the moment), your data falls under US law, as well as any laws which apply if the data is being stored in a non-US data center. So basically, if you are a non-US company and you use these services to host your data, the US government can issue a National Security Letter to the cloud provider and get a copy of all the data that is hosted by the US company. (This affects those of us who live in the US as well, but there’s nothing we can do about it as we are already here.)
Additionally, there’s some major gray area in US law at the moment. This has to do with the abandoned property laws. These laws were put together in the 90s or so, when leaving data on a server for long periods of time was abnormal. They basically say that if data has been hosted on a 3rd-party server (the cloud, in this case) for over 90 days, the data is considered abandoned and the US government can request access to it without the need for a search warrant. Seeing as the whole point of using the cloud is to host your company data on a third-party server, these laws become a pretty big stumbling block.
Some countries have laws about where the data for their citizens can be stored. Check with your national government to see if it is even legal for you to store data in another country. Don’t forget that when picking your cloud provider, even if the data center you pick to host your data is within your country, there wouldn’t be anything stopping your cloud provider from replicating your data to another country for backup or high availability / disaster recovery.
There are lots of other laws to be informed about before moving your data into the cloud. Be sure to do your research before moving your application to the cloud.