At the Wednesday keynote we started with Rushabh talking about the PASS financials. Some of the numbers: a 2010 revenue projection of $3.2M, which is a 15% reduction from the 2009 numbers. But even with this reduction PASS is planning on spending 40% more on the SQL Server community. They were able to increase the community spending by drastically reducing IT expenses by 67%.
Wayne then named some outstanding volunteers who work with PASS. These include Tim Ford for his work on 24 Hours of PASS, Grant Fritchey for his work on the new SQL Server Standard, Amy Lewis, who is the leader of the BI virtual chapter, and Jacob Sebastian, who is heading up the PASS Member Outreach program in India.
This year two PASSion Awards were given out. The first was presented to Charley Hanania for his work with the European PASS Committee and with the Swiss PASS Chapter; he has been working with PASS for 4 years so far. The second was presented to Allen Kinsel (Twitter) for his work in preparing the PASS conference.
Tom Casey (Twitter), a General Manager on the SQL Server product team at Microsoft, then took the stage. He reminded us that only 20% of people have the information they need to do their jobs. Specifically, they need more insight from their data, and SQL Server's BI product suite can help the other 80% of the people out there get the information they need.
Tom brought Ron VanZanten from First Premier Bankcard to talk about how they use SQL Server BI to drive their business and why they picked SQL Server over Oracle and Teradata. First Premier Bankcard selected SQL Server because of the Office integration, as well as the price point that SQL Server comes in at. First Premier Bankcard has gone from a new customer to an early adopter, running SQL Server Madison for their data warehouse, which has cut some queries' run times from hours to minutes.
Tom then talked about how the new Power Pivot platform is going to make it easier for the information worker to get the information they need, while IT will still control the data and the application. This is expected to make the information worker more efficient without having to ask the IT department to put together a new application.
Tom brought Amir Netz (Twitter) of Microsoft to the stage to show a demonstration of Power Pivot. The demo included bringing 100 million rows into Excel from the data warehouse, then filtering that data against values which were simply entered by hand into another sheet in the workbook. As for sharing these huge documents, there is Power Pivot for SharePoint, which allows you to upload the Excel workbook to the SharePoint portal. The application can then automatically refresh the data and allow anyone who needs to view the data to slice and dice it via the SharePoint portal without having to download the workbook. The work is all done on the SharePoint server, using your SSAS server to do the needed processing.
When you configure Power Pivot for SharePoint you get a very interactive set of management screens in the SharePoint configuration. It will show you who's using the files, how often they are being used, and trends which show the usage of the documents over time.
The downside to putting all this new Power Pivot functionality in your org is that Office 2010, SharePoint 2010 Enterprise Edition, and SQL Server 2008 R2 are all required to make this work. This ends up being a pretty pricey solution if you don't already have SharePoint and SQL.
I've just uploaded a bunch of pictures from SQL PASS to Flickr. They can be found here.
So far the SQL PASS 2009 summit has been a blast. I arrived on Sunday afternoon and the fun started shortly after that. There have been a couple of parties to go to, and we did a great photo walk Monday morning. Continued »
For those lucky enough to be attending the SQL PASS summit this year in Seattle, WA, I'll be there too. This is my third year in a row attending, and my first year speaking. Over the course of the week I've got a bunch of things planned which I'll be doing. Hopefully you'll look me up at one (or several) of them. Continued »
If you don't agree with the above statement, please keep reading. I'm right, and it's important, I promise.
In order for the auto-shrink feature to be really effective it has to move data from the end of the file to the middle/front of the file so that it can chop off the tail end of the database file. This places extra load on the disk, and on the CPU, as SQL Server identifies the data pages which can be moved and then moves them.
It also causes extra fragmentation within the database, as the shrink operation does not preserve the fragmentation state of the indexes. Because of this, the worst time to shrink a database is right after the indexes have been rebuilt. Because of the extra space that is needed to rebuild indexes, this is probably also the most common time a database gets shrunk on a regular basis.
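If you want to see that fragmentation damage for yourself, you can compare per-index fragmentation before and after a shrink. This is just a sketch; the database name here is a placeholder, so swap in your own:

```sql
-- Show fragmentation for every index in the database, worst first.
-- Run this before and after a shrink and compare the numbers.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('MyDatabase'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Indexes that were nearly 0% fragmented after a rebuild will typically come back heavily fragmented once the shrink has shuffled their pages around.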
My favorite reason not to shrink a database is listed directly in Books Online under the “Shrinking a Database” heading. Under the Best Practices topic it says “Unless you have a specific requirement, do not set the AUTO_SHRINK database option to ON.”
So go and turn your AUTO_SHRINK settings to off like they should be, and quit worrying if the hard drive icon in the My Computer window shows that it's full. Worry about how much free space is within the database files, not the free space on the disk. Fill the disk already. It's fun, and all the cool kids are doing it.
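Turning it off takes all of two statements. The database name below is a placeholder; run the SELECT first to find any offenders on your instance:

```sql
-- Find any databases with auto-shrink turned on.
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Then turn it off for each one (MyDatabase is a placeholder).
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
```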
Yesterday was the Windows 7, Windows 2008 R2, Exchange 2010 launch event here in Southern California (Orange County to be specific; Burbank is on Wednesday). For the most part I was planning on going to pick up a couple of tidbits of information, and a free copy of Windows 7 Ultimate (I'm not stupid; someone offers me a free Windows license, I'm going to take it). However the day was much more informative than I had expected, that's for sure.
In this post I’m going to cover some of the high level information, then over a few future posts I’ll give more into what was covered. Continued »
So you are going along your normal day, and your boss comes up to you and says “We've got a few thousand bucks left in this year's budget, what would you like to upgrade?” Assuming that new 26″ monitors for your workstation are out of the question, the boss is probably talking about a server upgrade here, so let's see what we can do. Continued »
That's right, the second most important vote of this year's SQL PASS Summit (with the elections being the most important), the color of my hair, is closing tonight.
Currently (I’m writing this at 5:30am EST) the standings are Blue at 38%, Purple at 29%, Rainbow Fro Clown Wig at 24% and Green bringing up the rear at 9%.
Get the word out, get those votes coming in. We're getting close to 10% of the vote count of the board elections. I'd love to see that number get higher. Pretend it's Chicago; vote early and vote often (as often as you can while switching IPs each time). As the day goes on I'll post updates to the vote count (none of that secret stuff going on here).
Have fun, and I’ll see you in a couple of weeks.
P.S. The reason for closing the vote this far before PASS is that if hair color needs to be ordered (I'm picky about my hair dye) we need time for it to get here.
This year at the PASS Summit there will be a daily bingo game. This isn't the standard sit-in-a-room-while-someone-calls-out-numbers affair. No, you have to find the people whose names and faces (for the most part) are on the bingo cards. Continued »
Software developers love refactoring code. And why shouldn't they? It's quick (sometimes), and when done correctly it'll reduce the amount of code and speed up application response time. DBAs like refactoring code as well; we get the same benefits when it's done correctly. Refactoring the database schema, on the other hand, is a frigging nightmare.
Changing around code is easy; moving 100,000,000 records from one table to another in a timely fashion isn't. It sucks, big time. Continued »
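For what it's worth, the usual way to make a move like that survivable is to do it in batches so the transaction log doesn't explode and locks stay short. A rough sketch, with made-up table names and assuming the target table has a matching schema and no triggers or foreign keys pointing at it:

```sql
-- Move rows from dbo.OldTable to dbo.NewTable in chunks.
-- Each loop iteration is its own transaction, keeping log growth
-- and lock duration manageable.
DECLARE @batch int = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@batch)
    FROM dbo.OldTable
    OUTPUT DELETED.* INTO dbo.NewTable;

    -- Stop once there's nothing left to move.
    IF @@ROWCOUNT = 0
        BREAK;
END
```

Even batched, on a hundred million rows this runs for a long while, which is exactly the nightmare part.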