As I promised to everyone who attended my SoCal Code Camp sessions, here are the slide decks and the sample code that I used during my presentations.
It’s that time again, it’s time for another SoCal Code Camp. I know, you are thinking, it’s already Code Camp time again? Didn’t we just have one a few months ago?
The answer to both questions is yes. We did just have one, but it is time for another one. The Fullerton Code Camp (this weekend) is on a pretty fixed schedule: it is the weekend before the Super Bowl. The LA Code Camp, which we had a couple of months ago, tries to follow a local Microsoft event, which leaves the scheduling to the people at Microsoft.
But all that doesn’t really matter because this weekend, rain or shine (so far weather.com says shine), we are having another Code Camp.
I’ll be presenting 4 sessions this time.
On Saturday I’ll be presenting:
On Sunday I’ll be presenting:
I’ll post the slide decks the Monday after (or you can dig through my blog and find the older versions of them).
See you there,
There was going to be a kick-ass post on using your online resources (forums, blogs, Twitter, etc.) to help you solve those really weird SQL Server problems, but WordPress decided to barf when saving it, and it didn't save a draft of it either, even though it was up on my screen for like two days and I swear to god I hit save at least once or twice.
So instead you get me ranting about WordPress as I just don’t have the energy to rewrite it at the moment.
It was going to be really cool, with a link to Jonathan's post about Diskeeper hosing his production SQL Servers. I'll try to rewrite it in the future.
So I got my session stats from my SQL PASS presentation a little while back, but I’ve been so busy I haven’t been able to put up a post about them.
My session was DBA-421, Storage for the DBA. I had a great response rate on the evals, with 66 evals turned in (there were about 100-120 people in the room). Overall the responses were pretty positive. I used a scale of Very Poor = 1 to Excellent = 5 to get the average score; the other numbers are just counts.
| Question | Very Poor | Poor | Average | Good | Excellent | Avg Score |
|---|---|---|---|---|---|---|
| How would you rate the usefulness of the session information in your day-to-day environment? | 0 | 2 | 6 | 22 | 37 | 4.5 |
| How would you rate the Speaker's presentation skills? | 0 | 0 | 8 | 18 | 41 | 4.5 |
| How would you rate the Speaker's knowledge of the subject? | 0 | 0 | 1 | 10 | 56 | 4.8 |
| How would you rate the accuracy of the session title, description, and experience level to the actual session? | 0 | 0 | 1 | 20 | 46 | 4.6 |
| How would you rate the amount of time allocated to cover the topic/session? | 1 | 0 | 10 | 27 | 29 | 4.2 |
| How would you rate the quality of the presentation materials? | 0 | 0 | 7 | 26 | 34 | 4.4 |
Averaging all the scores over all the questions, overall that session rated at 4.51. Not bad for my first time out at SQL PASS. The part where I got really nailed on the evals was time (which I can’t control). A couple of people actually mentioned to me on the way out of the session that I should do it again next year as a spotlight session which would give me another 15 minutes. I’ll probably do that and see if they let me go at it again.
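For anyone curious how those averages fall out, each score is just a weighted average of the counts in that row. Here's the presentation-skills row worked out in T-SQL, purely for illustration:

```sql
-- Weighted average for the "presentation skills" row:
-- counts per rating, with Very Poor = 1 through Excellent = 5
SELECT (1*0 + 2*0 + 3*8 + 4*18 + 5*41) / (0 + 0 + 8 + 18 + 41.0) AS AvgScore;
-- 301 / 67 = 4.49..., which rounds to the 4.5 shown in the table
```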
I only received three comments to which I’ll happily respond.
1. More pictures required – I'd love to, but this session turns into a two-hour session sometimes as it is.
2. Too much detail about speaker’s environment, not enough practical info about databases. – This isn’t a database-specific session, and the only SAN environment which I can use for demos is my own (I’m working on getting something together to fix that a bit).
3. Too many questions derailed the presentation. – True, I do need to work on that some, but I hate leaving people with unanswered questions. Ah, to find some happy balance.
For those that submitted an eval, thank you. The feedback is always welcome. For those that didn’t, please do so in the future. The evals provide very useful feedback to all the speakers and to the conference.
Thanks again to everyone that came to PASS, and I’ll see you in November in Seattle (unless I get approved for some of the like 6 conferences for 2010 that I’ve already submitted to or I’m planning on submitting to).
I’ll get to #1 in a second.
#2 How important are the business requirements? They are probably the most important thing to me when doing a database design. If I don’t have the requirements down pat, or I don’t understand them, then I can’t put together a proper database design.
#3 What tool do you use to create the design? Do you need it to diagram? Do you even care about diagrams? I usually don’t care about the diagram too much; I can see it in my head when I’m coming up with it. I usually go directly into T/SQL, creating tables after I’ve worked it through in my head. For a very complex database I’ll bust out Visio and use that.
#4 What’s your biggest pain-point about designing? That’s easy, the business giving me a moving target to hit. Sometimes the moving target doesn’t impact the database design, sometimes it does.
#1 What process do you follow?
I take the business requirements and translate them from useless marketing speak into something usable, then go back to the business and see if that’s what they actually want. After doing this three or four times I have a usable set of requirements that I can work off of. I’ll take these requirements and use them to decide what needs to be stored where, and how it fits into the rest of the environment (we are an application developer, so everything goes into our production database or our shopping cart database).
From there I work with the .NET developers to determine how they will need to see/use the settings which will be saved for our WebUI and the .NET services which run in the background. Then I work with the C++ developers to see how the client which we deploy will need to see the data. From there I’m able to get a good idea of how the data needs to be stored.
Sometimes it is simply easier to store the data as an XML blob: the database doesn’t need to see or use the data, it is simply storing it, so an XML blob becomes the most flexible way to store the information.
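A minimal sketch of what I mean (the table and column names here are made up for illustration):

```sql
-- Hypothetical example: the database never inspects ClientSettings,
-- so an opaque XML column keeps things flexible as the settings change.
CREATE TABLE dbo.ClientConfig
(
    ClientCode     varchar(20) NOT NULL PRIMARY KEY,
    ClientSettings xml NULL  -- read and written only by the application
);
```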
Whenever possible I’ll use the natural key of the database table as the primary key. I’m not fond of identity values for the sake of having an identity column on a table; if the column won’t actually be used for anything, there is no point in having it there.
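As a made-up example of a natural key doing the job on its own:

```sql
-- Hypothetical example: the state code is the natural key, so it is the
-- primary key directly; an unused identity column would add nothing here.
CREATE TABLE dbo.State
(
    StateCode char(2)     NOT NULL PRIMARY KEY,
    StateName varchar(50) NOT NULL
);
```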
Because of the nature of our system we usually end up with LOTS of composite keys in the database, as we have many-to-many relationships between the different tables that we use for settings.
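A junction table for one of those many-to-many setting relationships might look something like this (names are hypothetical, and in practice the two code columns would have foreign keys back to their parent tables):

```sql
-- Hypothetical example: a many-to-many relationship between clients and
-- settings, keyed by a composite primary key rather than a surrogate identity.
CREATE TABLE dbo.ClientSetting
(
    ClientCode   varchar(20)  NOT NULL,
    SettingCode  varchar(20)  NOT NULL,
    SettingValue varchar(255) NULL,
    CONSTRAINT PK_ClientSetting PRIMARY KEY (ClientCode, SettingCode)
);
```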
So that’s the basic process that I use. What do you use?
If you are on Twitter and you follow some of the more popular people there (BrentO, PaulRandal, SQLInsaneo, myself, etc.), you’ll probably have noticed a sh*t storm of tweets about a certain blog which was plagiarizing information from lots of places, including Microsoft’s TechNet and MSDN sites, SearchSQLServer.com, and several blogs.
After the blogger in question (Peter) removed all his SQL content (no idea if it was all plagiarized or just some was and taking it all down was easier) everything settled down a bit.
Before I continue I want to say that I fully respect Todd and his opinions on the subject, even though I don’t agree with them. Anyone willing to sit around and write for other people’s education for little or no money deserves to be respected.
Now as for the DMCA-style take down nasty grams (and I like that phrase way better than “take down notices” for some reason), there are other issues at play than just our giving information away to everyone.
For example, if you dig through Microsoft’s TechNet and MSDN sites there’s plenty in there which says that you can’t republish the information posted without written consent.
As for my articles which were plagiarized, the company which paid for those articles to be written (I’m assuming that it comes as no shock to anyone that people are paid to write articles for websites and magazines) owns the copyright on those articles for as long as they choose to enforce them. Some companies hold a lifetime exclusive on the articles, while others hold a shorter one.
Whether or not the blogger in question was aware of the law on plagiarism, he states on his website that he’s a college graduate. Pretty much every college will toss you out for plagiarizing other people’s work, and students are well aware of it. He should have assumed that in the professional world plagiarism is still unacceptable.
Unfortunately for Peter there is a lot of information out there which can be freely read and referenced, but not copied. This blog would be included in that. For those of you reading this on SQLServerPedia.com it took a decent amount of work to get permission to post my blog posts up on the SQLServerPedia.com blog feed. Why did it take so long? Well because I don’t own the copyright to these blog posts. I write on behalf of Tech-Target and they own the copyright to anything and everything which I post on my blog, unless I have posted it somewhere else first (and then they still own the rights to the version on their site).
Would I like to have everyone be able to read what I write? The big reason that I spend so much time writing (especially this last month) is so that people can read it. I have information to share, and hopefully people find the information that I have useful. But the copyright ownership of information has to be respected for web-based articles just like it does for SQL Server Magazine, TechNet Magazine, etc.
These are my thoughts and opinions, on today’s events. Take them for what they are worth, god knows I’m no copyright lawyer. Please feel free to post your own here, on twitter, on Todd’s post (no registration required on his blog, unlike here) or your own blog if you have one.
It being the new year (OK, so I’m a couple of weeks late), it is time to come up with my professional goals for the new year.
I’ve got a new article up on SearchSQLServer.com where I talk a bit about the new Parallel Data Warehouse edition of SQL Server 2008 R2.
For those of you in bigger shops you can probably ignore this. If you work in a smaller shop where everything in the datacenter has a public IP, this post is for you.
When you build a new Microsoft Cluster Server cluster always be sure to fail over all the drives between all the nodes before installing SQL Server on it and deploying the cluster to production. The first time that you fail over a cluster from one node to another the disks can take a very long time to fail over.
If you don’t test this fail over ahead of time your first production fail over could be quite a bit longer than you expect. Granted, I’m only talking 30-60 seconds longer than expected, but you are clustering the SQL Server for the most possible uptime, and 30-60 extra seconds could be a big problem.
So, in other words: test, then deploy.