Back in the old days of SQL Server 2012, we'd always tell people that creating indexes was half art and half science. That's still true, but back then we could create a few non-clustered indexes (designed correctly, of course) and the system would magically go faster. These days, on modern versions of SQL Server (or Azure SQL Database, if you're in the cloud), it can be a lot more art than science.
You have to know the data, at least somewhat, to create truly useful indexes. You also need to know what the application is going to do with that data (you don't need to know what the application looks like, just what queries it runs against the database). If you have a general idea of the queries hitting the server, that will point you toward the kinds of indexes that should be created.
If your application is doing summary reports over large amounts of data (sales by month over the last ten years, for example), then a clustered columnstore index will probably be your best option for the table.
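As a sketch of that scenario, the T-SQL below converts a hypothetical `dbo.Sales` table (the table, column names, and index name are all made up for illustration) to a clustered columnstore index, then runs the kind of sales-by-month aggregation that columnstore compression and batch-mode processing are good at:

```sql
-- Hypothetical dbo.Sales table; a clustered columnstore index replaces the
-- rowstore clustered index as the table's primary storage format.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales
    ON dbo.Sales;

-- The kind of query that benefits: a wide aggregate over many rows,
-- touching only a couple of columns.
SELECT YEAR(OrderDate)  AS SalesYear,
       MONTH(OrderDate) AS SalesMonth,
       SUM(TotalDue)    AS MonthlySales
FROM dbo.Sales
GROUP BY YEAR(OrderDate), MONTH(OrderDate)
ORDER BY SalesYear, SalesMonth;
```

Because columnstore stores each column in its own compressed segments, a query like this only reads the `OrderDate` and `TotalDue` segments rather than every row in full.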
If the application is reporting on small amounts of data (more of a data dump than a summary report), then a columnstore index probably isn't the best option. Instead, a non-clustered index will probably serve the application, and therefore performance, better.
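For that "data dump" pattern, a narrow non-clustered index that matches the query's filter and sort does the job. This is a sketch against the same hypothetical `dbo.Sales` table; the column names and index name are assumptions, not from the original:

```sql
-- A seek-friendly non-clustered index: key columns match the WHERE and
-- ORDER BY, and INCLUDE covers the remaining column the query selects.
CREATE NONCLUSTERED INDEX IX_Sales_CustomerID_OrderDate
    ON dbo.Sales (CustomerID, OrderDate)
    INCLUDE (TotalDue);

-- The kind of query it serves: a handful of rows for one customer.
DECLARE @CustomerID int = 42;

SELECT OrderDate, TotalDue
FROM dbo.Sales
WHERE CustomerID = @CustomerID
ORDER BY OrderDate DESC;
```

With the `INCLUDE` column in place, the query is covered by the index, so SQL Server can answer it with a single index seek and no key lookups back to the base table.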
It just depends on what the application needs and what it does. As a consultant, I have to drop into a company, review the application, and quickly figure out what kind of application it is and what kind of indexes need to be added (as well as how to define them). It's a hard job, but it's a fun job, and I'm thrilled that I get the chance to do it.
But when push comes to shove, indexes are still an art form to some degree, and I think they always will be. We have some great tools, like PSSDIAG and the Query Store, that can help with this, but when you come down to it, it's still a person looking at a server and making it faster.
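To ground that point: the Query Store (SQL Server 2016 and later) is one way to find the queries worth indexing in the first place. This sketch pulls the top queries by average duration from the Query Store catalog views:

```sql
-- Top 10 queries by average duration, from the Query Store catalog views.
-- Requires Query Store to be enabled on the database.
SELECT TOP (10)
       q.query_id,
       t.query_sql_text,
       AVG(rs.avg_duration) AS avg_duration_us  -- microseconds
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS t
    ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
GROUP BY q.query_id, t.query_sql_text
ORDER BY avg_duration_us DESC;
```

The tool surfaces the slow queries; deciding which index would actually fix each one is where the art comes back in.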