The other day my team decided that we needed to rethink our recovery strategy for an application. To meet the new recovery requirements, we needed to put the database in the Full recovery model and take transaction log backups every hour. This would allow us to recover to a point in time, with a worst case of one hour of data loss. We made the change, set up the t-log backups, and verified that they were running and that our log space looked sufficient.

When we got back in the office the next morning, we saw multiple alerts for failed log backups. WTF??? We did not understand. After some research and digging through the logs, we found that the database had somehow changed from Full to Simple and then back to Full again. This confused us; we did not make that change, and not many other accounts have the ability to do so. We dug in again, looking at when it happened and what was running in the database at the time, and we found an application stored procedure that made the change.

So, because some app dev was worried that a load they were doing would fill up the transaction log, he decided it would be OK to change the recovery model, do the load, and change it back, breaking our recoverability chain in the process (the moment a database drops to Simple, the log backup chain is broken, and log backups keep failing until a full or differential backup re-establishes it). Like no one would notice. We actually do our job and are aware of what is going on in our databases, and we do not appreciate some lazy dev making a poor decision and then forcing us to give the application account elevated access in the database. We have since let the vendor know that they need to change the proc, and we need to work closely with them and re-evaluate the level of access that the application has into the database.
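For anyone who wants to keep an eye out for this in their own shop, here is a rough sketch of the kind of thing the proc was doing and a couple of ways to spot it. The database name, backup path, and exact statements are made up for illustration; the vendor's actual proc will differ.

    -- What the vendor proc was (roughly) doing. [AppDB] is a hypothetical name.
    ALTER DATABASE [AppDB] SET RECOVERY SIMPLE;
    -- ... bulk load runs here ...
    ALTER DATABASE [AppDB] SET RECOVERY FULL;

    -- Switching back to FULL does not repair the chain; log backups keep
    -- failing until a full (or differential) backup is taken:
    BACKUP DATABASE [AppDB] TO DISK = N'D:\Backups\AppDB_after_switch.bak';

    -- Quick check of the current recovery model for every database:
    SELECT name, recovery_model_desc FROM sys.databases;

    -- The change is also written to the SQL Server error log as
    -- "Setting database option RECOVERY to ..."; xp_readerrorlog
    -- (undocumented but widely used) can search the current log for it:
    EXEC xp_readerrorlog 0, 1, N'Setting database option RECOVERY';

The error log search is usually the fastest way to pin down when the flip happened and correlate it with whatever was running at the time.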
Thought I would share this so that you can keep an eye out for yet more evil code.