SQL Server Reboots – after delete statement

I have a SQL Server 2000 instance, and when I try to delete 1 million records from a table using "delete from tablex where date < '01/01/2006'", the server reboots after 10 minutes and no records are deleted. Does anyone know why?

Answer Wiki


Check your SQL Server event log. You could also try deleting in smaller chunks.

Kevin

Discuss This Question: 10  Replies

  • NewTon64
    You need to commit after your delete statement. Suggestion: try not to delete 1 million rows at one time. :?
  • Kdnsfl
    Sorry for the lack of detail - first-timer with IT_XCHNG. I also have T-SQL packages that fail if the select results do not return within 10-15 minutes. I just found that the delete statement crashed it every time, and I thought it would enter an error into the log file.
  • BobClarke
    Under Management in Enterprise Manager, open the SQL Server log for the time the delete was running and look for error messages. How much free disk space do you have on the database server? Since you are using DELETE, every deleted record gets recorded in the transaction log. Do you have enough space for the transaction log growth? Obviously, your server should not just spontaneously reboot, so I suspect you have more problems with that box than just SQL Server not working correctly. Bob
  • Chetanhandle
    Yes, I think Bob is right... If you have to delete such a large chunk of data in one go, ensure enough disk space for your tempdb database (you can increase the tempdb size). An alternative is to have your .ldf file on a separate disk on your machine. Regards, Chetan.
  • Jgsnsbt
    I agree with Bob. It looks very much like you should start your search around tempdb and anything to do with commit control, disk space, etc. You say there's a problem with T-SQL and long-running queries also. I'm not sure about SQL Server, but the iSeries database supports something called a query time limit, which the user can adjust for client query tools against DB2/400. If the DB2/400 query engine detects a query running too long, it just returns an error. If you look at the RDO (RemoteDataObjects) properties you will find that parameter, and there's a server-side default value stored in QAQQINI as well. Maybe you'll find something similar in SQL Server. Good luck.
  • BrantWellsTFC
    I've had similar issues in the past, having to delete a large number of records... What I would do is generate a script in SQL Server Query Analyzer that deletes one day's worth of data at a time, and just have it iterate through each day you need to delete. That way the database isn't trying to choke on a huge batch of records at once. A script could easily be written in Query Analyzer (I'm not quite as proficient with that as I should be), but it could even more easily be done with a VB application or script... Just food for thought.
  • Seldom
    If you have a large amount of data to remove and do not wish it to be logged, you can turn logging off through Enterprise Manager and then switch it back on after you have run the delete. This will speed up the process and stop massive log files being created when you carry out deletes of this size. This workaround assumes the database is not in use at the time, as you will lose logging for any other transactions as well. I remember there is a logging option that will not post deletes to the log file, but I can't remember it off the top of my head. The other option when removing large amounts of data is the TRUNCATE command. When data is truncated it is not posted to the log file, regardless of logging options. The server rebooting issue is one I have had before; in my case it was a result of the log file growing to a ridiculous size, and the server shut itself down because it did not have enough room in the pagefile to handle the information being processed. You could try moving the log file to another disk and/or making the server pagefile a static size. Hope this helps.
  • Squasher
    Hi. My first thoughts would be disk space and/or a memory issue. When you delete a row you are also moving up all of the other rows to fill the space - a lot of work when you are deleting 1,000,000 records. Doing this in one hit, even if it did seem to work, is not advisable for that number of records. Change your date range, and count before and after the commit.
  • MooseDrool
    I agree with the previous responses. You should check your disk space, your transaction log size, and your query timeout. You also seem to have a bigger issue with the server resetting. I would try deleting your records in smaller, manageable batches using SET ROWCOUNT 5000 (or some other manageable number). This will limit each delete to 5000 records. If you create a WHILE EXISTS loop with a commit and a truncate-log statement, you should avoid issues with your transaction log.
  • DaveInAZ
    Assuming you've generated fewer than one million records since 01/01/06 (your criterion), you'd be better off moving those rows into a new table (INSERT INTO) instead of deleting the majority of the existing table. Then you can simply drop the existing table and do a shrink database. If that's not an option for some highly illogical reason (and it would have to be highly illogical), you'll probably need to truncate the transaction log, as it's probably insanely large at this point and a big part of, if not the only cause of, your problem. Run a BACKUP LOG dbname WITH TRUNCATE_ONLY statement. That should solve the rebooting issue, as Seldom said. But you'll still want to turn logging off while you delete those rows. Logging is intended to log user activity, not maintenance.
