Backups to disk across Gb Ethernet – what's your speed limit?

Tags: Backup and Recovery, Dell PowerEdge, Disaster Recovery, Microsoft Windows Server 2003, RAID, SAN, SATA, Veritas Backup Exec
We don't have a SAN yet. We're trying to improve backup and disaster recovery time windows, so we have been testing backups to disk using the following:

• Windows 2003 Server
• Dell PowerEdge server, 2 GB RAM, 2x Xeon P4s
• Veritas Backup Exec 9.1 (backup file size set to 1 GB each)
• SATA hardware RAID controller w/ 64 MB cache (write cache enabled)
• 6x 250 GB SATA 7200 RPM drives split into 2 arrays (mirror for OS, RAID 5 for backups)
• Intel Gb copper adapter on a dedicated, offline Gb switched network for backup traffic between this server and the backup targets

We have not been able to push more than 27 MB/s, or 1.6 GB per hour (bytes, NOT bits), to the disks. We have tried running multiple concurrent backup jobs from different targets, but the speed actually goes down significantly (not what I was expecting). I'm looking for some real-world examples where others have been able to get better performance. As much detail as you're willing to share would be appreciated. Thanks in advance!
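For a rough sense of the "speed limit" being asked about, here is a quick back-of-the-envelope sketch. The 27 MB/s figure comes from the question; the link-capacity numbers are standard gigabit Ethernet figures, not measurements from this setup.

```python
# Quick sanity check: how does 27 MB/s compare with what gigabit Ethernet can carry?

link_mb_s = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw line rate
# Ethernet + TCP/IP framing typically costs on the order of 5-6% of that,
# so roughly 115-118 MB/s of payload is the practical ceiling for one stream.
practical_ceiling = link_mb_s * 0.94

observed = 27.0                              # MB/s reported in the question
print(f"Raw line rate:        {link_mb_s:.0f} MB/s")
print(f"Practical ceiling:    ~{practical_ceiling:.0f} MB/s")
print(f"Observed throughput:  {observed} MB/s "
      f"(~{observed / practical_ceiling:.0%} of the practical ceiling)")
```

In other words, the reported rate is only around a quarter of what the dedicated Gb link could carry, which is why the replies below focus on the disk subsystem and framing rather than the network's raw capacity.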

Answer Wiki


Your numbers do seem quite low. My environment is mixed, but here's what comes closest to what you described:
Exchange 2003 on Win2K Server, an old HP NetServer with 2x 1 GHz P3s and 2 GB RAM. A fiber (SX) Gb card connects to our production Ethernet network, and backups happen at off-peak times through an HP9000 server running HP Data Storage Protector with a 2 Gb/s connection to a SAN which hosts a dual-Ultrium drive library. Compared to other attached systems the Gb Ethernet is definitely the bottleneck for this server, but we still routinely see 20-22 GB (bytes) per hour, even with low user activity on Exchange.

Are you using jumbo frames on your copper Gb connector? Also, RAID 5 is really not the best choice for the kinds of sequential write activity in backup to disk. Can you try a simple RAID 0, or if fault tolerance is absolutely necessary, RAID 10?
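To illustrate the jumbo-frame suggestion above, here is a small sketch comparing payload efficiency at the standard 1500-byte MTU and the usual 9000-byte jumbo MTU. The framing constants are standard Ethernet/TCP numbers, not values from this setup, and whether 9000-byte frames are actually usable depends on what the Intel adapter and the backup switch support.

```python
# Payload efficiency of standard vs. jumbo frames on gigabit Ethernet.
# Per-frame wire overhead: preamble/SFD 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12.
# Inside each frame, IPv4 (20) + TCP (20) headers also come out of the MTU.

WIRE_OVERHEAD = 8 + 14 + 4 + 12      # 38 bytes per frame on the wire
TCP_IP = 20 + 20                     # 40 bytes of protocol headers per frame

def payload_rate_mb_s(mtu, link_bps=1_000_000_000):
    """Best-case TCP payload rate for a given MTU, ignoring CPU/interrupt cost."""
    efficiency = (mtu - TCP_IP) / (mtu + WIRE_OVERHEAD)
    return link_bps / 8 / 1_000_000 * efficiency

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_rate_mb_s(mtu):.0f} MB/s of payload")

# The raw efficiency gain is only a few percent; the bigger practical win from
# jumbo frames is roughly 6x fewer frames (and interrupts) per megabyte, which
# matters on CPUs of that era.
```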

Discuss This Question: 5 Replies

 
  • CanesThing
    While reading Briteiros' response I realized I made a typo! The throughput is 1.6 GB/minute, NOT per hour as I stated. The stated speed of 22 GB/hour breaks down to 0.37 GB/min, or 6.1 MB/s, which is a little less than 1/4 of the throughput we're seeing. Thanks for the RAID suggestion; we were thinking the same and changed the RAID 5 volume into a RAID 0 to see if performance increases. I'll post again once we perform some backups. I'm hoping somebody is getting faster throughput than we are . . .
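Just to pin down the corrected units, here is the arithmetic behind the two figures being compared (values taken from the posts above, decimal GB/MB assumed):

```python
# Converting the corrected figures to a common unit (MB/s).

gb_per_min = 1.6                      # corrected throughput from the question
print(f"{gb_per_min} GB/min  = {gb_per_min * 1000 / 60:.1f} MB/s")     # ~26.7 MB/s

gb_per_hour = 22                      # Exchange backup figure from the answer above
print(f"{gb_per_hour} GB/hour = {gb_per_hour * 1000 / 3600:.1f} MB/s") # ~6.1 MB/s
```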
  • GaryDatastor
    Here is a suggestion to establish whether your D2D backup system is capable of running faster. I suggest you do this before making any other changes, as it will establish a benchmark to test against later. As a distributor of Veritas, CA, Commvault and disk subsystems etc., I have frequently come across slow-backup complaints related to Exchange, particularly if lots of small files are involved. My suggestion is to test your backup speed on some fixed-content (sequential) data such as large image files or PDFs (see the sketch below). You should be able to obtain up to 70 to 90 MB/sec over a private GbE link to disk with no other LAN activity. We typically install external RAID 5 SATA with 400 GB drives configured for about 2 TB (the SCSI limit), U320-attached to the backup host. If you get, say, 60 MB/sec on fixed-content data, then this may be acceptable considering all the other technical specifications for your particular installation. It may be that your Exchange server is always going to be slow, but improvements may be possible by using some form of HSM on it, like EAS or KVS, to keep the actual executable part of Exchange small. Jumbo frames are also a must for achieving high throughput over TCP/IP.
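A minimal sketch of the kind of fixed-content test suggested above: write a couple of gigabytes of large sequential data to the backup volume and time it. The path and sizes are placeholders; run it directly on the backup server so the measurement excludes Backup Exec and the network entirely and shows only what the destination array can sustain.

```python
# Minimal sequential-write benchmark for the backup-to-disk volume.
import os
import time

TARGET = r"E:\b2d_test\bench.dat"    # placeholder path on the backup RAID volume
CHUNK = 8 * 1024 * 1024              # 8 MB writes to approximate large sequential I/O
TOTAL = 2 * 1024 * 1024 * 1024       # write 2 GB in total

os.makedirs(os.path.dirname(TARGET), exist_ok=True)
buf = os.urandom(CHUNK)              # incompressible data, so caching/compression can't flatter the result

start = time.time()
with open(TARGET, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())             # make sure the data actually reached the array
elapsed = time.time() - start

print(f"Wrote {TOTAL / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"= {TOTAL / 2**20 / elapsed:.0f} MiB/s")
os.remove(TARGET)
```

If this local test already tops out near 27 MB/s, the array or controller is the limit; if it runs much faster, the bottleneck is upstream (source volumes, Backup Exec, or the network).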
  • Grokrc
    RAID 5 should be the best config for sequential access, if the array disks are accessed synchronously (effective bandwidth = number of data drives x HDD data rate). Jumbo frames certainly improve TCP/IP throughput, but one Gbps should effectively sustain about 30 MB per second (given the high packet overhead of the 1.5K standard frames). Source-volume access is likely slower than the destination. Use isolated benchmarking to locate the bottleneck. Our experience with Veritas (to DLT) is that small files move at a fraction of the speed of large ones, indicating a lot of file-switch latency in the backup client.
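Plugging the setup from the question into the "data drives x per-drive rate" rule of thumb above gives a sense of what the backup array should be able to do in the ideal case. The ~50 MB/s per-drive figure is an assumption for 250 GB 7200 RPM SATA drives of that generation, not a measured value.

```python
# Ideal full-stripe sequential write bandwidth for the backup array.

raid5_drives = 4                 # 6 drives total minus the 2-drive OS mirror
data_drives = raid5_drives - 1   # one drive's worth of capacity goes to parity
per_drive_mb_s = 50              # assumed sequential rate per 7200 RPM SATA drive

ideal_mb_s = data_drives * per_drive_mb_s
print(f"Ideal full-stripe write bandwidth: ~{ideal_mb_s} MB/s")   # ~150 MB/s
print("Observed: 27 MB/s, so the spindles themselves should not be the ceiling "
      "if the controller can do full-stripe writes.")
```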
  • Grokrc
    A quick footnote on RAID 5 versus RAID 0. Since you mentioned RAID 0, it follows that your SATA RAID 5 is not synchronous, as would be the case with high-end storage subsystems. At the high end of storage (e.g., EMC Symmetrix), device buffers and array processors allow parity processing to be performed prior to physical I/O; data and parity transfer to the HDDs synchronously. Subsystems with only cache and front-end storage processors incur significant latency to do the same. Our Veritas installation backs up enough storage to warrant an ATL with LTO gen-1; we previously used DLT7000. With tape the problem is sustaining streaming operations to avoid start/stop latency. High-end RAID 5 can effectively be as fast as LTO gen-1 (with no start/stop penalty).
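A short illustration of why the controller behaviour discussed above matters so much: if the controller cannot gather full stripes in its cache, every write degenerates into a read-modify-write. The numbers below are the classic RAID 5 I/O accounting applied to the 4-drive array from the question, not measurements from this hardware.

```python
# RAID 5 write cost: full-stripe writes vs. read-modify-write of a partial stripe.

# Full-stripe write: parity is computed from the new data alone, so for the
# 4-drive array in the question, 3 data chunks + 1 parity chunk are written
# and nothing has to be read back first.
full_stripe_ios_per_data_chunk = (3 + 1) / 3           # ~1.33 physical I/Os per chunk

# Partial (small) write: read old data + read old parity, then write new data
# + write new parity  ->  4 physical I/Os for every 1 logical write.
read_modify_write_ios = 4

print(f"Full-stripe write:      ~{full_stripe_ios_per_data_chunk:.2f} physical I/Os per data chunk")
print(f"Read-modify-write path: {read_modify_write_ios} physical I/Os per data chunk")
# With only 64 MB of controller cache, whether the controller manages to
# coalesce the backup stream into full stripes largely decides which of
# these two regimes the array runs in.
```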
  • Grokrc
    Footnote on high-end RAID 5 performance. Synchronous spinning of the associated HDDs may have been implied because it is nearly true in effect. The reality is that the tiered processing and device buffers allow the HDDs to be read and written in parallel. Also, the fast spin and independent device buffers result in inter-device latency that is so low the effective performance is almost as if the HDDs in the array were synchronous. We are backing up about 9 TB from over 120 servers. When we had only a few servers we used manually operated, directly attached tape drives, which had a high labor cost and not always high reliability. Eventually growth justified Veritas and an ATL. However, managing the backup window remains a constant struggle to expose and relieve the next throughput bottleneck. As we move forward with enterprise server consolidation, the backup capacity we support will grow about 10x.
