need Linux server for file replication at branch office…recommendations?

5,020 pts.
Tags:
Backup
Bandwidth limits
File replication
Linux
Linux servers
Remote users
Wireless in 2010
I have a branch office that is a little over an hour away from our main facility. Due to bandwidth issues, our remote users are reluctant to store their documents on the shared folders on the server at our main location. It can literally take 20 minutes to open a file! So, aside from trying to upgrade our bandwidth (which may not happen due to $), I thought of putting a server up there that could use rsync or something similar to keep a local copy of the shared files from our main server at their location. They would then be working off of local copies, which could sync back to our main plant as needed. Has anyone here done this? Ideas? Software recommendations? Thanks!
ASKED: August 13, 2010  1:01 PM
UPDATED: October 24, 2010  2:23 PM

Answer Wiki


You would be better off using Webmin or NX for this. Also try tuning the kernel and network settings.

Discuss This Question: 9  Replies

 
  • Koohiisan
    All of the workstations at the remote location are Windows XP or higher. They currently access server shares as network drives which are mapped at login. I want to relocate \\MainSvr\Share1 to \\RemoteLinuxSvr\Share1 so that they are not waiting 10 minutes to open an Excel document over the WAN. So I don't think I can get them to use NX for a remote desktop, because that would be a departure from what they are currently using. I want it to be a seamless, behind-the-scenes thing to them.
    5,020 pointsBadges:
    report
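If the branch Linux server exports the synced files over Samba, the users can keep their familiar mapped-drive workflow. A minimal smb.conf share definition might look like the fragment below (the share name, path, and group are hypothetical stand-ins - adjust them to match the existing share):

```ini
[Share1]
   comment = Branch copy of the main office share
   path = /srv/share1
   read only = no
   browseable = yes
   valid users = @branchusers
```

Clients would then map the branch server's share at login instead of the main-office UNC path, and nothing else about their routine has to change.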
  • Labnuke99
    I do think rsync would be your best option. Wikipedia says: rsync is a software application for Unix systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction. You may encounter some versioning issues, though, if the WAN is not able to keep up with regular traffic as well as file replication.
    32,960 pointsBadges:
    report
  • Labnuke99
    Can you also provide more details about the WAN link sizes and distance between the client(s) and server(s)? How busy are the links? Is there available capacity? What is the latency between the endpoints?
    32,960 pointsBadges:
    report
  • Koohiisan
    Our link at the remote location is around 1.5Mbps, I believe. It's used for a lot more than it *should* be (including DNS), and it carries a constant feed of 5250 traffic as well as supporting a dozen or more PC users accessing files. Money is a critical issue here, which is why I am looking for a solution other than 'more bandwidth'. Thanks!
    5,020 pointsBadges:
    report
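As a rough sanity check of that 1.5 Mbps figure: even with the whole link to itself, a 10 MB spreadsheet needs close to a minute in the best case, so contention from the 5250 feed and a dozen users easily explains multi-minute file opens. A back-of-envelope sketch (the file size is a hypothetical example):

```shell
# Best-case transfer time on a T1, ignoring SMB round trips, protocol
# overhead, and contention from other users (reality is much worse).
FILE_MB=10          # hypothetical spreadsheet size
LINK_BPS=1500000    # ~T1 line rate
BITS=$((FILE_MB * 1024 * 1024 * 8))
echo "Best case: $((BITS / LINK_BPS)) seconds"
```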
  • petkoa
    1.5 Mbit might be OK, since rsync uses a very efficient protocol for remote transfers - it checksums file chunks and transfers only the chunks that differ, not whole files. However, what about documents that need to be edited in both the main and the remote location? Their synchronization could get tricky - no locking, no way to put users in read-only mode... If you have no problems with this issue, I'll vote for CentOS or Slackware (no commercial distributions, if I understood your main problem right?). Good luck, Petko
    3,120 pointsBadges:
    report
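The locking concern is easy to demonstrate: rsync has no notion of conflicts, so a naive two-way sync silently keeps whichever copy is newer and discards the other edit. A small sketch, with two local directories standing in for the two sites (paths and file contents are hypothetical):

```shell
# The same file edited at both sites, then synced both ways with
# rsync -u (--update: skip files that are newer on the receiving side).
rm -rf /tmp/site_a /tmp/site_b
mkdir -p /tmp/site_a /tmp/site_b
echo "edit from main office" > /tmp/site_a/budget.txt
sleep 1
echo "edit from branch" > /tmp/site_b/budget.txt   # the later edit

rsync -au /tmp/site_a/ /tmp/site_b/   # a -> b: skipped, b's copy is newer
rsync -au /tmp/site_b/ /tmp/site_a/   # b -> a: branch edit overwrites main's
cat /tmp/site_a/budget.txt            # the main-office edit is silently gone
```

Segregating the data so each file has a single "home" site, as suggested above, sidesteps exactly this failure mode.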
  • Koohiisan
    Thanks for the ideas! I don't know if there will be much sharing of files between locations, and I wasn't sure how rsync would handle that. As mentioned, there could easily be issues with two users editing the same file. Perhaps I can segregate the remote users' data so that it is 99% separate from our data, but force them to still access the files that might cause conflicts on our main server, with no copies kept on the remote.
    5,020 pointsBadges:
    report
  • Labnuke99
    You should look into NetFlow or ntop to see how the connection is currently being used. A dozen users is not a bad load for a T1, depending on what they are doing across the link and on the other services (DNS is a pretty low-utilization type of traffic - and it uses UDP, so it is not a "stream" as such).
    32,960 pointsBadges:
    report
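ntop and a NetFlow collector give the full picture; as a rough no-install sketch of the same idea, you can capture packet summaries with tcpdump and total the bytes per sending host with awk. The capture lines below are fabricated sample data (a real capture would need root):

```shell
# The real capture (needs root) would be something like:
#   tcpdump -n -q -c 1000 -i eth0 > /tmp/capture.txt
# Fabricated sample lines in tcpdump's summary format:
cat > /tmp/capture.txt <<'EOF'
12:00:01.000000 IP 10.0.0.5.445 > 10.0.1.7.1025: tcp 1460
12:00:01.000100 IP 10.0.0.5.445 > 10.0.1.7.1025: tcp 1460
12:00:01.000200 IP 10.0.1.7.1025 > 10.0.0.5.445: tcp 52
EOF

# Total bytes sent per source host (strip the trailing .port from field 3):
awk '{gsub(/\.[0-9]+$/, "", $3); bytes[$3] += $NF} END {for (h in bytes) print h, bytes[h]}' /tmp/capture.txt | sort
```

Even a crude summary like this shows quickly whether the SMB traffic or something else is eating the T1.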
  • Rickardoo
    You should look into WAN optimization. I won't go into detail here, but check out www.riverbed.com - I think they have some demo videos on the site.
    10 pointsBadges:
    report
  • Kuntergunt
    I have the same problem and have been after a solution for quite a while now. What I have found that comes closest to a solution is XtreemFS (http://www.xtreemfs.org/). Unfortunately it is still under development and so far supports only read replication; read/write replication is planned for the next major release. What I have already looked at: InterMezzo, PVFS, Coda, Lustre, DRBD, OpenAFS, Ceph, GPFS, Hadoop, GlusterFS, MooseFS, Unison, ... Anyone who has some other suggestions?
    10 pointsBadges:
    report
