Double-Take has hundreds of these installations in place, and more get implemented each week. I might be concerned about data replication saturating bandwidth in a case like the one below, but considering you have been using log shipping successfully, it sounds as if you have plenty of bandwidth to accommodate the rate of change. Sometimes you might see some saturation if you are running a DBCC process while trying to replicate over minimal bandwidth at the same time, but that can be adjusted to accommodate the maintenance schedule. Double-Take has a level of compression that I have seen reduce transactional transmission by upwards of 80%.
I would stick with log shipping. I don’t like using Double-Take on databases, as I’ve seen some major problems with it when dealing with databases on low-bandwidth networks (like a WAN). Heck, I’ve even had problems with Double-Take on a very high-load database just moving the files from one disk to another on the same server.
Don’t let your network folks dictate what technology you use to manage the database. You are the DBA (I assume); you should be the one driving the solution used to move the SQL Server data.
How big were the DBs? I am looking at 5 DBs: 2 30 GB ones, and 4 1-3 GB DBs. This will be coming across a T1 line. Also, the server it will be coming from will be a 2005 active/passive cluster.
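For the sizes and link mentioned in this thread, a rough back-of-envelope estimate may help frame the question. The sketch below is my own illustration, not any vendor's tool: it computes the time to push an initial full copy of a database over a saturated T1 (1.544 Mbps, ignoring protocol overhead), with and without the ~80% compression figure mentioned earlier. Real throughput will vary with overhead, contention, and how compressible the data actually is.

```python
# Hypothetical back-of-envelope estimate (assumptions: T1 fully dedicated
# to replication, no protocol overhead, 80% compression as claimed above).

T1_MBPS = 1.544          # T1 line rate in megabits per second
COMPRESSION = 0.80       # assumed reduction in bytes sent; varies by data

def transfer_hours(size_gb, compressed=False):
    """Hours to send size_gb gigabytes over a saturated T1."""
    bits = size_gb * 8 * 1000**3          # decimal GB -> bits
    if compressed:
        bits *= (1 - COMPRESSION)         # send only the residual 20%
    return bits / (T1_MBPS * 1000**2) / 3600

for gb in (30, 3, 1):                     # sizes mentioned in the thread
    raw = transfer_hours(gb)
    comp = transfer_hours(gb, compressed=True)
    print(f"{gb:>3} GB: ~{raw:.1f} h raw, ~{comp:.1f} h at 80% compression")
```

Under these assumptions, a 30 GB initial seed alone is on the order of 40+ hours uncompressed over a T1, which is why the ongoing rate of change (what log shipping or replication actually sends) matters far more than total database size.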