Quality Assurance and Project Management

Dec 20 2018   9:31PM GMT

Hybrid Cloud Services for VMware: All About IO Filters @JetStreamSoft IV

Profile: Jaideep Khanduja


We are in the middle of this Q&A series with Serge Shats, Ph.D., CTO and Co-Founder of JetStream Software. This is the fourth post in the series. Previous posts can be accessed via the links below:

Post 1
Post 2
Post 3

Q: How are IO filters used for virtual machine live migration?

A: The problem with live migration is this: How do you keep applications running, with new data being written continuously, during the hours — or sometimes days — that it takes to move the applications’ data to the destination? There are a number of approaches, as virtual machine migration is not a new problem. But IO filters provide a capability that’s much simpler than anything we’ve seen before.

With JetStream Migrate, the software deploys as a replication filter in the source VMware environment. The migrating VMs’ configurations and virtual disks are copied from the on-premises data center to the cloud data center, and while that copy and transfer process is taking place, newly written data from the VM is captured by the IO filter and also replicated to the destination.
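To make the mechanism concrete, here is a minimal, hypothetical sketch (not JetStream's actual implementation; all names are invented) of how a replication filter keeps the destination consistent: the base disk is bulk-copied while the VM keeps running, and any write issued in the meantime is intercepted and mirrored to the destination as well.

```python
# Hypothetical sketch of filter-style replication during a live copy.
# Writes made while the base disk is being copied are captured by the
# filter and forwarded, so the VM never has to stop at the source.

class DestinationDisk:
    """Stand-in for the virtual disk at the cloud destination."""
    def __init__(self, size):
        self.blocks = bytearray(size)

    def apply_write(self, offset, data):
        self.blocks[offset:offset + len(data)] = data


class ReplicationFilter:
    """Intercepts writes on the source and mirrors them downstream."""
    def __init__(self, destination):
        self.destination = destination

    def on_write(self, source_blocks, offset, data):
        # The write lands on the source disk AND is streamed onward.
        source_blocks[offset:offset + len(data)] = data
        self.destination.apply_write(offset, data)


def bulk_copy(source_blocks, destination):
    # Phase 1: copy the base image while the VM keeps running.
    # Writes arriving during this phase are handled by the filter,
    # so the destination stays consistent at cutover.
    destination.apply_write(0, bytes(source_blocks))


# Usage: a write issued through the filter reaches both sides.
dest = DestinationDisk(16)
filt = ReplicationFilter(dest)
source = bytearray(16)

source[0:4] = b"base"
bulk_copy(source, dest)

# A "live" write from the running VM goes through the filter.
filt.on_write(source, 4, b"live")

assert bytes(dest.blocks[0:8]) == b"baselive"
```

The key design point the sketch illustrates is that no pause is needed: the bulk copy and the write stream proceed in parallel, and the destination converges on the source's state.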

One of the advantages of this approach is that the copy of the virtual disk can be moved over the network connection, or it can be copied onto a physical device for “offline” transport to the cloud destination. So if you are familiar with AWS Snowball, it's now possible for an organization to use a Snowball-like device to transport data from one VMware environment to another, without having to stop the VMs or their applications from running at the source.

Hybrid Cloud Services for VMware

Source: JetStream Software

Q: With respect to disaster recovery (DR), why would someone use IO filters instead of snapshots?

A: One of the key advantages of using IO filters for data replication is that — unlike snapshots — data can be captured for replication without a detrimental impact on application performance. Also, because data is being captured in a stream, there are better options for delivering a variety of DR capabilities, such as an extremely low RPO (recovery point objective) and RTO (recovery time objective), as well as very fast point-in-time recovery.
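Why does a continuous stream enable point-in-time recovery where periodic snapshots cannot? A rough sketch (hypothetical names; not any vendor's actual implementation) is a write journal: every captured write is appended with a sequence number, so the destination can be reconstructed as of any moment between writes, not just at snapshot boundaries.

```python
# Hypothetical sketch of stream-based DR capture: each intercepted write
# is journaled in order, so the disk can be rolled forward to ANY point
# in the stream, giving fine-grained point-in-time recovery.

class WriteJournal:
    def __init__(self):
        self.entries = []  # list of (seq, offset, data)
        self.seq = 0

    def capture(self, offset, data):
        """Called by the IO filter for every write on the source."""
        self.seq += 1
        self.entries.append((self.seq, offset, bytes(data)))

    def replay_until(self, seq, disk_size):
        """Reconstruct the disk as of a given sequence number."""
        disk = bytearray(disk_size)
        for s, offset, data in self.entries:
            if s > seq:
                break
            disk[offset:offset + len(data)] = data
        return disk


# Usage: two writes to the same region; either state is recoverable.
journal = WriteJournal()
journal.capture(0, b"aaaa")   # seq 1
journal.capture(0, b"bbbb")   # seq 2

assert journal.replay_until(1, 4) == bytearray(b"aaaa")
assert journal.replay_until(2, 4) == bytearray(b"bbbb")
```

With snapshots, only the states at snapshot times are recoverable; with a journaled stream, the recovery point can sit between any two writes, which is what allows a very low RPO.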

We will be concluding this series in the next post.
