Last week, news writer Jessica Scarpati reported that employees at Schur, a printing and packaging company based in Horsens, Denmark, were using public peer-to-peer (P2P) file-sharing services instead of the company’s password-protected File Transfer Protocol (FTP) server to share sensitive, business-related files with one another. Not only did this open a gaping WAN security hole, but it also consumed a great deal of WAN bandwidth.
Whether end users simply don’t know any better or act maliciously, the situation is certainly not limited to one instance in Denmark. How to block P2P traffic is a frequently asked question in WAN bandwidth management. If WAN managers educated users and implemented a security policy that prevented this behavior, end users might actually use the appropriate file transfer method, and corporate data might actually be safe.
To avoid these security and performance pitfalls, Schur’s IT technical manager, Tom Nielsen, decided not to allow any kind of torrent file, BitTorrent or otherwise. Even if a torrent was legal and used for transferring work files, Nielsen didn’t want to take any chances. “If it’s written in our IT policy that they’re not allowed to make file transfers other than with FTP, then we can block file transfers and stuff like torrent files,” Nielsen said. (You can read more about what Nielsen did to block P2P traffic in this WAN security case study.)
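One reason torrent traffic is relatively easy to block at a gateway is that the BitTorrent peer handshake begins with a fixed, recognizable prefix: a length byte of 19 followed by the literal string “BitTorrent protocol.” Here is a minimal Python sketch of that kind of payload check; the sample payloads are hypothetical, and a real filter would of course inspect live packets rather than a list of byte strings.

```python
def looks_like_bittorrent(payload: bytes) -> bool:
    """Flag TCP payloads that begin with the BitTorrent handshake:
    one length byte (19) followed by the protocol name string."""
    return payload.startswith(b"\x13BitTorrent protocol")

# A real handshake also carries 8 reserved bytes, a 20-byte info-hash
# and a 20-byte peer ID, but the fixed prefix alone is a useful filter.
samples = [
    b"\x13BitTorrent protocol" + b"\x00" * 48,  # torrent handshake
    b"220 FTP server ready\r\n",                # ordinary FTP banner
]
print([looks_like_bittorrent(p) for p in samples])  # [True, False]
```

Commercial deep-packet-inspection gear uses far richer signatures (including encrypted-protocol heuristics), but the principle is the same.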
Many IT network security folks feel that end users are to blame for broken computer security. Although end users may ultimately be the ones leaking corporate data through P2P or other methods, the real culprit is poor security policy and network user management.
Nobody likes gotcha journalism, but sometimes you have to call a spade a spade.
As Silver Peak Systems was gearing up to announce the release of its virtual WAN optimization appliance, I noticed that a few of its customer use cases shared an interesting quality: they illustrated unusual circumstances where virtual WAN optimization was not just a convenience or a cost saver, but a necessity.
Having recently listened to Silver Peak’s marketing team talk up its virtual WAN op appliance, the VX Series, imagine my surprise when I stumbled across a Q&A with Silver Peak president and CEO Rick Tinsley that former site editor Tim Scannell had done in September 2009. Scannell asked how Tinsley thought virtualization would affect WAN optimization.
You can view his full answer on the Q&A page, but since it’s long, here’s the (edited-down and emphasis-added) portion that caught my attention:
“Virtualization is interesting but has to be the most overused buzzword in the industry today. There is server virtualization, which everyone in the industry has had experience with, and the ROI is pretty straightforward. When you get into desktop virtualization, however, you’re going to find that your mileage will vary and that it will vary tremendously.
[...] In terms of virtualizing the network element, this is where the marketing people tend to get a little bit ahead of themselves. We went through this a couple of years ago when some of the vendors were talking about having server blades in their boxes. If you ask people who run networks, most do not want a Windows server on their router. When we went through our own internal server virtualization process, we found that some apps lend themselves very well to virtualization and that you can truly get better server utility and better ROI from these applications.
Virtualizing network elements – like routers and switches and WAN accelerators – is one of those things that makes for a good PowerPoint and good marketing, but I’m not sure where it’s going to go in terms of actual deployment.
I gave Rick a chance to explain what drove his very sudden about-face and how he might respond to criticism that this may be seen by some customers as a hasty (and thus potentially sloppy) attempt to catch up to competitors, who have been well-established here for some time.
Check out his response below the jump…
Cloud computing may require more bandwidth to support the workloads traveling across your wide area network (WAN). But many businesses can’t afford to buy more bandwidth. To learn more about how cloud environments affect WAN bandwidth, I spoke to IT infrastructure expert Zeus Kerravala. Here was the outcome of our discussion:
• In this podcast interview, Zeus explains how cloud computing affects WAN bandwidth.
• In this article, we learn how WAN optimization improves cloud computing.
• These tips will improve WAN performance in cloud environments.
Today marks the release of the third annual Verizon Data Breach Investigations Report, for 2010. Verizon Business believes more should be done in the telecom industry to increase information security awareness, which is one of the reasons it began conducting data breach surveys back in 2008. What makes this year’s report different from previous years is that the 2010 survey incorporates statistics from the United States Secret Service, which gave the telecom company 40% more information, covering companies large and small, of nearly every persuasion, across the globe.
According to the report, almost all breached data (98%) came from servers and applications. For a corporate wide area network (WAN), the risk of an application-specific attack is especially high, since today’s WANs usually traverse the Internet and focus on WAN application delivery.
Verizon researchers noted that thieves assess targets by weighing the value of the data against the cost of an attack: the harder the attacker has to work, the bigger the score has to be. Your company’s size does not matter, but your WAN security weaknesses do.
Although last year’s Internet attacks on the U.S. and South Korea suggested a rise in sophisticated criminals, the 2010 Verizon data breach investigation found that 85% of attacks were considered neither advanced nor difficult, 96% were avoidable through simple or intermediate controls, and 86% of all breaches left evidence in log files. The worst part is that not only were data breaches usually discovered by a third party, they were usually discovered months after the fact.
“Many victims didn’t have the technology in place to catch attackers,” said Wade Baker, one of the authors of the 2010 Verizon data breach investigations report. Otherwise, it was obvious that many organizations put security in place only to put a check mark on a list. In other words, a company’s data loss prevention technology was not tailored to fit its network’s needs.
John Pironti explains that a thorough risk assessment must occur in order to solve network security threats:
“The thing that keeps me awake at night is this conversation about compliance — essentially giving enterprises checklists to go through … which is a paint-by-numbers approach to security, instead of doing security by risk assessment.”
Is there anything a WAN manager can do to decrease the chances of an attack? Below is a list of action items:
- Per the note above, complete a risk assessment before implementing security technology: Many WAN managers do not have evidence-based security. This is one way to get it.
- Restrict and monitor privileged users: Too many end users operate as systems administrators on their own machines. If we dial this back, we can limit the damage done when credentials are stolen.
- Check your logs: Log analysis is not where it should be at most organizations. We may not have the technology in place to properly mitigate an attack, but what we do have, we can at least monitor.
- When you’re building a network, check it twice: The report found that human error is almost always a contributing factor in a breach. Something as simple (and common) as failing to change default credentials makes it easy for a hacker or cybercriminal to intercept valuable data across your WAN.
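To make the log-checking item concrete, here is a minimal Python sketch of the kind of monitoring that catches what the report says most victims missed. The log format, IPs and threshold are hypothetical; real shops would feed syslog or SIEM data into something like this.

```python
import re
from collections import Counter

# Hypothetical sshd-style log format; real auth logs vary by platform.
FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(lines, threshold=3):
    """Count failed-login source IPs and return those at or over a threshold."""
    hits = Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
    return {ip: n for ip, n in hits.items() if n >= threshold}

log = [
    "Apr 21 10:01:02 gw sshd[111]: Failed password for root from 203.0.113.9",
    "Apr 21 10:01:05 gw sshd[112]: Failed password for root from 203.0.113.9",
    "Apr 21 10:01:09 gw sshd[113]: Failed password for admin from 203.0.113.9",
    "Apr 21 10:02:00 gw sshd[114]: Failed password for bob from 198.51.100.4",
]
print(flag_brute_force(log))  # {'203.0.113.9': 3}
```

Even this naive counter would surface the months-old breach evidence Verizon found sitting unread in 86% of victims’ logs.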
If you are upgrading your company’s Windows operating system (OS), you should know about a few changes Microsoft made to Windows security in Windows Server 2008 and Windows Server 2008 R2 that will affect the way you set up virtual private networks (VPNs). First, Windows VPNs switched from the Layer Two Tunneling Protocol (L2TP) to the Secure Socket Tunneling Protocol (SSTP), which tunnels traffic over Secure Sockets Layer (SSL). Not only does this make Windows VPNs more secure, but it also changes the steps for setting them up. Second, although the preferred method for authenticating and authorizing VPN traffic has always been RADIUS, versions of the OS prior to Windows Server 2008 called their RADIUS implementation Internet Authentication Service (IAS); in Windows Server 2008 it was replaced by the Network Policy Server (NPS) role. This change will affect how you authorize traffic through your VPN.
In this series of technical articles, IT guru Brien Posey explains how to set up VPNs securely for Microsoft Windows 2008 and higher. From his articles you can learn:
- How to set up an SSTP VPN
- How to authorize VPN traffic with RADIUS
- How to configure Windows Server 2008 R2 to act as an enterprise certificate authority to avoid purchasing costly X.509 certificates
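Of those three how-tos, the RADIUS piece is the easiest to illustrate at the protocol level. Per RFC 2865, a RADIUS server proves knowledge of the shared secret by returning a Response Authenticator: the MD5 hash of the reply’s code, identifier and length, the client’s Request Authenticator, the attributes, and the secret. A minimal Python sketch follows; the packet values and secret are hypothetical.

```python
import hashlib
import struct

def response_authenticator(code, ident, length, request_auth, attrs, secret):
    """RFC 2865 Section 3: MD5 over Code + ID + Length + RequestAuth
    + Attributes + Shared Secret, in that order."""
    header = struct.pack("!BBH", code, ident, length)  # big-endian fields
    return hashlib.md5(header + request_auth + attrs + secret).digest()

# Hypothetical Access-Accept (code 2) with no attributes: length is the
# bare 20-byte header. request_auth would come from the client's request.
req_auth = bytes(range(16))
digest = response_authenticator(2, 1, 20, req_auth, b"", b"s3cret")
print(digest.hex())
```

On Windows Server 2008, NPS handles this exchange for you; the sketch just shows why the shared secret configured on the VPN server and the RADIUS server must match exactly.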
Now that businesses have become more comfortable spreading offices across the globe, IT departments manage many more remote offices and mobile workers. As much as IT makes the world feel smaller, no one is a miracle worker: the longer the distance, the longer traffic takes to travel. Users don’t seem to appreciate this physics, however; end users expect applications to work over the wide area network (WAN) at local area network (LAN) speeds.
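The physics can be put in numbers. Assuming light travels through fiber at roughly 200,000 km/s, and that a single TCP flow can move at most one window of data per round trip, a short sketch shows why a transcontinental link feels nothing like a LAN (the distance and window size below are illustrative assumptions):

```python
SPEED_IN_FIBER_KM_S = 200_000  # light in fiber: roughly 200,000 km/s

def rtt_seconds(distance_km):
    """Round-trip propagation delay over fiber (ignores queuing, routing)."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S

def tcp_throughput_bps(window_bytes, rtt_s):
    """A single TCP flow can send at most one window per round trip."""
    return window_bytes * 8 / rtt_s

rtt = rtt_seconds(5_000)  # e.g. a ~5,000 km transcontinental link
mbps = tcp_throughput_bps(64 * 1024, rtt) / 1e6  # classic 64 KB window
print(f"RTT {rtt * 1000:.0f} ms, throughput ceiling {mbps:.1f} Mbps")
```

With a 50 ms round trip and a 64 KB window, one flow tops out around 10 Mbps no matter how fat the pipe is, which is exactly the gap WAN optimization tries to close.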
Distance isn’t the only factor slowing down the WAN; the ongoing migration to Web-based applications, cloud computing and mobility also hurts WAN application performance. Since slow applications slow business processes, company executives will notice the difference, and application delivery will suddenly become a top-of-mind issue for WAN managers.
Part of achieving acceptable application delivery is to think higher up the OSI model. Mark Fabbi, vice president and distinguished analyst at Gartner Inc., explains that “throwing more resources and throwing more money at Layer 2 and Layer 3 doesn’t make much difference [to applications].”
IT network professionals who traditionally work at the bottom of the OSI model may be uncomfortable dealing with the application layer. However, if you’re used to configuring WAN optimization controllers, you may be back in familiar territory.
Industry expert Jim Metzler explains that WAN optimization controllers can fix application delivery — just as they would optimize other traffic across the WAN. Well, maybe not exactly the same way. Metzler’s application delivery 2.0 guide highlights how to use WAN optimization controllers to optimize particular applications. You can use them to do the following:
- Optimize virtualized applications.
- Improve application traffic in the cloud.
- Enable mobile application delivery.
- Support dynamic virtual machine movement.
There’s no reason all the application delivery work should rest on you, though. Network pros can talk to application development teams to make application traffic svelte from the start. If it’s too late for that, talking to your carrier about application SLAs and agile QoS may be another option.
Cloud computing, virtualization and lofty user expectations require a network to be both robust and flexible, but limited resources prevent IT staff from enabling the network to meet these demands. This makes network management, and finding the right tools, all the more important. How can you do this on top of your normal day-to-day workload?
Experts in the industry have learned tricks along the way to help you spend less time troubleshooting and more time implementing techniques to keep your company competitive. Respected IT pros Jim Metzler, John Bartlett and Brent Chapman plan to share their tips with you in our free virtual seminar, entitled “Optimizing and Managing the Dynamic Enterprise Network.”
Join me Wednesday, June 23, in any or all of these three sessions:
Sign up for this virtual seminar to speak with experts in live Q&As, network with your peers in our virtual lounge, and/or watch vendor product demos, all from the comfort of your own desk… Plus, one lucky attendee will win an iPad.
I hope you can make it, and I really look forward to meeting you June 23!
Everyone who connects to the Internet experiences it at one time or another: a Web page is not found; a connection takes forever or, worse, disconnects. It’s at times like these that the uninformed Chip the sales guy says the website is down, or the IT admin with a sense of humor says, “Uh-oh. I broke the Internet.”
Why did the Internet come to be?
Nemertes Research president and panel discussion moderator Johna Till Johnson pointed out that the Internet is the largest creation the human race has ever put together collaboratively.
Furthermore, Johnson said “the Internet was invented because people wanted it to be invented.”
The Internet was born purely out of our human desire to connect — not because of any government mandate or religious decree.
But because no one person or organization can own the Internet, and because of its incredible boom and rapid success, many issues arise that cannot be addressed quickly enough. No one person or entity can fix them.
The Internet is broken
What are these issues that break the Internet? The panelists ultimately boiled the problems down to four factors:
- Routing scalability: The routing table continues to grow and so do the requirements to support BGP.
- Security: Internet crimes have risen at phenomenal rates, making critical data harder to secure.
- Bandwidth: Applications like voice and video are only getting bigger and more demanding of resources, particularly of bandwidth.
- IPv4 address depletion: Seriously. IPv4 addresses are going, going, gone.
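To put that last point in perspective, the address-space arithmetic is simple, and it explains why IPv6 dominates the rest of the discussion below:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4 address space: {ipv4_total:,} addresses")
# IPv6 multiplies that by 2**96 -- an effectively inexhaustible pool.
print(f"IPv6 is {ipv6_total // ipv4_total:,} times larger")
```

Roughly 4.3 billion IPv4 addresses, before carving out reserved and private ranges, were never going to cover a planet of always-on devices.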
Can the Internet be fixed?
Much of the conference was spent looking at ways to resolve the last point: IPv4 address depletion. NAT was out of the question because, on top of not being enough to stave off address depletion, it was deemed “poor man’s security” (not to mention a nightmare for IPv6 transitioning).
A viable answer was IPv6, though it wasn’t the only one. Much to my surprise, two other solutions, namely Cisco’s LISP and PNA, came up as serious options. (See SearchTelecom.com’s Is IPv6 a sure thing? blog entry for more on this.)
Although IPv6 was not the lone answer, it appeared to be the final answer. The panel — ranging from Mike O’Dell from New Enterprises to Fred Baker at Cisco Systems — all agreed that IPv6 really was the way to fix address depletion in the end.
This was not to say that they were satisfied with the answer.
Panelist Dave Schaeffer, CEO of Cogent Communications, quipped “We got IPv6 pregnant — now we’ve got to marry her.”
ARIN president and CEO John Curran explained that part of the unhappiness toward IPv6 is that it offers no new features. In IPv4 there’s IPsec for security and there’s DHCP; IPv6 doesn’t add bells and whistles beyond what IPv4 already has.
IPv6 also solves only the IPv4 address problem. Bandwidth, security and routing scalability aren’t solved by IPv6 and, in some cases, are exacerbated by it.
Who broke the Internet and can they fix it?
Ultimately, who’s to blame? The application developers for making “craplications”? The networking professionals for NATing? The data center guys? “Everyone’s to blame,” the conference panelists mutually concluded.
We all have played our part in breaking the Internet. Perhaps it’s time to collectively fix it… Now, if we only knew how…
Data center interconnects are generating a lot of news at EMC World this week. You have a startup like Infineta Systems coming out of stealth mode to demonstrate its Velocity Dedupe Engine, a product meant to reduce the amount of bandwidth data replication (both synchronous and asynchronous) consumes by a factor of ten.
Meanwhile, EMC introduced VPLEX, a family of “private cloud” appliances. VPLEX also promises accelerated data replication and flips the model of traditional replication on its head by making replicated data active at two sites simultaneously, rather than having a primary site and backup site. But VPLEX also promises to enable virtual machine migration across data center interconnects.
Live virtual machine migration across data centers came up often at last month’s Interop Las Vegas. Everyone wants to do it, but no one seemed to think it was a practical reality yet. For one thing, latency-sensitive applications can be derailed if a live VM migration from one data center to another takes a few milliseconds too long. For another, VMware’s vMotion technology only supports virtual machine migration across Layer 2 connections, and a great many enterprise WANs are Layer 3 networks. Cisco Systems introduced Overlay Transport Virtualization a few months ago as a way to create a “virtual” Layer 2 connection between WAN links in order to enable vMotion migration across data centers.
Now EMC’s VPLEX promises to virtualize storage resources across data centers so that a virtual machine and its resident application can migrate from one data center to another without having access to its data interrupted; the VM can access the data locally no matter what its geographic location is. Enterprise Strategy Group has validated this technology across a 100-kilometer WAN link (PDF). If storage can be federated across data centers like this, it could improve the performance of live VM migration.
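Infineta’s dedupe engine is proprietary, but the general idea behind WAN deduplication can be sketched: hash chunks of the replication stream and put full bytes on the wire only for chunks the far side hasn’t seen, sending a short reference otherwise. Here is a deliberately simplified fixed-size-chunk sketch in Python (real products use content-defined chunking, compression and much more):

```python
import hashlib

def dedupe(stream, chunk_size=4096, seen=None):
    """Return the bytes actually sent: a chunk's payload the first time
    its hash appears, a 32-byte hash reference every time after."""
    seen = set() if seen is None else seen
    sent_bytes = 0
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i : i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen.add(digest)
            sent_bytes += len(chunk)    # first sight: send the payload
        else:
            sent_bytes += len(digest)   # repeat: send only a reference
    return sent_bytes

data = b"A" * 40_960 + b"B" * 4_096  # highly repetitive replication stream
print(dedupe(data))  # 8480 bytes on the wire instead of 45,056
```

Replication traffic is full of repeats (the same blocks written again and again), which is why a dedupe engine sitting on the interconnect can plausibly claim order-of-magnitude bandwidth reductions.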
Jim Metzler of Ashton, Metzler & Associates moderated a session today at Interop Las Vegas 2010 about the emergence of virtualized application delivery appliances, both WAN optimization controllers (WOCs) and application delivery controllers (ADCs). Panelists included representatives of Citrix, F5 Networks, Certeon, A10 Networks and Blue Coat.
In the video below, Jim summarizes the hour-long discussion. The bottom line: Jim said vendors agreed that virtual appliances cost about one-third as much as their physical counterparts and are often available in a pay-as-you-go model.
The conversation also focused on the need to support multiple hypervisors, not just VMware, based on an impromptu survey of session attendees. Nearly all of them said they are using VMware in their infrastructure today, but almost none think they will be using VMware exclusively in the future. Microsoft Hyper-V and Citrix Xen will find their way into the enterprise, and virtual WOCs and ADCs will have to adapt.
[kml_flashembed movie="http://www.youtube.com/v/SQycyoUDkvM" width="425" height="350" wmode="transparent" /]
Beyond what Jim said, one other thing I heard: Many of the vendors offer both hardware-based and virtual ADCs and WOCs. Vendors like Citrix and F5 believe that enterprises will continue to deploy a combination of virtual and physical appliances over time. They will deploy them in a two-tiered architecture. In places where sheer performance is the priority, enterprises will use hardware-based products, but in places where features and functionality are the priority and performance isn’t a major concern, enterprises will deploy virtual appliances.