It’s been a busy old Spring so far – I’m still trying to get my head around the recession – IT is going bonkers, spending like the world is about to end (does somebody know something we don’t?), every flight I take from wherever to wherever is full and when I take a few days off on the Spanish and SoF coastlines the places are packed.
The result is a lot of tests and reports to update on, which can be found on the www.broadband-testing.co.uk website as normal, for free download. Gartner said it at the start of the year, IDC has supported the argument and I’m in the thick of it – network optimisation that is, whether LAN, WAN, Cloud or inter-planetary. As a result, we’ve got two new reports up on L-B/ADC solution providers, Kemp and jetNEXUS. Both are going for the “you don’t need to spend stupid money to optimise app delivery” angle and both succeed; however, the focus of the tests is quite different. With Kemp we showed that you can move from IPv4 to IPv6 and not take a performance hit at all – very impressive. With jetNEXUS we showed that you can d**k around with data at L7 as much as you want and still get great throughput, manipulating data as you wish with no programming skills required whatsoever. Could put a few people out of a job… no problem – let them loose with sledgehammers to knock down my old home town of Wakefield so someone can rebuild it properly. What was it that John Betjeman said about Slough?
The same could be said of Vegas; since arriving back with what felt like pneumonia I’ve been in a “who’s the most ill” competition with my HP mate Martin O’Brien, who contracted several unpleasant things while we were both out at Interop. Elton John had to cancel the rest of his Vegas shows because he contracted (the same?) respiratory problems. Well, if it’s good enough for Elton…
One of the things to come out of Interop meetings wot I have spoken about is the proposed testing of HP’s (along with F5’s) Virtual Application Networking solution. What is interesting here is that the key to managing network performance on a per-user, per-application basis is getting that profile as accurate as possible in the first place. While HP’s IMC management system (inherited from the 3Com acquisition) does some app monitoring, it doesn’t go “all the way”. But we know men (and women) who can… If you check out the Broadband-Testing website, you’ll also see a review of Centrix’s WorkSpace products. With these you can take application monitoring down to the level of recording when a user logs into an app, how long they have it loaded for and even when they are actively using it or not. Now that IS the way to get accurate profiling; take note HP. Let the spending continue…
Back from Interop and my ‘beloved’ Vegas from which I escaped just in time before being air-con’d to death as my ongoing cough continues to remind me. Is it possible to sue “air”?
I don’t know – maybe there are people out there (mainly the people who were “out there”) who enjoy the delicious contrast of walking in from 42°C temperatures into 15°C, time and again, then in reverse, and the joy of being able to hear at least three different sorts of piped music at any one time, the exhilaration for the nostrils of seven or more simultaneous smells, 24 hours a day? Must be me being picky. I like my sound in stereo at least, but all coming from the same source…
Anyway – reflections on the show itself; easy when there’s less smoke and more mirrors AKA taking away the hype. What I found was a trend – that others at the show also confirmed – towards making best of breed “components” again, rather than trying to create a complete gizmo. For example, we had Vineyard Networks creating a DPI engine that it then bolts on to someone’s hardware, such as Netronome’s dedicated packet processing architecture, that then sits – for example – on an HP or Dell blade server. I like this approach – it’s what people were doing in the early ’90s; pushing the boundaries, making networking more interesting – more fun even – and simply trying to do something better.
There are simply more companies doing more “stuff” at the moment. Take a recently acquired client of mine who I met out there for the first time, Talari Networks, enabling link aggregation across multiple different service providers – not your average WanOp approach. A full report on the technology has just been posted on the Broadband-Testing website: www.broadband-testing.co.uk – so please go check it out. Likewise, a report on Centrix Software’s WorkSpace applications. Reading between the lines on what HP is able to do with its latest and greatest reinvention of networking – Virtual Application Networking or VAN – as we described on this blog last week, along with buddy F5 Networks, I reckon there is just one piece of the proverbial jigsaw missing, and that is something that Centrix can most definitely provide with WorkSpace. The whole of VAN is based around accurately profiling user and application behaviour, combining the two – in conjunction with available bandwidth and other resources – to create the ideal workplace on a per-user, per-application basis at all times, each and every time they log into the network, from wherever that may be.
Now this means that you want the user/application behaviour modelling to be as accurate as possible, so your starting point has to be, to use a technical term much loved by builders, “spot on”. Indeed, there is no measurement in the world more accurate than “spot on”. While HP’s IMC is able to provide some level of user and application usage analysis, I for one know that it cannot get down to the detailed level that Centrix WorkSpace can – identifying when a user loads up an application, whether that application is “active” or not during the open session and when that application is closed down… and that’s just for starters. I feel a marriage coming on…
I don’t think I can remember a time – and this is saying something – when there were SO many hyper buzz-phrases in IT circulation as there are currently. Every cloud variant, Big Data, SDN…
Live from the home of tack – i.e. Vegas, the Blackpool of the desert but without the classiness…or piers – is the latest bombardment of SDN, er, ness, care of Interop 2012.
Starting with a direct follow-up to my last blog entry – HP’s take on SDN, AKA VAN (ok – enough TLAs…) or Virtual Application Networks – the big question was: who was going to drive the VAN, since HP doesn’t have the whole solution to deliver it? The answer is F5 Networks. So, the idea is to deliver a completely optimised, end-to-end solution on a per-user/per-application basis by using templates to define every aspect of performance etc. Makes total sense, sounds too good to be true. So, what’s the answer – test it of course; watch this space on that one.
Meantime, I’ll be reporting in daily from the show – seeing lots of new (to me) vendors who, one way or t’other, are all ticking the SDN/Big Data/Cloud boxes.
It seems to me that we need to get back to basics with SDN so that people actually understand what it is. For example, there’s a definite belief among some that it does away with hardware… Nice idea – so we have software that exists in a vacuum that somehow delivers traffic? There also seems to be confusion between different vendors’ SDN solutions and OpenFlow. For those wot don’t know, here’s what OpenFlow is – in a classical router or switch, the fast packet forwarding (data path) and the high level routing decisions (control path) occur on the same device.
An OpenFlow Switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow Switch and Controller communicate via the OpenFlow protocol, which defines messages, such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.
The data path of an OpenFlow Switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match, and an action (such as send-out-port, modify-field, or drop). When an OpenFlow Switch receives a packet it has never seen before, for which it has no matching flow entries, it sends this packet to the controller. The controller then makes a decision on how to handle this packet. It can drop the packet, or it can add a flow entry directing the switch on how to forward similar packets in the future.
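The switch/controller split described above can be sketched in a few lines of toy Python. This is purely illustrative – the class names and the match-on-destination-only rule are invented for the sketch, and real OpenFlow is a binary wire protocol, not method calls:

```python
# Toy model of the OpenFlow split: the "switch" only matches packets against
# its flow table; anything unmatched is punted to a separate "controller",
# which decides the action and may install a new flow entry for next time.

class Switch:
    def __init__(self, controller):
        self.flow_table = {}           # match fields -> action
        self.controller = controller

    def receive(self, packet):
        match = (packet["dst"],)       # match on destination only, for brevity
        if match in self.flow_table:
            return self.flow_table[match]         # fast path: stays on switch
        # no matching entry: send the packet to the controller ("packet-in")
        return self.controller.handle(self, packet)

class Controller:
    def handle(self, switch, packet):
        # trivial policy: drop traffic to one address, forward everything else
        action = "drop" if packet["dst"] == "10.0.0.99" else "send-out-port-1"
        # install a flow entry so similar packets never leave the data path
        switch.flow_table[(packet["dst"],)] = action
        return action

sw = Switch(Controller())
print(sw.receive({"dst": "10.0.0.5"}))   # first packet: decided by controller
print(sw.receive({"dst": "10.0.0.5"}))   # second packet: matched in flow table
print(sw.receive({"dst": "10.0.0.99"}))  # dropped by policy
```

The point of the exercise: only the first packet of a flow ever touches the controller; everything after that is plain table lookup on the switch.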
In other words it provides one, open-standard methodology of optimising traffic, end-to-end, but it is not a solution in its own right, just a potential part of the action.
Whatever – the interesting theme here is that no one talks about MPLS any longer (well maybe apart from Cisco and Juniper that is) despite it still being THE methodology used to move all our data around the ‘net and beyond. There are factions that stand for the WAN optimisation kills MPLS idea. And for good reason – but there’s no overnight change here, given the gazillions invested in MPLS networks. It’ll be interesting to see what the vendors here make of the situation, at least from a timeline perspective…
Meantime it’s showtime, meaning a walk past a beach, complete with wave machine and hundreds of Americans trying to get skin cancer, in order to get to the exhibition halls – this is Vegas, after all.
Wore my journalist hat yesterday to attend an HP update event on its ESSN division (don’t worry about what the initials stand for, but N is for Networking…).
While not the main focus of yesterday’s blurb, the key thing for me to take from the event was the company’s very recent announcement that it is going into the VAN market; no – not competing with Transits, though you could say the network is “in transit”, but Virtual Application Networks – all part of the current SDN or Software Defined Network movement. For many years HP (as Procurve) and others have been trying to crack the whole “end to end” optimisation problem. I’ve been trying to personally crack it using any number of vendor parts since 1999….
So, VAN is the latest attempt. The aim is to use preconfigured templates to characterise the network resources required to deliver an application to users – i.e. to enable consistent, reliable and repeatable deployment of cloud applications in minutes. An end-to-end control plane virtualises the network and enables programming of the physical devices to create multi-tenant, on-demand, topology and device-independent provisioning. The idea is to be completely open, so this isn’t an HP closed-shop solution, even though HP created it.
Speaking with one of HPs customers, Mark Bramwell of Wellcome Trust at the event, we both agreed that it sounds like the latest and greatest “smoke and mirrors”, “too good to be true” solution BUT – if it works, then great – every user has optimised applications, on a per user, per application basis. So we both agreed – the only sensible option is for me to test it. Watch this space on that one…
Speaking yet further on the subject in a broader manner with Lars Koelendorf, who heads up HP EMEA’s mobile and wireless stuff, we agreed that the ideal way to rebuild a network is to start with IPv6; with so many addresses available, every user could have their own virtual IP address that IS their identity so, whatever client they are using and wherever they are, all the logic sits behind their VIP(v6) address and the HP VAN man is complete. They would, of course, drive applications faster across the network than any other user type…
In conversation with Axel Pawlik, MD of RIPE NCC (which is obviously better than an unripe version).
The RIPE NCC is an independent, not-for-profit membership organisation that supports the infrastructure of the Internet in Europe, the Middle East and parts of Central Asia. The most prominent activity of the RIPE NCC is to act as a Regional Internet Registry (RIR) providing global Internet resources (IPv4, IPv6) and related services to a current membership base of around 6,800 members in over 75 countries. So these guys are involved at the heart of the IPv6 movement. Here’s Axel’s views on a few key areas:
What is at the heart of the IPv4/IPv6 issue?
“Although the IANA’s pool of available IPv4 addresses is exhausted, the RIPE NCC can still assign IPv4 addresses to its members from its own reserves of IPv4 address space. We cannot predict how long this supply will last.”
“IPv4 addresses and IPv6 addresses can’t communicate directly with each other. So, before IPv6 addresses can be used to access the Internet, your organisation’s networks, services and products need to be IPv6 compatible or enabled. This requires planning and investment in time, equipment and training. New hardware and software is required to make networks ready for an IPv6-based Internet.”
IPv6 – What’s The Deal?
“Unless businesses act now to safeguard their networks, the future expansion of the Internet could be compromised. IPv6 is the next generation of IP addressing. Designed to account for the future growth of the Internet, the pool of IPv6 addresses contains 340 trillion, trillion, trillion unique addresses. This huge number of addresses is expected to accommodate the predicted growth and innovation of the Internet and Internet-related services over the coming years.”
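That “340 trillion, trillion, trillion” figure is simply the 128-bit address space rounded – a quick sanity check:

```python
# Sanity-check the "340 trillion, trillion, trillion" figure: IPv6 addresses
# are 128 bits long, so the pool is 2**128 unique addresses.
pool = 2 ** 128
print(pool)             # 340282366920938463463374607431768211456
print(f"{pool:.2e}")    # ~3.40e+38
# "trillion, trillion, trillion" = (10**12)**3 = 10**36, so 340 * 10**36 fits
assert 340 * 10**36 < pool < 341 * 10**36
```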
How will my customers be affected by the deployment of IPv6 in my networks?
“End users of the Internet may not notice any difference when using the Internet with an IPv6 address or an IPv4 address. However, if you do not invest in IPv6 infrastructure now, in the future there may be parts of the Internet that your customers cannot reach with an IPv4 address if the destination is on an IPv6-only network.”
What needs to be done?
- Network operators should ensure that their networks are IPv6 enabled and can be used by their customers to access other IPv6 networks.
- Software producers should ensure that their software is IPv6 compliant.
- Hardware vendors should ensure that their products are IPv6 compatible.
- Content providers should prepare networks so that they are accessible using IPv6 as well as IPv4.
It’s a question we should all ask.
For the average IT user or network manager it’s a significant point to actually consider. For a managed services company such as SAS Group, based down in an actually leafy bit of “greater” Crawley, it’s a fundamental question to ask.
Charles Davis, CEO of SAS believes the IP world is reaching crisis point.
He points out that the pool of IPv4 addresses has long been predicted to run out soon, arguing that, meanwhile, our readiness to move over to IPv6 looks increasingly unlikely to materialise any time soon. Conventional wisdom among many analysts said that the industry wouldn’t be ready for the switch until 2015. Personally, based on the indicators he sees every day, Davis thinks it could be even more distant.
But – and this is a big but (no pun intended for American readers) – the world IS running out of IPv4 addresses. This means that two of the current booms in technology he identifies, cloud computing and the “Internet of Things”, might not be sustainable. You can’t have an Internet of Things, Davis argues, if the ‘things’ in question (gadgets) can’t get on the Internet. They simply won’t be able to without an IP address, and all the IP addresses available under the old system are rapidly being used up.
Davis believes that, while it might all sound a bit “Mad Max”, the IP crisis does bear some of the hallmarks of an apocalypse. For example, there are some alarming inequalities in the way resources are being shared out, he notes, with just 20% of the world owning the majority of IP addresses. Hardly ideal… India, for example – which when I last looked at my globe is quite a large country (with rapid IT deployment) – has only three Class B address ranges (i.e. around 197,000 addresses). In contrast, as Davis points out, just one US IT company alone, HP, can trump that with its two Class A IP address ranges (i.e. around 33.5 million addresses). Could this lack of infrastructure restrict the growth of the BRICs (Brazil, Russia, India and China), he asks, and will the developing nations become frustrated at their lack of, well, development?
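Those classful figures are easy to derive (a Class B block is a /16, a Class A block a /8):

```python
# Working through the classful arithmetic behind Davis's comparison.
class_b = 2 ** 16          # a Class B (/16) holds 65,536 addresses
class_a = 2 ** 24          # a Class A (/8) holds 16,777,216 addresses

print(3 * class_b)         # India's three Class Bs: 196,608 addresses
print(2 * class_a)         # HP's two Class As: 33,554,432 addresses
print((2 * class_a) // (3 * class_b))  # roughly a 170x disparity
```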
In circumstances like these, Davis can see drastic measures being taken, such as… companies actually getting round the negotiation table and talking to each other. Perhaps some decisions will be taken sooner and innovative solutions will be dreamt up to free up more addresses.
As Davis points out, it wasn’t as if it were planned. Yes, it was a class issue, but only in the sense that the early allocation of IPv4 addresses was based on IP class allocation. This was in the days when the eventual exhaustion of the IPv4 ranges was not seen as an issue, like many things IT. So the allocation that took place seemed appropriate at the time. As a result, large amounts of address space went unused. Indeed, some estimate that as many as 80% of allocated addresses are not currently in use.
The cloud computing lobby, too, will be exerting pressure for the IP crisis to be resolved. For cloud computing to work, you need certain conditions, one of which is perfect communications. Optimum communications, in turn, could be dependent on the adoption of IPv6.
This brings Davis onto another aspect of the next version of IP, which he believes nobody has really given much air time to as yet. With IPv6 giving companies complete visibility over the movements and browsing habits of smart phone and laptop users, it could become a marketing manager’s dream.
If only we had the same perfect information about the migration from IPv4 to IPv6… (watch this space).
“M” might stand for Murder in the London theatre world, but the ultimate “M” word in IT has to be “Migration”.
Apply this word to the challenge that is moving from IPv4 to IPv6 and you can probably hear the howls of despair and mistake them for an attempted murder. There are, however, some fundamental tools/advanced features of IPv6 that are designed to ease this process. These have been adopted to a lesser or greater degree by different vendors, so it’s worth noting the availability of these features when shopping around for IPv6 assistance and future proofing.
We’ll start with three absolutely fundamental ways to manage your IP addresses and how these work in a migratory environment.
NAT: NAT (Network Address Translation) has become a pretty fundamental tool for alleviating the issues with the limited IPv4 address space, with most companies enabling it on their network gateways and other devices. So how do you transition this to IPv6? First, there is what is known as Carrier Grade NAT (AKA Large Scale NAT), whereby carriers/ISPs can allocate multiple clients to a single IPv4 address, standardising behaviour for IPv4 NAT devices and the applications running over them, using features such as “fairness” mechanisms – per-user port quotas and the like.
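To make the “port quota” fairness idea concrete, here’s a toy Python sketch of carving one shared public address’s port range into fixed per-subscriber slices – the quota size and layout are invented for illustration, not taken from any real CGN product:

```python
# Toy CGN port-quota allocator: many subscribers share one public IPv4
# address, each getting a fixed slice of the 16-bit port range so that no
# single user can exhaust it. All constants here are illustrative.

PORTS_PER_SUBSCRIBER = 2048
FIRST_PORT = 1024              # leave the well-known ports alone

def port_range(subscriber_index):
    """Return the (first, last) external ports allotted to one subscriber."""
    first = FIRST_PORT + subscriber_index * PORTS_PER_SUBSCRIBER
    last = first + PORTS_PER_SUBSCRIBER - 1
    if last > 65535:
        raise ValueError("public address fully subscribed")
    return first, last

print(port_range(0))   # (1024, 3071)
print(port_range(1))   # (3072, 5119)
# capacity: (65536 - 1024) // 2048 = 31 subscribers per public IPv4 address
```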
We also have specific transition technologies such as NAT64. This is a mechanism to allow IPv6 hosts to communicate with IPv4 servers. The NAT64 server is the endpoint for at least one IPv4 address and an IPv6 prefix (typically a /96, such as the well-known 64:ff9b::/96). The IPv6 client embeds the IPv4 address it wishes to communicate with in the final 32 bits of that prefix and sends its packets to the resulting address. The NAT64 server then creates a NAT mapping between the IPv6 and the IPv4 address, allowing them to communicate.
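The address embedding itself can be demonstrated with Python’s standard ipaddress module, assuming the well-known 64:ff9b::/96 prefix:

```python
# Embed an IPv4 server address in a NAT64 /96 prefix: the IPv4 address
# occupies the final 32 bits of the resulting IPv6 address.
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(ipv4_str):
    """Map an IPv4 address into the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) + int(v4))

print(to_nat64("192.0.2.33"))   # 64:ff9b::c000:221
```

An IPv6-only client sending packets to 64:ff9b::c000:221 is, via the NAT64 box, really talking to 192.0.2.33.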
DNS: As with NAT, DNS also has its “64” transition variant – DNS64. The IPv6 end user’s DNS requests are received by the DNS64 device, which resolves the requests.
If there is an IPv6 DNS record (AAAA record), then the resolution is forwarded to the end user and they can access the resource directly.
If there is no IPv6 address but there is an IPv4 address (A record), then DNS64 converts the A record into an AAAA record using its NAT64 prefix and forwards it to the end user. The end user then accesses the NAT64 device that NATs this traffic to the IPv4 server.
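That decision flow – hand back a real AAAA record if one exists, otherwise synthesise one from the A record using the NAT64 prefix – can be sketched like this (the lookup dictionaries are invented stand-ins for real DNS resolution):

```python
# Miniature DNS64 resolver: real AAAA records win; names with only an A
# record get a synthesised AAAA built from the NAT64 /96 prefix.
import ipaddress

NAT64_PREFIX = int(ipaddress.IPv6Network("64:ff9b::/96").network_address)

AAAA = {"dual.example.com": "2001:db8::1"}          # names with real IPv6
A    = {"dual.example.com": "198.51.100.7",
        "v4only.example.com": "203.0.113.9"}        # names with IPv4 only

def resolve64(name):
    if name in AAAA:                                # real AAAA record exists
        return AAAA[name]
    if name in A:                                   # synthesise from A record
        v4 = int(ipaddress.IPv4Address(A[name]))
        return str(ipaddress.IPv6Address(NAT64_PREFIX + v4))
    return None

print(resolve64("dual.example.com"))    # 2001:db8::1 (real record wins)
print(resolve64("v4only.example.com"))  # 64:ff9b::cb00:7109 (synthesised)
```

The end user then connects to the synthesised address, which lands on the NAT64 device for translation to the IPv4 server.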
Dual Stacks/DS-Lite: An obvious feature to look for is dual-stack support, where all IPv4 and IPv6 features can run simultaneously. In addition there is DS-Lite (Dual-Stack Lite), which enables incremental IPv6 deployment, providing a single IPv6 network that can serve IPv4 and IPv6 clients. Basically this works by running IPv4 (tunnelled from the customer’s gateway) over IPv6 (the carrier’s network) to a NAT device (the carrier’s device allowing connection to the IPv4 Internet, which can also apply LSN/CGN). Because of IPv4 address exhaustion, DS-Lite was created to enable an ISP to omit the deployment of any IPv4 address to the customer’s on-premises equipment, or CPE. Instead, only global IPv6 addresses are provided. (Regular dual-stack deploys global addresses for both IPv4 and IPv6.)
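The encapsulation path can be caricatured in a few lines – packets here are plain dicts and all addresses come from documentation ranges, so this is purely illustrative rather than any real implementation:

```python
# DS-Lite in miniature: the customer's gateway (the "B4" element) has no
# public IPv4 address at all - it wraps private IPv4 packets in IPv6 for the
# trip across the carrier's network, and the carrier-side endpoint (the
# "AFTR") unwraps them and applies Large Scale NAT onto a shared public
# IPv4 address.

B4_V6 = "2001:db8::b4"          # customer gateway's IPv6 address
AFTR_V6 = "2001:db8::a4"        # carrier-side tunnel endpoint
SHARED_PUBLIC_V4 = "192.0.2.1"  # public address shared by many customers

def b4_encapsulate(ipv4_packet):
    """Tunnel a private IPv4 packet inside IPv6 towards the AFTR."""
    return {"v6_src": B4_V6, "v6_dst": AFTR_V6, "payload": ipv4_packet}

def aftr_forward(v6_packet):
    """Decapsulate at the AFTR and NAT the source onto the shared address."""
    inner = dict(v6_packet["payload"])
    inner["src"] = SHARED_PUBLIC_V4
    return inner

pkt = {"src": "10.0.0.2", "dst": "198.51.100.5", "data": "GET /"}
out = aftr_forward(b4_encapsulate(pkt))
print(out["src"], out["dst"])   # 192.0.2.1 198.51.100.5
```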
I’ve recently been in conversation with a number of network product vendors – from Cisco to Infoblox – users and test equipment vendors, with respect to what must be the ultimate in “let’s sweep it under the carpet and forget about it for a while” IT topics and that is IPv6.
With the last of the public IPv4 address allocation now long gone and the Far East already deploying IPv6 big time, the reality is that we do all need to start thinking about moving from the “4” to the “6”, albeit gradually in most cases. And with LTE around the corner in the mobile world, that being pure IP-based, how many new IP addresses will suddenly be demanded? And where are they going to get allocated from?
While in the States recently, I had a casual natter with Infoblox’s Steve Garrison, who was saying how many companies still carry out IP Address Management (IPAM) using Excel spreadsheets (got to be in the “Top 10 misuses of a spreadsheet”). So how will they cope with the complexities of deploying IPv6?
Another worry, from a conversation with F5 Networks and others that dabble in L4-7 data “mucking about”, is the potential performance hit when moving from IPv4 to IPv6. This is something that (quelle surprise!) vendors don’t openly talk about, but F5 has seen up to a 50% performance hit on some rival products (tested internally) when moving from IPv4 to IPv6 and generally reckons its own products see up to a 10% performance loss in the same circumstances. This claim was substantiated in talks with other vendors large and small, such as with a newly acquired load-balancing client of ours, Kemp Technology.
So, on the basis that someone has to do something about it, we are launching an IPv6 performance test program, with a view to developing what is effectively an ongoing buyers guide/approved list for companies to short-list their potential IPv6 related procurements with.
Over the next few days we’ll be looking at some of the key elements of IPv6 deployment – think in terms of something akin to the Top 10 Considerations when moving to IPv6. Because, sooner or later, we’re all going to have to do it…
I just moved apartments in Andorra and asked them if I could keep my current 100Mbps FTTH account (actually tops out at 83Mbps), related phone number etc – and the answer was “yes, of course” (with a bit of Catalan thrown in for good measure, most of which I ignored).