Networks Generation


September 27, 2012  10:41 AM

IT Is Prawn Cocktail?

Steve Broadhead Profile: Steve Broadhead
BYOD, Cisco, Enterasys, Extreme, HP, Netevents, Network management, SDN

Bem Vindo from the Algarve, at the latest Netevents symposium.

One of my favourite topics in networking (and IT in general) is how often we revisit old “recipes”. In the same way that prawn cocktail has become trendy again, so it is with networking and Netevents. Two panel debates in, we’ve already had seven “paradigms” (IT buzzword of the year, 1995) and several “visions” and a few “hype cycles”.
Debate topics are pretty well predictable:
– BYOD
– SDN
– Mobile + Cloud = opportunity or risk?
etc…
The focus of the BYOD debate (and let’s face it, people have been bringing their personal laptops into work and copying data onto them to work on at home out of office hours since the early ’90s) was device management and security. But is the real issue here not the device, but the kind of applications people are using on it, and how to adopt and manage those? In other words, at what point do applications such as Facebook become “enterprise” applications, and how do we then manage them, rather than simply blocking them (and the devices themselves)?
Now we’re onto the subject of SDN – Software Defined Networking. The panel talk is about automation, removing the need for manual administration, control of mixed-vendor networks and so on. Isn’t this what used to be called vendor-independent network management – i.e. what all the net’ management vendors in the early ’90s set out to achieve? So, it didn’t get there – will SDN?
The debate goes on…

September 4, 2012  3:15 PM

Tech Trailblazers Update

Steve Broadhead Profile: Steve Broadhead
awards, Big Data, Business, information technology, Mobile, Networking, Security, September, Storage Networking Industry Association

Just a quickie update to all you vendors with mega technology out there re: the Tech Trailblazers awards wot I blogged about earlier this summer.

Entry levels have proved (as Top Gear did) that you can’t have too many awards competitions, and entries are still open until 12th September (for late birds – the early-bird deadline has closed) in the following categories, just to remind you all:

  • Big Data Trailblazers
  • Cloud Trailblazers
  • Emerging Markets Trailblazers
  • Mobile Technology Trailblazers
  • Networking Trailblazers
  • Security Trailblazers
  • Storage Trailblazers
  • Sustainable IT Trailblazers
  • Virtualization Trailblazers
There’s over a million dollars up for grabs, so it’s well worth entering. To do so, just go to the Tech Trailblazers website: www.techtrailblazers.com


Simple as…

Enhanced by Zemanta


September 4, 2012  3:09 PM

At The End Of The Network

Steve Broadhead Profile: Steve Broadhead
Big Data, Bitspeed, EMC, Hollywood, HP, Isilon, WAN

One of the problems we’ve faced in trying to maximise throughput in the past has not been at the network – say WAN – level, but in what happens once you get that (big) data off the network and try to store it at the same speed directly onto the storage.


We saw this limitation, for example, last year when testing with Isilon and Talon Data using traditional storage technology – the 10-gigabit line speeds we were achieving with Talon Data just couldn’t be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (solid state disk) technology would have provided a slight improvement, we still wouldn’t have been talking consistent, top-level performance end to end.


So it’s with some interest – to say the least – that I’ve started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We’re currently looking to put together a test with them:  http://johnpaulmatlick.wix.com/cvt-web-site-iii – and another “big data” high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.
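For a sense of what “10Gbps, end-to-end, from disk to disk” demands of the storage end, here’s a back-of-envelope calculation in Python – the per-device throughput figures are my own ballpark assumptions for circa-2012 kit, not vendor specs:

```python
# What a sustained 10Gbps transfer demands of the storage layer, with
# ballpark (assumed, circa-2012) per-device throughput figures.
import math

line_rate_gbps = 10
required_mb_s = line_rate_gbps * 1000 / 8        # 1250.0 MB/s sustained
hdd_mb_s = 120                                   # typical 7.2k SATA drive (assumption)
ssd_mb_s = 450                                   # typical SATA SSD (assumption)

print(required_mb_s)                             # 1250.0
print(math.ceil(required_mb_s / hdd_mb_s))       # 11 drives striped
print(math.ceil(required_mb_s / ssd_mb_s))       # 3 SSDs striped
```

In other words, even a perfect SSD stripe needs several devices working flat out just to keep pace with the wire – which is exactly the mismatch we ran into in the Isilon testing above.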


Even more interesting, this is happening in “Hollywood”, at one of the big-name M&E (media and entertainment) companies there. However, if any of you reading this are server vendors, then please get in touch, as we need a pair of serious servers (without storage) to assist with the project!


Life beyond networking…



July 23, 2012  1:08 PM

Technology At What Prize?

Steve Broadhead Profile: Steve Broadhead
Cloud Computing, competition, investment, Mobile, Networking, Security, Storage, VC, Virtualization

Just wanted to give everyone with a good tech idea up their sleeve – or up the bit of T-shirt that covers the upper arm, given that it is summer :-) – a heads-up about a new IT ideas competition called Tech Trailblazers – www.techtrailblazers.com – organised by my (and many others’) old PR mate, Rose Ross. Well, when I say “old” I mean, er, long-standing…

So, the idea is – if you have a tech startup wot has got something truly interesting to offer in the current tick box fields such as clouds, emerging markets, virtualisation, sustainability and mobile, as well as “classics” such as networking, storage and security, then take a look at the website listed above and see if it makes sense to enter (go on, you know it does).
As one of the (many) judges, I will – of course – be open to casual bribes such as free lunches in Michelin-starred addresses while receiving The Full Monty as to why your tech is prize-worthy…  It’s amazing how an excellent crab soufflé and a few glasses of Menetou Salon, or café gourmand and Tenareze Armagnac can heighten the understanding of new technologies. Someone should do a scientific investigation of the process. I’m happy to volunteer my services…
Anyway – I’ll be updating on the competition as it develops – while continuing my focus on optimisation technologies that defeat the laws of physics – i.e. go beyond linespeed, starting with something called Constant Velocity Technology that I’m being given the low-down on this week. Watch this (virtual) space…


June 8, 2012  3:06 PM

Hyperoptic 1 gigabit broadband, a user perspective

Cliff Saran Profile: Cliff Saran
Broadband

In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.

In light of the government’s push to extend “superfast” broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.

Interestingly, telecoms regulatory body Ofcom has defined “superfast” broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London’s “multiple-occupancy dwellings” as the target market for its 1 gigabit per second fibre-based connectivity.

Hyperoptic’s premium 1 gig service is charged at £50 per month, although more modest 100 Mbps connectivity is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined “superfast” broadband cloud base.

Hyperoptic’s managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. “At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways — from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate,” she said.

Cheaper than both Virgin’s and BT’s comparable services, Hyperoptic’s London-based service and support crew give the company an almost cottage-industry feel, making personal visits to properties to oversee installations as they do.
While this may be a far cry from Indian and South African call centres, the service is not without its teething symptoms, and new physical cabling within residents’ properties is a necessity for those who want to connect.

Upon installation, users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This will then logically mean that the home connection depends on WiFi, which, at best, will offer no more than around 70 Mbps – the practical throughput ceiling of the 802.11n wireless protocol.

Sharing the juice out

It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) premises, using the electric power lines already hard-wired into a home or apartment building.

Gigabit by name is not necessarily gigabit by nature in this instance, unfortunately, despite the word featuring in many of these products’ names – it derives from the 10/100/1000 Mbps Ethernet port they have inside.
If you buy a 1 gigabit powerline adapter today you’ll probably notice the number 500 used somewhere in the product name – and this is the crucial number to be aware of, because it is the total of upload and download speeds added together; i.e. 250 Mbps in each direction is all you can realise from the total 1 gigabit you have installed at this stage via the powerline route.
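As a sanity check on those numbers, here’s the arithmetic as a few lines of Python – purely illustrative, using the 500 and 180 figures quoted in this post:

```python
# "500"-class powerline adapters advertise upload + download combined,
# so the theoretical ceiling in any one direction is half of that.
advertised_mbps = 500
per_direction = advertised_mbps / 2        # 250.0 Mbps each way, at best
measured_mbps = 180                        # roughly what our tests achieved
efficiency = measured_mbps / per_direction
print(per_direction, f"{efficiency:.0%}")  # 250.0 72%
```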

Our tests showed uplink and downlink speeds of roughly 180 Mbps in both directions using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running 64-bit Windows 7.

The above image shows a wireless connection test, while the below image shows a hard-wired connection.

These criticisms aside, powerline manufacturers will no doubt expand their product lines to accommodate speeds and standards at the edge of this market’s current delivery capabilities. Further to this, Hyperoptic’s 180 Mbps via powerline is only a fraction of what you can experience if your cabling geography allows it – and it is still over seven times faster than Ofcom’s “superfast” 24 Mbps threshold.

Hyperoptic’s service also includes an option to port your existing phone line over to its lines, which takes two to three weeks. The company says it can transfer your old phone number over to its service or supply you with a new one; the former option takes slightly longer, but at no extra cost.

So in summary

It would appear that some of Hyperoptic’s technology is almost before its time, in a good way. After all, future-proofing is no bad thing: house design architects looking to place new cable structures in “new build” properties – and indeed website owners themselves – are arguably not quite ready yet for 1 gigabit broadband.

As the landscape for broadband ancillary services and high-performing transaction-based and/or HTML5-enriched websites matures, we may witness a “coming together” of these technologies. Hyperoptic says it will focus next on other cities outside the London periphery, so the government’s total programme may yet stay on track.


June 1, 2012  11:47 AM

Optimisation Springs To Life

Steve Broadhead Profile: Steve Broadhead
ADC, Application monitoring, Centrix, F5, HP, jetNEXUS, Kemp, L-B, Network management

It’s been a busy old spring so far – I’m still trying to get my head around the recession – IT is going bonkers, spending like the world is about to end (does somebody know something we don’t?); every flight I take from wherever to wherever is full, and when I take a few days off on the Spanish and South of France coastlines the places are packed.

The result is a lot of tests and reports to update on, which can be found on the www.broadband-testing.co.uk website as normal, for free download. Gartner said it at the start of the year, IDC has supported the argument and I’m in the thick of it – network optimisation, that is, whether LAN, WAN, Cloud or inter-planetary. As a result, we’ve got two new reports up on L-B/ADC solution providers Kemp and jetNEXUS. Both are going for the “you don’t need to spend stupid money to optimise app delivery” angle and both succeed; however, the focus of each test is quite different. With Kemp we showed that you can move from IPv4 to IPv6 and not take a performance hit at all – very impressive. With jetNEXUS we showed that you can d**k around with data at L7 as much as you want and still get great throughput, manipulating data as you wish with no programming skills required whatsoever. Could put a few people out of a job… no problem – let them loose with sledgehammers to knock down my old home town of Wakefield so someone can rebuild it properly. What was it that John Betjeman said about Slough?

The same could be said of Vegas; since arriving back with what felt like pneumonia I’ve been in a “who’s the most ill” competition with my HP mate Martin O’Brien, who contracted several unpleasant things while we were both out at Interop. Elton John had to cancel the rest of his Vegas shows because he contracted (the same?) respiratory problems. Well, if it’s good enough for Elton…

One of the things to come out of the Interop meetings wot I have spoken about is the proposed testing of HP’s (along with F5’s) Virtual Application Networking solution. What is interesting here is that the whole point of profiling network performance on a per-user, per-application basis is to get that profile as accurate as possible in the first place. While HP’s IMC management system (inherited from the 3Com acquisition) does some app monitoring, it doesn’t go “all the way”. But we know men (and women) who can… If you check out the Broadband-Testing website, you’ll also see a review of Centrix’s WorkSpace products. With these you can take application monitoring down to the level of recording when a user logs into an app, how long they have it loaded for, and even when they are actively using it or not. Now that IS the way to get accurate profiling; take note, HP. Let the spending continue…



May 16, 2012  4:28 PM

Coughing Up In Vegas

Steve Broadhead Profile: Steve Broadhead
Broadband-Testing, Centrix, Dell, F5 Networks, hewlettpackard, HP, Interop, Netronome, SDN, VAN, Vegas, Workspace

Back from Interop and my ‘beloved’ Vegas, from which I escaped just in time before being air-con’d to death, as my ongoing cough continues to remind me. Is it possible to sue “air”?

I don’t know – maybe there are people out there (mainly the people who were “out there”) who enjoy the delicious contrast of walking in from 42°C temperatures into 15°C, time and again, then in reverse; the joy of being able to hear at least three different sorts of piped music at any one time; the exhilaration for the nostrils of seven or more simultaneous smells, 24 hours a day? Must be me being picky. I like my sound in stereo at least, but all coming from the same source…

Anyway – reflections on the show itself; easy when there’s less smoke and more mirrors, AKA taking away the hype. What I found was a trend – one that others at the show also confirmed – towards making best-of-breed “components” again, rather than trying to create a complete gizmo. For example, we had Vineyard Networks creating a DPI engine that it then bolts onto someone else’s hardware, such as Netronome‘s dedicated packet-processing architecture, which then sits – for example – on an HP or Dell blade server. I like this approach – it’s what people were doing in the early ’90s: pushing the boundaries, making networking more interesting – more fun, even – and simply trying to do something better.

There are simply more companies doing more “stuff” at the moment. Take a recently acquired client of mine whom I met out there for the first time, Talari Networks, which enables link aggregation across multiple different service providers – not your average WanOp approach. A full report on the technology has just been posted on the Broadband-Testing website: www.broadband-testing.co.uk – so please go check it out. Likewise, a report from Centrix Software on its WorkSpace applications. Reading between the lines on what HP is able to do with its latest and greatest reinvention of networking – Virtual Application Networking, or VAN – as described on this blog last week, along with buddy F5 Networks, I reckon there is just one piece of the proverbial jigsaw missing, and that is something Centrix can most definitely provide with WorkSpace. The whole of VAN is based around accurately profiling user and application behaviour, combining the two – in conjunction with available bandwidth and other resources – to create the ideal workspace on a per-user, per-application basis, each and every time they log into the network, from wherever that may be.

Now this means that you want the user/application behaviour modelling to be as accurate as possible, so your starting point has to be, to use a technical term much loved by builders, “spot on”. Indeed, there is no measurement in the world more accurate than “spot on”. While HP’s IMC is able to provide some level of user and application usage analysis, I for one know that it cannot get down to the detailed level that Centrix WorkSpace can – identifying when a user loads up an application, whether that application is “active” or not during the open session, and when that application is closed down… and that’s just for starters. I feel a marriage coming on…




May 9, 2012  3:53 PM

And Yet More SDN…

Steve Broadhead Profile: Steve Broadhead
Big Data, mexico, Netronome, Open source, SDN

I don’t think I can remember a time – and this is saying something – when there were SO many hyper buzz-phrases in IT circulation as there are currently. Every cloud variant, Big Data, SDN…

So it’s good for the system, soul and sensibility to get behind the hype and see what vendors are actually offering between the lines. At Interop Vegas yesterday (where the food and wine quality sank to new depths c/o some alleged Mexican resto – and we all know Mexico produces superb wines…) I met up with IP Infusion, who have been around for a decade or so but are now attaching themselves to the SDN wave – in a good way. Basically, IP Infusion creates a software-based multi-service delivery platform – and always has done. It’s just that it now has to call it SDN to be fashionable; all the better that the guys got there years ago. The technology decouples the control and data planes, and the network services from the network OS, hardware, protocol stack and applications – meaning it is very flexible; probably THE key word if we accept the whole cloud scenario. It also offered proof that OpenFlow is being deployed: IP Infusion showed a demo with two networks set up with redundant paths, one using (the hateful) Spanning Tree and one using OpenFlow – both with live video streaming (i.e. the classic demo!). Not only was the latter more robust, but recovery time was less than half that of STA when we induced a failure (using the high-tech methodology of yanking a cable out).
What was interesting with all the vendors I saw yesterday at Interop is that they were all focused on providing one specific element, rather than a “box”. Netronome – ultra-fast processing hardware; Vineyard Networks – a DPI engine to sit on, for example, Netronome’s hardware; Anue – the glue that sits between the network monitoring/test tools and the stuff that’s being tested, making sure it all gets optimised and automated. So there’s definitely a trend going on here that takes us back to best-of-breed ingredients and the chance to pick ’n’ mix.
More from Interop later…


May 8, 2012  4:48 PM

More Of That Software Defined Networking…

Steve Broadhead Profile: Steve Broadhead
Cisco, Cisco Systems, F5 Networks, Hewlett-Packard, Interop, Juniper, Multiprotocol Label Switching, openflow, SDN, VAN, Vegas

Live from the home of tack – i.e. Vegas, the Blackpool of the desert but without the classiness… or piers – comes the latest bombardment of SDN-ness, care of Interop 2012.

Starting with a direct follow-up to my last blog entry – HP’s take on SDN, AKA VAN (OK – enough TLAs…), or Virtual Application Networks – the big question was: who was going to drive the VAN, since HP doesn’t have the whole solution to deliver it? The answer is F5 Networks. So the idea is to deliver a completely optimised, end-to-end solution on a per-user/per-application basis, using templates to define every aspect of performance and so on. Makes total sense; sounds too good to be true. So what’s the answer? Test it, of course; watch this space on that one.

Meantime, I’ll be reporting in daily from the show – seeing lots of new (to me) vendors who, one way or t’other, are all ticking the SDN/Big Data/Cloud boxes.

It seems to me that we need to get back to basics with SDN so that people actually understand what it is. For example, there’s a definite belief among some that it does away with hardware… Nice idea – so we have software that exists in a vacuum and somehow delivers traffic? There also seems to be confusion between different vendors’ SDN solutions and OpenFlow. For those wot don’t know, here’s what OpenFlow is: in a classical router or switch, the fast packet forwarding (data path) and the high-level routing decisions (control path) occur on the same device.

An OpenFlow Switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow Switch and Controller communicate via the OpenFlow protocol, which defines messages, such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.

The data path of an OpenFlow Switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match, and an action (such as send-out-port, modify-field, or drop). When an OpenFlow Switch receives a packet it has never seen before, for which it has no matching flow entries, it sends this packet to the controller. The controller then makes a decision on how to handle this packet. It can drop the packet, or it can add a flow entry directing the switch on how to forward similar packets in the future.

In other words, it provides one open-standard methodology for optimising traffic, end to end, but it is not a solution in its own right – just a potential part of the action.
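The packet-in/flow-install loop described above can be sketched in a few lines of Python – a toy model only, with made-up class and field names rather than the real OpenFlow protocol structures:

```python
# Toy sketch of the OpenFlow split described above: a switch with a flow
# table forwards what it can and punts unknown packets to a controller,
# which installs a new flow entry. Names are illustrative, not real
# OpenFlow protocol messages.

class Controller:
    def packet_in(self, switch, packet):
        # High-level routing decision lives here, not on the switch
        action = f"send-out-port-{hash(packet['dst']) % 4}"
        # Install a flow entry so future packets stay on the fast path
        switch.flow_table[(packet["dst"],)] = action
        return action

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # match fields -> action
        self.controller = controller

    def receive(self, packet):
        match = (packet["dst"],)      # simplified: match on destination only
        if match in self.flow_table:
            return self.flow_table[match]          # fast path: data plane
        # No matching entry: punt to the controller (the "packet-in")
        return self.controller.packet_in(self, packet)

sw = Switch(Controller())
first = sw.receive({"dst": "10.0.0.2"})    # slow path, via controller
second = sw.receive({"dst": "10.0.0.2"})   # now matched in the flow table
print(first == second, len(sw.flow_table)) # True 1
```

The point of the sketch is the split itself: the switch keeps only the fast-path table, while every policy decision lives in the controller – which is why the controller can be an ordinary server.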

Whatever – the interesting theme here is that no one talks about MPLS any longer (well, maybe apart from Cisco and Juniper) despite it still being THE methodology used to move all our data around the ‘net and beyond. There are factions who subscribe to the “WAN optimisation kills MPLS” idea – and for good reason – but there’s no overnight change here, given the gazillions invested in MPLS networks. It’ll be interesting to see what the vendors here make of the situation, at least from a timeline perspective…

Meantime it’s showtime, meaning a walk past a beach, complete with wave machine and hundreds of Americans trying to get skin cancer, in order to get to the exhibition halls – this is Vegas, after all.



April 19, 2012  1:46 PM

What’s Next To Virtualise? The Network Of Course…

Steve Broadhead Profile: Steve Broadhead
Cloud Computing, HP, information technology, Mobile, Networking, ProCurve, VAN, Virtualization, Wireless

Wore my journalist hat yesterday to attend an HP update event on its ESSN division (don’t worry about what the initials stand for, but the N is for Networking…).

While not the key focus of yesterday’s blurb, the key thing I took from the event was the company’s very recent announcement that it is going into the VAN market; no, not competing with Transits (though you could say the network is “in transit”), but Virtual Application Networks – all part of the current SDN, or Software Defined Networking, movement. For many years HP (as ProCurve) and others have been trying to crack the whole “end-to-end” optimisation problem. I’ve personally been trying to crack it using any number of vendor parts since 1999…

So, VAN is the latest attempt. The aim is to use preconfigured templates to characterise the network resources required to deliver an application to users – i.e. to enable consistent, reliable and repeatable deployment of cloud applications in minutes. An end-to-end control plane virtualises the network and enables programming of the physical devices to create multi-tenant, on-demand, topology- and device-independent provisioning. The idea is to be completely open, so this isn’t an HP closed-shop solution; it’s just that HP created it.

Speaking with one of HP’s customers at the event, Mark Bramwell of the Wellcome Trust, we both agreed that it sounds like the latest and greatest “smoke and mirrors”, “too good to be true” solution, BUT – if it works, then great: every user gets optimised applications on a per-user, per-application basis. So we both agreed – the only sensible option is for me to test it. Watch this space on that one…

Speaking yet further and more broadly on the subject with Lars Koelendorf, who heads up HP EMEA’s mobile and wireless business, we agreed that the ideal way to rebuild a network is to start with IPv6; with so many addresses available, every user could have their own virtual IP address that IS their identity, so, whatever client they are using and wherever they are, all the logic sits behind their VIP(v6) address and the HP VAN man is complete. They would, of course, drive applications faster across the network than any other user type…
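On the IPv6 point, the back-of-envelope numbers show why a per-user virtual address is no stretch at all – a quick illustration in Python (the population figure is a rough assumption):

```python
# IPv6 is a 128-bit address space; compare that with a generous estimate
# of the number of people on the planet.
ipv6_addresses = 2 ** 128
world_population = 8 * 10 ** 9           # rough assumption
per_person = ipv6_addresses // world_population
print(f"{per_person:.3e} addresses per person")  # on the order of 10^28
```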


