So here we are, already several weeks into 2013 and is there anything new to report on the networking front?
And thus it has always been: HP networking, AKA ProCurve in the “old days”, has been a success in spite of its “parent” company. Today, amidst the doom and gloom financial results the company has posted, and all the Autonomy naming, blaming and shaming going on, I couldn’t help noticing three little but significant words in one paragraph of the story in MicroScope – one of the many HP/Autonomy stories around, focusing on the doom and gloom of HP losing money “everywhere”. Note the three magic words in the snippet below, and see if you can spot them…
“In its day-to-day business, HP revealed it had had another predictably awful quarter at Personal Systems, with revenue down 14% as the unit fought for its piece of the ever-shrinking PC market. Printing sales were down 5%, Services declined 6% and ESSN declined 9%, with growth in Networking offset by shrinkage in Industry Standard Servers and Storage, while Business Critical Servers dropped 25%.”
Some of you may have seen earlier blogs, and even the Broadband-Testing report, on our recently acquired US client Talari Networks, whose technology basically lets you combine multiple broadband Internet connections (and operators) to give you the five-nines levels of reliability (and performance) associated with those damnably expensive MPLS-based networks, for a lot less dosh.
You can actually connect up to eight different operators, though according to Talari, this was not enough for one potential customer who said “but what if all eight networks go down at the same time?” Would dread having to provide the budget for that bloke’s dinner parties – “yes I know we’ve only got four guests, but I thought we should do 24 of each course, just in case there’s a failure or two…”
Anyway – one potential issue (other than paranoia) for some was the entry cost; not crazy money but not pennies either. So, it makes sense for Talari to move “up” in the world, so that the relative entry cost is less significant and that’s exactly what they’ve done with the launch of the high(er)-end Talari Mercury T5000 – a product designed for applications such as call centres that have the utmost requirements for reliability and performance and where that entry cost is hugely insignificant once it saves a few outages; or even just the one.
If you still haven’t got wot they do, in Talari-ese it provides “end-to-end QoS across multiple, simultaneous, disparate WAN networks, combining them into a seamless constantly monitored secure virtual WAN”. Or, put another way, it gives you more resilience (and typically more performance) than an MPLS-based network for a lot lower OpEx.
So where exactly does it play? The T5000 supports bandwidth aggregation up to 3.0Gbps upstream/3.0 Gbps downstream across, of course, up to eight WAN connections. It also acts as a control unit for all other Talari appliances, including the T510 for SOHO and small branch offices, and the T730, T750 and T3000 for large branch offices and corporate/main headquarters, for up to 128 branch connections.
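Talari doesn’t publish its scheduling algorithm, but as a rough illustration of the principle – spreading traffic across bonded links in proportion to their capacity – here’s a minimal sketch. The link names and capacities are invented for illustration; the real product also tracks latency, jitter and loss per path in real time.

```python
# Hypothetical sketch of proportional load sharing across bonded WAN links.
# NOT Talari's actual algorithm - purely an illustration of the idea.

def distribute(packets, links):
    """Greedy proportional scheduler: each packet goes to the link whose
    packets-sent-per-Mbps ratio is currently lowest."""
    sent = {name: 0 for name in links}          # packets sent per link
    assignment = {name: [] for name in links}   # which packets went where
    for pkt in packets:
        # Pick the link currently furthest below its fair share
        name = min(links, key=lambda n: sent[n] / links[n])
        assignment[name].append(pkt)
        sent[name] += 1
    return assignment

# Illustrative capacities in Mbps: two ADSL lines and one fibre line
links = {"adsl_a": 20, "adsl_b": 20, "fibre": 60}
out = distribute(list(range(100)), links)
print({name: len(pkts) for name, pkts in out.items()})
```

Over 100 packets this settles to a 20/20/60 split, mirroring the capacity ratios – the faster link simply absorbs a proportionally larger share.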
It’s pretty flexible then, and just to double-check, we’re going to be let loose on the new product in the new year, so watcheth this space…
Following on from last week’s OD of SDN at Netevents, we have some proper, physical (ironically) SDN presence in the launch of an SDN controller from HP.
This completes the story I covered this summer of HP’s SDN solution – the Virtual Application Network – which we’re still hoping to test asap. Basically the controller gives you the option of proprietary or open (OpenFlow) control, or both.
The controller, according to the HP blurb, moves network intelligence from the hardware to the software layer, giving businesses a centralised view of their network and a way to automate the configuration of devices in the infrastructure. In addition, APIs will be available, so that third-party developers can create enterprise applications for these networks. HP’s own examples include Sentinel Security – a product for network access control and intrusion prevention – and some Virtual Cloud Networks software, which will enable cloud providers to bring to market more automated and scalable public-cloud services.
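HP’s northbound API details weren’t public at the time of writing, but to give a flavour of what “third-party developers can create applications” means in practice, here’s a purely illustrative sketch of the kind of flow rule an SDN controller might accept, loosely modelled on OpenFlow match/action semantics. All the field names and values here are assumptions, not HP’s actual API.

```python
import json

# Illustrative only: builds the kind of JSON flow entry a controller's
# northbound API might accept. Field names are invented, loosely based
# on OpenFlow match/action concepts - not HP's published interface.

def make_flow_rule(dpid, src_ip, dst_ip, out_port, priority=100):
    """Build a JSON flow entry steering traffic between two hosts."""
    return json.dumps({
        "switch": dpid,            # datapath ID of the target switch
        "priority": priority,      # higher priority wins on overlap
        "match": {
            "eth_type": "ipv4",
            "ipv4_src": src_ip,
            "ipv4_dst": dst_ip,
        },
        "actions": [{"output": out_port}],  # forward out this port
    })

rule = make_flow_rule("00:00:00:00:00:00:00:01", "10.0.0.1", "10.0.0.2", 2)
print(rule)
```

The point of the model is that an application never touches switch CLIs: it describes traffic it cares about (the match) and what should happen to it (the actions), and the controller programs every device underneath.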
Now it’s a case of seeing is believing – bring it on HP!
And here’s my tip for next buzz-phrase mania – “Data Centre In A Box”; you heard it here (if not) first…
Such was the count at the end of Day 1 of Netevents Portugal. Thirteen “paradigms” and two “paradigm shifts”. Surprisingly there were no “out of the boxes” and only one “granularity” reference. It should also be noted that the “p” word was used by at least four different nationalities, so it’s not a single-country syndrome.
Bem Vindo from the Algarve, at the latest Netevents symposium.
Just a quickie update to all you vendors with mega technology out there re: the Tech Trailblazer awards wot I blogged about earlier this summer. The award categories are:
- Big Data Trailblazers
- Cloud Trailblazers
- Emerging Markets Trailblazers
- Mobile Technology Trailblazers
- Networking Trailblazers
- Security Trailblazers
- Storage Trailblazers
- Sustainable IT Trailblazers
- Virtualization Trailblazers
One of the problems we’ve faced in trying to maximise throughput in the past has not been at the network – say WAN – level, but with what happens once you get that (big) data off the network and try to store it at the same speed directly onto the storage.
We saw this limitation, for example, last year, when testing with Isilon and Talon Data using traditional storage technology – the 10 Gigabit line speeds we were achieving with the Talon Data just couldn’t be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (Solid State Drive) technology would have provided a slight improvement, we still wouldn’t have been talking consistent, top-level performance end to end.
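To put numbers on that bottleneck, here’s a quick back-of-envelope sketch. The per-disk figures are rough, era-typical estimates of my own, not measured values from the tests above.

```python
# Back-of-envelope: how many disks does a saturated 10 Gbps link need?
# Per-disk throughput figures below are rough illustrative estimates.

LINK_GBPS = 10
link_bytes_per_sec = LINK_GBPS * 1e9 / 8          # 1.25 GB/s on the wire

HDD_MB_S = 125    # sustained sequential write, typical 7.2k SATA drive
SSD_MB_S = 250    # early SATA SSD sustained sequential write

hdds_needed = link_bytes_per_sec / (HDD_MB_S * 1e6)
ssds_needed = link_bytes_per_sec / (SSD_MB_S * 1e6)

print(f"10 Gbps = {link_bytes_per_sec / 1e9:.2f} GB/s to be absorbed")
print(f"~{hdds_needed:.0f} HDDs or ~{ssds_needed:.0f} SSDs writing in parallel")
```

In other words, even before filesystem and RAID overheads, you need on the order of ten spindles (or a handful of SSDs of that vintage) all sustaining sequential writes simultaneously just to keep pace with one saturated 10 Gigabit link – which is why the storage, not the wire, became the choke point.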
So it’s with some interest – to say the least – that I’ve started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We’re currently looking to put together a test with them: http://johnpaulmatlick.wix.com/cvt-web-site-iii – and another “big data” high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.
Even more interesting, this is happening in “Hollywood” in one of the big-name M&E companies there. However, if any of you reading this are server vendors, then please get in touch as we need a pair of serious servers (without storage) to assist with the project!
Life beyond networking…
Just wanted to give everyone with a good tech idea up their bit of T-shirt that covers the upper arm – given that it is summer :-) – a heads-up about a new IT ideas competition called Tech Trailblazers – www.techtrailblazers.com – organised by my (and many others’) old PR mate, Rose Ross. Well, when I say “old” I mean, er, long-standing…
In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.
In light of the government’s push to extend “superfast” broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.
Interestingly, telecoms regulatory body Ofcom has defined “superfast” broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London’s “multiple-occupancy dwellings” as target market for its 1-gigabit per second fibre-based connectivity.
Hyperoptic’s premium 1 gig service is charged at £50 per month, although a more modest 100 Mbps connection is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined “superfast” broadband cloud base.
Hyperoptic’s managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. “At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways — from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate,” she said.
Cheaper than both Virgin and BT’s comparable services, Hyperoptic’s London-based service and support crew give the company an almost cottage industry feel, making personal visits to properties to oversee installations as they do.
While this may be a far cry from Indian and South African call centres, the service is not without its teething troubles, and new physical cabling within residents’ properties is a necessity for those who want to connect.
Upon installation users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This will then logically mean that the home connection depends on WiFi which, in practice, will offer no more than around 70 Mbps of real-world throughput over a typical 802.11n link.
Sharing the juice out
It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) premises, using the electric power wiring already built into a home or apartment building.
Gigabit by name is not necessarily gigabit by nature in this instance, unfortunately: the “gigabit” featuring in many of these products’ names is derived from the 10/100/1000 Mbps Ethernet port they have inside, not from the speed achievable over the mains wiring.
If you buy a 1 gigabit powerline adapter today you’ll probably notice the number 500 used somewhere in the product name – and this is the crucial number to be aware of here, as it is a total made up of upload and download speeds added together, i.e. 250 Mbps in each direction is all you can realise from the total 1 gigabit you have installed at this stage via the powerline route.
Our tests showed uplink and downlink speeds of roughly 180 Mbps in both directions using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running 64-bit Windows 7.
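Taking the figures above at face value – an advertised 500 Mbps shared across both directions, and roughly 180 Mbps measured – the arithmetic works out as follows. This is a quick illustrative sketch of the sums, not part of the original tests.

```python
# Quick sums on the powerline figures quoted above.
# Interpretation of "500" as upload + download combined is as stated
# in the text; all other numbers come from the article.

ADVERTISED_MBPS = 500           # headline figure on the adapter box
per_direction = ADVERTISED_MBPS / 2   # best case each way
measured = 180                  # roughly what our tests achieved
superfast = 24                  # Ofcom's "superfast" threshold in Mbps

print(f"Best case per direction: {per_direction:.0f} Mbps")
print(f"Measured vs best case:   {measured / per_direction:.0%}")
print(f"Measured vs 'superfast': {measured / superfast:.1f}x")
```

So the measured 180 Mbps represents a respectable 72% of the 250 Mbps per-direction ceiling, and is still seven and a half times Ofcom’s “superfast” threshold.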
These criticisms levelled, powerline manufacturers will no doubt expand their product lines to accommodate speeds and standards at the edge of this market’s current delivery capabilities. Further to this, Hyperoptic’s 180 Mbps via powerline is only a fraction of what you can experience if your cabling geography allows it, yet it is still over seven times faster than Ofcom’s “superfast” 24 Mbps threshold.
Hyperoptic’s service also includes an option to port your existing phone line over to its lines, which takes two to three weeks. The company asserts that it can transfer your old phone number to its service or supply you with a new one, the former option taking slightly longer but at no extra cost.
So in summary
It would appear that some of Hyperoptic’s technology is almost ahead of its time, in a good way. After all, future-proofing is no bad thing for house design architects looking to place new cable structures in “new build” properties, and indeed website owners themselves are arguably not quite ready yet for 1 gigabit broadband.
As the landscape for broadband ancillary services and high-performing, transaction-based and/or HTML5-enriched websites matures, we may witness a “coming together” of these technologies. Hyperoptic says it will focus next on other cities beyond the London periphery, and so the government’s total programme may yet stay on track.