I remember writing a column for a long-since-deceased IT publication, discussing the rebirth of the mainframe as a network server. I remember it not simply for the content, but more specifically for the context, and how the sub-editors, in the name of formatting and pure ignorance, changed my column title from “The mainframe is dead, long live the mainframe” (quite a witty variation, I thought!) to simply “The mainframe is dead”. Not quite the same meaning…
Fortunately, no one edits these blogs (normally!) so hopefully the title here stays as originally typed. The point is, the “mainframe” is being reinvented again, but this time it’s not the mainframe itself that is regenerating – it’s the “network” – so the exact inverse of what happened before.
This thought was reiterated in recent conversations with the excellent Danny Yeowell of Dimension Data (a man who can describe an incredibly complex company structure in simple layman’s terms in the space of 90 seconds!) and in recent work I’ve started with Cirba, a company focusing on “software-defined infrastructure control solutions”, AKA managing and optimising virtual storage through software. The point is, whatever a vendor means by SDN and NFV, network functionality is being distributed across the ether as one giant set of components, manageable from a single, remote entity whose applications are largely accessed through a browser, regardless of the access device – AKA the rebirth of the mainframe as an organic network, sitting as a software layer, controlling application and data access wherever the apps and data reside.
As a fundamental consequence of this, storage and networking are becoming ever more integrated – hence the number of acquisitions in recent years of network technology by storage vendors and vice versa. Looking at Cirba’s solution, while it is focused on virtualised storage, networking elements such as workload routing, load balancing and interfacing with NFV elements are all fundamental parts of the product. Cirba’s analytics automate VM routing decisions based on all the required constraints, including workload utilisation, business and technical policies, software licensing, and complex storage requirements.
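To make the idea concrete, here is a minimal sketch of constraint-driven VM placement in the spirit of what such analytics automate. To be clear, everything here – the class names, fields and scoring formula – is invented for illustration; it is not Cirba’s actual engine:

```python
# Toy constraint-driven VM placement: filter hosts by hard constraints
# (capacity, licensing, storage tier), then pick the host with most headroom.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: float                                  # spare CPU, normalised 0..1
    mem_free_gb: float
    licensed_for: set = field(default_factory=set)   # e.g. {"oracle"}
    storage_tiers: set = field(default_factory=set)  # e.g. {"ssd"}

@dataclass
class VM:
    name: str
    cpu_need: float
    mem_need_gb: float
    license: str = ""        # software-licensing constraint, if any
    storage_tier: str = ""   # complex storage requirement, if any

def feasible(vm, host):
    """Hard constraints: capacity, licensing, storage tier."""
    if host.cpu_free < vm.cpu_need or host.mem_free_gb < vm.mem_need_gb:
        return False
    if vm.license and vm.license not in host.licensed_for:
        return False
    if vm.storage_tier and vm.storage_tier not in host.storage_tiers:
        return False
    return True

def place(vm, hosts):
    """Among feasible hosts, pick the one with most headroom after placement."""
    candidates = [h for h in hosts if feasible(vm, h)]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: (h.cpu_free - vm.cpu_need)
               + (h.mem_free_gb - vm.mem_need_gb) / 64)
    return best.name
```

In real life the constraint set is vastly larger (affinity rules, licence core counts, storage performance tiers and so on), but the shape – filter by hard constraints, then optimise placement across what remains – is the same.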
A side-effect of the likes of Cirba is that the storage vendors don’t have to do this stuff for themselves; they can simply integrate with a “Cirba”, as exemplified by this week’s announcement that Cirba is now integrated with NetApp’s OnCommand Insight (OCI), thus providing the aforementioned optimisation to NetApp customers. So the storage companies increasingly become part of the SDN/NFV movement – there is no escape!
Lest we forget to bring “cloud” into this blog, Cirba also provides cloud infrastructure management teams with visibility into when resource shortfalls might adversely affect associated VMs and where excess resources exist for – in this case – NetApp and other storage infrastructure connected to NetApp OCI.
On a more generic level, all this is being put to the test currently by yours truly, so look out for a report on the topic in the near future.
I recently had the “pleasure” of visiting Milton Keynes; the railway station was packed with what were surely tourists – some mistake here? Admittedly, all looking in a hurry to get back to London… does “Stratford-Upon-Avon” translate aurally as “Milton Keynes” in some languages?
So, we went from hardware to software, and then software to virtual, the idea being not only that everything is more efficient, but also easier to scale and manage. Kind of VLANs part two?
Or more like browsers Mk II? I remember visiting a company in Cambridge c.1837 (it feels that long ago anyway) and seeing Mosaic for the first time. I was impressed: here was the future interface, lovely and simple, makes sense o’t’Interweb. And then there was Netscape, which eventually begat Firefox, and there was IE of course (itself descended from Mosaic), then Chrome, Safari, etc. etc. – and each iteration more complex than the last… So what happened to the simplicity of A browser?
And it’s kind of become the same with virtualisation, re: the clutter of hypervisors out there now. For example, Cirba, a company wot I’ve mentioned before in this ‘ere blog, which focuses on capacity planning and improved performance/reduced VM “wastage”, has announced it has added support for Linux’s native KVM-based environments in OpenStack-based private clouds. This, in itself, is not the point. It means that Cirba – and others – are now having to support the likes of KVM, VMware, Citrix Xen, MS Hyper-V, IBM PowerVM, Red Hat Enterprise… where does it end?
I guess what it does mean, with yet another “simplification” turning into “complication”, is that there is all the more need for products that optimise virtual environments. Andrew Hillier, CTO and co-founder of Cirba, explained that the company enables organisations to balance infrastructure supply and application demand for greater efficiency and control in single-hypervisor, multi-hypervisor, software-defined and hybrid cloud environments – what a lovely, simplistic IT world we now live in…
Not that this is putting companies off the move from physical to virtual. Nutanix, a company that goes from strength to strength despite having the most baffling ‘job description’ – “a web-scale hyper-converged infrastructure company” – announced its most recent customer, and a very interesting one at that: Bravissimo, a lingerie retailer with a high street and online presence, is taking the opportunity to end-of-life its physical servers and move to Nutanix’s virtual computing platform – basically, integrated compute and storage management, which DOES make sense of course! Not so long ago women were burning their bras, and now they’re being virtualised!
Back to the business angle from a Nutanix perspective… what it means is that what typically takes days and weeks to configure, and scales as well as an obese climber, is reduced to a trivial 30-60 minute exercise, AND additional functionality and apps such as disaster recovery and replication become exactly that – just add-ons. I saw the same concept, pre-virtualisation, work extremely well with Isilon, and they did just fine, being acquired by EMC a few years ago. But even Nutanix has to support several different hypervisor platforms…
Welcome to the world of IT!
Been doing a few catch-ups with old vendor friends recently; one was Brocade – and more of this next month – which has a REAL SDN/NFV story, and another was NetScout; network monitoring!!!! Except that network monitoring now goes way beyond SNMP probes and Sniffers.
Speaking with NetScout’s Phil Gray, who came into the company with the acquisition of Psytechnics, which had a voice/video monitoring technology, two things became abundantly clear in terms of where network monitoring/analysis has been going since the days of analysing millions of lines of packet captures:
1. A key requirement is getting rid of all the superfluous data automatically, so searches are focused purely on relevant information. Contrast this with the old days of spending hours looking for one hexadecimal string as the proverbial digital needle in an OSI haystack…
2. Gauging/measuring by end-user experience, not some theoretical mathematical improvement, is network monitoring for 2015. Importantly, the voice and video element of network data monitoring is, of course, more relevant than ever. Another point is that this traffic has to be captured at speeds and feeds from basic Internet level through to 40Gbps and above. This is not trivial stuff, just as identifying and preventing security attacks isn’t. Yet the world of network monitoring doesn’t get the mass media hype that security does but, at the same time, the key word that always comes up in IT conversations is “visibility”. The bottom line is that traffic monitoring and analysis was important in 1988 and it’s even more important – and relevant – here in 2015. Whether it’s data trending, app performance management or simple data feed analysis, if you don’t know what’s actually on the network, how do you manage it?
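Those two points can be sketched in a few lines: throw away the irrelevant traffic automatically, then judge what remains by how it feels to the user rather than by raw packet counts. To be clear, the field names and the crude scoring formula below are entirely my own invention, not NetScout’s:

```python
# Toy flow records, as a capture pipeline might summarise them.
packets = [
    {"flow": "voip-1", "proto": "rtp",  "latency_ms": 180, "loss_pct": 2.5},
    {"flow": "web-7",  "proto": "http", "latency_ms": 40,  "loss_pct": 0.0},
    {"flow": "voip-2", "proto": "rtp",  "latency_ms": 25,  "loss_pct": 0.1},
]

def relevant(pkt, protos={"rtp", "sip"}):
    """Step 1: discard the superfluous - keep only the protocols under study."""
    return pkt["proto"] in protos

def experience_score(pkt):
    """Step 2: a crude MOS-like score - lower latency and loss feel better."""
    return max(1.0, 5.0 - pkt["latency_ms"] / 100 - pkt["loss_pct"])

# Only flows that both matter AND feel bad to the end user get surfaced.
problem_flows = [p["flow"] for p in filter(relevant, packets)
                 if experience_score(p) < 4.0]
```

The interesting design choice is the order: filtering first means the expensive experience scoring only ever runs on the tiny fraction of traffic that actually matters, which is what makes this workable at 40Gbps and above.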
So – HP has just announced it is acquiring Aruba Networks; basically its second or third stab at buying a wireless solution, after Colubris and after effectively inheriting additional WLAN tech with the 3Com acquisition (that NOT being the raison d’être for that acquisition).
Within the general misty definition of “Cloud”, sometimes something pokes through the veil of ether-precipitation that says “I’m new and I make sense”.
And typically, it’s not a variation on that other “Somehow Defines Nothing” Hype-TLA that is SDN, but more akin to the Monty Python style of “And now for something completely different”. In this case it comes from a UK start-up, Fedr8. OK, so the name sounds more like a courier company, but stick with me…
Rather than focusing on cloud storage or performance, Fedr8 is focusing on making sure your existing applications will actually work in that environment in the first place. Kind of akin to avoiding the scenario where you buy a large American car before you measure the size of your garage. The product itself, Argentum, provides compatibility analysis and optimisation for in-house applications prior to cloud delivery. It provides organisations with a suite of tools that can assess, analyse and optimise existing applications, enabling them to design successful cloud projects and migrate applications without the pain, effort and time of attempting to do it manually. Or simply guessing…
To date Argentum has been piloted on Open Source applications developed by companies including Netflix, Twitter and IBM, so no big names there then! How, then, does it work? In layman’s terms it analyses the source code of any application, in any programming language, and then provides actionable intelligence to help a company move those existing apps into the cloud – hence “federate” the services! So, what’s in a name? Lots, it seems :-) At a slightly more technical level, code is uploaded to the Argentum platform where it undergoes a complex analysis and is split into objectified tokens. These tokens populate a meta database against which queries are run. From this, out pops a visualisation of the application and actionable business intelligence to enable successful cloud adoption.
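The pipeline described – source in, tokens into a queryable store, findings out – can be illustrated with a toy version. To be clear, this is my own back-of-an-envelope sketch, not Fedr8’s Argentum; the tokeniser, the “meta database” and the query are all invented:

```python
# Toy version of a tokenise -> store -> query code-analysis pipeline.
import re
from collections import defaultdict

def tokenise(source):
    """Split code into crude (kind, value) tokens - real analysis is far richer."""
    for m in re.finditer(r'"[^"]*"|\w+|\S', source):
        tok = m.group()
        if tok.startswith('"'):
            kind = "string"
        elif tok[0].isalnum() or tok[0] == "_":
            kind = "word"
        else:
            kind = "sym"
        yield (kind, tok)

def build_meta_db(files):
    """'Meta database': token values indexed by kind, tagged with source file."""
    db = defaultdict(list)
    for fname, src in files.items():
        for kind, value in tokenise(src):
            db[kind].append((fname, value))
    return db

def cloud_blockers(db):
    """Example query: string literals that look like local filesystem paths -
    a classic snag when moving an in-house app to the cloud."""
    return [(f, v) for f, v in db["string"]
            if re.search(r'[A-Za-z]:\\|/var/|/home/', v)]

files = {"app.py": 'LOG = open("/var/log/app.log")\nNAME = "myapp"'}
findings = cloud_blockers(build_meta_db(files))
```

A real analyser would of course build a proper parse rather than flat tokens, but the queryable store is the part that makes “ask arbitrary questions of the whole codebase” – and hence the visualisation and business intelligence – possible.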
Sounds great in theory, and looks a must for Broadband-Testing to put through its paces; not least because there is a Groundhog Day moment here. Yes, the product is innovative, BUT there is an eerie resemblance to that of a former client, AppDNA, whose product analysed applications for migration between Microsoft OSs and browser versions. So, same concept, different application (in every sense) and, indeed, why not? Especially since AppDNA ultimately got acquired by Citrix for more than a few quid. Now that’s a precedent I suspect the Fedr8 board will be quite sweet on…
I was at a Gartner event in Barcelona last week, where Computer Associates were playing host.
Been researching an article for CW’s very own Cliff Saran while, by chance, also speaking with a number of IT investors – the research being on networking innovations; oh, and by another chance, also judging the networking category of an awards event, and visiting a Cambridge Wireless innovation awards event…
“Cloud” and “WLAN” or “WiFi” are not, to date, IT terms that are typically seen in tandem, but Tallac Systems, a new venture for a number of ex-HP, er, yes – let’s say it – veterans (I reckon I can out-run you guys if necessary), is looking to create one from t’other. The pitch includes:
- Manage entire Wi-Fi network from a single dashboard
- Control multi-tenant Wi-Fi networks, applications and devices
- Application-based virtual networks
- Cost effective 3rd party hardware
- OpenFlow enabled API
Every few years, the topic of “next generation” NetMan crops up once more, and here we are again right now, thanks to the distributed, somewhat cloudy and virtualised nature of contemporary network deployments. I mean, just how do you manage a network “entity” if you don’t know where it is?