IT – a job for life?
Possibly… just finished a meeting with an old IT mate, Mike Silvey of Moogsoft, and we were talking about how all the recent networking reinvention bollox has basically forced companies into investing in new technology, not least network management in its broadest sense, simply in order to make sense of the new PARADIGM ;-)
The reality is, regardless of whether the world needed Cloud, SDN, SD-WAN, FinTech, IoT etc, they’ve been landed with it, so someone/something has to manage it. Had a variation on said topic with Joel Dolisy of SolarWinds recently in London. We spoke about how everything and nothing changes simultaneously, from virtualisation to outsourcing and to, more critically, automation. Ah, the golden nugget – freeing up staff from fire-fighting to actually be pro-active in making their company better, whether it makes biscuits or sells petroleum. Way back in the 90s I was involved in network management automation projects and so it goes on in 2016. The question is, would true automation really lead to staff being freed up to be more productive, or would they simply be made redundant – in every sense? Well, what doesn’t make sense is individuals spending hours a day on mundane admin, so automation has to happen and then we see the fallout… It is therefore important for the likes of SolarWinds to continue to pursue the automation quest – one day Rodney…
On the SolarWinds front (weather gag?) an interesting move from the guys is device-specific dashboards for the likes of F5, Cisco and others, with an SDK also coming out. This might seem overkill, but it does make sense as, after all, network management software vendors are better at doing network management than the hardware vendors!
Back to the talk of reducing manual admin time, one new product I’m working with currently that takes networking back to its hexadecimal basics and then gives it a two-digits wave goodbye is from a company called CapStar Forensics. The idea here is to take the “Wireshark” PCAP world into the 21st century for real – i.e. digging deep and dirty is still fundamental to many IT engineers, but why spend days and weeks doing manual searches to find what you’re looking for – tiny needles in Giant Haystacks is not an issue we should be wrestling with (!) in 2016. So, CapStar adds a DPI engine and a huge library of search profiles into the equation. Early testing suggests that days can indeed be taken down to seconds, based on some cybersecurity-related forensics.
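For the curious, the gist of a “search profile” approach is easy to sketch. The Python below is purely my own toy illustration – it bears no relation to CapStar’s actual engine, and the profile names and payloads are invented – but it shows why a named library of patterns beats eyeballing hex dumps:

```python
import re

# A tiny "search profile" library: named patterns describing traffic of
# interest, standing in for the hand-crafted hex searches of old.
SEARCH_PROFILES = {
    "http_basic_auth": re.compile(rb"Authorization: Basic \S+"),
    "snmp_community":  re.compile(rb"public|private"),
    "card_number":     re.compile(rb"\b\d{16}\b"),
}

def scan_payloads(payloads):
    """Return {profile_name: [indexes of packets that matched]}."""
    hits = {name: [] for name in SEARCH_PROFILES}
    for i, payload in enumerate(payloads):
        for name, pattern in SEARCH_PROFILES.items():
            if pattern.search(payload):
                hits[name].append(i)
    return hits

# Three fake captured payloads standing in for a real PCAP file.
capture = [
    b"GET / HTTP/1.1\r\nAuthorization: Basic dXNlcjpwYXNz\r\n",
    b"ordinary traffic, nothing to see here",
    b"community=public request-id=42",
]

print(scan_payloads(capture))
```

Point being: the machine runs every profile over every packet in one pass, and the human just reads the hit list – hence days down to seconds.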
Definitely a “watch this space” moment…
This week I have to host a panel debate on “stress testing cloud applications and infrastructure” at Netevents in Rome (I know – it’s tough, but someone’s got to do it…).
One of the areas to cover is, well, how do you actually cover that kind of environment from a test perspective – e.g. do you engage thousands of what we used to call human beings to all use specific apps at certain times, or can we simulate that? Or, given that we live in a world of analytics – well, we always have done, it’s just that now they are being collected and – funnily enough – analysed – is a lot of the hard work actually being done for us? I mentioned in my last blog that I recently met up with John Rakowski of AppDynamics, the Application Intelligence company in the Enterprise space (that’s as in the type of business, not the starship – well, not yet at least) and a couple of areas we talked about were application intelligence and unified monitoring. In other words, the ability to blanket monitor, so you are collecting all the data into a unified reporting mechanism, and the ability to understand the apps you’re actually monitoring.
This is a gazillion miles away from the old methods of collecting and then sifting through Syslog files and other Data Centre-consuming information logs, requiring several of those human being things again on hand to manually carry out this most exciting of tasks – finding the eNeedle in the data haystack. So, in one fell swoop you minimise costs, remove human error, and maximise visibility and the ability to pro-actively manage apps and services.
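To make the contrast concrete, here’s a deliberately tiny Python sketch of the unified idea – the sources and log lines are entirely made up, and no real monitoring product is this simple – but it shows the shift from grepping each log by hand to querying one merged, time-ordered stream:

```python
# Three separate log sources, of the kind humans used to trawl one by one.
SOURCES = {
    "firewall": ["2016-03-01 10:00:01 DENY tcp 10.0.0.8:443"],
    "app":      ["2016-03-01 10:00:02 ERROR payment timeout",
                 "2016-03-01 10:00:03 INFO payment retried"],
    "syslog":   ["2016-03-01 10:00:02 kernel: eth0 link down"],
}

def unified(sources):
    """Merge every source into one time-ordered list of (time, source, line)."""
    merged = [(line[:19], name, line)       # first 19 chars = the timestamp
              for name, lines in sources.items()
              for line in lines]
    return sorted(merged)

def query(events, needle):
    """One search across everything, instead of one grep per log file."""
    return [e for e in events if needle in e[2]]

events = unified(SOURCES)
print(query(events, "ERROR"))   # found without opening a single file by hand
```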
It’s much the same kind of story in the world of network monitoring itself; I had a catch-up last year with Savvius (the artist formerly known as WildPackets) and gone are the days when we had to search manually through disks-worth of Hex in order to find a particular packet identifier or character string, for example. In its recent update, you can now correlate and analyse network data directly on the capture engine, as it happens, and it gives you remediation advice too!
So, back to the original point – are these, let’s call them “app and data visibility tools” actually doing the job of a specialist app/service product testing, er, product and, more worryingly, that of the product tester?
That’ll be one to debate in Rome then! Bring on the pizza and Chianti (Classico Riserva, of course…).
NetMan/Security vendor, and now part of the Thoma Bravo empire (watch out China!) SolarWinds, AKA SW, has sent us a timely reminder that this leap year has resulted in Feb having an extra Monday – AKA today – so what to do with it and improve the life of an IT pro at the same time?
Having recently finished some testing with Cirba on its SDI (Software Defined Infrastructure) approach to compute and storage resource management and app deployment, it’s been interesting to be simultaneously judging an early stage tech vendor competition and see just how many Cirba wannabes there are out there!
Much of the talk on this blog recently has involved the world of acquisitions. And so it goes on.
A recent meeting with Phil and Jim from the UK and US arms of Netscout respectively focused partially on their completion of the acquisition of the comms business of Danaher Corp – Danaher is an enormous company; I can imagine a conversation in parts of that company where one employee in a different division comments on “we’ve just sold our comms business” and another saying “did we have a comms business?” However – it’s a very sizeable business in its own right and actually turns Netscout into kind of a “big” company.
This comes with obvious complications. I can remember a conversation with ex-F5 SVP marketing and top geezer Erik Giesa years ago at F5’s HQ in Seattle and he talked about (while using his arms to emphasise the size of the offices) how F5 had never intended to become a “big company” – simply that no one had acquired them in time before their market cap became too big to make them attractive any longer.
That said, Netscout seems to have all the angles covered, even the Arbor Networks security-oriented element of the acquisition – I mean, on the surface, how does cyber security fit into network monitoring? Surprisingly easily as it happens – a no-brainer if you think about it; incredibly valuable information being extracted from the network needs the most protection of all. SNMP v1 anybody? You can imagine all the IT guys who used it to death way back now thinking: “what possessed me to send intimate details about the corporate network across completely open connections in plain text?” Of course, the Arbor element runs way deeper than that, but you get the message. As I have talked about in recent blogs, pretty well all IT companies are having to reinvent themselves, whether in networking or not. Hell – some people still think Dell only makes laptops!
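If you’ve forgotten just how open SNMP v1 was, here’s a hand-rolled, deliberately minimal SNMPv1 GetRequest for sysDescr.0, built byte-by-byte in Python purely to make the point – the community string, effectively the password, travels as readable bytes in every request:

```python
# Build a minimal SNMPv1 GetRequest (BER-encoded) for sysDescr.0.
community = b"public"
version   = bytes([0x02, 0x01, 0x00])                  # INTEGER 0 = SNMPv1
comm_tlv  = bytes([0x04, len(community)]) + community  # OCTET STRING, in the clear!
# OID 1.3.6.1.2.1.1.1.0 (sysDescr.0) with a NULL placeholder value
oid       = bytes([0x06, 0x08, 0x2B, 0x06, 0x01, 0x02, 0x01, 0x01, 0x01, 0x00])
null      = bytes([0x05, 0x00])
varbind   = bytes([0x30, len(oid) + len(null)]) + oid + null
varbinds  = bytes([0x30, len(varbind)]) + varbind
req_id    = bytes([0x02, 0x01, 0x01])                  # request-id = 1
errs      = bytes([0x02, 0x01, 0x00]) * 2              # error-status, error-index
pdu_body  = req_id + errs + varbinds
pdu       = bytes([0xA0, len(pdu_body)]) + pdu_body    # GetRequest PDU
body      = version + comm_tlv + pdu
packet    = bytes([0x30, len(body)]) + body            # outer SEQUENCE

# The "password" is sitting right there as plain ASCII:
assert b"public" in packet
print(packet.hex())
```

Anyone with a Sniffer on the wire could read that community string straight off the capture – which is rather the point.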
It will, however, be interesting indeed to see how Netscout progresses as a company with a broad range of products and a massively increased customer base to keep happy. Good luck chaps!
Talking of mergers and acquisitions, just read that the green light has been given for BT and EE to get their act together – when I’m in the UK I’m with BT on broadband and EE on mobile – what did I do to deserve this? For the record, the BT Homehub5 has surely the worst coverage of any WiFi router I’ve used in the past 15 years. It doesn’t even extend between two bedrooms (don’t ask why I’m using this as the metric)! Time for signal booster acquisition, or a better router obviously, but that’s the easy option. I don’t do easy options…
The IT world has gone acquisition bonkers. Dell is paying as much for EMC as any football club would do to secure the combined services of Messi and Ronaldo. Well almost…
Meantime, a couple of companies I keep tabs on have also been in acquisition mode, albeit on a lesser, but still significant, scale, as the need to reinvent to stay in – and play – the game is more critical than ever. The two companies I am speaking of are TIBCO and SolarWinds and I caught up with both of them last week in that tourist theme park known as London.
TIBCO is on a world tour – we didn’t get the T-shirt however – and clearly it was a sell-out; barely standing room in the (large) conference room of the Landmark Hotel. Interesting to see that t’Interweb, while initially reducing physical presence at live events, seems to be a less significant distractor these days. Good job, as it’s just taken me over two weeks to get a BT broadband line activated, and that with the help of the press office and the “Exec Level Complaints Dept” (thank you Lisa) – otherwise I would be still waiting until the middle of next week, or next year or… Meantime, BT has now issued four accounts on my behalf and I have three BT HomeHubs already. I digress…
Cloud integration is a key driver for TIBCO right now as evidenced by two releases last week – the snappily named TIBCO BusinessWorks Container Edition and the more direct TIBCO Cloud Integration (we likes “does what it says on the tin” descriptions my precious). The former is designed to get companies scaling the heights of the cloud as rapidly as possible, while the second is all about APIs – an iPaaS, AKA Integration Platform-as-a-Service, kind of a Platform as a Platform as a Service, if you like. Both are worthy missions from a company whose origins were on the trading floors of the world, where clouds were not even visible. And interesting how the old and new come together – APIs and iPaaS (only one letter different between the two, note, maybe we should introduce an IT version of Countdown, along the lines of the 8/10 Cats variant?) – I still remember when APIs “were the future”. Mind, so was 8-bit computing once…
Integration was also a key theme in my conversation with SolarWinds’ head of security, Mav Turner, who has featured in a previous CW article of mine on compliance. Switching to that subject briefly, I made the point to the TIBCO board that accelerating DevOps and Integration might lead to some compliance issues as dev gets too far ahead of compliance box-ticking. CTO Matt Quinn begged to differ, but Mav of SolarWinds was in my camp. Obviously both vendors have vested interests (if neither tour vests nor T-shirts in TIBCO’s case) but compliance really is a fundamental pain in the (word deleted here – Ed) process of implementing and delivering new services and applications these days. With Mav, we talked about how this is an even more spectacular problem when dealing with government departments – something I know only too well from chatting with MK Council earlier this year.
SolarWinds’ focus was actually a combination of the two key themes here, acquisition and integration, in that they are currently bringing all their acquisitions together into a common interface and style (whatever happened to the days when you would test a “single” Cisco product and encounter three different management interfaces?) – we even used the dreaded phrase “single pane of glass management”. Oh how we laughed… On a more serious note – and this applies to every vendor that has done well enough to make a name for itself in one sphere, but moves forward into new worlds – Mav made the point that many people’s association with SolarWinds is simply Network Management, or even more simply, SNMP. People, the company HAS moved on…
This need for continual reinvention in the IT vendor world is frankly frustrating and driven largely by the analyst groups and stock exchanges in equal measure. TIBCO CEO Murray Rode talked about how, in some ways, escaping the clutches of public ownership and moving back into private alleviates many of these pressures and allows a vendor to focus on all the important elements – bettering the product for the right reasons, customer focus etc – and he is absolutely right. Why do we have to put up with the pressures that, for example, force the renaming of Mainframe Time-Sharing to Outsourcing, then Application Service Provision, then Outsourcing again and now Cloud? It just adds to marketing costs and confusion.
Talking of confusion – so what exactly is Dell boy going to do with VMware? Odds-on favourite is to offload it but who would be the buyer? Please, not HP, surely… (whichever bit of HP that might be) and could Brocade afford it? Microsoft, to clean up on the hypervisors? IBM as a slightly left-field proposition? Mega-management buy-out? As ever, it’s time to watch this space…
I remember writing a column for a long since deceased IT publication, discussing the rebirth of the mainframe as a network server. I remember it not simply for the content, but more specifically the context, and how the sub-editors, in the name of formatting and pure ignorance, changed my column title: “The mainframe is dead, long live the mainframe” (quite a witty variation, I thought!) to simply: “The mainframe is dead”. Not quite the same meaning…
Fortunately, no one edits these blogs (normally!) so hopefully the title here stays as originally typed. The point is, the “mainframe” is being reinvented again, but this time it’s not the mainframe that is regenerating itself, but the “network” – the exact inverse of what happened before.
This thought was reiterated in recent conversations with the excellent Danny Yeowell of Dimension Data (a man who can describe an incredibly complex company structure in simple layman’s terms in the space of 90 seconds!) and in recent work I’ve started with Cirba, a company focusing on “software-defined infrastructure control solutions” AKA managing and optimising virtual storage through software. The point is, whatever a vendor means by SDN and NFV, what is happening is that network functionality is being distributed across the ether, as one giant set of components, manageable from a single, remote entity, whose applications are largely accessed through a browser, regardless of the access device – AKA the rebirth of the mainframe as an organic network, sitting as a software layer, controlling application and data access, wherever the apps and data reside.
As a fundamental consequence of this, storage and networking are becoming ever more integrated, hence the number of acquisitions in recent years of network technology by storage vendors and vice-versa. Looking at Cirba’s solution, while focused on virtualised storage, networking elements such as workload routing, load-balancing and interfacing with NFV elements are all fundamental parts of the product. Cirba’s analytics automate VM routing decisions based on all the required constraints including workload utilisation, business, technical, software licensing, and complex storage requirements.
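To put some flesh on what “automating VM routing decisions based on constraints” means in practice, here’s a toy Python sketch of my own – the hosts, numbers and scoring are entirely invented, and Cirba’s real analytics are vastly more sophisticated – but the hard-constraints-first, then-score-on-headroom shape is the essence of it:

```python
# Invented inventory: capacity, licensing and zone per host.
HOSTS = [
    {"name": "esx-01", "cpu_free": 8,  "mem_free_gb": 32, "licensed": {"oracle"}, "zone": "prod"},
    {"name": "esx-02", "cpu_free": 16, "mem_free_gb": 16, "licensed": set(),      "zone": "prod"},
    {"name": "kvm-01", "cpu_free": 24, "mem_free_gb": 64, "licensed": set(),      "zone": "dev"},
]

def place(vm, hosts):
    """Apply hard constraints (capacity, licensing, zone), then pick by headroom."""
    def fits(h):
        return (h["cpu_free"] >= vm["cpu"]
                and h["mem_free_gb"] >= vm["mem_gb"]
                and vm["licenses"] <= h["licensed"]   # every needed licence present
                and h["zone"] == vm["zone"])
    candidates = [h for h in hosts if fits(h)]
    if not candidates:
        return None
    # Naive headroom score; a real engine weighs many more dimensions.
    return max(candidates, key=lambda h: h["cpu_free"] + h["mem_free_gb"])

vm = {"cpu": 4, "mem_gb": 8, "licenses": {"oracle"}, "zone": "prod"}
print(place(vm, HOSTS)["name"])   # → esx-01, the only host passing licence and zone checks
```

The point is that the “business, technical, software licensing” constraints the vendor talks about all become machine-checkable rules, which is exactly what makes the decision automatable at all.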
A side-effect of the likes of Cirba is that it means that the storage vendors don’t have to do this stuff for themselves; they can simply integrate with a “Cirba”, as exemplified by this week’s announcement that Cirba is now integrated with NetApp’s OnCommand Insight (OCI), thus providing the aforementioned optimisation to NetApp customers. So the storage companies increasingly become part of the SDN/NFV movement – there is no escape!
Lest we forget to bring “cloud” into this blog, Cirba also provides cloud infrastructure management teams with visibility into when resource shortfalls might adversely affect associated VMs and where excess resources exist for – in this case – NetApp and other storage infrastructure connected to NetApp OCI.
On a more generic level, all this is being put to the test currently by yours truly, so look out for a report on the topic in the near future.
I recently had the “pleasure” of visiting Milton Keynes; the railway station was packed with what were surely tourists – some mistake here? Admittedly, all looking in a hurry to get back to London… does “Stratford-Upon-Avon” translate aurally as “Milton Keynes” in some languages?
So, we went from hardware to software, and then software to virtual, the idea being not only that everything is more efficient, but also easier to scale and manage. Kind of VLANs part two?
Or more like browsers Mk II? I remember visiting a company in Cambridge c.1837 (it feels that long ago anyway) and seeing Mosaic for the first time. I was impressed; so here is the future interface, lovely and simple, makes sense o’t’Interweb. And then there was Netscape (Mosaic’s descendant, which in turn begat Firefox), and there was IE of course, then Chrome, Safari etc etc – and each iteration more complex than the last… So what happened to the simplicity of A browser?
And it’s kind of become the same with virtualisation re: the clutter of hypervisors out there now. For example, Cirba, a company wot I’ve mentioned before in this ‘ere blog, which focuses on capacity planning and improved performance/reduced VM “wastage”, has announced it has added support for Linux’s native KVM-based environments in OpenStack-based private clouds. This, in itself, is not the point. It means that Cirba – and others – are now having to support the likes of KVM, VMware, Citrix Xen, MS Hyper-V, IBM PowerVM, Red Hat Enterprise… where does it end?
I guess what it does mean, with yet another “simplification” turning into “complication”, is that there is that much more requirement for products that optimise virtual environments. Andrew Hillier, CTO and co-founder of Cirba, explained that the company enables organisations to balance infrastructure supply and application demand for greater efficiency and control in single hypervisor, multi-hypervisor, software-defined and hybrid cloud environments – what a lovely, simplistic IT world we now live in…
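As a footnote on why that hypervisor list hurts: every platform speaks its own API, so the optimisation tools end up wrapping each one behind a common interface. Here’s a bare-bones Python sketch of that adapter idea – nothing below talks to a real libvirt or vSphere API, the VM names are stand-in data, and real adapters are of course far thicker:

```python
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    """One interface the tool codes against, however many platforms exist."""
    @abstractmethod
    def list_vms(self): ...

class KVMAdapter(HypervisorAdapter):
    def list_vms(self):           # would call libvirt in real life
        return ["kvm-vm-1"]

class VMwareAdapter(HypervisorAdapter):
    def list_vms(self):           # would call vSphere in real life
        return ["esx-vm-1", "esx-vm-2"]

def inventory(adapters):
    """One report across every platform, whatever each VM runs on."""
    return {type(a).__name__: a.list_vms() for a in adapters}

print(inventory([KVMAdapter(), VMwareAdapter()]))
```

Every new hypervisor on the list means another adapter to write, test and maintain – which is exactly how “simplification” turns into “complication”.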
Not that this is putting companies off the move from physical to virtual. Nutanix, a company that goes from strength to strength, despite having the most baffling ‘job description’ – “a web-scale hyper-converged infrastructure company” – announced its most recent customer, and a very interesting one at that: Bravissimo, a lingerie retailer – high street and online presence – is taking the opportunity to end-of-life its physical servers and move to Nutanix’s virtual computing platform – basically, integrated compute and storage management, which DOES make sense of course! Not so long ago women were burning their bras, and now they’re being virtualised!
Back to the business angle from a Nutanix perspective… what it means is that what typically takes days and weeks to configure, and scales as well as an obese climber, is reduced to a trivial 30-60 minute exercise, AND, additional functionality and apps such as disaster recovery and replication become exactly that – just add-ons. I saw the same concept, pre-virtualisation, work extremely well with Isilon, and they did just fine being acquired by EMC a few years ago. But even Nutanix has to support several different Hypervisor platforms…
Welcome to the world of IT!
Been doing a few catch-ups with old vendor friends recently; one was Brocade – and more of this next month – which has a REAL SDN/NFV story – and another was NetScout; network monitoring!!!! Except that network monitoring now goes way beyond SNMP probes and Sniffers.
Speaking with NetScout’s Phil Gray, who came into the company with the acquisition of Psytechnics, which had a voice/video monitoring technology, two things became abundantly clear in terms of where network monitoring/analysis has been going since the days of analysing millions of lines of packet captures:
1. A key requirement is getting rid of all the superfluous data automatically, so searches are focused purely on relevant information. Contrast this with the old days of spending hours looking for one hexadecimal string as the proverbial digital needle in an OSI haystack…
2. Gauging/measuring by end user experience, not some theoretical mathematical improvement, is network monitoring for 2015. Importantly, the voice and video element of network data monitoring is, of course, more relevant than ever. Another point is that this traffic has to be captured at speeds and feeds from basic Internet level through to 40Gbps and above. This is not trivial stuff, just as identifying and preventing security attacks isn’t. Yet the world of network monitoring doesn’t get the mass media hype that security does but, at the same time, the key word that always comes up in IT conversations is “visibility”. The bottom line is that traffic monitoring and analysis was important in 1988 and it’s even more important – and relevant – here in 2015. Whether it’s data trending, app performance management or simple data feed analysis, if you don’t know what’s actually on the network, how do you manage it?
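The first of those two points is easy to sketch. The few lines of Python below are my own invented illustration – the records, fields and thresholds are all made up, and real capture engines do this at line rate, not in a list comprehension – but the principle is the same: define what “relevant” means up front and bin everything else before a human ever looks:

```python
# Fake capture records; a real engine would be chewing raw packets at 40Gbps.
capture = [
    {"proto": "dns",  "latency_ms": 2,   "src": "10.0.0.5"},
    {"proto": "rtp",  "latency_ms": 180, "src": "10.0.0.9"},   # poor voice quality
    {"proto": "http", "latency_ms": 40,  "src": "10.0.0.7"},
    {"proto": "rtp",  "latency_ms": 20,  "src": "10.0.0.9"},
]

def relevant(record, proto="rtp", latency_threshold_ms=150):
    """Keep only the traffic this investigation cares about: laggy voice/video."""
    return record["proto"] == proto and record["latency_ms"] > latency_threshold_ms

# Superfluous data discarded automatically, before anyone reads a hex dump.
focused = [r for r in capture if relevant(r)]
print(focused)   # one record survives out of four
```

Note the filter is framed around end user experience (latency on voice traffic), not some theoretical counter – which is point two in miniature as well.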