Had some very interesting conversations in the past few weeks with a number of US-based vendors, primarily across the security and optimisation sectors, with one commonality – established in the Americas but significantly less so in the UK/EMEA.
The other issue for these companies currently lies in the uncertainty surrounding the UK and the dreaded “B” word; ideally these US vendors want an HQ in an English-speaking location – well, they do share some words – but not one that might be stranded from the rest of Europe (and the world).
A perfect example is Cybera, established enough in North America in the SD-WAN/WAN edge markets to be rated number one by Gartner in the small footprint retail WAN use case, but largely unknown in EMEA. And here’s the point – a WAN edge tech that excels in the branch/small office, SOHO, SMB and related environments is surely tailor-made for Europe, given its proliferation of distributed, small footprint locations favoured by so many companies – not least retail, banking, insurance…
Even then, as Roger Jones from Cybera explained to me, it’s not a simple 1:1 solution mapping from US to UK company equivalents. For example, in retail outlets, the US is way behind in terms of adopting contactless payment tech, so it’s not a case of a “one size fits all” solution. Equally, however, that opportunity is just as live in EMEA as it is in the US. For a lot of these vendors, some kind of “foot in the door” approach is a great way of making the first step (pun intended?) towards establishing a customer base in a new region. Cybera, like clients I’ve worked with in the past, such as Aritari, has a great “foot in the door” approach in the form of an overlay solution which avoids the need to convince the customer to uproot their existing investment (which, let’s face it, they won’t!) and simply adds value to what they already have. Of course, down the line, the idea is indeed to replace their existing infrastructure but – shhh – I didn’t say that.
Meantime, I’m hoping to see more of the Cybera tech and monitor its EMEA footprint expansion, in spite of the current economic uncertainty. Another case of “watch this space”…
I recently completed a report for a long-time client, Kemp Technologies, in the area formerly known as L-B/ADC – i.e. Load-Balancing/Application Delivery Controller.
It really hit home, during the testing, just how much this “technology” has changed. For starters, we didn’t ever really talk about the actual technology, other than understanding the underlying architecture/engine and how/why it all works. And that’s the point, because neither does the customer nowadays. Which is a good thing. In days of yore, when knights and dragons mixed company and Load-Balancers were simply big lumps of tin with a finite lifespan, the IT guys needed to understand the capacity of said box, how many could be bolted on and – if they got the sales guy drunk enough – when they’d realistically need a forklift upgrade to the next platform.
You then – as the customer – had to either architect the whole thing yourself, or spend many more $$$ or £££ or €€€ on consultation/services to get it done. And that was fine – at the time. But it was both expensive and limited. Now – as the customer – you simply need to be aware of what data and applications need to be accessed by whom (not even where, per se) and have an approximate idea of what data, apps and users you are going to be adding (or subtracting) in the future, and you just basically plug in – i.e. access a web-based console – and go. Well, it’s not quite that simple, but not far off. And there’s no programming involved, no complex rules creation – i.e. no extensive training required. So you don’t spend a gazillion bringing a team of SEs up to speed, who then bugger off to another company offering more money, shortly afterwards…
In other words, it’s very much a win-win scenario. In a hybrid cloud/OnPrem world it is all but impossible to know where your data and apps are, so it’s important that you don’t actually need to care about this -)
Anyway, enough bantering – please do download the report and understand what I’m talking about here. No boxes, no “use by dates”, just optimisation – as it was always meant to be.
It’s hard to recall a recent presentation from a vendor that didn’t include the AI or Machine-Learning buzz-phrases.
I’m not just talking IT here – coffee vending machines probably also incorporate some such logic; “by monitoring your coffee drinking profile, we are confident in pre-selecting your drink for you with total accuracy”. Actually, for me that’s not a complex algorithm – black Americano every time, just in case you’re buying…
In the world of Cyber Security though, it is fair to say that we’ve largely had our fill of its overuse – that and the “one size fits all” security story. No, it doesn’t, unless you are a company the size of – say – a Symantec, which has one of everything (and it’s all been designed to work on one platform, but that’s another story for another blog…). So it was refreshing recently to speak with a company called DataVisor in London, that a) doesn’t claim to do everything – the company focuses on fraud – and that b) does genuinely use AI (it correlates activity across accounts, so an anomaly is logically easy to spot). Of course, this can be done manually if you are one person with one account, but DataVisor has processed in the region of a trillion events across 4.2 billion accounts. That’s a tough job to do with a network monitor and a spreadsheet…
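To make that cross-account correlation idea concrete, here’s a deliberately minimal sketch – my own illustration, not DataVisor’s actual tech, and all the account names, IPs and the threshold are made up. One account signing up looks innocent; several distinct accounts doing the same thing from the same source in lockstep is the anomaly worth flagging:

```python
from collections import defaultdict

# Hypothetical events: three "different" accounts all signing up from
# the same IP, plus one unrelated login. Values are purely illustrative.
events = [
    {"account": "a1", "ip": "10.0.0.5", "action": "signup"},
    {"account": "a2", "ip": "10.0.0.5", "action": "signup"},
    {"account": "a3", "ip": "10.0.0.5", "action": "signup"},
    {"account": "b1", "ip": "10.0.9.7", "action": "login"},
]

def suspicious_clusters(events, threshold=3):
    """Flag groups of distinct accounts sharing one source IP and action."""
    clusters = defaultdict(set)
    for e in events:
        clusters[(e["ip"], e["action"])].add(e["account"])
    # Only clusters with enough distinct accounts are suspicious
    return {k: v for k, v in clusters.items() if len(v) >= threshold}

print(suspicious_clusters(events))  # flags the three signups from 10.0.0.5
```

Trivial with four events and a spreadsheet; the trick, as noted above, is doing it across billions of accounts.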
Another important point to make is that, with security breaches in general, neither companies nor individuals think first about the impact of a breach – instead they just throw money at preventing them. Here’s the news: you can’t stop ALL attacks. Be prepared. So DataVisor focuses on impacts such as reputational damage, liability, the actual financial loss likely to be incurred, and so on. Common Sense as a Service (CSaaS).
Unsurprising therefore that the company is both growing rapidly and has some high profile customers across several verticals, from financial services to SM and mobile/gaming. And this despite hardly anyone having heard of the company. Methinks that’s very likely to change in the very near future – definitely a vendor to keep an eye on (using AI or the manual method).
I remember when first testing Gigabit Ethernet (Packet Engines and Foundry Networks for the record) and thinking: “how do we harness this much bandwidth – maybe take a TDM approach and slice it? After all, who needs a gig of bandwidth direct to the desktop?”
Well, that was still a relevant point as only servers were really being fitted with Gigabit Ethernet NICs in ’98/’99 and even then they could rarely handle that level of traffic. Especially the OS/2 based ones… However, as a backbone network tech it was top notch. In a related metric, testing switches and appliances such as Load-Balancers and getting throughput in packets per second terms in excess of 100,000 pps was mad, crazy – the future! So, Mellanox (newly in the hands of nvidia – trust me, the latter do more than make graphics cards!) has just released the latest of its Ethernet Cloud Fabric products with lots of “Gigabit Ethernet” in it – i.e. 100/200/400 GbE switches. I’ll match that and raise you 400…
And forget six figure pps throughput figures. How about 8.3bn pps instead? You can’t test that with 30 NT servers -)
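For the fun of it, the scale of that jump in one line of arithmetic – taking a round 100,000 pps as a stand-in for a “mad, crazy” late-90s result:

```python
# Late-90s test-lab throughput versus the new Ethernet Cloud Fabric claim.
old_pps = 100_000          # a six-figure result that once seemed like the future
new_pps = 8_300_000_000    # the 8.3bn pps figure quoted above

print(f"{new_pps / old_pps:,.0f}x faster")  # 83,000x
```

Roughly an 83,000-fold improvement in about two decades. The 30 NT servers never stood a chance.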
If you’re wondering who needs that level of performance in the DC, think no further than the millions of customers being serviced by AWS, Azure and all the other CSPs – that’s a lot of traffic being handled by the cloudy types on your behalf. And then there’s research, oil and gas exploration, FinTech, the M&E guys and, not least – given the nvidia ownership – extreme virtual reality. Bring on Terabit Ethernet…
Meantime, I think we’ll need a lot of 5G deployment at the other end of the data chain! But that’s another discussion for another day. And whatever did happen to WiMAX?
So – another year and another Netevents; lots of old faces (not least mine!) and a few new ones. And it’s kind of the same with topics – understandably – as there’s only so much you can talk about in what is an increasingly condensed IT world.
Think about it – let’s take networking as an example. Well, in terms of connectivity, a LAN is Ethernet, other than a few SANs still around. And it’s not even referred to as a LAN any longer, just networking. When I started, I used to do multi-product tests, and each was a DIFFERENT type of LAN technology, not just six flavours of Ethernet. WANs – there were any number of different leased line providers and technologies: TDMs/fixed copper leased line, X.25, Frame Relay, dial-up, ISDN and, in the higher echelons, ATM, SONET/SDH and finally MPLS.
And now people simply talk about cloud and Internet, public and private, OnPrem or OffPrem. “Cloud” also increasingly acts, in terms of being a single Service Provider point solution, as a replacement for other xSPs, SIs, VARs – you name it. Only security is still out there with a gazillion different flavours and approaches. Except that the world of IT is fed up with the complexity, time and cost of the validation and deployment (ongoing barely describes it) processes involved – just how DO you evaluate 200 different products that all claim to be an essential part of your security framework? – so, if a CSP or equivalent can provide the integrated security part of the deal too, it’s easy to see why that’s an attractive option. And that’s what we’re seeing now – solutions delivered that are integrating all those components in a more tightly-wrapped fashion than SIs could deliver of old. Moreover, some of the vendors at this year’s Netevents – such as Versa and NetFoundry – are combining elements (in this case SD-WAN and security and variations thereof) that are not merely bolt-ons to simply tick boxes, but are actually designed to be as good as any standalone (additional) alternative tech that is out there.
So, the market, in terms of options, is shrinking. Mergers and acquisitions are only accelerating this move. OK, so there are 83 billion tech start-ups a year (it is actually in the millions, globally, across all tech industries) but how many survive past the first (attempted) round of funding? And how many are REALLY offering something new and worthwhile? Think back to the early 80s, before PCs and networking became commonplace. How many options were there then? IBM, ICL, Marconi, DEC, Olivetti, Prime, Stratus, Tandem etc etc – but all essentially offering the same solution concept: a big box that powers lots of screens and keyboards. And if you wanted “WAN” connectivity in the UK, you simply went to BT and paid a fortune for the privilege. So, AWS, Azure, VMware (being a unique element) and the rest are simply the “mainframe” providers of today. And all the other vendors are suppliers into the chain. But maybe that will give IT the stability it most definitely needs right now.
That said… while there can be too much choice, equally there can be too little. Back to those Netevents debates – if we accept that a “one size fits all” sock actually doesn’t (so the same sock really will fit from sizes 6-11???) then why would IT accept that a single technology – IoT – is equally suitable for kettles and power stations? Yes, both typically involve heating water, but that’s the only commonality. So, if you can hack a kettle, you can hack a power station. Is that what we’re really saying here?
So, much progress, but much more required… Well – that’s a great reason to have more IT events -)
OK – so that’s probably not the perfect headline to be announced by anyone who whistles through their teeth…
Been having some interesting conversations recently around the idea of zero trust security; primarily why there was ever anything but zero trust in the first place?
An obvious answer is that it suited the limitations of an old-school security architecture. A basic firewall, by definition, lets nothing or everything through it. So, unless you provide an element of trust, it would simply have blocked EVERYTHING. Pretty secure – at the time – but not massively productive… And, honestly, I’ve never locked myself out of the network by accident when testing firewalls in the past – just a squalid rumour -)
Now, of course, that ‘perimeter’ is bypassed c/o any number of alternative ways to “leave the building”, via Azure, AWS or whatever. VPNs obviously provide secure paths between two given endpoints, but how many people do you know who say “I love my VPN connection”? That’s not to say there aren’t great solutions out there – I’ve worked with some of them – but then there are the others that we won’t air in public…
All of which makes today’s conversation with Symantec, and its acquisition of Luminate, all the more interesting. I had a rather excellent update chat with Symantec very recently, where the slimmed-down, focus-sharpened, platform-based (Integrated Cyber Defense) approach it now employs makes huge amounts of sense. Many times in this blog I’ve talked about the confused security industry and even more confused IT decision-makers within the enterprise, scratching their heads while thinking they need to acquire and integrate 14 products from eight different vendors into their existing strategy and setup.
We all know this does not – and cannot – work. Which is why a platform-based approach is the only way forward. Like a house built with no foundations, a security strategy based around loosely tying several arbitrary products together does not stop the big bad wolf from blowing the house down.
So, Luminate could be classed, I guess, as the latest building block in this reconstruction project (which has had full planning permission granted). The idea behind Luminate is to create a zero trust application access architecture without traditional VPN appliances. It is designed to securely connect any user from any device, anywhere in the world to corporate applications, whether OnPrem, or in the cloud, while cloaking all other corporate resources. And there are no agents – and, ask any football manager, no one likes agents if they can get away without them! Again, rather like VPNs, I’ve worked with agent-based technologies that are so transparent they work perfectly. And then there are the others – AKA “how intrusive can you get?”. So, in terms of speeding up on-boarding of applications securely, the Luminate approach makes a lot of sense, but especially more so as part of an integrated platform, a point made by Gerry Grealish, Head of Product Marketing – Cloud & Network Security Products – at Symantec.
It also sounds like a good test project in the making -)
Anyone out there right now could be more than excused for thinking we’re drowning in security start-ups; too many “me too” vendors trying to resolve the same perceived problems – niche or broad.
Recently I met with the affable Liz Rice of Aqua Security – reverse cue the drowning gag -) to find a relatively early stage company that actually has a more individual focus than most – in this case securing containers and notably the Kubernetes environment. This is a smart move as Kubernetes is beginning to rule the container world (no “shipping” figures here!) – see my forthcoming follow-up report with Densify for evidence of this. The point is that the DevOps community, who love Kubernetes, are not generally immersed in security. They want speed and flexibility; security – think of any gate, door, wall – just potentially – or deliberately – gets in the way. It’s the same scenario I’ve encountered over the decades when performing network optimisation testing and how to secure that network without compromising on the performance improvements being generated – it’s not a trivial task.
Aqua’s starting point is looking at the typical approach to container security – studying logs to identify malicious activity, raising alerts and stopping the machines – i.e. only after the proverbial horse has bolted, possibly weeks or months ago. As container adoption rates surge, and – additionally – cloud-native (gotta get the “c” word in there) infrastructure evolves to include Container-as-a-Service (CaaS), the security tools need to move in the same direction. A recent report from Forrester suggests that “vendors in or adjacent to the container ecosystem are all racing to show that they have relevant solutions for enterprise customers” and that enterprises should “explore both container-native and traditional security vendor solutions – innovations are coming fast and furious from both camps.” It’ll be interesting to see who wins the race, but Aqua is certainly going round the track in the right direction!
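To illustrate the proactive-versus-reactive point, here’s a toy sketch of the “check before it runs, don’t mine the logs after” stance – entirely my own illustration, not Aqua’s product, and the registry names are made up. The idea is a simple admission check that refuses any container image from an untrusted registry before it ever starts:

```python
# Hypothetical proactive policy check: admit a container image only if
# it comes from a trusted internal registry. Names are illustrative.
ALLOWED_REGISTRIES = {"registry.internal.example.com"}

def admit(image: str) -> bool:
    """Return True if the image's registry is on the allow-list."""
    registry = image.split("/", 1)[0]
    return registry in ALLOWED_REGISTRIES

print(admit("registry.internal.example.com/app:1.2"))  # True – runs
print(admit("docker.io/random/app:latest"))            # False – blocked up front
```

A real container security program does vastly more than this, of course, but the shape is the point: the gate sits in front of the horse, not behind the stable door.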
So I applaud Aqua for its focus here. Its bottom line is that a company should get the DevOps and security teams together in the same room (real or virtual) and work together to identify the potential attack vectors and assemble a container security program that is proactive in identifying and blocking potential threats. As I said, it is a reality that more enterprises are deploying containers and other tools to help build and ship applications faster – solutions designed to be easy-to-use and to improve developer agility – not with production deployments and associated security requirements in mind. Aqua’s view is right there with my own – the solution is not to slow deployments down but to automate the operations and security processes around these tools so everyone wins (and gets longer holidays).
A lot of the work I’ve been doing over the past 12-18 months has been around unifying security with the rest of IT (i.e. as it should have been from the start) and thereafter automating as much of it as possible, in order to remove the “Friday afternoon error syndrome” and this focus is going to take a massive step forward this year (but I can’t talk about it right now!) so Aqua’s approach definitely feeds in that direction – I look forward to furthering the Aqua story in the near future.
Of course, a lot of end user companies out there trying to understand what “digital transformation” actually means (hopefully not as painful as Kodak found it in their own way), let alone the bringing together of “islands of IT” and automating what can be automated, need HELP. I also recently met with Mark Cook, Group CEO of Getronics, a company wot is trying to bring IT into what is very soon to be the next decade – (argh – where has the rest of this century gone – I still remember the cheap Chinese fireworks I bought to celebrate the millennium shooting off at shin height around the garden…?).
The old cliché about businesses focusing on their core business and not turning into IT shops (if you make biscuits, make and market biscuits, not Windows 10 and endpoint security) has never rung louder and more true, as many traditional companies are struggling or going under – cue more drowning gags. While the cloud is marketed as making life easier, that’s only true if you know how to manage it, as my recent report for Densify showed – some of the costs that companies are incidentally racking up are frankly shocking. So it will be interesting to see how the likes of Getronics can help get companies up to speed on their IT – in all aspects – even as the next tech start-up emerges; oh, and there’s the next – and the next… just to make that vendor/tech decision-making process all the harder (albeit often in a good way).
It’s the same challenge for investors looking to back the right tech. One interesting diversion for Getronics is its recently established Investment Services Group. This is aimed at providing the private equity community with a combination of services – digital evaluation, transformation and management thereof, that are designed to rapidly assess, value and unlock the value of acquisitions, end to end – from initial due diligence to final exit. Given my involvement in such areas, I’m especially interested in how this pans out, so here’s yet another case of “let’s revisit” and “watch this space”.
One common theme links recent product tests and vendor briefings, and that is – automation.
Not that there’s anything new here per se; back in the mid-90s I remember working on an extensive project with the splendid Roger Green of Boole & Babbage (long since part of BMC – the company that is, not Roger) to fully automate the network fault finding and remediation thereof.
Possibly in the interests of keeping a million networking SEs in work still, automation seemed to take something of a back seat (computerised cars gag?) for a while, but now it’s back with a bang (i.e. what happens when you get one human driver on the same road as otherwise computerised cars) as I noted in an excellent briefing with Mike Wood, now of Apstra. As Mike himself said, I’ve got such an extensive collection of his biz cards over the years, I could probably decorate my walls with them, but then he’s far from the only one – and all you others, you know who you are!
Apstra has defined “intent-based networking” with the aim of fully automating the whole network service lifecycle, which kind of brings us full circle from those mid-90s days with B&B. Key to this is the validation of what you create, before you get it catastrophically wrong. Definitely a company to keep an eye or two (one, if you’re an octopus) on and I will be chatting more about Apstra, post deep dive demo in Feb. Meantime, it will be interesting to monitor how many other vendors decide that they too are “intent-based networking” oriented -)
Meantime x2, back on the automation front, I’ve just finished a test report on Densify (the artist formerly known as Cirba, as we all say) whose multi-cloud/hybrid management and optimisation tech has recently become fully automated, thanks to Cloe. Now, I haven’t met her, but I have seen her in action and what she can do with one line of code is nobody’s business… Rather than wax lyrical about the tech here, far better an option is to take a look at the report itself (it’s got plenty of pictures between the words):
And please do proffer feedback – that’s what these blogs are all about! More to come on Densify this quarter, so watch this space…
And finally, a Happy New (Brexit – what’s that?) Year to all!
Two and a bit years ago – I should remember, it was on my birthday! – I was presenting a panel debate on the latest Cybersecurity deterrents. With four vendors and two consultants on the panel, it was the general consensus that deciding exactly which combination of products and services a company needed to invest in was more difficult than sorting out Brexit…
One of the issues was: exactly what replaces what, and what compliments what? The problem here was that few of the products on offer actually appeared to truly replace and consolidate existing products – and haven’t we all spent enough on different security products already? The concerns here are two-fold; not just CapEx and OpEx, but the complications of integrating and managing that increasingly large portfolio of products. One configuration slip and you leave a gaping security hole the size of the NHS defence strategy.
So, it was with a combination of pleasure and relief that, at another event (both organised by Mark “Mr Netevents” Fox) I rejoiced in the company of JASK (even if the name grates – Just ASK, in case you just ask) and one particular slide from a Citi Research CIO report, which showed individual vendors replacing incumbent vendors and – in some cases – more than one! In the case of JASK – which itself consolidates security alerts and feeds from multiple platforms – it showed the company displacing IBM QRadar AND Splunk. And it gets better for this Yorkshireman – the commentary on why it achieved this? Saving MONEY!!! Shock heading – IT SAVES MONEY -) Not before time…
At the same event, another vendor – this time in the SD-WAN space – Versa, has a similar take on its own market, that of cutting out the middle man and doing it all itself. The Versa solution is a complete SD-WAN stack, all software, that even includes the security element. One to definitely get my paws on. This is a valid message that Versa is promoting as, post WanOp market (as defined by Gartner/Riverbed), that same level of confusion has arisen, as in: “what do I keep, what do I chuck, what else do I need to buy?”. The Versa answer is – “buy our product, end of story”, or pretty much so. Of course, I have clients wot can still add to the Versa offering, but that’s another story…
And while we’re on the subject of feature consolidation, another company I’ve been following recently, Avi Networks, in the – what was historically – Load-Balancing/App Delivery Controller market, is simplifying the decision-making process too. In addition to load-balancing traffic in any direction on pretty well any platform – real, virtual, container – locally, and globally, multi-cloud etc – it now combines with Istio to create what it calls a Universal Service Mesh. Basically, Istio manages a network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication. Istio provides behavioural insights and operational control over the service mesh as a whole. So, we now have macro and micro control of applications, in addition to identity-based security, real-time application monitoring, and enterprise-grade authentication and authorisation, as well as the Web App Firewall (WAF) security that Avi already offers.
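To give a flavour of one of those service mesh tricks – the canary release – here’s a minimal sketch of weighted traffic splitting, the mechanism a mesh like Istio uses under the covers. This is purely my own illustration, not Istio code; the service names and the 90/10 split are invented for the example:

```python
import random

# Hypothetical canary routing table: 90% of traffic to the stable
# version, 10% to the new one under trial. Weights are illustrative.
ROUTES = [("reviews-v1", 90), ("reviews-v2", 10)]

def pick_backend(routes):
    """Choose a backend at random, in proportion to its weight."""
    total = sum(weight for _, weight in routes)
    r = random.uniform(0, total)
    for backend, weight in routes:
        if r < weight:
            return backend
        r -= weight
    return routes[-1][0]  # guard against floating-point edge cases
```

If the canary misbehaves, you dial its weight back to zero; if it behaves, you ramp it up – no redeploys, no DNS games. In a real mesh this lives in declarative routing config rather than application code, which is rather the point of handing it to Istio.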
Seems like the decision-making processes in IT are finally becoming streamlined – or is this wishful thinking? Either way, sounds like a good IT New Year’s Resolution – must consolidate…
Ah – tomorrow – the dreaded Black Friday, yet another unwanted import from Trumpland.
But, JASK – already previously subjected here to the blogging treatment – has some security advice to offer as they, rightly, note that the criminal fraternity have a great window of opportunity to take advantage of the online buying frenzy.
The company notes that criminals steal from and defraud shoppers too excited by the lure of a huge discount to realise they are being scammed, using very focused campaigns, planned well in advance, that track the dates and specific product details of offers shoppers may publish, in order to orchestrate and disguise their attack campaigns. JASK also notes that said malicious actors have the capability of affecting or even preventing stores operating online, but then a badly designed website/capacity/redundancy is equally capable of achieving the same result -)
Rod Soto, Director of Security at JASK, suggests a few damage limitation methods, such as using a credit card rather than a debit card (so there are at least some safeguards), shopping at sites familiar to you, and using common sense when an offer appears too good to be true – don’t click on that link!
Two more pieces of advice are equally applicable to general online activity – one, never reuse your passwords across online shops and use password vaults to generate strong, random passwords (and don’t use your phone browser) and two, make sure your computer is up to date with patches and security fixes.
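For the curious, the “strong and random” part of that advice is exactly what a password vault does for you, and it fits in a few lines – a minimal sketch using Python’s standard-library secrets module (designed for cryptographic randomness, unlike the plain random module), with an arbitrary 16-character length:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different password every call
```

The real win, of course, is the vault remembering it so you don’t – which is how you avoid reusing the same one across every shop.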
I have one further piece of advice to proffer: forget stupid Black Friday shopping and just go down the pub instead…