More Gherkin And IT As A Business Strategy – Back at the Gherkin event, two of the primary vendors present for the discussions actually substantiated the previous blog comments about basing a security strategy around the business, rather than treating it as a bolt-on.
Both Apstra and NetFoundry have somewhat “challenging” straplines – “intent-based networking” and “Network as a Service meets Connectivity as Code” respectively, but dig beneath the marketing-ese in each case and you get to some real foundational IT – proper building blocks for the artist formerly known as networking and WAN connectivity. We went through that whole “middleware” phase 20 years ago, but no one really knew what it actually was. Apstra and NetFoundry do, but they don’t call it that. But in both cases, this really is “glue” code – layers that pull networking, apps and services together, and optimise the management and delivery thereof.
Apstra does the “orchestration” job that, again, many vendors once claimed to do, but a) couldn’t and b) didn’t really know exactly what it was they were trying to do in the first place, hence point a). It automates the conversations between the network elements and the life-cycle they create – hence it works as the business itself works – as a flow of information and services that makes a business, well, a business. And optimises it in turn. Equally, NetFoundry plays an equivalent role outside of the data centre, controlling the destiny of applications – essentially turning apps into secure private networks. Years ago, many of the major networking vendors were talking up the idea of “the application is the network”. But it wasn’t – not back then. By that they simply meant some prioritisation mechanisms. Ones that no one ever enabled on their routers and switches. This is proper embedded code so, again, it is a fundamental building block.
I like this approach. I can throw another layer into the mix too – the control of the basic business flow process, and that is thingamy.com – if you haven’t checked these guys out, then do. It’s what turns the likes of ERP and related processes into something designed to work in 2020 and beyond, not 1980.
Who knows – IT might actually work one day -))))
Had another interesting cloudy discussion last week with SIOS – one of IT’s better-kept secrets, certainly in the UK.
Much of that is down to the sensitivity of what it does and who it does it for. The vendor deals with HA – high availability (and disaster recovery) – and companies aren’t too keen to admit that their products and services, data and applications aren’t necessarily always available. Like a bank saying – well, we might have your money still in our reserves, if our latest investments haven’t gone wotsits up. Of course, that does happen, but they don’t actually tell you…
Where the HA scenario has got especially interesting for SIOS – and its customers – is with the advent of t’cloud (as we call it in Yorkshire, where we see many). HA even in a single-CSP scenario is not straightforward – HA was always an OnPrem scenario, even when failover time was so long that you could go on holiday while it was happening. At least you could see it happening in front of you (assuming you were in the machine room/data centre and not on holiday). But within the cloud there is no shared storage. And what if you are putting your golden data/apps eggs in several cloudy baskets – say AWS, Azure and Google? Those providers are certainly not interested in ensuring that, if and when their service fails, another CSP’s kicks in! So, what SIOS provides is HA clustering regardless of app/data, OS or CSP/server location. It can emulate physical disk resources, simulating those physical shared-storage clusters across whatever resources are available and clustering the workloads accordingly.
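The cross-cloud failover idea can be sketched in miniature – a toy illustration only, assuming nothing about how SIOS actually implements it: keep a preference-ordered list of cluster nodes, each in a different cloud, and promote the first one that passes its health check.

```python
# Toy illustration of cross-CSP failover: a preference-ordered node
# list, with the first healthy node promoted to active.
# (NOT SIOS's actual implementation -- hypothetical node names.)

def pick_active_node(nodes, is_healthy):
    """Return the first healthy node in preference order, or None."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None

# Simulate the AWS primary failing its health check:
# the Azure standby gets promoted instead.
nodes = ["aws-primary", "azure-standby", "gcp-standby"]
down = {"aws-primary"}
active = pick_active_node(nodes, lambda n: n not in down)
```

The hard part SIOS solves, of course, is everything hidden inside that `is_healthy` call and the storage replication underneath it.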
Of course, what this means is that no two customer scenarios will be the same. Good luck chaps! But seriously, in the world of “value add” it gives partners enormous flexibility in managing those scenarios and providing that value add. Put it this way – with my old IT hat on, I would not want to attempt a DIY alternative to what SIOS offers. Another side effect of the SIOS approach is that it can save massively on licensing costs, as you’re paying for less physical resource.
Looking forward to hopefully seeing some of this tech in action in the future, so watch this space!
Had my first visit to The Gherkin recently at a “mini” Netevents security briefing in London.
I can certainly recommend the brioche-bun bacon butties with a view of the London rain from the 38th floor. What was different about this Netevents is that we had real people there – i.e. not just IT pros but guys who actually have to work directly with people and make stuff work. It always makes it more interesting when you get to hear from the coal face (I was there in the 80s, I know what it’s like). Not least Brian Lord, who formerly had the simple task of running GCHQ’s security but now fronts an independent consultancy, PGI – so he’s still at the sharp end.
One of the realistic messages to come out of the briefings was that what is key is not how you’re protecting your crown jewels, but which crown jewels you should be protecting. In other words – what’s the one thing you would rescue from a burning building that would cost you your business/life? For many industries – retail, transport, manufacturing, banking etc – that answer is obvious – customer data. Have that breached and you could be accountable for billions. Throw in the casual “fact” that only about 20% of investment in IT security is actually put to use, and you do wonder why so many start-up vendors in this sector still focus on how their tech protects and not what it is actually protecting in the first place.
Not that this is a new message, but it’s one the start-up vendors especially need to take seriously. For every one that makes headline news with a $$$$ dollar acquisition, many others quietly fade away and die. At the end of his panel debate, Brian asked “what will we be talking about in cyber security in five years’ time?” Methinks, exactly what we were talking about in the Gherkin, since that’s what we were talking about five years previously… But then that’s IT – it is cyclical. It keeps people in jobs, just like manufacturing new and totally unnecessary features in cars, to lure people to trade in a perfectly usable vehicle, to spend money on features they don’t need. Mind you, my parents used to do the same thing with their three-piece suites; and they were all still made out of dralon…
One of Brian’s key focus areas – unsurprisingly – is government; the biggest cyber target of all. So why is the UK government spending its entire time working out Brexit backstops instead of protecting its eBorders – discuss! Maybe we should all invest in cyber criminals -)))
The final clear point from the excellent discussions was that – still – security is not aligned with the business process. Back to the car analogy – it’s like having your garage a mile away from the house. OK, so in Dartmouth that’s normal but… I’ve been doing some background work with an old IT buddy, Roger Green, on this very subject. It’s simple enough – strategy comes before technology. Just when are companies actually going to adopt this approach? I guess we’ll be talking about that five years from now…
Had some very interesting conversations in the past few weeks with a number of US-based vendors across primarily the security and optimisation sectors, with one commonality – established in the Americas but significantly less so in UK/EMEA.
The other issue for these companies currently lies in the uncertainty surrounding the UK and the dreaded “B” word; ideally these US vendors want an HQ in an English-speaking location – well, they do share some words – but not one that might be stranded from the rest of Europe (and the world).
A perfect example is Cybera, established enough in North America in the SD-WAN/WAN edge markets to be rated number one by Gartner in the small footprint retail WAN use case, but largely unknown in EMEA. And here’s the point – a WAN edge tech that excels in the branch/small office, SOHO, SMB and related environments is surely tailor-made for Europe, given its proliferation of distributed, small footprint locations favoured by so many companies – not least retail, banking, insurance…
Even then, as Roger Jones from Cybera explained to me, it’s not a simple 1:1 solution mapping from US to UK company equivalents. For example, in retail outlets, the US is way behind in terms of adopting contactless payment tech, so it’s not a case of a “one size fits all” solution. Equally, however, that opportunity is just as live in EMEA as it is in the US. For a lot of these vendors, some kind of “foot in the door” approach is a great way of making the first step (pun intended?) towards establishing a customer base in a new region. Cybera, like clients of mine I’ve worked with in the past, such as Aritari, has a great “foot in the door” approach in the form of an overlay solution which avoids the need to convince the customer to uproot their existing investment (which, let’s face it, they won’t!) but simply adds value to what they already have. Of course, down the line, the idea is indeed to replace their existing infrastructure but – shhh – I didn’t say that.
Meantime, I’m hoping to see more of the Cybera tech and monitor its EMEA footprint expansion, in spite of the current economic uncertainty. Another case of “watch this space”…
I recently completed a report for a long-time client, Kemp Technologies, in the area formerly known as L-B/ADC – i.e. Load-Balancing/Application Delivery Control.
It really hit home, during the testing, just how much this “technology” has changed. For starters, we didn’t ever really talk about the actual technology, other than understanding the underlying architecture/engine and how/why it all works. And that’s the point, because nor does the customer nowadays. Which is a good thing. In days of yore, when knights and dragons mixed company and Load-Balancers were simply big lumps of tin with a finite lifespan, the IT guys needed to understand the capacity of said box, how many could be bolted on and – if they got the sales guy drunk enough – when they’d realistically need a forklift upgrade to the next platform.
You then – as the customer – had to either architect the whole thing yourself, or spend many more $$$ or £££ or €€€ on consultation/services to get it done. And that was fine – at the time. But it was both expensive and limited. Now – as the customer – you simply need to be aware of what data and applications need to be accessed by whom (not even where, per se) and have an approximate idea of what data, apps and users you are going to be adding (or subtracting) in the future, and you just basically plug in – i.e. access a web-based console – and go. Well, it’s not quite that simple, but not far off. And there’s no programming involved, no complex rules creation – i.e. no extensive training required. So you don’t spend a gazillion bringing a team of SEs up to speed, who then bugger off to another company offering more money, shortly afterwards…
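For the record, the per-request decision at the heart of all that tin was always conceptually simple – here is a minimal least-connections scheduler of the sort an L-B/ADC runs on every incoming request (purely illustrative; Kemp’s engine does vastly more, with health checking, persistence and the rest):

```python
# Minimal least-connections scheduler: each new request goes to the
# back-end server with the fewest in-flight connections.
# (Illustrative only -- not Kemp's actual engine.)

class LeastConnections:
    def __init__(self, servers):
        # Track in-flight connection counts per server.
        self.active = {s: 0 for s in servers}

    def assign(self):
        # Pick the least-loaded server (ties go to the first listed).
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Call when a connection completes.
        self.active[server] -= 1
```

The point of the modern, console-driven world is precisely that the customer never needs to see or tune any of this.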
In other words, it’s very much a win-win scenario. In a hybrid cloud/OnPrem world it is all but impossible to know where your data and apps are, so it’s important that you don’t actually need to care about this -)
Anyway, enough bantering – please do download the report and understand what I’m talking about here. No boxes, no “use by dates”, just optimisation – as it was always meant to be.
It’s hard to recall a recent presentation from a vendor that didn’t include the AI or Machine-Learning buzz-phrases.
I’m not just talking IT here – coffee vending machines probably also incorporate some such logic; “by monitoring your coffee drinking profile, we are confident in pre-selecting your drink for you with total accuracy”. Actually, for me that’s not a complex algorithm – black Americano every time, just in case you’re buying…
In the world of Cyber Security though, it is fair to say that we’ve largely had our fill of its overuse – that and the “one size fits all” security story. No, it doesn’t, unless you are a company the size of – say – a Symantec, which has one of everything (and it’s all been designed to work on one platform, but that’s another story for another blog…). So it was refreshing recently to speak with a company called DataVisor in London, that a) doesn’t claim to do everything – the company focuses on fraud – and that b) does genuinely use AI (it correlates activity across accounts, so an anomaly is logically easy to spot). Of course, this can be done manually if you are one person with one account, but DataVisor has processed ITRO a trillion events across 4.2 billion accounts. That’s a tough job to do with a network monitor and a spreadsheet…
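The cross-account correlation idea is easy to illustrate (my own toy sketch, not DataVisor’s algorithm): fraud rings tend to register waves of accounts that behave near-identically, so grouping accounts by a behaviour signature and flagging unusually large groups catches what per-account analysis misses.

```python
# Toy cross-account correlation: bucket accounts by a behaviour
# signature and flag suspiciously large buckets as likely fraud rings.
# (Hypothetical data shape -- not DataVisor's algorithm.)
from collections import defaultdict

def flag_coordinated(accounts, min_group=3):
    """accounts: dict of account_id -> hashable behaviour signature.
    Returns groups of accounts sharing a signature, ring-sized or bigger."""
    groups = defaultdict(list)
    for acct, sig in accounts.items():
        groups[sig].append(acct)
    return [g for g in groups.values() if len(g) >= min_group]
```

One account doing something odd is noise; three hundred accounts doing the identical odd thing is a signal – that is the whole trick, scaled to trillions of events.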
Another important point to make is that, with security breaches in general, neither companies nor individuals think first about the impact of a breach – they just throw money at preventing them. Here’s the news: you can’t stop ALL attacks. Be prepared. So DataVisor focuses on impacts such as reputational damage, liability, the actual financial loss likely to be incurred, and so on. Common Sense as a Service (CSaaS).
Unsurprising therefore that the company is both growing rapidly and has some high profile customers across several verticals, from financial services to SM and mobile/gaming. And this despite hardly anyone having heard of the company. Methinks that’s very likely to change in the very near future – definitely a vendor to keep an eye on (using AI or the manual method).
I remember when first testing Gigabit Ethernet (Packet Engines and Foundry Networks for the record) and thinking: “how do we harness this much bandwidth – maybe take a TDM approach and slice it? After all, who needs a gig of bandwidth direct to the desktop?”
Well, that was still a relevant point as only servers were really being fitted with Gigabit Ethernet NICs in ’98/’99 and even then they could rarely handle that level of traffic. Especially the OS/2 based ones… However, as a backbone network tech it was top notch. In a related metric, testing switches and appliances such as Load-Balancers and getting throughput in packets per second terms in excess of 100,000 pps was mad, crazy – the future! So, Mellanox (newly in the hands of nvidia – trust me, the latter do more than make graphics cards!) has just released the latest of its Ethernet Cloud Fabric products with lots of “Gigabit Ethernet” in it – i.e. 100/200/400 GbE switches. I’ll match that and raise you 400…
And forget six figure pps throughput figures. How about 8.3bn pps instead? You can’t test that with 30 NT servers -)
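The arithmetic behind those pps figures is standard line-rate maths: a minimum-size 64-byte Ethernet frame actually costs 84 bytes on the wire once you add the 8-byte preamble and 12-byte inter-frame gap, so a single 400GbE port tops out at roughly 595 million frames per second.

```python
# Line-rate arithmetic for minimum-size Ethernet frames: each 64-byte
# frame occupies 84 bytes on the wire (+8B preamble, +12B inter-frame gap).

def max_pps(link_bps, frame_bytes=64, overhead_bytes=20):
    """Theoretical maximum frames/sec for a link at a given frame size."""
    wire_bits = (frame_bytes + overhead_bytes) * 8
    return link_bps / wire_bits

print(round(max_pps(400e9) / 1e6))  # ~595 Mpps per 400GbE port
print(round(max_pps(10e9) / 1e6))   # ~15 Mpps for old-school 10GbE
```

So 8.3bn pps is the equivalent of around fourteen 400GbE ports running flat out at worst-case frame sizes – which rather puts those six-figure numbers of the late 90s into perspective.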
If you’re wondering who needs that level of performance in the DC, think no further than the millions of customers being serviced by AWS, Azure and all the other CSPs – that’s a lot of traffic being handled by the cloudy types on your behalf. And then there’s research, oil and gas exploration, FinTech, the M&E guys and, not least – given the nvidia ownership – extreme virtual reality. Bring on Terabit Ethernet…
Meantime, I think we’ll need a lot of 5G deployment at the other end of the data chain! But that’s another discussion for another day. And whatever did happen to WiMAX?
So – another year and another Netevents; lots of old faces (not least mine!) and a few new ones. And it’s kind of the same with topics – understandably – as there’s only so much you can talk about in what is an increasingly condensed IT world.
Think about it – let’s take networking as an example. Well, in terms of connectivity, a LAN is Ethernet, other than a few SANs still around. And it’s not even referred to as a LAN any longer, just networking. When I started, I used to do multi-product tests, and each was a DIFFERENT type of LAN technology, not just six flavours of Ethernet. WANs – there were any number of different leased line providers and technologies: TDMs/fixed copper leased line, X.25, Frame Relay, dial-up, ISDN, and, in the higher echelons, ATM, SONET/SDH and finally MPLS.
And now people simply talk about cloud and Internet, public and private, OnPrem or OffPrem. “Cloud” also increasingly acts, in terms of being a single Service Provider point solution, as a replacement for other xSPs, SIs, VARs – you name it. Only security is still out there with a gazillion different flavours and approaches. Except that the world of IT is fed up with the complexity, time and cost of the validation and deployment (ongoing barely describes it) processes involved – just how DO you evaluate 200 different products that all claim to be an essential part of your security framework? – so, if a CSP or equivalent can provide the integrated security part of the deal too, it’s easy to see why that’s an attractive option. And that’s what we’re seeing now – solutions delivered that are integrating all those components in a more tightly-wrapped fashion than SIs could deliver of old. Moreover, some of the vendors at this year’s Netevents – such as Versa and NetFoundry – are combining elements (in this case SD-WAN and security and variations thereof) that are not merely bolt-ons to simply tick boxes, but are actually designed to be as good as any standalone (additional) alternative tech that is out there.
So, the market, in terms of options, is shrinking. Mergers and acquisitions are only accelerating this move. OK, so there are 83 billion tech start-ups a year (it is actually in the millions, globally, across all tech industries) but how many survive past the first (attempted) round of funding? And how many are REALLY offering something new and worthwhile? Think back to the early 80s, before PCs and networking became commonplace. How many options were there then? IBM, ICL, Marconi, DEC, Olivetti, Prime, Stratus, Tandem etc etc – but all essentially offering the same solution concept: a big box that powers lots of screens and keyboards. And if you wanted “WAN” connectivity in the UK, you simply went to BT and paid a fortune for the privilege. So, AWS, Azure, VMware (being a unique element) and the rest are simply the “mainframe” providers of today. And all the other vendors are suppliers into the chain. But maybe that will give IT the stability it most definitely needs right now.
That said… while there can be too much choice, equally there can be too little. Back to those Netevents debates – if we accept that a “one size fits all” sock actually doesn’t (so the same sock really will fit from sizes 6-11???) then why would IT accept that a single technology – IoT – is equally suitable for kettles and power stations? Yes, both typically involve heating water, but that’s the only commonality. So, if you can hack a kettle, you can hack a power station. Is that what we’re really saying here?
So, much progress, but much more required… Well – that’s a great reason to have more IT events -)
OK – so that’s probably not the perfect headline to be announced by anyone who whistles through their teeth…
Been having some interesting conversations recently around the idea of zero trust security; primarily, why was there ever anything but zero trust in the first place?
An obvious answer is that it suited the limitations of an old-school security architecture. A basic firewall, by definition, lets nothing or everything through it. So, unless you provide an element of trust, it would simply have blocked EVERYTHING. Pretty secure – at the time – but not massively productive… And, honestly, I’ve never locked myself out of the network by accident when testing firewalls in the past – just a squalid rumour -)
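That all-or-nothing behaviour is easy to picture: a classic firewall rule chain is first-match-wins with an implicit deny at the end, so without explicitly punched “trust” holes, nothing gets through at all. A minimal sketch (not any particular vendor’s rule engine):

```python
# First-match-wins rule evaluation with an implicit final deny -- the
# classic firewall model that forced "trust" rules into existence.
# (Minimal sketch, not any real vendor's rule engine.)

def allowed(packet, rules):
    """rules: list of (predicate, verdict) pairs. First match wins."""
    for predicate, verdict in rules:
        if predicate(packet):
            return verdict
    return False  # implicit deny-all at the end of every chain

# One "trust" rule: permit HTTPS, implicitly deny everything else.
rules = [(lambda p: p["port"] == 443, True)]
```

Every rule added to that list is a little parcel of trust – which is exactly how we drifted so far from zero.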
Now, of course, that ‘perimeter’ is bypassed c/o any number of alternative ways to “leave the building”, via Azure, AWS or whatever. VPNs obviously provide secure paths between two given endpoints, but how many people do you know who say “I love my VPN connection”? That’s not to say there aren’t great solutions out there – I’ve worked with some of them – but then there are the others that we won’t air in public…
All of which makes today’s conversation with Symantec, and its acquisition of Luminate, all the more interesting. I had a rather excellent update chat with Symantec very recently, where the slimmed-down, focus-sharpened, platform-based (Integrated Cyber Defense) approach it now employs, makes huge amounts of sense. Many times in this blog I’ve talked about the confused security industry and even more confused IT decision-makers within the enterprise, scratching their heads while thinking they need to acquire and integrate 14 products from eight different vendors into their existing strategy and setup.
We all know this does not – and cannot – work. Which is why a platform-based approach is the only way forward. Like a house built with no foundations, a security strategy based around loosely tying several arbitrary products together, does not stop the big bad wolf from blowing the house down.
So, Luminate could be classed, I guess, as the latest building block in this reconstruction project (which has had full planning permission granted). The idea behind Luminate is to create a zero trust application access architecture without traditional VPN appliances. It is designed to securely connect any user from any device, anywhere in the world to corporate applications, whether OnPrem, or in the cloud, while cloaking all other corporate resources. And there are no agents – and, ask any football manager, no one likes agents if they can get away without them! Again, rather like VPNs, I’ve worked with agent-based technologies that are so transparent they work perfectly. And then there are the others – AKA “how intrusive can you get?”. So, in terms of speeding up on-boarding of applications securely, the Luminate approach makes a lot of sense, and especially so as part of an integrated platform, a point made by Gerry Grealish, Head of Product Marketing – Cloud & Network Security Products – at Symantec.
It also sounds like a good test project in the making -)
Anyone out there right now could be more than excused for thinking we’re drowning in security start-ups; too many “me too” vendors trying to resolve the same perceived problems – niche or broad.
Recently I met with the affable Liz Rice of Aqua Security – reverse cue the drowning gag -) to find a relatively early stage company that actually has a more individual focus than most – in this case securing containers and notably the Kubernetes environment. This is a smart move as Kubernetes is beginning to rule the container world (no “shipping” figures here!) – see my forthcoming follow-up report with Densify for evidence of this. The point is that the DevOps community, who love Kubernetes, are not generally immersed in security. They want speed and flexibility; security – think of any gate, door, wall – just potentially – or deliberately – gets in the way. It’s the same scenario I’ve encountered over the decades when performing network optimisation testing and how to secure that network without compromising on the performance improvements being generated – it’s not a trivial task.
Aqua’s starting point is looking at the typical approach to container security – studying logs to identify malicious activity, raising alerts and stopping the machines – i.e. only after the proverbial horse has bolted, possibly weeks or months ago. As container adoption rates surge, and – additionally – cloud-native (gotta get the “c” word in there) infrastructure evolves to include Container-as-a-Service (CaaS), the security tools need to move in the same direction. A recent report from Forrester suggests that “vendors in or adjacent to the container ecosystem are all racing to show that they have relevant solutions for enterprise customers” and that enterprises should “explore both container-native and traditional security vendor solutions – innovations are coming fast and furious from both camps.” It’ll be interesting to see who wins the race, but Aqua is certainly going round the track in the right direction!
So I applaud Aqua for its focus here. Its bottom line is that a company should get the DevOps and security teams together in the same room (real or virtual) and work together to identify the potential attack vectors and assemble a container security program that is proactive in identifying and blocking potential threats. As I said, it is a reality that more enterprises are deploying containers and other tools to help build and ship applications faster – solutions designed to be easy-to-use and to improve developer agility – not with production deployments and associated security requirements in mind. Aqua’s view is right there with my own – the solution is not to slow deployments down but to automate the operations and security processes around these tools so everyone wins (and gets longer holidays).
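The proactive-versus-forensic point can be boiled down to a toy admission check – gate deployments before they run, rather than mining logs after the horse has bolted. (Illustrative only, with a hypothetical registry name; this is not Aqua’s product logic.)

```python
# Toy admission check: only admit container images pulled from an
# approved registry, rejecting everything else BEFORE it runs.
# (Illustrative only -- hypothetical registry name, not Aqua's logic.)

APPROVED_REGISTRIES = {"registry.internal.example.com"}

def admit(image_ref):
    """Allow only images whose registry prefix is on the approved list."""
    registry = image_ref.split("/", 1)[0]
    return registry in APPROVED_REGISTRIES
```

The real-world versions of this hook into the orchestrator’s deployment pipeline, which is exactly where the DevOps and security teams need to meet.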
A lot of the work I’ve been doing over the past 12-18 months has been around unifying security with the rest of IT (i.e. as it should have been from the start) and thereafter automating as much of it as possible, in order to remove the “Friday afternoon error syndrome”. This focus is going to take a massive step forward this year (but I can’t talk about it right now!), so Aqua’s approach definitely feeds in that direction – I look forward to furthering the Aqua story in the near future.
Of course, a lot of end user companies out there trying to understand what “digital transformation” actually means (hopefully not as painful as Kodak found it in their own way), let alone the bringing together of “islands of IT” and automating what can be automated, need HELP. I also recently met with Mark Cook, Group CEO of Getronics, a company wot is trying to bring IT into what is very soon to be the next decade – (argh – where has the rest of this century gone – I still remember the cheap Chinese fireworks I bought to celebrate the millennium shooting off at shin height around the garden…?).
The old cliché about businesses focusing on their core business and not turning into IT shops (if you make biscuits, make and market biscuits, not Windows 10 and endpoint security) has never rung louder or truer, as many traditional companies are struggling or going under – cue more drowning gags. While the cloud is marketed as making life easier, that’s only true if you know how to manage it, as my recent report for Densify showed – some of the costs that companies are incidentally racking up are frankly shocking. So it will be interesting to see how the likes of Getronics can help get companies up to speed on their IT – in all aspects – even as the next tech start-up emerges; oh, and there’s the next – and the next… just to make that vendor/tech decision-making process all the harder (albeit often in a good way).
It’s the same challenge for investors looking to back the right tech. One interesting diversion for Getronics is its recently established Investment Services Group. This is aimed at providing the private equity community with a combination of services – digital evaluation, transformation and management thereof, that are designed to rapidly assess, value and unlock the value of acquisitions, end to end – from initial due diligence to final exit. Given my involvement in such areas, I’m especially interested in how this pans out, so here’s yet another case of “let’s revisit” and “watch this space”.