Networks Generation


November 28, 2019  12:51 PM

Taming Windows As A Service – Making The Incompatible Compatible…

Steve Broadhead

With so much IT focus on the cloudy landscape, it’s easy to forget that what is happening at the desktop is still the touchy-feely point of IT contact for the users themselves.

And much of that still revolves around Microsoft and the age-old Windows platform. Except that, with Windows 10, Microsoft has forever changed the Windows landscape, it now being provided as Windows as a Service. This fundamentally changes the ongoing ownership of an estate of Windows-based endpoints, not least because Microsoft releases two feature updates a year – the first to screw the platform and the second to fix it -)))) Well, sort of… Regardless, what it means is that, every time there’s an update, there are possible compatibility and compliance issues that simply won’t resolve themselves without due analysis and remediation.

But this is not a trivial operation, especially with a large application estate. The old cliché about sufficient monkeys and typewriters producing the complete works of Shakespeare is actually complete drivel (though Measure For Measure isn’t actually that good anyway), and the same applies to an IT team and compatibility testing. There simply isn’t enough time between updates to manually test and remediate issues. End of.

So, recently I’ve been working with a UK vendor, Rimo3 – https://rimo3.com/ – which has developed a cost-effective, timely alternative – ACTIV – using automation to massively reduce those test and remediation times. And it works. Moreover, the company is looking at VDI and virtualised environments in the same way, so a complete desktop estate could be managed in an automated fashion. What’s key about all this is that we’re not talking about some fanciful future solution for a potential problem, but something which resolves a major problem that exists right now. Can a company – at least of a given size – really manage its application portfolio manually, especially when it probably doesn’t even know the extent of that portfolio in the first place? You’d be amazed how few companies really know just how many applications are in their estate. Most don’t even come close…
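To make the automation idea concrete, here’s a bare-bones sketch of the kind of drudgery it replaces – install each package silently in a clean test image, capture the result, flag anything needing remediation. The package list and report shape are purely illustrative, not Rimo3’s actual implementation; only the msiexec switches are standard Windows Installer fare:

```python
"""Sketch of automated app-compatibility smoke testing (illustrative only)."""
import subprocess

PACKAGES = ["app1.msi", "app2.msi"]  # hypothetical package list

def smoke_test(package: str) -> dict:
    # Silent install via msiexec: /i = install, /qn = no UI, /l*v = verbose log
    result = subprocess.run(
        ["msiexec", "/i", package, "/qn", "/l*v", package + ".log"],
        capture_output=True,
    )
    # msiexec returns 0 on success; 3010 means success but reboot required
    return {"package": package, "ok": result.returncode in (0, 3010)}

if __name__ == "__main__":
    for outcome in map(smoke_test, PACKAGES):
        status = "OK" if outcome["ok"] else "NEEDS REMEDIATION"
        print(f"{outcome['package']}: {status}")
```

Run that against every package in the estate, in a fresh VM snapshot, after every feature update – and you begin to see why doing it by hand doesn’t scale.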

Anyway – enough jabbering on – don’t want to get in the way of the Black Friday/Christmas shopping, so check out the report itself to fully appreciate the scale of what is being achieved here:

https://rimo3.com/wp-content/uploads/2019/11/Broadband_Testing_Rimo3_ACTIV_Report.pdf

October 22, 2019  11:35 AM

More From The Gherkin

Steve Broadhead

More Gherkin And IT As A Business Strategy – Back at the Gherkin event, two of the primary vendors who were present for the discussions actually substantiated the previous blog’s comments about basing a security strategy around the business, rather than treating it as a bolt-on.

Both Apstra and NetFoundry have somewhat “challenging” straplines – “intent-based networking” and “Network as a Service meets Connectivity as a Code” respectively – but dig beneath the marketing-ese in each case and you get to some real foundational IT – proper building blocks for the artist formerly known as networking and WAN connectivity. We went through that whole “middleware” phase 20 years ago, but no one really knew what it actually was. Apstra and NetFoundry do, but they don’t call it that. But in both cases, this really is “glue” code – layers that pull networking, apps and services together, and optimise the management and delivery thereof.

Apstra does the “orchestration” job that, again, many vendors once claimed to do, but a) couldn’t and b) didn’t really know exactly what it was they were trying to do in the first place, hence point a). It automates the conversations between the network elements and the life-cycle they create – hence it works as the business itself works – as a flow of information and services that makes a business, well, a business. And optimises it in turn. Equally, NetFoundry plays an equivalent role outside of the data centre, controlling the destiny of applications – essentially turning apps into secure private networks. Years ago, many of the major networking vendors were talking up the idea of “the application is the network”. But it wasn’t – not back then. By that they simply meant some prioritisation mechanisms. Ones that no one ever enabled on their routers and switches. This is proper embedded code so, again, it is a fundamental building block.
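If “glue code” sounds abstract, here’s a toy illustration of the declarative idea – intent in, device config out. The intent schema and switch names are invented for the purpose, and Apstra’s real model is vastly richer, but the principle is the same:

```python
# Toy intent-based "glue" sketch: declare the intent once, render per-device
# configuration from it. Schema and switch names are hypothetical.

INTENT = {
    "vlan": 42,
    "name": "payments",
    "members": ["leaf1", "leaf2"],  # hypothetical leaf switches
}

def render_config(device: str, intent: dict) -> str:
    """One declarative intent becomes concrete device-level config."""
    return (
        f"! {device}: generated from intent '{intent['name']}'\n"
        f"vlan {intent['vlan']}\n"
        f" name {intent['name']}\n"
    )

if __name__ == "__main__":
    for device in INTENT["members"]:
        print(render_config(device, INTENT))
```

Change the intent, and every affected device’s config changes with it – which is precisely the life-cycle automation point.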

I like this approach. I can throw another layer into the mix too – the control of the basic business flow process, and that is thingamy.com – if you haven’t checked these guys out, then do. It’s what turns the likes of ERP and related processes into something designed to work in 2020 and beyond, not 1980.

Who knows – IT might actually work one day -))))

October 22, 2019  10:11 AM

Different Clouds Don’t Like To Cluster…

Steve Broadhead

Had another interesting cloudy discussion last week with SIOS – one of IT’s better-kept secrets, certainly in the UK.

Much of that is down to the sensitivity of what it does and who it does it for. The vendor deals with HA – high availability (and disaster recovery) – and companies aren’t too keen to admit that their products and services, data and applications aren’t necessarily always available. Like a bank saying – well, we might have your money still in our reserves, if our latest investments haven’t gone wotsits up. Of course, that does happen, but they don’t actually tell you…

Where the HA scenario has got especially interesting for SIOS – and its customers – is with the advent of t’cloud (as we call it in Yorkshire, where we see many). HA even in a single-CSP scenario is not straightforward – HA was always an OnPrem scenario, even when failover time was so long that you could go on holiday while it was happening. At least you could see it happening in front of you (assuming you were in the machine room/data centre and not on holiday). But within the cloud there is no shared storage. And what if you are putting your golden data/apps eggs in several cloudy baskets – say AWS, Azure and Google? They are certainly not interested in ensuring that if and when their service fails, another CSP’s kicks in! So, what SIOS provides is HA clustering regardless of app/data, OS or CSP/server location. It can emulate physical disk resources to simulate those physical shared-storage clusters, wherever the resource and the clustered workload actually sit.
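For the curious, the failover logic reads something like this bare-bones sketch – node addresses and thresholds are invented, and SIOS’s actual clustering (block-level replication, quorum and all the rest) is far more involved, but the shape of the problem is recognisable:

```python
"""Bare-bones cross-cloud failover sketch: nodes in different CSPs,
no shared storage, so monitor heartbeats and promote the standby.
Addresses and thresholds are hypothetical."""
import socket
import time

NODES = {"primary": ("10.0.0.10", 9000), "standby": ("172.16.0.10", 9000)}
MISSED_LIMIT = 3  # consecutive missed heartbeats before failover

def alive(addr: tuple, timeout: float = 1.0) -> bool:
    # Heartbeat = a simple TCP connect to the node's cluster port
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

missed = 0
while True:
    if alive(NODES["primary"]):
        missed = 0
    else:
        missed += 1
        if missed >= MISSED_LIMIT:
            print("primary unreachable - promoting standby and remapping the "
                  "replicated (emulated shared) disk to it")
            break
    time.sleep(2)  # heartbeat interval
```

The hard part, of course, is everything hidden behind that final print statement – which is rather the point of buying it rather than building it.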

Of course, what this means is that no two customer scenarios will be the same. Good luck chaps! But seriously, in the world of “value add” it gives partners enormous flexibility in managing those scenarios and providing that value add. Put it this way – with my old IT hat on, I would not want to attempt a DIY alternative to what SIOS offers. Another side effect of the SIOS approach is that it can save massively on licensing costs, as you’re paying for less physical resource.

Looking forward to seeing some of this tech in action in the future, so watch this space!


October 22, 2019  9:00 AM

Secure Bacon Butties With Gherkin But No Gherkins

Steve Broadhead

Had my first visit to The Gherkin recently at a “mini” Netevents security briefing in London.

I can certainly recommend the brioche-bun bacon butties with a view of the London rain from the 38th floor. What was different about this Netevents is that we had real people there – i.e. not just tech IT pros but guys who actually have to work directly with people and make stuff work. It always makes it more interesting when you get to hear from the coal face (I was there in the 80s, I know what it’s like). Not least Brian Lord, who formerly had the simple task of running GCHQ’s security but now fronts an independent consultancy, PGI – so he’s still at the sharp end.

One of the realistic messages to come out of the briefings was that what is key is not how you’re protecting your crown jewels, but which crown jewels you should be protecting. In other words – what’s the one thing you would rescue from a burning building that would cost you your business/life? For many industries – retail, transport, manufacturing, banking etc – that answer is obvious – customer data. Have that breached and you could be accountable for billions. Throw in the casual “fact” that only about 20% of investment in IT security is actually put to use, and you do wonder why so many start-up vendors in this sector still focus on how their tech protects and not what it is actually protecting in the first place.

Not that this is a new message, but it’s one the start-up vendors especially need to take seriously. For every one that makes headline news with a $$$$ acquisition, many others quietly fade away and die. At the end of his panel debate, Brian asked “what will we be talking about in cyber security in five years’ time?” Methinks, exactly what we were talking about in the Gherkin, since that’s what we were talking about five years previously… But then that’s IT – it is cyclical. It keeps people in jobs, just like manufacturing new and totally unnecessary features in cars, to lure people into trading in a perfectly usable vehicle and spending money on features they don’t need. Mind you, my parents used to do the same thing with their three-piece suites; and they were all still made out of Dralon…

One of Brian’s key focus areas – unsurprisingly – is government; the biggest cyber target of all. So why is the UK government spending its entire time working out Brexit backstops instead of protecting its eBorders – discuss! Maybe we should all invest in cyber criminals -)))

The final clear point from the excellent discussions was that – still – security is not aligned with the business process. Back to the car analogy – it’s like having your garage a mile away from the house. OK, so in Dartmouth that’s normal but… I’ve been doing some background work with an old IT buddy, Roger Green, on this very subject. It’s simple enough – strategy comes before technology. Just when are companies actually going to adopt this approach? I guess we’ll be talking about that five years from now…


August 21, 2019  1:27 PM

Getting A Foot In The Door In Europe

Steve Broadhead

Had some very interesting conversations in the past few weeks with a number of US-based vendors, primarily across the security and optimisation sectors, with one commonality – established in the Americas but significantly less so in UK/EMEA.

The other issue for these companies currently lies in the uncertainty surrounding the UK and the dreaded “B” word; ideally these US vendors want an HQ in an English-speaking location – well, they do share some words – but not one that might be stranded from the rest of Europe (and the world).

A perfect example is Cybera, established enough in North America in the SD-WAN/WAN edge markets to be rated number one by Gartner in the small footprint retail WAN use case, but largely unknown in EMEA. And here’s the point – a WAN edge tech that excels in the branch/small office, SOHO, SMB and related environments is surely tailor-made for Europe, given its proliferation of distributed, small footprint locations favoured by so many companies – not least retail, banking, insurance…

Even then, as Roger Jones from Cybera explained to me, it’s not a simple 1:1 solution mapping from US to UK company equivalents. For example, in retail outlets, the US is way behind in terms of adopting contactless payment tech, so it’s not a case of a “one size fits all” solution. Equally, however, that opportunity is just as live in EMEA as it is in the US. For a lot of these vendors, some kind of “foot in the door” approach is a great way of making the first step (pun intended?) towards establishing a customer base in a new region. Cybera, like clients of mine I’ve worked with in the past, such as Aritari, has a great “foot in the door” approach in the form of an overlay solution which avoids the need to convince the customer to uproot their existing investment (which, let’s face it, they won’t!) but simply adds value to what they already have. Of course, down the line, the idea is indeed to replace their existing infrastructure but – shhh – I didn’t say that.

Meantime, I’m hoping to see more of the Cybera tech and monitor its EMEA footprint expansion, in spite of the current economic uncertainty. Another case of “watch this space”…


July 18, 2019  9:48 AM

Juggling Might Not Have Changed, But Load-Balancing Has!

Steve Broadhead

I recently completed a report for a long-time client, Kemp Technologies, in the area formerly known as L-B/ADC – i.e. Load-Balancing/Application Delivery Control.

It really hit home, during the testing, just how much this “technology” has changed. For starters, we didn’t ever really talk about the actual technology, other than understanding the underlying architecture/engine and how/why it all works. And that’s the point, because nor does the customer nowadays. Which is a good thing. In days of yore, when knights and dragons mixed company and Load-Balancers were simply big lumps of tin with a finite lifespan, the IT guys needed to understand the capacity of said box, how many could be bolted on and – if they got the sales guy drunk enough – when they’d realistically need a forklift upgrade to the next platform.

You then – as the customer – had to either architect the whole thing yourself, or spend many more $$$ or £££ or €€€ on consultation/services to get it done. And that was fine – at the time. But it was both expensive and limited. Now – as the customer – you simply need to be aware of what data and applications need to be accessed by whom (not even where, per se) and have an approximate idea of what data, apps and users you are going to be adding (or subtracting) in the future, and you just basically plug in – i.e. access a web-based console – and go. Well, it’s not quite that simple, but not far off. And there’s no programming or complex rules creation involved – i.e. no extensive training required. So you don’t spend a gazillion bringing a team of SEs up to speed, who then bugger off to another company offering more money, shortly afterwards…
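To give a flavour of that “plug in and go” idea, here’s a sketch of declaring a virtual service – describing what should be reachable, not how it’s wired. The management endpoint and field names are invented placeholders, emphatically not Kemp’s actual API:

```python
"""Declarative load-balancer config sketch (hypothetical API)."""
import json
import urllib.request

# Describe the service: where clients land, where the app instances live
virtual_service = {
    "name": "webapp",
    "listen": "203.0.113.10:443",
    "pool": ["10.0.1.11:8443", "10.0.1.12:8443"],
    "health_check": "/healthz",
}

req = urllib.request.Request(
    "https://lb.example.com/api/services",  # hypothetical management endpoint
    data=json.dumps(virtual_service).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # would apply it against a real endpoint
print(json.dumps(virtual_service, indent=2))
```

No capacity planning, no forklift upgrades – the fabric works out the placement and scaling underneath.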

In other words, it’s very much a win-win scenario. In a hybrid cloud/OnPrem world it is all but impossible to know where your data and apps are, so it’s important that you don’t actually need to care about this -)

Anyway, enough bantering – please do download the report and understand what I’m talking about here. No boxes, no “use by dates”, just optimisation – as it was always meant to be.

https://kemptechnologies.com/reviews/broadband-testing-kemp-ax-fabric/


June 25, 2019  12:39 PM

There’s AI and, er, AI….

Steve Broadhead

It’s hard to recall a recent presentation from a vendor that didn’t include the AI or Machine-Learning buzz-phrases.

I’m not just talking IT here – coffee vending machines probably also incorporate some such logic; “by monitoring your coffee drinking profile, we are confident in pre-selecting your drink for you with total accuracy”. Actually, for me that’s not a complex algorithm – black Americano every time, just in case you’re buying…

In the world of Cyber Security though, it is fair to say that we’ve largely had our fill of its overuse – that and the “one size fits all” security story. No, it doesn’t, unless you are a company the size of – say – a Symantec, which has one of everything (and it’s all been designed to work on one platform, but that’s another story for another blog…). So it was refreshing recently to speak with a company called DataVisor in London, that a) doesn’t claim to do everything – the company focuses on fraud – and b) does genuinely use AI (it correlates activity across accounts, so an anomaly is logically easy to spot). Of course, this can be done manually if you are one person with one account, but DataVisor has processed in the region of a trillion events across 4.2 billion accounts. That’s a tough job to do with a network monitor and a spreadsheet…
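For the technically curious, the correlation trick looks roughly like this – unsupervised clustering surfacing a ring of accounts behaving suspiciously identically, no labels required. The features and thresholds are invented, and DataVisor obviously operates at a vastly larger scale, but the principle holds:

```python
"""Toy fraud-ring detection via unsupervised clustering (illustrative)."""
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Normal accounts: diverse behaviour (events/day, avg spend, account age)
normal = rng.normal(loc=[5, 40, 300], scale=[3, 25, 150], size=(500, 3))
# A fraud ring: 30 accounts behaving near-identically
ring = rng.normal(loc=[50, 9.99, 2], scale=[0.2, 0.01, 0.2], size=(30, 3))

accounts = np.vstack([normal, ring])
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(accounts)

# Dense clusters (label >= 0) of tightly-correlated accounts are suspects;
# genuinely diverse accounts end up as noise (label -1)
for cluster in set(labels) - {-1}:
    members = np.where(labels == cluster)[0]
    print(f"cluster {cluster}: {len(members)} suspiciously correlated accounts")
```

One account misbehaving is an anomaly; thirty accounts misbehaving identically is a business model.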

Another important point to make is that, with security breaches in general, neither companies nor individuals think first about the impact of a breach – they just throw money at prevention instead. Here’s the news: you can’t stop ALL attacks. Be prepared. So DataVisor focuses on impacts such as reputational damage, liability, the actual financial loss likely to be incurred, and so on. Common Sense as a Service (CSaaS).

Unsurprising, therefore, that the company is both growing rapidly and has some high-profile customers across several verticals, from financial services to SM and mobile/gaming. And this despite hardly anyone having heard of the company. Methinks that’s very likely to change in the very near future – definitely a vendor to keep an eye on (using AI or the manual method).


May 29, 2019  11:15 AM

What’s In A Number? Er, Lots Of Packets Per Second…

Steve Broadhead

I remember when first testing Gigabit Ethernet (Packet Engines and Foundry Networks for the record) and thinking: “how do we harness this much bandwidth – maybe take a TDM approach and slice it? After all, who needs a gig of bandwidth direct to the desktop?”

Well, that was still a relevant point as only servers were really being fitted with Gigabit Ethernet NICs in ’98/’99 and even then they could rarely handle that level of traffic. Especially the OS/2 based ones… However, as a backbone network tech it was top notch. In a related metric, testing switches and appliances such as Load-Balancers and getting throughput in packets per second terms in excess of 100,000 pps was mad, crazy – the future! So, Mellanox (newly in the hands of nvidia – trust me, the latter do more than make graphics cards!) has just released the latest of its Ethernet Cloud Fabric products with lots of “Gigabit Ethernet” in it – i.e. 100/200/400 GbE switches. I’ll match that and raise you 400…

And forget six figure pps throughput figures. How about 8.3bn pps instead? You can’t test that with 30 NT servers -)
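For anyone who fancies checking the arithmetic, here’s the standard line-rate sum – a minimum-size Ethernet frame plus preamble and inter-frame gap is 84 bytes on the wire:

```python
# Theoretical maximum packets per second at minimum Ethernet frame size:
# 64-byte frame + 8-byte preamble/SFD + 12-byte inter-frame gap
# = 84 bytes (672 bits) per frame on the wire.
WIRE_BITS = (64 + 8 + 12) * 8

def max_pps(link_gbps: float) -> float:
    return link_gbps * 1e9 / WIRE_BITS

for speed in (1, 10, 100, 400):
    print(f"{speed:>4} GbE: {max_pps(speed) / 1e6:,.2f} Mpps")

# 1 GbE ~1.49 Mpps; 400 GbE ~595.24 Mpps - so 8.3bn pps is roughly
# fourteen 400 GbE ports running flat out on minimum-size frames.
```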

If you’re wondering who needs that level of performance in the DC, think no further than the millions of customers being serviced by AWS, Azure and all the other CSPs – that’s a lot of traffic being handled by the cloudy types on your behalf. And then there’s research, oil and gas exploration, FinTech, the M&E guys and, not least – given the nvidia ownership – extreme virtual reality. Bring on Terabit Ethernet…

Meantime, I think we’ll need a lot of 5G deployment at the other end of the data chain! But that’s another discussion for another day. And whatever did happen to WiMAX?


May 17, 2019  9:11 AM

Socks And IoT – What Do They Have In Common?

Steve Broadhead

So – another year and another Netevents; lots of old faces (not least mine!) and a few new ones. And it’s kind of the same with topics – understandably – as there’s only so much you can talk about in what is an increasingly condensed IT world.

Think about it – let’s take networking as an example. Well, in terms of connectivity, a LAN is Ethernet, other than a few SANs still around. And it’s not even referred to as a LAN any longer, just networking. When I started, I used to do multi-product tests, and each was a DIFFERENT type of LAN technology, not just six flavours of Ethernet. WANs – there were any number of different leased-line providers and technologies: TDM/fixed copper leased lines, X.25, Frame Relay, dial-up, ISDN and, in the higher echelons, ATM, SONET/SDH and finally MPLS.

And now people simply talk about cloud and Internet, public and private, OnPrem or OffPrem. “Cloud” also increasingly acts, in terms of being a single Service Provider point solution, as a replacement for other xSPs, SIs, VARs – you name it. Only security is still out there with a gazillion different flavours and approaches. Except that the world of IT is fed up with the complexity, time and cost of the validation and deployment (ongoing barely describes it) processes involved – just how DO you evaluate 200 different products that all claim to be an essential part of your security framework? – so, if a CSP or equivalent can provide the integrated security part of the deal too, it’s easy to see why that’s an attractive option. And that’s what we’re seeing now – solutions delivered that are integrating all those components in a more tightly-wrapped fashion than SIs could deliver of old. Moreover, some of the vendors at this year’s Netevents – such as Versa and NetFoundry – are combining elements (in this case SD-WAN and security and variations thereof) that are not merely bolt-ons to simply tick boxes, but are actually designed to be as good as any standalone (additional) alternative tech that is out there.

So, the market, in terms of options, is shrinking. Mergers and acquisitions are only accelerating this move. OK, so there are 83 billion tech start-ups a year (it is actually in the millions, globally, across all tech industries) but how many survive past the first (attempted) round of funding? And how many are REALLY offering something new and worthwhile? Think back to the early 80s, before PCs and networking became commonplace. How many options were there then? IBM, ICL, Marconi, DEC, Olivetti, Prime, Stratus, Tandem etc etc – but all essentially offering the same solution concept: a big box that powers lots of screens and keyboards. And if you wanted “WAN” connectivity in the UK, you simply went to BT and paid a fortune for the privilege. So, AWS, Azure, VMware (being a unique element) and the rest are simply the “mainframe” providers of today. And all the other vendors are suppliers into the chain. But maybe that will give IT the stability it most definitely needs right now.

That said… while there can be too much choice, equally there can be too little. Back to those Netevents debates – if we accept that a “one size fits all” sock actually doesn’t (so the same sock really will fit from sizes 6-11???) then why would IT accept that a single technology – IoT – is equally suitable for kettles and power stations? Yes, both typically involve heating water, but that’s the only commonality. So, if you can hack a kettle, you can hack a power station. Is that what we’re really saying here?

So, much progress, but much more required… Well – that’s a great reason to have more IT events -)


February 12, 2019  5:17 PM

Scoop! Symantec Acquisition Makes Sense Of Software Defined Perimeter Security…

Steve Broadhead

OK – so that’s probably not the perfect headline to be announced by anyone who whistles through their teeth…

Been having some interesting conversations recently around the idea of zero trust security; primarily, why was there ever anything but zero trust in the first place?

An obvious answer is that it suited the limitations of an old-school security architecture. A basic firewall, by definition, lets nothing or everything through it. So, unless you provide an element of trust, it would simply have blocked EVERYTHING. Pretty secure – at the time – but not massively productive… And, honestly, I’ve never locked myself out of the network by accident when testing firewalls in the past – just a squalid rumour -)
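To illustrate the point, here’s a toy default-deny policy – nothing passes unless a rule explicitly grants trust, which is exactly the hole-carving the old model forced on you. The rules are invented for illustration, obviously not anyone’s production firewall:

```python
"""Toy default-deny firewall policy evaluation (illustrative rules)."""

RULES = [
    # (source prefix, destination port, verdict) - first match wins
    ("10.0.0.", 443, "ALLOW"),  # internal clients to HTTPS
    ("10.0.0.",  53, "ALLOW"),  # internal clients to DNS
]

def evaluate(src_ip: str, dst_port: int) -> str:
    for prefix, port, verdict in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return verdict
    return "DROP"  # default deny: no trust granted, nothing through

print(evaluate("10.0.0.7", 443))     # ALLOW - explicitly trusted
print(evaluate("203.0.113.9", 443))  # DROP - everything else blocked
```

Every ALLOW line is a little parcel of trust punched through the wall – and every one is something zero trust says you should be justifying continuously, not once at install time.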

Now, of course, that ‘perimeter’ is bypassed c/o any number of alternative ways to “leave the building”, via Azure, AWS or whatever. VPNs obviously provide secure paths between two given endpoints, but how many people do you know who say “I love my VPN connection”? That’s not to say there aren’t great solutions out there – I’ve worked with some of them – but then there are the others that we won’t air in public…

All of which makes today’s conversation with Symantec, and its acquisition of Luminate, all the more interesting. I had a rather excellent update chat with Symantec very recently, where the slimmed-down, focus-sharpened, platform-based (Integrated Cyber Defense) approach it now employs makes huge amounts of sense. Many times in this blog I’ve talked about the confused security industry and even more confused IT decision-makers within the enterprise, scratching their heads while thinking they need to acquire and integrate 14 products from eight different vendors into their existing strategy and setup.

We all know this does not – and cannot – work. Which is why a platform-based approach is the only way forward. Like a house built with no foundations, a security strategy based around loosely tying several arbitrary products together does not stop the big bad wolf from blowing the house down.

So, Luminate could be classed, I guess, as the latest building block in this reconstruction project (which has had full planning permission granted). The idea behind Luminate is to create a zero trust application access architecture without traditional VPN appliances. It is designed to securely connect any user from any device, anywhere in the world to corporate applications, whether OnPrem or in the cloud, while cloaking all other corporate resources. And there are no agents – and, ask any football manager, no one likes agents if they can get away without them! Again, rather like VPNs, I’ve worked with agent-based technologies that are so transparent they work perfectly. And then there are the others – AKA “how intrusive can you get?”. So, in terms of speeding up on-boarding of applications securely, the Luminate approach makes a lot of sense, but even more so as part of an integrated platform, a point made by Gerry Grealish, Head of Product Marketing – Cloud & Network Security Products – at Symantec.
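For a flavour of the pattern – not Luminate’s actual code, and the token check and addresses are invented placeholders – here’s a minimal identity-aware broker: unauthenticated callers can’t even tell the app exists, authenticated ones get proxied through:

```python
"""Minimal zero-trust access broker sketch (hypothetical token/addresses)."""
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKENS = {"demo-token"}  # stand-in for a real identity-provider check

class Broker(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "").removeprefix("Bearer ")
        if token not in VALID_TOKENS:
            # Cloaking: unauthenticated callers see nothing but a 404
            self.send_error(404)
            return
        # A real broker would proxy to the OnPrem/cloud app here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"proxied to corporate app\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8443), Broker).serve_forever()
```

No inbound firewall holes, no fat VPN client – access is granted per user, per app, which is rather the whole point.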

It also sounds like a good test project in the making -)

