Networks Generation


February 3, 2020  3:30 PM

Be Prepared For Infrastructure Change…

Steve Broadhead

As I mentioned in my previous blog post, digital transformation has hit crunch time, and then some.

I recently reported on an IT event in Central Londinium – an LNETM event looking at workplace migration with a finance audience focus, one of the areas most affected by digitisation – which majored on the issues of OS and app migration. The report is here: http://bit.ly/LNETMReport and it’s a genuine insight into the real world right now; financial giants (with the names obscured to protect the innocent – and the not so innocent!) having to cope with a stupendous amount of change that could make or break their companies, however illustrious the history.

Key takeaways from the event very much mirrored my recent conversations with “intent aware” Apstra, which was the focus of the previous blog entry – I love it when you finally get some consensus in IT direction – such as:

  • One of the biggest challenges to migration is not necessarily technology-related, but the culture change, notably at board level.
  • The traditional 3-5-year migration window is no longer applicable, but is difficult to move away from as migration budgets effectively shrink.
  • DevOps changes are forcing a rethinking of behaviour around the traditional IT infrastructure.
  • At board level, more collaboration is required, notably between CEO and CIO roles.
  • Change is essential – mandatory – but doing it badly can be catastrophic to a business, especially in the financial services sector. As part of that change, risk, costs and competitive advantage all need to be managed in equal measure.
  • Using the right technology (like Rimo3, Juriba and Lakeside who presented at that event – see the report link for more details) is essential to enable companies to migrate successfully; the manual approach is simply not a cost or time effective option.

Having only just completed my most recent analysis of the aforementioned Rimo3’s OS/app migration automation platform:

https://rimo3.com/wp-content/uploads/2019/11/Broadband_Testing_Rimo3_ACTIV_Report.pdf

– the feedback from top-level guys at the coal face was both rewarding and very reassuring, if a little scary!

What IT departments are dealing with right now is nothing new; migration and the need to automate where possible have been ongoing requirements for decades. The difference now is that – even looking solely at the Windows 10 OS environment – the migration issue isn’t a “must get around to it sometime, otherwise we might see problems” scenario but a MUST DO one. Companies don’t have the option.

At the same time, the whole of the IT infrastructure really is changing – and very rapidly – from DevOps, to SecOps, to cloud and beyond. Vendors are moving forward regardless, and companies are simply having to ride the wave with them, whatever it takes to do so. What it does take is making use of relevant third-party products and services; otherwise the incredibly tight timescales will render any “manual” attempt at such a transformation impossible. This is no idle, “FUD factor” vendor threat: we are seeing daily examples of businesses – many historically very powerful and seemingly indestructible – falling by the wayside as they fail to keep up with the digital transformation so essential to ongoing success; retail and banking being two such examples.

Businesses need to fully accept the transformation requirement, identify which vendor partners can help with that migration and commence the process – now! Automation is clearly a critical element of that migration, as is the re-alignment and re-allocation of IT structure and resource.

Anyway, I urge you to read the event report at the URL given earlier for a full appreciation of how critical the timing is right now. Regardless of everything else that is going on in the world!

 

February 3, 2020  11:56 AM

It’s Not Just About Brexit, Superbowl And Footie Transfer Deadlines…

Steve Broadhead

So, the past few days have seen some serious deadlines and finals come and go.

And, no, we’re not going to talk about the rugby (yet). In the world of IT, the “Brexit” of our industry, AKA “Digital Transformation”, is really rearing its head again right now, and for good reason. Both a recent event I reported on in London (see next blog post) and an equally recent briefing with Apstra – they of intent networking – have reiterated the real need for companies to adapt their IT to the modern world.

This isn’t some hyperbole – look at the state of the retail industry in the UK, for example – one big name after another goes down the digital pan. And traditional banks and finance houses are under more pressure than ever to compete with the new wave of online-based rivals.

Apstra quoted Gartner who – let’s face it – are hardly unknown for somewhat over-excited predictive passion, but in this case they really are hitting the target: “Through 2021, organizations that fail to adjust network funding and operational practices will be three times more likely to fail in their digital business transformation” – which, in turn, means they will be the next Toys R Us, Debenhams or whatever.

At the heart of the transformation are two key elements – IT architecture and automation. And, yes, we are revisiting here – automation was a heady topic for me in testing mode in the 90s, and architectural transformation and orchestration was the big thing in the early 2000s. The massive difference between then and now – which is where the likes of Apstra have a major role to play – is that, back then (and then some), IT wasn’t ready to be reinvented; the technology just wasn’t up to scratch. But it is now. Moreover, other areas of IT beyond the networking team, such as DevOps and SecOps, have simply gone out on a limb and reinvented themselves, so now the IT infrastructure is playing catch-up, in an attempt to regain control.

As an example of what I’m talking about, transformation means IT accelerating and managing as much of that process as possible – kind of where IT helps itself… With its latest software release, for example, Apstra has announced the wackily titled “Intent Time Voyager”, which is not based on Dalek technology, nor any inspiration from Herbert George Wells, but is a massive improvement on that staple tool of IT – rollback and roll-forward. The Apstra OS (AOS) creates a snapshot of the entire network configuration for every committed change, and admins can store up to five snapshots of known-good configurations (i.e. not just one). Moreover, these are vendor-neutral configurations we’re talking about – the whole network – which can then be restored to any particular snapshot with three clicks, Apstra is claiming. Which is nice.
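
For the non-time-travellers among you, the underlying idea is simple enough to sketch: a bounded history of whole-network, known-good snapshots that you can restore on demand. Here’s a toy illustration in Python – my own simplification, emphatically not Apstra’s actual code (the `ConfigStore` name and all the details are invented):

```python
from collections import deque

class ConfigStore:
    """Toy illustration of bounded known-good snapshots with rollback.

    Not Apstra's code - just the general pattern: every committed change
    captures the entire (vendor-neutral) network config, and only the
    last N known-good snapshots are retained.
    """

    def __init__(self, max_snapshots: int = 5):
        self.snapshots = deque(maxlen=max_snapshots)  # oldest drops off
        self.live_config: dict = {}

    def commit(self, new_config: dict) -> None:
        """Apply a change and snapshot the whole configuration."""
        self.live_config = dict(new_config)
        self.snapshots.append(dict(new_config))  # shallow copy; fine for a flat dict

    def restore(self, index: int) -> dict:
        """Roll the network back (or forward) to a stored snapshot."""
        self.live_config = dict(self.snapshots[index])
        return self.live_config

store = ConfigStore()
store.commit({"leaf1": "v1", "spine1": "v1"})
store.commit({"leaf1": "v2", "spine1": "v1"})
store.restore(0)  # back to the first known-good state
```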

This is precisely the kind of “transformation acceleration” tool I’m talking about – one that lets you move the IT infrastructure forward with a great degree of control, automation and speed. It’s the kind of technology that might just keep more retail giants in business; as we all know, “every little helps”…


November 28, 2019  12:51 PM

Taming Windows As A Service – Making The Incompatible Compatible…

Steve Broadhead

With so much IT focus on the cloudy landscape, it’s easy to forget that what is happening at the desktop is still the touchy-feely point of IT contact for the users themselves.

And much of that still revolves around Microsoft and the age-old Windows platform. Except that, with Windows 10, Microsoft has forever changed the Windows landscape, it now being provided as Windows as a Service. This fundamentally changes the ongoing ownership of an estate of Windows-based endpoints, not least because Microsoft releases two feature updates a year – the first to screw the platform and the second to fix it -)))) Well, sort of… Regardless, what it means is that, every time there’s an update, there are possible compatibility and compliance issues that simply won’t resolve themselves without due analysis and remediation.

But this is not a trivial operation, especially with a large application estate. The old cliché about sufficient monkeys and typewriters producing the complete works of Shakespeare is actually complete drivel (though Measure For Measure isn’t actually that good anyway), and the same applies to an IT team and compatibility testing. There simply isn’t enough time between updates to manually test and remediate issues. End of.

So, recently I’ve been working with a UK vendor, Rimo3 – https://rimo3.com/ – which has developed a cost-effective, timely alternative – ACTIV – using automation to massively reduce those test and remediation times. And it works. Moreover, the company is looking at VDI and virtualised environments in the same way, so a complete desktop estate could be managed in an automated fashion. What’s key about all this is that we’re not talking about some fanciful future solution for a potential problem, but something which resolves a major problem that exists right now. Can a company – at least of a given size – really manage its application portfolio manually, especially when it probably doesn’t even know the extent of that portfolio in the first place? You’d be amazed how few companies really know just how many applications are in their estate. Most don’t even get close…
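
To give a flavour of the principle – and this is strictly my own simplified sketch of the general approach, not ACTIV itself (the package names and silent-install check are invented stand-ins) – automated compatibility testing boils down to a loop like this:

```python
# Sketch of automated compatibility smoke-testing across an app estate.
# Illustration of the general approach, not Rimo3's product: install each
# package on the new OS build, check the outcome, record pass/fail.
# Windows-only as written (msiexec); real tooling would also launch the
# app, capture crashes and suggest remediation.
import subprocess

APP_ESTATE = ["app1.msi", "app2.msi"]  # in reality, thousands, discovered first

def smoke_test(package: str) -> str:
    """Install silently and report pass/fail."""
    result = subprocess.run(
        ["msiexec", "/i", package, "/qn"],  # silent MSI install
        capture_output=True,
    )
    return "pass" if result.returncode == 0 else f"fail ({result.returncode})"

results = {pkg: smoke_test(pkg) for pkg in APP_ESTATE}
for pkg, outcome in results.items():
    print(pkg, "->", outcome)
```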

Anyway – enough jabbering on – don’t want to get in the way of Black Friday/Christmas shopping, so check out the report itself to fully appreciate the scale of what is being achieved here:

https://rimo3.com/wp-content/uploads/2019/11/Broadband_Testing_Rimo3_ACTIV_Report.pdf

October 22, 2019  11:35 AM

More From The Gherkin

Steve Broadhead

More Gherkin, and IT as a business strategy: back at the Gherkin event, two of the primary vendors present for the discussions actually substantiated the previous blog’s comments about basing a security strategy around the business, rather than treating it as a bolt-on.

Both Apstra and NetFoundry have somewhat “challenging” straplines – “intent-based networking” and “Network as a Service meets Connectivity as Code” respectively – but dig beneath the marketing-ese in each case and you get to some real foundational IT – proper building blocks for the artist formerly known as networking and WAN connectivity. We went through that whole “middleware” phase 20 years ago, but no one really knew what it actually was. Apstra and NetFoundry do, but they don’t call it that. In both cases, though, this really is “glue” code – layers that pull networking, apps and services together, and optimise the management and delivery thereof.

Apstra does the “orchestration” job that, again, many vendors once claimed to do, but a) couldn’t and b) didn’t really know exactly what it was they were trying to do in the first place – hence point a). It automates the conversations between the network elements and the life-cycle they create – hence it works as the business itself works – as a flow of information and services that makes a business, well, a business. And optimises it in turn. Equally, NetFoundry plays an equivalent role outside of the data centre, controlling the destiny of applications – essentially turning apps into secure private networks. Years ago, many of the major networking vendors were talking up the idea that “the application is the network”. But it wasn’t – not back then. By that they simply meant some prioritisation mechanisms – ones that no one ever enabled on their routers and switches. This is proper embedded code so, again, it is a fundamental building block.
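
If “glue code” still sounds woolly, the essence of the intent-based approach is easy to sketch: declare the state you want, continually diff it against what the network actually reports, and remediate the gap. A toy illustration (mine, not Apstra’s code – all the names and data are invented):

```python
# Toy intent-reconciliation loop - the essence of "intent-based" operation.
# Declare desired state; diff against observed state; remediate the drift.
desired = {"leaf1": {"vlan": 100}, "leaf2": {"vlan": 100}}

def observe() -> dict:
    """Stand-in for polling real devices (hypothetical data)."""
    return {"leaf1": {"vlan": 100}, "leaf2": {"vlan": 99}}  # leaf2 has drifted

def reconcile(desired: dict, observed: dict) -> list:
    """Return the remediation actions needed to close the gap."""
    actions = []
    for device, intent in desired.items():
        if observed.get(device) != intent:
            actions.append((device, intent))
    return actions

for device, intent in reconcile(desired, observe()):
    print(f"remediate {device}: apply {intent}")  # push config in real life
```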

I like this approach. I can throw another layer into the mix too – the control of the basic business flow process, courtesy of thingamy.com – if you haven’t checked these guys out, then do. It’s what turns the likes of ERP and related processes into something designed to work in 2020 and beyond, not 1980.

Who knows – IT might actually work one day -))))


October 22, 2019  10:11 AM

Different Clouds Don’t Like To Cluster…

Steve Broadhead

Had another interesting cloudy discussion last week with SIOS – one of IT’s better-kept secrets, certainly in the UK.

Much of that is down to the sensitivity of what it does and who it does it for. The vendor deals with HA – high availability (and disaster recovery) – and companies aren’t too keen to admit that their products and services, data and applications aren’t necessarily always available. Like a bank saying: well, we might still have your money in our reserves, if our latest investments haven’t gone wotsits up. Of course, that does happen, but they don’t actually tell you…

Where the HA scenario has got especially interesting for SIOS – and its customers – is with the advent of t’cloud (as we call it in Yorkshire, where we see many). HA in a single-CSP scenario is not straightforward – HA was always an OnPrem scenario, even when failover took so long that you could go on holiday while it was happening. At least you could see it happening in front of you (assuming you were in the machine room/data centre and not on holiday). But within the cloud there is no shared storage. And what if you are putting your golden data/app eggs in several cloudy baskets – say AWS, Azure and Google? They are certainly not interested in ensuring that, if and when their service fails, another CSP’s kicks in! So what SIOS provides is HA clustering regardless of app/data, OS or CSP/server location. It can emulate physical disk resources to simulate those physical clusters across resources, and cluster the workloads accordingly.
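
Conceptually, the cross-cloud part boils down to a cluster whose members health-check each other and promote a standby when the active node’s CSP goes dark, wherever that standby happens to live. A deliberately simplified sketch of that failover logic follows – my own illustration, not SIOS code, and it ignores the hard parts (replication, quorum, split-brain) entirely:

```python
# Simplified cross-CSP failover logic: one active node, ordered standbys,
# promote the first healthy standby when the active node fails.
# Illustrative only - real HA clustering also handles data replication,
# quorum and split-brain, which this deliberately ignores.

cluster = [
    {"name": "node-aws", "role": "active"},
    {"name": "node-azure", "role": "standby"},
    {"name": "node-gcp", "role": "standby"},
]

def is_healthy(node: dict) -> bool:
    """Stand-in for a real heartbeat/health probe."""
    return node["name"] != "node-aws"  # simulate an AWS outage

def failover(cluster: list) -> None:
    active = next(n for n in cluster if n["role"] == "active")
    if is_healthy(active):
        return  # nothing to do
    active["role"] = "failed"
    for node in cluster:
        if node["role"] == "standby" and is_healthy(node):
            node["role"] = "active"  # promote; replication/quorum omitted
            print(f"failover: {active['name']} -> {node['name']}")
            return

failover(cluster)
```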

Of course, what this means is that no two customer scenarios will be the same. Good luck chaps! But seriously, in the world of “value add” it gives partners enormous flexibility in managing those scenarios and providing that value add. Put it this way – with my old IT hat on, I would not want to attempt a DIY alternative to what SIOS offers. Another side effect of the SIOS approach is that it can save massively on licensing costs, as you’re paying for less physical resource.

Looking forward to hopefully seeing some of this tech in action in the future, so watch this space!


October 22, 2019  9:00 AM

Secure Bacon Butties With Gherkin But No Gherkins

Steve Broadhead

Had my first visit to The Gherkin recently at a “mini” Netevents security briefing in London.

I can certainly recommend the brioche-bun bacon butties with a view of the London rain from the 38th floor. What was different about this Netevents is that we had real people there – i.e. not just tech IT pros but guys who actually have to work directly with people and make stuff work. It always makes it more interesting when you get to hear from the coal face (I was there in the 80s, I know what it’s like). Not least Brian Lord, who formerly had the simple task of running GCHQ’s security but now fronts an independent consultancy, PGI – so he’s still at the sharp end.

One of the realistic messages to come out of the briefings was that the key is not how you’re protecting your crown jewels, but which crown jewels you should be protecting. In other words – what’s the one thing you would rescue from a burning building that would cost you your business/life? For many industries – retail, transport, manufacturing, banking etc – the answer is obvious: customer data. Have that breached and you could be accountable for billions. Throw in the casual “fact” that only about 20% of investment in IT security is actually put to use, and you do wonder why so many start-up vendors in this sector still focus on how their tech protects, and not what it is actually protecting in the first place.

Not that this is a new message, but it’s one the start-up vendors especially need to take seriously. For every one that makes headline news with a $$$$ acquisition, many others quietly fade away and die. At the end of his panel debate, Brian asked “what will we be talking about in cyber security in five years’ time?” Methinks, exactly what we were talking about in the Gherkin, since that’s what we were talking about five years previously… But then that’s IT – it is cyclical. It keeps people in jobs, just like manufacturing new and totally unnecessary features in cars to lure people into trading in a perfectly usable vehicle and spending money on features they don’t need. Mind you, my parents used to do the same thing with their three-piece suites; and they were all still made out of dralon…

One of Brian’s key focus areas – unsurprisingly – is government; the biggest cyber target of all. So why is the UK government spending its entire time (not) working out Brexit backstops instead of protecting its eBorders – discuss! Maybe we should all invest in cyber criminals -)))

The final clear point from the excellent discussions was that – still – security is not aligned with the business process. Back to the car analogy – it’s like having your garage a mile away from the house. OK, so in Dartmouth that’s normal but… I’ve been doing some background work with an old IT buddy, Roger Green, on this very subject. It’s simple enough – strategy comes before technology. Just when are companies actually going to adopt this approach? I guess we’ll be talking about that five years from now…


August 21, 2019  1:27 PM

Getting A Foot In The Door In Europe

Steve Broadhead

Had some very interesting conversations in the past few weeks with a number of US-based vendors, primarily across the security and optimisation sectors, with one commonality – established in the Americas but significantly less so in the UK/EMEA.

The other issue for these companies currently lies in the uncertainty surrounding the UK and the dreaded “B” word; ideally these US vendors want an HQ in an English-speaking location – well, they do share some words – but not one that might be stranded from the rest of Europe (and the world).

A perfect example is Cybera, established enough in North America in the SD-WAN/WAN edge markets to be rated number one by Gartner in the small footprint retail WAN use case, but largely unknown in EMEA. And here’s the point – a WAN edge tech that excels in the branch/small office, SOHO, SMB and related environments is surely tailor-made for Europe, given its proliferation of distributed, small footprint locations favoured by so many companies – not least retail, banking, insurance…

Even then, as Roger Jones from Cybera explained to me, it’s not a simple 1:1 solution mapping from US to UK company equivalents. For example, in retail outlets, the US is way behind in terms of adopting contactless payment tech, so it’s not a case of a “one size fits all” solution. Equally, however, that opportunity is just as live in EMEA as it is in the US. For a lot of these vendors, some kind of “foot in the door” approach is a great way of making the first step (pun intended?) towards establishing a customer base in a new region. Cybera, like clients of mine I’ve worked with in the past, such as Aritari, has a great “foot in the door” approach in the form of an overlay solution which avoids the need to convince the customer to uproot their existing investment (which, let’s face it, they won’t!) but simply adds value to what they already have. Of course, down the line, the idea is indeed to replace their existing infrastructure but – shhh – I didn’t say that.

Meantime, I’m hoping to see more of the Cybera tech and monitor its EMEA footprint expansion, in spite of the current economic uncertainty. Another case of “watch this space”…


July 18, 2019  9:48 AM

Juggling Might Not Have Changed, But Load-Balancing Has!

Steve Broadhead

I recently completed a report for a long-time client, Kemp Technologies, in the area formerly known as L-B/ADC – i.e. Load-Balancing/Application Delivery Control.

It really hit home, during the testing, just how much this “technology” has changed. For starters, we didn’t ever really talk about the actual technology, other than understanding the underlying architecture/engine and how/why it all works. And that’s the point, because neither does the customer nowadays. Which is a good thing. In days of yore, when knights and dragons mixed company and Load-Balancers were simply big lumps of tin with a finite lifespan, the IT guys needed to understand the capacity of said box, how many could be bolted on and – if they got the sales guy drunk enough – when they’d realistically need a forklift upgrade to the next platform.

You then – as the customer – had to either architect the whole thing yourself, or spend many more $$$ or £££ or €€€ on consultancy/services to get it done. And that was fine – at the time. But it was both expensive and limited. Now – as the customer – you simply need to be aware of what data and applications need to be accessed by whom (not even where, per se) and have an approximate idea of what data, apps and users you are going to be adding (or subtracting) in the future, and you just basically plug in – i.e. access a web-based console – and go. Well, it’s not quite that simple, but not far off. And there’s no programming involved, no complex rules creation and, therefore, no extensive training required. So you don’t spend a gazillion bringing a team of SEs up to speed, only for them to bugger off to another company offering more money shortly afterwards…
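
None of which means the underlying logic has vanished, of course – it’s just been hidden from you. For the curious, here’s a toy sketch of the sort of decision an ADC makes on every request (my illustration of generic least-connections balancing, not Kemp’s code):

```python
# Toy least-connections load-balancing decision - the kind of logic a
# modern ADC handles for you so you never have to write it yourself.
servers = {"srv-a": 12, "srv-b": 3, "srv-c": 7}  # active connection counts

def pick_server(servers: dict) -> str:
    """Route the next request to the least-loaded back end."""
    return min(servers, key=servers.get)

target = pick_server(servers)
servers[target] += 1  # account for the new connection
print("route request to", target)
```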

In other words, it’s very much a win-win scenario. In a hybrid cloud/OnPrem world it is all but impossible to know where your data and apps are, so it’s important that you don’t actually need to care about this -)

Anyway, enough bantering – please do download the report and understand what I’m talking about here. No boxes, no “use by dates”, just optimisation – as it was always meant to be.

https://kemptechnologies.com/reviews/broadband-testing-kemp-ax-fabric/


June 25, 2019  12:39 PM

There’s AI and, er, AI….

Steve Broadhead

It’s hard to recall a recent presentation from a vendor that didn’t include the AI or Machine-Learning buzz-phrases.

I’m not just talking IT here – coffee vending machines probably also incorporate some such logic; “by monitoring your coffee drinking profile, we are confident in pre-selecting your drink for you with total accuracy”. Actually, for me that’s not a complex algorithm – black Americano every time, just in case you’re buying…

In the world of cyber security, though, it is fair to say that we’ve largely had our fill of its overuse – that and the “one size fits all” security story. No, it doesn’t, unless you are a company the size of – say – a Symantec, which has one of everything (and it’s all been designed to work on one platform, but that’s another story for another blog…). So it was refreshing recently to speak with a company called DataVisor in London, which a) doesn’t claim to do everything – the company focuses on fraud – and b) does genuinely use AI (it correlates activity across accounts, so an anomaly is logically easy to spot). Of course, this can be done manually if you are one person with one account, but DataVisor has processed in the region of a trillion events across 4.2 billion accounts. That’s a tough job to do with a network monitor and a spreadsheet…
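
To make the cross-account point concrete: accounts acting individually look random, while coordinated fraud rings share suspiciously identical behaviour signatures, which jump out once you group across accounts. A toy illustration of the idea (mine, not DataVisor’s algorithm – the events and threshold are invented):

```python
# Toy cross-account correlation: group accounts by a behaviour signature;
# unusually large groups with identical behaviour are fraud-ring candidates.
# Illustration only - the real thing runs unsupervised ML over billions
# of accounts, not a dictionary over five.
from collections import defaultdict

events = [
    ("acct1", ("signup", "device-X", "promo-code-A")),
    ("acct2", ("signup", "device-X", "promo-code-A")),
    ("acct3", ("signup", "device-X", "promo-code-A")),
    ("acct4", ("login", "device-Y", None)),
    ("acct5", ("purchase", "device-Z", None)),
]

groups = defaultdict(set)
for account, signature in events:
    groups[signature].add(account)

RING_THRESHOLD = 3  # arbitrary for the sketch
for signature, accounts in groups.items():
    if len(accounts) >= RING_THRESHOLD:
        print("possible fraud ring:", sorted(accounts), "->", signature)
```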

Another important point: with security breaches in general, neither companies nor individuals think first about the impact of a breach; they just throw money at trying to prevent them. Here’s the news: you can’t stop ALL attacks. Be prepared. So DataVisor focuses on impacts such as reputational damage, liability, the actual financial loss likely to be incurred, and so on. Common Sense as a Service (CSaaS).

Unsurprising, therefore, that the company is both growing rapidly and has some high-profile customers across several verticals, from financial services to SM and mobile/gaming. And this despite hardly anyone having heard of the company. Methinks that’s very likely to change in the very near future – definitely a vendor to keep an eye on (using AI or the manual method).


May 29, 2019  11:15 AM

What’s In A Number? Er, Lots Of Packets Per Second…

Steve Broadhead

I remember when first testing Gigabit Ethernet (Packet Engines and Foundry Networks for the record) and thinking: “how do we harness this much bandwidth – maybe take a TDM approach and slice it? After all, who needs a gig of bandwidth direct to the desktop?”

Well, that was still a relevant point, as only servers were really being fitted with Gigabit Ethernet NICs in ’98/’99 and even then they could rarely handle that level of traffic. Especially the OS/2-based ones… However, as a backbone network tech it was top notch. In a related metric, testing switches and appliances such as Load-Balancers and getting throughput in packets-per-second terms in excess of 100,000 pps was mad, crazy – the future! So, Mellanox (newly in the hands of nvidia – trust me, the latter do more than make graphics cards!) has just released the latest of its Ethernet Cloud Fabric products with lots of “Gigabit Ethernet” in it – i.e. 100/200/400 GbE switches. I’ll match that and raise you 400…

And forget six-figure pps throughput figures. How about 8.3bn pps instead? You can’t test that with 30 NT servers -)
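
If you’re wondering where numbers like that come from, the wire-rate arithmetic is straightforward enough – my back-of-envelope sums below assume minimum-size 64-byte Ethernet frames plus the 20 bytes of preamble and inter-frame gap:

```python
# Back-of-envelope packets-per-second arithmetic for Ethernet line rate.
# A 64-byte minimum frame occupies 84 bytes on the wire once you add
# the 8-byte preamble and 12-byte inter-frame gap.
WIRE_BYTES = 64 + 8 + 12          # 84 bytes per minimum-size frame
WIRE_BITS = WIRE_BYTES * 8        # 672 bits

for gbps in (1, 100, 400):
    pps = gbps * 1e9 / WIRE_BITS
    print(f"{gbps:>3} GbE line rate ~ {pps / 1e6:,.1f} Mpps")

# 1 GbE ~ 1.5 Mpps; 100 GbE ~ 148.8 Mpps; 400 GbE ~ 595.2 Mpps.
# Multiply across the tens of ports on a multi-terabit switch and
# billions of packets per second stop sounding quite so mad.
```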

If you’re wondering who needs that level of performance in the DC, think no further than the millions of customers being serviced by AWS, Azure and all the other CSPs – that’s a lot of traffic being handled by the cloudy types on your behalf. And then there’s research, oil and gas exploration, FinTech, the M&E guys and, not least – given the nvidia ownership – extreme virtual reality. Bring on Terabit Ethernet…

Meantime, I think we’ll need a lot of 5G deployment at the other end of the data chain! But that’s another discussion for another day. And whatever did happen to WiMAX?

