It’s a truism that IT essentially reinvents itself every decade or so.
In some ways this can be rightly seen as a cynical way to repackage the same old tosh and sell it again to confused.com IT departments afraid of having an unsupported IT implementation. But it does also spawn benefits, not least because the vendors themselves are often forced to reappraise their offering and actually improve it.
Vendors reinventing themselves or morphing into their next “Dr Who” regeneration makes a lot of sense, not least because technology sometimes comes first and then there’s the question of what to do with it. And that’s the hard bit. It’s not that they’ve created a solution to a problem that doesn’t exist; it’s simply a matter of finding the right way to take it to market. I’ve seen many vendor clients with top-notch tech take time to get their offering right – Voipex with Aritari and Fedr8 with Green Rain are two that immediately spring to mind (seasonal reference). Another is Densify, with whom I had a catch-up this week and whose offering is based on the technology from their twin, the artist formerly known as Cirba.
When I reviewed the Cirba technology, it was extremely impressive, but equally a tech monster, as in, “bloody hell, this does an awful lot of stuff”. How many times have you seen what was fundamentally being offered as a product and thought: “that should actually be a service offering, based on that underlying technology”?
Never has there been a better scenario for this than in this cloud-adoption era. Companies, xSPs, carriers even, are all second-guessing how to optimise storage, application deployment and delivery across what can be a global delivery mechanism – storage with multiple tiers of virtualisation (and real tears, not virtual ones) and layers of networks to manage and optimise. So Densify takes the guesswork out of it. That’s my summation of its proposal; it’s a far more obvious and appealing offering than trying to explain how the underlying tech works and EVERYTHING that it can do. I’m speaking from experience here. I still get the headaches…
Put simply, Densify’s girlfriend Cloe (Cloud-Learning Optimisation Engine) uses machine-learning to work out what to do (sounds a bit like a Chloe I used to know) and, er, does it, without the user having to work it out for themselves. Bingo! Densify reckons it provides results in the first 48 hours of deployment, with Cloe recommending the best cloud technologies for any given application. The solution also offers multi-cloud support, whereby applications are provided the right resources even when simultaneously using multiple cloud vendors.
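To make the idea concrete, here’s a minimal sketch of what cloud “rightsizing” boils down to: given observed peak utilisation, pick the cheapest instance type that still leaves headroom. The catalogue, prices and 20% headroom figure are all invented for illustration – this is emphatically not Cloe’s actual logic, which involves rather more machine learning than a `min()` call.

```python
# Toy cloud-rightsizing sketch. Catalogue and headroom are made up,
# purely to illustrate the "recommend the best fit" concept.

CATALOGUE = [
    # (name, vCPUs, RAM in GB, $/hour) -- hypothetical instance types
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
]

def recommend(peak_vcpus, peak_ram_gb, headroom=1.2):
    """Return the cheapest type covering peak demand plus 20% headroom."""
    need_cpu = peak_vcpus * headroom
    need_ram = peak_ram_gb * headroom
    candidates = [t for t in CATALOGUE if t[1] >= need_cpu and t[2] >= need_ram]
    return min(candidates, key=lambda t: t[3])[0] if candidates else None

print(recommend(3.0, 6.0))  # needs 3.6 vCPU / 7.2 GB, so "medium" fits
```

The real trick, of course, is doing this continuously, per application, across multiple cloud vendors – which is where the 48-hour claim gets interesting.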
Sounds like the perfect test revisit scenario…
I’ve already talked many times in this ‘ere blog about how cloudy the cloud can be to many would-be adopters.
And it hasn’t got much clearer; in the past 18 months I’ve tested storage in the cloud, security in the cloud, service desk in the cloud, optimisation in the cloud… I assume there must be cloud-based weather report services? The issue for the would-be adopter is therefore – do I need ALL of these cloud-based services/technologies? And what do I keep and what do I throw away? Does one of these services give me “cloud-based everything”?
Well, no, but we’re getting there. This week in London I had the pleasure of meeting up with Cloudistics (despite GWR’s best efforts) – there’s a clue in the name as to what area of IT they are playing in – to discuss their formal EMEA entry (not for Eurovision) and the logistics thereof. The basic premise is to make cloud adoption as easy and cost-effective as possible and to work with what’s already in place. So that answers a significant number of questions straight up.
One thing that is also clearly understood is that, even with an outsourced, “in the Ether” service, such as a public/private cloud, the EMEA market – with its geo-cultural fragmentation – is a very different market to the US, albeit allowing for no one ever mistaking Seattle for North Carolina (unless you’re from the latter). This is a massive benefit of a cloud-based approach; you have the basic set of tools, but they can be manipulated to order, so you get optimisation and economies of scale in tandem. For any variation on an MSP, or any reseller wanting to reinvent themselves, something like Cloudistics is something they really should get quite excited about, almost right up there with a 1961 vintage Taylors port (but not quite, obviously).
Which leads us to another thread of conversation in the meeting – education. Lots of it. Needed. But ‘twas ever the way with IT; some people still don’t know how to configure VLANs correctly. Actually, probably most people… Hey, that’s what outsourcing is all about 😊 – it’s not the end users that need educating but the middle-men, the resellers. They need margin, value-add, a living. Vendors such as Cloudistics are offering that option. They should grab it.
TMI, or Too Much Information, is a phrase popularised in recent years that isn’t normally related to IT (well, at least not the sort of IT I’m involved in), but it is very much a truism.
Big data, data analytics, ya de da, all relating to the reality that we are (and have been) collecting Petashedloads of data for years, without necessarily contemplating what we might actually do with it (other than store it and back it up). It took the likes of Google to wake up the data admin guys and say “well, we’re gonna do stuff with this data, even if you’re not”.
Of course, the retail and consumer markets in general have been analysis bonkers for decades, but what about IT security, cyber or otherwise? It’s not a case of collecting info from a single security device – be it a syslog from some kind of firewall (of which many companies have very many), IDS, IPS, SIEM, UTM, ATM, ITN, BBC… but all of these and more, administering and collating said info and then, er, what? In the event of trying to glean info on a possible cyber attack, diving into said eMountain of data – even using a variety of tools – will possibly result in a positive outcome, six months after the effects of said attack have already rendered the company bankrupt -)
So what’s the answer (other than “42”)? Well, there are start-ups aplenty working on a solution to the aforementioned multitude of solutions that were designed to solve the original problem (which we can’t now remember what it was), most of which involve machine learning, the automation of “manual, error-strewn” tasks, the acceleration of forensics and the search for the holy cyberattack grail. One such candidate is JASK (if you want to know what that stands for, Just ASK), with whom I had a jolly excellent conversation this week. Their understanding of the problem is right on the money, and not a million bytes away from that described above (which is very close in modern data terms). In short, it’s a platform (cloud + agents) designed to automate the collection and correlation of threat alerts from all manner of sources – it is completely open in this respect – and then analyse said alerts, providing prioritisation and – theoretically – acceleration in getting to the epicentre of the problem. So kind of Splunk take two? Maybe, but key to its potential success is in working with these existing products, so not alienating the vendors, nor – vitally – the end-user customers who have already invested gazillions in said solutions: “just one more wafer-thin mint, sir?”
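For the uninitiated, “collect, correlate, prioritise” can be boiled down to something like the back-of-envelope sketch below: pool alerts from several sources, group them by affected host, and rank hosts by a naive severity score. The sources, sample data and scoring are all mine, invented for illustration – JASK’s actual model is, one assumes, rather cleverer.

```python
# Naive alert correlation/prioritisation sketch. Sources, hosts and
# severity weights are made-up sample data, not any vendor's format.

from collections import defaultdict

ALERTS = [
    # (source, host, severity 1-10)
    ("firewall", "10.0.0.5", 3),
    ("ids",      "10.0.0.5", 7),
    ("siem",     "10.0.0.9", 4),
    ("ids",      "10.0.0.5", 6),
]

def prioritise(alerts):
    """Rank hosts by total alert severity, highest first."""
    score = defaultdict(int)
    for _source, host, severity in alerts:
        score[host] += severity
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

print(prioritise(ALERTS))  # 10.0.0.5 surfaces first, combined score 16
```

Even this toy version shows why correlation beats diving into the raw eMountain: three middling alerts from three different boxes add up to one very loud signal.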
Of course the proof is, not in the pudding – too late by that stage of the meal – but in justifying its existence (and cost, which is sensibly based on company size, not some archaic CPU count, or active users, or random number generator), which can only come with actually using the product and seeing what it spits out, what time it saves, and what businesses it saves. As ever, it’s a case of watch this space…
Interesting conversation recently with PGi – a well-established player in the worlds of web conferencing, unified comms and online collaboration.
Interesting, timing-wise, because a lot of conversations recently with vendors have revealed how much discontent there is with the classic web conferencing tools, such as WebEx and GoToMeeting. Indeed, I’ve had several instances of conferencing with individuals working for the vendors that own those technologies who have chosen to eschew their own products and use an alternative instead!
Typical comments I get are “clunky”, “limited”, “good echo unit impersonations”, “jerky” – these are just the polite ones. And it’s not just a quality/features issue, but so many people fail to get onto the conference in the first place… So what better than to conference with PGi, using its own GlobalMeet platform – kind of a pre-test of a product simply by using it; logical or what? So, a couple of comments – firstly, it took one mouse click to connect me. No confusing options, no one struggling to connect. Secondly, PGi has taken the limitations out of a web conferencing application by re-engineering it as a platform in its own right. So, application additions can be made readily, as and when they make sense, including from 3rd parties of course, and some of my own clients spring to mind here.
The simplification is important beyond the mere convenience factor, given that a survey just published by PGi highlights that 44% of respondents rank the smartphone as their chosen device for unified communication and collaboration in the future, ahead of a laptop or desktop computer (which stands at 43%) or a desktop phone – use of the latter for conference calls has declined 70% over the past five years, and 55% in just the past 12 months.
Not that any of this is surprising, but it reinforces the need for ease of access and flexibility in terms of device support, updates and app/application additions. When you add in how online collaboration is now really making more and more sense – for example, I carry out testing/test validations online now regularly – then the need for something better and beyond what we have experienced to date is all too clear, something PGi seems to have grasped and acted upon, so well done to them!
My online conferencing life may well be less of a misery in the future…
IT, and security especially, is overloaded with slick marketing terms that have little or no substance behind them – the classic “solution searching for a problem” scenario.
But… just occasionally the complete opposite prevails, as in the most recent test project I’ve just completed for Ziften, a US-based tech company focusing on endpoint security and asset management. And there’s the rub – for years, nay decades, I’ve been talking about the importance of combining elements of IT into what I would term “broad network management”. After all, what is security if not a fundamental element of network management? However, what has happened is that those islands of IT expertise have expanded – just as in the early days, when the datacoms and telecoms teams created their own “islands of expertise” within IT (I mean, these guys didn’t simply not communicate, they despised each other!) – to the point where security doesn’t interact directly with networking.
The result is a hotchpotch approach to the most crucial elements of running an IT shop. Moreover, the duplication of resource and effort is extremely costly. So it’s a double non-whammy! So here’s the deal – how do you secure an endpoint if you don’t know it’s there to begin with? In other words, the starting point for securing endpoints is to discover and manage what is on the network, and then apply the security. No, it’s not rocket science, it’s common sense. For once…
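The “discover first, secure second” logic really is as simple as a set difference – a minimal sketch, with hostnames invented for the example, not taken from any real product:

```python
# Discover-then-secure in miniature: diff the endpoints actually seen
# on the wire against the endpoints with a security agent installed.
# Hostnames are made up for illustration.

seen_on_network = {"laptop-01", "laptop-02", "printer-07", "mystery-box"}
agent_installed = {"laptop-01", "laptop-02"}

# Anything in the first set but not the second is unmanaged -- and
# unmanaged means unsecured, because you didn't know it was there.
unmanaged = sorted(seen_on_network - agent_installed)
print(unmanaged)
```

Two lines of set arithmetic; the hard part, naturally, is populating that first set accurately and continuously, which is where the real product earns its keep.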
This is the approach Ziften has taken with its product – eureka! So we now have method and order in place and the product does exactly what it says on the tin (or the marketing spiel). Which brings me onto the point I made at the beginning of this post – slick marketing terms, or the reverse thereof. The categorisation for the Ziften solution sits in the relatively newly defined SysSecOps market – Systems and Security Operations, i.e. the combination thereof. Doesn’t exactly trip off the tongue so much as trip it up. But who cares? Maybe this is the golden IT rule. Abbreviations that are a bit rubbish actually stand for something meaningful and useful…
Meantime, my report can be found via this link – read it and get a dose of common sense. Then, if you’re a marketing type, think of a better category title -)
Like the VoIP optimisation I’ve been blogging about recently, network (LAN and WAN in old money) optimisation is something I’ve been testing since the late ’90s.
The former, AKA Load-Balancing and Application Delivery Control (ADC), was historically very hardware-oriented; sure, all the intelligence was in the software, but the horsepower came from the specialised hardware. So, when that ran out of steam – whether the platform itself or add-ons such as SSL accelerator cards – the whole platform had to be rebuilt from the ground up. Just as routers et al have gone down the SDN route (which finally appears to mean something in product terms), so it has happened with L-B/ADC solutions.
I was recently speaking with Avi Networks, one of the few in the genre not on my radar – but they are now. Essentially there are a lot of ex-F5 Networks guys in there, so they know how to make a good product and – now – how to make a better one! It’s a software solution, naturally, so it can take advantage of low-cost, bare-metal scalability; the smart bit is in the separation of the control and data planes – a logical cloud-esque solution, so the control is global, single-site and the data can be anywhere. That might seem suitably obvious, but making it work is another matter altogether. What it means for the “customer” – whether an enterprise or an xSP – is that the data plane can be completely hybrid – OnPrem, public/private cloud, physical, virtual…
Importantly it also comes with total visibility, performance analytics and all the information needed to truly be in control of those applications and the deployment optimisation thereof. Of course, Avi isn’t alone in reinventing the genre; my old client jetNEXUS has introduced an AppStore approach to building out an SDN/NFV based L-B infrastructure; build as you go… makes total sense!
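For anyone who hasn’t lived through a load-balancer rebuild, the control/data-plane split can be sketched in a few lines: one central control plane holds the desired policy, and any number of data-plane nodes – wherever the traffic happens to be – pull that policy and forward accordingly. The class and method names below are mine for illustration, not Avi’s (or anyone’s) actual API.

```python
# Toy control/data-plane separation: one global source of truth,
# many dumb forwarding nodes doing simple round-robin.

import itertools

class ControlPlane:
    """Single, global source of truth for backend pools."""
    def __init__(self):
        self.pools = {}

    def set_pool(self, app, backends):
        self.pools[app] = backends

class DataPlaneNode:
    """Lives wherever the traffic is; only knows what the control plane says."""
    def __init__(self, control):
        self.control = control
        self._rr = {}  # per-app round-robin state, local to this node

    def pick_backend(self, app):
        backends = self.control.pools[app]
        cycle = self._rr.setdefault(app, itertools.cycle(backends))
        return next(cycle)

cp = ControlPlane()
cp.set_pool("web", ["10.0.0.1", "10.0.0.2"])
node = DataPlaneNode(cp)
print([node.pick_backend("web") for _ in range(3)])  # alternates backends
```

The appeal is obvious even in toy form: scale out by spinning up more `DataPlaneNode`s on commodity tin, while the policy lives in exactly one place.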
And here’s the bottom line to this approach – it’s far more cost-effective and budget-friendly than spending millions up front…
In this era of virtual cloudiness and all manner of reinventions, it’s interesting to note that many of the age-old limitations of IT are still with us, despite technology advances.
For example, yesterday I was sharing a Skype for Business call with Said from SolarWinds (so Devon meets Austin, Texas). Said was rightly bemoaning the quality of contemporary web conferencing tools, both in their relative complexity and their inability to deliver good-quality, latency-free voice and data comms, regardless of the bandwidth at each end – and in the middle.
The irony here is that I’ve been testing optimisation products since the late ’90s that are a cure-all for these limitations. So why haven’t they been implemented? Moreover, the security concerns make secure, optimised VoIP more critical than ever. A story emerged this summer involving the European Parliament potentially enforcing end-to-end encryption on all forms of digital communications, as an extension of personal privacy. A ban on “backdoors” into encrypted messaging apps like WhatsApp and Telegram is also being considered. And talking of WhatsApp – WhatsWorse than the alleged WA voice-to-voice comms? It’s bloody terrible!
Logic dictates that end-to-end security must be allowed to exist free from intermediate back doors. The value of “legal intercept” and back doors needs to be considered from a systems point of view, not a political one, along with how that capability can lead to bigger issues than legal intercept aims to mitigate. Two of my clients, Brand Communications and Aritari, both offer the ability to secure and optimise VoIP anywhere in the world, across any form of connection, cloudy or otherwise – so why are there no standards being enforced or reinforced here? Next year’s World Cup in Russia is certainly going to be, er, interesting…
Proving that VoIP is bigger than ever, Mitel has recently acquired ShoreTel, cementing the VoIP-in-the-cloud concept. Add in the capabilities of a Brand and an Aritari and you have the PSTN reinvented for 2020 onwards – resilient, secure, optimised, network-agnostic and concurrent-data friendly. I was in conversation with Jeremy Butt, newly appointed SVP of EMEA for Mitel, very recently, and we both agreed it makes total sense. In my office I have a landline, and I have a cordless phone system. Have I plugged it in once this year? No. I just use VoIP or mobile. I’m not sure what colour “that” future is anymore but it’s certainly not landline…
In the process of kicking off a new season of awards judge duty, starting with Techtrailblazers, which focuses on start-ups (and upstarts).
Got me thinking – just what IS a start-up company? OK – so there are definitions within even Techtrailblazers itself, as competitions are based upon rules, and rules dictate strict definitions, so there’s no way around this (for the record, or mp3, I believe the entry condition is a company no more than six years old?). BUT – what if you’re one of the many vendors that has been around the block once (or more) already and has reinvented itself, so it is effectively a start-up, just a mature one, like someone who revisits university in their 50s to do a different degree? So, a mature start-up!
For example, I was speaking recently with a US vendor who fits that very description – Vector Networks, which effectively refocused with the acquisition of its Vizor IT asset management software. The software manages the complete asset lifecycle, from network discovery and inventory data acquisition through to purchase, warranty and maintenance data. This, in itself, is effectively a reinvention of the original asset management systems of the early ’90s, which rode on the back of networks first being deployed en masse. And in combining this with software management and service desk, it moves the game on from the original Landesk (itself reinvented and now part of Ivanti) product and its ilk that I was testing as long ago as ’91 (when Intel acquired that original Landesk incarnation).
And then there’s the next step in the process, as defined by the newly created SysSecOps category. A recent report from Technology Research outlined the concept of SysSecOps – that is to say combining systems and security operations into a single IT profile or categorisation. And it makes total sense as it’s all about visibility and follows the aforementioned mantra – if you don’t know what your systems are running – basically what is happening on an ongoing basis – then how can you possibly secure users, their data and applications?
I’ve just finished a test report on said approach for Ziften and the concept definitely works. Now, getting back to the original point here, Ziften is technically a start-up from the awards definition (founded 2011, I believe). So does that mean that essentially Vector should be competing against Ziften on a start-up playing field far more level than Yeovil Town’s old pitch? Most definitely, but it requires an IT rethink/reinvention of terminology and, as we all know, IT is very good at that -)
Security, for a decade or so, didn’t see much in the way of true change – yes, firewalls got smarter, likewise AV products (well, some anyway), IDS became IPS so it could actually stop something happening, encryption became more encrypted and VPNs became more virtual, but typically same old vendor faces, same old product types with variations on a theme.
And initial cloud instances didn’t really change much at first. For example, I remember testing some early cloud-based products and it was essentially just the same technology OnPrem, moved to the cloud – and with a significant drop in performance capability at the time.
However, the general consensus suggests that the “traditional” security architecture is no longer sufficient protection against cyberattack and, certainly, the pure signature-based method of resistance is indeed full of cyber-holes. New vendors such as Tempered Networks and vArmour are looking to protect in a micro-segmented way, rather than building secure gateways/walls, albeit in very different ways to each other. And, meantime, you have vendors insisting that prevention at the cloud is the answer, and others saying the endpoint should be the focal point.
I recently had the pleasure of back-to-back arguments from two vendors, one in the former camp, Menlo Security, and one in the latter, Cylance (but is it golden?). Of course, the answer is that there is no single solution for all – indeed, each vendor has specific focuses – but that’s to kill the fun before it starts… Menlo’s focus currently is on preventing malware intrusions and has opted for the isolation method (not a form of birth control) – basically to isolate user devices from web and email threats coming from t’Interweb, so only the “good” stuff actually reaches the endpoints. It’s a valid argument for what it does – i.e. it is not an “all or nothing” solution, but a definite revisit to those initial cloud instances I mentioned earlier, albeit clearly better thought out. Anyway, it will hopefully come under the Broadband-Testing microscope soon, so watch this space on that one…
Cylance then described why the endpoint is still vulnerable (not least from insider attacks) and why, therefore, you do need protection at the endpoint (again, this is NOT a form of contraception) which, again, makes total sense in isolation, even though it’s not an isolation technology, just to make that clear (as mud). I kind of think of it in terms of, well – if we had NextGen Firewalls, then now we have a kind of NextGen AV technology. Funny – at that point I just looked at the Cylance website and that’s how they are describing it -) Great minds and all that. Or stupid ones, as my old history teacher used to counter. I actually got a “B” in History so the debate is still raging…
What I have noted from several demos is that Cylance’s “DNA-matching” approach to identifying threats seems to a) work and b) do so at very high performance levels and with a relatively minimal footprint/impact. Kind of like replacing a slow-burning 3-litre V8 engine with a turbo-charged 1.6-litre alternative that is half the size and weight, has twice the power, and is thrice as economical. Will that appear on Cylance’s website???
Anyway, there are still more questions than answers (sounds like a cue for a song?) which makes it all the more interesting. Me, saying security is interesting? Surely some mistake here… It’ll all come out in the washing (public or otherwise)…
WiFi/WLAN is taken so much for granted nowadays, that it’s easy to forget just how far it has come in recent years.
Given that it took around 15 years to get from proprietary 1Mbps technologies such as those offered by NCR (WaveLAN) and Olivetti (can’t remember the product name, except that they used it to remote control a forklift truck – a real one, not a Dinky – at a show at the NEC way back; ‘elf n safety’ – what was that back then?) to an IEEE standards-based 10Mbps Ethernet solution, the speed of change (and change of speed) since then, and especially since the .11n standard emerged, is nothing short of spectacular.
Been catching up with a number of vendors, including the likes of Zebra, Cradlepoint, Xirrus and TP-Link, and it’s clear that the WiFi world really has become ubiquitous, from Glastonbury – see CW story:
https://www.computerweekly.com/news/2240184168/EE-partners-with-Glastonbury-for-4G-festival – which used Cradlepoint technology, for the record (or mp3) – to every coffee chain in the world (seemingly), every hotel room (with the possible exception of Bridlington) and, pretty well every square yard of every town in the UK, WiFi is available. Even if you’re not aware of that (access) point, your smartphone keeps reminding you…
But it’s the capabilities that are now pretty astounding, something I touched on with Bruce Miller of Xirrus last week. That company’s latest Wave 2 APs contain up to eight radios and support 3.47Gbps of throughput – each! In the mid-2000s we needed a depot full of gear to achieve those levels of coverage and performance…
And with the proliferation of outdoor APs now, it means that it is THE perfect technology for cloud-based services; it even travels through the ether, so you can even justify looking up at the sky when you mention the “C” word (as many people seem to do). It has also changed the way people select everything from hotel rooms to coffee (or burger) chains, based on their past experiences – and the likes of Tripadvisor etc – something I also touched upon with Bruce, and in conversations with Hubert Da Costa of Cradlepoint and Andy Woolhead of TP-Link.
The “value-add” that WiFi now gives to a business (not to technology) has also meant that IT guys, regardless of their business, are having to turn their WiFi investments into a revenue-making resource. No longer is it enough just to feel inclined to offer WiFi as a free service to the staff or general public; it has to earn £££, soon equal to €€ or $… This, again, makes a cloud-based WiFi service a very attractive proposition to a business; known OpEx and less pressure on ROI. It also gets around the primary problem – still – of WiFi: hopelessly bad deployments. Despite automated site surveys having told engineers precisely where to mount each AP (and how many, and at what settings) since the early days of controller-based WLAN, Trapeze etc, venues (yes YOU, hotels, you know who I mean) still get it horribly wrong. Let’s just hope it doesn’t take another 15 years to sort that issue out…
Meantime for Zebra fans (and not those who eat the burgers available from the Arcade butchers in Hastings), watch this space – news soon…