Recently returned from the sunny climes of the Algarve and this year’s round-up of IT reprobates at Netevents (congrats, as ever, to the Netevents team for a) organising the event and b) surviving it).
Unsurprisingly there was much talk of that re-roasted old chestnut that is AI and what it actually means (if not what it stands for). My old (equally mad) mate Jan Guldentops rightly made the point that you could probably ask a hundred vendors and get a different interpretation from each. And is “machine learning” the same thing as “artificial intelligence”? And why, when the latter is always known as “AI”, does no one ever refer to the former as “ML”?
Much of the conversation was firmly on topic – thanks Scott Raynovich, AKA Mr Futuriom, for making the point that SD-WAN is going to be massive. Music to my ears, as it’s one of my key focus areas, and some of my clients are smiling very broadly. No thanks, however, to Gartner for narrowing the definition of WAN Optimisation in the first place, so that it had to be renamed in order to expand and fulfil its potential…
As part of the ongoing debates around SD-WAN and cloud came the realisation that there isn’t actually infinite bandwidth around the globe and that all forms of optimisation are actually a fundamental requirement going forward into a multi-cloud, “let’s kill off MPLS” future.
And then there’s the inevitable realisation that security still doesn’t work (though if you believe certain Antipodean journalists it isn’t actually necessary; what’s worth stealing in Australia anyway?), as evidenced by ongoing, much documented hacks into major global companies, banks, travel companies and governments.
Then, with the migration from OnPrem to various cloudy scenarios, and of apps and data onto all manner of virtual environments and self-contained containers, there’s the issue of how you manage that migration, especially when it might be across connections that have very significantly less than infinite bandwidth. Does VMware enjoy the company of Docker? Are Azure and AWS bosom buddies?
Seems almost freaky then, that I’ve just started a project working with a US inventor focused on the areas of data acceleration, encryption and multi-platform migration, backup and replication management. Or was I just dreaming that on the way back from Faro airport?
Actually, I wasn’t. I think I’m rather going to enjoy the ride. And no prizes for guessing what I bought a bottle of at the duty-free shop, Faro airport, that being in PORTugal…
Had a great couple of online sessions recently with Joe Merces of the splendidly named CloudDaddy.
The aim of CloudDaddy is to simplify one of the great pain points of IT (no, not repairing daisywheel printers, I think that really is historical pain), that being backup and restore, by incorporating AWS into the process. So often IT gets carried away with inventing for its own sake and ignoring the “stuff” that has always caused the most problems.
CEO Joe (who grew up the hard way as CIO of the New York City Law Department) and I spoke about the old uncertainties of not just ensuring backup strategies went according to plan but the sweat-inducing trauma of… the restore! Of course, there are still gazillibytes of data out there on tape, so the traumas continue for many, but with something like CloudDaddy a company can move its backup into the cloud site by site, byte by byte, just in case “cloud paranoia” is at work.
Talking of paranoia, an important point to make is that CloudDaddy comes with fully integrated security; yes, I appreciate this should be a given, but it isn’t. We referenced the city of Atlanta attack, where they found even their backups had been encrypted. Nice… One look at the CloudDaddy dashboard would have shown them every instance that wasn’t secured. And another look at the AWS site map shows you where you can back up your data to. It’s like a new form of global war game – “I’m going to launch a backup at Australia…”.
Hoping to get a full look at the product asap; given its global approach via AWS, this could just be THE cloud daddy!
And talking earlier of IT all too often getting obsessed with “what’s new” and marketing hype, machine learning (now in its 4th historical hype cycle by my reckoning) has been much overused and abused of late, but it is also now making its way into the world of data backup and recovery, care of Imanis Data. The company set out to use machine learning to protect Hadoop and NoSQL applications against ransomware, but have extended the intelligence to backup/recovery, albeit currently for Hadoop and NoSQL enterprise data management specifically (hardly a small market!) – but you can see where the concept could go in the future, so watch this space!
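As a back-of-an-envelope illustration (my sketch of the general idea, not Imanis Data’s actual algorithm, which isn’t public), machine learning for backup protection boils down to baselining normal backup behaviour and flagging sharp deviations – ransomware encrypting files tends to show up as an abnormal spike in changed data per backup run:

```python
from statistics import mean, stdev

def flag_anomalies(daily_changed_gb, threshold=3.0, window=14):
    """Flag backup runs whose changed-data volume deviates sharply
    from the trailing baseline -- a crude stand-in for the kind of
    anomaly detection used to spot ransomware in backup streams."""
    alerts = []
    for i in range(window, len(daily_changed_gb)):
        baseline = daily_changed_gb[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 0.01  # avoid divide-by-zero on perfectly flat baselines
        z = (daily_changed_gb[i] - mu) / sigma
        if z > threshold:
            alerts.append((i, round(z, 1)))
    return alerts

# Normal churn of ~2 GB/day, then a ransomware-style rewrite of 120 GB
history = [2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 1.9, 2.2,
           2.1, 2.0, 1.9, 2.0, 120.0]
print(flag_anomalies(history))  # day 14 flagged with a huge z-score
```

A real product would of course learn over far more signals (entropy of the data, file-type changes, access patterns), but the baseline-and-deviate principle is the same.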
IT is often two-faced and never more so than when it comes to the financial health of the industry.
Speak to one set and they’ll tell you how tight it is, scrapping for every penny of profit, and they’re speaking as they see it. At the same time, many of my US-based clients and regular contacts seem to be able to bring in the levels of investment they need, as they need it. And we’re not simply talking the Bay Area here; Austin in deepest Texas is a hotbed of security start-ups these days and, it seems, a great place to bring the $$$ into. Latest example is JASK (Just ASK) which has just raised another $25m (Series B) to bring total investment to just shy of $40m in not a very long time at all. To put it into perspective, I’ve got UK clients who – between them – have struggled to raise one tenth of this over 10 years, despite having great tech!
So is the answer for EMEA companies to up sticks and move to the US and go down the .inc route? Well, it has helped in some notable cases – for example, my old client AppDNA – but equally it can prove to be a very expensive wild goose chase. And free-range geese don’t come cheap! No, in many ways it’s still a “who you know” industry, though having good tech does help these days; go back 20 years and that wasn’t even a qualifier. Nowadays there is some level of common sense involved in the investment decision-making process. WRT JASK (enough initialisation!) its tech does take a very common sense approach to helping security analysts free up more holiday time :-) If you’ve ever tried to wade through a gazillion alerts and shedloadbytes of packet captures to find the eNeedle in the eHaystack, then you’ll get the idea behind JASK – i.e. it does all that stuff for you – so you can short-cut to the conclusions and remediation.
Related to this concept is another Austin-based company that I am spending some time with currently, and that is Capstar Forensics; we worked out that, not only does its search engine tech take weeks and months of manual search time down to literally a few seconds, but actually makes the impossible, er, possible. And talking still of Austin (told you it’s a hotbed!) I’m literally about to start working on an updated report for client Ziften, focusing on its integration with Microsoft’s Defender tech. Integration is also a key aspect of the JASK approach. Maybe someday there’ll simply be one security product called “Austin, Texas”? You heard it here first…
Like – I suspect – many people, I spend far too large a percentage of my one and only life on various ‘net conferences, using a variety of platforms.
Most of them are truly loathsome. So it was quite refreshing recently, when speaking to a conf software vendor, Fuze, using its own product for said conference, to have a one-click connection, with no “sorry I’m struggling to get onto the conference” moments, and excellent lip-sync when using the video (and, no, we weren’t in the same room -) – hoping to spend more time with these guys down the line.
One issue the aside comments during the conf did raise is that of – “what devices are conf attendees using to access the conferences nowadays?” Given that I regularly test network infrastructure management (of various sorts) products, security admin services etc, that are all completely manageable via an Android/iPhone device, it seems logical that more and more folks are using said devices to join conferences, and this is indeed the case, regardless of their actual location – e.g. at their desk. And it’s not a trivial issue providing the same level of access and facilities across all platforms – phone, laptop, desktop, tablet (stone or otherwise), cave walls etc…
Moreover, it shows the ever-increasing need for focus on audio/video from smartphones to attain a quality, shall we say, approximately a thousand times better than that we experience with the likes of WhatsApp. And of course it needs to be secure, even by what Russian FIFA World Cup 2018 standards are likely to be (with probably less use of batons). Which brings me nicely onto another recent conversation with another vendor, this time Swiss-based Equiis (needless to say, the conference started exactly on time -). Equiis plays in the world of secure mobile comms, an area I’ve been focused on since the early 90s, care of Brand Communications, Netmotion and others. And it’s bigger than ever, for the aforementioned – and many other – reasons. Equiis’ focus is very much on the enterprise market, though they do have an intriguing announcement in the pipeline which will expand that market considerably, but I’m not at liberty to talk about that currently. Many people think I should be locked up anyway…
What is really key here is the ability to continue what typically starts as some kind of online conversation in the office and continues on the way back home, or to another meeting, the gym, pub, Indian take-away, noodle bar or whatever… with all the same secure functionality on tap. Two or more different locations shouldn’t require two or more different devices and applications and, regardless of the number of devices, the user methodology should be the same. This isn’t Nirvana, it’s just basically what’s needed in 2018 and beyond. Or at least until the world ends, which some people are suggesting might be as a result of the World Cup. Not sure about that, but I can certainly see a potential depression building over England as a result thereof… I do hope I am proved wrong -)
It’s a truism that IT essentially reinvents itself every decade or so.
In some ways this can be rightly seen as a cynical way to repackage the same old tosh and sell it again to confused.com IT departments afraid of having an unsupported IT implementation. But it does also spawn benefits, not least because the vendors themselves are often forced to reappraise their offering and actually improve it.
Vendors reinventing themselves or morphing into their next “Dr Who” regeneration makes a lot of sense, not least because technology sometimes comes first and then there’s the question of what to do with it. And that’s the hard bit. And it’s not because they’ve created a solution to a problem that doesn’t exist, it’s simply finding the right way to market (it). I’ve seen many vendor clients with top notch tech take time to get their offering right – Voipex with Aritari, Fedr8 with Green Rain are two that immediately spring to mind (seasonal reference). Another is Densify, with whom I had a catch-up this week and whose offering is based on the technology from their twin, the artist formerly known as Cirba.
When I reviewed the Cirba technology, it was extremely impressive, but equally a tech monster, as in, “bloody hell, this does an awful lot of stuff”. How many times have you seen what was fundamentally being offered as a product and thought: “that should actually be a service offering, based on that underlying technology”?
Never has there been a better scenario for this than in this cloud-adoption era. Companies, xSPs, carriers even, are all second-guessing as to how to optimise storage, application deployment and delivery across what can be a global delivery mechanism – storage with multiple tiers of virtualisation (and real tears, not virtual ones) and layers of networks to manage and optimise. So Densify takes the guesswork out of it. That’s my summation of its proposal; it’s a far more obvious and appealing offering than trying to explain how the underlying tech works and EVERYTHING that it can do. I’m speaking from experience here. I still get the headaches…
Put simply, Densify’s girlfriend Cloe (Cloud-Learning Optimisation Engine) uses machine learning to work out what to do (sounds a bit like a Chloe I used to know) and, er, does it, without the user having to work it out for themselves. Bingo! Densify reckons it provides results in the first 48 hours of deployment, with Cloe recommending the best cloud technologies for any given application. The solution also offers multi-cloud support, whereby applications are given the right resources even when simultaneously using multiple cloud vendors.
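Under the hood, one can imagine (and this is my illustrative guess at the shape of such an engine, not Cloe’s actual logic) the core idea as: learn each workload’s utilisation profile, then map it onto the cheapest instance that still leaves headroom. The catalogue names and prices below are entirely made up:

```python
def recommend_instance(cpu_samples, mem_samples_gb, catalogue, headroom=1.3):
    """Pick the cheapest instance whose capacity covers the workload's
    95th-percentile demand plus headroom. Illustrative only -- a real
    engine weighs far more dimensions (I/O, network, burst profiles...)."""
    def p95(samples):
        s = sorted(samples)
        return s[int(0.95 * (len(s) - 1))]

    need_cpu = p95(cpu_samples) * headroom
    need_mem = p95(mem_samples_gb) * headroom
    candidates = [c for c in catalogue
                  if c["vcpus"] >= need_cpu and c["mem_gb"] >= need_mem]
    return min(candidates, key=lambda c: c["hourly_usd"]) if candidates else None

# Hypothetical instance catalogue (names and prices invented for illustration)
catalogue = [
    {"name": "small",  "vcpus": 2,  "mem_gb": 4,  "hourly_usd": 0.05},
    {"name": "medium", "vcpus": 4,  "mem_gb": 16, "hourly_usd": 0.10},
    {"name": "large",  "vcpus": 16, "mem_gb": 64, "hourly_usd": 0.40},
]
cpu = [1.1, 1.3, 2.4, 1.0, 1.2, 2.8, 1.5]   # vCPUs in use, sampled over time
mem = [3.0, 3.5, 9.0, 2.5, 4.0, 10.0, 5.0]  # GB in use
print(recommend_instance(cpu, mem, catalogue)["name"])  # → medium
```

The “results in 48 hours” claim makes sense in this light: a couple of days of utilisation samples is enough to start making defensible recommendations.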
Sounds like the perfect test revisit scenario…
I’ve already talked many times in this ‘ere blog about how cloudy the cloud can be to many would-be adopters.
And it hasn’t got much clearer; in the past 18 months I’ve tested storage in the cloud, security in the cloud, service desk in the cloud, optimisation in the cloud… I assume there must be cloud-based weather report services? The issue for the would-be adopter is therefore – do I need ALL of these cloud-based services/technologies? And what do I keep and what do I throw away? Does one of these services give me “cloud-based everything”?
Well, no, but we’re getting there. This week in London I had the pleasure of meeting up with Cloudistics (despite GWR’s best efforts) – there’s a clue in the name as to what area of IT they are playing in – to discuss their formal EMEA entry (not for Eurovision) and the logistics thereof. The basic premise is to make cloud adoption as easy and cost-effective as possible and to work with what’s already in place. So that answers a significant number of questions straight up.
One thing that is also clearly understood is that, even with an outsourced, “in the Ether” service, such as a public/private cloud, the EMEA market – with its geo-cultural fragmentation – is a very different market to the US, albeit allowing for no one ever mistaking Seattle for North Carolina (unless you’re from the latter). This is a massive benefit of a cloud-based approach; you have the basic set of tools, but they can be manipulated to order, so you get optimisation and economies of scale in tandem. For any variation on an MSP, or any reseller wanting to reinvent themselves, Cloudistics is something they really should get quite excited about, almost right up there with a 1961 vintage Taylors port (but not quite, obviously).
Which leads us to another thread of conversation in the meeting – education. Lots of it. Needed. But ‘twas ever the way with IT; some people still don’t know how to configure VLANs correctly. Actually, probably most people… Hey, that’s what outsourcing is all about 😊 – it’s not the end users that need educating but the middle-men, the resellers. They need margin, value-add, a living. Vendors such as Cloudistics are offering that option. They should grab it.
TMI or Too Much Information is a popularised phrase of recent years that isn’t normally related to IT (well, at least not the sort of IT I’m involved in), but it is very much a truism.
Big data, data analytics, ya de da, all relating to the reality that we are (and have been) collecting Petashedloads of data for years, without necessarily contemplating what we might actually do with it (other than store it and back it up). It took the likes of Google to wake up the data admin guys and say “well, we’re gonna do stuff with this data, even if you’re not”.
Of course, the retail and consumer markets in general have been analysis bonkers for decades, but what about IT security, cyber or otherwise? It’s not a case of collecting info from a single security device – be it a syslog from some kind of firewall (of which many companies have very many), IDS, IPS, SIEM, UTM, ATM, ITN, BBC… but all of these and more, administering and collating said info and then, er, what? In the event of trying to glean info on a possible cyber attack, diving into said eMountain of data – even using a variety of tools – will possibly result in a positive outcome, six months after the effects of said attack have already rendered the company bankrupt -)
So what’s the answer (other than “42”)? Well, there are start-ups aplenty working on a solution to the aforementioned multitude of solutions that were designed to solve the original problem (which we can’t now remember what it was), most of which involve machine learning, the automation of “manual, error-strewn” tasks and the acceleration of forensics and the search for the holy cyberattack grail. One such candidate is JASK (if you want to know what that stands for, Just ASK), with whom I had a jolly excellent conversation this week. Their understanding of the problem is right on the money, and not a million bytes away from that described above (which is very close in modern data terms). In short, it’s a platform (cloud + agents) designed to automate the collection and correlation of threat alerts from all manner of sources – it is completely open in this respect – and then analyse said alerts, providing prioritisation and – theoretically – acceleration in getting to the epicentre of the problem. So kind of Splunk take two? Maybe, but key to its potential success is in working with these existing products, so not alienating the vendors, nor – vitally – the end-user customers who have already invested gazillions in said solutions: “just one more wafer-thin mint, sir?”
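Stripped to its bones (and this is my sketch of the general correlate-and-prioritise pattern, not JASK’s actual pipeline – the source weights are invented), the idea is: ingest alerts from any source, group them by the entity involved, and score each group so the analyst starts at the top rather than wading through the raw feed:

```python
from collections import defaultdict

# Per-source severity weights -- illustrative values, not from any real product
SOURCE_WEIGHT = {"firewall": 1.0, "ids": 2.0, "siem": 1.5, "endpoint": 2.5}

def prioritise(alerts):
    """Group raw alerts by affected host and rank the groups.
    A group spanning several distinct sources scores higher, since
    independent corroboration is a stronger signal than volume alone."""
    by_host = defaultdict(list)
    for a in alerts:
        by_host[a["host"]].append(a)
    scored = []
    for host, group in by_host.items():
        sources = {a["source"] for a in group}
        score = sum(SOURCE_WEIGHT.get(a["source"], 1.0) for a in group)
        score *= len(sources)  # corroboration multiplier
        scored.append({"host": host, "alerts": len(group), "score": score})
    return sorted(scored, key=lambda g: g["score"], reverse=True)

feed = [
    {"host": "db01",  "source": "firewall"},
    {"host": "db01",  "source": "ids"},
    {"host": "db01",  "source": "endpoint"},
    {"host": "web03", "source": "firewall"},
    {"host": "web03", "source": "firewall"},
]
for g in prioritise(feed):
    print(g["host"], g["score"])  # db01 tops the list
```

Note how db01 outranks web03 despite similar alert counts: three independent sources agreeing is worth more than one firewall repeating itself – which is essentially the eNeedle-finding shortcut.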
Of course the proof is, not in the pudding – too late by that stage of the meal – but in justifying its existence (and cost, which is sensibly by company size, not some archaic CPU count, or active users, or random number generator) which can only come with actually using the product and seeing what it spits out, what time it saves, and what businesses it saves. As ever, it’s a case of watch this space…
Interesting conversation recently with PGi – a well-established player in the worlds of web conferencing, unified comms and online collaboration.
Interesting, timing wise, because a lot of conversations recently with vendors have revealed how much discontent there is with the classic web conferencing tools, such as WebEx and GoToMeeting. Indeed, I’ve had several instances of conferencing with individuals who have worked for the vendors owning those technologies who have chosen to eschew their own products and use an alternative instead!
Typical comments I get are “clunky”, “limited”, “good echo unit impersonations”, “jerky” – these are just the polite ones. And it’s not just a quality/features issue, but so many people fail to get onto the conference in the first place… So what better than to conference with PGi, using its own GlobalMeet platform – kind of a pre-test of a product simply by using it; logical or what? So, a couple of comments – firstly, it took one mouse click to connect me. No confusing options, no one struggling to connect. Secondly, PGi has taken the limitations out of a web conferencing application by re-engineering it as a platform in its own right. So, application additions can be made readily, as and when they make sense, including from 3rd parties of course, and some of my own clients spring to mind here.
The simplification is important beyond the mere convenience factor: a survey just published by PGi highlights that 44% of respondents rank the smartphone as their chosen device for unified communication and collaboration in the future, rather than a laptop or desktop computer (which stands at 43%) or a desktop phone – use of the latter for conference calls has declined 70% over the past five years, and 55% in just the past 12 months.
Not that any of this is surprising, but it reinforces the need for ease of access and flexibility in terms of device support, updates and app/application additions. When you add in how online collaboration is now really making more and more sense – for example, I carry out testing/test validations online regularly now – then the need for something better and beyond what we have experienced to date is all too clear, something PGi seems to have grasped and acted upon, so well done to them!
My online conferencing life may well be less of a misery in the future…
IT, and security especially, is overloaded with slick marketing terms that have little or no substance behind them – the classic “solution searching for a problem” scenario.
But… just occasionally the complete opposite prevails, as in the most recent test project I’ve just completed for Ziften, a US-based tech company focusing on endpoint security and asset management. And there’s the rub – for years, nay decades, I’ve been talking about the importance of combining elements of IT into what I would term “broad network management”. After all, what is security if not a fundamental element of network management? However, what has happened is that those islands of IT expertise have expanded – just like the early days of datacoms and telecoms teams creating “islands of expertise” within IT (these guys didn’t simply not communicate, they despised each other!) – whereby security doesn’t interact directly with networking.
The result is a hotchpotch approach to the most crucial elements of running an IT shop. Moreover, the duplication of resource and effort is extremely costly. So it’s a double non-whammy! So here’s the deal – how do you secure an endpoint if you don’t know it’s there to begin with? In other words, the starting point for securing endpoints is to discover and manage what is on the network, then apply the security. No, it’s not rocket science, it’s common sense. For once…
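“You can’t secure what you can’t see” boils down to a simple reconciliation loop – sketched here with made-up host data, and emphatically not Ziften’s actual mechanism: diff what discovery finds on the wire against what the security tooling thinks it manages, and the gap is your priority list:

```python
def find_unmanaged(discovered, managed):
    """Return endpoints seen on the network but absent from the
    security tool's inventory -- the gap you secure first."""
    return sorted(set(discovered) - set(managed))

# Hypothetical inventories: 'discovered' might come from ARP tables,
# switch CAM tables or active scans; 'managed' from the EDR console.
discovered = {"10.0.0.5", "10.0.0.12", "10.0.0.31", "10.0.0.44"}
managed    = {"10.0.0.5", "10.0.0.12"}

print(find_unmanaged(discovered, managed))  # the two rogue endpoints
```

Trivial on paper; the hard part in practice is keeping both sets continuously up to date, which is exactly where a combined systems-and-security product earns its keep.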
This is the approach Ziften has taken with its product – eureka! So we now have method and order in place and the product does exactly what it says on the tin (or the marketing spiel). Which brings me onto the point I made at the beginning of this post – slick marketing terms, or the reverse thereof. The categorisation for the Ziften solution sits in the relatively newly defined SysSecOps market – Systems and Security Operations, i.e. the combination thereof. Doesn’t exactly trip off the tongue so much as trip it up. But who cares? Maybe this is the golden IT rule. Abbreviations that are a bit rubbish actually stand for something meaningful and useful…
Meantime, my report can be found via this link – read it and get a dose of common sense. Then, if you’re a marketing type, think of a better category title -)
Like the VoIP optimisation I’ve been blogging about recently, network (LAN and WAN in old money) optimisation solutions are something I’ve been testing since the late ’90s.
The former, AKA Load-Balancing and Application Delivery Control (ADC), was historically very hardware oriented; sure, all the intelligence was in the software, but the horsepower came from the specialised hardware. So, when that ran out of steam – whether the platform itself or add-ons such as SSL accelerator cards – the whole platform had to be rebuilt from the ground up. Just as routers etc have gone down the SDN route (which finally appears to mean something in product terms), so it has happened with L-B/ADC solutions.
I was recently speaking with Avi Networks, one of the few in the genre not previously on my radar – but they are now. Essentially there’s a lot of ex-F5 Networks guys in there, so they know how to make a good product and – now – how to make a better one! So, it’s a software solution, naturally, so it can take advantage of low-cost, bare metal scalability; the smart bit is in the separation of the control and data planes – a logical cloud-esque solution, so the control is global, single-site, and the data can be anywhere. That might seem suitably obvious, but making it work is another matter altogether. What it means for the “customer” – whether an enterprise or an xSP – is that the data plane can be completely hybrid – OnPrem, public/private cloud, physical, virtual…
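The control/data-plane split can be pictured in a few lines (a toy model of the concept, nothing to do with Avi’s actual implementation): one central control plane holds the desired state and pushes it to however many data-plane proxies exist, wherever they run; each proxy simply balances across whatever backend list it was last given:

```python
import itertools

class DataPlane:
    """A lightweight proxy: round-robins across the backend list it
    was last pushed. Knows nothing about other sites or the big picture."""
    def __init__(self):
        self._cycle = itertools.cycle([None])

    def configure(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick_backend(self):
        return next(self._cycle)

class ControlPlane:
    """The single global brain: holds desired state and pushes it out
    to every registered data plane, on-prem or cloud alike."""
    def __init__(self):
        self.planes = []
        self.backends = []

    def register(self, plane):
        self.planes.append(plane)
        plane.configure(self.backends)

    def set_backends(self, backends):
        self.backends = backends
        for p in self.planes:  # one central change, propagated everywhere
            p.configure(backends)

ctrl = ControlPlane()
onprem, cloud = DataPlane(), DataPlane()
ctrl.set_backends(["app-a", "app-b"])
ctrl.register(onprem)
ctrl.register(cloud)
print(onprem.pick_backend(), onprem.pick_backend(), cloud.pick_backend())
```

The point of the separation is exactly what the toy shows: the data planes are cheap, stateless and can run on bare metal anywhere, while configuration and intelligence live in one place.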
Importantly it also comes with total visibility, performance analytics and all the information needed to truly be in control of those applications and the deployment optimisation thereof. Of course, Avi isn’t alone in reinventing the genre; my old client jetNEXUS has introduced an AppStore approach to building out an SDN/NFV based L-B infrastructure; build as you go… makes total sense!
And here’s the bottom line to this approach – it’s far more cost-effective and budget-friendly than spending millions up front…