This is a guest blog post by Tony Judd, MD UKI and Benelux at Verizon
With new technologies such as artificial intelligence (AI), the Internet of Things (IoT) and software-defined networking (SDN) impacting almost every aspect of modern business, organisations are having to transform their networks to take advantage of these developments. However, some are using this change to falsely prophesy the end of multiprotocol label switching (MPLS), and that couldn't be further from the truth.
Although SDN and other networking techniques are transforming how networks are architected and operated, they do not actually replace the functionality that MPLS provides. It is true that SDN has helped drive opportunities to augment network architectures with lower-cost broadband and public internet connections to enable hybrid networking. However, SDN does not remove the need for higher-quality MPLS connections for critical applications, as some over-the-top (OTT) network providers might have you believe. Both technologies will coexist and, in fact, SDN will depend on MPLS for traffic management and security – the attributes that made MPLS networks reliable and desirable in the first place.
Recent technology advances such as media streaming, social media and mobility have generated massive amounts of data that flow into networks from a myriad of devices. Now, as IoT, AI and edge computing environments start to go live, data volumes will grow to astronomical levels. Currently, 2.5 exabytes of data are generated daily, and Cisco estimates that data volume is growing at an annual rate of 24% through 2021.
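To put those two figures together, here is a quick compound-growth sketch using the numbers quoted above (2.5 exabytes per day, 24% a year); the projection itself is just illustrative arithmetic, not a Cisco forecast:

```python
# Project daily data volume forward using the article's figures:
# 2.5 exabytes/day today, compounding at roughly 24% per year.

def projected_daily_volume(current_eb: float, annual_growth: float, years: int) -> float:
    """Compound the current daily volume forward by the given number of years."""
    return current_eb * (1 + annual_growth) ** years

for year in range(4):
    volume = projected_daily_volume(2.5, 0.24, year)
    print(f"Year {year}: {volume:.2f} EB/day")
```

At that rate, daily volume nearly doubles within three years, which is the scale of growth hybrid networks are being asked to absorb.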
Combined, these recent and ongoing technology developments – cloud streaming, IoT, mobility – have changed how enterprises consume applications and, as a result, have also changed bandwidth demands and wide area network (WAN) traffic patterns. As such, enterprises face serious challenges related to scalability, security and network performance. Network traffic is unpredictable, and much of it flows from multiple sources dispersed throughout private and public cloud infrastructures as well as data centres.
Scalability limitations and security concerns are more pronounced for enterprises that use multiple vendors to run their networks. Like the reliability of the network itself, security policies and solutions vary from vendor to vendor. For instance, OTT service providers deliver security at the application layer because they don't own the underlying network, so the data they handle can become more vulnerable when crossing network boundaries. That's because elements of the underlying networks are managed by multiple service providers that don't always communicate or collaborate with each other. In contrast, a provider that owns the underlying network infrastructure can design a secure network to meet enterprise needs.
Easing traffic congestion
To get the most out of their SDN investments, enterprises should use MPLS for critical applications and locations and simply supplement with broadband for less critical traffic. MPLS is designed with the built-in security and scalability that modern businesses demand. Network providers that own the underlying network can deliver strong protection against increasingly common types of cyber-attacks – DDoS (distributed denial of service), ransomware and zero-day threats.
Today’s enterprises also need smart networks that prioritise traffic based on the applications they use, both at the point of entry to and exit from the network. Intelligent networks prioritise each application and allocate the proper amount of bandwidth. For instance, the network distinguishes audio and video applications, which require higher priority, from casual internet browsing.
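The prioritisation idea can be sketched in a few lines. This is a toy model only – the traffic classes and priority values are illustrative, not any vendor's actual QoS scheme – but it shows the core behaviour: higher-priority application traffic always leaves the queue first.

```python
import heapq

# Illustrative priority map: lower value = dequeued first.
PRIORITY = {"voice": 0, "video": 1, "browsing": 2}

class PriorityScheduler:
    """Minimal application-aware scheduler: dequeues by traffic class priority."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, app_class: str, packet: str) -> None:
        heapq.heappush(self._queue, (PRIORITY[app_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue("browsing", "http-get")
sched.enqueue("voice", "rtp-frame")
sched.enqueue("video", "stream-chunk")
print(sched.dequeue())  # voice traffic leaves first: rtp-frame
```

Real networks do this with queueing disciplines and DSCP markings at the edge, but the principle is the same: classification first, then scheduling.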
This refined approach to traffic balancing isn’t available through public internet connections, but there are providers that offer private MPLS connections and monitor those connections around the clock to maintain performance, scalability and security.
Advanced security through MPLS
A further benefit of using MPLS is that it can help to deliver strong security through the design of the network. Through private connections, MPLS can be used to separate IP addresses from routers and hide the internal structure of the core network from the outside.
In addition, MPLS can be used to put in place additional controls customised to an organisation’s specific needs. These controls can typically support an organisation’s compliance with industry-specific regulations or standards such as HIPAA (Health Insurance Portability and Accountability Act) for healthcare and PCI DSS (Payment Card Industry Data Security Standard) for retailers and other businesses that process credit card information.
MPLS and SDN: working together
Without a doubt, SDN is changing how networks are managed, bringing increased flexibility and scalability to enterprises and allowing them to dial services up and down as required. However, SDN will not mean the end of MPLS; instead, SDN will require MPLS to increase security and manage traffic effectively. With this in mind, businesses that want to put in place the latest and greatest digital capabilities should adopt a network strategy that delivers the best of both worlds: SDN controls combined with MPLS capabilities.
This is a guest blog post by Brendan O’Rourke, head of design at BriteBill, an Amdocs company.
It’s no secret that today’s customers are more demanding than previous generations. They share their experiences online, they expect instant gratification 24-7, and the competition is just a mouse click away.
The communications and media sector hasn’t traditionally been one that impresses customers with great service, but just how badly does it fare compared with other industry sectors? The latest UK Satisfaction Index, published in July 2018 by the Institute of Customer Service, has the answer. It found that satisfaction among UK consumers, across all verticals, rates at 77.9/100. Telecoms scored 74.3, making it the second lowest scoring vertical – only the transport sector fared worse (72.5).
Unsurprisingly, this low level of customer satisfaction translates into high levels of customer churn. A new TM Forum Quick Insight Report, titled ‘Inspire loyalty with customer lifecycle management’ sponsored by BriteBill, found that postpaid churn currently ranges from 5% to 32% per year.
While it’s difficult to put an exact figure on the cost of churn, consider this: The average mobile operator in a mature market spends 15-20% of service revenues on acquisition and retention, compared with the average Capex spend on infrastructure (networks and IT) of just 15% of revenues.
Canada’s BCE and Telus revealed in 2017 that it cost them almost 50 times less to keep an existing mobile customer than to acquire a new one, with retention costs of CAD11.04 and CAD11.74 respectively, while average subscriber acquisition cost weighed in at an eye-watering CAD521. In a saturated market, it would seem sensible for service providers to focus on keeping existing customers, rather than trying to lure new ones away from competitors.
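A quick back-of-the-envelope check of those figures bears out the "almost 50 times" claim; the numbers below are the ones quoted above:

```python
# Retention cost per customer (from the BCE/Telus 2017 figures quoted above)
# versus the average subscriber acquisition cost.
retention_costs = {"BCE": 11.04, "Telus": 11.74}
acquisition_cost = 521.0

for operator, cost in retention_costs.items():
    ratio = acquisition_cost / cost
    print(f"{operator}: acquiring a customer costs {ratio:.0f}x retaining one")
```

That works out at roughly 47x for BCE and 44x for Telus, which is why retention spend is such an obvious lever in a saturated market.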
There is, of course, a direct link between customer experience and churn rates. It’s obvious that a positive customer experience aids in retention, but can it be quantified? According to the TM Forum report’s main author and senior analyst, Catherine Haslam, yes it can. Australian service provider Optus achieved a 1.4% reduction in churn amongst its retail post-pay customers by raising its net promoter score (NPS) by six points, while another large service provider saw a 3% decrease in churn following a 25-point boost in NPS.
So, how do you boost NPS?
“It’s so simple yet easy to forget: Nothing is more important than engaging with your customers in a proactive and positive way,” says Haslam. “All too often, communication between service providers and their customers is reduced to the monthly bill – hardly a positive experience for most – and occasional calls to customer care when there is a problem. Service providers are missing a trick by not using opportunities to interact in positive ways, throughout the customer lifecycle.”
Billing, for example, should be a retention tool rather than a churn agent. The truth is that customers find bills boring and difficult to understand. According to a study by the UK’s USwitch (June 2018), one in six mobile users hasn’t even checked their bill in the last six months. When asked why, 18% (or 1.3 million mobile users) said they simply couldn’t be bothered.
Sadly, while service providers may have spent millions upgrading their IT systems to support the digital customer experience, they tend to overlook simple outputs such as the bill. As Haslam puts it, “the first bill is so important, but often it’s very different from a customer’s expectations.” Even if the charges shown on a communications bill are correct, they can be confusing. Factors such as device leases, proration (partial month billing), billing in advance for some services and in arrears for others, overages and vague descriptions all contribute to the complexity, leaving the user utterly befuddled.
It’s high time service providers took a fresh approach to bills – one that can pay measurable dividends. Cricket Wireless, a subsidiary of AT&T, ran a campaign called ‘Let’s Look Inside Your Bucket’. This inventive campaign used a video-based approach to communicate not just information, but also offers, with a healthy dose of humour. The campaign was incredibly successful and led to a whopping 37% reduction in early customer churn.
For service providers looking to transform their bill, the TM Forum recommends the following: communicating billing information accurately, clearly and concisely, demonstrating value, and including new information – such as making customers aware of new products and services that are relevant to them. If a service provider can do this, and do it well, they can change their bill from a churn driver into a valuable retention tool. And quite possibly put an end to the era of boring bills.
Facing negative PR all around after announcing a strategic realignment that will see 13,000 employees made redundant and the closure of BT’s central London HQ of more than 100 years, CEO Gavin Patterson found that even the good things he has done in his five years at the top (such as pivoting the organisation towards full-fibre broadband at long last) weren’t enough to save him from jumping before he was pushed.
The news of Patterson’s resignation broke today after it emerged early this week that angry BT shareholders were mobilising against him. In meetings with chairman Jan du Plessis, it appears that both agreed that even though the strategy changes were the right move, he was not the right man to oversee them.
With a long history at the business, Patterson seemed like the sort of safe pair of hands that befits monolithic organisations like BT. He was never a Bill Gates or Steve Jobs-style techno visionary, but that sort of leadership would have been out of place at BT.
On the handful of occasions I met him, he struck me as a generally likeable man, with a slightly roguish demeanour that reminded me a little of the sort of lads you find in IT sales organisations. If you transported him back to the 1980s and dropped him in the City of London, he’d fit right in, and would probably drive a white Porsche cabriolet.
With BT facing tough choices over the next few months, I would imagine the organisation will go for another safe pair of hands, someone who knows the business and is ready and able to steer it through the choppy waters ahead.
For me, this suggests BT will look within for its next leader, so it may be worth keeping an eye on some of the likely internal candidates. Who are they, then?
Twice in recent years BT called on the head of its Retail business to step up, and both Ian Livingston and Patterson answered the call. Retail is now BT Consumer, led by EE’s man Marc Allera, but I reckon he’s not steeped enough in the organisation’s culture yet.
The CEO of Global Services, Bas Burger, is probably right out given that unit’s troubles, and for my money so is the head of Technology, Service and Operations, Howard Watson, who is more of a tech specialist. But maybe BT would turn to its Business and Public Sector organisation, led by Graham Sutherland, who oversaw a healthy sales bump and some tasty contract wins last year.
Another name in the hat could be Gerry McQuade, CEO at BT Wholesale and Ventures, a declining business unit as traditional voice revenues wither, but would that give him the oomph to drive the wider organisation?
BT could even consider Openreach boss Clive Selley, a tricky proposition given that organisation’s quasi-independent status. Selley is, again, a very technical man, but might he be able to bring a new perspective to the wider group? Based on my acquaintance with him, I have no doubt he would do his utmost to keep up the much-needed pressure to build next-generation networks.
And of course, BT could still surprise us all and tap a complete outsider. One thing is for sure, whoever steps up is going to have one hell of a job.
After yet another set of lacklustre earnings and with over 10,000 staff facing the axe, embattled BT CEO Gavin Patterson needed a quick win. Today’s launch of new consumer offerings from both BT and EE, along with a plan to converge its broadband and 4G networks seem, at face value, to give him that.
At an event in London today BT bet big on its Consumer unit – which is made up of its broadband retail business, EE, and ‘cheap-n-cheerful’ ISP Plusnet. It made over 20 announcements, of which a tie-up with Amazon Prime Video, more content for BT Sport, a managed smart home ecosystem, and the repatriation of its dreaded outsourced call centres to UK shores probably have the most kerb appeal for consumer service buyers.
I think a big part of this strategy is to convince consumers that BT is a natural home for them. Here, BT has a clear advantage as the incumbent (and, up until the ’80s, the state-run monopoly). It can draw on this to demonstrate that it’s a safe bet for the average user: it is the only provider with the size, scale and money to bring together its converged network and customer services vision and wrap it in lots of nice little perks, such as free 4G Wi-Fi routers if your broadband goes down, or access to Amazon exclusives such as The Grand Tour. No argument there.
But what struck me at today’s press conference was that BT made scant mention of full-fibre, or Openreach’s pivot towards the so-called gold standard of broadband. But then, I wondered, why would it need to? The converged proposition offers a nice speed boost almost right out of the gate, in the form of a new router that bonds together a fixed and a mobile connection, and to be scrupulously fair, the content streaming experience on a superfast connection is generally as good as on an ultrafast one.
Everything announced today was predicated on making the online experience easier for consumers, not on expanding access to full-fibre. Clever BT has decided that this sort of thing is what consumers want, and in many ways it’s got that right. If the tone of coverage in the mainstream press is anything to go by, this strategy will work out well for it and I expect its customer acquisitions will duly spike a little in the next few months.
Ask yourself this: how can rapidly expanding full-fibre suppliers such as CityFibre compete with that? Sure, you can pay an altnet for an ultrafast connection and it’ll be great, no question, but after that you’re on your own. And let’s be frank here: nobody else building pure full-fibre networks in this country really has a hope of being able to afford Premier League football rights, or to tie up with content producers like Amazon and Netflix.
Yes, in terms of competition, this is great for BT. But I can’t help but think that to some extent, BT is tinkering with easy fixes and consumer-pleasing add-ons when it ought to be pulling out all the stops on full-fibre. I think it’s in danger of falling back into bad habits and leading on broadband that is, well, just good enough, and I don’t want to see that.
As we’ve been saying here for years, good enough broadband isn’t good enough for Britain’s digital future, and we have a responsibility not to let BT take the easy way out and make good enough out to be desirable. Today’s announcements are good news, but they also show how important it is to keep holding BT’s feet to the fire, and to keep talking about full-fibre as a priority.
Are Britain’s internet service providers (ISPs) coming up short when it comes to helping their less tech-savvy users protect themselves against the scourge of telephone scams and online fraud?
In a shocking breach of Betteridge’s Law of Headlines, the answer is actually yes. But keep reading anyway.
Now that we’ve cleared that up, some explanation: I pose the question because TalkTalk, which runs its own anti-fraud campaign called Beat the Scammers, has just published a set of stats, collated in partnership with Action Fraud, shedding some light on the extent of the problem.
TalkTalk’s data show that in the two-year period between October 2015 and September 2017, the five most common online scams in the UK hit over 130,000 people – and those are just the cases that were reported to the police.
Online shopping and auction fraud, where products are misrepresented or never arrive, while the merchants vanish without a trace, was the most prevalent type, with 66,874 cases reported during the monitored period.
Computer service fraud – calls from bogus tech support teams – hit 45,713 people, while email and social media hacks hit 9,473, personal computer hacks, often through phishing emails, hit 6,004, and extortion, where personal data is effectively held to ransom, accounted for 1,850.
People in London were the most frequently targeted marks for online fraudsters, with the Met police reporting over 20,000 cases, way ahead of their colleagues in West Mercia (Herefordshire, Shropshire and Worcestershire), who had only 9,043 reports.
Meanwhile, the people of Essex and West Yorkshire emerged as the least easily fooled, with only 3,956 and 3,894 cases being reported in these jurisdictions – although a cynic, which I am, might point out that because unreported cases obviously weren’t taken into account, the good folk of Basildon and Bradford might just be too proud to admit it.
Donna Moore, who happily is TalkTalk’s head of scam prevention, believes it is the ISP’s responsibility to take on the role of education, which is just as well, otherwise I wouldn’t fancy her chances in her next appraisal.
“We launched our Beat the Scammers education and awareness campaign in 2016 and have continuously improved our service, encouraging our customers to activate our protection tools, completely free of charge,” she said.
“Such tools include CallSafe, which provides customers with a simple way to avoid unwanted calls and enhance their call security. Furthermore, we’re proactively blocking over 700 million unwanted calls a year, and we continue to safeguard customers with the TalkTalk Nevers – a set of guidelines outlining information we will never ask customers for.”
But of course, every good ISP story has to have ISPs throwing shade at other ISPs, so TalkTalk offered some helpful (to TalkTalk) comparisons. Its own CallSafe service includes a number of features that rivals BT and Sky are missing, including unlimited number blocking and whitelisting, feature activation through handsets, and automatic addition of frequently called numbers to an approved list. BT also lacks a screening service and options to accept or reject callers, while Virgin Media, claimed TalkTalk, offers no call blocking features at all.
This is a guest post co-authored by Zach Katsof, director of intelligent communications at Arkadin; Holger Reisinger, SVP of large enterprise solutions at Jabra; and Alan Shen, VP of consulting services at Unify Square.
Artificial intelligence (AI) and machine learning (ML) are evergreen buzzwords. Even within the unified communications ecosystem, AI and ML are popping up more and more frequently, from Cortana voice assistants in Teams and information overload reduction technology in Slack, to call quality troubleshooting algorithms in UC monitoring software.
When it comes to the unified communications and collaboration market, the potential for AI applications across enterprise messaging, presence technology, online meetings, team collaboration, smart headsets and room systems, telephony and video conferencing is endless. But this raises the question: within the UC ecosystem, should we think of AI as still very experimental, or as having already crossed the chasm? And, if the latter, which of the AI applications and solutions are over-hyped, and what’s the real deal?
The AI potential in UC extends both forwards into the realm of the end-user as well as backwards into the domain of IT. For the end-user, AI can automate a series of actions to improve human-to-human collaboration. AI can sort through data (emails, chats, speech recognition) and identify keywords and patterns to then provide feedback on the best way to communicate based on the audience and topic. The more index-able user data becomes, the greater the ability to compare it with keywords from chats and create automated responses to instant messages based on user communication patterns.
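The keyword-and-pattern idea described above can be illustrated with a deliberately simple sketch. This is a toy (real UC assistants use far richer language models); it only shows the first step of indexing a user's message history and surfacing the most frequent non-trivial terms as candidate keywords:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; production systems use proper NLP pipelines.
STOPWORDS = {"the", "a", "to", "and", "of", "on", "for", "is", "in", "we", "are"}

def top_keywords(messages, n=3):
    """Return the n most frequent non-stopword terms across a message history."""
    words = []
    for msg in messages:
        words += [w for w in re.findall(r"[a-z']+", msg.lower()) if w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(n)]

history = [
    "Can we move the budget review to Thursday?",
    "Budget numbers are ready for review",
    "Review the Q3 budget before the call",
]
print(top_keywords(history))  # 'budget' and 'review' top the list
```

The more index-able the history becomes, the more reliably this kind of matching can route an incoming chat to a relevant canned response or topic.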
AI can also navigate through data and categorise whether people are using time efficiently and productively. For example, while logging the meetings that take place in a company, AI can determine how many of those meetings had agendas, who were the participants, what the minutes included and how much time was spent on each topic. In a similar way, setting up a meeting using AI allows for better resource management. It can evaluate who is attending and recommend the best possible meeting space based on the number of people, the name or topic of the meeting and the tools that might be needed during the meeting. Additionally, it can determine whether or not the participants are in the same office or require a Skype for Business dial-in, and what hardware is needed, like a speakerphone or whiteboard based on whether it’s a brainstorm session or catch-up meeting.
On the IT side, AI can analyse the vast amounts of data and UC logs that are available for troubleshooting and specific problem solving. Instead of IT having to react to individual user or systemic UC issues, AI allows extrapolated insights into how the individual, team or company is performing. Using this learning, AI can then issue proactive guidance to IT on everything from changes to server configurations to recommendations for a new or different UC headset for a specific end user.
AI in action
AI is regularly applied to enterprise communications to increase efficiency and reduce unnecessary expenditure from humans on tedious work that a machine could take care of instead. The present and future of AI in UC, along with a rating of hype versus reality, is seen in the following areas:
- AI-based gesture recognition on devices: On a conference call, gestures can improve the conferencing experience. For example, on a video conference the system can measure expressions. Cameras can provide details regarding the body language of the participants and provide real-time feedback to improve presentation skills and/or responses. Rating: Early stage.
- Completely automated conference calls: ML is already used in transcription, but it is not yet advanced enough for voice transcription to eclipse human comprehension. That said, the technology has come a long way in 10 years: nowadays, Amazon’s Alexa can understand speech like a human. Rating: Early stage.
- Meeting Management and Follow-Up: AI-enabled devices learn who is speaking, identify key points and then automatically assist people with tasks and send notifications or meeting summaries to all attendees. Rating: Early stage.
- Conference rooms and room systems management: AI-systems drive the entire process of scheduling and setting up meetings. Rating: Nascent stage.
- Smart devices: Meetings are made more efficient and productive by augmenting the conversation with information and insights that currently take hours or days of additional work post-meeting to realise. Rating: Early stage.
- UC systems/platforms (e.g. Cisco, Skype for Business, etc.): IT departments can monitor entire UC systems as well as room systems and identify, for example, when a specific audio/video system in a specific conference room may require a maintenance check-in by IT to reduce possibility of down-time before users are impacted. Rating: Early stage (mature via third party apps).
- Web-chat systems/platforms (e.g. Slack, Teams, etc.): Based on ML from all conversations on the platform, web-chat systems can think in real-time and adjust the questions/suggestions to cater to a specific situation based on prior history/database of similar discussions. Rating: Nascent stage.
- End-user productivity enhancing bots: Personal assistants built into UC apps can simplify actions (e.g. search for information in real-time), and interactive bots can improve customer service interactions (e.g. IVRs driven by bots). Rating: Early stage.
AI risks and considerations
When analysing data input and output, there are risks to consider with AI. If we reach a level where software is actually taking action – either self-healing the UC systems or self-scheduling new meetings – we open the door to the software potentially taking the wrong action. Once the algorithm can come up with a better conclusion than a human could, it issues a recommendation. The model for ML algorithms can be so complex that, if the user or IT wants to ask “why,” there may not always be a why. There are third, fourth and fifth-level elements in this massive, complex algorithm with a huge data model and structure. The humans involved must decide whether they simply trust the output (because they believe the outcome will be better), or whether they remain in constant oversight mode.
This is perhaps the ongoing AI dilemma – can an algorithm spit out a decision that would force IT or the end-user to think the machine is doing a better job at managing UC than the human? AI won’t replace our need to think and react. No matter how good a ML platform or AI solution is, people will always need to exercise judgment and validate actions prior to proceeding. The more AI is integrated into UC, the more dependent users will become on it. This could lead to an increased expectation of “perfect” meetings, chats and calls before, during and after the event. If AI systems are not able to keep up or deliver as expected, there will be limited patience and tolerance for poor performance and users will likely stop using it.
As for the state of hype versus reality, the irrefutable point is that we have not hit the peak of AI – we’ve only begun to scratch the surface. At the current stage, we can still make AI work for us by improving efficiencies and, as it becomes more complex and developed, hope for an automated state of perfection. In short, AI is the real deal, but there are still several miles to cover before we cross the chasm to peak performance.
I was hugely pleased today to see that despite the possibility of another legal roadblock in the Court of Appeal, telecoms regulator Ofcom is going to move ahead and lay the groundwork for the long-delayed auction of two massive slices of radio spectrum, one to support enhanced 4G mobile networks, and the other to form the basis for future 5G mobile networks.
But I also detected a definite note of frustration in Ofcom’s statement, which said “the litigation by Three is continuing to delay access to the spectrum and the benefits to consumers and businesses that can flow from it.”
And to be perfectly honest, I can’t say I blame the regulator for being a tad annoyed. Actually, I don’t think the regulator’s statement goes far enough.
Three is challenging the auction process because it believes that the way spectrum holdings in the UK are structured is unfair to smaller operators, that the combined BT-EE entity owns too much spectrum, that its holdings should be capped, and that its ability to bid in the upcoming auction should be restricted.
And I don’t argue with any of this. Yes, it is self-evidently correct that the way spectrum holdings in the UK are structured is unfair on smaller operators such as Three, but it is also true that Three has been able to buy up two major slices of spectrum in the past three years through its acquisition of UK Broadband in early 2017, and a 2015 deal with Qualcomm.
But I now have to ask myself what is more important? That everything is perfectly fair? Or that the UK is able to compete on the global stage?
The spectrum that Ofcom proposes to sell off could have been in use nearly two years ago. Data use on 4G networks shows no sign of slowing. And the first 5G networks will probably be rolled out in this country two years from now.
The UK needs this spectrum in use as soon as possible, and I find Three’s attitude increasingly at odds with the pressing national need to both grow and exploit the potential of our digital economy. We must have more network capacity!
It’s time for Three either to get over itself, or get its rich parent – which made £1.47bn in profit in the first six months of 2017 – to put its hand in its pocket and help keep its UK operation competitive.
In this guest blog, Paul Ruelas, director of product management at Masergy, shares some tips on how to pick the right hybrid network supplier for your business.
A global wide area network (WAN) should guarantee your company a consistently high-quality user experience anywhere in the world. The success of your business – and your business applications – depends on it. Productive business users are paramount.
A global WAN is a corporate asset, one that needs to be under the IT department’s complete control. You should be able to burst when bursting is needed, you should be able to assign service classes that meet your application performance requirements, and you should be able to deploy multiple virtual networks without service provider intervention if you so desire.
All of that requires a WAN supplier that excels at network design and service delivery. How can you ensure this? In part, by asking the right questions. Here are five areas of technical leadership you’ll want your WAN supplier to possess – and questions you should ask to determine whether they have it.
Application user experience
When it comes to a global network, the application user experience (AUX) rules. This has little to do with bandwidth and far more to do with jitter – the variation in the delay of receiving packets. Your service level agreement (SLA) should specify a jitter value of less than one millisecond, and your WAN supplier should be able to provide an AUX guarantee.
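To make the jitter definition concrete, here is a minimal sketch that measures jitter as the variation in packet inter-arrival delay. The timestamps are invented for illustration, and real SLA measurements use smoothed estimators (the RFC 3550 interarrival jitter formula, for example) rather than this simple mean absolute deviation:

```python
# Jitter as variation in inter-arrival delay. Timestamps in milliseconds.
# On a perfectly paced stream every gap is identical, so jitter is zero.

def mean_jitter(arrival_times_ms):
    """Mean absolute deviation of the gaps between consecutive arrivals."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

steady = [0, 20, 40, 60, 80]    # perfectly paced 50-packets-per-second stream
jittery = [0, 18, 43, 59, 82]   # same average rate, uneven gaps
print(mean_jitter(steady))      # 0.0
print(mean_jitter(jittery))     # 3.5
```

Note that both streams deliver packets at the same average rate; only the second would disrupt a voice or video call, which is exactly why SLAs bound jitter rather than just throughput.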
You should be able to check, on a daily basis, what the supplier is providing your company. Ask them if they can provide a consistent user experience anywhere in the world and, when they tell you they can, ask them how they plan on guaranteeing that.
The last mile is an important consideration when drawing up SLAs. Its impact on changes, tickets and mean time to repair (MTTR) can be significant, so it’s worth checking that the WAN supplier will include the last mile in its SLAs. Questions to ask your prospective supplier should include: for your last mile, are you provider-independent? Do you guarantee a clear channel on a point-to-point basis? Does your service provide a business continuity solution, using the internet as a backup?
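For readers unfamiliar with the MTTR metric mentioned above, it is simply total repair time divided by the number of incidents. The incident durations below are invented for illustration; real SLAs define precisely when the repair clock starts and stops:

```python
# Mean time to repair: average of the per-incident repair durations.

def mttr_hours(repair_durations_hours):
    return sum(repair_durations_hours) / len(repair_durations_hours)

last_mile_incidents = [4.0, 2.5, 6.0, 3.5]  # hours to restore each fault
print(f"MTTR: {mttr_hours(last_mile_incidents):.2f} hours")  # MTTR: 4.00 hours
```

If the last mile is excluded from the SLA, faults on that segment never count against figures like this, which is exactly why the question is worth asking.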
Many carriers and telecommunications companies have grown through acquisition and they own some parts of the fibre transmission networks, switching assets, and legacy systems of the companies they’ve acquired. Before you sign a contract, make sure you understand the impact of their patchwork infrastructure on your global WAN.
The first step in understanding that architecture is asking your supplier to provide insights into the architecture they’ll be provisioning for your network. It’s also worth checking whether their network supports fast rerouting, can deliver Ethernet everywhere, and can provide a guaranteed clear channel without any oversubscription. And, above all else, you need to find out how they will ensure seamless product and service delivery.
Changes and change requests are often billed separately from a supplier’s monthly fee, and these costs can be substantial. Since the WAN is your asset, you want to be in control of it at all times. Today, a mature service should let you use software-defined networking (SDN) principles. In essence, these let you manage a WAN as if it were a LAN: you can make immediate changes to user experience, bandwidth, quality of service (QoS), VPN provisioning and more. Can your prospective supplier offer this?
Cloud-based IT applications and services are becoming increasingly important, so cloud connectivity should be part of your strategy. Global enterprises want the flexibility to connect to major vendors such as Amazon Web Services, Microsoft Azure and IBM Bluemix, so ensuring your prospective supplier can connect you to your vital assets in the cloud is paramount.
Some suppliers may provide a “dumb pipe” to the cloud. However, what you should look for is the option to manage this connectivity, and you should be able to manage these connections as extensions of your WAN with full visibility and control.
Technical leadership alone shouldn’t be how you measure your prospective supplier. You should also try to understand the business considerations: customer service and satisfaction, client support, SLAs, price, total cost of ownership (TCO) and invoicing.
You should own the WAN. But you also want to work with a company that values you as a customer and provides a professional partnership. You should also expect a predictable cost model. That will let you manage your IT functions in keeping with company requirements and budgets.
Negotiating a contract for a global WAN is a major undertaking. By asking the right questions from the start, you can make an informed and positive choice. If your potential supplier cannot answer your questions to your satisfaction, then you need to ask why you’re talking to them in the first place.
Chapter 11 is never a good look for a tech supplier, whatever the reason for it. In the case of unified communications and network solutions supplier Avaya, the aim was to buy time to restructure as it transitioned from a hardware to a software business. Arguably there was more to it than that, but that was the company line and, dammit, the company was going to stick to it!
But at Gitex Technology Week – the Middle East’s equivalent of Mobile World Congress and CES rolled into one, which has just kicked off in Dubai – Avaya is strutting its stuff with more confidence than I can remember it having at any time since the start of the decade, when it was still drunk on its purchase of Nortel’s network hardware business.
What’s going on? Why is Avaya wearing Chapter 11 with pride? Why is it hosting journalists on foreign press junkets? Why aren’t the PRs crying in the bar? This is not how it’s meant to be!
Well, to start with, Avaya is done with networking hardware. D-O-N-E. The Nortel hangover was obviously too much to bear, and the legacy switching business was duly up-chucked in June (come on down, Extreme Networks!).
Nidal Abou-Ltaif, president of Avaya’s newly-created International division, which (as is so often the case) means everything that isn’t the US and Canada, told me that bidding farewell to the legacy networking business had been an emotional process for Avaya.
Why couldn’t it be made to work, I asked him? Refreshingly for a supplier exec, his reply was candid.
“We wanted to focus on our core business, which was traditional UC [unified communications] and contact centres – what we have sold is an amazing technology and we are very proud of every customer installation we did, but it’s not easy to go to market and keep supporting customers when you have 3% of the market,” he explained.
Switching and routing kit is technology that, as Abou-Ltaif said, you can “close your eyes and sell”, and it’s true that a couple of big suppliers basically have this market sewn up. So Avaya believes that by refocusing on its old core business, it can actually sell more by making its products software-based, cloud-hosted, and interoperable with Cisco et al.
Looking ahead, Abou-Ltaif said Avaya will be ploughing money into software and cloud, and it will likely spend at least the next 18 months revitalising and modernising its UC lines to get them match fit for the new world of networks. No new product launches at Gitex this time around.
For the UK – which has lost its status as Avaya’s EMEA base thanks to the restructuring into US and International businesses – Abou-Ltaif said that Brexit would bring new opportunities among smaller businesses as they try to modernise their infrastructure to compete more effectively. He also added that Avaya’s German salespeople have euro signs in their eyes as they prepare to pitch to all the financial services companies planning to move some of their operations away from London after Brexit.
So what else can we expect from Gitex this year? Well, Avaya is bringing customers from all over the region and from a diverse set of verticals, ranging from communications suppliers such as Telekom Serbia and Hungary’s Magyar Telecom; to financial services firms such as Oman’s Mashreq Bank and Dutch powerhouse ABN Amro; to white goods manufacturers such as BSH Hausgeräte.
Notice a theme? They’re all sectors that are powering headlong into the connected digital world opened up by the Internet of Things, which will perhaps unsurprisingly be the talk of the show this week, and not just at the Avaya booth – Gitex is a trade event on an epic scale, and everybody wants a piece of the digital pie, from the humblest component manufacturer you probably never heard of to the industry heavy hitters – AWS, Cisco and the like.
Gartner stats suggest that 89% of organisations now expect to compete primarily on customer experience, meaning that enterprises need to rapidly evolve their digital strategies to deliver differentiated experiences to users. Experience is the key word here and, walking around the exhibition halls, Avaya is not alone in pinning its hopes on what I would argue is often a rather poorly thought-through concept.
Let’s just hope it doesn’t turn out to be another drunken mistake.
Right now, if you’re responsible for transforming your legacy wide-area network estate, you might be asking yourself: what does it mean to monetise your enterprise WAN? How can I leverage my WAN as a form of currency and earn revenue from my network assets? There are two primary ways to improve a company’s financial bottom line: one is to lower capital and operational expenditure (capex and opex), while the other is to increase revenues.
We don’t often think of networks as assets we can monetise. But just as governments monetise debt to keep interest rates low on borrowed money, and businesses monetise products and services to generate profits, SD-WAN technologies can help monetise WANs to keep bandwidth costs low and deliver new service revenues with greater agility.
SD-WAN can also help internet service providers (ISPs), managed service providers and cloud service providers increase profits by leveraging excess bandwidth and delivering new value-added network services.
According to Gartner, 22% of an enterprise’s capital and operational budget is spent on data and voice networks and staff. In fact, Gartner details how CIOs and network managers can save up to 50% on network expenses with its 10 best practices to optimise spending on network infrastructures and telecom services.
- Manage your network service provider relationships appropriately and save 20%.
- Use hybrid WAN architectures, preferably with SD-WAN, to save as much as 50%.
- Embrace new consumption models and save 30%.
- Implement SIP trunking to save as much as 50%.
- Manage your mobile service providers and save 15%.
- Carefully manage your network equipment vendors and save 30%.
- Segment your network infrastructure and save 30%.
- Leverage WAN optimisation to improve WAN performance and save 30%.
- Just say “no” to chassis-based switches and save 70%.
- Scrutinise network equipment maintenance expenses and save as much as 50%.
Virtual technologies, mobility and the cloud have forever changed the cost and efficiency dynamics for data delivery, consumption and insights. Unfortunately, many companies are being stifled by their private legacy networks.
Compared to broadband internet, MPLS is expensive and rigid and locks companies into a single carrier for each location. SD-WAN, on the other hand, helps lower bandwidth and management costs, frees enterprise WAN connectivity by supporting multiple network service providers, eases network management, simplifies administration and optimises cloud connectivity.
SD-WAN efficiently distributes traffic across multiple WAN connections, including MPLS, low-cost internet, LTE, DIA internet, VSAT and others. To avoid service provider lock-in, each connection can come from a different network service provider. Additionally, because the most advanced SD-WAN implementations observe behaviour at the packet level, unidirectionally and many times per second, the multi-vendor telemetry they collect provides effective analytics for verifying SLA attainment with both MPLS circuit providers and internet connectivity providers.

Many enterprises aren’t interested in managing their own WANs and applications, so outsourcing WAN connectivity to a service provider can be a good option. For service providers, SD-WAN as a service, bundled with compute, storage and business-critical edge-network apps such as unified communications (including voice and video), offers new revenue opportunities for managing enterprise WANs.
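The per-link, packet-level telemetry described above is what lets an SD-WAN steer traffic in real time. A minimal sketch of metric-based link selection (the link names, metric values and the 1% loss threshold are all hypothetical; production implementations weigh many more signals, per application class):

```python
# Hypothetical per-link telemetry, as an SD-WAN controller might
# sample it many times per second.
LINKS = {
    "mpls":      {"loss_pct": 0.0, "latency_ms": 30.0, "jitter_ms": 0.5},
    "broadband": {"loss_pct": 0.5, "latency_ms": 45.0, "jitter_ms": 4.0},
    "lte":       {"loss_pct": 1.5, "latency_ms": 70.0, "jitter_ms": 9.0},
}

def best_link(links, max_loss_pct=1.0):
    """Pick the lowest-latency link whose packet loss stays within the SLA bound."""
    eligible = {name: m for name, m in links.items()
                if m["loss_pct"] <= max_loss_pct}
    return min(eligible, key=lambda name: eligible[name]["latency_ms"])

print(best_link(LINKS))
```

With these sample numbers the MPLS circuit wins on latency, but if its metrics degraded between samples, the next measurement cycle would steer traffic to broadband automatically – which is exactly the coexistence of MPLS and cheaper links that hybrid WAN promises.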
Pay as you grow
With a pay-as-you-grow SD-WAN pricing model, WAN infrastructure can easily scale without hitting a performance ceiling. With subscription-based pricing, it’s never been easier or more cost-effective to take advantage of SD-WAN benefits, such as eliminating upfront costs and moving budget from capital expenditure to operational expenditure on a monthly recurring contract (MRC). Comprehensive maintenance may also be included within the MRC subscription to simplify deployment and ongoing support. Some additional benefits include:
- Easily decommission WAN connectivity at the end of a short-term data migration project, or keep it in place to support ongoing strategic projects, like data replication.
- Only pay for bandwidth actually used, instead of overpaying for more capacity than is needed, while also benefiting from intelligent link aggregation to reserve bandwidth for high priority traffic.
- Upgrade virtual SD-WAN appliances at any time, to meet changing WAN capacity needs.
- Dynamically deploy a network on demand from a centralised controller node, either on-premises or via an XSP, to flexibly align WAN support with localised and regional business opportunities.
MPLS has performed well in legacy datacentre to branch architectures, but in today’s cloud-connected enterprises MPLS is not well-suited for connecting users to cloud and SaaS applications, without additional costs and considerable re-engineering. SD-WAN, with diverse commodity internet connections and hybrid WAN options, enables enterprise users to connect to applications in cloud and corporate datacentres over any network and any device – from any geography.
SD-WAN direct cloud connectivity eliminates the trombone effect: traffic from remote locations no longer needs to be backhauled to the corporate datacentre before exiting to the internet, eliminating latency that can adversely impact user experience quality.

In addition to cost savings, SD-WAN orchestration makes service-chaining multiple functions significantly faster and easier. CIOs can take advantage of cost-effective thin-branch consolidation of multiple network functions – such as routing, firewall, application security and WAN optimisation – within a single SD-WAN edge appliance, virtual machine and/or cloud instance.
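The latency cost of tromboning is easy to illustrate with back-of-the-envelope arithmetic. All figures below are hypothetical round-trip times; real numbers depend entirely on geography and circuits:

```python
# Illustrative round-trip times in milliseconds (hypothetical values).
branch_to_dc = 40.0      # branch office to corporate datacentre (backhaul leg)
dc_to_cloud = 25.0       # datacentre out to the cloud application
branch_to_cloud = 30.0   # direct local internet breakout to the same app

backhauled = branch_to_dc + dc_to_cloud   # the "trombone" path
direct = branch_to_cloud                  # local breakout
print(f"saved per round trip: {backhauled - direct:.0f} ms")
```

Multiply that saving across every request a chatty SaaS application makes and the user-experience case for local breakout becomes clear.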
Think global, act local
SD-WAN network overlays allow administrators to automate new service provisioning without having to rebuild their network hardware infrastructure. The ability to achieve greater control, automated traffic flows and dynamically route traffic among multiple network links with QoS turbo boost and app prioritisation allows organisations to create new service revenues.
When a local or regional time-sensitive business opportunity arises, time to market is of utmost importance, whether setting up a pop-up retail shop, a temporary or short-term healthcare service, or streaming live video of sporting, industry or corporate events. The ability to easily and cost-efficiently turn up a reliable, robust and secure WAN within hours can not only make the difference between failure and success but also deliver a competitive advantage, while supporting new revenue streams through e-commerce and new customer and subscriber acquisitions.
At a time when every business is looking closely at its bottom line and exploring ways to make savings and generate new revenue, it’s time to take a fresh look at how you can monetise your networks rather than seeing them as just a necessary drain on resources.
This is a guest post by Atchison Frazer, head of worldwide marketing at Talari Networks.