Quocirca Insights


December 3, 2017  10:48 PM

VeloCloud takes aim at outcome-driven networking

Bernt Ostergaard Profile: Bernt Ostergaard

With corporate network evolution focused on performance, adaptability and lower cost, the question arises: can the corporate SD-WAN optimise for outcomes? Can corporate managers translate desired business outcomes into network performance requirements? Some solution elements are well known: abstraction and automation, combined with hybrid on-premise and cloud-based processing and storage. But is that enough to get us to outcome-driven networking?

For larger companies with more complex networks and business needs, the first generation of device-centric SD-WAN didn’t really fit. They need a complete service package that addresses multiple applications and different types of users, as well as security and compliance requirements. This has given telcos and network service providers an opportunity to launch SD-WAN-as-a-Service. All they need are new software-based SD-WAN partners.

SD-WAN catering for telcos

Technology from three young companies dominates this marketplace today:

  • The market leader in SD-WAN for telcos is VeloCloud founded in 2012 (recently bought by VMware – itself bought by Dell). It bills itself as the only SD-WAN company to support data plane services in the cloud. Telco customers include Deutsche Telekom, AT&T, TelePacific and Sprint.
  • Versa Networks (with Verizon as a major investor) has been deployed by carriers like COLT and Verizon. It provides a multi-tenant solution that can seriously scale. This allows telcos to support large customers and retail service providers on a single platform.
  • Viptela (bought by Cisco) has been deployed by major carriers including Verizon and Singtel to deliver managed SD-WAN services. The Viptela Fabric is a purpose-built solution, designed from the ground up to provide secure, scalable and resilient WAN application performance.

From optimisation to outcome

The new element that VeloCloud adds to its Outcome-Driven Networking is self-learning and adaptation. Its latest service wrapper provides customers with self-learning analytics at scale and predictive insights tied to defined business outcomes.

In the hybrid corporate network, that entails dynamic, mid-flow, sub-second steering and remediation of WAN traffic to maintain outcomes defined on a per-application basis. Companies that need to set up new endpoints can on-board a new data centre or a recent acquisition and create routes for prioritisation, exit points and transit points, all defined in VeloCloud’s GUI. Traffic is then dynamically routed from branches to the new data centre.

Security is also improved, because users can isolate traffic into segments. Each segment has its own security, business and access policies, which allows custom policies to be applied on a per-segment basis across the network.
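
To make this concrete, below is a minimal, hypothetical sketch in Python of per-application, per-segment business policies of the kind described above. The policy fields, segment names and exit points are illustrative assumptions, not VeloCloud’s actual configuration schema or API.

from dataclasses import dataclass

@dataclass
class BusinessPolicy:
    application: str      # application the policy applies to
    segment: str          # traffic segment (e.g. corporate, guest, PCI)
    priority: str         # forwarding priority class
    preferred_exit: str   # preferred exit point, e.g. a hub or cloud gateway
    backup_exit: str      # used if the preferred path degrades

POLICIES = [
    BusinessPolicy("voip", "corporate", "high", "dc-frankfurt", "dc-london"),
    BusinessPolicy("office365", "corporate", "medium", "cloud-gateway", "dc-london"),
    BusinessPolicy("guest-wifi", "guest", "low", "local-breakout", "local-breakout"),
]

def select_exit(application: str, segment: str, link_healthy: bool) -> str:
    """Pick an exit point for a flow, falling back if the preferred path degrades."""
    for policy in POLICIES:
        if policy.application == application and policy.segment == segment:
            return policy.preferred_exit if link_healthy else policy.backup_exit
    return "local-breakout"  # default for unmatched traffic

print(select_exit("voip", "corporate", link_healthy=False))  # -> dc-london

In a real deployment the orchestrator would evaluate such policies continuously and steer flows mid-stream as link conditions change, rather than performing a one-off lookup like this.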

Bring on the partners

VeloCloud also brings in a range of partner capabilities as virtual network functions (VNFs). Companies such as Zscaler, IBM, Infoblox, Symantec, ForcePoint and Radar can insert security services (firewalls, VPN tunnelling and so on) on the VeloCloud Edge, into network function virtualisation (NFV) enabled infrastructures.

AT&T’s Indigo Project

AT&T is using VeloCloud to develop an application-aware concept called ‘Indigo’, which builds on a software-centric core and creates a network that is not only software-centric but also data-driven. The service concept blends the software defined network (SDN) with AT&T’s ECOMP orchestration platform, big data analytics, artificial intelligence (AI), machine learning, cybersecurity and 5G elements. Together, AT&T believes, these will create a new level of outcome-driven, data-sharing network for its largest corporate customers.

However…

While virtualisation makes life easier for the WAN customer, it shifts complexity to the network and cloud providers. Many of these operators find their progress towards NFV challenged by a lack of technical maturity and the intricacy of the operational changes required to virtualise networks. Managing a multi-vendor environment increases complexity dramatically, and the shift to NFV requires significant operational changes involving internal processes, culture and a redefined organisational set-up.

Another sticking point for this new approach will be cost. With so many vendors contributing to service delivery, how will it be priced? Will the distributed services from partners end up as an expensive network architecture for customers? And with mature standards only just emerging (open efforts such as ONAP’s Amsterdam release are still new), leading-edge customers may have a hard time extracting themselves from such a service. Buyers should carefully map out the parts of the service that are proprietary.

November 29, 2017  2:20 PM

The impact of IT incidents on your business

Bob Tarzey Profile: Bob Tarzey

A new research report from Quocirca, Damage Control – The impact of critical IT incidents, shows the scale of the challenge faced by organisations as they struggle to address the volume of incidents that impact their IT infrastructure, especially those considered critical. The research was sponsored by Splunk.

The average organisation logs about 1,200 IT incidents per month, of which five will be critical. It is a challenge to wade through all the data generated by the events that lead to these incidents and to prioritise dealing with them. 70% say a past critical incident has caused reputational damage to their organisation, underlining the importance of timely detection in minimising impact.

The mean cost to IT of a critical incident is US$36,326; the mean downstream cost to the business is an additional US$105,302. These two costs rise together, suggesting that a high cost to IT is a proxy for poor event and incident management, which has a knock-on effect on business operations.

80% say they could improve their mean time to detect (MTTD) incidents, which would lead to faster resolution times and reduce the impact on the business. The mean time to repair (MTTR) for critical incidents is 5.81 hours; this falls if there are fewer incidents to manage in the first place. On average, a further 7.23 hours are spent on root cause analysis, which is successful 65% of the time.

Duplicate and repeat incidents are a persistent problem. 97% say their event management process leads to duplicates, where multiple incidents are created for the same IT problem; 17.2% of all incidents are duplicates. 96% say failure to learn from previous incidents through effective root cause analysis (RCA) leads to repeat incidents; 13.3% of all incidents are repeats.
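
As a back-of-the-envelope illustration, the Python sketch below applies those headline figures to the average organisation; the derived monthly totals are simple arithmetic on the report’s stated means and percentages, not figures from the report itself.

# Arithmetic on the headline figures quoted above. The derived monthly totals
# are illustrative calculations, not numbers reported in the study.
incidents_per_month = 1200
critical_per_month = 5
duplicate_rate = 0.172      # 17.2% of all incidents are duplicates
repeat_rate = 0.133         # 13.3% of all incidents are repeats

it_cost_per_critical = 36_326         # mean cost to IT, USD
business_cost_per_critical = 105_302  # mean downstream cost to the business, USD

duplicates = incidents_per_month * duplicate_rate
repeats = incidents_per_month * repeat_rate
monthly_critical_cost = critical_per_month * (it_cost_per_critical + business_cost_per_critical)

print(f"Duplicate incidents per month: {duplicates:.0f}")   # ~206
print(f"Repeat incidents per month: {repeats:.0f}")         # ~160
print(f"Indicative monthly cost of critical incidents: ${monthly_critical_cost:,.0f}")  # ~$708,140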

The monitoring of IT infrastructure to log events and identify incidents could be improved; 80% admit they have blind spots, leading to delayed detection and investigation of incidents. The complexity of IT systems and the tools that monitor them leaves many organisations without an adequate, holistic end-to-end view of their IT infrastructure.

Dealing with the volume of events generated by IT monitoring tools is a challenge. 52% say they just about manage, 13% struggle, and 1% are overwhelmed. Those with event management processes which enable them to easily manage the volume of events have a faster mean time to detect incidents and fewer duplicate and repeat incidents.

Quocirca will be presenting the report findings in a series of webinars, in conjunction with Splunk.

Europe 5th December

Americas 7th December

Asia and Australia 12th December


October 16, 2017  11:40 AM

Augment the business, build on the reality of ‘things’

Rob Bamforth Profile: Rob Bamforth
augmented reality, Internet of Things, Virtual Reality

The ability to mix the virtual world of digital content and information – text, sounds, images and video – with the physical world of three-dimensional space we inhabit has long appealed. Science fiction has envisaged this in two ways: the fully immersive synthetic environment of Virtual Reality (VR), or the digital overlay onto the real world of Augmented Reality (AR).

There is no doubting the excitement of the more immersive VR. Although it has been around for many years, the technology now performs sufficiently well for relatively low-cost headsets (sometimes very low-cost cardboard/smartphone combos) to give a truly impressive experience. No wonder VR has been doing well in games and entertainment. It also works well in the ‘infotainment edge’ of selling. This includes pre-visualising high-value goods like premium car ranges or luxury yachts, but it could equally have wider appeal.

While there are many other promising uses for VR in the business world – training, guidance, product servicing etc – users are ‘tethered’ to a physical place while immersed in their virtual world. AR is the ‘mobile-like’ experience that can be carried through the real world and which could make it much more universally applicable.

The rise of Augmented Reality

Awareness of the potential of AR has grown significantly in recent years, thanks most recently, from a public perspective, to the success of games like Pokemon Go. Despite this, some still view AR as the slightly less exciting counterpart to VR. Both now occupy a space increasingly referred to, breathlessly at least by marketing folk, as Mixed Reality (MR).

Most AR applications do not require the user to wear a headset. Simply looking at the real world through a mobile screen – a handheld device such as a smartphone or tablet, or a wearable such as smart glasses – and seeing an overlay of digital information is sufficient. The information presented as an overlay can be sophisticated, three-dimensional and animated, or simply a piece of relevant textual data. The value comes from the contextual juxtaposition of the data with the place, moment and gaze of the user. This is all about enhancing the user experience by making data relevant.

Some of the early demonstrations tended to be cute or entertaining. The use of AR by IKEA in its Place application demonstrates the potential for AR in everyday, pragmatic and useful settings. Place allows users to experience what furniture choices from an IKEA catalog of 2,000 items would look like in their own rooms, before purchase. It deals with lighting and shading and the auto scaling of objects as they are placed. The app is built with widely available innovative technology. Key to its appeal is the simple and direct user experience, coupled with the complete catalog of pre-built items. Smart AR software without the right data will not be effective.

The applications for AR in business are significant, but need to be effectively managed, particularly with respect to how data is used. Despite the occasional need for completeness in applications such as IKEA’s, ’less’ is most definitely ‘more’. Otherwise AR risks simply becoming another way to digitally overload individuals trying to make use of it.

Augmented Internet of Things

Curiously then, another technology that fits very well with AR is the growing use of Internet of Things (IoT) applications. Here again there is a risk of data overload. Some of the key skills required are akin to those of a professional librarian: curatorial and editorial. The pragmatic application of machine learning could automate much of this.

However, the combination of IoT and AR holds immediate promise even without further automation. With so much potential information available from sensors and devices, visualising useful insights can be difficult. How much better if, for example, looking at physical things or places causes relevant data to appear? At a glance, systems could display their wear and load characteristics. Devices in the workplace that have been running too hot or drawing more power than expected could be highlighted as facilities staff walk through and gaze at them. A smart city or smart campus could display Wi-Fi usage hotspots, making invisible radio network coverage and capacity visible to network engineers. In each case, tying relevant information to the place and direction it applies to makes it easier to visualise, understand and mitigate its impact.
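
As a conceptual illustration of that idea, the Python sketch below picks the device nearest to where a user is looking and builds a short overlay label from its latest readings. The device names, fields and thresholds are invented for the example and do not reflect any particular AR or IoT product.

import math

# Invented sample of IoT devices, their positions and latest sensor readings.
DEVICES = [
    {"id": "pump-07", "pos": (12.0, 4.5), "temp_c": 81.0, "power_kw": 3.9},
    {"id": "chiller-2", "pos": (30.5, 9.0), "temp_c": 35.0, "power_kw": 12.1},
]

def nearest_device(gaze_point, devices, max_range=5.0):
    """Return the device closest to the user's gaze point, if any is within range."""
    best, best_dist = None, max_range
    for device in devices:
        dist = math.dist(gaze_point, device["pos"])
        if dist < best_dist:
            best, best_dist = device, dist
    return best

def overlay_label(device, temp_warn=75.0):
    """Build a short AR overlay label, flagging devices that are running hot."""
    flag = " (running hot)" if device["temp_c"] > temp_warn else ""
    return f"{device['id']}: {device['temp_c']:.0f} °C{flag}, {device['power_kw']} kW"

target = nearest_device((11.2, 5.0), DEVICES)
if target:
    print(overlay_label(target))  # -> pump-07: 81 °C (running hot), 3.9 kW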

The importance of AR, unlike VR, is the way it is rooted in real places and real things. While there has been a lot of hype in both areas, finding commercially justifiable business use cases has been harder. In combination, IoT and AR are worth more than the sum of their parts: one adds real-time data and connections to the real world; the other places it visually in context to deliver the anticipated benefits. Now would be a great time to explore both in combination.


October 5, 2017  10:13 AM

Google challenges AWS and the hybrid cloud with its cloud platform strategy

Bernt Ostergaard Profile: Bernt Ostergaard

The recent Nordic Cloud Summit in Stockholm presented Google as a new, enterprise focused cloud and infrastructure company. Today, Google is still a minnow in the global enterprise cloud market. But that will change, according to the Google speakers led by Eva Fors, Head of Google Cloud in the Nordics.

Google provided no market figures, but management claims that GCP is the fastest-growing part of Google’s business. However, Nordic system integrators like Tieto that support cloud migration put the revenue accruing from the Google Cloud Platform (GCP) in the 1-2% range of total cloud migration revenues. Google is targeting Global 3000 companies in less regulated, data-intensive vertical markets like retail and manufacturing. GCP is already beefing up technical support for its indirect sales channel, which comprises more than 10,000 ISV partners.

Google is clearly determined to commercially exploit its PaaS potential, especially since Diane Greene took over the Google cloud business in 2015. At the Summit, Google set out the scale of its global network:

GCP operates across Google’s global infrastructure. Over the past six years, Google claims to have invested more than 30 billion USD in its global infrastructure. Today, 100,000 miles of fibre and 8 subsea cables support Google’s own traffic. Google claims that its network is traversed by 40% of global Internet traffic. The infrastructure has more than 100 edge points of presence and more than 800 ‘global cache edge nodes’.

Google GCP vs. Amazon AWS?

GCP’s direct competitor is Amazon Web Services (AWS), with its Migration Acceleration Program (MAP) designed to help enterprises migrate existing workloads to AWS. MAP provides consulting support, training and service credits to mitigate risk, build a strong operational foundation and help offset the initial cost of migrations. It includes a migration methodology for executing legacy migrations in a methodical way, as well as a robust set of tools to automate and accelerate common migration scenarios.

The Google GCP strategy seeks to avoid competing directly with AWS on pricing and global availability. Instead, the focus is on a wide range of ancillary apps and services that go far beyond what AWS offers. Essentially, GCP rests on three pillars.

The three GCP pillars

  • Security: For enterprises to reap all the benefits of cloud computing, they need to trust cloud data centres and the Internet connections to them at least as much as they trust their own in-house data centres and corporate networks. Since the emergence of public cloud services, their security record has been a lot better than that of most corporate data centres, and CIOs have generally come to respect the security provided by tier 1 cloud providers. Google wants to demonstrate the same levels of security for data across its own infrastructure.
    • Google supports its customers in preparing for the General Data Protection Regulation (GDPR), especially with regard to data centre management, data sovereignty and the protection of personally identifiable information (PII). One tool demonstrated at the Summit is the DLP API, a layered security service that redacts PII in retained documents or chat conversations. Typically this includes contact information, faces, credit card data, ID document data and so on (a simplified redaction sketch follows this list).
  • Transition and efficiency tools: The shift from on-site computing to cloud computing involves very different steps depending on the industry vertical, geographic locations, and the size and maturity of the company making the move, so a lot of support tools are needed. Google wants to make that journey with its customers and develop tools that facilitate both transition and operations. Google claims to have developed over 500 such tools in the past six months alone.
    • The cloud-based G Suite facilitates the creation, connection, access and control of corporate communications, automating and reducing the time spent on coordination activities.
  • Analytics: Google is synonymous with big data and data crunching. It is all about a company’s ability to collect, mine and quickly extract useful information from vast data hoards. Today, many data scientists are bogged down with maintenance tasks, such as maintaining a Hadoop platform, which is not a good use of their time. Any cloud platform, including GCP, has the potential to take that pain away.
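
To illustrate the kind of redaction described in the DLP bullet above, here is a deliberately simple Python sketch using plain regular expressions rather than Google’s actual DLP API; the patterns are far cruder than a real info-type detector and are for illustration only.

import re

# Simplistic stand-ins for DLP info-type detectors; real services use far more
# robust detection than these regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d -]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with a placeholder naming the info type."""
    for info_type, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text

chat = "Contact anna@example.com or call 020 7946 0123, card 4111 1111 1111 1111."
print(redact(chat))
# -> Contact [EMAIL] or call [PHONE], card [CREDIT_CARD].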

GCP analytical tools

At the Summit, Google demonstrated several analytics tools, including:
    • BigQuery: Google’s serverless, fully managed, petabyte-scale enterprise data warehouse for analytics (a short query sketch follows this list)
    • G-Sheets: addressing spreadsheet complexity with natural-language input and the ‘Explore’ button, which offers a range of data analysis and graphic presentation options
    • Cloud Spanner: Removes the schism between SQL and No-SQL DBs with simple querying and scalability options
    • TensorFlow: an open source software library for building machine learning systems using data flow graphs.
    • Cloud Machine: machine learning engine that delivers a managed TensorFlow service
    • Vision API: Detecting labels, faces, colours
    • Video translation API: understands your video
    • Speech API: speech in, text back
    • Natural language API: analysing and routing feedback from customers
    • Translation API: Google Translate now uses neural network translation to create a higher layer language to translate between languages and words not previously compared.
    • Hardware acceleration with Tensor Processing Unit. This is a custom ASIC that will become commercially available this year.
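
To give a flavour of the developer experience, the sketch below runs a simple aggregation against a public sample dataset using the google-cloud-bigquery Python client. It assumes the library is installed and that application default credentials and a billing project are already configured.

from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

# Count total words per corpus in the public Shakespeare sample dataset.
query = """
    SELECT corpus, SUM(word_count) AS total_words
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus
    ORDER BY total_words DESC
    LIMIT 5
"""

for row in client.query(query).result():
    print(f"{row.corpus}: {row.total_words}")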

Google GCP vs. Hybrid Cloud?

Google’s strategy is to be a one-stop shop for all corporate apps and infrastructure. In this view, hybrid cloud is merely an intermediate step on the way to total cloud computing, where the real performance advantages accrue. Many analysts would disagree with this notion, but the analyst data on hybrid cloud adoption is inconclusive.

Recent Quocirca research on hybrid cloud adoption indicates that overall perceptions of cloud computing are reasonably good and that expectations are being met in most areas. However, when it comes to implementing and using a hybrid cloud, there are issues. They centre on technical and human security, as well as data sovereignty, costs and performance. Catalysts cited for organisations to embrace cloud more rapidly include better overall support for standards, the use of automated APIs and automated workload management across the total logical platform. These are all points that GCP addresses.

So Google may have a point. The pains of hybrid cloud interaction and management may be alleviated by putting all the heavy data driven apps on the GCP cloud, and merely retaining financial and admin systems on-site.

 


October 4, 2017  11:27 AM

Quocirca UK ICO Watch: how likely is the ICO to clobber your organisation for a maximum fine?

Bob Tarzey Profile: Bob Tarzey

Much of the rhetoric about the EU-GDPR, which comes into force in May 2018, relates to the danger of data breaches and the huge fines that may be imposed when they occur. This does not reflect the reality, based on precedents set by the UK Information Commissioner’s Office (ICO), acting under current legislation.

As of writing, since October 20th 2015 the ICO has issued a total of 93 monetary penalties; the average fine has been £86,000, compared to a maximum possible fine of £500,000. No cases published before that date remain on the website, as they are removed after two years.

The ICO enforces two existing laws: the 1998 Data Protection Act (DPA) and the 2003 Privacy and Electronic Communications Regulations (PECR), both based on EU directives. It is the DPA that will be superseded in the UK by a new GDPR-like Data Protection Act currently going through the UK parliament.

53 of the fines issued by the ICO since October 2015 were under PECR, which covers misuse of telephone, SMS and email communications – i.e. nuisance calls and spam messages. The average fine for these over the last two years was £101,500; the largest to date has been £400,000, issued to Keurboom. When it comes to PECR, the more data subjects (citizens) that are impacted, the bigger the fine.

The remaining 40 fines were for data privacy issues under the DPA.

As part of a charity crackdown, mostly in April 2017, 13 fines were issued. Again, the number of data subjects matters. The ICO objected to the way charities, including Guide Dogs for the Blind, Cancer Research UK, the British Heart Foundation and the Royal Society for the Prevention of Cruelty to Animals (RSPCA), used data. This included sharing data with related charities (without the knowledge of data subjects), tele-matching (seeking out information that data subjects did not provide) and wealth screening (using tele-matching to identify the richest donors). The average fine issued to charities was £14,000.

Another 10 fines were for misuse of data or for the potential risk of exposure of data. For example, Basildon Borough Council was fined £150,000 in May 2017 for publishing sensitive data on its website, and Pharmacy2U was fined £130,000 in October 2015 for selling data without the consent of data subjects. The average of these 10 fines was £58,000.

The remaining 17 were for data breaches (the ICO became aware of about 4,000 during the period in question). These range from £400,000 to TalkTalk Telecom for its leak of 157,000 customer records in 2015, down to £400 for a laptop stolen from a historical society. In between were a £60,000 fine to Norfolk County Council for a filing cabinet, found in a second-hand shop, containing seven paper files with sensitive information about children, and £150,000 to Greater Manchester Police for a leaked video interview. The average fine for a data breach was £110,000. For most breaches, the seriousness of the privacy violation is a bigger factor in the size of the fine than the number of data subjects impacted.

Given that the maximum fine the ICO can currently issue is £500,000, the average for data breaches has been just 22% of that. The ICO is pursuing what it believes are the interests of UK citizens; it has limited resources and is not chasing down every breach, only those it considers most serious. Of course, organisations should be wary of the new legislation, and taking care of customer data is good practice anyway. However, don’t take the scare stories being peddled by vendors and the popular press at face value.

 


September 26, 2017  2:57 PM

How printers can be a launchpad for malware attacks

Louella Fernandes Profile: Louella Fernandes
iot, print, Security

HP continues to shine a spotlight on print security with the recent announcement of embedded print security features that aim to mitigate the threat of malware. So how vulnerable are printers to external attacks, and how can businesses limit their risks?

While the prevalence of connected printers and MFPs brings convenience and productivity, it also poses security risks. Along with the capabilities to capture, process, store and output information, most print devices also run embedded software. Information is therefore susceptible at the device, document and network level. Not only can confidential or sensitive data be accessed by unauthorised users – whether maliciously or accidentally – but network connectivity makes vulnerable print devices potential entry points to the corporate network. Any data breach can be disastrous, leading to internal consequences such as the loss of IP or productivity, as well as external repercussions including brand and reputational damage, legal penalties and loss of customers.

In today’s evolving Internet of Things (IoT) threat landscape, hackers that target printers with lax security can wreak havoc on a company’s network.  Data stored on print devices can be used for fraud and identity theft and once hackers have a foothold, the unsecured print device provides an open door to the network. Compromised devices can be harnessed as botnets and used as launch pads for malware propagation, DDoS attacks and devastating ransomware attacks.

It is unsurprising that external hacking and DDoS attacks are top print security concerns among businesses. And although 95% of businesses indicate that print security is an important element of their overall information security strategy (55% say it is very important, and 40% fairly important), just 25% report that they are completely confident that their print infrastructure is protected from threats.


Mitigating the risk

To address these threats, print devices need robust security protection. Fortunately, more manufacturers are embedding security in new-generation devices. HP’s enterprise printers, for instance, can detect and self-heal from malware attacks through run-time intrusion detection and whitelisting. The newly announced HP Connection Inspector stops malware from “calling home” to malicious servers, blocking suspicious requests and automatically triggering a self-healing reboot. Meanwhile, Xerox’s ConnectKey Technology-enabled family of printers incorporates McAfee whitelisting technology, which constantly monitors for malware and automatically prevents it from running.
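
The underlying technique in both cases is essentially an allow-list applied to what a device is permitted to run or connect to. The Python sketch below illustrates the outbound-connection variant of that idea; it is a conceptual illustration, not HP’s or Xerox’s implementation, and the hostnames are invented.

# Conceptual sketch of "calling home" detection: outbound connections from a
# print device are checked against an allow-list and anything unexpected is
# flagged. Hostnames are invented; this is not any vendor's implementation.
ALLOWED_DESTINATIONS = {
    "firmware.vendor.example.com",  # signed firmware updates
    "telemetry.mps.example.com",    # managed print service reporting
    "smtp.corp.example.com",        # scan-to-email
}

def check_outbound(device_id: str, destination: str) -> str:
    """Allow known destinations; flag anything else as suspicious."""
    if destination in ALLOWED_DESTINATIONS:
        return "allow"
    # A real device might raise a SIEM alert and trigger a self-healing reboot;
    # here we simply report the decision.
    print(f"ALERT: {device_id} attempted a connection to {destination}")
    return "block"

print(check_outbound("mfp-3rd-floor", "telemetry.mps.example.com"))  # allow
print(check_outbound("mfp-3rd-floor", "203.0.113.66"))               # block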

However, it only takes one rogue, unsecured device to weaken security. Whilst progress is being made on embedding security technology in the new generation of printers, the reality is that most organisations have a mixed fleet of devices – old and new, from different manufacturers.

Organisations should therefore undertake a print security threat assessment. Such assessments are commonly offered under a managed print service (MPS) contract, and seek to uncover security vulnerabilities. Quocirca’s MPS study revealed that 31% of organisations have completed such an assessment with another 57% indicating that their assessment is underway. Organisations report that the top goal (65%) for a security assessment is to protect against new, advanced threats.

The most sophisticated security assessments not only make recommendations for device replacement and optimisation, but also offer ongoing, proactive monitoring of devices to identify potentially malicious behaviour. Ultimately this requires that print devices are monitored as part of a broader security platform – HP, for instance, offers integration with security information and event management (SIEM) tools.

The need for a multi-layered security approach

As both internal and external threats continue to evolve, a multi-layered approach to print security is essential to combat the security vulnerabilities that are inherent in today’s networked printers. Unless an organisation regularly tests its defences, it will be at risk of leaving a part of the print infrastructure exposed – enabling a skilled hacker to penetrate the network.

A business can be targeted no matter how big or small, so a comprehensive print security strategy that encompasses threat detection, preventative measures, threat monitoring and analytics alongside incident response and recovery is vital in today’s IoT era.

Further reading:

Quocirca MPS Landscape, 2017

Print Security in the IoT era, 2017


August 11, 2017  9:51 AM

The emergence of a new data-centric management vendor

Clive Longbottom Profile: Clive Longbottom

Security

It doesn’t seem that long ago that there were three main focuses on data security:

– Hardware/application security

– Database security

– Document management security

Each had its own focus; each had its own problems.  Layering the three approaches together often left gaping holes through which those with malicious intent could drive a coach and horses.

However, we are now seeing a new type of vendor coming through: one more in line with what Quocirca has for some time termed a ‘compliance oriented architecture’ (COA).

The focus here is to pay less attention to the things that create or store the data and information, and instead to focus on the data itself and how it flows directly across a set of constituent parties.

For example, if a company depends on application security and that security is compromised, the malicious individual is now within the ‘walled garden’.  Unless identified and locked out, those breaking in with sufficient privilege are free to roam amongst the data held within that application.  The same with database security: break through the onion-skin of that security and there is all the data for the malicious individual to play with.

Document management systems that are solely dependent on underlying databases for storing documents as binary large objects (BLObs) often combine both approaches: they have user/role policies combined with database security – still not a very strong approach.

Instead, if the data is captured at the point of creation and actions are taken from that point on, security can be applied at a far more granular and successful level – the attack vectors for malicious intent are minimised.  Combine this with total control of how data is accessed – via any means, such as a piece of client software, an application-to-application call or a direct SQL/API call – and a different approach to security across a distributed platform with different end users over an extended value chain becomes possible.

One vendor in this space that Quocirca has been talking with is Edgewise Networks. Still operating in beta with a select group of customers, Edgewise applies security directly over the connections between different aspects of the overall system. For example, it identifies connections from devices to applications, and from application to application or service to service. In the case of a database, it can see that the dependent application needs to access it, and so can allow the connection. However, should there be an attempt to access the data via any other means – another application, a direct SQL call or whatever – it can block it. It also logs all of this information, enabling forensic investigation of what has been going on.
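
In outline, that amounts to an allow-list of declared workload-to-service dependencies, with everything else blocked and logged. The Python sketch below illustrates the general approach; it is not Edgewise Networks’ product or API, and the workload names are invented.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

# Declared legitimate dependencies: (source workload, destination service).
ALLOWED_FLOWS = {
    ("billing-app", "billing-db"),
    ("web-frontend", "billing-app"),
}

AUDIT_LOG = []  # every decision is kept for forensic investigation

def authorise(source: str, destination: str) -> bool:
    """Allow only declared dependencies; block and log everything else."""
    allowed = (source, destination) in ALLOWED_FLOWS
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "destination": destination,
        "allowed": allowed,
    })
    if not allowed:
        logging.warning("Blocked %s -> %s", source, destination)
    return allowed

authorise("billing-app", "billing-db")   # permitted: a declared dependency
authorise("laptop-admin", "billing-db")  # blocked: direct access outside the declared flows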

EnterpriseWeb is another company with a compelling approach. It offers an application platform that supports the modeling of complex distributed domains and the composition of dataflow processes. It is fully dynamic, processing functional and non-functional concerns in real-time based on live metadata and real-time state. This means that EnterpriseWeb enforces security, identity and access policies per interaction, ensuring continuous and targeted enforcement. EnterpriseWeb makes it possible to have consistent policy-control over heterogeneous endpoints, systems, databases and devices. Moreover, it can extend security across domains for highly-integrated and transparent operations. It can do this both at the human and machine-level, where it can coordinate the deployment of probes and monitoring applications on to nodes for closed-loop control.

Systems such as those provided by Edgewise Networks and EnterpriseWeb could change the way that organisations operate information security across a diverse, hybrid private/public cloud platform.  By taking control of the interactions between different functional components, data can be secured as it traverses between the functions, and man-in-the-middle attacks can be prevented.  When combined with other approaches such as object-based storage and encryption of data on the move and at rest, an organisation can move away from firewall-style approaches, which are failing as the edge of the network disappears, to a far wider-reaching security approach that is easier to implement and manage.

Sure, Edgewise Networks and EnterpriseWeb are young companies that still must prove themselves, not only as technically viable and innovative in the long term, but also in their ability to market themselves successfully in a world where the technology comes second to the marketing message.


August 8, 2017  9:30 PM

Did Europe miss the SD WAN bus?

Bernt Ostergaard Profile: Bernt Ostergaard

2017 may well become the year that SD WAN (software defined wide area networking) routing and the SD WAN edge-to-cloud infrastructure paradigm are adopted by SMEs globally. It may also be the year in which European telco manufacturing loses a big chunk of the global routing market to nimbler North American and Asian rivals. IDC’s most recent market figures put the global market for SD WAN products and services at $225 million in 2015, rising to $1.9bn this year and growing at a 69% CAGR through to 2021, by which point it would hit $8bn.

The SD WAN industry now counts over 40 manufacturers with global distribution potential. They include all the incumbents (Nokia, Cisco, HPE, Huawei), entrants from neighbouring technologies such as WAN optimisation (Silver Peak), network security (Barracuda), network virtualisation (Citrix) and MPLS services (Aryaka), and pure plays like Viptela, Talari and Peplink. This gives customers a wide range of choice, dictated by configurations, prices, availability, existing infrastructure and demonstrated capabilities in similar vertical industry configurations.

Most importantly, the shift to SD WAN can come with low CapEx and lower OpEx than hitherto, all with improved network performance. This is achieved by combining the company’s existing WAN access channels (fixed, cellular, point-to-point, satellite, etc.), and thus achieving a higher utilisation level across them. SD WAN creates a single virtual access layer and provides the software to give quality of service (QoS) over best-effort links, doing away with costly multi-protocol label switching (MPLS) circuits for latency-sensitive, mission-critical applications.
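
The core mechanism is per-application path selection across a bundle of ordinary links, based on continuously measured link quality. The Python sketch below illustrates the idea; the link names, metrics and thresholds are illustrative assumptions rather than any vendor’s implementation.

# Current health of each access link in the bundle (illustrative figures).
LINKS = {
    "broadband": {"latency_ms": 28, "loss_pct": 0.1},
    "lte": {"latency_ms": 55, "loss_pct": 0.6},
    "satellite": {"latency_ms": 600, "loss_pct": 0.2},
}

# Per-application quality targets (illustrative).
APP_TARGETS = {
    "voip": {"max_latency_ms": 150, "max_loss_pct": 1.0},
    "backup": {"max_latency_ms": 2000, "max_loss_pct": 5.0},
}

def pick_link(app: str) -> str:
    """Choose the lowest-latency link that currently meets the application's targets."""
    targets = APP_TARGETS[app]
    candidates = [
        (stats["latency_ms"], name)
        for name, stats in LINKS.items()
        if stats["latency_ms"] <= targets["max_latency_ms"]
        and stats["loss_pct"] <= targets["max_loss_pct"]
    ]
    if not candidates:  # nothing meets the targets: fall back to the least-bad link
        candidates = [(stats["latency_ms"], name) for name, stats in LINKS.items()]
    return min(candidates)[1]

print(pick_link("voip"))    # -> broadband
print(pick_link("backup"))  # -> broadband (any link qualifies; lowest latency wins)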

An SD WAN infrastructure allows a company to centrally configure and manage its branch office access to cloud resources.

Obviously, with that many companies on the market, the coming years will see consolidation down to the 5-10 global players that this market will support once it reaches maturity in 2020. This may deter large enterprises from going down the SD WAN route just yet – they see this as a vetting period. But for SMEs, now is a good time to engage with SD WAN vendors, who are eager to develop industry-specific configurations.

European SD WAN players?

Nokia, with its Alcatel-Lucent takeover, also acquired the US-based SD WAN company Nuage Networks. Nuage helps service providers including BT, China Telecom, Telefonica and Telia to deliver fully automated, self-service SD-WAN systems that allow enterprise customers to connect their users quickly and securely to applications in private and public clouds. Nuage Networks is the only major ‘European’ foothold in this exploding market; the rest is, to all intents and purposes, niche. In fact, I have only found two European vendors in this space:

  • Swedish vendor Icomera develops hardware/software solutions for passenger Internet access on trains and planes, as well as fleet management and telematics for remote monitoring.
  • In Germany, Viprinet’s hardware/software concatenates different types of access media (such as ADSL, SDSL, UMTS / HSPA+ / 3G, and LTE / 4G) for mobile, ad-hoc and remote location connectivity.

Both Icomera and Viprinet specialise in mobile network access.

Bypassing the stumbling blocks

The SD WAN market is price sensitive, very competitive and capital intensive. So, to enter this market, the VC community needs to be more active, as do public venture funds. Hitherto, we have seen little VC interest in this field, and what interest there is does not seem to be in it for the long haul; VCs prefer the usual three-year get-in, get-out strategy. Public funding, including the huge EU funds in the Horizon 2020 programme, also seems to have bypassed this market opportunity.

The traditionally strong European telco industry has never played particularly well in the consumer and small business space, so manufacturers like Ericsson, Nokia and Siemens may not feel it is in their sweet spot. However, SD WAN is very much software based. Companies like Talari in the US generate as much revenue from software and services as they do from hardware sales. So, European software companies in the logistics and automotive business could build a new line of business in SD WAN using standardised hardware.

The European auto industry should also be very interested in this technology where mobile connections play a key role. Developing 5G-enabled SD WAN could align interests between telco vendors and auto manufacturers.

Now is the time for European software vendors to step up to this challenge. Not only are there the relatively straightforward examples as outlined above, but the emerging world of the internet of things (IoT) also offers a whole raft of new and lucrative opportunities.

It would be a pity to see such a greenfield site of new opportunities defaulted to the incumbent US companies or the highly dynamic and hungry Asian companies. Europe can make a strong play, looking back to its heartlands of strong software innovation.

 


July 26, 2017  3:43 PM

Quocirca UK ICO Watch: GDPR fines may not be as scary as the vendors are telling you

Bob Tarzey Profile: Bob Tarzey

Are you fed up with vendor scare-mongering about the challenge of complying with the General Data Protection Regulation (GDPR) and the huge fines heading your way? UK-based organisations may be better off looking at the precedents set by the Information Commissioner’s Office (ICO), the body with responsibility for enforcing data protection in the UK. How the ICO has enforced the existing Data Protection Act (DPA) may provide guidance for the future.

First, let’s get Brexit out of the way: the UK government stated its commitment to data protection in the Queen’s Speech following the June 2017 general election, confirming that the GDPR will be implemented in the UK. The ICO has also confirmed this directly to Quocirca.

Under the DPA, the ICO has had the power to instruct organisations to undertake certain actions to better protect personally identifiable information (PII). In serious cases, it can issue enforcement notices and, in extreme cases, monetary penalties, up to a current maximum of £500K. It also brings prosecutions against individuals who have abused PII. For example, the July 2017 case against the Royal Free London NHS Foundation Trust for mis-sharing data with Google DeepMind resulted in an undertaking, not a fine.

The ICO is open about its activities; it publishes actions taken on its web site and each case where it has taken action remains there for about 2 years. As of writing, since June 2015 the ICO has issued 87 monetary penalties, 52 undertakings and 35 enforcement notices. It has also brought 31 prosecutions. The DPA is not the only legislation considered by the ICO in taking these actions. It also enforces the 2003 Privacy and Electronic Communications Regulations (PECR), perhaps best known for the so-called Cookie Law, but also for limiting the use of spam SMS/email and nuisance phone calls.

The ICO’s monetary penalties

The average fine issued in the last two years has been £84K; 17% of the maximum. The two largest fines to date have been £400K: one under the DPA to TalkTalk Telecom for its widely publicised 2015 leak of 156,959 customer records, and one under PECR to Keurboom Communications for 99.5M nuisance calls.

Of the 87 fines, 48 were PECR related (average £95K). A further 13 were to charities for mis-use of data (average £14K). 8 were for some sort of data processing issue (average £68K) and 18 for data leaks (average £114K). A future blog post will look at the nature of these 18 data leaks.

The ICO also maintains and publishes a spreadsheet of data security incident trends, which lists all the UK data leaks it has become aware of; these number 3,902 since June 2015. So, the 18 fines issued for data leaks represent less than 0.5% of all cases the ICO could have considered.
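
A quick back-of-the-envelope check of those proportions, using the figures quoted above:

# Simple arithmetic on the figures quoted in this post.
average_fine = 84_000     # average ICO fine over the last two years, GBP
maximum_fine = 500_000    # current maximum monetary penalty, GBP
data_leak_fines = 18      # fines issued for data leaks
known_leaks = 3_902       # data leaks the ICO became aware of since June 2015

print(f"Average fine as a share of the maximum: {average_fine / maximum_fine:.0%}")  # 17%
print(f"Known leaks that attracted a fine: {data_leak_fines / known_leaks:.2%}")      # 0.46%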

The ICO is too resource-stretched to pursue every data leak. As you would expect, it prioritises the worst incidents. Even then, it is reluctant to fine and has rarely come near to imposing the maximum penalty. The ICO’s job is to protect UK citizens’ data, not to bring down UK businesses. Sure, the ICO will have broader powers, and the possibility of imposing higher penalties, under the GDPR. However, if the ICO chooses to use these new powers with the same discretion as it has under the DPA, any data manager who has ensured their organisation pays due attention to the way it handles PII should not be losing too much sleep.

Quocirca presented these data, and some other findings from its ICO research, at a recent webinar sponsored by RiskIQ, which can be viewed HERE.


July 11, 2017  1:45 PM

The importance of Operational Intelligence in emergency situations

Clive Longbottom Profile: Clive Longbottom

At a recent roundtable event organised by Esri UK, representatives of the North Wales police force, Wessex Water and the Environment Agency talked about how they were using geographic information systems (GIS) in their work to provide Operational Intelligence (OI).

The focus of the discussions revolved around how each organisation needed to deal with emergency situations – particularly around flood events.

For the Environment Agency, the focus is shifting from a reactive response to a more proactive position.  Using a mix of internet of things (IoT) devices combined with more standard meteorological forecasting and satellite data, the Agency is aiming both to avoid flooding by dealing with issues before they become problems and to respond better to problems when they are unavoidable.

An example here is using topographic data to predict water runoff in upland areas and then using analytics to better understand what that may mean further downstream.  Nick Jones, a senior advisor at the Agency, described how it also uses real-time analytics for OI; an example is holding stocks of items such as mobile flood barriers, but being able to get them to the point of greatest need at the right time via a just-in-time logistics model.  This allows for the optimisation of inventory and avoids issues with the public when they find that items that could have prevented a flood were available, but in the wrong place.

Wessex Water also has to respond to such events.  Floods can force sewage up into streets and gardens, or even into people’s homes.  Andy Nicholson, Asset Data Manager at the company, described how it is using GIS-based OI to better prioritise where to apply its resources in emergency situations.  For example, partnering with the Environment Agency to gain access to its data means that Wessex Water can track the progress of a flood, advise its customers of possible problems and allocate people and other resources to mitigate issues, working to stop, redirect or slow down the effects of a flood event in the areas it has responsibility for.

Likewise, the North Wales police force has a responsibility to citizens.  Dave Abernethy-Clark, a PC with the force, described how he came up with Exodus, an OI system using GIS data to give the force a better means of dealing with vulnerable people during events such as a flood (it could equally be a fire, civil disturbance or any other event).  Again, the real-time and predictive use of OI can be invaluable: it is better to evacuate a vulnerable person before an event overtakes them.

However, doing this based on just basic safety assessments can mean that a vulnerable person is removed from their property when there is actually little real need to do so.  OI ensures that only those who are very likely to be impacted are identified and dealt with, minimising such upsets and saving resource costs and time.

Andy Nicholson from Wessex Water also emphasised that OI should never be a single-data-set approach.  By pulling together multiple data sets, the end result is far more illuminating and accurate.  The use of a flexible front end is also important: he described how one event resulted in a complex situation appraisal being shown on a screen.  From that view, it looked like the event could be a major one that would impact a large number of customers.  However, with a few extra filters in place, he managed to narrow it down to just a few core points, which allowed the cause to be rapidly ascertained and the problem dealt with, with minimal impact on just a few customers.

The takeaway from the event was that OI is an increasingly valuable tool for organisations.  The growing capabilities of the tools and the underlying power of the platforms they run on mean that real-time OI is now possible.  The opening up of different organisations’ data sets also means that other organisations can plug directly into existing, useful – and often free – data.

The way these three seemingly disparate organisations work together to deal with an emergency event such as a flood was apparent.  With all of them (plus other groups) able to work against the same underlying data and to use collaborative systems built over the OI platform, the total capability to deal with an event is greatly enhanced.  How this helps all of us should not be underestimated.


