Quocirca Insights


October 13, 2016  7:07 AM

Nok Nok adds a risk engine for FIDO driven authentication

Bob Tarzey

In February 2014, Quocirca reviewed the FIDO (Fast IDentity Online) standard for authenticating consumers to web service providers (I am not a dog, FIDO a new standard for user authentication). In 2014, the FIDO Alliance had attracted over 100 supporters; the site now lists around 250. Quocirca compared FIDO with the SSL/TLS standard for authenticating online resources to users, noting that FIDO provided assurance in the other direction, that users were who they said they were.

The standard is championed by Nok Nok Labs, which sells an Authentication Server to enable web service providers to implement FIDO-enabled web applications. In September 2016 Nok Nok announced it was taking its authentication to a new level with the introduction of a risk engine. The aim is to mitigate mobile fraud, a growing need as users increasingly access online services from mobile devices, which are also often used as a second factor of authentication for those services.

The risk engine takes a number of risk signals into account, enabling a risk score to be calculated and evaluated before a user is authenticated. These include (a rough scoring sketch follows the list):

  • Geolocation: is the device, and therefore the user, in an expected location?
  • Travel speed check: is the location of the current access request consistent with the location of the last one? This helps to identify device spoofing by criminals in locations remote from that of the legitimate user.
  • Shared device check: make sure there is not an excessive number of users on a device; only one will be acceptable if a device is registered as not shared.
  • Multiple device check: has the number of devices used to access a given online service increased and is there a known reason for this?
  • Friendly fraud prevention: authentication is only accepted if a user-specific biometric is used to activate a device when it is shared by multiple users.
  • Device health check: is the device configured as expected and is there any indication it has been tampered with?
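
How such signals are weighed against each other is Nok Nok's own business; purely as an illustration of the principle, here is a minimal Python sketch of combining flags into a score and comparing it with a threshold. The signal names, weights and threshold are invented for the example and are not taken from the product.

    # Illustrative only: a toy risk scorer combining signals like those listed above.
    # Weights and the 0.5 threshold are arbitrary assumptions, not Nok Nok's model.
    WEIGHTS = {
        "unexpected_location": 0.3,    # geolocation check failed
        "impossible_travel": 0.4,      # travel speed inconsistent with last access
        "excess_users_on_device": 0.2,
        "new_device_unexplained": 0.2,
        "no_user_biometric": 0.3,      # shared device activated without a user-specific biometric
        "device_tampered": 0.5,        # device health check failed
    }

    def risk_score(signals):
        """signals: dict of signal name -> bool (True means the check raised a flag)."""
        return sum(WEIGHTS[name] for name, flagged in signals.items() if flagged)

    def allow_authentication(signals, threshold=0.5):
        return risk_score(signals) < threshold

    signals = {"unexpected_location": True, "impossible_travel": False,
               "excess_users_on_device": False, "new_device_unexplained": True,
               "no_user_biometric": False, "device_tampered": False}
    print(risk_score(signals), allow_authentication(signals))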

Strong authentication on mobile devices provides consumers with an experience similar to that which business users get from single sign-on (SSO). However, this would be the case whether FIDO was used or not. The real benefit of FIDO is the ease of deployment of strong authentication for consumers by web service providers. Pre-built products such as the Nok Nok Authentication Server make deployment even easier and the addition of the risk engine makes the authentication stronger than ever.

The FIDO Alliance is attempting to drive demand and influence the regulatory approach to authentication. In August 2016, the European Banking Authority (EBA), an independent EU authority, released draft regulatory technical standards (RTS) on strong authentication. In the run-up to this release, the alliance lobbied the European Commission, putting forward suggestions that have been taken into account. For example, that “Payment services providers be able continuously to adapt to evolving fraud scenarios” and, with regard to “the use of a mobile devices as an authentication element as well as for the reading or storage of another authentication element”, it was said that “the majority were of the view that this should be possible as long as the strong customer authentication procedure mitigates the inherent risks of the mobile device being compromised”. Both of these points are only really achievable with the sort of capabilities provided through the new risk engine.

As for Nok Nok itself, it says business is good, although pilots are taking longer than hoped. It is looking to system integrators to help get the message out there to drive FIDO adoption and Nok Nok’s own software sales. Web service providers often say they want to deliver security, and standards help enable this. Saying it is one thing; actually getting it done is another.

October 11, 2016  12:55 PM

DRM 2.0

Bob Tarzey

There is nothing new about the need for digital rights management (DRM). However, what DRM tools are expected to achieve has changed over the last decade or so. DRM aims to limit what can be done with copyrighted and sensitive material by asserting access controls.

Such material has become more likely to be shared between organisations linked by online business processes. This usually involves various cloud storage platforms as well as each organisation’s internal systems. Furthermore, access is required via an increasing multitude of end user devices, a challenge for Microsoft, which many have turned to for DRM in the Windows-dominated past. The content involved is also more likely than ever to be the target of online theft by cyber-criminals, business competitors and/or nation states.

All this means DRM systems have had to become both more flexible in the way they support legitimate users and more secure when it comes to blocking illegitimate ones. The latter need has led to native encryption support appearing in DRM offerings, which leads to the additional complexity of encryption key management. DRM is also being linked with identity and access management (IAM) systems to help authenticate users and apply policy controls to their use of sensitive content.

There are two basic approaches to DRM:

  1. The document itself knows about access rights and maintains its own audit trail. If the rights change, a new version of the document is issued and old versions might be recalled; it is hard to keep track of what is going on across distributed communities of users. The advantage is that the document is usable wherever it ends up. This is the way older products such as WatchDox (now owned by BlackBerry) worked.
  2. The DRM system is based on a policy server. Here, wherever a document is, it refers back to the server, which asserts access rights and maintains audit trails, all changeable at any time. A drawback is that documents can only be manipulated if the user is online, although there are ways around this, and anyway, situations where users cannot get online are becoming rarer. A minimal sketch of this referral step follows the list.
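
To make the policy server model concrete, here is a minimal sketch of the referral step: before any action on a document, the client asks a central server whether the user may perform it, and every request is written to an audit trail. The class and method names are hypothetical and do not represent any particular vendor's API.

    # Toy policy server for the second DRM model: documents defer to a central
    # authority for access decisions and audit logging. Names are hypothetical.
    import datetime

    class PolicyServer:
        def __init__(self):
            self.policies = {}    # (doc_id, user) -> set of allowed actions
            self.audit_log = []   # append-only trail of every request

        def grant(self, doc_id, user, actions):
            self.policies[(doc_id, user)] = set(actions)

        def revoke(self, doc_id, user):
            self.policies.pop((doc_id, user), None)  # change takes effect immediately

        def check(self, doc_id, user, action):
            allowed = action in self.policies.get((doc_id, user), set())
            self.audit_log.append((datetime.datetime.utcnow(), doc_id, user, action, allowed))
            return allowed

    server = PolicyServer()
    server.grant("contract-42", "alice", {"view", "edit"})
    print(server.check("contract-42", "alice", "edit"))   # True
    server.revoke("contract-42", "alice")
    print(server.check("contract-42", "alice", "view"))   # False; logged either way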

Quocirca reviewed the use of DRM policy servers in a 2014 report. The report was sponsored by Fasoo, a DRM tools vendor using a policy server approach. Fasoo Enterprise DRM (FED) is installed on-premises or can be hosted per customer on a cloud platform such as IBM SoftLayer or Amazon Web Services (AWS). It relies on user endpoints having agents installed to ensure referral back to the policy server. Fasoo has therefore focussed on use cases where the use of an agent can be mandated, such as participation in a given supply chain. FED does provide limited agentless support by allowing content to be rendered within a web browser. Fasoo provides optional file encryption. To date, Fasoo, which is based out of South Korea, has seen most success in Asia and the USA and is yet to really get going in Europe.

Another vendor, FinalCode, emerged from stealth mode two years ago and has innate encryption controls. Its product is provided as either a cloud service (SaaS) or an on-premises virtual appliance. In either case it also relies on an agent installed on the user end-point. FinalCode announced version 5.11 of its product in September 2016. With this release, the cloud version, like the on-premises one, now puts encryption fully in the hands of the data controller using AWS’s Key Management Service (KMS). It has also enhanced IAM support around the SAML (security assertion mark-up language) standard and Microsoft Active Directory. FinalCode enables file owners to give specific viewers of files offline access, but does not recommend this as real-time policy changes and audit logs will be disabled. Quocirca first wrote about FinalCode in January 2016, pointing out how advanced DRM products come closer than ever to meeting the need for a compliance-oriented architecture.
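
As an illustration of the general principle of customer-controlled keys (not FinalCode's actual implementation), envelope encryption with AWS KMS typically looks something like the sketch below: the customer-managed master key never leaves KMS, and only a per-file data key is used locally. The key alias and the use of the Fernet wrapper from the cryptography package are assumptions made for the example.

    # Generic envelope-encryption sketch with AWS KMS (boto3): the customer-managed
    # KMS key stays inside AWS KMS; only a per-file data key is handled locally.
    import base64
    import boto3
    from cryptography.fernet import Fernet

    kms = boto3.client("kms")
    KEY_ID = "alias/customer-managed-key"   # hypothetical key alias

    def encrypt_file_bytes(plaintext_bytes):
        resp = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
        data_key = base64.urlsafe_b64encode(resp["Plaintext"])   # used locally, then discarded
        ciphertext = Fernet(data_key).encrypt(plaintext_bytes)
        # Store the wrapped (KMS-encrypted) data key alongside the ciphertext.
        return resp["CiphertextBlob"], ciphertext

    def decrypt_file_bytes(wrapped_key, ciphertext):
        resp = kms.decrypt(CiphertextBlob=wrapped_key)           # requires access to the KMS key
        data_key = base64.urlsafe_b64encode(resp["Plaintext"])
        return Fernet(data_key).decrypt(ciphertext)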

Vera is an even newer kid on the block, having launched its product just over a year ago. It also has innate encryption. Initially only available as a cloud service, it now has an on-premises version too, with some customers using a hybrid approach, with the policy engine in the cloud and an encryption key server on-premises. The biggest use case to date for Vera has been to support Microsoft Office 365 deployments, but it also supports other cloud stores, such as Box and Dropbox. For IAM it has partnerships with Ping Identity, Okta, Centrify and others. Access controls and encryption are supported using a file wrapper. Browser-based rendering allows read-only access without an agent, but for full editing capability an agent with specific file support is required. This also enables an offline mode which can be time limited, with activity logs synchronised later. Vera also features Dynamic Data Protection for specific file types, including usage stats and an audited chain of custody.

Two other products in this area are Ionic (“privacy enforced [with encryption] by your policy”) and Seclore (“the most advanced, automated and secure enterprise digital rights management”).

Increasing cloud use and cross-organisational interaction, greater mobility and device choice, and evolving security, privacy and compliance requirements are leading to a shift in the way digital rights are managed. A new guard looks set to overrun the old order to provide the necessary support.


October 10, 2016  9:52 AM

Observations from Hitachi Strategy Session day.

Clive Longbottom

Hitachi recently invited around 40 analysts and influencers to an event in Las Vegas to hear what it is up to. What would previously have focused on just one of its divisions, HDS (Hitachi Data Systems), talking about what wonderful storage systems it produces, was instead something far more wide ranging and, overall, more interesting.

Hitachi has, after a long time, realised two things:

  • As a total company, its divisions actually have far more in common than that which makes them different.
  • Each division has been doing some very interesting things for a long time that can be pulled together at a very interesting time in the market.

When looking at some of the Hitachi divisions – such as in the construction, mining and transport industries – it was apparent that Hitachi has been operating in the area of the internet of things (IoT) for a lot longer than anyone has been talking about IoT itself. For example, Hitachi’s trains have always been a complex collection of inter-related sensors, actuators and other devices. Its new trains in the UK will be so full of IoT devices that they will be able to functionally replace most of what Network Rail’s specialist rolling stock does in monitoring the track for any problems. Instead of having only half a dozen specialist trains running as and when they can, Network Rail will receive data from every single Hitachi train that runs – and at full speed.

Consider mining: as well as the vehicles that Hitachi provides into the environment, it also works with third parties to enhance the overall IoT environment. It has carried out an exercise with one large mining company that resulted in massive process optimisation – even down to the level of knowing when it makes economic sense to stop mining for a period of time due to weather issues coming through, the spot price of metals and so on.

As such, Hitachi has created a new group, called Hitachi Insight Group, which serves as an umbrella to pull all the good bits together from the Hitachi divisions and create an overall strategy for how Hitachi will play in the IoT market.
As part of this, it has created an IoT reference platform, which it has called Lumada. Lumada will act as the central core of an IoT architecture; it will be based on open standards, will be adaptable for the future and will also be verified and secure. As part of enabling Lumada to deal with the past as well as the future, it uses technology gained from Hitachi’s acquisition of Pentaho in June 2015 to manage data acquisition from older, more proprietary IoT systems, such as SCADA devices.

However, this is where Hitachi fell back into problems that Quocirca has seen before. The main tent session stated that Lumada was categorically not a platform (it is). This was then contradicted in a couple of sessions afterwards – although one Hitachi representative tried to explain to me that a ‘platform’ had to be a product (it doesn’t) and, as Hitachi was not going to be selling Lumada separately, it was not going to call it a platform (but it does – on its own website, here).

Hitachi also under-messaged what Lumada is likely to be able to do. It only mentioned aggregation devices when pressed by an attendee. It didn’t really get into why Hitachi’s skills in understanding the two-way granularity of the IoT are so important – for example, a train is both an IoT platform in itself, with all those devices in it, and a single IoT device that sends a stream of data to a central environment as required.

Nor did it use the highly differentiated use cases that it could have done to show how Lumada is completely different from other activity in the IoT space. Rather than talking about how the IoT capabilities of the trains in the UK have completely changed how the train operating companies (TOCs) will use a train – moving from a leasing model from rolling stock leasing companies (ROSCOs) such as Lombard North Central or GE to a ‘Train as a Service’ model where the TOC pays Hitachi based on how reliable and available the trains are – Hitachi pretty much just talked about areas that can be heard from any other company that is dabbling in the IoT.

Other areas were not touched on either, such as Hitachi’s involvement in moving fifty 500-tonne Caterpillar and Komatsu mining spoil trucks from human drivers to autonomous driving, and how this has changed how fast the vehicles can move, how close they can drive in convoy and how close they can pass on the road, as well as the savings on parts such as gearboxes and brakes and the avoidance of accidents.

It’s a pity.

It is a given that Hitachi will have engineered any solution to the nth degree. It is a given that the technology will be at least up with the best, and often ahead of them. The problem for Hitachi is that it has to stop being just an engineering company and start true marketing – otherwise it will remain a big player in the markets where it is already known – but pretty invisible outside of those.


October 7, 2016  1:03 PM

A ‘Smart’ Product is not a ‘Smart’ Solution

Bernt Ostergaard

The term ‘smart’ suggests that a product helps the user solve an everyday problem. However, solutions to problems mostly require a number of different man-machine systems to interact: they must register, analyse, take action and monitor outcomes. Data from health monitors needs to feed into a professional decision system such as the family doctor, the hospital specialist or an AI system. This ensures that examination, diagnosis and the treatment regime result in a beneficial outcome. That requires a lot more than just a ‘smart’ product.

The Smart Health Monitor Watch

So what issues does a smart product like the Apple Watch or Fitbit alleviate or solve? On the health front, they can monitor specific body activities such as heart rate, number of steps walked, stairs climbed, exercise done, sleep rhythms etc. It puts some numbers on our daily activities. However, there are no studies that have shown smart watch wearers to be healthier than comparable non-smart-watch users. Also, the smart watch data does not feed into any public health system. It’s not really data that you would take along to your family doctor. Certainly, no professional health worker would rely on the precision of consumer devices. The standard of today’s professional medical monitoring equipment is in a whole different league.


Solutions require registration, analysis, action and monitoring outcomes

For normal exercise routines involving swimming, biking and running the smart watch can document performance parameters. We can use these to compare performance with friends (but mostly with ourselves). It makes it more fun to work out, and introduces an element of competition even when running alone or on a treadmill. However, serious athletes who rely on precise performance measurements are disappointed by the significant margins of error in consumer devices. Some research indicates up to 20% margins of error in pulse measures for example. Just look at Apple’s own assessment of Apple Watch measurements (https://support.apple.com/en-us/HT204666):

… skin perfusion varies significantly from person to person and can also be impacted by the environment. If you’re exercising in the cold, for example, the skin perfusion in your wrist may be too low for the heart rate sensor to get a reading. Motion is another factor that can affect the heart rate sensor. Rhythmic movements, such as running or cycling, give better results compared to irregular movements, like tennis or boxing.

The Smart Home

Devices like Google’s Nest and Samsung’s SmartThings hub collect data from video cameras, movement sensors, smoke detectors and utility (heating, electricity, water) metering etc. Data is analysed and made available on your mobile phone, and alarms go off if certain data patterns are identified. So a window being opened, or smoke being detected can trigger alarms and send out alerts across fixed and mobile networks. But can you react in time?
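
As a rough illustration of that pattern-to-alert step (and not how Nest or SmartThings actually implement it), a hub's rule engine can be reduced to something like the sketch below; the rules and the notify() stub are invented for the example.

    # Toy smart-home rule engine: sensor events matched against simple rules.
    # Rule conditions and the notify() stand-in are illustrative assumptions.
    RULES = [
        {"sensor": "window", "value": "open",     "alert": "Window opened while armed"},
        {"sensor": "smoke",  "value": "detected", "alert": "Smoke detected"},
    ]

    def notify(message):
        print(f"ALERT -> mobile app / SMS: {message}")   # stand-in for a push notification

    def handle_event(event, armed=True):
        for rule in RULES:
            triggered = event.get(rule["sensor"]) == rule["value"]
            if triggered and (armed or rule["sensor"] == "smoke"):
                notify(rule["alert"])

    handle_event({"window": "open"})                   # alerts when the system is armed
    handle_event({"smoke": "detected"}, armed=False)   # smoke alerts fire regardless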

Interestingly, insurance companies who 5-10 years ago offered lower home insurance premiums to customers with alarm systems installed have mostly reduced these discounts, as the systems had shown limited deterrent effect. Burglars are long gone before police and/or security company people arrive on the scene. When ‘smart’ security devices are sold without the corresponding support organisation that can dynamically address and resolve the issue, they risk offering a false sense of security, by ignoring the fact that burglars are getting smarter as well.

Products are just Lego blocks

As we develop products loaded with more processing power, manufacturers should be careful to set the right customer expectations and focus on exactly what the ‘smart’ function is and how it can contribute to problem resolution.

An illustration of this is the ‘self-driving car’ innovations that are changing our whole concept of personal transportation. Manufacturers have avoided the ‘smart’ car moniker – in part because there is a smart car brand [Smart Automobile, a division of Daimler AG], but also because the focus is specifically on the driving function of an individual car. The self-driving car is not just about cars with navigation systems and cameras, it’s about new road systems, new energy management, new legislation etc. all aimed at reducing the number of cars on the roads, reducing pollution levels and the number of accidents they are involved in.

Whether something is ‘smart’ or not should be judged by the buyer rather than the seller. ‘Smart’ should not be a product category in and of itself, because ‘dumb’ versions of the product may actually be a lot more user friendly. Continuously pushing smart-labelled solutions into the marketplace that continue to underwhelm users may ultimately kill the product – smart watches and smart homes certainly run the risk of a consumer backlash.


September 26, 2016  10:44 AM

IBM – something old, something new, something borrowed, still Big Blue?

Clive Longbottom

IBM recently held its customer event, Edge, in Las Vegas. Although totally new announcements were a little thin on the ground, there were various items that are newsworthy.

Starting with the old. IBM continues to bang the ‘Innovation’ drum that it has been banging for the best part of twenty years. Quocirca has long cautioned against a pure focus on innovation, recommending a balanced approach between improvement, innovation and invention. Every discussion seemed to have ‘innovation’ in it – a message that is now somewhat non-innovative in itself.

Alongside this was more messaging around the mainframe. By now, even the most hardened ivory-towerists must accept that the mainframe just refuses to die, and IBM’s increasing focus on it as a different workhorse under either a straightforward ‘zSystem running Linux’ or the better ‘LinuxOne’ moniker has breathed more life into the monster.

Indeed, Systems GM and EVP Tom Rosamilia stated that he doesn’t expect many (if any) new logos in the zOS camp. However, he also stated his belief that acceptance that different workloads require different underlying platforms would drive mainframe hardware sales.

Storage is a renewed focus. IBM seemed to have spent a long time maintaining a ‘spinning disk is best’ attitude – even after acquiring Texas Memory Systems (TMS). Now, Flash is top of mind – and not just as a hard disk replacement via SSDs. 3D NAND, NVMe, non-volatile DIMMs and so on are now in the mix – along with a stronger focus on object-based storage. Phase-change storage systems are still on the cards as well – just not within the next two generations of systems – something more for 2018 or thereabouts.

Elsewhere, the OpenPower initiative seems to be doing well – at least in the number of companies having joined the group. Rosamilia stated that he would now like to see a focus on products coming out from the group, rather than a focus on acquiring new company names. For the longer-term future of Power, OpenPower is key – and some members of the OpenPower Initiative may need additional help from IBM – for example in scaling the Power architecture and its need for power (with a small ‘p’) for effective use across a wide internet of things/everything (IoT/E) environment.

On the new side, IBM announced new Power platforms, and a raft of agreements between itself and third parties. Whereas with any Intel-focused vendor such agreements could be seen as tick-box offerings, with IBM it is different. Sure, a Linux-based service or application can be run on Power, but it will not run in its most effective manner. The agreements with the likes of Hortonworks, NGINX, Ubuntu and others were all around how these companies were porting to Power – not just moving Linux images over.

On the borrowed side, IBM continues to make the most of open source and is partnering more strongly with many other vendors in the space. Its agreements with Red Hat are being extended, while the ones with Suse continue, alongside the newer ones with Canonical around Ubuntu.

The major borrowing though, leads to the biggest aspect of IBM’s new work. The underlying system that enables Bitcoin to work is called a Blockchain. The first generation Blockchain has been shown to have multiple issues, not least around scalability and security. In order to try and address these issues, many different companies started work on their own Blockchain systems.

Blockchain is the most visible example of a distributed ledger – no one entity has control of the way that the information held within the ledger is stored, which means that all transactions are held in what should be an immutable, auditable and secure manner. However, this multitudinous splitting of work on different Blockchain replacements was heading towards a cliff – the promise of Blockchain would be lost between a raft of proprietary systems that did not interact with each other.
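
For readers new to the mechanism, the tamper-evidence comes from each block carrying a hash of its predecessor, so altering any historical entry breaks every later link. The toy sketch below illustrates only that chaining; it says nothing about Hyperledger's actual data structures, consensus or permissioning.

    # Minimal hash-chained ledger: tampering with any block breaks every later link.
    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(chain, transactions):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain):
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    append(chain, [{"item": "diamond-123", "owner": "A"}])
    append(chain, [{"item": "diamond-123", "owner": "B"}])
    print(verify(chain))                        # True
    chain[0]["transactions"][0]["owner"] = "X"  # tamper with history
    print(verify(chain))                        # False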

IBM had started work on its own OpenLedger product. Then, the Hyperledger initiative was started by the Linux Foundation, bringing together many of these companies that were heading off in different directions.

Now, the concept of the Hyperledger is bearing fruit, with a raft of agreed standards meaning that the basic Blockchain backbone to the system remains open – but can use different plug in modules over areas such as consensus agreement mechanisms.

IBM has been a big player in the Hyperledger initiative – and has taken the lead in how Blockchain security has to be improved. In the past couple of years, there have been a few proven hacks of Bitcoin systems – some external, and some internal. By using mainframe security principles, IBM can make outside attacks almost impossible. By making all the activity of the Blockchain occur in a secure mainframe partition where not even sysadmins have any access, it has applied the same level of security against internal attacks.

One example of a customer using IBM’s Hyperledger is Everledger, a U.K.-based company that currently addresses the need for a trusted chain of provenance in the trading of diamonds. Everledger knows that it can easily port this to pretty much any high-value item, such as fine art, fine wine, vintage vehicles, antiques and so on – a massive market with a lot of money to lose through fraud, so a lot of money to invest to prevent such fraud.

However, Blockchain is not just for high value goods. Examples were shown in the import/export process, where the use of paper-based systems is estimated to have a global cost in the high $10bs per annum. Registering the needed ‘paperwork’ against an object in a distributed ledger can get rid of all of this paperwork – and, again, cut out opportunities for fraud.

Beyond this, a distributed ledger can also be used at the consumer level – a place for someone to lodge their house title deeds, their will, photographs of their loved ones – anything that has a value to them.

To this end, IBM has made its Hyperledger available on its development cloud, Bluemix, for any developer to look at and play with. It has made some strides in making it easy to use, with developers not needing too much knowledge of how a distributed ledger works. Quocirca will be publishing a longer report on distributed ledgers soon.

So – still Big Blue: lots of hardware; still moving over to a hybrid ‘cloud-first’ strategy with Softlayer and Bluemix. However, IBM is pushing forward with stuff that previously seemed to stall in its labs.

Will IBM succeed? Well, the elephant has been taught to dance in the past. Maybe this time, it will be waltzing in step with all its partners.


September 12, 2016  4:18 PM

Cellular rises as a driving force for SD-WAN adoption

Rob Bamforth
5G, iot

There is no doubting the impact that software defined networking (SDN) is having in changing traditional approaches to connectivity. Nowhere is this more apparent than in the WAN with software defined wide area networks (SD-WAN).

This combination of cheaper commodity line connections and fast, streamlined hardware, all controlled by remote management software in the cloud, is growing fast. In part this is because it offers lower cost, while still being well managed and secure. It also delivers on the software defined potential for truly flexible connectivity and capacity.

Wireless Access for Transportation Services

Cellular or wireless connectivity is often a part of the SD-WAN proposition, generally as a resilient back-up connection in the event of physical line failure. However, it can also be used when there is no fixed alternative, to provide ad hoc networks, for example, in transport or logistics with Vehicular ad hoc networks (VANETs). A similar approach also works in static locations where the network requirement is short term. This could be an ad hoc extension to connect a remote building site or the emergency services at the scene of a major incident or fire.

Clearly mobile use cases rely completely on wireless connectivity. With the lower capacity available in the past, application functionality has been somewhat restricted. Meeting mobile WAN connectivity needs has also often required highly specialist hardware and unique skills.

However, faster cellular networks with ever improving coverage are becoming widely accessible. This means that with sufficient levels of performance and intelligence in the networking devices, multiple wireless connections can be bonded together for day-to-day, and not just emergency or short term, use.

With the improved coverage of 4G cellular networks, wireless-only WAN connections are now a reality in many locations. Next generation 5G networks are already looming on the horizon – Ericsson, Huawei and Nokia have draft 5G radio prototypes in place, and some operators are already conducting field trials. Many more are planning trials for this year. This opens up opportunities for more capacity for existing mobile WAN use and more flexibility for static WAN locations.

New SD-WAN players, new applications

Companies such as Icomera and Option Wireless have recently been joined in the mobile application space by others specifically offering hardware addressing in-vehicle and transport sector requirements. These include products from some of the larger incumbent networking hardware providers. In addition there are adaptable in-vehicle solutions from fast growing SD-WAN routing specialists such as Cradlepoint, Viprinet and Peplink.

The breadth of applications for which this approach is viable is also growing as the cost of providing connectivity falls. Once it was only cost effective to wirelessly connect cruise ships and railway trains. Now it is possible to connect the small boats which run in-harbour services and regular bus lines. This opens up wider usage to support operational management – fleet, marine or agriculture applications – as well as connectivity for passengers and staff or crew.

The use cases for fixed wireless connectivity are expanding as well. High speed, low cost connection is readily achievable for the increasingly popular ‘pop-up’ locations in retail, entertainment and hospitality. This can now be done without needing to wait for fixed infrastructure to be enabled. Higher wireless capacity will support rich media and video to deliver sophisticated applications to remote building sites and emergency services.

Simplified Management with SD-WAN

Specialist expertise and hardware designed for bonding and efficient network routing is being combined with software in the cloud to make management simpler and the network more secure. Reliable, secure capacity means that existing applications can take advantage of rich media or video. Emerging applications around the Internet of Things (IoT) will also be able to scale up as they rely on data feeds from huge numbers of sensors in far-flung locations.

Cloud is often a major part of the hype surrounding SD-WAN, but does deliver security and flexibility. However, mobile and ad hoc applications also rely on effective exploitation of networking hardware, in particular the cellular element. This still requires specialist knowledge of radio networks and the ability to intelligently bond multiple cellular connections together to obtain the maximum and most reliable capacity.
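
As a rough sketch of the bonding idea (not any vendor's algorithm), traffic can be spread across the available cellular links in proportion to their currently measured capacity, with unhealthy links dropped from the pool; the link names and throughput figures below are invented.

    # Toy link-bonding scheduler: weight traffic by each link's measured throughput
    # and skip links that fail a health check. Figures are invented for illustration.
    import random

    links = [
        {"name": "4G-operator-A", "mbps": 30, "healthy": True},
        {"name": "4G-operator-B", "mbps": 10, "healthy": True},
        {"name": "satellite",     "mbps": 5,  "healthy": False},  # currently failing health checks
    ]

    def pick_link(links):
        usable = [l for l in links if l["healthy"]]
        weights = [l["mbps"] for l in usable]        # weight by measured capacity
        return random.choices(usable, weights=weights)[0]

    # Spread 1000 flows across the bonded links and show the resulting split (~3:1).
    counts = {}
    for _ in range(1000):
        name = pick_link(links)["name"]
        counts[name] = counts.get(name, 0) + 1
    print(counts)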

Increasing use of cloud and software defined networks is delivering smarter management and flexibility to the WAN. To deliver the required levels of performance over wireless connections this has to be married with powerful and effective cellular hardware. Software defined, cloud controlled, and hardware empowered is the way forward.


August 31, 2016  8:49 AM

Dismantling data centre and WAN silos

Bernt Ostergaard

Two data processing trends are joining hands and moving towards wider availability for SMEs: hyperconverged data centre infrastructures and software defined WAN (SD-WAN) connectivity. What was just a year ago a daunting deployment and management challenge has now been packaged and centralised. This allows for much easier adoption by SMEs with cloud applications and enterprises with branch environments.

Both concepts have similar development drivers behind them. The data centre hyperconvergence concept originated in large cloud companies like Amazon, Facebook and Google. They placed an emphasis on low cost, ease of use and rapid self-service that can link WAN-connected sites of very different sizes and capacities. They tried to use existing data centre architectures, but quickly realized that these were a poor match for their demands. They wanted a much cheaper infrastructure to run any application at any scale. That required a software defined infrastructure to break dependence on siloed hardware for additional features and upgrades. As cloud companies they wanted the agility, security and reliability of an on-premises data centre that could be rolled out globally.

Trends in the data centre

Figure 1: The hyperconverged data centre


Essentially, a hyperconverged data centre reorganises the IT infrastructure. It groups multiple IT components (servers, storage devices, internal networking equipment and software) into a single, optimized on-site computing package with IT infrastructure management, automation and orchestration. Providers like Nutanix, SimpliVity and Atlantis Computing offer the software overlay and hardware that can discover, pool and reconfigure data centre assets easily. Resources can be shared by a wider range of corporate applications, and managed using policy-driven processes. However, most converged data centre solutions still rely on external routing devices for WAN access.

Trends in the WAN access

SD-WAN addresses the silo issues related to WAN access by optimizing use of bandwidth and reducing the complexity of network management. SD-WANs use less expensive Internet links in a logical overlay and hardware that intelligently routes traffic via multiple paths. This improves overall application performance.

The SD-WAN converges the WAN access through its ability to support hybrid deployments. Multiple WAN services such as leased lines, MPLS, Internet, cellular and satellite links are combined. The SD-WAN router optimises based on bandwidth, SLA (Service Level Agreement) classes of service, security postures, and pricing. SD-WAN products come from two different vendor groups:

  • Diversified router vendors, who typically have a strong legacy of business in the carrier as well as the enterprise sector, such as Cisco, Citrix, Nokia Networks and Riverbed. They have been providing traditional routers and other networking solutions. These vendors are in the process of adding SD-WAN capabilities, often through acquisitions.

  • Specialist SD-WAN router vendors, who focus on providing highly available connectivity through multiple WAN, bonding and cellular routers. This group includes Talari, CradlePoint and Peplink.

Figure 2: The converged WAN connecting branches to HQ and SMEs to cloud service providers


Route optimization on a packet-by-packet basis ensures that business critical applications running on the SD-WAN get the bandwidth, priority and load balancing needed. An SD-WAN router like Peplink’s Balance One has the ability to detect problems and respond to session connection degradation by switching session packets to other access options. This ensures the required Quality of Service (QoS), so that critical applications don’t experience loss of service. Besides the cost savings, a big driver for SD-WAN in relation to hosted cloud applications is more reliable and resilient connectivity at branch locations.
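
A bare-bones sketch of that failover logic, purely illustrative and not Peplink's implementation: monitor per-link latency and loss, keep a session on its current link while it meets the SLA, and move it to the best compliant alternative when it does not. The thresholds and metrics below are invented.

    # Illustrative SLA-based failover: thresholds and link metrics are assumptions.
    SLA = {"max_latency_ms": 150, "max_loss_pct": 2.0}

    def meets_sla(metrics):
        return (metrics["latency_ms"] <= SLA["max_latency_ms"]
                and metrics["loss_pct"] <= SLA["max_loss_pct"])

    def select_link(current, links):
        """Keep the current link while it meets the SLA; otherwise fail over to the
        best remaining compliant link (here: lowest latency)."""
        if meets_sla(links[current]):
            return current
        candidates = [name for name, m in links.items() if meets_sla(m)]
        return min(candidates, key=lambda n: links[n]["latency_ms"]) if candidates else current

    links = {
        "mpls":     {"latency_ms": 300, "loss_pct": 5.0},   # degraded
        "internet": {"latency_ms": 40,  "loss_pct": 0.1},
        "lte":      {"latency_ms": 70,  "loss_pct": 0.5},
    }
    print(select_link("mpls", links))   # -> "internet"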

The automated management and the real-time application performance visibility that SD-WAN delivers will require some new IT resources enabling staff to redesign, deploy and operate a new SD-WAN environment. Typically, this involves optimized TCP windowing, compression, object caching and content staging. Traffic prioritization and much lower application SLA costs are key to any purchasing of SD-WAN branch connectivity.

With branch deployments in mind, SD-WAN provider Peplink has centralized “Day 2” operations management such as policy changes, adding new applications, security monitoring and image updates. In this way SD-WAN skills are developed and maintained centrally or managed by the cloud provider. Similarly, SMEs can improve WAN performance and lower their costs by deploying SD-WANs with no additional IT skills required. They can let the cloud service provider handle set-up, deployment and management.

By hyperconverging their data centre and letting their cloud provider configure the SD-WAN access, SMEs are ready to engage in big data analysis and the Internet of Things to improve their business performance.


August 30, 2016  10:46 AM

Unified Communications and Collaboration – still missing a vital ingredient?

Rob Bamforth
cloud, Collaboration, Mobile, social media, Unified Communications

Communications technologies are improving markedly and offer many new and exciting ways for people to connect and share. Rarely a day goes by without a new announcement of some innovation or improvement, often wrapped up in marketing terms about ‘improving collaboration’.

But successful and effective collaboration requires much more than just technology. Mainly it requires more emphasis on people, especially the user experience, and the processes that intelligently connect them to the rest of their team.

Too much choice?

The individual user experience from applications and devices has been improving, but part of the challenge is that there are now just too many options and people have their own personal preferences. What works for one person in one circumstance might not work in a different situation or for a different person. Hence the backstop is often the lowest common denominator. Once it was written memos, then phone calls, now it is email.

Email is generally the most used but also most hated communications tool. It is not the ideal tool for collaboration or the coordination of activities, despite often being used for these purposes. Many individuals and organisations would like to get rid of email. Some are even having a go by switching to alternative messaging platforms with more of a social feel.

This rarely solves the issue, which is generally not about the medium of communication, but about getting to a beneficial result from people working together – collaborating – and being better organised.

Too much flexibility?

Ironic then that the most influential technologies of late seem to have been about liberation and flexibility. Mobile, removing the ties of desk and working in, or having to return to, specific locations to access IT. Social, which has given individuals new incentives to share information (sometimes too much) and build communities of common interest. Finally, cloud, which gives more freedom at the points of access yet shifts resources and their management to a virtual ‘central’ location, enabling a multiplicity of use cases, users and modes of access.

Individuals have so much easy to use communication power at their fingertips, eyes, mouth and ears, so they surely must be collaborating better, right?

No. The problem is actually more one of control, or to sound less big-brother-ish or give the impression of needing top-down management, the problem is one of orchestration.

The term orchestration often pops up when referring to managing diverse and ad hoc technology resources to get a desired output. It also fits well when related to getting people to work together towards a common goal – i.e. collaborating.

At a recent communication and collaboration event hosted this summer by Dennis Publishing, it was pretty clear that this was a common thread emerging from the various presentations by speakers from companies including Cisco, IBM, Arkadin and Plantronics, as well as echoing Quocirca’s own thoughts. There is plenty of communication, but not enough collaboration.

What to do about it?

A good first level of orchestration is sorting out presence, i.e. the awareness of who is doing what, where and when they will be free. Even this is challenging; diverse platforms have different indications of ‘availability’ and relying on people to flag their own is equally problematic – most people struggle to keep out-of-office or voicemail messages up to date.

No, this is a problem crying out for more intelligent automation of unified communications. Unfortunately, solutions to this are often only demonstrated in the glossy marketing videos showing a utopian communications future. Here calendar entries are intelligently harvested along with public transport timetables to redirect a desk-bound phone call about a team meeting into a message on a smartphone, which then pops up at just the right time and reminds you to pick up your shopping en route (which your smart fridge has probably ordered, knowing a business dinner engagement has been postponed).

It looks great in the marketing futures, but will do little to help the dispersed team meet its goals today.

So when a unified communications vendor next tells you about how great its tools are for getting your people communicating and collaborating, ask it to explain its approach to orchestration, control and co-ordination. Unifying comms is no longer about adding more and more flexibility, openness and options, but about allowing both the user and the organisation (at a democratic team level) to be more in control.


August 17, 2016  4:35 PM

Not set in concrete – tailoring IoT to the business

Rob Bamforth
Big Data, Business Process, Data scientist, iot

Tailored services and personalised advertising are great, aren’t they? People and organisations are no longer bombarded by things they are not interested in or cannot respond to, only those things that are relevant.

In theory that’s what all the cookies, search histories and other smart stuff on the internet delivers, especially with all the recent attention on the Internet of Things (IoT). The reality is somewhat more patchy. Getting the right data to tune, tailor or personalise effectively is not that straightforward, as many online advertising attempts and email blasts seem to indicate.

IoT presents both a ‘big data’ opportunity and problem. However, a focus on volume of data over and above the other aspects – velocity, variety, veracity, value – will result in more problems than opportunities. These will be less easy to dismiss than misdirected adverts.

First, veracity. Sure, you can’t check everything, but when big assumptions are made based on flimsy or uncorroborated evidence it’s like putting 2 and 2 together and getting pink bananas. Even simple attributes are more complex than they appear. Take the use of a gmail account as a profiling indicator. JoBlo@gmail.com might be several people sharing one account or one of several email addresses used by JoBlo. So are you sure that what appears online to be JoBlo is actually the person in Exeter currently googling to buy a cement mixer?

Verify data before acting upon it. Look to corroborate against other data points. Take tentative steps when using partially verified data. “Can you confirm you are interested in cement mixers? We might have something you would find useful.” Above all, don’t assume; check, verify, corroborate especially with data from sensors. Do you think an aircraft relies on a single indicator of fuel level in each tank?
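
To make the "corroborate before acting" point concrete, here is a small sketch of the aircraft-style approach with made-up readings: take several redundant sensors, act on the agreed (median) value and flag anything that disagrees with it.

    # Toy corroboration of redundant sensor readings: use the median and flag any
    # sensor that disagrees with it by more than a tolerance. Values are invented.
    from statistics import median

    def corroborate(readings, tolerance=0.05):
        """readings: list of fuel-level fractions from independent sensors."""
        agreed = median(readings)
        outliers = [r for r in readings if abs(r - agreed) > tolerance]
        return agreed, outliers

    agreed, outliers = corroborate([0.62, 0.61, 0.63, 0.20])   # one faulty gauge
    print(agreed)     # 0.615 - act on this
    print(outliers)   # [0.2] - investigate rather than act on it alone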

Next, variety – taking diverse information from a mix of sources to build a bigger picture. Interest in cement mixer plus membership of trade body for builders might indicate a professional rather than casual requirement. Then again, it might not. Is this a recurring theme or something new, are there other lines of interest that gel?

How wide does the search for relevant information need to be? That is difficult to know, but since many people are carrying an array of mobile sensors on their body, and industrial as well as consumer devices are increasingly imbued with open connectivity as well as a range of sensors (Internet of Things), there is plenty of data to choose from. Despite reportedly growing numbers of data scientists in the UK, too much data will swamp them or cost too much in terms of analytics. However, collect too little and business opportunities may be missed.

The best way to decide is to step outside of the current silos of management structures and departmental agendas and have someone or a team take an independent operational perspective. Assess each primary process, what data is available, what is unknown, what could be done differently if the unknowns were known, and what might the impact be on the overall process. Then and only then, look into what IT might be available to help produce the data.

Then, velocity. After-the-event insights are great for looking back, saying “I told you so” and apportioning blame, but they do little for making worthwhile improvements to a business process or meeting customer needs. With so much ‘real time’ data gathering and analytics available, some might feel that the problem can be sorted by a major deployment of an all-encompassing project probably codenamed something like “Rolling Thunder”.

This would be a mistake.

Finally, value. Why collect, analyse and report on all this data if there is no real value at the end? You may have identified that JoBlo is a builder with an interest in cement mixers, but if he has already bought one, it is unlikely that he is immediately in the market for another. Ensure that you understand what the desired end result is; whether it is likely; and what value this means for the business.

IoT deployments could have hugely beneficial consequences, but quite honestly in most cases it is too difficult to predict in advance. Take an incremental approach similar to the dev-ops mind-set. Start small, trial the concept, check results, refine, repeat and scale. It is a bit like planning in fog. A leap into the unknown may work out ok, but it most likely will not. This means a lot of budget will have been wasted either on shiny tech that does not work, or changes to the business that are not commercially beneficial.

It is worth remembering that, with all the masses of data, powerful analytics and clouds of compute power and storage, even the biggest and best on the internet still deliver mismatched, untimely or irrelevant adverts to browsers. Focus on and attention to data that matters to the recipient – consumer or business process – is key. Understanding this is much harder than it first appears – perhaps we need more business process scientists as well as data scientists?


August 3, 2016  1:24 PM

Does Sage know its onions, or is it due a stuffing?

Clive Longbottom

Sage Software, the financial accounting software vendor, has recently held its 2016 Sage Summit in Chicago. Over double the size of last year’s event in New Orleans, what did the event have to say about Sage’s future?

In many ways, not a lot. The headline speeches were largely around bringing in A-List celebrities (Gwyneth Paltrow, Zooey Deschanel, Ashton Kutcher, Sir Richard Branson) alongside inspirational people from the world of the Invictus Games and Sage’s own Sage Foundation. In amongst these sessions were dotted little snippets of product information.

Was this a case of there not being any real news to give? Actually no – it was a clever strategy of getting Sage’s brand better known in the US where its problem is that many of its own customers still see it as Peachtree, and are not really aware of what else Sage has to offer.

Last year was really a case of Sage CEO, Stephen Kelly, making a lot of noise to show that Sage had finally arrived at the cloud computing party. Sage Live and Sage One were front and centre, with lots of noise around the ‘c’ versions of Sage 50, 100 and 300. This was to try and head off the encroaching threat of web-native companies such as Xero, KashFlow and others – and it seems to have had a measure of success.

This year was far more a story of maturation and evolution. Cloud was presented as a given, although Kelly was still keen to ensure that everyone understood that Sage will not force any company to move from an on-premise version of its software to the cloud – ever. Sage will obviously make it more and more attractive for companies to make such a move – it will compensate its channel more for moving customers over; it will ensure that companies are aware of the extra capabilities that a globally shared platform can offer in B2B and B2C trading and so on.

The question is, will Sage ever start to purposefully not add specific functionality to its on-premise systems so as to make remaining on that platform not only less favourable, but also less viable for its more conservative customers? Only time will tell.

So, what was new? New customer characterisations – out with SMB, mid-market and larger customers. It was stated that the customer base did not really identify with the terminology (something that Quocirca can also attest to). Instead, we now have start-up and scale-up segments. Nothing too startling about this – but it may well play well with companies that want to be seen as more dynamic than an “SMB”.

At the product level, Kelly was keen to focus on how he sees the need to continue to rationalise the product portfolio, bringing it down from the close to 300 products that had accumulated across what was effectively a global federation of different companies before he joined. This is being done by building out on an open API strategy, which decouples the front end (system of engagement) from the back end (system of record), so providing much greater flexibility going forward.

To the Cloud!

This also enables Sage to make a better play for building an app marketplace – it is introducing a new Integration Cloud that purportedly will allow code-less integration of Sage, public cloud and on-premise systems. If this works as promised, Sage will be able to be a cloud aggregator and broker.

This could, however, bring its own issues. Look at the majority of existing app marketplaces out there. It is worse than cable television – you think you know what you want, but finding it is difficult. You find something that you think is what you want, but it is badly put together and presented. You find just what you want, but it is in a different language. And so on.

Sage will need to be the honest broker in the middle, making the identification of what app is best for the user as easy as possible. It needs to empower the Sage community to rank and score apps to weed out those that are not up to the job. It needs to ensure that it doesn’t allow any third party to water down its stated commitment to joining its customers in a strategy of trust and security.

This could be further complicated based on some of the working examples Sage showed from its integrations with other products. One showed how it integrated into TomTom Fleet Manager, tracking an employee’s movements for mileage expenses and so on. It was said that this could also then be integrated into a time charging model, for example where a professional services employee enters a customer’s building and so can automatically start charging the customer for their time.

This is great – as long as it all works and does not become seen as too ‘Big Brother’ by the employee. If it doesn’t work, identifying the root cause and remediating it could be difficult – and who gets it in the neck? Probably Sage.

The rise of the Bot.

The most interesting announcement, though, was something that was very innovative – not only for an accounting company, but for any company. Sage has brought in a very bright person, Kriti Sharma, to look at how artificial intelligence and machine learning can be applied to the world of financial systems. To this end, Sharma has developed Pegg, a bot. Somewhat of a mix of Cortana/Siri and TripIt, Pegg can take input from (at the moment) Slack and Facebook Messenger.

Why? Well, consider expenses – many companies such as SAP Concur (which owns TripIt) and KDS have worked on automating the expense process as much as possible – and yet users still struggle with it. By using a bot, it is possible to more quickly input the expense details in natural English, and Pegg will then deal with the intelligence required to sort it out and post it to the expense system.
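
Purely as an illustration of the kind of natural-language shortcut described (and not Pegg's actual logic), a chat message can be reduced to a structured expense record with a simple pattern match; the pattern and categories below are invented.

    # Toy expense parser: turns a chat message into a structured record ready to
    # post to an expense system. The pattern and categories are illustrative only.
    import re

    PATTERN = re.compile(
        r"(?:spent|expense)\s+[£$€]?(?P<amount>\d+(?:\.\d{2})?)\s+(?:on\s+)?(?P<what>.+)",
        re.IGNORECASE)

    CATEGORIES = {"taxi": "Travel", "lunch": "Meals", "hotel": "Accommodation"}

    def parse_expense(message, user):
        match = PATTERN.search(message)
        if not match:
            return None  # a real bot would ask a follow-up question instead
        what = match.group("what").strip()
        category = next((c for k, c in CATEGORIES.items() if k in what.lower()), "Other")
        return {"user": user, "amount": float(match.group("amount")),
                "description": what, "category": category}

    print(parse_expense("spent £23.50 on taxi to the airport", "jo"))
    # {'user': 'jo', 'amount': 23.5, 'description': 'taxi to the airport', 'category': 'Travel'}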

Sharma is fully aware of the security and other issues that there could be around this, and also keenly aware of the potential power of a natural language interface to financial accounting processes. As such, she is ensuring that small steps are taken to find out what users really want, how those requirements are dealt with and how security is managed along the entire process.

So, is Sage now safe? Not completely, but it is definitely not the turkey waiting to be stuffed. It still has plenty of progress to make, but as was pointed out, the majority of start-up and scale-up organisations around the globe are still using Microsoft Excel and other not-fit-for-purpose means of accounting.

The devil is in the detail – but Sage seems to be positioning itself as an interesting ingredient in an organisation’s business recipe.

