Quocirca Insights


June 19, 2014  8:38 AM

Cloud infrastructure services: find a niche or die?

Bob Tarzey
Uncategorized

Back in May it was reported that Morgan Stanley had been appointed to explore options for the sale of hosted services provider Rackspace. Business Week reported the story on May 16th under the headline “Who Might Buy Rackspace? It’s a Big List”. 24/7 Wall St. reported analysis from Credit Suisse that narrowed this to three potential suitors: Dell, Cisco and HP.

To cut a long story short, Rackspace sees a tough future competing with the big three in the utility cloud market: Amazon, Google and Microsoft. Rackspace could be attractive to Dell, Cisco, HP and other traditional IT infrastructure vendors that see their core business being eroded by the cloud and need to build out their own offerings (as does IBM, which has already made significant acquisitions).

Quocirca sees another question that needs addressing. If Rackspace, one of the most successful cloud service providers, sees the future as uncertain in the face of competition from the big three, then what of the myriad of smaller cloud infrastructure providers? For them the options are twofold.

Be acquired or go niche

First, achieve enough market penetration to become an attractive acquisition target for the larger established vendors that want to bolster their cloud portfolios. As well as the IT infrastructure vendors, this includes communications providers and system integrators.

Many have already been acquisitive in the cloud market. For example, the US number three carrier CenturyLink has bought Savvis, AppFog and Tier-3, while NTT’s system integrator arm Dimension Data has added to its existing cloud services with OpSource and BlueFire. Other cloud service providers have merged to beef up their presence, for example Claranet and Star.

The second option for smaller providers is to establish a niche where the big players will find it hard to compete. A number of cloud providers are already doing quite well at this, relying on a mix of geographic, application or industry specialisation. Here are some examples:

Exponential-E – highly integrated network and cloud services

Exponential-E’s background is as a UK-focussed virtual private network provider, using its own cross-London metro network and services from BT. In 2010 the vendor moved beyond networking to provide infrastructure-as-a-service. Its differentiator is to embed this into its own network services at network layer 2 (switching etc.) rather than at higher levels. Its customers get the security and performance that would be expected from internal WAN-based deployments, which cannot be achieved for cloud services accessed over the public internet.

City Lifeline – in finance latency matters

City Lifeline’s data centre is shoe-horned into an old building near Moorgate in central London. Its value proposition is low latency: its proximity to the big City institutions allows it to charge a premium over out-of-town premises.

Eduserv – governments like to know who they are dealing with

For reasons of compliance, ease of procurement and security of tenure, government departments in any country like to have some control over their suppliers, and this includes the procurement of cloud services. Eduserv is a not-for-profit long-term supplier of consultancy and managed services to the UK government and charity organisations. In order to help its customers deliver better services, Eduserv has developed cloud infrastructure offerings out of its own data centre in the central south UK town of Swindon. As a UK G-Cloud partner it has achieved IL3 security accreditation, enabling it to host official government data. Eduserv provides value-added services to help customers migrate to cloud, including cloud adoption assessments, service designs and on-going support and management.

Firehost – performance and security for payment processing

Considerable rigour needs to go into building applications for processing highly secure data in sectors such as financial services and healthcare. This rigour must also extend to the underlying platform. Firehost has built an IaaS platform to target these markets. In the UK its infrastructure is co-located with Equinix, ensuring access to multiple high-speed carrier connections. Within such facilities, Firehost applies its own cage-level physical security. Whilst the infrastructure is shared, it maintains the feel of a private cloud, with enhanced security through protected VMs with a built-in web application firewall, DDoS protection, IP reputation filtering and two-factor authentication for admin access.

Even for these providers the big three do not disappear. In some cases their niche capability may simply see them bolted on to bigger deployments, for example, a retailer off-loading its payment application to a more secure environment. In other cases, existing providers are starting to offer enhanced services around the big three to extend in-house capability, for example UK hosting provider Attenda now offers services around Amazon Web Services (AWS).

For many IT service providers the growing dominance of the big three cloud infrastructure providers, along with the strength of software-as-a-service providers such as salesforce.com, NetSuite and ServiceNow, will turn them into service brokers. This is how Dell positioned itself at its analyst conference last week; of course, that may well change if it were to buy Rackspace.


June 17, 2014  9:58 AM

Cloud orchestration – will a solution come from SCM?

Clive Longbottom
Business, Cloud Computing, Configuration management, DevOps, Git, IBM, Programming, Serena

Serena Software is a software change and configuration management vendor, right?  It has recently released its Dimensions CM 14 product, with additional functionality driving Serena more into the DevOps space, as well as making it easier for distributed development groups to work collaboratively through synchronised libraries with peer review capabilities.

Various other improvements, such as change and branch visualisation and the use of health indicators to show how “clean” code is and where any change is in a development/operations process, as well as integrations into the likes of Git and Subversion, mean that Dimensions CM 14 should help many a development team as it moves from an old-style separate development, test and operations system to a more agile, process-driven, automated DevOps environment.

However, it seems to me that Serena is actually sitting on something far more important.  Cloud computing is an increasing component of many an organisation’s IT platform, and there will be a move away from the monolithic application towards a more composite one. By this, I mean that, depending on the business’s needs, an application will be built up on the fly from a set of functions to support a specific business process.  Through this means, an organisation can be far more flexible and can ensure that it adapts rapidly to changing market needs.

The concept of the composite application does bring in several issues, however.  Auditing which functions were used, and when, is one of them.  Identifying the right functions to be used in the application is another.  Monitoring the health and performance of the overall process is a third.

So, let’s have a look at why Serena could be the one to offer this.

·         A composite application is made up from a set of discrete functions.  Each of these can be looked at as being an object requiring indexing and having a set of associated metadata.  Serena Dimensions CM is an object-oriented system that can build up metadata around objects in an intelligent manner.

·         Functions that are available to be used as part of a composite application need to be available from a library.  Dimensions is a library-based system.

·         Functions need to be pulled together in an intelligent manner, and instantiated as the composite application.  This is so close to a DevOps requirement that Dimensions should shine in its capabilities to carry out such a task.

·         Any composite application must be fully audited so that what was done at any one time can be demonstrated at a later date.  Dimensions has strong and complex versioning and audit capabilities, which would allow any previous state to be rebuilt and demonstrated as required at a later date.

·         Everything must be secure.  Dimensions has rigorous user credentials management – access to everything can be defined by user name, role or function.  Therefore, the way that a composite application operates can be defined by the credentials of the individual user.

·         The “glue” between functions across different clouds needs to be put in place.  Unless cloud standards are improved drastically, getting different functions to work seamlessly together will remain difficult.  Some code will be required to ensure that Function A and Function B do work well together to facilitate Process C.  Dimensions is capable of being the centre for this code to be developed and used – and also as a library for the code to be stored and reused, ensuring that the minimum amount of time is lost in putting together a composite application as required.
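
Pulling the bullet points above together, here is a minimal, hypothetical sketch of what such a capability could look like – a library of versioned functions with metadata, role-based access and an audit trail, from which a composite application is assembled on the fly. It is illustrative only and does not use any Serena Dimensions API; all names are invented.

```python
# Hypothetical sketch: assembling a composite application from a library of
# versioned functions, with metadata, role-based access and an audit trail.
# Illustrative only - it does not use any Serena Dimensions API.
import datetime
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    version: str
    impl: callable
    metadata: dict = field(default_factory=dict)    # e.g. owner, cloud, SLA
    allowed_roles: set = field(default_factory=set)

class FunctionLibrary:
    def __init__(self):
        self._functions = {}    # (name, version) -> Function
        self.audit_log = []     # who used what, and when

    def register(self, fn: Function):
        self._functions[(fn.name, fn.version)] = fn

    def checkout(self, name, version, user, role):
        fn = self._functions[(name, version)]
        if fn.allowed_roles and role not in fn.allowed_roles:
            raise PermissionError(f"{user} ({role}) may not use {name} {version}")
        self.audit_log.append((datetime.datetime.utcnow(), user, name, version))
        return fn

def compose(library, user, role, steps):
    """Build a composite application as a pipeline of library functions."""
    fns = [library.checkout(n, v, user, role) for n, v in steps]
    def composite(payload):
        for fn in fns:
            payload = fn.impl(payload)    # the 'glue' between functions
        return payload
    return composite

# Usage: two trivial functions stand in for services in different clouds.
lib = FunctionLibrary()
lib.register(Function("validate_order", "1.2", lambda d: {**d, "valid": True},
                      {"cloud": "private"}, {"sales", "ops"}))
lib.register(Function("price_order", "2.0", lambda d: {**d, "price": 42},
                      {"cloud": "public"}, {"sales"}))
app = compose(lib, user="alice", role="sales",
              steps=[("validate_order", "1.2"), ("price_order", "2.0")])
print(app({"item": "widget"}))
print(lib.audit_log)    # a demonstrable record of what was used, when
```

The point is not the code itself, but that every element – the function library, the metadata, the role check, the audit log and the glue – maps directly onto a capability a system such as Dimensions already provides for source code.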

Obviously, it would not be all plain sailing for Serena to enter such a market.  Its brand equity currently lies within the development market.  Serena would find itself in competition with the incumbent systems management vendors such as IBM and CA.  However, these vendors are still struggling to come to terms with what the composite application means to them – it could well be that Serena is able to layer Dimensions on top of existing systems to offer the missing functionality.

Dimensions would need to be enhanced to provide capabilities such as discovering and classifying the functions available across hybrid cloud environments.  A capacity to monitor and measure application performance would be a critical need – one that could be created through partnerships with other vendors.

Overall, Dimensions CM 14 is a good step forward in providing additional functionality to those in the DevOps space.  However, it has so much promise that I would like to see Serena take the plunge and see whether it could move it through into a more business-focused capability.


June 11, 2014  10:28 AM

It’s all happening in the world of big data.

Clive Longbottom
Uncategorized

For a relatively new market, there is a lot happening in the world of big data.  If we were to take a “Top 20” look at the technologies, it would probably read something along the lines of this week’s biggest climber being Hadoop, the biggest loser being relational databases and holding their place being the schema-less databases.

Why?  Well, Actian announced the availability of its SQL-in-Hadoop offering.  Not just a small subset of SQL, but a very complete implementation.  Therefore, your existing staff of SQL devotees and all the tools they use can now be used against data stored in HDFS, as well as against Oracle, Microsoft SQL Server, IBM DB2 et al.

Why is this important?  Well, Hadoop has been one of those fascinating tools that promises a lot – but only delivers on this promise if you have a bunch of talented technophiles who know what they are doing.  Unfortunately, these people tend to be as rare as hen’s teeth – and are picked up and paid accordingly by vendors and large companies.  Now, a lot of the power of Hadoop can be put in the hands of the average (still nicely paid) database administrator (DBA).

The second major change that this could start to usher in is the use of Hadoop as a persistent store.  Sure, many have been doing this for some time, but at Quocirca we have long advised that Hadoop only be used for its MapReduce capabilities, with the outputs being pushed towards a SQL or NoSQL database depending on the format of the resulting data, and with business analytics being layered over the top of the SQL/NoSQL pair.

With SQL being available directly into and out of Hadoop, new applications could use Hadoop directly, and mixed data types can be stored as SQL-style or as JSON-style constructs, with analytics being deployed against a single data store.
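
As a sketch of what this means from the DBA’s side, the fragment below runs an ordinary SQL query against data held in HDFS through an ODBC connection to a SQL-on-Hadoop engine. The DSN, table and column names are hypothetical; the point is that the familiar SQL toolchain is unchanged even though the data no longer sits in a traditional relational store.

```python
# Illustrative sketch only: querying data held in HDFS through a SQL-on-Hadoop
# engine, using the same tools a DBA would use against a relational database.
# The DSN, table and column names are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=hadoop_sql")   # ODBC data source for the SQL engine
cursor = conn.cursor()

# Ordinary SQL, even though the underlying store is HDFS rather than
# Oracle / SQL Server / DB2.
cursor.execute("""
    SELECT customer_id, SUM(order_value) AS total_spend
    FROM   clickstream_orders
    WHERE  order_date >= '2014-01-01'
    GROUP  BY customer_id
    ORDER  BY total_spend DESC
""")

for customer_id, total_spend in cursor.fetchmany(10):
    print(customer_id, total_spend)

conn.close()
```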

Is this marking the end for relational databases?  Of course not.  It is highly unlikely that those using Oracle eBusiness Suite will jump ship and go over to a Hadoop-only back end, nor will the vast majority of those running mission-critical applications that currently use relational systems.  However, new applications that need to run against large datasets on a linearly scalable, cost-effective data store could well find that Actian provides them with a back end that works for them.

Another vendor that made an announcement around big data a little while back was Syncsort, which made its Ironcluster ETL engine available in AWS essentially for free – or at worst at a price where you would hardly notice it, with charges based only on the workload being undertaken.

Extract, transform and load (ETL) activities have long been a major issue with data analytics, and solutions have grown around the issue – but at a pretty high price.  In the majority of cases, ETL tools have also only been capable of dealing with relational data – making them pretty useless when it comes to true big data needs.

By making Ironcluster available in AWS, Syncsort is playing the elasticity card.  Those requiring an analysis of large volumes of data have a couple of choices – buy a few acres-worth of expensive in-house storage, or go to the cloud.  AWS EC2 (Elastic Compute Cloud) is a well-proven, easy access and predictable cost environment for running an analytics engine – provided that the right data can be made available rapidly.

Syncsort also makes Ironcluster available through AWS’ Elastic MapReduce (EMR) platform, allowing data to be transformed and loaded directly onto a Hadoop platform.

With a visual front end and utilising an extensive library of data connectors from Syncsort’s other products, Ironcluster offers users a rapid and relatively easy means of bringing together multiple different data sources across a variety of data types and creating a single data repository that can then be analysed.
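
As a reminder of what the extract-transform-load pattern itself involves, here is a generic sketch – not Ironcluster or its connectors – that pulls records from two differently shaped sources, normalises them into one schema and writes a single repository ready for analysis.

```python
# Generic ETL sketch (not Ironcluster): extract from two differently shaped
# sources, transform into a common schema, load into a single repository.
import csv, json

def extract_csv(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"customer": row["cust_name"], "value": float(row["amount"])}

def extract_json(path):
    with open(path) as f:
        for rec in json.load(f):
            yield {"customer": rec["customer"]["name"], "value": float(rec["total"])}

def load(records, out_path):
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer", "value"])
        writer.writeheader()
        writer.writerows(records)

# Hypothetical file names; in practice the sources could be far larger and
# live in S3 or HDFS rather than on local disk.
combined = list(extract_csv("orders.csv")) + list(extract_json("web_orders.json"))
load(combined, "repository.csv")
```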

Syncsort is aiming to be highly disruptive with this release – even at its most expensive, the costs are well below those of equivalent licence-and-maintenance ETL tools, and make other subscription-based services look rather expensive.

Big data is a market that is happening, but is still relatively immature in the tools that are available to deal with the data needs that underpin the analytics.  Actian and Syncsort are at the vanguard of providing new tools that should be on the shopping list of anyone serious about coming to terms with their big data needs.


May 12, 2014  9:31 AM

The continuing evolution of EMC

Clive Longbottom
EMC, Federation, IBM, VMware, XtremIO

The recent EMCWorld event in Las Vegas was arguably less newsworthy for its product announcements than for the way that the underlying theme and message continues to move EMC away from the company that it was just a couple of years ago.

The new EMC II (as in “eye, eye”, standing for “Information Infrastructure”, although it might as well be as in Roman numerals to designate the changes on the EMC side of things) is part of what Joe Tucci, Chairman and CEO of the overall EMC Corporation, calls “The Federation” of EMC II, VMware, and Pivotal.  The idea is that each company can still play to its strengths while symbiotically feeding off each other to provide more complete business systems as required.  More on this later.

At last year’s event, Tucci started to make the point that the world was becoming more software oriented, and that he saw the end result of this being the “software defined data centre” (SDDC) based on the overlap between the three main software defined areas of storage, networks and compute.  The launch of ViPR as a pretty far-reaching software defined suite was used to show the direction that EMC was taking – although, as was pointed out at the time, it was more vapour than ViPR. Being slow to the flash storage market, EMC showed off its acquisition of XtremIO – but didn’t really seem to know what to do with it.

On to this year.  Although hardware was still being talked about, it is now apparent that the focus from EMC II is to create storage hardware that is pretty agnostic as to the workloads thrown at it, whether this be file, object or block. XtremIO has morphed from an idea of “we can throw some flash in the mix somewhere to show that we have flash” to being central to all areas.  The acquisition of super-stealth server-side flash outfit DSSD only shows that EMC II does not believe that it has all the answers yet – but is willing to invest in getting them and integrating them rapidly.

However, the software side of things is now the obvious focus for EMC Corp.  ViPR 2 was launched and now moves from being a good idea to a really valuable product that is increasingly showing its capabilities to operate not only with EMC equipment, but across a range of competitors’ kit and software environments as well.  The focus is moving from the SDDC to the software defined enterprise (SDE), enabling EMC Corp to position itself across the hybrid world of mixed platforms and clouds.

ScaleIO, EMC II’s software layer for creating scalable storage based on commodity hardware underpinnings, was also front and centre in many aspects.  Although hardware is still a big area for EMC Corp, it is not seen as being the biggest part of the long-term future.

EMC Corp seems to be well aware of what it needs to do.  It knows that it cannot leap directly from its existing business of storage hardware with software on top to a completely next-generation model of software that is less hardware dependent without stretching to breaking point its existing relationships with customers and the channel – as well as Wall Street.  Therefore, it is using an analogy of 2nd and 3rd platforms, along with the term “digital born”, to identify where it needs to apply its focus.  The 2nd Platform is where most organisations are today: client/server and basic web-enabled applications.  The 3rd Platform is what companies are slowly moving towards – one where there is high mobility, a mix of different cloud and physical compute models and an end game of on-the-fly composite applications being built from functions available from a mix of private and public cloud systems. (For anyone interested, the 1st Platform was the mainframe.)

The “digital born” companies are those that have little to no legacy IT: they have been created during the emergence of cloud systems, and will already be using a mix of on-demand systems such as Microsoft Office 365, Amazon Web Services, Google and so on.

By identifying this basic mix of usage types, Tucci believes that not only EMC II, but the whole of The Federation will be able to better focus its efforts in maintaining current customers while bringing on board new ones.

I have to say that, on the whole, I agree.  EMC Corp is showing itself to be remarkably astute in its acquisitions, in how it is integrating these to create new offerings and in how it is changing from a “buy Symmetrix and we have you” company to a “what is the best system for your organisation?” one.

However, I believe that there are two major stumbling blocks.  The first is that perennial problem for vendors – the channel.  Using a pretty basic rule of thumb, I would guess that around 5% of EMC Corp’s channel gets the new EMC and can extend it to push the new offerings through to the customer base.  A further 20% can be trained in a high-touch model to be capable enough to be valuable partners.  The next 40% will struggle – many will not be worth putting any high-touch effort into, as the returns will not be high enough, yet they constitute a large part of EMC Corp’s volume into the market.  At the bottom, we have the 35% who are essentially box-shifters, and EMC Corp has to decide whether to put any effort into these.  To my mind, the best thing would be to work on ditching them: the capacity for such channel to spread confusion and problems in the market outweighs the margin on the revenues they are likely to bring in.

This gets me back to The Federation.  When Tucci talked about this last year, I struggled with the concept.  His thrust was that EMC Corp research had shown that any enterprise technical shopping list has no more than five vendors on it.  By using a Federation-style approach, he believed that any mix of the EMC, VMware and Pivotal companies could be seen as being one single entity.  I didn’t, and still do not, buy this.

However, Paul Maritz, CEO of Pivotal, put it across in a way that made more sense.  Individuals with the technical skills that EMC Corp requires could go to a large monolith such as IBM.  They would be compensated well; they would have a lot of resources at their disposal; and they would be working in an innovative environment.  However, they would still be working for a “general purpose” IT vendor.  By going to one of the companies in EMC Corp’s Federation, in EMC II they are working for a company that specialises in storage technologies; if they go to VMware, they are working for a virtualisation specialist; for Pivotal, a big data specialist – and each has its own special culture. For many individuals, this difference is a major one.

Sure, the devil remains in the detail, and EMC Corp is seeing a lot of new competition coming through into the market.  However, to my mind it is showing a good grasp of the problems it is facing and a flexibility and agility that belies the overall size and complexity of its corporate structure and mixed portfolio.

I await next year’s event with strong interest.



May 2, 2014  2:42 PM

Finding new containers for the BYOD genii

Rob Bamforth
Business, BYOD, Data-security, Management, Quocirca, Security

Many headline IT trends are driven by organised marketing campaigns and backed by industry players with an agenda – standards initiatives, new consortia, developer ecosystems – and need a constant push, but others just seem to have a life of their own.

BYOD – bring your own device – is one such trend. There is no single group of vendors in partnership pushing the BYOD agenda; in fact most are desperately trying to hang onto its revolutionary coattails.  They do this in the face of IT departments around the world that are trying equally desperately to hang on to some control.

BYOD is all about ‘power to the people’ – power to make consumer-led personal choices – and this is very unsettling for IT departments that are tasked with keeping the organisation’s resources safe, secure and productive.

No wonder that, according to Quocirca’s recent research based on 700 interviews across Europe, over 23% of organisations only allow BYOD in exceptional circumstances, and a further 20% do not like it but feel powerless to prevent it. Even among those organisations that embrace BYOD, most still limit it to senior management.

This is typical of anyone faced with massive change: shock, denial, anger and confusion all come first and must be dealt with before understanding, acceptance and exploitation take over.

IT managers and CIOs have plenty to be shocked and confused about. On the one hand, they need to empower the business and avoid looking obstructive, but on the other, there is a duty to protect the organisation’s assets. Adding to the confusion, vendors from all product categories have been leaping on the popularity of the BYOD bandwagon and using it as a way to market their own products.

The real challenge is that many of the proposed ‘solutions’ perpetuate a myth about BYOD that is unfortunately inherent in its name, and that damages the approach taken to addressing the issues BYOD raises.

The reality is that this is not, and should not be, centred on the devices or who owns them, but on the enterprise use to which they are put.

The distinction is important for a number of reasons.

First, devices. There are a lot to choose from already today,  with different operating systems, in different form factors – tablets, smartphones etc. – and there is no reason to think this is going to get any simpler. If anything, with wearable technologies such as smart bands, watches and glasses already appearing, the diversity of devices is going to become an even bigger challenge.

Next, users. What might have started as an ‘I want’ (or even an “I demand”) from a senior executive, soon becomes an ‘I would like’ from knowledge workers, who now appear to be the vanguard for BYOD requests. But this is only the start as the requirement moves right across the workforce. Different roles and job responsibilities will dictate that different BYOD management strategies will have to be put in place. Simply trying to manage devices (or control choices) will not be an option.

Those who appear to be embracing rather than trying to deny BYOD in their organisations understand this. Their traits are that they tend to recognise the need to treat both tablets and smartphones as part of the same BYOD strategy and they are already braced for the changes that will inevitably come about from advances in technology.

Most crucially, however, they recognise the importance of data.

Information security is the aspect of BYOD most likely to keep IT managers awake at night – it is a greater concern than managing the devices themselves or indeed the applications they run.

The fear of the impact of a data security breach, however, seems to have created a ‘deer in the headlights’ reaction rather than galvanising IT into positive action. Hence the tendency to try to halt or deny BYOD in much the same way that, in the past, many tried to stem the flow towards internet access, wireless networks and pretty much anything that opens up the ‘big box’ that has historically surrounded an organisation’s digital assets.

Most organisations would do far better to realise that the big box approach is no longer valid, but that they can shrink the concept down to apply ‘little boxes’ or bubbles of control around their precious assets. This concept of containerisation or sandboxing is not new, but still has some way to go in terms of adoption and widespread understanding.

Creating a virtual separation between personal and work environments allows the individual employee to get the benefit of their own device preferences, and for the organisation to apply controls that are relevant and specific to the value and vulnerability of the data.

With the right policies in place this can be adapted to best fit different device types and user profiles. Mobile enterprise management is still about managing little boxes, but virtual ones filled with data, not the shiny metal and plastic ones in the hands of users.
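
As an illustration of the ‘little boxes’ idea, the hypothetical policy model below derives the controls applied inside the work container from the classification of the data and the profile of the user, rather than from the device itself. It is a sketch of the principle, not any particular mobile enterprise management product.

```python
# Hypothetical sketch of data-centric BYOD policy: controls follow the
# classification of the data and the user's profile, not the device.
CONTROLS = {
    # data classification -> controls applied inside the work container
    "public":       {"encrypt": False, "copy_out": True,  "offline": True},
    "internal":     {"encrypt": True,  "copy_out": True,  "offline": True},
    "confidential": {"encrypt": True,  "copy_out": False, "offline": False},
}

def container_policy(data_class, user_profile, device_type):
    policy = dict(CONTROLS[data_class])
    # Tighten, never loosen, based on who and what is asking.
    if user_profile == "contractor":
        policy["offline"] = False
    if device_type == "wearable":
        policy["copy_out"] = False
    return policy

print(container_policy("confidential", "knowledge_worker", "tablet"))
print(container_policy("internal", "contractor", "smartphone"))
```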

For more detailed information about getting to grips with BYOD, download our free report here



April 23, 2014  7:45 PM

Print security: The cost of complacency

Louella Fernandes
Data breach, Data-security, MFP

Quocirca research reveals that enterprises place a low priority on print security despite over 60% admitting that they have experienced a print-related data breach.

A data breach can be damaging for any company, leaving it open to fines, harming its reputation and undermining customer confidence. In the UK alone, the Ponemon Institute estimates that in 2013 the average organisational cost to a business suffering a data breach was £2.04m, up from £1.75m in the previous year.

As the boundaries between personal and professional use of technology become increasingly blurred, the need for effective data security has never been greater. While many businesses look to safeguard their laptops, smartphones and tablets from external and internal threats, few pay the same strategic attention to protecting the print environment. Yet it remains a critical element of the IT infrastructure: over 75% of enterprises in a recent Quocirca study indicated that print is critical or very important to their business activities.

The print landscape has changed dramatically over the past decade. Local single function printers have given way to the new breed of networked multifunction peripherals (MFPs). With print, fax, copy and advanced scanning capabilities, these devices have evolved to become sophisticated document capture and processing hubs.

While they have undoubtedly brought convenience and enhanced user productivity to the workplace, they also pose security risks. With built-in network connectivity, along with hard disk and memory storage, MFPs are susceptible to many of the same security vulnerabilities as any other networked device.

Meanwhile, the move to a centralised MFP environment means more users are sharing devices.  Without controls, documents can be collected by unauthorised users – either accidentally or maliciously. Similarly, confidential or sensitive documents can be routed in seconds to unauthorised recipients through scan-to-email, scan-to-file and scan-to-cloud-storage functionality. Further controls are required as employees print more and more directly from mobile devices.

Yet many enterprises are not taking heed. Quocirca’s study revealed that just 22% place a high priority on securing their print infrastructure. While the financial and professional services sectors consider print security a much higher priority, counterparts in retail, manufacturing and the public sector lag way behind.

Such complacency is misplaced. Overall, 63% admitted they have experienced a print-related data breach, and an astounding 90% of public sector respondents admit to one or more paper-based data breaches.

So how can businesses minimise the risks? Fortunately there are simple and effective approaches to protecting the print infrastructure. These methods not only enhance document security, but also promote sustainable printing practices – reducing paper wastage and costs.

1. Conduct a security assessment

For enterprises with a large and diverse printer fleet, it is advisable to use a third party provider to assess device, fleet and enterprise document security. This can evaluate all points of vulnerability across a heterogeneous fleet and provide a tailored security plan, for devices, user access and end of life/disposal. Managed print service (MPS) providers commonly offer this as part of their assessment services.

2. Protect the device

Many MFPs come as standard with hard drive encryption and data overwrite features. Most also offer lockable and removable hard drives. Data overwriting ensures that the hard drive is clear of readable data when the device is disposed of. It works by overwriting the actual data with random and numerical characters. Residual data can be completely erased when the encrypted device and the hard disk drive are removed from the MFP.
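
The principle behind data overwriting is simple, as the sketch below shows. It is a general-purpose illustration of overwriting a file with random data before deletion, not the firmware routine an MFP vendor would ship, and note that flash storage and wear levelling complicate real secure erasure.

```python
# Illustrative sketch of the data-overwrite principle: replace a file's
# contents with random bytes before deletion. MFP vendors implement this in
# device firmware; this is only a general-purpose demonstration.
import os

def overwrite_and_delete(path, passes=3, chunk=64 * 1024):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                step = min(chunk, remaining)
                f.write(os.urandom(step))   # random data over the original
                remaining -= step
            f.flush()
            os.fsync(f.fileno())            # force the overwrite onto the disk
    os.remove(path)

# overwrite_and_delete("scanned_document.pdf")   # hypothetical spool file
```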

3. Secure the network

MFP devices can make use of several protocols and communication methods to improve security. The most common way of encrypting print jobs is SSL (secure sockets layer), which makes it safe for sensitive documents to be printed via a wired or wireless network. Xerox, for instance, has taken MFP security a step further by including McAfee Embedded Control technology, which uses application whitelisting to protect its devices from corrupt software and malware.

4. Control access

Implementing access controls through secure printing ensures only authorised users are able to access MFP device functionality. Also known as PIN or pull printing, print jobs can be saved electronically on the device, or on an external server, until the authorised user is ready to print them. The user provides a PIN code or uses an alternative authentication method such as a swipe card, proximity card or fingerprint. As well as printer vendor products, there is a range of third-party products, including Capella’s MegaTrack, Jetmobile’s SecureJet, Equitrac’s Follow-You and Ringdale’s FollowMe, all of which are compatible with most MFP devices.
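
The hold-and-release logic behind pull printing is straightforward. The following is a generic sketch of a held queue released by PIN authentication; it is not based on any of the products named above.

```python
# Generic sketch of pull ("PIN and pull") printing: jobs are held until the
# authorised user authenticates at the device. Not based on any named product.
import hashlib, time

class PullPrintQueue:
    def __init__(self):
        self._held = {}   # job_id -> (user, pin_hash, document, submitted_at)

    def submit(self, job_id, user, pin, document):
        pin_hash = hashlib.sha256(pin.encode()).hexdigest()
        self._held[job_id] = (user, pin_hash, document, time.time())

    def release(self, job_id, pin):
        user, pin_hash, document, _ = self._held[job_id]
        if hashlib.sha256(pin.encode()).hexdigest() != pin_hash:
            raise PermissionError("authentication failed - job stays held")
        del self._held[job_id]
        return document           # only now is the job sent to the print engine

queue = PullPrintQueue()
queue.submit("job-001", "louella", pin="4321", document=b"%PDF-...")
print(queue.release("job-001", pin="4321"))   # released at the device
```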

5. Monitor and audit

Print environments are often a complex and diverse mix of products and technologies, further complicating the task of understanding what is being printed, scanned and copied where and by whom. Enterprises should use centralised print management tools to monitor and track all MFP related usage. This can either be handled in-house or through an MPS provider.

With MFPs increasingly becoming a component of document distribution, storage and management, organisations need to manage MFP security in the same way as the rest of the IT infrastructure. By applying the appropriate level of security for its business needs, an organisation can ensure that its most valuable asset – corporate data – is protected.

Read Quocirca’s report A False Sense of Security



April 17, 2014  11:19 AM

Internet of Things – Architectures of Jelly

Rob Bamforth
Big Data, Business, Instrumentation, Internet of Things, SCADA

In today’s world of acronyms and jargon, there are increasing references to the Internet of Things (IoT), machine-to-machine (M2M) or a ‘steel collar’ workforce. It doesn’t really matter what you call it, as long as you recognise it’s going to be BIG. That is certainly the way the hype is looking – billions of connected devices all generating information – no wonder some call it ‘big data’, although really volume is only part of the equation.

Little wonder that everyone wants to be involved in this latest digital gold rush, but let’s look a little closer at what ‘big’ really means.

Commercially it means low margins. The first wave of mobile connectivity – mobile email – delivered to a device like a BlackBerry, typically carried by a ‘pink collar’ executive (because they bought their stripy shirts in Thomas Pink‘s in London or New York) was high margin and simple. Mobilising white-collar knowledge workers with their Office tools was the next surge, followed by mobilising the mass processes and tasks that support blue-collar workers.

With each wave volumes rise, but so too do the challenges of scale – integration, security and reliability – whilst the technology commoditises and the margins fall. Steel collar will only push this concept further.

Ok, but the opportunity is BIG, so what is the problem?

The problem is right there in the word ‘big’. IoT applications need to scale – sometimes preposterously – so much so that many of the application architectures that are currently in place or being developed are not adequately taking this into account.

Does this mean the current crop of IoT/M2M platforms are inadequate?

Not really, as the design fault is not there, but generally further up in the application architectures. IoT/M2M platforms are designed to support the management and deployment of huge numbers of devices, with cloud, billing and other services that support mass rollouts especially for service providers.

Reliably scaling the data capture and its usage is the real challenge, and if or when it goes wrong, “Garbage in, Garbage out” (GiGo) will be the least of all concerns.

Several ‘V’s are mentioned when referring to big data: volume of course is top of mind (some think that’s why it’s called ‘big’ data), generally followed by velocity for the real-timeliness and trends, then variety for the different forms or media that will be mashed together. Sneaking along in last-but-one place is the one most often forgotten – veracity – without which the final ‘V’, value, is lost. Data has to be accurate, correct and complete.

When scaling to massive numbers of chattering devices, poor architectural design will mean that messages are lost, packets are dropped and the resulting data may not be quite right.
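
A hypothetical sketch of the kind of defensive ingestion logic this implies: sequence numbers so that gaps can be detected, acknowledgements so that devices can retransmit, and de-duplication so that retries are not double-counted.

```python
# Hypothetical sketch of defensive ingestion for chattering devices:
# sequence numbers expose gaps, acks allow retransmission, and
# de-duplication stops retries being counted twice.
class DeviceStream:
    def __init__(self, device_id):
        self.device_id = device_id
        self.expected_seq = 0
        self.seen = set()
        self.gaps = []            # veracity: record what we know is missing

    def ingest(self, seq, payload):
        if seq in self.seen:
            return "duplicate"    # retransmission already processed
        if seq > self.expected_seq:
            self.gaps.append((self.expected_seq, seq - 1))
        self.seen.add(seq)
        self.expected_seq = max(self.expected_seq, seq + 1)
        store(self.device_id, seq, payload)
        return "ack"              # device may stop retrying this message

def store(device_id, seq, payload):
    print(f"stored {device_id} #{seq}: {payload}")

stream = DeviceStream("meter-0042")
for seq in (0, 1, 3, 3, 4):       # message 2 lost, message 3 retried
    print(seq, stream.ingest(seq, {"kwh": 0.3}))
print("known gaps:", stream.gaps)
```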

Ok, so my fitness band lost a few bytes of data, big deal, even if a day is lost, right? Or my car tracking system skipped a few miles of road – what’s the problem?

It really depends on the application, how it was architected and how it deals with exceptions and loss. This is not even a new problem in the world of connected things: supervisory control and data acquisition (SCADA) has been dealing with it since well before the internet and its things.

The recent example of problem data from mis-aligned electro-mechanical electricity meters in the UK shows just how easily this can happen, and how quickly the numbers can get out of hand. Tens of thousands of precision instruments had inaccurate clocks, but consumers and suppliers alike thought they were fine, until a retired engineer discovered a fault in his own home that led to the discovery that thousands of people had been overcharged for their electricity.

And here is the problem: it’s digital now and therefore perceived to be better. Companies think the data is fine, so they extrapolate from it and base decisions on it – and in the massively connected world of IoT, so perhaps does everyone else. The perception of reality overpowers the actual reality.

How long ago did your data become unreliable? Do you know? Did you check? Who else has made decisions based on it? The challenge of car manufacturers recalling vehicles will seem tiny compared to the need for terabyte recalls.

Most are rightly concerned about the vulnerability of data on the internet of people and how that will become an even bigger problem with the internet of things. However, that aside, there is a pressing need to get application developers thinking about resilient, scalable and error-correcting architectures; otherwise the IoT revolution could have collars of lead, not steel, and its big data could turn out to be really big GiGo.



April 15, 2014  10:13 AM

Managing a PC estate

Clive Longbottom
Business, BYOD, License, VDI, Virtual Desktop, VirtualBox

Although there is much talk of a move towards virtual desktops, served as images from a centralised point, for many organisations the idea does not appeal.  Whatever the reason (and there may be many, as a previous blog here points out), staying with PCs leaves the IT department with a headache – not least an estate of decentralised PCs that need managing.

Such technical management tends to be the focus for IT; however, for the business, there are a number of other issues that also need to be considered.  Each PC has its own set of applications.  The majority of these should have been purchased and installed through the business, but many may have been installed directly by the users themselves – something you may want to avoid, but which is nowadays an expectation of many IT users.

This can lead to problems: some applications may not be licensed properly (for example, a student licence not permitted for use in a commercial environment); some may contain embedded malware (a recent survey has shown that much pirated software contains harmful payloads, including keyloggers); and unlicensed software definitely opens up an organisation to considerable fines should a software audit be carried out by an external body.

Locking down desktops is increasingly difficult. Employees are getting very used to self-service through their use of their own devices, and expect this within a corporate environment.  Centralised control of desktops is still required – even if virtual desktops are not going to be the solution of choice.

The first action your organisation should take is a full audit.  You need to fully understand how many PCs there are out there, what software is installed and whether that software is being used or not.  You need to know how many software licences you have in place and how those can be utilised – for example, are they concurrent licences (a fixed number of people can use them at the same time), or named-seat licences (only people with specific identities can use them)?

This will help to identify software that your organisation was not aware of, and can also help in identifying unused software sitting idle on PCs.
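
A minimal sketch of the licence-reconciliation side of such an audit is shown below. The inventory records and licence counts are hypothetical; in practice a discovery tool or agent would collect this data automatically.

```python
# Hypothetical sketch of licence reconciliation from a PC software audit.
# Inventory data would normally come from a discovery/agent tool.
from collections import Counter

inventory = [                      # one record per installed application
    {"pc": "PC-001", "app": "OfficeSuite", "used_last_90_days": True},
    {"pc": "PC-002", "app": "OfficeSuite", "used_last_90_days": False},
    {"pc": "PC-002", "app": "CADPro",      "used_last_90_days": True},
    {"pc": "PC-003", "app": "CADPro",      "used_last_90_days": True},
]
licences = {"OfficeSuite": 2, "CADPro": 1}   # entitlements actually purchased

installed = Counter(rec["app"] for rec in inventory)
idle = Counter(rec["app"] for rec in inventory if not rec["used_last_90_days"])

for app, count in installed.items():
    owned = licences.get(app, 0)
    status = "OK" if count <= owned else f"OVER-DEPLOYED by {count - owned}"
    print(f"{app}: installed {count}, licensed {owned}, idle {idle[app]} -> {status}")
```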

You can then look at creating an image that contains a copy of all the software that is being used by people to run the business.  Obviously, you do not want every user within your organisation to have access to every application, so something is needed to ensure that each person can be tied in by role or name to a list of software to which they should have access.

Through the installation of an agent on each PC, it should then be possible to apply centralised control over what is happening.  That single golden image containing all allowable applications can then be called upon by that agent as required.  The user gets to see all the applications that they are allowed to access (by role and/or individual policy), and a virtual registry can be created for their desktop.  Should anything happen to that desktop (machine failure, disk corruption, whatever), a new environment can be rapidly built against a new machine.

If needed, virtualisation can be used to hive off a portion of the machine for such a corporate desktop – the user can then install any applications that they want to within the rest of the device.  Rules can be applied to prevent data crossing the divide between the two areas, keeping a split between the consumer and corporate aspects of the device – a great way of enabling laptop-based bring your own device (BYOD).

As with most IT, the “death” of any technology will be widely reported and overdone: VDI does not replace desktop computing for many.  However, centralised control should still be considered – it can make management of an IT estate – and the information across that estate – a lot easier.

This blog first appeared on FSlogix’ site at http://blog.fslogix.com/managing-a-pc-estate 



April 9, 2014  6:15 PM

Web security 3.0 – is your business ready?

Bob Tarzey
Cloud Services, Internet, Internet security

As the web has evolved, so have the security products and services that control our use of it. In the early days of the “static web” it was enough to tell us which URLs to avoid because the content was undesirable (porn etc.). As the web became a means of distributing malware and perpetrating fraud, there was a need to identify bad URLs that appeared overnight, or good URLs that had gone bad as existing sites were compromised. Early innovators in this area included Websense (now a sizable broad-based security vendor) and two British companies, SurfControl (which ended up as part of Websense) and ScanSafe (which was acquired by Cisco).

 

Web 2.0

These URL filtering products are still widely used to control user behaviour (for example, you can only use Facebook at lunch time) as well as to block dangerous and unsavoury sites. They rely on up-to-date intelligence about all the URLs out there and their status. Most of the big security vendors have capability in this area now. However, as the web became more interactive (for a while we all called this Web 2.0), there was a growing need to be able to monitor the sort of applications that were being accessed via the network ports typically used for web access: port 80 (for HTTP) and port 443 (for HTTPS). Again this was about controlling user behaviour and blocking malicious code and activity.

 

To achieve this, firewalls had to change; enter the next generation firewall. The early leader in this space was Palo Alto Networks. The main difference with its firewall was that it was application aware, with a granularity that could work within a specific web site (for example, applications running on Facebook). Just as with the URL filtering vendors, next generation firewalls rely on application intelligence – the ability to recognise a given application by its network activity and allow or block it according to user type, policy etc. Palo Alto Networks built up its own application intelligence, but there were other databases, such as that of FaceTime (a vendor that found itself in a name dispute with Apple), which was acquired by Check Point as it upgraded its firewalls. Other vendors, including Cisco’s Sourcefire, Fortinet and Dell’s SonicWALL, have followed suit.

 

The rise of shadow IT

So with URLs and web applications under control, is the web a safer place? Well yes, but the job is never done. A whole new problem has emerged in recent years with the increasing ability of users to upload content to the web. The problem has become acute as users increasingly provision cloud services over the web for themselves (so-called shadow IT). How do you know which services are OK to use? How do you even know which ones are in use? Again this is down to intelligence gathering, a task embarked on by Skyhigh Networks in 2012.

 

Skyhigh defines a cloud service as anything that has the potential to “exfiltrate data”; so this would include Dropbox and Facebook, but not the web sites of organisations such as CNN and the BBC. Skyhigh provides protection for businesses, blocking their users from accessing certain cloud services based on its own classification (good, medium, bad) and providing a “Cloud Trust” mark (similar to what Symantec does for websites in general). As with URL filtering and next generation firewalls, this is just information; rules about usage still need to be applied. Indeed, Skyhigh can provide scripts to be applied to firewalls to enforce rules around the use of cloud services.
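
To illustrate the general idea of turning a service classification into enforcement rules, here is a hypothetical example. The domains, ratings and rule text are invented and this is not Skyhigh’s actual script format; it simply shows how a classification feed becomes something a firewall administrator can apply.

```python
# Hypothetical illustration: turn a cloud-service classification into
# block rules for a firewall. Domains, ratings and rule text are invented;
# this is not Skyhigh's script format.
classification = {
    "dropbox.com":           "medium",
    "unknown-share.example": "bad",
    "cnn.com":               "good",
}

def rules_for(blocklist=("bad",), warnlist=("medium",)):
    rules = []
    for domain, rating in sorted(classification.items()):
        if rating in blocklist:
            rules.append(f"BLOCK https traffic to {domain}")
        elif rating in warnlist:
            rules.append(f"# review: {domain} rated {rating}")
    return rules

print("\n".join(rules_for()))
```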

 

However, Skyhigh cites other interesting use cases. Many cloud services are of increasing importance to businesses; LinkedIn is used to manage sales contacts, while Dropbox, Box and many other sites are used to keep backups of documents created by users on the move. Skyhigh gives businesses insight into their use, enables them to impose standards and, where subscriptions are involved, allows usage to be aggregated into single discounted contracts rather than being paid for via expenses (which is often a cost control problem with shadow IT). It also provides enterprise risk scores for a given business based on its overall use of cloud services.

 

Beyond this, Skyhigh can assert controls over those users working beyond the corporate firewall, often on their own devices. For certain cloud services to which access is provided by the business (think salesforce.com, ServiceNow, SuccessFactors etc.), and without the need for an agent, usage is forced back via Skyhigh’s reverse proxy so that it can be monitored and controls enforced. Skyhigh can also recognise anomalous behaviour with regard to cloud services and thus provide an additional layer of security against malware and malicious activity.

 

Skyhigh is the first to point out that it is not an alternative to web filtering and next generation firewalls but complementary to them. Skyhigh, which mostly provides its service on demand, is already starting to co-operate with existing vendors to enhance their own products and services through partnerships. So your organisation may be able to benefit from its capabilities via an incremental upgrade from an existing supplier rather than a whole new engagement. So, that is web security 3.0; the trick is to work out what’s next – roll on Web 4.0!

 



April 7, 2014  3:52 PM

Two areas where businesses can learn from IT

Bob Tarzey
ITSM

Many IT industry commentators (not least Quocirca) constantly hassle IT managers to align their activities more closely with those of the businesses they serve – to make sure actual requirements are being met. However, that does not mean that lines of business can stand aloof from IT and learn nothing from the way their IT departments manage their own increasingly complex activities. Two recent examples Quocirca has come across demonstrate this.

 

Everyone needs version control

First, take the tricky problem of software code version control. Many outside of IT will be familiar with the problem, at least at a high level, through the writing and review of documents. For many this is a manual process carried out at the document-name level: V1, V1.1, V1.1A, V2.01x etc. Content management (CM) systems, such as EMC’s Documentum and Microsoft’s SharePoint, can improve things a lot, automating versioning, providing check-in and checkout etc. (but they can be expensive to implement across the business).

 

With software development the problem is a whole lot worse: the granularity of controls needs to be down to individual lines of code, and there are multiple types of entities involved – the code itself, usually multiple files linked together by build scripts (another document type); the binary files that are actually deployed in test and then live environments; documentation (user guides etc.); third-party/open source code that is included; and so on. As a result, the version control systems that have been developed over the years to support software development – from vendors such as Serena and IBM Rational, as well as a number of open source systems – are very sophisticated.

 

In fairly technical companies, where software development is a core activity, the capability of these systems is so useful that it has spread well beyond the software developers themselves. Perforce Software, another well-known name in software version control, estimates that 68% of its customers are storing non-software assets in its version control system. Its customers include some impressive names with lots of users, for example salesforce.com, NYSE, Netflix and Samsung.

 

To capitalise on this increasing tendency of its customers to store non-IT assets, Perforce has re-badged its system as Perforce Commons and made it available as an online service as well as for on-premise deployment. All the functionality developed can be used for the management of a whole range of other business assets. With the latest release this now includes merging Microsoft PowerPoint and Word documents and checking for differences between various versions of the same document. Commons also keeps a full audit trail of document changes, which is important for compliance in many document-based workflows.
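
Checking for differences between two versions of a document is, at heart, the same operation developers run on source files. Here is a sketch using Python’s standard difflib on two plain-text versions; real document management tools handle binary Word and PowerPoint formats, which this does not attempt.

```python
# Sketch of document version comparison using Python's difflib. Real document
# management tools diff binary Word/PowerPoint formats; this only handles the
# plain-text case to show the underlying idea.
import difflib

v1 = """Quocirca advises IT managers to align with the business.
Budgets remain flat this year.
""".splitlines(keepends=True)

v2 = """Quocirca advises IT managers to align with the business.
Budgets grow modestly this year.
A new section on version control has been added.
""".splitlines(keepends=True)

diff = difflib.unified_diff(v1, v2, fromfile="proposal_v1.txt",
                            tofile="proposal_v2.txt")
print("".join(diff))
```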

 

Turning up the Heat in ITSM

The second area where Quocirca has seen IT management tools being used beyond the IT department is IT service management (ITSM). FrontRange’s Heat tool is traditionally used for handling support incidents raised by users and relating to IT assets (PCs, smartphones, software tools etc.). However, increasingly its use is being extended beyond IT to other departments, for example to manage incidents relating to customer service calls, human resources (HR) issues, facilities management (FM) and finance department requests. Heat is also available as an on-demand service as well as an on-premise tool, and in many cases deployments are a hybrid of the two.

 

Of course, there are specialist tools for CM, HR, FM and so on, specially designed for the job with loads of functionality. However, with budgets and resources stretched, IT departments that already use tools such as Perforce version management and Heat ITSM can quickly add value to whole new areas of the business at little extra cost. Others that are not already customers may be able to kill several birds with one stone as they seek to show the business that IT can deliver beyond its own interests with little incremental cost.

 


