Things change, but recent advances in technology coupled with social shifts are altering the work/life balance, and not in the way that was once expected. Shorter days and more leisure time were the twentieth century dream for the twenty-first century world of work, but the reality is somewhat different.
At one time, information and communications technology (ICT) for the working environment was accessible only to a select few, controlled by central diktat and superior to anything you were likely to see at home. Now the complete opposite is true: consumerised IT not only extends the working day into individuals’ personal lives, but also gives them choices, allowing them to bring their own devices (BYOD) and activities – especially social communications – into the main hours of the working day.
While this blurring may not be an issue provided employees do not let personal activity become a detriment to their work, it does create other challenges.
One in particular is related to another change, but this time instigated by the organisation. There is an increasing need to open up business applications to communicate and share information with users outside of the organisation. This includes outside the physical boundaries and the need to share with employees on the move or working from home, but also outside the corporate boundaries to contractors, third party suppliers, business customers and even consumers. The reasons for this are to improve relationships with customers, transact directly with them and to more tightly integrate the supply chain.
Organisations are themselves also increasingly using social media to do this as they feel that it will make it easier to identify, communicate with and retain customers.
The problem then is how and what to share, and will it be safe?
Until recently, the main methods of sharing information remotely with anyone external were physical media – CD, memory stick, etc – especially for large volumes of data, or, more often for smaller volumes, email. Most organisations are relatively confident they can secure email sharing, and there are certainly many tools to support this and minimise data leakage.
Physical media are trickier, and the growing prevalence of mobile devices increases the risk further. Data might leave by direct connection, such as a USB memory stick (‘podslurping’ was the term coined for downloading gigabytes to a connected iPod), or over the air through a cellular or Wi-Fi connection.
The risks this brings through the potential loss or theft of a device are well known and understood, with mobile device management (MDM) protections often put in place to lock or wipe devices and sometimes, though not frequently enough, to encrypt data on the device. Some organisations avoid data residing on the device at all through virtual connections that leave no permanent data footprint.
However, a greater risk comes from user behaviours related to the increasing use of social media – posting or sharing something ‘out there’ on the internet. This might be as an update to ‘friends’ via a social media site or a dedicated cloud storage provider.
Either way it is potentially out of sight from an enterprise perspective, as employees will be using their own preferred tools to create a Bring Your Own Cloud or Collaboration (BYOC) experience. If this casual and informal usage translates into how official or formal information is shared with third party businesses and consumers, the organisation is not in control, making the demonstration of compliance virtually impossible and increasing security risks.
It might be that enterprise IT has its own set of endorsed tools for information sharing via cloud based services, but the blurring of boundaries in employee behaviour may make the use of these difficult to enforce, especially if employees have been allowed or even encouraged to BYOD in an uncontrolled manner. One way or another, lax behaviour may need to be reined in, monitored or checked.
Technology vendors and industry pundits take great delight in announcing that “this time it’s different!”. There are paradigm shifts, unstoppable trends, ground-breaking changes and disruptive innovations.
Mobile technologies are no exception, yet a short look back in time tells us that things are not always as revolutionary as first perceived. For a while, mobile email was something special. There were dozens of software vendors, although not typically the major email players, offering email on the move. Then there was the BlackBerry – the must-have email gadget for former-Yuppie executives looking to replace their Filofaxes. In fact, mobile email itself was seen as so special that senior folk demanded exceptions be made to security policies – and that only they should have it.
Now the edge has worn off, it turns out that email is just email, but you can also access it on the move. BlackBerry has lost some of its shine and the need for dedicated mobile email software vendors has evaporated. There are certain things that make mobile email more complicated – such as being careful how much is downloaded to keep data costs down and watching out for the risk of loss or theft if private attachments are on the mobile device – but these are management challenges, not reasons to say that mobile email is radically different.
The broader needs of complete mobile working also seem to be following similar lines.
What started out as a special tool for certain roles and only with certain devices has exploded into a consumer-led boom of a huge diversity of smartphones and tablets. These devices might be operated differently with touchscreens instead of keyboards and connect over public wireless rather than private fixed networks, but they are essentially doing the same job – allowing their users to communicate and interact with data.
Extra risks occur because of the use of open and public networks, a greater variety of devices and, increasingly, employees wanting to bring their own devices (BYOD) and use them for work. These things are not necessarily unique to mobile devices – some businesses will have had employees connecting in from domestic desktop computers over the last couple of decades – but the consumer mindset towards IT has gathered most of its momentum from mobile devices.
The risks this varied mobile usage brings do need managing, but it is not enough to think it is simply about mobile device management (MDM). What actually needs protecting is twofold: the sensitive assets that belong to the employer, and the employees’ ability to get their work done efficiently without incurring considerable extra costs.
There are several areas beyond the devices themselves that could do with further attention.
First to consider is applications. How will these be deployed, installed and correctly configured now that the concept of a standard corporate build on a standard corporate device is out of the window? It needs to be done in a simple, flexible, self-service manner, delivered over the air with enforcement to ensure critical apps are installed, and unapproved ones are not, or are at least contained. Application versions and configurations need to be managed over the complete usage lifecycle and secured for access control and data leakage prevention. The whole thing needs wrapping with tracking and monitoring of performance, usage and compliance.
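The enforcement step described above can be pictured as a simple compliance check: compare the apps a device reports against a required list and a prohibited list. The app names and the reporting mechanism below are assumptions invented for the sketch; a real MDM/MAM product would supply both.

```python
# Hypothetical app-compliance check, illustrating the enforcement idea:
# critical apps must be installed, unapproved ones must not be.
REQUIRED_APPS = {"corp-email", "vpn-client"}    # assumed critical apps
PROHIBITED_APPS = {"file-sharer-x"}             # assumed unapproved app

def compliance_report(installed: set) -> dict:
    """Summarise a device's state against the required/prohibited lists."""
    return {
        "missing": sorted(REQUIRED_APPS - installed),
        "prohibited": sorted(PROHIBITED_APPS & installed),
        "compliant": (REQUIRED_APPS <= installed
                      and not (PROHIBITED_APPS & installed)),
    }

print(compliance_report({"corp-email", "vpn-client", "game"}))
# {'missing': [], 'prohibited': [], 'compliant': True}
print(compliance_report({"corp-email", "file-sharer-x"}))
# {'missing': ['vpn-client'], 'prohibited': ['file-sharer-x'], 'compliant': False}
```

In practice such a check would run over the air on each policy refresh, with the “monitoring and tracking” layer aggregating the reports across the device estate.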
The next area that most companies consider is data. The knee-jerk reaction of the most paranoid security manager will be to lock everything down and encrypt everything. Most users will rebel against this at some level if it makes work too complex or difficult, especially if the data is on their own BYOD phone or tablet. An organisation – and this is the line of business’s responsibility, not IT’s – has to determine the value and risk of data in order to decide how much security to apply. Access controls based on users, roles and the capabilities or risks of classes of device might be applied; some data may be ‘geo-fenced’ to ensure it can only be accessed in certain locations; other data may be accessible only from a cloud service, never residing on the device. The important thing is to ensure that the right controls can be exerted on data of known value or risk, without removing the flexibility that mobile brings – otherwise employees will work around the issue, bringing potentially greater risks.
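Those layered controls amount to a policy decision taken per request. A minimal sketch, with every classification, role, device class and geo-fence rule invented for illustration (no real product’s policy model is implied):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str        # e.g. "finance", "sales" (assumed roles)
    device_class: str     # "managed" or "byod"
    location: str         # e.g. "UK-office", "public-wifi"

def may_access(data_classification: str, req: AccessRequest) -> bool:
    """Decide access from data value/risk plus user, device and location."""
    if data_classification == "public":
        return True
    if data_classification == "internal":
        # Internal data: any role, but not unmanaged devices on public networks.
        return not (req.device_class == "byod"
                    and req.location == "public-wifi")
    if data_classification == "restricted":
        # Restricted data: geo-fenced to offices, managed devices, one role.
        return (req.device_class == "managed"
                and req.location.endswith("-office")
                and req.user_role == "finance")
    return False  # unknown classification: deny by default

print(may_access("restricted", AccessRequest("finance", "managed", "UK-office")))  # True
print(may_access("restricted", AccessRequest("finance", "byod", "UK-office")))     # False
```

The point of the sketch is the shape of the decision, not the specific rules: the more valuable or risky the data, the more of the request’s context is taken into account before access is granted.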
Beyond protecting those tangible digital assets, the next question is what are employees doing? For managing the mobile enterprise, this breaks into two areas of interest – behaviour and expenses. These areas might often be related and both are greatly challenged by the move to BYOD. However, the relationship between employers and employees with communications technologies – desk phones, internet access, etc – has always been one of trust and consequences. And if that seems to be failing, monitor what employees are doing and block things that are not allowed. Little changes.
Altogether, effective IT management requires an enterprise to consider all aspects – devices, applications, data and users – and apply suitable controls based on the risks. These might be elevated by mobile, but should be assessed based on value and risk to the business.
While all sorts of powerful tools can be readily deployed, it should always be remembered that their goal is to automate the hopefully sensible procedures and policies that an organisation has put in place to support its strategy. This is still true of mobile, just as it is with other technologies. Disruptive? Yes, but ultimately not that different to other innovations in that its implementation needs to fit with the business.
Sellers of computer security products and services sometimes fret that their messaging is too scary as they go on about risk, data loss and regulatory fines. To get around this, every so often they like to remind potential buyers that their wares are also business enablers. The case is easier to make in some areas than others; one such area is identity and access management (IAM).
In the old days (pre-business use of the internet) IAM was mainly about providing identities to employees (and the odd contractor) to give them access to various in-house applications. This was generally from PCs and dumb terminals situated on premise and owned by the business; all was restricted to private networks. How things have changed.
A recent Quocirca report, Digital identities and the open business, shows that the majority of European organisations now open up their applications to external users – business customers, consumers or both. This is done entirely for positive business reasons, the top drivers being direct transactions with customers, improved customer experience, smoother supply chains and revenue growth.
However, this requires a level of IAM to be put in place that enables the quick capture and on-going authentication of identities. One of the challenges this throws up is the need for federated identity management.
Organisations that only need to worry about their own employees can put in place a single directory for centralised storage and rely solely on this to underpin IAM requirements. Microsoft Active Directory is by far the most common “internal directory”. However, when it comes to users from external organisations, a whole range of other identity sources come into play.
For users from business customers and partner organisations, it will often be the target organisation’s own directory (so may be another instance of Active Directory). However, identities may also be sourced from the membership lists of professional bodies (e.g. legal and accounting associations), government databases and social media sites.
When it comes to dealing with consumers, social media tops the list as a source of identity. Many of us will already be familiar with the option of using our Facebook identities to log in to sites like Spotify or JustGiving. Wherever an identity is sourced from, it is clear that for external users there is a growing concept of BYOID (bring-your-own-identity).
Some may frown at this and wonder how secure it can all be. The answer to that is down to the IAM system in place. This is where the different sources of identity are federated and policies about who can access what are enforced.
Banks would clearly be taking a great risk by allowing a user to move large sums of cash around based on a Google identity, but it may be good enough to answer an enquiry about opening a new account and capture some basic details to kick the relationship off. If things go further, the expense of creating a more secure identity and means of authentication can be incurred and the details updated in the IAM system.
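That tiered-trust idea reduces to a lookup: the assurance level of the identity source caps what the user may do. The sources, levels and operations below are assumptions made for the sketch, not any real IAM product’s model.

```python
# Hypothetical mapping of identity sources to assurance levels. A social
# identity is easy to capture but weakly vetted; a bank-issued identity
# with strong authentication sits at the top.
ASSURANCE = {
    "social": 1,       # e.g. a Google or Facebook identity (BYOID)
    "corporate": 2,    # federated from a partner organisation's directory
    "bank-issued": 3,  # identity and credentials issued in-house
}

# Minimum assurance level each operation demands (invented for illustration).
REQUIRED_LEVEL = {
    "open-account-enquiry": 1,
    "view-balance": 2,
    "transfer-funds": 3,
}

def allowed(identity_source: str, operation: str) -> bool:
    """Permit an operation only if the identity's assurance is high enough."""
    # Unknown sources get level 0; unknown operations require an
    # unattainable level, so both default to deny.
    return ASSURANCE.get(identity_source, 0) >= REQUIRED_LEVEL.get(operation, 99)

print(allowed("social", "open-account-enquiry"))  # True
print(allowed("social", "transfer-funds"))        # False
```

Upgrading a relationship – the bank issuing stronger credentials after the initial social-identity enquiry – is then just a change of `identity_source` in the IAM system.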
Quocirca’s report shows that when IT and IT security managers think about IAM they still think primarily in terms of achieving certain security goals. However, its use for achieving business goals is creeping up the list of priorities. Furthermore, in the past IAM may have been seen as affordable only by large enterprises. However, it is now widely available as an on-demand service (IAM as a service/IAMaaS) and open to businesses of all sizes.
The majority of respondents to Quocirca’s survey report that their business managers are taking an interest in IAM. This is not for security reasons but for its power as a business enabler. Now that’s not too scary – is it?
Quocirca’s report Digital identities and the open business is freely available to download here: https://www.ca.com/us/register/forms/collateral/quocirca-european-research-digital-identities-and-the-open-business.aspx
Facebook, Twitter, Apple and Microsoft: all icons of the information technology industry, and all the focus of targeted attacks in February 2013. The bad news for us all is that even those that should be some of the most tech-savvy companies in the world can fall foul of targeted attacks.
Microsoft admitted: “During our investigation, we found a small number of computers, including some in our Mac business unit, [which] were infected by malicious software…” Microsoft appears not to have been seriously impacted, at least if the aim of the attackers was to steal data, as it goes on to say: “We have no evidence of customer data being affected and our investigation is on-going”. The important lesson is that, whilst Microsoft’s defences were penetrated, it was prepared to acknowledge this and to state that its customers’ data remained safe.
The story at Facebook was much the same: malware did get on to its machines, but the company was confident data was not stolen. Reports about the incident at Apple are similar. Twitter admitted to 250,000 user account details being compromised.
All businesses must accept that, if they become a target, it is very hard to stop determined cybercriminals or hacktivists getting malware on to their systems. What is essential is to ensure that such attacks are identified as soon as possible and that it is hard for the perpetrators to extend their attacks within the impacted networks.
A new research report from Quocirca, “The trouble heading for your business” (sponsored by Trend Micro), shows the scale of the problem of targeted attacks across European businesses. The good news is that with all the high profile reporting, awareness is high. This understanding is also due to the fact that most organisations believe they have been a victim of targeted attacks at some point, and about one third say there has been a significant impact of some sort.
The report goes on to show that there is an over-reliance on traditional security technology and not enough use being made of more advanced techniques. Whilst Quocirca cannot be sure how Microsoft, Apple and Facebook are defending themselves, it seems that their security posture is predicated on the assumption that attacks will penetrate their defences, but that timely detection and multiple layers of security mean these attacks can be foiled.
With their high level of interaction with consumers and the need to store personal financial data, Quocirca’s report shows that retailers and financial services organisations are some of the most concerned about the potential impacts of targeted attacks. However, no business can afford to be complacent. With the rise of hacktivism any organisation could unexpectedly become an overnight target.
As another recent Quocirca report, “Digital identities and the open business” (sponsored by CA Technologies), shows, most businesses are driving more and more value from their online interactions, but this comes at a price. Some of the profit from those interactions must be reinvested in security measures that prepare organisations to respond to increasingly sophisticated and well-targeted attacks on their employees, networks, applications and data. Those that do not will face data losses, regulatory fines, damaged competitiveness and, in the worst case, the collapse of their businesses.
The recent announcement at Yahoo! about cutting down working from home and getting employees to come into the office seems to have put a virtual cat among several distributed pigeons. It might be that there are a number of disgruntled and disaffected employees simmering out there in distant cyberspace who are not getting the message about how the business needs to change, but this appears to be a very public way to conduct change management.
One thing is sure, it has brought many opinions, fears and prejudices about work out into the open.
First there is the feeling among many who do not or cannot work from home, that all those who do must be ‘tele-shirking’, i.e. not really working but being subject to a thousand and one distractions – was that the doorbell? I’ll just vacuum the hall and make another cup of coffee.
This feeling also pervades many managers; after all, if you can’t see each and every one of the workers, how do you know if they’re really working or not? This may sound a bit like an old-fashioned shop floor or weaving mill, with an overseer or foreman at one end of a line of workers, literally keeping an eye on them as they work. But a quick glance around most modern offices and business park facilities will show glass-fronted offices for managers and open plan seating areas for ‘the workers’. Plus ça change?
On the flip side there is the understandable fear of remote workers that those in the office get more ‘real’ time and therefore influence with the boss. This might translate into better opportunities for pay rises and promotions for those able to maximise their visibility and more frequently get the ear of their manager.
Surely technology fixed this? After all, those working from home will be connected via the internet right into the heart of the corporate enterprise IT systems, they will most likely have mobile phones and may even have video conferencing, desktop sharing tools and unified communications. They can phone, email, chat, text, video call, collaborate with a whole variety of tools – in or out of the office – as much as they like and with open IP networks pretty cheaply. So much so that one company banned the use of email for internal communications as it seemed like employees were doing it too much.
So why should it really matter where people are?
Past Quocirca research once indicated a fear of loss of organisational culture if people were working too much while mobile or at home, and some commentators think this might be what Yahoo! is trying to address. However, simply bringing a number of individuals who were simmering at home back together is unlikely to stimulate upbeat and innovative water cooler conversations, but more likely a seething cauldron of gripes.
The underlying problem is unlikely to be either one of technology or location, but management. That’s not just the day-to-day operational stuff of goal-setting, nurturing, mentoring, delegating, support, feedback, correction and reward, but also the higher level direction of who we are, why we’re here and what we do.
This does not mean a meaningless buzzword-laden mission statement that people smirk at, but a credible corporate culture that employees can relate to, sign up to or decide is not for them and move out. It can be as simple as “don’t be evil” or as prescriptive as a training program, but either way it has to be consistent, applied from the top of the organisation to the bottom and understood by everyone.
That underpins the relationship with customers, suppliers, partners, peers, subordinates and managers, which then has to be supported by the right operational management tools. This is the crucial bit that makes it all work, or not, and it is one area where the development of management skills has been lacking in recent years – especially people, time and process management. Technology can then play a part in supporting that, but only if people are taught how to use it – not the functional aspects, which they pick up or eventually read from manuals, but how to get the best out of it to perform a specific task.
At one time companies put their staff on courses to develop soft skills, with many of them geared towards a particular technology or communications medium: time management for using their new Filofaxes; responsive communications, e.g. how to answer the phone politely and in under three rings; taking ownership of issues; how to conduct effective meetings (hint: search online for “John Cleese meetings”).
Some may laugh and say this sort of training is no longer relevant to today’s busy workforce, but the inability to control communications overload, collaborate effectively with colleagues, or manage remote and distributed workforces seems a little too widespread. Simply throwing more communications tools at employees, or even allowing them to bring their own, is not the answer on its own, but taking them away is not a step in the right direction either.
Since the original Basel Accord was agreed and signed in 1988, central governments, driven by the EU, have been trying to ensure that financial institutions are managed in such a way as to provide a solid platform for the global economy. Starting with Basel I, increasing levels of central oversight have been put in place to try to maintain a good view of what could be happening within the markets. Through the Capital Requirements Directive (CRD), first instituted in 2007, certain levels of capital are required to be held by the banks and insurance companies so that they are able to weather any economic storms that come the way of the markets.
CRD IV is the latest version, and it nominally came into effect on January 1st, 2013. “Nominally” will be covered later…
At the highest level, the basis for CRD IV is covered under the Basel II and Basel III Accords for the banks and under Solvency II for insurance companies, which increase the amounts of common equity and Tier 1 Capital that the institutions are required to hold. Basel II also covers how the banks will need to provide centralised prudential reporting – and this mandates the use of the eXtensible Business Reporting Language, XBRL.
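To give a sense of what that mandate implies, an XBRL instance document ties numeric facts to a context (the reporting entity and period) and a unit. The sketch below uses only Python’s standard library; the “rep” namespace and the Tier1Capital element are invented for illustration – real CRD IV prudential reporting uses regulator-defined taxonomies such as the EBA’s COREP and FINREP.

```python
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"   # core XBRL instance namespace
REP = "http://example.com/reporting"          # hypothetical taxonomy namespace

ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("rep", REP)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# Context: which entity is reporting, and as of when.
context = ET.SubElement(root, f"{{{XBRLI}}}context", id="c2012")
entity = ET.SubElement(context, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://example.com/id").text = "BANK-0001"
period = ET.SubElement(context, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2012-12-31"

# Unit: the currency the monetary facts are expressed in.
unit = ET.SubElement(root, f"{{{XBRLI}}}unit", id="eur")
ET.SubElement(unit, f"{{{XBRLI}}}measure").text = "iso4217:EUR"

# A single fact, referencing the context and unit above.
fact = ET.SubElement(root, f"{{{REP}}}Tier1Capital",
                     contextRef="c2012", unitRef="eur", decimals="0")
fact.text = "1250000000"

print(ET.tostring(root, encoding="unicode"))
```

The integration challenge respondents worry about is precisely this step: getting each reported figure out of internal systems and into the right taxonomy element, context and unit, reliably and repeatably.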
In October 2012, Quocirca carried out research across the UK, Germany, France, Italy and Spain for EMC to gauge the preparedness of financial institutions for the use of XBRL as well as their understanding of the whole CRD IV process.
The research provided some interesting findings: just under half of respondents felt that adopting XBRL would have a major impact on the business, with 65% saying that integrating existing systems into an XBRL system would be a major concern. Unfortunately, only 25% of respondents had even chosen an XBRL solution for something that was to be mandated as of January 1st (at the time, only three months away), making the notion of the financial markets being ready to meet the implementation date a bit far-fetched.
But, back to the “nominally”. As the financial markets collapsed, the EU went into prevarication mode. There was always a transition period built in to CRD IV and Basel III, but this was meant to be for a move along a maturity model, with everyone essentially staying in step along a defined set of processes. Although the nominal dates for CRD IV and Basel III remained as 1st January, the EU started to move the goalposts, saying that banks must hold more liquid assets and so lower their risk if facing another meltdown.
Country financial bodies, such as the Financial Services Authority (FSA) in the UK, had to move to more of an advisory mode – without agreement from the centre, little in the way of solid process guidance could be provided by them.
So, although few banks and insurance companies were ready for the requirements of CRD IV and Basel III on 1st January, it makes little difference, as the central bodies concerned were still fiddling while the economy burned.
However, this is not an adequate excuse for the financial institutions concerned to be so far away from being able to meet the technical requirements of CRD IV. The need for centralised prudential reporting is still there – and the failure to plan to implement XBRL systems means that these institutions are incapable of meeting this need.
At some stage, the Powers That Be will get their act together and CRD IV will become law with the necessary Directives in place. Financial institutions would do well to ensure that they are implementing the right systems now to meet their reporting needs – without them, they will fall foul of legal requirements, which could cost dear in fines.
Quocirca’s report on the subject can be downloaded for free here.
Quocirca has recently published a free checklist to help those looking at investing in self-service solutions. So, why might it be useful?
Well, there has been a rush in UK retail recently towards customer self-service and automation – pay-at-pump petrol stations, self-checkout tills and so on. The reasons for this are presented as ‘customer convenience’, but it is pretty clear that it is all too often about cutting costs, with too little thought given to how it might affect the overall customer experience.
Specialist retailers will argue they have to do this in order to compete with either online or other higher footfall locations such as supermarkets, hypermarkets and shopping malls. There may be some truth in this, but by simply commoditising the shopping experience, those making knee-jerk decisions to automate customer service run the risk of further business decline.
Clearly something is amiss, as so many major and well-established specialist companies have disappeared and continue to do so, mainly with a wail of “habits have changed” or “it’s all gone online” after they have narrowed stock ranges, made the stores feel like warehouses and trained the staff to be as friendly as a bent nail.
The best (and surviving) retailers – whether online, mobile or physical stores – provide service excellence irrespective of the technology or channel. Automation and self-service have a very important part to play in all these routes to market, but they have to be delivered with the customer in mind, not simply as a cost-cutting exercise.
The first thing to realise is that self-service is not a standalone tool or alternative to existing processes, but has to be integrated into the wider business in order to be successful. It should be viewed as a strategic and well-researched investment, not a simple tactical option. For this reason, the decision-making process of how to implement self-service and what solutions or tools should be implemented has to be well thought out and comprehensive.
To start with, an organisation must identify why the move to self-service is being made in the first place and what the main requirements are. There may be a cost reduction element, but how important are other matters, such as increasing cross-channel co-ordination or improving customer service levels and internal communications? For example, are customers automatically invited to chat if their website interaction indicates they might need help, and can support agents see what customers have done, requested or replied in order to avoid duplication of effort on the customer’s part?
However, this process may reveal underlying issues with poor business systems, such as the lack of a formal handover at shift changes, or problem departments – e.g. a technical group refusing to get involved in customer contact. These will need to be addressed separately from the implementation process, as simply deploying self-service will not fix these internal problems.
Next consider which suppliers will need to be approached and investigated. As well as taking the partisan views of the vendors themselves and some of their ‘tame’ customers, dig deeper and find out the broader market perspectives from a wider mix of customers, perhaps through trade shows and conferences. Industry analyst perceptions may also be valuable, but be aware that some analyst houses may overlook specialist or niche vendors and it is best to take a broad view.
The bulk of any product or service suitability assessment will come down to comparing features and functions, and a checklist will be useful. However, as this is an important investment, it is always important to check the people, company and its current client base of an intended supplier to get the full insight.
It is never easy going through the process alone, and even self-service benefits from some sort of external guidance. So for an idea of how to approach the self-service product and vendor selection process, download the free checklist.
At the end of 2012, Quocirca carried out research for BNP Paribas Leasing Solutions into the perceptions around IT and communications financing amongst UK small and medium businesses (SMBs). For the research, SMBs are defined as organisations with revenues of between £5m and £50m per annum. The results show that there are marked differences in buying habits within these SMBs – and a lack of strategic thinking that could impact their ability to compete in the market.
The research indicates that although the value added reseller is the most used channel for strategic buying of IT and communications equipment, there is also a lot of tactical buying of equipment directly from the web. Although this happens particularly at the smaller end of the market, where the buying decision is mainly down to the owner/manager, it is still seen amongst larger organisations with a dedicated purchasing function in place.
This tends to indicate “reactive” buying, where equipment is sourced as and when required, for example where a piece of equipment breaks or where a new project requires new hardware. However, by buying reactively, the underlying platform can become less strategic – standardisation and homogeneity can be reduced, while asset lifecycles are difficult to monitor and maintain as no real controls are in place.
It also militates against the way that modern IT is going – virtualisation and cloud computing work best where there is a more standardised and lifecycle managed set of equipment underpinning them.
However, for an SMB, putting in place this sort of rigour may be difficult. Consider an organisation with a total IT budget of, say, £500,000 per year – somewhere in the middle of the range of SMBs covered in the research. According to standard metrics, between 60% and 70% of this will be spent on maintaining the existing platform – what is known as “keeping the lights on”. This leaves, at the low end, £150,000 for new IT investments.
This is not a lot when it comes to trying to implement a new technology platform – and many SMBs find themselves in the position of wanting to carry out more strategic projects, but cannot as the required money is not within their grasp.
However, the use of structured financing could help SMBs make far more of their available money by aggregating planned spend over three years into a single pool of resource that can be used as needed. Taking the same example as above, that £500,000 IT budget could be aggregated over a three year agreement to give £1,500,000 – and through a suitable finance agreement, all that money can be made available as of day one to the SMB for use against IT spend.
Obviously, the SMB will still need to plan for keeping the lights on over the three year period. However, it should be able to put in place better processes around purchasing ICT equipment; it may be able to negotiate better deals on pricing; and a more standardised, modern platform should lead to savings in managing the platform and in its energy usage.
Assuming that changes to how ICT is purchased and managed drive the keeping-the-lights-on costs down to 60%, £600,000 of the pooled budget is now available for ICT project investment – an increase that could make all the difference between an SMB that struggles along, reacting to ICT events, and an SMB that is optimally supported by its ICT platform and better placed to compete in today’s market conditions.
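The arithmetic behind this example can be sketched as follows. This is only an illustration of the reasoning above – the budget figures and percentages are the hypothetical ones used in the example, not research data:

```python
# Illustrative budget arithmetic: pool the planned annual spend over the
# agreement term, then subtract the share reserved for "keeping the
# lights on". All figures are the hypothetical ones from the example.

def available_for_projects(annual_budget, years, lights_on_pct):
    """Return the pooled budget left for new ICT projects, given the
    percentage consumed by maintaining the existing platform."""
    pooled = annual_budget * years
    return pooled * (100 - lights_on_pct) // 100

ANNUAL_BUDGET = 500_000  # £ per year
YEARS = 3                # term of the finance agreement

# Reactive buying: 70% of budget goes on keeping the lights on
print(available_for_projects(ANNUAL_BUDGET, YEARS, 70))  # 450000

# Better purchasing processes drive lights-on down to 60%
print(available_for_projects(ANNUAL_BUDGET, YEARS, 60))  # 600000
```

Pooling does not create new money – the gain comes from the lights-on percentage falling, and from all of the project funds being available from day one rather than drip-fed annually.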
ICT financing can make a massive difference to organisations looking to gain better control over future spend and over their ICT platforms. The key is to make sure that the partner chosen to provide the financing agreement has a track record in this kind of work – banks will often require a legal financial hold against business assets, which could include the business premises, whereas a good ICT finance organisation will only take a hold against the equipment purchased through the agreement.
Quocirca has written a report on the subject that is freely downloadable here.
Toward the end of 2012, Quocirca met with an interesting company called DataSift. DataSift is a social data platform company – it takes feeds of data from the majority of social media sites and can then mine through social conversations for content, trends and insights. This is of obvious interest for organisations that are tracking sentiment of their brand in the market – but may also have other uses as well.
The one obvious target for DataSift is Twitter – the vast majority of Twitter data is available in the public domain (only direct messages (DMs) are hidden from general view). However, DataSift can also track activity around an organisation’s Facebook page, content from blogs and forums – including other semi-private information the organisation accesses via social networks established between itself and the public.
The platform is cloud-based, with prices based on a combination of query “complexity”, an hourly cost and a data cost. The hourly cost is the simplest to explain: the price is based on the period being analysed – for a week, this would be 168 hours; for a month, (nominally) 720 hours. Complexity is more difficult, being based on a calculation that can only be completed once the query has been created. However, the business model does mean that you only pay for what you use: there are no ongoing subscriptions that have to be paid no matter what – everything is on a per-use basis. The data cost is a small charge per Tweet analysed. For statistical validity, DataSift recommends a 10% sample rate, which lowers the price significantly.
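The shape of that pricing model can be sketched roughly. The unit rates below (`rate_per_hour_unit`, `cost_per_tweet`) are invented for illustration – the piece does not give DataSift's actual rates – but the structure (complexity × hours, plus a per-Tweet data charge reduced by sampling) follows the description above:

```python
# Rough sketch of the pricing structure described above. The unit rates
# are hypothetical placeholders, NOT DataSift's real prices.

def estimated_cost(complexity, hours, tweets, sample_rate=1.0,
                   rate_per_hour_unit=0.10, cost_per_tweet=0.0001):
    """Processing charge scales with query complexity and the hours of
    activity covered; the data charge is a small fee per Tweet analysed.
    Sampling (e.g. 10%) cuts the number of Tweets, and so the data cost."""
    processing = complexity * hours * rate_per_hour_unit
    data = tweets * sample_rate * cost_per_tweet
    return processing + data

# A month-long query (~720 hours) at complexity 2.1 over 2.23m Tweets,
# with the recommended 10% sample rate applied:
print(round(estimated_cost(2.1, 720, 2_230_000, sample_rate=0.10), 2))
```

The point of the structure is visible even with made-up rates: the 10% sample divides the data component by ten while leaving the processing component untouched, which is why sampling "lowers the price significantly" on high-volume queries.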
As a test, Quocirca asked DataSift to run a Twitter-only analysis of 2012 Twitter activity for a named set of vendors who are often mentioned in the same breath as big data. The query required just 10 lines of code to be written, and gave a complexity score of 2.1. Without the 10% filter in place, 2.23 million Tweets were analysed.
We selected an interesting topic as the basis for our test and Quocirca will be writing a more detailed piece on the findings, but the highlights below illustrate the potential power of the system:
- Twitter activity around big data grew by 64% over the year. This is not surprising – big data was still an emerging topic back at the beginning of the year, but was being pushed harder and harder by the vendors and the media as the year progressed.
- Nearly three quarters of Tweets contained an active link. People were not just dropping Twitter comments about big data – they were referring people to other content outside of Twitter.
- Apache had the biggest footprint with 9.4% of vendor mentions in Tweets being about it. Apache, with its Hadoop parallel processing engine and Cassandra database, is unsurprisingly the big player here.
- Second placed was 10gen, the commercial entity that looks after MongoDB, with 6.24% of vendor mentions.
- Of the “big guys”, IBM gained a creditable third place with 3.25%, with HP in fourth with 2.38%.
- There were geographic differences – IBM’s strongest country was France; Cloudera’s was Japan. SAP was (unsurprisingly) strong in Germany; DataSift itself was very strong in the UK.
- At a domain level – the sites that people were pointing to most from their Tweets – Forbes.com was a surprise winner. Behind that, GigaOM.com and Techcrunch.com were the next biggest content sources.
As a single point of interest, a look was taken at HP at a sentiment analysis level. Through the first part of the year, people’s views of HP remained fairly level, with a net sentiment score (positive comments minus negative comments) of 0 – not good news in itself, but it could have been worse. However, between 14th November and 10th December, a lot of sentiment activity took place.
On the 21st November, HP’s sentiment score plunged close to -10,000. It recovered back to zero by the 24th, and then went back down to -5,000 on the 28th, rose again and then crashed down to -7,000 on the 1st December.
Why? On November 20th, HP’s CEO Meg Whitman told Wall Street analysts that HP had massively overpaid for software firm, Autonomy, and accused former executives at Autonomy of cooking the books. Financial and technical analysts went into a frenzy – the very people who use social networking the most to get information out as quickly as possible. The ongoing fall-out was what caused the triple-dip poor sentiment scores over the following weeks.
This shows that, although HP took fourth place in mentions around big data, the exposure was not necessarily positive for HP’s brand. This is why a company such as DataSift is important – it not only removes the grunt work of analysing the massive firehose of data that comes from social networks, but also applies solid analytics to ensure that the results a customer sees are presented in context.
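The net sentiment score used in the HP example is simple to reproduce once each mention has been labelled. A minimal sketch, with invented sample data (the hard part – classifying each mention as positive, negative or neutral at firehose scale – is exactly what a platform like DataSift handles):

```python
from collections import defaultdict

# Net sentiment score as described above: positive comments minus
# negative comments, tallied per day. Sample data is invented.

def net_sentiment(labelled_mentions):
    """labelled_mentions: iterable of (date, label) pairs, where label
    is "positive", "negative" or "neutral". Returns {date: net score}."""
    scores = defaultdict(int)
    for date, label in labelled_mentions:
        # positive adds 1, negative subtracts 1, neutral adds 0
        scores[date] += {"positive": 1, "negative": -1}.get(label, 0)
    return dict(scores)

mentions = [
    ("2012-11-21", "negative"), ("2012-11-21", "negative"),
    ("2012-11-21", "positive"), ("2012-11-24", "positive"),
    ("2012-11-24", "negative"),
]
print(net_sentiment(mentions))  # {'2012-11-21': -1, '2012-11-24': 0}
```

A flat score of zero, as HP had for most of the year, can therefore mean either silence or equal volumes of praise and criticism – which is why the volume figures matter alongside the net score.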
The managers of any successful business must keep a constant focus on productivity. Well implemented IT helps to achieve this, for example through automating manufacturing processes, improving supply chain efficiency or enabling flexible working. The same managers may assume that the IT departments that help deliver these innovations are themselves productive. In many cases they will be wrong.
A recent Quocirca research report – The wastage of human capital in IT operations – shows that many IT teams could improve their productivity dramatically. As much as 40% of a team’s time can be spent on routine low level tasks, for example patching software, dealing with end user device problems or error checking.
IT managers themselves are well aware of the issues and those in mid-market organisations in particular list such wastage of their team’s time as a top frustration. They have a clear understanding of their staff’s skills, but are not able to use them as effectively as they would like. For the individuals involved, work becomes boring and there is general demotivation.
Whilst the wastage should in itself be a major concern, an even bigger one is that this very issue is holding IT departments back from their raison d’être: helping the business as a whole increase its productivity and competitiveness. IT managers admit that if they had 50% more man hours available to them, they would use these to modernise IT infrastructure and deliver new applications.
So what can be done? The truth is that the mundane tasks are not going to go away. IT managers have three options: stick with the status quo and accept the wastage; introduce cheaper, lower skilled labour, probably through outsourcing areas of IT operations management; or introduce more automation.
It is estimated that 80% of IT infrastructure is common to most businesses’ IT operations, so mundane tasks are being repeated by skilled operators on a huge scale. Outsourcing just displaces the problem, when in reality automating these tasks and repeating them across multiple businesses should be straightforward.
The vendors of automation tools are themselves experts at building the procedures that enable repetitive tasks to be carried out time and time again across different organisations’ IT infrastructure. Such tools can recognise exceptions and make an intelligent handover to human operators, be they internal staff members or experts from a third party specialist.
Once the investment in the tools has been made, the incremental cost of each repetition is negligible compared to outsourcing. Such tools enable the industrialisation of IT: the efficient repetition of certain tasks hundreds or thousands of times over without consuming valuable IT staff time.
There are three options for achieving this:
- Capital investment in new tools installed on-premise from the “big” systems management vendors, namely BMC, HP, CA and IBM (some would add Microsoft’s System Center to this list)
- Freeing budget from operational spending to subscribe to on-demand systems management services that support high levels of automation, such as those from IPsoft and ServiceNow
- A hybrid approach with the flexibility to deliver both of the above, which is possible with the IPsoft tools and those of a few other vendors such as Kaseya
The ineffectiveness of many IT operations will spiral out of control if action is not taken to improve the way they are managed. Putting in place the necessary IT management tools, services and procedures to maximise automation and to industrialise processes will address this and reduce skills wastage. The ultimate value will be the ability to efficiently manage the increasing complexity of IT infrastructure, whilst delivering new applications that will ensure a business remains competitive.