Quocirca Insights


August 6, 2014  9:26 AM

It’s in the net: The value of big data

Clive Longbottom Profile: Clive Longbottom
United
Talking with a journalist friend of mine a few weeks back, the conversation turned to how to place an actual hard pounds-and-pence value on data.  It got me thinking – and this is my take on it.
A reasonable example would be the UK Premiership football/soccer league – understood by a large enough number of people to make the analogies useful (hopefully).
Let's just start with a single data point.  Manchester United is a football team.  This is pretty incontrovertible – but it has little value in itself.  We can immediately add other data points to it, such as that it is a Premiership team, its home ground is Old Trafford, its strip is red, and so on.
This starts to build more of a picture – but still has little value.
We can then start to add other data that begins to create possible value.  Over the past 10 years, Manchester United has won the Premiership title 5 times.  It has won the FA Cup once, the League Cup 3 times and the FIFA Club World Cup once.  A pretty good track record, then.
After its long-serving manager, Sir Alex Ferguson, retired in 2013, David Moyes took over for one season – and United did not fare well, as players struggled to come to terms with a new regime.  Moyes was sacked and former Ajax, Bayern Munich, Barcelona and Dutch national team coach Louis van Gaal took over. Van Gaal is looking to make major changes to the team, both through transfers and in the way the players are managed and trained. The results of these transfers will probably be known by the time you read this piece. A firm hand on the tiller could start to steer United back to winning ways.
Forbes estimates that Manchester United’s brand value is around $739m (having fallen from $837m, due to the fall in playing fortunes in the 2013/14 season).  Forbes also estimates the “team” value (based on equity plus debt values) at $2.8b.  This makes it the world’s third most valuable soccer club, behind Real Madrid and Barcelona.  So – deep pockets, and a money making machine.
The club claims to have 659 million fans around the globe, has nearly 3 million followers on Twitter and 54 million likes on Facebook. Wow – lots of eyeballs and merchandising opportunities.
In its first-quarter 2014 financial results, it announced that merchandise and licensing revenues were up by 13.8%, sponsorship revenues up by 62.6% and broadcasting revenues up by 40.9%.  This all led to the quarter's revenues being up by 29.1% overall at £98.5m, with EBITDA up by 36.2%. A bad season doesn't seem to have hit the bottom line overall.
The owners of the club, the American Glazer family of Malcolm and his six children, gained control of the club by borrowing money through payment-in-kind deals via an external company.  However, many of the loans are guaranteed against Manchester United assets.  In 2012, the Glazers sold 10% of the overall shares in the club, followed by a further 5% after Malcolm Glazer's death in May 2014. Opportunities are therefore there to buy into the club through share ownership – and to build up a decent holding if wanted. A leveraged buy-out that is now being sold back to the markets: not so much of a risk now.
Now we are getting somewhere.  We’ve brought together data from all sorts of different environments that starts to build up a more meaningful picture.
As supporters, we have some idea of the new direction: van Gaal has a good track record; he is strict and is likely to come down hard on players who felt they could pay little attention to Moyes with an attitude of "Sir Alex didn't do it that way".  Van Gaal cannot afford to treat 2014/15 as merely a transitional season – he has to prove to all concerned that United is back on track.
For investors, the poor 2013/14 season did have an impact – brand value is down, and playing revenues will be hit as United will not be playing in Europe this season.  However, the supporters have proven loyal, merchandise is still selling well, new sponsors are on board with long-term deals, and the overall books still look strong.
Now – this has just been about Manchester United.  There are 19 other clubs in the Premiership, and the same analysis can be carried out against each one.  Further granularity can be added by analysing at the individual player level; at coaching team level; at commercial team level.  The findings can then be compared and contrasted to give indicators of how the clubs are likely to perform at a sports and a financial level.
This is how big data works – it brings together little bits of unconnected data and creates an overall story that has different values depending on how you look at it.
Does it result in something where you can say “this is worth this much”?  No – but then again, very little in life does allow for such certainty.  As long as sufficient data is pulled together from sufficient sources and is then analysed in the right way, it should be enough to say “this finding will give me a strong chance of greater value”.
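To make the idea concrete, here is a minimal, purely illustrative Python sketch of the aggregation step described above: data points from separate "sources" (sporting record, finances, fanbase) are merged into one picture of the club, and a crude indicator is derived from it. The weightings and the "value signal" are invented for the sketch – this is not a real valuation model, just the pattern of combining unconnected data.

```python
# Illustrative only: pull the article's scattered data points on one club
# into a single record, then derive a crude composite indicator from it.

sporting = {"club": "Manchester United", "titles_10yr": 5, "fa_cups": 1,
            "league_cups": 3, "club_world_cups": 1, "in_europe_2014_15": False}
financial = {"club": "Manchester United", "brand_value_musd": 739,
             "team_value_musd": 2800, "q1_revenue_growth_pct": 29.1}
fanbase = {"club": "Manchester United", "claimed_fans_m": 659,
           "twitter_followers_m": 2.9, "facebook_likes_m": 54}

def combine(*sources: dict) -> dict:
    """Merge data points from separate sources into one picture of the club."""
    picture = {}
    for source in sources:
        picture.update(source)
    return picture

def value_signal(picture: dict) -> str:
    """A deliberately crude indicator: strong finances can offset a poor season."""
    score = picture["q1_revenue_growth_pct"] + picture["titles_10yr"] * 2
    score -= 0 if picture["in_europe_2014_15"] else 10
    return "stronger" if score > 20 else "weaker"

club = combine(sporting, financial, fanbase)
print(f"{club['club']}: outlook looks {value_signal(club)}")
```

Run the same merge for every club in the league and the comparisons described below start to fall out of the data.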
The worst thing that you can say to a United supporter is that football is "only a game".  Bill Shankly, a former manager of United's arch-rivals, Liverpool FC, once said: "Some people believe football is a matter of life and death, I am very disappointed with that attitude. I can assure you it is much, much more important than that."
Football ceased to be just a game many years ago – it is now a major commercial business, where getting anything wrong can have major long-term impact on earning capabilities, and therefore club survival.  Shankly – speaking well before the use of big data analytics – may well have been right.

July 29, 2014  8:24 AM

The security and visibility of critical national infrastructure: ViaSat’s mega-SIEM

Bob Tarzey Profile: Bob Tarzey
LogRhythm, McAfee, SIEM, Splunk

There has been plenty of talk about the threat of cyber-attacks on critical national infrastructure (CNI). So what’s the risk, what’s involved in protecting CNI and why, to date, do attacks seem to have been limited?


CNI is the utility infrastructure that we all rely on day-to-day: national networks such as electricity grids, water supply systems and rail tracks. Some have an international aspect too; for example, gas pipelines are often fed by cross-border suppliers. In the past such infrastructure has often been owned by governments, but much has now been privatised.


Some CNI has never been in government hands: mobile phone and broadband networks have largely emerged since the telco monopolies were scrapped in the 1980s. The supply chains of major supermarkets have always been a private matter, but they are very reliant on road networks, an area of CNI still largely in government hands.


The working fabric of CNIs is always a network of some sort – pipes, copper wires, supply chains, rails, roads – and keeping it all running requires network communications. Before the widespread use of the internet this was achieved through proprietary, dedicated and largely isolated networks. Many of these are still in place. However, the problem is that they have increasingly become linked to and/or enriched by internet communications. This makes CNIs part of the nebulous thing we call cyber-space, which is predicted to grow further and faster with the rise of the internet-of-things (IoT).


Who would want to attack CNI? Perhaps terrorists; however, some point out that it is not really their modus operandi, regional power cuts being less spectacular than flying planes into buildings. CNI could also become a target in nation-state conflicts, perhaps through a surreptitious attack where there is no kinetic engagement (a euphemism for direct military conflict). Some say this is already happening – for example, the Stuxnet malware that targeted Iranian nuclear facilities.


Then there is cybercrime. Poorly protected CNI devices may be used to gain entry to computer networks with more value to criminals. In some cases devices could be recruited to botnets; again, this is already thought to have happened with IoT devices. Others may be direct targets, for example tampering with electricity meters or stealing data from point-of-sale (PoS) devices that are the ultimate front end of many retail supply chains.


Who is ultimately responsible for CNI security? Should it be governments? After all, many of us own the homes we live in, but we expect government to run defence forces to protect our property from foreign invaders. Government also passes down security legislation, for example at airports, and other mandates are emerging with regard to CNI. However, at the end of the day it is in the interests of CNI providers to protect their own networks, for commercial reasons as well as in the interests of security. So, what can be done?


Securing CNI

One answer is, of course, CNI network isolation. However, this is simply not practical: laying private communications networks is expensive, and innovations like smart metering are only practical because existing communications technology standards and networks can be used. Of course, better security can be built into CNIs in the first place, but this will take time – many have essential components that were installed decades ago.


A starting point would be better visibility of the overall network in the first place, and the ability to collect inputs from devices and record events occurring across CNI networks.  If this sounds like a kind of SIEM (security information and event management) system, along the lines of those provided for IT networks by LogRhythm, HP, McAfee, IBM and others, that is because it is: a mega-SIEM for the huge scale of CNI networks. This is the vision behind ViaSat's Critical Infrastructure Protection. ViaSat is now extending sales of the service from the USA to Europe.

The service involves installing monitors and sensors across CNI networks, setting baselines for known normal operations and looking for the absence of the usual and the presence of the unusual. ViaSat can manage the service for its customers out of its own security operations centre (SOC) or provide customers with their own management tools.  Sensors are interconnected across an encrypted IP fabric, which allows for secure transmission of results and commands to and from the SOC. Where possible the CNI's own fabric is used for communications, but if necessary this can be supplemented with internet communications; in other words, the internet can be recruited to help protect CNI as well as attack it.
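ViaSat has not published its implementation details, but the "absence of the usual, presence of the unusual" idea can be sketched very simply: learn a per-sensor baseline from known-normal telemetry, then flag readings that sit far outside it and sensors that stop reporting altogether. The sensor names and thresholds below are invented for illustration.

```python
# A minimal sketch of baselining sensor telemetry - not ViaSat's actual system.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Learn a (mean, standard deviation) per sensor from known-normal history."""
    return {sensor: (mean(vals), stdev(vals)) for sensor, vals in history.items()}

def check(baseline, readings: dict[str, float], sigma: float = 3.0) -> list[str]:
    alerts = []
    for sensor, (mu, sd) in baseline.items():
        if sensor not in readings:
            # "absence of the usual": an expected sensor has gone quiet
            alerts.append(f"{sensor}: no reading received")
        elif abs(readings[sensor] - mu) > sigma * sd:
            # "presence of the unusual": a reading well outside the baseline
            alerts.append(f"{sensor}: reading {readings[sensor]} outside baseline")
    return alerts

history = {"substation-7/load": [48, 50, 51, 49, 50], "pump-3/flow": [12, 11, 12, 13, 12]}
baseline = build_baseline(history)
print(check(baseline, {"substation-7/load": 83.0}))  # pump-3 is silent, load is spiking
```

In a real deployment the checks would of course run continuously in the SOC, against far richer event streams than simple numeric readings.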

Having better visibility of any network not only helps improve security, but enables other improvements to be made through better operational intelligence. ViaSat says it is already doing this for its customers. The story sounds similar to one told in a recent Quocirca research report, Masters of Machines, which was sponsored by Splunk. Splunk's background is SIEM and IT operational intelligence, which, as the report shows, is increasingly being used to provide better commercial insight into IT-driven business processes.

As it happens, ViaSat already uses Splunk as a component of its SOC architecture. However, Splunk has ambitions in the CNI space too; some of its customers are already using its products to monitor and report on industrial systems. Some co-opetition will surely be a good thing as the owners of CNIs seek to run and secure them better for the benefit of their customers and in the interests of national security.


July 15, 2014  9:47 AM

Do increasing worries about insider threats mean it is time to take another look at DRM?

Bob Tarzey Profile: Bob Tarzey
Data Loss, Digital rights management, Fasoo

The encryption vendor SafeNet publishes a Breach Level Index, which records actual reported incidents of data loss. Whilst the number of losses attributed to malicious outsiders (58%) exceeds the number attributed to malicious insiders (13%), SafeNet claims that insiders account for more than half of the actual information lost. This is because insiders will also be responsible for the accidental losses that account for a further 26.5% of incidents, and the stats do not take into account the fact that many breaches caused by insiders go unreported. The insider threat is clearly something that organisations need to guard against to protect their secrets and regulated data.

Employees can be coached to avoid accidents and technology can support this. Intentional theft is harder to prevent, whether it is for reasons of personal gain, industrial espionage or just out of spite. According to Verizon's Data Breach Investigations Report, 70% of the thefts of data by insiders are committed within thirty days of an employee resigning from their job, suggesting they plan to take data with them to their new employer. Malicious insiders will try to find a way around the barriers put in place to protect data; training may even serve to provide useful pointers about how to go about it.

Some existing security technologies have a role to play in protecting against the insider threat. Basic access controls built into data stores, linked to identity and access management (IAM) systems, are a good starting point; encryption of stored data strengthens this, helping to ensure only those with the necessary rights can access data in the first place. In addition, there have been many implementations of data loss prevention (DLP) systems in recent years; these monitor the movement of data over networks, alert when content is going somewhere it shouldn't and, if necessary, block it.

However, if a user has the rights to access data, and indeed to create it in the first place, then these systems do not help, especially if the user is to be trusted to use that data on remote devices. To protect data at all times controls must extend to wherever the data is. It is to this end that renewed interest is being taken in digital rights management (DRM). In the past issues such as scalability and user acceptance have held many organisations back from implementing DRM. That is something DRM suppliers such as Fasoo and Verdasys have sought to address.

DRM, as with DLP, requires all documents to be classified from the moment of creation and monitored throughout their life cycle. With DRM user actions are controlled through an online policy server, which is referred to each time a sensitive document is accessed. So, for example, a remote user can be prevented from taking actions on a given document such as copying or printing; documents can only be shared with other authorised users. Most importantly an audit trail of who has done what to a document, and when, is collected and managed at all stages.
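The policy-server pattern described above can be illustrated with a generic sketch: every action on a classified document is referred to a policy decision point and written to an audit trail, whatever device the user happens to be on. This is illustrative only – the classifications, roles and actions are invented and it is not Fasoo's or Verdasys's actual API.

```python
# A generic sketch of the DRM policy-check pattern, not any vendor's product.
from datetime import datetime, timezone

POLICY = {  # (classification, action) -> roles allowed; invented for the sketch
    ("confidential", "view"):  {"finance", "legal"},
    ("confidential", "print"): {"legal"},
    ("confidential", "copy"):  set(),        # nobody may copy outside the container
}

audit_trail: list[dict] = []

def request_action(user: str, roles: set[str], doc_id: str,
                   classification: str, action: str) -> bool:
    allowed = bool(POLICY.get((classification, action), set()) & roles)
    audit_trail.append({                      # who did what to which document, and when
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user, "doc": doc_id, "action": action, "allowed": allowed,
    })
    return allowed

print(request_action("asmith", {"finance"}, "doc-42", "confidential", "view"))   # True
print(request_action("asmith", {"finance"}, "doc-42", "confidential", "print"))  # False
print(len(audit_trail), "audit records")
```

The important point is the last one: whether the action is allowed or refused, the audit trail records it, which is exactly the evidence regulators and investigators ask for.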

Just trusting employees would be cheaper and easier than implementing more technology. However, it is clear that this is not a strategy businesses can move forward with. Even if they are prepared to take risks with their own intellectual property, regulators will not accept a casual approach when it comes to sensitive personal and financial data. If your organisation cannot be sure what users are doing with its sensitive data at all times, perhaps it is time to take a look at DRM.

Quocirca’s report “What keeps your CEO up at night? The insider threat: solved with DRM”, is freely available here.


July 10, 2014  9:15 AM

Top 10 characteristics of high performing MPS providers

Louella Fernandes Profile: Louella Fernandes
Uncategorized

Quocirca’s research reveals that almost half of enterprises plan to expand their use of managed print services (MPS). MPS has emerged as a proven approach to reducing operational costs and improving the efficiency and reliability of a business’s print infrastructure at a time when in-house resources are increasingly stretched. 

Typically, the main reasons organisations turn to MPS are cost reduction, predictability of expenses and service reliability. However they may also benefit from implementation of solutions such as document workflow, mobility and business process automation, to boost collaboration and productivity among their workforce. MPS providers can also offer businesses added value through transformation initiatives that support revenue and profit growth. MPS providers include printer/copier manufacturers, systems integrators and managed IT service providers. As MPS evolves and companies increase their dependence on it, whatever a provider’s background, it’s important that they can demonstrate their credibility across a range of capabilities. The following are key criteria to consider when selecting an MPS provider:

  1. Strong focus on improving customer performance – In addition to helping customers improve the efficiency of their print infrastructure, leading MPS providers can help them drive transformation and increase employee productivity as well as supporting revenue growth.  An MPS provider should understand the customer’s business and be able to advise them on solutions that can be implemented to improve business performance, extend capabilities and reach new markets.
  2. A broad portfolio of managed services – Many organisations may be using a variety of providers for their print and IT services. However, managing multiple service providers can also be costly and complex. For maximum efficiency, look for a provider with a comprehensive suite of services which cover office and production printing, IT services and business process automation. As businesses look more to 'as-a-service' options for software implementation, consider MPS providers with strong expertise across both on-premise and cloud delivery models.
  3. Consistent global service delivery with local support – Global delivery capabilities offer many advantages, including rapid implementation in new locations and the ability to effectively manage engagements across multiple countries. However it’s also important that a provider has local resources with knowledge of the relevant regulatory and legal requirements. Check whether an MPS provider uses standard delivery processes across all locations and how multi-location teams are organised and collaborate.
  4. Proactive continuous improvement – An MPS provider must go beyond a break/fix model to offer proactive and pre-emptive support and maintenance. As well as simple device monitoring they should offer advanced analytics that can drive proactive support and provide visibility into areas for on-going improvement.
  5. Strong multivendor support – Most print infrastructures are heterogeneous environments comprising hardware and software from a variety of vendors, so MPS providers should have proven experience of working in multivendor environments. A true vendor-agnostic MPS provider should play the role of trusted technology advisor, helping an organisation select the technologies that best support their business needs. Independent MPS providers should also have partnerships with a range of leading vendors, giving them visibility of product roadmaps and emerging technologies.
  6. Flexibility – Businesses will always want to engage with MPS in a variety of different ways. Some may want to standardise on a single vendor's equipment and software, while others may prefer multivendor environments. Some may want a provider to take full control of their print infrastructure while others may only want to hand over certain elements. And some may want to mix new technology with existing systems so they can continue to leverage past investments. Leading MPS providers offer flexible services that are able to accommodate such specific requirements. Flexible procurement and financial options are also key, with pricing models designed to allow for changing needs.
  7. Accountability – Organisations are facing increased accountability demands from shareholders, regulators and other stakeholders. In turn, they are demanding greater accountability from their MPS providers. A key differentiator for leading MPS providers is ensuring strong governance of MPS contracts, and acting as a trusted, accountable advisor, making recommendations on the organisation’s technology roadmap. MPS providers must be willing to meet performance guarantees through contractual SLAs, with financial penalties for underperformance. They should also understand the controls needed to meet increasingly complex regulatory requirements.
  8. Full service transparency – Consistent service delivery is built on consistent processes that employ a repeatable methodology. Look for access to secure, web-based service portals with dashboards that provide real-time service visibility and flexible reporting capabilities.
  9. Alignment with standards – An MPS provider should employ industry best practices, in particular aligning with the ITIL approach to IT service management. ITIL best practices encompass problem, incident, event, change, configuration, inventory, capacity and performance management as well as reporting.
  10. Innovation – Leading MPS providers demonstrate innovation. This may include implementing emerging technologies and new best practices and continually working to improve service delivery and reduce costs. Choose a partner with a proven track record of innovation. Do they have dedicated research centres or partnerships with leading technology players and research institutions? You should also consider how a prospective MPS provider can contribute to your own company’s innovation and business transformation strategy. Bear in mind that innovation within any outsourcing contract may come at a premium – this is where gain-sharing models may be used.

Ultimately, businesses are looking for more than reliability and cost reduction from their MPS provider. Today they also want access to technologies that can increase productivity and collaboration and give them a competitive advantage as well as help with business transformation. By ensuring a provider demonstrates the key characteristics above before committing, organisations can make an informed choice and maximise the chances of a successful engagement. Read Quocirca’s Managed Print Services Landscape, 2014


June 26, 2014  10:05 AM

What is happening to the boring world of storage?

Clive Longbottom Profile: Clive Longbottom
Dell, EMC, IBM, PCI Express, SanDisk, ScaleIO, VeloBit

Storage suddenly seems to have got interesting again.  As the interest moves from increasing spin speeds and applying ever more intelligent means to get a disk head over the right part of a disk in the fastest possible time to flash-based systems where completely different approaches can be taken, a feeding frenzy seems to be underway.  The big vendors are in high-acquisition mode, while the new kids on the block are mixing things up and keeping the incumbents on their toes.

After the acquisitions of Texas Memory Systems (TMS) by IBM, Whiptail by Cisco and XtremIO by EMC in 2012 and 2013, it may have looked like it was time for a period of calm reflection and the full integration of what had been acquired.  However, EMC acquired ScaleIO and then super-stealth server-side flash company DSSD to help it create a more nuanced storage portfolio capable of dealing with multiple different workloads on the same basic storage architecture.

Pure Storage suddenly popped up and signed a cross-licensing and patent agreement with IBM, acquiring over 100 storage and related patents from IBM and stating that this was a defensive move to protect itself from any patent trolling by other companies (or shell companies).  However, it is also likely that IBM will gain some technology benefits from the cross-licensing deal.  At the same time as the IBM deal, Pure also acquired other patents to bolster its position.

SanDisk acquired Fusion-io, another server-side flash pioneer.  More of a strange acquisition, this one – Fusion-io would have been more of a fit for a storage array vendor looking to extend its reach into converged fabric through PCIe storage cards.  SanDisk will now have to forge much stronger links with motherboard vendors – or start to manufacture its own motherboards – to make this acquisition work well.  However, Western Digital had also been picking up flash vendors, such as Virident (itself a PCIe flash vendor), sTec and VeloBit; Seagate acquired the SSD and PCIe parts of Avago – maybe SanDisk wanted to be seen to be doing something.

Then we have Nutanix: a company that started off marketing itself as a scale-out storage company but was actually far more of a converged infrastructure player.  It has just inked a global deal with Dell, under which Dell will license Nutanix's web-scale software to run on Dell's own converged architecture systems.  This deal gives a massive boost to Nutanix: it gains access to the louder voice and greater reach of Dell, while still maintaining its independence in the market.

Violin Memory has not been sitting still either.  A company that has always had excellent technology based on moving away from the concept of the physical disk drive, it uses a PCI-X in-line memory module approach (which it calls VIMMs) to provide all-flash based storage arrays.  However, it did suffer from being a company with great hardware but little in the way of intelligent software.

After its IPO, it found that it needed a wholesale change in its management, and under a new board and senior management team, Quocirca is seeing some massive changes in its approach to its product portfolio.  Firstly, Violin brought the Windows Flash Array (WFA) to market – far more of an appliance than a storage array.  Now, it has launched its Concerto storage management software as part of its 7000 all-flash array.  Those who have already bought the 6000 array can choose to upgrade to a Concerto-managed system in situ.

Violin has, however, decided that PCIe storage is not for it – it has sold off that part of its business to SK Hynix.

The last few months have been hectic in the storage space.  For buyers, it is a dangerous time – it is all too easy to find yourself with high-cost systems that are superseded and unsupported all too quickly, or where the original vendor is acquired or goes bust, leaving you with a dead-end system.  There will also be continued evolution of systems to eke out those extra bits of performance, and a buyer now may not be able to deal with these changes simply by abstracting everything through a software-defined storage (SDS) layer.

However, flash storage is here to stay.  At the moment, it is tempting to choose flash systems for specific workloads where you know that you will be replacing the systems within a relatively short period of time anyway. This is likely to mean mission-critical, latency-dependent workloads where the next round of investment in the next generation of low-latency, high-performance storage can be made within 12-18 months. Server-side storage systems using PCIe cards should be regarded as highly niche for the moment: it will be interesting to see what EMC does with DSSD and what Western Digital and SanDisk do with their acquisitions, but for now the lack of true abstraction of PCIe storage (apart from via software from the likes of PernixData) keeps it a niche choice.

For general storage, the main storage vendors will continue to move from all-spinning-disk to hybrid and then to all-flash arrays over time – it is probably best to just follow the crowd here for the moment.


June 19, 2014  8:38 AM

Cloud infrastructure services, find a niche or die?

Bob Tarzey Profile: Bob Tarzey
Uncategorized

Back in May it was reported that Morgan Stanley had been appointed to explore options for the sale of hosted services provider Rackspace. Business Week reported the story on May 16th with the headline Who Might Buy Rackspace? It's a Big List. 24/7 Wall St. reported analysis from Credit Suisse that narrowed this to three potential suitors: Dell, Cisco and HP.

To cut a long story short, Rackspace sees a tough future competing with the big three in the utility cloud market; Amazon, Google and Microsoft. Rackspace could be attractive to Dell, Cisco, HP and other traditional IT infrastructure vendors, that see their core business being eroded by the cloud and need to build out their own offerings (as does IBM which has already made significant acquisitions).

Quocirca sees another question that needs addressing. If Rackspace, one of the most successful cloud service providers, sees the future as uncertain in the face of competition from the big three, then what of the myriad of smaller cloud infrastructure providers? For them the options are twofold.

Be acquired or go niche

First, achieve enough market penetration to become an attractive acquisition target for the larger established vendors that want to bolster their cloud portfolios. As well as the IT infrastructure vendors, this includes communications providers and system integrators.

Many have already been acquisitive in the cloud market. For example, the US number three carrier CenturyLink has bought Savvis, AppFog and Tier 3, and NTT's system integrator arm Dimension Data has added to its existing cloud services with OpSource and BlueFire. Other cloud service providers have merged to beef up their presence, for example Claranet and Star.

The second option for smaller providers is to establish a niche where the big players will find it hard to compete. There are a number of cloud providers that are already doing quite well at this, relying on a mix of geographic, application or industry specialisation. Here are some examples:

Exponential-E – highly integrated network and cloud services

Exponential-E's background is as a UK-focused virtual private network provider, using its own cross-London metro network and services from BT. In 2010 the vendor moved beyond networking to provide infrastructure-as-a-service. Its differentiator is to embed this into its own network services at network layer 2 (switching etc.) rather than at higher levels. Its customers get the security and performance that would be expected from internal WAN-based deployments, which cannot be achieved for cloud services accessed over the public internet.

City Lifeline – in finance latency matters

City Lifeline's data centre is shoe-horned into an old building near Moorgate in central London. Its value proposition is low latency: proximity to the big City institutions, for which it charges a premium over out-of-town premises.

Eduserv – governments like to know who they are dealing with

For reasons of compliance, ease of procurement and security of tenure, government departments in any country like to have some control over their suppliers, and this includes the procurement of cloud services. Eduserv is a not-for-profit, long-term supplier of consultancy and managed services to the UK government and charity organisations. In order to help its customers deliver better services, Eduserv has developed cloud infrastructure offerings out of its own data centre in the central southern UK town of Swindon. As a UK G-Cloud partner it has achieved IL3 security accreditation, enabling it to host official government data. Eduserv provides value-added services to help customers migrate to cloud, including cloud adoption assessments, service designs and on-going support and management.

Firehost – performance and security for payment processing

Considerable rigour needs to go into building applications for processing highly secure data for sectors such as financial services and healthcare. This rigour must also extend to the underlying platform. Firehost has built an IaaS platform to target these markets. In the UK its infrastructure is co-located with Equinix, ensuring access to multiple high-speed carrier connections. Within such facilities, Firehost applies its own cage-level physical security. Whilst infrastructure is shared, it maintains the feel of a private cloud, with enhanced security through protected VMs with a built-in web application firewall, DDoS protection, IP reputation filtering and two-factor authentication for admin access.

Even for these providers the big three do not disappear. In some cases their niche capability may simply see them bolted on to bigger deployments, for example a retailer off-loading its payment application to a more secure environment. In other cases, existing providers are starting to offer enhanced services around the big three to extend in-house capability, for example UK hosting provider Attenda now offers services around Amazon Web Services (AWS).

For many IT service providers, the growing dominance of the big three cloud infrastructure providers, along with the strength of software-as-a-service providers such as salesforce.com, NetSuite and ServiceNow, will turn them into service brokers. This is how Dell positioned itself at its analyst conference last week; of course, that may well change were it to buy Rackspace.



June 17, 2014  9:58 AM

Cloud orchestration – will a solution come from SCM?

Clive Longbottom Profile: Clive Longbottom
Business, Cloud Computing, Configuration management, DevOps, Git, IBM, Programming, Serena

Serena Software is a software change and configuration management vendor, right?  It has recently released its Dimensions CM 14 product, with additional functionality driving Serena further into the DevOps space, as well as making it easier for distributed development groups to work collaboratively through synchronised libraries with peer review capabilities.

Various other improvements, such as change and branch visualisation and the use of health indicators to show how "clean" code is and where any change sits in a development/operations process, as well as integrations with the likes of Git and Subversion, mean that Dimensions CM 14 should help many a development team as it moves from an old-style separate development, test and operations system to a more agile, process-driven, automated DevOps environment.

However, it seems to me that Serena is actually sitting on something far more important.  Cloud computing is an increasing component of many an organisation's IT platform, and there will be a move away from the monolithic application towards a more composite one. By this, I mean that, depending on the business's needs, an application will be built up on the fly from a set of functions to facilitate the business process in hand.  Through this means, an organisation can be far more flexible and can ensure that it adapts rapidly to changing market needs.

The concept of the composite application does bring in several issues, however.  Auditing what functions were used when is one of them.  Identifying the right functions to be used in the application is another.  Monitoring the health and performance of the overall process is another.

So, let’s have a look at why Serena could be the one to offer this.

·         A composite application is made up from a set of discrete functions.  Each of these can be looked at as being an object requiring indexing and having a set of associated metadata.  Serena Dimensions CM is an object-oriented system that can build up metadata around objects in an intelligent manner.

·         Functions that are available to be used as part of a composite application need to be available from a library.  Dimensions is a library-based system.

·         Functions need to be pulled together in an intelligent manner, and instantiated as the composite application.  This is so close to a DevOps requirement that Dimensions should shine in its capabilities to carry out such a task.

·         Any composite application must be fully audited so that what was done at any one time can be demonstrated at a later date.  Dimensions has strong and complex versioning and audit capabilities, which would allow any previous state to be rebuilt and demonstrated as required at a later date.

·         Everything must be secure.  Dimensions has rigorous user credentials management – access to everything can be defined by user name, role or function.  Therefore, the way that a composite application operates can be defined by the credentials of the individual user.

·         The “glue” between functions across different clouds needs to be put in place.  Unless cloud standards are improved drastically, getting different functions to work seamlessly together will remain difficult.  Some code will be required to ensure that Function A and Function B do work well together to facilitate Process C.  Dimensions is capable of being the centre for this code to be developed and used – and also as a library for the code to be stored and reused, ensuring that the minimum amount of time is lost in putting together a composite application as required.

Obviously, it would not be all plain sailing for Serena to enter such a market.  Its brand equity currently lies within the development market.  Serena would find itself in competition with the incumbent systems management vendors such as IBM and CA.  However, these vendors are still struggling to come to terms with what the composite application means to them – it could well be that Serena could layer Dimensions on top of existing systems to offer the missing functionality. 

Dimensions would need to be enhanced to provide functions such as the capability to discover and classify available functions across hybrid cloud environments.  A capacity to monitor and measure application performance would be a critical need – which could be created through partnerships with other vendors. 
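To make the idea of such a catalogue concrete, here is a minimal sketch of the kind of capability argued for above: reusable functions registered with metadata, a composite application assembled from them by capability, and the exact composition recorded for later audit. This is illustrative only – it is not Serena Dimensions functionality or its API, and the function names are invented.

```python
# A toy function catalogue with composition auditing - illustrative only.
from datetime import datetime, timezone

catalogue: dict[str, dict] = {}   # function name -> metadata
compositions: list[dict] = []     # audit trail of what was assembled, and when

def register(name: str, version: str, capability: str, cloud: str) -> None:
    catalogue[name] = {"version": version, "capability": capability, "cloud": cloud}

def compose(app_name: str, needed_capabilities: list[str]) -> list[str]:
    """Pick catalogued functions matching the required capabilities and audit the result."""
    chosen = [name for name, meta in catalogue.items()
              if meta["capability"] in needed_capabilities]
    compositions.append({
        "app": app_name,
        "functions": {n: catalogue[n]["version"] for n in chosen},
        "when": datetime.now(timezone.utc).isoformat(),
    })
    return chosen

register("credit-check", "2.1", "risk-scoring", "private")
register("fx-rates", "1.4", "market-data", "public")
print(compose("loan-quote", ["risk-scoring", "market-data"]))
print(compositions[-1])   # which function versions were composed, and when
```

The audit record at the end is the piece that matters most in a regulated environment: being able to show, later, exactly which function versions made up the application at any given moment.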

Overall, Dimensions CM 14 is a good step forward in providing additional functionality to those in the DevOps space.  However, it has so much promise that I would like to see Serena take the plunge and see if it can move it into a more business-focused capability.


June 11, 2014  10:28 AM

It’s all happening in the world of big data.

Clive Longbottom Profile: Clive Longbottom
Uncategorized

For a relatively new market, there is a lot happening in the world of big data.  If we were to take a "Top 20" look at the technologies, it would probably read something along the lines of this week's biggest climber being Hadoop, the biggest loser being relational databases, and the schema-less databases holding their place.

Why?  Well, Actian has announced the availability of its SQL-in-Hadoop offering.  Not just a small subset of SQL, but a very complete implementation.  Therefore, your existing staff of SQL devotees, and all the tools they use, can now work against data stored in HDFS, as well as against Oracle, Microsoft SQL Server, IBM DB2 et al.

Why is this important?  Well, Hadoop has been one of those fascinating tools that promises a lot – but only delivers on this promise if you have a bunch of talented technophiles who know what they are doing.  Unfortunately, these people tend to be as rare as hen's teeth – and are picked up and paid accordingly by vendors and large companies.  Now, a lot of the power of Hadoop can be put in the hands of the average (still nicely paid) database administrator (DBA).

The second major change that this could start to usher in is the use of Hadoop as a persistent store.  Sure, many have been doing this for some time, but at Quocirca we have long advised that Hadoop only be used for its MapReduce capabilities, with the outputs being pushed towards a SQL or NoSQL database depending on the format of the resulting data, and with business analytics being layered over the top of the SQL/NoSQL pair.

With SQL being available directly into and out of Hadoop, new applications could use Hadoop directly, and mixed data types can be stored as SQL-style or as JSON-style constructs, with analytics being deployed against a single data store.
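From the DBA's point of view, "SQL directly against HDFS-resident data" looks like any other database connection and any other query. The sketch below uses a generic ODBC connection via pyodbc; the DSN, credentials and table names are hypothetical, and the exact driver and connection details will depend on the SQL-in-Hadoop product in use.

```python
# A sketch of querying HDFS-resident data with ordinary SQL over ODBC.
# DSN and table names are invented; driver specifics vary by product.
import pyodbc

conn = pyodbc.connect("DSN=hadoop_cluster;UID=analyst;PWD=secret")  # hypothetical DSN
cursor = conn.cursor()

# Standard SQL, even though the underlying table lives in HDFS rather than
# a traditional relational engine.
cursor.execute("""
    SELECT region, SUM(order_value) AS total
    FROM   web_orders
    WHERE  order_date >= '2014-01-01'
    GROUP  BY region
    ORDER  BY total DESC
""")

for region, total in cursor.fetchall():
    print(region, total)

conn.close()
```

The point is that nothing in that snippet requires MapReduce expertise – which is exactly why putting full SQL over Hadoop widens its audience so dramatically.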

Is this marking the end for relational databases?  Of course not.  It is highly unlikely that those using Oracle E-Business Suite will jump ship and go over to a Hadoop-only back end, nor will the vast majority of those running mission-critical applications that currently use relational systems.  However, new applications that require large datasets to be run on a linearly scalable, cost-effective data store could well find that Actian provides them with a back end that works for them.

Another vendor that made an announcement around big data a little while back was Syncsort, which has made its Ironcluster ETL engine available on AWS essentially for free – or at worst at a price you would hardly notice, with charges based only on the workload being undertaken.

Extract, transform and load (ETL) activities have long been a major issue with data analytics, and solutions have grown up around the issue – but at a pretty high price.  In the majority of cases, ETL tools have also only been capable of dealing with relational data – making them pretty useless when it comes to true big data needs.

By making Ironcluster available on AWS, Syncsort is playing the elasticity card.  Those requiring analysis of large volumes of data have a couple of choices – buy a few acres' worth of expensive in-house storage, or go to the cloud.  AWS EC2 (Elastic Compute Cloud) is a well-proven, easy-access and predictable-cost environment for running an analytics engine – provided that the right data can be made available rapidly.

Syncsort also makes Ironcluster available through AWS’ Elastic MapReduce (EMR) platform, allowing data to be transformed and loaded directly onto a Hadoop platform.

With a visual front end and utilising an extensive library of data connectors from Syncsort’s other products, Ironcluster offers users a rapid and relatively easy means of bringing together multiple different data sources across a variety of data types and creating a single data repository that can then be analysed.
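The underlying pattern is worth spelling out, because it is the same whatever tool performs it: extract from differently shaped sources, transform them onto a common structure, and load a single repository that analytics can run against. The sketch below is a generic pandas illustration of that pattern, not Ironcluster itself; the file names and column names are invented.

```python
# A generic extract-transform-load sketch - not Syncsort Ironcluster.
import pandas as pd

# Extract: one relational-style CSV export and one JSON feed of web events.
orders = pd.read_csv("crm_orders.csv")      # assumed columns: customer_id, value, ts
clicks = pd.read_json("web_clicks.json")    # assumed columns: customer_id, page, ts

# Transform: align the two sources on a common schema.
orders = orders.rename(columns={"value": "amount"}).assign(source="crm")
orders = orders[["customer_id", "amount", "ts", "source"]]
clicks = clicks.assign(amount=0.0, source="web")[["customer_id", "amount", "ts", "source"]]

# Load: a single repository that analytics can then be pointed at.
combined = pd.concat([orders, clicks], ignore_index=True)
combined.to_csv("customer_activity.csv", index=False)
print(combined.groupby("source")["amount"].sum())
```

The commercial argument is about where and at what cost this runs: doing it elastically on EC2 or EMR, paid per workload, rather than on a permanently provisioned in-house ETL stack.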

Syncsort is aiming to be highly disruptive with this release – even at its most expensive, the costs are well below those of equivalent licence-and-maintenance ETL tools, and make other subscription-based services look rather expensive.

Big data is a market that is happening, but is still relatively immature in the tools that are available to deal with the data needs that underpin the analytics.  Actian and Syncsort are at the vanguard of providing new tools that should be on the shopping list of anyone serious about coming to terms with their big data needs.


May 12, 2014  9:31 AM

The continuing evolution of EMC

Clive Longbottom Profile: Clive Longbottom
EMC, Federation, IBM, VMware, XtremIO

The recent EMCWorld event in Las Vegas was arguably less newsworthy for its product announcements than for the way that the underlying theme and message continues to move EMC away from the company that it was just a couple of years ago.

The new EMC II (as in “eye, eye”, standing for “Information Infrastructure”, although it might as well be as in Roman numerals to designate the changes on the EMC side of things) is part of what Joe Tucci, Chairman and CEO of the overall EMC Corporation calls “The Federation” of EMC II, VMware, and Pivotal.  The idea is that each company can still play to its strengths while symbiotically feeding off each other to provide more complete business systems as required.  More on this, later.

At last year's event, Tucci started to make the point that the world was becoming more software oriented, and that he saw the end result of this being the "software defined data centre" (SDDC) based on the overlap between the three main software defined areas of storage, networks and compute.  The launch of ViPR as a pretty far-reaching software defined suite was used to show the direction that EMC was taking – although as was pointed out at the time, it was more vapour than ViPR. Being slow to the flash storage market, EMC showed off its acquisition of XtremIO – but didn't really seem to know what to do with it.

On to this year.  Although hardware was still being talked about, it is now apparent that the focus from EMC II is to create storage hardware that is pretty agnostic as to the workloads thrown at it, whether this be file, object or block. XtremIO has morphed from an idea of “we can throw some flash in the mix somewhere to show that we have flash” to being central to all areas.  The acquisition of super-stealth server-side flash outfit DSSD only shows that EMC II does not believe that it has all the answers yet – but is willing to invest in getting them and integrating them rapidly.

However, the software side of things is now the obvious focus for EMC Corp.  ViPR 2 was launched and now moves from being a good idea to a really valuable product that is increasingly showing its capabilities to operate not only with EMC equipment, but across a range of competitors' kit and software environments as well.  The focus is moving from the SDDC to the software defined enterprise (SDE), enabling EMC Corp to position itself across the hybrid world of mixed platforms and clouds.

ScaleIO, EMC II's software layer for creating scalable storage based on commodity hardware underpinnings, was also front and centre in many aspects.  Although hardware is still a big area for EMC Corp, it is not seen as being the biggest part of the long-term future.

EMC Corp seems to be well aware of what it needs to do.  It knows that it cannot leap directly from its existing business of storage hardware with software on top to a completely next-generation model of software that is less hardware-dependent without stretching to breaking point its existing relationships with customers and the channel – as well as Wall Street.  Therefore, it is using an analogy of 2nd and 3rd platforms, along with the term "digital born", to identify where it needs to apply its focus.  The 2nd Platform is where most organisations are today: client/server and basic web-enabled applications.  The 3rd Platform is where companies are slowly heading – one where there is high mobility, a mix of different cloud and physical compute models and an end game of on-the-fly composite applications built from functions available from a mix of private and public cloud systems. (For anyone interested, the 1st Platform was the mainframe.)

The “digital born” companies are those that have little to no legacy IT: they have been created during the emergence of cloud systems, and will already be using a mix of on-demand systems such as Microsoft Office 365, Amazon Web Services, Google and so on.

By identifying this basic mix of usage types, Tucci believes that not only EMC II, but the whole of The Federation will be able to better focus its efforts in maintaining current customers while bringing on board new ones.

I have to say that, on the whole, I agree.  EMC Corp is showing itself to be remarkably astute in its acquisitions, in how it is integrating these to create new offerings and in how it is changing from a “buy Symmetrix and we have you” company to a “what is the best system for your organisation?” one.

However, I believe that there are two major stumbling blocks.  The first is that perennial problem for vendors – the channel.  Using a pretty basic rule of thumb, I would guess that around 5% of EMC Corp's channel "gets" the new EMC and can extend it to push the new offerings through to the customer base.  A further 20% can be trained in a high-touch model to be capable enough to be valuable partners.  The next 40% will struggle – many will not be worth putting any high-touch effort into, as the returns will not be high enough, yet they constitute a large part of EMC Corp's volume into the market.  At the bottom, we have the 35% who are essentially box-shifters, and EMC Corp has to decide whether to put any effort into these.  To my mind, the best thing would be to work on ditching them: the capacity for such channel to spread confusion and problems in the market outweighs the margin on the revenues they are likely to bring in.

This gets me back to The Federation.  When Tucci talked about this last year, I struggled with the concept.  His thrust was that EMC Corp research had shown that any enterprise technical shopping list has no more than 5 vendors on it.  By using a Federation-style approach, he believed that any mix of the EMC, VMware and Pivotal companies could be seen as being one single entity.  I didn't, and still do not, buy this.

However, Paul Maritz, CEO of Pivotal, put it across in a way that made more sense.  Individuals with the technical skills that EMC Corp requires could go to a large monolith such as IBM.  They would be compensated well; they would have a lot of resources at their disposal; they would be working in an innovative environment.  However, they would still be working for a "general purpose" IT vendor.  By going to one of the companies in EMC Corp's Federation, they would instead be working for a specialist: in EMC II, a company that specialises in storage technologies; in VMware, a virtualisation specialist; in Pivotal, a big data specialist – and each has its own special culture. For many individuals, this difference is a major one.

Sure, the devil remains in the detail, and EMC Corp is seeing a lot of new competition coming through into the market.  However, to my mind it is showing a good grasp of the problems it is facing and a flexibility and agility that belies the overall size and complexity of its corporate structure and mixed portfolio.

I await next year's event with strong interest.



May 2, 2014  2:42 PM

Finding new containers for the BYOD genii

Rob Bamforth Profile: Rob Bamforth
Business, BYOD, Data-security, Management, Quocirca, Security

Many headline IT trends are driven by organised marketing campaigns and backed by industry players with an agenda – standards initiatives, new consortia, developer ecosystems – and need a constant push, but others just seem to have a life of their own.

BYOD – bring your own device – is one such trend. There is no single group of vendors in partnership pushing the BYOD agenda; in fact most are desperately trying to hang onto its revolutionary coattails.  They do this in the face of IT departments around the world who are desperately trying to hang on to some control.

BYOD is all about 'power to the people' – power to make consumer-led personal choices – and this is very unsettling for IT departments that are tasked with keeping the organisation's resources safe, secure and productive.

No wonder that, according to Quocirca's recent research based on 700 interviews across Europe, over 23% of organisations only allow BYOD in exceptional circumstances, and a further 20% do not like it but feel powerless to prevent it. Even among those organisations that embrace BYOD, most still limit it to senior management.

This is typical of anyone faced by massive change; shock, denial, anger and confusion all come first and must be dealt with before understanding, acceptance and exploitation take over.

IT managers and CIOs have plenty to be shocked and confused about. On the one hand, they need to empower the business and avoid looking obstructive, but on the other, there is a duty to protect the organisation's assets. Adding to the confusion, vendors from all product categories have been leaping on the BYOD bandwagon and using it as a way to market their own products.

The real challenge is that many of the proposed 'solutions' perpetuate a myth about BYOD that is unfortunately inherent in its name, and which damages the approach taken to addressing the issues BYOD raises.

The reality is that this is not, and should not be, centred on the devices or who owns them, but on the enterprise use to which they are put.

The distinction is important for a number of reasons.

First, devices. There are a lot to choose from already today, with different operating systems and different form factors – tablets, smartphones and so on – and there is no reason to think this is going to get any simpler. If anything, with wearable technologies such as smart bands, watches and glasses already appearing, the diversity of devices is going to become an even bigger challenge.

Next, users. What might have started as an ‘I want’ (or even an “I demand”) from a senior executive, soon becomes an ‘I would like’ from knowledge workers, who now appear to be the vanguard for BYOD requests. But this is only the start as the requirement moves right across the workforce. Different roles and job responsibilities will dictate that different BYOD management strategies will have to be put in place. Simply trying to manage devices (or control choices) will not be an option.

Those who appear to be embracing rather than trying to deny BYOD in their organisations understand this. Their traits are that they tend to recognise the need to treat both tablets and smartphones as part of the same BYOD strategy and they are already braced for the changes that will inevitably come about from advances in technology.

Most crucially, however, they recognise the importance of data.

Information security is the aspect of BYOD most likely to keep IT managers awake at night – it is a greater concern than managing the devices themselves or indeed the applications they run.

The fear of the impact of a data security breach, however, seems to have created a 'deer in the headlights' reaction rather than galvanising IT into positive action. Hence the tendency to try to halt or deny BYOD in pretty much the same way that, in the past, many tried to stem the flow towards internet access, wireless networks and pretty much anything that opens up the 'big box' that has historically surrounded an organisation's digital assets.

Most organisations would do far better to realise that the big box approach is no longer valid, but that they can shrink the concept down to apply ‘little boxes’ or bubbles of control around their precious assets. This concept of containerisation or sandboxing is not new, but still has some way to go in terms of adoption and widespread understanding.

Creating a virtual separation between personal and work environments allows the individual employee to get the benefit of their own device preferences, and for the organisation to apply controls that are relevant and specific to the value and vulnerability of the data.

With the right policies in place this can be adapted to best-fit different device types and user profiles. Mobile enterprise management is still about managing little boxes, but virtual ones filled with data, not the shiny metal and plastic ones in the hands of users.
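As a purely illustrative sketch of that idea, the controls applied to the work container can be driven by the value of the data and the profile of the device rather than by who owns the handset. The classifications, profiles and controls below are invented for the example – this is not any MDM/MAM product's actual policy model.

```python
# A sketch of "little boxes": container controls keyed on data value and device profile.

CONTAINER_POLICY = {
    # (data classification, device profile) -> controls on the work container
    ("public",       "any"):        {"encrypt": False, "pin": False, "remote_wipe": False},
    ("internal",     "registered"): {"encrypt": True,  "pin": True,  "remote_wipe": True},
    ("confidential", "registered"): {"encrypt": True,  "pin": True,  "remote_wipe": True,
                                     "block_copy_paste": True, "block_cloud_backup": True},
}

def controls_for(classification: str, device_profile: str) -> dict:
    """Fall back to the most restrictive known policy if no exact match exists."""
    return CONTAINER_POLICY.get((classification, device_profile),
                                CONTAINER_POLICY[("confidential", "registered")])

# A personally owned but registered tablet handling confidential data gets the full
# set of container controls; the same device reading public material gets none.
print(controls_for("confidential", "registered"))
print(controls_for("public", "any"))
```

The device itself stays out of the equation; the controls follow the data, which is the whole point of the little-boxes approach.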

For more detailed information about getting to grips with BYOD, download our free report here



