With Kubernetes now established in many organisations as the container orchestration platform of the future, are cracks already starting to show? Well, not exactly – but if I could pull out one common theme from what’s hot in the world of cloud-native, it’s the dawning light of Day Two.
That’s a shorthand term used for when this stuff goes live – the Ops end of DevOps, if you like. Going live was only to be expected, yet as so often with these things, it seems to have caught some early adopters on the hop. The risks? Either you deploy containers into production without all the underpinnings needed to keep them fit and safe out in the wild, and they get exploited, or your high-profile project is embarrassingly terminated before launch due to its security failings.
Building for Day Two
However, Kubernetes – and cloud-native more broadly – is one of the fastest-growing areas of the whole IT industry. The result is a frenzy of activity as open-source, freemium and commercial projects jump in to build that necessary Day Two infrastructure. Here are some of the key areas of development to watch, and a few of the vendors and projects we’ve looked at recently:
Security: The ‘shift left’ that builds container security into the dev pipeline is essential – restricting container privileges and API access to the minimum needed, for example, using only trusted open-source components, keeping containers simple, and so on – but there’s more. So among other things, you should explore vulnerability and misconfiguration scanners (e.g. Aqua Security), container hardening and compliance (e.g. Twistlock, now being acquired by Palo Alto Networks), and tools to help deliver a secure Kubernetes service (e.g. Rancher Labs).
Storage: Containers are inherently stateless, and it’s safer if you make them immutable too. That means some careful thinking is needed when you containerise a stateful application and give it persistent storage to consume (we’ve a paper on this topic here). Once you’ve abstracted the physical storage as software-defined storage (SDS) using the likes of Ceph or Robin, which can of course themselves be deployed as stateful apps on Kubernetes, you then need to orchestrate and automate it, which is where the popular Rook project comes in.
Data management: As mentioned, the container may be stateless but if the running application is stateful then you still have data to manage, protect and migrate. Most traditional data management tools aren’t well suited to protecting containers, although their suppliers are working to catch up. In the meantime, new ideas and tools are emerging, such as Kasten for stateful app backup and migration, and the highly resilient distributed database CockroachDB.
Governance and cost management: As the use of Kubernetes grows, so does the risk of container sprawl. So we see projects such as Replex.io, aimed at keeping an inventory, rightsizing containers and managing costs, and Razee from IBM, which can enforce profiles and rules on containers, clusters and clouds – you could call it continuous delivery and auto-updating for Kubernetes clusters.
Runtimes and meshes
There’s lots more going on though, including new container runtime engines such as CRI-O and Containerd. Their aim, as Red Hat’s Urvashi Mohnani, speaking at Kubecon Barcelona, said of CRI-O, is “to make running containers in production secure & boring.”
And then there’s service mesh architecture, where the emerging duopoly between Istio and Linkerd has recently been disrupted by the arrival of the Service Mesh Interface, an interoperability project led by Microsoft but also including service mesh developers such as Buoyant, Solo.io and VMware.
What do you think – are your Kubernetes projects ready for Day Two? What’s missing, if anything, or what more might be needed? Tell us in the comments below…
I hate it when you realise that your point of reference has been unduly influenced by what’s currently trending rather than what’s really important. A recent briefing with Jim Liddle of Storage Made Easy (SME) highlighted how easy it is to fall into this trap.
It was my first ever briefing with SME, so as usual, I was listening for things to help me position the company and its solution on my mental map of the world. When I heard about SME’s ‘File Fabric’ – essentially a solution to enable federated access, search and discovery across a disparate set of storage resources – the phrase ‘multi-cloud storage’ immediately popped into my head, along with the potential benefits in an application delivery context. This was undoubtedly because it’s almost impossible to have a conversation with an IT vendor at the moment without either multi-cloud or DevOps coming up. Together with AI, they are examples of those trending terms that you are obliged to work into any presentation if you work in tech marketing.
Fix the basics first
As the discussion with SME developed, though, while the trendy use cases were certainly acknowledged, some more fundamental requirements came through more prominently. I was reminded – duh! – that most of an organisation’s electronic documents and files are still typically spread (and often duplicated) across a large number of on-prem file servers and other repositories, and that this still represents a significant headache for IT teams. We also talked about the pain and cost of dealing with business basics such as the need to move increasingly large files or data sets around their corporate network. And while the GDPR panic is over and compliance has become boring again, the truth is that few have this sorted across their organisation with any level of efficiency and consistency.
I knew these things, of course – I really did – but was guilty of being seduced into focusing on the more glamorous cloud and DevOps angles because these are what everyone in ‘the industry’ wants to talk about.
The good news is that while some vendors seem to be obsessed solely with these hot, trending topics, others, like Storage Made Easy, haven’t lost sight of the fact that you need to address the mundane as well as the sexy – and so much the better if you can do that in a single solution. Indeed, solving those tedious but important problems often provides a better case for investment than focusing on speculative benefits in more exciting areas of activity.
With this in mind, it’s interesting to consider how this picture of data fragmentation captured six years ago has changed in the interim. It’s almost certainly become even worse in most organisations. So from now on, in this and other areas, I am going to be even stricter than usual when it comes to keeping my analysis balanced across all the needs that matter.
Words have power – in particular, your choice of words modifies how your readers or listeners react to what you write or say. Politicians do this sort of thing all the time, swapping one word or phrase for another to promote their own version of a narrative or boost their audience’s prejudices. Sometimes the ‘loaded word’ or phrase is obvious, other times it is more subtle.
And of course we do it in business as well, as I was reminded this week at InfoSec in London, while speaking with Chris Bush of ObserveIT, one of several companies at the show offering to help safeguard against ‘insider threats’. Until recently this was more of a niche term, used mainly among specialists, and it covers all the things users might get wrong, including accidents, careless behaviour and so on.
However, it is increasingly being used openly now, and with a focus on the malicious subset of activity that lawyers and others know as ‘employee malfeasance’. Purely as a phrase, it is accurate – there are threats, and some come via insiders. But how does using the term ‘insider threats’ modify your assumptions about, and attitudes to, your colleagues?
You say one thing, what will others hear?
Words don’t come much more laden than ‘threat’. It’s a great one to use if you want to impress your CSO and other board members with the importance of data access controls, compliance and activity monitoring, cybersecurity training and so on. But what does it do to employee morale to know you’re a threat, rather than an asset? How does it affect the trust relationship that needs to exist in an organisation?
Of course, most of us welcome technology that brings reassurance and helps us avoid the errors that are the vast bulk of the traditional ‘insider threat’. None of us wants to be the one who inadvertently shared a secret directory, deleted a key database or emailed a confidential file to the wrong person. Safeguarding software will spend most of its time helping prevent things like this.
Chris added that within the ObserveIT toolkit there are also features to reassure staff about their privacy. For instance, the usage and auditing data is anonymised by default, and it is the metadata that is collected, not the actual content. That’s a good start*, but as we went on to discuss, what’s really needed is a more holistic approach.
Security is people, people are security
That’s because once we get onto ‘malicious insiders’ we’re verging into human resources and business psychology territory. As Chris noted, one of the major differences between insiders and outsiders is the variety of motivations at work. Whether it’s someone who was sacked, passed over for promotion, denied a pay rise or whatever, attacking your IT systems is just one way for a ‘wronged’ staffer to get revenge.
This matters because it means that mitigating the risk is at least as much a business management issue as it is an information security one. If someone’s in trouble, whether it’s stress, workplace bullying or other problems, the best fix will be to intervene and help them early. Similarly, if an insider incident does happen, HR needs to be involved in the investigation, in picking up the pieces, and in working to prevent it happening again.
So first, words matter – let’s stop talking about people as threats and start talking about their activity (this is classic psychology – criticise the behaviour, not the person). And second, if you’re in either IT security or HR, what are you doing to build bridges? Let me know please in the comments below.
*From the perspective of intelligence-gathering, threat detection and forensics, the metadata is actually far more useful and valuable than the content. However, from the perspective of most insider communications, it is likely to be a lot less intrusive.
How many new applications, tools, devices, and cloud services have you introduced into your environment over the past few years? Now think about how much you got rid of over that same period, i.e. how many things you decommissioned. Not such a big number, eh?
This is just one example of how complexity creeps up on us. We accumulate more stuff over time – sometimes for good reason, sometimes because it’s easier to add something new and live with the technical debt than disturb what’s already in place. Either way, it all needs to be integrated and managed, and the larger and more complex your environment gets, the more difficult this becomes. It also makes it harder to keep up with change and enhancement requests and more generally deliver a good user experience.
But these principles don’t just apply to enterprise IT environments: software vendors and cloud service providers are subject to them too. If your application suppliers, for example, don’t manage complexity, redundancy and technical debt, their overheads increase. That can mean fewer resources available for R&D and customer support, and possibly even higher prices. Not good for customers, who are also likely to experience more failures, inconsistencies and arbitrary constraints, including things not working as you would expect or in a seemingly illogical or overly-complicated manner.
Software that has been around for a long time is more prone to such issues, especially if the vendor has experienced rapid growth, made significant acquisitions, and/or had to pivot because of changing markets. The danger is always that in the rush to market, new functionality is layered on the old, and there’s never the time or money to properly rebuild, dedupe and harmonise.
One giveaway is an admin console that dictates different ways to do essentially the same things depending on which part of the system you are managing. Another is being forced to bounce around a complicated menu structure and make changes in several places in order to achieve something quite simple. If you find yourself muttering “It shouldn’t be this hard”, your pain is very likely a legacy of historical thinking, short-cutting and compromises.
If you administer systems such as Salesforce.com or Office 365 you’ll know how it is. It takes significant training and/or a lot of trial and error to understand the complexities, inconsistencies and dependencies, and ultimately tame such environments. Sure, it keeps IT professionals in work, but most would prefer to spend their time on more useful and rewarding activities.
To be fair to the vendors, established customers are often displeased when platform and architecture changes force costly and risky migrations. However, both customers and suppliers can only put off such changes for so long before it becomes very difficult to move on.
Taming the monster
The lesson is to consider complexity when making buying decisions or reviewing existing systems. Complexity isn’t bad per se – it’s often needed to deal with more demanding requirements. It makes sense though to stop, think, and beware of the kind of complexity that causes cost and constraint while adding no real value.
For a deeper discussion of what to consider in relation to solution complexity when making buying decisions, download our paper entitled “The Application Complexity Monster”.
One of the most interesting things about the recent Open Infrastructure Summit – the new name for the OpenStack users and friends conferences – in Denver was how many people wanted to know if I thought OpenStack had yet overcome its perception problem.
And a problem it is – or increasingly, it was. The amount of interest that OpenStack got when it came out was justified, given both its pedigree with NASA and Rackspace, and the opportunity it offered as an open and flexible replacement for the various proprietary web stacks in use.
However, the hype was not, because at that point it was not really ready for widespread deployment. In particular, it could be complex to integrate and operate, and it regularly received major updates. That was less of a problem for the well-resourced telcos who eagerly adopted and customised it, but more of an issue for the average enterprise who needed support from a third-party specialist to get OpenStack up and running smoothly.
Fast-forward a few years and all that has changed. As the user-base has grown, so has the pool of community knowledge and support. The software too has become both more sophisticated and easier to integrate and update.
Too blasé or too sceptical?
Step outside that user community though, and two contrasting narratives can be found. Those ‘in the know’ risk being too blasé – for some of those I spoke with at Open Infrastructure Summit, it’s just plumbing and it does not of itself deliver business value. For them, it is time to forget it and focus on what can be built on top.
The other narrative, which draws upon the disappointments that followed that excess hype of the past, is that it’s still a bit weird and of questionable reliability. Some of this is the notion that – with the exception perhaps of server Linux – open source is for hobbyists, or the idea that, since it is not the product of a single company, no one will be willing to take responsibility for it.
Just look at what most software developers put on their PowerPoint slides when they launch a new cloud-native app: compatibility with AWS, Azure and Google Cloud. Never mind that many of the world’s largest business-focused clouds are built on OpenStack. Whether by accident or design, this second narrative suits the proprietary suppliers of software infrastructure very well.
The truth is that both those narratives are flawed. Sure, it’s just plumbing, but if your plumbing isn’t solid and reliable you are going to have wet floors and mould. As to whether it’s robust and enterprise-grade, the likes of T-Mobile, Walmart and Volkswagen don’t normally build businesses on flaky infrastructure – and of course there’s now no shortage of companies willing to help you build and run your own OpenStack.
It’s getting hard these days to find places where container technology doesn’t feature, but in a hard drive? Yet that’s what is being explored in one of the Storage Networking Industry Association (SNIA) technical working groups under the title of computational storage.
The idea seems simple: all storage devices already contain microprocessors to handle the data management tasks, so why not give them a bit more compute power, some working memory and something else to do? And while that would have been difficult a few years ago, because these things aren’t designed to be reprogrammed by outsiders, adding a dock for containers makes it rather easier.
There are a couple of caveats, of course, one being that the average hard drive doesn’t have a lot of processor resource to spare, so it might not cope with more than a single container. Some smaller formats, such as M.2, might not even allow that. The other, as I realised when discussing it recently with storage industry veteran and SNIA board director Rob Peglar, is that computational storage is only ‘computational’ when seen through SNIA’s eyes. That’s to say, it is only about compute tasks that are specific to storage.
There’s a lot to compute in storage
However, that in turn is a wider field than we might first think. As well as obviously-relevant tasks such as data compression or calculating RAID parities and erasure code pairs, it could also stretch to things such as video transcoding, for example using the likes of FFmpeg to compress audio and video for storage.
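For a feel for what a storage-specific compute task actually involves, the parity calculation behind RAID-5-style protection can be sketched in a few lines. This is purely illustrative – real drives do this in firmware or dedicated silicon, and the block contents here are invented for the example:

```python
def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three data stripes; the parity block lets any single lost stripe be rebuilt.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Simulate losing the second stripe and rebuilding it from the survivors plus parity.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```

The same XOR trick underpins recovery: because XOR is its own inverse, combining the surviving blocks with the parity block regenerates the missing one – exactly the kind of repetitive, data-heavy work that benefits from staying inside the drive.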
You could even have a drive with computational capabilities providing peer-to-peer services to non-computational drives on the same PCIe subsystem. The single-container limitation means that each computational storage element is likely to be fixed-purpose, but P2P services might give us a way around that too: multiple drives, each with a different computational capability, could potentially share those capabilities over PCIe.
Examples of intelligent and computationally-capable drives already exist, for example from Eideticom, Netint, NGD Systems and ScaleFlux. But as Rob cautioned, there is still some standardisation work and ecosystem development to be done. In particular, computational storage is distributed processing so it needs orchestration, and storage-related application stacks will need adjustment to take advantage of it. “We are looking to the software industry for careful guidance,” he said.
Offloading drive I/O is the real win
It’s also worth noting that despite the ‘computational storage’ name, offloading the computational workload is a secondary benefit. More important is that all these storage processing tasks also require a lot of data to be moved between the processor and the storage device – lots of I/O, in other words.
As everything else in the system speeds up, I/O delays become more prominent and there is more incentive to find ways to reduce them. For example, as well as computational storage, which keeps the I/O within the drive (or the PCIe subsystem) for storage-specific processing work, there is non-volatile memory (NVM), which provides a different route to I/O reduction for mainstream tasks.
NVM puts fast and persistent storage in a slot next to the processor, and therefore keeps its I/O on the memory bus – we’ll write more about this soon. It is very likely that we will see systems incorporate both NVM and computational storage, as they address different needs.
Many cybersecurity professionals like military analogies. Indeed, some security pros are ex-military. But after a recent security exercise involving IBM’s new lorry-mounted SOC (security operations centre), I found myself wondering just how apt those analogies really are.
For example, there’s the popular statement that is frequently misquoted as “No battle plan survives contact with the enemy.” What its author, the 19th-century German field marshal and strategist Moltke the Elder, actually said* translates as: no plan of operations will “go with certainty past first contact with the enemy’s main strength.” His thesis was not that battle plans were pointless, but that commanders should establish flexible operational frameworks, giving stated intentions rather than detailed orders.
Cybersecurity versus the big battalions
So it is – or should be – with cybersecurity. When problems arrive one by one, they can be triaged, assigned and dealt with, like an army dealing with skirmishers and probing attacks. But when a major incident occurs, that’s when the fertiliser hits the fan. The enemy’s main battle tanks have arrived en masse, and you are about to find out which of your lieutenants can think on their feet.
All this seems obvious to us today in a military context – individual leaders are expected to use their initiative and judgement to get the job done within their commander’s overall framework. And yet, while it’s standard for modern militaries, anyone who has had the misfortune to work for a micro-manager will know how firmly it is ignored in much of civilian life.
So, back to the IBM lorry. Called the X-Force Command Cyber Tactical Operation Centre (CTOC), it’s designed for multiple roles. The primary one is training exercises for cybersecurity incident response teams, but it can also be used as a mobile SOC for special events – think of IBM’s Wimbledon sponsorship, for instance – and for educational outreach.
An obvious target for the latter is schools and colleges, to interest and inspire the next generation of security professionals, but it’s more than that. Outreach can also be to anyone from journalists and industry analysts to a company’s board members – the board of course being where the data protection buck ultimately stops.
It’s a fantastic set-up inside the X-Force CTOC lorry, with its expanding sides that open into a good-sized room. As well as the fully-equipped desks of the SOC itself, there are huge screens on the walls to throw up events and chart the progress of the ‘incident’, plus a computer room, satellite connectivity and a generator that make it all independent of the grid.
During an exercise, IBM’s backstage trainers phone in and send messages, playing the roles of end users, anxious customers, TV reporters and more. But while all this is usually referred to as training, this is where military analogies might lead us astray. What civilians picture is soldiers practising combat skills and small-unit tactics, but in reality – and in cybersecurity – that’s only part of the story.
The psychology of stressed teams is fundamental
A far bigger part is the psychology element. Firstly, it’s getting used to thinking on your feet and taking the initiative, as old Moltke might have said. But it’s also testing how your people – both individually, and more importantly as a team – respond to stress. Do they remember the framework or do they panic? Do they co-operate with colleagues or retreat inwards, ignoring the ringing phones? Do their assignments match their individual strengths and weaknesses? And of course, does the organisation’s draft response plan actually help or hinder?
The psychology of stressed teams is fundamental to an organisation’s response to a breach or other incident. So not only is it absolutely essential to invest in adequate incident response training for your team, it must be more than just tech skills – you need to test and stress them as a team too. Only then will you find out not just how their training benefits everyday business operations, but how it works in the heat of a cyberattack.
*In the original: “Kein Operationsplan reicht mit einiger Sicherheit über das erste Zusammentreffen mit der feindlichen Hauptmacht hinaus.”
If you think Brexit will be advantageous for the UK economy and the British people, this post is not for you. If your assessment of the available information, like mine, is that our economy will suffer and many UK businesses will need to cut overheads, then you need to review the Software as a Service (SaaS) contracts you have in place.
This is because so many SaaS contracts are fixed term, typically one to three years, and work on a ratchet basis. You can add seats, capacity, etc at any time during the contract term, but only once the current contract term is up can you reduce your commitment, or indeed cancel the service and/or move to an alternative provider. And if you choose to renew, you are then locked in for another year or more.
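To see what that ratchet mechanism costs in practice, here’s a hypothetical sketch – the seat counts and £30/seat price are invented for illustration, and real contracts vary:

```python
def annual_cost_fixed(seats_by_month, price_per_seat):
    """Fixed-term ratchet: seats can be added mid-term but never reduced,
    so each month you pay for the highest seat count reached so far."""
    total, committed = 0, 0
    for seats in seats_by_month:
        committed = max(committed, seats)  # the ratchet: commitments only go up
        total += committed * price_per_seat
    return total

def annual_cost_rolling(seats_by_month, price_per_seat):
    """Rolling monthly contract: pay only for the seats actually needed."""
    return sum(seats * price_per_seat for seats in seats_by_month)

# Hypothetical: 100 seats at £30/month, downsizing to 70 halfway through the year.
seats = [100] * 6 + [70] * 6
fixed = annual_cost_fixed(seats, 30)      # keeps paying for 100 seats all year
rolling = annual_cost_rolling(seats, 30)  # tracks the drop to 70 seats
```

In this made-up case the fixed-term deal costs £36,000 against £30,600 on a rolling basis. Bear in mind that rolling contracts usually carry a higher per-seat price, so the break-even point depends on how sharply and how soon your usage falls.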
The exact details and level of rigidity vary by provider, but some of the big players with more of an ‘old school’ commercial culture, such as Salesforce.com and Microsoft, are pretty strict and uncompromising on both terms and enforcement.
Other significant SaaS providers, e.g. Google, Dropbox and Slack to name but three, are more flexible. They allow you to contract on a rolling monthly basis, with the right to adjust the number of contracted seats down as well as up as the level of activity, requirement and/or performance changes within your business.
Flexible vs fixed terms
The value of such flexibility is obvious if you need to downsize your workforce, or reduce the number of contractors or collaborators you work with who would have previously had access to the system. Either way, you aren’t left paying an ongoing subscription for seats that aren’t being used.
But SaaS contract flexibility provides options for cost reduction in other ways. For example, even for core applications, it is not unusual to have a core of essential users, plus a set of users for whom system access is more of a ‘nice to have’ – a luxury you could afford when times were good, but one that’s harder to justify when the squeeze is on.
And if you review your SaaS portfolio carefully, you’re likely to spot solutions that the organisation could easily live without. One benefit of SaaS is that you can put a solution in place very quickly, but the downside is that this encourages speculative commitments. You may therefore have accumulated some ‘cloud clutter’ – services that seemed like a good idea at the time, but which ultimately didn’t deliver the expected value.
In an ideal world, where flexible contracts exist, you would manage costs through ongoing reviews of needs and usage patterns. You would flex subscriptions up or down, terminate services that are no longer needed, and switch providers where appropriate.
Review those contracts – are they worth it?
Back in the real world, the challenge is those providers who still insist on long-term fixed contracts and ratchet-style subscription mechanisms. For some core services, you may decide that the value is worth the pain, but even then it’s worth asking for a concession if you need it. While the most likely response to smaller businesses will be “Tough, that’s what you signed up to,” we have heard of providers willing to compromise on occasion. This is more likely to happen if you are into your second or third contract term.
For those borderline-value services, consider switching from an annual to a monthly rolling contract until your situation becomes more certain, if that option is available. Not all providers will offer this, and the ones that do will charge you more per seat per month if their preference is for fixed-term, but it can give you the flexibility to make tough choices in the interim without a major cost penalty.
For now though, I would urge anyone making new SaaS commitments in these uncertain times to make contract and commitment terms important factors in your decision-making. Having just one opportunity per year to terminate, switch or optimise means you are constrained by an arbitrary timetable which has nothing to do with your own business cycle or related milestones and events.
That kind of contract clearly isn’t consistent with the ‘on-demand’ and flexible spirit of the original cloud proposition, nor is it very Brexit-friendly. Now, more than ever, it is vital to beware of contractual lock-in when reviewing and making SaaS related commitments.
The Freeform Dynamics crystal ball has been given a polish for 2019. Here’s a few of the upcoming events – and risks – that we glimpsed in it…
Online privacy back in the headlines
A year on from GDPR, the PR machine again goes into overdrive as the EU ePrivacy Regulation, which expands on and supersedes parts of GDPR, comes into force. However, following the way that the original web-cookie rules were dodged by the use of large OK buttons, and the (incorrect) perception that GDPR was a damp squib, many in industry assume the ePrivacy fuss is a case of IT “crying wolf”. Then, with the regulators now having GDPR under their belts, the first fines hit – and they’re calculated at the same level as GDPR, so that’s up to €20m or 4% of worldwide annual turnover…
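That cap is the greater of two figures, which is easy to underestimate for larger firms. A quick sketch, using the upper GDPR tier (the turnover figures are illustrative):

```python
def max_fine_eur(worldwide_annual_turnover_eur):
    """Upper-tier GDPR-style cap: the greater of EUR 20m
    and 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

# For a small firm the EUR 20m floor applies; for a firm turning over
# EUR 10bn, the 4% rule lifts the ceiling to EUR 400m.
small_firm_cap = max_fine_eur(100_000_000)
large_firm_cap = max_fine_eur(10_000_000_000)
```

The point being that for any business turning over more than €500m, the percentage rule, not the €20m headline figure, sets the exposure.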
Data governance pays off – for some
Real benefits start to flow for those foresighted organisations that put strategic data governance and value-extraction plans in place, rather than simply aiming to minimise their GDPR costs. This is the upside of GDPR for those that saw it more as an opportunity than a risk, and – crucially – were willing to invest the time and effort that was needed to establish that data governance.
DevOps risks failure as it goes mainstream
More and more organisations scale out their DevOps strategy this year, so they can leverage its cost, quality and predictability benefits more broadly. Defining a more repeatable, scalable and infrastructure-driven approach becomes a priority, even for slower-moving application areas. It takes a DevOps failure or three, however, before the realisation spreads that while DevOps is full of goodness, it is not a panacea. The winners in all this are the IT architects, because at enterprise level, the ability to design, build and operate a resilient yet flexible systems architecture is far more important than the ability to cut code.
Facebook Workplace goes the way of Google+
Many people love Facebook, but no one has yet cracked the challenge of creating a secure equivalent for businesses, not even the web giant. Is it because the demands of business security are fundamentally incompatible with classic Silicon Valley attitudes to customer data? (As an aside, that oft-quoted line attributed to former Sun Microsystems boss Scott McNealy – “You have no privacy – get over it!” – is now 20 years old.) Or could it simply be that none of the work-focused services really offers enough beyond what Twitter and LinkedIn already provide?
Think we’re right – or wrong? Let us know in the comments below!
Anyone hoping that we would by now have greater clarity on GDPR will have been sadly disappointed. Large swathes of the Regulation have yet to be tested in a court, and many of the high profile fines that you may have read about were actually issued for contraventions of the pre-GDPR rules.
That’s partly because GDPR remains as I and others described it a year ago: descriptive, not prescriptive. Still, when I joined a recent debate on what’s emerged in the months since GDPR Day, we were able to identify several notable clarifications, consequences and caveats.
The biggest caveat for UK readers is yet another Brexiter own-goal that will see the UK giving up control, not taking it back, as and when it leaves the EU. Put simply, while inside the EU, national legislation such as the Investigatory Powers Act 2016 – the “snooper’s charter” pushed through by then-home secretary Theresa May – may be challenged in the European courts, but that doesn’t prevent UK companies moving personal data around the EU.
However, if the UK is outside the EU (or EEA), it will need the equivalent of the USA’s Privacy Shield* – a declaration from the European Commission that the UK’s data protection regime matches EU expectations. In other words, if you are an EU member the Commission can’t examine your laws under GDPR and declare them inadequate, but if you are a ‘third country’ it can.
So if Prime Minister May manages to get the UK out of the jurisdiction of the European courts – which frustrated her ambitions more than once as home secretary – the UK could actually end up with even less room for manoeuvre in the area of data privacy than it has now.
Recognising the importance of data governance
On a wider and more positive note, speakers in the debate noted that GDPR has been a boon for CIOs. It has made data protection and the value of data much more visible at board level – which is of course where the legal responsibility for data privacy sits. In doing so, GDPR has also brought a stronger recognition of the need for data governance.
That means there has been more budget and a greater impetus to get data protection and privacy sorted out from a process, policy and technology perspective. As one speaker put it, GDPR has been a catalyst for a lot of things we should have been doing anyway! Another added that it has empowered some of those responsible for data governance to rein in and cull projects which had run out of control.
And it’s not just European CIOs and data subjects who are benefiting. Increasingly, GDPR is seen world-wide as the ‘gold standard’ for data protection, with similar regulations appearing in other jurisdictions. Multi-national companies are choosing to implement GDPR world-wide too, with Microsoft the highest profile example. For these companies, not only does this demonstrate good faith, it also simplifies their data governance regimes.
Consent may be less important than you think
On the other hand, there are still far too many people who don’t understand that consent is only one of the possible legal grounds for processing personal data under GDPR, and that in many cases, consent may not be the best one to use. There’s a need too for more and better analytics to help organisations understand the data they collect – what’s sensitive, what’s important, what they can safely discard, and of course which bits they should never have collected in the first place.
For now though, GDPR has made individuals more aware of their data and their rights, it has made privacy easier to define, and perhaps most importantly, it has ensured that data and privacy are seen as strategic issues within organisations. Overall then, the general feeling was that, even with elements awaiting clarification, GDPR has been a success.
*Though this too has been challenged as inadequate in both the European courts and in the European Parliament.
My thanks to fellow GDPR-watchers Renzo Marchini, a partner at law firm Fieldfisher who specialises in privacy and security; Joe Garber who heads the governance product group at Micro Focus; Horizon CIO Network host Mark Chillingworth; Johan Dreher, sales engineering director at Mimecast; and Alex McDonald, who chaired the debate and is both a director at SNIA Europe and an industry evangelist for NetApp.
You can read more research and insight about GDPR on our website here.