Write side up - by Freeform Dynamics


January 20, 2020  4:13 PM

Consent and GDPR: at least get the basics right!

Bryan Betts
Compliance, GDPR, meaningful consent, regulatory compliance

Why do companies find it so hard to get their heads around even the basics of GDPR compliance? I’m not even thinking here of techie stuff like not getting hacked, and not losing a laptop full of unencrypted customer data.

By comparison, informed consent is simple. I mean, it’s not exactly rocket science, is it? Just like in real life, if someone freely and soberly says Yes, you’re OK. If they don’t, you’re not.

There are even clear explanations of what the various Articles of the GDPR mean and how to interpret them. They’re called Recitals and they’re quite widely available online. For example, the UK Information Commissioner’s website offers a simple document with the Articles and their Recitals grouped together. Third-party websites such as GDPR-info.eu and Privacy-Regulation.eu provide the texts reformatted and linked, so you can easily click from one bit to the other.

All you need to do is jump from Article 7 “Conditions for consent” to its associated Recital 32, and there it is:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement… This could include ticking a box when visiting an internet website… Silence, pre-ticked boxes or inactivity should not therefore constitute consent.

Affirmative and informed

So why am I still finding companies – both UK and EU27 – that think it’s acceptable to require website visitors or customers to “Please tick here if you do not wish to receive marketing information”? Which bit of ‘affirmative’ do they not understand?

One of them even prefaced its “If you do not wish” consent-grab with “Personal data supplied is subject to the Regulation (EU) 2016/679.” Yes, you guessed it – that’s the GDPR’s formal title.
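To show just how basic this is, here is a minimal Python sketch of what an affirmative, recorded opt-in could look like. It is purely illustrative – the form field names are hypothetical and real consent handling needs proper legal review – but it captures the Recital 32 tests: no pre-ticked boxes, no inferring consent from silence.

    from datetime import datetime, timezone

    class ConsentError(ValueError):
        """Raised when a submission cannot count as GDPR-style consent."""

    def record_marketing_consent(form: dict) -> dict:
        """Accept consent only as a clear affirmative act (Recital 32): silence,
        pre-ticked boxes or inactivity must not count. Field names are hypothetical."""
        # The box must exist and have been actively ticked by the user.
        if not form.get("marketing_opt_in", False):
            raise ConsentError("No affirmative opt-in given - treat as a refusal.")

        # A box rendered pre-ticked cannot produce valid consent, even if left ticked.
        if form.get("opt_in_was_preticked", True):
            raise ConsentError("Pre-ticked box - consent was not freely given.")

        # Keep evidence of what the user agreed to, and when.
        return {
            "purpose": "marketing email",
            "wording_shown": form.get("consent_text", ""),
            "given_at": datetime.now(timezone.utc).isoformat(),
        }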

Look, as I said above and as I’ve written several times before, this stuff doesn’t have to be hard. If you get the organisational mindset right, GDPR compliance can even help improve your data security and information governance.

Heck, if you do your research, you might even discover that by concentrating on consent you risk shooting yourself in the metaphorical foot.

PS. None of this will go away after Brexit. In fact, it will most likely get worse, because the European Commission can examine a non-member’s data protection laws and declare them inadequate if they don’t match up to the GDPR. Inadequacy means no more cross-border transfers of personal data without further safeguards.

January 17, 2020  2:29 PM

Business AI needs to focus on psychology, not technology

Bryan Betts

There’s no doubt now that AI works, and it can work very well indeed – as a technology, at least. And as one of our recent research projects reported, AI-based machine learning (ML) is increasingly being used to automate (or part-automate) complex processes such as logistics planning and IT network monitoring. It is even being pre-integrated into industrial machinery and the like.

These applications are typically fairly narrow though, and while they can be transformational within their specific niches, they affect relatively few users. Of course, ML of this kind still needs domain expertise and analysis – to ensure we’re not automating faulty assumptions, for example – but it tends to be designed to assist or enhance an existing role or process.

Where it gets interesting – and not a little worrying, too – is when AI moves into the wider world of business transformation. It’s important because for many organisations, applying AI to general business workflows and processes could represent a new stage in the process of digital transformation, and one that has the potential to be disruptive in all the wrong ways if it’s not managed well.

The desire to promote AI-driven business transformation is a large part of why the big IT vendors are paying a lot more attention now to the human domain – to the ‘softer’ sciences and skills, such as business psychology and change management.

Successful business AI needs trust and transformation

By their very nature, broader AI solutions need to be scaled out across an organisation in order to be effective, and that in turn requires transformation and culture change. The AI needs to be integrated within the business culture – and as part of that, it needs to be trusted by the people who must work with it and will be affected by it. That of course includes staff trusting it to help them work better and smarter, and to do more with less, not simply to put them out of a job!

This is why the thought-leaders and IT suppliers that understand this area best are talking more and more of concepts such as AI ethics, human-centric AI, and ‘AI as human augmentation’. Those concepts are taking a long time to trickle down into end-user organisations, however.

For example, at its Future Decoded event in London last year, Microsoft presented findings from a study by researchers at London’s Goldsmiths College on the state of AI in the UK. Among the results highlighted were that 96% of people said they had never been consulted by their boss on the introduction of AI, and that, conversely, 83% of business leaders said their employees had never asked about AI.

You might ask why this should matter, when it is hardly routine for bosses to consult their staff about introducing process automation or workflow management. Quite simply, it matters because of that transformational and cultural aspect. We’ve written before about the importance of the human factor to the success – or otherwise – of workplace transformation, and AI makes it even more important.

How do you learn what to do when AI gets it wrong?

After all, without those consultations, how can an organisation scale up its use of AI beyond niche and pilot projects? How can it be confident about spotting and eliminating the kind of AI bias that’s been in the news in recent months, and which has the potential to completely derail an AI project? How can it acquire and develop the skills not just to implement AI, but to know what to do when AI gets it wrong?

And, routing back to those soft skills, how can the organisation scale AI without getting its people on-board – without getting them to understand and accept what integrating AI into their working practices will need, and of course how they will benefit from it?

This kind of thing – how to get buy-in for change, and the need for change to benefit the individuals affected by it, as well as the organisation’s bottom line – is well understood in business change and re-engineering circles. The challenge now in AI is for the technology to take a back seat. It’s time to focus instead on how to integrate it into business strategy and culture – and yes, into business governance, ethics and responsibility.


January 10, 2020  1:23 PM

Edge computing could fail, if you only pay attention to the tail

Dale Vile
architecture, Digital transformation, Edge computing, Industrial IoT, Infrastructure, IT operations

In its obsession with software development, all too often at the expense of other disciplines, the IT industry has let the tail wag the proverbial dog for more than ten years. Now, with the rise of edge computing – where your architectural and operational decisions will be absolutely key to your success – it risks losing the dog altogether.

It’s true that Steve Ballmer’s on-stage chant of “Developers, Developers, Developers, Developers,” back in 2000 at Microsoft’s 25th anniversary event, made absolute sense at the time. His performance became a huge Internet meme that sparked IT vendors across the board into declaring deep love for the software development community.

However, it’s brought us to a present-day where the typical DevOps discussion places a huge emphasis on the ‘Dev’ piece and relatively little on ‘Ops’. Yet when it comes to the bottom line, it’s Ops that generates the revenues, while Dev is in many ways just a cost!

Fortunately, we do seem to be regaining some balance, with the disciplines of architecture and operations coming back in focus across the industry. It’s now pretty well accepted that success with digital transformation is about much more than the development of good-looking web and mobile apps.

Systems, old and new, need to be properly integrated and work end-to-end in a secure, robust and efficient manner. Customer satisfaction can be hit hard if the back-end doesn’t meet expectations created by the front-end. Disjointed systems also constrain how fast you can respond to new and changing needs and opportunities.

Ops matters more than ever, as computing moves to the edge

All of this means that Ops really matters, and this is even more true as we move into edge computing. This is an area in which you’re going to struggle or fail if you take a developer-led approach, treating the platform, integration and operational considerations as secondary discussions that can take place downstream.

The truth is that the most volatile part of the edge computing equation is what takes place at the edge itself. Devices, sensors, machine learning models and other software components will come and go over time. Making architectural decisions based on the current mix of technology and applications at the edge is simply not sustainable.

In order to implement a relatively future-proof edge computing environment, you have to start with sound architectural design. This not only means thinking through the dependencies and flows across the various tiers from the edge to the cloud, but also considering how provisioning, configuration, monitoring and administration of systems and components will be handled on an ongoing basis. The phrase ‘design for operations’ sums up the mindset to adopt here.
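By way of illustration only – the field names below are hypothetical, not a reference design – the ‘design for operations’ mindset means an edge node’s operational concerns (provisioning source, configuration version, monitoring endpoint, human owners) are captured in the design alongside its functional role, so Day Two questions have answers before anything is deployed. A minimal Python sketch:

    from dataclasses import dataclass, field

    @dataclass
    class EdgeNodeSpec:
        """Hypothetical spec for one edge node: operational concerns sit alongside
        the functional role, so provisioning, configuration and monitoring are
        decided at design time rather than bolted on later."""
        node_id: str
        site: str                      # factory, office, power station, pipeline, vehicle...
        role: str                      # e.g. "vibration-monitoring"
        ml_model_version: str          # models and software components will come and go
        config_version: str            # what the node should be running
        provisioning_source: str       # where images and configs are pulled from
        metrics_endpoint: str          # where the node reports its health
        owners: list[str] = field(default_factory=list)  # IT and engineering contacts

    def config_drift(spec: EdgeNodeSpec, reported_version: str) -> bool:
        """Day Two check: is the node running the configuration it was designed for?"""
        return reported_version != spec.config_version

    node = EdgeNodeSpec(
        node_id="edge-042", site="plant-7", role="vibration-monitoring",
        ml_model_version="2.3", config_version="2020.01",
        provisioning_source="https://ops.example.internal/images",
        metrics_endpoint="https://ops.example.internal/metrics/edge-042",
        owners=["ops@example.com", "plant-engineering@example.com"],
    )
    print(config_drift(node, reported_version="2019.11"))  # True -> flag for remediation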

Building on this concept, our own observations of edge computing and industrial IoT initiatives confirm that the need for end-to-end architectural and operational thinking goes way beyond the IT domain. In an Industrial Digital Transformation context, for example, IT professionals typically need to work with engineers specialising in plant, machinery and other edge-resident equipment, whether that edge is a factory, office building, power station, oil pipeline or smart vehicle.

Against this background, the most critical question you need to address before starting any edge computing initiative is who needs to be involved.


December 18, 2019  5:32 PM

Today’s hybrid cloud looks more closed than open

Bryan Betts
Cloud orchestration, DataCenter, Hybrid cloud, Kubernetes, Multi-cloud

We’ve had convergence towards hybrid clouds and ‘universal’ applications, but now we are seeing divergence towards different cloud platforms, each offering its own flavour of interoperability. What’s going on?

I’ve seen the hybrid multi-cloud, and Kubernetes or not, in some ways it’s almost as closed a platform as the traditional IT and proprietary public cloud models that it seeks to supplant.

Of course it can still offer significant advantages for anyone used to traditional IT. Develop the same containerised or serverless app once and run it in a private cloud on-site, on systems hosted at a co-location centre, on a private cloud hosted by a cloud provider, or in a public cloud. Interoperate between them, move tasks and microservices around for workload or cost balancing, and so on.
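As a toy illustration of that portability – assuming you have standardised on Kubernetes, and with placeholder cluster context names – the same containerised app definition can be pushed unchanged to an on-site cluster and a public cloud cluster simply by switching kubeconfig contexts, here via the official Kubernetes Python client:

    from kubernetes import client, config

    # One containerised app definition...
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "demo-app"},
        "spec": {
            "replicas": 2,
            "selector": {"matchLabels": {"app": "demo-app"}},
            "template": {
                "metadata": {"labels": {"app": "demo-app"}},
                "spec": {"containers": [{"name": "web", "image": "nginx:1.17"}]},
            },
        },
    }

    # ...pushed unchanged to an on-site cluster and a public cloud cluster,
    # just by switching kubeconfig contexts (the context names are placeholders).
    for context in ["onprem-cluster", "public-cloud-cluster"]:
        apps = client.AppsV1Api(config.new_client_from_config(context=context))
        apps.create_namespaced_deployment(namespace="default", body=deployment)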

The big difference versus traditional architectures and locked-in clouds is where you have to make your proprietary choices. With hybrid multi-cloud you can have pretty much whichever hardware, DevOps tools and public clouds you want, for example. Where you need to make a commitment is between those, in the middleware layer that glues it all together and converts your Dev expenses into lovely Ops revenue.

Day Two is when ‘seamless’ really starts to matter

That’s because you can only get all that seamless hybrid goodness – for now, at least – by running the same Day Two orchestration and management tools everywhere. That in turn means you need to pick a single ‘multi-cloud management layer’ for all your clouds, public and private.

In effect, you need the same ‘hybrid operations platform’ everywhere, be it from VMware, Nutanix or Red Hat – and you can slot Microsoft Azure, Google Anthos and others in here as platform options, too.

This proprietary-hybrid model might well change in the future if relevant new standards appear quickly enough, and it’s already an improvement on the past, certainly as far as Kubernetes containers and pods are concerned. We don’t get write-once run-anywhere, because sideways portability between different hybrid platforms isn’t always straightforward, just as it isn’t straightforward between the major public clouds. It can still be very useful though, and perhaps we can call it “write-once run-manywhere” instead.

Have you hit hurdles with hybrid interoperability? Or do you disagree it’s a problem at all? Tell me in the comments below.


November 20, 2019  9:56 AM

Are your message threads tight or Slack?

Bryan Betts
conferencing, discussion, Enterprise messaging, Threads, User interface design

Why doesn’t Slack want us to use threaded replies to messages? At least, that’s the impression I come away with from having used at least five Slack workspaces, some for business collaboration and others for consumer-grade discussions.

Slack’s had the capability to show a message and its replies as a connected thread for some time now. Once your conversations are threaded, there’s a handy option to see only those threads you’re actually participating in. 

But the option to “add a reply to this message” is hidden away in a pop-up menu, with the result that many users simply post everything as a new message. That’s fine in a one-to-one text message conversation, but not in a group chat. Other frequent Slack users will be familiar with the result, which is the confusing experience of trying to work out who is replying to what – and of missing new replies because they weren’t posted as such, so you never got the relevant notification. 
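The threading mechanism itself is straightforward, at least at the API level: a reply is just a message posted with the parent’s timestamp as its thread_ts. Here is a rough sketch using the slack_sdk Python client (the token and channel ID are placeholders) – and it also shows how easy it is not to thread:

    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # placeholder token
    channel = "C0123456789"                                  # placeholder channel ID

    # Post a parent message...
    parent = client.chat_postMessage(channel=channel, text="Release 2.4 goes out at 5pm")

    # ...then reply *in the thread* by passing the parent's timestamp as thread_ts.
    # Leave thread_ts out and the reply lands in the channel as an unrelated message -
    # exactly the disconnected-comments problem described above.
    client.chat_postMessage(
        channel=channel,
        text="Heads-up: the database migration runs first.",
        thread_ts=parent["ts"],
    )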

You’d think the mobile app might be better, but no, the reply option is only obvious once there’s already a reply to a message. So, once again you are just as likely to get disconnected comments with no notification to alert you as to their existence. That’s unless you deliberately reply to yourself first, to remind other people! 

Threading isn’t exactly new – or rocket science

Google Chat and Microsoft Teams do it better – the ‘reply’ link is permanently visible and they clearly separate conversations or threads from each other. They just don’t feel as open as Slack, though, and it is still possible for those brought up on text exchanges to opt instead for the large and inviting new-message box, whose prominence just begs you to type into it.

Even when your fellows have worked out how to thread their replies, what happens when a comment spawns a second, tangential, conversation? Proper discussion systems, such as Reddit, are able to handle multiple branched and nested levels of replies-to-replies, as can email programs like Mozilla Thunderbird. Indeed, online conferencing systems, such as the venerable British CiX service, have been able to track conversations with many forks and sub-branches literally for decades (the CiX codebase derives from CoSy, which went online in 1983).

We’re often told that consumerisation also drives enterprise software these days, but Chat, Slack and Teams all assume that a thread only contains one conversation. Even Facebook now supports two levels of reply, and it makes them reasonably obvious – although yes, of course there are still people who fail to use second-level replies, for whatever reason. And WhatsApp lets you quote earlier messages, so others in a chat group can see what you’re commenting on.

So when this kind of thing has been around for 30+ years, and when it can be made simple enough for consumers to use, why are the top business collaboration applications so inadequate? 


August 22, 2019  1:35 PM

DevOps needs more perspective and humility

Dale Vile
DevOps, enterprise applications, IT operations, Service delivery, Software development

DevOps started as a grass-roots movement led by practitioners. The motives were pure, and focused simply on finding better ways of doing things for the good of everyone involved in software delivery.

In recent years, though, DevOps has become increasingly commercialised. Software vendors and service providers have ridden the wave of interest to sell tools and consulting. Events companies have tapped into community enthusiasm to attract sponsors and sell conference tickets.

None of this is inherently good or bad, but the increased involvement of evangelists and marketing professionals has inevitably led to the emergence of repeatable narratives that are, well, constantly repeated.

One of these revolves around the notion that savvy developers have been adopting modern methods, then pushing ideas and new ways of working downstream. Devs are the heroes of this story, with conservative, protectionist and paranoid operations teams portrayed as being dragged kicking and screaming into the 21st century. 

Ops as a ‘necessary evil’?

This narrative paints Ops almost as a necessary evil, only there to keep the lights on and to deploy the goodness created by Devs. Some even weave a ‘NoOps’ sub-plot into the story, suggesting that advances in cloud and serverless will render Ops professionals redundant in the not-too-distant future.

I won’t try to lay out all of the ways in which this narrative is both wildly inaccurate and potentially insulting to a group of professionals with a lot of practical experience and collective purchasing power (Ops budgets generally dwarf those of Devs). For a full discussion of this, I would encourage you to download our report IT Ops as a Digital Business Enabler.

Suffice it to say that there’s no shortage of thought-leadership and innovation in the operations domain: new architectures and service-delivery models, transformation around hybrid/multi-cloud, turning risk management into a value-creating discipline, and so on. Pull this together with everything else going on in relation to business application suites, cybersecurity, advanced comms and collaboration, information lifecycle management, etc, and it soon becomes clear that deploying and running software from in-house development teams is usually a relatively small part of the Enterprise Ops role.

The world really doesn’t revolve around developers

With this in mind, the Dev-centric narrative that we repeatedly hear from DevOps tools and platform suppliers leads at best to much eye-rolling outside of the Dev bubble. And at worst, it runs the risk of alienating some important constituencies. Whether the narrative is propagated consciously or unconsciously is secondary; more important is the fact that it actually gets in the way of achieving the overall objective, which is to get everyone working together more smoothly and harmoniously across traditional boundaries.

Taking on board different perspectives and maintaining balance is even more important when we consider DevOps in the broader digital transformation context. The introduction of new ideas and alternative ways of thinking and working is not unique to the world of software. Just as DevOps originally drew on advanced manufacturing principles, software delivery teams could learn a lot from looking at other disciplines across the business. 

I remember interviewing business and finance people about 10 years ago on their understanding of service oriented architecture (SOA). While the language used was different, the principles of separating concerns and the concept of functional units working together around agreed contracts and communication protocols were very familiar. Some were amazed, in fact, that IT systems weren’t designed to work that way already – “We’ve had shared services for years!”

Just because it’s software, that doesn’t mean the ideas are new!

And so it often is when you get into conversations around Agile, DevOps, fail-fast, etc with product managers, creatives, logistics specialists, campaign planners, R&D teams and more progressive finance professionals. Once you get past the differences in vocabulary, you often find you’re talking about the same or similar ideas, and that in many cases software teams are playing catch-up.

Against this background, it’s important that as we try to integrate modern application and service delivery into the broader business, we recognise that a lot of ‘stakeholders’ might actually be more advanced in their thinking than we are in the world of software. Sure, there’s a software aspect to most significant business initiatives nowadays, but there are also typically infrastructure, logistics, creative, commercial and risk management dimensions that are equally important and require a similar level of innovation and agility.

If we zoom out to the bigger picture, software is clearly only part of the digital transformation mix. When you then consider what’s available today in terms of off-the-shelf applications and SaaS services, it also becomes clear that in-house development is in turn only part of the software discussion. 

If we keep this in mind, maintain a sense of perspective, and communicate with humility, increasing the reach and impact of modern software delivery methods could become so much easier.


July 12, 2019  12:32 PM

Kubernetes catches up with operational reality

Bryan Betts
Application containerization, Cloud Security, Kubernetes

With Kubernetes now established in many organisations as the container orchestration platform of the future, are cracks already starting to show? Well, not exactly – but if I could pull out one common theme from what’s hot in the world of cloud-native, it’s the dawning light of Day Two.

That’s a shorthand term used for when this stuff goes live – the Ops end of DevOps, if you like. Going live was only to be expected, yet as so often with these things, it seems to have caught some early-adopters on the hop. The risks? Either you deploy containers into production without all the underpinnings needed to keep them fit and safe out in the wild, and they get exploited, or your high-profile project is embarrassingly terminated before launch due to its security failings.

Building for Day Two

However, Kubernetes – and cloud-native more broadly – is one of the fastest-growing areas of the whole IT industry. The result is a frenzy of activity as open-source, freemium and commercial projects jump in to build that necessary Day Two infrastructure. Here are some of the key areas of development to watch, and a few of the vendors and projects we’ve looked at recently:

Security: The ‘shift left’ that builds container security into the dev pipeline is essential – restricting container privileges and API access to the minimum needed, for example, using only trusted open-source components, keeping containers simple, and so on – but there’s more. So among other things, you should explore vulnerability and misconfiguration scanners (e.g. Aqua Security), container hardening and compliance (e.g. Twistlock, now being acquired by Palo Alto Networks), and tools to help deliver a secure Kubernetes service (e.g. Rancher Labs). There’s a minimal sketch of this kind of check after the list.

Storage: Containers are inherently stateless, and it’s safer if you make them immutable too. That means some careful thinking is needed when you containerise a stateful application and give it persistent storage to consume (we’ve a paper on this topic here). Once you’ve abstracted the physical storage as software-defined storage (SDS) using the likes of Ceph or Robin, which can of course themselves be deployed as stateful apps on Kubernetes, you then need to orchestrate and automate it, which is where the popular Rook project comes in.

Data management: As mentioned, the container may be stateless but if the running application is stateful then you still have data to manage, protect and migrate. Most traditional data management tools aren’t well suited to protecting containers, although their suppliers are working to catch up. In the meantime, new ideas and tools are emerging, such as Kasten for stateful app backup and migration, and the highly-resilient distributed database CockroachDB.

Governance and cost management: As the use of Kubernetes grows, so does the risk of container sprawl. So we see projects such as Replex.io, aimed at keeping an inventory, rightsizing containers and managing costs, and Razee from IBM, which can enforce profiles and rules on containers, clusters and clouds – you could call it continuous delivery and auto-updating for Kubernetes clusters.
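Picking up that security point, here is the kind of minimal least-privilege check a Day Two review might want to automate, sketched with the official Kubernetes Python client. It is an illustration only – it flags containers that run privileged or without runAsNonRoot enforced – and no substitute for the scanners mentioned above.

    from kubernetes import client, config

    def risky_containers():
        """Rough least-privilege check: flag containers that run privileged or
        without runAsNonRoot enforced. Not a replacement for a proper scanner."""
        config.load_kube_config()   # or load_incluster_config() when running in a pod
        v1 = client.CoreV1Api()
        findings = []
        for pod in v1.list_pod_for_all_namespaces().items:
            for c in pod.spec.containers:
                sc = c.security_context
                if sc and sc.privileged:
                    findings.append((pod.metadata.namespace, pod.metadata.name, c.name, "privileged"))
                elif not (sc and sc.run_as_non_root):
                    findings.append((pod.metadata.namespace, pod.metadata.name, c.name, "may run as root"))
        return findings

    if __name__ == "__main__":
        for namespace, pod_name, container, issue in risky_containers():
            print(f"{namespace}/{pod_name}/{container}: {issue}")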

Runtimes and meshes

There’s lots more going on though, including new container runtime engines such as CRI-O and Containerd. Their aim, as Red Hat’s Urvashi Mohnani, speaking at Kubecon Barcelona, said of CRI-O, is “to make running containers in production secure & boring.”

And then there’s service mesh architecture, where the emerging duopoly between Istio and Linkerd has recently been disrupted by the arrival of the Service Mesh Interface, an interoperability project led by Microsoft but also including service mesh developers such as Buoyant, Solo.io and VMware.

What do you think – are your Kubernetes projects ready for Day Two? What’s missing, if anything, or what more might be needed? Tell us in the comments below…


June 21, 2019  10:01 AM

Mea culpa: Trendy isn’t the same as important

Dale Vile
Clustered storage, De-duplication, file sharing, Fragmentation, Multi-cloud

I hate it when you realise that your point of reference has been unduly influenced by what’s currently trending rather than what’s really important. A recent briefing with Jim Liddle of Storage Made Easy (SME) highlighted how easy it is to fall into this trap.

It was my first ever briefing with SME, so as usual, I was listening for things to help me position the company and its solution on my mental map of the world. When I heard about SME’s ‘File Fabric’ – essentially a solution to enable federated access, search and discovery across a disparate set of storage resources – the phrase ‘multi-cloud storage’ immediately popped into my head, along with the potential benefits in an application delivery context. This was undoubtedly because it’s almost impossible to have a conversation with an IT vendor at the moment without either multi-cloud or DevOps coming up. Together with AI, they are examples of those trending terms that you are obliged to work into any presentation if you work in tech marketing.

Fix the basics first

As the discussion with SME developed, though, while the trendy use cases were certainly acknowledged, some more fundamental requirements came through more prominently. I was reminded – duh! – that most of an organisation’s electronic documents and files are still typically spread (and often duplicated) across a large number of on-prem file servers and other repositories, and that this still represents a significant headache for IT teams. We also talked about the pain and cost of dealing with business basics such as the need to move increasingly-large files or data sets around their corporate network. And while the GDPR panic is over and compliance has become boring again, the truth is that few have this sorted across their organisation with any level of efficiency and consistency.
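The unglamorous version of that duplication problem is easy to picture with a toy Python sketch – hash everything under a directory tree and group identical files. The path below is a placeholder, and a real federated solution obviously has to do this at scale across many servers and repositories, but the basic headache is exactly this:

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def find_duplicates(root: str) -> dict[str, list[Path]]:
        """Toy duplicate finder: group the files under `root` by content hash."""
        by_hash: dict[str, list[Path]] = defaultdict(list)
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                by_hash[digest].append(path)
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    # Example: scan a (placeholder) departmental file share
    for digest, copies in find_duplicates("/mnt/finance-share").items():
        print(f"{len(copies)} copies: {[str(p) for p in copies]}")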

I knew these things, of course – I really did – but was guilty of being seduced into focusing on the more glamorous cloud and DevOps angles because these are what everyone in ‘the industry’ wants to talk about.

The good news is that while some vendors seem to be obsessed solely with these hot, trending topics, others, like Storage Made Easy, haven’t lost sight of the fact that you need to address the mundane as well as the sexy – and so much the better if you can do that in a single solution. Indeed, solving those tedious but important problems often provides a better case for investment than focusing on speculative benefits in more exciting areas of activity.

With this in mind, it’s interesting to consider how this picture of data fragmentation captured six years ago has changed in the interim. It’s almost certainly become even worse in most organisations. So from now on, in this and other areas, I am going to be even stricter than usual when it comes to keeping my analysis balanced across all the needs that matter.


June 7, 2019  1:55 PM

Threat vs trust: the overlooked power of words

Bryan Betts
Data anonymization, Information security, Insider threat, malicious attack, Psychology

Words have power – in particular, your choice of words modifies how your readers or listeners react to what you write or say. Politicians do this sort of thing all the time, swapping one word or phrase for another to promote their own version of a narrative or boost their audience’s prejudices. Sometimes the ‘loaded word’ or phrase is obvious, other times it is more subtle.

And of course we do it in business as well, as I was reminded this week at InfoSec in London. It was while speaking with Chris Bush of ObserveIT, one of several companies at the show offering to help safeguard against ‘insider threats’. This was more of a niche term, used mainly among specialists. It describes all the things users might get wrong, including accidents, careless behaviour and so on.

However, it is increasingly being used openly now, and with a focus on the malicious subset of activity that lawyers and others know as ‘employee malfeasance’. Purely as a phrase, it is accurate – there are threats, and some come via insiders. But how does using the term ‘insider threats’ modify your assumptions about, and attitudes to, your colleagues?

You say one thing, what will others hear?

Words don’t come much more laden than ‘threat’. It’s a great one to use if you want to impress your CSO and other board members with the importance of data access controls, compliance and activity monitoring, cybersecurity training and so on. But what does it do to employee morale to know you’re a threat, rather than an asset? How does it affect the trust relationship that needs to exist in an organisation?

Of course, most of us welcome technology that brings reassurance and helps us avoid the errors that are the vast bulk of the traditional ‘insider threat’. None of us wants to be the one who inadvertently shared a secret directory, deleted a key database or emailed a confidential file to the wrong person. Safeguarding software will spend most of its time helping prevent things like this.

Chris added that within the ObserveIT toolkit there are also features to reassure staff about their privacy. For instance, the usage and auditing data is anonymised by default, and it is the metadata that is collected, not the actual content. That’s a good start*, but as we went on to discuss, what’s really needed is a more holistic approach.
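To illustrate the metadata-not-content idea in general terms – this is a generic sketch, not a description of how ObserveIT works – an audit event can be reduced to pseudonymised metadata before it is stored, so patterns of activity stay visible without exposing names, file paths or content:

    import hashlib
    import hmac
    import os
    from datetime import datetime, timezone

    PSEUDONYM_KEY = os.environ.get("AUDIT_PSEUDONYM_KEY", "change-me").encode()

    def pseudonym(value: str) -> str:
        """Stable pseudonym: the same input always maps to the same token, but it
        can't be reversed without the key (which stays away from the analysts)."""
        return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    def audit_event(user_id: str, action: str, target_path: str) -> dict:
        """Record the metadata of what happened, not the content involved."""
        return {
            "actor": pseudonym(user_id),        # who, pseudonymised
            "action": action,                   # e.g. "file_emailed_externally"
            "target": pseudonym(target_path),   # a stable token, not the path or contents
            "at": datetime.now(timezone.utc).isoformat(),
        }

    print(audit_event("alice@example.com", "file_emailed_externally", "/finance/q3-forecast.xlsx"))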

Security is people, people are security

That’s because once we get onto ‘malicious insiders’ we’re verging into human resources and business psychology territory. As Chris noted, one of the major differences between insiders and outsiders is the variety of motivations at work. Whether it’s someone who was sacked, passed over for promotion, denied a pay rise or whatever, attacking your IT systems is just one way for a ‘wronged’ staffer to get revenge.

This matters because it means that mitigating the risk is at least as much a business management issue as it is an information security one. If someone’s in trouble, whether it’s stress, workplace bullying or other problems, the best fix will be to intervene and help them early. Similarly, if an insider incident does happen, HR needs to be involved in the investigation, in picking up the pieces, and in working to prevent it happening again.

So first, words matter – let’s stop talking about people as threats and start talking about their activity (this is classic psychology – criticise the behaviour, not the person). And second, if you’re in either IT security or HR, what are you doing to build bridges? Let me know please in the comments below.

*From the perspective of intelligence-gathering, threat detection and forensics, the metadata is actually far more useful and valuable than the content. For most insider communications, though, collecting metadata is likely to be a lot less intrusive than collecting the content itself.


May 24, 2019  9:14 AM

Beware the Application Complexity Monster

Dale Vile
application, Application migration, Enterprise IT, IT suppliers, Office365, SaaS applications, salesforce

How many new applications, tools, devices, and cloud services have you introduced into your environment over the past few years? Now think about how much you got rid of over that same period, i.e. how many things you decommissioned. Not such a big number, eh?

This is just one example of how complexity creeps up on us. We accumulate more stuff over time – sometimes for good reason, sometimes because it’s easier to add something new and live with the technical debt than disturb what’s already in place. Either way, it all needs to be integrated and managed, and the larger and more complex your environment gets, the more difficult this becomes. It also makes it harder to keep up with change and enhancement requests and more generally deliver a good user experience.

But these principles don’t just apply to enterprise IT environments: software vendors and cloud service providers are subject to them too. If your application suppliers, for example, don’t manage complexity, redundancy and technical debt, their overheads increase. That can mean fewer resources available for R&D and customer support, and possibly even higher prices. Not good for customers, who are also likely to experience more failures, inconsistencies and arbitrary constraints, including things not working as you would expect or in a seemingly illogical or overly-complicated manner.

Software that has been around for a long time is more prone to such issues, especially if the vendor has experienced rapid growth, has made significant acquisitions, and/or has had to pivot because of changing markets. The danger is always that in the rush to market, new functionality is layered on the old, and there’s never the time or money to properly rebuild, dedupe and harmonise.

Complexity’s clues

One giveaway is an admin console that dictates different ways to do essentially the same things depending on which part of the system you are managing. Another is being forced to bounce around a complicated menu structure and make changes in several places in order to achieve something quite simple. If you find yourself muttering “It shouldn’t be this hard”, your pain is very likely a legacy of historical thinking, short-cutting and compromises.

If you administer systems such as Salesforce.com or Office 365 you’ll know how it is. It takes significant training and/or a lot of trial and error to understand the complexities, inconsistencies and dependencies, and ultimately tame such environments. Sure, it keeps IT professionals in work, but most would prefer to spend their time on more useful and rewarding activities.

To be fair to the vendors, established customers are often displeased when platform and architecture changes force costly and risky migrations. However, both customers and suppliers can only put off such changes for so long before it becomes very difficult to move on.

Taming the monster

The lesson is to consider complexity when making buying decisions or reviewing existing systems. Complexity isn’t bad per se – it’s often needed to deal with more demanding requirements. It makes sense though to stop, think, and beware of the kind of complexity that causes cost and constraint while adding no real value.

For a deeper discussion of what to consider in relation to solution complexity when making buying decisions, download our paper entitled “The Application Complexity Monster”.

