Write side up - by Freeform Dynamics


December 20, 2018  2:47 PM

Six months on, GDPR is a qualified success

Bryan Betts
Data protection, EU, GDPR, meaningful consent, privacy

Anyone hoping that we would by now have greater clarity on GDPR will have been sadly disappointed. Large swathes of the Regulation have yet to be tested in court, and many of the high-profile fines that you may have read about were actually issued for contraventions of the pre-GDPR rules.

That’s partly because GDPR remains as I and others described it a year ago: descriptive, not prescriptive. Still, when I joined a recent debate on what’s emerged in the months since GDPR Day, we were able to identify several notable clarifications, consequences and caveats.

The biggest caveat for UK readers is yet another Brexiter own-goal that will see the UK giving up control, not taking it back, as and when it leaves the EU. Put simply, while inside the EU, national legislation such as the Investigatory Powers Act 2016 – the “snooper’s charter” pushed through by then-home secretary Theresa May – may be challenged in the European courts, but that doesn’t prevent UK companies moving personal data around the EU.

However, if the UK is outside the EU (or EEA), it will need the equivalent of the USA’s Privacy Shield* – a declaration from the European Commission that the UK’s data protection regime matches EU expectations. In other words, if you are an EU member the Commission can’t examine your laws under GDPR and declare them inadequate, but if you are a ‘third country’ it can.

So if Prime Minister May manages to get the UK out of the jurisdiction of the European courts – which frustrated her ambitions more than once as home secretary – the UK could actually end up with even less room for manoeuvre in the area of data privacy than it has now.

Recognising the importance of data governance

On a wider and more positive note, speakers in the debate noted that GDPR has been a boon for CIOs. It has made data protection and the value of data much more visible at board level – which is of course where the legal responsibility for data privacy sits. In doing so, GDPR has also brought a stronger recognition of the need for data governance.

That means there has been more budget and a greater impetus to get data protection and privacy sorted out from a process, policy and technology perspective. As one speaker put it, GDPR has been a catalyst for a lot of things we should have been doing anyway! Another added that it has empowered some of those responsible for data governance to rein in and cull projects which had run out of control.

And it’s not just European CIOs and data subjects who are benefiting. Increasingly, GDPR is seen world-wide as the ‘gold standard’ for data protection, with similar regulations appearing in other jurisdictions. Multi-national companies are choosing to implement GDPR world-wide too, with Microsoft the highest profile example. For these companies, not only does this demonstrate good faith, it also simplifies their data governance regimes.

Consent may be less important than you think

On the other hand, there are still far too many people who don’t understand that consent is only one of the possible legal grounds for processing personal data under GDPR, and that in many cases, consent may not be the best one to use. There’s a need too for more and better analytics to help organisations understand the data they collect – what’s sensitive, what’s important, what they can safely discard, and of course which bits they should never have collected in the first place.

For now though, GDPR has made individuals more aware of their data and their rights, it has made privacy easier to define, and perhaps most importantly, it has ensured that data and privacy are seen as strategic issues within organisations. Overall then, the general feeling was that, even with elements awaiting clarification, GDPR has been a success.

*Though this too has been challenged as inadequate in both the European courts and in the European Parliament.

My thanks to fellow GDPR-watchers Renzo Marchini, a partner at law firm Fieldfisher who specialises in privacy and security; Joe Garber, who heads the governance product group at Micro Focus; Horizon CIO Network host Mark Chillingworth; Johan Dreher, sales engineering director at Mimecast; and Alex McDonald, who chaired the debate and is both a director at SNIA Europe and an industry evangelist for NetApp.

You can read more research and insight about GDPR on our website here.

November 23, 2018  5:18 PM

Google Chrome: It’s more than a browser

Richard Edwards
Chromebook, Desktop, Desktop virtualization, Google Chrome

I was talking with some of my peers about the changing landscape of the digital workplace. ‘So, what do you think about Google Chrome Enterprise?’ I asked. Not for the first time, I was met with a couple of blank stares. Maybe Google needs a bit of advice in getting its message out? Let me help.

Google Chrome Enterprise is an operating system (Chrome OS), a browser (Chrome), and a device (Chromebook). There’s plenty of ‘cloudy’ stuff too of course, with the likes of G Suite being the more familiar component, but this is Google setting out its alternative desktop agenda as Microsoft flounders amidst the turbulent waves of forced Windows 10 upgrades and migrations.

Are you a ‘cloud worker’?

So, who’s Google Chrome Enterprise aimed at, I hear you ask? Well, it’s for the ‘cloud worker’ of course. I suspect we’re going to hear this term used quite a bit, so we might as well get used to it. A cloud worker is someone who spends most of their time working with cloud-based, browser-based apps and services.

You’re probably well on your way to becoming a cloud worker yourself if you’re using Office 365, Salesforce, ServiceNow, etc. Indeed, with the growing prevalence of SaaS solutions and cloud delivery models, we’re all going to be cloud workers before long. This will eventually put the term in the same meaningless category as ‘information worker’, but let’s not worry about that for now.

Chrome OS: the optimum thin client

I’ve written about the viability of Chrome OS as a real alternative to Microsoft Windows in previous articles, including a mention of CloudReady from Neverware (a Google-invested firm) if you want to quickly and easily retrofit Chrome OS to your existing PC fleet. But it’s the combination of Chrome OS with desktop and application virtualisation solutions that puts it over the top for me.

Chrome OS devices (Chromebooks, Chromeboxes, Chromebases and converted PCs) present a range of advantages if we consider them as thin clients. They’re relatively mainstream, they’re affordable, they’re manageable, they’re self-maintaining and they’re very capable. They won’t suit every use case, but they do sit very comfortably between Windows PCs and zero clients.

The capabilities of Google Chrome Enterprise come to life when considered alongside partner products, such as Citrix Workspace. Using the Citrix Workspace app for Chrome (formerly known as Citrix Receiver), IT departments can provide instant access to SaaS and web applications, virtualized Windows applications, files, and even legacy desktop environments if needed.

I expect we’ll also see some intriguing options come to market when Microsoft makes its Azure-based Windows Virtual Desktop generally available. The offering promises to deliver a multi-user Windows 10 experience that is optimised for Office 365 ProPlus – and for the subscription revenues that go with it. And I see no reason why those revenues can’t equally well be generated on AWS or Google Cloud Platform.

What does Google Chrome Enterprise mean for your organisation?

Setting aside Chrome OS and Chromebooks for a moment, the Chrome browser itself is designed with enterprise IT administrators in mind. It can be managed using Windows Group Policy and has over 400 policy settings to choose from. Please don’t use them all! You can also configure policies on Mac and Linux computers, of course.
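
To give a flavour of what that management looks like beyond Group Policy, here’s a minimal sketch of a managed policy file for Chrome on Linux, which picks up JSON policies dropped into /etc/opt/chrome/policies/managed/. The file name is arbitrary and the handful of policy names shown is purely illustrative – check the policy list for your Chrome version before relying on any of them:

    {
      "HomepageLocation": "https://intranet.example.com",
      "HomepageIsNewTabPage": false,
      "PasswordManagerEnabled": false,
      "DefaultPopupsSetting": 2,
      "ExtensionInstallBlacklist": ["*"]
    }

Saved as, say, /etc/opt/chrome/policies/managed/corp.json, this sets a homepage, disables the built-in password manager, blocks pop-ups and blocks extension installs. On Windows, the same policies map onto Group Policy settings.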

You’re already on the Google Chrome Enterprise glidepath if you’ve deployed the browser across your enterprise, so you might want to consider which direction this is taking you. There’s no need to panic, but it’s worth considering how this plays into your desktop strategy.

Chrome has come a long way since its introduction in 2008, but the journey might only just be starting. What do you want Chrome to become? Let us know.

Read more from the Freeform Dynamics EUC team here


November 20, 2018  1:28 PM

Don’t ignore the complexity of hybrid multi-cloud

Bryan Betts
Application delivery, cloud, Kubernetes, Multi-cloud, Software development

Many vendors will argue that moving to cloud brings consistency and simplicity, thanks to service-based delivery models, application access via standard Web browsers and so on. It’s rarely true though – at least, not in the real world.

Oh, sure, if you are able to stay within a single cloud ecosystem you can achieve a fair amount of consistency. Or if you’re a start-up choosing to go Web-only to minimise your capital costs, you can indeed make some things simpler.

But for most companies in the real world, cloud makes things more complex, not less. As our research has consistently shown, single ecosystem companies are a minority, with most having five, 10 or even 20-plus ‘cloudy suppliers’, once you add up public cloud, private cloud, hosted, SaaS and so on. That’s on top of all that traditional on-site IT, which shows no signs even of shrinking, never mind disappearing altogether.

I don’t often hear cloudy vendors acknowledging that hybrid complexity though. Yet lots of them now have a strong multi-cloud message, whether they’re coming from the cloud side, like Rackspace and CloudHealth (now part of VMware), or more from the systems side, such as HPE, SUSE, and of course both IBM and Red Hat – acquiring additional multi-cloud capability was stated as a major reason for the Red Hat purchase.

That’s why it was refreshing to see acknowledged complexity as a subtext underpinning several of the keynotes at the recent BMC Exchange customer event in London. The message was that BMC – which has been pushing its multi-cloud management capabilities for some time now – fully understands the challenge of working across multiple hybrid services and wants to help you manage it. Whether it can keep to that message in the long term, only time will tell – but the signs are good, as it’s one of the few software companies with both the technology breadth and depth to make sense of it all.

Cloud, what is it good for? Huh!

Look at the relative immaturity of the public cloud, the shortage of cross-cloud standards, and at how the relationships around key portability-enabling technologies such as Kubernetes are shifting almost week by week. It’s clear that cloud isn’t the panacea or magic bullet that some would have us believe.

Fortunately though, it is very far from being a useless dead-end – it is quite the opposite, in fact. The real benefit of cloud, the benefit that trumps all that multi-cloud complexity, is at least as much conceptual as technical. What cloud does, most importantly, is to change both mindsets and delivery models.

Cloudy services may be like a swan on a swift river – all smooth and graceful on the surface, but lots of frantic paddling underneath – but by appearing accessible and easy-to-use they change people’s attitudes to and expectations of IT. Better still, mechanisms such as APIs and the use of a standard user-interface in the browser make it relatively easy to get software tools from different sources working together.

Indeed, the default assumption now is that they will work together. And that’s important because easy interworking is a big part of what’s liberating innovation and creativity. Software developers don’t think twice now, and nor should they – they turn first to the cloud, whether that’s to source tools, store data or run services.

But pretending it’s as easy to provide those services as it is to consume them would be extremely dangerous. Software development is a highly skilled task, even more so now that secure-by-design and privacy-by-design are becoming the norm, but it’s quite different from the task of building and operating multi-cloud infrastructure. That’s why it’s good to hear companies such as BMC acknowledge the complexity, even as they also advertise the fact that it is all perfectly doable and workable – with the right tools, skills and preparation.

Read more from Freeform Dynamics’ software delivery research here.


November 13, 2018  6:50 PM

Open source is growing up – and here’s how

Bryan Betts
API, Docker, Kubernetes, Open source, OpenStack

If you’re among those who still think that open source is just for hobbyists and academics, think again. Open source is mature now, both as a concept and as tools for building enterprise IT, and we have two major shifts in understanding to thank for that.

The first key change is that there’s a much more mature understanding now of how the layers of IT architecture relate to each other – of what fits where, in other words. Instead of trying to do too much, adding in every feature or capability that might be related, open source projects have become more focused.

For example, instead of misunderstanding them as rivals, we can now see OpenStack and Kubernetes for what they are. The former is an infrastructure layer, upon which you can run platform layers such as the latter.

In parallel with that, open source developers now better understand the need to align their projects with each other. That’s partly driven by users pushing for interoperability – make that easier and you get more users. But it’s also the growing recognition that, as open source usage increases, their project is part of something much bigger – potentially a whole suite of interoperating enterprise-grade software. Open infrastructure, as it’s already being called.

Would you like Kubernetes with that, too?

A good example is the way that hardly a week goes by now without a reference to some new cooperation or interworking with Kubernetes. It was there at the recent Cloud Foundry Summit, it was there when I spoke with Red Hat and SUSE, and it’s here in spades at this week’s OpenStack Summit in Berlin – or the Open Infrastructure Summit, as it’ll be from next year.

What we see is that Kubernetes has won the platform debate. Most major players in the cloud-native ecosystem seem agreed that Kubernetes will be the common platform orchestration layer over or under which they’ll all build – and they have aligned themselves and their own software accordingly.

For example, development frameworks such as OpenShift and Cloud Foundry were once positioned as the key platform layers, but it is more likely now that you will find them used as application layers within Kubernetes, rather than vice versa. And while Docker is still popular as a container format, those containers are likely to run on a Kubernetes platform. Focus on your strengths, is the message.
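
To make the layering concrete, here’s a minimal sketch of the sort of thing that’s meant: a Kubernetes Deployment manifest running a Docker-format container image. The image name, port and replica count are placeholders for illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 3                  # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: registry.example.com/demo-app:1.0   # built as a Docker image
            ports:
            - containerPort: 8080

The container is built and shipped in the familiar Docker format; Kubernetes takes over from there, deciding where it runs and keeping it running.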

We should perhaps retain a little caution, of course. The open source community has been notorious in the past for rebel developers deciding that such-and-such a project has “sold out” or taken a wrong turn, and setting up a fork or rival project. After all, even Kubernetes was once the young upstart, with the early attention focused instead on Docker.

The case for open infrastructure

But we should also take comfort in that community’s growing maturity, and with the greater speed of open source. Not just the velocity of development, which is impressive, but also the speed with which the people driving projects are willing to pivot and embrace new and better ideas.

And I get a distinct sense that, as APIs and open standards come to dominate our thinking, modularity is now accepted as the way to get things done. Need a container orchestration layer? Don’t reinvent the wheel. Need converged file, block and object storage? Ceph has come on in leaps and bounds – but even if it doesn’t suit, there are alternatives.

So yes, open infrastructure is a reality now. Sure, there will be use cases where it’s not appropriate or where proprietary software will be a better fit. But it’s become an attractive alternative for a growing number of organisations, and it’s going to remain a key topic in our research going forward.

Read more from the Freeform Dynamics core infrastructure & services team here


November 12, 2018  8:26 PM

The mainframe returns – as a platform for large-scale Linux

Tony Lock
IBM, Linux, Mainframe, Mission-critical computing

There are several ways to build large-scale Linux server environments, with x86 and public cloud being obvious ones. But there’s another option too, as I reminded myself when I caught up with Adam Jollans, program director for LinuxOne product marketing at IBM. LinuxOne is a solution built by IBM using the mainframe platform as its base, but it’s solely focused on running Linux workloads.

We discussed the way some organisations are using LinuxOne to keep mission-critical open source solutions running without service interruption and, just as importantly, to keep them secure. Typical workloads his customers run include core banking services – where resilience is essential, not just desirable – and similar solutions for telcos and service providers. These are services that must scale to hundreds or even thousands of virtual machines, doing so both cost-effectively and without risk.

The characteristics of such mission-critical workloads clearly resonate with the traits of the venerable mainframe. After all, the mainframe is regarded by many, even those who have never seen one, as the gold standard for IT resilience and availability. Unfortunately for IBM, and arguably for the wider world, the mainframe is also widely thought of as being out-dated, expensive, and difficult to manage – even though this hasn’t been true for a long time, and is certainly not the case with LinuxOne.

Linux admins, managing mainframes

LinuxOne is built on modern technology, and the management tools available from IBM and other vendors, such as CA Technologies, Compuware and BMC, have done much to simplify everyday tasks. Just as importantly, a trained Linux administrator can look after the platform using the same skills they use on any other Linux system.

While the scalability, security and resilience characteristics of the mainframe are now widely recognised, IBM is still faced with the perception of the mainframe as expensive. True, they’re not cheap, but neither is any system designed to run very large workloads. Indeed, some cost comparison studies indicate that LinuxOne is at least as cost effective as x86 systems to run applications at high scale.

It’s clear that IBM has a significant challenge to educate the market on the qualities of LinuxOne. Perceptions can be difficult to change, especially those that have been actively promoted by other vendors over many years. That said, LinuxOne is picking up new customers in both developed markets and rapidly growing economies around the world. The planned take-over by IBM of Linux and cloud specialist Red Hat will undoubtedly shift the market dynamics in this area, too.

For anyone who operates Linux systems at very large scale to support services which must run without fail, especially where those services have a mainframe history, it may be worthwhile taking a broader look at the options and not just defaulting to x86 or the public cloud.

Read more from the Freeform Dynamics Core Infrastructure & Services team here.


November 5, 2018  6:27 PM

Last call for a European public cloud?

Richard Edwards
cloud, Europe, Hybrid cloud, Hyperscale computing, PaaS, USA Patriot Act

Europe’s self-described No.1 ‘hyper-scale cloud provider’, OVH, held its 6th annual customer conference in Paris recently. Attendees got to meet the new CEO, Michel Paulin, and hear about the company’s four ‘product universes’, but there was more to it than that.

During his keynote, OVH founder and chairman Octave Klaba gave an impassioned speech about a ‘revolution’ designed to ‘liberate and innovate’. What Klaba called for is an alternative European cloud, one that can take on the might of Amazon, Microsoft and Google. But the question is, has Europe got what it takes to compete? And does it really need a home-grown offering when the giants are opening local data centres across the region?

Multiple parallel cloud universes

In the public cloud arena OVH is indeed surrounded by giants, but that isn’t going to stop it from trying to put together a compelling range of products and services. I found the ‘universes’ branding a little strange (constellations might have worked better), but customers seem to get it. Here’s what’s on offer:

  • OVHmarket is the ‘digital toolbox’ for small businesses and entrepreneurs, with services such as domain names, web hosting and network access services. Microsoft products are also offered here, such as hosted Exchange and SharePoint, and subscriptions to Office 365. This is all very much commodity stuff, but it’s the kind of one-stop shopping that small businesses seem to like, and the pricing looks competitive too.
  • OVHspirit is the universe of compute, storage and networking infrastructure, and it’s where OVH has its roots. The company offers customers a wide range of dedicated servers, virtual private servers and private cloud at an attractive price/performance ratio. And if you want your dedicated servers to be in, say, the UK, then ‘multi-local’ data centres make this happen.
  • OVHstack is the Platform-as-a-Service (PaaS) universe, built on open-source OpenStack. Designed to remove the hassle associated with infrastructure management, OVHstack takes up the notion of the software-defined data centre. And because OpenStack is supported by a range of cloud service providers and vendors, customers should get better system portability, or ‘reversibility’ as OVH likes to call it.
  • OVHenterprise is the hybrid cloud universe. This cloud deployment model offers interoperability and a degree of consistency between two or more distinct public or private cloud infrastructures. This is appealing if you’ve already invested in on-premises IT and private cloud infrastructure, but also want to use public cloud to meet specific business needs.

This line-up of products and services is enabled by some 50 OVH partners, most of whom will be familiar to CIOs and IT professionals. Is it enough to tempt enterprises away from the competition, though? Is there something else of value that OVH can offer?

Would you like Patriot Act/Cloud Act with that?

OVH gained a foothold (and a couple of data centres) in the US when it acquired VMware’s vCloud Air business in May 2017, making it one of the few hyperscalers able to offer cloud services with or without the Patriot Act and CLOUD Act. But this distinction is unlikely to drive the kind of mega-growth required to catch up with the market leaders, and I’m sure Klaba and his team realise this. So what’s to be done?

They believe the answer to this question lies within the European market itself. Does Europe need, or indeed want, a strong, local native public cloud provider? Klaba seems to think so. This is understandable, as the growth of OVH and other EU cloud providers will ultimately be determined by the region’s response. But what do European businesses, governments, institutions and individuals think? Share your thoughts and let us know.

Read more from Freeform Dynamics’ cloud research here


November 1, 2018  5:59 PM

Industrial IoT makes your people more important, not less

Bryan Betts
Analytics, human factors, Industrial IoT, iot, IoT hardware, Manufacturing, Skills

The recent Industry of Things World conference in Berlin offered insights into the state of the much-hyped Industrial Internet of Things (IIoT). At one end, big early adopters such as Airbus, Western Digital and HPE told how their investments in IIoT are genuinely paying off, while at the other, forward-looking tech developers – both start-ups and established suppliers alike – offered fascinating visions of the future of industry.

HPE was also the conference’s lead sponsor, which was interesting of itself. Although better known these days in the data centre, HPE – or rather its parent HP – has a long heritage in industry. Much of the HP family silver was sold off by past CEOs though, in moves that look even more questionable to me now than they did at the time.

What HPE and the other speakers at IoT World have recognised, unlike those misguided former HP execs, is that we are seeing the digital transformation of industry. Volkhard Bregulla, HPE’s VP of manufacturing, spoke for example about manufacturing’s move from automation to autonomy, and to closed-loop manufacturing where IIoT lets you “make everything transparent.”

Others, such as Dave Rauch, Western Digital’s senior VP for world-wide manufacturing operations, spoke persuasively about the need to integrate IIoT in an ongoing and holistic way. It’s not like an ERP system, he said, where “you assemble a team, create a project and so on, and at some point you’re done.” Like any digital transformation process, IIoT is a journey, not a simple destination.

And of course there will be pitfalls and road-blocks along the way. David Purón, the CEO of IoT management start-up Barbara, talked of the need to overcome expertise shortages and device integration woes, for instance. Other start-ups focused on the need to simplify, from edge analytics developer Crosser, filtering data streams from millions of sources for analysis, to Fero Labs, using machine learning to turn IoT data into actionable – and explainable – advice.

The essential human factors in IIoT

When you step back and look at the broader picture though, the common thread is the human element. Sometimes it’s the skills shortage, other times it’s the need to make the incomprehensibly complex simple enough to understand – and more importantly, to act upon. And in some cases, as with ViveLabErgo, it’s about using virtual reality to simulate the people working alongside your advanced machinery to ensure they’re both safe and efficient.

Overall, it is a reminder that even in the sensored-up and AI-enabled industries of the future, there will still be people. They might be there to do skilled manual work that’s too intricate and low-volume to be automated, to supply social and emotional intelligence, or to assess the automatically generated evidence and make the final decision.

Or, like most of the people reading this, they will be filling those roles that demand the ability to comfortably assimilate and intuitively combine a breadth and depth of expertise and knowledge that machines simply can’t handle.

Either way, as you make machines autonomous and instrument your processes, the importance of the remaining human element will actually increase, not fall. Not only is it vital to recognise that, but it’s essential to get the people on-board with the changes. In an IIoT project you underestimate the human element and the degree of cultural change involved at your peril.

Read more from Freeform Dynamics on IoT & digital transformation here.


October 24, 2018  10:49 AM

Google has got serious about Enterprise IT

Dale Vile
datacentre, Enterprise IT, Google, Hybrid cloud, Kubernetes

Google has made a complete about-face on the enterprise. It took so long that if you were watching all the time you probably didn’t notice it, but it has turned through 180 degrees.

Thinking back four years to my last major Google event, it was a strange experience. For a whole day I sat there listening to tales of how everything everyone was doing in mainstream business was wrong. We were using all the wrong tools and generally had no idea how to function effectively in the modern world. And corporate IT teams weren’t helping either. They were wasting their time pointlessly running datacentres, and dragging their feet on the move to the cloud – which was obviously the answer to literally everything. Google was going to show us the way and save us all.

Maybe it wasn’t said exactly like that, but it sums up the tone and spirit of what I heard. I came away with two big impressions. The first was that Google execs thought the rest of us were all a bit dim. The second was that Google had very little understanding of real-world complexities, especially in relation to enterprise IT.

Now wind the clock forward to October 2018 and the Google Cloud Next event in the UK, and the change is astonishing. This time I sat there being briefed on how on-premises computing is still the centre of gravity for enterprise IT, and legitimately so. There was clear acknowledgement of the reasons why CIOs continue to invest in the datacentre – proximity, control, compliance and even cost – yes, on-prem systems can be cheaper if you have the scale and the skills.

A full range of cloud choices is essential

Against this background, the big message now is that Google is committed to bringing the benefits of its cloud environment to customers regardless of location – your own datacentre, private hosting, and/or public cloud. A key component here is GKE – the Google Kubernetes Engine – which was recently made available as a fully supported on-prem solution to enable private and/or hybrid cloud platforms. The idea is that you can move applications and workloads seamlessly between physical environments – and even run them in tandem in a fully coordinated manner. This is a key requirement that we have been hearing consistently from IT leaders pretty much since the term ‘cloud’ was originally coined in relation to technology. Google goes even further by highlighting the ease with which a workload can be moved from GKE to a generic Kubernetes environment, thus further reducing the risk of lock-in.
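
To illustrate that portability point, here’s a minimal sketch assuming you have two clusters – one GKE, one generic Kubernetes – registered as contexts in your kubeconfig (the context and file names are placeholders):

    kubectl config use-context gke-on-prem          # point at the GKE cluster
    kubectl apply -f demo-app.yaml                  # deploy the workload there
    kubectl config use-context generic-kubernetes   # point at a vanilla cluster
    kubectl apply -f demo-app.yaml                  # same manifest, no changes

Provided the manifest sticks to standard Kubernetes APIs, the same definition deploys unchanged to either environment – which is precisely the lock-in reduction Google is highlighting.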

Together with some strong and credible viewpoints on business transformation, security and compliance, plus a clear perspective on the role of partners, this real-world take on enterprise requirements came across as very convincing from the main stage and during breakout sessions. The question is though, can Google walk the walk as well as talk the talk? After all, IT vendor executives have been known to exaggerate, spin, and generally tell people what they want to hear, even when there’s little substance to back it up.

My judgement here is that Google’s ‘born again enterprise vendor’ positioning is not just an act, but a genuine transformation. I formed this opinion after speaking less formally with a number of key Google people, some of whom I knew from their previous roles with other technology vendors. The more conversations I had, the more it became obvious that the change in understanding and attitude is at least partly a result of hiring talent and experience into the company from more traditional enterprise IT players. But I have to say that even the long-serving Googlers I chatted with generally talked about mainstream business needs empathetically, in a way that’s hard to fake. The notion that Google now ‘gets it’ was also corroborated by the customers I encountered at the event.

So, what a difference four years has made. In my mind Google has successfully metamorphosed from an idealistic enterprise wannabe to a serious mainstream contender. With the cloud platform and service market actually still very young, and other cloud players also waking up to the hybrid imperative, the next couple of years could get really interesting.

Read more from Freeform Dynamics’ enterprise IT & cloud research here


October 23, 2018  12:29 PM

Cloud Foundry’s growth highlights open source success and the cracks in commercial licensing

Bryan Betts
Cloud Applications, Cloud platform, Open source, Open source applications, Platform as a Service

Open source has crested the hill of enterprise acceptance. There’s no longer the fear, uncertainty and doubt (FUD) that there was around open source a few years ago, plus it’s often a lot easier and cheaper to scale open source than commercially-licensed software.

These conclusions were reinforced at the recent Cloud Foundry Summit Europe in Basel by a debate over whether the existing distribution model for Cloud Foundry – a platform-as-a-service (PaaS) for developing and deploying cloud-scale applications – was breaking.

The licensing uncertainty was highlighted by a case study presentation from insurance company Fidelity International, who saved several million dollars in licensing fees by switching from a commercial CF distribution (or distro, as it’s often known in the open source world) to the free open source version.

The cracks are showing in pay-per-use cloud licensing

As Computer Weekly’s news pages have highlighted, this flags up a big problem with the way some licensing fees are calculated. Fidelity’s use of CF has at least quadrupled as its developers shifted to cloud and microservices, and if it had stayed commercial its software licensing fees would most likely have risen similarly.

Of course, this is not just an issue for the commercial distros of “supported open source” – it’s a general problem with all pay-per-use cloud services. Organisations are equally likely to get a nasty shock if their use of cloud storage rockets, for instance. In the world of traditional IT there’s the discipline of capacity management, which aims to anticipate how demand will change over time. This seems to have been lost in the rush to cloud, though – perhaps because it is often driven by people without a traditional IT background.
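
Bringing back even a little of that discipline needn’t be hard. Here’s a back-of-envelope sketch – with entirely made-up figures – of how quickly pay-per-use costs compound when usage grows the way Fidelity’s did:

    # Back-of-envelope projection of pay-per-use licensing costs.
    # All figures are illustrative, not real pricing.

    def project_costs(units_now, unit_cost, growth_per_year, years):
        """Yield yearly cost if usage grows by a fixed factor each year."""
        units = units_now
        for year in range(1, years + 1):
            units *= growth_per_year
            yield year, round(units), round(units * unit_cost)

    # e.g. 200 instances today at $1,500 each per year, usage doubling annually
    for year, units, cost in project_costs(200, 1500, 2.0, 3):
        print(f"Year {year}: ~{units} instances, ~${cost:,} per year")

Even this crude model makes the point: usage that quadruples takes the bill up fourfold with it, unless the pricing model – or the platform – changes.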

So yes, the cracks are showing in that pay-per-use licensing model. As Fidelity realised, if the system is strategic enough to build a solid business case around, then once you have built up enough experience, the main thing stopping you moving to open source is FUD.

And it is clearly not alone in realising this. One of the people I talked with at CF Summit was a software engineer for a major CF company whose team is responsible for making sure that their popular add-on tools work on the open source distro, not just on their company’s own pay-per-use commercial distro.

Adopt, offload, embed

So what’s the future for commercial distros? One role may be to help organisations adopt the software and skill up, with the option to go open source over time. There are opportunities here for those suppliers who have major consulting practices. Time and again, CF users cite the reason for working with their chosen distro as being their software supplier’s methods and domain expertise.

Another is quite simply that not everyone wants to do their own hosting or can build a business case for it. A third is where the open source software is there, but embedded within something of much broader value. As just one example, the SAP Cloud Platform includes CF as its framework for building Web apps, but the value for SAP users is in the rest of the platform and the support it brings for extending their core SAP systems.

And then of course there are other licensing models available. So yes, there’s always going to be a role for commercial distros. That said, for those who do have the skilled people (and the caveat is that there’s a world-wide shortage in CF and Kubernetes skills; indeed, pretty much everyone at CF Summit seemed to be hiring), why wouldn’t you go open source?

Read more from Freeform Dynamics on software delivery here


September 12, 2018  12:14 PM

When modern meetings don’t work

Bryan Betts
appointments and meetings, Conference Room, Team Collaboration, Virtual meeting

Once upon a time, pretty much every meeting had a minute-taker – someone who kept notes, summarising who said what. These meeting minutes were circulated afterwards for comment and correction, and to confirm actions that people had said they would carry out.

Then teleconferencing happened, and all those meetings became a whole lot less formal. Now, participants are often expected to keep their own notes, but if everyone takes their own, then there’s no shared or agreed version for those who can’t attend. And while there might be an audio recording of what was actually said, trying to extract the key points or ‘matters arising’ from an hour of audio at a later date is a thankless task.

It’s all about time

Indeed, a general point might be “Don’t rely on recordings!” If someone genuinely hasn’t got time to spend an hour online with you, when at least there’s some real-time interaction possible, it’s unlikely they have time to passively listen to an hour’s recording. There’s a good reason why meeting minutes are usually in summary form, not verbatim. Time is money, after all.

Of course, formal face-to-face meetings often still have minutes. However, we all rely so much now on disparate technological aids for collaboration – smartphones, CRM systems, project management apps – that even if we do have a set of meeting minutes, they need further processing to make them useful.

So what can we do? For a start, we need to bring back meeting minutes, even for e-meetings – but they need to be minutes fit for the 21st century. That means making them searchable, shareable and connected. And yes, it might mean applying AI to generate a précis.

Once you realise the problem, you probably won’t be surprised to read that there are quite a few applications offering to do much of this for you. Most are all-in-one solutions, though, combining note-taking with their own task tracking and so on. If you’re happy with working in a single vendor ecosystem then that’s fine – hello Microsoft Teams! – but it’s not the way everyone works.

So there’s also an emerging set of software tools that enable meeting notes to be linked into other tools – turning action items into tasks, activities or tickets in whatever best-of-breed CRM, project management or helpdesk app you happen to use. You can do much of this through a collaboration platform – Slack has a lot of relevant integrations here, for example, as in different ways does Dropbox Paper – but I’m thinking more of the likes of Hugo’s eponymous team-working app. I recently talked with Hugo’s founders about making e-meetings more effective and less wasteful, and they were vocal on the foundational role here of ‘actionable’ minutes.
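
To show the shape of what ‘actionable’ means here, below is a hypothetical sketch of an action item flowing from meeting notes into a task tracker via a plain REST call. The endpoint and payload fields are invented for illustration – this is not any particular product’s API:

    import json
    import urllib.request

    # Hypothetical endpoint - substitute your tracker's real API.
    TRACKER_URL = "https://tasks.example.com/api/tickets"

    def push_action_item(summary, owner, due_date):
        """Turn a minuted action item into a ticket in a task tracker."""
        payload = json.dumps({
            "title": summary,
            "assignee": owner,
            "due": due_date,
            "source": "meeting-minutes",
        }).encode("utf-8")
        req = urllib.request.Request(
            TRACKER_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    push_action_item("Circulate Q4 roadmap", "bryan", "2018-12-01")

The point is less the code than the plumbing: once minutes are structured data rather than prose, wiring them into whichever CRM, project or helpdesk tool you already use becomes a small integration job rather than a retyping exercise.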

Is your meeting software making you behave badly?

Whichever route you take, it’s all symptomatic of a wider effect we’ve been observing in our research. This is that, yes, we shape our tools, but then our tools tend to shape us – or rather, they shape our behaviours. The resulting behaviours may not be obviously toxic, but they are clearly not optimal either.

How do we pull those behaviours back on course? A first step, obviously, is to recognise that there is a problem. Then it’s understanding where it comes from and the influence our e-meeting toolkit has on it. And lastly, it’s hacking that toolkit to help us rediscover those essential yet near-forgotten meeting skills and disciplines.

Read more from Freeform Dynamics’ end-user computing team here.

