DevOps, as we know, is the portmanteau of Developers and Operations: an amalgamation of cultural philosophies, working practices and technology tools designed to make software application development more Agile and less painful.
So what’s the goal of DevOps, ultimately?
If DevOps comes into existence inside any given software application development shop, then how does it persist in the long term?
Is good DevOps an established long term connection between Developers and Operations that pervades for all eternity?
Or… perhaps, could really effective implementation of DevOps principles lead to an eradication, elimination and ultimate extinction of the DevOps species?
It’s not such a crazy idea, listen up.
Establishing a DevOps culture inside a software application development team and its corresponding operations function means tasking (and skilling up) engineers that are focused specifically on integration, test and deployment i.e. the things that make DevOps happen and make DevOps DevOps.
But… if the DevOps culture is embraced, then doesn’t each engineer take more individual responsibility for the integration, test and deployment of the code that they themselves are working on?
Doesn’t this, arguably, theoretically, lead to the death of DevOps?
The death of DevOps?
This commentary was suggested by Amazon big data engineer Michael Surkan in a LinkedIn discussion that links to a wider piece on the maturity of DevOps.
“Will the brave new world of DevOps be one where specialisation disappears and more responsibility is pushed down to the lowly software engineer to manage their own deployments? Is the day far off when no one will even mention they are a DevOps engineer on their resume?” asks Surkan.
Surkan’s comments are made in relation to a wider piece written by Avi Cavale, CEO and co-founder of Shippable.
As many readers will know, Shippable is a Continuous Integration (CI) and Continuous Delivery (CD) DevOps automation platform with Docker support, designed to simplify provisioning, building, testing and deploying applications.
Cavale’s ‘Moving up the DevOps’ maturity curve’ is linked here.
Will DevOps kill DevOps, ultimately? Bad DevOps won’t, but good DevOps might.
Industry commentators love to talk about why so many of us think we dislike management consultants.
Surely hiring a consultant means that, internally, something has failed and a firm needs advice, right?
Deeper than that: if that same firm had achieved a higher level of information-sharing, connected web-centric collaboration and overall data transparency (all those things we are constantly told we need for so-called digital transformation), then the firm wouldn’t have needed that advice in the first place – would it?
In an age when even the Harvard Business Review runs an examination of why we all hate consultants, is this a brush we should tar the whole profession with? Is a management consultant just as (potentially, arguably) guilty as a cloud computing consultant of regurgitation and stating the obvious?
As in management, as in cloud?
Independent technical consultancy Amido argues no, it’s not the same thing.
Amido specialises in implementing what it calls out as ‘cloud-first’ solutions and the firm this year became the highest ranking cloud technology consultancy on the 2017 Sunday Times Lloyds SME Export Track 100, an award that ranks Britain’s 100 small and medium sized companies with the fastest growing international sales over the last two years.
While management consultants are busy organising roundtable meetings and reports, cloud consultants are (hopefully) more focused on the dirty practical mechanics involved with making real cloud software systems work.
As an example, Amido insists that it is focused on realities that include encouraging retail customers to adopt microservices.
How microservices help retail
The company says that many retailers, both on the high street and online are now adopting this approach, isolating individual areas of functionality from the monolith one at a time and creating new service boundaries.
“[Adopting microservices allows retailers] to innovate without affecting the day to day running of the business. When delivering microservices architecture for retailers, understanding [an] organisation’s target service architecture is only half the battle; the war is won by knowing how to break apart existing monolithic systems into bitesize chunks and phasing them in while keeping the lights on in the meantime.”
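The “bitesize chunks” approach described in the quote above is often called the strangler pattern: route traffic for already-extracted service boundaries to new microservices, and let everything else fall through to the monolith so the lights stay on. The sketch below is purely illustrative (every path and backend name is invented), assuming a simple path-prefix routing layer:

```python
# Hypothetical strangler-pattern router for a retail platform.
# Paths already carved out of the monolith go to their new microservices;
# everything else continues to hit the legacy application.

EXTRACTED = {
    "/baskets": "https://basket-service.internal",  # already split out
    "/stock": "https://stock-service.internal",     # already split out
}
MONOLITH = "https://legacy-retail-app.internal"

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in EXTRACTED.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return MONOLITH  # unmigrated functionality stays on the monolith
```

As more boundaries are extracted, entries move into the routing table one at a time, which is exactly the phased migration the quote describes.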
Principal consultant at Amido Richard Slater digs deeper still into the mechanics of cloud. He is fond of explaining how resilience engineering is a practice originally from the construction industry, whereby the design of buildings, infrastructure and utilities incorporates failsafe mechanisms, backups and redundancy to keep people alive when the worst happens.
Because of this, Service Resilience, Business Continuity and Disaster Recovery are all important aspects of a cohesive and comprehensive cloud service management strategy.
Release the monkeys
“The question of how you encourage engineering practices to improve the availability or resilience of a system doesn’t have a perfect answer. As developers, we need to balance functional requirements, such as ‘Add this product to a bag’ with non-functional requirements, such as ‘The bag must survive a failure when a datacentre fails’. Often the two will compete, and it is easier to prioritise the former when the pressure is on,” wrote Slater, on an official Amido blog.
He notes that Chaos Monkey works by killing off AWS Instances indiscriminately; this has the effect of causing systems that have not been architected and engineered for resilience to fail unexpectedly.
This failure, in turn, gets the engineering teams thinking about Availability and Resilience as part of their daily activities, crucially amplifying a feedback loop between failure point and mitigation.
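The mechanism is simple enough to sketch. The toy below illustrates the Chaos Monkey pattern rather than Netflix’s actual tool: the instance names and the kill hook are hypothetical stand-ins for real cloud API calls (in practice, an EC2 terminate-instances request).

```python
import random

def unleash_monkey(instances, kill, probability=0.1, rng=random):
    """Kill each instance with the given probability; return the casualties.

    `kill` is a callback standing in for a real cloud API call, so the
    chaos logic itself stays testable and cloud-agnostic.
    """
    casualties = [i for i in instances if rng.random() < probability]
    for instance in casualties:
        kill(instance)
    return casualties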
The Amido blog contains more of the real developer-level guts of cloud we need to get dirty with. Sure… there is still consultancy spin here at some level, but it would be unfair to argue that any cloud consultancy operates on quite the same level as your average management consultancy.
Keep on keeping it real people.
Okta is an authentication layer for developers to implement at the app, service, device and user level.
GEEK QI NOTE: In meteorology, an okta is a unit of measurement used to describe the amount of cloud cover at any given location such as a weather station. Hence the name: the amount of Okta equates to the amount of technology ‘cover’ (in terms of security) a system is actually providing.
With a dedicated developer track at the event itself (and supporting blog), Okta’s programmer message is one that focuses on how to integrate custom web applications with Okta’s identity management and security APIs using best practices and established workflows.
CEO Todd McKinnon has already laid down his ‘vision’ for the firm’s Okta Identity Cloud. He insists that this technology has the potential to become the authentication layer for every app, service, device and person. He says that Okta Identity Cloud gives developers an easier and more secure way to manage user access to whatever they are building.
“Identity is a part of every digital experience, so investments at the developer level are critical. That’s why we’re dedicating two full days to developer training — workshops and collaboration around integrating custom web apps with our identity and security APIs — before the main event kicks off. We’re also hosting our first-ever AppDev track during Oktane, where we’ll provide insight into Okta’s product roadmap and host sessions on designing REST APIs and seamless registration experiences,” said Todd McKinnon, CEO and co-founder, Okta.
It’s a busy time for Okta: in March this year the Computer Weekly Developer Network noticed that the firm had acquired Stormpath, a provider of identity management for applications and APIs.
The firm also made headlines last August when it announced a major partnership with Google — this news saw Okta become Google’s ‘preferred’ secure identity management layer for all work relating to Google Apps for enterprises.
So what of the event itself, what can we expect from Oktane 2017?
“What I loved about Oktane is actually getting to meet a number of the other CIOs that I’ve had the opportunity to network with and get their use cases on how they’re using Okta… It’s been great to network with these other CIOs and get their feedback, and use cases,” said Mark Hagan, CIO of Envision Healthcare.
Speakers include actor and author Rainn Wilson, activist Reshma Saujani who is also founder of Girls Who Code, plus also Megan Smith, former CTO of the United States. Okta keynotes will be delivered by CEO Todd McKinnon, COO Frederic Kerrest, chief customer officer Krista Anderson, chief product officer Eric Berg and Erin Baudo Felter in her capacity as executive director of Okta for Good.
The company claims to have welcomed over 2,000 IT leaders, developers and cloud software vendors (ISVs) to last year’s event — the 2017 event is clearly already bigger than that.
Oktane has two major keynotes, two super-sessions, 48 breakout sessions, 11 training labs and many hours of pre-conference training… the CWDN blog team will be there to cover it.
It’s Friday (okay this is the Internet and this is here forever, but it’s Friday at the time of writing)… and that means we all wish we could take the day off, work a four-day week (or less perhaps) and work better, live better and love life better.
So what to do?
Life empowerment books are all the rage and no American airport magazine stand is complete without at least a stack of these things. But perhaps we should think about work empowerment first and this, theoretically, could lead to life empowerment, right?
Holger Reisinger thinks this general approach is correct i.e. work out your approach to your job and a more harmonious work-life integration point is possible with higher productivity.
Intelligent sound solutions
Reisinger is senior VP of global accounts, products and alliances at Jabra, the ‘intelligent sound solutions’ company. Jabra is best known for its office and call centre headsets with noise cancellation and ‘superior’ sound engineering. The firm also specialises in Bluetooth headsets and speakers and other products including wireless sport headphones.
According to Jabra’s Reisinger, the firm has focused on what it calls ‘New Ways of Working’ and it is now issuing a rallying cry for a radical transformation of how we organise our work and get more done.
“At [the] heart [of this philosophy] is the power and autonomy of the individual. New Ways of Working is about surrounding people with technologies, processes and a culture that helps them achieve their full potential by feeling more appreciated, engaged and fulfilled in the workplace. Business success today – and tomorrow – isn’t reserved for those who work harder; it belongs to those who adopt New Ways of Working,” said Reisinger.
GET S#!T DONE
The thinking behind these ideas has been published in a free print and e-book called GET S#!T DONE, which readers can download here. The content itself stems from essays and other written pieces that have appeared on the official Jabra company blog itself.
Chapters presented in this book include ‘chunks’ of thought that focus on areas including:
- Work isn’t a place, it’s what you do.
- The secret of managing remote teams.
- The most annoying thing in the world.
- Collaboration vs. Concentration.
- The rise of the chief happiness officer.
- In the future, your employees will not be your employees.
- Start sleeping on the job, or you’re fired!
The foundation of much of the thinking presented here (and indeed the foundation of Jabra’s concept of New Ways of Working) is connected (and credited) to a behavioural model developed by Louise Harder Fischer, PhD and fellow at the IT-University of Copenhagen: her concept of the productivity cube.
Fischer’s cube segments the world into three dimensions: work-modes, technology and workplace culture.
What (arguably) marks GET S#!T DONE out as interesting is that this is a headset firm (albeit a dedicated specialist in the field and one aiming at the high quality end of the market) talking not about headphones, earpieces and call centres, but about the way we work and the way we harness productivity. If that’s not a lesson in holistic market-wide cross-industry thought-leadership then surely nothing is.
UK-based cloud datacentre and comms firm Node4 gets a clean bill of health this month by virtue of its new NHS Health and Social Care Network (HSCN) compliance rating.
Developers working on data-heavy transactional and analytical software applications connected to the health sector will need to know that this new HSCN tag is now being applied to underlying networks… and it effectively replaces N3.
What is (was) N3?
N3 was the national broadband network for the NHS, connecting all NHS locations and 1.3 million employees across England.
According to the NHS, “The Health and Social Care Network (HSCN) is a new data network for health and care organisations which replaced N3 — [and] N3 services ended on 31 March 2017.”
The Health and Social Care Network (HSCN) is a new data network providing an efficient and flexible way for health and social care organisations to access and exchange electronic information.
Node4’s public sector team works across central government, local government, health, education, devolved administrations, emergency services, defence and not-for-profit organisations.
“The HSCN allows the health and social care sector to transform patient care and services through greater connectivity, making data and information fully accessible to clinicians, health and care professionals and citizens,” said Vicky Withey, compliance manager at Node4.
Withey claims that Node4 has successfully completed the criteria required for HSCN compliance, including meeting rigorous third-party standards.
Node4 head of public sector Paula Johnston is also upbeat. She notes the use of her firm’s connectivity services and HCS collaboration platform and says this will help the NHS with security infrastructure challenges.
This is a guest post for the Computer Weekly Developer Network written by Mat Mathews in his capacity as co-founder and VP of product management at Plexxi — the firm is a specialist in converged and hyperconverged network infrastructure for public and private cloud deployments.
Pressure is mounting for companies to increase business agility and improve efficiencies by adopting advanced IT models to deliver new applications to both customers and internal departments. Given this reality, we now see many firms turning to hyperconverged technologies for help in moving beyond their traditional legacy IT infrastructure.
So how does this sector of the total technology stack play out and what key considerations should we be thinking about?
Mathews writes as follows…
Analyst house IDC has found that hyperconverged infrastructure (HCI) sales grew by 104.3 percent year-over-year in the third quarter of 2016… and Gartner suggests that the HCI market will be worth $5 billion by 2019.
HCI helps accelerate new application development by making it easier for customers to build their own on-premises “cloud”, collapsing the traditional compute (server) and storage silos into a single infrastructure building block with sleek software control.
With HCI on the rise, the need for what is referred to as a Hyperconverged Network (HCN), which does the same for the network silo, emerges. Let’s take a closer look at HCI, HCN and why people are choosing to make the move.
DIY vs. silo’d vendor tradeoff
As companies move from custom-built IT infrastructure (i.e., built around specific application needs) to general purpose cloud computing, they are generally faced with this choice: There is the “do it yourself” model with commodity hardware, open source tools and a lot of savvy developers, or there is the more traditional silo’d vendor approach and turnkey licensed cloud management software.
HCI aims to allow these companies to leverage commodity hardware for compute and storage with sleek, highly integrated, pre-built software to help them easily turn that into a cloud with minimal specialised expertise or development.
HCI also enables a “pay as you grow” model, so companies can start with exactly what they need and only add more as they grow. This translates into agility for application developers at a very favourable cost.
However, as HCI deployments grow, they can quickly run into roadblocks. HCI creates a clustered system of compute and storage commodity components and is therefore highly dependent on the network that connects the cluster together. New application deployment can become stymied by a manual network configuration and the engineering process.
As HCI deployments grow past their initial few nodes, a network architecture usually has to be designed, which takes highly skilled engineers to ensure that the overall cluster operates at peak efficiency and doesn’t interfere with existing gear or applications. This type of network engineering and architecture is a long, arduous process with traditional networking gear, and again slows down or eliminates the agility gains of HCI. This is where HCN comes into play.
Why are customers using HCN?
HCNs are designed to specifically address the needs of HCI deployments by bringing the same agility and scale-out performance benefits to the network that HCI brings to compute and storage. HCN doesn’t force users to pre-design a network for larger deployments, leveraging the same pay-as-you-grow model as HCI.
Another benefit of HCN includes zero-touch network administration.
In other words, virtual local area networks (VLANs) and network switch ports are automatically added, moved, and deleted based on the VM lifecycle information in the HCI systems.
A good HCN will also automatically keep compute and storage traffic isolated on the same physical network, saving tremendous cost that is typically associated with building and operating separate networks. Other advantages include integrated management and visibility into HCI management systems.
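The zero-touch behaviour described above boils down to an event handler: VLAN and port changes are driven by VM lifecycle events coming out of the HCI system rather than by manual switch configuration. The sketch below is a hypothetical illustration only; the event names and the VLAN bookkeeping are invented and do not reflect any particular vendor’s API.

```python
# Hypothetical zero-touch fabric automation: the HCI system emits VM
# lifecycle events, and the network layer adjusts port/VLAN bindings
# without a human ever touching the switch configuration.

class FabricAutomation:
    def __init__(self):
        self.port_vlans = {}  # switch port -> VLAN id

    def on_vm_event(self, event, port, vlan):
        if event in ("vm_created", "vm_moved"):
            self.port_vlans[port] = vlan     # add or move the VLAN binding
        elif event == "vm_deleted":
            self.port_vlans.pop(port, None)  # retire the binding
```

The point is that the network's state is derived from the HCI system's view of VM lifecycles, which is why the quoted "zero-touch" claim is plausible in principle.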
HCN allows companies to maintain the agility and ease of use from HCI even in the most complex environments.
If you have plans to deploy a HCI system, you should seriously consider using HCN.
More on Plexxi
Plexxi solutions enable cloud builders to harness the power of a single, simple platform to create private/public/hybrid cloud and datacentre networks. Plexxi HCN offers a hyperconverged network for hyperconverged infrastructure (HCI) solutions.
Embedded analytics firm Jinfonet Software has hit the 14.5 version release of its JReport product.
Developer-focused from the start, the message to programmers from the J-loving Marylanders behind this software is the ability to bring together data from different users without the need to hunt and stitch together data from multiple views.
It’s all about the crosstabs, y’see?
As detailed here on GeekInterview, “Crosstab, or Cross Tabulation, is a process or function that combines and/or summarises data from one or more sources into a concise format for analysis or reporting. Crosstabs display the joint distribution of two or more variables and they are usually represented in the form of a contingency table in a matrix.”
Jinfonet says that crosstabs have been one of the most valuable visualisations in BI and analytics history.
They are effective in analysing hierarchically structured information, such as product categories, where the eyes drive across rows. It’s all about data layouts where users can evaluate more variables and outcomes simultaneously.
JReport 14.5 now offers compound crosstabs that allow users to put multiple parallel crosstabs together. Every group of columns and rows can have its own set of measures making it possible to juxtapose information that normally requires multiple rounds of configure-and-run.
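For readers who want the general crosstab concept in code rather than in JReport itself, here is a small illustration using pandas (chosen purely for convenience; it is not the JReport API). A handful of flat sales records is summarised into a category-by-region contingency table:

```python
import pandas as pd

# Flat transactional records: one row per sale.
sales = pd.DataFrame({
    "category": ["shoes", "shoes", "hats", "hats", "shoes"],
    "region":   ["north", "south", "north", "north", "north"],
})

# Crosstab: joint distribution of category vs. region as a matrix,
# the "contingency table" form described in the GeekInterview quote.
table = pd.crosstab(sales["category"], sales["region"])
```

A compound crosstab, as JReport 14.5 describes it, is essentially several such tables placed side by side, each with its own measures.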
Large scale data pipelining
Customers have said that the large scale data pipelining (along with the compound crosstabs function, obviously) is what makes JReport worth its salt.
Also here we find live dashboards with individual widgets. When a user drills down, filters or sorts on a dimension in one widget, visualisations on other subscribed dashboard widgets update instantly.
The product is also enhanced for smartphones in terms of screen layouts, responsive scaling and folding dashboards, reports and charts.
There’s an uprising in the Middle East, but this time it’s no Arab Spring… or if it is, it is a new beginning for the software application developers native to the region.
The Arabic-speaking coder fraternity and sorority has been left behind in many senses. In no small part this is related to the fact that the region has been plagued by software piracy for many decades.
Longstanding efforts made by bodies such as the Business Software Alliance have sought to counter the counterfeiters, but it was always just chipping away at the iceberg.
A gulf across the Gulf
Arabian coders have also been left behind in the sense that not enough Arab language applications have ever been natively developed.
Although the tech user base across the Middle East speaks a high level of English, the gap in natively created Arab language software and tools has always represented a gulf. It’s a gulf that stretches across the Gulf… and across the rest of the Arab League’s 22 nations.
But could the rise of cloud computing change things? Could the fact that software is no longer ‘owned’ (and is now increasingly rented as-a-Service) mean that Arab developers, programmers and all flavours of software engineers are now given the chance for their ideas to shine without the risk of them being (almost) instantaneously ripped off?
“Cloud computing represents the best deterrent to piracy we have yet seen. The pay-as-you-use model reduces the up-front investment in new software while new business models are emerging to pay for services like advertising through monetising data, which could prove more palatable and affordable than paying for software,” said Agarwal.
Dualistic channel of communication
Agarwal describes what he calls a ‘dualistic channel of communication’ created by the cloud model that exists between provider and user. This he says makes it easier and a lot more practical to customise software and monitor for pirated distributions when delivered as a service.
“The wide-reaching IT shift to the cloud offers an opportunity for companies in the Middle East to leapfrog the west, adopting the newer solutions without being hampered by legacy systems,” said Agarwal.
But there are of course pitfalls here. The more established cloud players have a leg-up in terms of scale. The winner-takes-all kind of business favours the early leader and the companies with more resources to build scale. Data sovereignty is a major issue here.
“When it comes to data sovereignty and cloud and who has control over what data (and what geographic restrictions are in place) this can all add up to make a big difference to the success of a cloud system. This works both ways and can lead to more locally-developed solutions in the Middle East winning the market share because of the same considerations,” said Agarwal.
A golden ticket?
These notions are not confined to the nice snowy people at Snowflake; the proposition here is stretching far and wide. Ahmed Auda is managing director for Middle East and North Africa region at VMware.
Auda says that cloud and virtualisation platforms could provide the golden ticket for Middle East youth job creation – but governments, multi-national organisations, local startups, and educational institutions need to partner on developing the digital skills needed for both Millennials and the more experienced workers.
“In Europe, the Middle East and Africa, 71 percent of organisations say digital skills improve competitiveness, with 64 percent of employees (including 39 percent of 45-54-year-olds) willing to use their own time to learn new digital skills such as coding and building mobile apps, according to a recent survey by Vanson Bourne for VMware. However, half (48 percent) of EMEA employees cannot use their digital skills – particularly due to digital not being integrated into personal objectives, lack of budget and lack of adequate support from IT,” said Auda.
Shift to LoB IT control
The VMware Arabia man also thinks that this need for all employees to have digital skills is further driven by technology management shifting from IT to Lines of Business, a trend seen in 68 percent of Middle East organisations, according to another report by Vanson Bourne for VMware.
“In the Middle East, the cloud is seeing strong take-up for enhanced business agility and optimising costs. However, the cloud is not more or less safe than desktop computing. One of the biggest challenges facing Middle East organisations is having the proper cloud-based cybersecurity tools and processes, as only 5 percent of EMEA leads consider security the highest corporate priority, according to “The Cyber-Chasm” report by The Economist Intelligence Unit. In the UAE, for example, 64 percent of organisations expect to be hit by a cyber-attack in the next 90 days,” warns Auda.
VMware says it is supporting Middle East coders in developing their digital skills and is currently partnering with local universities, starting with Prince Sultan University in the Kingdom of Saudi Arabia, on integrating digital skills in the educational curriculum.
Clouds over the Middle East are good news then… for programmers, at least… and just maybe for anyone that needs some respite from the soaring desert heat.
Data engineering departments and their corresponding software application development shops have got the message regarding the upcoming General Data Protection Regulation (GDPR) by now, right?
Apparently not, says Veritas Technologies, the data specialist firm that now chooses to style itself as the ‘multi-cloud data management’ firm.
Veritas claims to have uncovered what may be worrying ‘findings’ in its Veritas 2017 GDPR Report — although thirty-seven per cent of UK organisations claim to be ‘GDPR-ready’, Veritas estimates that only nine per cent of UK firms that believe they are prepared for the GDPR actually are.
When survey respondents were asked about specific GDPR provisions, most provided answers that show they are unlikely to be in compliance, states Veritas.
The implication, if this survey holds water, is that there is a widespread and distinct misunderstanding over regulation readiness.
“With the EU’s General Data Protection Regulations (GDPR) less than one year away, organisations around the world are deeply concerned about the impact that information non-compliance can have on their brand and loyalty of their customers,” said Jason Tooley, VP for Northern Europe at Veritas.
Of the 900+ organisations polled globally:
- 45 per cent of organisations find it difficult to identify and report a data breach within 72 hours
- 16 per cent admit that personal data cannot be purged or modified
- Almost one-third, 32 per cent, believe that former employees still have access to internal data
The findings from the report show that almost half (48 per cent) of organisations who stated they are compliant do not have full visibility over personal data loss incidents. Moreover, 61 per cent of the same group admitted that it is difficult for their organisation to identify and report a personal data breach within 72 hours of awareness – a mandatory GDPR requirement where there is a risk to data subjects.
Any organisation that is unable to report the loss or theft of personal data – such as medical records, email addresses and passwords – to the supervisory body within this timeframe is in breach of this key requirement.
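The 72-hour arithmetic is trivial, but it is worth being precise about: the clock starts at the moment of awareness, not at the moment of the breach itself. A minimal sketch:

```python
from datetime import datetime, timedelta

def notification_deadline(aware_at: datetime) -> datetime:
    """GDPR breach-reporting deadline: 72 hours after becoming aware."""
    return aware_at + timedelta(hours=72)

# Example: awareness on the morning GDPR takes effect.
deadline = notification_deadline(datetime(2018, 5, 25, 9, 0))
```

Anything that lands after that timestamp, where there is a risk to data subjects, is a reportable failure.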
The developer responsibility
As Veritas’ Tooley has openly suggested, more education is needed on the tools, processes and policies to support the information governance strategies required to comply with the GDPR requirements.
“Creating an automated, classification-based, policy-driven approach to GDPR is key to success and will enable organisations to accelerate their ability to meet the regulatory demands within the short timeframes available,” he said.
Developers should also know who holds the responsibility for data held in cloud environments before they embark upon their next cloud-driven deployment, rollout or enhancement.
Veritas asserts that almost half (49 per cent) of the companies that believe they comply with the GDPR consider it the sole responsibility of the cloud service provider (CSP) to ensure data compliance in the cloud.
“In fact, the responsibility lies with the data controller (the organisation) to ensure that the data processor (the CSP) provides sufficient GDPR guarantees. This perceived false sense of protection could lead to serious repercussions once the GDPR is enacted,” said Veritas, in a press statement.
Failure to meet GDPR requirements could attract a fine of up to four percent of global annual turnover or €20 million, whichever is greater.
This is a guest post for the Computer Weekly Developer Network written by Brian Dawson in his capacity as DevOps evangelist at CloudBees — CloudBees describes itself as the ‘hub’ of enterprise Jenkins and DevOps, providing software solutions for continuous delivery.
Dawson’s areas of focus centre on tools, technology and pipeline development, project management, licensing, business development and process improvement.
Dawson writes as follows…
More organisations recognise the value of adopting DevOps practices to accelerate delivery cycles while improving quality, reliability and security. As a result, the disparities between organisations who have begun a DevOps transformation and those who have not are becoming more pronounced as highlighted in the most recent State of DevOps Report.
In addition to noting that DevOps practices “improve organisational culture and enhance employee engagement”, this report cites findings on high-performing organisations: compared to lower-performing peers, they have better employee loyalty and spend less time on unplanned work and rework. They deploy 200 times more frequently, recover from failures 24 times faster and have 2,555 times shorter lead times for changes.
Many of the organisations surveyed started down the road to DevOps by bridging the chasms that exist between upstream development and downstream delivery across three planes:
- People and culture
- Process and practice
- Tools and technology
They are interdependent and establishing an effective DevOps culture requires addressing all three.
On the third plane – tools and technology – high-performing organisations have long recognised the value of using tools to automate the delivery process, but automating legacy architectures using legacy technologies only goes so far. Accordingly, there is a growing interest in microservices architecture and container technology.
Not your father’s architecture: microservices are not a mashup of SOA
Because the microservices concept has its roots in Service Oriented Architecture (SOA), the two concepts share many characteristics, but there are important distinctions. Like SOA, microservices work by decoupling components of a complex system and defining interfaces or contracts between them.
With microservices, the communications between components tend to be lighter weight and the interfaces and contracts less rigid, often implemented through RESTful APIs. Many also view microservices as more focused on user-facing functionality rather than back-end services, but that is not a hard-and-fast rule.
Microservice components can also be deployed independently, making it easier for relatively small teams to apply iterative processes to build, test and deliver a microservice as an individual component.
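A lightweight RESTful microservice of the kind described above can be sketched with nothing but the Python standard library: one small, independently deployable component exposing one resource over HTTP. The endpoint, resource and JSON payload below are invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class BasketHandler(BaseHTTPRequestHandler):
    """A tiny 'basket' microservice: one resource, a JSON contract."""

    def do_GET(self):
        if self.path == "/baskets/1":
            body = json.dumps({"id": 1, "items": ["socks"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

def start_service():
    """Start the service on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", 0), BasketHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

The JSON response is the whole interface contract: any team can rebuild, retest and redeploy this component without touching the rest of the system, which is the independence the paragraph above describes.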
Several factors contribute to this acceleration in interest. Small components can be built independently by teams of 8-12 (‘two-pizza’ teams) who have end-to-end control over development and delivery.
Decoupling system functionality into smaller components makes it possible to reliably and frequently update individual components with reduced risk and impact on the overall system.
A cross-functional scrum team with development, QA and operations expertise can rapidly develop, test and deploy a complete microservice component, and then react faster to unexpected issues once it is deployed.
How do containers fit in?
Docker has revitalised decades-old container technology and this has captured everyone’s attention. It is difficult to find a mainstream development and delivery tool provider that has not adopted some level of Docker support.
The appeal of using Docker is that a Docker container lets you encapsulate an entire environment into a single lightweight image rather than building and configuring a new physical server. Docker containers provide fast access to infrastructure, a fundamental requirement of DevOps and continuous delivery practices.
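That encapsulation can be sketched in a few lines of Dockerfile: the entire runtime environment for a service is declared in one image definition rather than configured by hand on a server. The service name and versions below are illustrative assumptions, not a prescription.

```dockerfile
# Hypothetical image for a small Python service: base OS, runtime,
# dependencies and application code all captured in one lightweight image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```

Building this once yields an immutable image that can be started in seconds wherever it is needed, which is the fast, repeatable infrastructure access the paragraph above describes.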
As the industry continues to move towards the ideal of building and testing software with every change made, we need environments available on-demand to support the increased number and frequency of builds.
Containers are a perfect fit for small agile teams because they provide fast access to immutable infrastructure without interfering with other development streams. Containers are also a perfect fit for microservices because they are well-suited to hosting smaller, self-contained components.
There are, of course, some caveats to consider before jumping in with both feet with microservices and containers. Container technology is maturing and evolving rapidly and new Docker releases arrive frequently.
If you’re going to be aggressive about container adoption, keep vigilant about changes that may affect your specific use cases. Rather than spending time and resources breaking a legacy monolithic system into microservices, consider leaving that software in place and use a microservices architecture only when implementing new capabilities, gradually replacing legacy architecture.
DevOps requires a marriage of culture and process as well as tools and technology. An organisation’s ability to employ tools (including Docker) and technology (including containers and microservices) in support of a collaborative culture and proven practices is a leading indicator of its ability to differentiate itself by developing software more quickly, from concept to customer and delivering that software with increased quality, reliability and security.