In a world where IT capabilities, tools, and infrastructure are evolving at an unprecedented pace, it’s easy to get swept up in the forward momentum. But enterprises are beginning to realize that while free open source code and low-cost cloud infrastructure are certainly valuable, unbridled and unmonitored consumption of these resources can come at a steep price. It is time for organizations to take a closer look at how they are managing risks. Otherwise, they will face increased IT spending, reduced productivity, and loss of competitive advantage. Vigilance and responsibility must become a way of life for organizations as a whole and for every software development professional who is responsible for selecting and using resources to design, develop, and deploy software.
Is security as we know it dead?
According to Joshua Corman, Cyber Statecraft Initiative Director at the Atlantic Council, this is a sad but inevitable fact. In the world of software development, security teams can no longer be expected to simply put the finishing touches on a product before it launches. Security must be baked in from the very beginning, or the consequences could be fatal, especially now that the digital world interfaces with the real world in more ways than most people realize.
“Software isn’t eating the world, it’s infecting the world. It’s spreading like the plague. We are putting Bluetooth and software connectivity into everything in your life. As a security guy who knows the defect rate per thousand lines of code, if you are putting 10 million lines into a car this is not funny anymore. We are looking at things that affect our families being the same things that affect our web apps.” Obviously, having a web browser crash is one thing. Having a vehicle system crash is quite another.
Saving lives is obviously a worthy reason to rethink software security. Enterprises also have incentive to take risk management seriously from a purely financial perspective. Heartbleed and Shellshock certainly demonstrated that poor attention to detail can result in plenty of unplanned, unscheduled work. On an annual basis, Corman says some organizations are spending millions to fix vulnerabilities that should have been caught before deployment. “Now they have projects that are late, over budget, late to market, and they are losing out to competitors.”
It is open season on open source
According to Corman, open source is a public health issue. “With shared value comes shared risk.” But too many companies are still oblivious to the dangers. Even when known vulnerabilities have fixes available, a huge percentage of organizations are not applying them. Corman painted a grim picture of what failure to take preventive measures can do to a company.
In his proposed scenario, approximately 90% of the code used by a typical organization is third-party, and the majority of that is open source. Most teams estimate that about 50 unique components end up in an average application, but the actual number is more than 100, since each piece of open source code may pull in additional downstream dependencies. Of these components, 23% have a vulnerability with a known fix available, and another 8% carry restrictive licenses that may lead to litigation. Even with a small application portfolio and only 10% of problems manifesting per year, Corman estimated a business could easily spend a quarter of a million dollars on wasted payroll and unplanned work. “That’s time and money that could have been spent delivering value to customers.” Of course, poor code hygiene isn’t the only reason enterprises may be bleeding cash.
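Corman's figure can be sanity-checked with a back-of-envelope model. Only the ~100-component count and the 23% and 10% rates come from the scenario above; the portfolio size, remediation hours, and hourly payroll rate below are illustrative assumptions, not numbers from the talk.

```python
# Hypothetical model of unplanned remediation cost for open source
# vulnerabilities. Percentages come from Corman's scenario; the
# portfolio size, fix effort, and payroll rate are assumptions.

APPS = 100                # applications in the portfolio (assumption)
COMPONENTS_PER_APP = 100  # ~100 once transitive dependencies are counted
VULNERABLE_RATE = 0.23    # components with a known-fix vulnerability
MANIFEST_RATE = 0.10      # share of problems that surface per year
HOURS_PER_FIX = 12        # unplanned work per incident (assumption)
HOURLY_RATE = 90          # loaded payroll cost in dollars (assumption)

incidents = APPS * COMPONENTS_PER_APP * VULNERABLE_RATE * MANIFEST_RATE
annual_cost = incidents * HOURS_PER_FIX * HOURLY_RATE
print(f"~{incidents:.0f} incidents/year, ~${annual_cost:,.0f} in unplanned work")
```

Under these assumptions the model lands in the same range as Corman's quarter-million-dollar estimate, which is the point: even modest per-incident costs multiply quickly across a dependency-heavy portfolio.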
The role of IT governance
Use of free third-party code is a thorny risk management problem, but governance of paid resources is a more prosaic issue that can also be overlooked until it spirals out of control. With various groups jockeying for resources and a perception of steadily decreasing cloud costs, it’s easy to blow through an IT budget without getting the greatest return per dollar spent. It’s not enough to simply monitor spending; in a self-service environment, there has to be a way to manage and allocate resources so they better align with business goals.
Chad Arimura, Co-Founder & CEO at Iron.io, shared the following thoughts about governance in a time when legacy and modern IT must coexist. “We were seeing most innovation happening within business unit groups themselves. We were definitely hearing about ‘bimodal IT’ where you have these legacy applications like CRM and ERP. The mission there is mainly to keep the lights on, perhaps move them to the cloud.” These areas have fairly predictable resource requirements and workloads, and the main focus for improvement is typically on doing the same things at a lower cost. They rarely need to spin up servers for app testing or rely on cloud bursting to absorb heavy traffic.
“On the other side of the house, you have this group that is innovating, building new features and functionalities and delivering them to the users of the enterprise. That’s where governance is important, and enterprises are trying to get their arms around implementing a container and a cloud strategy, how to empower developers to create these applications while still working within a governance model.”
While the specifics are a little fuzzy, there are at least a few products that enterprises can use to better control spending habits without creating unnecessary delays. “It is a brave new world around governance. That’s where tooling like Iron.io comes in that allows you to monitor and self-describe the infrastructure spend that you want put forth for your application. Developers can then self-serve within that model from OpenStack, for example, or the various managed providers like AWS. You are given a certain amount of infrastructure and then you can build upon that within the governance that the enterprise has put in place.”
In a similar vein, Atlassian has made a move to support better governance in DevOps by refining administrative permissions for its CI platform. Bamboo Project Head Allison Huselid reported, “We know that things can go crazy with big teams, so now you can lock down who can have access to add more repositories or projects. That way the configuration doesn’t get out of hand, but assigned permissions also prevent requests from becoming bottlenecks.”
Businesses must plan for the unprecedented
In the final analysis, Gartner probably said it best in their 2016 publication, Managing Risk and Security at the Speed of Digital Business. It’s impossible to foresee every upcoming circumstance, but organizations should at least try to get their minds around what could happen. “To assume that tomorrow will be just like today, or only slightly different, is a risk in itself. At this early stage, there are precious few best practices for digital business (risk management included), and most of these are only ‘next’ practices. To succeed, enterprises will have to blaze new trails. To be resilient, they will need to go beyond the ordinary, imagining responses to unprecedented but plausible circumstances.”
In the fields of governance and security, preparing for a responsive future starts with understanding what’s happening right now. This means organizations must take a hard, painful look at current shortfalls and wasteful habits with an eye toward meaningful change. In his final takeaway at the DevOps Summit, Corman suggested that software development teams should follow a model akin to that used at Toyota. If organizations opt for selecting code from a few high-quality suppliers, using the most recent versions, and tracking which parts go where, they are on the path to having a better grasp on risk management and security. The good news? These best practices should not hold business back. They will help drive it forward.