Devops needs to be able to SELECT COMPUTE_RESOURCES from CLOUD where LOCATION in (APPLICATION SPECIFIC RESTRICTIONS). This post is brought to us by Lori MacVittie.
The awareness of the importance of context in application delivery and especially in the “new network” is increasing, and that’s a good thing. It’s a necessary evolution in networking as both users and applications become increasingly mobile. But what might not be evident is the need for more awareness of context during the provisioning, i.e. deployment, process.
A desire to shift the burden of management of infrastructure does not mean a desire for ignorance of that infrastructure, nor does it imply acquiescence to a complete lack of control. But today that’s partially what one can expect from cloud computing. While the fear of applications being deployed on “any old piece of hardware anywhere in the known universe” is not entirely a reality, the possibility of having no control over where an application instance might be launched—and thus where corporate data might reside—is one that may prevent some industries and individual organizations from choosing to leverage public cloud computing.
This is another one of those “risks” that tips the scales of risk versus benefit to the “too risky” side, primarily because the legal implications of doing so make organizations nervous.
The legal ramifications of deploying applications—and their data—in random geographic locations around the world differ based on what entity has jurisdiction over the application owner. Or do they? That’s one of the questions that remains to be answered to the satisfaction of many and which, in many cases, has led to a decision to stay away from cloud computing.
“According to the DPA, clouds located outside the European Union are per se unlawful, even if the EU Commission has issued an adequacy decision in favor of the foreign country in question (for example, Switzerland, Canada or Argentina).” -German DPA Issues Legal Opinion on Cloud Computing
Back in January, Paul Miller published a piece on jurisdiction and cloud computing, exploring some of these same legal quandaries:
While cloud advocates tend to present “the cloud” as global, seamless and ubiquitous, the true picture is richer and complicated by laws and notions of territoriality developed long before the birth of today’s global network. What issues are raised by today’s legislative realities, and what are cloud providers—and their customers—doing in order to adapt?
To date there are two primary uses for GeoLocation technology. The first is focused on performance, and uses the client’s location to determine which data center is closest and thus, presumably, will provide the best performance. This is most often the basis for content delivery networks like Akamai and Amazon’s CloudFront. The second is to control access to applications or data based on the location from which a request comes. This is used, for example, to comply with U.S. export laws by preventing applications containing certain types of cryptography from being delivered to those specifically prohibited by law from obtaining such software.
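That second use is simple to picture in code. Below is a minimal sketch of such a check, assuming some IP-to-country geolocation service is available; the function names and the country list are invented for illustration, not drawn from any real compliance list or library.

```python
# Minimal sketch of geolocation-based access control.
# geo_lookup() is a stand-in for whatever IP-to-country service a
# deployment actually uses (a commercial database, a CDN header, etc.).

EXPORT_RESTRICTED = {"CU", "IR", "KP", "SY"}  # illustrative ISO country codes only

def geo_lookup(client_ip: str) -> str:
    """Resolve a client IP to an ISO 3166 country code (stubbed here)."""
    raise NotImplementedError("plug in a real geolocation provider")

def may_download_crypto(client_ip: str) -> bool:
    """Deny delivery of export-controlled software to restricted locations."""
    country = geo_lookup(client_ip)
    return country not in EXPORT_RESTRICTED
```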
There are additional uses, of course, but these are the primary ones today. A third use should be constraining application provisioning based on specified parameters.
While James Urquhart touches on location as part of the criteria for automated acquisition of cloud computing services, what isn’t delved into is the enforcement of location-based restrictions during provisioning. The question is presented more as “do you support deployment in X location?” rather than “can you restrict deployment to X location?” It is the latter piece of the equation that needs further exploration and experimentation, specifically in the realm of devops and automated provisioning, because it is this part of the deployment equation that will cause some industries to eschew the use of cloud computing.
Location should be incorporated into every aspect of the provisioning and deployment process. Not only should a piece of hardware—server or network infrastructure—be capable of describing itself in terms of resource capabilities (CPU, RAM, bandwidth), it should also be able to provide its physical location. Provisioning services should further be capable not only of including location restrictions as part of the policies governing the automated provisioning of applications, but of enforcing them as well.
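As a thought experiment, that kind of self-description and enforcement might look something like the minimal sketch below; every name in it is invented for illustration rather than drawn from any existing API.

```python
from dataclasses import dataclass

@dataclass
class ComputeResource:
    """A resource that reports its capabilities and its physical location."""
    cpu_cores: int
    ram_gb: int
    bandwidth_mbps: int
    location: str  # e.g. an ISO country code reported by the provider

def eligible(resource: ComputeResource, allowed_locations: set[str]) -> bool:
    """Enforce a location restriction during provisioning, not after the fact."""
    return resource.location in allowed_locations

# A provisioning service would filter its pool before placement:
# candidates = [r for r in pool if eligible(r, {"DE", "FR"})]
```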
Standards Need Location-Awareness
Current standards efforts such as the OCCI specification [PDF] (intended as a means to query cloud computing implementations and their components for information) do not make it easy to query a resource for its location at run-time. The specification does, however, allow you to select all resources residing in a specific location—assuming you already know what that location is, which nearly ends up as a circular reference. The whole problem revolves around the fact that standards and specifications and APIs have been developed with the belief that location wasn’t important—you shouldn’t have to know—without enough consideration for regulatory compliance and the problems of mixing data, laws, and location. It would be very useful, given the state of cloud computing and its “Wizard of Cloud” attitude toward infrastructure transparency, to provide location as an attribute of every resource—dynamically—and further offer the means by which location can easily be one of the constraints.
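To make the gap concrete, an OCCI-style filtered query might look roughly like the sketch below. OCCI’s HTTP rendering filters collections via Category and attribute headers, but the location attribute shown here is precisely the piece that is not standardized, so treat it (and the endpoint) as hypothetical.

```python
import requests  # third-party HTTP client

OCCI_ENDPOINT = "https://cloud.example.com/compute/"  # hypothetical endpoint

# Filter a compute collection by a (hypothetical) location attribute.
headers = {
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": 'location="DE"',  # no such standard attribute exists today
}

response = requests.get(OCCI_ENDPOINT, headers=headers)
for line in response.text.splitlines():
    print(line)  # in principle, one location entry per matching resource
```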
Having available some standardized method of retrieving the physical location of a device or system would allow a provisioning system to restrict its pool of available resources based on a match between the location restrictions required by the customer and the locations of available resources. The reason for making location an attribute of every “kind” of resource is that restrictions on application or data location may extend to data traversal paths. Some industries have very specific requirements regarding not only the storage of data and access to it by applications, but the transmission of that data as well. These requirements may include the location of network devices that have access to the data for processing purposes. What seems trivial to many of us becomes highly important to courts and lawyers, so it behooves network devices and components to be able to report their location as well; from that, automated application-specific routing tables could eventually be derived, protecting the interests of organizations that are sensitive at all times to the location of their data.
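Here is a toy illustration of extending the same check from at-rest location to the traversal path; the locations and names are invented.

```python
# Toy illustration: a location restriction applied to a data traversal
# path, not just to where data rests. Locations here are invented.
ALLOWED = {"DE", "FR", "NL"}  # locations the application's data may touch

def path_compliant(hop_locations: list[str]) -> bool:
    """Every device on the traversal path must sit in an allowed location."""
    return all(loc in ALLOWED for loc in hop_locations)

print(path_compliant(["DE", "FR", "DE"]))  # True: all hops allowed
print(path_compliant(["DE", "US", "DE"]))  # False: one hop is out of bounds
```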
This also implies, of course, that the infrastructure itself is capable of enforcing such policies, which means it must be location-aware and able to collaborate with the infrastructure ecosystem to ensure that not just at-rest location complies with application restrictions but traversal-location as well, if applicable. That’s going to require a new kind of network, one based on Infrastructure 2.0 principles of collaboration, connectivity, integration and intelligence.
The inclusion of physical location as part of the attributes of a component, made available to automated provisioning and orchestration systems, could enable these types of policies to be constructed and enforced. It may be that a new attribute descriptor is necessary, something that better describes the intent of the meta-data, such as restriction. A broad restriction descriptor could, in addition to location, contain other desired provisioning-based attributes such as minimum RAM and CPU, network speed, or even—given the rising concerns regarding the depletion of IPv4 addresses—the core network protocol supported, i.e. IPv6, IPv4, or “any”.
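A sketch of what such a “restriction” descriptor might contain follows; the schema is invented for illustration and is not part of any standard.

```python
# Invented sketch of a broad "restriction" descriptor attached to a
# provisioning request; no standard currently defines this schema.
restriction = {
    "location": {"allowed": ["DE", "FR"],
                 "applies_to": ["compute", "storage", "network"]},
    "min_ram_gb": 8,
    "min_cpu_cores": 4,
    "min_network_mbps": 1000,
    "ip_protocol": "any",  # "ipv4", "ipv6", or "any"
}

def satisfies(resource: dict, restriction: dict) -> bool:
    """Check a candidate resource against the restriction descriptor."""
    return (
        resource["location"] in restriction["location"]["allowed"]
        and resource["ram_gb"] >= restriction["min_ram_gb"]
        and resource["cpu_cores"] >= restriction["min_cpu_cores"]
        and resource["network_mbps"] >= restriction["min_network_mbps"]
        and restriction["ip_protocol"] in ("any", resource["ip_protocol"])
    )
```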
If not OCCI, then some other standard—de facto or agreed upon—needs to exist, because one thing is certain: something needs to make that information available, and something else needs to be able to enforce those policies. And that governance over deployment location must occur during the provisioning process, before an application is inadvertently deployed in a location not suited to the organization or application.
This post is brought to us by Greg Ness.
I’ll be speaking on a panel at Cisco Live on July 1. I’m looking forward to talking about the new demands on network infrastructure and whether or not the enterprise is ready for seamless cloud. Frankly, so much of the discussion about cloud concerns SMEs (or apps), and so little concerns the readiness of cloud for the enterprise, that it is refreshing for Cisco Live to embrace this topic.
Even the mention of “private cloud” gets negative reactions from some of the clouderati. I heard “no such thing” yesterday on a cloud pundit call. Yet at the end of the day, enterprises will be assessing when, where and what can be delivered from any cloud versus a private cloud and the answers will have a significant impact on the evolution of cloud computing.
While I think Amazon and Google have done well delivering undifferentiated services via subsidized business models, it is fair to ask when and how enterprises can take to the clouds. IMHO, it’s when the network is ready.
You can view the session abstract here: Seamless Enterprise Extension to Cloud (SEEC) – Ready for Primetime?
Or you can read it here:
Length: 2 Hours
Abstract: Infrastructure resources acquired by enterprises in a Cloud typically remain isolated from the enterprise (DC and network). Enterprises typically run classes of applications that are not mission-critical, do not require a high degree of security or trust, and are not real-time or suitable for batch processing (we refer to infra resources and applications as just resources). These resources may also not need full application of enterprise policies (security, access control, QoS, firewall, etc.). But can we extend the scope of Cloud to support a wide range of enterprise resources? In other words, can we seamlessly extend an enterprise to Cloud and vice versa? What are the mechanisms (such as security, network, VPC: Virtual Private Cloud, Cloud Service Level, and InterCloud capabilities) that are needed to facilitate SEEC?
You can follow my (Twitter) rants in real-time at Archimedius. I am a vice president at Infoblox.
Last week we held a webinar on network automation with Forrester Senior Analyst Glenn O’Donnell and US Bank VP Eric L. Cummings. Stay tuned for the link and slides. In the meantime, Eric offered to answer the following questions from the audience via the infrastructure 2.0 blog:
Was there a particular event or series of events that helped you consider automation?
The search for automation was initiated by the sudden growth of our organization in the late 1990s and early 2000s. This growth was due to three successive mergers of equal-size companies over a five-year period to form today’s US Bank. It would be nice to say that each company was completely standardized before each merger, but that was not the case. Our organization decided on a path of centralized control and tight standards to ensure a high level of availability to the end user. We had to find an efficient way to ensure consistency and allow for a centrally controlled environment.
How important was automating DNS, DHCP and/or IPAM to your organization?
It was extremely important for us to have one unified method for providing DNS/DHCP and IPAM for our organization. The plans that were created after the last major merger allowed us to re-engineer our DNS/DHCP and network infrastructures. We wanted a cost-effective but highly resilient way to centrally manage these services. Our first attempt at this with a competing vendor allowed us to realize some of the benefits of our centralized management strategy. We were able to provide central control and management with a few staff members, but didn’t fully realize our goals as our company continued to grow.
What were the things, benefits, advantages that your team found most compelling about network automation?
First, we wanted a tool set that would turn repetitive tasks prone to human error into simple activities that reduced that potential for error. Second, we wanted to feel that our chosen tool’s manufacturer was a partner in our solution. We wanted to focus more on our network’s design and maintenance than on how to do things; our partner would focus on maintaining its automation tools’ market leadership and help us meet our needs with future releases. Third, we wanted a tool that would allow us to develop our own scripts for highly complex or tailored activities to significantly reduce the man-hours involved. One such example is the provisioning of new networks for branches. Entering the network, DHCP, DNS, and other configurations once took an hour or so to complete. Now it is just a few data entry points, and the system does the rest using templates that we created.
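(As a purely illustrative aside, template-driven provisioning of the sort Eric describes might look roughly like the toy sketch below; the field names and values are invented and do not reflect US Bank’s or any vendor’s actual tooling.)

```python
from string import Template

# Toy branch-provisioning template; the format and field names are
# invented and do not reflect any vendor's actual configuration syntax.
BRANCH_TEMPLATE = Template("""\
network $network/24
dhcp-range $dhcp_start $dhcp_end
dns-zone $branch.example-bank.internal
""")

def provision_branch(branch: str, network: str) -> str:
    """Expand a handful of data-entry points into a full branch config."""
    prefix = network.rsplit(".", 1)[0]
    return BRANCH_TEMPLATE.substitute(
        network=network,
        dhcp_start=f"{prefix}.10",
        dhcp_end=f"{prefix}.250",
        branch=branch,
    )

print(provision_branch("branch042", "10.42.0.0"))
```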
What would have happened if you had not automated your network infrastructure?
If we hadn’t automated our network, it would be significantly less standardized and stable. We have been able to ensure proper maintenance of routers and switches. We wouldn’t have been able to maintain a virtually flat FTE level while our environment was growing significantly. Standardization levels for settings and IOS versions would be poor, and our company would be at extremely high risk of unintended, longer-than-normal outages.
What are some of the projects that your team has been able to address now that you have automated your core network services?
Currently, my team’s implementation of Infoblox for our IPAM/DNS/DHCP services has allowed us to expand our involvement to 100 percent of the work for our internal environments. We are also looking to extend our services to our external DNS resolution in the coming year. Our team has been able to create an N+1 system with Infoblox that allows us to run extremely reliable and easy-to-implement DR tests. In the past we spent all of our time keeping our heads above water. Now we have time to interact with the network engineers and project teams so that new implementations, designs, and the like take into account the day-to-day realities our team faces. We have become a partner in the solution versus an automaton.