Today’s hybrid clouds don’t live up to the name, in Gartner analyst Eric Knipp’s opinion. Without application portability, hybrid clouds are just a collection of remote and in-house applications that have been tailored for each environment and loosely integrated.
“When I think of hybrid cloud, I’m thinking of a place where I can write an application and then deploy it to any hosting location without changing the application at all,” said Knipp, Gartner managing VP. In a true hybrid cloud, workloads could be shifted easily between private and public clouds to facilitate deployment and efficient scaling. Using either policy or the flick of a switch, a developer could make that same application scale to many other places, which may span private, public or even multiple public providers.
Deploying a single application across many environments with no real limitations is a capability that doesn’t exist today, said Knipp, who is co-author of Gartner’s 2015 Planning Guide for Cloud Computing. “To get there, we have to be architecting portable cloud applications in a way that dimensionally can live up to this hybrid cloud fantasy that we have today,” he explained. “I think it’s going to exist, because I see some building block technologies that are maturing.”
Using a platform as a service (PaaS) framework, like Red Hat OpenShift or Cloud Foundry, is a first step toward such “real” hybrid clouds. PaaS greatly increases application developers’ productivity by outsourcing the “plumbing” required for software development. PaaS provides infrastructure provisioning, development tools and cloud testing so that developers can focus on building the business logic and other important features.
Some say that Docker is the best basic building block for the hybrid cloud, but Knipp disagrees. Docker is an open source container technology that bundles an application together with its dependencies; a container is where a program building block, or component, runs. “Docker is a useful enabling technology, but doesn’t solve most portability challenges,” Knipp said. What Docker enables is packaging software into an image that can be shared. “It doesn’t help you with the external dependencies, and that’s where the developer’s problem really lies.” Even before Docker, Knipp said, it wasn’t that hard to build an application, even as a .jar, that could be deployed across many different instances of Tomcat.
“Stacking services is what was hard to solve before Docker and still remains hard to solve after Docker,” Knipp explained. All the components involved — the database, the message queues, the in-memory data grids, the analytics service, and so on – add tremendous complexity. It’s not just what’s in that application that’s difficult to handle, it’s all the backend services, Knipp said. PaaS helps by providing and automating many of those services.
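The external-dependency problem Knipp describes is the one the twelve-factor methodology addresses by treating backing services as attached resources: endpoints live in configuration, not in the deployable artifact, so the same build can run in any environment. A minimal Python sketch of the idea (all service names and URLs are illustrative):

```python
import os

# Local-development defaults; production endpoints are injected via
# environment variables, so the deployable artifact never changes
# between hosting environments.
DEFAULTS = {
    "DATABASE_URL": "postgres://localhost:5432/dev",
    "MESSAGE_QUEUE_URL": "amqp://localhost:5672",
}

def backing_service(name):
    """Resolve a backing-service endpoint from the environment,
    falling back to the local-development default."""
    return os.environ.get(name, DEFAULTS[name])

# The deployment target, not the code, decides where the database lives.
db_url = backing_service("DATABASE_URL")
```

The same pattern extends to message queues, data grids and analytics services; a PaaS typically automates the injection step.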
Portability has to be the key non-functional requirement for any new application being built in the cloud. “Just because I can take my application and drop it someplace else doesn’t actually make it portable,” Knipp concluded. “For me, hybrid cloud will only be achieved when we have a much higher degree of application portability.”
Twenty-one percent of respondents to a survey by Gigaom Research said they use the cloud to develop enterprise applications. The technology research company recently published the results of three different surveys in a report by analyst George Crump called “Shadow IT: data protection and cloud security.”
The low percentage of developers creating applications in the cloud may be due to some of the inherent issues of using the cloud. The surveys polled software-application developers and other IT workers and decision makers on a variety of cloud practices and issues, including their top cloud concerns. Security, performance, learning cloud-related skills, cost and support were the highest ranked, with security well in the lead.
Incidentally, employees may pose the greatest security risk to cloud adopters. Employees commit more than 70% of unauthorized data access, whether accidentally or intentionally. Employees also knowingly use unauthorized cloud applications. Eighty-one percent said they use SaaS applications without IT approval.
Another key finding of the report was that companies use many cloud instances. Of the companies surveyed, 32% said they have more than 50 cloud machine instances in production. An additional 30% run one to 10 instances, and 23% run 11 to 50.
A recent report by Forrester Consulting confirmed this multi-cloud reality. About 83% of respondents struggle with compiling their disparate cloud services, according to the report, which was commissioned by Quincy, Mass.-based software company Infosys.
The report, “Simplify and innovate the way you consume cloud,” also highlighted user experience as a hurdle to working in the cloud. Between 60% and 70% of survey respondents said they are concerned or very concerned about the following: the complexity of managing and governing hybrid cloud, that application performance metrics such as availability and speed will be negatively affected in the cloud, and that integration with other applications will be difficult.
When enterprises adopt cloud computing, many of their legacy methods of software integration are instantly obsolete. Hanging on to old integration methods is like trying to fit square pegs into round holes, according to Eric Knipp, Gartner Inc. managing VP. Integration is the biggest barrier to cloud deployment success and the primary driver of change and new opportunities in developers’ and enterprise architects’ jobs, he said.
Application integration has been a pain point for organizations for a long time, Knipp said. Initial efforts focused on point-to-point integration, which usually resulted in fragile integration points that, when multiplied, were not robust. The integrations would break during software upgrades, causing long release cycles. Over time, some, but not all, enterprises remediated integrations using integration middleware, including enterprise service buses (ESBs) and APIs.
“Yet, when organizations adopt SaaS, I see the same point-to-point integrations happening,” Knipp said. “Haven’t we been here before? Don’t we know how this movie ends? It’s not good.”
Today, cloud integration must be a core competency in the utility belt of the modern application developer in an enterprise. “When you’re talking about the mainstream enterprise that is not in the business of writing software unless they have no choice, integration has to absolutely be the first thing that you focus on,” Knipp said.
Gartner Research Director Kyle Hilgendorf agreed. “I like to refer to integration as kind of a four-headed monster,” he said. “You’ve got network integration, data integration, identity integration and then application or services integration.” Knipp and Hilgendorf are co-authors, along with four other analysts, of Gartner’s recent 2015 Planning Guide for Cloud Computing.
To handle all four types of integration well, enterprise architects must ensure that those integration architectures are “pre-plumbed or pre-connected” prior to adoption of public cloud, Hilgendorf said.
In advance of adopting cloud, build a cloud-friendly enterprise architecture and integration strategy, Hilgendorf advised. “Think about how to bridge your networks between your data center and one or more cloud providers,” he said. “Think about how you will do single sign-on and identity and access management federation or synchronization.” Consider how to synchronize the business’ identities for authentication and authorization requests. “All that work and much more has to be done up front by the enterprise architect and architecture team,” he said.
The integration challenges of cloud adoption alone give architects and developers a once-in-a-lifetime opportunity to retool their skillsets for a long-term, successful career, according to both analysts. With the right skills, they’ll be valued leaders as businesses transition from traditional application architectures, deployment methodologies and sourcing arrangements.
“It is more critical than ever for integration to be a core competency in the utility belt of the modern application developer in an enterprise,” Knipp said.
Denodo Express, a free data virtualization tool with a graphical user interface-based studio, is the latest product offered by Denodo Technologies, Inc. Denodo Express connects to and integrates data sources that live on premises or in the cloud, whether structured, unstructured or big data. These sources are then delivered to end users, as well as to enterprise applications, dashboards, portals, intranets, search and other tools.
“Denodo Express is designed for those data architects that are tired of being a prisoner to archaic data integration methods or are just generally frustrated at not being able to leverage the true value of their data,” said Suresh Chandrasekaran, senior VP of Denodo.
Denodo Express cuts down on the wait time developers often face when they want to integrate data. Along with integration of disparate sources, Denodo Express also performs abstraction; query optimization; caching; extract, transform and load batch scheduling; and data services publishing.
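Conceptually, data virtualization puts a single query layer, often with a cache, in front of disparate sources. The following Python sketch illustrates only that general idea; it is not Denodo's API, and all names are hypothetical:

```python
class VirtualView:
    """Conceptual sketch of data virtualization: one query interface
    abstracting several underlying sources, with simple caching.
    This is a generic illustration, not Denodo's actual API."""

    def __init__(self, sources):
        # sources maps a logical name to a callable returning rows,
        # standing in for databases, files or web services.
        self.sources = sources
        self._cache = {}

    def query(self, name):
        # Serve from cache when possible; otherwise fetch and cache.
        if name not in self._cache:
            self._cache[name] = list(self.sources[name]())
        return self._cache[name]

# Two hypothetical sources unified behind one interface.
view = VirtualView({
    "crm": lambda: [{"customer": "Acme"}],
    "billing": lambda: [{"customer": "Acme", "owed": 120}],
})
```

A real product adds query optimization, scheduling and publishing on top of this abstraction, as the paragraph above notes.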
With this new product, Denodo, which is based in Palo Alto, Calif., seeks to eliminate the need for developers to search for and interpret data on their own. Instead, “Denodo Express [delivers it] to them in the format that they prefer, be that [structured query language] for [business intelligence] and reporting tools, Web services for Web and mobile applications, Web parts for SharePoint integration, etc.” said Chandrasekaran.
Denodo Express is free and available for download on the company’s website. The download gives users access to the product for an annual, renewable term. There are no time limits on using Denodo Express; however, the product is best used for smaller projects, such as personal or departmental projects. Once customers want to use Denodo’s data virtualization on an enterprise level, it is recommended that they upgrade to the Denodo Platform, which is not free. Customers interested in the expanded service should contact Denodo for pricing information.
Customers also have access to free educational support when they download Denodo Express. There are tutorials and videos from Denodo experts, as well as online community-based support. Further training courses are fee-based and available online and offline.
The product was released on September 29, 2014.
The refresh cycle, or application modernization boom, in human capital management (HCM) started a few years ago, but purchases and deployments took off last year, thanks to new capabilities in data and application consolidation, reporting and more, says Gartner Inc. analyst Chris Pane.
“Organizations are looking to go beyond the basics in HCM information systems,” says Pane. “They’re trying to consolidate the applications they have into a single data model, one single reporting view.” The one-view-for-all approach is easier to manage, secure and scale across international borders.
Pane is one of several experts in HCM systems my colleagues and I have interviewed lately for articles on deploying human resource applications in the cloud.
While most Fortune 1000 HCM upgrades have been designed for on-premises systems in recent years, cloud services’ scalability and consolidation opportunities are attractive, too. A good bit of the consolidation effort is taken out of the business’s hands in the cloud. In addition, adoption is growing due to greater sophistication in cloud feature functionality and increasing agility in delivering new functionality to geographically dispersed organizations.
Pane projects growth in cloud HCM deployments because cloud technologies have matured. Also, wider corporate usage of cloud has quashed some businesses’ fears. “Don’t forget also that businesses have outsourced payroll for years already,” he says. “So, moving more of HCM to a third party and not sort of doing the process on-premise (sic) is culturally acceptable.”
HCM directors are becoming less concerned about cloud security, too. These days, most of the tried-and-trusted HCM providers have their own very secure data center facilities or use mega-cloud providers like AWS. “Just in terms of physical security there, cloud is arguably a lot stronger than what you would get in a conventional data center,” Pane says.
More stability is needed, however, before government and financial organizations, among others, put HCM in the cloud. Even mainstream businesses should take precautions, such as spreading their cloud instances across data centers in several geographies.
Pane offers three more tips on choosing new HCM apps:
• Make sure that you have a clean set of data to move to the new system for reporting purposes.
• Ask yourself if all the functions in the current system are still appropriate for today, a must considering the cost of feature deployment. It is not necessary to replicate everything in the old system within the new system, and doing so can complicate the migration.
• On the other hand, make sure that no functionality is lost when buying a new HCM system.
Get more tips on HCM and cloud adoption in my article, Six Things to do before Deploying Cloud Apps. Then, get involved in the conversation, and tell us your best practices for cloud app deployment.
Gordon E. Moore, co-founder of Intel, noticed that over the course of tech history, the number of transistors on a chip doubled every two years. His observation, later dubbed “Moore’s law,” came to stand for the general upward trend of processor speeds. Over the past few years, however, new content outlets — social media, in particular — have caused an explosion of unstructured data, a phenomenon that has far outpaced Moore’s law.
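The arithmetic behind that comparison is simple: a fixed two-year doubling period compounds to roughly a 32-fold increase per decade, a pace that recent unstructured-data growth has exceeded. A quick sketch:

```python
def growth_factor(years, doubling_period=2):
    """Growth predicted by a fixed doubling period, as in Moore's law."""
    return 2 ** (years / doubling_period)

# Doubling every two years compounds to a 32x increase per decade.
decade = growth_factor(10)
```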
There is no single super-tool to tackle the growing generation of big data, according to Ben Butler, senior solutions marketing manager of big data at AWS. In his session at the AWS Summit in San Francisco, Butler advocated instead for a network of solutions — AWS solutions, to be specific — that leverage the flexibility, capacity and cost-effectiveness of the cloud.
Last week, Butler hosted another session at an AWS Summit in New York. His talk drilled down into AWS solutions a bit further, offering specific use cases from different industries.
Big data has been used for fraud detection, click stream analysis and ad targeting, to name a few. One of the more exciting use cases is gene sequencing. This analysis of genetic variation can be used for disease research, personalized medicine and molecular testing. It is, in short, a tool that contributes to our understanding of disease and could be instrumental to the evolution of healthcare.
The sudden influx of big data has put pressure on on-premises systems that, just a few years ago, stored, analyzed and shared data without much trouble.
“DNA sequencing is scaling faster than Moore’s Law, so processing the sequence data is an increasingly significant barrier,” said Alex Dickinson, VP of strategic initiatives at Illumina, a genetic research company. Dickinson confirmed that the best solution for this processing bottleneck was cloud computing.
All of Illumina’s raw data streams from its sequencing instruments, over the Internet, to AWS, Dickinson explained. “There the data undergoes intensive processing to assemble final genomes from that raw data. It is then stored on AWS and made available to researchers for further analysis.” In other words, most of the big data lifecycle is processed on AWS.
Dickinson cited three reasons for selecting Amazon over other cloud providers. One, AWS has large instances that can handle big loads of raw data. Two, AWS has sites all over the globe. Three, AWS has competitive pricing.
Whether big data researchers choose AWS or not, the cloud is certainly the next frontier for processing massive datasets. In Illumina’s case, it is removing computational constraints and, by extension, generating more opportunities for scientific insight. As Dickinson put it, “the cloud enables raw instrument data to be transformed into disruptive healthcare discoveries.”
With over a billion users, Android has become the most popular Linux distribution in the consumer market, according to Ron Munitz, CTO and founder of Nubo, a remote Android workspace solution. Now that enterprises are rushing to migrate their Windows- or Linux-based deployments to the cloud, Munitz believes it makes sense to consider Android cloud apps as servers in and of themselves. He presented this viewpoint at AnDevCon in Boston, in a session called “Building Android for the Cloud.”
“If there is a migration from Linux to Android, and if there is a massive migration to the cloud, then it makes sense to combine all of them together and to expand Android to be a dominant cloud system on its own,” Munitz said. He believes the next step for Android is to make server-side applications, not only for users, but for organizations.
Munitz acknowledged many challenges that blocked progress in this direction. The primary drawback is latency, which would be introduced to applications running from Android’s cloud system. Android would also have to choose a remote display protocol (RDP) and, according to Munitz, there aren’t any RDPs that could handle Android well enough to satisfy users. After all, the user interface (UI) is the most important part of the mobile device, from the user’s perspective. And this is the part that would be subject to latency were its entire backend moved to the cloud.
That said, there are many reasons to entertain this concept of Android as a full-fledged cloud operating system or, as Munitz put it, “Cloudroid.” One reason is security, which has become a more pressing concern since the rise of BYOD. “Companies say you can have access to the organization’s data but that they need to verify that you will not steal data or, if you lose your phone, that it won’t be a risk.” Data held in Android cloud servers would be one way to protect the enterprise from this risk.
For the time being, Munitz’s position is largely abstract and speculative, but his premise is intriguing. In some regards, this seems to be the natural next step for Android. And, as Munitz put it, only a few years ago, Amazon was known as a book seller. Somehow, it has grown to become a dominant cloud provider. It seems like a much smaller leap to imagine Android doing the same.
Waistlines traditionally expand as the weather gets colder. Quest Software, recently acquired by Dell, has discovered bloating can be a problem for companies using cloud applications as well, according to results from a new survey the company sponsored.
Senior IT officials from 150 companies with more than 500 applications and $500 million in revenue were surveyed by Harris Interactive for the report, which concludes companies are potentially losing millions due to poor application management.
More than half of respondents said applications that were slow, unresponsive or crashed cost their businesses big money each year. Twenty-nine percent of respondents reported losing money in the millions, and 7% said they lost tens of millions or more each year.
Quest is a maker of application performance management (APM) software, a field that has expanded in step with the growing world of cloud and mobile applications. Legacy vendors like Hewlett-Packard, IBM, Oracle and Microsoft compete with mid-sized companies and upstarts, from AppDynamics and New Relic to AppNeta, for control of this growing market.
The pitch from these companies is similar: If no one is watching your applications, you’re losing money. Automating the monitoring of applications and building alerts when something isn’t working properly can reduce downtime and save money. Additionally, some APM tools have predictive analytics capabilities that alert users to problems before they happen.
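At its simplest, the alerting these vendors pitch is a comparison of measured metrics against thresholds, with a notification on breach. A minimal sketch of that idea, with made-up endpoint names and thresholds rather than any vendor's API:

```python
# Illustrative response-time thresholds in milliseconds.
THRESHOLDS = {"checkout": 500, "search": 300}

def check_metrics(samples):
    """Return alert messages for any endpoint whose average
    response time exceeds its configured threshold."""
    alerts = []
    for endpoint, times in samples.items():
        avg = sum(times) / len(times)
        limit = THRESHOLDS.get(endpoint)
        if limit is not None and avg > limit:
            alerts.append(f"{endpoint}: avg {avg:.0f}ms exceeds {limit}ms")
    return alerts

# A real APM tool would collect these samples from instrumented
# applications; predictive products extrapolate trends instead of
# waiting for the threshold to be crossed.
```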
In the survey’s view, that would be a big help for IT departments unable to keep a watchful eye on all their applications. According to the survey, 76% of IT managers said less than half of their applications are accessed more than five times a day.
Companies have been making use of APM tools to fix a wide variety of problems. Vodafone Ireland used HP’s APM tools to increase performance and centralize monitoring, and Aptela used AppNeta’s APM appliance to fix problems with its users’ networks.
Follow Adam Riglian on Twitter @AdamRiglian
In September, Amazon Web Services launched a marketplace for reserved instances — a contracted, fixed-term version of its cloud infrastructure. Cue the analytics startup.
InstanceVibe.com, a two-week-old baby of a website launched by Roman Stepanenko, offers analytics and alerts to prospective buyers in the reserved instances marketplace.
“Generally, each company has a preferred timeframe for the amount of time they want to have an instance. Especially with the startups, if you want to have a reserved instance, you have to pay some cash up front,” he explains. “If you want to find [the] perfect instance for your needs, you need to keep logging into the AWS console. [The] natural solution is to supply some sort of alert where you supply the criteria to what you’re interested in and you’re notified by email.”
Stepanenko, a former financial services developer who founded structural exception search engine BrainLeg in April, said he bought the domain right after he saw Amazon’s announcement. The website launched two weeks ago. He got the idea for the site from his own experiences with the reserved instances marketplace.
InstanceVibe users set criteria for the type of instance they want to find, including the amount of time they want on the contract and the amount of usage. InstanceVibe scans the marketplace regularly and sends alerts to users when instances matching their criteria become available.
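The matching described above amounts to filtering marketplace listings against each user's criteria on every scan. A sketch in Python, with field names assumed for illustration rather than taken from InstanceVibe's actual schema:

```python
def matches(listing, criteria):
    """True if a marketplace listing satisfies every user criterion:
    the right instance type, enough remaining contract term, and a
    price at or below the user's ceiling. Field names are assumptions."""
    return (listing["instance_type"] == criteria["instance_type"]
            and listing["months_remaining"] >= criteria["min_months"]
            and listing["price"] <= criteria["max_price"])

def scan(listings, criteria):
    """One pass over the marketplace. In the real service, hits would
    trigger an email alert, and every listing would also be stored
    for historical price analysis."""
    return [listing for listing in listings if matches(listing, criteria)]
```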
Alerts are free for t1.micro instances. Costs scale up to $9.99 for two weeks and $14.99 for four weeks of unlimited alerts for any instance. Each time the marketplace is scanned, the data is stored in a historical prices database and analyzed to show the best possible prices over a certain amount of time. Those analytics are free.
“Every time I scan the marketplace I am saving these data points in my database and that allows me to analyze when instances are sold and when they become listed,” Stepanenko said. “I can calculate the best costs of ownership historically [based on the information].”
Read more from Adam Riglian on SearchCloudApplications.com. Follow him on Twitter @AdamRiglian
Larry Ellison took a swipe at SAP HANA during his keynote address Sunday night. By Monday afternoon, one of enterprise IT’s empires was prepared to strike back.
“For the last 24 hours, my eyebrows have been glued two-thirds of the way up my forehead,” said an incredulous Steve Lucas, executive vice president of Business Analytics, Database and Technology at SAP. “My first reactions were ‘you’ve got to be kidding me.’”
The quotes that had Lucas irate came when Ellison was discussing Exadata X3, the latest incarnation of Oracle’s database machine. He touted its 26 terabytes of in-memory capacity before drawing a comparison with HANA, something he had joked he would not do during the speech.
“I know that SAP has an in-memory machine. It’s a little smaller,” Ellison said.
Lucas says not so fast – SAP announced that HANA boasted 100 terabytes in-memory at Sapphire in May.
“These are the most baseless set of statements I’ve ever seen anyone in the market make,” Lucas said. “I don’t know where these people get their facts from, to me it’s absolutely mind-boggling.”
Sniping between the companies is nothing new, but Lucas said he was surprised at the form Ellison’s barbs took at this year’s conference.
“It wasn’t even the normal sort of half-truth. It was this ‘are you kidding me?’ kind of a statement,” he said.
Check in on our guide page for more coverage of Oracle OpenWorld and JavaOne.
Follow Adam Riglian on Twitter @AdamRiglian