Denodo Express, a free data virtualization tool with a graphical studio, is the latest product from Denodo Technologies, Inc. Denodo Express connects to and integrates structured, unstructured and big data sources, whether on premises or in the cloud. These sources are then delivered to end users, as well as to enterprise applications, dashboards, portals, intranets, search and other tools.
“Denodo Express is designed for those data architects that are tired of being a prisoner to archaic data integration methods or are just generally frustrated at not being able to leverage the true value of their data,” said Suresh Chandrasekaran, senior VP of Denodo.
Denodo Express cuts down on the wait time developers often face when they want to integrate data. Along with integration of disparate sources, Denodo Express also performs abstraction; query optimization; caching; extract, transform and load batch scheduling; and data services publishing.
With this new product, Denodo, which is based in Palo Alto, Calif., seeks to eliminate the need for developers to search for and interpret data on their own. Instead, “Denodo Express [delivers it] to them in the format that they prefer, be that [structured query language] for [business intelligence] and reporting tools, Web services for Web and mobile applications, Web parts for SharePoint integration, etc.” said Chandrasekaran.
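Denodo's engine is proprietary, but the core idea it sells, a single SQL-queryable layer sitting on top of sources that live in separate systems, can be sketched in a few lines. The tables, columns and data below are invented purely for illustration:

```python
import sqlite3

# Hypothetical example: two "sources" (a CRM export and a web analytics feed)
# are loaded into one in-memory catalog, then exposed through a single
# virtual view that clients query with plain SQL. Denodo's actual engine is
# far more sophisticated (query pushdown, caching, etc.); this only
# illustrates the abstraction idea.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crm_customers (id INTEGER, name TEXT, region TEXT);
    CREATE TABLE web_visits   (customer_id INTEGER, visits INTEGER);

    INSERT INTO crm_customers VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'APAC');
    INSERT INTO web_visits    VALUES (1, 42), (2, 7);

    -- The "virtual" layer: consumers see one view, not the source tables.
    CREATE VIEW customer_activity AS
        SELECT c.name, c.region, w.visits
        FROM crm_customers c JOIN web_visits w ON w.customer_id = c.id;
""")

rows = conn.execute(
    "SELECT name, visits FROM customer_activity ORDER BY visits DESC"
).fetchall()
print(rows)  # [('Acme', 42), ('Globex', 7)]
```

In a real deployment the underlying sources would be remote databases, files or APIs rather than local tables, and the virtualization layer would push query work down to them.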
Denodo Express is free and available for download on the company’s website. The download gives users access to the product for an annual, renewable term. There are no time limits on using Denodo Express; however, the product is best used for smaller projects, such as personal or departmental projects. Once customers want to use Denodo’s data virtualization on an enterprise level, it is recommended that they upgrade to the Denodo Platform, which is not free. Customers interested in the expanded service should contact Denodo for pricing information.
Customers also have access to free educational support when they download Denodo Express. There are tutorials and videos from Denodo experts, as well as online community-based support. Further training courses are fee-based and available online and offline.
The product was released on September 29, 2014.
The refresh cycle, or application modernization boom, in human capital management (HCM) started a few years ago, but purchases and deployments took off last year, thanks to new capabilities in data and application consolidation, reporting and more, says Gartner Inc. analyst Chris Pane.
“Organizations are looking to go beyond the basics in HCM information systems,” says Pane. “They’re trying to consolidate the applications they have into a single data model, one single reporting view.” The one-view-for-all approach is easier to manage, secure and scale across international borders.
Pane is one of several experts in HCM systems my colleagues and I have interviewed lately for articles on deploying human resource applications in the cloud.
While most Fortune 1000 HCM upgrades in recent years have been designed for on-premises systems, cloud services' scalability and consolidation opportunities are attractive, too. In the cloud, much of the consolidation effort is taken out of the business's hands. In addition, adoption is growing due to greater sophistication in cloud feature functionality and increasing agility in delivering new functionality to geographically dispersed organizations.
Pane projects growth in cloud HCM deployments because cloud technologies have matured. Also, wider corporate usage of cloud has quashed some businesses’ fears. “Don’t forget also that businesses have outsourced payroll for years already,” he says. “So, moving more of HCM to a third party and not sort of doing the process on-premise (sic) is culturally acceptable.”
HCM directors are becoming less concerned about cloud security, too. These days, most of the tried-and-trusted HCM providers have their own very secure data center facilities or use mega-cloud providers like AWS. “Just in terms of physical security there, cloud is arguably a lot stronger than what you would get in a conventional data center,” Pane says.
More stability is needed, however, before government and financial organizations, among others, put HCM in the cloud. Even mainstream businesses should take precautions, such as spreading their cloud instances across data centers in several geographies.
Pane offers three more tips on choosing new HCM apps:
• Make sure that you have a clean set of data to move to the new system for reporting purposes.
• Ask yourself if all the functions in the current system are still appropriate for today, a must considering the cost of feature deployment. It is not necessary to replicate everything in the old system within the new system, and doing so can complicate the migration.
• On the other hand, make sure that no functionality is lost when buying a new HCM system.
Get more tips on HCM and cloud adoption in my article, Six Things to do before Deploying Cloud Apps. Then, get involved in the conversation, and tell us your best practices for cloud app deployment.
Gordon E. Moore, co-founder of Intel, noticed that over the course of tech history, compute power doubled roughly every two years. Later coined “Moore’s law,” his observation was meant to predict the general upward trend of processor performance. Over the past few years, however, new content outlets, social media in particular, have caused an explosion of unstructured data, a phenomenon whose growth has far outpaced Moore’s law.
There is no single super-tool to tackle the growing generation of big data, according to Ben Butler, senior solutions marketing manager of big data at AWS. In his session at the AWS Summit in San Francisco, Butler advocated, instead, for a network of solutions — AWS solutions, to be specific — that leveraged the flexibility, capacity and cost effectiveness of the cloud.
Last week, Butler hosted another session at an AWS Summit in New York. His talk drilled down into AWS solutions a bit further, offering specific use cases from different industries.
Big data has been used for fraud detection, click stream analysis and ad targeting, to name a few. One of the more exciting use cases is gene sequencing. This analysis of genetic variation can be used for disease research, personalized medicine and molecular testing. It is, in short, a tool that contributes to our understanding of disease and could be instrumental to the evolution of healthcare.
The sudden influx of big data has put pressure on on-premises systems that, just a few years ago, stored, analyzed and shared data without much trouble.
“DNA sequencing is scaling faster than Moore’s Law, so processing the sequence data is an increasingly significant barrier,” said Alex Dickinson, VP of strategic initiatives at Illumina, a genetic research company. Dickinson confirmed that the best solution for this processing bottleneck was cloud computing.
All of Illumina’s raw data streams from its sequencing instruments, over the Internet, to AWS, Dickinson explained. “There the data undergoes intensive processing to assemble final genomes from that raw data. It is then stored on AWS and made available to researchers for further analysis.” In other words, most of the big data lifecycle is processed on AWS.
Dickinson cited three reasons for selecting Amazon over other cloud providers. One, AWS has large instances that can handle big loads of raw data. Two, AWS has sites all over the globe. Three, AWS has competitive pricing.
Whether big data researchers choose AWS or not, the cloud is certainly the next frontier for processing massive datasets. In Illumina’s case, it is removing computational constraints and, by extension, generating more opportunities for scientific insight. As Dickinson put it, “the cloud enables raw instrument data to be transformed into disruptive healthcare discoveries.”
With over a billion users, Android has become the most popular Linux distribution in the consumer market, according to Ron Munitz, CTO and founder of Nubo, a remote Android Workspace solution. Now that enterprises are rushing to migrate their Windows or Linux-based deployments to the cloud, Munitz believes it makes sense to consider Android cloud apps as servers in and of themselves. He presented this viewpoint at AnDevCon in Boston, in a session called “Building Android for the Cloud.”
“If there is a migration from Linux to Android, and if there is a massive migration to the cloud, then it makes sense to combine all of them together and to expand Android to be a dominant cloud system on its own,” Munitz said. He believes the next step for Android is to make server-side applications, not only for users but also for organizations.
Munitz acknowledged many challenges that blocked progress in this direction. The primary drawback is latency, which would be introduced to applications running from Android’s cloud system. Android would also have to choose a remote display protocol (RDP) and, according to Munitz, there aren’t any RDPs that could handle Android well enough to satisfy users. After all, the user interface (UI) is the most important part of the mobile device, from the user’s perspective. And this is the part that would be subject to latency were its entire backend moved to the cloud.
That said, there are many reasons to entertain this concept of Android as a full-fledged cloud operating system or, as Munitz put it, “Cloudroid.” One reason is security, which has become a more pressing concern since the rise of BYOD. “Companies say you can have access to the organization’s data but that they need to verify that you will not steal data or, if you lose your phone, that it won’t be a risk.” Data held in Android cloud servers would be one way to protect the enterprise from this risk.
For the time being, Munitz’s position is largely abstract and speculative, but his premise is intriguing. In some regards, this seems to be the natural next step for Android. And, as Munitz put it, only a few years ago, Amazon was known as a book seller. Somehow, it has grown to become a dominant cloud provider. It seems like a much smaller leap to imagine Android doing the same.
Waistlines traditionally expand as the weather gets colder. Quest Software, recently acquired by Dell, has discovered bloating can be a problem for companies using cloud applications as well, according to results from a new survey the company sponsored.
Senior IT officials from 150 companies with more than 500 applications and $500 million in revenue were surveyed by Harris Interactive for the report, which concludes companies are potentially losing millions due to poor application management.
More than half of respondents said applications that were slow, unresponsive or crashed cost their businesses big money each year. Twenty-nine percent of respondents reported losing money in the millions, and 7% said they lost tens of millions or more each year.
Quest is a maker of application performance management (APM) software, a field that has expanded in step with the growing world of cloud and mobile applications. Legacy vendors like Hewlett-Packard, IBM, Oracle and Microsoft compete with mid-sized companies and upstarts, from AppDynamics and New Relic to AppNeta, for control of this growing market.
The pitch from these companies is similar: If no one is watching your applications, you’re losing money. Automating the monitoring of applications and building alerts when something isn’t working properly can reduce downtime and save money. Additionally, some APM tools have predictive analytics capabilities that alert users to problems before they happen.
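That monitor-then-alert loop can be reduced to a simple pattern: sample a metric, compare it to a threshold, and flag a worrying trend before the threshold is even crossed. The 500 ms threshold and the naive trend check in this sketch are illustrative inventions; commercial APM tools use much richer statistical models:

```python
# Minimal sketch of threshold-plus-trend alerting, the pattern behind APM
# tools. The 500 ms threshold and 3-sample window are arbitrary examples.
def check_latency(samples_ms, threshold_ms=500, window=3):
    alerts = []
    # Reactive check: the latest sample already breaches the threshold.
    if samples_ms and samples_ms[-1] > threshold_ms:
        alerts.append("latency above threshold")
    # Crude "predictive" check: if the last few samples are strictly rising,
    # warn before the threshold is actually crossed.
    recent = samples_ms[-window:]
    if len(recent) == window and all(a < b for a, b in zip(recent, recent[1:])):
        alerts.append("latency trending upward")
    return alerts

print(check_latency([120, 180, 260]))   # ['latency trending upward']
print(check_latency([120, 610]))        # ['latency above threshold']
print(check_latency([300, 290, 310]))   # []
```

A real APM agent would run this kind of check continuously against live metrics and route the alerts to email, chat or a paging system.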
In the survey’s view, that would be a big help for IT departments unable to keep a watchful eye on all their applications. According to the survey, 76% of IT managers access less than half of their applications more than five times a day.
Companies have been using APM tools to fix a wide variety of problems. Vodafone Ireland used HP’s APM tools to improve performance and centralize monitoring, and Aptela used AppNeta’s APM appliance to fix problems with its users’ networks.
Follow Adam Riglian on Twitter @AdamRiglian
In September, Amazon Web Services launched a marketplace for reserved instances — a contracted, fixed-term version of its cloud infrastructure. Cue the analytics startup.
InstanceVibe.com, a two-week-old baby of a website launched by Roman Stepanenko, offers analytics and alerts to prospective buyers in the reserved instances marketplace.
“Generally, each company has a preferred timeframe for the amount of time they want to have an instance. Especially with the startups, if you want to have a reserved instance, you have to pay some cash up front,” he explains. “If you want to find [the] perfect instance for your needs, you need to keep logging into the AWS console. [The] natural solution is to supply some sort of alert where you supply the criteria to what you’re interested in and you’re notified by email.”
Stepanenko, a former financial services developer who founded structural exception search engine BrainLeg in April, said he bought the domain right after he saw Amazon’s announcement. The website launched two weeks ago. He got the idea for the site from his own experiences with the reserved instances marketplace.
InstanceVibe users set criteria for the type of instance they want to find, including the length of the contract and the amount of usage. InstanceVibe scans the marketplace regularly and sends alerts to users when instances matching their criteria become available.
Alerts are free for t1.micro instances. Costs scale up to $9.99 for two weeks and $14.99 for four weeks of unlimited alerts for any instance. Each time the marketplace is scanned, the data is stored in a historical prices database and analyzed to show the best possible prices over a certain amount of time. Those analytics are free.
“Every time I scan the marketplace I am saving these data points in my database and that allows me to analyze when instances are sold and when they become listed,” Stepanenko said. “I can calculate the best costs of ownership historically [based on the information].”
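Stepanenko did not describe his implementation in detail, but the scan, match and store loop he outlines might look something like this sketch, in which the listing fields, criteria and prices are all made up:

```python
# Hypothetical sketch of InstanceVibe's core loop: filter marketplace listings
# against a user's criteria, and keep a price history to report the best deal.
# Listing fields and values are invented; the real AWS listing schema differs.
def matches(listing, criteria):
    return (listing["type"] == criteria["type"]
            and listing["months_left"] >= criteria["min_months"]
            and listing["price"] <= criteria["max_price"])

history = []

def scan(listings, criteria):
    history.extend(listings)           # store every data point for analytics
    return [l for l in listings if matches(l, criteria)]

def best_price(instance_type):
    prices = [l["price"] for l in history if l["type"] == instance_type]
    return min(prices) if prices else None

criteria = {"type": "m1.large", "min_months": 6, "max_price": 400.0}
scan_1 = scan([{"type": "m1.large", "months_left": 9, "price": 450.0},
               {"type": "t1.micro", "months_left": 12, "price": 60.0}], criteria)
scan_2 = scan([{"type": "m1.large", "months_left": 8, "price": 380.0}], criteria)

print(scan_1)                  # []  (price too high / wrong type)
print(scan_2)                  # one matching listing to alert on
print(best_price("m1.large"))  # 380.0
```

Each scan would feed the match list to an email alerter, while the accumulated history drives the free analytics Stepanenko describes.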
Read more from Adam Riglian on SearchCloudApplications.com. Follow him on Twitter @AdamRiglian
Larry Ellison took a swipe at SAP HANA during his keynote address Sunday night. By Monday afternoon, one of enterprise IT’s empires was prepared to strike back.
“For the last 24 hours, my eyebrows have been glued two-thirds of the way up my forehead,” said an incredulous Steve Lucas, executive vice president of Business Analytics, Database and Technology at SAP. “My first reaction was, ‘You’ve got to be kidding me.’”
The quotes that had Lucas irate came when Ellison was discussing Exadata X3, the latest incarnation of Oracle’s database machine. He touted its 26 terabytes of in-memory capacity before drawing a comparison with HANA, something he had joked he would not do during the speech.
“I know that SAP has an in-memory machine. It’s a little smaller,” Ellison said.
Lucas says not so fast: at Sapphire in May, SAP announced that HANA boasted 100 terabytes of in-memory capacity.
“These are the most baseless set of statements I’ve ever seen anyone in the market make,” Lucas said. “I don’t know where these people get their facts from, to me it’s absolutely mind-boggling.”
Sniping between the companies is nothing new, but Lucas said he was surprised at the form Ellison’s barbs took at this year’s conference.
“It wasn’t even the normal sort of half-truth. It was this ‘are you kidding me?’ kind of a statement,” he said.
Check in on our guide page for more coverage of Oracle OpenWorld and JavaOne.
The first day of Dreamforce 2012 was the calm before the storm of announcements and hype that tends to go hand-in-hand with a keynote address.
Laid back and relaxed, Tuesday’s speeches and keynotes were directed more at partners and users than press and analysts. (Editor’s note: It’s difficult to write a story when MC Hammer is on stage at Dreamforce.)
There was still a lot to learn from yesterday’s sessions, especially the AppExchange partner keynote. The AppExchange is eventually expected to account for 30% of Salesforce’s business, and the message that apps on the exchange need to connect with one another came across clearly.
Esteban Kolsky, founder of strategy firm ThinkJar LLC, raised the interesting point on Twitter that Salesforce’s messaging up front is about social enterprise, but that Tuesday’s speech was more about connected enterprise. For his money, he’d rather just have them say collaborative enterprise.
IDC analyst Alys Woodward also took to Twitter, recognizing that the “I” word — which no one at the keynote used — is critical to a connected app vision. That word is integration, and she sees it as crucial along with data architecture.
Aside from the partner keynote, it was a relatively sleepy day. A lot has been made of the high attendance at this year’s conference, with estimates ranging from 85,000 to 95,000 depending on what you read. Day 2 feels every bit of that, but Day 1 did not have a big-event feel to it.