Increasingly, Amazon’s bread is buttered by its public cloud arm, Amazon Web Services. And while AWS entered the IaaS market 10 years ago with on-premises data centers in its crosshairs, the company continues to push its cash cow into new revenue streams.
While AWS has expanded its range of services at several junctures over the years, two recent offerings — Amazon Lumberyard and the AWS IoT Button — venture off its traditional path, reaching for customers beyond its enterprise IT focus.
Announced in February, Amazon Lumberyard is a cross-platform, 3D game engine. AWS does not charge Lumberyard users, but expects to gain new paying AWS customers who use other services in conjunction with Lumberyard, which might include Amazon DynamoDB for storage, AWS Lambda and Simple Notification Service for in-game notifications and Amazon Cognito for identity management across devices. This is a straightforward foray into a new stream of revenue; Lumberyard is not a wide-sweeping or veiled play for enterprise applications, it’s intended to draw amateur and enterprise game developers to AWS.
The AWS IoT Button, on the other hand, seemed to be more of an experiment on the part of the cloud provider.
The AWS IoT Button is similarly easy for developers to use, offering three actions per button (click, double-click and long press). Enterprises diving headlong into the internet of things must make sense of a flood of data, and that process is no easy task. But while the AWS IoT Button connects to AWS resources, its practicality may fall outside traditional business development. AWS itself lists among potential uses opening a garage door, calling a cab and tracking common household chores — none of which represents an enterprise use case, and all entirely within the capabilities of everyday devices.
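The button delivers those three actions to an AWS Lambda function as a `clickType` field in the event payload (`SINGLE`, `DOUBLE` or `LONG`). A minimal sketch of a dispatching handler might look like the following; the mapped actions are hypothetical placeholders for the household uses AWS describes.

```python
# Sketch of a Lambda handler for the AWS IoT Button. The three click
# types arrive in the event payload; the actions below are placeholders.

def open_garage_door():
    return "garage door opened"

def call_a_cab():
    return "cab requested"

def log_chore_done():
    return "chore logged"

ACTIONS = {
    "SINGLE": open_garage_door,
    "DOUBLE": call_a_cab,
    "LONG": log_chore_done,
}

def lambda_handler(event, context):
    # The button's event resembles:
    #   {"serialNumber": "...", "batteryVoltage": "...mV",
    #    "clickType": "SINGLE"}
    click = event.get("clickType", "SINGLE")
    return ACTIONS[click]()
```

Reprogramming the button, as Cambia describes later in this piece, amounts to pointing it at a different function like these for a few hours at a time.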
“Why can’t I [close a garage door or call a cab] on my smartphone, my computer or all the other ways I’m communicating with my IoT devices traditionally?” asked David Linthicum, SVP of Cloud Technology Partners and a TechTarget contributor. “From the use cases that [AWS] explained, I don’t really see anything that useful.”
That might be AWS’ gamble: play the long game, nudge a small batch of developers over to its cloud platform for some tinkering, with the hope that this will further infiltrate the enterprise. “Give it to a number of smart people, and they’ll start developing some applications that do some pretty innovative things using the device,” Linthicum said.
Case in point: Cambia Health Solutions, an insurance provider based in Portland, Ore., which managed to acquire an AWS IoT Button the day it became available, before the device quickly sold out. And the company has a plan for it.
“The AWS IoT Button can help IT scale and produce a lot of data,” said Tisson Mathew, vice president of cloud and consumer services at Cambia, at a Portland AWS Meetup last month. The company plans to combine the AWS IoT Button and AWS Lambda to help automate the back-end stack. Mathew didn’t share full details on how the company would use the AWS IoT Button, but he noted the button’s quick programmability will allow developers to point it at one function for a few hours, and then possibly reprogram it for another function a few hours later.
Even if the IoT Button doesn’t draw in new business, the possibilities for customer-driven big ideas are AWS’ potential windfall, tapping the collective brainpower of developers using cloud services and the devices. “I don’t think it’s there to, in essence, become a profit margin; I think it’s there to become a kind of promotional channel for AWS,” Linthicum said. “Developers will come up with some handy things, but I can’t see using it as my Netflix remote or customizing some kind of order event.”
Also interesting is how the AWS IoT Button is a doppelganger of the Amazon Dash button, which consumers use to order household goods from specific brands, Linthicum pointed out.
“It kind of marries Amazon.com with AWS,” Linthicum added. “It’s sometimes hard to understand how the connections exist. They seem like very separate entities and companies.”
The end game with Amazon Lumberyard is clear: reach for a new cloud audience. With the AWS IoT Button, AWS seems intent on getting it into the hands of the savviest development teams to see where they can take it in their environments — and once a few strong use cases are out there, others are sure to follow.
Amazon Web Services updated its Terms of Service this week. This otherwise might not be notable, but buried among standard legalese clauses about usage terms and patching policies is a hidden gem concerning the use of its new 3D game engine, Lumberyard, should the world come to a messy end.
According to clause 57.10 of the TOS:
Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Amazon’s Elasticsearch Service shows promise for users who have long loathed the process of setting up Elasticsearch, Logstash and Kibana clusters in the cloud, but there’s a security snag for some users.
Users have alternatives when it comes to restricting access to the Elasticsearch cluster, either through Identity and Access Management (IAM) roles or IP-based whitelists. However, VPC support would be ideal, according to AWS shops.
“Yes, you can control access via IPs and AWS Accounts, etc. but this still means that all of my private subnet instances will need to traverse their public NAT gateway to communicate with the ES end point,” said one Reddit user of the lack of VPC support. “That [expletive] sucks, and defeats whatever performance/bandwidth benefits I can have with my own internal ES nodes.”
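An IP-based whitelist for Amazon ES is expressed as a resource-based access policy attached to the Elasticsearch domain. A sketch of building such a policy document follows; the domain ARN and CIDR range are placeholders.

```python
import json

def ip_whitelist_policy(domain_arn, cidrs):
    # Build a resource-based access policy allowing Elasticsearch API
    # calls only from the given source IP ranges. The ARN below is a
    # placeholder for a real Amazon ES domain.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": domain_arn,
            "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
        }],
    }

policy = ip_whitelist_policy(
    "arn:aws:es:us-east-1:123456789012:domain/logs/*",
    ["203.0.113.0/24"])
print(json.dumps(policy, indent=2))
```

The Reddit user's complaint stands regardless: instances in a private subnet still reach the public endpoint through a NAT gateway, whitelist or not.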
Cloud consultants say some users are taking this in stride, but a significant number of customers will hold back from putting this version of Amazon ES into production.
“It’s 50/50 whether it meets customers’ full set of needs,” said Patrick McClory, director of automation and DevOps for Datapipe, a provider of managed hosting services for AWS based in Jersey City, N.J.
It’s early in the game for Amazon ES, however, and many customers who are still just getting their feet wet with the service understand that evaluation environments are rarely perfect, McClory said.
“It’s Amazon,” McClory added. “It’ll get VPC support soon, I have no doubt – and it’ll be an easy move to production then.”
Meanwhile, without the ability to house Elasticsearch clusters inside VPCs, users have to pay for data transfer into and out of EC2 instances that access the Elasticsearch Service at a rate of $0.01 per gigabyte. It’s a paltry charge at first glance and Amazon doesn’t charge for transfer in and out of ES itself, but those costs can add up, according to Theodore Kim, senior director of SaaS operations for Jobvite Inc., a talent acquisition software maker in San Mateo, Calif.
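A quick back-of-envelope calculation shows how that $0.01 per gigabyte adds up. The traffic figures below are hypothetical, purely for illustration.

```python
RATE_PER_GB = 0.01  # EC2 data transfer rate cited in the article

def monthly_transfer_cost(gb_in, gb_out):
    # EC2-side transfer is billed in both directions; Amazon does not
    # charge on the Elasticsearch Service side itself.
    return (gb_in + gb_out) * RATE_PER_GB

# Hypothetical cluster: indexing 2 TB of logs and serving 500 GB of
# query traffic per month.
print(round(monthly_transfer_cost(2048, 500), 2))  # 25.48
```

A few dollars a month per cluster is indeed paltry, but multiplied across many clusters and heavier log volumes, the line item grows — which is Kim's point.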
“On my ultimate wish list would be the ability to run Elasticsearch within our VPC,” Kim said. “I’m hoping that will eventually happen as it did with S3.”
Also, while Elasticsearch clusters and the Kibana plugin can be accessed with a few clicks, the “L” in the Elasticsearch / Logstash / Kibana (ELK) stack will still take some doing for Amazon ES users.
A Logstash plugin must be built, downloaded and installed into Logstash by the user, according to Amazon’s documentation. The service also supports only one Logstash output plugin, according to Amazon’s Developer Guide.
Finally, Elasticsearch clusters today are limited to a maximum of 10 nodes per cluster, according to Amazon’s documentation.
Amazon declined to comment on record for this post.
AWS rolled out the cost management tools hinted at several months ago in its job listings, with support for budgeting and forecasts up to three months out, according to an Amazon blog post.
The new capabilities are an update to the AWS Cost Explorer tool, which previously provided only historical analysis for the last four months.
“The operations provided by these new tools replace the tedious and time-consuming manual calculations that many of our customers (both large and small) have been performing as part of their cost management and budgeting process,” said Amazon’s Jeff Barr in the company’s official blog. “After running a private beta with over a dozen large-scale AWS customers, we are confident that these tools will help you to do an even better job of understanding and managing your costs.”
While this appears to satisfy the requirements for the product set forth in the job descriptions, AWS has much more up its sleeve when it comes to customer service, the same job listings reveal:
Cost management is also among the tasks of an AWS development team known as “Kumo,” according to the job postings. Trusted Advisor is listed on the AWS website as an example of an application the Kumo team has already developed. The team is working on three different greenfield projects, including customer-facing applications, and faces challenges in big data, social apps, machine learning and data mining. The job descriptions also state that Amazon is filing patents in this area.
While data mining and machine learning could become customer-facing applications, the Kumo team is also focused on improving AWS customer support, based on a Kumo senior manager job description. One data analytics project that’s been proposed for AWS billing is integration with the company’s Kinesis data streaming service, to provide up-to-the-minute cost information.
This keeps with the theme of attracting more mission-critical enterprise applications to the public cloud, as do Amazon’s recent efforts to increase its transparency around cloud security.
Meanwhile, AWS also announced on its blog this week that it will open a new region in India in 2016. It’s unclear where exactly the new data centers will be located, but AWS already has Route 53 and CloudFront points of presence in Chennai and Mumbai.
Crittercism Inc. is on the verge of spinning out instances of its mobile application performance management (mAPM) solution to the new AWS data center in Frankfurt, Germany, according to co-founder Andrew Levy. In our recent conversation, Levy credited AWS with providing the backbone for Crittercism’s rapid growth from its 2010 start-up days. He also talked about trends in enterprise mobile application usage and development.
Expanding to the new AWS data center in Germany will help Crittercism attract European customers, which have security and disaster recovery concerns about using apps based in U.S. data centers. “You can imagine the cost, expense and time we’d have to put into building a data center in Europe,” Levy said. “Working with Amazon from day one has helped us scale quickly without incurring those costs.”
AWS’ global compute power provides the room for growth and transaction volume coverage needed for mAPM, which must deliver real-time performance and transaction data across many platforms, including iOS, Android, Windows Phone 8, hybrid and HTML5 apps. AWS’ many geographically dispersed data centers help Crittercism handle about three billion requests a day from one billion users.
Crittercism uses the AWS IaaS platform and a slew of other services, including Amazon Redshift for data warehousing. On the database side, Crittercism uses Amazon’s DynamoDB NoSQL database store. To find out how and why, check out TechTarget reporter Jack Vaughan’s article on NoSQL databases.
Crittercism and DevOps
“We give CIOs and DevOps teams the necessary data to do root cause analysis across their own and their service providers’ applications,” Levy said. At the end of the day, companies need to understand if the performance problem is on their end or with the service provider’s. “If it’s an internal bottleneck, be it a code defect or something else, DevOps get the info they need to make a fix quickly.” (For in-depth info, read about how a major telecom uses Crittercism’s mAPM products on SearchSOA.)
As recently as a year ago, Levy observed that many enterprises’ internal DevOps staffs did not have the skills needed to handle mobile application issues. Today, he and his colleagues find that DevOps pros know much more about mobile technologies. So do business executives, who see how these apps are affecting businesses.
“More companies are creating mobile centers of excellence and eliminating divisions between IT, development and businesses,” Levy said. Just a couple of years ago, developers mainly spurred their companies to evaluate Crittercism’s mAPM products. Now, inquiries are coming in from people in new roles, such as chief digital officers, mobile strategists and mobile architects.
Mobile computing has fostered other trends, such as the rising importance of an excellent user experience. “Consumers have high expectations around user experience,” Levy said. He cited a recent App Attention Span study that showed the majority of mobile app users deleting poorly-performing mobile apps. He sees great career opportunities for developers with strong user interface design skills.
Amazon may have just released the most confusingly named product to date, especially if you’re a Java developer.
At AWS re:Invent in Las Vegas, Amazon announced mainstream availability of its compute service AWS Lambda. The big sell on it is that it can recognize and respond immediately to server-side events, performing functions and processing data as soon as something interesting happens. For example, if an end user uploads an image to the system, admins can configure AWS Lambda to create thumbnail images, perform facial recognition processes and save the results in Amazon S3. And of course, the whole thing comes with the standard Amazon promise of being able to access high-availability systems and run processes within a millisecond of the event being triggered — with efficiency and cost effectiveness built in.
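In the thumbnail scenario, Lambda receives an S3 event record naming the bucket and object key of the upload. A sketch of the parsing step follows; the image processing itself is elided, with the boto3 calls shown only as comments, and the bucket and key names are made up.

```python
def parse_s3_event(event):
    # Extract (bucket, key) pairs from an S3 event delivered to Lambda.
    return [(rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
            for rec in event["Records"]]

def lambda_handler(event, context):
    for bucket, key in parse_s3_event(event):
        # A real handler would fetch the object, render a thumbnail and
        # write the result back to S3, roughly:
        #   s3 = boto3.client("s3")
        #   obj = s3.get_object(Bucket=bucket, Key=key)
        #   thumb = make_thumbnail(obj["Body"].read())   # hypothetical
        #   s3.put_object(Bucket=bucket, Key="thumbs/" + key, Body=thumb)
        print("uploaded:", bucket, key)

# Shape of a (trimmed) S3 put-event notification:
sample = {"Records": [{"s3": {"bucket": {"name": "photos"},
                              "object": {"key": "cat.jpg"}}}]}
```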
But here’s the problem: This new AWS Service shares its name with Project Lambda, the major Java initiative that brought functional programming to the JVM. As far as Java developers go, lambda is a concept that is tied very tightly with Java 8 and the evolution of the language. Of course, it’s not possible to call, “stamped it, no erasies” on the 11th letter of the Greek alphabet. So Amazon hasn’t done anything wrong, but it certainly is confusing for anyone that has done some Java programming in the past.
Looking at the sample code presented in the online tutorial, Java isn’t even promoted as the development language. In fact, the sample “HelloWorld” application uses Node.js rather than Java, with a lambda function thrown in for good measure.
It’s hard to believe Amazon was unaware that naming its service after one of the biggest things to happen to the Java language in 10 years would cause confusion. But it’s not likely Amazon salespeople are targeting developers, so it should not hurt the service’s adoption. After all, it is no doubt an amazing service, especially if it can indeed pull together task scheduling, stream processing, data synchronization, and auditing and notification systems running in the Amazon cloud. It’s just a shame that they had to give it such a clashing and confusing name.
IT consultant Dr. Sheryl Kitchen talked about the benefits of AWS and Rackspace cloud services for midsized businesses in our recent conversation. Midsized businesses, those with over $50 million in revenue, should look for cloud providers with proven track records for like-sized enterprises, as well as trial periods and seamless integration, said Kitchen, who has worked in senior management positions with NetApp, Oracle Corp. and Sun Microsystems Inc.
A midsized business needs to ensure the cloud provider can meet the service level commitments of customers’ end users. Some typical service level demands include:
- How quickly can I expect service to be restored in the event of an outage or disruption?
- What levels of uptime can I expect: 99.9% or 99.999%?
Those indicators of reliability and disaster recovery are must-have criteria for midsized businesses. “Ensure that the cloud provider can meet these service level commitments to your end users,” Kitchen said.
Amazon Web Services and Rackspace Managed Cloud services are two providers with strong track records in this sector, said Kitchen.
“Rackspace has deployed managed cloud services with companies such as Fujifilm, Six Flags, Dominos and others,” Kitchen told me. “Amazon has a portfolio that ranges from small, midsize and enterprise businesses including Nokia, NASA, Pfizer, Comcast and Pinterest.”
Both Rackspace and AWS provide the ability to see how well a midsized business’ existing application will work in their environment. Each has tiers of services, which enable risk-reducing, step-by-step adoption and investment. “With either, you can have a cloud environment up and running in a very short time,” Kitchen said. In addition, both companies offer the ability to expand or decrease services as required based upon the needs of the business.
Of course, Kitchen said, AWS and Rackspace are just two of the many companies with competent solutions. A test drive is the most important evaluation tool, she advised. Don’t choose a provider that doesn’t let you try before you buy.
It’s no secret Amazon Web Services has been on a mission to court enterprise IT, and by now, visible changes in its strategy are becoming pronounced. Namely, AWS will accommodate more traditional, on-premises IT into its operating model.
Take, for example, the recent Directory Services announcement. While the new Simple AD from Amazon could theoretically be used as an identity management service for third-party clouds, its overall target is existing AWS customers who want an easy approach to extend an on-premises Active Directory deployment into the cloud. This is similar to Amazon’s Virtual Private Cloud, which is also oriented toward extending on-premises environments into AWS.
This contrasts with keynote talks by AWS executives at the re:Invent conference in 2012, which only briefly mentioned hybrid IT and openly disparaged on-premises infrastructure — particularly private clouds — as an outmoded idea perpetuated by legacy vendors to protect fat profit margins.
Most small businesses can adopt cloud-based solutions easily because they haven’t invested heavily in legacy infrastructure, as larger organizations have. That segment is highly desirable for cloud providers: small business is the fastest-growing business segment, and these companies don’t have the challenging corporate protocols found in midsize and enterprise businesses, according to software consultant Dr. Sheryl Kitchen, who has worked in senior management positions with NetApp, Oracle Corp. and Sun Microsystems Inc.
In our recent conversation, Kitchen suggested cloud evaluation questions specifically for small businesses. She then shared her views on the best cloud platforms for small businesses. Core questions for getting to know a cloud provider should include:
- Does the cloud provider offer strategic value and do they have proven products and offerings that you can trust?
- Does the cloud provider have expertise in integration or provide integration services?
- Does the cloud provider have a “try and buy model” and can they rapidly deploy the service beyond the try and buy period?
- Can the cloud provider’s solution scale as the small business grows?
- As a subset of scalability, can the cloud provider manage your business data requirements and in the manner appropriate for your type of business?
- Is the cloud offering just hosting or is it true multi-tenant service?
Which cloud providers measure up to these basic selection criteria? Kitchen’s first choices are AWS and Informatica.
Amazon Elastic Compute Cloud provides a true multi-tenant solution along with integration services, Kitchen said. “They also offer a try-and-buy model,” she said. Scalability is an AWS strength, as it has greater compute power and geographic distribution than any other provider. Also, AWS provides cost-effective data storage and data storage management for small businesses with its Simple Storage Service and DynamoDB products.
Informatica Cloud offers a true multi-tenant solution and tiers of services for various business needs, Kitchen said. She’s evaluated and gives high marks to Informatica’s integration capabilities, provided through native connectors and PowerExchange, which help small businesses connect on-premises applications and business data to the cloud.
“For a small business, the cloud can be the most efficient delivery system for their products or solutions,” said Kitchen. Just be sure to ask the right questions.
Storage is an essential component of the cloud world. You could say that storage IS the cloud. Which storage option you choose will affect the scalability, reliability, availability, latency and cost of the data you’re storing — and that choice is the topic of this month’s SearchAWS handbook, “Sorting through AWS Data Storage options”.
Amazon’s object storage system, Simple Storage Service (S3), offers developers many options when it comes to storing and managing large volumes of data in the cloud. In object-based storage systems like Amazon S3, data is stored and organized as objects in buckets, not as files. A single object can be as large as 5 terabytes.
If you need storage that behaves like a local disk, Amazon Elastic Block Store (EBS) is a better choice. Amazon EBS provides block storage devices that attach to EC2 instances, and users can format EBS volumes with the file system of their choice — something that isn’t possible with Amazon S3. In part one, author Dan Sullivan explores both options and which makes the most sense for your enterprise.
When considering storage, encryption tops the list. Encrypting data can save your business from hackers and lawsuits. While Amazon S3 and EBS both offer security features, S3 offers server-side encryption to protect data at rest (data in transit can be protected with SSL). Business technology advisor Ofir Nachmani explains how AWS services are designed to work together in a seamless workflow, and where capabilities still lag.
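Requesting S3 server-side encryption is a matter of one extra parameter on the upload call. A sketch follows, with the testable logic in a pure helper and the AWS SDK call shown only as a comment; the bucket and key names are placeholders.

```python
def encrypted_put_params(bucket, key, body):
    # Parameters for an S3 upload with SSE-S3 (AES-256) server-side
    # encryption. Bucket and key names here are placeholders.
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "AES256",
    }

params = encrypted_put_params("my-bucket", "reports/q3.csv", b"col1,col2\n")

# With the AWS SDK for Python (boto3) installed and credentials
# configured, the upload itself would be:
#   import boto3
#   boto3.client("s3").put_object(**params)
```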
Given recent public cloud breaches, it’s understandable why customers are hesitant to make the leap from their private clouds. AWS’ encryption choices, and the option for customers to use their own encryption keys, may help make the private-to-public cloud transition easier. And if managing an encryption key yourself seems too overwhelming, AWS offers AWS CloudHSM, a dedicated hardware security module that manages encryption keys for customers.
If you’re still concerned about a smooth workflow, Dan Sullivan discusses how AWS Data Pipeline is designed to ease your fears. Customers create pipeline definitions that spell out the specific tasks to perform and scheduling information that says when to run them. AWS Data Pipeline then assigns those tasks to “task runners,” which perform the work, retry failed tasks and report status back to the service.
Remember, whichever AWS storage system you choose, take into account the storage capacity of the service, the length of time it stores files or objects, which other AWS services it will work with and what type of encryption the storage service offers.