AWS has made it a priority to win over customers in the database market, specifically Oracle shops. And the public cloud provider has a new weapon in that battle — an upgraded database migration tool.
The AWS Database Migration Service (DMS) now supports NoSQL databases, enabling developers to move databases from the open source MongoDB platform onto DynamoDB, Amazon’s native NoSQL database service. AWS DMS also supports migrations to and from Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE and SQL Server as database sources. The cloud provider could target other NoSQL database providers for support in the future.
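A DMS migration of the MongoDB-to-DynamoDB sort is driven by a replication task tied to source and target endpoints. Below is a minimal sketch of assembling those task parameters in Python; the ARNs are hypothetical placeholders, and the live call is shown only in a comment.

```python
import json

def build_dms_task_params(source_arn, target_arn, instance_arn):
    """Build parameters for a DMS full-load replication task.

    The table-mapping rules below select every schema and table;
    all ARNs are hypothetical placeholders supplied by the caller.
    """
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": "mongo-to-dynamo",
        "SourceEndpointArn": source_arn,        # MongoDB source endpoint
        "TargetEndpointArn": target_arn,        # DynamoDB target endpoint
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load",
        "TableMappings": json.dumps(table_mappings),
    }

# The resulting dict would be passed to the real API:
#   boto3.client("dms").create_replication_task(**params)
```

This only shapes the request; endpoint creation and schema mapping decisions remain the involved part of any real migration.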
In addition to homogeneous migrations, the AWS Schema Conversion Tool converts database schemas to enable migration from a disparate database platform to a target on Amazon Relational Database Service, such as from Oracle to Amazon Aurora.
AWS also recently added support for data warehouse conversions from Oracle and Teradata to Amazon Redshift, a swift response to an Oracle licensing update that hiked fees for Oracle cloud users.
Despite the potential for lock-in, enterprises are interested in the ability of the DynamoDB platform to integrate database information with other AWS tools. And AWS is happy to beat its chest over winning these database customers — it passed 22,000 database migrations in late March, AWS CEO Andy Jassy claimed on Twitter.
It’s getting crowded in the AWS toolbox
Among AWS’ slate of recent service and tool updates, here are several other noteworthy tidbits:
- A Resource Tagging API. IT teams can now apply tags, remove tags, retrieve a list of tagged resources with optional filtering and retrieve lists of tag keys and values via API. The new API enables developers to code tags into resources instead of doing it from the AWS Management Console. The Resource Tagging API is available through the newest versions of AWS SDKs and the AWS Command Line Interface. The new API functions apply across dozens of resource types and services. The cloud provider also added the ability to specify tags for Elastic Compute Cloud instances and Elastic Block Store volumes within the API call that creates them.
- Support for CloudWatch Alarms on Dashboard Widgets. CloudWatch Alarms on dashboard widgets give AWS users at-a-glance visibility into potential performance issues. SysOps can view CloudWatch metrics and alarms in the same widget, and display metrics as a single number (the value of a metric), a line graph or a stacked area graph (layering one metric over another).
- Cross-region, cross-account capabilities for Amazon Aurora. IT teams can copy automatic or manual snapshots from one region to another and create read replicas of Aurora clusters in a new region. These features can improve disaster recovery posture or expand read operations to users in geographically close regions. Additionally, users can share encrypted snapshots across AWS accounts, which enables them to copy or restore a snapshot depending on encryption configuration. AWS also expanded Aurora availability to the US-West region, and added support for t2.small instances.
- Amazon Elastic MapReduce instance fleets. This addition lets ops specify up to five instance types per fleet with weighted capacities, availability zones and a mix of on-demand or spot pricing. EMR instance fleets enable ops teams to craft a strategy for how they want to provision and geographically place capacity, and how much they want to pay for it. EMR automatically spins up the required capacity to support big data frameworks such as Apache Hadoop, Spark or HBase.
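The Resource Tagging API in the first item above is exposed in boto3 as the `resourcegroupstaggingapi` client. A sketch of shaping a tagging request, with hypothetical ARNs and tag names:

```python
def build_tag_request(resource_arns, tags):
    """Shape a request for the Resource Tagging API's tag_resources
    call: a list of resource ARNs plus plain key/value tag pairs."""
    return {"ResourceARNList": list(resource_arns), "Tags": dict(tags)}

# Hypothetical usage against the real API (the client and method names
# are real; the ARN and tag values are made up):
#   client = boto3.client("resourcegroupstaggingapi")
#   client.tag_resources(**build_tag_request(
#       ["arn:aws:ec2:us-east-1:123456789012:instance/i-0abc123"],
#       {"env": "prod", "team": "payments"}))
#   # Filtered retrieval of tagged resources:
#   client.get_resources(TagFilters=[{"Key": "env", "Values": ["prod"]}])
```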
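The alarm-on-widget feature maps to the dashboard body JSON that CloudWatch's `put_dashboard` call accepts. A sketch of building such a body, assuming a metric widget that overlays a single alarm annotation; the metric, region and alarm ARN are placeholders:

```python
import json

def dashboard_with_alarm(metric, alarm_arn):
    """Build a CloudWatch dashboard body containing one metric widget
    with an alarm annotation layered over the metric line."""
    body = {
        "widgets": [{
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "metrics": [metric],          # e.g. ["AWS/EC2", "CPUUtilization"]
                "view": "timeSeries",
                "stacked": False,
                "region": "us-east-1",        # placeholder region
                "annotations": {"alarms": [alarm_arn]},
            },
        }]
    }
    return json.dumps(body)

# boto3.client("cloudwatch").put_dashboard(
#     DashboardName="ops", DashboardBody=dashboard_with_alarm(...))
```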
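The cross-region Aurora snapshot copy described above is a single RDS API call. A sketch of its parameters, with hypothetical identifiers and the source region hardcoded for illustration:

```python
def cross_region_copy_params(snapshot_arn, target_id, kms_key_id=None):
    """Parameters for copying an Aurora cluster snapshot into the
    region the RDS client targets; SourceRegion tells boto3 to presign
    the request against the source region on the caller's behalf."""
    params = {
        "SourceDBClusterSnapshotIdentifier": snapshot_arn,
        "TargetDBClusterSnapshotIdentifier": target_id,
        "SourceRegion": "us-east-1",  # hypothetical source region
    }
    if kms_key_id:  # required when the source snapshot is encrypted
        params["KmsKeyId"] = kms_key_id
    return params

# boto3.client("rds", region_name="us-west-2") \
#      .copy_db_cluster_snapshot(**cross_region_copy_params(...))
```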
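An EMR instance fleet definition might look like the following sketch, with hypothetical instance types and a spot bid expressed as a percentage of the on-demand price:

```python
def core_fleet(target_spot_units, instance_types):
    """Sketch of an EMR core instance fleet: up to five weighted
    instance types, all bid on the spot market at half the
    on-demand price (an arbitrary illustrative choice)."""
    assert len(instance_types) <= 5, "EMR allows at most five types per fleet"
    return {
        "Name": "core-fleet",
        "InstanceFleetType": "CORE",
        "TargetSpotCapacity": target_spot_units,
        "InstanceTypeConfigs": [
            {"InstanceType": itype,
             "WeightedCapacity": weight,
             "BidPriceAsPercentageOfOnDemandPrice": 50.0}
            for itype, weight in instance_types
        ],
    }

# The fleet would be passed inside Instances={"InstanceFleets": [...]}
# to boto3.client("emr").run_job_flow(...)
```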
AWS attributed Tuesday’s extended disruption to outdated processes and human error, according to a postmortem published Thursday.
The post, which classified the incident as a “service disruption,” states that the problem started at 12:37 p.m. ET when an authorized Amazon Simple Storage Service (Amazon S3) team attempted to resolve an issue that had caused the S3 billing system to behave slower than expected. One of the team members, following AWS guidelines, attempted to execute a command that would remove some of the servers for an S3 subsystem, but incorrectly entered one of the inputs.
As a result, too many servers were taken down, including those that supported two additional S3 subsystems that manage metadata and location information, as well as the allocation of new storage. Compounding problems and creating a Catch-22 scenario, the latter subsystem required the former to be in operation. The capacity removal required a full restart, and Amazon S3 was unable to service requests.
It appears the outage was due to fat fingering under pressure with an arcane, hardly used command, said Mike Matchett, senior analyst and consultant at Taneja Group.
“It need not ever have happened, but was too easy a mistake to make,” he said. “Once made, it cascaded into a major outage.”
The impact spread to additional services in the US East-1 region that rely on Amazon S3, including the S3 console, new Elastic Compute Cloud instance launches, Elastic Block Store volumes and AWS Lambda.
AWS’ system is designed to support the removal of significant capacity and is built for occasional failure, according to the company. It has performed this particular operation since S3 was first built, but a complete restart of the affected subsystems hadn’t been done in one of the larger regions in years, the company said.
It’s surprising that a system like AWS is vulnerable at this scope to manual errors, said Carl Brooks, an analyst with 451 Research. The initial failure is understandable, but the compounding impact shows a larger structural flaw in how AWS manages uptime, he added.
“It says something for one manual process to have that much disruptive effect,” Brooks said. “They claim they’ve been working against [these types of failures] all this time, and [it’s] clear that work has not been completed.”
Amazon S3 fully recovered by 4:54 p.m. ET, and related services recovered afterward, depending on backlogs.
AWS said it has since modified the tool it uses to perform the debugging task to limit how fast it can remove capacity and to put in additional safeguards to prevent a subsystem from going below its minimum capacity. It’s also reviewing other operational tasks for similar architectural problems, and plans to improve the recovery time of critical S3 subsystems by breaking services into smaller segments and limiting the blast radius of a failure, the company said in the postmortem. That work was planned for S3 later this year, but the timeline was pushed up following this incident. Company officials did not comment specifically on when this work would be completed.
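The safeguards AWS describes, rate-limiting removals and enforcing a capacity floor, amount to a simple guard in front of the removal command. A toy Python sketch of the idea, not AWS' actual tooling:

```python
class CapacityGuard:
    """Toy model of the safeguard described in the postmortem: cap how
    much capacity one operation can remove, and refuse any removal
    that would drop a subsystem below its minimum capacity."""

    def __init__(self, current, minimum, max_removal_per_op):
        self.current = current
        self.minimum = minimum
        self.max_removal_per_op = max_removal_per_op

    def remove(self, count):
        # Rate limit: a fat-fingered large input is clamped down.
        count = min(count, self.max_removal_per_op)
        # Floor check: never breach the subsystem's minimum capacity.
        if self.current - count < self.minimum:
            raise ValueError("removal would breach minimum capacity")
        self.current -= count
        return self.current
```

With a guard like this, the mistyped input in the incident would have removed at most a handful of servers instead of a whole subsystem.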
Critical operations that involve shutting down key resources should be fully scripted and tested often, and it shows a level of hubris that AWS hadn’t tried this restart in years, Matchett said. He was also critical of having the health dashboard connected to S3; the management plane for high-availability workloads, he said, should be completely separate from the resources it controls.
“It really looks like AWS has built a bigger house of cards than even they are aware of,” Matchett said.
Changes also have been made to the Service Health Dashboard to run across multiple regions.
Amazon has come far to accommodate customers running workloads on-premises; the same can’t be said, however, for how it addresses its customers using other public clouds.
At re:Invent last week, Amazon punctuated its softened stance on hybrid cloud — at least as a stopgap measure — as it touted its recent partnership with VMware and rolled out a series of services and products for use outside its own data centers. But when the notion of using multiple public clouds was broached, Amazon’s answer was simple: Don’t do it.
It’s hard to gauge how many companies actually deploy workloads in multiple infrastructure as a service public clouds. It’s a practice typically reserved for larger enterprises that hedge their bets and avoid lock-in by storing backups on services such as Microsoft Azure and Google Cloud Platform, or for forward-leaning users that target specific tools for specific applications. It’s still early days as true competitors start to emerge, but there’s little evidence of workloads spanning clouds — much like how bursting was more hype than reality as hybrid clouds emerged.
Amazon, through a series of subtle and not-so-subtle messages, advised users that the grass is not greener elsewhere. AWS CEO Andy Jassy reportedly dissuaded vendors at the partner summit from a “strategy of hedging.” He didn’t ask partners not to work with other cloud providers, but he did say Amazon would direct business to those willing to do the tightest integration with its platform.* Amazon CTO Werner Vogels laid out a series of new products intended to fill in gaps in its data services, and talked about offering primitives and designing a “comprehensive data architecture” to meet users’ every need.
“We give you the choice of how you really want to do your analytics,” Vogels said in his keynote. “We’ll make sure you can do all of this on AWS — you never have to go outside.”
There were splashy roll-outs aimed at getting as much data as possible out of customers’ data centers and onto its cloud. There was James Hamilton, AWS vice president and distinguished engineer, on stage with a beer toasting the death of the mainframe as part of a migration service offered by Infosys. The next morning Snowmobile, a 48-foot truck for transferring exabytes of data, was driven onto the convention hall floor to much fanfare.
Amazon is essentially reinventing what it means to be a giant tech vendor, except it’s doing it as a service provider, said Carl Brooks, an analyst with 451 Research. It could lead itself to be the next Oracle, not in terms of culture, but in terms of results and becoming a destination where customers are reliant on it.
“Once you start using Oracle you never stop; once you start using AWS you never stop,” Brooks said.
There also was a big push to expand services and functionality around AWS Lambda, Amazon’s serverless compute service that inherently couples a customer to the platform. Lambda is part of an emerging space of higher-end services that offer exciting new capabilities customers likely couldn’t do on their own, but they come with the tradeoff of rendering migration off AWS prohibitive, at best.
Amazon advises its customers against multiple clouds and says the way to get the most out of its platform is to start looking at services such as Lambda.
“The challenge with a multicloud strategy is customers tend to innovate to what we call the lowest common denominator,” said Jim Sherhart, director of product marketing at AWS.
Of course, cloud vendors have obvious reasons for their current perspectives. Microsoft and Google, both of which regularly mention meeting multicloud demands, need it to be a reality to siphon off business, while Amazon gains little by accommodating rivals that seek to chip away at its market dominance.
It’s also worth noting that Amazon is not alone in pushing higher-level services, nor does it force users to go that route. It also added export capabilities earlier this year to its Snowball hardware device that enables customers to pull data back on-premises.
Despite its public stance, there are indications Amazon would change course if change is thrust upon it. And if its embrace of hybrid cloud — albeit in a very Amazon-centric way — is any indicator, it could eventually be open to multicloud if its customers demand it.
“They have an intense need to stick to the narrative, which is Amazon is great, Amazon is good,” Brooks said. “When they talk about alternatives it’s only in the context of solving a point-specific problem, which is essentially what the VMware partnership is.
“They are able to roll with change, they just don’t want to admit it.”
As for Jassy’s comments, they should be taken more as a call for better understanding of the platform than as some kind of threat, said Jeff Cotten, senior vice president of multicloud at Rackspace, which provides managed services for AWS and Azure.
“They genuinely do understand that they can’t create everything, but I do think there’s frustration that there’s not a lot of deep experts and partners,” he said.
In fact, in Rackspace’s experience Amazon has been OK with multicloud, Cotten added.
“They’re also not going to be exclusive with any one partner, so they understand their partners may not be exclusive to them,” he said.
* – Statement changed after initial publication.
Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at firstname.lastname@example.org.
The latest AWS updates hit on some common themes to provide customers with more options around compute power and discounted purchasing.
Amazon’s new P2 instance type offers up to 16 graphics processing units (GPUs) and up to eight NVIDIA Tesla K80 Accelerators. GPUs provide massive amounts of compute power and throughput; they were first popular with gaming companies, and are particularly well-suited for cutting-edge workloads in finance and scientific research, according to Amazon.
This latest instance type, and the general trend among cloud vendors to offer larger virtual machines, speaks as much to legacy workloads and evolving customer demands as to anything else, said Carl Brooks, an analyst with 451 Research.
“It reflects the mainstreaming of cloud computing for the bastions of traditional IT, where the major reason [to release this instance type] is big, gnarly, hairy systems,” he said.
Customers must use Amazon’s Virtual Private Cloud to use the P2 instances. The instances provide up to 64 vCPUs, 732 gigabytes of memory, 192 gigabytes of GPU memory, nearly 40,000 cores and 20 gigabit network performance when grouped within a single availability zone. They are available in the US East, US West and Europe regions as on-demand instances, reserved instances or dedicated hosts.
Amazon isn’t the only company to offer GPU instances. Microsoft’s N-series GPU instance, added in preview in early August, offers up to 24 cores, four NVIDIA Tesla K80 Accelerators, 224 gigabytes of memory and a 1.44 terabyte solid-state drive. Its targeted applications are high-performance computing and visualization workloads.
More ways to reserve Reserved Instances
Amazon also updated its policies around its Reserved Instances, which provide a large discount for customers that reserve EC2 instances in a given availability zone up to three years in advance. It’s been a popular service since its introduction eight years ago and has expanded to include the ability to schedule and modify instances, and to buy and sell these instances on a marketplace.
The pricing model is closer to traditional enterprise IT buying, where a customer can amortize over the life of the contract and treat it more like an investment, compared with the pay-as-you-go philosophy that helped popularize cloud computing.
“The dirty little secret is most customers are not using cloud for elastic workloads but are instead pushing it as a replacement for their data center,” Brooks said.
The latest change targets customers more concerned with price than capacity: it waives the capacity reservation associated with Reserved Instances in exchange for letting the instance run in any availability zone within a region, with the discount applied automatically.
Also new, Convertible Reserved Instances don’t come with the greatest discount on instances but instead allow customers to change the instance type to use a newer iteration or take advantage of different specs, or factor in overall pricing drops on the platform after an instance is initially reserved.
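The Reserved Instance economics described in this section reduce to simple amortization arithmetic: spread the upfront payment over the term and compare the effective hourly rate to on-demand pricing. A sketch with made-up prices:

```python
def effective_hourly(upfront, hourly, term_years=3):
    """Effective hourly cost of a Reserved Instance: amortize the
    upfront payment over the term and add the recurring hourly rate."""
    hours = term_years * 365 * 24
    return upfront / hours + hourly

# Illustrative, made-up prices: $0.10/hr on demand versus a
# partial-upfront RI at $1,000 down and $0.03/hr over three years.
on_demand = 0.10
ri = effective_hourly(1000, 0.03)
savings = 1 - ri / on_demand  # fraction saved versus on-demand
```

The same arithmetic is what makes Convertible RIs interesting: a later price drop or instance-type change shifts where this break-even lands.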
The updates are available across the platform, with the exception of Convertible Reserved Instances, which are unavailable in GovCloud and the China region, though those two are expected to be added soon.
The new regional option will help with automation and flexibility, but Amazon risks making its Reserved Instances too confusing, said Meaghan McGrath, an analyst with Technology Business Research in Hampton, N.H. She pointed to how Google customers appreciate how the vendor automatically calculates sustained usage discounts and applies them to customer instances.
“My feeling is that those customers who understand the system and know how to trade or convert Reserved Instances are the same people who are good at negotiating time-share trades and vacation swaps into nice resorts,” McGrath said. “But the majority of people would really rather just go book a hotel on a discount site that’s going to give them a good deal on the place to stay.”
Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at email@example.com.
Increasingly, Amazon’s bread is buttered by its public cloud arm, Amazon Web Services. And while AWS entered the IaaS market 10 years ago with on-premises data centers in its crosshairs, evolution is pushing the cash cow into new revenue streams.
While AWS has expanded its range of services at several junctures over the years, two recent offerings — Amazon Lumberyard and the AWS IoT Button — venture off its traditional path, reaching for customers beyond its enterprise IT focus.
Announced in February, Amazon Lumberyard is a cross-platform, 3D game engine. AWS does not charge Lumberyard users, but expects to gain new paying AWS customers who use other services in conjunction with Lumberyard, which might include Amazon DynamoDB for storage, AWS Lambda and Simple Notification Service for in-game notifications and Amazon Cognito for identity management across devices. This is a straightforward foray into a new stream of revenue: Lumberyard is not a wide-sweeping or veiled play for enterprise applications; it’s intended to draw amateur and enterprise game developers to AWS.
The AWS IoT Button, on the other hand, seemed to be more of an experiment on the part of the cloud provider.
The AWS IoT Button offers developers a deliberately simple interface: three actions (click, double-click and long press). Enterprises diving headlong into the internet of things must unspool a world of actionable data, and that process is no easy task. But while the AWS IoT Button connects to AWS resources, its practicality may fall outside traditional business development. AWS itself lists among potential uses opening a garage door, calling a cab and tracking common household chores — none of which represents an enterprise use case, and all entirely within the capabilities of everyday devices.
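Those three actions arrive at AWS as a small JSON event that a Lambda function can dispatch on. A minimal handler sketch; the event fields match what the button publishes, while the mapped actions are purely illustrative:

```python
def handler(event, context=None):
    """Minimal Lambda handler for an AWS IoT Button event. The event
    carries serialNumber, clickType and batteryVoltage; the actions
    mapped here are stand-ins for real integrations."""
    actions = {
        "SINGLE": "open garage door",
        "DOUBLE": "close garage door",
        "LONG": "call a cab",
    }
    click = event.get("clickType", "SINGLE")
    return {
        "button": event.get("serialNumber"),
        "action": actions.get(click, "unknown"),
    }
```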
“Why can’t I [close a garage door or call a cab] on my smartphone, my computer or all the other ways I’m communicating with my IoT devices traditionally?” asked David Linthicum, SVP of Cloud Technology Partners and a TechTarget contributor. “From the use cases that [AWS] explained, I don’t really see anything that useful.”
That might be AWS’ gamble: play the long game, nudge a small batch of developers over to its cloud platform for some tinkering, with the hope that this will further infiltrate the enterprise. “Give it to a number of smart people, and they’ll start developing some applications that do some pretty innovative things using the device,” Linthicum said.
Case in point: Cambia Health Solutions, an insurance provider based in Portland, Ore., which managed to acquire an AWS IoT Button the day it became available, before the device quickly sold out. And the company has a plan for it.
“The AWS IoT Button can help IT scale and produce a lot of data,” said Tisson Mathew, vice president of cloud and consumer services at Cambia, at a Portland AWS Meetup last month. The company plans to combine the AWS IoT Button and AWS Lambda to help automate the back-end stack. Mathew didn’t share full details on how the company would use the AWS IoT Button, but he noted the button’s quick programmability will allow developers to point it at one function for a few hours, and then possibly reprogram it for another function a few hours later.
Even if the IoT Button doesn’t draw in new business, the possibilities for customer-driven big ideas are AWS’ potential windfall, tapping the collective brainpower of developers using cloud services and the devices. “I don’t think it’s there to, in essence, become a profit margin; I think it’s there to become a kind of promotional channel for AWS,” Linthicum said. “Developers will come up with some handy things, but I can’t see using it as my Netflix remote or customizing some kind of order event.”
Also interesting is how the AWS IoT Button is a doppelganger of the Amazon Dash button, which consumers use to order household goods from specific brands, Linthicum pointed out.
“It kind of marries Amazon.com with AWS,” Linthicum added. “It’s sometimes hard to understand how the connections exist. They seem like very separate entities and companies.”
The end game with Amazon Lumberyard is clear: reach for a new cloud audience. With the AWS IoT Button, AWS seems intent to get it into the hands of the savviest development teams to see where they can take it in their environments — and once a few strong use cases are out there, others are sure to follow.
Amazon Web Services updated its Terms of Service this week. This otherwise might not be notable, but buried among standard legalese clauses about usage terms and patching policies is a hidden gem concerning the use of its new 3D game engine, Lumberyard, should the world come to a messy end.
According to clause 57.10 of the TOS:
Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Amazon’s Elasticsearch Service shows promise for users who have long loathed the process of setting up Elasticsearch, Logstash and Kibana clusters in the cloud, but there’s a security snag for some users.
Users have alternatives when it comes to restricting access to the Elasticsearch cluster, either through Identity and Access Management (IAM) roles or IP-based whitelists. However, VPC support would be ideal, according to AWS shops.
“Yes, you can control access via IPs and AWS Accounts, etc. but this still means that all of my private subnet instances will need to traverse their public NAT gateway to communicate with the ES end point,” said one Reddit user of the lack of VPC support. “That [expletive] sucks, and defeats whatever performance/bandwidth benefits I can have with my own internal ES nodes.”
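An IP-based whitelist of the kind that commenter describes is expressed as a resource policy attached to the Elasticsearch domain. A sketch of building one; the domain ARN and CIDR ranges are placeholders, and the live call appears only in a comment:

```python
import json

def ip_whitelist_policy(domain_arn, cidrs):
    """Build an IAM resource policy restricting an Amazon ES domain
    to a set of source CIDR ranges; the result is the string passed
    as AccessPolicies when creating or updating the domain."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": domain_arn,
            "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
        }],
    })

# boto3.client("es").create_elasticsearch_domain(
#     DomainName="logs",
#     AccessPolicies=ip_whitelist_policy("arn:aws:es:...", ["203.0.113.0/24"]))
```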
Cloud consultants say some users are taking this in stride, but a significant number of customers will hold back from putting this version of Amazon ES into production.
“It’s 50/50 whether it meets customers’ full set of needs,” said Patrick McClory, director of automation and DevOps for Datapipe, a provider of managed hosting services for AWS based in Jersey City, N.J.
It’s early in the game for Amazon ES, however, and many customers who are still just getting their feet wet with the service understand that evaluation environments are rarely perfect, McClory said.
“It’s Amazon,” McClory added. “It’ll get VPC support soon, I have no doubt – and it’ll be an easy move to production then.”
Meanwhile, without the ability to house Elasticsearch clusters inside VPCs, users have to pay for data transfer into and out of EC2 instances that access the Elasticsearch Service at a rate of $0.01 per gigabyte. It’s a paltry charge at first glance and Amazon doesn’t charge for transfer in and out of ES itself, but those costs can add up, according to Theodore Kim, senior director of SaaS operations for Jobvite Inc., a talent acquisition software maker in San Mateo, Calif.
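At $0.01 per gigabyte, the "costs add up" arithmetic is easy to run; the daily volume below is an illustrative figure, not Jobvite's:

```python
def monthly_transfer_cost(gb_per_day, rate_per_gb=0.01):
    """Back-of-envelope monthly cost of EC2-to-Elasticsearch Service
    data transfer at the quoted $0.01/GB rate, over a 30-day month."""
    return gb_per_day * 30 * rate_per_gb

# A hypothetical 500 GB/day of log traffic works out to $150/month
# in transfer charges alone.
cost = monthly_transfer_cost(500)
```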
“On my ultimate wish list would be the ability to run Elasticsearch within our VPC,” Kim said. “I’m hoping that will eventually happen as it did with S3.”
Also, while Elasticsearch clusters and the Kibana plugin can be accessed with a few clicks, the “L” in the Elasticsearch / Logstash / Kibana (ELK) stack will still take some doing for Amazon ES users.
A Logstash plugin must be built, downloaded and installed into the user’s own Logstash deployment, according to Amazon’s documentation. The service also supports only one Logstash output plugin, according to Amazon’s Developer Guide.
Finally, Elasticsearch clusters today are limited to a maximum of 10 nodes per cluster, according to Amazon’s documentation.
Amazon declined to comment on record for this post.
AWS rolled out the cost management tools hinted at several months ago in its job listings, with support for budgeting and forecasts up to three months out, according to an Amazon blog post.
The new capabilities are an update to the AWS Cost Explorer tool, which previously provided only historical analysis for the last four months.
“The operations provided by these new tools replace the tedious and time-consuming manual calculations that many of our customers (both large and small) have been performing as part of their cost management and budgeting process,” said Amazon’s Jeff Barr in the company’s official blog. “After running a private beta with over a dozen large-scale AWS customers, we are confident that these tools will help you to do an even better job of understanding and managing your costs.”
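Budget definitions of this kind later solidified into the AWS Budgets API; as a rough sketch of what a programmatic monthly cost budget looks like in today's boto3 (the name, limit and account ID below are placeholders, and the live call is shown only in a comment):

```python
def monthly_cost_budget(name, limit_usd):
    """Shape of a Budget object for the AWS Budgets API; the name and
    dollar limit are caller-supplied placeholders."""
    return {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }

# boto3.client("budgets").create_budget(
#     AccountId="123456789012",
#     Budget=monthly_cost_budget("team-x-monthly", 500))
```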
While this appears to satisfy the requirements for the product set forth in the job descriptions, AWS has much more up its sleeve when it comes to customer service, the same job listings reveal:
Cost management is also among the tasks an AWS development team known as “Kumo” is charged with, according to the job postings. Trusted Advisor is listed on the AWS website as an example of an application the Kumo team has already developed. The AWS Kumo development team is working on three different greenfield projects, including customer-facing applications. Some of the challenges facing the team are in big data, social apps, machine learning and data mining, according to the AWS job postings. The job description also states that Amazon is filing patents in this area.
While data mining and machine learning could become customer-facing applications, the Kumo team is also focused on improving AWS customer support, based on a Kumo senior manager job description. One data analytics project that’s been proposed for AWS billing is integration with the company’s Kinesis data streaming service, to provide up-to-the-minute cost information.
This keeps with the theme of attracting more mission-critical enterprise applications to the public cloud, as do Amazon’s recent efforts to increase its transparency around cloud security.
Meanwhile, AWS also announced on its blog this week that it will open a new region in India in 2016. It’s unclear where exactly the new data centers will be located, but AWS already has Route 53 and CloudFront points of presence in Chennai and Mumbai.
Crittercism Inc. is on the verge of spinning out instances of its mobile application performance management (mAPM) solution to the new AWS data center in Frankfurt, Germany, according to co-founder Andrew Levy. In our recent conversation, Levy credited AWS with providing the backbone for Crittercism’s rapid growth from its 2010 start-up days. He also talked about trends in enterprise mobile application usage and development.
Expanding to the new AWS data center in Germany will help Crittercism attract European customers, which have security and disaster recovery concerns about using apps based in U.S. data centers. “You can imagine the cost, expense and time we’d have to put into building a data center in Europe,” Levy said. “Working with Amazon from day one has helped us scale quickly without incurring those costs.”
AWS’ global compute power provides the room for growth and transaction volume coverage needed for mAPM, which must deliver real-time data and transaction information for customers across many platforms, including iOS, Android, Windows Phone 8, hybrid and HTML5 apps. AWS’ many geographically dispersed data centers help Crittercism handle about three billion requests a day from one billion users.
Crittercism uses the AWS IaaS platform and a slew of other services, including RedShift for data warehousing. On the database side, Crittercism uses Amazon.com’s DynamoDB NoSQL database store. To find out how and why, check out TechTarget reporter Jack Vaughan’s article on NoSQL databases.
Crittercism and DevOps
“We give CIOs and DevOps teams the necessary data to do root cause analysis across their own and their service providers’ applications,” Levy said. At the end of the day, companies need to understand if the performance problem is on their end or with the service provider’s. “If it’s an internal bottleneck, be it a code defect or something else, DevOps get the info they need to make a fix quickly.” (For in-depth info, read about how a major telecom uses Crittercism’s mAPM products on SearchSOA.)
As recently as a year ago, Levy observed that many enterprises’ internal DevOps staffs did not have the skills needed to handle mobile application issues. Today, he and his colleagues find that DevOps pros know much more about mobile technologies. So do business executives, who see how these apps are affecting businesses.
“More companies are creating mobile centers of excellence and eliminating divisions between IT, development and businesses,” Levy said. Just a couple of years ago, developers mainly spurred their companies to evaluate Crittercism’s mAPM products. Now, inquiries are coming in from people in new roles, such as chief digital officers, mobile strategists and mobile architects.
Mobile computing has fostered other trends, such as the rising importance of an excellent user experience. “Consumers have high expectations around user experience,” Levy said. He cited a recent App Attention Span study that showed the majority of mobile app users deleting poorly-performing mobile apps. He sees great career opportunities for developers with strong user interface design skills.
Amazon may have just released the most confusingly named product to date, especially if you’re a Java developer.
At AWS re:Invent in Las Vegas, Amazon announced mainstream availability of its compute service AWS Lambda. The big sell on it is that it can recognize and respond immediately to server-side events, performing functions and processing data as soon as something interesting happens. For example, if an end user uploads an image to the system, admins can configure AWS Lambda to create thumbnail images, perform facial recognition processes and save the results in Amazon S3. And of course, the whole thing comes with the standard Amazon promise of being able to access high-availability systems, run processes within a millisecond of the event being triggered – with efficiency and cost effectiveness built-in.
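The thumbnail scenario boils down to an S3-event-triggered handler. A sketch of the event plumbing, with the actual image resize elided (it would typically use a library such as Pillow); the bucket and key names are placeholders supplied by the event:

```python
import os

def handler(event, context=None):
    """Sketch of the thumbnail example: react to an S3 PUT event and
    compute where the generated thumbnail would be written. The
    download/resize/upload steps are elided."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    root, ext = os.path.splitext(key)
    thumb_key = "thumbnails/{}-thumb{}".format(root, ext)
    # ...download from `bucket`/`key`, resize the image,
    # upload the result to `thumb_key`...
    return {"source": key, "thumbnail": thumb_key}
```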
But here’s the problem: This new AWS service shares its name with Project Lambda, the major Java initiative that brought functional programming to the JVM. As far as Java developers go, lambda is a concept tied very tightly to Java 8 and the evolution of the language. Of course, it’s not possible to call, “stamped it, no erasies” on the 11th letter of the Greek alphabet. So Amazon hasn’t done anything wrong, but it certainly is confusing for anyone who has done some Java programming in the past.
Looking at the sample code presented in Amazon’s online tutorial, Java isn’t even promoted as the development language. In fact, the sample “HelloWorld” application uses Node.js rather than Java, with a lambda function thrown in for good measure.
It’s hard to believe Amazon was unaware that naming its service after one of the biggest things to happen to the Java language in 10 years would cause confusion. But it’s not likely Amazon salespeople are targeting developers, so it should not hurt the service’s adoption. After all, it is no doubt an amazing service, especially if it can indeed pull together task scheduling, stream processing, data synchronization, and auditing and notification systems running in the Amazon cloud. It’s just a shame that they had to give it such a clashing and confusing name.