AWS Cloud Cover


July 11, 2017  8:31 PM

Memory issues shine light on hidden serverless environment costs

David Carty Profile: David Carty

This is a guest blog post by Bob Reselman, a nationally known developer, system architect, writer and editor. You can read more of his work at DevOpsAgenda.com.

Serverless computing is all the rage among developers, and with good reason.

A serverless environment is the new vista in modern application development. AWS has Lambda; Microsoft has Azure Functions; Google has Cloud Functions. These technologies are not going away. In fact, we’ll see a lot more work take place to create, build and test code in which the function is the unit of deployment.

Serverless-based applications are easy to architect and easy to deploy. A developer decides the services he needs, wires them up in a script, hits the deploy button and runs some tests — that’s it. Developers don’t need to worry about hardware, capacity or scalability; the serverless provider takes care of all that. Just pay the bill for the resources you use.

It couldn’t be simpler, right? Well, maybe not.

The architecture of a simple REST API implemented serverlessly in AWS is fairly straightforward. A set of RESTful endpoints uses Amazon API Gateway, and each endpoint is wired to an AWS Lambda function. One Lambda function uses Simple Storage Service (S3) as a data store, and the others store data in an Amazon DynamoDB database.

The API Gateway provides a way to get data in and out of the application; the functions handle computation, while S3 and DynamoDB provide the data storage. What’s not to like? AWS will scale up your application as needed. All you need to do is pay the bill.
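To make that wiring concrete, here is a minimal sketch of one such endpoint: a Python Lambda function behind API Gateway that writes incoming items to a hypothetical DynamoDB table named "orders." The table name and event shape are illustrative assumptions, not part of the architecture described above.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    # With Lambda proxy integration, API Gateway passes the HTTP body as a string.
    body = json.loads(event.get("body") or "{}")
    item_id = body.get("id", context.aws_request_id)  # fall back to the request ID
    # Store the raw payload as a JSON string alongside its key.
    table.put_item(Item={"id": item_id, "payload": json.dumps(body)})
    return {"statusCode": 201, "body": json.dumps({"stored": item_id})}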

So, let’s talk about that bill. Let’s use Will, a systems engineer, as an example.

Will is a low-level engineer who works on content delivery networks for a major telecom. He works closely with bare metal, well below the surface of the average developer’s day-to-day dealings with the cloud. In Will’s world, memory allocation counts.

Over the years, with the growing popularity of higher-level languages such as C# and Java, the common C library function malloc, which requests memory from the operating system, has become hidden inside language runtime engines such as the .NET Common Language Runtime and the Java virtual machine. But memory has to be allocated no matter what, and on Linux the way a process typically gets heap memory is through an allocator such as malloc:

#include <stdlib.h>

char *str;
str = (char *) malloc(15);  /* request 15 bytes from the allocator */
free(str);                  /* return the memory when finished */

Here is where it gets interesting: the efficiency of malloc varies depending on your implementation. Standard malloc is inefficient in situations with a high degree of concurrency in multiprocessor environments, so Will won’t use it. It locks up memory — used or unused — and places extra burden on the CPU. Will prefers tcmalloc, created by Google, which exposes configuration capabilities that allow memory allocation to work more efficiently. And it avoids wasteful CPU cycling.

So, what does a memory allocation binary have to do with your AWS bill? It actually has a lot to do with it.

AWS makes money on Lambda by billing you for the time it takes to execute code, which translates into CPU utilization — though you also get billed by your request volume. Thus, every piece of code in your Lambda function that allocates memory is subject to the underlying memory allocation routine, which is most often malloc. That means you might have created code that runs squeaky clean on your local machine or even in a private cloud. But when it gets to AWS, it kills the CPUs.

The provider’s memory allocation infrastructure might not be optimized, so wasteful cycles get spun and you get billed. It’s just like giving a package to a messenger and letting him determine the best route, which might include a lot of stop lights. You pay for the messenger’s time no matter the route efficiency.
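To see how execution time turns into dollars, here is a rough back-of-the-envelope calculation in Python. The per-request and per-GB-second rates below reflect Lambda's published pricing around the time of this post, but treat them, along with the invocation counts and durations, as illustrative assumptions.

# Rough Lambda cost model: duration x memory x rate, plus a per-request charge.
GB_SECOND_RATE = 0.00001667      # assumed price per GB-second
REQUEST_RATE = 0.20 / 1_000_000  # assumed price per request

invocations = 10_000_000   # illustrative monthly invocation count
memory_gb = 0.512          # a 512 MB function
avg_duration_s = 0.300     # inefficient allocation pushes this number up

compute_cost = invocations * memory_gb * avg_duration_s * GB_SECOND_RATE
request_cost = invocations * REQUEST_RATE
print(f"Compute: ${compute_cost:.2f}   Requests: ${request_cost:.2f}")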

Of course, I am not saying AWS is a nefarious agent; quite the opposite. But the serverless environment is theirs to run, and the IT shop doesn’t have a lot of say in the matter other than region selection.

Without the ability to optimize a serverless environment to accommodate computationally intensive applications, there is a real financial risk for enterprise IT teams. Hopefully, the major players will realize that giving users more granular optimization capabilities for cloud services offers a competitive advantage. Otherwise, engineers will fly blind without the aid of instruments on the control panel. And, as we’ve learned in the field, when disaster looms, you can’t fix what you can’t see.

June 30, 2017  6:30 PM

Greengrass extends reach of AWS functionality

David Carty Profile: David Carty

Edge computing and IoT continue to infiltrate the enterprise, prompting AWS to release several services at re:Invent 2016. One service, AWS Greengrass, enables AWS Lambda functions on devices and ties together IoT and serverless technologies. Seven months after pulling back the curtain on the service, Greengrass is generally available in the US-East and US-West regions.

As enterprises invest more heavily in IoT-connected devices, they want more connectivity and compute capabilities associated with them. Greengrass delivers limited AWS programming to groups of devices, enabling them to respond to real-world circumstances, such as a faulty internet connection.

AWS Greengrass enables a device to perform functions on data and securely transmit that data to the cloud for additional analytics and storage. Developers can combine Lambda with the AWS Greengrass Core SDK to execute serverless functions locally, establish secure connections from the core device to the cloud and support MQTT messaging on devices.
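As a rough illustration of that model, the sketch below shows a Lambda function running locally on a Greengrass core and publishing an MQTT message through the Greengrass Core SDK; the topic name and event shape are assumptions for the example, not part of the service description.

import json
import greengrasssdk  # available inside the Greengrass core's local Lambda runtime

client = greengrasssdk.client("iot-data")

def handler(event, context):
    # Process a local device reading even if the internet connection is down,
    # then publish the result over MQTT on a local (illustrative) topic.
    client.publish(topic="sensors/processed", payload=json.dumps(event))
    return {"status": "published locally"}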

The service also opens up hybrid cloud possibilities — another recent area of emphasis for AWS. Greengrass is one of the few Amazon products that can run on premises. It runs on anything from lightweight devices to more powerful computing systems, enabling IT administrators to use the AWS programming model locally, if they choose.

Developers can access Greengrass from the AWS Management Console, API or AWS Command Line Interface, and then define and manage Greengrass groups — devices connected to each other.

In addition to Greengrass, AWS added several features and support this month, including plans for a new data center region. Here’s what you might have missed.

New AWS features and support

  • DAX also goes GA. Amazon DynamoDB Accelerator (DAX), a caching service for eventually consistent, read-heavy workloads on DynamoDB, is generally available. DAX reportedly improves DynamoDB performance up to 10 times, and it is both fully-managed and compatible with existing DynamoDB API calls, lowering the barrier for developers to roll it into their deployments. DAX is available in the US-East-1 (Northern Virginia), US-West-1 (Northern California), US-West-2 (Oregon), EU-West-1 (Ireland) and Asia Pacific-Northeast-1 (Tokyo) regions.
  • New region in Hong Kong. AWS will add a new geographic region in Hong Kong in 2018. The region, AWS’ eighth in Asia Pacific, appeals to local public and private sector clients as well as Asia-based businesses building multi-zone, fault-tolerant applications. The region expands AWS’ global footprint to 20 regions; the public cloud provider will open other regions and availability zones in China, France and Sweden in 2017 and 2018.
  • Rekognition adds region, feature. Amazon Rekognition, an image recognition and management service, is available in the AWS GovCloud (US) region. A new celebrity recognition feature enables the service to identify an image of a famous person by comparing it to a global list of thousands of celebrities across politics, entertainment, business, sports and media. The feature expands facial recognition capabilities for developers, who could roll the technology into mobile applications. It keeps pace with a similar tool within the Microsoft Cognitive Services portfolio.
  • X-Ray expands latency monitoring. Two new features in the AWS X-Ray service help developers analyze and debug distributed applications. The Visual Node and Edge latency distribution graphs, accessible in the Service Details sidebar, visualize and track latency among services; they also show current latency from the perspectives of clients, services and microservices. Developers can access the features via API call or the X-Ray console.
  • Device authentication for Amazon Workspaces. AWS’ desktop as a service offering, Amazon Workspaces, added device authentication for users in BYOD work environments. Administrators establish policies to manage devices and client access, and digital certificates grant or block access to certain operating systems.
  • AWS WAF adds more IP address control. AWS Web Application Firewall (WAF), a service that protects web-based apps from common malicious attacks, added a rate-based rules feature. Previously, security ops pros could define rules for requests with certain criteria, such as IP address or the size of the request, and choose to allow, block or count those requests. Rate-based rules expand the controlled response to cover a large number of requests from a particular IP address, which could signal a DDoS attack or something more benign, such as a software integration that cannot connect to the app. SecOps teams use rules to add or remove an IP address from a blacklist, set higher request rates for technology partners and set CloudWatch metrics — including alarms that can fire off AWS Lambda functions — to monitor each rule. SecOps can also combine rate-based rules with other WAF conditions to establish more sophisticated rate-based policies. A minimal boto3 sketch of creating a rate-based rule appears after this list.
  • Additional AWS Direct Connect locations, monitoring capabilities. AWS Direct Connect, which targets hybrid clouds, establishes secure, dedicated network connections from on-premises resources to the AWS cloud — with increased bandwidth and reduced network costs compared to web-based connections. The list of available locations for AWS Direct Connect now totals 60 — with new ones across North America and Europe. This was the service’s second expansion this year. Admins also can now add Amazon CloudWatch monitoring to all locations (except China), to monitor physical connections to the cloud and set up alarms and triggers through Amazon Simple Notification Service (SNS).
  • Lightsail available in nine new regions. Launched at re:Invent 2016 in just the US-East region, Lightsail, AWS’ virtual private server service, has expanded to nine more regions across the United States, Europe and Asia Pacific. Lightsail offers simplified servers with managed infrastructure for businesses with more basic computing needs or limited budgets.
  • CloudTrail improves API tracking. Admins use AWS CloudTrail to monitor AWS API calls, and AWS recently added to those tracking capabilities. The CloudTrail console’s API Activity History page now includes API calls to CloudWatch Events, Elastic Compute Cloud (EC2), DynamoDB, Cognito, Kinesis, CloudHSM and Storage Gateway. This addition centralizes API logs and removes the need to retrieve CloudWatch Events APIs from Simple Storage Service (S3) buckets.
  • EC2 Systems Manager integrates with S3. Developers can query and visualize inventory data across multiple regions and accounts with AWS’ new integration between Amazon EC2 Systems Manager and S3. Developers enable an S3 bucket to automatically collect inventory data, which eliminates the need to create custom scripts. They can then use Amazon Athena to query the data or Amazon QuickSight to visualize it.
  • Convert legacy data warehouses to AWS. The AWS Schema Conversion Tool added more support for legacy data warehouses. IT teams can now export data to Amazon Redshift from Teradata (versions 13 and above) and Oracle Data Warehouse (versions 10g and above).
  • AppStream added user management, web portals. Amazon AppStream 2.0 now enables admins to create and manage users without an identity federation tool. Admins grant user access with the User Pools tab in the AppStream console. Users log in via a web portal to choose which approved applications to use.
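Referring back to the WAF item above, here is a minimal boto3 sketch of creating a rate-based rule. The rule name, metric name and request limit are illustrative values, not AWS defaults.

import boto3

waf = boto3.client("waf")

# WAF Classic requires a change token for each mutating call.
token = waf.get_change_token()["ChangeToken"]

rule = waf.create_rate_based_rule(
    Name="ExampleRateLimit",        # illustrative rule name
    MetricName="ExampleRateLimit",
    RateKey="IP",                   # requests are counted per source IP
    RateLimit=2000,                 # act on IPs that exceed this request rate
    ChangeToken=token,
)
print(rule["Rule"]["RuleId"])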


June 23, 2017  3:41 PM

AWS GovCloud lags behind private sector, but that’s changing

Trevor Jones Trevor Jones Profile: Trevor Jones
Uncategorized

In the fast-paced world of public cloud, if AWS is the hare, AWS GovCloud is the tortoise.

GovCloud launched in 2011 to meet stricter regulatory requirements for federal, state and local government.  Since then, AWS has added dozens of new services and nine new private-sector regions across the globe. But AWS GovCloud was slow to incorporate new services, and it existed as only a single West Coast region – until now.

AWS will add a second GovCloud region on the East Coast in 2018. This comes on the heels of the public cloud market leader’s increased efforts to meet regulatory standards and improve feature parity between its commercial and public sector offerings.

And while AWS GovCloud might only serve as a curiosity to the private sector, its continued expansion speaks to a broader trend of the public cloud as an accepted place for workloads of all kinds.

“From a technology perspective, [GovCloud] has grown leaps and bounds, even over the last two or three years,” said Tim Israel, director of cloud engineering at Enlighten IT Consulting, a GovCloud reseller that works primarily with the Department of Defense.

AWS GovCloud has seen a 185% compound annual growth rate since it opened in 2011, according to Amazon. Some of the most important additions include new instance types already available in the commercial regions and the addition of services such as AWS Lambda, which arrived in May, more than two years after the service first rolled out elsewhere. There’s also a growing list of accreditations for various services, which are often more important to regulated IT shops than the services themselves.

The real potential benefit of the new East Coast region — one that regular AWS users have had access to since 2015 — is disaster recovery across regions. Currently, AWS GovCloud users can replicate data across data centers within the region, but that’s probably not enough redundancy for mission-critical applications.

For example, users of the standard AWS public cloud who had built in regional failover saw services remain uninterrupted when the US-East-1 region went down earlier this year. Those lacking cross-region replication couldn’t access applications housed in US-East-1 for up to four hours.

Still, AWS has a long way to go before there’s true parity between the two iterations of its cloud. Only 35 of its 92 services are available on GovCloud. The private cloud that AWS built specifically for the CIA is believed to have an even smaller feature set. All other U.S. regions offer at least 50 services, and across AWS’ global footprint, only the China region, which is operated by Sinnet, has fewer available services.

According to Amazon, the services available in AWS GovCloud align with the needs of government, as indicated by public sector customers.

AWS GovCloud is also generally more expensive than the commercial version. Comparable compute resources cost more in that region than they do in standard AWS regions; it’s also more expensive to transfer data out of the cloud.

Despite those limitations, AWS GovCloud does have benefits for its targeted audience. It meets certain regulatory standards that other regions do not. It’s also maintained only by U.S. citizens and provides encrypted access that meets federal guidelines.

Unlike the private sector, government agencies have to go through a competitive bidding process that puts roughly two years between a project’s conception and the actual purchase. And given that two years ago was about the time when enterprises really started to embrace the public cloud, AWS GovCloud could also be gaining steam at just the right time.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


May 31, 2017  6:24 PM

AWS slashes Reserved Instances prices

David Carty Profile: David Carty

Price reductions are less common for compute resources than they were in the early days of cloud computing. But AWS customers can still find value in occasional Elastic Compute Cloud (EC2) price cuts.

In May, AWS reduced prices on a slew of one-year standard and three-year convertible EC2 Reserved Instances. AWS customers can save 9% to 17% on standard Reserved Instances, depending on the region, operating system and instance type — discounts apply to C4, M4, R4, I3, P2, X1 and T2 types. Discounts for convertible instance types range up to 21%.

Convertible Reserved Instances allow a user to change the instance family as application needs evolve, which provides an extra level of workload flexibility while still locking the user into a contract for instance capacity.

AWS also introduced no-upfront pricing for three-year Reserved Instances, a feature previously reserved for one-year terms. It also lowered On-Demand and Reserved pricing for M4 Linux instances.
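For a sense of what those percentages mean in dollars, here is a simple illustration in Python. The on-demand rate is an assumption, not a published AWS price, and the 17% figure is the top of the standard Reserved Instance discount range cited above.

HOURS_PER_YEAR = 8760

on_demand_rate = 0.10   # assumed on-demand $/hour for some instance type
ri_discount = 0.17      # top of the standard RI discount range cited above

on_demand_annual = on_demand_rate * HOURS_PER_YEAR
reserved_annual = on_demand_annual * (1 - ri_discount)
print(f"On demand: ${on_demand_annual:,.2f} per year")
print(f"Reserved:  ${reserved_annual:,.2f} per year")
print(f"Savings:   ${on_demand_annual - reserved_annual:,.2f}")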

New features and support

  • Cost allocation tags for Elastic Block Store snapshots. Users can assign costs to a particular project or department via cost allocation tags. Navigate the AWS Management Console’s tag editor feature to find the necessary snapshot, or backup of an Elastic Block Store (EBS) volume, and apply tags. Users can also create tags via a script command or function call; a short boto3 sketch appears after this list. Manage and activate cost allocation tags in the billing dashboard, then monitor tagged snapshots in the Cost Explorer feature.
  • Lambda support for AWS X-Ray. The AWS X-Ray service, which analyzes performance of microservices-based applications, now supports AWS Lambda. Developers can enable active function tracing within Lambda to activate X-Ray or update functions in the AWS Command Line Interface. AWS X-Ray processes traces of functions between services and generates visual graphs to ease debugging.
  • New features for IAM policy summaries. Administrators can evaluate and troubleshoot AWS Identity and Access Management (IAM) permissions with three new IAM policy summary features. The new resource summaries display resource types, regions and account IDs to provide a full list of defined resources for each policy action. Admins can evaluate which services or actions a policy denies, and see which possible actions remain. They also can identify typos and other errors in policies by seeing which services and actions IAM fails to recognize.
  • Support for SAP clusters. AWS unveiled extended support for larger SAP clusters at SAP’s Sapphire Now conference. In addition to expanding its X1 instance type to accommodate larger SAP applications, AWS revealed plans to expand RAM on its virtual servers to better support SAP workloads in the near future.
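As a follow-up to the cost allocation tag item above, this is one way to tag a snapshot from a script using boto3; the snapshot ID and tag values are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Apply a cost allocation tag to an existing EBS snapshot (placeholder ID).
ec2.create_tags(
    Resources=["snap-0123456789abcdef0"],                 # placeholder snapshot ID
    Tags=[{"Key": "project", "Value": "billing-demo"}],   # illustrative tag
)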


April 14, 2017  6:03 PM

AWS pushes for database domination

David Carty Profile: David Carty

AWS has made it a priority to win over customers in the database market, specifically Oracle shops. And the public cloud provider has a new weapon in that battle — an upgraded primary database conversion tool.

The AWS Database Migration Service (DMS) now supports NoSQL databases, enabling developers to move databases from the open source MongoDB platform onto DynamoDB, Amazon’s native NoSQL database service. AWS DMS also supports Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE and SQL Server as migration sources and targets. The cloud provider could target other NoSQL database providers for support in the future.

In addition to homogeneous migrations, the AWS Schema Conversion Tool converts database schemas to enable migration from a disparate database platform to a target on Amazon Relational Database Service, such as from Oracle to Amazon Aurora.

AWS also recently added support for data warehouse conversions from Oracle and Teradata to Amazon Redshift, a swift response to an Oracle licensing update that hiked fees for Oracle cloud users.

Despite the potential of lock-in, enterprises are interested in the ability of the DynamoDB platform to integrate database information with other AWS tools. And AWS is happy to beat its chest over winning these database customers — it passed 22,000 database migrations in late March, AWS CEO Andy Jassy claimed on Twitter.

It’s getting crowded in the AWS toolbox

Among AWS’ slate of recent service and tool updates, here are several other noteworthy tidbits:

  • A Resource Tagging API. IT teams can now apply tags, remove tags, retrieve a list of tagged resources with optional filtering and retrieve lists of tag keys and values via API. The new API enables developers to code tags into resources instead of applying them from the AWS Management Console. The Resource Tagging API is available through the newest versions of AWS SDKs and the AWS Command Line Interface. The new API functions apply across dozens of resource types and services. The cloud provider also added the ability to specify tags for Elastic Compute Cloud instances and Elastic Block Store volumes within the API call that creates them. A short boto3 sketch of the tagging API appears after this list.
  • Support for CloudWatch Alarms on Dashboard Widgets. Added functionality of CloudWatch Alarms for Dashboard Widgets provides AWS users with at-a-glance visibility into potential performance issues. SysOps can view CloudWatch metrics and Alarms in the same widget, and view widgets that display metrics according to number (value of a metric), line graph or stacked area graphs (layering one metric over another).
  • Cross-region, cross-account capabilities for Amazon Aurora. IT teams can copy automatic or manual snapshots from one region to another and create read replicas of Aurora clusters in a new region. These features can improve disaster recovery posture or expand read operations to users in geographically-close regions. Additionally, users can share encrypted snapshots across AWS accounts, which enables them to copy or restore a snapshot depending on encryption configuration. AWS also expanded Aurora availability to the US-West region, and added support for t2.small instances.
  • Amazon Elastic MapReduce instance fleets. This addition lets ops teams specify up to five instance types per fleet with weighted capacities, availability zones and a mix of on-demand or spot pricing. EMR instance fleets enable ops teams to craft a strategy for how they want to provision and geographically place capacity, and how much they want to pay for it. EMR automatically spins up the required capacity to support big data frameworks such as Apache Hadoop, Spark or HBase.
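Circling back to the Resource Tagging API item above, a minimal boto3 sketch might look like this; the resource ARN, account ID and tag values are placeholders.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Tag a resource by ARN (placeholder), then list everything carrying that tag.
tagging.tag_resources(
    ResourceARNList=["arn:aws:ec2:us-east-1:123456789012:volume/vol-0abc"],  # placeholder ARN
    Tags={"team": "data-eng"},
)
resources = tagging.get_resources(TagFilters=[{"Key": "team", "Values": ["data-eng"]}])
print([r["ResourceARN"] for r in resources["ResourceTagMappingList"]])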


March 2, 2017  9:58 PM

How Amazon S3 toppled AWS’ house of cards

Trevor Jones Trevor Jones Profile: Trevor Jones

AWS attributed Tuesday’s extended disruption to outdated processes and human error, according to a postmortem published Thursday.

The post, which classified the incident as a “service disruption,” states that the problem started at 12:37 p.m. ET when an authorized Amazon Simple Storage Service (Amazon S3) team attempted to resolve an issue that had caused the S3 billing system to behave slower than expected. One of the team members, following AWS guidelines, attempted to execute a command that would remove some of the servers for an S3 subsystem, but incorrectly entered one of the inputs.

As a result, too many servers were taken down, including those that supported two additional S3 subsystems that manage metadata and location information, as well as the allocation of new storage. Compounding problems and creating a Catch-22 scenario, the latter subsystem required the former to be in operation. The capacity removal required a full restart, and Amazon S3 was unable to service requests.

It appears the outage was due to fat fingering under pressure with an arcane, hardly used command, said Mike Matchett, senior analyst and consultant at Taneja Group.

“It need not ever have happened, but was too easy a mistake to make,” he said. “Once made, it cascaded into a major outage.”

The impact spread to additional services in the US East-1 region that rely on Amazon S3, including the S3 console, new Elastic Compute Cloud instance launches, Elastic Block Store volumes and AWS Lambda.

AWS’ system is designed to support the removal of significant capacity and is built for occasional failure, according to the company. It has performed this particular operation since S3 was first built, but a complete restart of the affected subsystems hadn’t been done in one of the larger regions in years, the company said.

It’s surprising that a system like AWS is vulnerable at this scope to manual errors, said Carl Brooks, an analyst with 451 Research. The initial failure is understandable, but the compounding impact shows a larger structural flaw in how AWS manages uptime, he added.

“It says something for one manual process to have that much disruptive effect,” Brooks said. “They claim they’ve been working against [these types of failures] all this time, and [it’s] clear that work has not been completed.”

Amazon S3 fully recovered by 4:54 p.m. ET, and related services recovered afterward, depending on backlogs.

AWS said it has since modified the tool it uses to perform the debugging task to limit how fast it can remove capacity and to add safeguards that prevent a subsystem from going below its minimum capacity. It’s also reviewing other operational tasks for similar architectural problems, and it plans to improve the recovery time of critical S3 subsystems by breaking services into smaller segments and limiting the blast radius of a failure, the company said in the postmortem. That work had been planned for S3 later this year, but the timeline was pushed up following this incident. Company officials did not comment specifically on when the work would be completed.
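The safeguards AWS describes amount to rate limiting plus a capacity floor. The sketch below is a generic illustration of that idea in Python; it is not AWS' actual tooling, and all of the numbers are invented.

MIN_CAPACITY = 100          # invented minimum number of servers a subsystem needs
MAX_REMOVAL_PER_RUN = 10    # invented cap on how many servers one command may remove

def remove_capacity(current_servers: int, requested: int) -> int:
    """Return the new server count after a guarded capacity-removal request."""
    allowed = min(requested, MAX_REMOVAL_PER_RUN)        # rate-limit the removal
    if current_servers - allowed < MIN_CAPACITY:         # never drop below the floor
        allowed = max(0, current_servers - MIN_CAPACITY)
    return current_servers - allowed

# A fat-fingered request for 500 servers removes only 10, and never breaches the floor.
print(remove_capacity(current_servers=120, requested=500))   # -> 110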

Critical operations that involve shutting down key resources should be fully scripted and tested often, and it shows a level of hubris that AWS hadn’t tried this restart in years, Matchett said. He was also critical of having the health dashboard connected to S3, saying the management plane for high-availability workloads should be completely separate from the resources it controls.

“It really looks like AWS has built a bigger house of cards than even they are aware of,” Matchett said.

Changes also have been made to the Service Health Dashboard to run across multiple regions.


December 8, 2016  5:34 PM

Amazon doesn’t want IT thinking multicloud

Trevor Jones Trevor Jones Profile: Trevor Jones

Amazon has come far to accommodate customers running workloads on-premises; the same can’t be said, however, for how it addresses its customers using other public clouds.

At re:Invent last week, Amazon punctuated its softened stance on hybrid cloud — at least as a stopgap measure — as it touted its recent partnership with VMware and rolled out a series of services and products for use outside its own data centers. But when the notion of using multiple public clouds was broached, Amazon’s answer was simple: Don’t do it.

It’s hard to gauge how many companies actually deploy workloads in multiple infrastructure-as-a-service public clouds. It’s a practice typically reserved for larger enterprises that hedge their bets and avoid lock-in by storing backups on services such as Microsoft Azure and Google Cloud Platform, or for forward-leaning users that target specific tools for specific applications. It’s still early days as true competitors start to emerge, but there’s little evidence of workloads spanning clouds — much as bursting was more hype than reality as hybrid clouds emerged.

Amazon, through a series of subtle and not-so-subtle messages, advised users that the grass is not greener elsewhere. AWS CEO Andy Jassy reportedly dissuaded vendors at the partner summit from a “strategy of hedging.” He didn’t ask partners not to work with other cloud providers, but he did say Amazon would direct business to those willing to do the tightest integration with its platform.* Amazon CTO Werner Vogels laid out a series of new products intended to fill in gaps in its data services, and talked about offering primitives and designing a “comprehensive data architecture” to meet users’ every need.

“We give you the choice of how you really want to do your analytics,” Vogels said in his keynote. “We’ll make sure you can do all of this on AWS — you never have to go outside.”

There were splashy roll-outs aimed at getting as much data as possible out of customers’ data centers and onto its cloud. There was James Hamilton, AWS vice president and distinguished engineer, on stage with a beer, toasting the death of the mainframe as part of a migration service offered by Infosys. The next morning, Snowmobile, a 48-foot truck for transferring exabytes of data, was driven onto the convention hall floor to much fanfare.

Amazon is essentially reinventing what it means to be a giant tech vendor, except it’s doing it as a service provider, said Carl Brooks, an analyst with 451 Research. It could become the next Oracle, not in terms of culture, but in terms of results, a destination whose customers come to rely on it.

“Once you start using Oracle you never stop; once you start using AWS you never stop,” Brooks said.

There also was a big push to expand services and functionality around AWS Lambda, Amazon’s serverless compute service that inherently couples a customer to the platform. Lambda is part of an emerging space of higher-end services that offer exciting new capabilities customers likely couldn’t do on their own, but they come with the tradeoff of rendering migration off AWS prohibitive, at best.

Amazon advises its customers against multiple clouds and says the way to get the most out of the platform is to start looking at services such as Lambda.

“The challenge with a multicloud strategy is customers tend to innovate to what we call the lowest common denominator,” said Jim Sherhart, director of product marketing at AWS.

Of course, cloud vendors have obvious reasons for their current perspectives. Microsoft and Google, both of which regularly mention meeting multicloud demands, need it to be a reality to siphon off business, while Amazon gains little by accommodating rivals that seek to chip away at its market dominance.

It’s also worth noting that Amazon is not alone in pushing higher-level services, nor does it force users to go that route. It also added export capabilities earlier this year to its Snowball hardware device that enables customers to pull data back on-premises.

Despite its public stance, there are indications Amazon would change if change is thrust upon it. And if its embrace of hybrid cloud — albeit in a very Amazon-centric way — is any indicator, it could eventually be open to multicloud if its customers demand it.

“They have an intense need to stick to the narrative, which is Amazon is great, Amazon is good,” Brooks said. “When they talk about alternatives it’s only in the context of solving a point-specific problem, which is essentially what the VMware partnership is.

“They are able to roll with change, they just don’t want to admit it.”

As for Jassy’s comments, they should be taken more as a call for a better understanding of the platform than as some kind of threat, said Jeff Cotten, senior vice president of multicloud at Rackspace, which provides managed services for AWS and Azure.

“They genuinely do understand that they can’t create everything, but I do think there’s frustration that there’s not a lot of deep experts and partners,” he said.

In fact, in Rackspace’s experience Amazon has been OK with multicloud, Cotten added.

“They’re also not going to be exclusive with any one partner, so they understand their partners may not be exclusive to them,” he said.

* – Statement changed after initial publication.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


October 4, 2016  4:15 PM

AWS users get more options for buying and sizing instances

Trevor Jones Trevor Jones Profile: Trevor Jones

The latest AWS updates hit on some common themes to  provide customers with more options around compute power and discounted purchasing.

Amazon’s new P2 instance type offers up to 16 graphics processing units (GPUs) across up to eight NVIDIA Tesla K80 Accelerators. GPUs provide massive amounts of compute power and throughput; they were first popular with gaming companies, and are particularly well-suited for cutting-edge workloads in finance and scientific research, according to Amazon.

This latest instance type, however, and the general trend among cloud vendors to offer larger virtual machines, is as much about targeting legacy workloads and the evolving demands from customers, said Carl Brooks, an analyst with 451 Research.

“It reflects the mainstreaming of cloud computing for the bastions of traditional IT, where the major reason [to release this instance type] is big, gnarly, hairy systems,” he said.

Customers must use Amazon’s Virtual Private Cloud to use the P2 instances. The instances provide up to 64 vCPUs, 732 gigabytes of memory, 192 gigabytes of GPU memory, nearly 40,000 CUDA cores and 20-gigabit network performance when grouped within a single availability zone. They are available in the US East, US West and Europe regions as on-demand instances, reserved instances or dedicated hosts.
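As a minimal illustration of that VPC requirement, the boto3 sketch below launches the largest P2 size into a VPC subnet; the AMI and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: a GPU-ready AMI
    InstanceType="p2.16xlarge",           # largest P2 size: 16 GPUs on 8 K80 cards
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder: a subnet in your VPC
)
print(response["Instances"][0]["InstanceId"])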

Amazon isn’t the only company to offer GPU instances. Microsoft’s N-series GPU instance, added in preview in early August, offers up to 24 cores, four NVIDIA Tesla K80 Accelerators, 224 gigabytes of memory and a 1.44 terabyte solid-state drive. Its targeted applications are high-performance computing and visualization workloads.

More ways to reserve Reserved Instances

Amazon also updated its policies around Reserved Instances, which provide a large discount for customers that reserve EC2 instances in a given availability zone for terms of up to three years. It’s been a popular service since its introduction eight years ago and has expanded to include the ability to schedule and modify instances, and to buy and sell these instances on a marketplace.

The pricing model is closer to traditional enterprise IT buying, where a customer can amortize over the life of the contract and treat it more like an investment, compared with the pay-as-you-go philosophy that helped popularize cloud computing.

“The dirty little secret is most customers are not using cloud for elastic workloads but are instead pushing it as a replacement for their data center,” Brooks said.

The latest change targets customers more concerned with price than capacity: it waives the capacity reservation associated with Reserved Instances in exchange for the ability to run that instance in any availability zone within a region, with the discount applied automatically.

Also new, Convertible Reserved Instances don’t come with the greatest discount on instances but instead allow customers to change the instance type to use a newer iteration or take advantage of different specs, or factor in overall pricing drops on the platform after an instance is initially reserved.

The updates are available across the platform network, with the exception of Convertible Reserved Instances which are unavailable in GovCloud and the China region, though those two are expected to be added soon.

The new regional option will help with automation and flexibility, but Amazon  risks making its Reserved Instances too confusing, said Meaghan McGrath, an analyst with Technology Business Research in Hampton, N.H. She pointed to how Google customers appreciate how the vendor automatically calculates sustained usage discounts and applies them to customer instances.

“My feeling is that those customers who understand the system and know how to trade or convert Reserved Instances are the same people who are good at negotiating time-share trades and vacation swaps into nice resorts,” McGrath said. “But the majority of people would really rather just go book a hotel on a discount site that’s going to give them a good deal on the place to stay.”

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at tjones@techtarget.com.


June 20, 2016  5:30 PM

Lumberyard, IoT Button aim for new AWS customers

David Carty Profile: David Carty
AWS, Internet of Things

Increasingly, Amazon’s bread is buttered by its public cloud business, Amazon Web Services. And while AWS entered the IaaS market 10 years ago with on-premises data centers in its crosshairs, evolution is pushing the cash cow further into new revenue streams.

While AWS has expanded its range of services at several junctures over the years, two recent offerings — Amazon Lumberyard and the AWS IoT Button — venture off that traditional path, reaching for customers beyond its focus on enterprise IT.

Announced in February, Amazon Lumberyard is a cross-platform, 3D game engine. AWS does not charge Lumberyard users, but expects to gain new paying AWS customers who use other services in conjunction with Lumberyard, which might include Amazon DynamoDB for storage, AWS Lambda and Simple Notification Service for in-game notifications and Amazon Cognito for identity management across devices. This is a straightforward foray into a new stream of revenue — Lumberyard is not a wide-sweeping or veiled play for enterprise applications; it’s intended to draw amateur and enterprise game developers to AWS.

The AWS IoT Button, on the other hand, seemed to be more of an experiment on the part of the cloud provider.

The AWS IoT Button offers a similar ease of use for developers, who can perform three actions with the button (click, double-click and long press). Enterprises diving headlong into the internet of things must unspool a world of actionable data, and that process is no easy task. But while the AWS IoT Button connects to AWS resources, its practicality may fall outside traditional business development. AWS itself lists among potential uses opening a garage door, calling a cab and tracking common household chores — none of which represents an enterprise use case, and all of which fall entirely within the capabilities of everyday devices.
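To ground those three actions, here is a minimal sketch of a Lambda function the button might trigger. The event's clickType field (SINGLE, DOUBLE or LONG) follows the button's documented payload, while the actions themselves echo the examples AWS lists and are purely illustrative.

def handler(event, context):
    # The IoT Button sends a payload whose clickType distinguishes the press.
    click = event.get("clickType", "SINGLE")
    if click == "SINGLE":
        return {"action": "open the garage door"}   # illustrative actions only
    if click == "DOUBLE":
        return {"action": "call a cab"}
    return {"action": "log a household chore"}      # LONG press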

“Why can’t I [close a garage door or call a cab] on my smartphone, my computer or all the other ways I’m communicating with my IoT devices traditionally?” asked David Linthicum, SVP of Cloud Technology Partners and a TechTarget contributor. “From the use cases that [AWS] explained, I don’t really see anything that useful.”

That might be AWS’ gamble: play the long game, nudge a small batch of developers over to its cloud platform for some tinkering, with the hope that this will further infiltrate the enterprise. “Give it to a number of smart people, and they’ll start developing some applications that do some pretty innovative things using the device,” Linthicum said.

Case in point: Cambia Health Solutions, an insurance provider based in Portland, Ore., which was lucky enough to acquire an AWS IoT Button on the day it became available, before the device quickly sold out. And the company has a plan for it.

“The AWS IoT Button can help IT scale and produce a lot of data,” said Tisson Mathew, vice president of cloud and consumer services at Cambia, at a Portland AWS Meetup last month. The company plans to combine the AWS IoT Button and AWS Lambda to help automate the back-end stack. Mathew didn’t share full details on how the company would use the AWS IoT Button, but he noted the button’s quick programmability will allow developers to point it at one function for a few hours, and then possibly reprogram it for another function a few hours later.

Even if the IoT Button doesn’t draw in new business, the possibilities for customer-driven big ideas are AWS’ potential windfall, tapping the collective brainpower of developers using cloud services and the devices. “I don’t think it’s there to, in essence, become a profit margin; I think it’s there to become a kind of promotional channel for AWS,” Linthicum said. “Developers will come up with some handy things, but I can’t see using it as my Netflix remote or customizing some kind of order event.”

Also interesting is how the AWS IoT Button is a doppelganger of the Amazon Dash button, which consumers use to order household goods from specific brands, Linthicum pointed out.

“It kind of marries Amazon.com with AWS,” Linthicum added. “It’s sometimes hard to understand how the connections exist. They seem like very separate entities and companies.”

The end game with Amazon Lumberyard is clear: reach for a new cloud audience. With the AWS IoT Button, AWS seems intent on getting it into the hands of the savviest development teams to see where they can take it in their environments — and once a few strong use cases are out there, others are sure to follow.


February 10, 2016  9:01 PM

AWS has your back in the Zombie Apocalypse

Beth Pariseau Beth Pariseau Profile: Beth Pariseau
Uncategorized

Amazon Web Services updated its Terms of Service this week. This otherwise might not be notable, but buried among standard legalese clauses about usage terms and patching policies is a hidden gem concerning the use of its new 3D game engine, Lumberyard, should the world come to a messy end.

According to clause 57.10 of the TOS:

Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.

If only Rick Grimes and his crew of survivors on AMC’s The Walking Dead had been a part of the Lumberyard beta.

Should “zombie VMs” invade your data center, meanwhile, there are ways to protect yourself short of turning to Amazon’s newly released 3D game engine. Take it from industry survivor AOL.


