AWS Cloud Cover


December 21, 2018  3:31 PM

AWS month in review: Cloud networking services abound

Trevor Jones

December didn’t deliver the avalanche of services and features that surrounded AWS re:Invent in November, but AWS didn’t exactly close out the year quietly. Amazon put its cloud networking services front and center this month with tools to secure connections for cloud-based workloads, and it also added a larger GPU-powered instance type and an EU region in Stockholm.

The newest AWS cloud networking service, AWS Client VPN, enables a customer’s employees to remotely access their company resources either on AWS or inside on-premises data centers. An employee can access the service from anywhere via OpenVPN-based clients. AWS already had a virtual private network (VPN) service, which it now calls AWS Site-to-Site VPN. However, that product only connects offices and branches to an organization’s Amazon Virtual Private Cloud (VPC) environment.

Organizations can already host OpenVPN on Amazon EC2, so they’ll need to determine if it’s cheaper to go that route and incur the charges from both vendors, or opt for this bundled, pay-as-you-go cloud networking service. Client VPN is more expensive than OpenVPN on its own, so the decision comes down to how much an organization spends on its instances. AWS charges an hourly rate for the service, based on the number of active client connections and the number of subnets associated with the endpoint.

Another factor to consider is management, as an organization that uses Client VPN won’t have to maintain any EC2 instances. This is the latest example of AWS’ efforts to offer services that handle the infrastructure for the user — and the cloud vendor plans to do more of this in the future, to attract enterprise clients that don’t want to deal with all those operational complexities.
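For teams that want to try the API rather than the console, here’s a rough boto3 sketch of standing up a Client VPN endpoint. The certificate ARNs, CIDR block and subnet ID are placeholders, and a real deployment would also need authorization rules and routes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Client VPN endpoint; clients authenticate with mutual TLS here.
# The ARNs and CIDR below are placeholders for illustration.
endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/16",  # address pool handed out to VPN clients
    ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:123456789012:certificate/example-root",
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    Description="Remote-access VPN for employees",
)

# Billing accrues per associated subnet, so associate only what you need.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint["ClientVpnEndpointId"],
    SubnetId="subnet-0123456789abcdef0",
)
```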

Organizations can now use a WebSocket API with Amazon API Gateway. Prior to this update, users of the service were limited to the HTTP request/response model, but the WebSocket protocol provides bidirectional communication. This opens the door to a wider range of interactions between end users and services, because the service can push data independent of a specific request.

We’ll have a more thorough analysis of this feature in the coming weeks, but AWS suggests developers can use this functionality to build real-time, serverless applications such as chat apps, multiplayer games and collaborative platforms.
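To get a feel for the new model, here’s a minimal boto3 sketch of the server-side push; the endpoint URL and connection ID are placeholders, captured in practice from the API’s $connect route.

```python
import boto3

# The management API lives at the WebSocket API's own HTTPS endpoint;
# the api-id, region and stage below are placeholders.
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
)

# Push a message to a connected client without waiting for a request --
# the bidirectional part that plain HTTP request/response can't do.
apigw.post_to_connection(
    ConnectionId="L0SM9cOFvHcCIhw=",  # placeholder, saved from the $connect route
    Data=b'{"event": "new-chat-message", "body": "hello"}',
)
```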

Also on the networking front, users can now access Amazon Simple Queue Service (Amazon SQS) and AWS CodePipeline directly from their Amazon VPC, via VPC endpoints and AWS PrivateLink, to securely connect services and keep data off the public internet. The Amazon SQS update in particular is a “meat-and-potatoes” item that’s more important to some users than flashier services that debuted at re:Invent, according to one prominent AWS engineer.
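Creating one of these interface endpoints is a single API call. A rough sketch, with placeholder VPC, subnet and security group IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An interface endpoint puts an elastic network interface for SQS inside
# the VPC, so queue traffic never touches the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # keeps the standard SQS hostname resolving privately
)
```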

Lastly, organizations can now share Amazon VPCs with multiple accounts. Large customers use multiple accounts to portion off different business units or teams for security or billing purposes, AWS said. VPC sharing takes responsibility for management and configuration out of the account holder’s hands, and gives it to the IT team, which can then dole out access to these shared environments as needed.
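VPC sharing is driven through AWS Resource Access Manager, where the owning account shares individual subnets with other accounts. A rough sketch, with placeholder account IDs and a placeholder subnet ARN:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share a subnet from the centrally managed VPC with another account.
# The subnet ARN and principal account ID are placeholders.
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0",
    ],
    principals=["222222222222"],    # the consuming account
    allowExternalPrincipals=False,  # keep sharing inside the organization
)
```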

December 17, 2018  7:56 PM

AWS’ container roadmap reveal helps customers plan ahead

Chris Kanaracus

AWS has historically been fairly secretive about its technology roadmaps; it tends to drop news without warning on its corporate blog or in the flood of announcements at its annual re:Invent conference.

To be sure, AWS huddles with customers behind the scenes to get their feedback and determine which directions to head next. But anyone who trawls the AWS website in search of a tidy PowerPoint deck that outlines the future of a service important to their business is in for a long and fruitless journey.

Last week, however, the cloud vendor shifted its approach ever so slightly, when it quietly posted an “experimental” roadmap for AWS’ container strategy on GitHub.

“Knowing about our upcoming products and priorities helps our customers plan,” the company said. “This repository contains information about what we are working on and allows all AWS customers to give direct feedback.”

The AWS container roadmap is split into three categories: “We’re Working On it,” “Coming Soon” and “Just Shipped.” There are no major revelations in any of them; many entries relate to new regions for EKS, AWS’ managed Kubernetes service, while others cover minor to middling feature updates. Nonetheless, it provides a lot more specifics than AWS has been known to let into the wild.

That’s not to say AWS hasn’t hedged its bets. For one thing, the roadmap lists no delivery dates, because “job zero is security and operational stability,” according to AWS. The company did allow that “coming soon” means “a couple of months out, give or take.”

The roadmap includes information on the majority of development for various AWS container-based services — Elastic Container Service, Fargate, EKS and other projects — but the company said it still plans to reveal other technologies without notice, to “surprise and delight our customers.”

Roadmaps are undoubtedly a boon to customers, but they can be a thorny proposition for vendors because they’re officially and publicly on the hook to deliver. To AWS’ credit, many services it unveils are generally available at that time, or in preview. Vaporware hasn’t been an appreciable part of its modus operandi, although some attendees at this year’s re:Invent grumbled at a few rather vague product announcements.

Vendors that do provide roadmaps tend to lard them up with boilerplate caveats that plans can change. This is particularly true for publicly traded companies, which may consider roadmap details “forward-looking statements,” a phrase that carries legal and financial weight.

Still, roadmaps are more than just a useful tool for customers. Product organizations like them too when constructed in a certain way, judging from discussions on Roadmap.com, a community site for product managers. Roadmaps should come in a number of flavors, according to several Roadmap.com contributors. For example, a development team-facing roadmap should provide realistic estimates of what can get built if no nasty technical surprises crop up. A roadmap geared for sales teams ought to list top features expected in the next couple of quarters.

A third type of roadmap is higher-level and aimed at customers, media and analysts, Roadmap.com users said. It provides a company’s big-picture plans over the next year or two, but shies away from concrete details to give room for tweaks to the strategy.

AWS hasn’t done anything close to this, but again, it’s not as if the company toils in a vacuum and shuts out customer input—quite the contrary.

Yet someone with influence inside AWS clearly decided more transparency into roadmaps was desirable — even if for now the focus is on containers, where the market grows more competitive by the day. Don’t expect any state secret-level dirt on AWS’ container strategy through the roadmap, but customers with money to spend on existing or new container workloads will appreciate more clarity as they make plans. Now it’s time to wait and see whether AWS’ experimental effort becomes embedded in its culture.


November 30, 2018  7:49 PM

AWS month in review: Expanded EC2 options and a blastoff into space

Kristin Knapp

IT and development teams who try to keep pace with AWS’ ever-expanding portfolio have a lot to catch up on this month.

First, to see the most significant news that came out of AWS re:Invent 2018 this week, check out SearchAWS’ end-to-end guide on the show. There, you can find details about AWS’ latest AI services, databases, storage and security features, and hybrid cloud strategy — which now includes an on-premises hardware component in Outposts.

But re:Invent, and the month of November in general, brought about other important AWS features and tools for users who run day-to-day operations on the Amazon cloud – and there’s even some fodder for those who use AWS to explore outer space.

More EC2 options for management, compute

Pause and resume: Admins now have the option to pause, or hibernate, Amazon EC2 instances that are backed by Elastic Block Store (EBS). This feature enables users to maintain a “pre-warmed” set of compute instances, so they can launch applications, particularly memory-intensive ones, more quickly than if those instances had to fully reboot after a shutdown. Amazon likened the process to hibernating a laptop rather than turning it off.

Users can control the pause/resume process via the AWS Management Console, the AWS SDK or the command line interface. The feature applies to the following instance families: M3, M4, M5, C3, C4, C5, R3, R4 and R5 instances that run Amazon Linux 1.
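Here’s a rough boto3 sketch of the workflow; the AMI and instance IDs are placeholders, and note that hibernation must be enabled at launch and requires an encrypted EBS root volume.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hibernation has to be opted into when the instance is launched.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Later: save RAM contents to the encrypted EBS root volume and stop
# compute billing, instead of doing a full shutdown.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

# Resuming restores memory state, so apps pick up where they left off.
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])
```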

More instance types move in: The Amazon EC2 instance family grew with the addition of A1, P3dn and C5n instance types. Intended for workloads that require high scalability, A1 instances are the first to be fueled by AWS’ ARM-based Graviton processors. The GPU-based P3dn instances, designed for machine learning, deliver four times the network throughput of the cloud provider’s existing P3 instance type. Lastly, the C5n family can use up to 100 Gbps of network bandwidth, making them a good fit for applications that require high network performance.

Additional storage, networking services

Amazon FSx: This managed file share service debuted in two flavors: Amazon FSx for Lustre and Amazon FSx for Windows File Server. The former enables users to deploy the open source Lustre distributed file system on AWS, and is geared toward high-performance computing and machine learning apps. The second version, designed for Microsoft shops, delivers the Windows file system on AWS. It’s built on Windows Server, is compatible with Windows-based apps and supports Active Directory integration and Windows NTFS, AWS said.

AWS Global Accelerator: For enterprises that deliver applications to a global customer base, Global Accelerator is a networking service that directs user traffic to the closest and highest-performing application endpoint. AWS expects the accelerator to ensure high availability and free enterprises from grappling with latency and performance issues over the public internet. In addition, the service uses static IP addresses, which, according to AWS, eliminate the need for users to manage unique IP addresses for different AWS availability zones or regions.

AWS Transit Gateway: Another service intended to simplify network management, Transit Gateway lets customers hitch their own on-premises networks, remote office networks and Amazon VPCs to one centralized gateway. Admins manage one connection from that central gateway to each VPC and on-premises network they use. The cloud provider described it as a hub-and-spoke model; think of Transit Gateway as the “hub” that centrally controls and directs traffic to various network “spokes.”
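In practice, the hub-and-spoke setup amounts to creating the gateway once, then attaching each VPC. A rough sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The "hub": one gateway that all networks attach to.
tgw = ec2.create_transit_gateway(
    Description="Central hub for VPCs and on-premises networks",
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# A "spoke": attach a VPC by pointing the gateway at one subnet per AZ.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
)
```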

A big move in microservices

AWS App Mesh: Based on the open source service proxy Envoy, App Mesh streamlines microservices management. App Mesh users can monitor communication between individual microservices, and implement rules that govern that communication. It’ll be interesting to see how App Mesh stacks up against other service mesh options, such as Azure Service Fabric and the open source Istio technology, behind which Google, in particular, has thrown its weight.

AWS takes to space

Other AWS news this month took more of a, well, celestial slant.

The cloud provider teamed up with Lockheed Martin to provide easier and cheaper ways for companies to collect satellite data and move it into the cloud for storage and analysis.

Lockheed’s Verge system of globally distributed, compact satellite antennae will work in conjunction with AWS Ground Station, a service that co-locates ground antennas inside AWS availability zones around the world.

Previously, organizations such as NASA and companies like Mapbox had to write complex business logic and scripts to upload and download satellite data, AWS CEO Andy Jassy said at re:Invent. Ground Station lets users work with satellite streams from the AWS Management Console, and pay by the minute for antenna time. It’s now in preview in two AWS regions, with 10 more to come early next year.

It’s another example of AWS going after specialized customers, but the partnership could also have broader resonance among AWS’ user base. Research organizations and niche startup companies are the heaviest users of satellite data, but enterprise IT shops in general should also watch the implications of geographic information systems (GIS) and spatial data on their business, said Holger Mueller, VP and principal analyst with Constellation Research in Cupertino, Calif.

“Making that data available in an easy, secure, scalable and affordable way is key for next-generation enterprise apps,” he said.

AWS isn’t first in this market — SAP and the European Space Agency partnered in 2016 to bring satellite data into SAP’s HANA Cloud Platform — but its moves to build out a global satellite antenna network take the idea much further.

*Senior News Writer Chris Kanaracus contributed to this blog.


November 30, 2018  7:27 PM

Revisit the sights and sounds from AWS re:Invent 2018

Tim Culverhouse

AWS re:Invent 2018 returned to Las Vegas this month, and brought with it the usual mix of keynotes, breakout sessions and, of course, new cloud services.

AWS Outposts, a managed hardware and software stack that extends Amazon’s reach into on-premises data centers, especially intrigued attendees. Meanwhile, AWS Ground Station pushed the cloud to the final frontier of space, and Amazon SageMaker updates provided more AI and machine learning opportunities for developers.

Between keeping up with these technology updates, and shuffling back and forth between sessions, re:Invent attendees kept busy – but many still had time to share their thoughts on the event via Twitter.

Here’s a look at some tweets that captured the mood of AWS re:Invent 2018:

https://twitter.com/AWSreInvent/status/1068177349605277701

https://twitter.com/evankirstel/status/1067223197651779584


November 30, 2018  9:03 AM

AWS plans to democratize access to AI and ML

Darryl Taft

LAS VEGAS — What is AWS’ end game with AI and machine learning (ML)? Is it to lead the pack with tools and frameworks? Or to dominate other players such as Google and Facebook, which built powerful ML libraries and frameworks and put more skin in the game?

No, AWS’ goal with AI and ML is much the same as the company’s overall mission: put the best tools and cloud infrastructure for building applications into the hands of everyone — and it means everyone.

At the AWS re:Invent 2018 conference here, AWS CEO Andy Jassy said he wanted to empower “builders” with the tools they need to create apps. Not just developers and data scientists, but also IT staff, ops staff, marketing staff, line-of-business folks — even CIOs, CMOs and CEOs should have access to the same tools.

AI and machine learning are no different — the goal is to democratize access to the best tools, said Joel Minnick, senior manager of product marketing for AI and ML at AWS.

“What we don’t want to see is that machine learning is the toolset of the largest, most technically savvy companies,” Minnick said. “If you are a college student with a great idea about how to build a great application, you should have the same toolset to use that the largest technology companies in the world have to use.”

That’s also why AWS is so focused on data for machine learning models. The company’s SageMaker Ground Truth service, unveiled this week, focuses on data preparation and labeling for machine learning models. Because everyone, not just the biggest, richest companies, should have the ability to clean and prepare their data for machine learning models.

In a lunchtime Q&A with reporters, Jassy said AWS customers are hungry for AI and ML. “I think every cloud app, going forward, will have some type of machine learning and AI infused in it,” he said.

Tens of thousands of AWS customers want to use machine learning and they need help at every layer of the ML stack, Jassy said. Moreover, for machine learning to take off in the enterprise, it must become accessible and usable to everyday developers and data scientists. This was AWS’ goal with its SageMaker machine learning service, which the company enhanced significantly here this week.

Amazon doesn’t really play at the AI/ML framework level — it doesn’t have a TensorFlow, PyTorch or Caffe2 — so it builds infrastructure around and below those layers to make it easier for folks to do AI, a strategy that plays to AWS’ strengths, said Charles Fitzgerald, managing director of Platformonomics, LLC, a Seattle-based strategy consulting practice.

“They lost the great ML framework war, so now they support everything,” he said.

At his annual re:Invent press conference, Jassy responded to a broad range of questions — from AI ethics, to the company’s relationship with the open source community, to how the Amazon HQ2 search and decision plays for AWS and its hiring.

But he didn’t address something I really wanted to know, but wasn’t going to ask in a press conference: What does he think of those 3-8 New York Giants? Jassy once told me he’s a Giants season ticket holder and comes from a family of Giants season ticket holders. Maybe he should sic some of that smart-alecky machine learning on the Giants’ analytics system to generate a winning team again. Saquon Barkley and “OBJ” Odell Beckham Jr. are great players who typically get their stats, but they can’t do it alone.


November 29, 2018  5:21 PM

AWS CEO Andy Jassy talks AI, on-premises moves

Trevor Jones

LAS VEGAS — AWS CEO Andy Jassy this week defended his company’s ethical decisions around the use of AI on its platform and shed some light on its impending move into corporate data centers.

Jassy’s post-keynote press conference here at re:Invent is generally the only occasion each year the Amazon cloud exec takes questions in this type of setting. In addition to AI and the on-premises infrastructure service AWS Outposts, Jassy talked about AWS’ often criticized relationship with the open source community, and the significance of Amazon’s new second headquarters.

Here are some excerpts from the hour-long session, edited for brevity and clarity.

On why AWS has pushed so heavily into AI:

“We’re entering this golden age of what could be possible with applications; and I don’t know if it’s five years from now or 10 years from now, but virtually every application will have machine learning and AI infused in it.”

On how AWS will enforce ethical use of its AI services:

“First, the algorithms that different companies produce have to constantly be benchmarked and iterated and refined so they’re as accurate as possible. It also has to be clear how you recommend using the services. For example, with facial recognition for matching celebrity photos, you can have a confidence level of 80%, but if you’re using facial recognition for something like law enforcement, something that can impact people’s civil liberties, you need to have a very high threshold. We recommend at least [a] 99% threshold for things like law enforcement, and even then it shouldn’t be the sole determinant. There should be a human involved, there should be a number of inputs and machine learning should be only one of the inputs.

[Customers] get to make decisions for how they want to use our platform. We give a lot of guidance and if we think people are violating our terms of service, we’ll suspend and disable people from using that. But I think society as a whole, our countries as a whole, we really want them to make rules for how things should be used and we’ll participate and abide by that.”

On which hardware vendors AWS will use for Outposts:

“We use a lot of different hardware in our data centers, so we can interchange whatever hardware we want. At the beginning of AWS, we primarily used OEMs like the Dells and the HPs, but over time, we’ve done more contract manufacturing and more design of our own chips. We’re always open to whatever hardware providers want to provide to us at the right price point.”

On whether Outposts will eventually provide the full catalog of AWS services:

“It remains to be seen how many total services we’ll have on Outposts. Our initial goal is not to recreate all of AWS in Outposts. They’re kind of different delivery models and folks that have done that today [with attempts at full parity] have frustrated customers and they just haven’t gotten the traction that they wanted.

There are some really basic components that customers would like for us to provide on premises, as they connect to the cloud as well as locally – compute, storage, database, machine learning and analytics are good examples of that.”

On Outposts’ specs and getting into the on-premises server management business:

“We’re not ready to release the specs yet, but stay tuned. In general, there’s going to be racks with the same hardware we run in our AWS regions. They can be configured in different ways, depending on which variants you want and which services you want on those racks. People will be able to customize that to some degree.

In terms of scaling out people, over time we have become pretty good at finding ways to streamline operations and we have quite a large number of people in data center operations around the world… so we have a little bit of an idea of how to do that.

Also, if you think about what Amazon does across its business, we’re very comfortable running high volume. We know how to keep our costs low.”

On whether Amazon’s decision to locate its second headquarters in Virginia and New York will affect AWS:

“As much as we have successfully built a significant-size team in Seattle, we can’t convince everyone to move to Seattle. Just in AWS alone we have people in Boston, Virginia, [Washington] DC, New York, Dublin and Vancouver… Because AWS continues to grow at such a rapid rate, I think it’s a fair bet that we’ll have a lot of [employees] in those two HQ2s.”

On its relationship with the open source community:

“We contribute quite a bit and participate quite a bit. We do it across so many things. If you think about some of the open standards we build on, we have to contribute back. We do it with Linux and for instance, about 50[%] to 60% of the contributions are made by AWS; look at ElasticSearch, FreeRTOS and ARM.

Sometimes people lose sight [of how much we contribute], probably because we don’t do a good enough job making people aware of how many open source contributions we make across how many different projects.”


October 31, 2018  3:43 PM

AWS month in review: Amazon Lightsail, Lambda updates

Trevor Jones

AWS made one of its simplest services more functional in a month that saw the cloud vendor bogged down in all sorts of complex subjects.

An update to Amazon Lightsail now lets users incorporate managed relational databases with the virtual private server offering, which bundles set amounts of storage, compute and data transfers. These database servers also come in fixed sizes that can be quickly deployed.

AWS folded some of the technology that underlies RDS into Amazon Lightsail so users can create a database in one availability zone (AZ), or replicate it to a second AZ for high availability. The feature can only deploy MySQL databases currently, though AWS said it plans to add PostgreSQL support soon.
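Creating one follows Lightsail’s usual fixed-bundle pattern. A rough boto3 sketch; the blueprint and bundle IDs shown are illustrative, and the real options can be listed with get_relational_database_blueprints() and get_relational_database_bundles():

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Blueprint and bundle IDs below are illustrative; query the service for
# the actual values before creating a database.
lightsail.create_relational_database(
    relationalDatabaseName="app-db",
    relationalDatabaseBlueprintId="mysql_5_7",  # MySQL only, for now
    relationalDatabaseBundleId="micro_1_0",     # fixed price/storage/transfer bundle
    masterDatabaseName="appdata",
    masterUsername="dbadmin",
)
```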

Managed databases were the biggest request from Amazon Lightsail users, according to AWS. The service competes with DigitalOcean and Linode, and users should expect more integrations with AWS’ vast array of services, said Rhett Dillingham, an analyst with Moor Insights & Strategy.

Lambda love

Two Lambda items stood out among other October product updates.

AWS tripled the maximum time a function can run per execution, to a cap of 15 minutes, which opened a debate about how to use the service. Some users called it a welcome move but said AWS should extend the window even further to accommodate more workloads. Others raised concerns about runaway costs, and argued that any function that must run that long should be broken down into smaller microservices.
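Opting into the longer window is a one-line configuration change per function. A rough sketch with a placeholder function name:

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Raise the per-invocation cap from the old 5-minute limit to the new
# 15-minute maximum (the Timeout parameter is expressed in seconds).
lam.update_function_configuration(
    FunctionName="nightly-batch-job",  # placeholder
    Timeout=900,
)
```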

Also, AWS finally added an SLA to Lambda, nearly four years after it debuted at re:Invent 2014. The SLA guarantees a monthly uptime of 99.95%, with a reimbursement in the form of service credits if AWS fails to meet that target.

In other news…

Amazon appears to be the latest cloud vendor embroiled in internal controversy over its use of AI, as employees have protested an apparent pitch to U.S. Immigration and Customs Enforcement officials to use Amazon Rekognition to record and identify individuals.

Meanwhile, the U.S. Department of Defense’s JEDI cloud contract continues to be a contentious subject, which is no surprise, given the feds’ plan to fork over as much as $10 billion to the winning bidder. IBM this month joined Oracle in protest of the scope of the deal, and several lawmakers have called for investigations.

Speaking of Oracle, the company’s CTO and founder Larry Ellison took his annual whack at AWS as part of his OpenWorld keynote, where he likened the use of an AWS database service to a fatal car crash. The two companies’ executives have traded barbs for years as they fight over the lucrative database market, but a leaked AWS memo that same week as OpenWorld shed new light on the companies’ complicated relationship behind the scenes.

Amazon suffered a major embarrassment earlier this year when its ecommerce site stalled on its Prime Day sales event. The outage didn’t impact other AWS users, but since Amazon.com is AWS’ biggest customer, it raised some eyebrows about the company’s claims of infinite scale. Little was said publicly at the time, but a subsequent report claimed the outage involved features lost in a migration from Oracle to Amazon Aurora, which allegedly couldn’t provide the same level of reliability. AWS CTO Werner Vogels vehemently denied the story, calling it “silly and misleading.”

Amazon reportedly is in the midst of a massive internal migration off Oracle, with a scheduled completion in 2020. Naturally, Ellison told investors on a recent earnings call that its rival won’t be able to quit Oracle so easily. If past is prologue, expect AWS CEO Andy Jassy to take his own shots at Oracle on stage at re:Invent 2018 in a few weeks.

Also on the report-refuting front, the Bloomberg story that claimed Apple, AWS and others detected malicious Chinese hardware in their data centers continues to be a headscratcher. Jassy echoed Apple CEO Tim Cook in a call for Bloomberg to retract the story. For its part, Bloomberg has stood by its reporting, though it hasn’t provided additional details in the face of staunch pushback from the named tech vendors.


September 28, 2018  5:43 PM

AWS month in review: More for hybrid cloud architectures

Trevor Jones

September was a low-key month for AWS, even though it rolled out more than 70 updates to its platform.

AWS advancements in September were a lot of the standard fodder: services expanded to additional regions, deeper integration between tools and a handful of security certifications. All of this is potentially welcome to the respective target audiences.

Still, the updates weren’t completely mundane. There were some serious nods to AWS hybrid cloud architectures along with some intriguing moves aimed at developers.

Let’s start with the enterprise-focused tools that get data to AWS’ cloud. AWS Storage Gateway, a service that connects on-premises and cloud-based data, added a hardware appliance that a company can install in its own data center or remote office. The service addresses storage needs for a range of hybrid cloud architectures – backup, archiving, disaster recovery, migrations and links to AWS analytics tools. This appliance opens Storage Gateway to non-virtualized environments, and comes on a pre-loaded Dell EMC PowerEdge server at a cost of $12,250.

AWS has emphasized database migrations in recent years to lure corporate clients to its public cloud, either through lift-and-shift approaches or transitions to its native, managed services. That continued in September, as Database Migration Service added DynamoDB as a destination for Cassandra databases and Server Migration Service upped the size of data volumes it can handle from 4TB to 16TB.

Speaking of databases, Amazon Aurora continues to get a lot of attention. A month after its serverless flavor became generally available, users can now start and stop Aurora database clusters, a feature geared toward test and development. Another Aurora feature, a Parallel Query tool, opens the managed service to some analytical queries. This could limit the need for a data warehouse service, but there are lingering concerns that AWS has spent too much time on interesting new features and not enough time on core functionality.
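The start/stop feature maps to two straightforward API calls, which makes it easy to script around test and development hours. A rough sketch with a placeholder cluster identifier:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Stop a test cluster at the end of the workday; storage is retained,
# but compute billing pauses until the cluster is started again.
rds.stop_db_cluster(DBClusterIdentifier="dev-aurora-cluster")

# Start it back up the next morning (e.g., from a scheduled job).
rds.start_db_cluster(DBClusterIdentifier="dev-aurora-cluster")
```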

Developer tools raise eyebrows

Two other AWS updates in September may pique developers’ interest, or leave them scratching their heads.

CloudFormation Macros processes templates much as the Serverless Application Model (SAM) transform does to define infrastructure as code, but Macros enables custom transformations handled by Lambda functions within a user’s account.
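A macro is simply a Lambda function that receives a template fragment and returns a transformed one. Here’s a minimal sketch of that handler contract; the transformation itself is a toy example.

```python
# Minimal CloudFormation macro handler: receive a template fragment,
# transform it, and hand it back. The transform here is a toy example
# that stamps a metadata key onto the fragment.
def handler(event, context):
    fragment = event["fragment"]  # the template (or section) to rewrite
    fragment.setdefault("Metadata", {})["ProcessedBy"] = "my-example-macro"

    return {
        "requestId": event["requestId"],  # must be echoed back
        "status": "success",              # anything else fails the deployment
        "fragment": fragment,             # the rewritten template fragment
    }
```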

And for Microsoft shops, AWS Lambda now lets .NET developers manage or automate scripts, thanks to support for PowerShell Core 6.0. We’ll have more on these features in the coming months, but for now, at least one group of users is a bit confused by the Macros feature and thinks they’ll stick with Terraform instead.

Updates to security, partnerships

On the security front, admins can now use YubiKey security keys for multi-factor authentication. Network Load Balancers and AWS PrivateLink support AWS VPN, which means an enterprise has more options to build an AWS hybrid cloud architecture where on-prem workloads can privately access AWS services.

AWS also expanded its partnership with Salesforce, with tighter integration of services for companies that rely on both providers. And yes, you can use Lambda functions to trigger actions between the two environments. The two cloud giants have worked together for years, including the $400 million deal Salesforce signed in 2016 to use AWS services.

And stop me if you’ve heard this before, but a Wall Street analyst called for Amazon to split its retail and AWS businesses. As always, the hope is to avoid regulation, boost the value for shareholders and insulate them against the potential struggles of one of the business units.

Amazon executives haven’t responded to the critique, but last winter AWS CEO Andy Jassy said there’s no need to spin off his company. He brushed off the “optics of the financial statements” and said there’s real value in having internal customers that aren’t afraid to share their feedback.


August 30, 2018  7:11 PM

AWS balances GA of Aurora Serverless with new instance types

Trevor Jones

The end of summer is typically slow for the IT world, but AWS this month continued to expand its horde of instance types and lay the groundwork for a future where its customers won’t even bother with VMs.

The cloud vendor rolled out more instance options and made Amazon Aurora Serverless generally available. And as the annual VMworld user conference closed out the month, the company advanced the ability to run the VMware stack on AWS, and perhaps more importantly, run AWS on-premises with VMware software.

The T3 instance is the next generation of burstable, general purpose VMs available in EC2. It’s 30% cheaper than the T2 and supports up to 5 Gbps in network bandwidth. The T series of VMs, first added in 2010, is designed for smaller applications with large, infrequent spikes in demand. Like the previous two generations, the T3 comes in seven sizes, with varying amounts of memory and baseline performance.

The T3 is the latest instance type to rely on AWS’ Nitro system. It is hardware-virtual-machine-only and must be used within a Virtual Private Cloud.
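Launching a T3 looks like any other EC2 request; the main burst-related decision is the credit mode. A rough sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# T3 instances default to "unlimited" bursting, which can add charges if
# the workload runs hot; "standard" caps bursting at accrued credits.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # T3 requires a VPC subnet
    CreditSpecification={"CpuCredits": "standard"},
)
```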

AWS also added two instance sizes to Amazon Lightsail, its virtual private server offering. The 16 GB and 32 GB iterations are the largest Lightsail instances yet, and their additions coincided with a 50% price drop on all other existing Lightsail instance sizes.

There appears to be little cadence to AWS’ instance type expansion, but the cloud giant shows no signs of slowing down. Those additions came just weeks after AWS rolled out the z1d, R5 and R5d instances in late July.

Serverless vs. VMs

At the same time, AWS moved Aurora Serverless out of preview. The highly anticipated version of its fastest-growing service, first announced last November, lets users consume database capacity that scales automatically while AWS manages all the underlying servers.

The GA of Aurora Serverless has limitations, however. It’s only available for MySQL and in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland) and Asia Pacific (Tokyo). AWS says it will continue to add regional availability in the coming year. AWS originally said the PostgreSQL version would be available in the back half of this year, but hasn’t updated that timeframe since Aurora Serverless first went into preview.
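Provisioning a serverless cluster means choosing capacity bounds rather than an instance class. A rough boto3 sketch; the identifiers, password and scaling numbers are illustrative:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# EngineMode="serverless" replaces instance selection with a capacity
# range expressed in Aurora capacity units; values are illustrative.
rds.create_db_cluster(
    DBClusterIdentifier="serverless-demo",
    Engine="aurora",  # MySQL-compatible, the only option at GA
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; store secrets properly
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 16,
        "AutoPause": True,             # pause entirely when idle...
        "SecondsUntilAutoPause": 300,  # ...after five quiet minutes
    },
)
```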

EC2 continues to host the vast majority of AWS workloads, but it will be worth watching how long AWS remains on these parallel paths of additional VM variety and VM-free services. Many industry observers expect the latter will eventually overshadow the former. AWS has hedged its bets with a container strategy that was slightly late to the game, but even there serverless gets equal footing.

AWS started with serverless in 2014 with the addition of Lambda functions, and is still largely seen as the predominant player in this space, but it will have to maintain that edge without one of its key contributors. Tim Wagner, who oversaw the development of Lambda, was hired by Coinbase, a digital currency exchange, to be its vice president of engineering. Wagner was general manager for AWS Lambda, Amazon API Gateway and AWS Serverless App Repository at the time of his departure.

AWS: coming to a data center near you

And finally, AWS deepened its ties to VMware in more ways than one during VMworld. VMware Cloud on AWS, a service jointly developed by the two vendors but sold by VMware, added tools to simplify migrations from on premises to AWS and to manage workloads post-migration. VMware also cut the price of the service in half, which could attract organizations still on the fence.

What surprised many industry observers was AWS’ continued march beyond its own facilities. AWS will sell and deliver Amazon Relational Database Service on VMware inside users’ own data centers. The on-premises version of the AWS database service will handle management, provisioning, patches and backups. It will also make it easier for VMware customers to migrate their existing databases to Amazon’s public cloud.


August 3, 2018  3:29 PM

AWS cloud features offer network, performance improvements

David Carty

As is often the case, AWS’ yearly Summit in New York provided the scene for some additional features and functionality. While AWS cloud features are unveiled regularly, the free-to-attend conference generates excitement among cloud newcomers and experienced shops alike.

SearchAWS.com attended the Summit, and reported on various new AWS cloud features and trends, including:

  • New capabilities for AWS Snowball Edge — a boon for enterprises with edge computing needs — as well as a boost to S3 performance, new EC2 instances and a Bring Your Own IP feature;
  • Early adoption patterns for Amazon Elastic Container Service for Kubernetes (EKS), as well as attendee reactions to an EKS workshop, and how AWS might improve the service moving forward; and
  • A one-on-one discussion with Matt Wood, AWS’ GM of Deep Learning and AI, regarding new SageMaker features, enterprise AI challenges and the ethics of facial recognition technology.

ALB gets its actions together

The continued push from HTTP to HTTPS also gives AWS customers an easy way to meet their compliance goals.

Application Load Balancer’s (ALB) content-based routing rules now support redirect and fixed-response actions in all AWS regions, which fills two big networking needs for users. Redirect actions enable an ALB to readdress incoming requests from one URL to another, such as from an HTTP to an HTTPS endpoint, which helps organizations improve security and search ranking. Fixed-response actions enable an ALB to answer incoming requests rather than forward the request to the application, for example to send custom error messages when a problem occurs.
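Both action types hang off an ALB listener. A rough boto3 sketch of each; the listener ARN is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
listener_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example"
)  # placeholder

# Redirect action: send every plain-HTTP request to HTTPS with a 301.
elbv2.modify_listener(
    ListenerArn=listener_arn,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)

# Fixed-response action: answer a path directly from the load balancer.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/maintenance"]}],
    Actions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "text/plain",
            "MessageBody": "Down for maintenance. Back shortly.",
        },
    }],
)
```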

EFS gets a performance boost

For users that encounter Amazon Elastic File System (EFS) performance issues, some relief has arrived.

Provisioned Throughput for Amazon EFS enables a developer to dynamically adjust throughput in accordance with an application’s performance needs, regardless of the amount of data stored on the file system. While users previously could burst EFS throughput for applications with more modest needs, the Provisioned Throughput feature suits applications with more strenuous needs.
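Switching modes is a single call on an existing file system. A rough sketch; the file system ID and throughput figure are placeholders:

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Decouple throughput from stored data size: pay for a provisioned rate
# instead of relying on burst credits.
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",  # placeholder
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128.0,   # size to the app's needs
)
```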

DevSecOps gets more robust

Amazon GuardDuty, one of two AWS security services that relies on machine learning, could see more widespread adoption after it gained an important integration with another service.

GuardDuty now works with AWS CloudFormation StackSets, which enable an enterprise security team to automate threat detection across multiple accounts and regions. CloudFormation automates the provisioning of infrastructure and services, giving enterprises the ability to quickly and efficiently watch for threats.
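One way to wire this up: define a stack set whose template enables a GuardDuty detector, then push stack instances to each account and region. A rough sketch with placeholder account IDs:

```python
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Template that simply turns GuardDuty on wherever the stack lands.
template = json.dumps({
    "Resources": {
        "Detector": {
            "Type": "AWS::GuardDuty::Detector",
            "Properties": {"Enable": True},
        }
    }
})

cfn.create_stack_set(StackSetName="enable-guardduty", TemplateBody=template)

# Fan the stack out across accounts and regions from one place.
cfn.create_stack_instances(
    StackSetName="enable-guardduty",
    Accounts=["111111111111", "222222222222"],  # placeholders
    Regions=["us-east-1", "eu-west-1"],
)
```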

For the good of the hack

A pair of upcoming AWS hackathons aims to put developer brainpower to work for socially conscious causes.

Developers can enter two hackathons — one focused on serverless applications and another on Alexa skills — that offer cash prizes for imaginative projects focused on social good. The Amazon Alexa Skills Challenge offers various cash and participation prizes for apps that use a voice command interface, while its Serverless Apps for Social Good hackathon seeks AWS Serverless Application Repository projects that combat human trafficking.

Nonoptimal Prime

Chaos ensued shortly after the start of Amazon’s heavily advertised Prime Day retail initiative on July 16, as customers could not access product pages. The Amazon disruption, attributed to a software issue within Amazon’s internal retail system, was severe enough that the company temporarily killed off international traffic.

According to a CNBC report citing internal documents, Amazon manually added server capacity as traffic surged to its retail site, which points to the auto scaling of its internal Sable system as the potential culprit. While the disruption generated negative press for the retail giant, which touted its internal readiness and scalability for Prime Day as recently as last year’s re:Invent conference, it still reported sales of more than 100 million products.

While AWS experienced intermittent errors with its Management Console that afternoon, the company says AWS infrastructure and services were not involved with the Prime Day snafu.

