AWS Cloud Cover

July 31, 2019  7:22 PM

AWS month in review: Orlando drops AWS facial recognition program

Ryan Dowd Profile: Ryan Dowd
"Amazon Web Services", Artificial Intelligence, AWS

AWS this month suffered a setback to its expansion agenda when the city of Orlando, Fla., ended its law enforcement partnership built around Amazon Rekognition, AWS’ facial recognition technology.

After 15 months of back and forth with this AWS facial recognition program, Orlando and Amazon have officially gone their separate ways. On July 18, Orlando declined to renew its second pilot program with Amazon Rekognition, AWS’ image analysis service. Bandwidth, video resolution and positioning issues plagued the use of the technology, and the city was unable to set up a reliable camera stream, Rosa Akhtarkavari, Orlando’s chief information officer, told the Orlando Weekly. The city simply lacked the IT infrastructure to support AI software, she said.

The first pilot began in December 2017 and ended in June 2018, before the second started back up in October 2018. In theory, the city planned to use Rekognition’s facial recognition algorithms to identify and track suspects in real time. Had the technology been configured and supported properly, law enforcement officers could have uploaded an image of a suspect and been notified if Rekognition found a match.

With Orlando out of the picture, Oregon’s Washington County Sheriff’s Office is the lone law enforcement agency that still uses AWS facial recognition technology. Both partnerships faced legal and media pressure from the American Civil Liberties Union, which argued that unchecked surveillance technology threatens privacy and civil liberties, and that Rekognition in particular misidentifies African-Americans as criminals at a higher rate than other races. Other cities, such as Oakland, Calif., and Somerville, Mass., have banned government use of the software. Amazon’s own employees and shareholders even put forward a resolution that called for a halt to AWS facial recognition sales to government agencies, though the resolution was voted down at Amazon’s annual shareholder meeting in May 2019.

Amazon claims that the apparent racial bias occurred due to misuse and misunderstanding of the service. It has also argued that it’s up to the federal government, not Amazon, to legislate the use of this technology.

AWS expands CloudWatch, adds event-driven offering

While Amazon’s AI ambitions took a blow this month, the company still improved some bread-and-butter capabilities of the AWS platform. The recently announced Amazon CloudWatch Container Insights and Anomaly Detection capabilities, along with the expansion of its EC2 Spot Instance service, should expand AWS’ compute and monitoring flexibility.

AWS also added Amazon EventBridge, a serverless event bus that integrates users’ AWS applications with SaaS applications. As more of its customers turn to event-driven applications and architectures, AWS needs a better way to integrate and route real-time data from third-party event sources, such as Datadog and PagerDuty, to service targets, such as AWS Lambda. EventBridge eliminates the need to write custom code to connect application events and should enable more efficient AWS event-driven architectures.
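As a rough sketch of the event-routing model, here is how an application event might be shaped for EventBridge’s PutEvents API. The source, detail-type and payload values below are hypothetical:

```python
import json

def make_event_entry(source, detail_type, detail, event_bus="default"):
    """Shape one event in the form EventBridge's PutEvents call expects.
    The Detail payload must be serialized to a JSON string."""
    return {
        "Source": source,            # identifies the emitting application
        "DetailType": detail_type,   # free-form event classification
        "Detail": json.dumps(detail),
        "EventBusName": event_bus,
    }

# Hypothetical order event from an e-commerce app
entry = make_event_entry(
    source="myapp.orders",
    detail_type="OrderPlaced",
    detail={"orderId": "1234", "total": 42.50},
)
# With boto3, this entry would be sent via:
#   boto3.client("events").put_events(Entries=[entry])
```

A rule on the bus would then match on the source or detail type and forward the event to a target such as a Lambda function, with no custom glue code in between.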

Amazon CloudWatch Container Insights and Anomaly Detection give users more ways to analyze their metrics and improve performance and security. CloudWatch Container Insights collects and organizes metrics from AWS’ container services and files them in CloudWatch’s automatic dashboard. It also handles diagnostics, which can help users identify issues such as container restart failures. Users can set alarms for certain container metrics, including use of CPU, memory or network resources. Container Insights is in open preview.

CloudWatch Anomaly Detection applies machine learning algorithms to a metric’s past data to generate a model of expected values, which establishes high and low bounds of accepted metric behavior. Users can then decide how and when they are notified when a metric strays outside that band. CloudWatch Anomaly Detection is in open preview and priced per alarm.
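To illustrate the idea behind the expected-value band, here is a deliberately simplified model. CloudWatch’s actual algorithms account for trends and seasonality; this sketch, with hypothetical CPU samples, just flags values that fall outside a few standard deviations of past data:

```python
from statistics import mean, stdev

def expected_band(history, width=2.0):
    """Expected value plus high/low thresholds derived from past samples."""
    mu = mean(history)
    sigma = stdev(history)
    return mu - width * sigma, mu, mu + width * sigma

def is_anomalous(value, history, width=2.0):
    """Flag a new sample that falls outside the accepted band."""
    low, _, high = expected_band(history, width)
    return value < low or value > high

cpu_history = [41, 39, 43, 40, 42, 38, 41, 40]  # hypothetical CPU % samples
print(is_anomalous(42, cpu_history))  # False: within the expected band
print(is_anomalous(95, cpu_history))  # True: well above expected values
```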

Spot Instances for Red Hat Enterprise Linux and Amazon SageMaker

Amazon EC2 Spot Instances let users obtain unused EC2 capacity at a discounted rate, and AWS recently extended those capabilities to users with a basic Red Hat Enterprise Linux (RHEL) subscription. Before, only premium RHEL subscribers could access Spot Instances. At its NYC Summit this July, AWS also revealed Spot Instances support for SageMaker users to train machine learning models, which AWS claims could cut training costs by up to 70%.
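The savings figure comes from the gap between how long a training job runs and how much of that time is billed at Spot rates. A quick sketch of the arithmetic, with hypothetical job durations:

```python
def spot_savings_pct(training_seconds, billable_seconds):
    """Managed spot training savings: the share of wall-clock training time
    you are not billed for once Spot pricing is applied."""
    return round(100 * (1 - billable_seconds / training_seconds), 1)

# Hypothetical job: 1,000 seconds of training, 300 billable seconds
print(spot_savings_pct(1000, 300))  # 70.0 -- the "up to 70%" headline figure
```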

June 28, 2019  6:21 PM

AWS month in review: Security Hub goes live at AWS security conference

Ryan Dowd Profile: Ryan Dowd
"Amazon Web Services", Amazon, Cloud Security

AWS this month hosted its inaugural re:Inforce conference in Boston and used the setting to make AWS Security Hub and Control Tower generally available and to introduce a VPC network security feature.

Other AWS developments of note earlier in June included AWS’ expansion of Auto Scaling to Amazon Relational Database Service (RDS), which should ease over-provisioning woes for some users, and the addition of Amazon Personalize to AWS’ machine learning suite.

AWS re:Inforce, an AWS re:Invent-inspired spinoff devoted to cloud security, drew more than 8,000 attendees. The AWS security conference featured Amazon cloud service demos and training sessions, and highlighted Security Hub and Control Tower, among other services, as ways to infuse more automation and visibility into cloud security processes.

Security Hub and Control Tower aim to centralize security insights and account management, respectively. Security Hub is a centralized security dashboard to monitor security and compliance posture. It collects and analyzes data from all the AWS security tools and resources you use and checks them against AWS security and compliance best practices – identifying an S3 bucket unintentionally left open to public access, for example.

AWS Control Tower was built to ease multi-account management. Control Tower automates the creation of a secure multi-account AWS environment, with AWS security best practices baked into the process. Accounts configured through Control Tower come with guardrails — high-level policies — that reject or report prohibited deployments.

Amazon Virtual Private Cloud (VPC) Traffic Mirroring is a feature for your Amazon VPC that analyzes network traffic at scale. AWS has described this capability as a “virtual fiber tap” that captures traffic flowing through your VPC. You can capture all the traffic or filter for specific network packets. VPC Traffic Mirroring should improve network visibility and help organizations check off monitoring compliance requirements.

Amazon RDS supports Auto Scaling

Auto Scaling uses Amazon CloudWatch to monitor applications and then automatically scale them according to predetermined resource needs and parameters. Users can now set up Auto Scaling for RDS in the Management Console.

Before Auto Scaling, RDS users either overprovisioned new database instances to be safe or underprovisioned them to save money. That meant they were either stuck footing a larger bill than necessary or had to increase capacity on the fly, which typically results in application downtime. Now, to balance RDS performance and cost, users can provision below expected peak capacity and set a maximum storage limit; Auto Scaling will boost capacity as database workloads grow.
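In practice, that amounts to setting a modest initial allocation and a higher ceiling when the instance is created. A minimal sketch of the relevant request fields, with hypothetical identifier, instance class and sizes:

```python
def rds_storage_config(initial_gib, max_gib):
    """Storage fields for an RDS CreateDBInstance request with storage
    autoscaling: provision below expected peak and cap automatic growth.
    The identifier, class and engine below are hypothetical."""
    if max_gib <= initial_gib:
        raise ValueError("MaxAllocatedStorage must exceed AllocatedStorage")
    return {
        "DBInstanceIdentifier": "app-db",
        "DBInstanceClass": "db.t3.medium",
        "Engine": "postgres",
        "AllocatedStorage": initial_gib,    # starting size in GiB
        "MaxAllocatedStorage": max_gib,     # autoscaling ceiling in GiB
    }

params = rds_storage_config(initial_gib=100, max_gib=1000)
# boto3.client("rds").create_db_instance(**params) would apply this config.
```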

Auto Scaling is a key feature for EC2 and Amazon Aurora, as well. Those services enable dynamic scaling, up or down, based on user-defined policies for performance and cost optimization. RDS Auto Scaling, however, only scales up.

Users who experience cyclical data spikes and lulls may need to use Aurora Serverless or provide additional automation on top of RDS Auto Scaling to bring their storage capacity back down. However, RDS Auto Scaling should still simplify provisioning of storage capacity in most cases.

Users pay for the database resources they use, which includes Amazon CloudWatch monitoring.

Amazon adds Personalize to ML portfolio

Like Amazon SageMaker, Amazon Personalize doesn’t require advanced ML and AI knowledge. The service stems from the machine learning models Amazon.com uses to recommend products, and it offers that capability in a plug-and-play fashion to AWS users and their applications.

To get started with Amazon Personalize, users can set up an application activity stream through the Amazon Personalize API. This stream logs customer interactions in the application, along with the items the business would like to recommend. Amazon Personalize then trains a machine learning model customized to that data and generates real-time recommendations. AWS users can start with a two-month free trial, with data processing and storage limitations.

May 31, 2019  7:42 PM

AWS month in review: Updated Lambda execution environment on its way

Ryan Dowd Profile: Ryan Dowd

AWS this month said it will update the execution environment for Lambda and Lambda@Edge. Lambda runs on top of the Amazon Linux OS distribution, which AWS will move to version 2018.03 in July. AWS has also begun to highlight AWS Ground Station, its niche managed satellite service, as the first two stations are now open for business. Finally this May, AWS weighed in on the Clarifying Lawful Overseas Use of Data (CLOUD) Act enacted in March 2018. AWS echoed support for the law but also insisted it will defend its users’ data to the extent international law allows.

The updated AWS Lambda execution environment AMI should improve Lambda capabilities, performance and security, according to AWS. However, the transition could impact Lambda functions that house libraries or application code compiled against specific underlying OS packages or other system libraries. Lambda users should proactively test their existing functions before the general update goes live Tuesday, July 16.

An AWS Lambda execution environment is what users’ code runs on, made up of an underlying OS, system packages, the runtime for your language, and common capabilities like environment variables. Users can test their functions for the new environment in the Lambda console if they have enabled the Opt-in layer, which will tell Lambda to run function executions on the new environment. They can also test locally through an updated AWS Serverless Application Model CLI, which uses a Docker image that mirrors the new Lambda environment.

On June 11, any newly created Lambda function will run on the updated execution environment. And on June 25, any updated Lambda function will run on the new environment, too.

The general update will occur on July 16, and all existing functions will use the new execution environment when invoked. If you aren’t ready to deploy to the new execution environment, enable the Delayed-Update layer which will push the distribution transition back to July 23. All functions will have to be migrated by July 29.

The safest course is to begin testing Lambda functions now, especially those suspected to have dependencies compiled against system packages.
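One low-tech way to confirm which environment a function actually lands on is a throwaway diagnostic handler that reports the underlying OS release; deploy it, then invoke it with and without the Opt-in layer. A minimal sketch (the fallback branch only matters when running the file outside Lambda):

```python
import platform

def lambda_handler(event, context):
    """Return the OS release the function runs on, to confirm whether an
    invocation landed on the old or the updated Amazon Linux environment."""
    try:
        with open("/etc/system-release") as f:  # present on Amazon Linux
            release = f.read().strip()
    except FileNotFoundError:
        release = platform.platform()           # fallback outside Lambda
    return {"release": release, "python": platform.python_version()}

print(lambda_handler({}, None))
```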

AWS Ground Station is operational

Introduced at re:Invent 2018, AWS Ground Station enables you to downlink data from satellites. Ground stations are quite literally the base of global satellite networks. This managed service now has two antennas up and running, in the US East (Ohio) and US West (Oregon) Regions, with 10 more under construction and expected online in 2019.

Given its expense and the need for satellite access, Ground Station is a niche service that won’t make sense for every AWS user. However, for organizations that rely on satellite data, such as weather, maritime or aviation outfits, Ground Station has a chance to provide better data at a cheaper rate. If AWS successfully deploys the remaining antennas, organizations will be able to connect to satellites when and where they need data, without steep management costs.

AWS Ground Station bills antenna use in per-minute increments and will only charge for time scheduled.

AWS weighs in on CLOUD Act

Since it was enacted in March 2018, the CLOUD Act has caused tension between privacy advocates and the big tech companies that support the law, among them AWS. Responding to a U.S. Department of Justice white paper on the law, AWS hoped to quell users’ privacy concerns.

While the white paper outlines the law’s purpose, scope, and importance as a model for international cooperation, AWS insists that the CLOUD Act will not affect its ability to protect its customers’ data. In short, the CLOUD Act streamlines the process by which law enforcement agencies can compel service providers to turn over data outlined in a warrant. AWS, though, insists it reviews any request for customer data and gives users the option to encrypt data in-transit and at rest. AWS also points to its history of challenging government requests for user information, especially when they conflict with local laws — think GDPR here.

AWS is trying to walk a fine line here, complying with the DOJ but also appealing to the privacy concerns of its customers. AWS and other big tech companies will continue to be the middleman in this privacy conflict.

April 30, 2019  6:10 PM

AWS month in review: Enable snapshot automation for Redshift

Ryan Dowd Profile: Ryan Dowd

Expanded AWS snapshot capabilities of two prominent database services should make them more versatile for data backup.

Amazon Redshift can now take automatic, incremental data snapshots that users can schedule and bulk-delete. To enable AWS snapshot automation, users configure a snapshot schedule, with cron-style granularity, through the AWS Management Console or an API. Once the schedule is set, it’s attached to a Redshift cluster to generate data backups at the defined interval. Users can then delete groups of unneeded snapshots to limit S3 storage costs.
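As a sketch of what schedule configuration looks like through the API, the request pairs a schedule identifier with one or more rate- or cron-style definitions. The identifier and 8-hour interval below are hypothetical:

```python
def snapshot_schedule(identifier, definitions):
    """Arguments for Redshift's CreateSnapshotSchedule API. Definitions use
    rate() or cron() syntax; this helper just validates the shape."""
    for d in definitions:
        if not (d.startswith("rate(") or d.startswith("cron(")):
            raise ValueError("definitions must use rate() or cron() syntax")
    return {
        "ScheduleIdentifier": identifier,
        "ScheduleDefinitions": definitions,
    }

sched = snapshot_schedule("every-8h", ["rate(8 hours)"])
# boto3.client("redshift").create_snapshot_schedule(**sched) would create it;
# modify_cluster_snapshot_schedule then attaches it to a cluster.
```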

Also, Amazon Aurora Serverless has been updated so users can share database cluster snapshots publicly or with other AWS accounts. Approved users can access snapshot data directly rather than copy it. This may be useful to share data between development and production environments, or for collaboration between an enterprise and its research partner. Users will have to be careful with this capability and watch what information they share publicly.

Cluster snapshots can also be copied across regions, a feature that, along with AWS snapshot automation, organizations may want to incorporate into their disaster recovery or migration strategies.

AWS packs block storage into Snowball Edge

AWS expanded its hybrid cloud capabilities with block storage on AWS Snowball Edge. Users can now access block, file and object storage for edge applications. Block storage enables AWS users to quickly deploy applications based on EC2 Amazon Machine Images (AMIs) that need at least one block storage volume. AWS continues to advance the capabilities of its edge devices, an area that has been a natural shortcoming of cloud computing.

T3a instances offer a Nitro boost

AWS has added seven new T3a EC2 instances that cost 10% less than comparable existing T3 instances. Similar to the new M5ad and R5ad instances, T3a instances are built on the AWS Nitro System and deliver burstable, cost-effective performance. The instance will work best for workloads that require a baseline of around 2 vCPUs but experience temporary spikes in usage.

T3a instances are available in five regions so far: U.S. East (N. Virginia), U.S. West (Oregon), Europe (Ireland), U.S. East (Ohio) and Asia Pacific (Singapore).

More migration support

AWS Server Migration Service (SMS) can now transfer Microsoft Azure VMs to AWS cloud, which makes it easier to incorporate Microsoft Azure applications into AWS. Use AWS SMS to discover Azure VMs, sort them into applications and then migrate the application group as a single unit, without the need to replicate individual servers or decouple application dependencies. While this service is free, users still pay for AWS resources used — and keep in mind the potential costs of Azure-to-AWS migration.

AWS is also launching a service to migrate your files to Amazon WorkDocs. The WorkDocs migration service could help enterprises consolidate their files, if they choose to go all in on AWS. The migration service enables organizations to configure their migration tasks, i.e., which source they want to migrate to which WorkDocs account and site. Backed by AWS DataSync, the Amazon WorkDocs migration service enables users to execute a data transfer all at once, over a specific period or in recurring syncs.

Amazon Elasticsearch updates

Amazon Elasticsearch Service (ES) now supports open source Elasticsearch 6.5 and Kibana 6.5. This update includes several added features, such as the auto-interval date histogram, conditional token filters and early termination support for min/max aggregations.

Amazon ES also provides built-in monitoring and alerting, which enable AWS users to track data stored in their domain and send notifications based on pre-set thresholds. Alerting is a key feature of the Open Distro for Elasticsearch, AWS’ Apache-licensed distribution of Elasticsearch co-developed by Expedia and Netflix.

March 29, 2019  3:33 PM

AWS month in review: More AWS deep learning capabilities

Ryan Dowd Profile: Ryan Dowd
AWS, Deep learning, Machine learning

This month, AWS gave its users more machine learning capabilities along with a few opportunities to learn, train and get certified with the technology.

Announced at the AWS Summit in Santa Clara, AWS Deep Learning Containers (DL Containers) enable developers to use Docker images preinstalled with deep learning frameworks, such as TensorFlow and Apache MXNet, to scale machine learning workloads efficiently.

Developers often use Docker containers for machine learning workloads and custom machine learning environments, but that usually involves days of testing and configuration. DL Containers will help developers deploy these machine learning workloads more quickly on Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS).

DL Containers offer the flexibility to build custom machine learning workflows for training, validation and deployment, and they handle container orchestration as well. Along with EKS and ECS, DL Containers also work with self-managed Kubernetes on Amazon EC2. This new capability will enable developers to focus on deep learning, building and training new models, instead of tedious container orchestration.

AWS also added a new specialty certification for machine learning. The AWS Certified Machine Learning Specialty certification validates a user’s ability to design, implement, deploy, and maintain AWS machine learning services and processes. The exam costs $40.

Concurrency Scaling for Redshift

AWS now offers Concurrency Scaling to handle high volume requests in Amazon Redshift. Before Concurrency Scaling, Redshift users encountered performance issues when too many business analysts tried to access the database concurrently; Redshift’s compute capability lacked the flexibility to adapt on-demand.

Now, when users enable the Concurrency Scaling feature, Redshift automatically adds cluster capacity at peak times. You pay for what you use and can remove the extra processing power when it’s no longer needed.

AWS Direct Connect console completes global transformation

The global AWS Direct Connect console is now generally available with a redesigned UI. The service establishes a dedicated connection between an organization’s data center and AWS, but those connections were previously limited to links to Direct Connect locations within the same AWS region. Users can now connect to any AWS region, except China, from any AWS Direct Connect location.

AWS also increased connection capacity — available through approved Direct Connect Partners — and lowered prices for low-end users.

DeepRacer League kicks off

The AWS Santa Clara Summit was also opening day for the AWS DeepRacer League’s summer circuit, a workshop and competition with AWS’ little autonomous car that could.

Introduced at re:Invent 2018, AWS DeepRacer is a one-eighth scale car that includes a fully configured environment on Amazon’s cloud. Operators train their vehicles with reinforcement learning models, such as an autonomous driving model. Much like a human or dog, DeepRacer learns via trial and error. Reinforcement learning models include reward functions that give the car a treat, in the form of a score, for good behavior, which in this case means staying on the track. AWS DeepRacer is meant to give developers hands-on experience with reinforcement learning, a capability recently added to Amazon SageMaker.
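Reward functions follow a fixed shape: the service calls your function with a dictionary describing the car’s state and expects a numeric reward back. Here is a simple centerline-style example; the tier thresholds are illustrative, not prescribed:

```python
def reward_function(params):
    """DeepRacer calls this with a dict of the car's state and uses the
    returned float to reinforce behavior during training."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]
    all_wheels_on_track = params["all_wheels_on_track"]

    if not all_wheels_on_track:
        return 1e-3  # near-zero treat for leaving the track

    # Tiered treats: the closer to the center line, the higher the reward
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3

state = {"track_width": 1.0, "distance_from_center": 0.05,
         "all_wheels_on_track": True}
print(reward_function(state))  # 1.0
```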

Congratulations to Cloud Brigade, which, with a time of 00:10.43, sits in pole position on the leaderboard after the first contest. AWS’ toy cars go on sale in April.

February 28, 2019  3:55 PM

AWS month in review: More improvements for hybrid cloud

Ryan Dowd Profile: Ryan Dowd
AWS, Hybrid cloud

In recent years, AWS has grown less dogmatic with regard to hybrid cloud architecture. AWS users already have some capabilities to build AWS hybrid cloud architectures with tools such as AWS Direct Connect, Snowballs and, most notably, VMware Cloud on AWS. AWS Outposts, unveiled at re:Invent 2018, is perhaps the exclamation point of AWS’ long transition toward a more hybrid cloud future, with on-premises compute and storage racks made of AWS hardware. And AWS furthered this thread when it acquired the Israel-based cloud migration company CloudEndure in January.

In February 2019, AWS’ hybrid cloud plans took another step forward with tweaks to some services that simplify the migration and integration of on-premises environments.

AWS’ Server Migration Service, which admins use to automate, schedule and track the replication of on-premises applications and server volumes to the AWS cloud, now enables them to directly import and migrate applications discovered by AWS Migration Hub, without the need to re-create server and application groupings. This should reduce the time it takes to import on-premises applications to the AWS cloud and cut down on migration errors.

Meanwhile, AWS added the Infrequent Access storage class in Amazon Elastic File System (EFS) as a less expensive option for both on-premises and AWS files and resources that are sporadically used. This is a cheaper way to store larger amounts of data that you don’t use every day. Unlike standard EFS, EFS Infrequent Access carries an additional cost for every access request. Users won’t need to move or delete their data from AWS to manage costs anymore.

Finally, AWS has added architecture reviews for both hybrid cloud and on-premises workloads to its Well-Architected Tool portfolio. Based on the AWS Well-Architected Framework and developed by experienced AWS architects, the AWS Well-Architected Tool recommends adjustments to make workloads more scalable and efficient. To review workloads for their AWS hybrid cloud architecture, users select both the AWS and non-AWS Region (or regions) when they define their workload in the tool.

AWS bolsters bare metal, GuardDuty

AWS has added five EC2 bare metal instances — M5, M5d, R5, R5d and z1d — designed for all-purpose workloads, such as web and application servers, gaming servers, caching fleets and app development environments. The R5 instances target high-performance databases, real-time big data analytics and other memory-intensive enterprise applications.

AWS has also added three threat detections for its security monitoring service Amazon GuardDuty: two for penetration testing and one for policy violations.

AWS Solutions opens up shop

AWS continues to put its Well-Architected Framework to use. AWS Solutions is a portfolio of deployment designs and implementations vetted to guide users through common problems and enable them to build faster. Examples include guides for AWS Landing Zone, AWS Instance Scheduler, and live streaming on AWS, among others.

More CloudFormation integrations

AWS CloudFormation now supports Amazon FSx, AWS OpsWorks and WebSocket APIs in Amazon API Gateway. Interest in infrastructure as code (IaC) is only growing with tools like Terraform and CloudFormation, but AWS needs to continue to expand CloudFormation’s native integrations to make it a more viable IaC option.

January 31, 2019  8:38 PM

AWS month in review: Cloud SLAs abound

Trevor Jones Trevor Jones Profile: Trevor Jones

Amazon this month added a bevy of performance guarantees to its cloud services.

Service-level agreements (SLAs) are standard practice in traditional IT, but cloud SLAs are far from universal. For most enterprises, an IT product that lacks an SLA is a nonstarter, so it makes sense for AWS to provide these contractual assurances to lure more corporate customers to its cloud.

All told, AWS added cloud SLAs to 11 services in January: Elastic File System, Elastic MapReduce (now simply called “EMR”), Kinesis Data Streams, Kinesis Data Firehose, Kinesis Video Streams, Elastic Container Service for Kubernetes, Elastic Container Registry, Secrets Manager, Amazon MQ, Cognito and Step Functions. The cloud SLAs vary by service, but they all include a 99.9% monthly uptime guarantee, with service credits if AWS fails to meet that standard.
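For a sense of scale, a 99.9% monthly uptime guarantee still permits a meaningful outage window. The arithmetic:

```python
def allowed_downtime_minutes(uptime_pct, days=30):
    """Convert a monthly uptime guarantee into the outage budget it allows."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes per month
print(round(allowed_downtime_minutes(99.99), 2))  # 4.32 minutes per month
```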

AWS has offered SLAs for its core infrastructure services for some time, but these latest agreements follow a trend of marked expansion of Amazon’s cloud SLAs for higher-level services the vendor manages on its own internal infrastructure.

It’s hard to gauge the impact of these cloud SLAs on adoption. For example, EMR has been around for a decade without one, while Lambda, which added an SLA in October, is among the most talked about services on the platform. Still, it’s clear that AWS felt the need to put these terms in writing and is confident enough in its backend to do so.

Acquisitions and added services

The cloud SLAs are important, but no contract language generates the same buzz among IT teams as new tools to play with. In that regard, AWS came out of the gate quickly to start 2019.

It added WorkLink, a service to securely connect employee devices to corporate intranets and apps; Backup, a centralized console to manage and automate backups; DocumentDB, a MongoDB-compatible document database; and Media2Cloud, a serverless ingest workflow for video content.

There were also two acquisitions that should bolster AWS’ capabilities for cost analysis, as well as backup, disaster recovery and migration.

Open source and AWS

DocumentDB added fuel to the fire in the debate about licensing on top of open source software. AWS built MongoDB compatibility through an API, which enabled it to forgo the licensing restrictions MongoDB added last year.

AWS has a thorny history of contributing back to open source projects, though company leaders contend the reputation no longer fits. But, as is often the case, these things are never quite so black and white. In fact, just this week AWS became a platinum member of the Apache Software Foundation.

December 21, 2018  3:31 PM

AWS month in review: Cloud networking services abound

Trevor Jones Trevor Jones Profile: Trevor Jones

December didn’t deliver the avalanche of services and features that surrounded AWS re:Invent in November, but AWS didn’t exactly close out the year quietly. Amazon put its cloud networking services front and center this month with tools to secure connections for cloud-based workloads, and it also added a larger GPU-powered instance type and an EU region in Stockholm.

The newest AWS cloud networking service, AWS Client VPN, enables a customer’s employees to remotely access their company resources either on AWS or inside on-premises data centers. An employee can access the service from anywhere via OpenVPN-based clients. AWS already had a virtual private network (VPN) service, which it now calls AWS Site-to-Site VPN. However, that product only connects offices and branches to an organization’s Amazon Virtual Private Cloud (VPC) environment.

Organizations can already host OpenVPN on Amazon EC2, so they’ll need to determine whether it’s cheaper to go that route and incur the charges from both vendors, or opt for this bundled, pay-as-you-go cloud networking service. Client VPN is more expensive than OpenVPN on its own, so the decision comes down to how much an organization spends on its instances. AWS charges hourly for the service, per active client connection and associated subnet.
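The comparison is straightforward to model once current prices are plugged in. The sketch below uses placeholder rates, not published AWS prices, purely to show the shape of the calculation:

```python
HOURS_PER_MONTH = 730

def managed_vpn_cost(subnet_rate, connection_rate, subnets, avg_connections):
    """Client VPN billing shape: hourly charges per associated subnet plus
    per active connection. Rates here are placeholders, not real prices."""
    return HOURS_PER_MONTH * (subnets * subnet_rate
                              + avg_connections * connection_rate)

def self_hosted_cost(instance_hourly, license_monthly):
    """Self-managed OpenVPN on EC2: instance time plus license fees
    (placeholder figures), ignoring patching and admin time."""
    return HOURS_PER_MONTH * instance_hourly + license_monthly

managed = managed_vpn_cost(0.10, 0.05, subnets=2, avg_connections=20)
diy = self_hosted_cost(0.10, 50)
# Which comes out ahead depends on connection volume and how much the
# organization values not running the VPN instances itself.
```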

Another factor to consider is management, as an organization that uses Client VPN won’t have to maintain any EC2 instances. This is the latest example of AWS’ efforts to offer services that handle the infrastructure for the user — and the cloud vendor plans to do more of this in the future, to attract enterprise clients that don’t want to deal with all those operational complexities.

Organizations can now use a WebSocket API with Amazon API Gateway. Prior to this update, users of the service were limited to the HTTP request/response model, but the WebSocket protocol provides bidirectional communication. This opens the door to a wider range of interactions between end users and services, because the service can push data independent of a specific request.

We’ll have a more thorough analysis on this feature in the coming weeks, but AWS suggests developers can use this functionality to build real-time, serverless applications such as chat apps, multi-player games and collaborative platforms.

Also on the networking front, users can now access Amazon Simple Queue Service (Amazon SQS) and AWS CodePipeline directly through their Amazon VPC, via VPC endpoints and AWS PrivateLink, to securely connect services and keep data off the public internet. The Amazon SQS update in particular is a “meat and potatoes” item that’s more important to some users than flashier services that debuted at re:Invent, according to one prominent AWS engineer.

Lastly, organizations can now share Amazon VPCs with multiple accounts. Large customers use multiple accounts to portion off different business units or teams for security or billing purposes, AWS said. VPC sharing takes responsibility for management and configuration out of the account holder’s hands and gives it to the IT team, which can then dole out access to these shared environments as needed.

December 17, 2018  7:56 PM

AWS’ container roadmap reveal helps customers plan ahead

Chris Kanaracus Profile: Chris Kanaracus

AWS has been fairly secretive about its technology roadmaps, and drops news without warning on its corporate blog or in the flood of news at its annual re:Invent conference.

To be sure, AWS huddles with customers behind the scenes to get their feedback and determine which directions to head next. But anyone who trawls the AWS website in search of a tidy PowerPoint deck that outlines the future of a service important to their business is in for a long and fruitless journey.

Suddenly, however, last week the cloud vendor ever so slightly shifted its approach, when it quietly posted an “experimental” roadmap for AWS’ container strategy on GitHub.

“Knowing about our upcoming products and priorities helps our customers plan,” the company said. “This repository contains information about what we are working on and allows all AWS customers to give direct feedback.”

The AWS container roadmap is split into three categories: “We’re Working On it,” “Coming Soon” and “Just Shipped.” There are no major revelations in any of them; many entries relate to new regions for EKS, AWS’ managed Kubernetes service, while others are on minor to middling feature updates. Nonetheless, it provides a lot more specifics than AWS has been known to let into the wild.

That’s not to say AWS hasn’t hedged its bets. For one thing, the roadmap lists no delivery dates, because “job zero is security and operational stability,” according to AWS. The company did allow that “coming soon” means “a couple of months out, give or take.”

The roadmaps include information on the majority of development for various AWS container-based services — Elastic Container Service, Fargate, EKS and other projects — but the company said it still plans to reveal other technologies without notice, to “surprise and delight our customers.”

Roadmaps are undoubtedly a boon to customers, but they can be a thorny proposition for vendors because they’re officially and publicly on the hook to deliver. To AWS’ credit, many services it unveils are generally available at that time, or in preview. Vaporware hasn’t been an appreciable part of its modus operandi, although some attendees at this year’s re:Invent grumbled at a few rather vague product announcements.

Vendors that provide many roadmaps tend to lard them up with boilerplate exhortations that plans can change. This is particularly true for publicly traded companies, which may consider roadmap details “forward-looking statements,” a phrase that carries legal and financial weight.

Still, roadmaps are more than just a useful tool for customers. Product organizations like them too when constructed in a certain way, judging from discussions on a community site for product managers. Roadmaps should come in a number of flavors, according to several contributors. For example, a development team-facing roadmap should provide realistic estimates of what can get built if no nasty technical surprises crop up. A roadmap geared for sales teams ought to list top features expected in the next couple of quarters.

A third type of roadmap is higher-level and aimed at customers, media and analysts, users said. It provides a company’s big-picture plans over the next year or two, but shies away from concrete details to give room for tweaks to the strategy.

AWS hasn’t done anything close to this, but again, it’s not as if the company toils in a vacuum and shuts out customer input — quite the contrary.

Yet someone with influence inside AWS clearly decided more transparency into roadmaps was desirable — even if for now the focus is on containers, where the market grows more competitive by the day. Don’t expect any state secret-level dirt on AWS’ container strategy through the roadmap, but customers with money to spend on existing or new container workloads will appreciate more clarity as they make plans. Now it’s time to wait and see whether AWS’ experimental effort becomes embedded in its culture.

November 30, 2018  7:49 PM

AWS month in review: Expanded EC2 options and a blastoff into space

Kristin Knapp Profile: Kristin Knapp

IT and development teams who try to keep pace with AWS’ ever-expanding portfolio have a lot to catch up on this month.

First, to see the most significant news that came out of AWS re:Invent 2018 this week, check out SearchAWS’ end-to-end guide on the show. There, you can find details about AWS’ latest AI services, databases, storage and security features, and hybrid cloud strategy — which now includes an on-premises hardware component in Outposts.

But re:Invent, and the month of November in general, brought about other important AWS features and tools for users who run day-to-day operations on the Amazon cloud – and there’s even some fodder for those who use AWS to explore outer space.

More EC2 options for management, compute

Pause and resume:  Admins now have the option to pause, or hibernate, Amazon EC2 instances that are backed by Elastic Block Store (EBS). This feature enables users to maintain a “pre-warmed” set of compute instances, so they can launch applications, particularly memory-intensive ones, more quickly than if those instances had to fully reboot after a shutdown. Amazon likened the process to hibernating a laptop rather than turning it off.

Users can control the pause/resume process via the AWS Management Console, the AWS SDKs or the command-line interface. The feature applies to M3, M4, M5, C3, C4, C5, R3, R4 and R5 instances that run Amazon Linux 1.
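From the SDK, hibernation is a launch-time option plus a flag on the stop call. A minimal sketch of the boto3 request parameters involved (instance IDs are placeholders; hibernation also requires an encrypted EBS root volume large enough to hold the instance's RAM contents):

```python
def hibernation_launch_options():
    """Extra kwargs for ec2.run_instances: hibernation must be enabled at launch."""
    return {"HibernationOptions": {"Configured": True}}

def hibernate_stop_params(instance_ids):
    """Request for ec2.stop_instances that hibernates rather than shutting down."""
    return {
        "InstanceIds": instance_ids,
        "Hibernate": True,  # RAM state is persisted to the encrypted EBS root volume
    }

# Usage (placeholder ID): boto3.client("ec2").stop_instances(
#     **hibernate_stop_params(["i-0123456789abcdef0"]))
```

On the subsequent start, the instance resumes from the saved RAM image instead of performing a cold boot, which is what makes the "pre-warmed" fleet pattern work.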

More instance types move in: The Amazon EC2 instance family grew with the addition of A1, P3dn and C5n instance types. Intended for workloads that require high scalability, A1 instances are the first to be fueled by AWS’ ARM-based Graviton processors. The GPU-based P3dn instances, designed for machine learning, deliver four times the network throughput of the cloud provider’s existing P3 instance type. Lastly, the C5n family can use up to 100 Gbps of network bandwidth, making them a good fit for applications that require high network performance.

Additional storage, networking services

Amazon FSx: This managed file share service debuted in two flavors: Amazon FSx for Lustre and Amazon FSx for Windows File Server. The former enables users to deploy the open source Lustre distributed file system on AWS, and is geared toward high-performance computing and machine learning apps. The second version, designed for Microsoft shops, delivers the Windows file system on AWS. It’s built on Windows Server, is compatible with Windows-based apps and supports Active Directory integration and Windows NTFS, AWS said.
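As an illustrative sketch, an FSx for Lustre file system linked to an S3 data set can be requested via boto3's `fsx.create_file_system`; the subnet ID and S3 path below are placeholders:

```python
def lustre_fs_params(subnet_id, s3_import_path, capacity_gib=3600):
    """Request body for boto3's fsx.create_file_system (illustrative sketch)."""
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,      # sized in GiB
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            "ImportPath": s3_import_path,     # e.g. "s3://my-training-data" (placeholder)
        },
    }
```

Linking the file system to an S3 path lets HPC or machine learning jobs see bucket objects as files and lazily load them at Lustre speeds, which is the main draw of this flavor.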

AWS Global Accelerator: For enterprises that deliver applications to a global customer base, Global Accelerator is a networking service that directs user traffic to the closest and highest-performing application endpoint. AWS expects the accelerator to ensure high availability and free enterprises from grappling with latency and performance issues over the public internet. In addition, the service uses static IP addresses, which, according to AWS, eliminate the need for users to manage unique IP addresses for different AWS availability zones or regions.
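A rough sketch of the two boto3 calls involved in standing up an accelerator — `globalaccelerator.create_accelerator` and `create_listener` — with the name and ARN as placeholders; the helpers only build the request parameters:

```python
def accelerator_params(name):
    """Request body for globalaccelerator.create_accelerator (sketch)."""
    return {
        "Name": name,
        "IpAddressType": "IPV4",  # AWS allocates static anycast IPv4 addresses
        "Enabled": True,
    }

def listener_params(accelerator_arn, port=443):
    """Request body for globalaccelerator.create_listener on that accelerator."""
    return {
        "AcceleratorArn": accelerator_arn,
        "PortRanges": [{"FromPort": port, "ToPort": port}],
        "Protocol": "TCP",
    }
```

Endpoint groups per region are then attached behind the listener, and the static anycast addresses route each user to the nearest healthy endpoint.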

AWS Transit Gateway: Another service intended to simplify network management, Transit Gateway lets customers hitch their own on-premises networks, remote office networks and Amazon VPCs to one centralized gateway. Admins manage one connection from that central gateway to each VPC and on-premises network they use. The cloud provider described it as a hub-and-spoke model; think of Transit Gateway as the “hub” that centrally controls and directs traffic to various network “spokes.”
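In boto3 terms, the hub comes from `ec2.create_transit_gateway` and each spoke is attached with `ec2.create_transit_gateway_vpc_attachment`. A minimal sketch of the attachment request, with placeholder IDs:

```python
def tgw_vpc_attachment_params(transit_gateway_id, vpc_id, subnet_ids):
    """Request body for ec2.create_transit_gateway_vpc_attachment (illustrative)."""
    return {
        "TransitGatewayId": transit_gateway_id,  # the central "hub"
        "VpcId": vpc_id,                         # one network "spoke"
        "SubnetIds": subnet_ids,                 # typically one subnet per availability zone
    }
```

Repeating the attachment call per VPC (and adding VPN attachments for on-premises networks) replaces the mesh of pairwise VPC peering connections that customers previously had to maintain.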

A big move in microservices

AWS App Mesh: Based on the open source service proxy Envoy, App Mesh streamlines microservices management. App Mesh users can monitor communication between individual microservices, and implement rules that govern that communication. It’ll be interesting to see how App Mesh stacks up against other service mesh options, such as Azure Service Fabric and the open source Istio technology, behind which Google, in particular, has thrown its weight.
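One example of the kind of rule App Mesh supports is a weighted HTTP route that shifts a slice of traffic to a canary version of a service. A sketch of the route `spec` that would be passed to boto3's `appmesh.create_route`; the virtual-node names are placeholders:

```python
def canary_route_spec(stable_node, canary_node, canary_weight=10):
    """An App Mesh HTTP route spec splitting traffic between two virtual nodes
    (illustrative; passed as `spec` to appmesh.create_route)."""
    return {
        "httpRoute": {
            "match": {"prefix": "/"},  # apply to all requests on this virtual router
            "action": {
                "weightedTargets": [
                    {"virtualNode": stable_node, "weight": 100 - canary_weight},
                    {"virtualNode": canary_node, "weight": canary_weight},
                ]
            },
        }
    }
```

Because the Envoy proxies enforce the weights, traffic can be shifted gradually without touching application code or load balancer configuration.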

AWS takes to space

Other AWS news this month took more of a, well, celestial slant.

The cloud provider teamed up with Lockheed Martin to provide easier and cheaper ways for companies to collect satellite data and move it into the cloud for storage and analysis.

Lockheed’s Verge system of globally distributed, compact satellite antennae will work in conjunction with AWS Ground Station, a service that co-locates ground antennas inside AWS availability zones around the world.

Previously, organizations such as NASA and companies like Mapbox had to write complex business logic and scripts to upload and download satellite data, AWS CEO Andy Jassy said at re:Invent. Ground Station lets users work with satellite streams from the AWS management console, and pay by the minute for antenna time. It’s now in preview in two AWS regions, with 10 more to come early next year.

It’s another example of AWS going after specialized customers, but the partnership could also have broader resonance among AWS’ user base. Research organizations and niche startup companies are the heaviest users of satellite data, but enterprise IT shops in general should also watch the implications of geographic information systems (GIS) and spatial data on their business, said Holger Mueller, VP and principal analyst with Constellation Research in Cupertino, Calif.

“Making that data available in an easy, secure, scalable and affordable way is key for next-generation enterprise apps,” he said.

AWS isn’t first in this market — SAP and the European Space Agency partnered in 2016 to bring satellite data into SAP’s HANA Cloud Platform — but its moves to build out a global satellite antenna network take the idea much further.

*Senior News Writer Chris Kanaracus contributed to this blog.
