AWS Cloud Cover


November 26, 2019  9:13 PM

AWS month in review: Explore the CloudFormation CLI, Savings Plans

Ryan Dowd
AWS (Amazon Web Services), Cloud costs

Each November, AWS pushes important updates to its platform that might otherwise get lost in the shuffle at re:Invent. This year was no different, with a slew of moves to address automation, cost controls and containers ahead of Amazon’s marquee cloud conference.

Among the dozens of new features and services rolled out this month were additions to CloudFormation, including the capability to create, register and import resources, as well as a new discount pricing plan for EC2 compute instances, custom log routing for containers and a Lambda update.

CloudFormation expands its reach

The AWS CloudFormation CLI is an open source toolset that enables users to incorporate a range of tools and services into CloudFormation, even if those resources aren’t currently supported by the infrastructure-as-code tool. This includes AWS offerings on the CloudFormation roadmap or third-party resources.

To get started with the AWS CloudFormation CLI, model a general schema for the resource, develop skeleton code for core actions, test the resource provider in your local environment and register it with CloudFormation in your desired AWS Region. Once the resource is registered, you can call and manage this custom resource like any other CloudFormation action.
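
The final registration step can also be scripted. Here's a minimal boto3 sketch, assuming a hypothetical type name and a handler package already built and uploaded by the CloudFormation CLI:

```python
import boto3

# Hypothetical example: register a packaged resource provider with
# CloudFormation in the current Region. The S3 URL points to the schema
# handler package produced by the CloudFormation CLI's submit step.
cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.register_type(
    Type="RESOURCE",
    TypeName="MyCompany::Monitoring::Dashboard",  # assumed custom type name
    SchemaHandlerPackage="s3://my-bucket/my-dashboard-handler.zip",
)

# Registration is asynchronous; poll the returned token for status.
status = cfn.describe_type_registration(
    RegistrationToken=response["RegistrationToken"]
)
print(status["ProgressStatus"])
```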

AWS also added a resource import capability for CloudFormation, so developers can import an existing resource to be managed with a CloudFormation template. This will help users centralize infrastructure management.

Use the resource import command in CloudFormation to access existing resources and bring them into a CloudFormation stack. This capability currently supports import operations for S3, EC2, Lambda and more.
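
Under the hood, imports run through change sets. Here's a hedged boto3 sketch using an IMPORT change set, with a placeholder bucket standing in for the existing resource:

```python
import boto3

cfn = boto3.client("cloudformation")

# The template must declare the resource with a DeletionPolicy, and the
# resource identifier must match the live bucket.
template = """
Resources:
  ImportedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: my-existing-bucket
"""

cfn.create_change_set(
    StackName="imported-resources",
    ChangeSetName="import-bucket",
    ChangeSetType="IMPORT",
    TemplateBody=template,
    ResourcesToImport=[{
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": "ImportedBucket",
        "ResourceIdentifier": {"BucketName": "my-existing-bucket"},
    }],
)
# Review the change set, then call execute_change_set to finish the import.
```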

After user gripes over CloudFormation feature lag drew some attention earlier this year, AWS has spent the last few months improving its infrastructure-as-code tool. To make the effort more transparent, AWS created a public coverage roadmap, where users can suggest improvements and integrations and track progress. So far, the CloudFormation team has shipped 43 feature updates and integrations, with more on the way.

Savings Plans for EC2 expenses

AWS added a pricing plan in an apparent response to IT teams that are often overwhelmed or confused by the range of discount options and stipulations the cloud provider offers for its compute resources.

Savings Plans is a discount program similar to EC2 Reserved Instances, but with more flexibility. It offers the same discount as Reserved Instances if users commit to a set compute amount — measured in dollars per hour — for either a one- or three-year term.

A Savings Plan consists of two prices — a Savings Plan price and a higher On Demand price. When you set a compute amount per hour, all usage up to that limit is charged at the Savings Plan price. Any usage beyond the limit is charged at the On Demand rate.

Additionally, there are two types of Savings Plans — compute and EC2 instance plans. Compute plans are the more flexible option. They apply to any EC2 expenses regardless of region, instance family, OS or tenancy. EC2 instance plans apply to a specific instance family within a region but offer the larger discount. EC2 instance plans can still cover different instance sizes within the same family.

AWS Cost Explorer can recommend a Savings Plan based on your recent compute usage, but forecast expected usage before committing to a Savings Plan.
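
For example, here's a short boto3 sketch of pulling that recommendation from Cost Explorer; the parameter values shown are just one reasonable starting point, not a prescription:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Ask Cost Explorer for a compute Savings Plan recommendation based on
# the last 30 days of usage, no upfront payment, one-year term.
rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

for detail in rec["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationDetails", []
):
    print(detail["HourlyCommitmentToPurchase"], detail["EstimatedSavingsAmount"])
```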

FireLens for container logging

AWS FireLens is a new custom log routing feature for container services such as Amazon Elastic Container Service (ECS) and AWS Fargate. You can use task definition parameters to route container logs to AWS monitoring services or to AWS Partner Network destinations that support Fluentd or Fluent Bit, such as Datadog and New Relic, among others.

To use FireLens, create a new task execution IAM role that grants permission to access the services involved for log analysis or storage, such as CloudWatch or Amazon Kinesis Data Firehose. You can then use the Fluent Bit image provided by AWS, which includes plugins for CloudWatch and Kinesis Data Firehose, and create a task definition for custom log routing.
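
As an illustration, here's a hedged task definition sketch in boto3 that pairs an app container with the AWS Fluent Bit sidecar; the role ARNs, images and log group names are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="firelens-example",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    taskRoleArn="arn:aws:iam::123456789012:role/firelensTaskRole",
    containerDefinitions=[
        {
            # Fluent Bit sidecar that acts as the log router.
            "name": "log_router",
            "image": "amazon/aws-for-fluent-bit:latest",
            "essential": True,
            "firelensConfiguration": {"type": "fluentbit"},
        },
        {
            # Application container; its stdout/stderr flow through FireLens.
            "name": "app",
            "image": "nginx",
            "essential": True,
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "Name": "cloudwatch",
                    "region": "us-east-1",
                    "log_group_name": "/ecs/firelens-example",
                    "log_stream_prefix": "app-",
                    "auto_create_group": "true",
                },
            },
        },
    ],
)
```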

AWS Lambda Destinations for asynchronous tasks

The latest update to AWS Lambda is intended to reduce complexity and increase resiliency when building and managing serverless applications.

AWS Lambda Destinations is a feature that adds visibility into asynchronous invocations, alerting developers when those tasks have been processed correctly. Previously, Lambda could only tell users that an event had been received by the corresponding queue, with no information about whether it completed successfully. Developers would have to write additional code for a messaging service to handle any failures.

With this feature, developers can route the execution record to a destination resource without that added code. Execution status can instead be automatically directed — based on results — to another Lambda function, Amazon Simple Notification Service, Amazon Simple Queue Service or Amazon EventBridge.
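
A minimal sketch of wiring that up with boto3, assuming placeholder function names and destination ARNs:

```python
import boto3

lam = boto3.client("lambda")

# Route successful async invocation records to an SQS queue and
# failures to an SNS topic. The ARNs are placeholders.
lam.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:successes"},
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:failures"},
    },
)
```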

October 31, 2019  8:48 PM

AWS month in review: Amazon rolls out RDS on VMware, loses JEDI deal

Ryan Dowd
AWS (Amazon Web Services), VMware

This month, AWS solidified the next step in its hybrid cloud partnership with VMware, as it faces a more serious cloud market challenge from Microsoft, which won the coveted U.S. government JEDI cloud contract.

Amazon Relational Database Service (RDS) on VMware is generally available more than a year after its initial unveiling. The service, which organizations deploy in on-premises vSphere environments, provides many of the same benefits as RDS on AWS — automated provisioning and scaling, as well as integration with Amazon cloud services such as Amazon CloudWatch and AWS Direct Connect. RDS on VMware initially supports Microsoft SQL Server, PostgreSQL and MySQL.

It is a useful addition to AWS' hybrid cloud portfolio, but it has some limitations. Admins will rely on the same web interface as the original but will need to hop through a series of prerequisite hoops — configuring a VMware environment for resiliency and high availability, for instance — to onboard a vSphere cluster and get RDS on VMware up and running.

RDS on VMware pricing is consistent with regular RDS pricing, but it will likely be more expensive overall because enterprises need to run it on their own infrastructure. The service makes the most sense for workloads that have to stay on premises to comply with security, privacy, regulatory or data sovereignty policies.

Amazon could challenge JEDI deal

While industry experts tabbed AWS as the favorite to land the Pentagon’s JEDI cloud computing contract, the deal ultimately went to Microsoft. It could net Microsoft up to $10 billion and reshape the cloud market landscape. Amazon hasn’t publicly outlined its response yet, but it could appeal the decision to the Government Accountability Office or file a federal lawsuit to challenge the award.

AWS already provides cloud services to many federal agencies, including the CIA, but missing out on the JEDI contract is a wakeup call for the cloud provider and its presumed dominance in the market. Experts cite existing Pentagon investments in Microsoft Office 365, improved security certifications and stronger AI and machine learning capabilities as reasons the Pentagon went with Microsoft instead of AWS.

Editor’s note: AWS has since appealed the Pentagon’s choice in the U.S. Court of Federal Claims.

EC2 instances size up

In less controversial matters this month, AWS also made a number of improvements to its EC2 instances.

AWS expanded its A1 instance fleet with a bare-metal option, a1.metal. Developers tap A1 instances for scale-out workloads and Arm-based applications such as web frontends or containerized microservices. A1 instances support popular Linux distributions, as well as all major programming languages and container deployments. Bare-metal instances work best for applications that need access to physical resources and low-level hardware features, such as performance counters, and applications intended to run directly on the hardware, according to AWS.

For its M and R instance families, AWS added instance types with expanded network capabilities. The M5n, M5dn, R5n and R5dn instances can access up to 100 Gbps of network bandwidth, which enables faster data transfers and reduces data ingestion times. They are designed to handle workloads for databases, high performance computing and analytics.

AWS also expanded its EC2 high-memory instances with options for 18 TiB and 24 TiB of memory. These are heavy-duty instances for users to run large-scale SAP HANA installations with Amazon S3, Elastic Block Store and other common Amazon cloud services. Like the original 6 TiB, 9 TiB and 12 TiB high-memory bare-metal instances, the larger versions are only available with a three-year reservation. Pricing hasn't been set for them yet.

CloudWatch anomaly detection

This month, AWS added an anomaly detection feature to Amazon CloudWatch. In the past, setting up CloudWatch Alarms was an art form of sorts — making sure your alarm thresholds could catch issues early but not incite a host of false alarms. CloudWatch Anomaly Detection applies machine learning to this process and can take over configuration of CloudWatch metrics.

Anomaly Detection analyzes the historical values for a chosen metric and produces a model that takes into account the metric’s normal patterns — spikes and lulls — so CloudWatch can accurately detect abnormal behavior. AWS users can then change the model as they see fit. They can activate anomaly detection by clicking the wave icon on a metric within CloudWatch.
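
For those who prefer the API, here's a hedged boto3 sketch: train a detector on a metric, then alarm on the expected-value band. The instance ID and alarm name are placeholders:

```python
import boto3

cw = boto3.client("cloudwatch")

# Train an anomaly detection model on an EC2 CPU metric.
cw.put_anomaly_detector(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Stat="Average",
)

# Alarm when the metric rises above the model's expected band.
cw.put_metric_alarm(
    AlarmName="cpu-anomaly",
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=3,
    ThresholdMetricId="band",
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [
                        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}
                    ],
                },
                "Period": 300,
                "Stat": "Average",
            },
        },
        # Band width of 2 standard deviations around expected values.
        {"Id": "band", "Expression": "ANOMALY_DETECTION_BAND(m1, 2)"},
    ],
)
```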


September 30, 2019  8:41 PM

AWS updates: Amazon QLDB hits, S3 expands, new instances arrive

Ryan Dowd
AWS (Amazon Web Services), Blockchain, Cloud storage

The Amazon Quantum Ledger Database allows users to store data in logs that are transparent, immutable and cryptographically verifiable.

The managed, serverless AWS ledger database, available as of September, is append-only, which means you can't change the data within the database. This is what makes Quantum Ledger Database (QLDB) immutable. QLDB also supports PartiQL, AWS' open source SQL-compatible query language, and enables you to export all or part of your data to Amazon S3.

QLDB shares some similarities with blockchain technology – though it is not technically blockchain. It uses the cryptographically verifiable properties of blockchain, but unlike blockchain, which relies on distributed, peer-based verification, QLDB is built on centralized trust. If your application calls for decentralized trust and depends on outside or untrustworthy parties, then a straight blockchain technology would be a better fit, which is why Amazon offers an additional managed blockchain service. If your application only requires a ledger of all application data changes available for query and analysis, the AWS ledger database is a suitable option.

Companies that deal with financial transactions could find a use for QLDB. It could also support companies that log and update customers on supply chain progress. You can access the AWS ledger database in the AWS Management Console, the AWS Command Line Interface, a CloudFormation template or by calling the QLDB API.
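
A small boto3 sketch of the first step, with a hypothetical ledger name:

```python
import boto3

qldb = boto3.client("qldb")

# Create a ledger and verify its state. PermissionsMode ALLOW_ALL was
# the mode available at launch.
qldb.create_ledger(
    Name="vehicle-registration",  # hypothetical ledger name
    PermissionsMode="ALLOW_ALL",
    DeletionProtection=True,
)

print(qldb.describe_ledger(Name="vehicle-registration")["State"])
```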

Amazon S3 Same-Region Replication

This September, Amazon bolstered its S3 replication capabilities with Amazon S3 Same-Region Replication (SRR). This capability automatically replicates new S3 objects to a destination bucket in the same AWS Region. It replicates not only the objects but also the metadata, access control lists and object tags.

SRR builds on S3 Cross-Region Replication (CRR), which replicates objects across additional AWS regions. Together, SRR and CRR protect you from accidental deletion — or an AWS outage. They can also help your organization comply with data sovereignty laws and compliance requirements as well as minimize S3 latency.

You can enable object replication with a bucket-level configuration. You’ll configure the source bucket with the destination bucket where you want S3 to replicate objects and an AWS Identity and Access Management role that can replicate objects on your behalf.
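
Here's a hedged boto3 sketch of such a rule; the bucket names and role ARN are placeholders, and both buckets must have versioning enabled:

```python
import boto3

s3 = boto3.client("s3")

# A Same-Region Replication rule that copies every new object. The
# replication role must allow the s3:Replicate* actions on both buckets.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix matches all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)
```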

Amazon EC2 G4 instances

Amazon this month added EC2 G4 instances to its GPU-powered compute fleet. The new instances are a cost-effective option to power machine learning models in production and graphics-intensive applications. This latest generation offers the newest NVIDIA T4 GPUs, better networking throughput and more local storage. To start, Amazon is offering six different instance sizes, with varying levels of memory, storage and network bandwidth, and plans to add a bare metal version soon.

Amazon launched its Deep Learning Containers in March 2019, and these EC2 G4 instances should make running those deep learning workloads more efficient as AWS rounds out its machine learning portfolio.

FireLens preview

FireLens for Amazon Elastic Container Service (ECS) enables you to use task definition parameters to route logs to an Amazon cloud service for retention and analytics. AWS has opened a public preview so users can test its basic functionality. FireLens supports Fluentd and Fluent Bit, but also provides the AWS Fluent Bit plugin, which AWS launched earlier this summer.

Fluent Bit is an open source, multi-platform log processor and forwarder. It enables you to collect data from multiple sources and send them to different destinations. The Fluent Bit plugin for container images enables you to route container logs to Amazon CloudWatch and Kinesis Data Firehose.

FireLens for ECS should be a more direct way to route your container logs. With the public preview, you can test three use cases. You can send standard container output logs to different AWS destinations, filter out unnecessary logs and decorate logs with ECS metadata.

AWS IQ

AWS IQ is a service that connects companies with AWS Certified, third-party AWS experts for help on a cloud project. The service is available as of late September.


August 30, 2019  5:10 PM

AWS month in review: AWS time series forecasting service is GA

Ryan Dowd
AWS, Machine learning

Amazon Forecast is another ML service for an IT team’s toolbelt to help companies predict production demands, such as necessary inventory levels, along with other predictive uses. AWS users don’t need ML expertise to use the service, but official documentation is light so far, which could make this AWS time series forecasting service tough for beginners.

The managed predictive analysis service was initially unveiled at AWS re:Invent 2018. Unlike other predictive services, Forecast’s machine learning models use time as an additional dimension, which makes it particularly accurate to predict broad business trends, according to AWS. Early use cases are around resource and financial planning.

AWS positions Forecast as an easy-to-use, pay-as-you-go service that doesn't require machine learning experience. The user provides the relevant data sets, and the AWS time series forecasting service picks an appropriate machine learning algorithm to produce a forecasting model, which includes the model's expected accuracy.

Users with machine learning experience can bring their own custom algorithm and will likely want to add more data and retrain the model to improve on its initial expected accuracy. Unfortunately, those who want to really dig into Forecast documentation won’t find much yet. The most detailed guides are currently on GitHub.

Rekognition adds “Fear” face analysis capability

AWS improved its facial analysis service Rekognition amid increased backlash over Amazon’s involvement with U.S. government agencies such as Immigration and Customs Enforcement. This month, AWS improved Rekognition’s accuracy with gender identification, age estimation and emotion detection. It also added a new emotion — fear.
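
For illustration, a small boto3 sketch of face analysis with all attributes returned, including the per-face emotions list; the bucket and object names are placeholders:

```python
import boto3

rek = boto3.client("rekognition")

# Requesting all attributes returns an Emotions list per detected face,
# which now includes FEAR alongside the existing emotion types.
resp = rek.detect_faces(
    Image={"S3Object": {"Bucket": "my-images", "Name": "crowd.jpg"}},
    Attributes=["ALL"],
)

for face in resp["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```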

It’s been less than a banner summer for Rekognition. The city of Orlando, Fla. ended its pilot program with the technology in July. A recent study even called into question the viability of technological emotional analysis. Amazon’s own employees urged the company not to collaborate with law enforcement agencies like ICE, and Amazon ultimately rejected staff and shareholder calls to halt facial recognition sales to government agencies.

Lake Formation now open to all

AWS Lake Formation became generally available this month. Also introduced at re:Invent 2018, Lake Formation is a managed data lake service.

Organizations use data lakes to store, catalog, query and analyze massive amounts of raw data in one central repository. Building data lake architectures on AWS is a complicated process, where users string together several Amazon cloud services like S3, Amazon Elasticsearch, Amazon Athena and others. Lake Formation orchestrates all these services for you.

To get started, navigate to the Lake Formation console and register any existing S3 buckets that you want in your data lake. Create a database and grant permissions to Identity and Access Management users and roles that’ll need to access the data lake. Make sure the database is registered in the Glue Data Catalog for metadata analysis. To orchestrate data ingestion, select blueprints in the console that create different data lake workflows, such as an AWS Elastic Load Balancing (ELB) logs blueprint that loads data from ELB logs.
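
A minimal boto3 sketch of the registration and permission steps just described, with placeholder names:

```python
import boto3

lf = boto3.client("lakeformation")

# Register an existing S3 location with Lake Formation, letting the
# service-linked role handle access to the bucket.
lf.register_resource(
    ResourceArn="arn:aws:s3:::my-data-lake-bucket",
    UseServiceLinkedRole=True,
)

# Grant an IAM user permissions on a database in the data lake catalog.
lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:user/analyst"
    },
    Resource={"Database": {"Name": "sales_db"}},
    Permissions=["CREATE_TABLE", "ALTER", "DROP"],
)
```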

Follow the workflow progress in the AWS Glue console, and when it’s finished, you’ll find a new table in your data lake database. That centralization is a key benefit of Lake Formation.

Capital One hacker indicted

Former Amazon software engineer Paige Thompson was indicted Wednesday, Aug. 28, on two counts in connection with the recent Capital One hack and her alleged unauthorized intrusion into data from more than 30 different companies and institutions.

Thompson created scanning software that could identify if cloud computing customers misconfigured their firewalls, according to the indictment. She then allegedly used this access to steal data and channel stolen compute power into cryptojacking. Thompson faces up to 25 years in prison and will remain in custody until her arraignment Sept. 5.

Ahead of Thompson’s detention hearing earlier this month, federal prosecutors filed a memorandum that stated investigators searched Thompson’s servers and found multiple terabytes of additional stolen data from more than 30 different companies. With these additional allegations, along with a history of violent behavior, the court denied Thompson bail.

Capital One expects the affair to cost between $100 million and $150 million in 2019, according to Reuters.


July 31, 2019  7:22 PM

AWS month in review: Orlando drops AWS facial recognition program

Ryan Dowd
"Amazon Web Services", Artifical Intelligence, AWS

AWS this month suffered a setback to its expansion agenda, when the city of Orlando, Fla., ended its law enforcement partnership built around Amazon Rekognition facial recognition technology.

After 15 months of back and forth with this AWS facial recognition program, Orlando and Amazon have officially gone their separate ways. On July 18, Orlando declined to renew its second pilot program with Amazon Rekognition, AWS’ image analysis service. Bandwidth, video resolution and positioning issues plagued the use of the technology, and the city was unable to set up a reliable camera stream, Rosa Akhtarkavari, Orlando’s chief information officer, told the Orlando Weekly. The city simply lacked the IT infrastructure to support AI software, she said.

The first pilot began in December 2017 and ended in June 2018, before the second started back up again in October 2018. In theory, the city planned to use Rekognition’s facial recognition algorithms to identify and track suspects in real-time. If configured and supported properly, law enforcement officers would upload an image of a suspect and get notified if Rekognition found a match.

With Orlando out of the picture, Oregon's Washington County is the lone law enforcement agency that still uses AWS facial recognition technology. Both partnerships faced legal and media pressure from the American Civil Liberties Union, which argued that unchecked surveillance technology threatens privacy and civil liberties, and that Rekognition, in particular, misidentifies African-Americans as criminals at a higher rate than other races. Other cities, such as Oakland, Calif., and Somerville, Mass., have banned government use of the software. And Amazon's own employees and shareholders wrote a resolution that called for a halt to AWS facial recognition sales to government agencies, though Amazon's board of directors struck the measure down at its annual shareholder meeting in May 2019.

Amazon claims that the apparent racial bias occurred due to misuse and misunderstanding of the service. It has also argued that it’s up to the federal government, not Amazon, to legislate the use of this technology.

AWS expands CloudWatch, adds event-driven offering

While Amazon took a blow to its AI ambitions this month, it still improved some bread-and-butter capabilities of the AWS platform. The recently announced Amazon CloudWatch Container Insights and Anomaly Detection capabilities, along with the expansion of its EC2 Spot Instance service, should expand AWS' compute and monitoring flexibility.

AWS also added Amazon EventBridge, a serverless event bus that integrates users' AWS applications with SaaS applications. As more of its customers turn to event-driven applications and architectures, AWS needs a better way to integrate and route real-time data from third-party event sources, such as Datadog and PagerDuty, to service targets, such as AWS Lambda. EventBridge eliminates the need to write custom code that connects application events and should enable more efficient AWS event-driven architectures.
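
As a hedged sketch, routing events from a hypothetical partner event source to a Lambda function might look like this in boto3 (the bus name and ARNs are assumptions):

```python
import boto3

events = boto3.client("events")

# Match events arriving on a partner event bus and route them to a
# Lambda target. Partner bus names follow an aws.partner/... pattern;
# this one is a placeholder.
BUS = "aws.partner/pagerduty.com/my-account"

events.put_rule(
    Name="pagerduty-incidents",
    EventBusName=BUS,
    EventPattern='{"source": ["aws.partner/pagerduty.com/my-account"]}',
    State="ENABLED",
)

# The Lambda function also needs a resource policy allowing events.amazonaws.com.
events.put_targets(
    Rule="pagerduty-incidents",
    EventBusName=BUS,
    Targets=[{
        "Id": "incident-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-incident",
    }],
)
```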

Amazon CloudWatch Container Insights and Anomaly Detection give users more ways to analyze their metrics and improve performance and security. CloudWatch Container Insights collects and organizes metrics from AWS’ container services and files them in CloudWatch’s automatic dashboard. It also handles diagnostics, which can help users identify issues such as container restart failures. Users can set alarms for certain container metrics, including use of CPU, memory or network resources. Container Insights is in open preview.

CloudWatch Anomaly Detection uses machine learning algorithms to analyze data regarding the performance of your system or application. The capability analyzes a metric's past data to generate a model of expected values and establishes upper and lower bounds of accepted metric behavior. Users can then decide how and when they are notified. CloudWatch Anomaly Detection is in open preview and priced per alarm.

Spot Instances for Red Hat Enterprise Linux and Amazon SageMaker

Amazon EC2 Spot Instances let users obtain unused EC2 capacity at a discounted rate, and AWS recently extended those capabilities to users with a basic Red Hat Enterprise Linux (RHEL) subscription. Before, only premium RHEL subscribers could access Spot Instances. At its NYC Summit this July, AWS also revealed Spot Instances support for SageMaker users to train machine learning models, which AWS claims could cut training costs by up to 70%.
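
A hedged sketch of the flags involved on the SageMaker side; everything else here is a placeholder minimal job definition, not a recommended configuration:

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_training_job(
    TrainingJobName="spot-training-example",
    AlgorithmSpecification={
        # Placeholder training image URI.
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    EnableManagedSpotTraining=True,   # train on spare EC2 capacity
    StoppingCondition={
        "MaxRuntimeInSeconds": 3600,  # cap on actual training time
        "MaxWaitTimeInSeconds": 7200, # cap on training plus spot waiting
    },
    # Checkpoints let interrupted spot jobs resume where they left off.
    CheckpointConfig={"S3Uri": "s3://my-bucket/checkpoints/"},
)
```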


June 28, 2019  6:21 PM

AWS month in review: Security Hub goes live at AWS security conference

Ryan Dowd
"Amazon Web Services", Amazon, Cloud Security

AWS this month hosted its inaugural re:Inforce conference in Boston and used the setting to make AWS Security Hub and Control Tower generally available and to introduce a VPC network security feature.

Other AWS developments of note earlier in June included AWS' expansion of Auto Scaling to Amazon Relational Database Service (RDS), which should ease over-provisioning woes for some users, and the addition of Amazon Personalize to AWS' machine learning suite.

AWS re:Inforce, an AWS re:Invent-inspired spinoff devoted to cloud security, drew more than 8,000 attendees. The AWS security conference featured Amazon cloud service demos, training sessions and highlighted Security Hub and Control Tower, among other services, as ways to infuse more automation and visibility into cloud security processes.

Security Hub and Control Tower aim to centralize security insights and account management, respectively. Security Hub is a centralized security dashboard to monitor security and compliance posture. It collects and analyzes data from all the AWS security tools and resources you use and checks them against AWS security and compliance best practices – identifying an S3 bucket unintentionally left open to public access, for example.

AWS Control Tower was built to ease multi-account management. Control Tower automates the creation of a secure multi-account AWS environment, with AWS security best practices baked into the process. Accounts configured through Control Tower come with guardrails — high-level policies — that reject or report prohibited deployments.

Amazon Virtual Private Cloud (VPC) Traffic Mirroring is a feature for your Amazon VPC that analyzes network traffic at scale. AWS has described this capability as a “virtual fiber tap” that captures traffic flowing through your VPC. You can capture all the traffic or filter for specific network packets. VPC Traffic Mirroring should improve network visibility and help organizations check off monitoring compliance requirements.
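
A hedged boto3 sketch of setting up a mirror session end to end; all resource IDs are placeholders, and mirror sources must be supported instance types:

```python
import boto3

ec2 = boto3.client("ec2")

# The target is the ENI of a monitoring appliance that receives copies.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaaaaaaaaaaaaaaa",
)["TrafficMirrorTarget"]

# A filter plus rules decides which packets are captured.
filt = ec2.create_traffic_mirror_filter(
    Description="capture all inbound TCP",
)["TrafficMirrorFilter"]

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# The session ties a source ENI to the target through the filter.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0bbbbbbbbbbbbbbbb",  # source to mirror
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    SessionNumber=1,
)
```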

Amazon RDS supports Auto Scaling

Auto Scaling uses Amazon CloudWatch to monitor applications and then automatically scale them according to predetermined resource needs and parameters. Users can now set up Auto Scaling for RDS in the Management Console.

Before Auto Scaling, RDS users either overprovisioned new database instances to be safe or underprovisioned them to save some money. This meant they were either stuck footing a larger bill than necessary or had to increase capacity on the fly, which typically results in application downtime. To balance RDS performance and cost, users can now provision below expected peak capacity and set a maximum storage limit. Auto Scaling will boost capacity as database workloads grow.
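
In boto3 terms, that setup is a single extra parameter; a minimal sketch with placeholder values:

```python
import boto3

rds = boto3.client("rds")

# Start small and let RDS storage autoscaling grow the volume up to a
# ceiling you set with MaxAllocatedStorage.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder credential
    AllocatedStorage=100,        # initial provisioned storage (GiB)
    MaxAllocatedStorage=1000,    # autoscaling ceiling (GiB)
)
```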

Auto Scaling is a key feature for EC2 and Amazon Aurora as well. Those services enable dynamic scaling — up or down — to optimize performance and cost. RDS Auto Scaling, however, only scales up.

Users who experience cyclical data spikes and lulls may need to use Aurora Serverless or provide additional automation on top of RDS Auto Scaling to bring their storage capacity back down. However, RDS Auto Scaling should still simplify provisioning of storage capacity in most cases.

Users pay for the database resources they use, which includes Amazon CloudWatch monitoring.

Amazon adds Personalize to ML portfolio

Like Amazon SageMaker, Amazon Personalize doesn’t require advanced ML and AI knowledge. The service stems from the machine learning models that Amazon.com uses to recommend products and offers that capability in a plug-and-play fashion to AWS users and their applications.

To get started with Amazon Personalize, users set up an application activity stream through the Amazon Personalize API. This stream logs customer interactions with the application, along with the inventory of products they'd like to recommend. Amazon Personalize then customizes a machine learning model for that data and generates real-time recommendations. AWS users can start with a two-month free trial, with data processing and storage limitations.
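
A hedged sketch of the event-ingestion side in boto3, assuming an event tracker already exists; the tracking ID, user ID and item IDs are placeholders:

```python
import boto3
from datetime import datetime

# Stream a user interaction to an event tracker so Personalize can
# train on it. The tracking ID comes from create_event_tracker.
events = boto3.client("personalize-events")

events.put_events(
    trackingId="11111111-2222-3333-4444-555555555555",
    userId="user-42",
    sessionId="session-1",
    eventList=[{
        "eventType": "click",
        "properties": '{"itemId": "sku-123"}',  # JSON string payload
        "sentAt": datetime.now(),
    }],
)
```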


May 31, 2019  7:42 PM

AWS month in review: Updated Lambda execution environment on its way

Ryan Dowd

AWS this month said it will update the execution environment for Lambda and Lambda@Edge. Lambda runs on top of the Amazon Linux OS distribution, which AWS will move to version 2018.03 in July. AWS has also begun to highlight its niche managed satellite service, Ground Station, as the first two stations are now open for business. Finally this May, AWS weighed in on the Clarifying Lawful Overseas Use of Data (CLOUD) Act enacted in March 2018. AWS echoed support for the law but also insisted it will defend its users' data to the extent international law allows.

The updated AWS Lambda execution environment AMI should improve Lambda capabilities, performance and security, according to AWS. However, the transition could impact Lambda functions that house libraries or application code compiled against specific underlying OS packages or other system libraries. Lambda users should proactively test their existing functions before the general update goes live Tuesday, July 16.

An AWS Lambda execution environment is what users' code runs on, made up of an underlying OS, system packages, the runtime for a given language, and common capabilities such as environment variables. Users can test their functions against the new environment in the Lambda console if they enable the Opt-in layer, which tells Lambda to run function executions on the new environment. They can also test locally through an updated AWS Serverless Application Model CLI, which uses a Docker image that mirrors the new Lambda environment.

On June 11, any newly created Lambda function will run on the updated execution environment. And on June 25, any updated Lambda function will run on the new environment, too.

The general update will occur on July 16, and all existing functions will use the new execution environment when invoked. If you aren't ready to deploy to the new execution environment, enable the Delayed-Update layer, which will push the distribution transition back to July 23. All functions will have to be migrated by July 29.

The safest course is to begin testing Lambda functions now, especially those suspected to have dependencies compiled against system packages.

AWS Ground Station is operational

Introduced at re:Invent 2018, AWS Ground Station enables you to downlink data from satellites. Ground stations are quite literally the base of global satellite networks. The managed service now has two antennas up and running in the US East (Ohio) and US West (Oregon) Regions, with 10 more under construction and expected online in 2019.

Given its expense and the need for satellite access, Ground Station is a niche service that won't make sense for every AWS user. However, for organizations that rely on satellite data — weather, maritime or aviation — Ground Station has a chance to provide better data at a cheaper rate. If AWS successfully deploys the remaining antennas, then organizations will be able to connect to satellites when and where they need data, without steep management costs.

AWS Ground Station bills antenna use in per-minute increments and will only charge for time scheduled.

AWS weighs in on CLOUD Act

Since it was enacted in March 2018, the CLOUD Act has caused tension between privacy advocates and the big tech companies that support the law, among them AWS. Responding to a U.S. Department of Justice white paper, AWS hoped to quell users' privacy concerns.

While the white paper outlines the law's purpose, scope and importance as a model for international cooperation, AWS insists that the CLOUD Act will not affect its ability to protect its customers' data. In short, the CLOUD Act streamlines the process by which law enforcement agencies can compel service providers to turn over data outlined in a warrant. AWS, though, says it reviews any request for customer data and gives users the option to encrypt data in transit and at rest. AWS also points to its history of challenging government requests for user information, especially when they conflict with local laws — think GDPR here.

AWS is trying to walk a fine line here, complying with the DOJ while also appealing to the privacy concerns of its customers. AWS and other big tech companies will continue to be the middleman in this privacy conflict.


April 30, 2019  6:10 PM

AWS month in review: Enable snapshot automation for Redshift

Ryan Dowd

Expanded snapshot capabilities for two prominent AWS database services should make them more versatile for data backup.

Amazon Redshift can now take automatic, incremental data snapshots that users can schedule and bulk-delete. To enable AWS snapshot automation, users configure a snapshot schedule with cron-style granularity through the AWS Management Console or an API. Once the schedule is set, based on a time period or the amount of data changed per node, it's attached to a Redshift cluster to generate data backups. Users can then delete groups of unneeded snapshots to limit S3 storage costs.
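
A short boto3 sketch of that flow, with placeholder identifiers:

```python
import boto3

redshift = boto3.client("redshift")

# Define a schedule, then attach it to a cluster. Schedules also accept
# Redshift's modified cron syntax in place of rate expressions.
redshift.create_snapshot_schedule(
    ScheduleIdentifier="every-12-hours",
    ScheduleDefinitions=["rate(12 hours)"],
)

redshift.modify_cluster_snapshot_schedule(
    ClusterIdentifier="analytics-cluster",
    ScheduleIdentifier="every-12-hours",
)
```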

Also, Amazon Aurora Serverless has been updated so users can share database cluster snapshots publicly or with other AWS accounts. Approved users can access snapshot data directly rather than copy it. This may be useful to share data between development and production environments, or for collaboration between an enterprise and its research partner. Users will have to be careful with this capability and watch what information they share publicly.
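
A minimal boto3 sketch of sharing a snapshot with one other account (the identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Share a cluster snapshot by adding an account to its "restore"
# attribute. Using the value "all" instead would make the snapshot
# public, so treat that option with care.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-serverless-snapshot",
    AttributeName="restore",
    ValuesToAdd=["210987654321"],  # account ID to share with
)
```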

Cluster snapshots can also be copied across regions, a feature — along with AWS snapshot automation — that organizations may want to incorporate into their disaster recovery or migration strategies.

AWS packs block storage into Snowball Edge

AWS expanded its hybrid cloud capabilities with block storage on AWS Snowball Edge. Users can now access block, file and object storage for edge applications. Block storage enables AWS users to quickly deploy EC2 Amazon Machine Image (AMI)-based applications that need at least one block storage volume. AWS continues to advance the capabilities of its edge devices, an area that has been a natural shortcoming of cloud computing.

T3a instances offer a Nitro boost

AWS has added seven new T3a EC2 instances that cost 10% less than comparable existing T3 instances. Similar to the new M5ad and R5ad instances, T3a instances are built on the AWS Nitro System and deliver burstable, cost-effective performance. These instances work best for workloads that require a baseline of around 2 vCPUs but experience temporary spikes in usage.

T3a instances are available in five regions so far: U.S. East (N. Virginia), U.S. West (Oregon), Europe (Ireland), U.S. East (Ohio) and Asia Pacific (Singapore).

More migration support

AWS Server Migration Service (SMS) can now transfer Microsoft Azure VMs to the AWS cloud, which makes it easier to incorporate Azure applications into AWS. Use AWS SMS to discover Azure VMs, sort them into applications and then migrate the application group as a single unit, without the need to replicate individual servers or decouple application dependencies. While this service is free, users still pay for the AWS resources used — and keep in mind the potential costs of Azure-to-AWS migration.

AWS is also launching a service to migrate files to Amazon WorkDocs. The WorkDocs migration service could help enterprises consolidate their files, if they choose to go all in on AWS. The migration service enables organizations to configure their migration tasks, i.e., which source they want to migrate to which WorkDocs account and site. Backed by AWS DataSync, the Amazon WorkDocs migration service enables users to execute a data transfer all at once, over a specific period or in recurring syncs.

Amazon Elasticsearch updates

Amazon Elasticsearch Service (ES) now supports open source Elasticsearch 6.5 and Kibana 6.5. This update includes several added features, such as auto-interval date histograms, conditional token filters and early termination support for min/max aggregations.

Amazon ES also provides built-in monitoring and alerting, which enable AWS users to track data stored in their domain and send notifications based on pre-set thresholds. Alerting is a key feature of the Open Distro for Elasticsearch, AWS’ Apache-licensed distribution of Elasticsearch co-developed by Expedia and Netflix.


March 29, 2019  3:33 PM

AWS month in review: More AWS deep learning capabilities

Ryan Dowd
AWS, Deep learning, Machine learning

This month, AWS gave its users more machine learning capabilities along with a few opportunities to learn, train and get certified with the technology.

Announced at the AWS Summit in Santa Clara, AWS Deep Learning Containers (DL Containers) enable developers to use Docker images preinstalled with deep learning frameworks, such as TensorFlow and Apache MXNet, and to scale machine learning workloads efficiently.

Developers often use Docker containers for machine learning workloads and custom machine learning environments, but that usually involves days of testing and configuration. DL Containers will help developers deploy these machine learning workloads more quickly on Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS).

DL Containers offers the flexibility to build custom machine learning workflows for training, validation, and deployment and handles container orchestration as well. Along with EKS and ECS, DL Containers will work with Kubernetes on Amazon EC2 as well. This new capability will enable developers to focus on deep learning — building and training new models — instead of tedious container orchestration.

AWS also added a new specialty certification for machine learning. The AWS Certified Machine Learning Specialty certification validates a user’s ability to design, implement, deploy, and maintain AWS machine learning services and processes. The exam costs $40.

Concurrency Scaling for Redshift

AWS now offers Concurrency Scaling to handle high-volume requests in Amazon Redshift. Before Concurrency Scaling, Redshift users encountered performance issues when too many business analysts tried to access the database concurrently; Redshift's compute capacity lacked the flexibility to adapt on demand.

Now, when users enable the Concurrency Scaling feature, Redshift automatically adds cluster capacity at peak times. You pay for what you use and can remove the extra processing power when it's no longer needed.

AWS Direct Connect console completes global transformation

The global AWS Direct Connect console is now generally available with a redesigned UI. The service establishes a dedicated connection between an organization's data center and AWS, but those connections were previously limited to Direct Connect locations within the same AWS region. Users can now connect to any AWS region — except China — from any AWS Direct Connect location.

AWS also increased connection capacity — available through approved Direct Connect Partners — and lowered prices for low-end users.

DeepRacer League kicks off

The AWS Santa Clara Summit was also opening day for the AWS DeepRacer League’s summer circuit, a workshop and competition with AWS’ little autonomous car that could.

Introduced at re:Invent 2018, AWS DeepRacer is a one-eighth scale car that includes a fully configured environment on Amazon’s cloud. Operators train their vehicles with reinforcement learning models, such as an autonomous driving model. Much like a human or dog, DeepRacer learns via trial and error and users can reward their DeepRacer for success. Reinforcement learning models include reward functions that reward — think of code as a treat here — the car for good behavior, which in this case, means staying on the track. AWS DeepRacer is meant to get developers hands-on experience with reinforcement learning, a recent capability added to Amazon SageMaker.

Congratulations to Cloud Brigade, who with a time of 00:10.43 sits in the pole position on the leaderboard after the first contest. AWS’ toy cars go on sale in April.


February 28, 2019  3:55 PM

AWS month in review: More improvements for hybrid cloud

Ryan Dowd
AWS, Hybrid cloud

In recent years, AWS has grown less dogmatic with regard to hybrid cloud architecture. AWS users already have some capabilities to build AWS hybrid cloud architectures with tools such as AWS Direct Connect, Snowball devices and, most notably, VMware Cloud on AWS. AWS Outposts, unveiled at re:Invent 2018, is perhaps the exclamation point on AWS' long transition toward a more hybrid cloud future, with on-premises compute and storage racks made of AWS hardware. And AWS furthered this push when it acquired the Israel-based cloud migration company CloudEndure in January.

In February 2019, AWS’ hybrid cloud plans took another step forward with tweaks to some services that simplify the migration and integration of on-premises environments.

AWS’ Server Migration Service, which admins use to automate, schedule, and track the replication of on-premise applications and server volumes to AWS cloud,  now enables them to directly import and migrate applications discovered by AWS Migration Hub without the need to recreate server and applications groupings. This will reduce the time to import on-premises applications to AWS cloud and reduce migration errors.

Meanwhile, AWS added the Infrequent Access storage class to Amazon Elastic File System (EFS) as a less expensive option for both on-premises and AWS files and resources that are used only sporadically. This is a cheaper way to store larger amounts of data that you don't touch every day. Unlike standard EFS, EFS Infrequent Access carries an additional cost for every access request. Users no longer need to move or delete their data from AWS to manage costs.
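
Enabling it comes down to a one-call lifecycle policy; a minimal boto3 sketch with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Turn on the Infrequent Access storage class: files untouched for 30
# days transition to IA automatically.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```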

Finally, AWS has added architecture reviews for both hybrid cloud and on-premises workloads to its Well-Architected Tool portfolio. Based on the AWS Well-Architected Framework and developed by experienced AWS architects, the AWS Well-Architected Tool recommends adjustments to make workloads more scalable and efficient. To review workloads for their AWS hybrid cloud architecture, users select both the AWS and non-AWS Region (or regions) when they define their workload in the tool.

AWS bolsters bare metal, GuardDuty

AWS has added five EC2 bare metal instances — M5, M5d, R5, R5d and z1d — designed for all-purpose workloads, such as web and application servers, gaming servers, caching fleets and app development environments. The R5 instances target high-performance databases, real-time big data analytics and other memory-intensive enterprise applications.

AWS has also added three threat detections for its security monitoring service Amazon GuardDuty: two for penetration testing and one for policy violation.

AWS Solutions opens up shop

AWS continues to put its Well-Architected Framework to use. AWS Solutions is a portfolio of deployment designs and implementations vetted to guide users through common problems and enable them to build faster. Examples include guides for AWS Landing Zone, AWS Instance Scheduler, and live streaming on AWS, among others.

More CloudFormation integrations

AWS CloudFormation now supports Amazon FSx, AWS OpsWorks and WebSocket APIs in Amazon API Gateway. Interest in infrastructure as code (IaC) is only growing with tools like Terraform and CloudFormation, and AWS needs to continue to expand CloudFormation's native integrations to make it a more viable IaC option.

