AWS Cloud Cover

September 30, 2019  8:41 PM

AWS updates: Amazon QLDB hits GA, S3 expands, new instances arrive

Ryan Dowd Profile: Ryan Dowd
AWS ( Amazon Web Services ), Blockchain, Cloud storage

The Amazon Quantum Ledger Database allows users to store data in logs that are transparent, immutable and cryptographically verifiable.

The managed, serverless AWS ledger database, available as of September, is append-only, which means you can’t change the data within the database. This is what makes Quantum Ledger Database (QLDB) immutable. QLDB also supports PartiQL, AWS’ open source SQL-compatible query language, and enables you to export all or part of your data to Amazon S3.

QLDB shares some similarities with blockchain technology – though it is not technically blockchain. It uses the cryptographically verifiable properties of blockchain, but unlike blockchain, which relies on distributed, peer-based verification, QLDB is built on centralized trust. If your application calls for decentralized trust and depends on outside or untrusted parties, then a true blockchain technology would be a better fit, which is why Amazon offers a separate managed blockchain service. If your application only requires a ledger of all application data changes available for query and analysis, the AWS ledger database is a suitable option.

Companies that deal with financial transactions could find a use for QLDB. It could also support companies that log supply chain progress and update customers on it. You can access the AWS ledger database through the AWS Management Console, the AWS Command Line Interface, a CloudFormation template or the QLDB API.
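To illustrate, here is a sketch of the kinds of PartiQL statements a QLDB application might run. The ledger table and field names are hypothetical, and in practice you would execute these through a QLDB driver session rather than build raw strings.

```python
# Sketch: PartiQL statements for a hypothetical QLDB ledger that tracks
# vehicle registrations. Table and field names are illustrative; a real
# application would run these via a QLDB driver's execute_statement.

def create_table_stmt(table):
    return f"CREATE TABLE {table}"

def insert_stmt(table):
    # QLDB documents are inserted as PartiQL struct literals.
    return (f"INSERT INTO {table} VALUE "
            "{'VIN': 'KM8SRDHF6EU074761', 'Owner': 'Acme Corp'}")

def history_stmt(table):
    # history() exposes every revision of a document -- the append-only
    # log that makes QLDB cryptographically verifiable.
    return f"SELECT * FROM history({table}) AS h WHERE h.data.VIN = ?"

statements = [create_table_stmt("VehicleRegistrations"),
              insert_stmt("VehicleRegistrations"),
              history_stmt("VehicleRegistrations")]
```

The `history()` query is the piece with no direct SQL equivalent: it reads the full revision log rather than only the current state.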

Amazon S3 Same-Region Replication

This September, Amazon bolstered its S3 replication capabilities with Amazon S3 Same-Region Replication (SRR). This capability automatically replicates new S3 objects to a destination bucket in the same AWS region. It replicates not only the objects but also their metadata, access control lists and object tags.

SRR builds on S3 Cross-Region Replication (CRR), which replicates objects across additional AWS regions. Together, SRR and CRR protect you from accidental deletion — or an AWS outage. They can also help your organization comply with data sovereignty laws and compliance requirements as well as minimize S3 latency.

You can enable object replication with a bucket-level configuration. You’ll configure the source bucket with the destination bucket where you want S3 to replicate objects and an AWS Identity and Access Management role that can replicate objects on your behalf.
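As a sketch of that bucket-level configuration, the snippet below builds the kind of replication payload you would hand to S3's put-bucket-replication call. The bucket name, role ARN and rule ID are placeholders.

```python
# Sketch: a replication configuration for S3 Same-Region Replication.
# Bucket and role names are placeholders; the Role/Rules/Destination
# structure follows the S3 replication configuration schema.

def srr_config(destination_bucket, role_arn):
    return {
        "Role": role_arn,  # IAM role S3 assumes to copy objects for you
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},  # empty prefix = replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # For SRR this bucket lives in the same region as the source
                "Bucket": f"arn:aws:s3:::{destination_bucket}",
            },
        }],
    }

cfg = srr_config("my-logs-replica", "arn:aws:iam::123456789012:role/s3-replication")
```

The same structure drives CRR; the only practical difference is whether the destination bucket sits in another region.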

Amazon EC2 G4 instances

Amazon this month added EC2 G4 instances to its GPU-powered compute fleet. The new instances are a cost-effective option to power machine learning models in production and graphics-intensive applications. This latest generation offers the newest NVIDIA T4 GPUs, better networking throughput and more local storage. To start, Amazon offers six instance sizes, with varying levels of memory, storage and network bandwidth, and plans to add a bare metal version soon.

Amazon launched its Deep Learning Containers in March 2019, and the EC2 G4 instances should make running deep learning workloads more efficient as AWS rounds out its machine learning portfolio.

FireLens preview

FireLens for Amazon Elastic Container Service (ECS) enables you to use task definition parameters to route logs to an Amazon cloud service for retention and analytics. AWS has opened a public preview so users can test its basic functionality. FireLens supports Fluentd and Fluent Bit, but also provides the AWS Fluent Bit plugin, which AWS launched earlier this summer.

Fluent Bit is an open source, multi-platform log processor and forwarder. It enables you to collect data from multiple sources and send it to different destinations. The AWS Fluent Bit plugin for container images enables you to route container logs to Amazon CloudWatch and Kinesis Data Firehose.

FireLens for ECS should be a more direct way to route your container logs. With the public preview, you can test three use cases: send standard container output logs to different AWS destinations, filter out unnecessary logs and decorate logs with ECS metadata.
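For a sense of what those task definition parameters look like, here is a sketch of the two container definitions involved. The images, region and delivery stream name are placeholders, and the exact option keys depend on which Fluent Bit output plugin you choose.

```python
# Sketch: the FireLens-related pieces of an ECS task definition.
# A log router sidecar runs Fluent Bit; the app container's log driver
# hands its stdout/stderr to that router. Names are placeholders.

def firelens_task_containers():
    log_router = {
        "name": "log_router",
        "image": "amazon/aws-for-fluent-bit:latest",
        "firelensConfiguration": {"type": "fluentbit"},
    }
    app = {
        "name": "app",
        "image": "my-app:latest",  # placeholder application image
        "logConfiguration": {
            # "awsfirelens" routes this container's logs through log_router
            "logDriver": "awsfirelens",
            "options": {
                "Name": "firehose",                  # Fluent Bit output plugin
                "region": "us-east-1",
                "delivery_stream": "my-log-stream",  # placeholder stream name
            },
        },
    }
    return [log_router, app]
```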


AWS IQ connects users with experts

AWS IQ is a service that connects companies with AWS Certified, third-party AWS experts for help on a cloud project. The service is available as of late September.

August 30, 2019  5:10 PM

AWS month in review: AWS time series forecasting service is GA

Ryan Dowd Profile: Ryan Dowd
AWS, Machine learning

Amazon Forecast is another ML service for an IT team’s toolbelt, one that helps companies predict production demands, such as necessary inventory levels, among other predictive uses. AWS users don’t need ML expertise to use the service, but official documentation is light so far, which could make this AWS time series forecasting service tough for beginners.

The managed predictive analytics service was initially unveiled at AWS re:Invent 2018. Unlike other predictive services, Forecast’s machine learning models use time as an additional dimension, which makes the service particularly well suited to predict broad business trends, according to AWS. Early use cases center on resource and financial planning.

AWS positions Forecast as an easy-to-use, pay-as-you-go service that doesn’t require machine learning experience. The user provides the relevant data sets, and the AWS time series forecasting service picks an appropriate machine learning algorithm to produce a forecasting model, which includes the model’s expected accuracy.
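As a rough sketch of that workflow, the snippet below assembles the kind of request you would send to Forecast's CreatePredictor API with AutoML enabled. The names, horizon and frequency are illustrative.

```python
# Sketch: the request shape for Forecast's CreatePredictor call when the
# service picks the algorithm for you (PerformAutoML). The ARN, horizon
# and frequency here are placeholders.

def automl_predictor_request(dataset_group_arn):
    return {
        "PredictorName": "demand_predictor",
        "PerformAutoML": True,      # let Forecast choose the algorithm
        "ForecastHorizon": 30,      # predict 30 future time steps
        "InputDataConfig": {"DatasetGroupArn": dataset_group_arn},
        "FeaturizationConfig": {"ForecastFrequency": "D"},  # daily data
    }
```

Experienced users would instead set an explicit algorithm ARN and skip AutoML, which is the "bring your own algorithm" path described below.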

Users with machine learning experience can bring their own custom algorithm and will likely want to add more data and retrain the model to improve on its initial expected accuracy. Unfortunately, those who want to really dig into Forecast documentation won’t find much yet. The most detailed guides are currently on GitHub.

Rekognition adds “Fear” face analysis capability

AWS improved its facial analysis service Rekognition amid increased backlash over Amazon’s involvement with U.S. government agencies such as Immigration and Customs Enforcement. This month, AWS improved Rekognition’s accuracy with gender identification, age estimation and emotion detection. It also added a new emotion — fear.

It’s been less than a banner summer for Rekognition. The city of Orlando, Fla. ended its pilot program with the technology in July. A recent study even called into question the viability of technological emotional analysis. Amazon’s own employees urged the company not to collaborate with law enforcement agencies like ICE, and Amazon ultimately rejected staff and shareholder calls to halt facial recognition sales to government agencies.

Lake Formation now open to all

AWS Lake Formation became generally available this month. Also introduced at re:Invent 2018, Lake Formation is a managed data lake service.

Organizations use data lakes to store, catalog, query and analyze massive amounts of raw data in one central repository. Building data lake architectures on AWS is a complicated process, where users string together several Amazon cloud services like S3, Amazon Elasticsearch, Amazon Athena and others. Lake Formation orchestrates all these services for you.

To get started, navigate to the Lake Formation console and register any existing S3 buckets that you want in your data lake. Create a database and grant permissions to Identity and Access Management users and roles that’ll need to access the data lake. Make sure the database is registered in the Glue Data Catalog for metadata analysis. To orchestrate data ingestion, select blueprints in the console that create different data lake workflows, such as an AWS Elastic Load Balancing (ELB) logs blueprint that loads data from ELB logs.

Follow the workflow’s progress in the AWS Glue console, and when it’s finished, you’ll find a new table in your data lake database. That centralization is a key benefit of Lake Formation.
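To make the permissions step above concrete, here is a sketch of the request shape for Lake Formation's GrantPermissions call. The account ID, role and table names are placeholders.

```python
# Sketch: granting an IAM role read access to one table in a Lake
# Formation data lake. The structure follows the GrantPermissions API;
# the identifiers are placeholders.

def grant_select(account_id, role_name, database, table):
    return {
        "Principal": {
            "DataLakePrincipalIdentifier":
                f"arn:aws:iam::{account_id}:role/{role_name}"
        },
        "Resource": {
            "Table": {"DatabaseName": database, "Name": table}
        },
        "Permissions": ["SELECT"],        # query only -- no writes
        "PermissionsWithGrantOption": [], # cannot re-grant to others
    }

grant = grant_select("123456789012", "analyst", "lake_db", "elb_logs")
```

Centralizing grants like this, instead of juggling per-service S3 and Glue policies, is the access-control half of what Lake Formation orchestrates.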

Capital One hacker indicted

Former Amazon software engineer Paige Thompson was indicted Wednesday, Aug. 28, on two counts in connection with the recent Capital One hack and her unauthorized intrusion into data from more than 30 different companies and institutions.

Thompson created scanning software that could identify whether cloud computing customers had misconfigured their firewalls, according to the indictment. She then allegedly used this access to steal data and to channel stolen compute power into cryptocurrency mining, a practice known as cryptojacking. Thompson faces up to 25 years in prison and will remain in custody until her arraignment Sept. 5.

Ahead of Thompson’s detention hearing earlier this month, federal prosecutors filed a memorandum that stated investigators searched Thompson’s servers and found multiple terabytes of additional stolen data from more than 30 different companies. With these additional allegations, along with a history of violent behavior, the court denied Thompson bail.

Capital One expects the affair to cost between $100 million and $150 million in 2019, according to Reuters.

July 31, 2019  7:22 PM

AWS month in review: Orlando drops AWS facial recognition program

Ryan Dowd Profile: Ryan Dowd
"Amazon Web Services", Artificial intelligence, AWS

AWS this month suffered a setback to its expansion agenda when the city of Orlando, Fla., ended its law enforcement facial recognition partnership built on Amazon Rekognition technology.

After 15 months of back and forth with this AWS facial recognition program, Orlando and Amazon have officially gone their separate ways. On July 18, Orlando declined to renew its second pilot program with Amazon Rekognition, AWS’ image analysis service. Bandwidth, video resolution and positioning issues plagued the use of the technology, and the city was unable to set up a reliable camera stream, Rosa Akhtarkavari, Orlando’s chief information officer, told the Orlando Weekly. The city simply lacked the IT infrastructure to support AI software, she said.

The first pilot began in December 2017 and ended in June 2018, before the second started back up again in October 2018. In theory, the city planned to use Rekognition’s facial recognition algorithms to identify and track suspects in real-time. If configured and supported properly, law enforcement officers would upload an image of a suspect and get notified if Rekognition found a match.

With Orlando out of the picture, Oregon’s Washington County is the lone law enforcement agency that still uses AWS facial recognition technology. Both partnerships faced legal and media pressure from the American Civil Liberties Union, which argued that unchecked surveillance technology threatens privacy and civil liberties, and that Rekognition, in particular, misidentifies African-Americans as criminals at a higher rate than other races. Other cities, such as Oakland, Calif., and Somerville, Mass., have banned government use of the software. Amazon’s own employees and shareholders even put forward a resolution that called for a halt to AWS facial recognition sales to government agencies, though the measure was voted down at Amazon’s annual shareholder meeting in May 2019.

Amazon claims that the apparent racial bias occurred due to misuse and misunderstanding of the service. It has also argued that it’s up to the federal government, not Amazon, to legislate the use of this technology.

AWS expands CloudWatch, adds event-driven offering

While Amazon absorbed a blow to its AI ambitions this month, it still improved some bread-and-butter capabilities of the AWS platform. The recently announced Amazon CloudWatch Container Insights and Anomaly Detection capabilities, along with the expansion of its EC2 Spot Instance service, should expand AWS’ compute and monitoring flexibility.

AWS also added Amazon EventBridge, a serverless event bus that integrates users’ AWS applications with SaaS applications. As more of its customers turn to event-driven applications and architectures, AWS needs a better way to integrate and route real-time data from third-party event sources, such as Datadog and PagerDuty, to service targets, such as AWS Lambda. EventBridge eliminates the need to write custom code that connects application events and should enable more efficient AWS event-driven architectures.

Amazon CloudWatch Container Insights and Anomaly Detection give users more ways to analyze their metrics and improve performance and security. CloudWatch Container Insights collects and organizes metrics from AWS’ container services and files them in CloudWatch’s automatic dashboard. It also handles diagnostics, which can help users identify issues such as container restart failures. Users can set alarms for certain container metrics, including use of CPU, memory or network resources. Container Insights is in open preview.

CloudWatch Anomaly Detection uses machine learning algorithms to analyze data regarding the performance of your system or application. This anomaly detection capability then analyzes the metric’s past data to generate a model of expected values and establishes a high and low value of accepted metric behavior. Users can then decide how and when they are notified. CloudWatch Anomaly Detection is in open preview and priced per alarm.
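Under the hood, such an alarm is defined against an anomaly detection band rather than a fixed threshold. The sketch below shows the rough shape of that alarm definition; the metric, period and band width are illustrative.

```python
# Sketch: a CloudWatch alarm on an anomaly detection band, in the shape
# of a PutMetricAlarm request. The metric, period and the band width
# (standard deviations) are illustrative choices.

def anomaly_alarm(metric_name, namespace, stddevs=2):
    return {
        "AlarmName": f"{metric_name}-anomaly",
        # Fire when the metric rises above the band's upper edge
        "ComparisonOperator": "GreaterThanUpperThreshold",
        "EvaluationPeriods": 3,
        "ThresholdMetricId": "band",
        "Metrics": [
            {"Id": "m1",
             "MetricStat": {
                 "Metric": {"Namespace": namespace, "MetricName": metric_name},
                 "Period": 300, "Stat": "Average"}},
            # The band is a model-derived expression over m1;
            # a wider band (more stddevs) means fewer alarms.
            {"Id": "band",
             "Expression": f"ANOMALY_DETECTION_BAND(m1, {stddevs})"},
        ],
    }
```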

Spot Instances for Red Hat Enterprise Linux and Amazon SageMaker

Amazon EC2 Spot Instances let users obtain unused EC2 capacity at a discounted rate, and AWS recently extended those capabilities to users with a basic Red Hat Enterprise Linux (RHEL) subscription. Before, only premium RHEL subscribers could access Spot Instances. At its NYC Summit this July, AWS also revealed Spot Instances support for SageMaker users to train machine learning models, which AWS claims could cut training costs by up to 70%.

June 28, 2019  6:21 PM

AWS month in review: Security Hub goes live at AWS security conference

Ryan Dowd Profile: Ryan Dowd
"Amazon Web Services", Amazon, Cloud Security

AWS this month hosted its inaugural re:Inforce conference in Boston and used the setting to make AWS Security Hub and Control Tower generally available and to introduce a VPC network security feature.

Other AWS developments of note earlier in June included AWS’ expansion of Auto Scaling to Amazon Relational Database Service (RDS), which should ease over-provisioning woes for some users, and the addition of Amazon Personalize to AWS’ machine learning suite.

AWS re:Inforce, an AWS re:Invent-inspired spinoff devoted to cloud security, drew more than 8,000 attendees. The AWS security conference featured Amazon cloud service demos and training sessions, and highlighted Security Hub and Control Tower, among other services, as ways to infuse more automation and visibility into cloud security processes.

Security Hub and Control Tower aim to centralize security insights and account management, respectively. Security Hub is a centralized security dashboard to monitor security and compliance posture. It collects and analyzes data from all the AWS security tools and resources you use and checks them against AWS security and compliance best practices – identifying an S3 bucket unintentionally left open to public access, for example.

AWS Control Tower was built to ease multi-account management. Control Tower automates the creation of a secure multi-account AWS environment, with AWS security best practices baked into the process. Accounts configured through Control Tower come with guardrails — high-level policies — that reject or report prohibited deployments.

Amazon Virtual Private Cloud (VPC) Traffic Mirroring is a feature that lets you capture and inspect network traffic at scale. AWS has described this capability as a “virtual fiber tap” that captures traffic flowing through your VPC. You can capture all the traffic or filter for specific network packets. VPC Traffic Mirroring should improve network visibility and help organizations check off monitoring compliance requirements.

Amazon RDS supports Auto Scaling

Auto Scaling uses Amazon CloudWatch to monitor applications and then automatically scales them according to predetermined resource needs and parameters. Users can now set up Auto Scaling for RDS in the AWS Management Console.

Before Auto Scaling, RDS users either overprovisioned new database instances to be safe or underprovisioned them to save money. This meant they were either stuck footing a larger bill than necessary or had to increase capacity on the fly, which typically results in application downtime. To balance RDS performance and cost, users should now provision below expected capacity and set a maximum storage limit. Auto Scaling will boost capacity as database workloads grow.
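In practice, that comes down to two storage parameters on the database instance: the starting allocation and a ceiling for autoscaling. A minimal sketch, with illustrative sizes:

```python
# Sketch: the storage parameters of an RDS CreateDBInstance request with
# storage autoscaling enabled. Setting MaxAllocatedStorage above
# AllocatedStorage is what turns scale-up on; the sizes are illustrative.

def rds_storage_params(start_gib=100, ceiling_gib=1000):
    assert ceiling_gib > start_gib, "ceiling must exceed the starting size"
    return {
        "AllocatedStorage": start_gib,       # deliberately modest start
        "MaxAllocatedStorage": ceiling_gib,  # autoscaling stops here
        "StorageType": "gp2",
    }
```

The ceiling is the cost guardrail: growth is automatic, but only up to the limit you set.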

Auto Scaling is a key feature for EC2 and Amazon Aurora, as well. Those services enable dynamic scaling — up or down — based on user recommendations for performance and cost optimization. However, RDS Auto Scaling only scales up.

Users who experience cyclical data spikes and lulls may need to use Aurora Serverless or provide additional automation on top of RDS Auto Scaling to bring their storage capacity back down. However, RDS Auto Scaling should still simplify provisioning of storage capacity in most cases.

Users pay for the database resources they use, which includes Amazon CloudWatch monitoring.

Amazon adds Personalize to ML portfolio

Like Amazon SageMaker, Amazon Personalize doesn’t require advanced ML and AI knowledge. The service stems from the machine learning models Amazon.com uses to recommend products, and it offers that capability in a plug-and-play fashion to AWS users and their applications.

To get started with Amazon Personalize, users can set up an application activity stream on the Amazon Personalize API. This stream would log customer interaction on the application — along with products they’d like to recommend. Amazon Personalize will then customize a machine learning model for that data and generate real-time recommendations. AWS users can start with a two-month free trial, with data processing and storage limitations.
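As a sketch, the snippet below builds the kind of event payload the Personalize events API expects for a single interaction. The tracking, user and item IDs are placeholders.

```python
# Sketch: one interaction record in the shape of a Personalize PutEvents
# request. The tracking ID comes from an event tracker you create first;
# all identifiers here are placeholders.
import json
import time

def click_event(tracking_id, user_id, item_id):
    return {
        "trackingId": tracking_id,
        "userId": user_id,
        "sessionId": "session-1",  # groups events from one visit
        "eventList": [{
            "eventType": "click",
            "sentAt": int(time.time()),
            # Event properties travel as a JSON-encoded string
            "properties": json.dumps({"itemId": item_id}),
        }],
    }
```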

May 31, 2019  7:42 PM

AWS month in review: Updated Lambda execution environment on its way

Ryan Dowd Profile: Ryan Dowd

AWS this month said it will update the execution environment for Lambda and Lambda@Edge. Lambda runs on top of the Amazon Linux OS distribution, which AWS will move to version 2018.03 in July. AWS has also begun to highlight its niche managed satellite service, Ground Station, as the first two stations are now open for business. Finally this May, AWS weighed in on the Clarifying Lawful Overseas Use of Data (CLOUD) Act, enacted in March 2018. AWS echoed support for the law but insisted it will defend its users’ data to the extent international law allows.

The updated AWS Lambda execution environment should improve Lambda capabilities, performance and security, according to AWS. However, the transition could affect Lambda functions that house libraries or application code compiled against specific underlying OS packages or other system libraries. Lambda users should proactively test their existing functions before the general update goes live Tuesday, July 16.

An AWS Lambda execution environment is what users’ code runs on, made up of an underlying OS, system packages, the runtime for your language, and common capabilities like environment variables. Users can test their functions for the new environment in the Lambda console if they have enabled the Opt-in layer, which will tell Lambda to run function executions on the new environment. They can also test locally through an updated AWS Serverless Application Model CLI, which uses a Docker image that mirrors the new Lambda environment.

On June 11, any newly created Lambda function will run on the updated execution environment. And on June 25, any updated Lambda function will run on the new environment, too.

The general update will occur on July 16, and all existing functions will use the new execution environment when invoked. If you aren’t ready to deploy to the new execution environment, enable the Delayed-Update layer, which will push the transition back to July 23. All functions must be migrated by July 29.

The safest course is to begin testing Lambda functions now, especially those suspected to have dependencies compiled against system packages.

AWS Ground Station is operational

Introduced at re:Invent 2018, AWS Ground Station enables you to downlink data from satellites. Ground stations are quite literally the base of global satellite networks. The managed service now has two antennas up and running, in the US East (Ohio) and US West (Oregon) regions, with 10 more under construction and expected online in 2019.

Given expense and satellite access, Ground Station is a niche service that won’t make sense for every AWS user.  However, for organizations that rely on satellite data — weather, maritime or aviation — Ground Station has a chance to provide better data at a cheaper rate. If AWS successfully deploys the remaining antennas, then organizations will be able to connect to satellites when and where they need data, without steep management costs.

AWS Ground Station bills antenna use in per-minute increments and will only charge for time scheduled.

AWS weighs in on CLOUD Act

Since it was enacted in March 2018, the CLOUD Act has caused tension between privacy advocates and the big tech companies that support the law, among them AWS. In its response to a U.S. Department of Justice white paper, AWS sought to quell users’ privacy concerns.

While the white paper outlines the law’s purpose, scope, and importance as a model for international cooperation, AWS insists that the CLOUD Act will not affect its ability to protect its customers’ data. In short, the CLOUD Act streamlines the process by which law enforcement agencies can compel service providers to turn over data outlined in a warrant. AWS, though, insists it reviews any request for customer data and gives users the option to encrypt data in-transit and at rest. AWS also points to its history of challenging government requests for user information, especially when they conflict with local laws — think GDPR here.

AWS is trying to walk a fine line here, complying with the DOJ while also appealing to the privacy concerns of its customers. AWS and other big tech companies will continue to be the middleman in this privacy conflict.

April 30, 2019  6:10 PM

AWS month in review: Enable snapshot automation for Redshift

Ryan Dowd Profile: Ryan Dowd

Expanded AWS snapshot capabilities of two prominent database services should make them more versatile for data backup.

Amazon Redshift can now take automatic, incremental data snapshots that users can schedule and bulk-delete. To enable AWS snapshot automation, users configure a snapshot schedule with cron-style granularity through the AWS Management Console or an API. Once the schedule is set, based on a time interval or on the amount of data changed per node, it’s attached to a Redshift cluster to generate data backups. Users can then delete groups of unneeded snapshots to limit S3 storage costs.
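As a sketch of that flow, the snippet below shapes a schedule definition and its attachment to a cluster, following the Redshift CreateSnapshotSchedule and ModifyClusterSnapshotSchedule request shapes. The rate expression and identifiers are illustrative.

```python
# Sketch: defining a Redshift snapshot schedule, then attaching it to a
# cluster. Schedule definitions accept rate() or cron() expressions;
# all identifiers here are placeholders.

def snapshot_schedule(identifier="every-12h"):
    return {
        "ScheduleIdentifier": identifier,
        # One snapshot every 12 hours; a cron() expression works too
        "ScheduleDefinitions": ["rate(12 hours)"],
    }

def attach_schedule(cluster_id, schedule):
    # Shape of a ModifyClusterSnapshotSchedule request
    return {
        "ClusterIdentifier": cluster_id,
        "ScheduleIdentifier": schedule["ScheduleIdentifier"],
    }

schedule = snapshot_schedule()
attachment = attach_schedule("analytics-cluster", schedule)
```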

Also, Amazon Aurora Serverless has been updated so users can share database cluster snapshots publicly or with other AWS accounts. Approved users can access snapshot data directly rather than copy it. This may be useful to share data between development and production environments, or for collaboration between an enterprise and its research partner. Users will have to be careful with this capability and watch what information they share publicly.

Cluster snapshots can also be copied across regions, a feature — along with AWS snapshot automation — that organizations may want to incorporate into their disaster recovery or migration strategies.

AWS packs block storage into Snowball Edge

AWS expanded its hybrid cloud capabilities with block storage on AWS Snowball Edge. Users can now access block, file and object storage for edge applications. Block storage enables AWS users to quickly deploy EC2 Amazon Machine Image (AMI)-based applications that need at least one block storage volume. AWS continues to advance the capabilities of its edge devices, an area that has been a natural shortcoming of cloud computing.

T3a instances offer a Nitro boost

AWS has added seven new T3a EC2 instances that cost 10% less than comparable existing T3 instances. Similar to the new M5ad and R5ad instances, T3a instances are built on the AWS Nitro System and deliver burstable, cost-effective performance. The instances will work best for workloads that require a baseline of around 2 vCPUs but experience temporary spikes in usage.

T3a instances are available in five regions so far: U.S. East (N. Virginia), U.S. West (Oregon), Europe (Ireland), U.S. East (Ohio) and Asia Pacific (Singapore).

More migration support

AWS Server Migration Service (SMS) can now transfer Microsoft Azure VMs to AWS cloud, which makes it easier to incorporate Microsoft Azure applications into AWS. Use AWS SMS to discover Azure VMs, sort them into applications and then migrate the application group as a single unit, without the need to replicate individual servers or decouple application dependencies. While this service is free, users still pay for AWS resources used — and keep in mind the potential costs of Azure-to-AWS migration.

AWS is also launching a service to migrate your files to Amazon WorkDocs. The WorkDocs migration service could help enterprises consolidate their files, if they choose to go all in on AWS. The migration service enables organizations to configure their migration tasks, i.e., which source they want to migrate to which WorkDocs account and site. Backed by AWS DataSync, the Amazon WorkDocs migration service enables users to execute a data transfer all at once, over a specific period or in recurring syncs.

Amazon Elasticsearch updates

Amazon Elasticsearch Service (ES) now supports open source Elasticsearch 6.5 and Kibana 6.5. This update includes several added features, such as the auto date histogram aggregation, conditional token filters and early termination support for min/max aggregations.

Amazon ES also provides built-in monitoring and alerting, which enable AWS users to track data stored in their domain and send notifications based on pre-set thresholds. Alerting is a key feature of the Open Distro for Elasticsearch, AWS’ Apache-licensed distribution of Elasticsearch co-developed by Expedia and Netflix.

March 29, 2019  3:33 PM

AWS month in review: More AWS deep learning capabilities

Ryan Dowd Profile: Ryan Dowd
AWS, Deep learning, Machine learning

This month, AWS gave its users more machine learning capabilities along with a few opportunities to learn, train and get certified with the technology.

Announced at the AWS Summit in Santa Clara, AWS Deep Learning Containers (DL Containers) enable developers to use Docker images preinstalled with deep learning frameworks, such as TensorFlow and Apache MXNet, and to scale machine learning workloads efficiently.

Developers often use Docker containers for machine learning workloads and custom machine learning environments, but that usually involves days of testing and configuration. DL Containers will help developers deploy these machine learning workloads more quickly on Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS).

DL Containers offers the flexibility to build custom machine learning workflows for training, validation, and deployment and handles container orchestration as well. Along with EKS and ECS, DL Containers will work with Kubernetes on Amazon EC2 as well. This new capability will enable developers to focus on deep learning — building and training new models — instead of tedious container orchestration.

AWS also added a new specialty certification for machine learning. The AWS Certified Machine Learning Specialty certification validates a user’s ability to design, implement, deploy and maintain AWS machine learning services and processes. The exam costs $300.

Concurrency Scaling for Redshift

AWS now offers Concurrency Scaling to handle high volume requests in Amazon Redshift. Before Concurrency Scaling, Redshift users encountered performance issues when too many business analysts tried to access the database concurrently; Redshift’s compute capability lacked the flexibility to adapt on-demand.

Now, when users enable the Concurrency Scaling feature, Redshift automatically adds cluster capacity at peak times. You pay for what you use and can remove the extra processing power when it’s no longer needed.
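Concurrency scaling is enabled per workload management (WLM) queue in the cluster's parameter group. A sketch of what that queue configuration might look like, with illustrative queue settings:

```python
# Sketch: a Redshift WLM (workload management) queue configuration with
# concurrency scaling turned on for one queue. This JSON goes into the
# cluster's wlm_json_configuration parameter; queue names are illustrative.
import json

def wlm_with_concurrency_scaling():
    queues = [
        {"user_group": ["analysts"],
         "query_group": [],
         "concurrency_scaling": "auto"},  # burst to extra clusters at peak
        {"concurrency_scaling": "off"},   # default queue stays fixed-size
    ]
    return json.dumps(queues)
```

Scoping scaling to the analyst queue keeps burst capacity (and burst billing) limited to the workload that actually spikes.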

AWS Direct Connect console completes global transformation

The global AWS Direct Connect console is now generally available with a redesigned UI. The service establishes a dedicated connection between an organization’s data center and AWS, but those connections were previously limited to Direct Connect locations within the same AWS region. Now, users can connect to any AWS region — except China — from any AWS Direct Connect location.

AWS also increased connection capacity — available through approved Direct Connect Partners — and lowered prices for low-end users.

DeepRacer League kicks off

The AWS Santa Clara Summit was also opening day for the AWS DeepRacer League’s summer circuit, a workshop and competition with AWS’ little autonomous car that could.

Introduced at re:Invent 2018, AWS DeepRacer is a one-eighth scale car that comes with a fully configured environment on Amazon’s cloud. Operators train their vehicles with reinforcement learning models, such as an autonomous driving model. Much like a human or dog, DeepRacer learns via trial and error. Reinforcement learning models include reward functions that reward the car — think of code as a treat here — for good behavior, which in this case means staying on the track. AWS DeepRacer is meant to give developers hands-on experience with reinforcement learning, a capability recently added to Amazon SageMaker.
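A reward function is just code that scores each step of the drive. Here is a minimal sketch in the shape DeepRacer expects; the input parameters shown are part of DeepRacer's input space, but the specific weights are illustrative.

```python
# Sketch: a DeepRacer reward function that treats staying on the track
# and hugging the center line as "good behavior". The thresholds and
# reward values are illustrative choices.

def reward_function(params):
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    # Reward staying close to the center line
    distance = params["distance_from_center"]
    half_width = params["track_width"] / 2.0
    if distance <= 0.1 * half_width:
        return 1.0   # right on the center line
    if distance <= 0.5 * half_width:
        return 0.5   # drifting, but acceptable
    return 0.1       # near the edge -- discourage
```

Training then tunes the driving policy to maximize the cumulative reward this function hands out, lap after lap.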

Congratulations to Cloud Brigade, which, with a time of 00:10.43, sits in the pole position on the leaderboard after the first contest. AWS’ toy cars go on sale in April.

February 28, 2019  3:55 PM

AWS month in review: More improvements for hybrid cloud

Ryan Dowd Profile: Ryan Dowd
AWS, Hybrid cloud

In recent years, AWS has grown less dogmatic about hybrid cloud architecture. AWS users already have some capabilities to build AWS hybrid cloud architectures with tools such as AWS Direct Connect, Snowball devices and, most notably, VMware Cloud on AWS. AWS Outposts, unveiled at re:Invent 2018, is perhaps the exclamation point of AWS’ long transition toward a more hybrid cloud future, with on-premises compute and storage racks made of AWS hardware. And AWS continued this trend when it acquired the Israel-based cloud migration company CloudEndure in January.

In February 2019, AWS’ hybrid cloud plans took another step forward with tweaks to some services that simplify the migration and integration of on-premises environments.

AWS’ Server Migration Service, which admins use to automate, schedule and track the replication of on-premises applications and server volumes to the AWS cloud, now enables them to directly import and migrate applications discovered by AWS Migration Hub without the need to recreate server and application groupings. This should reduce the time it takes to import on-premises applications to the AWS cloud and cut down on migration errors.

Meanwhile, AWS added the Infrequent Access storage class in Amazon Elastic File System (EFS) as a less expensive option for both on-premises and AWS files and resources that are sporadically used. This is a cheaper way to store larger amounts of data that you don’t use every day. Unlike standard EFS, EFS Infrequent Access carries an additional cost for every access request. Users won’t need to move or delete their data from AWS to manage costs anymore.

Finally, AWS has added architecture reviews for both hybrid cloud and on-premises workloads to its Well-Architected Tool portfolio. Based on the AWS Well-Architected Framework and developed by experienced AWS architects, the AWS Well-Architected Tool recommends adjustments to make workloads more scalable and efficient. To review workloads for their AWS hybrid cloud architecture, users select both the AWS and non-AWS Region (or regions) when they define their workload in the tool.

AWS bolsters bare metal, GuardDuty

AWS has added five EC2 bare metal instances — M5, M5d, R5, R5d and z1d — designed for general-purpose workloads such as web and application servers, gaming servers, caching fleets and app development environments. The R5 instances target high-performance databases, real-time big data analytics and other memory-intensive enterprise applications.

AWS has also added three threat detections to its security monitoring service, Amazon GuardDuty: two for penetration testing and one for policy violations.

AWS Solutions opens up shop

AWS continues to put its Well-Architected Framework to use. AWS Solutions is a portfolio of vetted deployment designs and reference implementations that guide users through common problems and help them build faster. Examples include guides for AWS Landing Zone, AWS Instance Scheduler and live streaming on AWS, among others.

More CloudFormation integrations

AWS CloudFormation now supports Amazon FSx, AWS OpsWorks and WebSocket APIs in Amazon API Gateway. Interest in infrastructure as code (IaC) continues to grow with tools like Terraform and CloudFormation, but AWS needs to keep expanding CloudFormation's native integrations to make it a more viable IaC option.
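As a rough illustration of what such an integration looks like, here is a minimal CloudFormation template, built as a Python dictionary, declaring an Amazon FSx file system. The subnet ID is a placeholder, and the properties shown are only a subset of what the resource type accepts; consult the AWS::FSx::FileSystem resource reference for the full list.

```python
import json

# Minimal sketch of a CloudFormation template that declares an Amazon FSx
# for Lustre file system. "subnet-EXAMPLE" is a placeholder, not a real ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyFileSystem": {
            "Type": "AWS::FSx::FileSystem",
            "Properties": {
                "FileSystemType": "LUSTRE",
                "StorageCapacity": 3600,          # capacity in GiB
                "SubnetIds": ["subnet-EXAMPLE"],  # placeholder subnet
            },
        }
    },
}

# Serialize to the JSON form CloudFormation accepts.
print(json.dumps(template, indent=2))
```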

January 31, 2019  8:38 PM

AWS month in review: Cloud SLAs abound

Trevor Jones Profile: Trevor Jones

Amazon this month added a bevy of performance guarantees to its cloud services.

Service-level agreements (SLAs) are standard practice in traditional IT, but cloud SLAs are far from universal. For most enterprises, an IT product that lacks an SLA is a nonstarter, so it makes sense for AWS to provide these contractual assurances to lure more corporate customers to its cloud.

All told, AWS added cloud SLAs to 11 services in January: Elastic File System, Elastic MapReduce (now simply called "EMR"), Kinesis Data Streams, Kinesis Data Firehose, Kinesis Video Streams, Elastic Container Service for Kubernetes, Elastic Container Registry, Secrets Manager, Amazon MQ, Cognito and Step Functions. The cloud SLAs vary by service, but they all include a 99.9% uptime guarantee per month, with service credits if AWS fails to meet that standard.
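For context, a 99.9% monthly uptime guarantee is less forgiving than it sounds. A quick calculation of the downtime budget it allows:

```python
def max_downtime_minutes(uptime_pct, days=30):
    """Maximum downtime per month (in minutes) at a given uptime percentage."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - uptime_pct / 100)

# 99.9% uptime over a 30-day month leaves a budget of about 43 minutes.
print(round(max_downtime_minutes(99.9), 1))
```

Anything beyond that roughly 43-minute monthly budget is what triggers the service credits described in the agreements.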

AWS has offered SLAs for its core infrastructure services for some time, but these latest agreements follow a trend of marked expansion of Amazon’s cloud SLAs for higher-level services the vendor manages on its own internal infrastructure.

It’s hard to gauge the impact of these cloud SLAs on adoption. For example, EMR has been around for a decade without one, while Lambda, which added an SLA in October, is among the most talked about services on the platform. Still, it’s clear that AWS felt the need to put these terms in writing and is confident enough in its backend to do so.

Acquisitions and added services

The cloud SLAs are important, but no contract language generates the same buzz among IT teams as new tools to play with. In that regard, AWS came out of the gate quickly to start 2019.

It added WorkLink, a service to securely connect employee devices to corporate intranets and apps; Backup, a centralized console to manage and automate backups; DocumentDB, a MongoDB-compatible document database; and Media2Cloud, a serverless ingest workflow for video content.

There were also two acquisitions that should bolster AWS’ capabilities for cost analysis, as well as backup, disaster recovery and migration.

Open source and AWS

DocumentDB added fuel to the fire in the debate about licensing on top of open source software. AWS built MongoDB compatibility through an API, which enabled it to forgo the licensing restrictions MongoDB added last year.

AWS has a thorny history of contributing back to open source projects, though company leaders contend the reputation no longer fits. But, as is often the case, these things are never quite so black and white. In fact, just this week AWS became a platinum member of the Apache Software Foundation.

December 21, 2018  3:31 PM

AWS month in review: Cloud networking services abound

Trevor Jones Profile: Trevor Jones

December didn’t deliver the avalanche of services and features that surrounded AWS re:Invent in November, but AWS didn’t exactly close out the year quietly. Amazon put its cloud networking services front and center this month with tools to secure connections for cloud-based workloads, and it also added a larger GPU-powered instance type and an EU region in Stockholm.

The newest AWS cloud networking service, AWS Client VPN, enables a customer’s employees to remotely access their company resources either on AWS or inside on-premises data centers. An employee can access the service from anywhere via OpenVPN-based clients. AWS already had a virtual private network (VPN) service, which it now calls AWS Site-to-Site VPN. However, that product only connects offices and branches to an organization’s Amazon Virtual Private Cloud (VPC) environment.

Organizations can already host OpenVPN on Amazon EC2, so they'll need to determine whether it's cheaper to go that route and incur charges from both vendors, or opt for this bundled, pay-as-you-go cloud networking service. Client VPN is more expensive than OpenVPN on its own, so the decision comes down to how much an organization spends on its instances. AWS charges hourly for the service, per active client connection and per associated subnet.
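The trade-off can be sketched with a simple side-by-side comparison. The hourly rates below are hypothetical placeholders, not actual AWS prices, and the self-hosted figure deliberately ignores the management overhead discussed next:

```python
# Placeholder hourly rates -- NOT actual AWS or OpenVPN prices.
SUBNET_ASSOC_HR = 0.10   # Client VPN subnet association, per hour (assumed)
CONNECTION_HR = 0.05     # Client VPN active connection, per hour (assumed)
EC2_INSTANCE_HR = 0.10   # self-hosted OpenVPN EC2 instance, per hour (assumed)

def client_vpn_monthly(subnets, avg_connections, hours=730):
    """Monthly cost of AWS Client VPN under the assumed rates."""
    return hours * (subnets * SUBNET_ASSOC_HR + avg_connections * CONNECTION_HR)

def self_hosted_monthly(instances, hours=730):
    """Monthly EC2 cost of self-hosted OpenVPN; ignores licensing and ops time."""
    return hours * instances * EC2_INSTANCE_HR

# One subnet and an average of 10 active connections vs. one EC2 instance:
managed = client_vpn_monthly(subnets=1, avg_connections=10)
diy = self_hosted_monthly(instances=1)
```

Under these assumed rates the managed service costs several times the raw EC2 bill, which is why the management savings in the next paragraph are the real deciding factor.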

Another factor to consider is management, as an organization that uses Client VPN won’t have to maintain any EC2 instances. This is the latest example of AWS’ efforts to offer services that handle the infrastructure for the user — and the cloud vendor plans to do more of this in the future, to attract enterprise clients that don’t want to deal with all those operational complexities.

Organizations can now use a WebSocket API with Amazon API Gateway. Prior to this update, users of the service were limited to the HTTP request/response model, but the WebSocket protocol provides bidirectional communication. This opens the door to a wider range of interactions between end users and services, because the service can push data independent of a specific request.

We’ll have a more thorough analysis on this feature in the coming weeks, but AWS suggests developers can use this functionality to build real-time, serverless applications such as chat apps, multi-player games and collaborative platforms.
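A WebSocket-backed serverless app typically routes $connect, $disconnect and custom messages to a Lambda function. A minimal sketch of such a handler follows; the event shape mirrors API Gateway's WebSocket request context, and the persistence step is only noted in a comment, not implemented:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda handler behind an API Gateway WebSocket API."""
    route = event["requestContext"]["routeKey"]
    if route == "$connect":
        # A real app would persist event["requestContext"]["connectionId"]
        # (e.g. in DynamoDB) so the backend can push data to this client later.
        return {"statusCode": 200}
    if route == "$disconnect":
        # Clean up the stored connection ID here.
        return {"statusCode": 200}
    # Custom route: echo the message body back as an acknowledgement.
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 200, "body": json.dumps({"received": body})}
```

Because API Gateway holds the connection open, the backend can later call the connection-management API to push data to clients without waiting for a request, which is what enables the chat and multiplayer use cases AWS describes.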

Also on the networking front, users can now access Amazon Simple Queue Service (Amazon SQS) and AWS CodePipeline directly from their Amazon VPC via VPC endpoints and AWS PrivateLink, which securely connect services and keep data off the public internet. The Amazon SQS update in particular is a "meat and potatoes" item that's more important to some users than flashier services that debuted at re:Invent, according to one prominent AWS engineer.

Lastly, organizations can now share Amazon VPCs with multiple accounts. Large customers use multiple accounts to portion off different business units or teams for security or billing purposes, AWS said. VPC sharing takes responsibility for management and configuration out of the account holder's hands and gives it to the IT team, which can then dole out access to these shared environments as needed.
