AWS Cloud Cover


June 29, 2018  6:25 PM

VMware-AWS service bubbles into new markets

David Carty

With the VMware-AWS partnership, some customers were either reluctant or unable to use earlier versions of the service. Now, with the passage of time and corresponding product maturity, customers with tight compliance requirements might want to review the offering.

VMware Cloud on AWS (VMC) has made some recent advances. First, VMware said the service would soon be available in the AWS GovCloud region, which is typically restricted to public sector customers. The VMware-AWS service has been slow to expand globally – it is only available in four regions. The companies hope that it finds some takers among cash-strapped government agencies, which are typically slow to migrate to the cloud due to cost and regulatory concerns.

Speaking of regulatory concerns, VMC now offers a HIPAA-eligible hybrid cloud environment after passing a third-party evaluation. While HIPAA eligibility still depends on the manner with which an IT team manages cloud data and resources, healthcare providers could nonetheless see the VMware-AWS platform as a boon to their hybrid cloud operations.

Contain your enthusiasm

In other recent AWS news, Amazon released its answer to Google Kubernetes Engine, which came as welcome news to an eager base of container fanatics tired of standing up and managing the necessary infrastructure to support Kubernetes.

After it was introduced at re:Invent in December last year, Amazon Elastic Container Service for Kubernetes (EKS) became generally available in early June. EKS manages Kubernetes clusters for users and provides some potential benefits for AWS customers, such as high availability and support for load balancers and Identity and Access Management. EKS could make it easier to run microservices apps, perform batch processing or even migrate applications — that is, of course, if you’re willing to pay a bit more for the managed service.
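
For the curious, creating the managed control plane is a single API call. Here's a minimal boto3 sketch, with the cluster name, IAM role ARN, subnets and security group all placeholders for resources you would already have; worker nodes still attach separately:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# EKS provisions and operates the Kubernetes control plane; workers join afterward.
response = eks.create_cluster(
    name="demo-cluster",                                        # hypothetical cluster name
    roleArn="arn:aws:iam::123456789012:role/eksServiceRole",    # placeholder service role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],    # placeholder subnets
        "securityGroupIds": ["sg-cccc3333"],                    # placeholder security group
    },
)

# The cluster reports CREATING until the control plane is ready to use.
print(response["cluster"]["status"])
```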

Close your Windows

Amazon’s desktop-as-a-service offering was once limited to Windows operating systems (OSes). Now, as it does with so many of its other services, AWS has dangled a carrot to lure you away from Microsoft.

Amazon WorkSpaces added support for its Amazon Linux 2 OS, which the IaaS provider designed to handle a variety of cloud workloads. Amazon Linux WorkSpaces could help IT allot CPU and memory more efficiently, thus reducing costs. Based on the MATE Desktop Environment, Amazon Linux WorkSpaces purports to offer benefits for developers and ops alike, such as support for tools like Firefox, Evolution, Pidgin and LibreOffice, as well as a better development environment and support for kiosk mode. Though, as some users pointed out, WorkSpaces still lacks a Linux client.

A wish fulfilled — finally

AWS Lambda also added support for Amazon Simple Queue Service (SQS) triggers — a longtime request from serverless developers. In the process, one of AWS’ older services, SQS, now works with one of its newer technologies, Lambda.

SQS messages can now sync with Lambda to trigger functions across distributed cloud systems. This integration eases processes like monitoring and error retries, which were previously complicated by the workarounds many developers introduced to trigger functions from messages. But developers should set Lambda concurrency controls to avoid hitting account limits, and the integration does not yet support FIFO queues.
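
The wiring itself is a single event source mapping. Here's a minimal boto3 sketch, assuming an existing queue and function (both names are placeholders), with a reserved concurrency cap set as suggested above:

```python
import boto3

lambda_client = boto3.client("lambda")

# Map a standard SQS queue to a Lambda function; FIFO queues are not yet supported.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",  # placeholder queue
    FunctionName="process-order",                                      # placeholder function
    BatchSize=10,                                                      # up to 10 messages per invocation
)

# Cap concurrency so a burst of messages can't exhaust the account-level limit.
lambda_client.put_function_concurrency(
    FunctionName="process-order",
    ReservedConcurrentExecutions=50,
)
```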

The first ever cloud associate's degree

As part of its Public Sector Summit in Washington, D.C., AWS revealed a partnership with Northern Virginia Community College, which will offer a cloud computing specialization as part of its Information Systems Technology associate's degree. Teresa Carlson, AWS' vice president of the worldwide public sector, said it was the first ever cloud associate's degree. As part of the course work, students receive access to the AWS Educate program.

May 31, 2018  5:09 PM

Bare-metal instances augment AWS’ IaaS options

David Carty

Infrastructure was AWS’ focus in May, as the cloud provider made good on several of its promises with features that provide more diverse compute options — including some that directly challenge two of its biggest foes.

Customers in five regions can now use EC2 bare-metal instances, which give workloads direct access to the memory and processors of the underlying host. Released into preview at re:Invent last year, these EC2 bare-metal instances compete against similar offerings and services from Oracle and IBM. While only available for the I3 storage-optimized instance family, these bare-metal instances can fit a variety of use cases, such as workloads restricted by licenses or lack of support for virtualized instances, and they provide a higher degree of hardware control than previously available.

In addition to several new AWS IaaS features geared toward EC2 instance management, AWS also this month added NVMe storage for its C5 instance family. These instances boost I/O to local storage to help developers take advantage of all available compute capacity. While only available for the C5 family right now, AWS said it plans to introduce NVMe storage to more instances in the coming months.

AWS presses play on Lambda IoT service

AWS’ latest IoT service attempts to make its platform literally push-button simple.

As its name suggests, AWS IoT 1-Click simplifies Lambda triggers for simple devices, which can perform actions such as sending alerts or flagging items for inspection.

AWS IoT 1-Click currently supports only two push-button triggers: AWS IoT Enterprise Button (formerly the AWS IoT Button) and the AT&T LTE-M Button, which connect over Wi-Fi and AT&T’s cellular network, respectively. These devices come with their own certificates to protect communication to and from the cloud, and they encrypt outbound data via TLS. In the future, however, AWS plans to support various types of push-button devices, asset trackers, card readers and sensors.

An alien database option

If men are from Mars and women are from Venus, perhaps graph databases can be from Neptune.

Released into general availability in late May across four regions, Amazon Neptune enables developers to build and maintain high-performance graph databases that can scale to store billions of relationships between connected datasets. AWS posits Neptune as the ideal database option for modern applications, which increasingly require large amounts of unstructured data storage and high performance with low latency across the globe. Neptune supports the Property Graph and W3C RDF graph models and their query languages Apache TinkerPop Gremlin and RDF/SPARQL.
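
For a feel of the Gremlin side, here is a small sketch using the open source gremlinpython client against a hypothetical Neptune endpoint; SPARQL queries go to a separate HTTPS endpoint on the same cluster:

```python
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder cluster endpoint; Neptune accepts Gremlin traffic on port 8182.
conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin", "g"
)
g = Graph().traversal().withRemote(conn)

# Create two vertices and a relationship, then count the edges in the graph.
alice = g.addV("person").property("name", "alice").next()
bob = g.addV("person").property("name", "bob").next()
g.V(alice).addE("follows").to(__.V(bob)).next()
print(g.E().count().next())

conn.close()
```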

Throughout its preview, Amazon said customers used Neptune to build interactive applications that include social networks, fraud detection systems and recommendation engines.

Cracking down on domain abuse

Global AWS customers that want to evade censorship in certain countries were dealt a blow earlier this month, as the cloud provider followed in Google’s footsteps and switched off domain fronting for its CloudFront service. This process enables apps to conceal their network traffic through a cloud CDN, which changes the domain name after it establishes a connection — though it is also a popular means for hackers and attackers to obfuscate their malware’s origin.

The cloud providers’ decisions come after the Russian government in April attempted to block instant messenger app Telegram, which had moved to AWS infrastructure. In doing so, Russia also blocked millions of Amazon and Google IP addresses, including many legitimate web services and companies.

Amazon’s decision to protect against domain fronting falls in line with its terms of service, and AWS said it already polices such violations. At the same time, this crackdown aims to roll more protections directly into the CloudFront service and API.


April 30, 2018  2:01 PM

AWS Lambda features rev Node.js support

David Carty

Breathe easier, serverless application developers — your lengthy wait is over, with AWS Lambda’s added support for a newer Node.js runtime.

After a year’s wait, AWS developers can now use Node.js version 8.10 to enable a number of AWS Lambda features that were on their wish lists. The async/await pattern makes it much easier to implement asynchronous calls without muddying up the code with callbacks or promise chains, which can make it difficult to read. The support update also simplifies error handling, which further reduces unnecessary code, and it offers faster runtime and render speeds.

In the past, AWS has been slow to add Lambda support for other languages as well, including Python, though Amazon’s lengthy code review process, which ensures no potentially damaging code exists in releases, is a big reason for that delay.

Summit summary

Meanwhile, a pair of new security tools debuted at AWS’ yearly San Francisco Summit in early April.

The AWS Secrets Manager service enables an administrator to abstract the manual process to store, manage and retrieve encryption keys, database credentials and other secrets. The service saves the time and cost of standing up infrastructure specifically to manage secrets, a process complicated by increasingly distributed applications. Secrets Manager also enables you to rotate credentials with a Lambda function.
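
In practice, that works as a store, retrieve and rotate cycle. Here's a minimal boto3 sketch, with the secret name, credentials and rotation function ARN all placeholders:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Store database credentials under a named secret (all values here are placeholders).
secrets.create_secret(
    Name="prod/db-credentials",
    SecretString=json.dumps({"username": "app_user", "password": "example-only"}),
)

# Applications fetch the secret at runtime instead of baking credentials into config.
secret = json.loads(
    secrets.get_secret_value(SecretId="prod/db-credentials")["SecretString"]
)

# Rotate on a schedule with a Lambda function that knows how to update the database.
secrets.rotate_secret(
    SecretId="prod/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-creds",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```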

With the AWS Firewall Manager, admins can define and apply Amazon Web Application Firewall security rules across various cloud applications and accounts. The service centralizes security management, which enables grouped control and enhances visibility of attacks on Application Load Balancers and CloudFront workloads, to help enterprises adhere to compliance requirements.

Living on the edge

Two AWS offerings became generally available in April to help enterprises more quickly process IoT data, in different ways.

AWS IoT Analytics enables users to process raw data directly from IoT devices and sensors. For some enterprises, however, the cost of data transfers is prohibitive, so it’s appealing to preprocess data before it reaches the cloud. With AWS Greengrass ML Inference, an enterprise can deploy cloud-trained machine learning models on connected devices to run inference on locally collected data. Combined, these two offerings enable real-time data processing at the edge and more detailed analytics when chosen data reaches the cloud.

Meet SAM

One other service update doesn’t open up new AWS Lambda features for serverless developers, but it does open up the code base and removes a barrier to automation.

AWS open-sourced its Serverless Application Model (SAM) implementation, with which developers define resources spun up by CloudFormation stacks. Previously, developers submitted feature requests to AWS, which would then change the implementation. With an open-source SAM implementation, developers can more quickly specify new features and enhancements, and then build serverless apps.


April 17, 2018  2:54 PM

Three laws of IoT connectivity govern AWS’ strategy

David Carty

The growth and proliferation of connected sensors has drawn AWS headlong into the IoT market. And a recent keynote speech provided clues as to how the cloud provider shapes its tools to manage those workloads.

Many industries, such as agriculture, health care and energy, have embraced the IoT, but businesses must also face IoT connectivity and other technology limitations, particularly as they pertain to the cloud, said Dirk Didascalou, AWS vice president of IoT, at the MIT Enterprise Forum’s recent Connected Things conference. In a question and answer session, Didascalou revealed three principles that govern his team’s present and future strategy and underscore the need for edge computing to address IoT connectivity concerns.

“We call them laws because we believe they will still be valid also with the advance of technology,” Didascalou said.

Here are those three laws:

  • The law of physics. Physical limitations of data transfers to the cloud can be prohibitive as autonomous devices increasingly need real-time responses to triggers. This means some IoT devices need some degree of local compute to get around data transfer speed limitations, particularly where safety is concerned and each millisecond delay can cost lives, as with self-driving cars. “The speed of light is only [so] fast,” Didascalou said.
  • The law of economics. Exponential data growth creates performance bottlenecks and cost overruns. It’s simply not feasible for enterprises to transmit all IoT data to the cloud in an economical fashion, especially when transmission and storage costs are factored in.
  • The law of the land. Legal and geographical restrictions can hamper data collection and transfers. For example, GDPR regulations in Europe and HIPAA guidelines in the United States mean enterprises must adapt IoT deployments to fit compliance needs. Additionally, some parts of the world don’t have the infrastructure to support regular IoT connectivity to the open internet, which limits cloud availability.

Over the last six months, AWS has reinforced its IoT strategy with services for simpler Lambda invocation, device management, security policies, IoT analytics and microcontrollers. Didascalou’s three laws could hint at enhanced AWS edge compute capabilities to negate the limitations of unreliable or unfeasible IoT connectivity to the cloud.

“As long as you believe that these three [laws] will coexist, we need to figure out with our customers, ‘How can you take the benefit of the cloud but do local compute?'” Didascalou said. “[These laws] won’t go away; they will be there forever. So we just try to find a technical solution to that instead of pretending it’s not going to happen.”


April 12, 2018  7:20 PM

AWS Lambda serverless platform holds center stage for devs

Jan Stafford

Enterprises are adopting AWS Lambda faster than competing serverless platforms from Google, Microsoft, IBM and others, citing its ease of use in replacing manual processes with automated functions and the broad reach of Amazon cloud services.

AWS Lambda — an event-driven automation platform — owns 70% of the active serverless platform user base, according to a survey by the Cloud Native Computing Foundation. By comparison, the nearest competitors’ shares were much lower, with Google Cloud Functions at 13% and Microsoft Azure Functions and Apache/IBM OpenWhisk each at 12%.

Amazon released the AWS Lambda serverless platform in 2014, while the above-mentioned competing products came out in 2016. In the interim, AWS made hay while the sun shone. “AWS really got people to pay attention to Lambda and — unusually for enterprises — start using it quickly,” said Daryl Plummer, managing VP and chief of research at Gartner in Atlanta, Ga. Enterprises’ prototype phases for Lambda shortened to a few weeks from what is usually a months-long process, said AWS technology evangelist Jeff Barr.

Attendees at an AWS Summit in San Francisco last week cited their reasons to embrace AWS Lambda. Matthew Stanwick, systems analyst at Sony Network Entertainment International in San Diego, Calif., said he finds it easier to script and deploy simple tests and terminate cloud instances. “I can build tests right there on the console with no problem,” he said.

AWS Lambda’s family ties

Amazon doesn’t have any particular advantage in serverless over competitors Google, Microsoft or IBM, but it has better promoted Lambda’s ease of use and an overall services portfolio that supports serverless, said Plummer. For example, Lambda hides some of the more complex mechanisms, such as Amazon EC2, upscaling and downscaling and VM management, and it can be used as a front end to facilities like S3 or CloudFront caching for content delivery. “In short, anything that AWS does can be made easier and front-ended by Lambda,” he said.

AWS also quickly connected Lambda to many different event sources, Barr said. “People started to think of it as this nervous system they could connect up to the incoming flow of data into S3, to message queues and to notifications that are wired into different parts of the AWS infrastructure,” he said. At its release, AWS Lambda was made a part of the platform structure behind the Alexa Voice Service, and that gave developers a practical place to try out serverless. “Developers can deliver functions and be responsive without having to rebuild the platform itself,” Plummer said. Alexa skill code can be released as an AWS Lambda function that, typically, enables voice-activated activities. Software actions or natural world events also can generate events. For example, a request for the time can trigger a time function embedded in the interaction model for Alexa.

Lambda’s serverless support of the AWS family of services makes it less risky than investments to build a serverless architecture, said Clay Smith, developer advocate at New Relic. People can run experiments with it, such as DevOps automation tests, and if these small ventures don’t succeed, they’ve only paid for usage time, he said.

What’s ahead for serverless

Right now, serverless platforms are still more of a sideshow, Plummer said. But soon vendors will deliver more and more critical functions to make the technology more robust, and usage will spread. Eventually, everyone will have a serverless platform beneath their newest applications, to provide flexibility for the people-centric workloads built today, he said.

“Imagine a world where you are not searching through app stores anymore, but you are looking for a function,” Plummer said. It doesn’t matter who built the function, only that the function is reliable. “Functions delivered from developers all over the planet will truly realize the service-oriented architecture vision.”


March 30, 2018  7:31 PM

Extended duration for IAM roles appeases some, alarms others

David Carty

Cloud security best practices and IT convenience don't always align, but as standards such as GDPR take hold and new vulnerabilities constantly emerge, maybe it's OK to loosen the reins from time to time.

AWS has increased the maximum session time for Identity and Access Management (IAM) roles, extending the cap from one hour to 12 hours. Federated users can request credentials from the AWS Security Token Service via the AWS SDK or Command Line Interface.

AWS recommends the lowest possible threshold for IAM roles, but IT teams complained they were kicked out during long-running workloads. This move should appease those folks, even if extended-time credential validation is a cloud security no-no. Teams with tight security restrictions might want to steer clear, or at least stay below the time limit for IAM roles.

Some AWS admins expressed confusion over whether the IAM roles’ duration applied to CloudFormation, but AWS’ blog post explicitly mentions that use case. In a reply to a reader comment, AWS stated that a “CloudFormation template will respect the session duration set for your IAM role.”
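
For teams that want the longer window, the change is two calls: raise the ceiling on the role, then request credentials up to it. A minimal boto3 sketch, with the role name and ARN as placeholders:

```python
import boto3

iam = boto3.client("iam")
sts = boto3.client("sts")

# Raise the role's ceiling to the new 12-hour maximum (43,200 seconds).
iam.update_role(RoleName="DataPipelineRole", MaxSessionDuration=43200)  # placeholder role

# Callers can then request credentials up to that ceiling for long-running work.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/DataPipelineRole",
    RoleSessionName="nightly-batch",
    DurationSeconds=43200,
)["Credentials"]
```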

In addition to the extended IAM role duration, AWS rolled out several other new features this month, including those related to DynamoDB, its own documentation and containers, that might pique the interest of dev and operations teams.

DynamoDB gives backup a boost

Amazon continued to enhance its DynamoDB NoSQL database service with the addition of two backup features: continuous backups and point-in-time recovery (PITR), which was previously in preview. Once PITR is enabled via the AWS Management Console or an API call, an application can make erroneous writes and deletes to its digital heart's content: admins can restore the DynamoDB table to any point within the previous 35 days, or contact AWS to restore deleted tables.
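
Here's a short boto3 sketch of both halves, enabling PITR and restoring to an earlier point, with the table names as placeholders:

```python
from datetime import datetime, timedelta
import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery on an existing table.
dynamodb.update_continuous_backups(
    TableName="orders",  # placeholder table
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a new table as it looked three hours ago (any point in the last 35 days).
dynamodb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-restored",
    RestoreDateTime=datetime.utcnow() - timedelta(hours=3),
)
```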

These features, along with DynamoDB Global Tables and Multi-Master, eliminate several DynamoDB enterprise workarounds. AWS also released DynamoDB Accelerator last summer to boost database performance at scale.

AWS opens its books — sort of

Another AWS update in March could be a boon for some AWS developers: the ability to access and submit pull requests on AWS documentation through GitHub. AWS open sourced more than 100 user guides to GitHub, which should help its documentation team clarify concepts, improve code samples and fix bugs.

This will surely improve AWS documentation, but developers also want more transparency, said Mike Tria, head of infrastructure at Atlassian, in a discussion with SearchAWS Senior News Writer Trevor Jones.

“The more they open that stuff up, the more my developers can know how [AWS] is building and build appropriately to that,” he said. “It enables developers to make assumptions about how it works, as opposed to thinking it’s just AWS magic.”

Containerize your excitement

Lastly, an additional service discovery feature for Amazon Elastic Container Service (ECS) simplifies DNS housekeeping for services within a VPC. This feature removes the need for AWS admins to run their own service discovery system or connect containerized services to a load balancer. ECS now maintains a registry that uses the Route 53 Auto Naming API, then maps aliases to service endpoints.

The service discovery feature also enables health checks — via either Route 53 or ECS, but not both — to ensure that container endpoints remain healthy. If a container-level check reveals an unhealthy endpoint, it will be removed from the DNS routing list.
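
Roughly, the setup is three calls: create a private DNS namespace, register a discovery service in it, then point the ECS service at that registry. Here's a hedged boto3 sketch with placeholder names and IDs; in a real run, you would wait for the namespace operation to finish and use its actual ID:

```python
import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

# 1. A private DNS namespace scoped to the VPC (creation is asynchronous).
sd.create_private_dns_namespace(Name="internal.example", Vpc="vpc-0abc1234")

# 2. A discovery service that maintains A records and a custom health check.
registry = sd.create_service(
    Name="orders",
    DnsConfig={"NamespaceId": "ns-placeholder", "DnsRecords": [{"Type": "A", "TTL": 60}]},
    HealthCheckCustomConfig={"FailureThreshold": 1},
)

# 3. The ECS service registers its tasks with that discovery service.
ecs.create_service(
    cluster="prod",
    serviceName="orders",
    taskDefinition="orders:3",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"],           # placeholder subnet
            "securityGroups": ["sg-cccc3333"],        # placeholder security group
        }
    },
    serviceRegistries=[{"registryArn": registry["Service"]["Arn"]}],
)
```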


March 1, 2018  8:18 PM

Serverless apps, encryption top AWS features in February

David Carty

Many cloud developers espouse the benefits of serverless computing, but others find the approach unwieldy or difficult to manage. The AWS Serverless Application Repository, one of several AWS features released into general availability in February, can help wary developers join the Lambda fraternity — or sorority.

The Serverless Application Repository service enables developers to publish serverless application frameworks to share privately with a team or organization, and publicly with other developers. Likewise, developers can deploy serverless code samples, components or entire apps that cover a variety of uses. Each application in the repository breaks down the AWS resources it will consume — don’t confuse “serverless” with “free.”

A developer can search the repository for an application that fits his or her use case via the AWS Lambda console, AWS Command Line Interface and AWS SDKs. He or she can then tweak the configuration as desired before deployment.
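
Under the hood, deployment goes through CloudFormation: the repository generates a change set for the chosen application, which you then execute. A rough boto3 sketch, with the application ARN, stack name and parameter purely illustrative:

```python
import boto3

repo = boto3.client("serverlessrepo")
cfn = boto3.client("cloudformation")

# Browse what's published (public apps plus anything shared with your account).
for app in repo.list_applications()["Applications"]:
    print(app["Name"], app["ApplicationId"])

# Deploy one of them by generating and executing a CloudFormation change set.
change_set = repo.create_cloud_formation_change_set(
    ApplicationId="arn:aws:serverlessrepo:us-east-1:123456789012:applications/hello-world",  # placeholder
    StackName="hello-world-demo",
    ParameterOverrides=[{"Name": "IdentityNameParameter", "Value": "demo"}],  # app-specific parameter
)
cfn.execute_change_set(ChangeSetName=change_set["ChangeSetId"])
```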

A variety of software providers, including Datadog, Splunk and New Relic, contribute to the Serverless Application Repository to broaden its reach into areas such as internet of things and machine learning processes. The Serverless Application Repository is currently available in 14 global regions.

Red Hat opens the door to AWS features, hybrid cloud

AWS’ embrace of hybrid cloud technology opens up new avenues for other software companies. Among them is Red Hat, which last month released Red Hat Satellite 6.3 with deeper integration with Ansible and AWS. Returning the favor, Amazon EC2 now supports Satellite and Satellite Capsule Server, enabling users to manage their Red Hat infrastructure via EC2 instances.

Don’t rest on your security responsibilities

If last year’s rash of S3 bucket leaks didn’t scare the daylights out of you, perhaps nothing will. Those with a healthy fear of exposure can now use DynamoDB for server-side encryption via AWS Key Management Service. A user can continue to query data unabated, while the AWS-managed encryption keys protect security-intensive apps.

Unlike other AWS features and services, DynamoDB’s server-side encryption lacks the option to use customer master keys — for now — and only works natively for new tables. Encryption at rest adds to another recent DynamoDB feature, VPC Endpoints, which isolates databases from internet exposure and unauthorized access.
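
Encryption at rest is a flag set when the table is created. A minimal boto3 sketch, with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Server-side encryption with the AWS-managed key must be enabled at table creation.
dynamodb.create_table(
    TableName="patient-records",  # placeholder table
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    SSESpecification={"Enabled": True},
)
```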

Alexa, use your indoor voice

The next time you speak softly to your computer, it might just whisper back.

The Amazon Polly text-to-speech service adds a phonation tag that enables developers to produce softer speech, one of several new AWS features that enhance voice output options available via Speech Synthesis Markup Language (SSML), including volume enhancement and timbre adjustment.
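
The softer voice is just another SSML tag passed to the synthesis call. A minimal boto3 sketch:

```python
import boto3

polly = boto3.client("polly")

# The amazon:effect phonation tag renders the wrapped text as softer, breathier speech.
ssml = (
    '<speak><amazon:effect phonation="soft">'
    "Your package will arrive tomorrow."
    "</amazon:effect></speak>"
)

response = polly.synthesize_speech(
    Text=ssml, TextType="ssml", VoiceId="Joanna", OutputFormat="mp3"
)
with open("soft.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```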

Amazon Connect also added support for SSML to control certain aspects of speech for customer contact center calls. And Amazon Lex added support for customized responses directly from the AWS Management Console, which simplifies the process of building chatbots.


February 28, 2018  7:00 PM

Dropbox is likely an outlier with its successful cloud data migration off AWS

Trevor Jones

Dropbox’s move off AWS was a windfall for the company, but most traditional corporations shouldn’t bank on similar success.

The file hosting company saved nearly $75 million in infrastructure costs over the past two years following a cloud data migration off a “third-party data center service provider,” according to an S-1 form filed with the U.S. Securities and Exchange Commission. That third party isn’t named, but it’s likely AWS, given Dropbox’s past statements.

Those savings may reinvigorate debate in some circles about whether to house infrastructure on-premises or on the cloud, but Dropbox is likely more of an outlier than a harbinger of what others can expect.

Plenty of case studies highlight the cost benefits to move to the public cloud, but Dropbox’s feat is rare. First, the company was an early AWS success story that made waves in 2016 when it disclosed it had moved 90% of its users’ data to its own custom-built infrastructure. Second, Dropbox’s IPO filing precipitated a financial report with unique insights into those cost differentials, and many customers may balk at such deep disclosures.

Dropbox was never 100% on AWS, nor has it completely abandoned AWS. It originally split its architecture to host metadata in private data centers and to host file content on Simple Storage Service (S3), but Dropbox built systems to better fit its needs, and so far that has translated to big savings following its cloud data migration. That transition didn’t come cheap, however.  The company spent more than $53 million for custom architectures in three colocation facilities to accommodate exabytes of storage, according to the S-1 filing.

Dropbox said it stores the remaining 10% of user data on AWS, in part to localize data in the U.S. and Europe, and it uses Amazon’s public cloud to “help deliver our services.” (Dropbox declined to comment for this report.)

Dropbox can serve as an example for SaaS or online services providers that don’t want to outsource a key pillar of a business value proposition to AWS, said Melanie Posey, an analyst with 451 Research. But that model may be feasible only for digital service providers with established businesses and patterns of demand, she said.

After years of debate about on-premises versus the cloud, IT leaders have become less dogmatic, and more pragmatic. Corporations increasingly trust public cloud providers to host their workloads, but most established corporations won’t relinquish the entirety of their private infrastructures any time soon. That’s why the major cloud providers, which make money hand over fist as customer data flood their hyperscale data centers, have either partnered with a private data center stalwart or built their own scaled down versions of their cloud to sit inside customers’ own facilities.

Still, the debate persists in part because of the lack of clarity on bottom line costs, and straight per-server comparisons. The shift from CapEx to OpEx has caused consternation among companies that expected big savings on the cloud, particularly those that failed to account for per-second billing and other ancillary costs. Moreover, the public cloud removes parts or all of the manual work of in-house infrastructure maintenance.

Public cloud advocates argue the true benefit is not in cost, but rather in agility and access to higher-level services. It's also unclear how true the axiom is that the public cloud becomes cost-prohibitive once workloads reach a certain scale. The best example to rebut that argument is Netflix, which operates more than 150,000 instances on AWS to serve more than 100 million customers. AWS is also known to give preferential pricing to some of its largest customers and has greatly expanded its professional services.

Perhaps the biggest takeaway from Dropbox’s migration windfall is what industry observers have said for years: focus your IT dollars on what makes your business different. Dropbox’s strategy to build one of the largest data stores in the world depended on owning custom architecture, and that bet appears to have paid off big time. But if infrastructure is just a means to an end, maybe don’t drop $50 million on physical assets that are of little consequence to your business outcomes.


January 30, 2018  4:08 PM

EC2 features aim at network, performance improvements

David Carty

AWS users have a green light to rev the full bandwidth potential of a particular instance.

AWS removed its 5 Gbps limit and improved performance for network connections to and from Elastic Compute Cloud (EC2) instances. One of several new EC2 features, this speeds up connections between EC2 instances and Simple Storage Service (S3) resources, as well as connections between instances in different availability zones. Network bandwidth is now up to five times faster for instances with enhanced networking capabilities.

AWS also unveiled other EC2 features in January. Developers can now pause and resume C5 and M5 Spot Instances and Fleets that rely on Elastic Block Store (EBS). When AWS interrupts a workload due to a Spot Instance price increase, those instances can now hibernate, with memory state saved to the EBS root device, and retain their instance IDs, rather than the default behavior of terminating.
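
Hibernation is requested at launch time. Here's a hedged boto3 sketch that asks for it on a Spot-backed C5 instance; the AMI ID is a placeholder, and the image must be EBS-backed and support hibernation:

```python
import boto3

ec2 = boto3.client("ec2")

# Ask AWS to hibernate, rather than terminate, this Spot instance on interruption.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder EBS-backed AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"InstanceInterruptionBehavior": "hibernate"},
    },
)
```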

These new instance capabilities add to the slew of new EC2 features and types unveiled during AWS re:Invent, including bare metal, M5, H1 and T2 Unlimited instances.

2018 predictions

AWS’ lead in the public cloud market might be in danger, as Microsoft and Google capture more customer workloads. How will AWS respond to the threats of its competitors, and what’s in store for 2018? Our SearchAWS contributors weigh in with their predictions for this year.

New features and support

  • Glue expands functionality. AWS Glue added conditional event triggers for failed and stopped jobs. Previously, the service could only trigger new extract, transform and load (ETL) jobs when another job succeeded. Administrators can now provide a list of events to track for succeeded, failed and stopped state changes and trigger ETL jobs accordingly. Glue also added support for the Scala programming language, so developers can run Scala scripts via development endpoints and jobs.
  • New serverless coding options. AWS Lambda now supports C# and Go programming languages for .NET Core 2.0. With C#, a developer can use the AWS Toolkit for Visual Studio to access templates for Lambda functions, applications and tools, or they can code manually. Developers can upload Go artifacts through the AWS Command Line Interface or AWS Management Console, and Lambda will work natively with Go tools.
  • Boost database DR. Businesses can now deploy Amazon Relational Database Service (RDS) read replicas in multiple availability zones to enhance availability. AWS expanded this disaster recovery functionality for MySQL and MariaDB databases by enabling the RDS service to automatically fail over to a standby database instance if infrastructure fails.
  • Audit SageMaker logs. AWS CloudTrail now logs API calls made with the AWS SageMaker machine learning service. CloudTrail delivers those API calls to an Amazon Simple Storage Service bucket for administrative assessments.
  • Ruby beta for tracing service. AWS X-Ray added an open source SDK for Ruby, for developers to generate and send trace data from distributed web applications. X-Ray also includes support for Java, Go, Node.js, Python and .NET programming languages.
  • Two minutes on the clock. An AWS user can now set up a push notification to trigger when an EC2 Spot Instance has two minutes left before AWS reclaims it. The two-minute warning is available in Amazon CloudWatch Events, through which an admin can set up a rule to route the event to other services, such as Simple Notification Service; see the sketch after this list.
  • Import, replicate PostgreSQL instances. A pair of capabilities gives PostgreSQL engineers new options during migrations from RDS to Aurora. First, an engineer can continuously replicate a live workload from an RDS PostgreSQL instance to Aurora PostgreSQL during the migration. Also, engineers can now import encrypted snapshots to protect that data during the migration.
  • Get to learning. AWS updated its slate of Deep Learning Amazon Machine Images (AMIs). Developers can configure AMIs with a shared environment for source code and deep learning frameworks, using TensorFlow 1.5.0, which supports the drivers and GPUs behind AWS’ P3 instances. They also can build AMIs based on the open source Conda package and environment management system, including TensorBoard and TensorFlow Serving, to manage the open source machine intelligence library. The latter also added support for the latest versions of Caffe, Keras, Microsoft Cognitive Toolkit and Theano.
  • Those are the rules. AWS Config added support for seven predefined rules that verify whether or not your AWS resources are configured in accordance with best practices. These managed rules apply to CodeBuild, Identity and Access Management, S3 and AWS load balancers.
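
As promised in the Spot Instance item above, here's a small boto3 sketch that catches the two-minute warning with a CloudWatch Events rule and routes it to an SNS topic (the topic ARN is a placeholder):

```python
import boto3

events = boto3.client("events")

# Match the interruption warning that CloudWatch Events emits two minutes before reclaim.
events.put_rule(
    Name="spot-interruption-warning",
    EventPattern='{"source": ["aws.ec2"], "detail-type": ["EC2 Spot Instance Interruption Warning"]}',
)

# Fan the event out to an SNS topic so on-call staff or automation can react.
events.put_targets(
    Rule="spot-interruption-warning",
    Targets=[{"Id": "notify-ops", "Arn": "arn:aws:sns:us-east-1:123456789012:spot-warnings"}],  # placeholder topic
)
```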

David Carty is the site editor for SearchAWS. Contact him at dcarty@techtarget.com.


December 29, 2017  6:03 PM

Amazon API Gateway boosts compression, tagging

David Carty

AWS customers were enticed by products and services introduced at the cloud provider’s annual customer and partner confab, re:Invent, held recently. AWS also kept up a steady pace of basic service updates to round out 2017, which included some API management capabilities.

Amazon API Gateway now offers content encoding support, which lets the service compress content before it responds to an API request. This feature can cut costs and improve performance, as it reduces the amount of data sent from the service to clients. Developers can define the minimum response size that triggers compression and enable encoding on the API itself.
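
Enabling compression amounts to setting a minimum response size on the API. A short boto3 sketch, with the REST API ID as a placeholder:

```python
import boto3

apigateway = boto3.client("apigateway")

# Compress responses larger than 1 KB; clients opt in with an Accept-Encoding header.
apigateway.update_rest_api(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    patchOperations=[
        {"op": "replace", "path": "/minimumCompressionSize", "value": "1024"}
    ],
)
```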

The service also lets developers use application logic in custom Lambda authorizer functions to support API keys. This makes it simpler to control usage assigned to API requests, and the feature also allows teams to track request properties to API keys, such as HTTP request headers.

Additionally, Amazon API Gateway lets teams tag API stages for better organization of resources. Teams can filter API stage allocation tags through AWS Budgets to potentially reduce costs. The API Gateway feature also helps categorize APIs.

Catch up on re:Invent

AWS released several products and features at its annual re:Invent conference that were not called out in this blog. Catch up on what you missed with oodles of re:Invent news and analysis from our team of writers.

New features and support

  • Restart logic in ECS. The Amazon Elastic Container Service (ECS) scheduler lets a developer program logic to control retry attempts for failing tasks. This feature reduces the potential cost and performance impacts of continuous attempts to run tasks that fail. The scheduler can increase the time between restart attempts, stop the deployment and add a message to notify developers.
  • Speed up Redshift queries. AWS’ data warehouse, Amazon Redshift, added late materialization with row-level filters to improve performance by reducing the amount of data it scans. Predicate filters limit scans to only the table items that satisfy the criteria, which boosts query performance. AWS enables this feature by default.
  • Customize edge error responses. Lambda@Edge now lets developers respond with Lambda functions when CloudFront receives an error from your origin. Developers can access and define responses for 4XX and 5XX error status codes, and they can add headers, redirects and dynamically issue responses to end users based on their requests.
  • Send real-time SQL data to Lambda. Developers can configure Amazon Kinesis Data Analytics to output real-time data to AWS Lambda. From there, they can code functions that respond to that SQL data, such as send an alert or update a database.
  • Cross-account S3 bucket access from QuickSight. Data analysts can now use a QuickSight account tied to a specific AWS account to access data stored in Simple Storage Service (S3) buckets that belong to another AWS account. This cross-account S3 access enables more seamless data analysis for large businesses with multiple departments.
  • More instance support for PostgreSQL databases. Amazon Relational Database Service (RDS) for PostgreSQL added support for R4, db.t2.xlarge, db.t2.2xlarge, and db.m4.16xlarge instances for enhanced performance.
  • Increase ES scale, decrease cost. Amazon Elasticsearch Service (ES) added support for I3 instances, which improve upon the previous generation of I/O-intensive instances. With I3 instances, developers can use up to 1.5 PB of storage in an ES cluster, 15 TB of data in each node, 3.3 million IOPS and 16 GB/s of sequential disk throughput – all for less than half the cost of I2 instances.
  • A NICE combination. After acquiring NICE in 2016, AWS combined with the Italian software company to release Desktop Cloud Visualization (DCV) 2017, a streaming and remote access service. DCV 2017 improves on-premises capabilities, and the service is now available on EC2 instances, including those with Elastic GPUs. AWS customers only pay for the underlying compute resources.
  • CloudFront enhances encryption. AWS’ content delivery network, Amazon CloudFront, introduced field-level encryption to protect sensitive data with HTTPS. This feature can be helpful for financial or personally identifiable information, ensuring that only specific components or services in a stack can decrypt and view that data.
  • Use containers in CD pipelines. Amazon CodePipeline added integration with container-based deployments to Amazon Elastic Container Service and AWS Fargate. Developers push code changes through a continuous delivery pipeline, which calls the desired service to create a container image, test and then update containers in production.
  • Process MySQL queries faster. Amazon Aurora sped up query processing with support for hash joins and batched scans. These features are available for Amazon Aurora MySQL version 1.16.
  • CloudWatch adds new visuals, encryption support. Amazon CloudWatch added two new chart visuals: zoom, for magnification of a shorter time period, and pan, for browsing a specific time interval. Administrators can find these visualization options in the CloudWatch Metrics console and dashboards. CloudWatch Logs also added support for integration with AWS Key Management Service (KMS), which enables an admin to encrypt logs with keys managed through KMS, if they choose; see the sketch after this list.
  • KMS integrates with ES. Developers can now encrypt data at rest in Amazon ES with keys managed through KMS. This feature lets data scientists use ES while encrypting all data on the underlying file systems without application modification.
  • Set alerts for free tier usage. AWS Budgets include the capability to track service usage and send an email alert to administrators if it forecasts usage to exceed a free tier limit.
  • Define an IoT backup plan. Developers can now define a backup action in Amazon IoT Rules Engine if a primary action fails. In addition to keeping an application running, this feature preserves error message data, which can include unavailability of services and insufficient resource provisioning.
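
And as noted in the CloudWatch item above, the CloudWatch Logs encryption integration comes down to a single association call. A minimal boto3 sketch, with the log group and key ARN as placeholders:

```python
import boto3

logs = boto3.client("logs")

# Encrypt all data in a log group with a customer master key managed through KMS.
logs.associate_kms_key(
    logGroupName="/aws/lambda/orders-handler",  # placeholder log group
    kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",  # placeholder key ARN
)
```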


