AWS Cloud Cover

September 28, 2018  5:43 PM

AWS month in review: More for hybrid cloud architectures

Trevor Jones

September was a low-key month for AWS, even though it rolled out more than 70 updates to its platform.

AWS advancements in September were a lot of the standard fodder: services expanded to additional regions, deeper integration between tools and a handful of security certifications. All of this is potentially welcome to the respective target audiences.

Still, the updates weren’t completely mundane. There were some serious nods to AWS hybrid cloud architectures along with some intriguing moves aimed at developers.

Let’s start with the enterprise-focused tools that get data to AWS’ cloud. AWS Storage Gateway, a service that connects on-premises and cloud-based data, added a hardware appliance that a company can install in its own data center or remote office. The service addresses storage needs for a range of hybrid cloud architectures – backup, archiving, disaster recovery, migrations and links to AWS analytics tools. This appliance opens Storage Gateway to non-virtualized environments, and comes on a pre-loaded Dell EMC PowerEdge server at a cost of $12,250.

AWS has emphasized database migrations in recent years to lure corporate clients to its public cloud, either through lift-and-shift approaches or transitions to its native, managed services. That continued in September, as Database Migration Service added DynamoDB as a destination for Cassandra databases and Server Migration Service upped the size of data volumes it can handle from 4TB to 16TB.

Speaking of databases, Amazon Aurora continues to get a lot of attention. A month after its serverless flavor became generally available, users now can start and stop Aurora database clusters, a feature geared toward test and development. Another Aurora feature, a Parallel Query tool, opens the managed service to some analytical queries. This could limit the need for a data warehouse service, but there are lingering concerns that AWS has spent too much time on interesting new features and not enough time on core functionality.

Developer tools raise eyebrows

Two other AWS updates in September may pique developers’ interest, or leave them scratching their heads.

CloudFormation Macros processes templates much the way the Serverless Application Model (SAM) transform does to define infrastructure as code, but Macros lets users write their own custom transformations, handled by Lambda functions within their own accounts.
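
For readers who haven’t seen one, a macro’s backing Lambda receives the template fragment and must hand back a transformed copy along with a success status. Below is a minimal, hypothetical sketch; the injected “Owner” tag and its value are illustrative, and for brevity it assumes every resource in the fragment accepts a Tags property:

```python
# Minimal sketch of a CloudFormation Macro handler. The Lambda receives the
# template fragment in the event and must return it (transformed) along with
# the request ID and a "success" status. The "Owner" tag injected here is an
# illustrative example, not anything AWS adds by default.

def handler(event, context):
    fragment = event["fragment"]
    # Add a default tag to every resource that lacks one (assumes, for
    # simplicity, that every resource type supports a Tags property).
    for resource in fragment.get("Resources", {}).values():
        props = resource.setdefault("Properties", {})
        tags = props.setdefault("Tags", [])
        if not any(t.get("Key") == "Owner" for t in tags):
            tags.append({"Key": "Owner", "Value": "platform-team"})
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": fragment,
    }
```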

And for Microsoft shops, AWS Lambda now supports PowerShell Core 6.0, which gives .NET developers a way to manage or automate scripts from Lambda functions. We’ll have more on these features in the coming months, but for now, at least one group of users is a bit confused by the Macros feature and thinks it’ll stick with Terraform instead.

Updates to security, partnerships

On the security front, admins can now use YubiKey security keys for multi-factor authentication. Network Load Balancers and AWS PrivateLink support AWS VPN, which means an enterprise has more options to build an AWS hybrid cloud architecture where on-prem workloads can privately access AWS services.

AWS also expanded its partnership with Salesforce, with tighter integration of services for companies that rely on both providers. And yes, you can use Lambda functions to move trigger actions between the two environments. The two cloud giants have worked together for years, including the $400 million deal Salesforce signed in 2016 to use AWS services.

And stop me if you’ve heard this before, but a Wall Street analyst called for Amazon to split its retail and AWS businesses. As always, the hope is to avoid regulation, boost the value for shareholders and insulate them against the potential struggles of one of the business units.

Amazon executives haven’t responded to the critique, but last winter AWS CEO Andy Jassy said there’s no need to spin off his company. He brushed off the “optics of the financial statements” and said there’s real value in having internal customers that aren’t afraid to share their feedback.

August 30, 2018  7:11 PM

AWS balances GA of Aurora Serverless with new instance types

Trevor Jones

The end of summer is typically slow for the IT world, but AWS this month continued to expand its horde of instance types and lay the groundwork for a future where its customers won’t even bother with VMs.

The cloud vendor rolled out more instance options and made Amazon Aurora Serverless generally available. And as the annual VMworld user conference closed out the month, the company advanced the ability to run the VMware stack on AWS, and perhaps more importantly, run AWS on-premises with VMware software.

The T3 instance is the next generation of burstable, general-purpose VMs available in EC2. It’s 30% cheaper than the T2 and supports up to 5 Gbps in network bandwidth. The T series of VMs, first added in 2010, is designed for smaller applications with large, infrequent spikes in demand. Like the previous two generations, the T3 comes in seven sizes, with varying amounts of memory and baseline performance.

The T3 is the latest instance type to rely on AWS’ Nitro system. It is hardware-virtual-machine-only and must be used within a Virtual Private Cloud.

AWS also added two instance sizes to Amazon Lightsail, its virtual private server offering. The 16 GB and 32 GB iterations are the largest Lightsail instances yet, and their additions coincided with a 50% price drop on all other existing Lightsail instance sizes.

There appears to be little cadence to AWS’ instance type expansion, but the cloud giant shows no signs of slowing down. Those additions came just weeks after AWS rolled out the z1d, R5 and R5d instances in late July.

Serverless vs. VMs

At the same time, AWS moved Aurora Serverless out of preview. The highly anticipated version of its fastest growing service, first announced last November, enables users to provision and scale database capacity while AWS manages all the underlying servers.

The GA of Aurora Serverless has limitations, however. It’s only available for MySQL and in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland) and Asia Pacific (Tokyo). AWS says it will continue to add regional availability in the coming year. AWS originally said the PostgreSQL version would be available in the back half of this year, but hasn’t updated that timeframe since Aurora Serverless first went into preview.

EC2 continues to host the vast majority of AWS workloads, but it will be worth watching how long AWS remains on these parallel paths of additional VM variety and VM-free services. Many industry observers expect the latter will eventually overshadow the former. AWS has hedged its bets with a container strategy that was slightly late to the game, but even there serverless gets equal footing.

AWS started with serverless in 2014 with the addition of Lambda functions, and is still largely seen as the predominant player in this space, but to maintain that edge it’ll be without one of its key contributors. Tim Wagner, who oversaw the development of Lambda, was hired by Coinbase, a digital currency exchange, to be its vice president of engineering. Wagner was general manager for AWS Lambda, Amazon API Gateway and AWS Serverless App Repository at the time of his departure.

AWS: coming to a data center near you

And finally, AWS deepened its ties to VMware in more ways than one during VMworld. VMware Cloud on AWS, a service jointly developed by the two vendors but sold by VMware, added tools to simplify migrations from on premises to AWS and to manage workloads post-migration. VMware also cut the price of the service in half, which could attract organizations still on the fence.

What surprised many industry observers was AWS’ continued march beyond its own facilities. AWS will sell and deliver Amazon Relational Database Service on VMware inside users’ own data centers. The on-premises version of the AWS database service will handle management, provisioning, patches and backups. It will also make it easier for VMware customers to migrate their existing databases to Amazon’s public cloud.


August 3, 2018  3:29 PM

AWS cloud features offer network, performance improvements

David Carty

As is often the case, AWS’ yearly Summit in New York provided the scene for some additional features and functionality. While AWS cloud features are unveiled regularly, the free-to-attend conference generates excitement among cloud newcomers and experienced shops alike.

SearchAWS.com attended the Summit, and reported on various new AWS cloud features and trends, including:

  • New capabilities for AWS Snowball Edge — a boon for enterprises with edge computing needs — as well as a boost to S3 performance, new EC2 instances and a Bring Your Own IP feature;
  • Early adoption patterns for Amazon Elastic Container Service for Kubernetes (EKS), as well as attendee reactions to an EKS workshop, and how AWS might improve the service moving forward; and
  • A one-on-one discussion with Matt Wood, AWS’ GM of Deep Learning and AI, regarding new SageMaker features, enterprise AI challenges and the ethics of facial recognition technology.

ALB gets its actions together

The continued push from HTTP to HTTPS also gives AWS customers an easy way to meet their compliance goals.

Application Load Balancer’s (ALB) content-based routing rules now support redirect and fixed-response actions in all AWS regions, which fills two big networking needs for users. Redirect actions enable an ALB to readdress incoming requests from one URL to another, such as from an HTTP to an HTTPS endpoint, which helps organizations improve security and search ranking. Fixed-response actions enable an ALB to answer incoming requests rather than forward the request to the application, for example to send custom error messages when a problem occurs.
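
In the ELBv2 API, a redirect is simply a rule action of type `redirect`; the helper below sketches the HTTP-to-HTTPS action described above, using the ALB’s `#{host}`, `#{path}` and `#{query}` substitution variables to preserve the parts of the original request. The helper function name is ours, not AWS’:

```python
# Build the ELBv2 rule action dict for an HTTP -> HTTPS redirect. The #{...}
# placeholders are ALB substitution variables that carry over the original
# host, path and query string.

def https_redirect_action(permanent=True):
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            # HTTP_301 is a permanent redirect; HTTP_302 is temporary.
            "StatusCode": "HTTP_301" if permanent else "HTTP_302",
        },
    }
```

The resulting dict would be passed in the `Actions` list of an `elbv2 create-rule` (or `modify-listener`) call.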

EFS gets a performance boost

For users that encounter Amazon Elastic File System (EFS) performance issues, some relief has arrived.

Provisioned Throughput for Amazon EFS enables a developer to dynamically adjust throughput in accordance with an application’s performance needs, regardless of the amount of data stored on the file system. While users previously could burst EFS throughput for applications with more modest needs, the Provisioned Throughput feature suits applications with more strenuous needs.

DevSecOps gets more robust

Amazon GuardDuty, one of two AWS security services that rely on machine learning, could see more widespread adoption after it gained an important integration with another service.

GuardDuty now works with AWS CloudFormation StackSets, which enable an enterprise security team to automate threat detection across multiple accounts and regions. CloudFormation automates the provisioning of infrastructure and services, giving enterprises the ability to quickly and efficiently watch for threats.

For the good of the hack

A pair of upcoming AWS hackathons aims to put developer brainpower to work for socially conscious causes.

Developers can enter two hackathons — one focused on serverless applications and another on Alexa skills — that offer cash prizes for imaginative projects focused on social good. The Amazon Alexa Skills Challenge offers various cash and participation prizes for apps that use a voice command interface, while its Serverless Apps for Social Good hackathon seeks AWS Serverless Application Repository projects that combat human trafficking.

Nonoptimal Prime

Chaos ensued shortly after the start of Amazon’s heavily advertised Prime Day retail initiative on July 16, as customers could not access product pages. The Amazon disruption, attributed to a software issue within Amazon’s internal retail system, was severe enough that the company temporarily killed off international traffic.

According to a CNBC report citing internal documents, Amazon manually added server capacity as traffic surged to its retail site, which points to its Auto Scaling feature as the potential culprit that affected its internal Sable system. While the disruption generated negative press for the retail giant, which touted its internal readiness and scalability for Prime Day as recently as last year’s re:Invent conference, it still reported sales of more than 100 million products.

While AWS experienced intermittent errors with its Management Console that afternoon, the company says AWS infrastructure and services were not involved with the Prime Day snafu.


June 29, 2018  6:25 PM

VMware-AWS service bubbles into new markets

David Carty

Some customers were either reluctant or unable to use earlier versions of the VMware-AWS service. Now, with the passage of time and corresponding product maturity, customers with tight compliance requirements might want to review the offering.

VMware Cloud on AWS (VMC) has made some recent advances. First, VMware said the service would soon be available in the AWS GovCloud region, which is typically restricted to public sector customers. The VMware-AWS service has been slow to expand globally – it is only available in four regions. The companies hope that it finds some takers among cash-strapped government agencies, which are typically slow to migrate to the cloud due to cost and regulatory concerns.

Speaking of regulatory concerns, VMC now offers a HIPAA-eligible hybrid cloud environment after passing a third-party evaluation. While HIPAA eligibility still depends on the manner with which an IT team manages cloud data and resources, healthcare providers could nonetheless see the VMware-AWS platform as a boon to their hybrid cloud operations.

Contain your enthusiasm

In other recent AWS news, Amazon released its answer to Google Kubernetes Engine, which came as welcome news to an eager base of container fanatics tired of standing up and managing the necessary infrastructure to support Kubernetes.

After it was introduced at re:Invent in December last year, Amazon Elastic Container Service for Kubernetes (EKS) became generally available in early June. EKS manages Kubernetes clusters for users and provides some potential benefits for AWS customers, such as high availability and support for load balancers and Identity and Access Management. EKS could make it easier to run microservices apps, perform batch processing or even migrate applications — that is, of course, if you’re willing to pay a bit more for the managed service.

Close your Windows

Amazon’s desktop as a service offering once only offered Windows options for operating systems (OSes). Now, as it does with so many of its other services, AWS has dangled a carrot to lure you away from Microsoft.

Amazon WorkSpaces added support for its Amazon Linux 2 OS, which the IaaS provider designed to handle a variety of cloud workloads. Amazon Linux WorkSpaces could help IT allot CPU and memory more efficiently, thus reducing costs. Based on the MATE Desktop Environment, Amazon Linux WorkSpaces purports to offer benefits for developers and ops alike, such as support for tools like Firefox, Evolution, Pidgin and Libre Office, as well as a better development environment and support for kiosk mode. Though, as some users pointed out, WorkSpaces still lacks a Linux client.

A wish fulfilled — finally

AWS Lambda also added support for Amazon Simple Queue Service (SQS) triggers — a longtime request from serverless developers. In the process, one of AWS’ older services, SQS, now works with one of its newer technologies, Lambda.

SQS messages can now sync with Lambda to trigger functions across distributed cloud systems. This integration eases processes like monitoring and error retries, which were previously complicated by the workarounds many developers introduced to trigger functions from messages. But developers should set Lambda concurrency controls to avoid hitting account limits, and the integration does not yet support FIFO queues.
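
For a sense of what the integration looks like in practice: an SQS-triggered function receives a batch of records in a single event, with each message body delivered as a raw string. This is a minimal sketch; the `process` function and its `order_id` field are stand-ins for real business logic:

```python
import json

def process(payload):
    # Stand-in for real work; replace with your own business logic.
    return payload.get("order_id")

def handler(event, context):
    # SQS delivers a batch of messages per invocation; each record's body
    # is the raw message string. A raised exception returns the whole batch
    # to the queue for retry, so keep processing idempotent.
    results = []
    for record in event["Records"]:
        body = json.loads(record["body"])
        results.append(process(body))
    return results
```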

The first ever cloud associates degree

As part of its Public Sector Summit in Washington D.C., AWS revealed a partnership with Northern Virginia Community College, which will offer a cloud computing specialization as part of its Information Systems Technology associates degree. Teresa Carlson, AWS’ vice president of the worldwide public sector, said it was the first ever cloud associate’s degree. As part of the course work, students receive access to the AWS Educate program.


May 31, 2018  5:09 PM

Bare-metal instances augment AWS’ IaaS options

David Carty

Infrastructure was AWS’ focus in May, as the cloud provider made good on several of its promises with features that provide more diverse compute options — including some that directly challenge two of its biggest foes.

Customers in five regions can now use EC2 bare-metal instances, which enable access to the memory and processor that runs those instances. Released into preview at re:Invent last year, these EC2 bare-metal instances compete against similar offerings and services from Oracle and IBM. While only available for the I3 storage-optimized instance family, these bare-metal instances can fit a variety of use cases, such as workloads restricted by licenses or lack of support for virtualized instances, and they provide a higher degree of hardware control than previously available.

In addition to several new AWS IaaS features geared toward EC2 instance management, AWS also this month added NVMe storage for its C5 instance family. These instances boost I/O to local storage to help developers take advantage of all available compute capacity. While only available for the C5 family right now, AWS said it plans to introduce NVMe storage to more instances in the coming months.

AWS presses play on Lambda IoT service

AWS’ latest IoT service attempts to make its platform literally push-button simple.

As its name suggests, AWS IoT 1-Click simplifies Lambda triggers to manage simple devices and perform actions such as sending alerts or flagging items for inspection.

AWS IoT 1-Click currently supports only two push-button triggers: AWS IoT Enterprise Button (formerly the AWS IoT Button) and the AT&T LTE-M Button, which connect over Wi-Fi and AT&T’s cellular network, respectively. These devices come with their own certificates to protect communication to and from the cloud, and they encrypt outbound data via TLS. In the future, however, AWS plans to support various types of push-button devices, asset trackers, card readers and sensors.

An alien database option

If men are from Mars and women are from Venus, perhaps graph databases can be from Neptune.

Released into general availability in late May across four regions, Amazon Neptune enables developers to build and maintain high-performance graph databases that can scale to store billions of relationships between connected datasets. AWS positions Neptune as the ideal database option for modern applications, which increasingly require large amounts of unstructured data storage and high performance with low latency across the globe. Neptune supports the Property Graph and W3C RDF graph models and their respective query languages, Apache TinkerPop Gremlin and SPARQL.

Throughout its preview, Amazon said customers used Neptune to build interactive applications that include social networks, fraud detection systems and recommendation engines.

Cracking down on domain abuse

Global AWS customers that want to evade censorship in certain countries were dealt a blow earlier this month, as the cloud provider followed in Google’s footsteps and switched off domain fronting for its CloudFront service. This process enables apps to conceal their network traffic through a cloud CDN, which changes the domain name after it establishes a connection — though it is also a popular means for hackers and attackers to obfuscate their malware’s origin.

The cloud providers’ decisions come after the Russian government in April attempted to block instant messenger app Telegram, which had moved to AWS infrastructure. In doing so, Russia also blocked millions of Amazon and Google IP addresses, including many legitimate web services and companies.

Amazon’s decision to protect against domain fronting follows in line with its terms of service, and AWS said it already polices such violations. At the same time, this crackdown aims to roll more protections directly into the CloudFront service and API.


April 30, 2018  2:01 PM

AWS Lambda features rev Node.js support

David Carty

Breathe easier, serverless application developers — your lengthy wait is over, with AWS Lambda’s added support for a newer Node.js runtime.

After a year’s wait, AWS developers can now use Node.js version 8.10 to enable a number of AWS Lambda features that were on their wish lists. The async/await pattern makes it much easier to implement asynchronous calls without muddying up the code with callbacks or promises, which can make it difficult to read. The support update also simplifies error handling, which further reduces unnecessary code, and it offers faster runtime and render time speeds.

In the past, AWS has been slow to add Lambda support for other languages as well, including Python, though Amazon’s lengthy code review process, which ensures no potentially damaging code exists in releases, is a big reason for that delay.

Summit summary

Meanwhile, AWS unveiled a pair of new security tools at its yearly San Francisco Summit in early April.

The AWS Secrets Manager service enables an administrator to abstract the manual process to store, manage and retrieve encryption keys, database credentials and other secrets. The service saves the time and cost of standing up infrastructure specifically to manage secrets, a process complicated by increasingly distributed applications. Secrets Manager also enables you to rotate credentials with a Lambda function.

With AWS Firewall Manager, admins can define and apply AWS WAF (Web Application Firewall) security rules across various cloud applications and accounts. The service centralizes security management, which enables grouped control and enhances visibility of attacks on Application Load Balancers and CloudFront workloads, to help enterprises adhere to compliance requirements.

Living on the edge

Two AWS offerings became generally available in April to help enterprises more quickly process IoT data, in different ways.

AWS IoT Analytics enables users to process raw data directly from IoT devices and sensors. For some enterprises, however, the cost of data transfers is prohibitive, so it’s appealing to preprocess data before it reaches the cloud. With AWS Greengrass ML Inference, an enterprise can deploy cloud-trained machine learning models on connected devices to run inference against locally collected data. Combined, these two offerings enable real-time data processing at the edge and more detailed analytics when chosen data reaches the cloud.

Meet SAM

One other service update doesn’t open up new AWS Lambda features for serverless developers, but it does open up the code base and removes a barrier to automation.

AWS open-sourced its Serverless Application Model (SAM) implementation, with which developers define resources spun up by CloudFormation stacks. Previously, developers submitted feature requests to AWS, which would then change the implementation itself. With an open-source SAM implementation, developers can more quickly specify new features and enhancements, and then build serverless apps.
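
For context, a SAM template is ordinary CloudFormation plus a transform that the now open-source implementation expands into full Lambda, IAM and event-source resources. A minimal sketch, with illustrative handler, runtime and path values:

```yaml
# Minimal SAM template: the Transform line invokes the (now open source)
# SAM implementation, which expands Serverless::Function into the underlying
# CloudFormation resources at deploy time.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # illustrative module.function name
      Runtime: python3.6
      CodeUri: ./src            # illustrative path to function code
```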


April 17, 2018  2:54 PM

Three laws of IoT connectivity govern AWS’ strategy

David Carty

The growth and proliferation of connected sensors has drawn AWS headlong into the IoT market. And a recent keynote speech provided clues as to how the cloud provider shapes its tools to manage those workloads.

Many industries, such as agriculture, health care and energy, have embraced the IoT, but businesses must also face IoT connectivity and other technology limitations, particularly as they pertain to cloud, said Dirk Didascalou, AWS vice president of IoT, at the MIT Enterprise Forum’s recent Connected Things conference. In a question-and-answer session, Didascalou laid out three principles that govern his team’s present and future strategy and underscore the need for edge computing to address IoT connectivity concerns.

“We call them laws because we believe they will still be valid also with the advance of technology,” Didascalou said.

Here are those three laws:

  • The law of physics. Physical limitations of data transfers to the cloud can be prohibitive as autonomous devices increasingly need real-time responses to triggers. This means some IoT devices need some degree of local compute to get around data transfer speed limitations, particularly where safety is concerned and each millisecond delay can cost lives, as with self-driving cars. “The speed of light is only [so] fast,” Didascalou said.
  • The law of economics. Exponential data growth creates performance bottlenecks and cost overruns. It’s simply not feasible for enterprises to transmit all IoT data to the cloud in an economical fashion, especially when transmission and storage costs are factored in.
  • The law of the land. Legal and geographical restrictions can hamper data collection and transfers. For example, GDPR regulations in Europe and HIPAA guidelines in the United States mean enterprises must adapt IoT deployments to fit compliance needs. Additionally, some parts of the world don’t have the infrastructure to support regular IoT connectivity to the open internet, which limits cloud availability.

Over the last six months, AWS has reinforced its IoT strategy with services for simpler Lambda invocation, device management, security policies, IoT analytics and microcontrollers. Didascalou’s three laws could hint at enhanced AWS edge compute capabilities to negate the limitations of unreliable or unfeasible IoT connectivity to the cloud.

“As long as you believe that these three [laws] will coexist, we need to figure out with our customers, ‘How can you take the benefit of the cloud but do local compute?'” Didascalou said. “[These laws] won’t go away; they will be there forever. So we just try to find a technical solution to that instead of pretending it’s not going to happen.”


April 12, 2018  7:20 PM

AWS Lambda serverless platform holds center stage for devs

Jan Stafford
AWS, DevOps, Serverless computing

Enterprises are adopting AWS Lambda faster than competing serverless platforms from Google, Microsoft, IBM and others, citing its ease of use in replacing manual processes with automated functions and the broad reach of Amazon cloud services.

AWS Lambda — an event-driven automation platform — owns 70% of the active serverless platform user base, according to a survey by the Cloud Native Computing Foundation. By comparison, the nearest competitors’ shares were much lower, with Google Cloud Functions at 13% and Microsoft Azure Functions and Apache/IBM OpenWhisk at 12%.

Amazon released the AWS Lambda serverless platform in 2014, while the above-mentioned competing products came out in 2016. In the interim, AWS made hay while the sun shone. “AWS really got people to pay attention to Lambda and — unusually for enterprises — start using it quickly,” said Daryl Plummer, managing VP and chief of research at Gartner in Atlanta, Ga. Enterprises’ prototype phases for Lambda shortened to a few weeks from what is usually a months-long process, said AWS technology evangelist Jeff Barr.

Attendees at an AWS Summit in San Francisco last week cited their reasons to embrace AWS Lambda. Matthew Stanwick, systems analyst at Sony Network Entertainment International in San Diego, Calif., said he finds it easier to script and deploy simple tests and terminate cloud instances. “I can build tests right there on the console with no problem,” he said.

AWS Lambda’s family ties

Amazon doesn’t have any particular advantage in serverless over competitors Google, Microsoft or IBM, but it has better promoted Lambda’s ease of use and an overall services portfolio that supports serverless, said Plummer. For example, Lambda hides some of the more complex mechanisms such as Amazon EC2, upscaling and downscaling and VM management, and it can be used as a front end to facilities like S3 or CloudFront caching for content delivery. “In short, anything that AWS does can be made easier and front-ended by Lambda,” he said.

AWS also quickly connected Lambda to many different event sources, Barr said. “People started to think of it as this nervous system they could connect up to the incoming flow of data into S3, to message queues and to notifications that are wired into different parts of the AWS infrastructure,” he said. At its release, AWS Lambda was made a part of the platform structure behind the Alexa Voice Service, and that gave developers a practical place to try out serverless. “Developers can deliver functions and be responsive without having to rebuild the platform itself,” Plummer said. Alexa skill code can be released as an AWS Lambda function that, typically, enables voice-activated activities. Software actions or natural world events also can generate events. For example, a request for the time can trigger a time function embedded in the interaction model for Alexa.

Lambda’s serverless support of the AWS family of services makes it less risky than investments to build a serverless architecture from scratch, said Clay Smith, developer advocate at New Relic. People can run experiments with it, such as DevOps automation tests, and if these small ventures don’t succeed, they’ve only paid for usage time, he said.

What’s ahead for serverless

Right now, serverless platforms are still more of a sideshow, Plummer said. But soon vendors will deliver more and more critical functions to make the technology more robust, and usage will spread. Eventually, everyone will have a serverless platform beneath their newest applications, to provide flexibility for the people-centric workloads being built today, he said.

“Imagine a world where you are not searching through app stores anymore, but you are looking for a function,” Plummer said. It doesn’t matter who built the function, only that the function is reliable. “Functions delivered from developers all over the planet will truly realize the service-oriented architecture vision.”


March 30, 2018  7:31 PM

Extended duration for IAM roles appeases some, alarms others

David Carty

Cloud security best practices and IT convenience don’t always align, but as standards such as GDPR take hold and new vulnerabilities constantly emerge, maybe it’s OK to loosen the reins from time to time.

AWS has increased the maximum session time for Identity and Access Management (IAM) roles, extending the cap from one hour to 12 hours. Federated users can request credentials from the AWS Security Token Service via the AWS SDK or Command Line Interface.

AWS recommends the lowest possible threshold for IAM roles, but IT teams complained they were kicked out during long-running workloads. This move should appease those folks, even if extended-time credential validation is a cloud security no-no. Teams with tight security restrictions might want to steer clear, or at least stay below the time limit for IAM roles.
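
The new cap surfaces in the STS AssumeRole call as the `DurationSeconds` parameter, which must fall between 900 seconds (15 minutes) and the role's configured maximum session duration, now as high as 43,200 seconds (12 hours). A small validation sketch (the function name is ours, not part of any AWS SDK):

```python
# STS AssumeRole accepts DurationSeconds between 900 (15 minutes) and the
# role's configured maximum session duration, which can now be set as high
# as 43200 seconds (12 hours, up from the old 3600-second ceiling).

MIN_SESSION_SECONDS = 900
MAX_SESSION_SECONDS = 12 * 3600  # 43200

def validate_session_duration(seconds, role_max=MAX_SESSION_SECONDS):
    """Raise ValueError if a requested session duration is out of range."""
    upper = min(role_max, MAX_SESSION_SECONDS)
    if not MIN_SESSION_SECONDS <= seconds <= upper:
        raise ValueError(
            f"DurationSeconds must be {MIN_SESSION_SECONDS}-{upper}, got {seconds}"
        )
    return seconds
```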

Some AWS admins expressed confusion over whether the IAM roles’ duration applied to CloudFormation, but AWS’ blog post explicitly mentions that use case. In a reply to a reader comment, AWS stated that a “CloudFormation template will respect the session duration set for your IAM role.”

In addition to the extended IAM role duration, AWS rolled out several other new features this month, including those related to DynamoDB, its own documentation and containers, that might pique the interest of dev and operations teams.

DynamoDB gives backup a boost

Amazon continued to enhance its DynamoDB NoSQL database service with the addition of two backup features: continuous backups, and point-in-time recovery (PITR), which was previously in preview. Once PITR is enabled via the AWS Management Console or an API call, an application can make erroneous writes and deletes to its digital heart’s content. Admins can restore the DynamoDB table to any point up to a maximum of 35 days back, or contact AWS to restore deleted tables.
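For illustration, here is a minimal sketch of that workflow as boto3 parameter builders; the table names and the restore timestamp are made up for the example.

```python
from datetime import datetime, timezone

def enable_pitr_params(table_name: str) -> dict:
    """Arguments for dynamodb.update_continuous_backups()."""
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
    }

def restore_params(source_table: str, target_table: str,
                   restore_to: datetime) -> dict:
    """Arguments for dynamodb.restore_table_to_point_in_time()."""
    return {
        "SourceTableName": source_table,
        "TargetTableName": target_table,  # restores land in a new table
        "RestoreDateTime": restore_to,
    }

# With boto3 (not imported here):
# ddb = boto3.client("dynamodb")
# ddb.update_continuous_backups(**enable_pitr_params("orders"))
# ddb.restore_table_to_point_in_time(**restore_params(
#     "orders", "orders-restored",
#     datetime(2018, 3, 1, 12, 0, tzinfo=timezone.utc)))
```

Note that a point-in-time restore always writes to a new table rather than overwriting the source, so an admin swaps the application over after verifying the restored data.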

These features, along with multi-master Global Tables, eliminate several DynamoDB enterprise workarounds. AWS also released DynamoDB Accelerator last summer to boost database performance at scale.

AWS opens its books — sort of

Another AWS update in March could be a boon for some AWS developers: the ability to access and submit pull requests on AWS documentation through GitHub. AWS open sourced more than 100 user guides to GitHub, which should help its documentation team clarify concepts, improve code samples and fix bugs.

This will surely improve AWS documentation, but developers also want more transparency, said Mike Tria, head of infrastructure at Atlassian, in a discussion with SearchAWS Senior News Writer Trevor Jones.

“The more they open that stuff up, the more my developers can know how [AWS] is building and build appropriately to that,” he said. “It enables developers to make assumptions about how it works, as opposed to thinking it’s just AWS magic.”

Containerize your excitement

Lastly, an additional service discovery feature for Amazon Elastic Container Service (ECS) simplifies DNS housekeeping for services within a VPC. This feature removes the need for AWS admins to run their own service discovery system or connect containerized services to a load balancer. ECS now maintains a registry that uses the Route 53 Auto Naming API, then maps aliases to service endpoints.

The service discovery feature also enables health checks — via either Route 53 or ECS, but not both — to ensure that container endpoints remain healthy. If a container-level check reveals an unhealthy endpoint, it will be removed from the DNS routing list.
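To make the wiring concrete, this sketch shows the shape of an `ecs.create_service()` call with a service registry attached. It assumes a private DNS namespace and discovery service have already been created through the Route 53 Auto Naming (`servicediscovery`) API; all names here are hypothetical.

```python
def ecs_service_with_discovery(cluster: str, service: str,
                               task_def: str, registry_arn: str) -> dict:
    """Arguments for ecs.create_service() with service discovery attached."""
    return {
        "cluster": cluster,
        "serviceName": service,
        "taskDefinition": task_def,
        "desiredCount": 2,
        # registry_arn comes from servicediscovery.create_service(); once
        # attached, ECS keeps the Route 53 DNS records for each task in sync.
        "serviceRegistries": [{"registryArn": registry_arn}],
    }

# With boto3 (not imported here):
# ecs = boto3.client("ecs")
# ecs.create_service(**ecs_service_with_discovery(
#     "prod-cluster", "payments", "payments-task:3",
#     "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example"))
```

Other services in the VPC can then reach the containers at a stable name such as `payments.internal.example`, with no load balancer or hand-rolled registry in between.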


March 1, 2018  8:18 PM

Serverless apps, encryption top AWS features in February

David Carty Profile: David Carty

Many cloud developers espouse the benefits of serverless computing, but others find the approach unwieldy or difficult to manage. The AWS Serverless Application Repository, one of several AWS features released into general availability in February, can help wary developers join the Lambda fraternity — or sorority.

The Serverless Application Repository service enables developers to publish serverless applications to share privately with a team or organization, and publicly with other developers. Likewise, developers can deploy serverless code samples, components or entire apps that cover a variety of uses. Each application in the repository breaks down the AWS resources it will consume — don’t confuse “serverless” with “free.”

A developer can search the repository for an application that fits his or her use case via the AWS Lambda console, AWS Command Line Interface and AWS SDKs. He or she can then tweak the configuration as desired before deployment.
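Under the hood, deploying a repository application creates a CloudFormation change set. A rough sketch of the parameters, with a placeholder application ID, stack name and parameter values:

```python
def sar_change_set_params(application_id: str, stack_name: str,
                          overrides: dict) -> dict:
    """Arguments for serverlessrepo.create_cloud_formation_change_set()."""
    return {
        # The application ID is the app's ARN from the repository listing.
        "ApplicationId": application_id,
        "StackName": stack_name,
        # Per-app parameters the developer tweaks before deployment.
        "ParameterOverrides": [
            {"Name": k, "Value": v} for k, v in sorted(overrides.items())
        ],
    }

# With boto3 (not imported here):
# repo = boto3.client("serverlessrepo")
# repo.create_cloud_formation_change_set(**sar_change_set_params(
#     "arn:aws:serverlessrepo:us-east-1:123456789012:applications/example-app",
#     "example-app-stack", {"LogLevel": "INFO"}))
```

Executing the resulting change set through CloudFormation is what actually provisions the Lambda functions and related resources.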

A variety of software providers, including Datadog, Splunk and New Relic, contribute to the Serverless Application Repository to broaden its reach into areas such as internet of things and machine learning processes. The Serverless Application Repository is currently available in 14 global regions.

Red Hat opens the door to AWS features, hybrid cloud

AWS’ embrace of hybrid cloud technology opens up new avenues for other software companies. Among them is Red Hat, which last month released Red Hat Satellite 6.3 with deeper integration with Ansible and AWS. Returning the favor, Amazon EC2 now supports Satellite and Satellite Capsule Server, enabling users to manage their Red Hat infrastructure via EC2 instances.

Don’t rest on your security responsibilities

If last year’s rash of S3 bucket leaks didn’t scare the daylights out of you, perhaps nothing will. Those with a healthy fear of exposure can now use DynamoDB for server-side encryption via AWS Key Management Service. A user can continue to query data unabated, while the AWS-managed encryption keys protect security-intensive apps.

Unlike other AWS features and services, DynamoDB’s server-side encryption lacks the option to use customer master keys — for now — and only works natively for new tables. Encryption at rest adds to another recent DynamoDB feature, VPC Endpoints, which isolates databases from internet exposure and unauthorized access.
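Because encryption only works for new tables, it has to be requested at creation time. A minimal sketch of an encrypted table definition, with an illustrative table name and key schema:

```python
def encrypted_table_params(table_name: str) -> dict:
    """Arguments for dynamodb.create_table() with server-side encryption."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5,
                                  "WriteCapacityUnits": 5},
        # Encryption must be chosen here; the key is AWS-managed, since
        # customer master keys are not yet supported for DynamoDB SSE.
        "SSESpecification": {"Enabled": True},
    }

# With boto3 (not imported here):
# ddb = boto3.client("dynamodb")
# ddb.create_table(**encrypted_table_params("secure-orders"))
```

Reads and writes against the table are unchanged; encryption and decryption happen transparently on the service side.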

Alexa, use your indoor voice

The next time you speak softly to your computer, it might just whisper back.

The Amazon Polly text-to-speech service added a phonation tag that enables developers to produce softer speech. It is one of several new AWS features that enhance the voice output options available via Speech Synthesis Markup Language (SSML), alongside volume enhancement and timbre adjustment.
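A quick sketch of the soft-phonation effect in SSML; the sample text and voice are illustrative.

```python
def soft_ssml(text: str) -> str:
    """Wrap text in Amazon Polly's soft-phonation SSML effect."""
    return ('<speak><amazon:effect phonation="soft">'
            f'{text}</amazon:effect></speak>')

# With boto3 (not imported here), the markup is passed as SSML input:
# polly = boto3.client("polly")
# audio = polly.synthesize_speech(
#     Text=soft_ssml("This is almost a whisper."),
#     TextType="ssml", OutputFormat="mp3", VoiceId="Joanna")
```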

Amazon Connect also added support for SSML to control certain aspects of speech for customer contact center calls. And Amazon Lex added support for customized responses directly from the AWS Management Console, which simplifies the process of building chatbots.

