Many cloud developers espouse the benefits of serverless computing, but others find the approach unwieldy or difficult to manage. The AWS Serverless Application Repository, one of several AWS features released into general availability in February, can help wary developers join the Lambda fraternity — or sorority.
The Serverless Application Repository service enables developers to publish serverless application frameworks to share privately with a team or organization, and publicly with other developers. Likewise, developers can deploy serverless code samples, components or entire apps that cover a variety of uses. Each application in the repository breaks down the AWS resources it will consume — don’t confuse “serverless” with “free.”
A developer can search the repository for an application that fits their use case via the AWS Lambda console, AWS Command Line Interface or AWS SDKs, and then tweak the configuration as desired before deployment.
A variety of software providers, including Datadog, Splunk and New Relic, contribute to the Serverless Application Repository to broaden its reach into areas such as internet of things and machine learning processes. The Serverless Application Repository is currently available in 14 global regions.
Red Hat opens the door to AWS features, hybrid cloud
AWS’ embrace of hybrid cloud technology opens up new avenues for other software companies. Among them is Red Hat, which last month released Red Hat Satellite 6.3 with deeper integration with Ansible and AWS. Returning the favor, Amazon EC2 now supports Satellite and Satellite Capsule Server, enabling users to manage their Red Hat infrastructure via EC2 instances.
Don’t rest on your security responsibilities
If last year’s rash of S3 bucket leaks didn’t scare the daylights out of you, perhaps nothing will. Those with a healthy fear of exposure can now use DynamoDB for server-side encryption via AWS Key Management Service. A user can continue to query data unabated, while the AWS-managed encryption keys protect security-intensive apps.
Unlike other AWS features and services, DynamoDB’s server-side encryption lacks the option to use customer master keys — for now — and only works natively for new tables. Encryption at rest adds to another recent DynamoDB feature, VPC Endpoints, which isolates databases from internet exposure and unauthorized access.
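Enabling encryption at rest is a table-creation option. The sketch below shows the shape of the request parameters, assuming a hypothetical table and key name; the resulting dict would be passed to boto3's DynamoDB client, e.g. `client.create_table(**params)`.

```python
# Build hedged example parameters for an encrypted DynamoDB table.
# Table name, key schema and throughput values here are hypothetical.
def encrypted_table_params(table_name):
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        # Server-side encryption with the AWS-managed key. Customer master
        # keys are not yet an option, and encryption only applies to new
        # tables, so it must be requested at creation time.
        "SSESpecification": {"Enabled": True},
    }
```

Queries and scans against the table work unchanged; DynamoDB decrypts transparently.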
Alexa, use your indoor voice
The next time you speak softly to your computer, it might just whisper back.
The Amazon Polly text-to-speech service added a phonation tag that enables developers to produce softer speech. It's one of several new AWS features that enhance the voice output options available via Speech Synthesis Markup Language (SSML), which also include volume enhancement and timbre adjustment.
Amazon Connect also added support for SSML to control certain aspects of speech for customer contact center calls. And Amazon Lex added support for customized responses directly from the AWS Management Console, which simplifies the process of building chatbots.
Dropbox’s move off AWS was a windfall for the company, but most traditional corporations shouldn’t bank on similar success.
The file hosting company saved nearly $75 million in infrastructure costs over the past two years following a cloud data migration off a “third-party data center service provider,” according to an S-1 form filed with the U.S. Securities and Exchange Commission. That third party isn’t named, but it’s likely AWS, given Dropbox’s past statements.
Those savings may reinvigorate debate in some circles about whether to house infrastructure on-premises or on the cloud, but Dropbox is likely more of an outlier than a harbinger of what others can expect.
Plenty of case studies highlight the cost benefits to move to the public cloud, but Dropbox’s feat is rare. First, the company was an early AWS success story that made waves in 2016 when it disclosed it had moved 90% of its users’ data to its own custom-built infrastructure. Second, Dropbox’s IPO filing precipitated a financial report with unique insights into those cost differentials, and many customers may balk at such deep disclosures.
Dropbox was never 100% on AWS, nor has it completely abandoned AWS. It originally split its architecture to host metadata in private data centers and to host file content on Simple Storage Service (S3), but Dropbox built systems to better fit its needs, and so far that has translated to big savings following its cloud data migration. That transition didn’t come cheap, however. The company spent more than $53 million for custom architectures in three colocation facilities to accommodate exabytes of storage, according to the S-1 filing.
Dropbox said it stores the remaining 10% of user data on AWS, in part to localize data in the U.S. and Europe, and it uses Amazon’s public cloud to “help deliver our services.” (Dropbox declined to comment for this report.)
Dropbox can serve as an example for SaaS or online services providers that don’t want to outsource a key pillar of a business value proposition to AWS, said Melanie Posey, an analyst with 451 Research. But that model may be feasible only for digital service providers with established businesses and patterns of demand, she said.
After years of debate about on-premises versus the cloud, IT leaders have become less dogmatic, and more pragmatic. Corporations increasingly trust public cloud providers to host their workloads, but most established corporations won’t relinquish the entirety of their private infrastructures any time soon. That’s why the major cloud providers, which make money hand over fist as customer data flood their hyperscale data centers, have either partnered with a private data center stalwart or built their own scaled down versions of their cloud to sit inside customers’ own facilities.
Still, the debate persists in part because of the lack of clarity on bottom-line costs and straight per-server comparisons. The shift from CapEx to OpEx has caused consternation among companies that expected big savings on the cloud, particularly those that failed to account for per-second billing and other ancillary costs. Moreover, the public cloud removes part or all of the manual work of in-house infrastructure maintenance.
Public cloud advocates argue the true benefit is not in cost, but rather in agility and access to higher-level services. It's also unclear how true the axiom is that the public cloud becomes cost-prohibitive once workloads reach a certain scale. The best example to rebut that argument is Netflix, which operates more than 150,000 instances on AWS to serve more than 100 million customers. AWS is also known to give preferential pricing to some of its largest customers and has greatly expanded its professional services.
Perhaps the biggest takeaway from Dropbox’s migration windfall is what industry observers have said for years: focus your IT dollars on what makes your business different. Dropbox’s strategy to build one of the largest data stores in the world depended on owning custom architecture, and that bet appears to have paid off big time. But if infrastructure is just a means to an end, maybe don’t drop $50 million on physical assets that are of little consequence to your business outcomes.
AWS users have a green light to rev the full bandwidth potential of a particular instance.
AWS removed its 5 Gbps limit and improved performance for network connections to and from Elastic Compute Cloud (EC2) instances. One of several new EC2 features, this speeds up connections between EC2 instances and Simple Storage Service (S3) resources, as well as connections between instances in different availability zones. Network bandwidth has increased up to fivefold for instances with enhanced networking capabilities.
AWS also unveiled other EC2 features in January. Developers can now pause and resume C5 and M5 Spot Instances and Spot Fleets that rely on Elastic Block Store (EBS). When AWS interrupts a workload due to a Spot Instance price increase, those instances can now hibernate and retain their instance IDs, instead of the default behavior, in which instances terminate and restore from the EBS root device.
These new instance capabilities add to the slew of new EC2 features and types unveiled during AWS re:Invent, including bare metal, M5, H1 and T2 Unlimited instances.
AWS’ lead in the public cloud market might be in danger, as Microsoft and Google capture more customer workloads. How will AWS respond to the threats of its competitors, and what’s in store for 2018? Our SearchAWS contributors weigh in with their predictions for this year.
New features and support
- Glue expands functionality. AWS Glue added conditional event triggers for failed and stopped jobs. Previously, the service could only trigger new extract, transform and load (ETL) jobs when another job succeeded. Administrators can now provide a list of events to track for succeeded, failed and stopped state changes, and trigger ETL jobs accordingly. Glue also added support for the Scala programming language, so developers can run Scala scripts via development endpoints and jobs.
- New serverless coding options. AWS Lambda now supports the Go programming language, as well as C# on .NET Core 2.0. With C#, a developer can use the AWS Toolkit for Visual Studio to access templates for Lambda functions, applications and tools, or they can code manually. Developers can upload Go artifacts through the AWS Command Line Interface or AWS Management Console, and Lambda works natively with Go tools.
- Boost database DR. Businesses can now deploy Amazon Relational Database Service (RDS) read replicas in multiple availability zones to enhance availability. AWS expanded this disaster recovery functionality for MySQL and MariaDB databases by enabling the RDS service to automatically fail over to a standby database instance if infrastructure fails.
- Audit SageMaker logs. AWS CloudTrail now logs API calls made with the AWS SageMaker machine learning service. CloudTrail delivers those API calls to an Amazon Simple Storage Service bucket for administrative assessments.
- Ruby beta for tracing service. AWS X-Ray added an open source SDK for Ruby, for developers to generate and send trace data from distributed web applications. X-Ray also includes support for Java, Go, Node.js, Python and .NET programming languages.
- Two minutes on the clock. An AWS user can now set up a push notification to trigger when an EC2 Spot Instance has two minutes left before AWS reclaims it. The two-minute warning is available in Amazon CloudWatch Events, through which an admin can set up a rule to route the event to other services, such as Simple Notification Service.
- Import, replicate PostgreSQL instances. A pair of capabilities gives PostgreSQL engineers new options during migrations from RDS to Aurora. First, an engineer can continuously replicate live workload migration from an RDS PostgreSQL instance to Aurora PostgreSQL. Also, engineers can now import encrypted snapshots to protect that data during the migration.
- Get to learning. AWS updated its slate of Deep Learning Amazon Machine Images (AMIs). Developers can configure AMIs with a shared environment for source code and deep learning frameworks, using TensorFlow 1.5.0, which supports the drivers and GPUs behind AWS' P3 instances. They can also build AMIs based on the open source Conda package and environment management system, which include TensorBoard and TensorFlow Serving to manage the open source machine intelligence library. The updated AMIs also add support for the latest versions of Caffe, Keras, Microsoft Cognitive Toolkit and Theano.
- Those are the rules. AWS Config added support for seven predefined rules that verify whether your AWS resources are configured in accordance with best practices. These managed rules apply to CodeBuild, Identity and Access Management, S3 and AWS load balancers.
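The Spot two-minute warning above is wired up through a CloudWatch Events rule that matches the interruption notice and routes it to a target. A hedged sketch of the event pattern and the rule/target parameters an admin would hand to boto3's `events` client (the rule name and SNS topic ARN are hypothetical):

```python
import json

# Event pattern that matches the two-minute Spot interruption warning.
SPOT_INTERRUPTION_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Spot Instance Interruption Warning"],
}

def interruption_rule_params(rule_name, sns_topic_arn):
    """Build parameters for events.put_rule() and events.put_targets()."""
    return {
        "rule": {
            "Name": rule_name,
            "EventPattern": json.dumps(SPOT_INTERRUPTION_PATTERN),
        },
        # Route matching events to an SNS topic for the push notification.
        "targets": {
            "Rule": rule_name,
            "Targets": [{"Id": "notify-admins", "Arn": sns_topic_arn}],
        },
    }
```

The same pattern could instead target a Lambda function that checkpoints work before the instance is reclaimed.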
David Carty is the site editor for SearchAWS. Contact him at firstname.lastname@example.org.
AWS customers were enticed by products and services introduced at the cloud provider’s annual customer and partner confab, re:Invent, held recently. AWS also kept up a steady pace of basic service updates to round out 2017, which included some API management capabilities.
Amazon API Gateway now offers content encoding support, which lets a client compress content before a response to an API request. This feature can cut costs and improve performance, as it reduces the amount of data sent from the service to clients. Developers can define the minimum response size and enable encoding in the API itself.
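The minimum response size is set as a property on the API itself. A minimal sketch of the patch operation, assuming a hypothetical API ID; the dict would be passed to boto3's `apigateway` client via `client.update_rest_api(**params)`:

```python
# Build hedged example parameters that enable payload compression on an API.
def enable_compression_params(rest_api_id, min_bytes=1024):
    return {
        "restApiId": rest_api_id,  # hypothetical placeholder ID
        "patchOperations": [{
            "op": "replace",
            # Responses at or above this size are compressed when the
            # client sends a matching Accept-Encoding header (e.g. gzip).
            "path": "/minimumCompressionSize",
            "value": str(min_bytes),
        }],
    }
```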
The service also lets developers use application logic in custom Lambda authorizer functions to support API keys. This makes it simpler to control usage assigned to API requests, and the feature also allows teams to track request properties to API keys, such as HTTP request headers.
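A custom authorizer ties an API key to a request by returning a `usageIdentifierKey` field alongside the usual IAM policy. The token check and key value below are hypothetical stand-ins; the structure of the return value is what API Gateway expects from a TOKEN authorizer.

```python
# Minimal sketch of a Lambda TOKEN authorizer that also supplies an API key.
def authorizer_handler(event, context):
    token = event.get("authorizationToken", "")
    # Stand-in for real validation logic (e.g. verifying a JWT).
    effect = "Allow" if token == "valid-token" else "Deny"
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
        # API Gateway meters the request against the usage plan
        # associated with this key.
        "usageIdentifierKey": "example-api-key",
    }
```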
Additionally, Amazon API Gateway lets teams tag API stages for better organization of resources. Teams can filter those cost allocation tags through AWS Budgets to potentially reduce costs, and the tags also help categorize APIs.
Catch up on re:Invent
AWS released several products and features at its annual re:Invent conference that were not called out in this blog. Catch up on what you missed with oodles of re:Invent news and analysis from our team of writers.
New features and support
- Restart logic in ECS. The Amazon Elastic Container Service (ECS) scheduler lets a developer program logic to control retry attempts for failing tasks. This feature reduces the potential cost and performance impacts of continuous attempts to run tasks that fail. The scheduler can increase the time between restart attempts, stop the deployment and add a message to notify developers.
- Speed up Redshift queries. AWS' data warehouse, Amazon Redshift, added late materialization with row-level filters to improve performance by reducing the amount of data it scans. Predicate filters limit scans to only the table rows that satisfy the filter criteria, which boosts query performance. AWS enables this feature by default.
- Customize edge error responses. Lambda@Edge now lets developers respond with Lambda functions when CloudFront receives an error from an origin. Developers can access and define responses for 4XX and 5XX error status codes, add headers and redirects, and dynamically issue responses to end users based on their requests.
- Send real-time SQL data to Lambda. Developers can configure Amazon Kinesis Data Analytics to output real-time data to AWS Lambda. From there, they can code functions that respond to that SQL data, such as send an alert or update a database.
- Cross-account S3 bucket access from QuickSight. Data analysts can now use a QuickSight account tied to a specific AWS account to access data stored in Simple Storage Service (S3) buckets that belong to another AWS account. This cross-account S3 access enables more seamless data analysis for large businesses with multiple departments.
- More instance support for PostgreSQL databases. Amazon Relational Database Service (RDS) for PostgreSQL added support for R4, db.t2.xlarge, db.t2.2xlarge, and db.m4.16xlarge instances for enhanced performance.
- Increase ES scale, decrease cost. Amazon Elasticsearch Service (ES) added support for I3 instances, which improve upon the previous generation of I/O-intensive instances. With I3 instances, developers can use up to 1.5 PB of storage in an ES cluster, 15 TB of data in each node, 3.3 million IOPS and 16 GB/s of sequential disk throughput – all for less than half the cost of I2 instances.
- A NICE combination. After it acquired NICE in 2016, AWS worked with the Italian software company to release Desktop Cloud Visualization (DCV) 2017, a streaming and remote access service. DCV 2017 improves on its on-premises capabilities, and the service is now available on EC2 instances, including those with Elastic GPUs attached. AWS customers only pay for the underlying compute resources.
- CloudFront enhances encryption. AWS’ content delivery network, Amazon CloudFront, introduced field-level encryption to protect sensitive data with HTTPS. This feature can be helpful for financial or personally identifiable information, ensuring that only specific components or services in a stack can decrypt and view that data.
- Use containers in CD pipelines. Amazon CodePipeline added integration with container-based deployments to Amazon Elastic Container Service and AWS Fargate. Developers push code changes through a continuous delivery pipeline, which calls the desired service to create a container image, test and then update containers in production.
- Process MySQL queries faster. Amazon Aurora sped up query processing with support for hash joins and batched scans. These features are available for Amazon Aurora MySQL version 1.16.
- CloudWatch adds new visuals, encryption support. Amazon CloudWatch added two new chart visuals: zoom, for magnification of a shorter time period, and pan, for browsing a specific time interval. Administrators can find these visualization options in the CloudWatch Metrics console and dashboards. CloudWatch Logs also added support for integration with AWS Key Management Service (KMS), which enables admins to encrypt logs with AWS-managed keys if they choose.
- KMS integrates with ES. Developers can now encrypt data at rest in Amazon ES with keys managed through KMS. This feature lets data scientists use ES while encrypting all data on the underlying file systems without application modification.
- Set alerts for free tier usage. AWS Budgets now includes the capability to track service usage and send an email alert to administrators if it forecasts usage to exceed a free tier limit.
- Define an IoT backup plan. Developers can now define a backup action in Amazon IoT Rules Engine if a primary action fails. In addition to keeping an application running, this feature preserves error message data, which can include unavailability of services and insufficient resource provisioning.
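For the Kinesis Data Analytics item above, the service invokes a Lambda function with batches of base64-encoded SQL result rows, and the function must acknowledge each record ID. A hedged sketch of such a handler, where the `anomaly_score` output column and the alerting side effect are hypothetical:

```python
import base64
import json

# Sketch of a Lambda handler receiving Kinesis Data Analytics output.
def handler(event, context):
    responses = []
    for record in event["records"]:
        # Each record carries one base64-encoded SQL result row.
        row = json.loads(base64.b64decode(record["data"]))
        if row.get("anomaly_score", 0) > 2.0:  # hypothetical output column
            pass  # e.g. publish an SNS alert or update a database here
        # Acknowledge the record so the service doesn't retry it.
        responses.append({"recordId": record["recordId"], "result": "Ok"})
    return {"records": responses}
```

Returning `"DeliveryFailed"` for a record instead would signal the service to retry delivery.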
Another AWS re:Invent has come and gone, with another slew of new products to delight its fans. But in the cloud, can there be too much of a good thing?
The user conference was bursting at the seams this year, with 43,000 people shuffling in controlled chaos between six hotels that spanned two miles of the Las Vegas strip. The show is part networking, part training exercise, but more than anything it’s a victory lap for AWS and its prodigious pace of innovation. But could that overstuffed sprawl portend future problems for the platform itself? With roughly two dozen new products or updates lumped on top of AWS’ already extensive IT portfolio, does the cloud giant run the risk of spreading itself too thin, or at a minimum overwhelming its customers with choices?
Some conference attendees acknowledged this is a concern, though the consensus was that Amazon hasn’t shown any signs yet of failing where other tech companies have before.
“It would be reckless to say we don’t think about it,” said Biba Helou, managing vice president of cloud at Capital One. “But they really do seem to have a really good model for how they incubate and build products and then gain momentum based on customer feedback and then put the resources into what they need to.”
AWS’ track record with products isn’t perfect. Elastic File System remains a subject of consternation for some, and other services such as AppStream have been criticized for falling short of their initial promise. Nevertheless, users remain assured by a development model that organizes small teams to focus on specific products and features. And AWS has a history of releasing a base product and adding to it over time. Customers have become so conditioned to that model that despite frustration with a new product’s lack of a certain feature or language support, they’re content to assume that piece will arrive eventually.
Customers also find comfort in AWS’ continued investments in its core services. Alongside sexier new products rolled out at AWS re:Invent 2017 were a handful of updates to Amazon Elastic Compute Cloud and Amazon Simple Storage Service.
Still, the company that started out selling basic compute and storage has added a staggering number of products over the last 10-plus years, and shows no signs of slowing down. There’s a greater focus today on managed services and even a push into business services with products such as Amazon Chime and Alexa for Business. And AWS CEO Andy Jassy told conference attendees to expect more innovation over the next decade than the previous one.
The backdrop to all this product expansion is intensified competition. AWS still dominates the market with impressive, yet slowing, year-over-year revenue growth of 42%, and its market is still growing, according to a Gartner study. But for a company that claims its product decisions are tethered to customers’ wishes, part of that response now has to address services that customers can find in Microsoft Azure or Google Cloud Platform (GCP).
For example, machine learning and containers are two areas in which AWS has been criticized for falling behind Azure and GCP. Lo and behold, at AWS re:Invent, AWS added a bevy of services to fill those gaps. AWS also added bare metal servers — which didn't excite anyone I spoke with at the show, but checks a box for any enterprise that compares the AWS platform to alternatives from IBM or Oracle.
Amazon is looking at the laundry list of cloud services people want to implement and trying to cover as many of those requests as possible.
“There’s definitely that risk [of overextending] but the big play was about making it clear they’re trying to remove as many of those incentives as possible to move to any other cloud,” said Henry Shapiro, vice president and general manager at New Relic, a San Francisco-based monitoring and management company and AWS partner.
And while users and partners feel confident that AWS will address this theoretical problem, the dizzying pace of releases creates a practical problem for users today. AWS has excelled at democratizing technology and packaging it for the masses, but it can be a challenge for people to understand the breadth of services, said Owen Rogers, an analyst with 451 Research. That’s why the partner ecosystem will be crucial to AWS’ future growth, as those companies step up to help resolve the complexity so enterprises can navigate the landscape.
And enterprises contend with more than just the AWS learning curve. Amid a larger shift in how companies build and deploy applications, nearly every enterprise is scurrying to address clichés about digital transformation and avoid being undercut or outflanked by some tech upstart.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at email@example.com.
It should come as no surprise, but AWS CEO Andy Jassy is awfully bullish about the company he leads.
Jassy sat down for a press Q&A following his keynote speech here at re:Invent this week. Most of the roughly 45-minute session focused on why he sees AWS as the best place for cloud workloads, but he also shed some light on the future of the platform and mused on the state of IT. The following are selected excerpts from his responses, edited for brevity.
AWS looks closely at the technology and has lots of customers and partners looking to build blockchain on top of AWS. Other cloud vendors such as IBM and Microsoft have added services in this space, but Jassy implied AWS won’t follow suit any time soon:
“We’re watching it carefully. When we talk to customers, they’re really into blockchain as a concept but we don’t yet see a lot of practical concepts that are much broader than using a distributed ledger.”
AWS is all about automation, and that includes the elimination of humans from the equation. Some jobs and tasks will fall by the wayside. AWS has added AI services that could directly displace employees in areas such as translation and transcription, but Jassy sees the net result of these innovations as more, different jobs in the future:
Even before AI, if you look at part of what’s going on in the U.S. there are so many people who historically followed relatives into the mills and the mines and factories and agricultural fields and those jobs have moved out of the U.S. and they’re not likely to move back any time soon. It’s progress and people find different ways to do things but it usually opens other opportunities.
“If you look at the number of jobs that companies including Amazon have… there are tons of jobs and we don’t have enough people to do those jobs. We as a country and as a world need to change the educational systems so more people are equipped to do the jobs that are available.”
AWS operates at an $18 billion run rate and has millions of active customers, but unsurprisingly Jassy sees this as just the beginning. Future growth will be “lumpy” because enterprises and the public sector methodically adopt new technology and move tranches of workloads in stages over many years, he said.
Still, that growth likely won’t push Amazon to spin out AWS, Jassy said.
“I would be surprised if we spun out AWS mostly because there isn’t a need to do so. When companies are spun out it’s because either they can’t commit enough capital to that business unit so they do an IPO, or it’s because they don’t want the financial statements of one of those businesses on the overall set of financial statements.
If you look at the history of Amazon, we’re not really focused on the optics of the financial statements. We’re comfortable being misunderstood for long periods of time, so it’s not really a driver for how we behave.
The company has been so gracious committing whatever amounts of capital we need to growing AWS in the first 11 and a half years — and by the way, it has required a lot of capital — that there just hasn’t been a need to do so… There’s a lot of value in having so many internal customers at Amazon who are not shy about telling us how everything works.”
There’s a lot of talk these days about multi-cloud strategies. Microsoft and Google, the two companies perceived to be AWS’ closest competitors, often tout this as the way enterprises will adopt cloud in the future, but AWS has been mostly quiet on this front. When asked if AWS would do more to address these needs, Jassy downplayed the concept, saying most companies go with one provider as their predominant cloud platform.
“We certainly get asked about multi-cloud a lot. What you see is most enterprises, when they’re thinking of making their plan of how to move to the cloud, they start out thinking that they would distribute their workloads somewhat evenly across several providers. When they actually do the homework on what that actually means, very few make that decision because it means you have to standardize at the lowest common denominator and these platforms are nowhere close to the same [as each other] today.
When you’re making a change from on premises to the cloud, that’s a pretty big change… Asking development teams to be fluid not just from on prem to the cloud but across multiple platforms is a lot. And all these cloud providers have volume discounts, so if you split your workloads evenly across a couple or even a few, you’re diminishing your buying leverage.”
Data center expansion
AWS currently has 16 regions and 44 availability zones, with plans to add seven regions and 17 availability zones in the next two years. Jassy said that eventually there will be regions in “every major country” to address latency and data sovereignty. Here’s how he described the decision-making process for where to open new regions:
“We look at how many companies are there, how much technology is being consumed, how much technology are companies willing to outsource, what kind of infrastructure is there — what’s the cost structure as well as the quality of the network and the data centers and the power and the things we need to operate effectively. And what’s the connectivity to other parts of the world, because even though our regions have large endemic customer bases, it turns out every time we open a region our customers who are primarily operating in regions outside of that country choose to deploy inside of that region as well.”
Tech and ethics
Major tech companies face increasing scrutiny over their role in moderating the use of their platforms. With new machine learning services such as SageMaker and the DeepLens AI camera intended to make machine learning more palatable to the average developer, Jassy was asked about his company’s role in responding to potentially sinister uses of AWS:
“If you look at all the services that AWS has there is a potential for a bad actor to choose one of those services to do something malicious or sinister or ill-intended, in the same way you have the ability to do that if you buy a server yourself or use any other software yourself.
We have very straightforward and clear terms of service and if we find anyone violating those terms of services — and I think anything sinister would violate those terms of services — we suspend those customers and they’re not able to use our platform.”
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at firstname.lastname@example.org.
AWS continues to push its Virtual Private Cloud as the new norm for cloud development and deployment, and further limit public internet exposure.
AWS PrivateLink enables customers to privately access services while keeping all network traffic within an Amazon Virtual Private Cloud (VPC). Instead of whitelisting approved public IP addresses, IT teams can establish private IP addresses and connect them to services via an Elastic Network Interface. Amazon services on PrivateLink also support Direct Connect for on-premises connections.
Amazon later added PrivateLink support for AWS-hosted customer and partner services, so developers can securely work with third-party tools. Together, AWS PrivateLink and Network Load Balancer enable administrators to identify the origin of incoming requests and route them.
AWS PrivateLink is the latest in a string of new features that secure cloud connections between resources and regions.
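A PrivateLink connection is created as an interface-type VPC endpoint. The sketch below shows the shape of the parameters an admin would pass to boto3's EC2 client via `client.create_vpc_endpoint(**params)`; all IDs and the service name are hypothetical placeholders.

```python
# Build hedged example parameters for a PrivateLink interface endpoint.
def interface_endpoint_params(vpc_id, subnet_ids, security_group_ids,
                              service_name):
    return {
        "VpcEndpointType": "Interface",  # PrivateLink, vs. gateway endpoints
        "VpcId": vpc_id,
        # e.g. "com.amazonaws.us-east-1.kinesis-streams" for an AWS service,
        # or a partner's endpoint service name.
        "ServiceName": service_name,
        # An Elastic Network Interface with a private IP is placed in
        # each listed subnet.
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        # Resolve the service's public DNS name to the private IPs.
        "PrivateDnsEnabled": True,
    }
```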
AWS re:Invent 2017
Amazon’s yearly cloud conference, AWS re:Invent 2017, is the launchpad for a number of product and service introductions. Visit our essential guide to catch up on all the news from the conference, plus expert tips for IT professionals across a variety of roles.
New features and support
- Data goes on lockdown. Several additional features aim to boost data protection in Amazon Simple Storage Service (S3), which has been subject to numerous data leaks thanks to improper customer configurations. A Default Encryption setting for buckets automatically applies server-side encryption for all objects, and Cross-Region Replication improves efficiency and governance of objects encrypted by AWS Key Management Service.
- Sync up. Amazon Elastic File System (EFS) now includes an EFS File Sync feature that synchronizes on-premises or cloud-based file systems with the service, replacing file copy tools that required manual configuration.
- Upgrade your load balancer. A one-step migration wizard enables an IT team to switch from a Classic Load Balancer — formerly Elastic Load Balancing — to a Network or Application Load Balancer. Developers can view and modify load balancer configuration before deployment and add more advanced features afterward.
- Unclutter your messages. With an added message filter for pub/sub architectures, subscribers to Amazon Simple Notification Service (SNS) can choose specific subsets of messages to receive, and reduce unneeded messages without the need to write and implement their own message filters or routing logic.
- Personalize viewer content. Three capabilities in Lambda@Edge improve latency and simplify infrastructure. Content-based dynamic origin selection allows attribute-based routing to multiple back-end origins. Developers can also make network calls on CloudFront end user-facing events, not just origin-facing events. Lambda@Edge can also generate advanced responses that rely on more complex logic to specialize content for specific end users.
- Extra code protection. AWS CodeBuild now works with VPC resources, so dev teams can build and test code within a VPC and prevent public exposure of those resources. Developers can also cache dependencies for more efficient software builds.
- Machine learning boosts data warehouses. A Short Query Acceleration feature in Amazon Redshift uses machine learning to predict which short-running requests should move to a separate queue for faster processing – so, for example, queries such as reports and dashboards aren’t blocked behind larger extract, transform, and load requests. Another Redshift feature hops reads and writes to the next available queue without the need for a restart to improve query performance and efficiency.
- Automate deployments locally. An update to the AWS CodeDeploy agent enables developers to deploy software code on premises to test and debug, before they move code to production.
- Pull more strings. AWS OpsWorks now supports Puppet Enterprise, which gives administrators a managed Puppet server to automate infrastructure and application management.
- Visually modify security policies. Admins can create and manage AWS Identity and Access Management policies with a new visual editor, which makes it easier to grant least privileges with lists of resource types and request conditions.
- Update state machines. AWS Step Functions enables developers to change state machine definitions and configurations for distributed application workflows. The API call UpdateStateMachine makes it easier to modify applications, which previously required a multi-step process.
- Cloud carpool. AWS unveiled a reference guide for automotive manufacturers to produce vehicles with secure connectivity to the AWS cloud. The guide includes capabilities for local computing and data processing, which can be used to power voice- and location-based services, car health checks, predictive analytics and more.
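The SNS message filter described above works by comparing each message's attributes against a filter policy attached to a subscription. As a rough, simplified sketch of that matching behavior (this is an illustration of the semantics, not the SNS implementation; real policies also support numeric ranges, prefix matching and other operators):

```python
def matches_filter_policy(policy: dict, attributes: dict) -> bool:
    """Return True if a message's attributes satisfy a filter policy.

    Mirrors the basic SNS rule: every key in the policy must be present
    in the message attributes, and the attribute's value must appear in
    the policy's list of accepted values.
    """
    for key, accepted_values in policy.items():
        if attributes.get(key) not in accepted_values:
            return False
    return True

# A subscriber interested only in placed orders from EU regions:
policy = {"event_type": ["order_placed"], "region": ["eu-west-1", "eu-central-1"]}

print(matches_filter_policy(policy, {"event_type": "order_placed", "region": "eu-west-1"}))   # True
print(matches_filter_policy(policy, {"event_type": "order_shipped", "region": "eu-west-1"}))  # False
```

Before this feature, that filtering logic typically lived in the subscriber itself (or in intermediate routing code); attaching the policy to the subscription lets SNS drop non-matching messages before delivery.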
AWS has added a new hypervisor behind the scenes, but customers likely won’t see much of a direct impact on their cloud environment.
Amazon this month began selling its C5 instance nearly a year after first announcing the compute-heavy VMs would be built with the latest Intel chips. Tucked into a blog post about the C5’s general availability was mention of a new unspecified hypervisor to better coordinate with Amazon’s hardware. The company has since confirmed to SearchAWS that it is “KVM based.” Word of a possible switch to KVM was first reported by The Register, which cited a since-deleted FAQ from Amazon that said the hypervisor was KVM based.
AWS isn’t abandoning Xen, its hypervisor of choice since the outset of the platform. Instead, it will adopt a multi-hypervisor strategy with both Xen and KVM depending on a given workload’s specific requirements. We asked AWS if the introduction of KVM had to do with any issues with Xen; an AWS spokesperson responded with a statement that the P3 instances on sale since October use Xen, and the company will continue to heavily invest in Xen.
“For future platforms, we will use the best virtualization technology for each specific platform and plan to continue to launch platforms that are built on both Xen and our new hypervisor going forward,” the spokesperson said.
The KVM addition is an interesting behind-the-scenes glimpse from a company that rarely discloses much about its internal architecture, but it's unclear what impact, if any, customers will feel. In AWS' shared-responsibility model, the hypervisor essentially acts as the line in the sand: the virtualization layer and the physical hardware beneath it are the responsibility of the cloud provider.
Why would AWS go to the trouble of juggling different hypervisors for different instance types? AWS is believed to be the only major service provider operating at scale that uses Xen, so part of the rationale for the switch may be to save support and development costs by letting KVM's far larger community bear the brunt of that work.
“Amazon is notorious for taking open source and leveraging it for their own benefit and not giving back to the open source community,” said Keith Townsend, a TechTarget contributor and principal of The CTO Advisor LLC and founder of TheCTOAdvisor.com.
And after a decade-plus of using Xen, AWS probably would be challenged to move everything to KVM, he said.
Such a hardware virtual machine (HVM) approach means a limited number of HVMs per node and a need for more hardware to handle larger nodes, said Edward L. Haletky, president of AstroArch Consulting in Austin, Texas. It also means AWS’ cloud management tools must go in a new direction and become multi-hypervisor. A bigger question is why Amazon isn’t simply calling the new hypervisor KVM.
“[It] means to me that they have modified it in some unknown way to either help it scale, access existing storage, security and networks, or some other set of elements within KVM,” he said.
The new hypervisor may well fit “hand-in-glove” with AWS hardware to optimize security and performance, as AWS chief evangelist Jeff Barr wrote in the blog post about the C5 instance. But customers likely won’t notice much of a difference.
“It’s more probable that it will impact [AWS’] bottom line but doesn’t necessarily impact the customer,” Townsend said.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at email@example.com.
As AI capabilities find uses in new markets, more companies are turning to the cloud for these high-performance computing workloads. And AWS is opening its arms wider with expanded support for GPU-backed instances to provide those resources, at premium prices.
The P3 Elastic Compute Cloud (EC2) instance, released into general availability last week, improves performance for advanced applications with graphics processing units (GPUs). The P3 instance comes in three sizes: p3.2xlarge, p3.8xlarge and p3.16xlarge, with 1-8 NVIDIA Tesla V100 GPUs, 16-128 GB of GPU memory, 8-64 vCPUs and 61-488 GB of instance memory. The instances also offer enhanced network performance of up to 25 Gbps, plus 14 Gbps of Elastic Block Store (EBS) bandwidth.
The P3 instance fits advanced workloads such as machine learning, high-performance computing and video processing. It is also one of AWS' most expensive instances, ranging from $3.06 to $24.48 per hour for On-Demand pricing.
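At those rates, the premium adds up quickly. A back-of-the-envelope monthly figure for the largest size, using the quoted On-Demand ceiling and an assumed 730-hour month (actual bills vary by region and pricing model), looks like this:

```python
# Rough On-Demand monthly cost for a p3.16xlarge at the quoted
# $24.48/hour rate. The 730-hour month is an assumption for
# estimation, not an AWS billing constant.
HOURLY_RATE = 24.48
HOURS_PER_MONTH = 730

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly_cost:,.2f}")  # ≈ $17,870.40
```

Reserved Instance and Spot pricing can pull that number down substantially for sustained or interruptible workloads.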
Amazon also unveiled new Amazon Machine Images (AMIs) for the P3 instance family. These AWS Deep Learning AMIs include frameworks designed specifically for the NVIDIA Volta V100 GPUs included with the P3 instance family. Developers can use the AMIs to build custom AI models and algorithms.
New features and support
- PostgreSQL compatibility, new features. After months of preview, AWS made PostgreSQL for Amazon Aurora generally available. AWS hopes to entice users to migrate PostgreSQL workloads to Aurora, promising a more scalable, secure and durable managed database service and lower costs. AWS claims PostgreSQL with Aurora has “three times better performance” than standard PostgreSQL databases. Aurora also added the ability to launch R4 instances with a larger cache and faster memory than the previous R3 generation – a developer can double Aurora’s maximum throughput on MySQL databases.
- New AWS Batch functionality. AWS Batch can now trigger CloudWatch Events when a job transitions from one state to another, so a developer won’t have to poll the state of each Batch job. The event stream feature sends state updates in near real-time, which can route through CloudWatch Events to targets such as AWS Lambda or Amazon Simple Notification Service. AWS also adjusted the service to spin idle EC2 resources down faster in accordance with the cloud provider’s new per-second billing. AWS Batch previously held on to idle resources for the majority of the billing hour to prevent unnecessary instance launches.
- ElastiCache supports Redis encryption. Redis, an open source in-memory database, does not natively support encryption, but AWS now provides that capability for Amazon ElastiCache. The service now enables encryption for personally identifiable information at rest and in transit. At-rest encryption protects Amazon Simple Storage Service (S3) and disk backups, while in-transit encryption protects data communicated between Redis servers and clients.
- Apply Glue via CloudFormation. AWS has included its Glue service, which helps execute ETL jobs, as an option for AWS CloudFormation templates. This support helps IT teams automate AWS Glue functions — such as jobs, triggers and crawlers — to quickly load and prepare data for analytics.
- Address data warehouse demands. Dense compute (DC2) nodes for Amazon Redshift are a second generation of compute clusters designed to reduce latency and boost throughput for demanding data warehouse workloads. The DC2 nodes, which include Intel E5-2686 v4 (Broadwell) CPUs, DDR4 memory and NVMe-based solid state disks, are available for the same price as the previous generation DC1 nodes.
- Use Elasticsearch in a VPC. Amazon Elasticsearch Service (ES) now supports access from an Amazon Virtual Private Cloud (VPC), which removes the need to connect to the service over the public internet. IT teams can now use Elasticsearch, an open source search engine and analytics service, without configuring firewall rules and domain access policies for ES.
- Geographic application restriction. AWS Web Application Firewall now includes an option to restrict access to applications based on geographic location to fulfill licensing requirements and security needs. Geographic Match Conditions allows a business to create a whitelist that only admits visitors from specified countries, or a blacklist that blocks access from certain countries.
- CodePipeline takes pushes from CodeCommit. CodeCommit can now send an Amazon CloudWatch Event to CodePipeline to trigger a pipeline, which eliminates the need to periodically poll for code changes.
- ALBs support multiple certificates. Businesses can now host multiple secure HTTPS applications and assign each one a Secure Sockets Layer certificate behind one Application Load Balancer. AWS uses Server Name Indication to allow these apps to run on the same load balancer. This means businesses don’t have to use risky wildcard certificates or complicated multi-domain certificates to run multiple HTTPS apps on one load balancer.
- Migrate to new database sources. The AWS Database Migration Service (DMS) added Azure SQL Database and S3 as sources. S3 was previously supported as a target, but its addition as a source allows teams to freely move data to and from S3 buckets and other DMS sources. Amazon EC2 also now supports Microsoft SQL Server 2017 for extra scalability and performance.
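The Batch event stream described above is consumed through CloudWatch Events rules, which match incoming events against a JSON pattern and forward matches to targets such as Lambda or SNS. The pattern below follows the documented `aws.batch` / "Batch Job State Change" event shape; the matching helper is a deliberate simplification of CloudWatch's own pattern matching, included only to show how a rule narrows the stream:

```python
# Pattern a CloudWatch Events rule might use to catch only failed Batch jobs.
BATCH_FAILURE_PATTERN = {
    "source": ["aws.batch"],
    "detail-type": ["Batch Job State Change"],
    "detail": {"status": ["FAILED"]},
}

def event_matches(pattern: dict, event: dict) -> bool:
    """Simplified CloudWatch-style matching: each pattern field lists the
    accepted values for that field, recursing into nested dicts."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not event_matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

failed_job = {
    "source": "aws.batch",
    "detail-type": "Batch Job State Change",
    "detail": {"jobName": "nightly-etl", "status": "FAILED"},
}
print(event_matches(BATCH_FAILURE_PATTERN, failed_job))  # True
```

A rule with a pattern like this, pointed at a Lambda function, replaces the polling loop a team previously needed to watch Batch job states.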
The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes graphics streaming more affordable.
Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.
The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS’ Graphics Design GPU instances come in four sizes that range from 2-16 virtual CPUs and 7.5-61 gibibytes (GiB) of system memory, and run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.
Developers can now also select between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps but charge fees for every instance in the fleet. On-Demand fleets charge fees for instances when end users are connected, plus an hourly fee, but there is a delay when an end user accesses the first application.
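The choice between the two fleet types is essentially a utilization break-even calculation. The sketch below illustrates the trade-off; the rates are hypothetical placeholders, not published AWS prices, and the fee structure is simplified to the shape described above (Always-On bills every instance around the clock, On-Demand bills connected hours at the full rate plus a smaller hourly fee otherwise):

```python
# Hypothetical rates for illustration only -- not AWS pricing.
def always_on_cost(instances, hours, instance_rate):
    """Every instance in the fleet is billed for every hour."""
    return instances * hours * instance_rate

def on_demand_cost(instances, hours, connected_hours, instance_rate, stopped_rate):
    """Connected hours bill at the instance rate; idle hours at a lower fee."""
    idle_hours = hours - connected_hours
    return instances * (connected_hours * instance_rate + idle_hours * stopped_rate)

# 10 instances over a 730-hour month, each connected about 160 hours:
a = always_on_cost(10, 730, instance_rate=0.50)
b = on_demand_cost(10, 730, 160, instance_rate=0.50, stopped_rate=0.05)
print(a, b)  # On-Demand wins when instances sit idle most of the month
```

The flip side is the startup delay: fleets that must feel instant to end users may still justify Always-On pricing even at low utilization.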
New features and support
In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:
- ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during traffic spikes and provides a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers.
- New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
- X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
- AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
- Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed. Previously, Spot Instances were terminated when prices rose above the user-defined level. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
- EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
- New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
- Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches the resources and conditions it affects. Other updates include a wizard for admins to create IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
- Six new stream functions. Amazon Kinesis Analytics, which enables businesses to process and query streaming data with SQL, has six new functions to simplify stream processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher-volume data streams.
- Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
- AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
- RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
- Assess RHEL security. Amazon Inspector expanded support to Red Hat Enterprise Linux (RHEL) 7.4 assessments, so users can run Common Vulnerabilities and Exposures (CVE), Amazon Security Best Practices and Runtime Behavior Analysis scans in RHEL 7.4 environments on EC2 instances.
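The policy summaries and visual editing described above operate on ordinary IAM policy documents. For reference, a minimal least-privilege policy of the kind those tools help audit is just JSON with the standard Effect/Action/Resource structure; the bucket name here is a hypothetical placeholder:

```python
import json

# Minimal least-privilege IAM policy: read-only access to a single,
# hypothetical S3 bucket. Policy summaries surface exactly this
# Action/Resource pairing so admins can spot overly broad grants.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A policy this narrow is the goal the visual editor's lists of resource types and request conditions are meant to make easier to reach.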