AWS Cloud Cover

December 29, 2017  6:03 PM

Amazon API Gateway boosts compression, tagging

David Carty

AWS customers were enticed by products and services introduced at the cloud provider’s annual customer and partner confab, re:Invent, held recently. AWS also kept up a steady pace of basic service updates to round out 2017, which included some API management capabilities.

Amazon API Gateway now offers content encoding support, which lets the service compress an API response before returning it to the client. This feature can cut costs and improve performance, as it reduces the amount of data sent from the service to clients. Developers enable encoding on the API itself and define the minimum response size that triggers compression.
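
For teams that manage APIs programmatically, here is a minimal boto3 sketch of how that threshold might be set; the REST API ID is a hypothetical placeholder, and a value of 1024 compresses any response of 1 KB or larger.

import boto3

apigateway = boto3.client("apigateway")

# Hypothetical REST API ID; responses of at least 1 KB are compressed once
# clients send an appropriate Accept-Encoding header.
apigateway.update_rest_api(
    restApiId="a1b2c3d4e5",
    patchOperations=[
        {"op": "replace", "path": "/minimumCompressionSize", "value": "1024"}
    ],
)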

The service also lets developers return API keys from application logic in custom Lambda authorizer functions. This makes it simpler to control the usage plan applied to API requests, and it lets teams tie request properties, such as HTTP request headers, to API keys.
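
Here is a minimal sketch of a request-based authorizer that returns an API key along with its policy; the header name and key value are hypothetical, and the API's key source must be set to use the authorizer for the returned key to take effect.

def handler(event, context):
    # A request-type authorizer receives request properties such as headers.
    tenant = event.get("headers", {}).get("x-tenant-id", "anonymous")

    return {
        "principalId": tenant,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        # Hypothetical key value; API Gateway applies the usage plan tied to it.
        "usageIdentifierKey": "key-for-" + tenant,
    }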

Additionally, Amazon API Gateway lets teams tag API stages for better organization of resources. Teams can use those stage-level cost allocation tags to filter spending in AWS Budgets and potentially reduce costs, and the tags also help categorize APIs.
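
A short sketch of stage tagging through the boto3 apigateway client's tag_resource call is below; the API ID, stage name and tag values are hypothetical.

import boto3

apigateway = boto3.client("apigateway")

# Hypothetical stage ARN; tags on the stage can then be activated as cost
# allocation tags and filtered in AWS Budgets.
stage_arn = "arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod"

apigateway.tag_resource(
    resourceArn=stage_arn,
    tags={"team": "payments", "environment": "prod"},
)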

Catch up on re:Invent

AWS released several products and features at its annual re:Invent conference that were not called out in this blog. Catch up on what you missed with oodles of re:Invent news and analysis from our team of writers.

New features and support

  • Restart logic in ECS. The Amazon Elastic Container Service (ECS) scheduler lets a developer program logic to control retry attempts for failing tasks. This feature reduces the potential cost and performance impacts of continuous attempts to run tasks that fail. The scheduler can increase the time between restart attempts, stop the deployment and add a message to notify developers.
  • Speed up Redshift queries. AWS’ data warehouse, Amazon Redshift, added late materialization with row-level filters to improve performance by reducing the amount of data it scans. Predicate filters limit scans to only the rows that satisfy the filter criteria, which boosts query performance. AWS enables this feature by default.
  • Customize edge error responses. Lambda@Edge now lets developers respond with Lambda functions when CloudFront receives an error from your origin. Developers can access and define responses for 4XX and 5XX error status codes, and they can add headers, redirects and dynamically issue responses to end users based on their requests.
  • Send real-time SQL data to Lambda. Developers can configure Amazon Kinesis Data Analytics to output real-time data to AWS Lambda. From there, they can code functions that respond to that SQL data, such as send an alert or update a database.
  • Cross-account S3 bucket access from QuickSight. Data analysts can now use a QuickSight account tied to a specific AWS account to access data stored in Simple Storage Service (S3) buckets that belong to another AWS account. This cross-account S3 access enables more seamless data analysis for large businesses with multiple departments.
  • More instance support for PostgreSQL databases. Amazon Relational Database Service (RDS) for PostgreSQL added support for R4, db.t2.xlarge, db.t2.2xlarge, and db.m4.16xlarge instances for enhanced performance.
  • Increase ES scale, decrease cost. Amazon Elasticsearch Service (ES) added support for I3 instances, which improve upon the previous generation of I/O-intensive instances. With I3 instances, developers can use up to 1.5 PB of storage in an ES cluster, 15 TB of data in each node, 3.3 million IOPS and 16 GB/s of sequential disk throughput – all for less than half the cost of I2 instances.
  • A NICE combination. After it acquired NICE in 2016, AWS worked with the Italian software company to release Desktop Cloud Visualization (DCV) 2017, a streaming and remote access service. DCV 2017 improves on-premises capabilities, and the service is now available on EC2 instances, including those with Elastic GPUs. AWS customers pay only for the underlying compute resources.
  • CloudFront enhances encryption. AWS’ content delivery network, Amazon CloudFront, introduced field-level encryption to protect sensitive data with HTTPS. This feature can be helpful for financial or personally identifiable information, ensuring that only specific components or services in a stack can decrypt and view that data.
  • Use containers in CD pipelines. Amazon CodePipeline added integration with container-based deployments to Amazon Elastic Container Service and AWS Fargate. Developers push code changes through a continuous delivery pipeline, which calls the desired service to create a container image, test and then update containers in production.
  • Process MySQL queries faster. Amazon Aurora sped up query processing with support for hash joins and batched scans. These features are available for Amazon Aurora MySQL version 1.16.
  • CloudWatch adds new visuals, encryption support. Amazon CloudWatch added two new chart visuals: zoom, for magnification of a shorter time period, and pan, for browsing a specific time interval. Administrators can find these visualization options in the CloudWatch Metrics console and dashboards. CloudWatch Logs also added support for integration with AWS Key Management Service (KMS), which enables an admin to encrypt log groups with keys managed in KMS, if they choose; see the sketch after this list.
  • KMS integrates with ES. Developers can now encrypt data at rest in Amazon ES with keys managed through KMS. This feature lets data scientists use ES while encrypting all data on the underlying file systems without application modification.
  • Set alerts for free tier usage. AWS Budgets now includes the capability to track service usage and send an email alert to administrators if it forecasts usage to exceed a free tier limit.
  • Define an IoT backup plan. Developers can now define a backup action in Amazon IoT Rules Engine if a primary action fails. In addition to keeping an application running, this feature preserves error message data, which can include unavailability of services and insufficient resource provisioning.
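
To illustrate the CloudWatch Logs encryption item above, here is a minimal boto3 sketch; the log group name and KMS key ARN are hypothetical, and the key policy must allow the CloudWatch Logs service to use the key.

import boto3

logs = boto3.client("logs")

# Hypothetical log group and customer-managed key ARN.
logs.associate_kms_key(
    logGroupName="/aws/lambda/orders-service",
    kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)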

December 8, 2017  9:08 PM

Could AWS’ torrid pace of innovation come back to haunt the cloud giant?

Trevor Jones

Another AWS re:Invent has come and gone, with another slew of new products to delight its fans. But in the cloud, can there be too much of a good thing?

The user conference was bursting at the seams this year, with 43,000 people shuffling in controlled chaos between six hotels that spanned two miles of the Las Vegas strip. The show is part networking, part training exercise, but more than anything it’s a victory lap for AWS and its prodigious pace of innovation. But could that overstuffed sprawl portend future problems for the platform itself? With roughly two dozen new products or updates lumped on top of AWS’ already extensive IT portfolio, does the cloud giant run the risk of spreading itself too thin, or at a minimum overwhelming its customers with choices?

Some conference attendees acknowledged this is a concern, though the consensus was that Amazon hasn’t shown any signs yet of failing where other tech companies have before.

“It would be reckless to say we don’t think about it,” said Biba Helou, managing vice president of cloud at Capital One. “But they really do seem to have a really good model for how they incubate and build products and then gain momentum based on customer feedback and then put the resources into what they need to.”

AWS’ track record with products isn’t perfect. Elastic File System remains a subject of consternation for some, and other services such as AppStream have been criticized for falling short of their initial promise. Nevertheless, users remain assured by a development model that organizes small teams to focus on specific products and features. And AWS has a history of releasing a base product and adding to it over time. Customers have become so conditioned to that model that despite frustration with a new product’s lack of a certain feature or language support, they’re content to assume that piece will arrive eventually.

Customers also find comfort in AWS’ continued investments in its core services. Alongside sexier new products rolled out at AWS re:Invent 2017 were a handful of updates to Amazon Elastic Compute Cloud and Amazon Simple Storage Service.

Still, the company that started out selling basic compute and storage has added a staggering number of products over the last 10-plus years, and shows no signs of slowing down. There’s a greater focus today on managed services and even a push into business services with products such as Amazon Chime and Alexa for Business. And AWS CEO Andy Jassy told conference attendees to expect more innovation over the next decade than the previous one.

The backdrop to all this product expansion is intensified competition. AWS still dominates the market with impressive, yet slowing, year-over-year revenue growth of 42%, and its market is still growing, according to a Gartner study. But for a company that claims its product decisions are tethered to customers’ wishes, part of that response now has to address services that customers can find in Microsoft Azure or Google Cloud Platform (GCP).

For example, machine learning and containers are two areas in which AWS has been criticized for falling behind Azure and GCP. Lo and behold, at AWS re:Invent, AWS added a bevy of services to fill those gaps. AWS also added bare metal servers — which didn’t excite anyone I spoke with at the show, but which check a box for any enterprise that compares the AWS platform to alternatives from IBM or Oracle.

Amazon is looking at the laundry list of cloud services people want to implement and trying to cover as many of those requests as possible.

“There’s definitely that risk [of overextending] but the big play was about making it clear they’re trying to remove as many of those incentives as possible to move to any other cloud,” said Henry Shapiro, vice president and general manager at New Relic, a San Francisco-based monitoring and management company and AWS partner.

And while users and partners feel confident that AWS will address this theoretical problem, the dizzying pace of releases creates a practical problem for users today. AWS has excelled at democratizing technology and packaging it for the masses, but it can be a challenge for people to understand the breadth of services, said Owen Rogers, an analyst with 451 Research. That’s why the partner ecosystem will be crucial to AWS’ future growth, as those companies step up to help resolve the complexity so enterprises can navigate the landscape.

And enterprises contend with more than just the AWS learning curve. Amid a larger shift in how companies build and deploy applications, nearly every enterprise is scurrying to address clichés about digital transformation and avoid being undercut or outflanked by some tech upstart.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 2, 2017  12:34 AM

AWS CEO Jassy shares thoughts on the future of AWS, cloud

Trevor Jones

It should come as no surprise, but AWS CEO Andy Jassy is awfully bullish about the company he leads.

Jassy sat down for a press Q&A following his keynote speech here at re:Invent this week. Most of the roughly 45-minute session focused on why he sees AWS as the best place for cloud workloads, but he also shed some light on the future of the platform and mused on the state of IT. The following are selected excerpts from his responses, edited for brevity.

Blockchain

AWS looks closely at the technology and has lots of customers and partners looking to build blockchain on top of AWS. Other cloud vendors such as IBM and Microsoft have added services in this space, but Jassy implied AWS won’t follow suit any time soon:

“We’re watching it carefully. When we talk to customers, they’re really into blockchain as a concept but we don’t yet see a lot of practical concepts that are much broader than using a distributed ledger.”

Jobs

AWS is all about automation, and that includes removing humans from the equation. Some jobs and tasks will fall by the wayside. AWS has added AI services that could directly displace employees in areas such as translation and transcription, but Jassy sees the net result of these innovations as more, different jobs in the future:

“Even before AI, if you look at part of what’s going on in the U.S. there are so many people who historically followed relatives into the mills and the mines and factories and agricultural fields and those jobs have moved out of the U.S. and they’re not likely to move back any time soon. It’s progress and people find different ways to do things but it usually opens other opportunities.

“If you look at the number of jobs that companies including Amazon have… there are tons of jobs and we don’t have enough people to do those jobs. We as a country and as a world need to change the educational systems so more people are equipped to do the jobs that are available.”

Future growth

AWS operates at an $18 billion run rate and has millions of active customers, but unsurprisingly Jassy sees this as just the beginning. Future growth will be “lumpy” because enterprises and the public sector methodically adopt new technology and move tranches of workloads in stages over many years, he said.

Still, that growth likely won’t push Amazon to spin out AWS, Jassy said.

“I would be surprised if we spun out AWS mostly because there isn’t a need to do so. When companies are spun out it’s because either they can’t commit enough capital to that business unit so they do an IPO, or it’s because they don’t want the financial statements of one of those businesses on the overall set of financial statements.

If you look at the history of Amazon, we’re not really focused on the optics of the financial statements. We’re comfortable being misunderstood for long periods of time, so it’s not really a driver for how we behave.

The company has been so gracious committing whatever amounts of capital we need to growing AWS in the first 11 and a half years — and by the way, it has required a lot of capital — that there just hasn’t been a need to do so… There’s a lot of value in having so many internal customers at Amazon who are not shy about telling us how everything works.”

Multi-cloud

There’s a lot of talk these days about multi-cloud strategies. Microsoft and Google, the two companies perceived to be AWS’ closest competitors, often tout this as the way enterprises will adopt cloud in the future, but AWS has been mostly quiet on this front. When asked if AWS would do more to address these needs, Jassy downplayed the concept, saying most companies go with one provider as their predominant cloud platform.

“We certainly get asked about multi-cloud a lot. What you see is most enterprises, when they’re thinking of making their plan of how to move to the cloud, they start out thinking that they would distribute their workloads somewhat evenly across several providers. When they actually do the homework on what that actually means, very few make that decision because it means you have to standardize at the lowest common denominator and these platforms are nowhere close to the same [as each other] today.

When you’re making a change from on premises to the cloud, that’s a pretty big change… Asking development teams to be fluid not just on prem to the cloud but to multiple platforms is a lot. And all these cloud providers have volume discounts, so if you split your workloads evenly across a couple or even a few you’re diminishing your buying leverage.”

Data center expansion

AWS currently has 16 regions and 44 availability zones, with plans to add seven regions and 17 availability zones in the next two years. Jassy says that eventually there will be regions in “every major country” to address latency and data sovereignty. Here’s how he described the decision-making process for where to open new regions:

“We look at how many companies are there, how much technology is being consumed, how much technology are companies willing to outsource, what kind of infrastructure is there — what’s the cost structure as well as the quality of the network and the data centers and the power and the things we need to operate effectively. And what’s the connectivity to other parts of the world, because even though our regions have large endemic customer bases, it turns out every time we open a region our customers who are primarily operating in regions outside of that country choose to deploy inside of that region as well.”

Tech and ethics

Major tech companies are increasingly scrutinized over their role in moderating the use of their platforms. With new machine learning services such as SageMaker and the DeepLens AI camera intended to make machine learning more palatable to the average developer, Jassy was asked about his company’s role in responding to potentially sinister uses of AWS:

“If you look at all the services that AWS has there is a potential for a bad actor to choose one of those services to do something malicious or sinister or ill-intended, in the same way you have the ability to do that if you buy a server yourself or use any other software yourself.

We have very straightforward and clear terms of service and if we find anyone violating those terms of services — and I think anything sinister would violate those terms of services — we suspend those customers and they’re not able to use our platform.”

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 1, 2017  10:44 PM

AWS PrivateLink clamps down on endpoint exposure

David Carty

AWS continues to push its Virtual Private Cloud as the new norm for cloud development and deployment, and to further limit public internet exposure.

AWS PrivateLink enables customers to privately access services while keeping all network traffic within an Amazon Virtual Private Cloud (VPC). Instead of whitelisting approved public IP addresses, IT teams can establish private IP addresses and connect them to services via Elastic Network Interface. Amazon services on PrivateLink also support Direct Connect for on-premises connections.

Amazon later added PrivateLink support for AWS-hosted customer and partner services so developers can securely work with third-party tools. Together, AWS PrivateLink and Network Load Balancer enable administrators to identify the origin of incoming requests and route them accordingly.
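
As a rough sketch of how a team might create an interface endpoint for an AWS service over PrivateLink with boto3, the call below uses hypothetical VPC, subnet and security group IDs, and the service name is one example of an endpoint service in us-east-1.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical network IDs; the interface endpoint places an elastic network
# interface with a private IP address in the chosen subnet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.ssm",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["NetworkInterfaceIds"])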

AWS PrivateLink is the latest in a string of new features that secure cloud connections between resources and regions.

AWS re:Invent 2017

Amazon’s yearly cloud conference, AWS re:Invent 2017, was the launchpad for a number of product and service introductions. Visit our essential guide to catch up on all the news from the conference, plus expert tips for IT professionals across a variety of roles.

New features and support

  • JavaScript library adds to dev possibilities. With the AWS Amplify open source library, developers can code JavaScript applications for web or mobile platforms via a declarative interface, apply best practices and perform common scripting actions to speed software deployment. AWS also unveiled a command-line interface that integrates with the AWS Mobile Hub for developers to code apps from scratch.
  • Data goes on lockdown. Several additional features aim to boost data protection in Amazon Simple Storage Service (S3), which has been subject to numerous data leaks thanks to improper customer configurations. A Default Encryption setting for buckets automatically applies server-side encryption for all objects (see the sketch after this list), and Cross-Region Replication improves efficiency and governance of objects encrypted by AWS Key Management Service.
  • Sync up. Amazon Elastic File System (EFS) now includes an EFS File Sync feature that synchronizes on-premises or cloud-based files with the service and replaces file storage and Linux copy tools that required manual configuration.
  • Upgrade your load balancer. A one-step migration wizard enables an IT team to switch from a Classic Load Balancer — formerly Elastic Load Balancing — to a Network or Application Load Balancer. Developers can view and modify load balancer configuration before deployment and add more advanced features afterward.
  • Unclutter your messages. With an added message filter for pub/sub architectures, subscribers to Amazon Simple Notification Service (SNS) can choose specific subsets of messages to receive, and reduce unneeded messages without the need to write and implement their own message filters or routing logic.
  • Personalize viewer content. Three capabilities in Lambda@Edge improve latency and simplify infrastructure. Content-based dynamic origin selection allows attribute-based routing to multiple back-end origins. Developers can also make network calls on CloudFront viewer-facing events, not just origin-facing events. Lambda@Edge can also generate responses that rely on more complex logic to tailor content for specific end users.
  • Extra code protection. AWS CodeBuild now works with VPC resources, which lets dev teams build and test code within a VPC and prevent public exposure of resources. Developers can also cache dependencies for more efficient software builds.
  • Machine learning boosts data warehouses. A Short Query Acceleration feature in Amazon Redshift uses machine learning to predict which short-running requests should move to a separate queue for faster processing – so, for example, queries such as reports and dashboards aren’t blocked behind larger extract, transform, and load requests. Another Redshift feature hops reads and writes to the next available queue without the need for a restart to improve query performance and efficiency.
  • Automate deployments locally. An update to the AWS CodeDeploy agent enables developers to deploy software code on premises to test and debug, before they move code to production.
  • Pull more strings. AWS OpsWorks now supports Puppet Enterprise, which gives administrators a managed service for Puppet automation tools for infrastructure and application management.
  • Visually modify security policies. Admins can create and manage AWS Identity and Access Management policies with a new visual editor, which makes it easier to grant least privileges with lists of resource types and request conditions.
  • Update state machines. AWS Step Functions enables developers to change state machine definitions and configurations for distributed application workflows. The API call UpdateStateMachine makes it easier to modify applications, which previously required a multi-step process.
  • Cloud carpool. AWS unveiled a reference guide for automotive manufacturers to produce vehicles with secure connectivity to the AWS cloud. The guide includes capabilities for local computing and data processing, which can be used to power voice- and location-based services, car health checks, predictive analytics and more.
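
To illustrate the S3 Default Encryption item above, here is a minimal boto3 sketch; the bucket name and KMS key alias are hypothetical.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key alias; every new object is encrypted server-side
# with SSE-KMS once default encryption is set on the bucket.
s3.put_bucket_encryption(
    Bucket="example-audit-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/s3-default-key",
            }
        }]
    },
)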


November 21, 2017  9:55 PM

AWS using KVM and Xen, but users may not feel any impact

Trevor Jones

AWS has added a new hypervisor behind the scenes, but customers likely won’t see much of a direct impact on their cloud environment.

Amazon this month began selling its C5 instance nearly a year after first announcing the compute-heavy VMs would be built with the latest Intel chips. Tucked into a blog post about the C5’s general availability was mention of a new unspecified hypervisor to better coordinate with Amazon’s hardware. The company has since confirmed to SearchAWS that it is “KVM based.” Word of a possible switch to KVM was first reported by The Register, which cited a since-deleted FAQ from Amazon that said the hypervisor was KVM based.

AWS isn’t abandoning Xen, its hypervisor of choice since the outset of the platform. Instead, it will adopt a multi-hypervisor strategy with both Xen and KVM depending on a given workload’s specific requirements. We asked AWS if the introduction of KVM had to do with any issues with Xen; an AWS spokesperson responded with a statement that the P3 instances on sale since October use Xen, and the company will continue to heavily invest in Xen.

“For future platforms, we will use the best virtualization technology for each specific platform and plan to continue to launch platforms that are built on both Xen and our new hypervisor going forward,” the spokesperson said.

The addition of KVM is an interesting behind-the-scenes glimpse from a company that rarely discloses much about its internal architecture, but it’s unclear what impact, if any, customers will feel from it. In AWS’ shared-responsibility model, the hypervisor essentially acts as the line in the sand, with the virtualization layer, host operating system and physical hardware all the responsibility of the cloud provider.

Why would AWS go to the trouble to juggle different hypervisors for different instance types? AWS is believed to be the only major service provider working at scale that uses Xen, so part of the rationale for the move may be to save support and development costs by letting KVM’s far larger community bear the brunt of that work.

“Amazon is notorious for taking open source and leveraging it for their own benefit and not giving back to the open source community,” said Keith Townsend, a TechTarget contributor and principal of The CTO Advisor LLC and founder of TheCTOAdvisor.com.

And after a decade-plus of using Xen, AWS probably would be challenged to move everything to KVM, he said.

Such a hardware virtual machine (HVM) approach means a limited number of HVMs per node and a need for more hardware to handle larger nodes, said Edward L. Haletky, president of AstroArch Consulting in Austin, Texas. It also means AWS’ cloud management tools must go in a new direction and become multi-hypervisor. A bigger question is why Amazon isn’t simply calling the new hypervisor KVM.

“[It] means to me that they have modified it in some unknown way to either help it scale, access existing storage, security and networks, or some other set of elements within KVM,” he said.

The new hypervisor may well fit “hand-in-glove” with AWS hardware to optimize security and performance, as AWS chief evangelist Jeff Barr wrote in the blog post about the C5 instance. But customers likely won’t notice much of a difference.

“It’s more probable that it will impact [AWS’] bottom line but doesn’t necessarily impact the customer,” Townsend said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


October 31, 2017  8:14 PM

P3 instance lures AI, machine learning workloads

David Carty

As AI capabilities find uses in new markets, more companies are turning to the cloud for these high-performance computing workloads. And AWS is opening its arms wider with expanded support for GPU-backed instances to provide those resources, at premium prices.

The P3 Elastic Compute Cloud (EC2) instance, released into general availability last week, improves performance for advanced applications with graphics processing units (GPUs). The P3 instance comes in three sizes: p3.2xlarge, p3.8xlarge and p3.16xlarge, with one to eight NVIDIA Tesla V100 GPUs, 16 GB to 128 GB of GPU memory, 8 to 64 vCPUs and 61 GB to 488 GB of instance memory. The instances also offer enhanced network performance of up to 25 Gbps and up to 14 Gbps of Elastic Block Store (EBS) bandwidth.

The P3 instance fits advanced workloads such as machine learning, high-performance computing and video processing. It is also one of AWS’ most expensive instances, ranging from $3.06 to $24.48 per hour for On-Demand pricing.

Amazon also unveiled new Amazon Machine Images (AMIs) for the P3 instance family. These AWS Deep Learning AMIs include frameworks designed specifically for the NVIDIA Volta V100 GPUs included with the P3 instance family. Developers can use the AMIs to build custom AI models and algorithms.

New features and support

  • PostgreSQL compatibility, new features. After months of preview, AWS made PostgreSQL for Amazon Aurora generally available. AWS hopes to entice users to migrate PostgreSQL workloads to Aurora, promising a more scalable, secure and durable managed database service and lower costs. AWS claims PostgreSQL with Aurora has “three times better performance” than standard PostgreSQL databases. Aurora also added the ability to launch R4 instances with a larger cache and faster memory than the previous R3 generation – a developer can double Aurora’s maximum throughput on MySQL databases. See the sketch after this list.
  • New AWS Batch functionality. AWS Batch can now trigger CloudWatch Events when a job transitions from one state to another, so a developer won’t have to poll the state of each Batch job. The event stream feature sends state updates in near real-time, which can route through CloudWatch Events to targets such as AWS Lambda or Amazon Simple Notification Service. AWS also adjusted the service to spin idle EC2 resources down faster in accordance with the cloud provider’s new per-second billing. AWS Batch previously held on to idle resources for the majority of the billing hour to prevent unnecessary instance launches.
  • ElastiCache supports Redis encryption. Redis, an open source in-memory database, does not natively support encryption, but AWS now provides that capability for Amazon ElastiCache. The service now enables encryption for personally identifiable information at rest and in transit. At-rest encryption protects Amazon Simple Storage Service (S3) and disk backups, while in-transit encryption protects data communicated between Redis servers and clients.
  • Apply Glue via CloudFormation. AWS has included its Glue service, which helps execute ETL jobs, as an option for AWS CloudFormation templates. This support helps IT teams automate AWS Glue functions — such as jobs, triggers and crawlers — to quickly load and prepare data for analytics.
  • Address data warehouse demands. Dense compute (DC2) nodes for Amazon Redshift are a second generation of compute clusters designed to reduce latency and boost throughput for demanding data warehouse workloads. The DC2 nodes, which include Intel E5-2686 v4 (Broadwell) CPUs, DDR4 memory and NVMe-based solid state disks, are available for the same price as the previous generation DC1 nodes.
  • Use Elasticsearch in a VPC. Amazon Elasticsearch Service (ES) now supports access from an Amazon Virtual Private Cloud (VPC), which removes the need to connect to the service over the public internet. IT teams can now use Elasticsearch, an open source search engine and analytics service, without configuring firewall rules and domain access policies for ES.
  • Geographic application restriction. AWS Web Application Firewall now includes an option to restrict access to applications based on geographic location to fulfill licensing requirements and security needs. Geographic Match Conditions allows a business to create a whitelist that only allows visitors from specified countries, or a blacklist that blocks access from certain countries.
  • CodePipeline takes pushes from CodeCommit. The latter service can now send an Amazon CloudWatch Event to the former service to trigger a pipeline, which eliminates the need to periodically check for code changes.
  • ALBs support multiple certificates. Businesses can now host multiple secure HTTPS applications and assign each one a Secure Sockets Layer certificate behind one Application Load Balancer. AWS uses Server Name Indication to allow these apps to run on the same load balancer. This means businesses don’t have to use risky Wildcard or complicated Multi-Domain certificates to run multiple HTTPS apps on one load balancer.
  • Migrate to new database sources. The AWS Database Migration Service (DMS) added Azure SQL Database and S3 as sources. S3 was previously supported as a target, but its addition as a source allows teams to freely move data to and from S3 buckets and other DMS sources. Amazon EC2 also now supports Microsoft SQL Server 2017 for extra scalability and performance.
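
To illustrate the Aurora PostgreSQL item above, here is a minimal boto3 sketch that creates a PostgreSQL-compatible Aurora cluster with an R4 instance; the identifiers and credentials are hypothetical placeholders.

import boto3

rds = boto3.client("rds")

# Hypothetical cluster identifiers and credentials.
rds.create_db_cluster(
    DBClusterIdentifier="reporting-cluster",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_ME",
)

# Attach an R4 instance to the cluster for the larger cache and faster memory.
rds.create_db_instance(
    DBInstanceIdentifier="reporting-instance-1",
    DBClusterIdentifier="reporting-cluster",
    DBInstanceClass="db.r4.large",
    Engine="aurora-postgresql",
)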


September 29, 2017  5:38 PM

AWS GPU instance type slashes cost of streaming apps

David Carty

The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes that process more affordable.

Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.

The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS’ Graphics Design GPU instances come in four sizes that range from 2-16 virtual CPUs and 7.5-61 gibibytes (GiB) of system memory, and run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.

Developers can now also choose between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps but charge fees for every instance in the fleet. On-Demand fleets charge fees for instances only while end users are connected, plus an hourly fee, but there is a delay when an end user accesses the first application.
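
A rough boto3 sketch of launching an On-Demand fleet on a Graphics Design instance is below; the fleet name, image name and instance type string are assumptions for illustration.

import boto3

appstream = boto3.client("appstream")

# Hypothetical fleet and image names; ON_DEMAND bills running instances only
# while users are connected, at the cost of a short startup delay.
appstream.create_fleet(
    Name="design-team-fleet",
    ImageName="graphics-design-apps-image",
    InstanceType="stream.graphics-design.large",  # assumed Graphics Design size
    FleetType="ON_DEMAND",
    ComputeCapacity={"DesiredInstances": 2},
)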

New features and support

In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:

  • ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during traffic spikes and provides a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing features, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers.
  • New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
  • X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
  • AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
  • Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed; see the sketch after this list. Previously, Spot Instances were terminated when prices rose above the user-defined maximum. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
  • EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
  • New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
  • Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches to the resources and conditions it affects. Other updates include a wizard for admins to create the IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
  • Six new data streams. Amazon Kinesis Analytics, which enables businesses to process and query streaming data in an SQL format, has six new types of stream processes to simplify data processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher data volume streams.
  • Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
  • AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
  • RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
  • Assess RHEL security. Amazon Inspector expanded support for Red Hat Enterprise Linux (RHEL) 7.4 assessments, to run Common Vulnerabilities and Exposures, Amazon Security Best Practices and Runtime Behavior Analysis scans in that RHEL environment on EC2 instances.
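
To illustrate the Spot stop-and-start item above, here is a minimal boto3 sketch of a persistent Spot request that stops, rather than terminates, on interruption; the AMI, key pair and instance type are hypothetical, and the AMI must be EBS-backed.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical launch specification; a persistent request with the stop
# behavior lets the instance resume after capacity returns.
ec2.request_spot_instances(
    InstanceCount=1,
    Type="persistent",
    InstanceInterruptionBehavior="stop",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m4.large",
        "KeyName": "ops-keypair",
    },
)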


August 31, 2017  5:18 PM

EC2 Elastic GPUs boost compute efficiency, flexibility

David Carty

AWS customers have long been able to add graphics acceleration to instances, but with little flexibility. To change that, the cloud provider has finally fulfilled a promise from early last year, with Elastic GPUs that fit enterprise needs.

Developers attach Elastic GPUs to Elastic Compute Cloud (EC2) instances to boost graphics performance in applications during intermittent spikes in workloads. EC2 Elastic GPUs are network-attached graphics acceleration available in sizes that range from 1 GB to 8 GB of GPU memory.

GPU users were previously limited to spinning up a G2 or G3 instance. But those require investment in a full physical GPU, which overshoots some business needs, resulting in costly and wasteful resource usage. Teams can use Elastic GPUs at a lower price than G2 and G3 instances, using just a portion of the physical GPU for graphics-intensive apps.

Elastic GPUs also help customers that need graphics acceleration without being restricted to a particular instance type. They choose another instance type – such as memory- or storage-optimized – and attach an Elastic GPU to it.
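
A rough boto3 sketch of attaching an Elastic GPU at launch is below; the AMI and instance type are hypothetical, and the smallest 1 GB size is assumed.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical Windows AMI; a small Elastic GPU is attached to a general
# purpose instance instead of launching a full G3 GPU instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.xlarge",
    MinCount=1,
    MaxCount=1,
    ElasticGpuSpecification=[{"Type": "eg1.medium"}],
)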

Elastic GPUs for Windows are now generally available in US-East-1 and US-East-2. AWS also published pricing details and documentation for Elastic GPUs.

Busy month for AWS

August was a busy month for AWS, with updates from both the AWS Summit in New York and VMworld in Las Vegas.

AWS and VMware finally released their hybrid cloud service nine months after they unveiled the partnership. Enterprises were particularly interested in pricing and functionality details, while small businesses might not be a fit for the service.

At the AWS Summit, AWS unveiled new services for migration and security, a variety of new features for Elastic File System (EFS), Config and CloudTrail, and an upgrade to CloudHSM. And AWS Glue, a service revealed at last year’s re:Invent, is now generally available.

More new features and support

  • DynamoDB adds VPC Endpoints. Amazon DynamoDB offers more secure network traffic via a free Virtual Private Cloud (VPC) Endpoints feature, which is now generally available. VPC Endpoints keep traffic within the AWS cloud instead of exposing it to the public internet, in line with businesses’ strict compliance needs; see the sketch after this list.
  • More HIPAA eligibility. A new AWS Quick Start helps healthcare enterprises automate a deployment based on a CloudFormation customizable template that adheres to HIPAA regulatory requirements. Additionally, Amazon Cloud Directory implemented new controls to help teams build and run apps that meet HIPAA and PCI DSS guidelines. As with all HIPAA-eligible services, an AWS user must first execute a Business Associate Agreement before building an app that achieves compliance.
  • Develop serverless functions locally. A new beta command-line tool, SAM Local, built on the AWS Serverless Application Model (SAM), enables dev teams to test and debug AWS Lambda functions locally. Developers can write functions in Node.js, Java and Python, choose an integrated development environment, and simulate function triggers and Amazon API Gateway calls to invoke functions.
  • AWS Marketplace adds functionality, new region. Users can now visualize, analyze and control their AWS Marketplace spending via new integration with several existing cost management tools: AWS Cost Explorer, AWS Cost and Usage Report and AWS Budgets. In addition, the AWS Marketplace also is now available in the AWS GovCloud region for public sector customers.
  • New capabilities for Simple Email Service. A new Reputation Dashboard helps Amazon Simple Email Service (SES) users track bounce and complaint rates for an account, and act on sending failures. Amazon SES also added dedicated IP pools so an AWS customer can send emails from a specific IP address, or organize IP addresses into configurable pools for large email sends. SES also added capabilities that enable businesses to track and optimize email recipient engagement.
  • AWS adds global edge locations. AWS added three new edge locations for its Amazon CloudFront CDN service: Chicago (now home to two edge locations), Frankfurt (six locations) and Paris (three locations). In all, AWS has 93 global edge locations.
  • Amazon RDS SQL Server quadruples max database size. Database instances for SQL Server on Amazon Relational Database Service (RDS) now range up to 16 TB of storage, four times the previous maximum of 4 TB. The maximum ratio of IOPS to storage also increased fivefold, from 10:1 to 50:1. With these new limits, available on Provisioned IOPS and General Purpose storage types in all regions, databases and data warehouses can support larger workloads without additional RDS instances.
  • New CodeCommit features. Amazon’s code repository service, AWS CodeCommit, added several new features and integrations. The service now sends repository state changes to Amazon CloudWatch Events, which enables developers to trigger workflows based on those changes. CodeCommit users can now view, change and save preferences to customize the service’s dashboard presentation. Finally, CodeCommit added a Git tags view that eases code repository navigation.
  • EFS adds more permissions. Amazon EFS added support for special permissions, enabling administrators to customize granular access permissions for directories. EFS now supports setgid, which applies ownership of new directory files to the group associated with the directory, and sticky bit special permissions, which restrict file deletion or renaming to either the file or directory owner or to the root user. EFS users can now also manage access to executable files so that end users can launch them but not read or write them.
  • CloudTrail supports Lex. Amazon CloudTrail now integrates with Amazon Lex to track application programming interface (API) calls to and from the conversational interface app.
  • New render management tool. AWS’ new render management system, Deadline 10, is now available, allowing developers to launch and manage rendering fleets.
  • Amazon Cloud Directory boosts search performance. Amazon Cloud Directory users can now optimize searches by defining facets of schema to limit queries to subsets of a directory. A schema contains multiple attributes called facets, which help create different object classes and enable multiple apps to share one directory.
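
To illustrate the DynamoDB VPC Endpoints item above, here is a minimal boto3 sketch; DynamoDB uses a gateway-type endpoint, so the endpoint is added to a route table rather than a subnet, and the VPC and route table IDs are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC and route table IDs; DynamoDB traffic from the VPC then
# stays on the AWS network instead of traversing the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)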


July 31, 2017  11:27 PM

G3 instance doubles predecessor’s processing power

David Carty

Amazon is upgrading its compute power to court more cloud-hosted graphics-intensive workloads, seeking to benefit from the high cost customers pay for that heavy compute power.

AWS has added a new G3 instance to its graphics-optimized Elastic Compute Cloud (EC2) instances to power 3-D rendering and visualization, computer-aided design, video encoding, and augmented and virtual reality workloads. While the hardware upgrade could entice enterprises, IT teams should be wary of high costs and processing times with the instances.

The largest of the three G3 instances contains twice the CPU processing power and eight times the memory of the previous G2 generation. The instances, which provide enhanced video encoding and networking features, run on Intel Xeon E5-2686 v4 (Broadwell) processors and are backed by NVIDIA Tesla M60 GPUs.

AWS customers can launch EC2 instances from the AWS Management Console, AWS software development kits, AWS Command Line Interface and other libraries.

New features and support

  • Amazon Inspector adds triggers. The Amazon Inspector service, which assesses security vulnerabilities in AWS deployments, can launch automatic scans through integration with CloudWatch Events. With Assessment Events, a customer can create event rules in CloudWatch that notify Inspector to run an assessment on a cloud environment. Users can also schedule recurring assessments and monitor other services to look for event triggers. Inspector displays Assessment Events in its console so a user can see all the triggers assigned to an assessment.
  • Visualize resource configurations. A dashboard for AWS Config summarizes account resources and makes configuration history easily accessible. The dashboard displays the number of resources in an account and resources by type, so an administrator can quickly identify resources that fail to comply with AWS Config Rules.
  • CloudWatch gains speed. Amazon CloudWatch now supports high-resolution custom metrics and alarms, enabling SysOps teams to monitor deployments at one-second granularity; see the sketch after this list. Metrics publish in as little as one second and alarms occur in as few as 10 seconds, for more immediate and granular visibility into a cloud environment. The support also includes dashboard widgets.
  • Spot Fleets improve tagging. Users can now apply up to 50 tags to EC2 instances launched in a Spot Fleet, to quickly identify specific instances and improve access control, compliance protocols and cost accounting for those compute resources. SysOps teams define which tags to apply to a Spot Fleet request, and the fleet applies those tags to the individual instances it launches. The tagging feature is available in all regions.
  • New HIPAA eligibility. Two Amazon services gained HIPAA eligibility and PCI compliance. Amazon WorkSpaces is a desktop as a service that enables administrators to deploy HIPAA-compliant work environments for employees. The service also adheres to Payment Card Industry (PCI) Security Standards, which lets applications and files safely interact with data from card holders. Amazon WorkDocs, a file sharing and collaboration service, can safely handle sensitive health or cardholder information with HIPAA eligibility and PCI DSS compliance. Both updates help AWS customers, particularly in the healthcare field, conform to strict compliance standards.
  • Lambda@Edge goes GA. Eight months after its unveiling, the AWS Lambda@Edge service is generally available for developers who want to run Node.js-based Lambda functions across AWS edge locations. Developers upload code to Lambda and configure it to trigger CloudWatch Events. AWS then routes the request to the edge location that’s geographically closest to the customer and executes it. For example, an IT team can create custom web pages and logic at lower latencies for individual Lambda requests based on their geographic origins.
  • Reduce unwanted email. An added flow rules feature in Amazon WorkMail enables an IT team to filter inbound email traffic to reduce unwanted email messages from specific senders, route email to junk folders and ensure delivery of priority email. Rules can apply to individual email addresses and entire email domains that AWS hosts.
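
To illustrate the high-resolution CloudWatch metrics item above, here is a minimal boto3 sketch; the namespace and metric name are hypothetical, and StorageResolution=1 marks the data point as high resolution.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical custom metric; high-resolution data points can back alarms
# that evaluate at 10-second periods.
cloudwatch.put_metric_data(
    Namespace="Custom/Checkout",
    MetricData=[{
        "MetricName": "QueueDepth",
        "Value": 42.0,
        "Unit": "Count",
        "StorageResolution": 1,
    }],
)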


July 11, 2017  8:31 PM

Memory issues shine light on hidden serverless environment costs

David Carty

This is a guest blog post by Bob Reselman, a nationally known developer, system architect, writer and editor. You can read more of his work at DevOpsAgenda.com.

Serverless computing is all the rage among developers, and with good reason.

A serverless environment is the new vista in modern application development. AWS has Lambda; Microsoft has Azure Functions; Google has Cloud Functions. These technologies are not going away. In fact, we’ll see a lot more work take place to create, build and test code in which the function is the unit of deployment.

Serverless-based applications are easy to architect and easy to deploy. A developer decides the services he needs, wires them up in a script, hits the deploy button and runs some tests — that’s it. Developers don’t need to worry about hardware, capacity or scalability; the serverless provider takes care of all that. Just pay the bill for the resources you use.

It couldn’t be simpler, right? Well, maybe not.

The architecture of a serverless environment with a simple REST API implemented in AWS is fairly straightforward. A set of RESTful endpoints uses Amazon API Gateway and wires each endpoint to some AWS Lambda functions. One Lambda function uses Simple Storage Service (S3) as a data store, and the others store data in an Amazon DynamoDB database.

The API Gateway provides a way to get data in and out of the application; the functions handle computation, while S3 and DynamoDB provide the data storage. What’s not to like? AWS will scale up your application as needed. All you need to do is pay the bill.

So, let’s talk about that bill. Let’s use Will, a systems engineer, as an example.

Will is a low-level engineer who works on content delivery networks for a major telecom. He works closely with bare metal, well below the surface of the average developer’s day-to-day dealings with the cloud. In Will’s world, memory allocation counts.

Over the years, with the growing popularity of higher-level languages such as C# and Java, the C library function malloc, which requests memory from the operating system, has become hidden in the language runtime engines, including the common language runtime for .NET and the Java VM. But memory has to be allocated no matter what, and the way you get memory is via the operating system using malloc:

#include <stdlib.h>

char *str;

str = (char *) malloc(15);  /* request a 15-byte buffer from the allocator */

Here is where it gets interesting: the efficiency of malloc varies depending on your implementation. Standard malloc is inefficient in situations with a high degree of concurrency in multiprocessor environments, so Will won’t use it. It locks up memory — used or unused — and places extra burden on the CPU. Will prefers tcmalloc, created by Google, which exposes configuration capabilities that allow memory allocation to work more efficiently. And it avoids wasteful CPU cycling.

So, what does a memory allocation binary have to do with your AWS bill? It actually has a lot to do with it.

AWS makes money on Lambda by billing you for the time it takes to execute code, which translates into CPU utilization — though you also get billed by your request volume. Thus, every piece of code in your Lambda function that declares a variable is subject to the memory allocation executable, which is most often malloc. That means you might have created code that runs squeaky clean on your local machine or even in a private cloud. But when it gets to AWS, it kills the CPUs.

The provider’s memory allocation infrastructure might not be optimized, so wasteful cycles get spun and you get billed. It’s just like giving a package to a messenger and letting him determine the best route, which might include a lot of stop lights. You pay for the messenger’s time no matter the route efficiency.

Of course, I am not saying AWS is a nefarious agent; quite the opposite. But the serverless environment is theirs to run, and the IT shop doesn’t have a lot of say in the matter other than region selection.

Without the ability to optimize a serverless environment to accommodate computationally intensive applications, there is a real financial risk for enterprise IT teams. Hopefully, the major players realize that user optimization for cloud services offers a competitive advantage and more granular capabilities. Otherwise, engineers will fly blind without the aid of instruments on the control panel. And, as we’ve learned on the terrain, when disaster looms, you can’t fix what you can’t see.

