AWS Cloud Cover


April 17, 2018  2:54 PM

Three laws of IoT connectivity govern AWS’ strategy

David Carty

The growth and proliferation of connected sensors have drawn AWS headlong into the IoT market. And a recent keynote speech provided clues as to how the cloud provider shapes its tools to manage those workloads.

Many industries, such as agriculture, health care and energy, have embraced the IoT, but businesses must also face IoT connectivity and other technology limitations, particularly as they pertain to the cloud, said Dirk Didascalou, AWS vice president of IoT, at the MIT Enterprise Forum’s recent Connected Things conference. In a question-and-answer session, Didascalou revealed three principles that govern his team’s present and future strategy and underscore the need for edge computing to address IoT connectivity concerns.

“We call them laws because we believe they will still be valid also with the advance of technology,” Didascalou said.

Here are those three laws:

  • The law of physics. Physical limitations of data transfers to the cloud can be prohibitive as autonomous devices increasingly need real-time responses to triggers. This means some IoT devices need some degree of local compute to get around data transfer speed limitations, particularly where safety is concerned and each millisecond of delay can cost lives, as with self-driving cars. “The speed of light is only [so] fast,” Didascalou said.
  • The law of economics. Exponential data growth creates performance bottlenecks and cost overruns. It’s simply not feasible for enterprises to transmit all IoT data to the cloud in an economical fashion, especially when transmission and storage costs are factored in.
  • The law of the land. Legal and geographical restrictions can hamper data collection and transfers. For example, GDPR regulations in Europe and HIPAA guidelines in the United States mean enterprises must adapt IoT deployments to fit compliance needs. Additionally, some parts of the world don’t have the infrastructure to support regular IoT connectivity to the open internet, which limits cloud availability.

Over the last six months, AWS has reinforced its IoT strategy with services for simpler Lambda invocation, device management, security policies, IoT analytics and microcontrollers. Didascalou’s three laws could hint at enhanced AWS edge compute capabilities to negate the limitations of unreliable or unfeasible IoT connectivity to the cloud.

“As long as you believe that these three [laws] will coexist, we need to figure out with our customers, ‘How can you take the benefit of the cloud but do local compute?'” Didascalou said. “[These laws] won’t go away; they will be there forever. So we just try to find a technical solution to that instead of pretending it’s not going to happen.”

April 12, 2018  7:20 PM

AWS Lambda serverless platform holds center stage for devs

Jan Stafford

Enterprises are adopting AWS Lambda faster than competing serverless platforms from Google, Microsoft, IBM and others, citing ease of use to replace manual with automated functions and the broad reach of Amazon cloud services.

AWS Lambda — an event-driven automation platform — owns 70% of the active serverless platform user base, according to a survey by the Cloud Native Computing Foundation. By comparison, the nearest competitors’ shares were much lower, with Google Cloud Functions at 13% and both Microsoft Azure Functions and Apache/IBM OpenWhisk at 12%.

Amazon released the AWS Lambda serverless platform in 2014, while the above-mentioned competing products came out in 2016. In the interim, AWS made hay while the sun shone. “AWS really got people to pay attention to Lambda and — unusually for enterprises — start using it quickly,” said Daryl Plummer, managing VP and chief of research at Gartner in Atlanta, Ga. Enterprises’ prototype phases for Lambda shortened to a few weeks from what is usually a months-long process, said AWS technology evangelist Jeff Barr.

Attendees at an AWS Summit in San Francisco last week cited their reasons to embrace AWS Lambda. Matthew Stanwick, systems analyst at Sony Network Entertainment International in San Diego, Calif., said he finds it easier to script and deploy simple tests and terminate cloud instances. “I can build tests right there on the console with no problem,” he said.

AWS Lambda’s family ties

Amazon doesn’t have any particular advantage in serverless over competitors Google, Microsoft or IBM, but it has better promoted Lambda’s ease of use and an overall services portfolio that supports serverless, said Plummer. For example, Lambda hides some of the more complex mechanisms, such as Amazon EC2 provisioning, scaling up and down, and VM management, and it can be used as a front end to facilities like S3 or CloudFront caching for content delivery. “In short, anything that AWS does can be made easier and front-ended by Lambda,” he said.

AWS also quickly connected Lambda to many different event sources, Barr said. “People started to think of it as this nervous system they could connect up to the incoming flow of data into S3, to message queues and to notifications that are wired into different parts of the AWS infrastructure,” he said. At its release, AWS Lambda was made a part of the platform structure behind the Alexa Voice Service, and that gave developers a practical place to try out serverless. “Developers can deliver functions and be responsive without having to rebuild the platform itself,” Plummer said. Alexa skill code can be released as an AWS Lambda function that, typically, enables voice-activated activities. Software actions or natural world events also can generate events. For example, a request for the time can trigger a time function embedded in the interaction model for Alexa.

Lambda’s serverless support of the AWS family of services makes it less risky than investments to build a serverless architecture from scratch, said Clay Smith, developer advocate at New Relic. People can run experiments with it, such as DevOps automation tests, and if these small ventures don’t succeed, they’ve only paid for usage time, he said.

What’s ahead for serverless

Right now, serverless platforms are still more of a sideshow, Plummer said. But soon vendors will deliver more and more critical functions to make the technology more robust, and usage will spread. Eventually, everyone will have a serverless platform beneath their newest applications, to provide flexibility for the people-centric workloads being built today, he said.

“Imagine a world where you are not searching through app stores anymore, but you are looking for a function,” Plummer said. It doesn’t matter who built the function, only that the function is reliable. “Functions delivered from developers all over the planet will truly realize the service-oriented architecture vision.”


March 30, 2018  7:31 PM

Extended duration for IAM roles appeases some, alarms others

David Carty

Cloud security best practices and IT convenience don’t always align, but as standards such as GDPR take hold and new vulnerabilities constantly emerge, maybe it’s OK to loosen the reins from time to time.

AWS has increased the maximum session time for Identity and Access Management (IAM) roles, extending the cap from one hour to 12 hours. Federated users can request credentials from the AWS Security Token Service via the AWS SDK or Command Line Interface.

AWS recommends the lowest possible threshold for IAM roles, but IT teams complained they were kicked out during long-running workloads. This move should appease those folks, even if extended-time credential validation is a cloud security no-no. Teams with tight security restrictions might want to steer clear, or at least stay below the time limit for IAM roles.
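
For teams that script the handoff, the flow looks roughly like this with boto3. This is a minimal sketch; the role name, ARN and session name are hypothetical, and the role’s maximum session duration must be raised before the longer session can be requested.

    import boto3

    iam = boto3.client("iam")
    sts = boto3.client("sts")

    # One-time setup: raise the role's maximum session duration to 12 hours.
    # Until this is changed, the cap stays at the old one-hour default.
    iam.update_role(
        RoleName="LongRunningBatchRole",  # hypothetical role name
        MaxSessionDuration=43200,         # 12 hours, in seconds
    )

    # Request temporary credentials for the full 12-hour window.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/LongRunningBatchRole",
        RoleSessionName="nightly-etl",    # hypothetical session name
        DurationSeconds=43200,            # must not exceed MaxSessionDuration
    )
    print("Session expires at:", response["Credentials"]["Expiration"])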

Some AWS admins expressed confusion over whether the IAM roles’ duration applied to CloudFormation, but AWS’ blog post explicitly mentions that use case. In a reply to a reader comment, AWS stated that a “CloudFormation template will respect the session duration set for your IAM role.”

In addition to the extended IAM role duration, AWS rolled out several other new features this month, including those related to DynamoDB, its own documentation and containers, that might pique the interest of dev and operations teams.

DynamoDB gives backup a boost

Amazon continued to enhance its DynamoDB NoSQL database service with the addition of two backup features: continuous backups and point-in-time recovery (PITR), which was previously in preview. Once PITR is enabled via the AWS Management Console or an API call, an application can make erroneous writes and deletes to its digital heart’s content. Admins can restore the DynamoDB table to any point within the last 35 days, or contact AWS to restore deleted tables.
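
Both halves of the feature are single API calls. A minimal boto3 sketch, with hypothetical table names:

    import boto3
    from datetime import datetime, timedelta

    dynamodb = boto3.client("dynamodb")

    # Turn on point-in-time recovery for an existing table.
    dynamodb.update_continuous_backups(
        TableName="orders",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

    # Restore the table's state from 24 hours ago into a new table.
    dynamodb.restore_table_to_point_in_time(
        SourceTableName="orders",
        TargetTableName="orders-restored",
        RestoreDateTime=datetime.utcnow() - timedelta(hours=24),
    )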

These features, along with DynamoDB Global Tables and Multi-Master, eliminate several DynamoDB enterprise workarounds. AWS also released DynamoDB Accelerator last summer to boost database performance at scale.

AWS opens its books — sort of

Another AWS update in March could be a boon for some AWS developers: the ability to access and submit pull requests on AWS documentation through GitHub. AWS open sourced more than 100 user guides to GitHub, which should help its documentation team clarify concepts, improve code samples and fix bugs.

This will surely improve AWS documentation, but developers also want more transparency, said Mike Tria, head of infrastructure at Atlassian, in a discussion with SearchAWS Senior News Writer Trevor Jones.

“The more they open that stuff up, the more my developers can know how [AWS] is building and build appropriately to that,” he said. “It enables developers to make assumptions about how it works, as opposed to thinking it’s just AWS magic.”

Containerize your excitement

Lastly, an additional service discovery feature for Amazon Elastic Container Service (ECS) simplifies DNS housekeeping for services within a VPC. This feature removes the need for AWS admins to run their own service discovery system or connect containerized services to a load balancer. ECS now maintains a registry that uses the Route 53 Auto Naming API, then maps aliases to service endpoints.

The service discovery feature also enables health checks — via either Route 53 or ECS, but not both — to ensure that container endpoints remain healthy. If a container-level check reveals an unhealthy endpoint, it will be removed from the DNS routing list.
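
A rough boto3 sketch of the wiring, assuming a private DNS namespace already exists; the namespace ID, cluster, task definition and subnet are all placeholders:

    import boto3

    sd = boto3.client("servicediscovery")
    ecs = boto3.client("ecs")

    # Register a discovery service in the namespace; ECS will keep its DNS
    # records in sync as tasks start and stop.
    registry = sd.create_service(
        Name="web",  # resolvable as web.<namespace domain>
        DnsConfig={
            "NamespaceId": "ns-0123456789abcdef",  # hypothetical namespace
            "DnsRecords": [{"Type": "A", "TTL": 60}],
            "RoutingPolicy": "MULTIVALUE",
        },
        HealthCheckCustomConfig={"FailureThreshold": 1},  # container-level checks
    )

    # Point the ECS service at the registry.
    ecs.create_service(
        cluster="prod",
        serviceName="web",
        taskDefinition="web:1",
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {"subnets": ["subnet-0abc1234"]}
        },
        serviceRegistries=[{"registryArn": registry["Service"]["Arn"]}],
    )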


March 1, 2018  8:18 PM

Serverless apps, encryption top AWS features in February

David Carty

Many cloud developers espouse the benefits of serverless computing, but others find the approach unwieldy or difficult to manage. The AWS Serverless Application Repository, one of several AWS features released into general availability in February, can help wary developers join the Lambda fraternity — or sorority.

The Serverless Application Repository service enables developers to publish serverless application frameworks to share privately with a team or organization, and publicly with other developers. Likewise, developers can deploy serverless code samples, components or entire apps that cover a variety of uses. Each application in the repository breaks down the AWS resources it will consume — don’t confuse “serverless” with “free.”

A developer can search the repository for an application that fits his or her use case via the AWS Lambda console, AWS Command Line Interface and AWS SDKs. He or she can then tweak the configuration as desired before deployment.
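
Under the hood, deployment runs through CloudFormation. A hedged boto3 sketch, with a hypothetical application ID and parameter:

    import boto3

    repo = boto3.client("serverlessrepo")
    cfn = boto3.client("cloudformation")

    # Stage the repository application as a CloudFormation change set.
    change_set = repo.create_cloud_formation_change_set(
        ApplicationId="arn:aws:serverlessrepo:us-east-1:123456789012:"
                      "applications/hello-world",  # hypothetical application
        StackName="hello-world-dev",
        ParameterOverrides=[
            {"Name": "Stage", "Value": "dev"},     # hypothetical parameter
        ],
    )

    # Execute the change set to create the stack and its resources.
    cfn.execute_change_set(ChangeSetName=change_set["ChangeSetId"])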

A variety of software providers, including Datadog, Splunk and New Relic, contribute to the Serverless Application Repository to broaden its reach into areas such as internet of things and machine learning processes. The Serverless Application Repository is currently available in 14 global regions.

Red Hat opens the door to AWS features, hybrid cloud

AWS’ embrace of hybrid cloud technology opens up new avenues for other software companies. Among them is Red Hat, which last month released Red Hat Satellite 6.3 with deeper integration with Ansible and AWS. Returning the favor, Amazon EC2 now supports Satellite and Satellite Capsule Server, enabling users to manage their Red Hat infrastructure via EC2 instances.

Don’t rest on your security responsibilities

If last year’s rash of S3 bucket leaks didn’t scare the daylights out of you, perhaps nothing will. Those with a healthy fear of exposure can now use DynamoDB for server-side encryption via AWS Key Management Service. A user can continue to query data unabated, while the AWS-managed encryption keys protect security-intensive apps.

Unlike other AWS features and services, DynamoDB’s server-side encryption lacks the option to use customer master keys — for now — and only works natively for new tables. Encryption at rest adds to another recent DynamoDB feature, VPC Endpoints, which isolates databases from internet exposure and unauthorized access.
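
A minimal boto3 sketch of encryption at rest on a new table; the table name is hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Server-side encryption is requested at table creation time; reads and
    # writes against the table afterward are unchanged.
    dynamodb.create_table(
        TableName="payments",
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        SSESpecification={"Enabled": True},  # AWS-managed keys via KMS
    )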

Alexa, use your indoor voice

The next time you speak softly to your computer, it might just whisper back.

The Amazon Polly text-to-speech service added a phonation tag that enables developers to produce softer speech. It’s one of several new AWS features that enhance the voice output options available via Speech Synthesis Markup Language (SSML), along with volume enhancement and timbre adjustment.
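
A short boto3 sketch of the soft phonation effect; the voice and text are arbitrary:

    import boto3

    polly = boto3.client("polly")

    # The SSML phonation effect softens the synthesized voice.
    ssml = (
        '<speak>'
        '<amazon:effect phonation="soft">'
        "Don't worry, this stays between us."
        '</amazon:effect>'
        '</speak>'
    )

    response = polly.synthesize_speech(
        Text=ssml,
        TextType="ssml",
        VoiceId="Matthew",
        OutputFormat="mp3",
    )

    with open("soft.mp3", "wb") as f:
        f.write(response["AudioStream"].read())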

Amazon Connect also added support for SSML to control certain aspects of speech for customer contact center calls. And Amazon Lex added support for customized responses directly from the AWS Management Console, which simplifies the process of building chatbots.


February 28, 2018  7:00 PM

Dropbox is likely an outlier with its successful cloud data migration off AWS

Trevor Jones

Dropbox’s move off AWS was a windfall for the company, but most traditional corporations shouldn’t bank on similar success.

The file hosting company saved nearly $75 million in infrastructure costs over the past two years following a cloud data migration off a “third-party data center service provider,” according to an S-1 form filed with the U.S. Securities and Exchange Commission. That third party isn’t named, but it’s likely AWS, given Dropbox’s past statements.

Those savings may reinvigorate debate in some circles about whether to house infrastructure on-premises or on the cloud, but Dropbox is likely more of an outlier than a harbinger of what others can expect.

Plenty of case studies highlight the cost benefits to move to the public cloud, but Dropbox’s feat is rare. First, the company was an early AWS success story that made waves in 2016 when it disclosed it had moved 90% of its users’ data to its own custom-built infrastructure. Second, Dropbox’s IPO filing precipitated a financial report with unique insights into those cost differentials, and many customers may balk at such deep disclosures.

Dropbox was never 100% on AWS, nor has it completely abandoned AWS. It originally split its architecture to host metadata in private data centers and to host file content on Simple Storage Service (S3), but Dropbox built systems to better fit its needs, and so far that has translated to big savings following its cloud data migration. That transition didn’t come cheap, however. The company spent more than $53 million on custom architecture in three colocation facilities to accommodate exabytes of storage, according to the S-1 filing.

Dropbox said it stores the remaining 10% of user data on AWS, in part to localize data in the U.S. and Europe, and it uses Amazon’s public cloud to “help deliver our services.” (Dropbox declined to comment for this report.)

Dropbox can serve as an example for SaaS or online services providers that don’t want to outsource a key pillar of a business value proposition to AWS, said Melanie Posey, an analyst with 451 Research. But that model may be feasible only for digital service providers with established businesses and patterns of demand, she said.

After years of debate about on-premises versus the cloud, IT leaders have become less dogmatic, and more pragmatic. Corporations increasingly trust public cloud providers to host their workloads, but most established corporations won’t relinquish the entirety of their private infrastructures any time soon. That’s why the major cloud providers, which make money hand over fist as customer data flood their hyperscale data centers, have either partnered with a private data center stalwart or built their own scaled-down versions of their cloud to sit inside customers’ own facilities.

Still, the debate persists in part because of the lack of clarity on bottom-line costs and the difficulty of straight per-server comparisons. The shift from CapEx to OpEx has caused consternation among companies that expected big savings on the cloud, particularly those that failed to account for per-second billing and other ancillary costs. Moreover, the public cloud removes part or all of the manual work of in-house infrastructure maintenance.

Public cloud advocates argue the true benefit is not in cost, but rather in agility and access to higher-level services. It’s also unclear how true the axiom is that the public cloud becomes cost-prohibitive once workloads reach a certain scale. The best example to rebut that argument is Netflix, which operates more than 150,000 instances on AWS to serve more than 100 million customers. AWS is also known to give preferential pricing to some of its largest customers and has greatly expanded its professional services.

Perhaps the biggest takeaway from Dropbox’s migration windfall is what industry observers have said for years: focus your IT dollars on what makes your business different. Dropbox’s strategy to build one of the largest data stores in the world depended on owning custom architecture, and that bet appears to have paid off big time. But if infrastructure is just a means to an end, maybe don’t drop $50 million on physical assets that are of little consequence to your business outcomes.


January 30, 2018  4:08 PM

EC2 features aim at network, performance improvements

David Carty

AWS users have a green light to rev the full bandwidth potential of a particular instance.

AWS removed its 5 Gbps limit and improved performance for network connections to and from Elastic Compute Cloud (EC2) instances. One of several new EC2 features, this speeds up connections between EC2 instances and Simple Storage Service (S3) resources, as well as connections between instances in different availability zones. Network bandwidth is now up to five times faster for instances with enhanced networking capabilities.

AWS also unveiled other EC2 features in January. Developers can now pause and resume C5 and M5 Spot Instances and Spot Fleets that rely on Elastic Block Store (EBS). When AWS interrupts a workload due to a Spot Instance price increase, those instances can now hibernate and retain their ID numbers, instead of the default behavior to terminate and restore from the EBS root device.
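
A hedged boto3 sketch of a hibernation-enabled Spot request; the AMI ID is a placeholder, and hibernation carries further requirements (such as an encrypted EBS root volume large enough to hold instance memory) not shown here:

    import boto3

    ec2 = boto3.client("ec2")

    # Ask AWS to hibernate, rather than terminate, on interruption.
    ec2.request_spot_instances(
        InstanceCount=1,
        Type="persistent",                         # required for hibernation
        InstanceInterruptionBehavior="hibernate",  # default is "terminate"
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",    # hypothetical AMI
            "InstanceType": "m5.large",
        },
    )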

These new instance capabilities add to the slew of new EC2 features and types unveiled during AWS re:Invent, including bare metal, M5, H1 and T2 Unlimited instances.

2018 predictions

AWS’ lead in the public cloud market might be in danger, as Microsoft and Google capture more customer workloads. How will AWS respond to the threats of its competitors, and what’s in store for 2018? Our SearchAWS contributors weigh in with their predictions for this year.

New features and support

  • Glue expands functionality. AWS Glue added conditional event triggers for failed and stopped jobs. Previously, the service could only trigger new extract, transform and load (ETL) jobs when another job succeeded. Administrators can now provide a list of events to track for succeeded, failed and stopped state changes and trigger ETL jobs accordingly. Glue also added support for the Scala programming language, so developers can run Scala scripts via development endpoints and jobs.
  • New serverless coding options. AWS Lambda now supports the Go programming language, as well as C# with .NET Core 2.0. With C#, a developer can use the AWS Toolkit for Visual Studio to access templates for Lambda functions, applications and tools, or they can code manually. Developers can upload Go artifacts through the AWS Command Line Interface or AWS Management Console, and Lambda will work natively with Go tools.
  • Boost database DR. Businesses can now deploy Amazon Relational Database Service (RDS) read replicas in multiple availability zones to enhance availability. AWS expanded this disaster recovery functionality for MySQL and MariaDB databases by enabling the RDS service to automatically fail over to a standby database instance if infrastructure fails.
  • Audit SageMaker logs. AWS CloudTrail now logs API calls made with the AWS SageMaker machine learning service. CloudTrail delivers those API calls to an Amazon Simple Storage Service bucket for administrative assessments.
  • Ruby beta for tracing service. AWS X-Ray added an open source SDK for Ruby, for developers to generate and send trace data from distributed web applications. X-Ray also includes support for Java, Go, Node.js, Python and .NET programming languages.
  • Two minutes on the clock. An AWS user can now set up a push notification to trigger when an EC2 Spot Instance has two minutes left before AWS reclaims it. The two-minute warning is available in Amazon CloudWatch Events, through which an admin can set up a rule to route the event to other services, such as Simple Notification Service; see the sketch after this list.
  • Import, replicate PostgreSQL instances. A pair of capabilities gives PostgreSQL engineers new options during migrations from RDS to Aurora. First, an engineer can continuously replicate a live workload from an RDS PostgreSQL instance to Aurora PostgreSQL. Also, engineers can now import encrypted snapshots to protect that data during the migration.
  • Get to learning. AWS updated its slate of Deep Learning Amazon Machine Images (AMIs). Developers can configure AMIs with a shared environment for source code and deep learning frameworks, using TensorFlow 1.5.0, which supports the drivers and GPUs behind AWS’ P3 instances. They also can build AMIs based on the open source Conda package and environment management system, including TensorBoard and TensorFlow Serving, to manage the open source machine intelligence library. The latter also added support for the latest versions of Caffe, Keras, Microsoft Cognitive Toolkit and Theano.
  • Those are the rules. AWS Config added support for seven predefined rules that verify whether your AWS resources are configured in accordance with best practices. These managed rules apply to CodeBuild, Identity and Access Management, S3 and AWS load balancers.
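
As a rough illustration of the two-minute warning noted in the list above, a CloudWatch Events rule can route the interruption event to a Simple Notification Service topic; the rule name and topic ARN are hypothetical:

    import json
    import boto3

    events = boto3.client("events")

    # Match the warning that fires two minutes before AWS reclaims a Spot
    # Instance.
    events.put_rule(
        Name="spot-interruption-warning",
        EventPattern=json.dumps({
            "source": ["aws.ec2"],
            "detail-type": ["EC2 Spot Instance Interruption Warning"],
        }),
    )

    # Route matching events to an SNS topic for notification.
    events.put_targets(
        Rule="spot-interruption-warning",
        Targets=[{
            "Id": "notify-ops",
            "Arn": "arn:aws:sns:us-east-1:123456789012:spot-warnings",
        }],
    )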

David Carty is the site editor for SearchAWS. Contact him at dcarty@techtarget.com.


December 29, 2017  6:03 PM

Amazon API Gateway boosts compression, tagging

David Carty

AWS customers were enticed by products and services introduced at the cloud provider’s annual customer and partner confab, re:Invent, held recently. AWS also kept up a steady pace of basic service updates to round out 2017, which included some API management capabilities.

Amazon API Gateway now offers content encoding support, which lets clients request compressed responses to API calls. This feature can cut costs and improve performance, as it reduces the amount of data sent from the service to clients. Developers can define the minimum response size and enable encoding in the API itself.
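
Enabling compression comes down to one property on the REST API. A minimal boto3 sketch, with a hypothetical API ID:

    import boto3

    apigateway = boto3.client("apigateway")

    # Setting minimumCompressionSize turns content encoding on; responses
    # smaller than the threshold (in bytes) are sent uncompressed.
    apigateway.update_rest_api(
        restApiId="a1b2c3d4e5",
        patchOperations=[{
            "op": "replace",
            "path": "/minimumCompressionSize",
            "value": "1024",  # compress responses of 1 KB or more
        }],
    )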

The service also lets developers use application logic in custom Lambda authorizer functions to support API keys. This makes it simpler to control usage assigned to API requests, and the feature also allows teams to tie API keys to request properties, such as HTTP request headers.

Additionally, Amazon API Gateway lets teams tag API stages for better organization of resources. Teams can filter cost allocation tags on API stages through AWS Budgets to potentially reduce costs. The API Gateway feature also helps categorize APIs.

Catch up on re:Invent

AWS released several products and features at its annual re:Invent conference that were not called out in this blog. Catch up on what you missed with oodles of re:Invent news and analysis from our team of writers.

New features and support

  • Restart logic in ECS. The Amazon Elastic Container Service (ECS) scheduler lets a developer program logic to control retry attempts for failing tasks. This feature reduces the potential cost and performance impacts of continuous attempts to run tasks that fail. The scheduler can increase time between restart attempts, stop the deployment and add a message to notify developers.
  • Speed up Redshift queries. AWS’ data warehouse, Amazon Redshift, added late materialization with row-level filters to improve performance by reducing the amount of data it scans. Predicate filters limit scans to only the table items that satisfy the criteria, which boosts query performance. AWS enables this feature by default.
  • Customize edge error responses. Lambda@Edge now lets developers respond with Lambda functions when CloudFront receives an error from your origin. Developers can access and define responses for 4XX and 5XX error status codes; they can add headers, issue redirects and dynamically generate responses to end users based on their requests.
  • Send real-time SQL data to Lambda. Developers can configure Amazon Kinesis Data Analytics to output real-time data to AWS Lambda. From there, they can code functions that respond to that SQL data, such as sending an alert or updating a database.
  • Cross-account S3 bucket access from QuickSight. Data analysts can now use a QuickSight account tied to a specific AWS account to access data stored in Simple Storage Service (S3) buckets that belong to another AWS account. This cross-account S3 access enables more seamless data analysis for large businesses with multiple departments.
  • More instance support for PostgreSQL databases. Amazon Relational Database Service (RDS) for PostgreSQL added support for R4, db.t2.xlarge, db.t2.2xlarge, and db.m4.16xlarge instances for enhanced performance.
  • Increase ES scale, decrease cost. Amazon Elasticsearch Service (ES) added support for I3 instances, which improve upon the previous generation of I/O-intensive instances. With I3 instances, developers can use up to 1.5 PB of storage in an ES cluster, 15 TB of data in each node, 3.3 million IOPS and 16 GB/s of sequential disk throughput – all for less than half the cost of I2 instances.
  • A NICE combination. After it acquired NICE in 2016, AWS worked with the Italian software company to release Desktop Cloud Visualization (DCV) 2017, a streaming and remote access service. DCV 2017 improves on-premises capabilities, and the service is now available on EC2 instances, including those with Elastic GPUs. AWS customers only pay for the underlying compute resources.
  • CloudFront enhances encryption. AWS’ content delivery network, Amazon CloudFront, introduced field-level encryption to protect sensitive data with HTTPS. This feature can be helpful for financial or personally identifiable information, ensuring that only specific components or services in a stack can decrypt and view that data.
  • Use containers in CD pipelines. Amazon CodePipeline added integration with container-based deployments to Amazon Elastic Container Service and AWS Fargate. Developers push code changes through a continuous delivery pipeline, which calls the desired service to create a container image, test and then update containers in production.
  • Process MySQL queries faster. Amazon Aurora sped up query processing with support for hash joins and batched scans. These features are available for Amazon Aurora MySQL version 1.16.
  • CloudWatch adds new visuals, encryption support. Amazon CloudWatch added two new chart visuals: zoom, for magnification of a shorter time period, and pan, for browsing a specific time interval. Administrators can find these visualization options in the CloudWatch Metrics console and dashboards. CloudWatch Logs also added support for integration with AWS Key Management Service (KMS), which enables an admin to encrypt log groups with KMS keys, if they choose; see the sketch after this list.
  • KMS integrates with ES. Developers can now encrypt data at rest in Amazon ES with keys managed through KMS. This feature lets data scientists use ES while encrypting all data on the underlying file systems without application modification.
  • Set alerts for free tier usage. AWS Budgets now includes the capability to track service usage and send an email alert to administrators if it forecasts that usage will exceed a free tier limit.
  • Define an IoT backup plan. Developers can now define a backup action in Amazon IoT Rules Engine if a primary action fails. In addition to keeping an application running, this feature preserves error message data, which can include unavailability of services and insufficient resource provisioning.
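
As a quick illustration of the CloudWatch Logs encryption item above, associating a KMS key with a log group is a single call; the log group name and key ARN are placeholders:

    import boto3

    logs = boto3.client("logs")

    # Newly ingested log data in the group is encrypted with the key.
    logs.associate_kms_key(
        logGroupName="/app/prod",
        kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/"
                 "11111111-2222-3333-4444-555555555555",
    )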


December 8, 2017  9:08 PM

Could AWS’ torrid pace of innovation come back to haunt the cloud giant?

Trevor Jones

Another AWS re:Invent has come and gone, with another slew of new products to delight its fans. But in the cloud, can there be too much of a good thing?

The user conference was bursting at the seams this year, with 43,000 people shuffling in controlled chaos between six hotels that spanned two miles of the Las Vegas strip. The show is part networking, part training exercise, but more than anything it’s a victory lap for AWS and its prodigious pace of innovation. But could that overstuffed sprawl portend future problems for the platform itself? With roughly two dozen new products or updates lumped on top of AWS’ already extensive IT portfolio, does the cloud giant run the risk of spreading itself too thin, or at a minimum overwhelming its customers with choices?

Some conference attendees acknowledged this is a concern, though the consensus was that Amazon hasn’t shown any signs yet of failing where other tech companies have before.

“It would be reckless to say we don’t think about it,” said Biba Helou, managing vice president of cloud at Capital One. “But they really do seem to have a really good model for how they incubate and build products and then gain momentum based on customer feedback and then put the resources into what they need to.”

AWS’ track record with products isn’t perfect. Elastic File System remains a subject of consternation for some, and other services such as AppStream have been criticized for falling short of their initial promise. Nevertheless, users remain assured by a development model that organizes small teams to focus on specific products and features. And AWS has a history of releasing a base product and adding to it over time. Customers have become so conditioned to that model that despite frustration with a new product’s lack of a certain feature or language support, they’re content to assume that piece will arrive eventually.

Customers also find comfort in AWS’ continued investments in its core services. Alongside sexier new products rolled out at AWS re:Invent 2017 were a handful of updates to Amazon Elastic Compute Cloud and Amazon Simple Storage Service.

Still, the company that started out selling basic compute and storage has added a staggering number of products over the last 10-plus years, and shows no signs of slowing down. There’s a greater focus today on managed services and even a push into business services with products such as Amazon Chime and Alexa for Business. And AWS CEO Andy Jassy told conference attendees to expect more innovation over the next decade than the previous one.

The backdrop to all this product expansion is intensified competition. AWS still dominates the market with impressive, yet slowing, year-over-year revenue growth of 42%, and its market is still growing, according to a Gartner study. But for a company that claims its product decisions are tethered to customers’ wishes, part of that response now has to address services that customers can find in Microsoft Azure or Google Cloud Platform (GCP).

For example, machine learning and containers are two areas in which AWS has been criticized for falling behind Azure and GCP. Lo and behold, at AWS re:Invent, AWS added a bevy of services to fill those gaps. AWS added bare metal servers — which didn’t excite anyone I spoke with at the show, but checks a box for any enterprise that compares the AWS platform to alternatives from IBM or Oracle.

Amazon is looking at the laundry list of cloud services people want to implement and trying to cover as many of those requests as possible.

“There’s definitely that risk [of overextending] but the big play was about making it clear they’re trying to remove as many of those incentives as possible to move to any other cloud,” said Henry Shapiro, vice president and general manager at New Relic, a San Francisco-based monitoring and management company and AWS partner.

And while users and partners feel confident that AWS will address this theoretical problem, the dizzying pace of releases creates a practical problem for users today. AWS has excelled at democratizing technology and packaging it for the masses, but it can be a challenge for people to understand the breadth of services, said Owen Rogers, an analyst with 451 Research. That’s why the partner ecosystem will be crucial to AWS’ future growth, as those companies step up to help resolve the complexity so enterprises can navigate the landscape.

And enterprises contend with more than just the AWS learning curve. Amid a larger shift in how companies build and deploy applications, nearly every enterprise is scurrying to address clichés about digital transformation and avoid being undercut or outflanked by some tech upstart.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 2, 2017  12:34 AM

AWS CEO Jassy shares thoughts on the future of AWS, cloud

Trevor Jones

It should come as no surprise, but AWS CEO Andy Jassy is awfully bullish about the company he leads.

Jassy sat down for a press Q&A following his keynote speech here at re:Invent this week. Most of the roughly 45-minute session focused on why he sees AWS as the best place for cloud workloads, but he also shed some light on the future of the platform and mused on the state of IT. The following are selected excerpts from his responses, edited for brevity.

Blockchain

AWS looks closely at the technology and has lots of customers and partners looking to build blockchain on top of AWS. Other cloud vendors such as IBM and Microsoft have added services in this space, but Jassy implied AWS won’t follow suit any time soon:

“We’re watching it carefully. When we talk to customers, they’re really into blockchain as a concept but we don’t yet see a lot of practical concepts that are much broader than using a distributed ledger.”

Jobs

AWS is all about automation, and that includes the elimination of humans from the equation. Some jobs and tasks will fall by the wayside. AWS has added AI services that could directly displace employees in areas such as translation and transcription, but Jassy sees the net result of these innovations as more, different jobs in the future:

“Even before AI, if you look at part of what’s going on in the U.S., there are so many people who historically followed relatives into the mills and the mines and factories and agricultural fields, and those jobs have moved out of the U.S. and they’re not likely to move back any time soon. It’s progress and people find different ways to do things, but it usually opens other opportunities.

“If you look at the number of jobs that companies including Amazon have… there are tons of jobs and we don’t have enough people to do those jobs. We as a country and as a world need to change the educational systems so more people are equipped to do the jobs that are available.”

Future growth

AWS operates at an $18 billion run rate and has millions of active customers, but unsurprisingly, Jassy sees this as just the beginning. Future growth will be “lumpy” because enterprises and the public sector methodically adopt new technology and move tranches of workloads in stages over many years, he said.

Still, that growth likely won’t push Amazon to spin out AWS, Jassy said.

“I would be surprised if we spun out AWS mostly because there isn’t a need to do so. When companies are spun out it’s because either they can’t commit enough capital to that business unit so they do an IPO, or it’s because they don’t want the financial statements of one of those businesses on the overall set of financial statements.

“If you look at the history of Amazon, we’re not really focused on the optics of the financial statements. We’re comfortable being misunderstood for long periods of time, so it’s not really a driver for how we behave.

The company has been so gracious committing whatever amounts of capital we need to growing AWS in the first 11 and a half years — and by the way, it has required a lot of capital — that there just hasn’t been a need to do so… There’s a lot of value in having so many internal customers at Amazon who are not shy about telling us how everything works.”

Multi-cloud

There’s a lot of talk these days about multi-cloud strategies. Microsoft and Google, the two companies perceived to be AWS’ closest competitors, often tout this as the way enterprises will adopt cloud in the future, but AWS has been mostly quiet on this front. When asked if AWS would do more to address these needs, Jassy downplayed the concept, saying most companies go with one provider as their predominant cloud platform.

“We certainly get asked about multi-cloud a lot. What you see is most enterprises, when they’re thinking of making their plan of how to move to the cloud, they start out thinking that they would distribute their workloads somewhat evenly across several providers. When they actually do the homework on what that actually means, very few make that decision because it means you have to standardize at the lowest common denominator and these platforms are nowhere close to the same [as each other] today.

When you’re making a change from on premises to the cloud, that’s a pretty big change… Asking development teams to be fluid not just on prem to the cloud but to multiple platforms is a lot. And all these cloud providers have volume discounts, so if you split your workloads evenly across a couple or even a few, you’re diminishing your buying leverage.”

Data center expansion

AWS currently has 16 regions and 44 availability zones, with plans to add seven regions and 17 availability zones in the next two years. Jassy says that eventually there will be regions in “every major country” to address latency and data sovereignty. Here’s how he described the decision-making process for where to open new regions:

“We look at how many companies are there, how much technology is being consumed, how much technology are companies willing to outsource, what kind of infrastructure is there — what’s the cost structure as well as the quality of the network and the data centers and the power and the things we need to operate effectively. And what’s the connectivity to other parts of the world, because even though our regions have large endemic customer bases, it turns out every time we open a region our customers who are primarily operating in regions outside of that country choose to deploy inside of that region as well.”

Tech and ethics

Major tech companies are increasingly scrutinized over their role in moderating the use of their platforms. With new machine learning services such as SageMaker and the DeepLens AI camera intended to make machine learning more palatable to the average developer, Jassy was asked about his company’s role in responding to potentially sinister uses of AWS:

“If you look at all the services that AWS has there is a potential for a bad actor to choose one of those services to do something malicious or sinister or ill-intended, in the same way you have the ability to do that if you buy a server yourself or use any other software yourself.

We have very straightforward and clear terms of service and if we find anyone violating those terms of services — and I think anything sinister would violate those terms of services — we suspend those customers and they’re not able to use our platform.”

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 1, 2017  10:44 PM

AWS PrivateLink clamps down on endpoint exposure

David Carty

AWS continues to push its Virtual Private Cloud as the new norm for cloud development and deployment, and further limit public internet exposure.

AWS PrivateLink enables customers to privately access services while keeping all network traffic within an Amazon Virtual Private Cloud (VPC). Instead of whitelisting approved public IP addresses, IT teams can establish private IP addresses and connect them to services via Elastic Network Interface. Amazon services on PrivateLink also support Direct Connect for on-premises connections.

Amazon later added PrivateLink support for AWS-hosted customer and partner services so developers can securely work with third-party tools. Together, AWS PrivateLink and Network Load Balancer enable administrators to identify the origin of an incoming request and route it accordingly.
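
In practice, consuming a service over PrivateLink means creating an interface VPC endpoint. A hedged boto3 sketch, with placeholder IDs and a region-specific service name:

    import boto3

    ec2 = boto3.client("ec2")

    # Traffic to the service flows over the endpoint's network interface
    # and never leaves the VPC.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.kinesis-streams",
        SubnetIds=["subnet-0abc1234"],     # one network interface per subnet
        SecurityGroupIds=["sg-0abc1234"],  # controls access to the endpoint
        PrivateDnsEnabled=True,            # resolve the service name privately
    )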

AWS PrivateLink is the latest in a string of new features that secure cloud connections between resources and regions.

AWS re:Invent 2017

Amazon’s yearly cloud conference, AWS re:Invent 2017, is the launchpad for a number of product and service introductions. Visit our essential guide to catch up on all the news from the conference, plus expert tips for IT professionals across a variety of roles.

New features and support

  • JavaScript library adds to dev possibilities. With the AWS Amplify open source library, developers can code JavaScript applications for web or mobile platforms via a declarative interface, apply best practices and perform common scripting actions to speed software deployment. AWS also unveiled a command-line interface that integrates with the AWS Mobile Hub for developers to code apps from scratch.
  • Data goes on lockdown. Several additional features aim to boost data protection in Amazon Simple Storage Service (S3), which has been subject to numerous data leaks thanks to improper customer configurations. A Default Encryption setting for buckets automatically applies server-side encryption for all objects (see the sketch after this list), and Cross-Region Replication improves efficiency and governance of objects encrypted by AWS Key Management Service.
  • Sync up. Amazon Elastic File System (EFS) now includes an EFS File Sync feature that synchronizes on-premises or cloud-based files with the service, replacing file storage and Linux copy tools that required manual configuration.
  • Upgrade your load balancer. A one-step migration wizard enables an IT team to switch from a Classic Load Balancer — formerly Elastic Load Balancing — to a Network or Application Load Balancer. Developers can view and modify load balancer configuration before deployment and add more advanced features afterward.
  • Unclutter your messages. With an added message filter for pub/sub architectures, subscribers to Amazon Simple Notification Service (SNS) can choose specific subsets of messages to receive, and reduce unneeded messages without the need to write and implement their own message filters or routing logic.
  • Personalize viewer content. Three capabilities in Lambda@Edge improve latency and simplify infrastructure. Content-based dynamic origin selection allows attribute-based routing to multiple back-end origins. Developers can also make network calls on CloudFront end user-facing events, not just origin-facing events. Lambda@Edge can also generate advanced responses that rely on more complex logic to specialize content for specific end users.
  • Extra code protection. AWS CodeBuild now works with VPC resources, for dev teams to build and test code within a VPC and prevent public exposure of resources. Developers can also cache dependencies for more efficiency with software builds.
  • Machine learning boosts data warehouses. A Short Query Acceleration feature in Amazon Redshift uses machine learning to predict which short-running requests should move to a separate queue for faster processing – so, for example, queries such as reports and dashboards aren’t blocked behind larger extract, transform, and load requests. Another Redshift feature hops reads and writes to the next available queue without the need for a restart to improve query performance and efficiency.
  • Automate deployments locally. An update to the AWS CodeDeploy agent enables developers to deploy software code on premises to test and debug, before they move code to production.
  • Pull more strings. AWS OpsWorks now supports Puppet Enterprise, which gives administrators a managed service for Puppet automation tools for infrastructure and application management.
  • Visually modify security policies. Admins can create and manage AWS Identity and Access Management policies with a new visual editor, which makes it easier to grant least privileges with lists of resource types and request conditions.
  • Update state machines. AWS Step Functions enables developers to change state machine definitions and configurations for distributed application workflows. The API call UpdateStateMachine makes it easier to modify applications, which previously required a multi-step process.
  • Cloud carpool. AWS unveiled a reference guide for automotive manufacturers to produce vehicles with secure connectivity to the AWS cloud. The guide includes capabilities for local computing and data processing, which can be used to power voice- and location-based services, car health checks, predictive analytics and more.
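
A minimal boto3 sketch of the Default Encryption setting called out in the list above; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")

    # Every object written to the bucket is encrypted server-side by
    # default, whether or not the PUT request asks for it.
    s3.put_bucket_encryption(
        Bucket="example-audit-logs",
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",  # or "AES256" for SSE-S3
                }
            }]
        },
    )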


