This month, AWS gave its users more machine learning capabilities along with a few opportunities to learn, train and get certified with the technology.
Announced at the AWS Summit in Santa Clara, AWS Deep Learning Containers (DL Containers) provide developers with Docker images preinstalled with deep learning frameworks, such as TensorFlow and Apache MXNet, so they can scale machine learning workloads efficiently.
Developers often use Docker containers for machine learning workloads and custom machine learning environments, but that usually involves days of testing and configuration. DL Containers will help developers deploy these machine learning workloads more quickly on Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS).
DL Containers offers the flexibility to build custom machine learning workflows for training, validation, and deployment and handles container orchestration as well. Along with EKS and ECS, DL Containers will work with Kubernetes on Amazon EC2 as well. This new capability will enable developers to focus on deep learning — building and training new models — instead of tedious container orchestration.
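To make the workflow concrete, here's a minimal sketch of how a DL Container image might be wired into an ECS task definition. The function only builds the parameter dict to pass to boto3's ECS `register_task_definition` call; the image URI, family name and memory figure are illustrative placeholders, not values from the announcement.

```python
def dl_container_task_definition(image_uri, family="tf-training"):
    """Build the parameters for ECS register_task_definition using a
    Deep Learning Container image; pass the dict to boto3's ECS client."""
    return {
        "family": family,
        "requiresCompatibilities": ["EC2"],
        "containerDefinitions": [
            {
                "name": "training",
                "image": image_uri,  # e.g. a TensorFlow DL Container image from Amazon ECR
                "memory": 4096,      # illustrative sizing
                "essential": True,
            }
        ],
    }

# In a real environment:
# import boto3
# boto3.client("ecs").register_task_definition(**dl_container_task_definition(uri))
```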
AWS also added a new specialty certification for machine learning. The AWS Certified Machine Learning Specialty certification validates a user’s ability to design, implement, deploy, and maintain AWS machine learning services and processes. The practice exam costs $40, while the full exam costs $300.
Concurrency Scaling for Redshift
AWS now offers Concurrency Scaling to handle high volume requests in Amazon Redshift. Before Concurrency Scaling, Redshift users encountered performance issues when too many business analysts tried to access the database concurrently; Redshift’s compute capability lacked the flexibility to adapt on-demand.
Now, when users enable Concurrency Scaling, Redshift automatically adds cluster capacity at peak times. You pay for what you use and can remove the extra processing power when it’s no longer needed.
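Under the hood, the cap on burst capacity is governed by a cluster parameter. As a hedged sketch (the parameter group name is a placeholder, and routing queries to scaling clusters is configured per WLM queue), this builds the arguments for boto3's Redshift `modify_cluster_parameter_group` call:

```python
def enable_concurrency_scaling_params(parameter_group, max_clusters=1):
    """Parameters for redshift.modify_cluster_parameter_group that raise
    max_concurrency_scaling_clusters, the cap on burst cluster capacity.
    Pass the dict to boto3's Redshift client in a real environment."""
    return {
        "ParameterGroupName": parameter_group,
        "Parameters": [
            {
                "ParameterName": "max_concurrency_scaling_clusters",
                "ParameterValue": str(max_clusters),
            }
        ],
    }

# e.g. boto3.client("redshift").modify_cluster_parameter_group(
#          **enable_concurrency_scaling_params("analytics-wlm", 2))
```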
AWS Direct Connect console completes global transformation
The global AWS Direct Connect console is now generally available with a redesigned UI. The service establishes a dedicated connection between an organization’s data center and AWS, but those connections were previously limited to links to Direct Connect locations within the same AWS region. Now, users can connect to any AWS region — except China — from any AWS Direct Connect location.
AWS also increased connection capacity — available through approved Direct Connect Partners — and lowered prices for low-end users.
DeepRacer League kicks off
The AWS Santa Clara Summit was also opening day for the AWS DeepRacer League’s summer circuit, a workshop and competition with AWS’ little autonomous car that could.
Introduced at re:Invent 2018, AWS DeepRacer is a one-eighth scale car that includes a fully configured environment on Amazon’s cloud. Operators train their vehicles with reinforcement learning models, such as an autonomous driving model. Much like a human or dog, DeepRacer learns via trial and error: reinforcement learning models include reward functions that reward — think of code as a treat here — the car for good behavior, which in this case means staying on the track. AWS DeepRacer is meant to give developers hands-on experience with reinforcement learning, a capability recently added to Amazon SageMaker.
Congratulations to Cloud Brigade, who with a time of 00:10.43 sits in the pole position on the leaderboard after the first contest. AWS’ toy cars go on sale in April.
In recent years, AWS has grown less dogmatic with regards to hybrid cloud architecture. AWS users already have some capabilities to build AWS hybrid cloud architectures with tools such as AWS Direct Connect, Snowballs, and most notably VMware Cloud on AWS. AWS Outposts, unveiled at re:Invent 2018, is perhaps the exclamation point of AWS’ long transition toward a more hybrid cloud future, with on-premises compute and storage racks made of AWS hardware. And AWS furthered this thread when it acquired the Israel-based cloud migration company CloudEndure in January.
In February 2019, AWS’ hybrid cloud plans took another step forward with tweaks to some services that simplify the migration and integration of on-premises environments.
AWS’ Server Migration Service, which admins use to automate, schedule, and track the replication of on-premises applications and server volumes to the AWS cloud, now enables them to directly import and migrate applications discovered by AWS Migration Hub without the need to recreate server and application groupings. This will reduce the time to import on-premises applications to the AWS cloud and reduce migration errors.
Meanwhile, AWS added the Infrequent Access storage class in Amazon Elastic File System (EFS) as a less expensive option for both on-premises and AWS files and resources that are sporadically used. This is a cheaper way to store larger amounts of data that you don’t use every day. Unlike standard EFS, EFS Infrequent Access carries an additional cost for every access request. Users won’t need to move or delete their data from AWS to manage costs anymore.
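For teams that want to opt in, the transition to Infrequent Access is driven by a lifecycle policy on the file system. A minimal sketch, assuming boto3's EFS `put_lifecycle_configuration` call and a placeholder file system ID:

```python
def efs_ia_lifecycle_params(file_system_id, window="AFTER_30_DAYS"):
    """Parameters for efs.put_lifecycle_configuration: files untouched for
    the given window transition to the Infrequent Access storage class,
    trading per-request access charges for cheaper storage."""
    return {
        "FileSystemId": file_system_id,
        "LifecyclePolicies": [{"TransitionToIA": window}],
    }

# e.g. boto3.client("efs").put_lifecycle_configuration(
#          **efs_ia_lifecycle_params("fs-EXAMPLE"))
```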
Finally, AWS has added architecture reviews for both hybrid cloud and on-premises workloads to its Well-Architected Tool portfolio. Based on the AWS Well-Architected Framework and developed by experienced AWS architects, the AWS Well-Architected Tool recommends adjustments to make workloads more scalable and efficient. To review workloads for their AWS hybrid cloud architecture, users select both the AWS and non-AWS Region (or regions) when they define their workload in the tool.
AWS bolsters bare metal, GuardDuty
AWS has added five EC2 bare metal instances — M5, M5d, R5, R5d and z1d — designed for all-purpose workloads, such as web and application servers, gaming servers, caching fleets and app development environments. The R5 instances target high-performance databases, real-time big data analytics and other memory-intensive enterprise applications.
AWS has also added three threat detections for its security monitoring service Amazon GuardDuty: two for penetration testing and one for policy violation.
AWS Solutions opens up shop
AWS continues to put its Well-Architected Framework to use. AWS Solutions is a portfolio of deployment designs and implementations vetted to guide users through common problems and enable them to build faster. Examples include guides for AWS Landing Zone, AWS Instance Scheduler, and live streaming on AWS, among others.
More CloudFormation integrations
AWS CloudFormation now supports Amazon FSx, AWS OpsWorks and WebSocket APIs in API Gateway. Interest in infrastructure as code (IaC) only grows with tools like Terraform and CloudFormation, but AWS needs to continue to expand its native integrations with CloudFormation to make it a more viable option for IaC.
Amazon this month added a bevy of performance guarantees to its cloud services.
Service-level agreements (SLAs) are standard practice in traditional IT, but cloud SLAs are far from universal. For most enterprises, an IT product that lacks an SLA is a nonstarter, so it makes sense for AWS to provide these contractual assurances to lure more corporate customers to its cloud.
All told, AWS added cloud SLAs to 11 services in January: Elastic File Store, Elastic MapReduce (now simply called “EMR”), Kinesis Data Streams, Kinesis Data Firehose, Kinesis Video Streams, Elastic Container Service for Kubernetes, Elastic Container Registry, Secrets Manager, Amazon MQ, Cognito and Step Functions. The cloud SLAs vary by service, but they all include a 99.9% uptime guarantee per month, with service credits if AWS fails to meet those standards.
AWS has offered SLAs for its core infrastructure services for some time, but these latest agreements follow a trend of marked expansion of Amazon’s cloud SLAs for higher-level services the vendor manages on its own internal infrastructure.
It’s hard to gauge the impact of these cloud SLAs on adoption. For example, EMR has been around for a decade without one, while Lambda, which added an SLA in October, is among the most talked about services on the platform. Still, it’s clear that AWS felt the need to put these terms in writing and is confident enough in its backend to do so.
Acquisitions and added services
The cloud SLAs are important, but no contract language generates the same buzz among IT teams as new tools to play with. In that regard, AWS came out of the gate quickly to start 2019.
It added WorkLink, a service to securely connect employee devices to corporate intranets and apps; Backup, a centralized console to manage and automate backups; DocumentDB, a MongoDB-compatible document database; and Media2Cloud, a serverless ingest workflow for video content.
Open source and AWS
DocumentDB added fuel to the fire in the debate about licensing on top of open source software. AWS built MongoDB compatibility through an API, which enabled it to forgo licensing restrictions MongoDB added last year.
AWS has a thorny history of contributing back to open source projects, though company leaders contend the reputation no longer fits. But, as is often the case, these things are never quite so black and white. In fact, just this week AWS became a platinum member of the Apache Software Foundation.
December didn’t deliver the avalanche of services and features that surrounded AWS re:Invent in November, but AWS didn’t exactly close out the year quietly. Amazon put its cloud networking services front and center this month with tools to secure connections for cloud-based workloads, and it also added a larger GPU-powered instance type and an EU region in Stockholm.
The newest AWS cloud networking service, AWS Client VPN, enables a customer’s employees to remotely access their company resources either on AWS or inside on-premises data centers. An employee can access the service from anywhere via OpenVPN-based clients. AWS already had a virtual private network (VPN) service, which it now calls AWS Site-to-Site VPN. However, that product only connects offices and branches to an organization’s Amazon Virtual Private Cloud (VPC) environment.
Organizations can already host OpenVPN on Amazon EC2, so they’ll need to determine whether it’s cheaper to go that route and incur the charges from both vendors, or opt for this bundled, pay-as-you-go cloud networking service. Client VPN is more expensive than OpenVPN on its own, so it comes down to how much an organization spends on its instances. AWS charges hourly for the service, per active client connection and associated subnet.
Another factor to consider is management, as an organization that uses Client VPN won’t have to maintain any EC2 instances. This is the latest example of AWS’ efforts to offer services that handle the infrastructure for the user — and the cloud vendor plans to do more of this in the future, to attract enterprise clients that don’t want to deal with all those operational complexities.
Organizations can now use a WebSocket API with Amazon API Gateway. Prior to this update, users of the service were limited to the HTTP request/response model, but the WebSocket protocol provides bidirectional communication. This opens the door to a wider range of interactions between end users and services, because the service can push data independent of a specific request.
We’ll have a more thorough analysis on this feature in the coming weeks, but AWS suggests developers can use this functionality to build real-time, serverless applications such as chat apps, multi-player games and collaborative platforms.
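To show the shape of the bidirectional model, here's a hedged sketch of a Lambda handler sitting behind a WebSocket API. API Gateway invokes the function with a `routeKey` for the built-in `$connect` and `$disconnect` routes plus any application-defined routes; the `sendmessage` route name and the echoed response are illustrative assumptions, not part of the announcement:

```python
def websocket_handler(event, context=None):
    """Sketch of a Lambda handler for an API Gateway WebSocket API.
    API Gateway sets event["requestContext"]["routeKey"] to indicate
    which route ($connect, $disconnect, or a custom route) fired."""
    route = event["requestContext"]["routeKey"]
    if route == "$connect":
        return {"statusCode": 200}   # accept the new connection
    if route == "$disconnect":
        return {"statusCode": 200}   # clean up any connection state
    # A hypothetical application-defined "sendmessage" route: echo the body.
    return {"statusCode": 200, "body": "received: " + event.get("body", "")}
```

In practice the handler would persist connection IDs (e.g. in DynamoDB) and push data back through the API's connection-management endpoint, which is what enables server-initiated messages.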
Also on the networking front, users can now access Amazon Simple Queue Service (Amazon SQS) and AWS CodePipeline directly through their Amazon VPC, through VPC endpoints and AWS PrivateLink to securely connect services and keep data off the public internet. The Amazon SQS update in particular is a “meat and potato” item that’s more important to some users than flashier services that debuted at re:Invent, according to one prominent AWS engineer.
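Reaching SQS privately means creating an interface VPC endpoint. As a sketch (VPC and subnet IDs are placeholders), this builds the parameters for boto3's EC2 `create_vpc_endpoint` call:

```python
def sqs_endpoint_params(vpc_id, subnet_ids, region="us-east-1"):
    """Parameters for ec2.create_vpc_endpoint to reach Amazon SQS over
    AWS PrivateLink (an interface endpoint), so queue traffic stays off
    the public internet."""
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Interface",
        "ServiceName": "com.amazonaws.%s.sqs" % region,
        "SubnetIds": list(subnet_ids),
        "PrivateDnsEnabled": True,  # resolve the normal SQS hostname privately
    }

# e.g. boto3.client("ec2").create_vpc_endpoint(
#          **sqs_endpoint_params("vpc-EXAMPLE", ["subnet-EXAMPLE"]))
```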
Lastly, organizations can now share Amazon VPCs with multiple accounts. Large customers use multiple accounts to portion off different business units or teams for security or billing purposes, AWS said. VPC sharing takes responsibility for management and configuration out of the account holder’s hands, and gives it to the IT team, which can then dole out access to these shared environments as needed.
AWS has been fairly secretive about its technology roadmaps, and drops news without warning on its corporate blog or in the flood of news at its annual re:Invent conference.
To be sure, AWS huddles with customers behind the scenes to get their feedback and determine which directions to head next. But anyone who trawls the AWS website in search of a tidy PowerPoint deck that outlines the future of a service important to their business is in for a long and fruitless journey.
Last week, however, the cloud vendor shifted its approach ever so slightly, when it quietly posted an “experimental” roadmap for AWS’ container strategy on GitHub.
“Knowing about our upcoming products and priorities helps our customers plan,” the company said. “This repository contains information about what we are working on and allows all AWS customers to give direct feedback.”
The AWS container roadmap is split into three categories: “We’re Working On it,” “Coming Soon” and “Just Shipped.” There are no major revelations in any of them; many entries relate to new regions for EKS, AWS’ managed Kubernetes service, while others are on minor to middling feature updates. Nonetheless, it provides a lot more specifics than AWS has been known to let into the wild.
That’s not to say AWS hasn’t hedged its bets. For one thing, the roadmap lists no delivery dates, because “job zero is security and operational stability,” according to AWS. The company did allow that “coming soon” means “a couple of months out, give or take.”
The roadmap includes information on the majority of development for various AWS container-based services — Elastic Container Service, Fargate, EKS and other projects — but the company said it still plans to reveal other technologies without notice, to “surprise and delight our customers.”
Roadmaps are undoubtedly a boon to customers, but they can be a thorny proposition for vendors because they’re officially and publicly on the hook to deliver. To AWS’ credit, many services it unveils are generally available at that time, or in preview. Vaporware hasn’t been an appreciable part of its modus operandi, although some attendees at this year’s re:Invent grumbled at a few rather vague product announcements.
Vendors that provide many roadmaps tend to lard them up with boilerplate exhortations that plans can change. This is particularly true for publicly traded companies, which may consider roadmap details “forward-looking statements,” a phrase that carries legal and financial weight.
Still, roadmaps are more than just a useful tool for customers. Product organizations like them too when constructed in a certain way, judging from discussions on Roadmap.com, a community site for product managers. Roadmaps should come in a number of flavors, according to several Roadmap.com contributors. For example, a development team-facing roadmap should provide realistic estimates of what can get built if no nasty technical surprises crop up. A roadmap geared for sales teams ought to list top features expected in the next couple of quarters.
A third type of roadmap is higher-level and aimed at customers, media and analysts, Roadmap.com users said. It provides a company’s big-picture plans over the next year or two, but shies away from concrete details to give room for tweaks to the strategy.
AWS hasn’t done anything close to this, but again, it’s not as if the company toils in a vacuum and shuts out customer input — quite the contrary.
Yet someone with influence inside AWS clearly decided more transparency into roadmaps was desirable — even if for now the focus is on containers, where the market grows more competitive by the day. Don’t expect any state secret-level dirt on AWS’ container strategy through the roadmap, but customers with money to spend on existing or new container workloads will appreciate more clarity as they make plans. Now it’s time to wait and see whether AWS’ experimental effort becomes embedded in its culture.
IT and development teams who try to keep pace with AWS’ ever-expanding portfolio have a lot to catch up on this month.
First, to see the most significant news that came out of AWS re:Invent 2018 this week, check out SearchAWS’ end-to-end guide on the show. There, you can find details about AWS’ latest AI services, databases, storage and security features, and hybrid cloud strategy — which now includes an on-premises hardware component in Outposts.
But re:Invent, and the month of November in general, brought about other important AWS features and tools for users who run day-to-day operations on the Amazon cloud – and there’s even some fodder for those who use AWS to explore outer space.
More EC2 options for management, compute
Pause and resume: Admins now have the option to pause, or hibernate, Amazon EC2 instances that are backed by Elastic Block Store (EBS). This feature enables users to maintain a “pre-warmed” set of compute instances, so they can launch applications, particularly memory-intensive ones, more quickly than if those instances had to fully reboot after a shutdown. Amazon likened the process to hibernating a laptop rather than turning it off.
Users can control the pause/resume process via the AWS Management Console, the AWS SDK or the command line interface. The feature applies to the following instance families: M3, M4, M5, C3, C4, C5, R3, R4 and R5 instances that run Amazon Linux 1.
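Via the SDK, the request is a stop call with a hibernation flag. A minimal sketch building the parameters for boto3's EC2 `stop_instances` call (instance IDs are placeholders):

```python
def hibernate_params(instance_ids):
    """Parameters for ec2.stop_instances that request hibernation rather
    than a plain stop. The instances must have been launched with
    hibernation enabled and use encrypted EBS root volumes large enough
    to hold the RAM contents."""
    return {"InstanceIds": list(instance_ids), "Hibernate": True}

# e.g. boto3.client("ec2").stop_instances(**hibernate_params(["i-EXAMPLE"]))
```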
More instance types move in: The Amazon EC2 instance family grew with the addition of A1, P3dn and C5n instance types. Intended for workloads that require high scalability, A1 instances are the first to be fueled by AWS’ ARM-based Graviton processors. The GPU-based P3dn instances, designed for machine learning, deliver four times the network throughput of the cloud provider’s existing P3 instance type. Lastly, the C5n family can use up to 100 Gbps of network bandwidth, making them a good fit for applications that require high network performance.
Additional storage, networking services
Amazon FSx: This managed file share service debuted in two flavors: Amazon FSx for Lustre and Amazon FSx for Windows File Server. The former enables users to deploy the open source Lustre distributed file system on AWS, and is geared toward high-performance computing and machine learning apps. The second version, designed for Microsoft shops, delivers the Windows file system on AWS. It’s built on Windows Server, is compatible with Windows-based apps and supports Active Directory integration and Windows NTFS, AWS said.
AWS Global Accelerator: For enterprises that deliver applications to a global customer base, Global Accelerator is a networking service that directs user traffic to the closest and highest-performing application endpoint. AWS expects the accelerator to ensure high availability and free enterprises from grappling with latency and performance issues over the public internet. In addition, the service uses static IP addresses, which, according to AWS, eliminate the need for users to manage unique IP addresses for different AWS availability zones or regions.
AWS Transit Gateway: Another service intended to simplify network management, Transit Gateway lets customers hitch their own on-premises networks, remote office networks and Amazon VPCs to one centralized gateway. Admins manage one connection from that central gateway to each VPC and on-premises network they use. The cloud provider described it as a hub-and-spoke model; think of Transit Gateway as the “hub” that centrally controls and directs traffic to various network “spokes.”
A big move in microservices
AWS App Mesh: Based on the open source service proxy Envoy, App Mesh streamlines microservices management. App Mesh users can monitor communication between individual microservices, and implement rules that govern that communication. It’ll be interesting to see how App Mesh stacks up against other service mesh options, such as Azure Service Fabric and the open source Istio technology, behind which Google, in particular, has thrown its weight.
AWS takes to space
Other AWS news this month took more of a, well, celestial slant.
The cloud provider teamed up with Lockheed Martin to provide easier and cheaper ways for companies to collect satellite data and move it into the cloud for storage and analysis.
Lockheed’s Verge system of globally distributed, compact satellite antennae will work in conjunction with AWS Ground Station, a service that co-locates ground antennas inside AWS availability zones around the world.
Previously, organizations such as NASA and companies like Mapbox had to write complex business logic and scripts to upload and download satellite data, AWS CEO Andy Jassy said at re:Invent. Ground Station lets users work with satellite streams from the AWS management console, and pay by the minute for antenna time. It’s now in preview in two AWS regions, with 10 more to come early next year.
It’s another example of AWS going after specialized customers, but the partnership could also have broader resonance among AWS’ user base. Research organizations and niche startup companies are the heaviest users of satellite data, but enterprise IT shops in general should also watch the implications of geographic information systems (GIS) and spatial data on their business, said Holger Mueller, VP and principal analyst with Constellation Research in Cupertino, Calif.
“Making that data available in an easy, secure, scalable and affordable way is key for next-generation enterprise apps,” he said.
AWS isn’t first in this market — SAP and the European Space Agency partnered in 2016 to bring satellite data into SAP’s HANA Cloud Platform — but its moves to build out a global satellite antenna network take the idea much further.
*Senior News Writer Chris Kanaracus contributed to this blog.
AWS re:Invent 2018 returned to Las Vegas this month, and brought with it the usual mix of keynotes, breakout sessions and, of course, new cloud services.
AWS Outposts, a managed hardware and software stack that extends Amazon’s reach into on-premises data centers, especially intrigued attendees. Meanwhile, AWS Ground Station pushed the cloud to the final frontier of space, and Amazon SageMaker updates provided more AI and machine learning opportunities for developers.
Between keeping up with these technology updates, and shuffling back and forth between sessions, re:Invent attendees kept busy – but many still had time to share their thoughts on the event via Twitter.
Here’s a look at some tweets that captured the mood of AWS re:Invent 2018:
IMHO, the biggest announcement of the conference: AWS truly gets into the hybrid cloud market. AWS Outpost – AWS on-prem private cloud service, HW+SW. Either AWS native stack (EC2/ECS/EKS/RDS/EMR/Sagemaker) or VMware Cloud on AWS stack. #reinvent
— Lydia Leong (@cloudpundit) November 28, 2018
Amazing! Karthik Arun 9 years old programmed a moving #IoT with @Raspberry_Pi using Lambda and Alexa. I spoke to him at a demo booth. @Craw @dez_blanchfield @digitalcloudgal @rwang0 @AWSreInvent #reinvent @furrier @JoannMoretti @MarshaCollier @TmanSpeaks @evankirstel @BillMew pic.twitter.com/kbQ8xEw4JV
— Sarbjeet Johal @AWSReinvent (@sarbjeetjohal) November 28, 2018
Andy announces Ground Station, allows customers to more easily and cost-effectively control satellite operations, ingest satellite data, and integrate the data with applications and other cloud services running in AWS. #reInvent pic.twitter.com/vZtrjCr9wr
— FionaWang (@WhfAllen) November 27, 2018
LAS VEGAS — What is AWS’ end game with AI and machine learning (ML)? Is it to lead the pack with tools and frameworks? Or dominate other players such as Google and Facebook that built powerful ML libraries or frameworks and placed more skin in the game?
No, AWS’ goal with AI and ML is much the same as the company’s overall mission: put the best tools and cloud infrastructure to build applications into the hands of everyone — and they mean everyone.
At the AWS re:Invent 2018 conference here, AWS CEO Andy Jassy said he wanted to empower “builders” with the tools they need to create apps. Not just developers and data scientists, but also IT staff, ops staff, marketing staff, line-of-business folks — even CIOs, CMOs and CEOs should have access to the same tools.
AI and machine learning are no different — the goal is to democratize access to the best tools, said Joel Minnick, senior manager of product marketing for AI and ML at AWS.
“What we don’t want to see is that machine learning is the toolset of the largest, most technically savvy companies,” Minnick said. “If you are a college student with a great idea about how to build a great application, you should have the same toolset to use that the largest technology companies in the world have to use.”
That’s also why AWS is so focused on data for machine learning models. The company’s SageMaker Ground Truth service, unveiled this week, focuses on data preparation and labeling for machine learning models. Because everyone, not just the biggest, richest companies, should have the ability to clean and prepare their data for machine learning models.
In a lunchtime Q&A with reporters, Jassy said AWS customers are hungry for AI and ML. “I think every cloud app, going forward, will have some type of machine learning and AI infused in it,” he said.
Tens of thousands of AWS customers want to use machine learning and they need help at every layer of the ML stack, Jassy said. Moreover, for machine learning to take off in the enterprise, it must become accessible and usable to everyday developers and data scientists. This was AWS’ goal with its SageMaker machine learning service, which the company enhanced significantly here this week.
Amazon doesn’t really play at the AI/ML framework level — it doesn’t have a TensorFlow, PyTorch or Caffe2 — so it’s building infrastructure around and below it to make it easier for folks to do AI, and that strategy plays to AWS’ strengths, said Charles Fitzgerald, managing director of Platformonomics, LLC, a Seattle-based strategy consulting practice.
“They lost the great ML framework war, so now they support everything,” he said.
At his annual re:Invent press conference, Jassy responded to a broad range of questions — from AI ethics, to the company’s relationship with the open source community, to how the Amazon HQ2 search and decision plays for AWS and its hiring.
But he didn’t address something I really wanted to know, but wasn’t going to ask in a press conference: What does he think of those 3-8 New York Giants? Jassy once told me he’s a Giants season ticket holder and comes from a family of Giants season ticket holders. Maybe he should sic some of that smart-alecky machine learning on the Giants’ analytics system to generate a winning team again. Saquon Barkley and “OBJ” Odell Beckham Jr. are great players who typically get their stats, but they can’t do it alone.
LAS VEGAS — AWS CEO Andy Jassy this week defended his company’s ethical decisions around the use of AI on its platform and shed some light on its impending move into corporate data centers.
Jassy’s post-keynote press conference here at re:Invent is generally the only occasion each year the Amazon cloud exec takes questions in this type of setting. In addition to AI and the on-premises infrastructure service AWS Outposts, Jassy talked about AWS’ often criticized relationship with the open source community, and the significance of Amazon’s new second headquarters.
Here are some of the excerpts from the hour-long session, edited for brevity and clarity.
On why AWS has pushed so heavily into AI:
“We’re entering this golden age of what could be possible with applications; and I don’t know if it’s five years from now or 10 years from now, but virtually every application will have machine learning and AI infused in it.”
On how AWS will enforce ethical use of its AI services:
“First, the algorithms that different companies produce have to constantly be benchmarked and iterated and refined so they’re as accurate as possible. It also has to be clear how you recommend using the services. For example, with facial recognition for matching celebrity photos, you can have a confidence level of 80%, but if you’re using facial recognition for something like law enforcement, something that can impact people’s civil liberties, you need to have a very high threshold. We recommend at least [a] 99% threshold for things like law enforcement, and even then it shouldn’t be the sole determinant. There should be a human involved, there should be a number of inputs and machine learning should be only one of the inputs.
[Customers] get to make decisions for how they want to use our platform. We give a lot of guidance and if we think people are violating our terms of service, we’ll suspend and disable people from using that. But I think society as a whole, our countries as a whole, we really want them to make rules for how things should be used and we’ll participate and abide by that.”
On what hardware vendors they’ll use for AWS Outposts:
“We use a lot of different hardware in our data centers, so we can interchange whatever hardware we want. At the beginning of AWS, we primarily used OEMs like the Dells and the HPs, but over time, we’ve done more contract manufacturing and more design of our own chips. We’re always open to whatever hardware providers want to provide to us at the right price point.”
On whether Outposts will eventually provide the full catalog of AWS services:
“It remains to be seen how many total services we’ll have on Outposts. Our initial goal is not to recreate all of AWS in Outposts. They’re kind of different delivery models and folks that have done that today [with attempts at full parity] have frustrated customers and they just haven’t gotten the traction that they wanted.
There are some really basic components that customers would like for us to provide on premises, as they connect to the cloud as well as locally – compute, storage, database, machine learning and analytics are good examples of that.”
On Outposts’ specs and getting into the on-premises server management business:
“We’re not ready to release the specs yet, but stay tuned. In general, there’s going to be racks with the same hardware we run in our AWS regions. They can be configured in different ways, depending on which variants you want and which services you want on those racks. People will be able to customize that to some degree.
In terms of scaling out people, over time we have become pretty good at finding ways to streamline operations and we have quite a large number of people in data center operations around the world… so we have a little bit of an idea of how to do that.
Also, if you think about what Amazon does across its business, we’re very comfortable running high volume. We know how to keep our costs low.”
On whether Amazon’s decision to locate its second headquarters in Virginia and New York will affect AWS:
“As much as we have successfully built a significant-size team in Seattle, we can’t convince everyone to move to Seattle. Just in AWS alone we have people in Boston, Virginia, [Washington] DC, New York, Dublin and Vancouver… Because AWS continues to grow at such a rapid rate, I think it’s a fair bet that we’ll have a lot of [employees] in those two HQ2s.”
On its relationship with the open source community:
“We contribute quite a bit and participate quite a bit. We do it across so many things. If you think about some of the open standards we build on, we have to contribute back. We do it with Linux, for instance, where about 50% to 60% of the contributions are made by AWS; look at Elasticsearch, FreeRTOS and ARM.
Sometimes people lose sight [of how much we contribute], probably because we don’t do a good enough job making people aware of how many open source contributions we make across how many different projects.”
AWS made one of its simplest services more functional in a month that saw the cloud vendor bogged down in all sorts of complex subjects.
An update to Amazon Lightsail now lets users incorporate managed relational databases with the virtual private server offering, which bundles set amounts of storage, compute and data transfers. These database servers also come in fixed sizes that can be quickly deployed.
AWS folded some of the technology that underlies RDS into Amazon Lightsail so users can create a database in one availability zone (AZ), or replicate it to a second AZ for high availability. The feature can only deploy MySQL databases currently, though AWS said it plans to add PostgreSQL support soon.
Managed databases were the biggest request from Amazon Lightsail users, according to AWS. The service competes with DigitalOcean and Linode, and users should expect more integrations with AWS’ vast array of services, said Rhett Dillingham, an analyst with Moor Insights & Strategy.
Two Lambda items stood out among other October product updates.
AWS tripled the maximum time a function can run per execution, to a cap of 15 minutes, which opened a debate about how to use the service. Some users called it a welcome move but said AWS should extend the window even further to accommodate more workloads. Others raised concerns about runaway costs, and argued that any function that must run that long should be broken down into smaller microservices.
Also, AWS finally added an SLA to Lambda, nearly four years after it debuted at re:Invent 2014. The SLA guarantees a monthly uptime of 99.95%, with a reimbursement in the form of service credits if AWS fails to meet that target.
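For context, a 99.95% monthly uptime guarantee leaves a fairly tight error budget. A quick back-of-the-envelope calculation (assuming a 30-day month for illustration) shows roughly how much downtime Lambda can accrue before service credits kick in:

```python
# Maximum allowed downtime under a 99.95% monthly uptime SLA,
# assuming a 30-day month (43,200 minutes).
minutes_per_month = 30 * 24 * 60                 # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.9995)
print(f"{allowed_downtime:.1f} minutes of downtime per month")
```

That works out to roughly 21.6 minutes per month; anything beyond that would put AWS in breach of the guarantee.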
In other news…
Amazon appears to be the latest cloud vendor embroiled in internal controversy over its use of AI, as employees have protested an apparent pitch to U.S. Immigration and Customs Enforcement officials to use Amazon Rekognition to record and identify individuals.
Meanwhile, the U.S. Department of Defense’s JEDI cloud contract continues to be a contentious subject, which is no surprise, given the feds’ plan to fork over as much as $10 billion to the winning bidder. IBM this month joined Oracle in protesting the scope of the deal, and several lawmakers have called for investigations.
Speaking of Oracle, the company’s CTO and founder Larry Ellison took his annual whack at AWS as part of his OpenWorld keynote, where he likened the use of an AWS database service to a fatal car crash. The two companies’ executives have traded barbs for years as they fight over the lucrative database market, but a leaked AWS memo that same week as OpenWorld shed new light on the companies’ complicated relationship behind the scenes.
Amazon suffered a major embarrassment earlier this year when its ecommerce site stalled on its Prime Day sales event. The outage didn’t impact other AWS users, but since Amazon.com is AWS’ biggest customer, it raised some eyebrows about the company’s claims of infinite scale. Little was said publicly at the time, but a subsequent report claimed the outage involved features that were lost in a migration from Oracle to Amazon Aurora, which allegedly couldn’t provide the same level of reliability. AWS CTO Werner Vogels vehemently denied the report, calling it “silly and misleading.”
Amazon reportedly is in the midst of a massive internal migration off Oracle, with a scheduled completion in 2020. Naturally, Ellison told investors on a recent earnings call that Amazon won’t be able to quit Oracle so easily. If past is prologue, expect AWS CEO Andy Jassy to take his own shots at Oracle on stage at re:Invent 2018 in a few weeks.
Also on the report-refuting front, the Bloomberg story that claimed Apple, AWS and others detected malicious Chinese hardware in their data centers continues to be a headscratcher. Jassy echoed Apple CEO Tim Cook in a call for Bloomberg to retract the story. For its part, Bloomberg has stood by its reporting, though it hasn’t provided additional details in the face of staunch pushback from the named tech vendors.