AWS re:Invent 2018 returned to Las Vegas this month, and brought with it the usual mix of keynotes, breakout sessions and, of course, new cloud services.
AWS Outposts, a managed hardware and software stack that extends Amazon’s reach into on-premises data centers, especially intrigued attendees. Meanwhile, AWS Ground Station pushed the cloud to the final frontier of space, and Amazon SageMaker updates provided more AI and machine learning opportunities for developers.
Between keeping up with these technology updates and shuffling back and forth between sessions, re:Invent attendees kept busy – but many still had time to share their thoughts on the event via Twitter.
Here’s a look at some tweets that captured the mood of AWS re:Invent 2018:
IMHO, the biggest announcement of the conference: AWS truly gets into the hybrid cloud market. AWS Outpost – AWS on-prem private cloud service, HW+SW. Either AWS native stack (EC2/ECS/EKS/RDS/EMR/Sagemaker) or VMware Cloud on AWS stack. #reinvent
— Lydia Leong (@cloudpundit) November 28, 2018
Amazing! Karthik Arun 9 years old programmed a moving #IoT with @Raspberry_Pi using Lambda and Alexa. I spoke to him at a demo booth. @Craw @dez_blanchfield @digitalcloudgal @rwang0 @AWSreInvent #reinvent @furrier @JoannMoretti @MarshaCollier @TmanSpeaks @evankirstel @BillMew pic.twitter.com/kbQ8xEw4JV
— Sarbjeet Johal @AWSReinvent (@sarbjeetjohal) November 28, 2018
Andy announces Ground Station, allows customers to more easily and cost-effectively control satellite operations, ingest satellite data, and integrate the data with applications and other cloud services running in AWS. #reInvent pic.twitter.com/vZtrjCr9wr
— FionaWang (@WhfAllen) November 27, 2018
LAS VEGAS — What is AWS’ end game with AI and machine learning (ML)? Is it to lead the pack with tools and frameworks? Or to dominate other players, such as Google and Facebook, that built powerful ML libraries and frameworks and put more skin in the game?
No, AWS’ goal with AI and ML is much the same as the company’s overall mission: put the best tools and cloud infrastructure for building applications into the hands of everyone — and they mean everyone.
At the AWS re:Invent 2018 conference here, AWS CEO Andy Jassy said he wanted to empower “builders” with the tools they need to create apps. Not just developers and data scientists, but also IT staff, ops staff, marketing staff, line-of-business folks — even CIOs, CMOs and CEOs should have access to the same tools.
AI and machine learning are no different — the goal is to democratize access to the best tools, said Joel Minnick, senior manager of product marketing for AI and ML at AWS.
“What we don’t want to see is that machine learning is the toolset of the largest, most technically savvy companies,” Minnick said. “If you are a college student with a great idea about how to build a great application, you should have the same toolset to use that the largest technology companies in the world have to use.”
That’s also why AWS is so focused on data for machine learning models. The company’s SageMaker Ground Truth service, unveiled this week, focuses on data preparation and labeling for machine learning models. Because everyone, not just the biggest, richest companies, should have the ability to clean and prepare their data for machine learning models.
In a lunchtime Q&A with reporters, Jassy said AWS customers are hungry for AI and ML. “I think every cloud app, going forward, will have some type of machine learning and AI infused in it,” he said.
Tens of thousands of AWS customers want to use machine learning and they need help at every layer of the ML stack, Jassy said. Moreover, for machine learning to take off in the enterprise, it must become accessible and usable to everyday developers and data scientists. This was AWS’ goal with its SageMaker machine learning service, which the company enhanced significantly here this week.
Amazon doesn’t really play at the AI/ML framework level — they don’t have a TensorFlow, PyTorch or Caffe2 — so they’re building infrastructure around and below it to make it easier for folks to do AI, and that strategy plays to AWS’ strengths, said Charles Fitzgerald, managing director of Platformonomics, LLC, a Seattle-based strategy consulting practice.
“They lost the great ML framework war, so now they support everything,” he said.
At his annual re:Invent press conference, Jassy responded to a broad range of questions — from AI ethics, to the company’s relationship with the open source community, to how the Amazon HQ2 search and decision plays for AWS and its hiring.
But he didn’t address something I really wanted to know, but wasn’t going to ask in a press conference: What does he think of those 3-8 New York Giants? Jassy once told me he’s a Giants season ticket holder and comes from a family of Giants season ticket holders. Maybe he should sic some of that smart-alecky machine learning on the Giants’ analytics system to generate a winning team again. Saquon Barkley and “OBJ” Odell Beckham Jr. are great players who typically get their stats, but they can’t do it alone.
LAS VEGAS — AWS CEO Andy Jassy this week defended his company’s ethical decisions around the use of AI on its platform and shed some light on its impending move into corporate data centers.
Jassy’s post-keynote press conference here at re:Invent is generally the only occasion each year the Amazon cloud exec takes questions in this type of setting. In addition to AI and the on-premises infrastructure service AWS Outposts, Jassy talked about AWS’ often criticized relationship with the open source community, and the significance of Amazon’s new second headquarters.
Here are some excerpts from the hour-long session, edited for brevity and clarity.
On why AWS has pushed so heavily into AI:
“We’re entering this golden age of what could be possible with applications; and I don’t know if it’s five years from now or 10 years from now, but virtually every application will have machine learning and AI infused in it.”
On how AWS will enforce ethical use of its AI services:
“First, the algorithms that different companies produce have to constantly be benchmarked and iterated and refined so they’re as accurate as possible. It also has to be clear how you recommend [customers] use the services. For example, with facial recognition for matching celebrity photos, you can have a confidence level of 80%, but if you’re using facial recognition for something like law enforcement, something that can impact people’s civil liberties, you need to have a very high threshold. We recommend at least [a] 99% threshold for things like law enforcement, and even then it shouldn’t be the sole determinant. There should be a human involved, there should be a number of inputs, and machine learning should be only one of the inputs.
[Customers] get to make decisions for how they want to use our platform. We give a lot of guidance and if we think people are violating our terms of service, we’ll suspend and disable people from using that. But I think society as a whole, our countries as a whole, we really want them to make rules for how things should be used and we’ll participate and abide by that.”
On what hardware vendors they’ll use for AWS Outposts:
“We use a lot of different hardware in our data centers, so we can interchange whatever hardware we want. At the beginning of AWS, we primarily used OEMs like the Dells and the HPs, but over time, we’ve done more contract manufacturing and more design of our own chips. We’re always open to whatever hardware providers want to provide to us at the right price point.”
On whether Outposts will eventually provide the full catalog of AWS services:
“It remains to be seen how many total services we’ll have on Outposts. Our initial goal is not to recreate all of AWS in Outposts. They’re kind of different delivery models and folks that have done that today [with attempts at full parity] have frustrated customers and they just haven’t gotten the traction that they wanted.
There are some really basic components that customers would like for us to provide on premises, as they connect to the cloud as well as locally – compute, storage, database, machine learning and analytics are good examples of that.”
On Outposts’ specs and getting into the on-premises server management business:
“We’re not ready to release the specs yet, but stay tuned. In general, there’s going to be racks with the same hardware we run in our AWS regions. They can be configured in different ways, depending on which variants you want and which services you want on those racks. People will be able to customize that to some degree.
In terms of scaling out people, over time we have become pretty good at finding ways to streamline operations and we have quite a large number of people in data center operations around the world… so we have a little bit of an idea of how to do that.
Also, if you think about what Amazon does across its business, we’re very comfortable running high volume. We know how to keep our costs low.”
On whether Amazon’s decision to locate its second headquarters in Virginia and New York will affect AWS:
“As much as we have successfully built a significant-size team in Seattle, we can’t convince everyone to move to Seattle. Just in AWS alone we have people in Boston, Virginia, [Washington] DC, New York, Dublin and Vancouver… Because AWS continues to grow at such a rapid rate, I think it’s a fair bet that we’ll have a lot of [employees] in those two HQ2s.”
On its relationship with the open source community:
“We contribute quite a bit and participate quite a bit. We do it across so many things. If you think about some of the open standards we build on, we have to contribute back. We do it with Linux and for instance, about 50[%] to 60% of the contributions are made by AWS; look at ElasticSearch, FreeRTOS and ARM.
Sometimes people lose sight [of how much we contribute], probably because we don’t do a good enough job making people aware of how many open source contributions we make across how many different projects.”
AWS made one of its simplest services more functional in a month that saw the cloud vendor bogged down in all sorts of complex subjects.
An update to Amazon Lightsail now lets users incorporate managed relational databases with the virtual private server offering, which bundles set amounts of storage, compute and data transfers. These database servers also come in fixed sizes that can be quickly deployed.
AWS folded some of the technology that underlies RDS into Amazon Lightsail so users can create a database in one availability zone (AZ), or replicate it to a second AZ for high availability. The feature can only deploy MySQL databases currently, though AWS said it plans to add PostgreSQL support soon.
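In boto3 terms, spinning up one of these fixed-size databases is a single call. A minimal sketch, where the blueprint and bundle IDs are illustrative placeholders (list the real ones with `get_relational_database_blueprints` and `get_relational_database_bundles`):

```python
def lightsail_db_params(name, high_availability=False):
    """Build the request for lightsail.create_relational_database().

    The bundle ID picks one of Lightsail's fixed database sizes; the
    "_ha_" variant shown here is a hypothetical ID for the replicated,
    second-AZ option described above.
    """
    bundle = "micro_ha_1_0" if high_availability else "micro_1_0"
    return {
        "relationalDatabaseName": name,
        "relationalDatabaseBlueprintId": "mysql_5_7",  # MySQL only at launch
        "relationalDatabaseBundleId": bundle,
        "masterDatabaseName": "app",
        "masterUsername": "dbadmin",
    }

params = lightsail_db_params("blog-db", high_availability=True)
# With credentials configured, the actual call would be:
#   boto3.client("lightsail").create_relational_database(**params)
```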
Managed databases were the biggest request from Amazon Lightsail users, according to AWS. The service competes with DigitalOcean and Linode, and users should expect more integrations with AWS’ vast array of services, said Rhett Dillingham, an analyst with Moor Insights & Strategy.
Two Lambda items stood out among other October product updates.
AWS tripled the maximum time a function can run per execution, to a cap of 15 minutes, which opened a debate about how to use the service. Some users called it a welcome move but said AWS should extend the window even further to accommodate more workloads. Others raised concerns about runaway costs, and argued that any function that must run that long should be broken down into smaller microservices.
Also, AWS finally added an SLA to Lambda, nearly four years after it debuted at re:Invent 2014. The SLA guarantees a monthly uptime of 99.95%, with a reimbursement in the form of service credits if AWS fails to meet that target.
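For context, a 99.95% monthly uptime guarantee is less generous than it sounds; a quick back-of-the-envelope calculation:

```python
def max_downtime_minutes(uptime_pct, days=30):
    """Minutes of downtime an uptime SLA tolerates per month."""
    minutes_in_month = days * 24 * 60  # 43,200 minutes in a 30-day month
    return minutes_in_month * (1 - uptime_pct / 100)

# Lambda's 99.95% target allows roughly 21.6 minutes of downtime per
# 30-day month before service credits kick in.
print(round(max_downtime_minutes(99.95), 1))
```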
In other news…
Amazon appears to be the latest cloud vendor embroiled in internal controversy over its use of AI, as employees have protested an apparent pitch to U.S. Immigration and Customs Enforcement officials to use Amazon Rekognition to record and identify individuals.
Meanwhile, the U.S. Department of Defense’s JEDI cloud contract continues to be a contentious subject, which is no surprise, given the feds’ plan to fork over as much as $10 billion to the winning bidder. IBM this month joined Oracle in protest of the scope of the deal, and several lawmakers have called for investigations.
Speaking of Oracle, the company’s CTO and founder Larry Ellison took his annual whack at AWS as part of his OpenWorld keynote, where he likened the use of an AWS database service to a fatal car crash. The two companies’ executives have traded barbs for years as they fight over the lucrative database market, but a leaked AWS memo that same week as OpenWorld shed new light on the companies’ complicated relationship behind the scenes.
Amazon suffered a major embarrassment earlier this year when its ecommerce site stalled on its Prime Day sales event. The outage didn’t impact other AWS users, but since Amazon.com is AWS’ biggest customer, it raised some eyebrows about the company’s claims of infinite scale. Little was said publicly at the time, but it may have involved some features that were lost in a migration from Oracle to Amazon Aurora, which couldn’t provide the same level of reliability. AWS CTO Werner Vogels vehemently denied the story, calling it “silly and misleading.”
Amazon reportedly is in the midst of a massive internal migration off Oracle, with a scheduled completion in 2020. Naturally, Ellison told investors on a recent earnings call that its rival won’t be able to quit Oracle so easily. If past is prologue, expect AWS CEO Andy Jassy to take his own shots at Oracle on stage at re:Invent 2018 in a few weeks.
Also on the report-refuting front, the Bloomberg story that claimed Apple, AWS and others detected malicious Chinese hardware in their data centers continues to be a headscratcher. Jassy echoed Apple CEO Tim Cook in a call for Bloomberg to retract the story. For its part, Bloomberg has stood by its reporting, though it hasn’t provided additional details in the face of staunch pushback from the named tech vendors.
September was a low-key month for AWS, even though it rolled out more than 70 updates to its platform.
AWS advancements in September were a lot of the standard fodder: services expanded to additional regions, deeper integration between tools and a handful of security certifications. All of this is potentially welcome to the respective target audiences.
Still, the updates weren’t completely mundane. There were some serious nods to AWS hybrid cloud architectures along with some intriguing moves aimed at developers.
Let’s start with the enterprise-focused tools that get data to AWS’ cloud. AWS Storage Gateway, a service that connects on-premises and cloud-based data, added a hardware appliance that a company can install in its own data center or remote office. The service addresses storage needs for a range of hybrid cloud architectures – backup, archiving, disaster recovery, migrations and links to AWS analytics tools. This appliance opens Storage Gateway to non-virtualized environments, and comes on a pre-loaded Dell EMC PowerEdge server at a cost of $12,250.
AWS has emphasized database migrations in recent years to lure corporate clients to its public cloud, either through lift-and-shift approaches or transitions to its native, managed services. That continued in September, as Database Migration Service added DynamoDB as a destination for Cassandra databases and Server Migration Service upped the size of data volumes it can handle from 4TB to 16TB.
Speaking of databases, Amazon Aurora continues to get a lot of attention. A month after its serverless flavor became generally available, users now can start and stop Aurora database clusters, a feature geared toward test and development. Another Aurora feature, a Parallel Query tool, opens the managed service to some analytical queries. This could limit the need for a data warehouse service, but there are lingering concerns that AWS has spent too much time on interesting new features and not enough time on core functionality.
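The start/stop capability lends itself to simple scheduling for dev/test clusters. A minimal sketch, where the working hours and the cluster identifier are placeholders and the boto3 calls shown in comments follow the RDS API:

```python
from datetime import time

def should_stop(now, workday_start=time(8, 0), workday_end=time(19, 0)):
    """True if a dev/test cluster should be stopped (outside working hours)."""
    return not (workday_start <= now <= workday_end)

# Invoked on a schedule (e.g. from a cron-triggered Lambda), the actual
# RDS calls would look like this; "dev-cluster" is a placeholder:
#   rds = boto3.client("rds")
#   if should_stop(datetime.now().time()):
#       rds.stop_db_cluster(DBClusterIdentifier="dev-cluster")
#   else:
#       rds.start_db_cluster(DBClusterIdentifier="dev-cluster")
```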
Developer tools raise eyebrows
Two other AWS updates in September may pique developers’ interest, or leave them scratching their heads.
CloudFormation Macros processes templates in much the same way as the Serverless Application Model (SAM), which prescribes rules to define infrastructure as code, but Macros enables custom transformations handled by Lambda functions within a user’s account.
And for Microsoft shops, AWS Lambda now supports .NET developers who want to manage or automate scripts, through support for PowerShell Core 6.0. We’ll have more on these features in the coming months, but for now, at least one group of users is a bit confused by the Macros feature and says it will stick with Terraform instead.
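The macro contract itself is simple: the backing Lambda function receives a template fragment and returns a transformed one. A minimal handler sketch, where the event and response fields follow the documented macro interface and the transform (adding a default Description) is purely illustrative:

```python
def handler(event, context):
    """Lambda handler for a CloudFormation macro.

    CloudFormation invokes this with the template (or section) to
    transform in event["fragment"]; the response must echo requestId,
    report a status, and return the rewritten fragment.
    """
    fragment = event["fragment"]
    # Illustrative transform: ensure every processed template has a Description.
    fragment.setdefault("Description", "Processed by demo macro")
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": fragment,
    }
```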
Updates to security, partnerships
On the security front, admins can now use YubiKey security keys for multi-factor authentication. Network Load Balancers and AWS PrivateLink support AWS VPN, which means an enterprise has more options to build an AWS hybrid cloud architecture where on-prem workloads can privately access AWS services.
AWS also expanded its partnership with Salesforce, with tighter integration of services for companies that rely on both providers. And yes, you can use Lambda functions to move trigger actions between the two environments. The two cloud giants have worked together for years, including the $400 million deal Salesforce signed in 2016 to use AWS services.
And stop me if you’ve heard this before, but a Wall Street analyst called for Amazon to split its retail and AWS businesses. As always, the hope is to avoid regulation, boost the value for shareholders and insulate them against the potential struggles of one of the business units.
Amazon executives haven’t responded to the critique, but last winter AWS CEO Andy Jassy said there’s no need to spin off his company. He brushed off the “optics of the financial statements” and said there’s real value in having internal customers that aren’t afraid to share their feedback.
The end of summer is typically slow for the IT world, but AWS this month continued to expand its hoard of instance types, and lay the groundwork for a future where its customers won’t even bother with VMs.
The cloud vendor rolled out more instance options and made Amazon Aurora Serverless generally available. And as the annual VMworld user conference closed out the month, the company advanced the ability to run the VMware stack on AWS, and perhaps more importantly, run AWS on-premises with VMware software.
The T3 instance is the next generation of burstable, general-purpose VMs available in EC2. It’s 30% cheaper than the T2 and supports up to 5 Gbps in network bandwidth. The T series of VMs, first added in 2010, is designed for smaller applications with large, infrequent spikes in demand. Like the previous two generations, the T3 comes in seven sizes, with varying amounts of memory and baseline performance.
The T3 is the latest instance type to rely on AWS’ Nitro system. It is hardware-virtual-machine-only and must be used within a Virtual Private Cloud.
AWS also added two instance sizes to Amazon Lightsail, its virtual private server offering. The 16 GB and 32 GB iterations are the largest Lightsail instances yet, and their additions coincided with a 50% price drop on all other existing Lightsail instance sizes.
There appears to be little cadence to AWS’ instance type expansion, but the cloud giant shows no signs of slowing down. Those additions came just weeks after AWS rolled out the z1d, R5 and R5d instances in late July.
Serverless vs. VMs
At the same time, AWS moved Aurora Serverless out of preview. The highly anticipated version of its fastest growing service, first announced last November, enables users to provision and scale database capacity while AWS manages all the underlying servers.
The GA of Aurora Serverless has limitations, however. It’s only available for MySQL and in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland) and Asia Pacific (Tokyo). AWS says it will continue to add regional availability in the coming year. AWS originally said the PostgreSQL version would be available in the back half of this year, but hasn’t updated that timeframe since Aurora Serverless first went into preview.
EC2 continues to host the vast majority of AWS workloads, but it will be worth watching how long AWS remains on these parallel paths of additional VM variety and VM-free services. Many industry observers expect the latter will eventually overshadow the former. AWS has hedged its bets with a container strategy that was slightly late to the game, but even there serverless gets equal footing.
AWS started with serverless in 2014 with the addition of Lambda functions, and is still largely seen as the predominant player in this space, but it will have to maintain that edge without one of its key contributors. Tim Wagner, who oversaw the development of Lambda, was hired by Coinbase, a digital currency exchange, to be its vice president of engineering. Wagner was general manager for AWS Lambda, Amazon API Gateway and AWS Serverless App Repository at the time of his departure.
AWS: coming to a data center near you
And finally, AWS deepened its ties to VMware in more ways than one during VMworld. VMware Cloud on AWS, a service jointly developed by the two vendors but sold by VMware, added tools to simplify migrations from on premises to AWS and to manage workloads post-migration. VMware also cut the price of the service in half, which could attract organizations still on the fence.
What surprised many industry observers was AWS’ continued march beyond its own facilities. AWS will sell and deliver Amazon Relational Database Service on VMware inside users’ own data centers. The on-premises version of the AWS database service will handle management, provisioning, patches and backups. It will also make it easier for VMware customers to migrate their existing databases to Amazon’s public cloud.
As is often the case, AWS’ yearly Summit in New York provided the scene for some additional features and functionality. While AWS cloud features are unveiled regularly, the free-to-attend conference generates excitement among cloud newcomers and experienced shops alike.
SearchAWS.com attended the Summit, and reported on various new AWS cloud features and trends, including:
- New capabilities for AWS Snowball Edge — a boon for enterprises with edge computing needs — as well as a boost to S3 performance, new EC2 instances and a Bring Your Own IP feature;
- Early adoption patterns for Amazon Elastic Container Service for Kubernetes (EKS), as well as attendee reactions to an EKS workshop, and how AWS might improve the service moving forward; and
- A one-on-one discussion with Matt Wood, AWS’ GM of Deep Learning and AI, regarding new SageMaker features, enterprise AI challenges and the ethics of facial recognition technology.
ALB gets its actions together
The continued push from HTTP to HTTPS also gives AWS customers an easy way to meet their compliance goals.
Application Load Balancer’s (ALB) content-based routing rules now support redirect and fixed-response actions in all AWS regions, which fills two big networking needs for users. Redirect actions enable an ALB to readdress incoming requests from one URL to another, such as from an HTTP to an HTTPS endpoint, which helps organizations improve security and search ranking. Fixed-response actions enable an ALB to answer incoming requests rather than forward the request to the application, for example to send custom error messages when a problem occurs.
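In boto3 terms, the two new actions are just additional action types passed to elbv2’s `create_rule`. A sketch with illustrative values (the port, status codes and message body are examples, not requirements):

```python
def http_to_https_redirect():
    """Redirect action: send HTTP requests to the HTTPS endpoint."""
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",  # permanent redirect
        },
    }

def maintenance_response():
    """Fixed-response action: answer directly without hitting the app."""
    return {
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "text/plain",
            "MessageBody": "Service temporarily unavailable",
        },
    }

# With a real listener ARN and conditions, the rule would be created via:
#   boto3.client("elbv2").create_rule(ListenerArn=..., Priority=1,
#       Conditions=[...], Actions=[http_to_https_redirect()])
```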
EFS gets a performance boost
For users that encounter Amazon Elastic File System (EFS) performance issues, some relief has arrived.
Provisioned Throughput for Amazon EFS enables a developer to dynamically adjust throughput in accordance with an application’s performance needs, regardless of the amount of data stored on the file system. While users previously could burst EFS throughput for applications with more modest needs, the Provisioned Throughput feature suits applications with more strenuous needs.
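Switching a file system from the default bursting mode to provisioned throughput is a single `update_file_system` call in boto3. A sketch, with a placeholder file system ID:

```python
def efs_throughput_params(fs_id, mibps):
    """Request body for efs.update_file_system() to set provisioned throughput."""
    return {
        "FileSystemId": fs_id,
        "ThroughputMode": "provisioned",       # vs. the default "bursting"
        "ProvisionedThroughputInMibps": mibps,  # independent of stored data
    }

# "fs-12345678" is a placeholder; with credentials configured:
#   boto3.client("efs").update_file_system(**efs_throughput_params("fs-12345678", 100))
```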
DevSecOps gets more robust
Amazon GuardDuty, one of two AWS security services that relies on machine learning, could see more widespread adoption after it gained an important integration with another service.
GuardDuty now works with AWS CloudFormation StackSets, which enable an enterprise security team to automate threat detection across multiple accounts and regions. CloudFormation automates the provisioning of infrastructure and services, giving enterprises the ability to quickly and efficiently watch for threats.
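The template a StackSet pushes to each target account and region can be as small as a single GuardDuty detector resource. Sketched here as the Python-dict equivalent of the JSON template (the resource type and Enable property are the documented ones; everything else is boilerplate):

```python
import json

# Minimal CloudFormation template: enable GuardDuty wherever the
# StackSet deploys it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Enable GuardDuty in every targeted account/region",
    "Resources": {
        "Detector": {
            "Type": "AWS::GuardDuty::Detector",
            "Properties": {"Enable": True},
        }
    },
}

print(json.dumps(template, indent=2))
```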
For the good of the hack
A pair of upcoming AWS hackathons aims to put developer brainpower to work for socially conscious causes.
Developers can enter two hackathons — one focused on serverless applications and another on Alexa skills — that offer cash prizes for imaginative projects focused on social good. The Amazon Alexa Skills Challenge offers various cash and participation prizes for apps that use a voice command interface, while its Serverless Apps for Social Good hackathon seeks AWS Serverless Application Repository projects that combat human trafficking.
Chaos ensued shortly after the start of Amazon’s heavily advertised Prime Day retail initiative on July 16, as customers could not access product pages. The Amazon disruption, attributed to a software issue within Amazon’s internal retail system, was severe enough that the company temporarily killed off international traffic.
According to a CNBC report citing internal documents, Amazon manually added server capacity as traffic surged to its retail site, which points to its Auto Scaling feature as the potential culprit that affected its internal Sable system. While the disruption generated negative press for the retail giant, which touted its internal readiness and scalability for Prime Day as recently as last year’s re:Invent conference, it still reported sales of more than 100 million products.
While AWS experienced intermittent errors with its Management Console that afternoon, the company says AWS infrastructure and services were not involved with the Prime Day snafu.
With the VMware-AWS partnership, some customers were either reluctant or unable to use earlier versions of the service. Now, with the passage of time and corresponding product maturity, customers with tight compliance requirements might want to review the offering.
VMware Cloud on AWS (VMC) has made some recent advances. First, VMware said the service would soon be available in the AWS GovCloud region, which is typically restricted to public sector customers. The VMware-AWS service has been slow to expand globally – it is only available in four regions. The companies hope that it finds some takers among cash-strapped government agencies, which are typically slow to migrate to the cloud due to cost and regulatory concerns.
Speaking of regulatory concerns, VMC now offers a HIPAA-eligible hybrid cloud environment after passing a third-party evaluation. While HIPAA eligibility still depends on the manner with which an IT team manages cloud data and resources, healthcare providers could nonetheless see the VMware-AWS platform as a boon to their hybrid cloud operations.
Contain your enthusiasm
In other recent AWS news, Amazon released its answer to Google Kubernetes Engine, which came as welcome news to an eager base of container fanatics tired of standing up and managing the necessary infrastructure to support Kubernetes.
After it was introduced at re:Invent in December last year, Amazon Elastic Container Service for Kubernetes (EKS) became generally available in early June. EKS manages Kubernetes clusters for users and provides some potential benefits for AWS customers, such as high availability and support for load balancers and Identity and Access Management. EKS could make it easier to run microservices apps, perform batch processing or even migrate applications — that is, of course, if you’re willing to pay a bit more for the managed service.
Close your Windows
Amazon’s desktop as a service offering once only offered Windows options for operating systems (OSes). Now, as it does with so many of its other services, AWS has dangled a carrot to lure you away from Microsoft.
Amazon WorkSpaces added support for its Amazon Linux 2 OS, which the IaaS provider designed to handle a variety of cloud workloads. Amazon Linux WorkSpaces could help IT allot CPU and memory more efficiently, thus reducing costs. Based on the MATE Desktop Environment, Amazon Linux WorkSpaces purports to offer benefits for developers and ops alike, such as support for tools like Firefox, Evolution, Pidgin and Libre Office, as well as a better development environment and support for kiosk mode. Though, as some users pointed out, WorkSpaces still lacks a Linux client.
A wish fulfilled — finally
AWS Lambda also added support for Amazon Simple Queue Service (SQS) triggers — a longtime request from serverless developers. In the process, one of AWS’ older services, SQS, now works with one of its newer technologies, Lambda.
SQS messages can now trigger Lambda functions across distributed cloud systems. This integration eases processes like monitoring and error retries, which were previously made difficult by the workarounds many developers introduced to trigger functions from messages. But developers should set Lambda concurrency controls to avoid hitting account limits, and the integration does not yet support FIFO queues.
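On the Lambda side, the new event source delivers a batch of messages in a single invocation. A minimal handler sketch (the `Records[].body` shape follows the documented SQS event format; the `order_id` payload field is hypothetical):

```python
import json

def handler(event, context):
    """Process one batch of SQS messages delivered to Lambda."""
    processed = []
    for record in event["Records"]:          # one entry per SQS message
        message = json.loads(record["body"])  # body arrives as a string
        processed.append(message["order_id"])  # hypothetical payload field
    # Raising an exception here instead would make Lambda return the
    # whole batch to the queue for retry.
    return {"processed": processed}
```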
The first-ever cloud associate’s degree
As part of its Public Sector Summit in Washington D.C., AWS revealed a partnership with Northern Virginia Community College, which will offer a cloud computing specialization as part of its Information Systems Technology associate’s degree. Teresa Carlson, AWS’ vice president of the worldwide public sector, said it was the first ever cloud associate’s degree. As part of the course work, students receive access to the AWS Educate program.
Infrastructure was AWS’ focus in May, as the cloud provider made good on several of its promises with features that provide more diverse compute options — including some that directly challenge two of its biggest foes.
Customers in five regions can now use EC2 bare-metal instances, which enable access to the memory and processor that runs those instances. Released into preview at re:Invent last year, these EC2 bare-metal instances compete against similar offerings and services from Oracle and IBM. While only available for the I3 storage-optimized instance family, these bare-metal instances can fit a variety of use cases, such as workloads restricted by licenses or lack of support for virtualized instances, and they provide a higher degree of hardware control than previously available.
In addition to several new AWS IaaS features geared toward EC2 instance management, AWS also this month added NVMe storage for its C5 instance family. These instances boost I/O to local storage to help developers take advantage of all available compute capacity. While only available for the C5 family right now, AWS said it plans to introduce NVMe storage to more instances in the coming months.
AWS presses play on Lambda IoT service
AWS’ latest IoT service attempts to make its platform literally push-button simple.
As its name suggests, AWS IoT 1-Click simplifies Lambda triggers to manage simple devices, and perform actions such as send alerts or flag items for inspection.
AWS IoT 1-Click currently supports only two push-button triggers: AWS IoT Enterprise Button (formerly the AWS IoT Button) and the AT&T LTE-M Button, which connect over Wi-Fi and AT&T’s cellular network, respectively. These devices come with their own certificates to protect communication to and from the cloud, and they encrypt outbound data via TLS. In the future, however, AWS plans to support various types of push-button devices, asset trackers, card readers and sensors.
An alien database option
If men are from Mars and women are from Venus, perhaps graph databases can be from Neptune.
Released into general availability in late May across four regions, Amazon Neptune enables developers to build and maintain high-performance graph databases that scale to store billions of relationships between connected datasets. AWS positions Neptune as an ideal database option for modern applications, which increasingly require large amounts of unstructured data storage and consistently low latency across the globe. Neptune supports both the Property Graph and W3C RDF graph models, along with their respective query languages, Apache TinkerPop Gremlin and SPARQL.
Throughout its preview, Amazon said customers used Neptune to build interactive applications that include social networks, fraud detection systems and recommendation engines.
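A social-network query against Neptune's RDF model, for instance, could be expressed in SPARQL. The prefix, resources and predicate below are hypothetical, purely to illustrate the query style:

```sparql
# Find everyone that ex:alice follows (illustrative data, not a real endpoint)
PREFIX ex: <http://example.org/>

SELECT ?friend
WHERE {
  ex:alice ex:follows ?friend .
}
```

The same relationship could be traversed on the Property Graph side with a Gremlin query along the lines of `g.V('alice').out('follows')`.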
Cracking down on domain abuse
Global AWS customers that want to evade censorship in certain countries were dealt a blow earlier this month, as the cloud provider followed in Google’s footsteps and switched off domain fronting for its CloudFront service. Domain fronting conceals an app’s true destination behind a cloud CDN: the client connects under one, innocuous domain name, and the request is then routed to a different domain once the connection is established. That indirection helps apps evade censorship, but it is also a popular way for attackers to obfuscate the origin of their malware traffic.
The cloud providers’ decisions come after the Russian government in April attempted to block instant messenger app Telegram, which had moved to AWS infrastructure. In doing so, Russia also blocked millions of Amazon and Google IP addresses, including many legitimate web services and companies.
Amazon’s decision to block domain fronting is in line with its terms of service, and AWS said it already polices such violations. The crackdown also aims to roll more protections directly into the CloudFront service and API.
Breathe easier, serverless application developers: your lengthy wait is over, as AWS Lambda has added support for a newer Node.js runtime.
After a year’s wait, AWS developers can now use Node.js version 8.10, which enables a number of AWS Lambda features that were on their wish lists. The async/await pattern makes it much easier to implement asynchronous calls without muddying up the code with nested callbacks or promise chains, which can make it difficult to read. The support update also simplifies error handling, which further reduces unnecessary code, and it offers faster runtime and render speeds.
In the past, AWS has been slow to add Lambda support for other languages as well, including Python, though Amazon’s lengthy code review process, which ensures no potentially damaging code exists in releases, is a big reason for that delay.
Meanwhile, a pair of new security tools debuted at AWS’ yearly San Francisco Summit in early April.
The AWS Secrets Manager service abstracts the manual process to store, manage and retrieve encryption keys, database credentials and other secrets. The service saves the time and cost of standing up infrastructure specifically to manage secrets, a task complicated by increasingly distributed applications. Secrets Manager also enables you to rotate credentials automatically with a Lambda function.
With AWS Firewall Manager, admins can define and apply AWS WAF (web application firewall) security rules across multiple cloud applications and accounts. The service centralizes security management, which enables grouped control, improves visibility into attacks on Application Load Balancers and CloudFront workloads, and helps enterprises adhere to compliance requirements.
Living on the edge
Two AWS offerings became generally available in April to help enterprises process IoT data more quickly, each in a different way.
AWS IoT Analytics enables users to process raw data directly from IoT devices and sensors. For some enterprises, however, the cost of data transfers is prohibitive, so it’s appealing to preprocess data before it reaches the cloud. With AWS Greengrass ML Inference, an enterprise can deploy cloud-trained machine learning models on connected devices to run inference against locally collected data. Combined, these two offerings enable real-time data processing at the edge and more detailed analytics when chosen data reaches the cloud.
One other service update doesn’t open up new AWS Lambda features for serverless developers, but it does open up the code base and removes a barrier to automation.
AWS open-sourced its Serverless Application Model (SAM) implementation, which developers use to define the resources that CloudFormation stacks spin up. Previously, developers had to submit feature requests to AWS, which would then change the implementation itself. With an open source SAM implementation, developers can more quickly specify new features and enhancements, and then build serverless apps on top of them.
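For context, a SAM template is a thin layer over CloudFormation. A minimal sketch, with hypothetical resource names and paths, looks like this:

```yaml
# Minimal SAM template: one Lambda function exposed through an API endpoint.
# Resource names, paths and CodeUri here are illustrative only.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

On deployment, CloudFormation expands the Serverless transform into the full set of underlying resources, such as the function, an API Gateway endpoint and an IAM role.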