Has hosting and cloud leader Rackspace found a way into the private cloud fray?
A new service offering by Rackspace Cloud Builders will come to you, wherever you may be, and install a bunch of hardware and software that qualifies as cloud; a definite new twist on Rackspace’s usual hosting, co-lo and cloud sales model.
To boot, Rackspace will offer pre-certified arrangements of hardware to go with OpenStack, although it is shy about using that term. At Cloud Connect today, we are told, Dell, Opscode and Rackspace demonstrated ground-up cloud building, which presumably involves some rackable Dell gear and some bootable media.
I must admit to some curiosity: Did every demo start from scratch? Is it cheating if you don’t zero the drives for every “cloud build”? I’d be booting servers off images on my laptop, so presumably it’s only cheating if they changed the BIOS settings.
Rackspace’s new Cloud Builders division consists basically of Anso Labs, acquired last year by Rackspace specifically for its on-prem capabilities. Rackspace claims it can pick and choose from more than 50 companies in the OpenStack project with which to build a private cloud that will be “cost competitive with Amazon,” if customers will only let Rackspace build the cloud environment.
First of all, that’s ridiculous. Nobody operating a private cloud can compete on cost with Amazon Web Services (AWS). At most, Rackspace might get operating costs down to the point where one could pitch an internal private cloud deployment vs. AWS when security, compliance and governance are taken into account. And companies are not even going to bother to try that until their current infrastructure is fully depreciated.
What this really is, is a way for everybody involved in OpenStack to put up a viable alternative to cloud-in-a-box offerings from premium IT vendors. It’s a poor man’s VCE, and it’s silly to pretend otherwise.
The writing’s on the wall for IT operations that are not moving off premises for new infrastructure deployments. It’s going to be dense, converged and virtualization-ready; IBM, HP, VCE and now maybe Oracle have a lock on that business. New buys for big hardware will skip right past “converged” for “cloud,” since it’s functionally the same thing with a new set of management tools at the hardware level.
Those big IT vendors sell support, support, installation, and more support. And it’s not cheap — in fact, it’s downright larcenous. In some cases, an Oracle Exalogic with Sun tools, Linux and Oracle database will run $4 million, plus a support contract that might be 30% of your licensing fees (according to a floor rep last year). Plus, you can’t ever, ever switch after you buy. A fully loaded Vblock is plenty comfortable in that price scale, too.
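To put those numbers in perspective, here’s a back-of-the-envelope sketch of what a big-vendor box costs over three years. The $4 million sticker and the 30% support figure come straight from the floor-rep anecdote above; applying the 30% to the whole sticker (rather than just the license portion) is a simplifying assumption, so treat the result as rough.

```python
# Rough three-year cost of an Exalogic-class "cloud in a box".
# $4M sticker and 30%/year support come from the article's floor-rep
# anecdote; applying 30% to the full sticker is a simplification.
hardware_and_licenses = 4_000_000
annual_support_rate = 0.30
years = 3

support_total = hardware_and_licenses * annual_support_rate * years
three_year_cost = hardware_and_licenses + support_total
print(f"Support over {years} years: ${support_total:,.0f}")
print(f"Three-year total: ${three_year_cost:,.0f}")
```

Even on these loose numbers, support alone nearly matches the original purchase price, which is exactly the gap a rack of commodity gear and open source software aims at.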
I don’t know about you, but I can fill a rack or two with blades, storage and open source software for a bit less than that kind of cash. I’m betting Rackspace sees the same opportunity (they’re the commodity operations experts, after all).
So let’s not beat around the bush; the service announcement and hardware demos are really about taking a crack at staying alive in big enterprise IT shops, but that’s a harder sell than telling customers they might be able to do what AWS does if they just know the magic words.
Verizon is taking another run at selling SAP CRM as a service, this time with SAP’s Rapid Deployment Solution (RDS) on real cloud infrastructure. A year ago the two companies were selling the full SAP CRM package on traditional hosting. Imagine two old guys at a rave and you’ve got the picture. No one came to that party.
The new offering, available in the US only, is designed to be up and running in eight weeks, a much faster turnaround than traditional SAP installations, which can take months. It’s priced on a per-user, per-month subscription that Verizon said should work out to about $100 per seat. There’s also an implementation fee of $80,000 to $220,000, depending on which modules the customer wants.
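The pricing above is easy to run through for a hypothetical deployment. The $100/seat/month rate and the implementation-fee range are from the announcement; the 200-seat example is purely illustrative.

```python
# First-year cost of the Verizon/SAP RDS CRM offering for a given seat
# count.  $100/seat/month and the fee range are from the announcement;
# the 200-seat deployment is a made-up example.
def first_year_cost(seats, implementation_fee, per_seat_monthly=100):
    return implementation_fee + seats * per_seat_monthly * 12

low = first_year_cost(200, 80_000)    # cheapest module set
high = first_year_cost(200, 220_000)  # most expensive module set
print(low, high)
```

For a 200-seat shop, the implementation fee alone is a quarter to nearly half of the first-year bill, which matters when the competing pitch is pure subscription SaaS.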
SAP has other products in its RDS bag including supply chain, product development and HR software that Verizon expects it will also sell as cloud-based services at some point.
It’ll be interesting to watch and see if Verizon gets any traction selling this SaaS offering. It has enormous network reach and is developing cloud infrastructure anyway, so there’s very little financial risk in it selling Software as a Service.
For SAP, building out cloud infrastructure to sell its applications as a service would be hugely expensive; better to partner with the major network operators and let them sell it for you.
Salesforce.com is the main competition here, with several years’ head start. Can the old guys catch up?
Data center and Internet Business Exchange (IBX) giant Equinix has taken a bold step up the stack in a way that may show what enterprises are expecting as they look at private cloud…and also discomfit some of Equinix’s best customers. Equinix is integrating compliance management systems from Carpathia Hosting, a long-time Equinix partner, into its own operations to sell directly to its co-lo customers. The company calls the relationship “unique.”
Equinix has been fairly strict in the past about remaining vendor-neutral as a data center floor space provider. It doesn’t push customers onto particular hardware or management systems (beyond racks and power); it just pitches its expertise in interconnection and reliability. But with Carpathia’s compliance tools and advanced virtualized networking capabilities, it’s definitely moving beyond floor space. Equinix says the initial target is highly regulated, “complex compliance” verticals like federal IT and health care providers, which is what Carpathia specializes in. The company added that customers are asking for this.
Greg Adgate, GM of the Global Enterprise Segment for Equinix, said that users with strict auditing needs wanted more integration at the first layer of infrastructure. They come to Equinix looking to co-locate and own their IT, but as federal guidelines on compliance get stricter, they have a need for advanced GRC tools like the ones Carpathia provides (through a variety of means including other vendors, like Vyatta).
Adgate said that Equinix and Carpathia had worked hard to integrate Carpathia’s compliance expertise, which is largely around documenting processes within the machine environment, into its data center operations. Equinix had deliberately avoided that side up until now.
Cloud computing comes in because Carpathia has some interesting offerings designed around delivering compliant infrastructure as a cloud in its hosting business, and it sees a massive, policy-based push to cloud services and cloud platforms in the federal government and health care. U.S. CIO Vivek Kundra wants 20% of the federal IT infrastructure on or in a cloud pretty much now. Legislative mandates (and funds!) are driving adoption of electronic health records (EHRs) and health information exchanges (HIEs).
Carpathia’s not giving up its hosting business. CTO Jon Greaves said the firm has spent the last year building out sophisticated ways to use Equinix’s IBX data centers, where network traffic and connections are traded and exchanged around the world, to make services that operated like clouds but looked like compliant operations. Now Equinix and Carpathia can either sell that technology directly to co-lo users or pitch Carpathia’s managed services as an alternative. Greaves said users can even buy Carpathia’s networking tools and connect to other public cloud providers in secure ways.
Courting cloud computing providers and enterprise investment is a natural evolution of Equinix’s strategy. At some point, said Tier1 analyst Doug Toombs, it had to start delivering more than power and floor space. He said the company’s position as an independent provider still held.
Toombs said the real strength of Equinix is its global network of internet exchanges, which service every major telecom and many others besides, so consumers have a choice on communications providers. That remains staunchly neutral.
Verizon, for instance, pitched its cloud services as enterprise-ready because you could completely avoid the public internet; Verizon could simply put you on a backhaul link if you didn’t want to use the internet for infrastructure. However, if you didn’t like Verizon, too bad. Equinix can now offer a similar enticement, with the advantage of not getting trapped with one telecom.
Competitors Rackspace and Savvis have seen the writing on the wall for a while now, and offer all kinds of tools and management products beyond floor space and uptime, and it shows. Both firms reported double-digit growth in customers and revenue last year; Equinix, with its co-lo pure play, got killed and had to lower its EBITDA guidance.
But how will this sit with Equinix’s service provider customers, especially the cloud providers? Integrating complex compliance tools at the bottom layer of infrastructure means Equinix can offer the enterprise something Amazon Web Services and OpSource (both giant Equinix customers) can’t match. Those clouds can maintain their own certifications; with Carpathia in its operations, Equinix users can meet their own audits. Equinix says it’s not encroaching on its customers’ business.
“If you look at the complex compliance market, there really isn’t anyone else doing it so we don’t think we’re taking down any of our customers today. The complexity of it drives a price point that resonates with the price point we bring,” said Greg Adgate. That’s a high price point; Equinix is not cheap.
However, everybody knows that enterprise IT is coming to the cloud and bringing the big money stakes. After all, even if they own their infrastructure, lots of enterprises co-lo and still consider that “in-house infrastructure,” and if they want cloud, they’ll look twice at a floor space operator that can do secure, private cloud-y stuff. “There’s some obvious segments like fed and health care but the market opportunity is much bigger than that,” said Adgate.
Amazon did not respond to requests for comment about its data center provider. Salesforce.com doesn’t do sensitive infrastructure, so it’s rapidly sinking into the SaaS morass and doesn’t matter. OpSource also hosts in Savvis, so it’s probably fine.
The Enomaly-powered cloud brokerage service SpotCloud has been chugging along for some time, rolling on-demand virtual machine providers into its fold. It’s set to go live next week as a clearinghouse for cloud computing resources, a bit like a rudimentary RightScale, except that unlike RightScale it handles the cash. Sellers pay a brokerage fee and set their own prices based on whatever the hell they feel like.
That’s neat: everyone’s been saying cloud computing is going to make infrastructure a commodity, so along comes the commodities broker to match the bulk producer with the consumer. It’s been tried before: Deutsche Telekom spun out Zimory as a cloud brokerage a few years ago. It wasn’t successful, and it’s withered on the vine into a weird sort of cloud management/infrastructure delivery thingie.
SpotCloud might have a fighting chance, though, mostly because it’s arriving about three years later. Virtualization is much more widespread at the service provider/data center level, the market for public cloud is robust, and so on.
It also has actual capacity to offer. SpotCloud says it’s got a solid 10,000 physical servers committed to the platform from scores of operators all over the world. Right there, that puts it at a comparable scale with Rackspace and Amazon’s cloud environments.
It works the same way those do: log in, pick a server, launch it, pay for it. It’s incredibly light on features; ‘bare-bones cloud’ comes to mind. But you want a server somewhere? Get a server. Essentially without lifting a finger, SpotCloud just built out one of the largest contiguous cloud computing environments in the world. It remains to be seen whether sellers can offer prices that actually tempt buyers.
That’s where the surprising news comes in. One, SpotCloud now supports VMware; Enomaly CTO and SpotCloud creator Reuven Cohen said they were basically forced into it. Originally SpotCloud only worked with Enomaly-powered clouds, but that had to change, since the overwhelming majority of sellers with capacity to offer were on VMware. “Honestly I was a little surprised…VMware is basically everywhere,” he said.
Cohen said he pitched his platform as free and easy, just park it on servers that are only getting used for SpotCloud anyway and away you go, but VMware was king. The other surprise was that the sellers on SpotCloud aren’t all up to date, cloud-ready hosters with unused capacity they want to monetize. Cloud computing firms aren’t the sellers on the cloud computing brokerage.
“It was really, uh, little guys, other types, guys you would not associate with the cloud,” Cohen said. In other words, the excess virtualized capacity was floating around the hosting world. It’s floating around in data centers and colos of all stripes and sizes, from Joe’s Basement Hosting Shoppe to operators stuck with dead weight investments.
Cohen said the pitch had gone from “Join the cloud, get in on the action” to “Clean out your closets, the server-man is here.” One large Australian data center had jumped the gun on a customer project and ended up with $3 million worth of infrastructure sitting dark. The client had bailed; the operator had 2 TB of RAM and servers just sitting there. It signed the capacity up for SpotCloud, and at least it’s got a chance; any money is better than none on dead weight, and it can pull out if it finds a better use for the gear. Another example was an LA-based data center that served the entertainment industry.
It’s interesting to me that what is, for all intents and purposes, one of the largest true IaaS clouds a) popped up overnight and b) is built out of scrap iron and used tires. That’s definitely a validation of the cloud computing model and how far we’ve come on platform. The real test though is whether anyone is going to buy enough of it to matter, since a commodity is by definition easy to get a hold of.
“It’s been harder finding the buyers,” admitted Cohen. Color me unsurprised.
Back in August we reported that pharmaceutical giant Eli Lilly was looking for cloud providers in addition to Amazon, in search of better support.
Lilly has picked Indiana-based hosting and cloud provider BlueLock, according to a wink and a nudge from Twitter sources close to BlueLock. The hoster is in bed with VMware and the vCloud initiative, providing all the bells and whistles that go along with the vCloud Datacenter service.
Presumably Eli Lilly was willing to pay more for better support than it had been getting at AWS?
We reached out to all parties for comment and got these responses:
“Thanks for contacting us about this topic. Unfortunately, due to conflicting priorities we aren’t able to participate,” said Carole Copeland, corporate communications manager for Eli Lilly, via email.
And this response from BlueLock:
“We will have to kindly decline to comment. It is already public knowledge that Eli Lilly has been in discussion with a few cloud players (BlueLock included), however the discussion and any outcomes are between the companies involved due to competitive and partnership reasons.”
It’s unfortunate that everyone is keeping tight-lipped on this cloud implementation, as it would be really helpful to understand and learn from what happened here. It’s clear that terms of service and SLAs played an important role in Eli Lilly’s decision.
A new report from the 451 Group paints a picture of cloud computing poised to spread but bound by limits of infrastructure. It’s still a tiny fraction of the IT market but advanced Asian governments are promoting cloud, with government-run data center and technology parks courting cloud development projects in Malaysia, Singapore, Hong Kong and elsewhere.
Communications and local government initiatives are two major forces driving the deployment of clouds in developing Asia, said Agatha Poon, research manager on global cloud computing for 451. Many parts of Asia are very technologically advanced, often well over what anywhere in the Americas can boast — South Korea is the most broadband-connected country in the world — but it’s localized in small pockets around strategic connection hubs.
There are massive dark (or dim) gaps where communication infrastructure is lacking, underpowered or evolving in weird new directions with mobile communications (like most of India). But, for example, last year Amazon Web Services (AWS) and IBM opened clouds in Singapore. AWS is (probably) in the giant Equinix facility and IBM in the Changi Business Park, run by the government’s Infocomm Development Authority of Singapore. AWS has an Availability Zone for its cloud there; IBM is doing something mysterious and high concept, naturally. Chinese telecoms are also experimenting with cloud platforms.
Poon says the market is gunning for SMBs who want new services based around cloud infrastructure; the large enterprises there are going to be more conservative, precisely the same pattern cloud is following in the US and Europe. The growth rate looks exponential: the 451 Group predicts the $1 million of cloud spending in 2009 will grow to $17 million by the end of 2011.
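A quick check on that forecast shows just how steep “exponential” is here. The $1 million and $17 million endpoints are from the 451 Group; the two-year span is my reading of “2009 to end of 2011.”

```python
# Implied compound annual growth rate behind 451's forecast:
# $1M in 2009 growing to $17M by end of 2011, treated as two years.
start, end, years = 1_000_000, 17_000_000, 2
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")
```

That works out to roughly quadrupling every year, which is plausible only because the starting base is so tiny.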
That’s small potatoes, but a significant vote of confidence. Poon said that businesses are going to get more options, but local ICT providers should gird their loins: if they don’t catch up or implement something cloud-like, they’ll get absolutely crushed by the big players in the region.
The big telcos in Asia are moving fast in the cloud space, Poon said, through partnerships and strategic investments. She pointed to South Korean telco KT, which is making an aggressive shift toward offering cloud infrastructure (compute and storage) through a variety of platforms, like Cloud.com and enStratus, and said the tech giants that supply the world with hardware, like Fujitsu and Samsung, are also gearing up to serve the cloud market.
It’s another interesting demonstration of the way the market for cloud works: it’s organic, demand-based, almost biological. It’s creeping out along the bright communication hubs, where the activity and the cash and the electricity-based nutrition are, and avoiding the leaner pastures.
That’s because cloud computing, properly done, minimizes the fears of risky investment. A provider can go where the action is and drop in a little bit. If it works, they can add some more. There’s no need for anybody, provider or user, to stand up a massive data center operation and just hope for the best. They can just take it, or leave it. As long as there’s a big pipe nearby, of course.
Eucalyptus has announced a technical partnership with Red Hat to bundle Deltacloud, Red Hat’s cloud platform project, with the much more mature Eucalyptus platform. Check out the Euca-Hat FAQ here.
Red Hat’s Deltacloud tools will function more or less as a cloud management layer when used with Eucalyptus; their strength is reportedly in enabling the use of multiple public cloud services and internal, private cloud resources in a single view: cloud management, much like enStratus does.
CEO Marten Mickos said in an interview that the user bases of the two companies are simpatico, and that’s why he wanted the deal. “We see a very good overlap; the same people who are downloading Eucalyptus are downloading Red Hat,” he said.
Of course, you can do that for free, so that’s a good sign of interest but not necessarily potential revenue. Mickos said the deal was a good way for Eucalyptus to broaden its appeal and look towards the next few years when, he said, enterprises will be moving almost universally to a hybrid cloud model.
Right now, the products will be offered by both companies as a cloud lineup, but support and updates will come from each company separately. Mickos said this was an opportunity for Red Hat as well.
“For Red Hat, it is great because it allows them to compete against VMware going forward,” he said. Red Hat gets a robust cloud platform and Eucalyptus gets a monster-sized install base. A match made in free/open source software (FOSS) heaven.
Could a buy be in the works? Eucalyptus says it is roaring ahead on customers and capitalized to the tune of $35 million, putting a potential sale price around a minimum $120 million (VC investors like to get four times their money back, goes the common wisdom). Cloud technology is definitely a niche product, but the Mickos MySQL pedigree could be worth a lot…
SimpleCDN had a simple premise: people will buy into a cheap, reliable content distribution network (CDN). Turns out it was too cheap, and maybe too cavalier with its choice of customers. As a result, the service was booted off its hosting provider, leaving thousands of users without access to massive amounts of digital content. It’s a microcosm of all the things that can go wrong in the cloud model.
SimpleCDN went dark for the majority of its customers on Saturday, Dec. 11, followed by an angry, terse explanation from Frank Wilson, senior engineer for SimpleCDN. He said that his company had been summarily booted from its hosting infrastructure at Texas-based SoftLayer, which does dedicated and cloud hosting in three locations in the U.S. SimpleCDN had bought its SoftLayer capacity from a reseller called 100TB.com, a subsidiary of the UK2 Group. Customers are being pushed to a competitor.
100TB.com offered unlimited, unmetered network access at no extra charge, unlike most hosting providers, claiming you could use up to 100 TB of transit every month and still pay only roughly average VPS hosting costs ($600 per month for a decent quad-core server).
“I think they were doing about 30 Gbps sustained at the end,” mused Jason Read, professional cloud watcher at CloudHarmony. Read recapped that SimpleCDN was doing business with a second-tier hoster at a level most people would consider a full-on DDOS, all day, every day. Read also added that SimpleCDN was able to offer its bargain prices based on 100TB.com’s marketing. “[SimpleCDN] was kind of milking that 100 TB unlimited bandwidth offer and I think SoftLayer told UK2 they had to amend their terms,” said Read.
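It’s worth converting Read’s estimate into the same units as the 100 TB cap to see the scale of the mismatch. The 30 Gbps figure is his; the 30-day month and decimal units are my assumptions.

```python
# Convert "30 Gbps sustained" (Read's estimate) into monthly transfer,
# using decimal units and a 30-day month.
gbps = 30
seconds_per_month = 30 * 24 * 3600               # 2,592,000 seconds
tb_per_month = gbps / 8 * seconds_per_month / 1000  # Gbit/s -> GB/s -> TB
print(f"{tb_per_month:,.0f} TB/month")
```

That's on the order of 9,700 TB a month, nearly a hundred times the 100 TB “unlimited” allowance, which makes it obvious why somebody in the chain was losing money.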
That may have happened, of course. SimpleCDN’s Wilson states that he thinks that SoftLayer was getting massively undersold on its own CDN business, which is much more expensive than SimpleCDN, or that they were getting slammed by his booming business.
“…our best guess currently is that these organizations could not provide the services that we contracted and paid for, so instead they decided that terminating services would be the best solution for them,” he said.
Of course, CDN services are an incidental part of SoftLayer’s hosting business, probably a tiny percentage of its revenue; something they offer because they can or because customers are asking for it. Many hosters do the same and consider it a value add rather than a critical part of the business. Same with UK2 — they were reselling Akamai as a CDN offering. In the CDN market, there’s Limelight and Akamai, and then there’s everyone else. SoftLayer also lives in Dallas, one of the world’s hubs for Internet connectivity. If they were running out of bandwidth, they could simply buy more and sell it to the UK2 Group. If anyone was getting killed in this deal it was UK2, the middleman.
What led to UK2 terminating SimpleCDN was the nature of the traffic it served. SimpleCDN hosted a lot of live video streams, and anyone who’s taken even a cursory look into it knows that there is a booming business in streaming pirated content to U.S. audiences; some Web communities have users that will post entire seasons of a TV show or movies to an online service for anyone to watch for free, unauthorized marathons that entertainment companies take a very dim view of.
CDNs know this, of course, and “monitor” their networks by pulling streams when they get a DMCA takedown notice, which doesn’t mean a thing to the 99 other pirate streams going at the same time. SimpleCDN even had an automated DMCA action form. Somebody out there got sick of playing whack-a-mole with SimpleCDN and went straight to the provider of record, SoftLayer.
SoftLayer would not comment officially for this story except to say that SimpleCDN was not their customer but rather UK2’s and they had no commercial relationship with SimpleCDN. However, they are still the ones hosting all this content and it can be assumed they got a DMCA notice, which they are going to take VERY seriously, because unlike UK2 or SimpleCDN, they have actual physical infrastructure and assets. They would have gone to UK2 and said, “We are holding you responsible for this.” Wilson’s letter says that UK2 accused him of content violations and changed their Terms of Service (ToS) on the fly to put him in violation.
Why did 100TB/UK2 change the rules of the game instead of duly passing along SoftLayer’s DMCA notice, as they should have done? SimpleCDN was murder on their bottom line. Their “free bandwidth” offer was a fiction when put to the test. SimpleCDN took them at face value, ran hundreds of servers and hosted thousands of terabytes of data with them, but it was gone in a flash, because it was based on a false economic premise and shady marketing.
So the lesson for cloud is twofold: one, “too good to be true” usually is. If SimpleCDN had started directly with SoftLayer, it would have had to pay those bandwidth costs and its prices wouldn’t have been so attractive. Likewise, the DMCA issues would have had one less hop.
Second, for the business user, it’s a new wrinkle in vetting a service. Popular online services haven’t been known to pop like a soap bubble and vanish overnight, taking massive amounts of data with them; that’s the province of shady warehouse distribution operations and basement stock brokerages. Now they do, fueled by the explosion of middlemen and easy access that drives cloud computing. It’s not enough to examine whether a provider is sound; you have to make sure you understand who they rely on too.
UPDATE: Both UK2 Group and SimpleCDN were contacted by phone and email for this article but neither responded by press time.
Startup CloudSwitch has released version 2.0 of its software, which lets enterprise users connect their private data centers with cloud computing services and, crucially, extends their internal security policies into the cloud.
With CloudSwitch, applications remain integrated with enterprise data center tools and policies, and are managed as if they were running locally, the company claims.
The new features in Enterprise 2.0 include:
- Provisioning of new virtual machines in the cloud (in addition to migration of existing ones), through:
  – Network boot support
  – ISO support (CD-ROM/DVD)
- Web services and command-line interfaces for programmatic scaling to meet peak demands
- Broader networking options to extend enterprise network topologies into the cloud:
  – Layer-2 connectivity with option for Layer-3 support through software-based firewall/load balancing in the cloud
  – Public IP access to cloud resources with full enterprise control
  – Multi-subnet support
- Enhanced user interface support for better scalability, control and ease of use
- Broader geographic coverage (Terremark vCloud Express & eCloud; Amazon EC2 East, West, EU and Asia Pacific regions)
CloudSwitch officials said the company has landed pharmaceutical giant Novartis and Orange San Francisco, a subsidiary of telecommunications operator Orange. It has about 10 customers in total, from pharma, retail and financial services. These customers are using CloudSwitch for a range of use cases, including cluster scale-outs, web application hosting, application development and testing, and labs on demand.
CloudSwitch Enterprise 2.0 is available now, with a free 15-day trial. Pricing begins at $25,000 for an annual license including basic support and up to 20 concurrent virtual machines under management in the cloud. Additional server packs are available for scaling. Cloud usage fees are paid separately to the cloud provider.
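For comparison shopping, the entry license is easy to reduce to a per-VM figure. The $25,000 and 20-VM numbers are from the announcement; assuming full utilization of all 20 slots is the illustrative part, and remember the cloud provider’s usage fees sit on top.

```python
# Effective per-VM cost of the CloudSwitch entry license, assuming all
# 20 concurrent VM slots are in use year-round.  Cloud usage fees,
# billed by the provider, are not included.
annual_license = 25_000
max_vms = 20
per_vm_month = annual_license / max_vms / 12
print(f"${per_vm_month:,.2f} per VM per month at full utilization")
```

At roughly $104 per VM per month before any infrastructure charges, the tool is priced for enterprises with compliance needs, not hobbyists.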
Despite blowback from its refusal to supply services to whistleblower website WikiLeaks, and controversy over congestion, uptime and customer service (or lack thereof), it seems that cloud computing giant Amazon Web Services (AWS) has never seen better days. The cloud provider is making new service and news announcements at a record clip. Here’s a roundup of the most significant recent announcements.
Cluster Compute GPU instances: Amazon made high-performance computing (HPC) headlines when it launched a special type of high-powered compute instance based on hardware normally found only in supercomputers. It’s impossible to fake or emulate the kinds of uses scientific and functional computing demands, so Amazon built a mini-supercomputer in its own Virginia data center and opened it up to the world. Now they’ve done it again, but this time it’s graphical processing unit (GPU) instances that are available.
Originally driven by the video game market, the chips that run video adapters have gotten so powerful that they’ve paved the way to new areas of HPC. Again, this is impossible to emulate, so Amazon has clearly laid out some serious capital to build a GPU-powered supercomputer to go along with the one that runs Cluster Compute instances.
This may speak to AWS’ operational maturity; their systems have evolved to the point where they can accommodate a variety of hardware in their billing, provisioning and management systems.
DNS in the cloud: Domain name servers, the street maps of the Internet, have long been the province of Web hosters and Internet service providers. They’re vital to delivering what customers want — Internet traffic — to the right place, and providers must be able to keep control of how traffic is distributed and account for that. Without access to DNS servers, you can’t properly control your email server, for instance.
Amazon, despite hosting one of the world’s signature collections of Internet traffic, hadn’t made DNS available to its customers; now it has. This might be due to the theory that, as a pure infrastructure host, its users should run their own. That’s led to angst when Amazon, the provider of record, gets blocked or banned by the Internet community for users’ misbehavior. It also effectively crippled the Elastic Load Balancing (ELB) service for many users, since they were unable to point their root domain at the ELB service. “Too complicated,” they were told.
The Route 53 DNS service lets users create zone files for their own domains, a bit like handing the steering wheel back to the network admin. It’s based on popular free software (of course), djbdns, and goes for $1 per zone per month. Most website hosters provide DNS service for free. Regardless, AWS users are largely ecstatic over this, although it does not fix the root-domain problem with Elastic Load Balancers quite yet.
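For readers who haven’t run their own DNS, “creating a zone file” amounts to publishing a handful of records like the sketch below. The names and addresses here are hypothetical, and Route 53 actually takes these records through web service calls rather than a literal file upload; the BIND-style layout is just the conventional way to picture a zone.

```
; Illustrative zone for example.com (all names and addresses hypothetical)
$TTL 3600
@     IN  SOA  ns1.example.com. admin.example.com. (
               2011030101 ; serial
               7200       ; refresh
               900        ; retry
               1209600    ; expire
               3600 )     ; negative-caching TTL
@     IN  NS   ns1.example.com.
@     IN  A    192.0.2.10
www   IN  A    192.0.2.10
@     IN  MX   10 mail.example.com.
mail  IN  A    192.0.2.20
```

The MX and A records are exactly the pieces a user couldn’t fully control before; without them, pointing mail or a bare domain at AWS-hosted services meant leaning on an outside DNS provider.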
UPDATE: Not all are ecstatic; Cantabrigian Unix developer Tony Finch writes a worthy critique of Route 53.
PCI DSS compliance in the cloud (sort of): This is a big deal to many. That’s because Visa and other credit card companies won’t let you take credit card payments, online or anywhere else, unless you can pass a PCI DSS audit. Until now, Amazon hadn’t been able to do that, shutting it out (in part) of the market for e-commerce applications. Then PCI DSS 2.0 was announced, adding new, vague, but apparently satisfactory guidelines for virtualization; weeks later, Amazon is PCI DSS Level 1 compliant.
Does this mean your new online business is automatically PCI compliant if you use Amazon to host it? Absolutely not. All it means is that it is now possible for merchants to consider using EC2 and S3 to process card payments. The responsibility of passing one’s own PCI audit hasn’t gone away. In the event of a data breach, you’re still on the hook if you store customers’ credit card info on AWS.
New SDKs for mobile developers: Amazon is committing to the support of mobile development, releasing new software development kits (SDKs) for the iPhone and Android devices. The SDKs facilitate writing apps that can connect directly to AWS’ infrastructure and do fun stuff. No one in their right mind who is developing for mobile isn’t already using AWS, parts of AWS or some other cloud service, so this isn’t exactly a brainteaser. It’s a significant show of support, however, and again demonstrates the increasing maturity of AWS as an environment.
CloudWatch updates: A raft of updates came to AWS’ CloudWatch service, a rudimentary notification system that is less rudimentary now. It’s a shining example of starting off with something basically kind of broken (the original CloudWatch was considerably more limited than the monitoring features built into your average microwave oven) and gradually turning it into a truly useful part of the tool kit.
That includes features like threshold notifications, health checks and policy-based actions (like not auto-scaling up your application in response to unexpected traffic and leaving you with an eye-watering bill).
5 TB files in S3: Users can now upload files up to 5 TB in size, three orders of magnitude greater than the previous 5 GB limit. One has to assume Amazon has made a major stride in its storage architecture, Dynamo. Jeff Barr, AWS evangelist, posits direct connections between a genome sequencer, S3 and the HPC cluster on EC2 for near-real-time data processing.
Of course, keeping it there will cost you a solid $125 per month per TB, and getting it back out of S3 for archiving or other purposes will cost you as well. But if you don’t have a supercomputer handy to go with your genomics research institute, this may look pretty handy. Make sure none of your giant files are classified, too, or you might get “WikiLeaked”…
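The carrying cost is easy to tally for one maximum-size object. The ~$125/TB/month storage figure is the one cited above; the egress rate in the sketch is a placeholder, since transfer-out pricing is tiered and changes.

```python
# Monthly cost of parking one 5 TB object in S3 at the article's
# ~$125/TB/month figure.  The $0.10/GB egress rate is a placeholder
# (actual transfer-out pricing is tiered), so treat it as illustrative.
size_tb = 5
storage_monthly = size_tb * 125          # dollars per month just to keep it
transfer_out = size_tb * 1000 * 0.10     # hypothetical cost to pull it all out once
print(storage_monthly, transfer_out)
```

Call it roughly $625 a month to hold the file and several hundred dollars more each time you haul the whole thing back out, cheap next to owning a supercomputer, expensive next to a tape in a drawer.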
Whew. Exciting six months, right, kids? Nope, that was just the last three weeks. Crazy.