Posted by: Steve Yellenberg
Tech pundits sometimes talk about mainframes as if they disappeared along with leisure suits, punk rock, and the Deutschmark. But as CIOs and technology architects know all too well, mainframes are quite alive — if not altogether well — in a surprising number of today’s big enterprises. In fact, their tendency to become bottlenecks is now a hot topic, and in-memory data management is quickly becoming the solution of choice for offloading mainframe demand.
Over the last decade, as enterprises rolled out more products and services through Web, mobile, and API distribution channels, the easiest way to feed those channels was to grab data from existing mainframe services. (Sound familiar?) Unfortunately, mainframe applications for customer service, reservations, and commerce weren’t typically built to handle millions of simultaneous customers, hundreds of thousands of transactions per second, or the kind of instant data access that real-time Big Data intelligence requires. The result is that mainframes are increasingly performance bottlenecks, costing millions of dollars in scale-outs to absorb spikes in demand. Let’s put it this way: If IBM sales reps are camped outside your office waiting for a purchase order for the extra MIPS you’ll need to get through the holidays, it’s time to think about another way.
BigMemory, Terracotta’s in-memory data management platform, allows enterprises to reduce mainframe loads, deliver incredible performance, and reduce costs — without big investments in infrastructure. With BigMemory, enterprises use commodity hardware to keep up to hundreds of terabytes of mission-critical data instantly available in ultra-fast machine memory, or RAM. And many enterprises enjoy huge benefits from BigMemory with far smaller volumes. (Read about just one of our mainframe offload success stories here: Top Online Travel Service Takes Off with BigMemory).
How does it work? Offloading the mainframe with BigMemory can happen in one of four ways, all resulting in data access that is orders of magnitude faster than directly querying a mainframe:
- Batch offload mainframe data into BigMemory
- Write transactions simultaneously to the mainframe and BigMemory
- Apply results of mainframe queries to BigMemory
- Use BigMemory as an in-memory middle tier, in front of the database, for frequently accessed data
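To make the second and fourth patterns concrete, here is a minimal sketch of how an application might combine them: reads check the in-memory tier first and fall back to the mainframe only on a miss, while writes go to both so the cached data stays current. This is an illustration of the general pattern, not BigMemory’s actual API — a `ConcurrentHashMap` stands in for the in-memory store, and `MainframeGateway` with its `fetchRecord`/`writeRecord` methods is a hypothetical placeholder for whatever mainframe integration layer an enterprise already has.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MainframeOffloadSketch {

    /** Hypothetical stand-in for a slow mainframe query/transaction service. */
    interface MainframeGateway {
        String fetchRecord(String key);            // e.g. a legacy lookup
        void writeRecord(String key, String value); // e.g. a legacy update
    }

    // A plain ConcurrentHashMap stands in for the in-memory data tier.
    private final Map<String, String> inMemoryTier = new ConcurrentHashMap<>();
    private final MainframeGateway mainframe;

    public MainframeOffloadSketch(MainframeGateway mainframe) {
        this.mainframe = mainframe;
    }

    /** In-memory middle tier for reads: query the mainframe only on a cache miss. */
    public String read(String key) {
        return inMemoryTier.computeIfAbsent(key, mainframe::fetchRecord);
    }

    /** Write simultaneously to the mainframe and the in-memory tier. */
    public void write(String key, String value) {
        mainframe.writeRecord(key, value);
        inMemoryTier.put(key, value);
    }
}
```

The payoff of the read path is that repeated requests for the same record never touch the mainframe after the first lookup, which is exactly where the MIPS savings come from.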
To learn more about how Terracotta BigMemory can transform your mainframe bottleneck into a source of real-time Big Data performance that delights customers and removes the need for expensive scale-outs, contact Terracotta sales.