Posted by: Ben Lackey
After the incredibly positive response to our recent post on mainframe offload with in-memory data management, I thought it would be interesting to dive deeper into a particular aspect of the challenge: offloading existing jobs from your mainframe using Terracotta.
Many Terracotta customers oversee large estates of mainframe hardware. This hardware is difficult and expensive to maintain, and it locks future development efforts into decades-old technologies such as JCL and COBOL. Engineers trained on these technologies are scarce and expensive.
Perhaps most costly to the organization is the resulting lack of agility. New technologies do not integrate easily with mainframe systems, making it difficult to move as quickly as competitors who have more modern estates.
But what if you could reduce mainframe load—and costs—by up to 80%?
Terracotta’s parent company, Software AG, has decades of experience in mainframe integration. Software AG’s EntireX provides real-time, bi-directional integration between your IBM zSeries mainframes and the webMethods Integration Server. Together, these integrations let your business data move transparently into Terracotta BigMemory Max, our in-memory data management platform that is fast, flexible, and runs on inexpensive commodity hardware.
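Under the covers, BigMemory Max is accessed through the standard Ehcache API, so the client side of such a pipeline is ordinary Java. Here is a minimal sketch of storing and reading an offloaded record; the cache name, key, and record format are placeholders for whatever your EntireX integration actually delivers, and the sketch assumes an ehcache.xml on the classpath that defines a BigMemory-backed cache named "customers".

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class MainframeOffloadSketch {
        public static void main(String[] args) {
            // Builds a CacheManager from the ehcache.xml found on the classpath.
            CacheManager cacheManager = CacheManager.newInstance();

            // "customers" is a placeholder cache name; it must be defined in
            // your configuration with BigMemory (off-heap) storage enabled.
            Cache customers = cacheManager.getCache("customers");

            // Store a record as it might arrive from the mainframe via
            // EntireX and the webMethods Integration Server.
            customers.put(new Element("CUST-0042", "ACME Corp;NET30;ACTIVE"));

            // Any client attached to the same Terracotta Server Array can
            // now read the record at in-memory speed.
            Element hit = customers.get("CUST-0042");
            System.out.println(hit == null ? "miss" : hit.getObjectValue());

            cacheManager.shutdown();
        }
    }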
Once your data is in BigMemory, it’s available to any cutting-edge technology. Chief among these is Hadoop, which connects to BigMemory through the BigMemory-Hadoop connector. The connector makes it easy to feed data from BigMemory into Hadoop, giving you scalable, enterprise-strength batch processing entirely on commodity hardware.
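I won’t reproduce the connector’s API from memory here, so the sketch below is a plain MapReduce job over tab-separated "customerId<TAB>status" records, for example a text snapshot exported from BigMemory to HDFS; swapping the file input for the connector’s input format would let the mappers read directly from the in-memory store. Everything else is standard Hadoop.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CustomerStatusCounts {

        // Emits (status, 1) for each tab-separated "customerId<TAB>status" record.
        public static class StatusMapper
                extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);

            @Override
            protected void map(LongWritable offset, Text record, Context context)
                    throws IOException, InterruptedException {
                String[] fields = record.toString().split("\t");
                if (fields.length >= 2) {
                    context.write(new Text(fields[1]), ONE);
                }
            }
        }

        // Sums the per-status counts emitted by the mappers.
        public static class SumReducer
                extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text status, Iterable<LongWritable> counts,
                    Context context) throws IOException, InterruptedException {
                long total = 0;
                for (LongWritable count : counts) {
                    total += count.get();
                }
                context.write(status, new LongWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "customer-status-counts");
            job.setJarByClass(CustomerStatusCounts.class);
            job.setMapperClass(StatusMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            // Reads a text snapshot from HDFS; with the BigMemory-Hadoop
            // connector you would instead configure its input format so the
            // mappers read records straight out of the in-memory store.
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Run it with hadoop jar, passing the input snapshot and an output directory as the two arguments.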
To date, users of Terracotta software have moved over 5 petabytes of data into ultra-fast machine memory. To find out whether we can help with your mainframe challenges, contact me (email@example.com) or Terracotta’s sales team (firstname.lastname@example.org).