Hazelcast, a name you may not know, has announced the general availability of Hazelcast 3.5 this week.
The company is a provider of operational in-memory computing.
What is operational in-memory computing?
The ‘operational’ part just means that it is working – but the ‘in-memory computing’ part refers to a combination of hardware and (middleware) software that lets us store data in RAM (often shared across a cluster of computers) and then process it in a parallel, distributed manner.
Near cache is nice cache
In this regard, then, Hazelcast’s High-Density Memory Store is now available to the Hazelcast ‘client’ (software) as a ‘near cache’, providing access to hundreds of gigabytes of in-memory data on a single application server.
All that data held in local application memory means an instant, massive increase in data access speed – so fewer application server instances are needed to power the same total throughput.
The firm insists that this ‘vastly’ increases application performance while reducing hardware footprint and management complexity.
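As a concrete illustration of how a near cache is switched on, the sketch below shows a Hazelcast 3.x client configuration fragment; the map name ‘orders’ is illustrative, and the `NATIVE` in-memory format is what routes the cache into the High-Density Memory Store (off-heap) rather than the regular Java heap.

```xml
<!-- Sketch of a hazelcast-client.xml fragment (Hazelcast 3.x client config).
     The map name "orders" is an illustrative assumption. -->
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
  <near-cache name="orders">
    <!-- NATIVE = store entries off-heap in the High-Density Memory Store -->
    <in-memory-format>NATIVE</in-memory-format>
    <!-- keep the near cache consistent with the cluster copy -->
    <invalidate-on-change>true</invalidate-on-change>
  </near-cache>
</hazelcast-client>
```

With a fragment like this in place, client-side reads of the named map are served from local memory after the first fetch, which is where the access-speed claims above come from.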
Big apps, big headaches
“Big applications often mean big headaches for operations teams. The 3.5 release introduces a host of new tools and features to make running an operational in-memory computing platform more manageable,” said the company, in a press statement.
New push-button deployment options make short work (says Hazelcast) of provisioning a new cluster, reducing it to an easily reproducible process that completes in minutes.
Predictable latency and expanded monitoring capabilities make Hazelcast more stable and make it easier to visualise out-of-tolerance events that may require action.
Cutting through Hazelcast CEO Greg Luck’s arguably somewhat self-serving “we’re the best and we’re compelling” commentary, the company chief does also say that his firm offers a “proven business case” for companies that are looking to upgrade to in-memory for breakthrough application speed and scale.
Hazelcast 3.5 open source is available today.