Hazelcast Enhances Open Source Ecosystems with In-Memory Speed


Hazelcast, a leading in-memory computing platform that delivers the System of Now™, has unveiled new third-party integrations and connectors that accelerate and expand the streaming ecosystem for data-centric enterprises.

The Hazelcast in-memory computing platform uniquely integrates proven data grid technology with next-generation event stream processing capabilities. The extremely lightweight footprint allows enterprises to accelerate their business applications on any cloud or edge device, in addition to on-premises data centers.

“Data-centric enterprises are rapidly adopting the use of in-memory technologies to accelerate their business-critical applications and our integrations simplify the on-ramp to leveraging more random access memory in their architectures,” said Greg Luck, CTO of Hazelcast. “The industry’s fastest stream processing engine, Hazelcast Jet is the premier solution for demanding next-gen applications, especially those aimed at the edge.”

Confluent Verified Connector

As part of the Confluent Verified Integration program, Hazelcast is now a Standard-level Verified Connector for Kafka-centric architectures. The new connector enables enterprises to augment and enhance the exchange of data between Apache Kafka and other systems.

Hazelcast can be used as both a source and a sink for Kafka, enabling applications with the highest throughput and lowest latency requirements. This certification demonstrates the company’s commitment to contributing to the Kafka ecosystem.

With the Standard-level Connector, Hazelcast Jet lets enterprises run business-critical use cases such as payment processing, fraud detection, internet of things (IoT) and edge processing. With its in-memory architecture, Hazelcast boosts Kafka-based applications with extremely fast access to a horizontally scalable data store.
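As a rough illustration of the source-and-sink pattern described above, a Jet pipeline can consume a Kafka topic and write each record into a Hazelcast IMap. The sketch below uses the Jet 4.x Pipeline API; the broker address, topic name, and map name are placeholders, and the hazelcast-jet and hazelcast-jet-kafka dependencies are assumed to be on the classpath.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;

import java.util.Properties;

public class KafkaToHazelcast {
    public static void main(String[] args) {
        // Standard Kafka consumer properties; the broker address is a placeholder.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("auto.offset.reset", "earliest");

        // Read the (hypothetical) "payments" topic as a stream of entries
        // and write each one into a Hazelcast IMap acting as the sink.
        Pipeline p = Pipeline.create();
        p.readFrom(KafkaSources.<String, String>kafka(props, "payments"))
         .withoutTimestamps()
         .writeTo(Sinks.map("payments-map"));

        JetInstance jet = Jet.newJetInstance();
        jet.newJob(p).join();
    }
}
```

The same connector works in the other direction: a pipeline can read from a Hazelcast data structure and publish to Kafka, which is what makes Hazelcast usable as both source and sink.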

New Extension Modules

Hazelcast is also introducing new extension modules to simplify Jet integrations with third-party products and pipelines. Available now are connectors for database providers MongoDB and Redis, which add to the growing list of Hazelcast connectors that includes JDBC, Apache Cassandra, HDFS and others. These connectors make it faster and easier to work with these data sources.

MongoDB commonly serves as a distributed database for modern applications, but it does not always capture data in a usable format. With the connector, Hazelcast Jet can act as a preprocessor to prepare and enrich the data before storing it in MongoDB.
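A minimal sketch of that preprocessing role might look like the following. It reads raw events from a Hazelcast list, reshapes each one into a BSON document with an added timestamp, and writes the result to MongoDB. The connection string, database, collection, and source names are all placeholders, and the sink factory comes from the hazelcast-jet-contrib MongoDB module, whose exact API may vary by version.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.contrib.mongodb.MongoDBSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sources;
import org.bson.Document;

public class EnrichBeforeMongo {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Read raw events, enrich them into a structured BSON document,
        // then store the prepared form in MongoDB.
        p.readFrom(Sources.<String>list("raw-events"))
         .map(raw -> new Document("payload", raw)
                 .append("ingestedAt", System.currentTimeMillis()))
         .writeTo(MongoDBSinks.mongodb(
                 "mongo-sink",                    // sink name
                 "mongodb://localhost:27017",     // connection string (placeholder)
                 "analytics",                     // database (placeholder)
                 "events"));                      // collection (placeholder)

        Jet.newJetInstance().newJob(p).join();
    }
}
```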

As a key-value store, Redis is not optimized for analytical workloads or for continuous processing of streaming data. By layering Hazelcast Jet on top of Redis, enterprises can significantly improve how they query Redis data and write results to another system of record.

Hazelcast Jet Beam Runner

Apache Beam is a standardized framework that provides software development kits (SDKs) in multiple languages for defining stream processing pipelines. As a result, developers only need to learn the Beam API; their programs then run without modification on any supported backend. Hazelcast Jet is the latest runner for Apache Beam and provides a lightweight and ultra-fast option for enterprises.
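Selecting Jet as the backend is a matter of setting the runner on the pipeline options; the rest is ordinary Beam code. The word-count sketch below assumes the beam-runners-jet dependency is available, and the input and output paths are placeholders.

```java
import org.apache.beam.runners.jet.JetRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class BeamOnJet {
    public static void main(String[] args) {
        // The only Jet-specific step: choose JetRunner as the execution backend.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        options.setRunner(JetRunner.class);

        // A plain Beam pipeline: count lines in a text file and write the totals.
        Pipeline p = Pipeline.create(options);
        p.apply(TextIO.read().from("input.txt"))          // placeholder path
         .apply(Count.perElement())
         .apply(MapElements.into(TypeDescriptors.strings())
                .via(kv -> kv.getKey() + ": " + kv.getValue()))
         .apply(TextIO.write().to("counts"));             // placeholder path

        p.run().waitUntilFinish();
    }
}
```

Because the pipeline itself contains nothing Jet-specific, the same code can be pointed at any other Beam runner by changing only the options.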

