
Stream processing platform achieves one billion events per second
The company’s platform provides high-speed, cloud-native, distributed stateful stream processing integrated with in-memory storage to power demanding applications with the most stringent throughput and latency requirements. Beyond raw performance, says the company, the platform is architected to run with maximum efficiency, minimizing the hardware footprint and delivering best-in-class total cost of ownership.
The level of performance demonstrated in the latest benchmark, says the company, is particularly valuable for machine learning (ML) training, as well as for sharpening the decision-making of artificial intelligence (AI)-powered applications such as fraud detection and other use cases that require automated, real-time decisions. Such algorithms become more complex and more resource-intensive in the pursuit of greater accuracy, so cost-effective processing power is critical to enable ongoing evolution and improvement while keeping expenditures under control.
“While one billion events per second may seem like overkill today,” says John DesJardins, CTO of Hazelcast, “data is on the cusp of its next explosion with more applications moving to the cloud and eventually the edge, now accelerated by the rollout of 5G. In combining the Hazelcast platform with the collaborative efforts with our partners, including IBM and Intel, customers can breathe easier knowing that their applications can process and compute large volumes of data in real-time, while keeping hardware costs under control.”
The benchmarking effort began with testing on small clusters of c5.4xlarge Amazon Web Services instances, each providing 16 vCPUs, using a data stream of one million events per second to identify the 99.99th-percentile latency, a measurement one hundred times stricter than the usual practice of reporting at the 99th percentile: it excludes only one event in 10,000 rather than one in 100. The test used a fast 20-millisecond update resolution, creating a far more intensive workload than the 1-minute window specified in the NEXMark definition.
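For context on why that percentile matters: tail outliers that the 99th percentile hides still show up at the 99.99th. A minimal, self-contained sketch using synthetic latencies (not benchmark data) illustrates the difference:

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative only: shows why the 99.99th percentile is a far stricter
// bar than the usual 99th. The latency values are synthetic.
public class PercentileSketch {

    // Nearest-rank percentile over a sorted array of latency samples (ms).
    static double percentile(double[] sorted, double pct) {
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[] latencies = new double[1_000_000];
        for (int i = 0; i < latencies.length; i++) {
            // Mostly fast events with a rare heavy tail, mimicking a real latency curve.
            latencies[i] = 5 + Math.abs(rnd.nextGaussian()) * (rnd.nextDouble() < 0.001 ? 50 : 2);
        }
        Arrays.sort(latencies);
        // p99 ignores the worst 1-in-100 events; p99.99 ignores only 1-in-10,000.
        System.out.printf("p99    = %.1f ms%n", percentile(latencies, 99.0));
        System.out.printf("p99.99 = %.1f ms%n", percentile(latencies, 99.99));
    }
}
```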
Queries from the NEXMark benchmark suite, which simulates an online auction, were run on cluster sizes of five, 10, 15 and 20 nodes to establish the baseline latency at the specified percentile. On the 20-node cluster, query 5 of the NEXMark benchmark, which asks, “Which auctions have achieved the highest price in the last period?”, was the most complex, with a 99.99th-percentile latency of 16 milliseconds.
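The article does not reproduce the benchmark’s actual query code, but a query of this shape is straightforward to express with the open-source Hazelcast Jet Pipeline API. The sketch below is illustrative only: the Bid class, event rate and key range are hypothetical, and it emits each window’s highest bid per auction over a 60-second window sliding every 20 milliseconds, as in the strict phase of the benchmark.

```java
import com.hazelcast.function.ComparatorEx;
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.WindowDefinition;
import com.hazelcast.jet.pipeline.test.TestSources;

import java.io.Serializable;
import java.util.concurrent.ThreadLocalRandom;

public class AuctionMaxPrice {

    // Hypothetical bid event; the real NEXMark schema is richer.
    public static class Bid implements Serializable {
        public final long auctionId;
        public final long price;
        public Bid(long auctionId, long price) {
            this.auctionId = auctionId;
            this.price = price;
        }
        @Override public String toString() { return "auction " + auctionId + " -> " + price; }
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(10_000, (ts, seq) ->
                 new Bid(ThreadLocalRandom.current().nextLong(100),
                         ThreadLocalRandom.current().nextLong(1_000))))
         .withNativeTimestamps(0)
         // 60-second window updated every 20 ms, per the benchmark's strict setup.
         .window(WindowDefinition.sliding(60_000, 20))
         .groupingKey(bid -> bid.auctionId)
         // Highest bid per auction within each window.
         .aggregate(AggregateOperations.maxBy(ComparatorEx.comparing((Bid b) -> b.price)))
         .writeTo(Sinks.logger());

        JetInstance jet = Jet.newJetInstance();
        jet.newJob(p).join();
    }
}
```

The 20-millisecond slide is what makes the workload intensive: every keyed window result must be re-emitted 50 times per second.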
The benchmark then focused on finding the maximum throughput Hazelcast can deliver at the same low latency as it scales from a single node to however many nodes it takes to process one billion events per second. The update resolution was relaxed to 500 milliseconds, still two orders of magnitude faster than the time window defined by the NEXMark benchmark, says the company. The latency percentile was likewise relaxed to the standard 99th percentile to keep the benchmark runs manageably short.
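In terms of the pipeline sketched above, the relaxation amounts to widening the window’s slide step; the 60-second window length is taken from the 1-minute NEXMark definition:

```java
// Strict phase: 60 s window, results updated every 20 ms.
WindowDefinition strict  = WindowDefinition.sliding(60_000, 20);

// Scaling phase: update interval relaxed to 500 ms, still 120x
// (roughly two orders of magnitude) finer than the window itself.
WindowDefinition relaxed = WindowDefinition.sliding(60_000, 500);
```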
Query 5 was first run against a single node, where it achieved 25 million events per second. Repeating the test against increasing cluster sizes, Hazelcast exhibited nearly linear throughput growth and demonstrated that the minimum cluster size needed to reach the targeted one billion events per second was 45 nodes, while still achieving a 99th-percentile latency of 26 milliseconds.
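Those two data points allow a quick sanity check of the “nearly linear” claim: perfectly linear scaling from 25 million events per second on one node would reach one billion at 40 nodes, so the measured 45 nodes implies roughly 89 percent scaling efficiency. A minimal arithmetic sketch (the efficiency figure is derived here, not reported by the company):

```java
// Back-of-the-envelope check of the scaling claim. The 25M events/s
// single-node figure and the 45-node result are from the benchmark;
// the efficiency percentage is derived, not reported.
public class ScalingSketch {
    public static void main(String[] args) {
        double perNode = 25_000_000;      // measured single-node throughput
        double target  = 1_000_000_000;   // one billion events per second

        double idealNodes = target / perNode;   // 40 under perfect linearity
        double efficiency = idealNodes / 45;    // implied scaling efficiency
        System.out.printf("Perfectly linear scaling: %.0f nodes%n", idealNodes);
        System.out.printf("Measured: 45 nodes -> ~%.0f%% scaling efficiency%n",
                efficiency * 100);
    }
}
```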
For more details on the benchmark and its results, see: “Billion Events Per Second with Millisecond Latency: Streaming Analytics at Giga-Scale.”
