

(metamorworks/Shutterstock)
Organizations today are required to process ever-larger volumes of data in ever-shorter windows of time. Those two pressures are driving a movement away from processing architectures based on traditional databases and toward an approach based on real-time processing of streams of data. Message buses form the foundation of stream processing systems, and they are therefore a critical component for organizations that want to develop stream processing applications.
In an ideal world, organizations would be able to land a piece of data in a database before they need it to make a decision. But increasingly, that database-centric approach is viewed as a luxury. Whether the use case is fraud detection, risk management, or network monitoring, organizations cannot afford to wait for data to land on traditional architectures before getting answers. They need the answers now.
This time crunch is driving the industry to make sizable investments in real-time processing systems that can move and process data much more quickly than traditional database-oriented systems.
There are two main components to real-time systems: the underlying message bus and a stream processing system that sits atop it. Let’s handle the underlying message buses first. The list starts with the big dog in the space: Apache Kafka.
Apache Kafka
Apache Kafka is a distributed, open source message bus written in Java and Scala. The software implements a publish-and-subscribe messaging system capable of moving large amounts of event data from sources to sinks in a high-throughput manner, with minimal latency and strong consistency guarantees. The software relies on Apache ZooKeeper for management of the underlying cluster.
Kafka is based on the concept of producers and consumers. Event data originating from producers is stored in timestamped partitions that are housed within Kafka topics. Meanwhile, consumer processes read the data stored in those partitions. Kafka automatically replicates partitions across multiple brokers (the nodes in the cluster), which allows Kafka to scale its message streaming service in a fault-tolerant manner.
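To make the producer side concrete, here is a minimal sketch using Kafka’s official Java client; the broker address and the “events” topic name are assumptions for the example, not anything prescribed by Kafka itself:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record key determines which partition within the topic the event lands in.
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }
    }
}
```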
Kafka uses a pull-based model, in which consumers pull data out of Kafka partitions. Kafka retains a complete history of event data for a configurable period of time, which allows consumers to “rewind” and re-read that history from the beginning. This is the basis for building streaming applications on Kafka.
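The rewind behavior comes down to seeking a consumer’s offsets back to the start of its assigned partitions. A minimal sketch with the official Java client, again assuming a local broker, an “events” topic, and an illustrative consumer group name:

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RewindingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "replay-demo");             // assumed consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"), new ConsumerRebalanceListener() {
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // "Rewind": jump back to the start of the retained history.
                    consumer.seekToBeginning(partitions);
                }
            });
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```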
Four main APIs are included with the open source project. Two of those, the producer and consumer APIs, deliver the core functionality described above. Meanwhile, the Streams API allows an application to consume streams of data from topics in the Kafka cluster, transform them, and write the results back (in an exactly-once fashion, with support for transactions), while the Connect API allows a developer to build connectors that continually pull data into Kafka or push data out of it.
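As a rough sketch of the Streams API, the topology below reads an assumed “events” topic, filters the records, and writes the matches to an assumed “errors” topic; the processing-guarantee setting is what enables Kafka’s transactional, exactly-once mode (newer releases also accept “exactly_once_v2”):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ErrorFilterApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "error-filter");      // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Enables Kafka's transactional, exactly-once processing guarantee.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once");

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");           // assumed input topic
        events.filter((key, value) -> value.contains("error")).to("errors"); // assumed output topic
        new KafkaStreams(builder.build(), props).start();
    }
}
```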
Kafka was originally developed at LinkedIn to handle the company’s high volume of event data, and was subsequently donated to the Apache Software Foundation in 2011. In 2014, Kafka creators Jay Kreps, Neha Narkhede, and Jun Rao founded Confluent, which offers a commercial version of Kafka that includes enterprise functions and cloud hosting.
In recent years, Kafka has become a very popular open source project, with thousands of companies building Kafka clusters on premises or in the cloud. While organizations can build their own real-time streaming applications atop Kafka using the Streams API, many choose to couple their Kafka clusters with dedicated stream processing frameworks, such as Apache Flink, Apache Storm, or Apache Spark Streaming.
Apache Pulsar
Apache Pulsar is an open source, distributed pub-sub messaging system originally created at Yahoo that could pose a challenge to Kafka’s hegemony in the message bus layer.
Like Kafka, Pulsar uses the concept of topics and subscriptions to create order from large amounts of streaming data in a scalable, low-latency manner. In addition to publish and subscribe, Pulsar supports point-to-point message queuing from the same API. Like Kafka, the project relies on ZooKeeper, in this case for cluster coordination and metadata, and it uses Apache BookKeeper for durable, ordered message storage.
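To illustrate the single-API point, here is a small sketch using Pulsar’s official Java client; the service URL, topic, and subscription names are assumptions for the example. Switching the subscription type is what toggles between queuing and pub-sub semantics:

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class PulsarDemo {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker address
                .build();

        Producer<byte[]> producer = client.newProducer().topic("events").create();
        producer.send("hello".getBytes());

        // Shared subscriptions spread messages across consumers (queue semantics);
        // Exclusive or Failover subscriptions deliver every message to one consumer (pub-sub semantics).
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("events")
                .subscriptionName("workers") // assumed subscription name
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();

        System.out.println(new String(consumer.receive().getData()));
        client.close();
    }
}
```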
The creators of Pulsar say they developed it to address several shortcomings of existing open source messaging systems. It has been running in production at Yahoo since 2014 and was open sourced in 2016. Pulsar is backed by a commercial open source outfit called Streamlio, which employs some of Pulsar’s original creators and sells a commercial product that combines Pulsar with Apache Heron, a stream processing engine platform developed at Twitter.
Pulsar’s strengths, according to Streamlio’s founders, include multi-tenancy, geo-replication, strong durability guarantees, high message throughput, and a single API for both queuing and publish-subscribe messaging. Scaling a Pulsar cluster is as easy as adding nodes, which Streamlio says gives it an advantage over other message buses.
RabbitMQ
RabbitMQ is a distributed, open source message bus that can be used to implement various data brokering schemes, including point-to-point, request/reply, and pub-sub communications. The software was written in Erlang, but today it features client libraries in a variety of languages, making it a more open alternative for message distribution and integration than the Java Message Service (JMS).
Distributed under the Mozilla Public License, RabbitMQ originally implemented the Advanced Message Queuing Protocol (AMQP) but has since been extended with a plug-in architecture, and it now supports a variety of protocols including Streaming Text Oriented Messaging Protocol (STOMP), Message Queuing Telemetry Transport (MQTT), and others.
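As a small illustration of pub-sub in RabbitMQ’s AMQP model, the sketch below uses the official Java client to publish to a fanout exchange, which copies each message to every queue bound to it; the host and exchange names are assumptions for the example:

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class FanoutPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // A fanout exchange delivers each message to every bound queue (pub-sub).
            // Subscribers declare their own queues and bind them to this exchange.
            channel.exchangeDeclare("alerts", BuiltinExchangeType.FANOUT); // assumed exchange name
            channel.basicPublish("alerts", "", null, "disk_full".getBytes());
        }
    }
}
```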
RabbitMQ can be deployed on clusters and is often used to offload work from busy web servers and balance workloads. Many consider its core strength to be reliable message delivery to large numbers of recipients. With more than 35,000 real-world deployments, it’s been battle-tested in the enterprise. RabbitMQ also benefits from a large number of plug-ins and libraries that extend the messaging software, including support for complex messaging schemes.
The software was originally developed by Rabbit Technologies Ltd., which was acquired by a division of VMware in 2010. RabbitMQ became part of Pivotal Software in 2013, and today the company offers a hosted version of RabbitMQ on its Pivotal Cloud Foundry.
Apache ActiveMQ
Apache ActiveMQ is a distributed, open source messaging bus that’s written in Java and fully supports JMS. The software was originally developed at LogicBlaze as an open alternative to proprietary messaging buses, such as WebSphere MQ and TIBCO Messaging, and has been backed by the Apache Software Foundation since 2007.
In addition to being an open implementation of JMS, ActiveMQ also supports other protocols, including STOMP, MQTT, AMQP, REST, and WebSockets. The software scales horizontally and supports several modes of high availability, including the use of ZooKeeper.
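Because ActiveMQ is a JMS implementation, sending a message uses the standard JMS API. A minimal sketch, assuming a local broker at ActiveMQ’s default port and a hypothetical “orders” queue:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsQueueSend {
    public static void main(String[] args) throws Exception {
        // The vendor-specific part is only the connection factory; the rest is plain JMS.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("orders"); // assumed queue name
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("order-1001"));
        connection.close();
    }
}
```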
ActiveMQ is distributed under the Apache 2.0 License. It forms the basis for Amazon Web Services’ managed message broker service, Amazon MQ.
TIBCO Messaging
TIBCO is one of the original purveyors of high-speed message buses for enterprise customers. In fact, it’s right there in the name: The Information Bus COmpany (TIBCO).
Thousands of customers continue to use TIBCO Messaging, which provides a scalable platform for distributing high volumes of messages among a variety of sources and sinks in a low-latency manner. The company’s core Enterprise Message Service is built around the JMS 1.1 and 2.0 standards.
TIBCO today extends its flagship messaging platform with several other editions, including one based on Apache Kafka. It also offers an Eclipse Mosquitto distribution of TIBCO Messaging, which supports the MQTT protocol.
There are many other message buses out there, but these arguably are the most popular. In a future post, we’ll investigate stream processing frameworks that can sit atop these message buses.