
Apache Kafka

Server package · Beta

Server-side event streaming to Apache Kafka via the kafkajs client. Each event is serialized as JSON, keyed for partition-friendly ordering, and produced to a configurable topic. Supports SASL/SSL authentication (Confluent Cloud, AWS MSK, SCRAM), configurable compression (gzip, snappy, lz4, zstd), per-rule topic and key overrides, and graceful shutdown via destroy().

Where this fits

Kafka is a server destination in the walkerOS flow:

Receives events server-side from the collector, serializes them as JSON with a partition-friendly message key, and produces them to a Kafka topic for downstream stream processing (Flink, Spark, ksqlDB, Kafka Connect, consumers).

Installation

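A minimal install sketch. The npm package name `@walkeros/server-destination-kafka` is an assumption (verify against the registry); `kafkajs` is the underlying client named above.

```shell
# Hypothetical package name; verify against the walkerOS package registry
npm install @walkeros/server-destination-kafka

# kafkajs is the underlying Kafka client
npm install kafkajs
```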

Configuration

This destination uses the standard destination config wrapper (consent, data, env, id, ...). For the shared fields see destination configuration. Package-specific fields live under config.settings and are listed below.

Settings

| Property | Type | Description |
| --- | --- | --- |
| `kafka`\* | object | Kafka connection and producer settings. |

\* Required fields
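A hedged sketch of a `settings` object. The nested field names mirror common kafkajs options plus the fields this page mentions (topic, key, headers, compression); treat them as assumptions, not the authoritative schema.

```typescript
// Sketch of destination settings; field names under `kafka` are
// assumptions mirroring kafkajs options, not a verified schema.
const settings = {
  kafka: {
    brokers: ['broker-1:9092', 'broker-2:9092'], // bootstrap servers
    clientId: 'walkeros',
    topic: 'walkeros-events',        // default topic for all events
    key: 'data.id',                  // default message-key mapping path
    headers: { source: 'walkeros' }, // static headers added to every message
    compression: 'gzip',             // gzip | snappy | lz4 | zstd
  },
};
```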

Mapping

Per-event rules under config.mapping. For the standard rule fields (consent, condition, data, batch, name, policy) see mapping.

| Property | Type | Description |
| --- | --- | --- |
| `key` | string | Override the message key mapping path for this rule (e.g. `data.id`). Takes precedence over `settings.kafka.key`. |
| `topic` | string | Override the Kafka topic for this rule. Takes precedence over `settings.kafka.topic`. |
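A hedged sketch of a per-rule override. The entity/action nesting follows walkerOS mapping conventions; the rule name (order complete) and values are illustrative.

```typescript
// Sketch: a mapping rule that routes "order complete" events to a
// dedicated topic and keys them by order id. The nesting follows
// walkerOS mapping conventions; names and values are illustrative.
const mapping = {
  order: {
    complete: {
      settings: {
        topic: 'orders-stream', // overrides settings.kafka.topic
        key: 'data.order_id',   // overrides settings.kafka.key
      },
    },
  },
};
```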

Examples

Example mappings cover the following scenarios, each pairing an incoming event (and optional mapping rule) with the produced Kafka message:

  • default event
  • ignored event
  • key from user
  • mapped data
  • mapped event name
  • topic override

The destination creates a single long-lived kafkajs producer during init() and calls producer.connect() before accepting events. On flow hot-swap or server shutdown, destroy() calls producer.disconnect() to flush in-flight messages and close TCP connections.
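The lifecycle can be sketched with a stubbed producer that mirrors the kafkajs surface (connect, send, disconnect). The stub only records call order; the real destination wraps an actual kafkajs producer.

```typescript
// Stub mirroring the kafkajs producer surface, used here only to
// demonstrate the lifecycle: connect once in init(), send per event,
// disconnect in destroy(). Not the package's actual implementation.
interface ProducerRecord {
  topic: string;
  messages: { key: string; value: string }[];
}

interface Producer {
  connect(): Promise<void>;
  send(record: ProducerRecord): Promise<void>;
  disconnect(): Promise<void>;
}

const log: string[] = [];
const producer: Producer = {
  connect: async () => { log.push('connect'); },
  send: async (record) => { log.push(`send:${record.topic}`); },
  disconnect: async () => { log.push('disconnect'); },
};

async function run() {
  await producer.connect(); // init(): one long-lived producer
  await producer.send({
    topic: 'walkeros-events',
    messages: [{ key: 'page_view', value: '{"name":"page view"}' }],
  });
  await producer.disconnect(); // destroy(): flush in-flight messages, close connections
}
await run();
```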

Message format

Events are serialized as JSON and produced with the following structure:

  • topic, from settings.kafka.topic (or mapping.settings.topic override)
  • key, resolved from the settings.kafka.key mapping path (or the per-rule mapping.settings.key override); defaults to the event name with spaces replaced by _ (e.g. page_view, order_complete) for partition-based ordering
  • value, mapped payload (when data.map is configured) or the full walkerOS event, JSON.stringify()-ed
  • headers, content-type: application/json plus any static settings.kafka.headers
  • timestamp, event timestamp as string (ms since epoch)
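The message shape above can be sketched as a small helper. The function name (toKafkaMessage) and the event type are illustrative, not the package's actual API.

```typescript
// Illustrative helper building a Kafka message in the shape described
// above; toKafkaMessage and WalkerEvent are hypothetical names.
interface WalkerEvent {
  name: string;      // e.g. 'page view'
  timestamp: number; // ms since epoch
  data?: Record<string, unknown>;
}

function toKafkaMessage(event: WalkerEvent, headers: Record<string, string> = {}) {
  return {
    // default key: event name with spaces replaced by underscores
    key: event.name.replace(/ /g, '_'),
    // value: the JSON-serialized event (or a mapped payload when data.map is set)
    value: JSON.stringify(event),
    // content-type plus any static settings.kafka.headers
    headers: { 'content-type': 'application/json', ...headers },
    // event timestamp as a string of ms since epoch
    timestamp: String(event.timestamp),
  };
}

const msg = toKafkaMessage({ name: 'order complete', timestamp: 1700000000000 });
// msg.key === 'order_complete', msg.timestamp === '1700000000000'
```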

Use mapping.settings.topic to route specific events to dedicated topics (e.g. orders to orders-stream, identities to identity-stream). Use mapping.settings.key to set a key path per rule (e.g. data.order_id for order events).

Authentication

Confluent Cloud (SASL/PLAIN)

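A hedged sketch of Confluent Cloud credentials using kafkajs-style ssl/sasl options. The bootstrap server and credential placeholders are illustrative; in practice, read the API key and secret from the environment.

```typescript
// Sketch: SASL/PLAIN for Confluent Cloud in kafkajs-style options.
// Broker host and credentials are placeholders, not real values.
const confluentSettings = {
  kafka: {
    brokers: ['pkc-xxxxx.us-east-1.aws.confluent.cloud:9092'], // cluster bootstrap server (placeholder)
    ssl: true,
    sasl: {
      mechanism: 'plain' as const,
      username: '<CONFLUENT_API_KEY>',    // Confluent Cloud API key
      password: '<CONFLUENT_API_SECRET>', // Confluent Cloud API secret
    },
    topic: 'walkeros-events',
  },
};
```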

AWS MSK (IAM)

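A hedged sketch using kafkajs's built-in 'aws' SASL mechanism. Field names follow kafkajs; newer setups may instead plug in a dedicated MSK IAM signer. The broker host and identity are placeholders; MSK uses port 9098 for IAM over TLS.

```typescript
// Sketch: AWS MSK IAM auth via kafkajs's 'aws' SASL mechanism.
// Broker host and credentials are placeholders, not real values.
const mskSettings = {
  kafka: {
    brokers: ['b-1.mycluster.xxxxx.kafka.us-east-1.amazonaws.com:9098'],
    ssl: true,
    sasl: {
      mechanism: 'aws' as const,
      authorizationIdentity: '<IAM_ROLE_SESSION_NAME>', // placeholder
      accessKeyId: '<AWS_ACCESS_KEY_ID>',
      secretAccessKey: '<AWS_SECRET_ACCESS_KEY>',
      sessionToken: '<AWS_SESSION_TOKEN>', // only for temporary credentials
    },
  },
};
```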