Event Consumer
A consumer of messages from Kafka, SQS, RabbitMQ, Pub/Sub, or similar. Reads messages, processes them, often updates state and produces downstream messages. The “public interface” is the topic or queue and the schema of messages on it.
This pattern has problems the API provider and API consumer patterns don’t have: ordering, replay, poison messages, dead-letter queues, and delivery semantics (at-most-once, at-least-once, exactly-once-with-effort).
What needs to be covered
| Layer | Concern | Test type |
|---|---|---|
| Message handler | Pure transformation per message | Solitary unit tests |
| Idempotency | Same message twice produces the same effect | In-process component tests |
| Poison message handling | Malformed message goes to DLQ, doesn’t crash the consumer | In-process component tests |
| Ordering | Out-of-order messages produce documented outcomes | In-process component tests |
| Backpressure | Consumer slows when downstream is slow | Resilience component tests |
| Broker contract | Topic, schema, headers | Contract tests |
| Broker client | Real protocol behavior, offset commits, consumer group rebalancing | Adapter integration tests against a real broker container |
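The handler layer in the table above is the cheapest to test because it is a pure transformation: one message in, one record out, no broker in sight. A minimal sketch of a solitary unit test, using a hypothetical `order_placed_to_ledger_entry` handler (all names are illustrative, not from any real codebase):

```python
def order_placed_to_ledger_entry(message: dict) -> dict:
    """Pure transformation: one input message, one output record."""
    return {
        "account_id": message["account_id"],
        "amount_minor": message["total_minor"],
        "currency": message["currency"],
    }

def test_well_formed_message_maps_fields():
    message = {"account_id": "a-1", "total_minor": 4250, "currency": "USD"}
    entry = order_placed_to_ledger_entry(message)
    assert entry == {"account_id": "a-1", "amount_minor": 4250, "currency": "USD"}

test_well_formed_message_maps_fields()
```

Because the handler takes and returns plain values, these tests need no broker double at all and run in microseconds.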
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
- Well-formed message: produces the expected state change and the documented downstream events.
- Batch processing: processes per documented policy.
- Replay from offset: reproduces the same end state.
- Documented schema versions: each is accepted.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
- Malformed message: routes to the DLQ with a correlation ID; the consumer survives.
- Duplicate delivery: absorbed by idempotency.
- Out-of-order delivery: follows the documented behavior.
- Mid-batch downstream failure: the offset is left uncommitted.
- Schema-version skew: handled per the documented policy.
- Slow downstream: applies backpressure rather than running out of memory.
- Consumer-group rebalance during processing: no in-flight messages are stranded.
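The poison-message case is worth sketching because the assertion has two halves: the bad message ends up in the DLQ with its correlation ID, and the consumer keeps processing the messages after it. A minimal in-process sketch, with hypothetical message and DLQ shapes:

```python
import json

def consume(raw_messages, handle, dead_letters):
    """Process each message; route failures to the DLQ instead of crashing."""
    processed = 0
    for raw in raw_messages:
        try:
            handle(json.loads(raw["body"]))
            processed += 1
        except Exception as exc:
            # Poison message: capture enough context to diagnose it later.
            dead_letters.append({
                "correlation_id": raw["correlation_id"],
                "body": raw["body"],
                "error": str(exc),
            })
    return processed

dlq: list = []
messages = [
    {"correlation_id": "c-1", "body": '{"ok": true}'},
    {"correlation_id": "c-2", "body": "not json"},  # poison
    {"correlation_id": "c-3", "body": '{"ok": true}'},
]
assert consume(messages, handle=lambda m: None, dead_letters=dlq) == 2
assert dlq[0]["correlation_id"] == "c-2"  # routed, not swallowed; consumer survived
```

The catch-all `except` is deliberate here: the consumer's survival must not depend on anticipating every way a message can be malformed.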
Test double validation
The broker double used in component tests is validated by adapter integration tests against a real broker container the team controls (Kafka in Docker, ElasticMQ for SQS, Redpanda in Docker). Those tests exercise the broker client adapter against that controlled instance and assert that the adapter speaks the protocol correctly; they do not assert anything about which messages the broker returns or in what order, because that is the broker’s behavior, not the adapter’s. The schema registry double is validated by contract tests pinning each schema version, plus a post-deploy check against the real registry. A post-deploy synthetic publishes a known message to the real topic in a non-prod environment.
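For concreteness, the broker double component tests rely on can be as small as an in-memory queue with the publish/poll/ack surface the consumer actually uses. A sketch of such a double (the class and its method names are illustrative; the adapter integration tests described above are what justify trusting any double like this):

```python
from collections import deque

class BrokerDouble:
    """Minimal in-memory stand-in for one topic: publish, poll, ack."""

    def __init__(self):
        self._queue = deque()
        self.acked = []

    def publish(self, message: dict) -> None:
        self._queue.append(message)

    def poll(self):
        """Return the next message, or None when the topic is empty."""
        return self._queue.popleft() if self._queue else None

    def ack(self, message: dict) -> None:
        self.acked.append(message)

broker = BrokerDouble()
broker.publish({"id": "m-1"})
message = broker.poll()
broker.ack(message)
assert broker.acked == [{"id": "m-1"}]
assert broker.poll() is None  # topic drained
```

The double stays honest only as long as its surface mirrors what the adapter integration tests verify against the real broker; any behavior the double invents beyond that surface is untested fiction.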
Pipeline placement
Handler unit tests and component tests run in CI Stage 1; adapter integration tests against a team-controlled broker container run in CI Stage 1 or Stage 2; adapter integration tests against a managed broker the team can’t pin to a known state run out-of-band on a schedule, alongside the post-deploy synthetic.
Example: idempotency under duplicate delivery
Money.usd takes minor units (cents); 4250 represents $42.50.
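A minimal sketch of the test, assuming a `Money` value type with a `usd` factory taking minor units as described above, plus a hypothetical `PaymentConsumer` that deduplicates on message ID; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    minor_units: int
    currency: str

    @staticmethod
    def usd(minor_units: int) -> "Money":
        return Money(minor_units, "USD")

class PaymentConsumer:
    def __init__(self):
        self.balances: dict[str, int] = {}
        self._seen: set[str] = set()

    def handle(self, message: dict) -> None:
        # Idempotency: a redelivered message ID is a no-op.
        if message["message_id"] in self._seen:
            return
        self._seen.add(message["message_id"])
        amount = Money.usd(message["amount_minor"])
        account = message["account_id"]
        self.balances[account] = self.balances.get(account, 0) + amount.minor_units

consumer = PaymentConsumer()
payment = {"message_id": "m-1", "account_id": "a-1", "amount_minor": 4250}
consumer.handle(payment)
consumer.handle(payment)  # at-least-once delivery redelivers the same message
assert consumer.balances["a-1"] == 4250  # $42.50, applied exactly once
```

The test asserts the effect, not the mechanism: however deduplication is implemented, processing the same message twice must leave the balance at $42.50, not $85.00.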