Messaging Internals & Architecture

Overview

This page provides a deeper look into the internal components that orchestrate the message consumption lifecycle in the Athomic Layer. Understanding these components is useful for advanced debugging, customization, and contributing to the framework.


1. ConsumerManager

This is the entry point for the entire consumer system. At application startup, its primary responsibilities are:

  • Discovery: It inspects the application's code to find all functions that have been decorated with @subscribe_to or @ordered_consumer.
  • Assembly: For each discovered handler, it uses the ConsumerFactory to build a complete, self-contained consumer service. This involves assembling all necessary parts: the correct consumption strategy, message processor, DLQ handler, and the underlying Kafka client.
  • Lifecycle Management: It registers each of these fully assembled consumer services with the main Lifecycle Manager, which then controls their start and stop sequences along with the rest of the application. (A simplified sketch of this startup sequence follows.)
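As a mental model, the startup sequence can be pictured with the sketch below. It is illustrative only: the class and method names mirror the description above, not the framework's real API, and discovery is simplified to an explicit list of (topic, handler) pairs.

```python
# Hypothetical sketch of the startup sequence; not the framework's real API.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class ConsumerService:
    """A fully assembled consumer: strategy, processor, DLQ handler, client."""
    topic: str
    handler: Callable

    async def start(self) -> None:
        ...  # begin polling the topic

    async def stop(self) -> None:
        ...  # drain in-flight work and close the underlying client


class ConsumerFactory:
    def build(self, topic: str, handler: Callable) -> ConsumerService:
        # Assembly: the real factory wires the consumption strategy,
        # message processor, DLQ handler, and Kafka client together here.
        return ConsumerService(topic=topic, handler=handler)


class ConsumerManager:
    def __init__(self, factory: ConsumerFactory, lifecycle) -> None:
        self._factory = factory
        self._lifecycle = lifecycle

    def bootstrap(self, discovered: List[Tuple[str, Callable]]) -> None:
        # Discovery: the real manager scans for @subscribe_to and
        # @ordered_consumer handlers; here they are passed in explicitly.
        for topic, handler in discovered:
            service = self._factory.build(topic, handler)
            # Lifecycle management: start/stop is delegated to the
            # application-wide lifecycle manager.
            self._lifecycle.register(service)
```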

2. The Message Processing Flow

When a raw message is fetched from Kafka, it goes through a well-defined internal pipeline before your handler function is ever called.

RawMessage

This is a simple, framework-agnostic data class that represents a message as it comes from the broker. It contains the raw, unprocessed payload (bytes), key, and headers. This abstraction decouples the core processing logic from any specific client library (like aiokafka).
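Its shape can be pictured roughly as follows. This is an illustrative sketch built from the contents listed above (payload bytes, key, headers); the framework's actual field names and defaults may differ.

```python
# Illustrative shape only; the real class may carry additional fields.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass(frozen=True)
class RawMessage:
    """Broker-agnostic snapshot of a message, before any decoding."""
    payload: bytes                            # raw, unprocessed bytes
    key: Optional[bytes] = None               # partition key, if present
    headers: Dict[str, bytes] = field(default_factory=dict)
```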

HeadersOrchestrator

This is the central component for managing context propagation. Before any other processing, the headers from the RawMessage are passed to the HeadersOrchestrator. It works with a series of specialized "adapters" to extract and prepare the context:

  • MessagingTelemetryAdapter: Extracts W3C Trace Context headers (traceparent) and starts a new OpenTelemetry span, correctly linking it to the producer's trace.
  • LineageAdapter: Extracts data lineage information.
  • ContextAdapter: Extracts the main application context (tenant_id, user_id, etc.).

The extracted context is then applied to the current execution scope using restore_context(), so it becomes available throughout the rest of the processing pipeline and in your handler.
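A hedged sketch of this fan-out pattern, assuming a hypothetical HeaderAdapter protocol and extract() method (the framework's real adapter interfaces may differ):

```python
# Sketch of the orchestrator/adapter fan-out; HeaderAdapter and extract()
# are hypothetical names, not the framework's real interfaces.
from typing import Dict, List, Protocol


class HeaderAdapter(Protocol):
    def extract(self, headers: Dict[str, bytes]) -> Dict[str, str]: ...


class HeadersOrchestrator:
    def __init__(self, adapters: List[HeaderAdapter]) -> None:
        # e.g. [MessagingTelemetryAdapter(), LineageAdapter(), ContextAdapter()]
        self._adapters = adapters

    def extract_context(self, headers: Dict[str, bytes]) -> Dict[str, str]:
        context: Dict[str, str] = {}
        for adapter in self._adapters:
            # Each adapter pulls out only the keys it owns: traceparent,
            # lineage fields, tenant_id/user_id, and so on.
            context.update(adapter.extract(headers))
        return context  # then applied to the scope via restore_context()
```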

Tasks (Message Processors)

The "task" of processing a message is handled by a Message Processor (e.g., SingleMessageProcessor or BatchMessageProcessor). This component is responsible for: 1. Receiving the RawMessage. 2. Passing the raw payload through the Payload Processing Pipeline in reverse (decode) order (e.g., decompress -> decrypt -> deserialize). 3. Invoking your actual handler function with the final, deserialized Pydantic model. 4. Capturing the result of your handler.

Outcomes

After your handler function is executed, the Message Processor wraps the result in a standardized outcome object; the two possible shapes are sketched after the list below.

  • ConsumptionOutcome: If the handler completes successfully, this object holds the result.
  • FailureContext: If the handler raises an exception, this object is created instead. It captures the original raw message, the exception details, and other metadata. This context object is then passed to the DLQ Handler to orchestrate the retry/DLQ logic.
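Illustrative shapes for these two objects, assuming hypothetical field names; the framework's real fields may differ:

```python
# Sketch only; field names are assumptions based on the description above.
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class ConsumptionOutcome:
    """Produced when the handler completes successfully."""
    result: Any


@dataclass
class FailureContext:
    """Produced when the handler raises; handed to the DLQ handler."""
    raw_payload: bytes                 # the original raw message payload
    exception: BaseException           # what the handler raised
    metadata: Dict[str, str]           # other metadata captured at failure time
```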

This structured flow ensures that every message is handled consistently and that all necessary context for observability and resilience is available at every step of the process.