Observability with OpenTelemetry in FastWorker
How to trace FastWorker tasks end-to-end with OpenTelemetry, from a FastAPI request through task execution on a subworker, and how to wire it into Jaeger or Honeycomb.
The moment you run distributed task queues in production, you need to be able to trace a single piece of work from the HTTP request that produced it all the way through execution and back. FastWorker includes optional OpenTelemetry support for exactly this.
The problem tracing solves
A user clicks “register” on your signup page. The /signup endpoint takes 2.3 seconds. Is that slow because of your FastAPI handler, the database, the task queue, the task worker, or the external email API the task calls? Without distributed tracing, you’re reading logs and guessing. With it, you get a flame graph.
Enabling OpenTelemetry in FastWorker
FastWorker uses standard OpenTelemetry Python packages. Install them with your app:
```bash
pip install fastworker \
    opentelemetry-sdk \
    opentelemetry-exporter-otlp \
    opentelemetry-instrumentation-fastapi
```
Enable FastWorker’s OTel integration via environment variables:
```bash
export FASTWORKER_OTEL_ENABLED=true
export OTEL_SERVICE_NAME=my-fastapi-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317
```
Set the same variables on both the FastAPI app and every FastWorker process (control plane and subworkers). If any of them is missing the config, its spans won’t make it to your collector.
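One cheap way to catch the "one process is missing its config" failure mode is a startup check. A minimal sketch using only the standard library; the helper name and the required-variable list are ours, not part of FastWorker:

```python
import os

# The three variables every process needs before its spans reach the collector.
REQUIRED_OTEL_VARS = [
    "FASTWORKER_OTEL_ENABLED",
    "OTEL_SERVICE_NAME",
    "OTEL_EXPORTER_OTLP_ENDPOINT",
]


def missing_otel_config(env=os.environ) -> list[str]:
    """Return the names of required OTel variables that are unset or empty."""
    return [name for name in REQUIRED_OTEL_VARS if not env.get(name)]


# Fail fast at process startup instead of silently dropping spans.
missing = missing_otel_config()
if missing:
    print(f"warning: OTel disabled, missing {missing}")
```

Run it in the FastAPI app, the control plane, and every subworker, so a misconfigured process complains loudly instead of producing a trace with a hole in it.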
Initializing tracing in the FastAPI app
Standard OpenTelemetry SDK bootstrapping:
```python
# telemetry.py
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor


def init_tracing(service_name: str):
    resource = Resource.create({"service.name": service_name})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)


def instrument_fastapi(app):
    FastAPIInstrumentor.instrument_app(app)
```
Call it at startup:
```python
# main.py
from fastapi import FastAPI
from fastworker import Client
from telemetry import init_tracing, instrument_fastapi

init_tracing("my-fastapi-app")

app = FastAPI()
instrument_fastapi(app)
client = Client()


@app.on_event("startup")
async def _start():
    await client.start()


@app.post("/signup")
async def signup(user_id: int):
    task_id = await client.delay("send_welcome_email", user_id)
    return {"task_id": task_id}
```
That’s the whole app-side setup.
What you get in the trace
With FastWorker’s OTel integration enabled, a single /signup request produces a trace that looks like this:
```
POST /signup                           180ms
├─ fastworker.client.submit             12ms
│  └─ nng.req_reply                     10ms
└─ (response flushed to client)

fastworker.control_plane.dispatch        3ms  (background)
└─ fastworker.task.execute             430ms
   └─ smtp.sendmail                    410ms
```
Three spans from FastWorker:
- `fastworker.client.submit` — from `client.delay(...)` on the FastAPI side
- `fastworker.control_plane.dispatch` — the dispatcher picking a subworker
- `fastworker.task.execute` — the actual task body running on the control plane or a subworker
Trace context propagates through task metadata, so the execute span is a child of the FastAPI request span even though it ran in a different process.
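OpenTelemetry's cross-process propagation is typically the W3C `traceparent` format: a version, a 128-bit trace id, a 64-bit parent span id, and sampling flags, all as lowercase hex. A hand-rolled sketch of that format just to make the mechanics concrete; real code should use `opentelemetry.propagate.inject` and `extract` rather than building the string itself:

```python
# W3C traceparent: "<version>-<trace_id>-<span_id>-<flags>", lowercase hex.
def build_traceparent(trace_id: int, span_id: int, sampled: bool = True) -> str:
    flags = "01" if sampled else "00"
    return f"00-{trace_id:032x}-{span_id:016x}-{flags}"


def parse_traceparent(header: str) -> tuple[int, int, bool]:
    version, trace_id, span_id, flags = header.split("-")
    return int(trace_id, 16), int(span_id, 16), flags == "01"


# The producer side injects the header into task metadata...
metadata = {"traceparent": build_traceparent(0xABC, 0x123)}
# ...and the worker extracts it, so its execute span can use the
# incoming trace id and parent the remote request span.
trace_id, parent_span_id, sampled = parse_traceparent(metadata["traceparent"])
```

That carrier dict is the same shape propagators use for HTTP headers, which is why task metadata can piggyback on the standard machinery.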
Wiring up a collector
For local development, the easiest option is the OpenTelemetry Collector with a Jaeger exporter:
```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```
For production, point the collector at Honeycomb, Grafana Tempo, Datadog, or whatever you already use. FastWorker doesn’t care — it just speaks OTLP.
Metrics
FastWorker also emits a small set of OpenTelemetry metrics when the integration is enabled:
- `fastworker.tasks.submitted` — counter
- `fastworker.tasks.dispatched` — counter
- `fastworker.tasks.completed` — counter with a `status` attribute (`success`, `failure`)
- `fastworker.tasks.duration` — histogram of execution time
- `fastworker.workers.active` — gauge of connected subworkers
- `fastworker.queue.depth` — gauge of queued tasks per priority
Feed those into Prometheus or your OTLP metrics pipeline and you get a workable dashboard without having to instrument your tasks by hand.
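As a quick illustration of what those counters buy you, here is the arithmetic a dashboard panel would do with `fastworker.tasks.completed` broken out by `status`. Plain Python, not a FastWorker API; the input dict stands in for counter values scraped from your metrics backend:

```python
def error_rate(completed_by_status: dict[str, int]) -> float:
    """Fraction of completed tasks that failed, given per-status counter values."""
    total = sum(completed_by_status.values())
    if total == 0:
        return 0.0  # no completed tasks yet: report a clean zero, not a crash
    return completed_by_status.get("failure", 0) / total


# e.g. counter values over the last scrape window:
rate = error_rate({"success": 970, "failure": 30})  # 0.03, i.e. a 3% failure rate
```

In Prometheus you would express the same thing as a ratio of `rate()` queries over the two status series; the point is that the `status` attribute makes it a one-liner either way.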
Production checklist
- **Same `OTEL_EXPORTER_OTLP_ENDPOINT` on every process.** Control plane, subworkers, and FastAPI all need to export to the same collector.
- **Sampling.** For high-throughput apps, enable head sampling (`OTEL_TRACES_SAMPLER=parentbased_traceidratio` + `OTEL_TRACES_SAMPLER_ARG=0.1`) so you trace 10% of requests.
- **Task argument hygiene.** Don't put secrets or PII into span attributes. FastWorker records the task name and id by default, not arguments.
- **Healthcheck noise.** Drop spans for `/health` and `/ready` in the collector — they'll swamp everything else.
- **Correlate with logs.** If you use structured logging, add the `trace_id` to your log formatter so a log line in a task worker can be jumped to from the same trace in Jaeger.
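For log correlation, a sketch of injecting a trace id into stdlib `logging` via a filter. Here the id comes from a contextvar we manage ourselves as a stand-in; with the OTel SDK installed you would read it from the active span context (`trace.get_current_span().get_span_context()`) instead:

```python
import contextvars
import logging

# Stand-in for the active OTel span context; in a real app, populate this
# (or read the id directly) from the current span.
current_trace_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_trace_id", default="none"
)


class TraceIdFilter(logging.Filter):
    """Attach the current trace id to every log record."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = current_trace_id.get()
        return True  # never drop the record, only annotate it


def make_handler() -> logging.Handler:
    handler = logging.StreamHandler()
    handler.addFilter(TraceIdFilter())
    handler.setFormatter(
        logging.Formatter("%(levelname)s trace_id=%(trace_id)s %(message)s")
    )
    return handler


# Any log line emitted while the id is set carries it, so you can paste
# the value straight into Jaeger's trace search.
current_trace_id.set("4bf92f3577b34da6a3ce929d0e0e4736")
```

Attach `make_handler()` to the logger used inside your tasks and every worker-side log line becomes one click away from its trace.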
When tracing pays for itself
The first time you see a user’s report land in a flame graph showing that 90% of the latency is in smtp.sendmail — not in your code, not in FastWorker, but in the provider you assumed was fast — you’ll know why it’s worth it.
Next steps
- FastWorker vs Celery
- Architecture — internals of dispatch and messaging
- FastAPI consulting — we can help wire OTel into your existing stack
Frequently asked questions
Does FastWorker require OpenTelemetry?
No. It's optional. Enable it by setting the FASTWORKER_OTEL_ENABLED environment variable and pointing an OTLP exporter at your collector.
What spans does FastWorker emit?
A span per stage: `fastworker.client.submit` on the client side, `fastworker.control_plane.dispatch` on the control plane, and `fastworker.task.execute` on the worker. Trace context propagates through task metadata, so a FastAPI request span becomes the parent of the task spans.
Does this work with Jaeger / Honeycomb / Tempo?
Yes — anything that speaks OTLP. Set OTEL_EXPORTER_OTLP_ENDPOINT to your collector and the traces flow through normal OpenTelemetry tooling.