Scaling FastAPI background tasks beyond BackgroundTasks
When FastAPI's built-in BackgroundTasks stops being enough, here's how to scale to a real distributed task queue with FastWorker — without running Redis or RabbitMQ.
FastAPI ships with a BackgroundTasks class that runs functions after the response is sent. It’s great — until it isn’t. This guide is about what to do when you’ve outgrown it.
What FastAPI BackgroundTasks actually does
```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_log(msg: str):
    with open("log.txt", "a") as f:
        f.write(msg + "\n")

@app.post("/send-notification")
async def send_notification(msg: str, bg: BackgroundTasks):
    bg.add_task(write_log, msg)
    return {"message": "ok"}
```
BackgroundTasks runs the function in the same process, on the same event loop (or thread pool), after the response is flushed. It’s a nicer form of Starlette’s BackgroundTask — good for:
- Writing a log line
- Sending a fire-and-forget analytics event
- Invalidating a cache
It is not good for anything you care about surviving a deploy, a crash, or being retried.
The four breakpoints
You’ve outgrown BackgroundTasks when one of these is true:
1. Work needs to survive the process
You deploy. The old pod goes down. Any tasks that were sitting in BackgroundTasks queues die with it. If that’s a paying user’s welcome email, that’s a problem.
2. You need more than one worker
BackgroundTasks runs in the web process. Scaling web workers scales background capacity only as a side effect, and you fight for the event loop with real requests. If background work gets heavy, it starts eating request latency.
3. You need retries, priorities, or dashboards
There’s no concept of priority, no retry loop, no observability. You can build them, but now you’re writing a task queue.
4. You want to see what’s happening
When a user complains that their email didn’t arrive, you want a dashboard that says “task #abc123 failed at 14:03 with ConnectionError.” BackgroundTasks gives you logs at best.
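Breakpoint 3 is the sneaky one. The first hand-rolled retry wrapper looks innocent. A minimal sketch of where that road starts (illustrative only, not production code):

```python
import time
from functools import wraps

def retry(attempts: int = 3, backoff: float = 0.1):
    """Naive retry decorator: the first step toward a homegrown task queue."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: surface the error
                    time.sleep(backoff * 2 ** (attempt - 1))  # exponential backoff
        return wrapper
    return decorator

@retry(attempts=3, backoff=0.01)
def flaky_send(state={"calls": 0}):
    # Simulated flaky dependency: fails twice, then succeeds
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return "sent"
```

Now add persistence so retries survive a restart, then priorities, then a dashboard, and you have rebuilt a task queue by accident.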
The usual answer: Celery + Redis
For over a decade the answer has been Celery. It’s battle-tested, has a huge ecosystem, supports chains, groups, and chords, and is genuinely good software. It also requires:
- A Redis or RabbitMQ broker
- A result backend (another Redis, or a database)
- A separate worker process
- Flower (or another tool) if you want a dashboard
- Your on-call rotation to understand all of the above
That’s the right tradeoff for teams at scale. For most Python services, it’s more infrastructure than the problem needs.
The brokerless answer: FastWorker
FastWorker is a brokerless task queue for exactly this middle ground. You keep everything FastAPI-native, you gain durability across the request boundary, and you add zero external services. The control plane is a Python process. The dashboard ships with it. Workers are more Python processes.
Here’s the upgrade path from BackgroundTasks:
Before
```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def send_welcome(email: str):
    # blocking SMTP call — bad place for it
    smtp.send(email, "Welcome!")

@app.post("/signup")
async def signup(email: str, bg: BackgroundTasks):
    bg.add_task(send_welcome, email)
    return {"ok": True}
```
After
```python
# tasks.py
from fastworker import task

@task
def send_welcome(email: str) -> bool:
    smtp.send(email, "Welcome!")
    return True
```

```python
# app.py
from fastapi import FastAPI
from fastworker import Client

app = FastAPI()
client = Client()

@app.on_event("startup")
async def _start():
    await client.start()

@app.on_event("shutdown")
async def _stop():
    client.stop()

@app.post("/signup")
async def signup(email: str):
    await client.delay("send_welcome", email)
    return {"ok": True}
```
The request handler is still async-native. client.delay is non-blocking and returns a task id. The control plane (or a subworker) actually sends the email. If the web pod crashes, the task is still in the control plane’s queue.
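The durability claim is the whole point: the task record lives outside the web process's memory. As a toy illustration of the idea (not FastWorker's actual storage), a queue backed by SQLite keeps its tasks even when the producer goes away:

```python
import json
import sqlite3
import uuid

class DurableQueue:
    """Toy durable queue: enqueued tasks survive process restarts
    because they live in a SQLite file, not in process memory."""

    def __init__(self, path: str):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS tasks "
            "(id TEXT PRIMARY KEY, name TEXT, args TEXT, status TEXT)")
        self.db.commit()

    def delay(self, name: str, *args) -> str:
        task_id = uuid.uuid4().hex
        self.db.execute("INSERT INTO tasks VALUES (?, ?, ?, 'queued')",
                        (task_id, name, json.dumps(args)))
        self.db.commit()  # durable once committed
        return task_id

    def pop(self):
        # Claim the oldest queued task and mark it running
        row = self.db.execute(
            "SELECT id, name, args FROM tasks WHERE status = 'queued' LIMIT 1"
        ).fetchone()
        if row:
            self.db.execute("UPDATE tasks SET status = 'running' WHERE id = ?",
                            (row[0],))
            self.db.commit()
        return row
```

A worker opening the same file in a different process sees the task the producer enqueued, which is exactly the property BackgroundTasks cannot give you.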
Scaling out
When one control plane isn’t enough CPU, start subworkers:
```bash
fastworker control-plane --task-modules tasks

fastworker subworker --worker-id w1 \
  --control-plane-address tcp://control-plane:5555 \
  --base-address tcp://0.0.0.0:5561 --task-modules tasks

fastworker subworker --worker-id w2 \
  --control-plane-address tcp://control-plane:5555 \
  --base-address tcp://0.0.0.0:5565 --task-modules tasks
```
The control plane auto-discovers workers and load-balances to the least-loaded one.
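Least-loaded routing is simple to reason about. Conceptually (an illustration, not FastWorker's internal code), the control plane just picks the worker with the fewest in-flight tasks:

```python
class LeastLoadedBalancer:
    """Toy least-loaded dispatcher: route each task to the worker
    with the fewest in-flight tasks, tracked locally."""

    def __init__(self, workers):
        self.load = {w: 0 for w in workers}

    def dispatch(self) -> str:
        worker = min(self.load, key=self.load.get)  # fewest in-flight tasks
        self.load[worker] += 1
        return worker

    def done(self, worker: str):
        self.load[worker] -= 1  # task finished, free a slot
```

A new worker joining is just a new zero-load entry in the table, which is why adding capacity is a matter of starting another subworker process.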
Prioritizing
Not all background work is equal. Transactional emails should beat weekly digest emails. FastWorker has four levels built in:
```python
from fastworker.tasks.models import TaskPriority

await client.delay("send_welcome", email,
                   priority=TaskPriority.CRITICAL)
await client.delay("send_weekly_digest", user_id,
                   priority=TaskPriority.LOW)
```
No separate queues, no broker topology.
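Priority dispatch can be pictured as a single heap keyed by priority level. A toy sketch (the numeric level values here are assumptions for illustration, not FastWorker's internals):

```python
import heapq
import itertools

# Assumed ordering: lower number = higher priority
CRITICAL, HIGH, NORMAL, LOW = range(4)

class PriorityTaskQueue:
    """Toy priority queue: pop always returns the highest-priority task;
    the counter preserves FIFO order within one priority level."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def push(self, task: str, priority: int):
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]
```

One heap, four levels: that is why no separate queue topology is needed to make transactional email jump ahead of the weekly digest.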
Observability
The built-in GUI at http://127.0.0.1:8080 shows real-time queue depth, workers, and task history. For production, wire the optional OpenTelemetry integration into your existing OTLP collector so a task trace shows up next to its originating HTTP request span.
When to keep BackgroundTasks
Don’t throw BackgroundTasks away. It’s still perfect for work that truly doesn’t matter if it gets dropped:
- Cache warming
- Debug logging
- Audit trail writes to a fire-and-forget sink
- Metrics nudges
Reach for FastWorker the moment the answer to “what happens if this task gets dropped?” is anything other than “nothing.”
Next steps
- FastWorker quickstart — install and run in 5 minutes
- FastWorker vs Celery — honest comparison
- Migrating from Celery to FastWorker
- Priority queues and load balancing
Frequently asked questions
What's wrong with FastAPI BackgroundTasks?
Nothing — until you need durability, retries, priority, horizontal scale, or a dashboard. FastAPI's BackgroundTasks run in the same process after the response is sent. Lose the process, lose the work.
Do I need FastWorker or can I use Celery?
Either works. FastWorker is simpler (no broker). Celery is more featureful (DAGs, persistence, huge ecosystem). Pick FastWorker for moderate-scale services; Celery for extreme scale or complex workflows.
Can I keep using FastAPI BackgroundTasks alongside FastWorker?
Yes. Use BackgroundTasks for fire-and-forget in-process side-effects (logging, cache warming), and FastWorker for anything that matters if the process dies.