worker
Concurrent background worker pool with launcher lifecycle integration.
Purpose
worker provides a fixed-size goroutine pool that receives tasks via a buffered
channel. It integrates with launcher for managed startup and graceful shutdown,
and with logz for structured logging. Tasks are plain func(ctx context.Context) error
callables — no task struct, no registration, no reflection.
Tier & Dependencies
Tier: 3 (depends on Tier 1 logz and Tier 2 launcher)
Module: code.nochebuena.dev/go/worker
Direct imports: code.nochebuena.dev/go/launcher, code.nochebuena.dev/go/logz
Note: worker imports logz.Logger directly (not duck-typed). This follows the
framework-to-framework direct import rule from global ADR-001: logz is a peer
framework module, not an app-provided dependency.
Key Design Decisions
- Drain-with-timeout shutdown (ADR-001): `OnStop` closes the queue channel, cancels the pool context, then waits for all goroutines with a `ShutdownTimeout` deadline (default 30 s). A timeout is logged as an error; `OnStop` returns `nil` regardless so the launcher continues.
- Per-task timeout (ADR-002): When `TaskTimeout > 0`, each worker goroutine creates a `context.WithTimeout` child before calling the task. Zero means no deadline is imposed.
- Channel-based queue (ADR-003): A single buffered `chan Task` is shared by all workers. `Dispatch` is non-blocking: it returns `false` immediately when the buffer is full (backpressure). Workers `range` over the channel; closing it during `OnStop` is the drain signal.
Patterns
Constructing and registering:
```go
pool := worker.New(logger, worker.Config{
	PoolSize:        5,
	BufferSize:      100,
	TaskTimeout:     5 * time.Second,
	ShutdownTimeout: 30 * time.Second,
})
lc.Append(pool)
```
Dispatching a task:
```go
ok := pool.Dispatch(func(ctx context.Context) error {
	return sendEmail(ctx, msg)
})
if !ok {
	// queue was full; the caller decides how to handle the drop
}
```
Interface surface:
- `worker.Task`: `func(ctx context.Context) error`
- `worker.Provider`: `Dispatch(Task) bool` (no lifecycle)
- `worker.Component`: embeds `launcher.Component` + `Provider` (full lifecycle)
Inject Provider into callers that only dispatch; inject Component into the
launcher registration site.
What to Avoid
- Do not call `Dispatch` after `OnStop` has been called. Sending on a closed channel panics. The launcher lifecycle guarantees ordering, but callers that hold a reference and dispatch asynchronously must respect this.
- Do not assume a `false` return from `Dispatch` means the task will eventually execute. It has been dropped. Implement retry or overflow handling at the call site if loss is unacceptable.
- Do not use `worker` as a request-scoped concurrency primitive (e.g. fan-out within a single HTTP handler). It is designed for background jobs, not intra-request parallelism.
- Do not add a method to `Task` (e.g. `ID`, `Priority`). Keep tasks as plain functions. Priority queues or named tasks belong in a separate, application-specific abstraction built on top of `Provider`.
Testing Notes
- `compliance_test.go` has a compile-time assertion: `var _ worker.Component = worker.New(...)`. This ensures `New` returns a `Component` and the signature stays stable.
- `worker_test.go` tests dispatch, draining, timeout, and pool size via the real implementation; no mocks needed.
- All config fields have sane defaults applied in `New`: `PoolSize <= 0` → 5, `BufferSize <= 0` → 100. Tests that want deterministic behaviour should set explicit values.
- The pool goroutines start in `OnStart`, not in `New`. Tests that call `Dispatch` without going through the launcher must call `OnInit` and `OnStart` first, or use the launcher directly.