• v0.9.0 631c98396e

    Release 0.9.0 Stable

    Rene Nochebuena released this 2026-03-19 07:14:40 -06:00 | 0 commits to main since this release

    v0.9.0

    code.nochebuena.dev/go/worker

    Overview

    worker provides a fixed-size goroutine pool that receives background tasks via a
    buffered channel. It integrates with launcher for managed startup and graceful shutdown,
    and accepts the shared, duck-typed logz.Logger interface for structured logging. Tasks are plain
    func(ctx context.Context) error callables — no task struct, no registration, no reflection.

    This is the initial stable release. The API has been designed through multiple architecture
    reviews and validated end-to-end via the todo-api POC. It is versioned v0.9.0 rather than
    v1.0.0 because it has not yet been exercised across all production edge cases, and minor
    API refinements may follow.

    What's Included

    • Task type — func(ctx context.Context) error
    • Config — pool settings loaded from environment variables:
      • WORKER_POOL_SIZE — number of concurrent goroutines (default: 5)
      • WORKER_BUFFER_SIZE — task queue capacity (default: 100)
      • WORKER_TASK_TIMEOUT — per-task context deadline; 0 means no deadline (default: 0s)
      • WORKER_SHUTDOWN_TIMEOUT — time to wait for workers to drain on stop (default: 30s)
    • Provider interface — Dispatch(task Task) bool for callers that only dispatch tasks
    • Component interface — embeds launcher.Component + Provider for full lifecycle management
    • New(logger, cfg) Component — constructor; register with lc.Append(pool)
    • Non-blocking Dispatch — returns false immediately when the queue is full (backpressure)
    • Graceful shutdown — closes the task channel, cancels the pool context, then waits up to
      ShutdownTimeout for all goroutines to finish

    Installation

    go get code.nochebuena.dev/go/worker@v0.9.0
    
    import (
        "context"
        "time"
    
        "code.nochebuena.dev/go/worker"
    )
    
    pool := worker.New(logger, worker.Config{
        PoolSize:        5,
        BufferSize:      100,
        TaskTimeout:     5 * time.Second,
        ShutdownTimeout: 30 * time.Second,
    })
    lc.Append(pool)
    
    ok := pool.Dispatch(func(ctx context.Context) error {
        return sendEmail(ctx, msg)
    })
    if !ok {
        // queue full — handle backpressure at the call site
    }
    

    Design Highlights

    Channel-based queue. A single buffered chan Task is shared by all worker goroutines.
    Workers range over the channel; closing it during OnStop is the drain signal.
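    A minimal sketch of this pattern, not the package's actual implementation: each
    worker ranges over one shared buffered channel, and closing the channel lets every
    loop drain its remaining tasks and exit. The runPool helper is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
)

type Task func(ctx context.Context) error

// runPool starts poolSize workers over one shared buffered channel,
// enqueues n no-op tasks, closes the channel as the drain signal,
// and returns how many tasks completed.
func runPool(poolSize, n int) int {
	tasks := make(chan Task, n)
	var wg sync.WaitGroup
	var done int64

	for i := 0; i < poolSize; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Ranging over the channel exits cleanly once it is closed.
			for task := range tasks {
				if task(context.Background()) == nil {
					atomic.AddInt64(&done, 1)
				}
			}
		}()
	}

	for i := 0; i < n; i++ {
		tasks <- func(ctx context.Context) error { return nil }
	}
	close(tasks) // drain signal: queued tasks are processed, then workers exit
	wg.Wait()
	return int(atomic.LoadInt64(&done))
}

func main() {
	fmt.Println(runPool(3, 5)) // all 5 tasks complete before the pool exits
}
```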

    Non-blocking dispatch with backpressure. Dispatch uses a non-blocking select. A
    false return means the task has been dropped, not queued. The caller owns the retry or
    overflow decision.

    Per-task timeout. When TaskTimeout > 0, each worker creates a context.WithTimeout
    child before calling the task. Zero imposes no deadline. The pool context is also
    propagated, so tasks are cancelled if the pool is stopping.

    Drain-with-timeout shutdown. OnStop closes the queue channel and cancels the pool
    context, then waits for all goroutines to finish. If ShutdownTimeout elapses before all
    workers exit, the timeout is logged as an error but OnStop returns nil so the launcher
    continues shutting down other components.

    Logger is duck-typed. New accepts logz.Logger (the shared framework interface).
    No custom Logger interface is defined in this package.

    Known Limitations & Edge Cases

    • No task priority. All tasks are processed in FIFO order from the single channel.
    • No result or error collection. Errors returned by tasks are logged but not surfaced to
      the dispatcher. If the caller needs to know whether a task succeeded, it must handle
      that within the task function itself (e.g. by writing to a results channel or database).
    • Queue size is fixed at construction. It cannot be resized at runtime.
    • Calling Dispatch after OnStop has been called will panic (send on closed channel).
      The launcher lifecycle guarantees ordering; callers that hold a Provider reference and
      dispatch asynchronously must respect the shutdown sequence.
    • worker is not suitable as a request-scoped concurrency primitive (e.g. fan-out within
      a single HTTP handler). Use it for background jobs only.
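    For the error-collection limitation above, the suggested results-channel workaround
    might look like this sketch; the task is run directly in place of pool.Dispatch, and
    runAndCollect is a hypothetical illustration, not part of the package:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type Task func(ctx context.Context) error

// runAndCollect shows a task surfacing its own outcome to the caller
// via a buffered results channel, since the pool only logs task errors.
func runAndCollect() error {
	results := make(chan error, 1)

	task := Task(func(ctx context.Context) error {
		err := errors.New("send failed") // stand-in for real work failing
		results <- err                   // surface the outcome to the caller
		return err                       // still returned so the pool logs it
	})

	// Stand-in for pool.Dispatch: run the task directly for illustration.
	_ = task(context.Background())

	return <-results
}

func main() {
	fmt.Println(runAndCollect())
}
```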

    v0.9.0 → v1.0.0 Roadmap

    • Validate drain behaviour under high queue saturation in production workloads.
    • Evaluate exposing a Len() int method on Provider so callers can observe queue depth.
    • Consider an optional dead-letter callback for dropped tasks (queue full) to replace the
      current log-and-drop behaviour.
    • Confirm behaviour when ShutdownTimeout is exceeded — evaluate whether in-flight tasks
      should receive a cancellation signal distinct from the pool context cancel.