Rene Nochebuena 631c98396e docs(worker): correct tier from 2 to 3 and fix dependency tier refs
worker depends on launcher (now correctly Tier 2) and logz (Tier 1),
placing it at Tier 3. The previous docs cited launcher as Tier 1 and
logz as Tier 0, both of which were wrong.
2026-03-19 13:13:41 +00:00

Changelog

All notable changes to this module will be documented in this file.

The format is based on Keep a Changelog, and this module adheres to Semantic Versioning.

0.9.0 - 2026-03-18

Added

  • Task type — func(ctx context.Context) error; the unit of work dispatched to the pool
  • Config struct — pool settings loaded from environment variables:
      • WORKER_POOL_SIZE — number of concurrent worker goroutines (default 5)
      • WORKER_BUFFER_SIZE — task queue channel capacity (default 100)
      • WORKER_TASK_TIMEOUT — per-task context deadline; 0 means no deadline (default 0s)
      • WORKER_SHUTDOWN_TIMEOUT — time to wait for workers to drain on stop (default 30s)
  • Provider interface — Dispatch(task Task) bool; for callers that only dispatch tasks; returns false immediately when the queue is full (backpressure, non-blocking)
  • Component interface — embeds launcher.Component and Provider; the full lifecycle-managed surface registered with the launcher
  • New(logger logz.Logger, cfg Config) Component — constructor; applies safe defaults (PoolSize <= 0 → 5, BufferSize <= 0 → 100); returns a Component ready for lc.Append
  • OnInit — logs pool configuration; initialises the buffered task channel
  • OnStart — spawns PoolSize worker goroutines, each ranging over the task channel
  • OnStop — closes the task channel (drain signal), cancels the pool context, then waits up to ShutdownTimeout for all goroutines to finish; logs an error on timeout but returns nil so the launcher continues
  • Per-task timeout — when TaskTimeout > 0, each worker creates a context.WithTimeout child before invoking the task; tasks also receive a cancellation signal when the pool is stopping via the pool context
  • Error logging — task errors are logged with the worker ID; errors are not surfaced to the dispatcher

Design Notes

  • A single buffered chan Task is shared by all workers; closing it during OnStop is the drain signal, avoiding a separate done channel or additional synchronisation primitives.
  • Dispatch is deliberately non-blocking: a false return means the task has been dropped, not queued; the caller owns the retry or overflow decision, keeping backpressure handling out of the pool itself.
  • Provider / Component split follows the framework pattern: inject Provider into callers that only dispatch tasks, inject Component only at the lifecycle registration site.