`worker` depends on `launcher` (now correctly Tier 2) and `logz` (Tier 1), placing it at Tier 3. The previous docs cited `launcher` as Tier 1 and `logz` as Tier 0; both were wrong.
# Changelog
All notable changes to this module will be documented in this file.
The format is based on Keep a Changelog, and this module adheres to Semantic Versioning.
## 0.9.0 - 2026-03-18
### Added
- `Task` type — `func(ctx context.Context) error`; the unit of work dispatched to the pool
- `Config` struct — pool settings loaded from environment variables: `WORKER_POOL_SIZE` (number of concurrent goroutines, default `5`), `WORKER_BUFFER_SIZE` (task queue channel capacity, default `100`), `WORKER_TASK_TIMEOUT` (per-task context deadline; `0` means no deadline, default `0s`), `WORKER_SHUTDOWN_TIMEOUT` (time to wait for workers to drain on stop, default `30s`)
- `Provider` interface — `Dispatch(task Task) bool`; for callers that only dispatch tasks; returns `false` immediately when the queue is full (backpressure, non-blocking)
- `Component` interface — embeds `launcher.Component` and `Provider`; the full lifecycle-managed surface registered with the launcher
- `New(logger logz.Logger, cfg Config) Component` — constructor; applies safe defaults (`PoolSize <= 0` → `5`, `BufferSize <= 0` → `100`); returns a `Component` ready for `lc.Append`
- `OnInit` — logs pool configuration; initialises the buffered task channel
- `OnStart` — spawns `PoolSize` worker goroutines, each ranging over the task channel
- `OnStop` — closes the task channel (drain signal), cancels the pool context, then waits up to `ShutdownTimeout` for all goroutines to finish; logs an error on timeout but returns `nil` so the launcher continues
- Per-task timeout — when `TaskTimeout > 0`, each worker creates a `context.WithTimeout` child before invoking the task; tasks also receive a cancellation signal when the pool is stopping via the pool context
- Error logging — task errors are logged with the worker ID; errors are not surfaced to the dispatcher
## Design Notes
- A single buffered `chan Task` is shared by all workers; closing it during `OnStop` is the drain signal, avoiding a separate done channel or additional synchronisation primitives.
- `Dispatch` is deliberately non-blocking: a `false` return means the task has been dropped, not queued; the caller owns the retry or overflow decision, keeping backpressure handling out of the pool itself.
- The `Provider`/`Component` split follows the framework pattern: inject `Provider` into callers that only dispatch tasks, inject `Component` only at the lifecycle registration site.