feat(postgres): initial stable release v0.9.0

pgx v5-native PostgreSQL client with launcher lifecycle, health check, unit-of-work via context injection, and structured error mapping.

What's included:
- Executor / Tx / Client / Component interfaces using pgx native types (pgconn.CommandTag, pgx.Rows, pgx.Row)
- New(logger, cfg) constructor; pgxpool initialised in OnInit
- Config struct with env-tag support for all pool tuning parameters
- UnitOfWork via context injection; GetExecutor(ctx) returns active Tx or pool
- HandleError mapping pgerrcode constants to xerrors codes (AlreadyExists, InvalidInput, NotFound, Internal)
- health.Checkable at LevelCritical; HealthCheck delegates to pgxpool.Ping

Tested-via: todo-api POC integration
Reviewed-against: docs/adr/
2026-03-19 13:18:07 +00:00
commit 2baafa6a0c
16 changed files with 949 additions and 0 deletions

# ADR-001: pgx Native Types
**Status:** Accepted
**Date:** 2026-03-18
## Context
Go's standard `database/sql` package provides a database-agnostic interface. Using it with PostgreSQL requires a `database/sql`-compatible driver and means working with `sql.Result`, `*sql.Rows`, and `*sql.Row` — types that were designed for the lowest common denominator across all SQL databases.
`github.com/jackc/pgx/v5` is a PostgreSQL-specific driver and toolkit that exposes its own richer type system: `pgx.Rows`, `pgx.Row`, and `pgconn.CommandTag`. It provides better performance, native support for PostgreSQL-specific types (arrays, hstore, composite types, etc.), and a more accurate representation of PostgreSQL semantics (e.g., `CommandTag` carries `RowsAffected` as well as the SQL command string).
The tradeoff is that choosing pgx means explicitly not supporting other databases through the same client type.
## Decision
The `postgres` module uses pgx native types throughout its public API. The `Executor` interface uses:
```go
Exec(ctx context.Context, sql string, args ...any) (pgconn.CommandTag, error)
Query(ctx context.Context, sql string, args ...any) (pgx.Rows, error)
QueryRow(ctx context.Context, sql string, args ...any) pgx.Row
```
The connection pool is `*pgxpool.Pool` (from `pgx/v5/pgxpool`). The transaction type wraps `pgx.Tx`. There is no `database/sql` adapter layer.
Repository code in application layers imports `pgx` types directly when scanning rows or reading `CommandTag`. This is an explicit, honest API: it says "this is PostgreSQL via pgx" rather than pretending to be database-agnostic.
## Consequences
- **Positive**: Full access to PostgreSQL-specific capabilities (binary encoding, COPY protocol, listen/notify, array types, etc.) without impedance mismatch.
- **Positive**: `pgconn.CommandTag` carries richer information than `sql.Result` (includes the command string, not just rows affected).
- **Positive**: `pgx.Rows` and `pgx.Row` support pgx scan helpers and named arguments.
- **Negative**: Repository code cannot be trivially swapped to use the `mysql` module or any other `database/sql` driver — it imports pgx types. This is acceptable because the tier system isolates database clients at Tier 3; application logic in higher tiers operates through domain interfaces, not directly on `Executor`.
- **Negative**: `pgx.Rows` must be closed after iteration (`defer rows.Close()`). Forgetting this leaks connections. This is the same discipline as `database/sql` but worth noting.

# ADR-002: Local Executor Interface
**Status:** Accepted
**Date:** 2026-03-18
## Context
The `Executor` interface — the common query interface shared by the connection pool and an active transaction — must be defined somewhere. Earlier iterations of this codebase placed it in a shared `dbutil` package that both `postgres` and `mysql` imported. This created a cross-cutting dependency: every database module depended on `dbutil`, and `dbutil` had to make choices (e.g., which type system to use) that were appropriate for only one of them.
`dbutil` was eliminated as part of the monorepo refactor (see `plan/decisions.md`).
## Decision
The `Executor` interface is defined locally inside the `postgres` package:
```go
type Executor interface {
Exec(ctx context.Context, sql string, args ...any) (pgconn.CommandTag, error)
Query(ctx context.Context, sql string, args ...any) (pgx.Rows, error)
QueryRow(ctx context.Context, sql string, args ...any) pgx.Row
}
```
The `mysql` package defines its own separate `Executor` using `database/sql` types. The two are not interchangeable by design — they represent different type systems.
`Tx` extends `Executor` with `Commit(ctx context.Context) error` and `Rollback(ctx context.Context) error`. `Client` provides `GetExecutor`, `Begin`, `Ping`, and `HandleError`. `Component` composes `Client`, `launcher.Component`, and `health.Checkable`.
Repository code in application layers should depend on `postgres.Executor` (or the higher-level `postgres.Client`) — not on the concrete `*pgxpool.Pool` or `pgTx` types.
## Consequences
- **Positive**: No shared `dbutil` dependency. Each database module owns its interface and can evolve it independently.
- **Positive**: The interface methods use pgx-native types, so there is no impedance mismatch between the interface and the implementation.
- **Positive**: Mocking `postgres.Executor` in tests requires only implementing three methods with pgx return types — no wrapper types needed.
- **Negative**: If a project uses both `postgres` and `mysql`, neither module's `Executor` is compatible with the other. Cross-database abstractions must be built at the application domain interface layer, not by sharing a common `Executor`.
- **Note**: `pgComponent` itself also implements `Executor` directly (forwarding to the pool), which means a `*pgComponent` can be used wherever an `Executor` is expected without calling `GetExecutor`. This is intentional for ergonomics in simple cases where no transaction management is needed.

# ADR-003: Unit of Work via Context Injection
**Status:** Accepted
**Date:** 2026-03-18
## Context
Database transactions must span multiple repository calls without requiring each repository method to accept a `Tx` parameter explicitly. Passing `Tx` as a parameter would leak transaction concepts into repository method signatures and force every call site to decide whether it is inside a transaction.
An alternative is ambient transaction state stored in a thread-local or goroutine-local variable, but Go has no such construct, and package-level state would break concurrent use.
## Decision
The active transaction is stored in the request `context.Context` under an unexported key type `ctxTxKey{}`:
```go
type ctxTxKey struct{}
```
`UnitOfWork.Do` begins a transaction, injects it into the context, and calls the user-supplied function with the enriched context:
```go
ctx = context.WithValue(ctx, ctxTxKey{}, tx)
err := fn(ctx)
```
`Client.GetExecutor(ctx)` checks the context for an active transaction first:
```go
if tx, ok := ctx.Value(ctxTxKey{}).(Executor); ok {
return tx
}
// fall back to pool
```
If there is no active transaction, `GetExecutor` returns the pool. This means repository code uses `db.GetExecutor(ctx)` uniformly and is agnostic about whether it is inside a transaction.
`Tx.Commit(ctx)` and `Tx.Rollback(ctx)` both accept `ctx` — this is supported by `pgx.Tx` and matches the overall pgx API convention.
On function error, `UnitOfWork.Do` calls `Rollback` and returns the original error. Rollback failures are logged but do not replace the original error.
## Consequences
- **Positive**: Repository methods need only `ctx context.Context` and `db postgres.Client`; they do not need a separate `Tx` parameter.
- **Positive**: Nesting `UnitOfWork.Do` calls is safe — the inner call will pick up the already-injected transaction from the context, so a single transaction spans all nested calls. (pgx savepoints are not used; the outer transaction is reused.)
- **Positive**: The unexported `ctxTxKey{}` type prevents collisions with other packages that store values in the context.
- **Negative**: The transaction is invisible from a type-system perspective — there is no way to statically verify that a function is called inside a `UnitOfWork.Do`. Violations are runtime errors, not compile-time errors.
- **Negative**: Passing a context that carries a transaction to a goroutine that outlives the `UnitOfWork.Do` call would use a closed transaction. Callers must not spawn goroutines from inside the `Do` function that outlive `Do`.