Chapter 11: Database Strategy
PostgreSQL 16 on Shared Infrastructure
11.1 Overview
This chapter defines the database strategy for the POS Platform, using PostgreSQL 16 on shared infrastructure. The strategy balances performance, isolation, and operational simplicity for a multi-tenant SaaS application.
Key Decisions
| Decision | Choice | Rationale |
|---|---|---|
| Database Engine | PostgreSQL 16 | JSONB support, excellent concurrency, mature ecosystem |
| Multi-Tenancy | Schema-per-tenant | Strong isolation, easy backup/restore, no RLS overhead |
| Shared Tables | shared schema | Platform-wide users, tenants, and sessions |
| Connection Pooling | PgBouncer | Essential for multi-tenant connection efficiency |
| Hosting | Shared container (postgres16) | Existing infrastructure, reduced ops complexity |
11.2 Infrastructure Architecture
Physical Deployment
┌─────────────────────────────────────────────────────────────────────────────┐
│ SYNOLOGY NAS (192.168.1.26) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ PostgreSQL 16 Container │ │
│ │ (postgres16) │ │
│ │ │ │
│ │ Port: 5433 (external) → 5432 (internal) │ │
│ │ Data: /volume1/docker/postgres/data │ │
│ │ Network: postgres_default │ │
│ │ │ │
│ │ ┌───────────────────────────────────────────────────────────────┐ │ │
│ │ │ pos_platform Database │ │ │
│ │ │ │ │ │
│ │ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │ │
│ │ │ │ shared │ │ tenant_0001 │ │ tenant_0002 │ ... │ │ │
│ │ │ │ schema │ │ schema │ │ schema │ │ │ │
│ │ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │ │
│ │ │ │ │ │
│ │ └───────────────────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Other Databases: salessight_db, stanly_db, shopsyncflow_db, ... │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────┐ ┌──────────────────────────────────────┐ │
│ │ PgBouncer │◄───────►│ Application Containers │ │
│ │ Port: 6432 │ │ (pos-api, pos-admin, etc.) │ │
│ └─────────────────────┘ └──────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Database Creation
-- Create the main POS platform database
CREATE DATABASE pos_platform
WITH OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
TEMPLATE = template0;
-- Connect to the new database
\c pos_platform
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; -- UUID generation
CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- Cryptographic functions
CREATE EXTENSION IF NOT EXISTS "btree_gist"; -- GiST index support
CREATE EXTENSION IF NOT EXISTS "pg_trgm"; -- Trigram text search
-- Create the shared schema for platform tables
CREATE SCHEMA shared;
-- Grant usage to the application role (the pos_app role itself is created in Section 11.9)
GRANT USAGE ON SCHEMA shared TO pos_app;
11.3 Schema-Per-Tenant Architecture
Why Schema Isolation?
| Approach | Pros | Cons | Our Choice |
|---|---|---|---|
| Row-level (tenant_id) | Simple, single schema | RLS complexity, performance, no backup isolation | No |
| Schema-per-tenant | Strong isolation, easy backup, no RLS | Many schemas, connection overhead | Yes |
| Database-per-tenant | Maximum isolation | High resource usage, complex management | No |
Schema Naming Convention
shared -- Platform-wide tables (tenants, users, sessions)
tenant_0001 -- Tenant with ID 0001
tenant_0002 -- Tenant with ID 0002
tenant_XXXX -- Pattern: tenant_{4-digit-id}
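A small helper keeps the convention consistent wherever schema names are generated or accepted from callers. This is a sketch only, assuming tenants carry a numeric sequence alongside their UUID; the function names are illustrative, not part of the platform code.
// Sketch: derive and validate tenant schema names (names are illustrative).
const SCHEMA_PATTERN = /^tenant_\d{4}$/;

export function schemaNameForTenant(sequence: number): string {
  if (!Number.isInteger(sequence) || sequence < 1 || sequence > 9999) {
    throw new Error(`Tenant sequence out of range: ${sequence}`);
  }
  // Pattern: tenant_{4-digit-id}, e.g. 7 -> "tenant_0007"
  return `tenant_${String(sequence).padStart(4, '0')}`;
}

export function assertValidSchemaName(name: string): void {
  // Reject anything that does not match the convention before it is ever
  // interpolated into SQL (defense in depth on top of parameterized calls).
  if (!SCHEMA_PATTERN.test(name)) {
    throw new Error(`Invalid tenant schema name: ${name}`);
  }
}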
Schema Provisioning Workflow
-- Function to provision a new tenant schema
CREATE OR REPLACE FUNCTION shared.provision_tenant_schema(
p_tenant_id UUID,
p_schema_name VARCHAR(63)
)
RETURNS VOID AS $$
DECLARE
v_schema_exists BOOLEAN;
BEGIN
-- Check if schema already exists
SELECT EXISTS(
SELECT 1 FROM information_schema.schemata
WHERE schema_name = p_schema_name
) INTO v_schema_exists;
IF v_schema_exists THEN
RAISE EXCEPTION 'Schema % already exists', p_schema_name;
END IF;
-- Create the tenant schema
EXECUTE format('CREATE SCHEMA %I', p_schema_name);
-- Grant permissions to application role
EXECUTE format('GRANT USAGE ON SCHEMA %I TO pos_app', p_schema_name);
EXECUTE format('GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA %I TO pos_app', p_schema_name);
EXECUTE format('GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA %I TO pos_app', p_schema_name);
-- Set default privileges for future tables
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO pos_app', p_schema_name);
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO pos_app', p_schema_name);
-- Create all tenant tables using template
PERFORM shared.create_tenant_tables(p_schema_name);
-- Seed default data (roles, settings)
PERFORM shared.seed_tenant_data(p_schema_name, p_tenant_id);
-- Update tenant record
UPDATE shared.tenants
SET status = 'active', schema_name = p_schema_name
WHERE id = p_tenant_id;
RAISE NOTICE 'Tenant schema % provisioned successfully', p_schema_name;
END;
$$ LANGUAGE plpgsql;
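From the application side, provisioning reduces to a single parameterized call. A minimal sketch, assuming node-postgres and a pre-existing row in shared.tenants; the pool wiring and function name are illustrative.
// Sketch: invoke the provisioning function from an onboarding service
// (assumes node-postgres; pool/tenant wiring is illustrative).
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function provisionTenant(tenantId: string, schemaName: string): Promise<void> {
  // The schema name is passed as a bind parameter; identifier quoting inside
  // the database function is handled by format('%I', ...).
  await pool.query('SELECT shared.provision_tenant_schema($1, $2)', [
    tenantId,
    schemaName,
  ]);
}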
Connection Search Path Pattern
-- Application layer sets search_path per request
-- This enables transparent schema resolution
-- Example: User authenticated for tenant_0001
SET search_path TO tenant_0001, shared;
-- Queries now resolve correctly:
SELECT * FROM products; -- → tenant_0001.products
SELECT * FROM tenants; -- → shared.tenants (fallback)
SELECT * FROM users; -- → shared.users (fallback)
-- Cross-schema join example
SELECT u.email, tu.role
FROM users u -- shared.users
JOIN tenant_users tu ON u.id = tu.user_id -- tenant_0001.tenant_users
WHERE tu.is_active = TRUE;
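In the application this is typically wrapped in a per-request helper. Because the pooler runs in transaction mode (see Section 11.4), the search_path must be applied inside the same transaction as the queries it should affect; a session-level SET could land on a different server connection. A minimal sketch, assuming node-postgres; the helper name is illustrative.
// Sketch: run a callback with the tenant's search_path applied
// (assumes node-postgres and PgBouncer transaction pooling; names illustrative).
import { Pool, PoolClient } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function withTenant<T>(
  schemaName: string, // must already be validated against the tenant_\d{4} pattern
  fn: (client: PoolClient) => Promise<T>,
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // set_config(..., true) behaves like SET LOCAL (scoped to this transaction,
    // i.e. one PgBouncer server-connection assignment) and accepts a bind
    // parameter, so the schema name is never interpolated into SQL text.
    await client.query("SELECT set_config('search_path', $1, true)", [
      `${schemaName}, shared`,
    ]);
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

// Usage: const products = await withTenant('tenant_0001', c => c.query('SELECT * FROM products'));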
11.4 Connection Pooling Strategy
PgBouncer Configuration
; /etc/pgbouncer/pgbouncer.ini
[databases]
; Route all connections through pooler
pos_platform = host=postgres16 port=5432 dbname=pos_platform
[pgbouncer]
; Pool mode: transaction (best for multi-tenant)
pool_mode = transaction
; Pooler ports
listen_addr = 0.0.0.0
listen_port = 6432
; Authentication
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Pool sizing
; For multi-tenant: (tenants * 2) + (admin connections)
default_pool_size = 20
max_client_conn = 1000
min_pool_size = 5
; Reserve connections for admin
reserve_pool_size = 5
reserve_pool_timeout = 3
; Connection limits
max_db_connections = 100
max_user_connections = 100
; Timeouts
server_connect_timeout = 5
server_idle_timeout = 60
server_lifetime = 3600
query_timeout = 30
; Logging
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1
stats_period = 60
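With auth_type = md5, the auth_file holds one line per user in the form "username" "md5<hash>", where the hash is md5(password + username). A small sketch for generating such an entry; the function name and usage are illustrative.
// Sketch: generate a PgBouncer userlist.txt entry for auth_type = md5
// (the stored value is "md5" followed by md5(password + username)).
import { createHash } from 'node:crypto';

export function pgbouncerUserlistEntry(username: string, password: string): string {
  const hash = createHash('md5').update(password + username).digest('hex');
  return `"${username}" "md5${hash}"`;
}

// Example: console.log(pgbouncerUserlistEntry('pos_app', process.env.DB_PASSWORD ?? ''));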
Pool Mode Comparison
| Mode | Behavior | Use Case |
|---|---|---|
| Session | Connection per session | Long-running sessions |
| Transaction | Connection per transaction | Multi-tenant APIs |
| Statement | Connection per statement | Read replicas only |
Recommendation: Use transaction mode for the POS API to maximize connection reuse across tenants. Note that transaction pooling does not preserve session state between transactions, so the per-request search_path must be applied with SET LOCAL (or set_config with is_local = true) inside each transaction, as shown in Section 11.3, rather than with a session-level SET.
Docker Compose Integration
# docker-compose.yml
services:
pgbouncer:
image: bitnami/pgbouncer:latest
container_name: pos-pgbouncer
environment:
- PGBOUNCER_DATABASE=pos_platform
- PGBOUNCER_PORT=6432
- PGBOUNCER_POOL_MODE=transaction
- PGBOUNCER_MAX_CLIENT_CONN=1000
- PGBOUNCER_DEFAULT_POOL_SIZE=20
- POSTGRESQL_HOST=postgres16
- POSTGRESQL_PORT=5432
- POSTGRESQL_USERNAME=pos_app
- POSTGRESQL_PASSWORD=${DB_PASSWORD}
ports:
- "6432:6432"
networks:
- postgres_default
    # No depends_on entry: postgres16 runs in a separate compose project and is
    # reached over the external postgres_default network.
healthcheck:
test: ["CMD", "pg_isready", "-h", "localhost", "-p", "6432"]
interval: 10s
timeout: 5s
retries: 5
networks:
postgres_default:
external: true
11.5 Backup and Restore Per-Tenant
Backup Strategy Overview
| Backup Type | Frequency | Retention | Purpose |
|---|---|---|---|
| Full Database | Daily | 30 days | Disaster recovery |
| Tenant Schema | On-demand | 90 days | Tenant migration, recovery |
| WAL Archives | Continuous | 7 days | Point-in-time recovery |
| Shared Schema | Daily | 30 days | Platform recovery |
Full Database Backup
#!/bin/bash
# /volume1/docker/scripts/backup-pos-full.sh
set -euo pipefail
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/volume1/backup/pos_platform"
CONTAINER="postgres16"
DB_NAME="pos_platform"
# Create backup directory
mkdir -p "$BACKUP_DIR/full"
# Full database dump with compression
docker exec $CONTAINER pg_dump \
-U postgres \
-d $DB_NAME \
-Fc \
-Z 9 \
-f /tmp/pos_platform_${DATE}.dump
# Copy to backup location
docker cp $CONTAINER:/tmp/pos_platform_${DATE}.dump \
"$BACKUP_DIR/full/"
# Cleanup container temp file
docker exec $CONTAINER rm /tmp/pos_platform_${DATE}.dump
# Remove backups older than 30 days
find "$BACKUP_DIR/full" -name "*.dump" -mtime +30 -delete
echo "Full backup completed: pos_platform_${DATE}.dump"
Tenant-Specific Backup
#!/bin/bash
# /volume1/docker/scripts/backup-tenant.sh
# Usage: ./backup-tenant.sh tenant_0001
set -euo pipefail
TENANT_SCHEMA=${1:-}
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/volume1/backup/pos_platform/tenants"
CONTAINER="postgres16"
DB_NAME="pos_platform"
if [ -z "$TENANT_SCHEMA" ]; then
echo "Usage: $0 <tenant_schema>"
exit 1
fi
# Create backup directory
mkdir -p "$BACKUP_DIR/$TENANT_SCHEMA"
# Backup tenant schema only
docker exec $CONTAINER pg_dump \
-U postgres \
-d $DB_NAME \
-n $TENANT_SCHEMA \
-Fc \
-Z 9 \
-f /tmp/${TENANT_SCHEMA}_${DATE}.dump
# Copy to backup location
docker cp $CONTAINER:/tmp/${TENANT_SCHEMA}_${DATE}.dump \
"$BACKUP_DIR/$TENANT_SCHEMA/"
# Cleanup
docker exec $CONTAINER rm /tmp/${TENANT_SCHEMA}_${DATE}.dump
echo "Tenant backup completed: ${TENANT_SCHEMA}_${DATE}.dump"
Tenant Restore Procedure
#!/bin/bash
# /volume1/docker/scripts/restore-tenant.sh
# Usage: ./restore-tenant.sh tenant_0001 backup_file.dump
set -euo pipefail
TENANT_SCHEMA=${1:-}
BACKUP_FILE=${2:-}
CONTAINER="postgres16"
DB_NAME="pos_platform"
if [ -z "$TENANT_SCHEMA" ] || [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <tenant_schema> <backup_file>"
exit 1
fi
# Copy backup to container
docker cp "$BACKUP_FILE" $CONTAINER:/tmp/restore.dump
# Drop existing schema (if restoring over existing)
docker exec $CONTAINER psql -U postgres -d $DB_NAME -c \
"DROP SCHEMA IF EXISTS $TENANT_SCHEMA CASCADE;"
# Restore schema
docker exec $CONTAINER pg_restore \
-U postgres \
-d $DB_NAME \
-n $TENANT_SCHEMA \
/tmp/restore.dump
# Cleanup
docker exec $CONTAINER rm /tmp/restore.dump
echo "Tenant restore completed: $TENANT_SCHEMA"
Tenant Migration (Between Databases)
# Export a tenant to a SQL file for migration to a different server.
# Run on the source server.
# 1. Create the migration dump (omit "-n shared" if the target database
#    already contains the shared schema, to avoid duplicate-object errors)
pg_dump -U postgres -d pos_platform \
  -n tenant_0001 \
  -n shared \
  --no-owner \
  --no-privileges \
  -f tenant_0001_migration.sql

# 2. On the target server, restore
psql -U postgres -d pos_platform_new < tenant_0001_migration.sql
-- 3. Update tenant registry on target
UPDATE shared.tenants
SET status = 'active',
schema_name = 'tenant_0001'
WHERE id = 'original-tenant-uuid';
11.6 Performance Considerations
Table Partitioning Strategy
For high-volume time-series tables, use declarative partitioning:
-- Partition inventory_transactions by month
CREATE TABLE tenant_0001.inventory_transactions (
id BIGSERIAL,
variant_id INT NOT NULL,
location_id INT NOT NULL,
transaction_type VARCHAR(20) NOT NULL,
quantity_change INT NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
-- ... other columns
PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Create monthly partitions
CREATE TABLE tenant_0001.inventory_transactions_2025_01
PARTITION OF tenant_0001.inventory_transactions
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE tenant_0001.inventory_transactions_2025_02
PARTITION OF tenant_0001.inventory_transactions
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- Automate partition creation
CREATE OR REPLACE FUNCTION shared.create_monthly_partitions()
RETURNS VOID AS $$
DECLARE
r RECORD;
next_month DATE;
start_date DATE;
end_date DATE;
BEGIN
-- Get all tenant schemas
FOR r IN SELECT schema_name FROM shared.tenants WHERE status = 'active'
LOOP
-- Create partitions for next 3 months
FOR i IN 0..2 LOOP
next_month := date_trunc('month', CURRENT_DATE + (i || ' months')::interval);
start_date := next_month;
end_date := next_month + '1 month'::interval;
-- Check if partition exists
IF NOT EXISTS (
SELECT 1 FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE n.nspname = r.schema_name
AND c.relname = 'inventory_transactions_' || to_char(next_month, 'YYYY_MM')
) THEN
EXECUTE format(
'CREATE TABLE %I.inventory_transactions_%s
PARTITION OF %I.inventory_transactions
FOR VALUES FROM (%L) TO (%L)',
r.schema_name, to_char(next_month, 'YYYY_MM'),
r.schema_name, start_date, end_date
);
END IF;
END LOOP;
END LOOP;
END;
$$ LANGUAGE plpgsql;
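The function still needs a scheduler. A sketch that calls it monthly from the API container, assuming node-cron and node-postgres; pg_cron inside the database would work equally well, and the job wiring below is illustrative.
// Sketch: ensure next months' partitions exist, run early on the 1st of each month
// (assumes node-cron and node-postgres; pg_cron is an alternative scheduler).
import cron from 'node-cron';
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

cron.schedule('0 3 1 * *', async () => {
  try {
    await pool.query('SELECT shared.create_monthly_partitions()');
    console.log('Monthly partitions ensured for all active tenant schemas');
  } catch (err) {
    console.error('Partition maintenance failed', err);
  }
});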
Connection Pooling Metrics
Monitor these key metrics in PgBouncer:
-- Connect to PgBouncer admin console
-- psql -h localhost -p 6432 -U postgres pgbouncer
-- Show pool status
SHOW POOLS;
-- Key metrics to monitor:
-- cl_active: Active client connections
-- cl_waiting: Clients waiting for connection
-- sv_active: Active server connections
-- sv_idle: Idle server connections
-- sv_used: Server connections in use
-- Show statistics
SHOW STATS;
-- Alert thresholds:
-- cl_waiting > 10: Pool exhaustion risk
-- sv_active / default_pool_size > 0.8: Near capacity
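These checks are easy to automate. A sketch that polls the PgBouncer admin console and flags the pool-exhaustion threshold above; the hostname, admin credentials, and alerting hook are illustrative. The admin console only speaks the simple query protocol, so the query is issued without bind parameters.
// Sketch: poll PgBouncer's admin console for pool pressure
// (assumes node-postgres; host/credentials/threshold handling illustrative).
import { Client } from 'pg';

export async function checkPoolPressure(): Promise<void> {
  const admin = new Client({
    host: 'pos-pgbouncer',
    port: 6432,
    database: 'pgbouncer', // the admin pseudo-database
    user: 'postgres',
    password: process.env.PGBOUNCER_ADMIN_PASSWORD,
  });
  await admin.connect();
  try {
    // No bind parameters: the admin console only supports simple queries.
    const { rows } = await admin.query('SHOW POOLS');
    for (const pool of rows) {
      if (Number(pool.cl_waiting) > 10) {
        console.warn(`Pool ${pool.database}: ${pool.cl_waiting} clients waiting (exhaustion risk)`);
      }
    }
  } finally {
    await admin.end();
  }
}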
Query Timeout Configuration
-- Set statement timeout to prevent long-running queries
SET statement_timeout = '30s';
-- For specific operations, extend timeout
SET LOCAL statement_timeout = '5m';
-- Connection-level setting in PgBouncer (pgbouncer.ini):
--   query_timeout = 30
-- PostgreSQL server-level settings (postgresql.conf), values in milliseconds:
--   statement_timeout = 30000                       # 30 seconds
--   lock_timeout = 10000                            # 10 seconds
--   idle_in_transaction_session_timeout = 60000     # 1 minute
Memory Configuration
# postgresql.conf optimizations for multi-tenant
# Shared memory (25% of RAM)
shared_buffers = 4GB
# Work memory per query (be conservative with many tenants)
work_mem = 64MB
# Maintenance operations
maintenance_work_mem = 512MB
# Effective cache size (75% of RAM)
effective_cache_size = 12GB
# Connection limits
max_connections = 200 # Higher limit, pooler handles distribution
# WAL settings
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 2GB
min_wal_size = 512MB
# Query planning
random_page_cost = 1.1 # SSD storage
effective_io_concurrency = 200 # SSD storage
11.7 High Availability Considerations
Replication Setup (Future)
# docker-compose-ha.yml (for future HA deployment)
# Note: replication via environment variables is a feature of the Bitnami image;
# the official postgres image would require manual pg_basebackup/primary_conninfo setup.
services:
  postgres-primary:
    image: bitnami/postgresql:16
    environment:
      - POSTGRESQL_REPLICATION_MODE=master
      - POSTGRESQL_REPLICATION_USER=replicator
      - POSTGRESQL_REPLICATION_PASSWORD=${REPL_PASSWORD}
      - POSTGRESQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-primary-data:/bitnami/postgresql

  postgres-replica:
    image: bitnami/postgresql:16
    environment:
      - POSTGRESQL_REPLICATION_MODE=slave
      - POSTGRESQL_MASTER_HOST=postgres-primary
      - POSTGRESQL_MASTER_PORT_NUMBER=5432
      - POSTGRESQL_REPLICATION_USER=replicator
      - POSTGRESQL_REPLICATION_PASSWORD=${REPL_PASSWORD}
      - POSTGRESQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-replica-data:/bitnami/postgresql
    depends_on:
      - postgres-primary

volumes:
  postgres-primary-data:
  postgres-replica-data:
Read Replica Routing
# PgBouncer configuration for read replicas
[databases]
pos_platform = host=postgres-primary port=5432 dbname=pos_platform
pos_platform_ro = host=postgres-replica port=5432 dbname=pos_platform
Application routing:
// Database connection factory (assumes node-postgres)
import { Pool } from 'pg';

class DatabaseConnection {
  // Pools are created once and reused; constructing a new Pool per call
  // would leak connections.
  private readonly primary = new Pool({
    connectionString: process.env.DATABASE_URL
  });
  private readonly replica = new Pool({
    connectionString: process.env.DATABASE_URL_READONLY
  });

  getPrimaryConnection(): Pool {
    return this.primary;
  }

  getReadReplicaConnection(): Pool {
    return this.replica;
  }

  // Route based on operation type
  getConnection(operation: 'read' | 'write'): Pool {
    return operation === 'read'
      ? this.getReadReplicaConnection()
      : this.getPrimaryConnection();
  }
}
11.8 Monitoring and Alerting
Key Database Metrics
-- Database size by schema
SELECT
schemaname AS schema,
pg_size_pretty(SUM(pg_total_relation_size(schemaname || '.' || tablename))) AS total_size
FROM pg_tables
WHERE schemaname LIKE 'tenant_%' OR schemaname = 'shared'
GROUP BY schemaname
ORDER BY SUM(pg_total_relation_size(schemaname || '.' || tablename)) DESC;
-- Active connections by tenant (rough heuristic: infers the tenant from search_path
-- text in the current query; tagging connections with application_name per tenant
-- gives more reliable attribution)
SELECT
CASE
WHEN query LIKE '%search_path%tenant_%' THEN
substring(query FROM 'tenant_[0-9]+')
ELSE 'unknown'
END AS tenant,
COUNT(*) AS connections
FROM pg_stat_activity
WHERE datname = 'pos_platform'
AND state = 'active'
GROUP BY 1
ORDER BY 2 DESC;
-- Table bloat check
SELECT
schemaname || '.' || relname AS table_name,
pg_size_pretty(pg_relation_size(schemaname || '.' || relname)) AS size,
n_dead_tup AS dead_tuples,
last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname LIKE 'tenant_%'
AND n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;
-- Slow queries (requires pg_stat_statements)
SELECT
query,
calls,
mean_exec_time::numeric(10,2) AS avg_ms,
total_exec_time::numeric(10,2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
Alerting Thresholds
| Metric | Warning | Critical | Action |
|---|---|---|---|
| Connection usage | 70% | 90% | Scale pool, investigate |
| Disk usage | 70% | 85% | Cleanup, expand storage |
| Replication lag | 10s | 60s | Check network, replica health |
| Long-running queries | 30s | 60s | Investigate, possibly kill |
| Dead tuples | 1M | 5M | Force vacuum |
| Cache hit ratio | <95% | <90% | Increase shared_buffers |
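A sketch of a periodic health check covering two of these thresholds, cache hit ratio and dead tuples. The queries use the standard pg_stat views; the alert callback and scheduling are illustrative.
// Sketch: periodic health check for cache hit ratio and dead tuples
// (assumes node-postgres; the alert() wiring is illustrative).
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function checkDatabaseHealth(alert: (msg: string) => void): Promise<void> {
  // Cache hit ratio for the pos_platform database (warning below 95%).
  const hit = await pool.query(`
    SELECT round(100 * sum(blks_hit) / nullif(sum(blks_hit) + sum(blks_read), 0), 2) AS ratio
    FROM pg_stat_database
    WHERE datname = 'pos_platform'`);
  const ratio = hit.rows[0]?.ratio;
  if (ratio !== null && ratio !== undefined && Number(ratio) < 95) {
    alert(`Cache hit ratio ${ratio}% (< 95%)`);
  }

  // Tables past the dead-tuple warning threshold (1M).
  const bloat = await pool.query(`
    SELECT schemaname || '.' || relname AS table_name, n_dead_tup
    FROM pg_stat_user_tables
    WHERE n_dead_tup > 1000000`);
  for (const row of bloat.rows) {
    alert(`${row.table_name}: ${row.n_dead_tup} dead tuples (> 1M)`);
  }
}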
11.9 Security Configuration
Role-Based Access
-- Create application role (minimal privileges)
CREATE ROLE pos_app WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE pos_platform TO pos_app;
GRANT USAGE ON SCHEMA shared TO pos_app;
-- Tenant schemas granted during provisioning
-- Create admin role (elevated privileges)
CREATE ROLE pos_admin WITH LOGIN PASSWORD 'admin_password';
GRANT ALL PRIVILEGES ON DATABASE pos_platform TO pos_admin;
-- Create read-only role (for reporting)
CREATE ROLE pos_readonly WITH LOGIN PASSWORD 'readonly_password';
GRANT CONNECT ON DATABASE pos_platform TO pos_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA shared TO pos_readonly;
-- Grant SELECT on tenant schemas as needed
-- Prevent direct access to sensitive columns
REVOKE ALL ON shared.users FROM PUBLIC;
GRANT SELECT (id, email, first_name, last_name, is_platform_admin, email_verified, created_at)
ON shared.users TO pos_app;
-- password_hash, mfa_secret not accessible
SSL/TLS Configuration
# postgresql.conf
ssl = on
ssl_cert_file = '/var/lib/postgresql/server.crt'
ssl_key_file = '/var/lib/postgresql/server.key'
ssl_ca_file = '/var/lib/postgresql/root.crt'
ssl_min_protocol_version = 'TLSv1.2'
# Require SSL for external connections
# pg_hba.conf
hostssl pos_platform pos_app 0.0.0.0/0 scram-sha-256
11.10 Quick Reference
Connection Strings
# Direct PostgreSQL connection
postgres://pos_app:password@192.168.1.26:5433/pos_platform
# Through PgBouncer (recommended)
postgres://pos_app:password@192.168.1.26:6432/pos_platform
# Application environment variables
DATABASE_URL=postgres://pos_app:password@pgbouncer:6432/pos_platform
DATABASE_URL_READONLY=postgres://pos_readonly:password@pgbouncer:6432/pos_platform_ro
Common Operations
# Connect to database
docker exec -it postgres16 psql -U postgres -d pos_platform
# List all tenant schemas
docker exec postgres16 psql -U postgres -d pos_platform -c \
"SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'tenant_%';"
# Check tenant table counts
docker exec postgres16 psql -U postgres -d pos_platform -c \
"SELECT schemaname, COUNT(*) FROM pg_tables WHERE schemaname LIKE 'tenant_%' GROUP BY 1;"
# Backup specific tenant
./backup-tenant.sh tenant_0001
# Vacuum full (maintenance window only)
docker exec postgres16 psql -U postgres -d pos_platform -c "VACUUM FULL ANALYZE tenant_0001.products;"
Next Chapter: Chapter 12: Schema Design - Detailed schema structure with 51 tables across 13 domains.
Chapter 11 | Database Strategy | POS Platform Blueprint v1.0.0