Chapter 04: Architecture Styles Analysis

Purpose

This chapter documents the formal architecture styles evaluation for the Nexus POS Platform. It provides the decision rationale for selecting the primary architecture style and supporting patterns, updated per expert panel review against BRD v18.0.

Source: Architecture Styles Worksheet v2.0 (Expert Panel-Reviewed)
Project: POS Platform (Nexus)
Architect/Team: Cloud AI Architecture Agents
Date: February 19, 2026
Panel Review Score: 6.50/10 → Updated per 4-member expert panel recommendations


L.1 Candidate Architecture Styles

Based on the identified driving characteristics (Availability, Interoperability, Data Consistency), the following architecture styles were evaluated.

L.1.1 Event-Driven Architecture (EDA)

| Attribute | Value |
| --- | --- |
| Description | A distributed asynchronous architecture pattern used to produce highly scalable and high-performance applications. |
| Relevance to Nexus | Deeply aligned with the “Interoperability” and “Data Consistency” (Sync) requirements. External channels (Amazon, Shopify) and local POS terminals produce disjointed events that must be reconciled eventually. |
| Decision | Selected (Communication Layer) |
| Key Technology | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0, when scale justifies) |

v18.0 Update: BRD designs around PostgreSQL tables for idempotency_records and integration_dead_letters (not Kafka topics). Amazon SP-API polls every 2 minutes; Google Merchant batches 2x/day. Streaming infrastructure is not required at launch. PostgreSQL event tables with LISTEN/NOTIFY provide sufficient event notification for v1.0. Kafka adoption deferred to v2.0 when transaction volume or real-time analytics requirements justify the operational overhead (ZooKeeper/KRaft cluster management).
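The v1.0 notification path described above can be sketched in a few lines. This is a minimal illustration, assuming node-postgres (`pg`) semantics: NOTIFY payloads arrive as `'notification'` events on a dedicated client connection. The channel name `nexus_events` and the payload shape are illustrative, not taken from the BRD.

```typescript
// Subscribe a handler to PostgreSQL LISTEN/NOTIFY traffic. Accepting any
// EventEmitter that emits 'notification' (as a node-postgres Client does)
// keeps the wiring testable without a live database.
import { EventEmitter } from "node:events";

interface Notification {
  channel: string;
  payload?: string;
}

function subscribeToEvents(
  client: EventEmitter,
  channel: string,
  onEvent: (event: Record<string, unknown>) => void,
): void {
  client.on("notification", (msg: Notification) => {
    // NOTIFY payloads are plain strings; event data is JSON-encoded by convention
    if (msg.channel === channel && msg.payload) {
      onEvent(JSON.parse(msg.payload));
    }
  });
}
```

With a real `pg` client, the caller would first issue `LISTEN nexus_events` on the same connection before relying on notifications.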


L.1.2 Microservices Architecture

| Attribute | Value |
| --- | --- |
| Description | An architecture style that structures an application as a collection of loosely coupled services, each with its own database. |
| Relevance to Nexus | Evaluated for “Scalability,” but rejected as the primary style for the Core API. |
| Decision | Rejected |
| Rationale | The operational complexity of managing separate databases for 50+ services is unnecessary at the current scale. |

L.1.3 Microkernel (Plugin) Architecture

| Attribute | Value |
| --- | --- |
| Description | A core system with a plugin interface to add additional features. |
| Relevance to Nexus | Directly addresses the “Modifiability” requirement. The Blueprint specifies “Integration Adapters” (Payment, Tax) and a “Hardware Layer” in the client, fitting this pattern. |
| Decision | Selected (Client) |

L.1.4 Modular Monolith (Layered) Architecture

| Attribute | Value |
| --- | --- |
| Description | A single deployable unit (“Central API”) structured into distinct, loosely coupled modules (Catalog, Sales, Inventory) that enforce strict boundaries. |
| Relevance to Nexus | High fit. The Blueprint describes a “Central API Layer” (Stateless) containing all core services. This offers the modularity of microservices without the distributed complexity, aligning with the “Simplicity” and “Maintenance” goals. |
| Decision | Selected (Core API) |

v18.0 Update — Extractable Integration Gateway: Module 6 (Integrations, 4,800+ lines) is designed as a logically separate module within the monolith with explicit boundary contracts: IIntegrationProvider interface, async messaging via Transactional Outbox, and dedicated error handling (ERR-6xxx range). This module can be extracted to a separate service when scale demands independent deployment, without changing the core POS modules. Circuit breaker isolation ensures external API failures (Amazon, Google, Shopify) cannot cascade to POS checkout operations.


L.1.5 Service-Based Architecture

| Attribute | Value |
| --- | --- |
| Description | A hybrid style with coarse-grained services (e.g., Inventory, Sales, HR) often sharing a database. |
| Relevance to Nexus | Offers a middle ground. The Blueprint’s “Service Layer” within the Central API follows this structure logically. |
| Decision | Middle ground (influences internal structure) |

L.1.6 Space-Based Architecture

| Attribute | Value |
| --- | --- |
| Description | Designed for high scalability and concurrency using tuple spaces (distributed caching/in-memory grids). |
| Relevance to Nexus | Could handle “Black Friday” spikes, but data consistency (synchronization to persistent storage) is too complex for the strict financial audit requirements. |
| Decision | Rejected |
| Rationale | Too complex for financial audit requirements |

L.1.7 Event Sourcing (Architecture Pattern)

| Attribute | Value |
| --- | --- |
| Description | A data persistence pattern where state transitions are stored as a sequence of immutable events (e.g., ItemAdded, PaymentAuthorized) rather than just the current state. |
| Relevance to Nexus | Critical. The Blueprint (Section L.4A below) mandates this for the “Sales” and “Inventory” domains to enable “Offline Conflict Resolution,” “Complete Audit Trails,” and “Temporal Queries” (Time Travel). |
| Decision | Selected (Sales & Inventory Domains) |
| Key Technology | PostgreSQL 16 (Append-Only Event Table); Apache Kafka (Streaming Platform, v2.0) |

L.1.8 Online-First with Offline Fallback (Architecture Pattern)

| Attribute | Value |
| --- | --- |
| Description | POS terminals connect directly to the Central API when online (99.99% of the time). A thin SQLite fallback (2 tables: product cache + sales queue) ensures sales continue during rare, brief outages. |
| Relevance to Nexus | Critical. Sales must never be blocked. Online-first provides real-time data consistency while preserving offline resilience. |
| Decision | Selected (Client) — supersedes offline-first (ADR-048) |
| Key Technology | React Query (online), SQLite WASM via sql.js + OPFS (offline fallback) |

L.1.9 Integration Patterns (BRD v18.0 Module 6)

BRD v18.0 Section 6.2 mandates 5 integration patterns that are architecturally significant. These were evaluated during the expert panel review, and all five were selected.

| Pattern | Description | Decision | BRD Reference |
| --- | --- | --- | --- |
| Circuit Breaker | State machine (CLOSED → OPEN → HALF_OPEN) that prevents cascading failures from external APIs. Trips after 5 failures within 60 seconds; 30-second cooldown. | Selected | §6.2.4 |
| Transactional Outbox | Atomic write of business data + outbox event in the same database transaction. A relay process polls the outbox and publishes events, guaranteeing at-least-once delivery without distributed transactions. | Selected | §6.2.3, §6.7.3 |
| Provider Abstraction (Strategy) | IIntegrationProvider interface with 5 standard methods (Connect, Sync, Validate, Publish, HealthCheck) implemented per provider. Enables uniform handling regardless of provider protocol. | Selected | §6.2.1 |
| Anti-Corruption Layer (ACL) | Per-provider translation layer preventing external schema changes from leaking into core domain models. Each provider maps external DTOs to internal domain events. | Selected | §6.2.7 |
| Saga / Orchestration | Cross-platform inventory sync orchestrated as a saga with compensation actions. If a Shopify inventory update succeeds but Amazon fails, the saga compensates by rolling back the Shopify change. | Selected (cross-platform flows) | §6.7 |
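The saga's compensation behavior can be sketched as follows. This is an illustrative reduction to two steps, assuming injected provider operations (stand-ins for the real Shopify/Amazon adapters, not actual SDK calls); the production saga would record each step durably before executing it.

```typescript
// Two-step inventory-sync saga with a compensation action: if the second
// provider update fails, the first provider's change is rolled back.
interface ProviderOps {
  update: (sku: string, qty: number) => Promise<void>;
  rollback: (sku: string) => Promise<void>;
}

async function syncInventorySaga(
  shopify: ProviderOps,
  amazon: ProviderOps,
  sku: string,
  qty: number,
): Promise<"completed" | "compensated"> {
  await shopify.update(sku, qty); // step 1
  try {
    await amazon.update(sku, qty); // step 2
    return "completed";
  } catch {
    await shopify.rollback(sku); // compensation for step 1
    return "compensated";
  }
}
```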

Circuit Breaker State Machine:

┌──────────────────────────────────────────────────────────┐
│              CIRCUIT BREAKER STATE MACHINE                 │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  ┌──────────┐   5 failures   ┌──────────┐               │
│  │  CLOSED  │ ──────────────►│   OPEN   │               │
│  │ (Normal) │   in 60 sec    │ (Reject) │               │
│  └────┬─────┘                └────┬─────┘               │
│       ▲                           │                      │
│       │ success                   │ 30 sec cooldown      │
│       │                           ▼                      │
│       │                    ┌───────────┐                 │
│       └────────────────────│ HALF_OPEN │                 │
│                            │ (1 probe) │                 │
│          failure ──────────└───────────┘──► OPEN         │
│                                                           │
└──────────────────────────────────────────────────────────┘

L.2 Style Evaluation Matrix

Ratings: 1 (Poor) to 5 (Excellent)

Monolithic Styles

| Style | Availability | Interoperability | Data Consistency | Overall Fit |
| --- | --- | --- | --- | --- |
| Layered (Traditional) | ★★☆☆☆ | ★★☆☆☆ | ★★★★☆ | Backend only |
| Modular Monolith | ★★★☆☆ | ★★★☆☆ | ★★★★☆ | Selected (Core) |
| Microkernel (Plugin) | ★★★☆☆ | ★★★★★ | ★★★☆☆ | Selected (Client) |

v18.0 Note: Modular Monolith Interoperability reduced from 4★ to 3★. Module 6 requires 6 provider families with different scaling needs — a monolith cannot independently scale individual providers. Mitigated by Extractable Integration Gateway design.

Distributed Styles

| Style | Availability | Interoperability | Data Consistency | Overall Fit |
| --- | --- | --- | --- | --- |
| Service-Based | ★★★★☆ | ★★★★☆ | ★★★☆☆ | Eventual |
| Event-Driven (EDA) | ★★★★★ | ★★★★★ | ★★☆☆☆ | Selected (Comm Layer) |
| Space-Based | ★★★★★ | ★★★☆☆ | ★☆☆☆☆ | Too Complex |
| Microservices | ★★★★☆ | ★★★★☆ | ★☆☆☆☆ | Hard Sync |

v18.0 Note: Service-Based Interoperability raised from 3★ to 4★. Coarse-grained services can independently deploy integration providers.

Patterns

| Pattern | Availability | Interoperability | Data Consistency | Overall Fit |
| --- | --- | --- | --- | --- |
| Event Sourcing | ★★★☆☆ | ★★★★☆ | ★★★★★ | Selected (Audit/Sync) |
| Online-First + Offline Fallback | ★★★★★ | ★★★☆☆ | ★★★★☆ | Selected (Client) |
| Integration Patterns | ★★★★☆ | ★★★★★ | ★★★★☆ | Selected (Module 6) |

L.3 Key Trade-off Analysis

Trade-off 1: Availability vs. Consistency

| Aspect | Decision |
| --- | --- |
| Conflict | The online-first strategy requires real-time API access; brief outages create eventual-consistency windows. |
| Resolution | Accept eventual consistency during rare offline periods (minutes per year). Being online 99.99% of the time provides near-immediate consistency. |
| Mitigation | Flag-on-sync detects price discrepancies; safety buffers protect channel inventory; an idempotent sales-queue flush prevents duplicates. |
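The idempotent flush in the mitigation row can be sketched as follows. Each queued offline sale carries a client-generated UUID; the server-side idempotency check (modeled here as an in-memory Set standing in for the BRD's idempotency_records table) makes replaying the queue after a failed or interrupted flush harmless. The function and parameter names are illustrative.

```typescript
// Flush the offline sales queue; duplicates (already-processed saleIds) are skipped.
interface QueuedSale {
  saleId: string; // client-generated UUID, the idempotency key
  total: number;
}

function flushQueue(
  queue: QueuedSale[],
  processed: Set<string>,              // stand-in for idempotency_records lookups
  submit: (sale: QueuedSale) => void,  // stand-in for the Central API call
): { submitted: number; skipped: number } {
  let submitted = 0;
  let skipped = 0;
  for (const sale of queue) {
    if (processed.has(sale.saleId)) {
      skipped++; // replay of an already-recorded sale: no duplicate is created
      continue;
    }
    submit(sale);
    processed.add(sale.saleId); // record only after a successful submit
    submitted++;
  }
  return { submitted, skipped };
}
```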

Trade-off 2: Complexity (Event Sourcing + PostgreSQL Events)

| Aspect | Decision |
| --- | --- |
| Conflict | Event Sourcing adds complexity compared to standard CRUD. The original design included Apache Kafka for streaming, adding operational burden (ZooKeeper/KRaft). |
| Resolution | Event Sourcing retained for the Sales and Inventory domains. Kafka deferred to v2.0; v1.0 uses PostgreSQL event tables with LISTEN/NOTIFY for event notification and the Transactional Outbox for guaranteed delivery. |
| Benefit | Preserves event-replay capability and the audit trail while eliminating Kafka operational complexity. PostgreSQL event tables match the BRD’s existing idempotency_records and integration_dead_letters table designs. |

Trade-off 3: Deployment Simplicity (Modular Monolith)

| Aspect | Decision |
| --- | --- |
| Conflict | Microservices offer independent scaling but add operational overhead. |
| Resolution | Choose a Modular Monolith (“Central API”) over Microservices, with Row-Level Isolation (PostgreSQL RLS) for multi-tenancy. |
| Benefit | Reduces deployment complexity (one container vs. dozens). Module 6 is designed as an Extractable Integration Gateway: it can be split into a separate service when scale demands it, without changing core POS modules. |

L.4 Selected Architecture Strategy

Primary Declaration

| Attribute | Selection |
| --- | --- |
| Primary Style | Event-Driven Modular Monolith (Central API) |
| Key Patterns | Event Sourcing (scoped), CQRS (scoped), Online-First + Offline Fallback, Row-Level Isolation with RLS |
| Event Infrastructure | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0) |
| Integration Strategy | Extractable Integration Gateway (Module 6) |
| Credential Management | HashiCorp Vault |

Architecture Layer Mapping

| Layer | Style/Pattern | Technology |
| --- | --- | --- |
| Nexus POS | Microkernel (Plugin) + Online-First with Offline Fallback | React/TypeScript (Vite), React Query, SQLite WASM (fallback) |
| Central API | Modular Monolith | Node.js + Express/Fastify (TypeScript) |
| Communication | Event-Driven | PostgreSQL Events + LISTEN/NOTIFY (v1.0) |
| Real-time | WebSocket Push | Socket.io |
| Data Persistence | Event Sourcing (scoped) + CQRS (scoped) | PostgreSQL 16 |
| Multi-Tenancy | Row-Level Isolation with RLS | PostgreSQL RLS + tenant_id |
| Integration | Extractable Integration Gateway | Module 6, IIntegrationProvider |
| Secrets | Credential Vault | HashiCorp Vault (Docker) |
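For the multi-tenancy row, the RLS mechanism works roughly as follows. This is a minimal sketch assuming illustrative table and setting names (the `sales` table and the `app.tenant_id` custom GUC are assumptions, not BRD identifiers): a policy restricts every query to rows whose `tenant_id` matches a per-connection setting that the Central API establishes before running tenant queries.

```sql
-- Enable row-level security and restrict rows to the current tenant.
ALTER TABLE sales ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON sales
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- The API sets the tenant per transaction before issuing queries, e.g.:
--   BEGIN;
--   SET LOCAL app.tenant_id = '<tenant uuid>';
--   SELECT ... FROM sales;   -- only this tenant's rows are visible
--   COMMIT;
```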

L.4A CQRS & Event Sourcing Scope

The expert panel identified that CQRS and Event Sourcing scope was undefined. This section clarifies which modules use which patterns, per user decision.

| Module | CQRS | Event Sourcing | Pattern Description |
| --- | --- | --- | --- |
| Module 1: Sales | Full CQRS | Full Event Sourcing | Separate read/write models. Events: SaleCreated, PaymentProcessed, ReturnInitiated, VoidExecuted. Event replay for audit and conflict resolution. |
| Module 2: Customers | Standard CRUD | None | Direct queries against current-state tables. Simple read/write through the repository pattern. |
| Module 3: Catalog | Standard CRUD | None | Read-heavy workload optimized with caching (Redis). Product data served from current-state tables. |
| Module 4: Inventory | Materialized read model | ES for audit trail | Current inventory levels maintained in a materialized view. Event Sourcing captures all stock movements for the audit trail and conflict resolution (offline sync). |
| Module 5: Setup | Standard CRUD | None | Configuration data accessed directly. Changes logged but not event-sourced. |
| Module 6: Integrations | Standard CRUD | Audit-trail-only ES | Sync logs stored as an event stream for debugging and compliance. No event replay for operational queries; current sync state maintained in tables. |
| Section 7: State Machines | N/A | Events drive transitions | 16 state machines powered by domain events. State transitions recorded as events. Database-driven implementation (see below). |

State Machine Implementation: Database-driven pattern using a state column on the entity table plus a state_transitions reference table. This approach provides:

  • State column: Each stateful entity (e.g., orders.status, returns.status) stores current state directly
  • Transition table: state_transitions(from_state, to_state, event, guard_condition, action) defines allowed transitions per entity type
  • Validation: Application layer validates transitions against the table before applying (preventing invalid state changes)
  • Audit: Every transition logged with timestamp, actor, and triggering event
  • Benefits: Declarative (non-code) transition rules, easy to modify without deployment, queryable transition history

Design Note: State machines are NOT implemented via Event Sourcing replay. The state column holds current truth; ES events record the history. This separation keeps state lookups O(1) while maintaining full audit trail.
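The validation bullet above amounts to a single membership check against the transition table. The sketch below models `state_transitions` rows as an in-memory array; the two sample rows and the function name are illustrative, not the BRD's full transition set.

```typescript
// One row of the state_transitions reference table (guard/action omitted for brevity).
interface Transition {
  fromState: string;
  toState: string;
  event: string;
}

// Illustrative subset of the Sales transition table.
const saleTransitions: Transition[] = [
  { fromState: "DRAFT", toState: "COMPLETED", event: "PaymentProcessed" },
  { fromState: "COMPLETED", toState: "VOIDED", event: "VoidExecuted" },
];

// Application-layer validation: reject any transition not declared in the table.
function canTransition(
  table: Transition[],
  from: string,
  to: string,
  event: string,
): boolean {
  return table.some(
    (t) => t.fromState === from && t.toState === to && t.event === event,
  );
}
```

In production the same check would be a single indexed lookup against the `state_transitions` table, followed by an UPDATE of the entity's `state` column and an audit-log insert.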

Event Sourcing vs. Audit Log Relationship: Event Sourcing and the audit log serve separate concerns and are complementary:

  • Event Sourcing (Modules 1, 4, 6): Domain events that represent business state changes. Used for: event replay (Sales), conflict resolution (Inventory), sync debugging (Integrations). Stored in event store tables.
  • Audit Log: Cross-cutting compliance record of who did what and when. Captures: user identity, IP address, action performed, timestamp, before/after values. Stored in dedicated audit_log table.
  • Relationship: ES events feed INTO the audit log (via event handlers) but the audit log also captures non-ES actions (e.g., login attempts, configuration changes, report generation). The audit log is the compliance artifact; ES is the domain modeling tool.

Event Sourcing Implementation Pattern:

┌──────────────────────────────────────────────────────────┐
│           EVENT SOURCING PATTERN (Sales Module)           │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  Command ──► Aggregate ──► Domain Events ──► Event Store  │
│                                      │                    │
│                                      ▼                    │
│                              Event Handlers               │
│                              ┌─────────────┐              │
│                              │ Read Model  │ (CQRS)       │
│                              │ Projections │              │
│                              └─────────────┘              │
│                              ┌─────────────┐              │
│                              │ Audit Log   │              │
│                              │ (Immutable) │              │
│                              └─────────────┘              │
│                              ┌─────────────┐              │
│                              │ Integration │              │
│                              │ Outbox      │              │
│                              └─────────────┘              │
│                                                           │
│  Queries ──► Read Model (Materialized View) ──► Response  │
│                                                           │
└──────────────────────────────────────────────────────────┘

L.4A.1 Event Store Implementation

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Event Store Schema (PostgreSQL)

The append-only event store is the source of truth:

-- Event Store Schema

CREATE TABLE events (
    id              BIGSERIAL PRIMARY KEY,
    event_id        UUID UNIQUE NOT NULL DEFAULT gen_random_uuid(),
    aggregate_type  VARCHAR(100) NOT NULL,     -- 'Sale', 'Inventory', 'Customer'
    aggregate_id    UUID NOT NULL,             -- The entity this event belongs to
    event_type      VARCHAR(100) NOT NULL,     -- 'SaleCreated', 'ItemAdded'
    event_data      JSONB NOT NULL,            -- Full event payload
    metadata        JSONB NOT NULL DEFAULT '{}', -- Correlation, causation IDs
    version         INTEGER NOT NULL,          -- Aggregate version (for optimistic concurrency)
    created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    created_by      UUID,                      -- Employee who caused the event

    -- Optimistic concurrency: aggregate_id + version must be unique
    UNIQUE (aggregate_type, aggregate_id, version)
);

-- Indexes for common queries
CREATE INDEX idx_events_aggregate ON events (aggregate_type, aggregate_id);
CREATE INDEX idx_events_type ON events (event_type);
CREATE INDEX idx_events_created_at ON events USING BRIN (created_at);
CREATE INDEX idx_events_metadata ON events USING GIN (metadata);

-- Snapshots table (for performance on long event streams)
CREATE TABLE snapshots (
    id              BIGSERIAL PRIMARY KEY,
    aggregate_type  VARCHAR(100) NOT NULL,
    aggregate_id    UUID NOT NULL,
    version         INTEGER NOT NULL,
    state           JSONB NOT NULL,            -- Serialized aggregate state
    created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    UNIQUE (aggregate_type, aggregate_id)
);

-- Outbox table (for reliable event publishing)
CREATE TABLE event_outbox (
    id              BIGSERIAL PRIMARY KEY,
    event_id        UUID NOT NULL REFERENCES events(event_id),
    destination     VARCHAR(100) NOT NULL,     -- 'socketio', 'webhook', 'sync'
    status          VARCHAR(20) DEFAULT 'pending',
    attempts        INTEGER DEFAULT 0,
    last_error      TEXT,
    created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    processed_at    TIMESTAMPTZ
);
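The write path against this schema pairs the event insert with its outbox row in one transaction (the Transactional Outbox from L.1.9). The sketch below is illustrative: the `query` parameter stands in for a node-postgres client so the flow can be shown without a live database, and the SQL matches the `events` and `event_outbox` tables above.

```typescript
// Append a domain event and its outbox row atomically: either both rows
// exist after COMMIT, or neither does.
type Query = (sql: string, params?: unknown[]) => Promise<void>;

interface NewEvent {
  eventId: string;
  aggregateType: string;
  aggregateId: string;
  eventType: string;
  data: unknown;
  version: number; // UNIQUE (aggregate_type, aggregate_id, version) rejects stale writers
}

async function appendWithOutbox(
  query: Query,
  e: NewEvent,
  destination: string, // 'socketio', 'webhook', or 'sync'
): Promise<void> {
  await query("BEGIN");
  try {
    await query(
      `INSERT INTO events (event_id, aggregate_type, aggregate_id, event_type, event_data, version)
       VALUES ($1, $2, $3, $4, $5, $6)`,
      [e.eventId, e.aggregateType, e.aggregateId, e.eventType, JSON.stringify(e.data), e.version],
    );
    await query(
      "INSERT INTO event_outbox (event_id, destination) VALUES ($1, $2)",
      [e.eventId, destination],
    );
    await query("COMMIT");
  } catch (err) {
    await query("ROLLBACK"); // optimistic-concurrency conflicts land here
    throw err;
  }
}
```

A separate relay process then polls `event_outbox` for `status = 'pending'` rows and publishes them, giving at-least-once delivery.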

Event Sourcing Architecture Diagram

Event Sourcing Architecture
===========================

+-------------------------------------------------------------------------+
|                      NEXUS POS (React Web App)                           |
|                                                                          |
|   +------------------+    +-------------------+    +-----------------+   |
|   |  Command Handler |    |   Event Store     |    |   Projector     |   |
|   |                  |    | (SQLite WASM /    |    |   (Read Model)  |   |
|   |                  |    |  sql.js + OPFS)   |    |                 |   |
|   |  CreateSale      |--->|                   |--->|                 |   |
|   |  VoidSale        |    | SaleCreated       |    | sale_summaries  |   |
|   |  AddPayment      |    | ItemAdded         |    | inventory_view  |   |
|   +------------------+    | PaymentReceived   |    +-----------------+   |
|                           +-------------------+                          |
|                                    |                                     |
+-------------------------------------------------------------------------+
                                     | Sync
                                     v
+-------------------------------------------------------------------------+
|                            CENTRAL API                                   |
|                                                                          |
|   +------------------+    +-------------------+    +-----------------+   |
|   |  Command Handler |    |   Event Store     |    |   Projector     |   |
|   |  (Validates)     |    |   (PostgreSQL)    |    |   (Read Model)  |   |
|   |                  |<---|                   |--->|                 |   |
|   |  Deduplication   |    | All tenant events |    | sales           |   |
|   |  Conflict Check  |    | Append-only       |    | inventory_items |   |
|   +------------------+    | Immutable         |    | customers       |   |
|                           +-------------------+    +-----------------+   |
+-------------------------------------------------------------------------+

CQRS Pattern

CQRS Pattern
============

                          +----------------------+
                          |     User Action      |
                          +----------+-----------+
                                     |
              +----------------------+----------------------+
              |                                             |
              v                                             v
    +-------------------+                        +-------------------+
    |     COMMAND       |                        |      QUERY        |
    |     (Write)       |                        |      (Read)       |
    +-------------------+                        +-------------------+
              |                                             |
              v                                             v
    +-------------------+                        +-------------------+
    | Command Handler   |                        | Query Handler     |
    | - Validate        |                        | - No validation   |
    | - Business rules  |                        | - Fast lookup     |
    | - Generate events |                        | - Denormalized    |
    +-------------------+                        +-------------------+
              |                                             ^
              v                                             |
    +-------------------+                        +-------------------+
    |   Event Store     |----------------------->|   Read Models     |
    |   (Append-only)   |     Projections       |   (Optimized)     |
    +-------------------+                        +-------------------+

Write Side (Commands)

// Commands - Express intent

interface CreateSaleCommand {
  saleId: string;        // UUID
  locationId: string;    // UUID
  employeeId: string;    // UUID
  customerId?: string;   // UUID
  lineItems: SaleLineItemDto[];
}

interface VoidSaleCommand {
  saleId: string;        // UUID
  employeeId: string;    // UUID
  reason: string;
}

interface AddPaymentCommand {
  saleId: string;        // UUID
  paymentMethod: string;
  amount: number;        // Decimal as number (use Prisma.Decimal for DB)
  reference?: string;
}

Read Side (Queries)

// Queries - Request data
interface GetSaleByIdQuery { saleId: string; }
interface GetDailySalesQuery { locationId: string; date: Date; }
interface GetInventoryLevelQuery { sku: string; locationId: string; }

// Read models - Optimized for queries
interface SaleSummaryView {
  id: string;
  saleNumber: string;
  customerName: string;   // Denormalized
  employeeName: string;   // Denormalized
  total: number;
  status: string;
  createdAt: Date;
}
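A command handler ties the two sides together: it validates the command, applies business rules, and generates events rather than mutating state directly. The sketch below re-declares the command types for self-containment; the event shape and validation rules are illustrative simplifications, not the BRD's full rule set.

```typescript
// Write-side handler: CreateSaleCommand -> SaleCreated domain event.
interface SaleLineItemDto {
  sku: string;
  quantity: number;
  unitPrice: number;
}

interface CreateSaleCommand {
  saleId: string;
  locationId: string;
  employeeId: string;
  customerId?: string;
  lineItems: SaleLineItemDto[];
}

interface SaleCreatedEvent {
  eventType: "SaleCreated";
  aggregateId: string;
  version: number;
  data: {
    locationId: string;
    employeeId: string;
    customerId: string | null;
    lineItems: SaleLineItemDto[];
  };
}

function handleCreateSale(cmd: CreateSaleCommand): SaleCreatedEvent {
  // Business-rule validation happens here, never on the query side.
  if (cmd.lineItems.length === 0) throw new Error("A sale requires at least one line item");
  if (cmd.lineItems.some((li) => li.quantity <= 0)) throw new Error("Quantities must be positive");
  return {
    eventType: "SaleCreated",
    aggregateId: cmd.saleId,
    version: 1, // first event in a new Sale aggregate's stream
    data: {
      locationId: cmd.locationId,
      employeeId: cmd.employeeId,
      customerId: cmd.customerId ?? null,
      lineItems: cmd.lineItems,
    },
  };
}
```

The returned event is then appended to the event store, and projections update the denormalized read models (e.g., SaleSummaryView) asynchronously.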

L.4A.2 Event Streaming (Apache Kafka) — v2.0 Future

v2.0 FUTURE: This entire section describes the Kafka-based event streaming architecture planned for v2.0. For v1.0, the platform uses PostgreSQL event tables + LISTEN/NOTIFY as the event infrastructure (see ADR in L.10A.4 and Ch 02 ADR-001). The Kafka architecture below is preserved as the migration target when the platform outgrows PostgreSQL-based events.

Note: Code samples in sections L.4A.2–L.4A.3 retain C# syntax from the pre-v6.1.0 architecture. These will be converted to TypeScript (using kafkajs) when the Kafka v2.0 migration is planned. The patterns and architecture remain valid regardless of implementation language.

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Technology Selection

| Attribute | Selection |
| --- | --- |
| Platform | Apache Kafka |
| Version | 3.6+ (with KRaft mode) |
| Primary Rationale | Replayability |
Why Kafka over alternatives?

| Alternative | Why Not Selected |
| --- | --- |
| RabbitMQ | No native replay; messages are deleted after consumption |
| Redis Streams | Less durable; not designed for long-term event storage |
| AWS SQS | No replay capability; messages expire |
| PostgreSQL LISTEN/NOTIFY | Notifications are neither persisted nor replayable — sufficient for v1.0 event notification, but not for v2.0 streaming workloads |

Kafka Replayability

+------------------------------------------------------------------+
|                    KAFKA REPLAYABILITY                            |
+------------------------------------------------------------------+
|                                                                   |
|  Event Log (Immutable, Ordered):                                  |
|                                                                   |
|  Partition 0:  [E1] -> [E2] -> [E3] -> [E4] -> [E5] -> ...       |
|                         ^              ^                          |
|                         |              |                          |
|  Consumer Group A: ─────┘              |  (Processed up to E2)   |
|  Consumer Group B: ────────────────────┘  (Processed up to E4)   |
|                                                                   |
|  NEW Consumer Group C can start from E1 and replay ALL events!   |
|                                                                   |
+------------------------------------------------------------------+

Kafka Topics Architecture

POS Kafka Topics
================

┌────────────────────────────────────────────────────────────────┐
│                     TOPIC STRUCTURE                             │
├────────────────────────────────────────────────────────────────┤
│                                                                 │
│  pos.events.sales         - All sale-related events            │
│  ├── Partition 0 (Location A)                                  │
│  ├── Partition 1 (Location B)                                  │
│  └── Partition N (Location N)                                  │
│                                                                 │
│  pos.events.inventory     - Inventory movements                │
│  ├── Partition 0-N (By SKU hash)                               │
│                                                                 │
│  pos.events.customers     - Customer activity                  │
│  ├── Partition 0-N (By customer hash)                          │
│                                                                 │
│  pos.sync.outbound        - Events to sync to external systems │
│  ├── Shopify, Amazon, etc.                                     │
│                                                                 │
│  pos.sync.inbound         - Events from external systems       │
│  ├── Online orders, inventory updates                          │
│                                                                 │
└────────────────────────────────────────────────────────────────┘

Kafka Configuration (Docker Compose)

# docker-compose.kafka.yml

services:
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_RETENTION_HOURS: 168  # 7 days
      KAFKA_LOG_RETENTION_BYTES: 10737418240  # 10GB per partition
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
    ports:
      - "9092:9092"
    volumes:
      - kafka_data:/var/lib/kafka/data

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      KAFKA_CLUSTERS_0_NAME: pos-cluster
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    ports:
      - "8090:8080"

Event Publishing Pattern

Note: These C# examples illustrate v2.0 Kafka event-sourcing patterns. TypeScript equivalents (using kafkajs) will replace these when Kafka is adopted.

// KafkaEventPublisher.cs

public class KafkaEventPublisher : IEventPublisher
{
    private readonly IProducer<string, string> _producer;
    private readonly ILogger<KafkaEventPublisher> _logger;

    public async Task PublishAsync<T>(T @event, CancellationToken ct = default)
        where T : IDomainEvent
    {
        var topic = GetTopicForEvent(@event);
        var key = GetPartitionKey(@event);  // e.g., LocationId for ordering

        var message = new Message<string, string>
        {
            Key = key,
            Value = JsonSerializer.Serialize(@event),
            Headers = new Headers
            {
                { "event-type", Encoding.UTF8.GetBytes(@event.GetType().Name) },
                { "correlation-id", Encoding.UTF8.GetBytes(@event.CorrelationId.ToString()) },
                { "tenant-id", Encoding.UTF8.GetBytes(@event.TenantId.ToString()) }
            }
        };

        var result = await _producer.ProduceAsync(topic, message, ct);

        _logger.LogDebug(
            "Published {EventType} to {Topic}:{Partition}@{Offset}",
            @event.GetType().Name,
            result.Topic,
            result.Partition.Value,
            result.Offset.Value
        );
    }

    private string GetTopicForEvent(IDomainEvent @event) => @event switch
    {
        SaleCreated or SaleCompleted or SaleVoided => "pos.events.sales",
        InventoryReceived or InventorySold => "pos.events.inventory",
        CustomerCreated or LoyaltyPointsEarned => "pos.events.customers",
        _ => "pos.events.general"
    };
}

Schema Registry & Event Versioning

Overview

As the POS platform evolves, event schemas will change. Schema Registry provides:

  • Schema Validation: Prevent incompatible events from being published
  • Schema Evolution: Safe migrations without breaking consumers
  • Schema History: Version tracking for all event types

| Attribute | Selection |
| --- | --- |
| Tool | Confluent Schema Registry |
| Format | Avro (Primary) or Protobuf |
| Strategy | BACKWARD compatibility |

Schema Registry Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    SCHEMA REGISTRY FLOW                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌─────────────┐     ┌──────────────────┐     ┌─────────────┐   │
│  │  Producer   │     │  Schema Registry │     │   Consumer  │   │
│  │  (POS API)  │     │   (Confluent)    │     │ (Analytics) │   │
│  └──────┬──────┘     └────────┬─────────┘     └──────┬──────┘   │
│         │                     │                      │          │
│    1. Register/Get Schema     │                      │          │
│         │ ─────────────────>  │                      │          │
│         │                     │                      │          │
│    2. Schema ID returned      │                      │          │
│         │ <─────────────────  │                      │          │
│         │                     │                      │          │
│    3. Publish event with      │                      │          │
│       schema ID prefix        │                      │          │
│         │ ─────────────────────────────────────────> │          │
│         │                     │                      │          │
│                               │  4. Consumer fetches │          │
│                               │     schema by ID     │          │
│                               │ <─────────────────── │          │
│                               │                      │          │
│                               │  5. Deserialize with │          │
│                               │     correct schema   │          │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
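Step 3 of the flow above ("publish event with schema ID prefix") refers to Confluent's wire format: a single `0x00` magic byte, the 4-byte big-endian schema ID, then the Avro-encoded payload. The serializer libraries handle this framing internally; the following is only a minimal TypeScript sketch (helper names are assumptions, not a library API) to make the framing concrete:

```typescript
// Confluent wire format: [0x00 magic byte][4-byte big-endian schema ID][Avro payload].
// Minimal illustrative sketch; Avro serdes normally do this internally.
export function encodeConfluentEnvelope(schemaId: number, avroPayload: Buffer): Buffer {
  const header = Buffer.alloc(5);
  header.writeUInt8(0, 0);          // magic byte (step 3)
  header.writeInt32BE(schemaId, 1); // registry-assigned schema ID
  return Buffer.concat([header, avroPayload]);
}

export function decodeSchemaId(message: Buffer): number {
  if (message.length < 5 || message.readUInt8(0) !== 0) {
    throw new Error('Not a Confluent-framed message');
  }
  return message.readInt32BE(1); // step 4: consumer fetches this ID from the registry
}
```

This is why consumers in step 5 can always find the exact writer schema: the ID travels with every message.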

Avro Schema Definition (SaleCreated)

// schemas/sale-created.avsc
{
  "type": "record",
  "name": "SaleCreated",
  "namespace": "io.posplatform.events.sales",
  "doc": "Event fired when a new sale is initiated",
  "fields": [
    {
      "name": "eventId",
      "type": { "type": "string", "logicalType": "uuid" },
      "doc": "Unique event identifier"
    },
    {
      "name": "saleId",
      "type": { "type": "string", "logicalType": "uuid" },
      "doc": "Sale aggregate identifier"
    },
    {
      "name": "tenantId",
      "type": { "type": "string", "logicalType": "uuid" }
    },
    {
      "name": "locationId",
      "type": { "type": "string", "logicalType": "uuid" }
    },
    {
      "name": "employeeId",
      "type": { "type": "string", "logicalType": "uuid" }
    },
    {
      "name": "customerId",
      "type": ["null", { "type": "string", "logicalType": "uuid" }],
      "default": null,
      "doc": "Optional customer for loyalty"
    },
    {
      "name": "saleNumber",
      "type": "string"
    },
    {
      "name": "createdAt",
      "type": { "type": "long", "logicalType": "timestamp-millis" }
    },
    {
      "name": "metadata",
      "type": {
        "type": "map",
        "values": "string"
      },
      "default": {}
    }
  ]
}

Schema Evolution Rules (BACKWARD Compatibility)

| Change | Allowed? | Notes |
|--------|----------|-------|
| Add field with default | Yes | New consumers can read old messages |
| Remove field with default | Yes | Old consumers ignore the missing field |
| Add field without default | No | Old messages fail validation |
| Remove required field | No | New messages fail for old consumers |
| Change field type | No | Type mismatch errors |
| Rename field | No | Use aliases instead |

Schema Evolution Example (v2)

// schemas/sale-created-v2.avsc (BACKWARD COMPATIBLE)
{
  "type": "record",
  "name": "SaleCreated",
  "namespace": "io.posplatform.events.sales",
  "fields": [
    // ... existing fields ...

    // NEW FIELD - Added with default value (BACKWARD COMPATIBLE)
    {
      "name": "channel",
      "type": "string",
      "default": "in_store",
      "doc": "Sales channel: in_store, online, mobile"
    },

    // NEW OPTIONAL FIELD (BACKWARD COMPATIBLE)
    {
      "name": "referralCode",
      "type": ["null", "string"],
      "default": null
    }
  ]
}
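The compatibility rules above can also be checked mechanically. Below is a minimal sketch of a hypothetical helper (not the registry's actual algorithm) covering two of the BACKWARD rules: fields added in the new schema must carry defaults, and fields present in both schemas must keep their type. Authoritative checks should still go through the registry's `/compatibility` endpoint, as the CI workflow in this chapter does.

```typescript
interface AvroField { name: string; type: unknown; default?: unknown }
interface AvroRecord { name: string; fields: AvroField[] }

// Hypothetical helper implementing two BACKWARD rules from the table:
// "Add field with default" -> allowed; "Add field without default" and
// "Change field type" -> rejected. Renames/aliases are not handled.
export function isBackwardCompatible(oldSchema: AvroRecord, newSchema: AvroRecord): boolean {
  const oldFields = new Map(oldSchema.fields.map(f => [f.name, f]));
  return newSchema.fields.every(f => {
    const prev = oldFields.get(f.name);
    if (!prev) return 'default' in f; // added field must have a default
    return JSON.stringify(prev.type) === JSON.stringify(f.type); // no type change
  });
}
```

For example, the v2 schema above (adding `channel` with default `"in_store"`) passes this check against v1, while the same field without a default would fail.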

Producer Configuration with Schema Registry

Note: These C# examples illustrate v2.0 Kafka event-sourcing patterns. TypeScript equivalents (using kafkajs) will replace these when Kafka is adopted.

// Infrastructure/Messaging/SchemaRegistryProducer.cs

using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;
using Microsoft.Extensions.Logging;

public class SchemaRegistryProducer<TKey, TValue> : IEventPublisher
    where TValue : ISpecificRecord
{
    private readonly IProducer<TKey, TValue> _producer;
    private readonly ILogger _logger;

    public SchemaRegistryProducer(
        string bootstrapServers,
        string schemaRegistryUrl,
        ILogger<SchemaRegistryProducer<TKey, TValue>> logger)
    {
        _logger = logger;

        var schemaRegistryConfig = new SchemaRegistryConfig
        {
            Url = schemaRegistryUrl
        };

        var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);

        var producerConfig = new ProducerConfig
        {
            BootstrapServers = bootstrapServers,
            Acks = Acks.All,  // Wait for all replicas
            EnableIdempotence = true
        };

        _producer = new ProducerBuilder<TKey, TValue>(producerConfig)
            .SetKeySerializer(new AvroSerializer<TKey>(schemaRegistry))
            .SetValueSerializer(new AvroSerializer<TValue>(schemaRegistry, new AvroSerializerConfig
            {
                // Fail if schema is not compatible
                AutoRegisterSchemas = false,
                SubjectNameStrategy = SubjectNameStrategy.TopicRecord
            }))
            .Build();
    }

    public async Task PublishAsync(
        string topic,
        TKey key,
        TValue value,
        CancellationToken ct = default)
    {
        var result = await _producer.ProduceAsync(topic, new Message<TKey, TValue>
        {
            Key = key,
            Value = value
        }, ct);

        _logger.LogDebug(
            "Published {EventType} to {Topic} (partition {Partition}, offset {Offset})",
            typeof(TValue).Name,
            result.Topic,
            result.Partition.Value,
            result.Offset.Value
        );
    }
}

CI/CD Schema Validation

# .github/workflows/schema-validation.yml

name: Schema Validation

on:
  pull_request:
    paths:
      - 'schemas/**'
  push:
    branches:
      - main
    paths:
      - 'schemas/**'

jobs:
  validate-schemas:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Start Schema Registry
        run: |
          docker compose -f docker/docker-compose.kafka.yml up -d schema-registry
          sleep 10

      - name: Test Schema Compatibility
        run: |
          for schema in schemas/*.avsc; do
            subject=$(basename "$schema" .avsc)-value
            echo "Testing compatibility for $subject"

            # Check if schema is BACKWARD compatible with existing
            curl -X POST \
              -H "Content-Type: application/vnd.schemaregistry.v1+json" \
              -d @"$schema" \
              "http://localhost:8081/compatibility/subjects/$subject/versions/latest" \
              | jq -e '.is_compatible == true' || exit 1
          done

      - name: Register Schemas (on merge to main)
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: |
          for schema in schemas/*.avsc; do
            subject=$(basename "$schema" .avsc)-value
            curl -X POST \
              -H "Content-Type: application/vnd.schemaregistry.v1+json" \
              -d "{\"schema\": $(cat "$schema" | jq -Rs .)}" \
              "http://localhost:8081/subjects/$subject/versions"
          done

Docker Compose with Schema Registry

# docker/docker-compose.kafka.yml (updated)

services:
  schema-registry:
    image: confluentinc/cp-schema-registry:7.5.0
    container_name: pos-schema-registry
    depends_on:
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      # Enforce BACKWARD compatibility by default
      SCHEMA_REGISTRY_SCHEMA_COMPATIBILITY_LEVEL: BACKWARD

L.4A.3 Dead Letter Queue Pattern

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Overview

When event processing fails (malformed data, business rule violations, transient errors), messages go to a Dead Letter Queue for investigation and replay.

| Attribute | Selection |
|-----------|-----------|
| Purpose | Capture failed messages without blocking the main flow |
| Retention | 30 days |
| Monitoring | Alert when DLQ depth > threshold |

DLQ Architecture

┌─────────────────────────────────────────────────────────────────┐
│                       DLQ PATTERN                                │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌───────────────┐    ┌───────────────┐    ┌───────────────┐    │
│  │ pos.events.   │    │    Consumer   │    │   Handler     │    │
│  │    sales      │───>│    Group      │───>│   Logic       │    │
│  │  (Main Topic) │    │               │    │               │    │
│  └───────────────┘    └───────────────┘    └───────┬───────┘    │
│                                                     │            │
│                                             ┌───────┴───────┐    │
│                                             │   Success?    │    │
│                                             └───────┬───────┘    │
│                                        Yes ┌───────┴───────┐ No │
│                                            │               │     │
│                                            ▼               ▼     │
│                                      ┌──────────┐  ┌───────────┐ │
│                                      │  Commit  │  │  Retry    │ │
│                                      │  Offset  │  │  Logic    │ │
│                                      └──────────┘  └─────┬─────┘ │
│                                                          │       │
│                                                   ┌──────┴─────┐ │
│                                                   │ Max Retries│ │
│                                                   │  Exceeded? │ │
│                                                   └──────┬─────┘ │
│                                              No ┌────────┴──────┐│
│                                                 │               ││
│                                                 ▼               ▼│
│                                           ┌──────────┐  ┌────────┴──┐
│                                           │  Retry   │  │    DLQ    │
│                                           │  Topic   │  │   Topic   │
│                                           └──────────┘  └───────────┘
│                                                         pos.events.
│                                                         sales.dlq
└─────────────────────────────────────────────────────────────────┘

DLQ Consumer Implementation

Note: These C# examples illustrate v2.0 Kafka event-sourcing patterns. TypeScript equivalents (using kafkajs) will replace these when Kafka is adopted.

// Infrastructure/Messaging/DlqAwareConsumer.cs

public class DlqAwareConsumer<TKey, TValue>
{
    private readonly IConsumer<TKey, TValue> _consumer;
    private readonly IProducer<string, DeadLetterMessage> _dlqProducer;
    private readonly ILogger _logger;
    private readonly string _consumerGroup; // set in constructor (not shown)

    private const int MAX_RETRIES = 3;
    private readonly TimeSpan[] _retryDelays = new[]
    {
        TimeSpan.FromSeconds(1),
        TimeSpan.FromSeconds(5),
        TimeSpan.FromSeconds(30)
    };

    public async Task ConsumeWithDlqAsync(
        string topic,
        Func<ConsumeResult<TKey, TValue>, Task> handler,
        CancellationToken ct)
    {
        _consumer.Subscribe(topic);

        while (!ct.IsCancellationRequested)
        {
            var result = _consumer.Consume(ct);
            var retryCount = GetRetryCount(result.Message.Headers);

            try
            {
                await handler(result);
                _consumer.Commit(result);
            }
            catch (TransientException ex) when (retryCount < MAX_RETRIES)
            {
                _logger.LogWarning(
                    ex,
                    "Transient error processing message. Retry {Retry}/{Max}",
                    retryCount + 1,
                    MAX_RETRIES
                );

                await Task.Delay(_retryDelays[retryCount], ct);
                await PublishToRetryTopicAsync(result, retryCount + 1);
                _consumer.Commit(result);
            }
            catch (Exception ex)
            {
                _logger.LogError(
                    ex,
                    "Failed to process message after {Retries} retries. Sending to DLQ.",
                    retryCount
                );

                await PublishToDlqAsync(result, ex, retryCount);
                _consumer.Commit(result);
            }
        }
    }

    private async Task PublishToDlqAsync(
        ConsumeResult<TKey, TValue> result,
        Exception exception,
        int retryCount)
    {
        var dlqMessage = new DeadLetterMessage
        {
            OriginalTopic = result.Topic,
            OriginalPartition = result.Partition.Value,
            OriginalOffset = result.Offset.Value,
            Key = result.Message.Key?.ToString(),
            Value = SerializeValue(result.Message.Value),
            Headers = ExtractHeaders(result.Message.Headers),
            ErrorType = exception.GetType().FullName,
            ErrorMessage = exception.Message,
            StackTrace = exception.StackTrace,
            RetryCount = retryCount,
            FirstFailedAt = GetFirstFailedAt(result.Message.Headers),
            LastFailedAt = DateTime.UtcNow,
            ConsumerGroup = _consumerGroup,
            ConsumerInstance = Environment.MachineName
        };

        var dlqTopic = $"{result.Topic}.dlq";
        await _dlqProducer.ProduceAsync(dlqTopic, new Message<string, DeadLetterMessage>
        {
            Key = result.Message.Key?.ToString(),
            Value = dlqMessage
        });
    }
}
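The retry ladder in `ConsumeWithDlqAsync` (1 s, 5 s, 30 s, then DLQ) generalizes to a small helper. A sketch of how this could look in the anticipated TypeScript port (function name and shape are assumptions): retries run once per delay entry, and if all attempts fail the last error propagates so the caller can route the message to the DLQ.

```typescript
// Sketch of the consumer's retry ladder as a reusable helper (assumed names).
// One initial attempt plus one retry per entry in `delaysMs`; the last error
// propagates so the caller can publish the message to the DLQ topic.
export async function withRetries<T>(
  fn: (attempt: number) => Promise<T>,
  delaysMs: number[] = [1_000, 5_000, 30_000],
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= delaysMs.length; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err;
      if (attempt < delaysMs.length) {
        await new Promise(resolve => setTimeout(resolve, delaysMs[attempt]));
      }
    }
  }
  throw lastError;
}
```

Factoring the ladder out keeps the consumer loop focused on offset management and DLQ routing.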

DLQ Message Structure

Note: These C# examples illustrate v2.0 Kafka event-sourcing patterns. TypeScript equivalents (using kafkajs) will replace these when Kafka is adopted.

// Domain/Events/DeadLetterMessage.cs

public record DeadLetterMessage
{
    /// <summary>Original Kafka topic</summary>
    public string OriginalTopic { get; init; }

    /// <summary>Original partition</summary>
    public int OriginalPartition { get; init; }

    /// <summary>Original offset</summary>
    public long OriginalOffset { get; init; }

    /// <summary>Original message key</summary>
    public string Key { get; init; }

    /// <summary>Original message value (base64 if binary)</summary>
    public string Value { get; init; }

    /// <summary>Original headers</summary>
    public Dictionary<string, string> Headers { get; init; }

    /// <summary>Error details</summary>
    public string ErrorType { get; init; }
    public string ErrorMessage { get; init; }
    public string StackTrace { get; init; }

    /// <summary>Processing metadata</summary>
    public int RetryCount { get; init; }
    public DateTime FirstFailedAt { get; init; }
    public DateTime LastFailedAt { get; init; }
    public string ConsumerGroup { get; init; }
    public string ConsumerInstance { get; init; }
}
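For the planned kafkajs port, the record above maps to a plain interface. A hypothetical sketch (field names camelCased from the C# record; the offset is kept as a string, which is how kafkajs exposes 64-bit offsets):

```typescript
// Hypothetical TypeScript equivalent of DeadLetterMessage for the kafkajs port.
export interface DeadLetterMessage {
  originalTopic: string;
  originalPartition: number;
  originalOffset: string;          // kafkajs offsets are strings (64-bit safe)
  key: string | null;
  value: string;                   // base64 if binary
  headers: Record<string, string>;
  errorType: string;
  errorMessage: string;
  stackTrace: string | null;
  retryCount: number;
  firstFailedAt: string;           // ISO-8601 timestamps
  lastFailedAt: string;
  consumerGroup: string;
  consumerInstance: string;
}
```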

DLQ Monitoring & Alerting

# prometheus/alerts/dlq-alerts.yml

groups:
  - name: kafka-dlq-alerts
    rules:
      - alert: DLQMessagesAccumulating
        expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 100
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "DLQ has {{ $value }} unprocessed messages"
          description: "Topic {{ $labels.topic }} has accumulated messages"

      - alert: DLQCriticalBacklog
        expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 1000
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "CRITICAL: DLQ backlog exceeds 1000 messages"
          runbook_url: "https://wiki.internal/runbooks/dlq-overflow"

DLQ Replay Tool

Note: These C# examples illustrate v2.0 Kafka event-sourcing patterns. TypeScript equivalents (using kafkajs) will replace these when Kafka is adopted.

// Tools/DlqReplayService.cs

public class DlqReplayService
{
    public async Task ReplayMessagesAsync(
        string dlqTopic,
        DateTime? from = null,
        DateTime? to = null,
        Func<DeadLetterMessage, bool>? filter = null)
    {
        var consumer = CreateDlqConsumer(dlqTopic);
        var producer = CreateMainTopicProducer();

        var messages = await ReadDlqMessagesAsync(consumer, from, to);

        foreach (var dlqMessage in messages)
        {
            if (filter != null && !filter(dlqMessage))
            {
                _logger.LogDebug("Skipping message by filter: {Key}", dlqMessage.Key);
                continue;
            }

            _logger.LogInformation(
                "Replaying message from DLQ: Topic={Topic}, Offset={Offset}",
                dlqMessage.OriginalTopic,
                dlqMessage.OriginalOffset
            );

            // Publish back to original topic
            await producer.ProduceAsync(dlqMessage.OriginalTopic, new Message<string, string>
            {
                Key = dlqMessage.Key,
                Value = dlqMessage.Value,
                Headers = new Headers
                {
                    { "x-dlq-replay", Encoding.UTF8.GetBytes("true") },
                    { "x-dlq-original-offset", Encoding.UTF8.GetBytes(dlqMessage.OriginalOffset.ToString()) }
                }
            });
        }

        _logger.LogInformation("Replayed {Count} messages from DLQ", messages.Count);
    }
}

# CLI usage for DLQ replay
npx tsx tools/dlq-replay.ts \
  --topic pos.events.sales.dlq \
  --from "2026-01-20T00:00:00Z" \
  --filter "ErrorType contains 'Transient'"
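The `--filter` argument is a mini-expression language. A hypothetical parser (the replay tool's actual grammar is not specified here) supporting only the `contains` operator shown in the example:

```typescript
// Hypothetical parser for the replay tool's --filter flag. Assumption: the
// DSL supports only `<Field> contains '<text>'`, as in the CLI example above.
export function parseFilter(expr: string): (msg: Record<string, unknown>) => boolean {
  const m = expr.match(/^(\w+)\s+contains\s+'([^']*)'$/);
  if (!m) throw new Error(`Unsupported filter expression: ${expr}`);
  const [, field, needle] = m;
  return msg => String(msg[field] ?? '').includes(needle);
}
```

The returned predicate plugs directly into the `filter` parameter of `ReplayMessagesAsync` style replay logic.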

L.4A.4 Domain Events Catalog

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Sale Aggregate Events

Sale Events
===========

SaleCreated
+-----------------------+----------------------------------------+
| Field                 | Description                            |
+-----------------------+----------------------------------------+
| sale_id               | UUID of the new sale                   |
| sale_number           | Human-readable sale number             |
| location_id           | Where the sale occurred                |
| register_id           | Which register                         |
| employee_id           | Who created the sale                   |
| customer_id           | Customer (if any)                      |
| created_at            | Timestamp                              |
+-----------------------+----------------------------------------+

SaleLineItemAdded
+-----------------------+----------------------------------------+
| sale_id               | Parent sale                            |
| line_item_id          | UUID of the line item                  |
| product_id            | Product being sold                     |
| variant_id            | Variant (if any)                       |
| sku                   | SKU at time of sale                    |
| name                  | Product name at time of sale           |
| quantity              | Quantity sold                          |
| unit_price            | Price per unit                         |
| discount_amount       | Line discount                          |
| tax_amount            | Line tax                               |
+-----------------------+----------------------------------------+

SaleLineItemRemoved
+-----------------------+----------------------------------------+
| sale_id               | Parent sale                            |
| line_item_id          | UUID of removed item                   |
| reason                | Why removed                            |
+-----------------------+----------------------------------------+

PaymentReceived
+-----------------------+----------------------------------------+
| sale_id               | Parent sale                            |
| payment_id            | UUID of payment                        |
| payment_method        | cash, credit, debit, etc.              |
| amount                | Payment amount                         |
| reference             | Card last 4, check #, etc.             |
| auth_code             | Authorization code                     |
+-----------------------+----------------------------------------+

SaleCompleted
+-----------------------+----------------------------------------+
| sale_id               | The sale being completed               |
| subtotal              | Final subtotal                         |
| discount_total        | Total discounts                        |
| tax_total             | Total tax                              |
| total                 | Final total                            |
| completed_at          | Timestamp                              |
+-----------------------+----------------------------------------+

SaleVoided
+-----------------------+----------------------------------------+
| sale_id               | The voided sale                        |
| voided_by             | Employee who voided                    |
| reason                | Void reason                            |
| voided_at             | Timestamp                              |
+-----------------------+----------------------------------------+

Inventory Aggregate Events

Inventory Events
================

InventoryReceived
+-----------------------+----------------------------------------+
| location_id           | Where received                         |
| product_id            | Product                                |
| variant_id            | Variant (if any)                       |
| quantity              | Amount received                        |
| cost                  | Unit cost                              |
| reference             | PO number, transfer #                  |
| received_by           | Employee                               |
+-----------------------+----------------------------------------+

InventoryAdjusted
+-----------------------+----------------------------------------+
| location_id           | Location                               |
| product_id            | Product                                |
| variant_id            | Variant (if any)                       |
| quantity_change       | +/- amount                             |
| new_quantity          | New on-hand quantity                   |
| reason                | count, damage, theft, return           |
| adjusted_by           | Employee                               |
| notes                 | Additional context                     |
+-----------------------+----------------------------------------+

InventorySold
+-----------------------+----------------------------------------+
| location_id           | Where sold                             |
| product_id            | Product                                |
| variant_id            | Variant (if any)                       |
| quantity              | Amount sold (positive)                 |
| sale_id               | Related sale                           |
+-----------------------+----------------------------------------+

InventoryTransferred
+-----------------------+----------------------------------------+
| transfer_id           | Transfer document                      |
| from_location_id      | Source location                        |
| to_location_id        | Destination location                   |
| product_id            | Product                                |
| variant_id            | Variant (if any)                       |
| quantity              | Amount transferred                     |
| transferred_by        | Employee                               |
+-----------------------+----------------------------------------+

InventoryCounted
+-----------------------+----------------------------------------+
| location_id           | Location                               |
| product_id            | Product                                |
| variant_id            | Variant                                |
| expected_quantity     | System quantity before count           |
| actual_quantity       | Physical count                         |
| variance              | Difference                             |
| counted_by            | Employee                               |
| count_session_id      | Batch count session                    |
+-----------------------+----------------------------------------+

Customer Aggregate Events

Customer Events
===============

CustomerCreated
+-----------------------+----------------------------------------+
| customer_id           | New customer UUID                      |
| customer_number       | Human-readable ID                      |
| first_name            | First name                             |
| last_name             | Last name                              |
| email                 | Email address                          |
| phone                 | Phone number                           |
| created_by            | Employee                               |
+-----------------------+----------------------------------------+

CustomerUpdated
+-----------------------+----------------------------------------+
| customer_id           | Customer UUID                          |
| changes               | Map of field -> {old, new}             |
| updated_by            | Employee                               |
+-----------------------+----------------------------------------+

LoyaltyPointsEarned
+-----------------------+----------------------------------------+
| customer_id           | Customer                               |
| points                | Points earned                          |
| sale_id               | Related sale                           |
| new_balance           | Updated balance                        |
+-----------------------+----------------------------------------+

LoyaltyPointsRedeemed
+-----------------------+----------------------------------------+
| customer_id           | Customer                               |
| points                | Points redeemed                        |
| sale_id               | Related sale                           |
| new_balance           | Updated balance                        |
+-----------------------+----------------------------------------+

StoreCreditIssued
+-----------------------+----------------------------------------+
| customer_id           | Customer                               |
| credit_id             | Credit UUID                            |
| amount                | Credit amount                          |
| reason                | Why issued                             |
| issued_by             | Employee                               |
+-----------------------+----------------------------------------+

Employee Aggregate Events

Employee Events
===============

EmployeeClockIn
+-----------------------+----------------------------------------+
| employee_id           | Employee UUID                          |
| location_id           | Where clocking in                      |
| shift_id              | New shift UUID                         |
| clocked_in_at         | Timestamp                              |
+-----------------------+----------------------------------------+

EmployeeClockOut
+-----------------------+----------------------------------------+
| employee_id           | Employee UUID                          |
| shift_id              | Shift being closed                     |
| clocked_out_at        | Timestamp                              |
| break_minutes         | Total break time                       |
+-----------------------+----------------------------------------+

EmployeeBreakStarted
+-----------------------+----------------------------------------+
| employee_id           | Employee UUID                          |
| shift_id              | Current shift                          |
| started_at            | Break start time                       |
+-----------------------+----------------------------------------+

EmployeeBreakEnded
+-----------------------+----------------------------------------+
| employee_id           | Employee UUID                          |
| shift_id              | Current shift                          |
| ended_at              | Break end time                         |
| duration_minutes      | Break duration                         |
+-----------------------+----------------------------------------+

CashDrawer Aggregate Events

Cash Drawer Events
==================

DrawerOpened
+-----------------------+----------------------------------------+
| drawer_id             | Drawer UUID                            |
| register_id           | Register UUID                          |
| employee_id           | Who opened                             |
| opening_balance       | Starting cash amount                   |
| opened_at             | Timestamp                              |
+-----------------------+----------------------------------------+

DrawerCashDrop
+-----------------------+----------------------------------------+
| drawer_id             | Drawer UUID                            |
| amount                | Amount dropped to safe                 |
| employee_id           | Who dropped                            |
| dropped_at            | Timestamp                              |
+-----------------------+----------------------------------------+

DrawerPaidIn
+-----------------------+----------------------------------------+
| drawer_id             | Drawer UUID                            |
| amount                | Amount added                           |
| reason                | Why (petty cash, etc.)                 |
| employee_id           | Who added                              |
+-----------------------+----------------------------------------+

DrawerPaidOut
+-----------------------+----------------------------------------+
| drawer_id             | Drawer UUID                            |
| amount                | Amount removed                         |
| reason                | Why (vendor payment, etc.)             |
| employee_id           | Who removed                            |
+-----------------------+----------------------------------------+

DrawerClosed
+-----------------------+----------------------------------------+
| drawer_id             | Drawer UUID                            |
| employee_id           | Who closed                             |
| closing_balance       | Actual cash counted                    |
| expected_balance      | System calculated                      |
| variance              | Difference (over/short)                |
| closed_at             | Timestamp                              |
+-----------------------+----------------------------------------+

L.4A.5 Event Projection Patterns

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Projection Architecture
=======================

+-------------------+
|   Event Stream    |
|                   |
| SaleCreated       |
| ItemAdded         |
| ItemAdded         |
| PaymentReceived   |
| SaleCompleted     |
+--------+----------+
         |
         | Projector reads events
         v
+-------------------+     +-------------------+     +-------------------+
| Sale Projector    |     |Inventory Projector|     |Customer Projector |
|                   |     |                   |     |                   |
| - Build sale view |     | - Update stock    |     | - Update stats    |
| - Calculate totals|     | - Track movements |     | - Loyalty points  |
+--------+----------+     +--------+----------+     +--------+----------+
         |                         |                         |
         v                         v                         v
+-------------------+     +-------------------+     +-------------------+
| sale_summaries    |     | inventory_levels  |     | customer_stats    |
| (Read Model)      |     | (Read Model)      |     | (Read Model)      |
+-------------------+     +-------------------+     +-------------------+

Sale Projector Implementation

// sale-projector.ts

import { PrismaClient } from '@prisma/client';
import type { SaleCreated, SaleLineItemAdded, SaleCompleted, SaleVoided } from './domain-events';

const prisma = new PrismaClient();

export async function handleSaleCreated(event: SaleCreated): Promise<void> {
  await prisma.saleSummary.create({
    data: {
      id: event.saleId,
      saleNumber: event.saleNumber,
      locationId: event.locationId,
      employeeId: event.employeeId,
      customerId: event.customerId ?? null,
      status: 'draft',
      subtotal: 0,
      total: 0,
      createdAt: event.createdAt,
    },
  });
}

export async function handleSaleLineItemAdded(event: SaleLineItemAdded): Promise<void> {
  // Note: the `increment` below is not idempotent; pair with the
  // idempotency_records check so a redelivered event is applied only once.
  const sale = await prisma.saleSummary.findUnique({ where: { id: event.saleId } });
  if (!sale) return;

  const lineTotal = event.quantity * event.unitPrice - event.discountAmount;

  await prisma.saleSummary.update({
    where: { id: event.saleId },
    data: {
      subtotal: { increment: lineTotal },
      itemCount: { increment: event.quantity },
    },
  });
}

export async function handleSaleCompleted(event: SaleCompleted): Promise<void> {
  await prisma.saleSummary.update({
    where: { id: event.saleId },
    data: {
      status: 'completed',
      discountTotal: event.discountTotal,
      taxTotal: event.taxTotal,
      total: event.total,
      completedAt: event.completedAt,
    },
  });
}

export async function handleSaleVoided(event: SaleVoided): Promise<void> {
  await prisma.saleSummary.update({
    where: { id: event.saleId },
    data: {
      status: 'voided',
      voidedAt: event.voidedAt,
      voidedBy: event.voidedBy,
      voidReason: event.reason,
    },
  });
}
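
The handlers above still need to be wired to the event feed. The following is a minimal dispatch sketch; the registry shape and the `register`/`dispatch` names are illustrative assumptions, not mandated by the BRD:

```typescript
// projector-dispatch.ts — illustrative wiring for projector handlers.
// The registry/dispatch shape is an assumption for this sketch.

export interface ProjectorEvent {
  eventType: string;
  [key: string]: unknown;
}

type Handler = (event: ProjectorEvent) => Promise<void>;

const handlers = new Map<string, Handler>();

// Each projector registers interest in specific event types at startup.
export function register(eventType: string, handler: Handler): void {
  handlers.set(eventType, handler);
}

// Called by the event-table poller (or LISTEN/NOTIFY callback) for each new event.
export async function dispatch(event: ProjectorEvent): Promise<void> {
  const handler = handlers.get(event.eventType);
  if (!handler) return; // event types with no registered projector are skipped
  await handler(event);
}
```

A projector would register its handlers once at boot, e.g. `register('SaleCreated', (e) => handleSaleCreated(e as unknown as SaleCreated))`.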

L.4A.6 Temporal Queries

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

Event sourcing enables powerful temporal queries:

-- What was inventory on a specific date?
SELECT
    product_id,
    SUM(CASE
        WHEN event_type = 'InventoryReceived' THEN (event_data->>'quantity')::int
        WHEN event_type = 'InventorySold' THEN -(event_data->>'quantity')::int
        WHEN event_type = 'InventoryAdjusted' THEN (event_data->>'quantity_change')::int
        ELSE 0
    END) as quantity
FROM events
WHERE aggregate_type = 'Inventory'
  AND (event_data->>'location_id')::uuid = '...'
  AND created_at <= '2025-12-15 15:00:00'
GROUP BY product_id;

-- Sales trend for specific product
SELECT
    date_trunc('day', created_at) as date,
    SUM((event_data->>'quantity')::int) as units_sold
FROM events
WHERE event_type = 'InventorySold'
  AND (event_data->>'product_id')::uuid = '...'
  AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY date_trunc('day', created_at)
ORDER BY date;

-- Audit trail for specific sale
SELECT
    event_type,
    event_data,
    created_at,
    created_by
FROM events
WHERE aggregate_type = 'Sale'
  AND aggregate_id = '...'
ORDER BY version;
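
The first query above (point-in-time inventory) can also be expressed as a pure replay in application code. This sketch mirrors the SQL's event-type arithmetic; the field names are illustrative mappings of the `event_data` JSON keys, not a defined API:

```typescript
// inventory-replay.ts — pure-TypeScript equivalent of the point-in-time
// inventory query: fold inventory events up to a cutoff timestamp.
// Field names are assumptions mirroring the event_data JSON keys.

interface InventoryEvent {
  eventType: 'InventoryReceived' | 'InventorySold' | 'InventoryAdjusted';
  productId: string;
  quantity: number; // for InventoryAdjusted this is the signed quantity_change
  createdAt: Date;
}

export function quantityAsOf(
  events: InventoryEvent[],
  productId: string,
  asOf: Date
): number {
  return events
    .filter((e) => e.productId === productId && e.createdAt <= asOf)
    // Received and Adjusted add their quantity; Sold subtracts it,
    // matching the CASE expression in the SQL above.
    .reduce(
      (sum, e) => sum + (e.eventType === 'InventorySold' ? -e.quantity : e.quantity),
      0
    );
}
```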

L.4A.7 Snapshots for Performance

Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):

For aggregates with many events, snapshots prevent replaying the entire stream:

Snapshot Strategy
=================

Without Snapshots:
Event 1 -> Event 2 -> ... -> Event 5000 -> Current State
(Slow for aggregates with many events)

With Snapshots:
Event 1 -> ... -> Event 1000 -> [Snapshot @ v1000]
                                      |
                                      -> Event 1001 -> ... -> Event 1050 -> Current State
(Load snapshot, then only replay 50 events)

Snapshot Implementation

// aggregate-repository.ts

import { PrismaClient } from '@prisma/client';
import type { AggregateRoot, DomainEvent } from './types';

const prisma = new PrismaClient();
const SNAPSHOT_THRESHOLD = 100;

export async function loadAggregate<T extends AggregateRoot>(
  id: string,
  factory: () => T
): Promise<T> {
  const aggregate = factory();

  // 1. Try to load snapshot
  const snapshot = await prisma.snapshot.findUnique({
    where: { aggregateType_aggregateId: { aggregateType: aggregate.type, aggregateId: id } },
  });

  let fromVersion = 0;

  if (snapshot) {
    aggregate.restoreFromSnapshot(snapshot.state as Record<string, unknown>);
    fromVersion = snapshot.version;
  }

  // 2. Load events after snapshot
  const events = await prisma.event.findMany({
    // Filter on aggregate type as well as id so streams of different
    // aggregate types with colliding ids cannot interleave.
    where: { aggregateType: aggregate.type, aggregateId: id, version: { gt: fromVersion } },
    orderBy: { version: 'asc' },
  });

  for (const event of events) {
    aggregate.apply(event as unknown as DomainEvent);
  }

  return aggregate;
}

export async function saveAggregate<T extends AggregateRoot>(aggregate: T): Promise<void> {
  const newEvents = aggregate.getUncommittedEvents();
  if (newEvents.length === 0) return;

  // 1. Append events
  await prisma.event.createMany({
    data: newEvents.map((event, i) => ({
      aggregateType: aggregate.type,
      aggregateId: aggregate.id,
      eventType: event.eventType,
      eventData: event as unknown as Record<string, unknown>,
      version: aggregate.version + i + 1,
      createdBy: event.createdBy,
    })),
  });

  const newVersion = aggregate.version + newEvents.length;

  // 2. Create snapshot when the new version crosses a threshold boundary.
  //    (Checking aggregate.version alone would miss batches that skip past
  //    an exact multiple of SNAPSHOT_THRESHOLD.)
  if (Math.floor(newVersion / SNAPSHOT_THRESHOLD) > Math.floor(aggregate.version / SNAPSHOT_THRESHOLD)) {
    const snapshotState = aggregate.createSnapshot();
    await prisma.snapshot.upsert({
      where: { aggregateType_aggregateId: { aggregateType: aggregate.type, aggregateId: aggregate.id } },
      create: { aggregateType: aggregate.type, aggregateId: aggregate.id, version: newVersion, state: snapshotState },
      update: { version: newVersion, state: snapshotState },
    });
  }

  aggregate.clearUncommittedEvents();
}

L.4B Integration Architecture Patterns

BRD v18.0 Module 6 defines integration patterns that are architecturally significant. This section documents their implementation strategy.

Transactional Outbox Pattern

Guarantees atomic business data persistence + event publication without distributed transactions.

┌──────────────────────────────────────────────────────────┐
│             TRANSACTIONAL OUTBOX PATTERN                   │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  Application                    Outbox Relay              │
│  ┌─────────────────┐           ┌──────────────────┐      │
│  │ BEGIN TRANSACTION│           │ Poll outbox table│      │
│  │                  │           │ every 5 seconds  │      │
│  │ 1. Write to      │           └────────┬─────────┘      │
│  │    business table│                    │                │
│  │                  │                    ▼                │
│  │ 2. Write to      │           ┌──────────────────┐      │
│  │    outbox table  │           │ Publish event    │      │
│  │                  │           │ via LISTEN/NOTIFY│      │
│  │ COMMIT           │           └────────┬─────────┘      │
│  └─────────────────┘                    │                │
│                                          ▼                │
│                                 ┌──────────────────┐      │
│                                 │ Mark as published│      │
│                                 │ (idempotent)     │      │
│                                 └──────────────────┘      │
│                                                           │
└──────────────────────────────────────────────────────────┘
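
The relay loop above can be sketched as a small, store-agnostic function. The `OutboxStore` interface and names below are assumptions for illustration; in Nexus this would be backed by the PostgreSQL outbox table and a LISTEN/NOTIFY publisher:

```typescript
// outbox-relay.ts — illustrative sketch of the polling relay.
// Interface and names are assumptions, not the actual Nexus API.

export interface OutboxRecord {
  id: string;
  eventType: string;
  payload: unknown;
  publishedAt: Date | null;
}

export interface OutboxStore {
  fetchUnpublished(limit: number): Promise<OutboxRecord[]>;
  markPublished(id: string): Promise<void>;
}

// e.g. a function that issues pg NOTIFY with the serialized record
export type Publisher = (record: OutboxRecord) => Promise<void>;

// One poll iteration; run on a 5-second interval as shown in the diagram.
export async function relayOnce(store: OutboxStore, publish: Publisher): Promise<number> {
  const batch = await store.fetchUnpublished(100);
  for (const record of batch) {
    await publish(record);                // a crash between these two calls means
    await store.markPublished(record.id); // re-publication next poll — hence the
  }                                       // "mark as published (idempotent)" step
  return batch.length;
}
```

At-least-once delivery is the deliberate trade-off here: consumers must be idempotent (which the `idempotency_records` table supports), because a failure after publish but before mark causes a duplicate.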

Provider Abstraction (Strategy Pattern)

┌──────────────────────────────────────────────────────────┐
│              PROVIDER ABSTRACTION PATTERN                  │
├──────────────────────────────────────────────────────────┤
│                                                           │
│              IIntegrationProvider                         │
│              ┌──────────────────┐                         │
│              │ + Connect()      │                         │
│              │ + SyncProducts() │                         │
│              │ + SyncInventory()│                         │
│              │ + ValidateData() │                         │
│              │ + HealthCheck()  │                         │
│              └────────┬─────────┘                         │
│                       │                                   │
│         ┌─────────────┼─────────────┐                    │
│         ▼             ▼             ▼                    │
│  ┌────────────┐┌────────────┐┌─────────────────┐        │
│  │  Shopify   ││  Amazon    ││  Google          │        │
│  │  Provider  ││  Provider  ││  Merchant        │        │
│  │            ││            ││  Provider        │        │
│  │ GraphQL    ││ REST/LWA   ││ REST/Service Acct│        │
│  │ 50pts/sec  ││ Burst+Tok  ││ Quota-based     │        │
│  │ Webhooks   ││ 2min Poll  ││ 2x/day Batch    │        │
│  └────────────┘└────────────┘└─────────────────┘        │
│                                                           │
└──────────────────────────────────────────────────────────┘
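
The Strategy interface in the diagram can be sketched in TypeScript as follows. The method signatures are assumptions inferred from the method names shown (Connect, SyncProducts, ...), and the `ProviderRegistry` is an illustrative way to select a concrete strategy at runtime:

```typescript
// integration-provider.ts — sketch of the Strategy pattern above.
// Signatures and the registry are assumptions, not the BRD's exact API.

export interface IIntegrationProvider {
  readonly name: 'shopify' | 'amazon' | 'google_merchant';
  connect(): Promise<void>;
  syncProducts(): Promise<{ synced: number; failed: number }>;
  syncInventory(): Promise<{ synced: number; failed: number }>;
  validateData(payload: unknown): boolean;
  healthCheck(): Promise<boolean>;
}

// Callers depend only on the interface; rate-limit handling (50 pts/sec
// GraphQL, token bucket, batch quotas) lives inside each concrete provider.
export class ProviderRegistry {
  private providers = new Map<string, IIntegrationProvider>();

  register(provider: IIntegrationProvider): void {
    this.providers.set(provider.name, provider);
  }

  get(name: string): IIntegrationProvider {
    const provider = this.providers.get(name);
    if (!provider) throw new Error(`No provider registered for ${name}`);
    return provider;
  }
}
```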

Safety Buffer Computation

Per BRD Section 6.7.2, channel-available quantity is calculated as:

Channel Available = POS Available - Safety Buffer

┌──────────────────────────────────────────────────────────┐
│            SAFETY BUFFER COMPUTATION                      │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  4-Level Priority Resolution:                            │
│  1. Product-Level Override (highest priority)            │
│  2. Category-Level Default                               │
│  3. Channel-Level Default                                │
│  4. Global Default (lowest priority)                     │
│                                                           │
│  3 Calculation Modes:                                    │
│  ┌──────────────────────────────────────────────────┐   │
│  │ FIXED:       Buffer = fixed_quantity              │   │
│  │ PERCENTAGE:  Buffer = pos_available * percentage  │   │
│  │ MIN_RESERVE: Buffer = pos_available - min_reserve │   │
│  └──────────────────────────────────────────────────┘   │
│                                                           │
│  Example (FIXED mode, buffer = 2):                       │
│  POS Available: 10 → Channel Available: 8                │
│                                                           │
│  Example (PERCENTAGE mode, 20%):                         │
│  POS Available: 10 → Buffer: 2 → Channel Available: 8   │
│                                                           │
└──────────────────────────────────────────────────────────┘
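
The resolution order and the three modes above can be sketched directly. The formulas follow the table as written (including MIN_RESERVE); function and field names, the rounding choice for PERCENTAGE, and the clamp to zero are illustrative assumptions:

```typescript
// safety-buffer.ts — sketch of the buffer computation above.
// Formulas come from the table; names and rounding are assumptions.

export type BufferRule =
  | { mode: 'FIXED'; fixedQuantity: number }
  | { mode: 'PERCENTAGE'; percentage: number } // 0.20 = 20%
  | { mode: 'MIN_RESERVE'; minReserve: number };

// 4-level priority resolution: product > category > channel > global.
export function resolveRule(rules: {
  product?: BufferRule;
  category?: BufferRule;
  channel?: BufferRule;
  global: BufferRule;
}): BufferRule {
  return rules.product ?? rules.category ?? rules.channel ?? rules.global;
}

export function channelAvailable(posAvailable: number, rule: BufferRule): number {
  let buffer: number;
  if (rule.mode === 'FIXED') buffer = rule.fixedQuantity;
  else if (rule.mode === 'PERCENTAGE') buffer = Math.ceil(posAvailable * rule.percentage);
  else buffer = posAvailable - rule.minReserve; // MIN_RESERVE, as specified in the table

  return Math.max(0, posAvailable - buffer); // never publish negative availability
}
```

With the table's FIXED example: 10 on hand and a buffer of 2 yields a channel-available quantity of 8.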

L.5 Architecture Documentation & Traceability

The goal is to ensure the documented (“soft”) architecture matches the code and enables rapid root-cause analysis.

| Aspect | Selection |
|---|---|
| Strategy | “Diagrams as Code” to prevent documentation drift |
| Tooling | Structurizr (C4 Model) or Mermaid.js |
| Implementation | Architecture diagrams committed to Git repository alongside source code |
| Automation | Use Claude Code CLI to auto-generate updates to diagrams during refactoring |

C4 Model Levels

+-------------------------------------------------------------------+
|                        C4 MODEL HIERARCHY                          |
+-------------------------------------------------------------------+
|                                                                    |
|  Level 1: System Context                                           |
|  +------------------+     +------------------+     +-------------+ |
|  |   Nexus POS      |<--->|   Central API    |<--->|   Shopify   | |
|  |   (Terminals)    |     |   (Cloud)        |     |   Amazon    | |
|  +------------------+     +------------------+     +-------------+ |
|                                                                    |
|  Level 2: Container Diagram                                        |
|  +------------------+     +------------------+     +-------------+ |
|  |   POS App        |     |   API Gateway    |     |  PG Events  | |
|  |   (SQLite)       |     |   Auth Service   |     |   (v1.0)    | |
|  +------------------+     |   Sales Module   |     +-------------+ |
|                           |   Inventory Mod  |     +-------------+ |
|                           +------------------+     |  PostgreSQL | |
|                                                    +-------------+ |
|                                                                    |
|  Level 3: Component Diagram (per module)                           |
|  Level 4: Code Diagram (class/sequence)                            |
|                                                                    |
+-------------------------------------------------------------------+

L.6 Quality Assurance (QA) & Testing Strategy

The goal is to ensure end-to-end reliability for financial transactions.

E2E (End-to-End) Testing

| Attribute | Selection |
|---|---|
| Tool | Cypress or Playwright |
| Scope | Full simulation: Cashier login → Scan Item → Process Payment → Print Receipt |

Example Test Flow:

1. Cashier authenticates with PIN
2. Scan barcode (NXJ1078)
3. Apply discount (if applicable)
4. Select payment method (Cash/Card)
5. Process payment
6. Print/email receipt
7. Verify inventory decremented
8. Verify domain event appended to events table (PostgreSQL) and NOTIFY dispatched

Load Testing

| Attribute | Selection |
|---|---|
| Tool | k6 or JMeter |
| Scope | Simulate “Black Friday” traffic (500 concurrent transactions) |

Black Friday Scenario:

Concurrent Users: 500
Duration: 30 minutes
Target TPS: 1000 transactions/second
Acceptable Latency: p99 < 500ms

Code Management

| Attribute | Selection |
|---|---|
| Platform | GitHub/GitLab |
| Versioning | Semantic Versioning (tags v1.x.x) |
| Traceability | Exact code version deployed to each POS terminal |

L.7 Observability & Monitoring Strategy

Primary Pattern

| Attribute | Selection |
|---|---|
| Pattern | OpenTelemetry (OTel) “Trace-to-Code” Pipeline |
| Rationale | Industry-standard OTel protocol prevents vendor lock-in and enables tracing an error from a specific store directly to the line of code |

Technology Stack (The “LGTM” Stack)

| Component | Tool | Purpose |
|---|---|---|
| L - Logs | Loki | Log aggregation |
| G - Grafana | Grafana | Visualization dashboards |
| T - Traces | Tempo (or Jaeger) | Distributed tracing |
| M - Metrics | Prometheus | Metrics collection |

Instrumentation

| Layer | Instrumentation |
|---|---|
| API | OpenTelemetry auto-instrumentation (@opentelemetry/sdk-node) |
| Database | Query tracing, slow query logging |
| Events | PostgreSQL event tables with LISTEN/NOTIFY (v1.0), correlation IDs for tracing |
| Nexus POS | Local telemetry buffer, sync on reconnect |

L.8 Security & Compliance Strategy

Primary Pattern

| Attribute | Selection |
|---|---|
| Pattern | 6-Gate Security Test Pyramid with DevSecOps for PCI Compliance |
| Rationale | Claude Code agents generate the full codebase. A single SonarQube gate is insufficient to catch missing authorization checks, incorrect OAuth implementation, SAQ-A violations, architecture drift, or insecure CORS/CSP headers. The 6-gate pyramid ensures defense-in-depth for AI-generated code. |

6-Gate Security Test Pyramid

| Gate | Tool | Purpose | Blocks Merge? |
|---|---|---|---|
| 1. SAST | SonarQube / CodeQL | Static code vulnerability scanning (SQLi, XSS, hardcoded secrets) | Yes |
| 2. SCA | Snyk / OWASP Dependency-Check | Package vulnerability scanning + SBOM generation (PCI-DSS 4.0 Req 6.3.2) | Yes |
| 3. Secrets Detection | GitLeaks / TruffleHog | Credential leak prevention in source code and commit history | Yes |
| 4. Architecture Conformance | dependency-cruiser | Module boundary enforcement, dependency rules (e.g., Module 6 cannot directly access Module 1 internals) | Yes |
| 5. Contract Tests | Pact | Shopify/Amazon/Google sandbox API contract verification; webhook signature validation | Yes |
| 6. Manual Security Review | Human reviewer | Security-critical paths: payment flows, credential vault access, OAuth token handling, PCI boundary | Yes (tagged PRs only) |

┌──────────────────────────────────────────────────────────┐
│             6-GATE SECURITY TEST PYRAMID                  │
├──────────────────────────────────────────────────────────┤
│                                                           │
│                      ┌─────────┐                         │
│                      │ Manual  │  Gate 6                  │
│                      │ Review  │  (Security-critical PRs) │
│                    ┌─┴─────────┴─┐                       │
│                    │  Contract   │  Gate 5                 │
│                    │  Tests      │  (Pact + Sandboxes)    │
│                  ┌─┴─────────────┴─┐                     │
│                  │  Architecture   │  Gate 4               │
│                  │  Conformance    │  (dep-cruiser)       │
│                ┌─┴─────────────────┴─┐                   │
│                │  Secrets Detection  │  Gate 3             │
│                │  (GitLeaks)         │                    │
│              ┌─┴─────────────────────┴─┐                 │
│              │  SCA (Snyk + SBOM)      │  Gate 2          │
│            ┌─┴─────────────────────────┴─┐               │
│            │  SAST (SonarQube / CodeQL)  │  Gate 1        │
│            └─────────────────────────────┘               │
│                                                           │
└──────────────────────────────────────────────────────────┘

FIM (File Integrity Monitoring) - PCI Requirement

| Attribute | Selection |
|---|---|
| Tool | Wazuh or OSSEC |
| Action | Monitors POS terminals and servers for unauthorized file changes |
| PCI Reference | PCI-DSS 4.0 Req 11.5.1 |
| Criticality | Essential for detecting skimmers, tampering, and supply chain compromise |

Credential Vault Architecture

| Attribute | Selection |
|---|---|
| Technology | HashiCorp Vault (Docker container) |
| Deployment | Single Vault instance with auto-unseal; Docker Compose alongside PostgreSQL |

Key Hierarchy:

Master Encryption Key (Vault auto-unseal)
└── Tenant-Specific Keys
    ├── tenant_nexus_key
    │   ├── Shopify OAuth tokens
    │   ├── Amazon LWA credentials
    │   ├── Google Service Account key
    │   ├── Payment processor tokens
    │   ├── SMTP credentials
    │   └── Webhook signing keys
    └── tenant_acme_key
        └── ... (same structure)

6 Credential Types:

| # | Credential Type | Provider | Auth Method | Rotation |
|---|---|---|---|---|
| 1 | Shopify OAuth token | Shopify | OAuth 2.0 / PKCE | On expiry + 90-day forced |
| 2 | Amazon LWA credentials | Amazon | Login with Amazon (OAuth) | On expiry + 90-day forced |
| 3 | Google Service Account | Google | Service Account JSON key | 90-day rotation |
| 4 | Payment processor token | Various | API key / OAuth | 90-day rotation |
| 5 | SMTP credentials | Email provider | Username/password | 90-day rotation |
| 6 | Webhook signing keys | All providers | HMAC-SHA256 | On compromise + 90-day |

Access Policy: Least privilege; application-role-based access. Integration services can only read their own provider credentials. Credential writes require admin role with MFA.

DevSecOps Pipeline

┌───────────────────────────────────────────────────────────────────┐
│                     DEVSECOPS PIPELINE (v2.0)                      │
├───────────────────────────────────────────────────────────────────┤
│                                                                    │
│  Developer / Claude Code Agent                                     │
│       │                                                            │
│       ▼                                                            │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐                │
│  │ Pre-commit │──►│ Gate 1:    │──►│ Gate 2:    │                │
│  │ Hooks      │   │ SAST       │   │ SCA + SBOM │                │
│  └────────────┘   └────────────┘   └────────────┘                │
│                                          │                         │
│       ┌──────────────────────────────────┘                         │
│       ▼                                                            │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐                │
│  │ Gate 3:    │──►│ Gate 4:    │──►│ Gate 5:    │                │
│  │ Secrets    │   │ dep-cruise │   │ Pact Tests │                │
│  └────────────┘   └────────────┘   └────────────┘                │
│                                          │                         │
│       ┌──────────────────────────────────┘                         │
│       ▼                                                            │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐                │
│  │ E2E Tests  │──►│ Gate 6:    │──►│ Deploy     │                │
│  │(Playwright)│   │ Manual     │   │ + Wazuh    │                │
│  └────────────┘   │ (if tagged)│   │ FIM        │                │
│                   └────────────┘   └────────────┘                │
│                                                                    │
└───────────────────────────────────────────────────────────────────┘

Offline Queue Security

POS terminals operating offline accumulate queued transactions that must be protected against tampering, interception, and replay attacks.

| Control | Implementation | Purpose |
|---|---|---|
| Queue Encryption | AES-256-GCM with device-specific key | Protects queued transactions at rest on SQLite |
| Tamper Detection | HMAC-SHA256 over each queued transaction | Detects modification of queued data before sync |
| Transaction Signing | Device certificate signs each transaction | Non-repudiation; proves transaction originated from authorized terminal |
| Replay Prevention | Monotonic sequence number + timestamp | Prevents re-submission of previously synced transactions |
| Key Storage | Device secure enclave / TPM where available | Protects encryption keys from extraction |

┌──────────────────────────────────────────────────────────┐
│              OFFLINE QUEUE SECURITY MODEL                  │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  Transaction Created (Offline)                            │
│       │                                                   │
│       ▼                                                   │
│  ┌─────────────┐    ┌──────────────┐    ┌────────────┐  │
│  │ Serialize   │───►│ HMAC-SHA256  │───►│ AES-256    │  │
│  │ Transaction │    │ (Integrity)  │    │ Encrypt    │  │
│  └─────────────┘    └──────────────┘    └──────┬─────┘  │
│                                                 │        │
│                                                 ▼        │
│                                          ┌───────────┐   │
│                                          │  SQLite   │   │
│                                          │  Queue    │   │
│                                          └───────────┘   │
│                                                 │        │
│                     Network Restored            │        │
│                                                 ▼        │
│  ┌─────────────┐    ┌──────────────┐    ┌────────────┐  │
│  │ Verify      │◄───│ Decrypt      │◄───│ Read from  │  │
│  │ HMAC + Seq  │    │ AES-256      │    │ Queue      │  │
│  └──────┬──────┘    └──────────────┘    └────────────┘  │
│         │                                                │
│         ▼                                                │
│  ┌─────────────┐                                        │
│  │ Sync to     │                                        │
│  │ Central API │                                        │
│  └─────────────┘                                        │
│                                                           │
└──────────────────────────────────────────────────────────┘
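
The seal/open path in the diagram can be sketched with Node's built-in `crypto` module. The envelope shape, key handling, and function names are illustrative assumptions; AES-256-GCM, HMAC-SHA256, and the sequence check come from the controls above:

```typescript
// offline-queue-crypto.ts — sketch of the encrypt-then-queue step.
// Envelope shape and key derivation are assumptions for illustration.

import {
  createCipheriv, createDecipheriv, createHmac, randomBytes, timingSafeEqual,
} from 'node:crypto';

interface QueuedEnvelope {
  seq: number;       // monotonic sequence number (replay prevention)
  iv: string;        // hex
  ciphertext: string; // hex
  authTag: string;   // hex, GCM authentication tag
  hmac: string;      // hex, HMAC-SHA256 over seq + ciphertext
}

export function sealTransaction(plaintext: string, seq: number, encKey: Buffer, macKey: Buffer): QueuedEnvelope {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv('aes-256-gcm', encKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const authTag = cipher.getAuthTag();
  // HMAC covers the sequence number and ciphertext so reordering or
  // modification of queued rows is detectable before decryption.
  const hmac = createHmac('sha256', macKey).update(`${seq}`).update(ciphertext).digest('hex');
  return { seq, iv: iv.toString('hex'), ciphertext: ciphertext.toString('hex'), authTag: authTag.toString('hex'), hmac };
}

export function openTransaction(env: QueuedEnvelope, expectedSeq: number, encKey: Buffer, macKey: Buffer): string {
  if (env.seq !== expectedSeq) throw new Error('replay or sequence gap detected');
  const ciphertext = Buffer.from(env.ciphertext, 'hex');
  const expected = createHmac('sha256', macKey).update(`${env.seq}`).update(ciphertext).digest();
  if (!timingSafeEqual(expected, Buffer.from(env.hmac, 'hex'))) throw new Error('tamper detected');
  const decipher = createDecipheriv('aes-256-gcm', encKey, Buffer.from(env.iv, 'hex'));
  decipher.setAuthTag(Buffer.from(env.authTag, 'hex'));
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```

In production the keys would live in the device secure enclave/TPM per the table above, not in application memory as plain Buffers.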

L.9 Diagrammatic Overview

System Architecture (Mermaid)

graph TD
    subgraph Client_Device ["Nexus POS"]
        UI[UI Layer]
        SL[Service Layer]
        DB_Local[(SQLite)]
        SL --> DB_Local
    end

    subgraph Cloud_Infrastructure ["Cloud Infrastructure"]
        LB[Load Balancer]
        subgraph Central_API ["Central API (Modular Monolith)"]
            Auth[Auth Module]
            Sales[Sales Module]
            Inv[Inventory Module]
        end
        subgraph Data_Layer ["Data Layer"]
            PG[(PostgreSQL)]
            Events[(PG Events)]
        end
    end

    subgraph DevOps_Pipeline ["DevSecOps & Traceability"]
        Git[GitHub - Semantic Ver]
        Struct[Structurizr - Docs]
        Sonar[SonarQube - SAST]
        Cypress[Cypress - E2E]
        Wazuh[Wazuh - FIM/PCI]
    end

    SL --> LB
    LB --> Auth
    Auth --> Sales
    Sales --> Events
    Sales --> PG

    Git --> Sonar
    Sonar --> Cypress
    Cypress --> Struct
    Wazuh -.-> Central_API
    Wazuh -.-> Client_Device

ASCII Version

+------------------------------------------------------------------+
|                    NEXUS POS ARCHITECTURE                         |
+------------------------------------------------------------------+
|                                                                   |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                   NEXUS POS CLIENT (STORE)                   │ |
|  │  ┌──────────┐    ┌──────────────┐    ┌──────────────────┐   │ |
|  │  │    UI    │───▶│ Service Layer│───▶│  SQLite (Local)  │   │ |
|  │  │ (React   │    │   (Plugins)  │    │  (Offline Data)  │   │ |
|  │  │  Web App)│    │              │    │  (sql.js + OPFS) │   │ |
|  │  └──────────┘    └──────────────┘    └──────────────────┘   │ |
|  └──────────────────────────┬──────────────────────────────────┘ |
|                             │                                     |
|                             ▼ (Sync when online)                  |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                   CLOUD INFRASTRUCTURE                       │ |
|  │                                                               │ |
|  │  ┌──────────────────────────────────────────────────────┐   │ |
|  │  │           CENTRAL API (Modular Monolith)              │   │ |
|  │  │  ┌────────┐  ┌────────┐  ┌──────────┐  ┌──────────┐  │   │ |
|  │  │  │  Auth  │  │ Sales  │  │Inventory │  │ Catalog  │  │   │ |
|  │  │  └────────┘  └────────┘  └──────────┘  └──────────┘  │   │ |
|  │  └──────────────────────┬───────────────────────────────┘   │ |
|  │                         │                                    │ |
|  │         ┌───────────────┼───────────────┐                   │ |
|  │         ▼               ▼               ▼                   │ |
|  │  ┌───────────┐   ┌───────────┐   ┌───────────────┐         │ |
|  │  │PostgreSQL │   │ HashiCorp │   │   External    │         │ |
|  │  │(Events +  │   │   Vault   │   │   Systems     │         │ |
|  │  │ State)    │   │(Secrets)  │   │(Shopify, etc.)│         │ |
|  │  └───────────┘   └───────────┘   └───────────────┘         │ |
|  └─────────────────────────────────────────────────────────────┘ |
|                                                                   |
+------------------------------------------------------------------+

L.9A System Architecture Reference

Detailed Implementation Reference (from former High-Level Architecture chapter, now consolidated here):

Complete System Architecture Diagram

+===========================================================================+
|                           CLOUD LAYER                                      |
|  +------------------+  +------------------+  +------------------+          |
|  |   Shopify API    |  | Payment Gateway  |  |   Tax Service    |          |
|  |  (E-commerce)    |  |  (Stripe/Square) |  |   (TaxJar)       |          |
|  +--------+---------+  +--------+---------+  +--------+---------+          |
|           |                     |                     |                    |
+===========|=====================|=====================|====================+
            |                     |                     |
            v                     v                     v
+===========================================================================+
|                         API GATEWAY LAYER                                  |
|  +---------------------------------------------------------------------+  |
|  |                      Kong / NGINX Gateway                            |  |
|  |  +-------------+  +-------------+  +-------------+  +-------------+  |  |
|  |  | Rate Limit  |  |    Auth     |  |   Routing   |  |   Logging   |  |  |
|  |  +-------------+  +-------------+  +-------------+  +-------------+  |  |
|  +---------------------------------------------------------------------+  |
+===========================================================================+
            |
            v
+===========================================================================+
|                       CENTRAL API LAYER                                    |
|                    (Node.js + Express/Fastify — TypeScript)                |
|                                                                            |
|  +------------------+  +------------------+  +------------------+          |
|  |  Catalog Service |  |  Sales Service   |  | Inventory Service|          |
|  |                  |  |                  |  |                  |          |
|  | - Products       |  | - Transactions   |  | - Stock Levels   |          |
|  | - Categories     |  | - Receipts       |  | - Adjustments    |          |
|  | - Pricing        |  | - Refunds        |  | - Transfers      |          |
|  | - Variants       |  | - Layaways       |  | - Counts         |          |
|  +------------------+  +------------------+  +------------------+          |
|                                                                            |
|  +------------------+  +------------------+  +------------------+          |
|  |Customer Service  |  |Employee Service  |  |  Sync Service    |          |
|  |                  |  |                  |  |                  |          |
|  | - Profiles       |  | - Users          |  | - Shopify Sync   |          |
|  | - Loyalty        |  | - Roles          |  | - Offline Sync   |          |
|  | - History        |  | - Permissions    |  | - Event Queue    |          |
|  | - Credits        |  | - Shifts         |  | - Conflict Res   |          |
|  +------------------+  +------------------+  +------------------+          |
|                                                                            |
+===========================================================================+
            |
            v
+===========================================================================+
|                        DATA LAYER                                          |
|  +---------------------------------------------------------------------+  |
|  |                     PostgreSQL 16 Cluster                            |  |
|  |                                                                       |  |
|  |  +-----------------+  +----------------------------------------------+  |
|  |  |  shared schema  |  |           public schema (RLS)               |  |
|  |  |  (platform)     |  |  All tenant data with tenant_id + RLS      |  |
|  |  +-----------------+  +----------------------------------------------+  |
|  |                                                                       |  |
|  +---------------------------------------------------------------------+  |
|  +------------------+  +------------------+                               |
|  |     Redis        |  |  Event Store     |                               |
|  |  (Cache/Queue)   |  |  (Append-Only)   |                               |
|  +------------------+  +------------------+                               |
+===========================================================================+

+===========================================================================+
|                      CLIENT APPLICATIONS                                   |
|                                                                            |
|  +-------------------------------+           +------------------+          |
|  |         Nexus POS             |           |  Nexus Raptag    |          |
|  |    (React Web App — Vite)     |           | (React Native +  |          |
|  |                               |           |  Expo)           |          |
|  | - Sales Terminal (Cashier)    |           | - RFID Scanning  |          |
|  | - Dashboard / Reports (Mgr)  |           | - Inventory      |          |
|  | - Configuration (Admin)      |           | - Quick Counts   |          |
|  | - Offline SQLite WASM        |           | - Transfers      |          |
|  | - Role-based feature access  |           |                  |          |
|  +-------------------------------+           +------------------+          |
|                                                                            |
+===========================================================================+

Three-Tier Architecture Detail

Tier 1: Cloud Layer (External Services)

| Service | Purpose | Protocol | Data Flow |
|---|---|---|---|
| Shopify API | E-commerce sync | REST/GraphQL | Bidirectional |
| Payment Gateway | Card processing | REST + Webhooks | Request/Response |
| Tax Service | Tax calculation | REST | Request/Response |
| Email Service | Notifications | SMTP/API | Outbound only |
| SMS Service | Alerts | API | Outbound only |

Cloud Integration Flow
======================

Shopify                Payment Gateway              Tax Service
   |                        |                           |
   | Products, Orders       | Authorization             | Rate Lookup
   | Inventory              | Capture                   | Calculation
   |                        | Refund                    |
   v                        v                           v
+----------------------------------------------------------------+
|                    Integration Adapters                         |
|  +---------------+  +------------------+  +------------------+  |
|  |ShopifyAdapter |  | PaymentAdapter   |  |  TaxAdapter      |  |
|  +---------------+  +------------------+  +------------------+  |
+----------------------------------------------------------------+
                              |
                              v
                    [Central API Services]

Tier 2: Central API Layer (Application Services)

API Gateway
Request Flow Through Gateway
============================

Client Request
      |
      v
+--------------------------------------------------+
|                  API GATEWAY                      |
|                                                   |
|  1. [Rate Limiting] -----> 100 req/min/client    |
|           |                                       |
|           v                                       |
|  2. [Authentication] ----> JWT Validation        |
|           |                                       |
|           v                                       |
|  3. [Tenant Resolution] -> Extract tenant_id     |
|           |                                       |
|           v                                       |
|  4. [Request Logging] ---> Correlation ID        |
|           |                                       |
|           v                                       |
|  5. [Route to Service] --> /api/v1/sales/*       |
|                                                   |
+--------------------------------------------------+
            |
            v
      Service Handler
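
The five gateway steps above can be sketched as a middleware chain. This is a minimal TypeScript illustration, not the actual Kong/NGINX plugin configuration; the `Ctx` shape and the `tenantId:role` token encoding are assumptions for the sketch.

```typescript
// Illustrative gateway pipeline: rate limit -> authenticate -> tenant -> log.
type Ctx = {
  clientId: string;
  token?: string;
  tenantId?: string;
  correlationId?: string;
  path: string;
};
type Middleware = (ctx: Ctx) => Ctx;

const hits = new Map<string, number>(); // per-client counters (reset each minute in reality)

const rateLimit: Middleware = (ctx) => {
  const n = (hits.get(ctx.clientId) ?? 0) + 1;
  hits.set(ctx.clientId, n);
  if (n > 100) throw new Error("429 Too Many Requests"); // 100 req/min/client
  return ctx;
};

const authenticate: Middleware = (ctx) => {
  if (!ctx.token) throw new Error("401 Unauthorized"); // stands in for JWT signature/expiry checks
  return ctx;
};

// Assumed token shape "tenantId:role", purely for illustration.
const resolveTenant: Middleware = (ctx) => ({ ...ctx, tenantId: ctx.token!.split(":")[0] });

const logRequest: Middleware = (ctx) => ({
  ...ctx,
  correlationId: `req-${Math.random().toString(36).slice(2, 10)}`,
});

// Same order as the diagram; routing (step 5) would dispatch on ctx.path.
function pipeline(ctx: Ctx): Ctx {
  return [rateLimit, authenticate, resolveTenant, logRequest].reduce((c, mw) => mw(c), ctx);
}

const out = pipeline({ clientId: "pos-1", token: "t42:cashier", path: "/api/v1/sales" });
console.log(out.tenantId); // "t42"
```

Ordering matters: rate limiting runs before JWT validation so unauthenticated floods are rejected cheaply.
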
Core Services
| Service | Responsibilities | Key Endpoints |
|---|---|---|
| Catalog Service | Products, categories, pricing, variants | /api/v1/products/* |
| Sales Service | Transactions, receipts, refunds, holds | /api/v1/sales/* |
| Inventory Service | Stock levels, adjustments, transfers | /api/v1/inventory/* |
| Customer Service | Profiles, loyalty, purchase history | /api/v1/customers/* |
| Employee Service | Users, roles, permissions, shifts | /api/v1/employees/* |
| Sync Service | Offline sync, conflict resolution | /api/v1/sync/* |

Tier 3: Data Layer (Persistence)

Data Layer Architecture
=======================

+------------------+     +------------------+     +------------------+
|   PostgreSQL     |     |      Redis       |     |   Event Store    |
|   (Primary DB)   |     |   (Cache/Queue)  |     | (Append-Only)    |
+------------------+     +------------------+     +------------------+
        |                        |                        |
        |                        |                        |
+-------v------------------------v------------------------v--------+
|                                                                   |
|   Schema: shared         Cache Keys           Events              |
|   +--------------+       +------------+       +-------------+     |
|   | tenants      |       | product:   |       | SaleCreated |     |
|   | plans        |       |   {id}     |       | ItemAdded   |     |
|   | features     |       | session:   |       | PaymentRcvd |     |
|   +--------------+       |   {token}  |       | StockAdj    |     |
|                          | inventory: |       +-------------+     |
|   Schema: public (RLS)   |   {sku}    |                           |
|   +--------------+       +------------+                           |
|   | products     |  (all tables have tenant_id + RLS policies)    |
|   | sales        |                                                |
|   | inventory    |                                                |
|   | customers    |                                                |
|   +--------------+                                                |
|                                                                   |
+-------------------------------------------------------------------+
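
The note above ("all tables have tenant_id + RLS policies") is the core multi-tenancy guarantee. The sketch below is conceptual only: it simulates the effect of a Postgres RLS policy (`tenant_id = current_setting('app.current_tenant')`) with an in-memory filter; the row data and the `withTenant` helper are hypothetical.

```typescript
// Conceptual RLS simulation: rows carry tenant_id, reads are implicitly
// filtered by the current tenant setting. Not a real database client.
interface Row { tenant_id: string; id: string; name: string }

const products: Row[] = [
  { tenant_id: "t1", id: "p1", name: "Mug" },
  { tenant_id: "t2", id: "p2", name: "Hat" },
];

// Stand-in for setting the tenant before a query; RLS then hides other rows.
function withTenant<T>(tenant: string, query: (visible: Row[]) => T): T {
  const visible = products.filter((r) => r.tenant_id === tenant);
  return query(visible);
}

const names = withTenant("t1", (rows) => rows.map((r) => r.name));
console.log(names); // ["Mug"]
```

The practical payoff is that application code never appends `WHERE tenant_id = ?` by hand; a forgotten filter fails closed instead of leaking another tenant's rows.
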

Client Applications

Nexus POS (Web Application)

Nexus POS Architecture
======================

+-------------------------------------------------------------------+
|          NEXUS POS (React Web App — Vite / TypeScript)              |
|                                                                    |
|  +-----------------------+      +---------------------------+     |
|  |      UI Layer         |      |     Local Storage         |     |
|  |  (React + Zustand +   |      |  +--------------------+   |     |
|  |   React Query)        |      |  | SQLite WASM        |   |     |
|  |  +----------------+   |      |  | (sql.js + OPFS)    |   |     |
|  |  | Sales Screen   |   |      |  |                    |   |     |
|  |  +----------------+   |      |  | - product_cache    |   |     |
|  |  | Product Grid   |   |      |  | - sales_queue      |   |     |
|  |  +----------------+   |      |  +--------------------+   |     |
|  |  | Cart Panel     |   |      |                           |     |
|  |  +----------------+   |      +---------------------------+     |
|  |  | Payment Dialog |   |                                        |
|  |  +----------------+   |                                        |
|  |  | Dashboard/Admin|   |   (Role-based: Cashier sees sales,     |
|  |  | (role-gated)   |   |    Manager sees reports, Admin sees    |
|  |  +----------------+   |    configuration — single app)         |
|  +-----------------------+                                        |
|                                                                    |
|  +-----------------------+      +---------------------------+     |
|  |    Service Layer      |      |    Hardware Layer         |     |
|  |  (React Query +       |      |  (Web APIs / SDKs)        |     |
|  |   Zustand stores)     |      |  +--------------------+   |     |
|  |  +----------------+   |      |  | Star WebPRNT       |   |     |
|  |  | SaleService    |   |      |  | (Receipt Printer)  |   |     |
|  |  +----------------+   |      |  +--------------------+   |     |
|  |  | SyncService    |   |      |  | USB HID Wedge      |   |     |
|  |  +----------------+   |      |  | (Barcode Scanner)  |   |     |
|  |  | OfflineService |   |      |  +--------------------+   |     |
|  |  +----------------+   |      |  | Kick-out Cable     |   |     |
|  +-----------------------+      |  | (via Printer Port) |   |     |
|                                 |  +--------------------+   |     |
|                                 |  | Stripe Terminal SDK|   |     |
|                                 |  | (Card Reader)      |   |     |
|                                 |  +--------------------+   |     |
|                                 +---------------------------+     |
+-------------------------------------------------------------------+

Note: Nexus Admin was merged into Nexus POS in v7.0.0 (ADR-052). All admin features (Dashboard, Products, Reports, Settings, User Management) are now role-gated screens within the single Nexus POS web application. See the Nexus POS architecture diagram above.

Nexus Raptag (Mobile RFID)

Nexus Raptag Architecture
=========================

+-------------------------------------------------------------------+
|               NEXUS RAPTAG (React Native + Expo)                   |
|                                                                    |
|  +------------------------+    +---------------------------+      |
|  |      RFID Layer        |    |       UI Layer            |      |
|  |  +------------------+  |    |  (React Native +          |      |
|  |  | Zebra SDK        |  |    |   React Query + Zustand)  |      |
|  |  | (Native Module)  |  |    |  +---------------------+  |      |
|  |  +------------------+  |    |  | Scan Screen         |  |      |
|  |  | Tag Parser       |  |    |  +---------------------+  |      |
|  |  +------------------+  |    |  | Inventory Count     |  |      |
|  |  | Batch Processor  |  |    |  +---------------------+  |      |
|  |  +------------------+  |    |  | Transfer Screen     |  |      |
|  +------------------------+    |  +---------------------+  |      |
|                                +---------------------------+      |
|                                                                    |
|  +------------------------+    +---------------------------+      |
|  |    Local Storage       |    |     API Client            |      |
|  |  +------------------+  |    |  +---------------------+  |      |
|  |  | SQLite           |  |    |  | fetch / axios       |  |      |
|  |  | (expo-sqlite)    |  |    |  +---------------------+  |      |
|  |  +------------------+  |    |  | Offline Queue       |  |      |
|  |  | Scan Buffer      |  |    |  +---------------------+  |      |
|  |  +------------------+  |    |                           |      |
|  +------------------------+    +---------------------------+      |
+-------------------------------------------------------------------+

Service Boundaries

Service Boundary Diagram
========================

+-------------------+      +-------------------+      +-------------------+
|  Catalog Service  |      |   Sales Service   |      |Inventory Service  |
|                   |      |                   |      |                   |
| OWNS:             |      | OWNS:             |      | OWNS:             |
| - products        |      | - sales           |      | - inventory_items |
| - categories      |      | - line_items      |      | - stock_levels    |
| - pricing_rules   |      | - payments        |      | - adjustments     |
| - product_variants|      | - refunds         |      | - transfers       |
| - product_images  |      | - holds           |      | - count_sessions  |
|                   |      |                   |      |                   |
| REFERENCES:       |      | REFERENCES:       |      | REFERENCES:       |
| (none)            |      | - product_id      |      | - product_id      |
|                   |      | - customer_id     |      | - location_id     |
|                   |      | - employee_id     |      |                   |
+-------------------+      +-------------------+      +-------------------+

+-------------------+      +-------------------+
| Customer Service  |      | Employee Service  |
|                   |      |                   |
| OWNS:             |      | OWNS:             |
| - customers       |      | - employees       |
| - loyalty_cards   |      | - roles           |
| - store_credits   |      | - permissions     |
| - addresses       |      | - shifts          |
|                   |      | - time_entries    |
| REFERENCES:       |      |                   |
| (none)            |      | REFERENCES:       |
|                   |      | - location_id     |
+-------------------+      +-------------------+

Technology Stack Summary

| Layer | Technology | Justification |
|---|---|---|
| API Gateway | Kong or NGINX | Proven, scalable, plugin ecosystem |
| Central API | Node.js + Express/Fastify (TypeScript) | Unified TypeScript stack, async I/O, rich npm ecosystem |
| ORM | Prisma | Type-safe queries, auto-generated client, declarative migrations |
| Validation | Zod | Runtime + compile-time schema validation, TypeScript-native |
| Database | PostgreSQL 16 | Multi-tenant support, JSON support, reliability |
| Cache | Redis (ioredis) | Session storage, real-time features |
| Event Store | PostgreSQL (append-only) | Simplicity, same DB engine |
| Nexus POS | React/TypeScript (Vite) + TailwindCSS + shadcn/ui | Single web app for all roles (cashier, manager, admin). Hardware via web APIs (Star WebPRNT, USB HID wedge, Stripe Terminal SDK). Offline fallback via SQLite WASM (sql.js + OPFS). React Query + Zustand for state. |
| Nexus Raptag | React Native + Expo | Cross-platform mobile, Zebra RFID SDK (native module) |
| Real-time | Socket.io | Inventory broadcasts, notifications, WebSocket with fallback |
| Auth | jose + argon2 | RS256 JWT signing, Argon2id password hashing |
| Logging | Pino | Structured JSON logging, high performance |
| Testing | Vitest | Fast unit/integration testing, TypeScript-native |
| Telemetry | @opentelemetry/sdk-node | Traces, metrics, logs; vendor-neutral |
| Package Manager | pnpm | Fast installs, strict dependency resolution, workspace support |

Deployment Topology

Production Deployment
=====================

                        +------------------+
                        |   Load Balancer  |
                        |   (HAProxy/ALB)  |
                        +--------+---------+
                                 |
          +----------------------+----------------------+
          |                      |                      |
+---------v--------+   +---------v--------+   +---------v--------+
|   API Server 1   |   |   API Server 2   |   |   API Server 3   |
|                  |   |                  |   |                  |
|  - Central API   |   |  - Central API   |   |  - Central API   |
|  - Stateless     |   |  - Stateless     |   |  - Stateless     |
+--------+---------+   +---------+--------+   +---------+--------+
         |                       |                      |
         +----------+------------+-----------+----------+
                    |                        |
          +---------v--------+     +---------v--------+
          |   PostgreSQL     |     |      Redis       |
          |   (Primary)      |     |   (Cluster)      |
          +--------+---------+     +------------------+
                   |
          +--------v---------+
          |   PostgreSQL     |
          |   (Replica)      |
          +------------------+

Store Locations (5 stores):
+----------------+   +----------------+   +----------------+
| GM Store       |   | HM Store       |   | LM Store       |
| +------------+ |   | +------------+ |   | +------------+ |
| |Nexus POS  1| |   | |Nexus POS  1| |   | |Nexus POS  1| |
| +------------+ |   | +------------+ |   | +------------+ |
| |Nexus POS  2| |   | |Nexus POS  2| |   +----------------+
| +------------+ |   | +------------+ |
+----------------+   +----------------+

Security Architecture

Security Layers
===============

+------------------------------------------------------------------+
|                        INTERNET                                   |
+---------------------------+--------------------------------------+
                            |
                            v
+---------------------------+--------------------------------------+
|                    TLS TERMINATION                                |
|                    (Let's Encrypt)                                |
+---------------------------+--------------------------------------+
                            |
                            v
+------------------------------------------------------------------+
|                    API GATEWAY                                    |
|  +-----------------------+  +-----------------------+             |
|  | Rate Limiting         |  | IP Whitelisting       |             |
|  | 100 req/min/client    |  |                       |             |
|  +-----------------------+  +-----------------------+             |
+---------------------------+--------------------------------------+
                            |
                            v
+------------------------------------------------------------------+
|                    AUTHENTICATION                                 |
|  +-----------------------+  +-----------------------+             |
|  | JWT Validation        |  | PIN Verification      |             |
|  | - Signature check     |  | - Employee clock-in   |             |
|  | - Expiry check        |  | - Sensitive actions   |             |
|  | - Tenant claim        |  +-----------------------+             |
|  +-----------------------+                                        |
+---------------------------+--------------------------------------+
                            |
                            v
+------------------------------------------------------------------+
|                    AUTHORIZATION                                  |
|  +-----------------------+  +-----------------------+             |
|  | Role-Based (RBAC)     |  | Permission Policies   |             |
|  | - Admin               |  | - can:create_sale     |             |
|  | - Manager             |  | - can:void_sale       |             |
|  | - Cashier             |  | - can:view_reports    |             |
|  +-----------------------+  +-----------------------+             |
+------------------------------------------------------------------+
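
The authorization layer above combines coarse roles with fine-grained permission policies. A minimal sketch, assuming an illustrative role-to-permission mapping (actual policies live in the Employee Service's roles/permissions tables):

```typescript
// RBAC + permission-policy check. The mapping below is an assumption for
// illustration; real assignments come from the Employee Service.
type Permission = "can:create_sale" | "can:void_sale" | "can:view_reports";

const rolePermissions: Record<string, Permission[]> = {
  cashier: ["can:create_sale"],
  manager: ["can:create_sale", "can:void_sale", "can:view_reports"],
  admin: ["can:create_sale", "can:void_sale", "can:view_reports"],
};

// Deny by default: unknown roles or missing permissions are rejected.
function authorize(role: string, needed: Permission): boolean {
  return rolePermissions[role]?.includes(needed) ?? false;
}

console.log(authorize("cashier", "can:void_sale")); // false
console.log(authorize("manager", "can:void_sale")); // true
```

Checking permissions rather than roles at call sites keeps handlers stable when role definitions change.
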

L.9B Data Flow Reference

Detailed Implementation Reference (from former High-Level Architecture chapter, now consolidated here):

Pattern 1: Online Sale Flow

Online Sale Flow
================

[Nexus POS]                     [Central API]                   [Database]
     |                               |                               |
     | 1. POST /sales                |                               |
     |------------------------------>|                               |
     |                               | 2. Validate request           |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 3. Begin transaction          |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 4. Create sale record         |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 5. Decrement inventory        |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 6. Log sale event             |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 7. Commit transaction         |
     |                               |------------------------------>|
     |                               |                               |
     | 8. Return sale confirmation   |                               |
     |<------------------------------|                               |
     |                               |                               |
     | 9. Print receipt              |                               |
     |                               |                               |
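
Steps 3 through 7 run inside one database transaction: a failed inventory decrement rolls back the sale record and the event. A sketch of that all-or-nothing behaviour, with an in-memory snapshot standing in for a PostgreSQL transaction (the `Db` shape is illustrative):

```typescript
// Sale insert, inventory decrement, and event log either all commit or none do.
interface Db {
  sales: string[];
  stock: Record<string, number>;
  events: string[];
}

function createSale(db: Db, saleId: string, sku: string, qty: number): Db {
  const next: Db = structuredClone(db);            // 3. BEGIN (work on a copy)
  next.sales.push(saleId);                         // 4. create sale record
  next.stock[sku] = (next.stock[sku] ?? 0) - qty;  // 5. decrement inventory
  if (next.stock[sku] < 0) throw new Error("insufficient stock"); // ROLLBACK
  next.events.push(`SaleCreated:${saleId}`);       // 6. log sale event
  return next;                                     // 7. COMMIT (swap snapshot)
}

let db: Db = { sales: [], stock: { "SKU-1": 3 }, events: [] };
db = createSale(db, "s1", "SKU-1", 2);
console.log(db.stock["SKU-1"]); // 1
```

A thrown error leaves the original `db` untouched, mirroring how a rollback leaves no partial sale behind.
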

Pattern 2: Offline Sale Flow

Offline Sale Flow
=================

[Nexus POS]                     [Local SQLite]                  [Sync Queue]
     |                               |                               |
     | 1. Create sale locally        |                               |
     |------------------------------>|                               |
     |                               | 2. Generate local UUID        |
     |                               |                               |
     | 3. Decrement local inventory  |                               |
     |------------------------------>|                               |
     |                               |                               |
     | 4. Queue for sync             |                               |
     |-------------------------------------------------------------->|
     |                               |                               |
     | 5. Print receipt              |                               |
     |                               |                               |

--- Later, when online ---

[Sync Service]                  [Central API]                   [Database]
     |                               |                               |
     | 1. Pop from queue             |                               |
     |                               |                               |
     | 2. POST /sync/sales           |                               |
     |------------------------------>|                               |
     |                               | 3. Validate (check for dupe)  |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 4. Insert with local UUID     |
     |                               |------------------------------>|
     |                               |                               |
     | 5. Mark synced                |                               |
     |<------------------------------|                               |
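
Step 3 ("check for dupe") is what makes the replayed POST idempotent: the client-generated local UUID acts as the dedupe key, so a retry after a dropped connection cannot double-insert the sale. A sketch, with an in-memory `Set` standing in for the unique constraint on the sales table:

```typescript
// Idempotent sync: the local UUID is the dedupe key for replayed POSTs.
interface QueuedSale { localId: string; total: number }

const persisted = new Set<string>(); // stands in for a UNIQUE constraint

function syncSale(sale: QueuedSale): "inserted" | "duplicate" {
  if (persisted.has(sale.localId)) return "duplicate"; // replayed POST: no-op
  persisted.add(sale.localId); // insert keeping the local UUID as the key
  return "inserted";
}

const sale = { localId: "uuid-1", total: 42.5 };
console.log(syncSale(sale)); // "inserted"
console.log(syncSale(sale)); // "duplicate" (retry after network drop)
```
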

Pattern 3: Inventory Sync Flow

Inventory Sync from Shopify
===========================

[Shopify]                      [Webhook Handler]               [Inventory Svc]
     |                               |                               |
     | 1. inventory_levels/update    |                               |
     |------------------------------>|                               |
     |                               | 2. Validate webhook           |
     |                               |                               |
     |                               | 3. Parse inventory update     |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 4. Update stock level         |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 5. Log inventory event        |
     |                               |------------------------------>|
     |                               |                               |
     |                               | 6. Broadcast to POS clients   |
     |                               |------------------------------>|
     |                               |          (Socket.io)          |

L.9C Domain Model Reference

Domain Model Overview (from former Domain Model chapter, now consolidated here). NOTE: Only bounded contexts, aggregates, and the ER diagram are included here; detailed entity field definitions are in the Part III Database chapters.

Bounded Contexts Overview

Domain Bounded Contexts
=======================

+------------------------------------------------------------------+
|                         POS PLATFORM                              |
|                                                                   |
|  +-------------+  +-------------+  +-------------+               |
|  |   CATALOG   |  |    SALES    |  | INVENTORY   |               |
|  |             |  |             |  |             |               |
|  | Products    |  | Sales       |  | StockLevels |               |
|  | Variants    |  | LineItems   |  | Adjustments |               |
|  | Categories  |  | Payments    |  | Transfers   |               |
|  | Pricing     |  | Refunds     |  | Counts      |               |
|  +-------------+  +-------------+  +-------------+               |
|                                                                   |
|  +-------------+  +-------------+  +-------------+               |
|  |  CUSTOMER   |  |  EMPLOYEE   |  |  LOCATION   |               |
|  |             |  |             |  |             |               |
|  | Customers   |  | Employees   |  | Locations   |               |
|  | Addresses   |  | Roles       |  | Registers   |               |
|  | Loyalty     |  | Permissions |  | Settings    |               |
|  | Credits     |  | Shifts      |  | TaxRates    |               |
|  +-------------+  +-------------+  +-------------+               |
|                                                                   |
+------------------------------------------------------------------+

Context Summary Table

| Context | Entities | Purpose |
|---|---|---|
| Catalog | Product, Variant, Category, PricingRule | Product management |
| Sales | Sale, LineItem, Payment, Refund | Transaction processing |
| Inventory | InventoryItem, Adjustment, Transfer | Stock management |
| Customer | Customer, Address, Credit, Loyalty | Customer management |
| Employee | Employee, Role, Permission, Shift | Staff management |
| Location | Location, Register, Drawer, TaxRate | Store configuration |

Entity Relationship Diagram

Entity Relationships
====================

                                 +----------+
                                 | Category |
                                 +----+-----+
                                      |
                                      | 1:N
                                      v
+----------+     1:N      +----------+     1:N      +----------------+
| Location |<-------------| Product  |------------->| ProductVariant |
+----+-----+              +----+-----+              +-------+--------+
     |                         |                            |
     |                         |                            |
     | 1:N                     |                            |
     v                         |                            |
+----------+                   |                            |
| Register |                   v                            v
+----+-----+              +----------+              +----------------+
     |                    |Inventory |              |  Adjustment    |
     |                    |   Item   |              |     Item       |
     | 1:N                +----------+              +----------------+
     v
+----------+
|CashDrawer|
+----------+


+----------+     1:N      +----------+     1:N      +----------+
| Customer |------------->|   Sale   |------------->| LineItem |
+----+-----+              +----+-----+              +----------+
     |                         |
     |                         | 1:N
     | 1:N                     v
     v                    +----------+
+----------+              | Payment  |
|  Credit  |              +----------+
+----------+


+----------+     N:1      +----------+     1:N      +----------+
| Employee |------------->|   Role   |------------->|Permission|
+----+-----+              +----------+              +----------+
     |
     | 1:N
     v
+----------+
|  Shift   |
+----------+

Aggregate Boundaries

Each aggregate has a root entity and encapsulates related entities:

Aggregate Definitions
=====================

SALE Aggregate
+------------------------------------------+
| Sale (Root)                              |
|   +-- SaleLineItem[] (owned)             |
|   +-- Payment[] (owned)                  |
|   +-- Refund[] (reference: sale_id)      |
+------------------------------------------+

INVENTORY_ADJUSTMENT Aggregate
+------------------------------------------+
| InventoryAdjustment (Root)               |
|   +-- InventoryAdjustmentItem[] (owned)  |
+------------------------------------------+

INVENTORY_TRANSFER Aggregate
+------------------------------------------+
| InventoryTransfer (Root)                 |
|   +-- InventoryTransferItem[] (owned)    |
+------------------------------------------+

CUSTOMER Aggregate
+------------------------------------------+
| Customer (Root)                          |
|   +-- CustomerAddress[] (owned)          |
|   +-- StoreCredit[] (reference)          |
|   +-- LoyaltyTransaction[] (reference)   |
+------------------------------------------+

PRODUCT Aggregate
+------------------------------------------+
| Product (Root)                           |
|   +-- ProductVariant[] (owned)           |
+------------------------------------------+
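
As an illustration of the aggregate-root rule, here is a sketch of the SALE aggregate enforcing one invariant: payments may never exceed the sale total. Field names are illustrative; full entity definitions are in Part III.

```typescript
// The Sale root owns its line items and payments; all mutations go through
// the root, which is where invariants are enforced.
class Sale {
  private lineItems: { sku: string; price: number; qty: number }[] = [];
  private payments: number[] = [];

  addItem(sku: string, price: number, qty: number) {
    this.lineItems.push({ sku, price, qty });
  }

  get total(): number {
    return this.lineItems.reduce((s, li) => s + li.price * li.qty, 0);
  }

  addPayment(amount: number) {
    const paid = this.payments.reduce((s, p) => s + p, 0);
    if (paid + amount > this.total) throw new Error("overpayment"); // invariant
    this.payments.push(amount);
  }
}

const sale = new Sale();
sale.addItem("SKU-1", 10, 2);
sale.addPayment(15);
sale.addPayment(5);
console.log(sale.total); // 20
```

Because callers cannot touch `payments` directly, the invariant holds no matter which service or screen drives the sale.
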

L.10 Risks & Mitigations

| Risk | Mitigation Strategy |
|---|---|
| Sync Conflicts | Use Event Sourcing to replay conflicting events deterministically. First-commit-wins for inventory with backorder escalation. |
| Observability Overload | LGTM stack with integration-specific dashboards: circuit breaker state, DLQ depth, sync latency, safety buffer violations, disapproval rate per channel. |
| GenAI Code Risks | 6-Gate Security Pyramid: SAST + SCA + Secrets + dependency-cruiser + Pact + Manual Review. Architecture conformance tests prevent module boundary violations. |
| PCI-DSS Non-Compliance | FIM via Wazuh agents on all POS nodes. SCA via Snyk. SBOM generation. Session management with 15-minute timeout. |
| Supply Chain Attacks | Package firewall at proxy level. Real-time SBOM. Automated dependency updates with vulnerability scanning. |
| External API Cascade Failure | Circuit breaker (5 failures/60s → OPEN). Module 6 as Extractable Integration Gateway with failure isolation. Bulkheaded thread pools. |
| Credential Compromise | HashiCorp Vault with key hierarchy. 90-day automated rotation. Emergency rotation procedures. Least-privilege access policies. |
| Overselling Across Channels | Safety buffer computation with 4-level priority resolution. Transactional Outbox for atomic inventory + event. First-commit-wins with backorder escalation. |
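
The "circuit breaker (5 failures/60s → OPEN)" threshold can be sketched as a sliding-window failure counter. Timestamps are injected so the sketch is deterministic; a real implementation would also need a HALF_OPEN probe state, omitted here.

```typescript
// Sliding-window circuit breaker: 5 failures inside 60 s opens the circuit.
class CircuitBreaker {
  private failures: number[] = []; // failure timestamps (ms)
  constructor(
    private readonly maxFailures = 5,
    private readonly windowMs = 60_000,
  ) {}

  recordFailure(nowMs: number) {
    this.failures.push(nowMs);
  }

  state(nowMs: number): "CLOSED" | "OPEN" {
    // Drop failures that have aged out of the window, then compare.
    this.failures = this.failures.filter((t) => nowMs - t < this.windowMs);
    return this.failures.length >= this.maxFailures ? "OPEN" : "CLOSED";
  }
}

const cb = new CircuitBreaker();
for (let i = 0; i < 5; i++) cb.recordFailure(i * 1000); // 5 failures in 5 s
console.log(cb.state(10_000)); // "OPEN"
console.log(cb.state(70_000)); // "CLOSED" (window expired)
```

While OPEN, calls to the external API are rejected immediately, which is what prevents one failing channel from exhausting the bulkheaded thread pools.
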

L.10A Key Architecture Decisions (BRD-v12)

This section documents critical architecture decisions derived from BRD-v12 requirements analysis. Each decision follows the Architecture Decision Record (ADR) format.

L.10A.1 Online-First with Offline Fallback

| Attribute | Value |
|---|---|
| Decision ID | ADR-048 |
| Context | POS terminals operate in retail environments with reliable internet (outages measured in minutes/year). The original offline-first design (ADR-002) created daily complexity for a rare event. |
| Decision | Online-First with thin offline safety net (2-table SQLite fallback) |
| Alternatives Considered | 1) Offline-first with 6-table SQLite + CRDTs (ADR-002, superseded), 2) Online-first with thin fallback (selected), 3) Online-only, no offline capability (rejected) |
| Rationale | Online-first optimizes for the 99.99% case; 2-table SQLite provides minimum viable offline sales; eliminates CRDTs, platform-aware hooks, and sync priority tiers |
| Reference | ADR-048, BRD §1.16 |

Data Access Strategy:

┌─────────────────────────────────────────────────────────────┐
│              ONLINE-FIRST DATA ACCESS                        │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  UI Component → useProduct(barcode)                          │
│                      │                                       │
│       ┌──────────────┼──────────────┐                        │
│       │              │              │                        │
│       ▼              ▼              ▼                        │
│  ┌─────────┐   ┌──────────┐   ┌──────────┐                  │
│  │ ONLINE  │   │ DEGRADED │   │ OFFLINE  │                  │
│  │         │   │          │   │          │                  │
│  │ React   │   │ Try API  │   │ SQLite   │                  │
│  │ Query → │   │ (2s) →   │   │ product  │                  │
│  │ Central │   │ fallback │   │ _cache   │                  │
│  │ API     │   │ SQLite   │   │          │                  │
│  └─────────┘   └──────────┘   └──────────┘                  │
│                                                              │
│  Writes:                                                     │
│  ONLINE    → POST to Central API directly                    │
│  DEGRADED  → POST to API + queue locally as backup           │
│  OFFLINE   → Append to sales_queue (flush on recovery)       │
└─────────────────────────────────────────────────────────────┘
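
The DEGRADED path above (try the API within a 2 s budget, fall back to the SQLite `product_cache`) can be sketched as a timeout race. `fetchFromApi` and `readFromCache` are hypothetical stand-ins for the real data sources.

```typescript
// Online-first read with a 2 s budget and cache fallback.
interface Product { barcode: string; name: string; price: number }

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },  // cancel timer on success
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function getProduct(
  barcode: string,
  fetchFromApi: (b: string) => Promise<Product>,     // Central API (assumed)
  readFromCache: (b: string) => Product | undefined, // SQLite product_cache (assumed)
): Promise<Product | undefined> {
  try {
    return await withTimeout(fetchFromApi(barcode), 2000); // ONLINE / DEGRADED
  } catch {
    return readFromCache(barcode); // OFFLINE: server-authoritative cache, read-only
  }
}

// Fast path: the API answers within budget, cache is never consulted.
getProduct("b1", async (b) => ({ barcode: b, name: "Mug", price: 5 }), () => undefined)
  .then((p) => console.log(p?.name)); // "Mug"
```

Clearing the timer on settle avoids a dangling rejection from the losing side of the race.
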

Operations During Offline:

online_first:
  # All operations go through Central API when online.
  # During offline fallback, only these are available:

  allowed_offline:
    - sale_new              # Prices from product_cache, sale to sales_queue
    - return_with_receipt   # Receipt data available locally
    - price_check           # From product_cache
    - parked_sale_create    # Local cart state
    - parked_sale_retrieve  # Local cart state

  blocked_offline:
    - customer_create        # Requires uniqueness check
    - credit_limit_check     # Requires real-time balance
    - on_account_payment     # Risk of exceeding limit
    - multi_store_inventory  # Requires network
    - gift_card_activation   # Must register immediately
    - gift_card_reload       # Risk of double-load
    - transfer_request       # Requires other store
    - reservation_create     # Requires other store

  staleness_warning:
    threshold_minutes: 60   # Show "prices may be outdated" banner
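
The data access layer can enforce this policy with a simple guard in front of each operation. A minimal sketch, assuming the policy above (the `OfflineOperation` union and helper names are illustrative, not from the BRD):

```typescript
// Hypothetical guard mirroring the allowed_offline / blocked_offline policy above.
type OfflineOperation =
  | 'sale_new' | 'return_with_receipt' | 'price_check'
  | 'parked_sale_create' | 'parked_sale_retrieve'
  | 'customer_create' | 'credit_limit_check' | 'on_account_payment'
  | 'multi_store_inventory' | 'gift_card_activation' | 'gift_card_reload'
  | 'transfer_request' | 'reservation_create';

const ALLOWED_OFFLINE: ReadonlySet<OfflineOperation> = new Set([
  'sale_new', 'return_with_receipt', 'price_check',
  'parked_sale_create', 'parked_sale_retrieve',
]);

const STALENESS_THRESHOLD_MINUTES = 60; // from staleness_warning above

export function isAllowedOffline(op: OfflineOperation): boolean {
  return ALLOWED_OFFLINE.has(op);
}

// Show the "prices may be outdated" banner when the cache is older than the threshold.
export function isCacheStale(lastRefreshedIso: string, now: Date = new Date()): boolean {
  const ageMinutes = (now.getTime() - new Date(lastRefreshedIso).getTime()) / 60_000;
  return ageMinutes > STALENESS_THRESHOLD_MINUTES;
}
```

In OFFLINE or DEGRADED state the UI would disable any action where `isAllowedOffline` returns false, rather than letting the request fail at the API layer.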

L.10A.1A Nexus POS Architecture (Online-First)

Online-first architecture: POS terminal connects directly to Central API via React Query. SQLite provides offline fallback only.

Nexus POS Client Architecture (Online-First)
=============================================

+-----------------------------------------------------------------------+
|                NEXUS POS (React Web App — Vite/TypeScript)             |
|                                                                        |
|  +------------------------+        +-------------------------------+  |
|  |      Presentation      |        |   Offline Fallback (SQLite)   |  |
|  |                        |        |                               |  |
|  |  +------------------+  |        |  +-------------------------+  |  |
|  |  |   Sales Screen   |  |        |  |   product_cache         |  |  |
|  |  +------------------+  |        |  |   (read-only, server-   |  |  |
|  |  |  Product Grid    |  |        |  |   authoritative)        |  |  |
|  |  +------------------+  |        |  +-------------------------+  |  |
|  |  |   Cart Panel     |  |        |  |   sales_queue           |  |  |
|  |  +------------------+  |        |  |   (append-only, flush   |  |  |
|  |  |  Payment Dialog  |  |        |  |   on recovery)          |  |  |
|  |  +------------------+  |        |  +-------------------------+  |  |
|  |  |  Receipt Print   |  |        |                               |  |
|  |  +------------------+  |        +-------------------------------+  |
|  +------------------------+                    ^                       |
|             |                                  | (OFFLINE/DEGRADED     |
|             v                                  |  fallback only)       |
|  +------------------------+                    |                       |
|  |   Data Access Layer    |--------------------+                      |
|  |                        |                                           |
|  |  useProduct(barcode)   |   Routes transparently based on           |
|  |  useCompleteSale()     |   connection state. Components            |
|  |  useInventory()        |   never know which path was taken.        |
|  +------------------------+                                           |
|             | (ONLINE: primary path)                                  |
|             v                                                         |
|  +------------------------+        +-------------------------------+  |
|  |  React Query + Cache   |        | Connection Monitor (3-State)  |  |
|  |                        |        |                               |  |
|  |  - In-memory cache     |<------>|  - ONLINE: API + WebSocket    |  |
|  |  - Stale-while-revali- |        |  - DEGRADED: try API, fall-   |  |
|  |    date pattern        |        |    back to SQLite cache       |  |
|  |  - Background refetch  |        |  - OFFLINE: SQLite only       |  |
|  +------------------------+        +-------------------------------+  |
|             |                                                         |
+-------------|----------------------------------------------------------+
              | (always, when online)
              v
+-----------------------------------------------------------------------+
|                          CENTRAL API                                   |
|  - WebSocket push (config changes, inventory updates)                 |
|  - REST API (reads + writes)                                          |
|  - Redis cache (product lookups <5ms server-side)                     |
+-----------------------------------------------------------------------+

L.10A.1B Local Database Schema (SQLite — 2 Tables)

Minimal offline fallback: Only 2 tables needed — a read-only product cache for pricing lookups and an append-only sales queue for offline transactions. All other data (inventory, customers, settings) is accessed via Central API in real-time.

-- SQLite Schema for Nexus POS Offline Fallback (sql.js WASM + OPFS)
-- Only 2 tables — minimal footprint for rare offline events

-- Product cache (read-only, server-authoritative)
-- Pre-warmed on startup, updated by WebSocket push events
CREATE TABLE product_cache (
    id              TEXT PRIMARY KEY,
    sku             TEXT UNIQUE NOT NULL,
    barcode         TEXT,
    name            TEXT NOT NULL,
    category_name   TEXT,
    price           REAL NOT NULL,
    cost            REAL,
    tax_code        TEXT,
    is_taxable      INTEGER DEFAULT 1,
    variants_json   TEXT,              -- JSON array of variants
    last_refreshed  TEXT NOT NULL      -- For staleness detection
);

CREATE INDEX idx_product_cache_barcode ON product_cache(barcode);
CREATE INDEX idx_product_cache_sku ON product_cache(sku);

-- Sales queue (append-only, offline transactions)
-- Written only during OFFLINE/DEGRADED states
-- Flushed to Central API on recovery (FIFO, oldest first)
CREATE TABLE sales_queue (
    sale_id         TEXT PRIMARY KEY,  -- UUID for idempotent upsert
    sale_number     TEXT UNIQUE NOT NULL,
    location_id     TEXT NOT NULL,
    register_id     TEXT NOT NULL,
    employee_id     TEXT NOT NULL,
    customer_id     TEXT,
    subtotal        REAL NOT NULL,
    discount_total  REAL DEFAULT 0,
    tax_total       REAL DEFAULT 0,
    total           REAL NOT NULL,
    line_items_json TEXT NOT NULL,     -- JSON array of line items
    payments_json   TEXT NOT NULL,     -- JSON array of payments
    created_at      TEXT DEFAULT (datetime('now')),
    synced_at       TEXT,              -- NULL until synced
    sync_error      TEXT              -- Last error if sync failed
);

CREATE INDEX idx_sales_queue_pending ON sales_queue(synced_at) WHERE synced_at IS NULL;

Why only 2 tables (down from 6 in ADR-002):

  • inventory_cache removed — inventory levels queried from API in real-time; not critical for completing a sale during brief offline
  • customers_cache removed — customer lookup via API; offline sales can proceed without customer association
  • event_queue removed — replaced by sales_queue (only sales need offline queuing; no priority tiers needed)
  • sync_status removed — cache freshness tracked by product_cache.last_refreshed; no multi-entity sync timestamps needed

Cache pre-warming (on POS startup while online):

// On POS application startup (sql.js WASM + OPFS)
import type { Database } from 'sql.js';
import type { ApiClient } from './types';
import { getLocationConfig, persistToOPFS } from './pos-helpers'; // local helpers (module path illustrative)

async function warmProductCache(db: Database, api: ApiClient): Promise<void> {
  const locationId = getLocationConfig().locationId;
  const products = await api.getProducts({ locationId, includeVariants: true });

  db.run('BEGIN TRANSACTION');
  const stmt = db.prepare(`
    INSERT OR REPLACE INTO product_cache
    (id, sku, barcode, name, category_name, price, cost, tax_code, is_taxable, variants_json, last_refreshed)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);

  for (const p of products) {
    stmt.run([p.id, p.sku, p.barcode, p.name, p.categoryName, p.price, p.cost,
              p.taxCode, p.isTaxable ? 1 : 0, JSON.stringify(p.variants), new Date().toISOString()]);
  }

  stmt.free();
  db.run('COMMIT');

  // Persist to OPFS
  const data = db.export();
  await persistToOPFS(data);
}

Incremental cache updates (via WebSocket during the day):

// Listen for product changes pushed by Central API (sql.js WASM)
socket.on('product.updated', async (product: ProductUpdate) => {
  const stmt = db.prepare(`
    INSERT OR REPLACE INTO product_cache
    (id, sku, barcode, name, category_name, price, cost, tax_code, is_taxable, variants_json, last_refreshed)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);
  stmt.run([product.id, product.sku, product.barcode, product.name, product.categoryName,
            product.price, product.cost, product.taxCode, product.isTaxable ? 1 : 0,
            JSON.stringify(product.variants), new Date().toISOString()]);
  stmt.free();

  // Persist to OPFS (debounced in production)
  await persistToOPFS(db.export());
});
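
As the comment above notes, the OPFS write should be debounced in production so a burst of WebSocket updates produces one snapshot write, not dozens. A minimal sketch, assuming a 500 ms window and the `db.export()` / `persistToOPFS` signatures used elsewhere in this chapter:

```typescript
// Hypothetical debounced wrapper: coalesces rapid update events into one OPFS write.
export function debouncePersist(
  persist: (data: Uint8Array) => Promise<void>,
  exportDb: () => Uint8Array,
  waitMs = 500, // assumed window; tune against observed update volume
): () => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      void persist(exportDb()); // only the last snapshot in the burst is written
    }, waitMs);
  };
}
```

Usage: `const schedulePersist = debouncePersist(persistToOPFS, () => db.export());` and call `schedulePersist()` from the `product.updated` handler instead of awaiting `persistToOPFS` directly.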

L.10A.1C Sales Queue Flush Design

Simple FIFO flush: No priority tiers needed. Only sales are queued during offline. Flush in order when connectivity restores.

Sales Queue Flush (OFFLINE → ONLINE Recovery)
=============================================

  Connection Restores
         |
         v
  ┌─────────────────────────────────────────┐
  │  1. Read sales_queue WHERE synced_at    │
  │     IS NULL ORDER BY created_at ASC     │
  │     (oldest first, FIFO)                │
  │                                         │
  │  2. For each queued sale:               │
  │     POST /api/sales                     │
  │     Body: { sale_id, items, total, ... }│
  │     ↓                                   │
  │  3. API processes sale:                 │
  │     - Upsert by sale_id (idempotent)    │
  │     - Compare unit_price vs current     │
  │     - Flag discrepancy if different     │
  │     - Adjust inventory                  │
  │     - Fire domain events                │
  │     ↓                                   │
  │  4. On success:                         │
  │     UPDATE sales_queue                  │
  │     SET synced_at = NOW()               │
  │     WHERE sale_id = ?                   │
  │     ↓                                   │
  │  5. On failure:                         │
  │     UPDATE sales_queue                  │
  │     SET sync_error = error_message      │
  │     WHERE sale_id = ?                   │
  │     (retry on next cycle)               │
  └─────────────────────────────────────────┘
         |
         v
  ┌─────────────────────────────────────────┐
  │  6. Refresh product_cache               │
  │     (prices may have changed)           │
  │                                         │
  │  7. Resume WebSocket subscription       │
  │                                         │
  │  8. Switch to ONLINE mode               │
  └─────────────────────────────────────────┘

Idempotency guarantee: Each sale has a UUID sale_id generated at creation time. The Central API uses this as an idempotency key — if the same sale_id is submitted twice (e.g., partial flush + retry), the API returns success without creating a duplicate.
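
On the server side, this guarantee can be implemented with a single conflict-aware insert. A hedged PostgreSQL sketch (the `sales` table and column list are assumed to mirror the queue payload; the authoritative DDL lives in the backend schema):

```sql
-- Hypothetical server-side idempotent insert: replaying the same sale_id is a no-op.
INSERT INTO sales (sale_id, sale_number, location_id, total, created_at, source)
VALUES ($1, $2, $3, $4, $5, 'offline_queue')
ON CONFLICT (sale_id) DO NOTHING;
-- If zero rows were affected, the sale was already processed:
-- the API still returns success so the client marks it synced.
```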


L.10A.1D Data Consistency (No Conflicts by Design)

Online-first eliminates traditional conflict resolution. The product cache is read-only (server-authoritative). The sales queue is append-only (UUID-keyed). No two-way data merge is needed.

Why CRDTs are not needed:

Data Type | Online-First Approach | Why No Conflict
Products | Read-only cache, server pushes updates | POS never writes to product data
Inventory | API query in real-time (online), not tracked locally (offline) | No local inventory state to conflict
Customers | API query in real-time (online), not cached locally | No local customer state to conflict
Sales | Append-only queue with UUID keys | Each sale is unique; idempotent upsert prevents duplicates
Settings | API query in real-time, pushed via WebSocket | POS never writes settings

Contrast with offline-first (ADR-002, superseded):

  • Offline-first required conflict resolution because multiple data types (products, inventory, customers) were cached locally and could diverge from the server
  • Online-first eliminates this: only the product cache exists locally, and it’s read-only (server always wins)
  • The sales queue is append-only and uses UUID-based idempotent processing — no conflicts possible

L.10A.1E Flag-on-Sync Price Discrepancy Detection

Problem: During offline mode, the POS uses cached product prices. If an admin changed a price while the terminal was offline, the sale was recorded at the stale price. Flag-on-sync catches this automatically.

Flag-on-Sync Workflow:

Price Discrepancy Detection (during sales queue flush)
=====================================================

  For each queued sale being synced:

  ┌─────────────────────────────────────────────────────┐
  │  1. API receives sale with line items               │
  │     Each item has: { sku, unit_price, quantity }    │
  │                                                     │
  │  2. For each line item:                             │
  │     Compare sale.unit_price vs product.current_price│
  │                                                     │
  │     ├── Prices MATCH → accept normally              │
  │     └── Prices DIFFER → accept + flag:              │
  │         {                                           │
  │           price_discrepancy: true,                  │
  │           sold_price: 29.99,                        │
  │           current_price: 24.99,                     │
  │           difference: +5.00,                        │
  │           reason: "offline_cache_stale"             │
  │         }                                           │
  │                                                     │
  │  3. Fire event: sale.price_discrepancy              │
  │     → Admin notification in Nexus POS                │
  └─────────────────────────────────────────────────────┘

Admin Discrepancy View (in Nexus POS manager/admin dashboard):

Sale ID | Product | Sold Price | Current Price | Diff | Time | Action
abc-123 | Blue T-Shirt | $29.99 | $24.99 | +$5.00 | 2:10 PM | [Issue Credit] [Dismiss]
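
The per-line comparison in step 2 reduces to a pure function. A sketch, assuming the flag payload shape shown above (the function and type names are illustrative):

```typescript
interface LineItem { sku: string; unit_price: number; quantity: number; }

interface PriceDiscrepancy {
  price_discrepancy: true;
  sku: string;
  sold_price: number;
  current_price: number;
  difference: number; // positive means the customer paid more than the current price
  reason: 'offline_cache_stale';
}

// Compare each sold line against the current catalog price; flag mismatches.
export function detectPriceDiscrepancies(
  items: LineItem[],
  currentPriceBySku: Map<string, number>,
): PriceDiscrepancy[] {
  const flags: PriceDiscrepancy[] = [];
  for (const item of items) {
    const current = currentPriceBySku.get(item.sku);
    if (current === undefined || current === item.unit_price) continue;
    flags.push({
      price_discrepancy: true,
      sku: item.sku,
      sold_price: item.unit_price,
      current_price: current,
      difference: Number((item.unit_price - current).toFixed(2)),
      reason: 'offline_cache_stale',
    });
  }
  return flags;
}
```

The sale is still accepted either way; a non-empty result only fires the `sale.price_discrepancy` event for the admin view.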

Sales Queue Flush Service:

// sales-queue-flush.ts (sql.js WASM)

import type { Database } from 'sql.js';
import pino from 'pino';
import type { ApiClient } from './types';
import { queryAll, persistToOPFS } from './sqlite-helpers'; // typed SELECT + OPFS snapshot helpers (module path illustrative)

const logger = pino({ name: 'SalesQueueFlush' });

interface QueuedSale {
  sale_id: string;
  sale_number: string;
  location_id: string;
  register_id: string;
  employee_id: string;
  customer_id: string | null;
  subtotal: number;
  discount_total: number;
  tax_total: number;
  total: number;
  line_items_json: string;
  payments_json: string;
  created_at: string;
}

export class SalesQueueFlush {
  private isFlushing = false;

  constructor(
    private db: Database,
    private api: ApiClient,
  ) {}

  async flush(): Promise<{ synced: number; errors: number }> {
    if (this.isFlushing) return { synced: 0, errors: 0 };
    this.isFlushing = true;

    let synced = 0;
    let errors = 0;

    try {
      // sql.js: query pending sales from WASM SQLite
      const pending = queryAll<QueuedSale>(this.db,
        'SELECT * FROM sales_queue WHERE synced_at IS NULL ORDER BY created_at ASC'
      );

      for (const sale of pending) {
        try {
          // POST with idempotent sale_id
          await this.api.post('/api/sales', {
            saleId: sale.sale_id,
            saleNumber: sale.sale_number,
            locationId: sale.location_id,
            registerId: sale.register_id,
            employeeId: sale.employee_id,
            customerId: sale.customer_id,
            subtotal: sale.subtotal,
            discountTotal: sale.discount_total,
            taxTotal: sale.tax_total,
            total: sale.total,
            lineItems: JSON.parse(sale.line_items_json),
            payments: JSON.parse(sale.payments_json),
            createdAt: sale.created_at,
            source: 'offline_queue',
          });

          // Mark as synced
          this.db.run(
            'UPDATE sales_queue SET synced_at = ? WHERE sale_id = ?',
            [new Date().toISOString(), sale.sale_id]
          );

          synced++;
          logger.info({ saleId: sale.sale_id }, 'Queued sale synced');
        } catch (err) {
          // Record error but continue with next sale
          this.db.run(
            'UPDATE sales_queue SET sync_error = ? WHERE sale_id = ?',
            [String(err), sale.sale_id]
          );

          errors++;
          logger.error({ saleId: sale.sale_id, err }, 'Failed to sync sale');
        }
      }

      // After flush: persist synced_at flags to OPFS. The product_cache refresh
      // (step 6 of the recovery flow) is triggered separately by the recovery handler.
      if (synced > 0) {
        logger.info({ synced, errors }, 'Queue flush complete');
        await persistToOPFS(this.db.export());
      }
    } finally {
      this.isFlushing = false;
    }

    return { synced, errors };
  }
}

L.10A.1F Sale Creation Flow (Online-First with Offline Fallback)

Online path (99.99%): Sale goes directly to Central API. Offline path (rare): Sale saved to local SQLite queue, flushed on recovery.

Sale Flow (Online-First)
========================

1. Cashier scans items
   ┌────────────────────────────────────────────┐
   │ ONLINE:   React Query → Central API        │
   │           (cached after first scan)         │
   │ DEGRADED: Try API (2s) → SQLite cache      │
   │ OFFLINE:  SQLite product_cache              │
   └────────────────────────────────────────────┘
         |
         v
2. Add to cart (in-memory, no network needed)
   +----------------+
   | In-Memory Cart |
   +----------------+
         |
         v
3. Customer pays
   +----------------+
   | Payment Dialog |
   | (card or cash) |
   +----------------+
         |
         v
4. Complete sale
   ┌────────────────────────────────────────────┐
   │ ONLINE:   POST /api/sales → Central API    │
   │           (immediate, real-time)            │
   │                                            │
   │ OFFLINE:  INSERT INTO sales_queue           │
   │           (local SQLite, flush later)       │
   └────────────────────────────────────────────┘
         |
         v
5. Print receipt
   +----------------+
   | Receipt ready  |
   | (no waiting)   |
   +----------------+

Sale Service Implementation (Online-First)

// sale-service.ts (sql.js WASM)

import type { Database } from 'sql.js';
import type { ApiClient, ConnectionState, ReceiptPrinter, Cart, Payment, Sale } from './types';
import { persistToOPFS } from './sqlite-helpers'; // OPFS snapshot helper (module path illustrative)

export class SaleService {
  constructor(
    private api: ApiClient,
    private db: Database,
    private connectionState: ConnectionState,
    private printer: ReceiptPrinter
  ) {}

  async completeSale(cart: Cart, payments: Payment[]): Promise<Sale> {
    const saleId = crypto.randomUUID(); // Web Crypto API (browser-native)
    const saleNumber = this.generateSaleNumber();

    const sale: Sale = {
      id: saleId,
      saleNumber,
      locationId: this.getLocationId(),
      registerId: this.getRegisterId(),
      employeeId: this.getEmployeeId(),
      customerId: cart.customerId ?? null,
      subtotal: cart.subtotal,
      discountTotal: cart.discountTotal,
      taxTotal: cart.taxTotal,
      total: cart.total,
      lineItems: cart.items,
      payments,
      createdAt: new Date(),
    };

    if (this.connectionState.isOnline()) {
      // ONLINE: send directly to Central API
      await this.api.post('/api/sales', {
        saleId: sale.id,
        saleNumber: sale.saleNumber,
        locationId: sale.locationId,
        registerId: sale.registerId,
        employeeId: sale.employeeId,
        customerId: sale.customerId,
        subtotal: sale.subtotal,
        discountTotal: sale.discountTotal,
        taxTotal: sale.taxTotal,
        total: sale.total,
        lineItems: sale.lineItems,
        payments: sale.payments,
        createdAt: sale.createdAt.toISOString(),
        source: 'online',
      });
    } else {
      // OFFLINE: queue locally for later flush (sql.js WASM)
      this.db.run(`
        INSERT INTO sales_queue
        (sale_id, sale_number, location_id, register_id, employee_id,
         customer_id, subtotal, discount_total, tax_total, total,
         line_items_json, payments_json, created_at)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      `, [
        sale.id, sale.saleNumber, sale.locationId, sale.registerId,
        sale.employeeId, sale.customerId, sale.subtotal, sale.discountTotal,
        sale.taxTotal, sale.total, JSON.stringify(sale.lineItems),
        JSON.stringify(sale.payments), sale.createdAt.toISOString()
      ]);
      // Persist to OPFS immediately (critical: don't lose offline sales)
      await persistToOPFS(this.db.export());
    }

    // Print receipt regardless of online/offline
    void this.printer.printReceipt(sale);

    return sale;
  }

  private generateSaleNumber(): string {
    const location = this.getLocationCode();
    const date = new Date().toISOString().slice(0, 10).replace(/-/g, '');
    const sequence = this.getNextSequence();
    return `${location}-${date}-${String(sequence).padStart(4, '0')}`;
  }

  private getLocationId(): string { /* from localStorage/session config */ return ''; }
  private getRegisterId(): string { /* from localStorage/session config */ return ''; }
  private getEmployeeId(): string { /* from auth context */ return ''; }
  private getLocationCode(): string { /* from localStorage/session config */ return ''; }
  private getNextSequence(): number { /* from SQLite WASM sequence or API */ return 1; }
}

L.10A.1G Connection Monitor (3-State)

3-state model prevents rapid flapping between online and offline during spotty internet. The DEGRADED state tries API first with cache fallback.

State | Detection | Data Reads | Data Writes | UI Indicator
ONLINE | WebSocket connected + health ping OK | React Query → API | POST → API | Green dot
DEGRADED | WebSocket dropped, ping intermittent | Try API (2s timeout) → SQLite cache | POST → API + local backup | Yellow dot
OFFLINE | 3 consecutive pings fail (~15s) | SQLite product_cache | SQLite sales_queue | Red dot + banner

Detection layers (fastest → most reliable):

  1. Socket.io events — connect/disconnect callbacks (instant)
  2. Health ping — HTTP GET /health every 5 seconds (catches stale WebSocket state)
  3. navigator.onLine — browser API (instant hint, verified by ping)

// connection-monitor.ts

import { EventEmitter } from 'eventemitter3'; // Browser-compatible EventEmitter
import pino from 'pino';
import type { Socket } from 'socket.io-client';
import type { ApiClient } from './types';

const logger = pino({ name: 'ConnectionMonitor' });

export type ConnectionState = 'ONLINE' | 'DEGRADED' | 'OFFLINE';

export class ConnectionMonitor extends EventEmitter {
  private pingTimer: ReturnType<typeof setInterval> | null = null;
  private consecutiveFailures = 0;
  private state: ConnectionState = 'ONLINE';

  constructor(
    private apiClient: ApiClient,
    private socket: Socket,
  ) {
    super();
  }

  get currentState(): ConnectionState {
    return this.state;
  }

  isOnline(): boolean {
    return this.state === 'ONLINE';
  }

  start(): void {
    // Layer 1: Socket.io events (instant signal)
    this.socket.on('connect', () => {
      this.consecutiveFailures = 0;
      this.transition('ONLINE');
    });

    this.socket.on('disconnect', () => {
      this.transition('DEGRADED');
    });

    // Layer 2: Health ping every 5 seconds
    this.pingTimer = setInterval(() => void this.healthCheck(), 5_000);
    void this.healthCheck();
  }

  private async healthCheck(): Promise<void> {
    try {
      const controller = new AbortController();
      const timeout = setTimeout(() => controller.abort(), 2_000);

      const response = await this.apiClient.ping({ signal: controller.signal });
      clearTimeout(timeout);

      if (response.ok) {
        this.consecutiveFailures = 0;
        if (this.state !== 'ONLINE' && this.socket.connected) {
          this.transition('ONLINE');
        } else if (this.state === 'OFFLINE') {
          this.transition('DEGRADED');
        }
      } else {
        this.handlePingFailure();
      }
    } catch {
      this.handlePingFailure();
    }
  }

  private handlePingFailure(): void {
    this.consecutiveFailures++;

    if (this.consecutiveFailures >= 3 && this.state !== 'OFFLINE') {
      this.transition('OFFLINE');
    } else if (this.consecutiveFailures >= 1 && this.state === 'ONLINE') {
      this.transition('DEGRADED');
    }
  }

  private transition(newState: ConnectionState): void {
    if (this.state === newState) return;

    const previous = this.state;
    this.state = newState;
    logger.info({ from: previous, to: newState }, 'Connection state changed');
    this.emit('stateChanged', newState, previous);

    // Trigger sales queue flush when returning to ONLINE
    if (newState === 'ONLINE' && previous !== 'ONLINE') {
      this.emit('recoveryStarted');
    }
  }

  stop(): void {
    if (this.pingTimer) clearInterval(this.pingTimer);
  }
}

Connection Status UI

Connection Status Indicator
===========================

ONLINE (green dot):
+-----------------------------------------------------------------------+
|  [=] NEXUS POS                              [●] Connected   [GM Store]|
+-----------------------------------------------------------------------+

DEGRADED (yellow dot):
+-----------------------------------------------------------------------+
|  [=] NEXUS POS                          [●] Unstable       [GM Store] |
+-----------------------------------------------------------------------+

OFFLINE (red dot + banner):
+-----------------------------------------------------------------------+
|  [=] NEXUS POS                          [●] OFFLINE        [GM Store] |
|  +-----------------------------------------------------------------+  |
|  | Working offline. 3 sales queued. Prices may be outdated.        |  |
|  +-----------------------------------------------------------------+  |
+-----------------------------------------------------------------------+

RECOVERING (yellow dot + progress):
+-----------------------------------------------------------------------+
|  [=] NEXUS POS                     [●] Syncing 2/3...      [GM Store] |
+-----------------------------------------------------------------------+
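
The indicator states above reduce to a pure mapping from connection state and queue depth to display props. A sketch (labels match the mockups; the `syncProgress` parameter is an assumed way to model the RECOVERING display):

```typescript
type ConnectionState = 'ONLINE' | 'DEGRADED' | 'OFFLINE';

interface StatusIndicator {
  color: 'green' | 'yellow' | 'red';
  label: string;
  banner?: string; // only shown for OFFLINE
}

// Map connection state to the header indicator shown in the mockups above.
export function statusIndicator(
  state: ConnectionState,
  queuedSales: number,
  syncProgress?: { done: number; total: number },
): StatusIndicator {
  if (syncProgress) {
    // RECOVERING: queue flush in progress after reconnect
    return { color: 'yellow', label: `Syncing ${syncProgress.done}/${syncProgress.total}...` };
  }
  switch (state) {
    case 'ONLINE':   return { color: 'green',  label: 'Connected' };
    case 'DEGRADED': return { color: 'yellow', label: 'Unstable' };
    case 'OFFLINE':  return {
      color: 'red',
      label: 'OFFLINE',
      banner: `Working offline. ${queuedSales} sales queued. Prices may be outdated.`,
    };
  }
}
```

A React component would subscribe to the ConnectionMonitor's `stateChanged` event and re-render from this mapping.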

L.10A.1H Removed: CRDTs (No Longer Required)

This section previously contained CRDT (Conflict-free Replicated Data Type) implementations for offline conflict resolution. With the pivot to online-first (ADR-048), CRDTs are no longer needed for the POS platform. The product cache is read-only (server-authoritative) and the sales queue is append-only (UUID-keyed idempotent) — neither requires conflict-free merge logic. See L.10A.1D for the simplified data consistency model.

The previous CRDT content (G-Counter, PN-Counter, LWW-Register, OR-Set, MV-Register implementations, sync protocol, and reference libraries) has been removed. For historical reference, see the v6.1.0 tag.

Note: CRDTs may still be relevant for the Nexus Raptag mobile RFID app (ADR-047), which retains full offline-first capability for counting sessions. If CRDT-based dedup is needed for multi-operator RFID scanning, it would be scoped to Raptag only — not the POS platform.


L.10A.2 Tax Engine Decision

Attribute | Value
Decision ID | ADR-BRD-002
Context | Need flexible tax calculation supporting multiple jurisdictions
Decision | Custom-Built Tax Engine with modular jurisdiction support
Alternatives Considered | 1) Third-party service (Avalara/TaxJar), 2) Custom-built (selected)
Rationale | Full control over rules; no per-transaction fees; offline support; expansion flexibility
Reference | BRD-v12 §1.17

Tax Calculation Hierarchy (Priority Order):

┌─────────────────────────────────────────────────────────────┐
│                  TAX CALCULATION HIERARCHY                   │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  1. PRODUCT-LEVEL OVERRIDE (Highest Priority)               │
│     └── Example: "Grocery Food - 1.5%"                      │
│     └── Example: "Prepared Food - 10%"                      │
│     └── Example: "Prescription Drugs - 0%"                  │
│                                                              │
│  2. CUSTOMER-LEVEL EXEMPTION                                 │
│     └── Example: "Reseller Certificate"                     │
│     └── Example: "Non-Profit 501(c)(3)"                     │
│     └── Example: "Diplomatic Status"                        │
│                                                              │
│  3. LOCATION-BASED TAX (Default)                            │
│     └── State Tax + County Tax + City Tax + District Tax    │
│     └── Based on store physical address                     │
│                                                              │
└─────────────────────────────────────────────────────────────┘
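
The three-level hierarchy can be expressed as a short resolver. A sketch only, not the production tax engine (type shape and field names are illustrative):

```typescript
interface TaxContext {
  productOverrideRate?: number; // e.g. 1.5 for grocery food (highest priority)
  customerExempt?: boolean;     // e.g. reseller certificate, 501(c)(3)
  locationRate: number;         // state + county + city + district, from store address
}

// Resolve the effective rate in priority order:
// product-level override > customer-level exemption > location-based default.
export function resolveTaxRate(ctx: TaxContext): number {
  if (ctx.productOverrideRate !== undefined) return ctx.productOverrideRate;
  if (ctx.customerExempt) return 0;
  return ctx.locationRate;
}
```

For example, a Hampton Roads store with no overrides resolves to the location rate: 4.3% state + 1.0% local + 0.7% regional = 6.0%.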

Virginia Initial Configuration:

tax_jurisdictions:
  virginia:
    state_rate: 4.3
    default_local_rate: 1.0

    # Regional additional taxes
    regions:
      hampton_roads:
        counties: ["Norfolk", "Virginia Beach", "Newport News", "Hampton"]
        additional_rate: 0.7
      northern_virginia:
        counties: ["Arlington", "Fairfax", "Loudoun", "Prince William"]
        additional_rate: 0.7
      central_virginia:
        counties: ["Henrico", "Chesterfield", "Richmond City"]
        additional_rate: 0.0

    # Product exemptions/reduced rates
    exemptions:
      - category: "grocery_food"
        rate: 1.5  # Reduced rate
      - category: "prescription_drugs"
        rate: 0.0
      - category: "medical_equipment"
        rate: 0.0

Expansion Roadmap:

jurisdiction_modules:
  virginia:     { status: "active" }
  california:   { status: "planned", notes: "Complex district taxes, no gift card expiry" }
  oregon:       { status: "planned", notes: "No sales tax state" }
  canada:       { status: "planned", notes: "GST/PST/HST complexity" }
  european_union: { status: "planned", notes: "VAT with reverse charge" }

L.10A.3 Payment Integration Decision

Attribute | Value
Decision ID | ADR-BRD-003
Context | Need PCI-compliant card payment processing with minimal compliance burden
Decision | SAQ-A Semi-Integrated terminals (no card data touches our system)
Alternatives Considered | 1) Full integration SAQ-D, 2) Semi-integrated SAQ-A (selected), 3) Redirect-only
Rationale | Simplest PCI compliance; card data never in our scope; supports offline void via token
Reference | BRD-v12 §1.18

Payment Flow Architecture:

┌─────────────────────────────────────────────────────────────┐
│                SAQ-A PAYMENT ARCHITECTURE                    │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────┐     ┌──────────┐     ┌──────────┐            │
│  │ POS UI   │────►│ Backend  │────►│ Terminal │            │
│  │          │     │ API      │     │          │            │
│  └──────────┘     └──────────┘     └────┬─────┘            │
│       ▲                                  │                   │
│       │                                  ▼                   │
│       │           ┌─────────────────────────────────────┐   │
│       │           │         PAYMENT PROCESSOR            │   │
│       │           │     (Card data ONLY here)            │   │
│       │           └─────────────────────────────────────┘   │
│       │                          │                          │
│       │                          ▼                          │
│       │           ┌─────────────────────────────────────┐   │
│       └───────────│  Token + Approval + Masked Card     │   │
│                   │  (NO full PAN, CVV, or track data)  │   │
│                   └─────────────────────────────────────┘   │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Data Storage Rules:

┌─────────────────────────────────────────────────────────────┐
│              PAYMENT DATA STORAGE RULES                      │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ✅ STORED (Allowed):              ❌ PROHIBITED (Never):   │
│  ├── Payment token                 ├── Full card number     │
│  ├── Approval code                 ├── CVV/CVC              │
│  ├── Masked card (****1234)        ├── Track data           │
│  ├── Card brand (Visa/MC/Amex)     ├── PIN block            │
│  ├── Entry method (chip/tap)       ├── EMV cryptogram (raw) │
│  ├── Terminal ID                   │                        │
│  └── Timestamp                     │                        │
│                                                              │
└─────────────────────────────────────────────────────────────┘
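The allowed-fields rule above can also be made structural: if the stored record type only has fields from the allowed column, prohibited data cannot be persisted by accident. A minimal sketch — the interface and helper below are illustrative, not from the BRD:

```typescript
// Only SAQ-A-permitted fields are representable in a stored payment record.
interface StoredPayment {
  token: string;          // processor-issued payment token
  approvalCode: string;
  maskedCard: string;     // e.g. "****1234"
  cardBrand: 'visa' | 'mastercard' | 'amex' | 'discover';
  entryMethod: 'chip' | 'tap' | 'swipe';
  terminalId: string;
  timestamp: string;      // ISO-8601
}

// Build the masked display form from the last four digits only —
// under SAQ-A the full PAN never reaches application code.
function maskCard(last4: string): string {
  if (!/^\d{4}$/.test(last4)) {
    throw new Error('expected exactly the last four digits');
  }
  return `****${last4}`;
}
```

The processor response should be mapped into `StoredPayment` at the integration boundary, so any extra fields (PAN, CVV, track data) are dropped before the record touches the database.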

L.10A.4 Multi-Tenancy Decision

| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-004 (Revised) |
| Context | Platform must support multiple retail tenants with strong data isolation |
| Decision | Row-Level Isolation with PostgreSQL RLS |
| Alternatives Considered | 1) Database-per-tenant, 2) Schema-per-tenant, 3) Row-level isolation with RLS (selected) |
| Rationale | Matches BRD v18.0 data models (135 occurrences of tenant_id FK across all modules). Simpler operations: no per-tenant schema migration tooling. RLS enforces isolation at the database level, preventing accidental cross-tenant data access. |
| Reference | BRD-v18.0, Chapter 05 (Architecture Components) |

v18.0 Update: The original Architecture Styles Worksheet v1.6 specified Schema-Per-Tenant. Expert panel review identified a contradiction: every data model table in BRD v18.0 includes tenant_id UUID FK (row-level isolation pattern, 135 occurrences). This revision aligns the architecture decision with the actual BRD data models.

Database Structure:

database: pos_production
│
├── schema: shared
│   ├── tax_rates (global, no tenant_id)
│   ├── system_config (global)
│   └── tenant_registry (tenant metadata)
│
└── schema: public (all tenant data)
    ├── orders          (tenant_id UUID FK + RLS)
    ├── customers        (tenant_id UUID FK + RLS)
    ├── inventory        (tenant_id UUID FK + RLS)
    ├── products         (tenant_id UUID FK + RLS)
    ├── integration_providers (tenant_id UUID FK + RLS)
    └── ... (all other tables with tenant_id + RLS)

RLS Policy Implementation:

-- Enable RLS on every tenant table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Create isolation policy
CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.current_tenant')::uuid);

-- Apply RLS even to the table owner (superusers still bypass)
ALTER TABLE orders FORCE ROW LEVEL SECURITY;

Connection Pattern:

// Tenant resolution via Express middleware
import type { Request, Response, NextFunction } from 'express';
import { prisma } from './prisma-client';
import { resolveTenantFromJwt } from './auth'; // assumed helper: reads the tenant claim from the verified JWT

export async function tenantMiddleware(req: Request, res: Response, next: NextFunction) {
  const tenantId = resolveTenantFromJwt(req);

  // Set the PostgreSQL session variable for RLS. A bare SET statement cannot
  // take bind parameters, so use set_config(), which can. With a pooled
  // connection, prefer SET LOCAL inside a transaction so the value cannot
  // leak to the next request served by the same connection.
  await prisma.$executeRaw`SELECT set_config('app.current_tenant', ${tenantId}, false)`;

  next();
}

Benefits:

  • Simpler connection pooling (shared pool, not per-schema)
  • Standard query patterns (no search_path manipulation)
  • Easier migrations (single schema, applied once)
  • RLS enforcement at database level (defense-in-depth)
  • Matches BRD v18.0 data model conventions

Trade-offs:

  • Less physical isolation than schema separation (mitigated by RLS)
  • All tenants share same table structure (flexibility limited)
  • RLS policies must be applied to every table (automated via migration scripts)
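The last trade-off — applying the same three RLS statements to every tenant table — can be automated by a small generator run as a migration step. A sketch, reusing the policy from L.10A.4 (the table list here is illustrative):

```typescript
// Emit the ENABLE / CREATE POLICY / FORCE statements for one tenant table.
// Idempotency (DROP POLICY IF EXISTS, etc.) is left to the migration tooling.
function rlsStatements(table: string): string[] {
  return [
    `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY;`,
    `CREATE POLICY tenant_isolation ON ${table}\n` +
      `  USING (tenant_id = current_setting('app.current_tenant')::uuid);`,
    `ALTER TABLE ${table} FORCE ROW LEVEL SECURITY;`,
  ];
}

// Generate one migration body covering all tenant tables.
const tenantTables = ['orders', 'customers', 'inventory', 'products'];
const migrationSql = tenantTables.flatMap(rlsStatements).join('\n\n');
```

In practice the table list can be derived from the Prisma schema (every model with a `tenantId` field), so new tables cannot ship without a policy.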

L.10A.4A Multi-Tenancy Strategies Comparison

Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):

Multi-Tenancy Strategies
========================

Strategy 1: Shared Tables (Row-Level)
+----------------------------------+
| products                          |
| +--------+--------+------------+ |
| | tenant | id     | name       | |
| +--------+--------+------------+ |
| | nexus  | 1      | T-Shirt    | |
| | acme   | 2      | Jacket     | |
| | nexus  | 3      | Jeans      | |
| +--------+--------+------------+ |
+----------------------------------+
Pros: Simple, low overhead
Cons: Risk of data leakage and complex queries without RLS; no database-enforced isolation on its own

Strategy 2: Separate Databases
+-------------+    +-------------+    +-------------+
| nexus_db    |    | acme_db     |    | beta_db     |
| +--------+  |    | +--------+  |    | +--------+  |
| |products|  |    | |products|  |    | |products|  |
| +--------+  |    | +--------+  |    | +--------+  |
| |sales   |  |    | |sales   |  |    | |sales   |  |
| +--------+  |    | +--------+  |    | +--------+  |
+-------------+    +-------------+    +-------------+
Pros: Complete isolation
Cons: Connection overhead, backup complexity, cost at scale

Strategy 3: Schema-Per-Tenant
+-----------------------------------------------------+
| pos_platform database                                |
|                                                      |
| +-----------+  +--------------+  +--------------+   |
| | shared    |  | tenant_nexus |  | tenant_acme  |   |
| +-----------+  +--------------+  +--------------+   |
| | tenants   |  | products     |  | products     |   |
| | plans     |  | sales        |  | sales        |   |
| | features  |  | inventory    |  | inventory    |   |
| +-----------+  | customers    |  | customers    |   |
|                +--------------+  +--------------+   |
+-----------------------------------------------------+
Pros: Isolation + efficiency, easy backup/restore per tenant
Cons: More complex migrations (but manageable)

Decision Matrix

| Requirement | Shared Tables | Separate DBs | Schema-Per-Tenant |
|---|---|---|---|
| Data Isolation | Poor | Excellent | Excellent |
| Performance | Good | Excellent | Very Good |
| Backup/Restore | Complex | Simple | Simple |
| Connection Overhead | Low | High | Low |
| Query Complexity | High | Low | Low |
| Compliance (SOC2) | Difficult | Easy | Easy |
| Cost at Scale | Low | High | Medium |
| Migration Complexity | Low | Low | Medium |

Note: The Architecture Styles analysis (L.10A.4 above) selected Row-Level Isolation with PostgreSQL RLS as the production strategy, which aligns with BRD v18.0 data models (135 occurrences of tenant_id). The Schema-Per-Tenant comparison above is preserved for reference and as an alternative should physical isolation requirements change.


L.10A.4B Tenant Resolution & Middleware

Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):

Tenant Resolution Flow

Tenant Resolution Flow (Row-Level Security)
=============================================

                     +---------------------------+
                     |   Incoming Request        |
                     |   nexus.pos-platform.com  |
                     +-------------+-------------+
                                   |
                                   v
                     +---------------------------+
                     |   Extract Subdomain       |
                     |   subdomain = "nexus"     |
                     +-------------+-------------+
                                   |
                                   v
                     +---------------------------+
                     |   Lookup in shared.tenants|
                     |   WHERE subdomain = ?     |
                     +-------------+-------------+
                                   |
            +----------------------+----------------------+
            |                                             |
      [Found]                                       [Not Found]
            |                                             |
            v                                             v
+---------------------------+               +---------------------------+
| SET LOCAL                 |               | Return 404                |
| app.current_tenant        |               | "Tenant not found"        |
| = '<tenant-uuid>'         |               +---------------------------+
+-------------+-------------+
              |
              v
+---------------------------+
| Continue with request     |
| RLS policies filter all   |
| queries by tenant_id      |
+---------------------------+

Express Tenant Middleware (RLS)

// src/middleware/tenant-middleware.ts

import type { Request, Response, NextFunction } from 'express';
import pino from 'pino';
import { prisma } from '../prisma-client';
import { TenantService } from '../services/tenant-service';

const logger = pino({ name: 'TenantMiddleware' });

export function createTenantMiddleware(tenantService: TenantService) {
  return async (req: Request, res: Response, next: NextFunction) => {
    // 1. Extract subdomain from host
    const host = req.hostname;
    const subdomain = extractSubdomain(host);

    if (!subdomain) {
      res.status(400).json({ error: 'Invalid tenant' });
      return;
    }

    // 2. Lookup tenant in shared schema (cached in Redis)
    const tenant = await tenantService.getBySubdomain(subdomain);

    if (!tenant) {
      res.status(404).json({ error: 'Tenant not found' });
      return;
    }

    if (tenant.status === 'suspended') {
      res.status(403).json({ error: 'Account suspended' });
      return;
    }

    // 3. Store tenant in request for downstream use
    req.tenant = tenant;
    req.tenantId = tenant.id;

    logger.debug({ tenantSlug: tenant.slug, tenantId: tenant.id }, 'Resolved tenant');

    // 4. Set the RLS context on the Prisma connection (set_config accepts
    //    bind parameters; a bare SET statement does not)
    await prisma.$executeRaw`SELECT set_config('app.current_tenant', ${tenant.id}, false)`;

    next();
  };
}

function extractSubdomain(host: string): string | null {
  // nexus.pos-platform.com -> nexus
  // localhost:5000 -> null (development fallback)
  const parts = host.split('.');
  if (parts.length >= 3) {
    return parts[0];
  }
  // No subdomain (e.g. localhost): upstream may fall back to an X-Tenant-Id header
  return null;
}

// src/services/tenant-service.ts

import pino from 'pino';
import { PrismaClient, type Tenant } from '@prisma/client';

const logger = pino({ name: 'TenantService' });

interface CreateTenantRequest {
  slug: string;
  name: string;
  subdomain: string;
  planId: string;
}

export class TenantService {
  constructor(private prisma: PrismaClient) {}

  async getBySubdomain(subdomain: string): Promise<Tenant | null> {
    return this.prisma.tenant.findUnique({ where: { subdomain } });
  }

  async getBySlug(slug: string): Promise<Tenant | null> {
    return this.prisma.tenant.findUnique({ where: { slug } });
  }

  async createTenant(request: CreateTenantRequest): Promise<string> {
    // 1. Create tenant record (RLS — no schema creation needed)
    const tenant = await this.prisma.tenant.create({
      data: {
        slug: request.slug,
        name: request.name,
        subdomain: request.subdomain,
        planId: request.planId,
        status: 'active',
      },
    });

    // 2. Seed default data with tenant context
    await this.seedTenantDefaults(tenant.id);

    logger.info({ slug: request.slug, tenantId: tenant.id }, 'Created tenant');

    return tenant.id;
  }

  private async seedTenantDefaults(tenantId: string): Promise<void> {
    // Set the RLS context for seeding (set_config accepts bind parameters; SET does not)
    await this.prisma.$executeRaw`SELECT set_config('app.current_tenant', ${tenantId}, false)`;

    // Seed default roles, permissions, subscription features
    // All rows automatically scoped by tenant_id
  }
}

Prisma Client with RLS Tenant Context

// src/prisma-client.ts

import { PrismaClient } from '@prisma/client';

// Base Prisma client
export const prisma = new PrismaClient();

// Prisma client extended with tenant context via middleware
prisma.$use(async (params, next) => {
  // Defense-in-depth: ensure tenant_id is always set on creates
  // RLS handles query filtering at the database level
  if (params.action === 'create' && params.args.data && !params.args.data.tenantId) {
    // tenantId should be set by the service layer
    // This middleware logs a warning if it's missing
    console.warn(`Missing tenantId on ${params.model} create`);
  }
  return next(params);
});

RLS Session Variable via Prisma Extension

// src/prisma-tenant.ts

import { PrismaClient } from '@prisma/client';

/**
 * Creates a tenant-scoped Prisma client that sets the RLS session variable
 * on every query. Use this in request handlers after tenant resolution.
 */
export function createTenantPrisma(tenantId: string): PrismaClient {
  // Note: each call opens a new connection pool; cache one client per tenant in practice.
  const prisma = new PrismaClient();

  // Set the RLS session variable before each query. Raw executions are
  // skipped — otherwise the set_config call below would itself pass through
  // this middleware and recurse without bound.
  prisma.$use(async (params, next) => {
    if (params.action !== 'executeRaw' && params.action !== 'queryRaw') {
      await prisma.$executeRaw`SELECT set_config('app.current_tenant', ${tenantId}, false)`;
    }
    return next(params);
  });

  return prisma;
}

L.10A.4C Tenant Provisioning (RLS)

Detailed Implementation Reference — Row-Level Security provisioning (no schema creation):

New Tenant Signup Flow (RLS)
=============================

[Nexus POS]                         [API]                          [Database]
      |                               |                                  |
      | 1. POST /tenants              |                                  |
      |   { name, slug, plan }        |                                  |
      |------------------------------>|                                  |
      |                               |                                  |
      |                               | 2. Validate slug uniqueness      |
      |                               |--------------------------------->|
      |                               |                                  |
      |                               | 3. INSERT INTO tenants           |
      |                               |   (returns tenant_id UUID)       |
      |                               |--------------------------------->|
      |                               |                                  |
      |                               | 4. SET app.current_tenant =      |
      |                               |   '{tenant_id}'                  |
      |                               |--------------------------------->|
      |                               |                                  |
      |                               | 5. Seed default data             |
      |                               |   (roles, permissions)           |
      |                               |   All rows get tenant_id via RLS |
      |                               |--------------------------------->|
      |                               |                                  |
      |                               | 6. Create admin user             |
      |                               |   (tenant_id set automatically)  |
      |                               |--------------------------------->|
      |                               |                                  |
      | 7. Return tenant details      |                                  |
      |   { id, subdomain, status }   |                                  |
      |<------------------------------|                                  |
      |                               |                                  |
      | 8. Redirect to tenant portal  |                                  |
      |   nexus.pos-platform.com      |                                  |
      |                               |                                  |

Key difference from schema-per-tenant: No CREATE SCHEMA step. All tenant data lives in the public schema with tenant_id columns. RLS policies enforce isolation at the database level via SET app.current_tenant.


L.10A.4D Migration Strategy (Single Schema + RLS)

Detailed Implementation Reference — With RLS, all tenants share the same schema. Migrations apply once to the public schema:

Prisma Migrate (Single Schema)

# Generate migration from schema changes
npx prisma migrate dev --name add_loyalty_tier

# Apply in production (called at deploy time or server startup)
npx prisma migrate deploy

Migration Script Example

-- Migration: Add loyalty_tier to customers
-- File: prisma/migrations/20250115_add_loyalty_tier/migration.sql
-- Note: Single ALTER TABLE — RLS means all tenant rows are in one table

ALTER TABLE customers
  ADD COLUMN IF NOT EXISTS loyalty_tier VARCHAR(20) DEFAULT 'bronze';

-- Backfill existing rows (optional)
UPDATE customers SET loyalty_tier = 'bronze' WHERE loyalty_tier IS NULL;

Key advantage of RLS over schema-per-tenant: Migrations run once against the public schema instead of looping through N tenant schemas. Prisma Migrate handles this natively with a single migration directory.

Tenants Table SQL Reference

-- Subscription Plans (admin-only, no tenant_id — global config)
-- Created first: tenants.plan_id references this table
CREATE TABLE subscription_plans (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name            VARCHAR(100) NOT NULL,         -- 'Starter', 'Professional'
    code            VARCHAR(50) UNIQUE NOT NULL,   -- 'starter', 'pro', 'enterprise'
    price_monthly   DECIMAL(10,2),
    price_yearly    DECIMAL(10,2),
    max_locations   INTEGER DEFAULT 1,
    max_registers   INTEGER DEFAULT 2,
    max_employees   INTEGER DEFAULT 5,
    max_products    INTEGER DEFAULT 1000,
    features        JSONB DEFAULT '{}',            -- Feature flags
    is_active       BOOLEAN DEFAULT TRUE,
    created_at      TIMESTAMPTZ DEFAULT NOW()
);

-- Tenant Registry (public schema — RLS exempted, admin-only access)
CREATE TABLE tenants (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug            VARCHAR(50) UNIQUE NOT NULL,   -- 'nexus', 'acme'
    name            VARCHAR(255) NOT NULL,         -- 'Nexus Clothing'
    subdomain       VARCHAR(100) UNIQUE NOT NULL,  -- 'nexus.pos-platform.com'
    plan_id         UUID REFERENCES subscription_plans(id),
    status          VARCHAR(20) DEFAULT 'active',  -- active, suspended, trial
    trial_ends_at   TIMESTAMPTZ,
    created_at      TIMESTAMPTZ DEFAULT NOW(),
    updated_at      TIMESTAMPTZ DEFAULT NOW()
);

-- Feature Flags (admin-only, no tenant_id — global config)
CREATE TABLE feature_flags (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    key             VARCHAR(100) UNIQUE NOT NULL,  -- 'loyalty_program'
    name            VARCHAR(255) NOT NULL,
    description     TEXT,
    default_enabled BOOLEAN DEFAULT FALSE,
    created_at      TIMESTAMPTZ DEFAULT NOW()
);

-- Insert default plans
INSERT INTO subscription_plans (name, code, price_monthly, max_locations, max_registers, max_employees, max_products) VALUES
('Starter', 'starter', 49.00, 1, 2, 5, 1000),
('Professional', 'pro', 149.00, 3, 10, 25, 10000),
('Enterprise', 'enterprise', 499.00, -1, -1, -1, -1);  -- -1 = unlimited
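The `-1 = unlimited` convention in the Enterprise row implies every limit check must special-case it. A minimal sketch (the function name is illustrative, not from the BRD):

```typescript
// -1 encodes "unlimited" in the subscription_plans.max_* columns.
function withinPlanLimit(currentCount: number, maxAllowed: number): boolean {
  if (maxAllowed === -1) return true; // Enterprise: unlimited
  return currentCount < maxAllowed;   // may add one more while under the cap
}

// Example with the Starter plan (max_registers = 2):
// withinPlanLimit(1, 2)    -> true  (may add a second register)
// withinPlanLimit(2, 2)    -> false (at the cap)
// withinPlanLimit(500, -1) -> true  (Enterprise, unlimited)
```

Centralizing this in one helper keeps the sentinel value from being compared with `<` somewhere and silently blocking Enterprise tenants.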

L.10A.5 Commission Reversal Decision

| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-005 |
| Context | Need fair commission adjustment when sales are voided or items are returned |
| Decision | Proportional reversal on returns, full reversal on voids |
| Alternatives Considered | 1) Full reversal always, 2) Proportional (selected), 3) No reversal |
| Rationale | Fair to employees; maintains incentive alignment; distinguishes mistakes (voids) from returns |
| Reference | BRD-v12 §1.8 |

Commission Reversal Rules:

┌─────────────────────────────────────────────────────────────┐
│              COMMISSION REVERSAL LOGIC                       │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  VOID (Same day, before drawer close):                      │
│  ├── Reversal: 100% (full)                                  │
│  ├── Rationale: Mistake correction, sale didn't happen      │
│  └── Example: $6 commission → reverse $6                    │
│                                                              │
│  RETURN (After sale completed):                             │
│  ├── Reversal: Proportional to returned value               │
│  ├── Formula: Original Commission × (Returned / Original)  │
│  └── Example:                                               │
│      Sale: $120, Commission: $6 (5%)                        │
│      Return: $80 of items                                   │
│      Reversal: $6 × ($80/$120) = $4.00                     │
│      Net Commission: $6 - $4 = $2.00                       │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Configuration:

commissions:
  default_rate_percent: 2.0
  category_rates:
    electronics: 3.0
    services: 5.0

  # Reversal rules
  reverse_on_void: true
  void_reversal_method: "full"        # 100%

  reduce_on_return: true
  return_reversal_method: "proportional"  # Based on value
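The reversal rules reduce to a single pure function. A sketch using the worked example from the diagram above (names are illustrative):

```typescript
type ReversalKind = 'void' | 'return';

// Void: full reversal (the sale never happened).
// Return: reversal proportional to the returned value.
function commissionReversal(
  kind: ReversalKind,
  originalCommission: number,
  originalSaleTotal: number,
  returnedValue: number = 0,
): number {
  if (kind === 'void') return originalCommission;
  return originalCommission * (returnedValue / originalSaleTotal);
}

// Sale $120, commission $6 (5%), $80 of items returned:
// commissionReversal('return', 6, 120, 80) -> 4.00 (net commission $2.00)
// commissionReversal('void', 6, 120)       -> 6.00 (full reversal)
```

A production version would work in integer cents with explicit rounding rules, since repeated floating-point proportions can drift by a cent.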

L.10A.6 Geographic Expansion Strategy

| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-006 |
| Context | Initial deployment in Virginia with planned expansion to other US states and international markets |
| Decision | Virginia-first with modular jurisdiction architecture |
| Phases | 1) Virginia (Day 1), 2) US expansion (Year 2), 3) International (Year 3+) |
| Reference | BRD-v12 §1.17.3 |

Expansion Strategy:

┌─────────────────────────────────────────────────────────────┐
│              GEOGRAPHIC EXPANSION ROADMAP                    │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  PHASE 1: Virginia (Day 1)                                  │
│  ├── Tax: State 4.3% + Local 1% + Regional 0.7%            │
│  ├── Gift Cards: 5-year minimum expiry allowed              │
│  └── Compliance: Virginia Consumer Protection Act           │
│                                                              │
│  PHASE 2: US Expansion                                       │
│  ├── California: No gift card expiry, $10 cash-out rule    │
│  ├── Oregon: No sales tax                                   │
│  ├── New York: Complex local taxes                          │
│  └── Florida: No income tax, tourism taxes                  │
│                                                              │
│  PHASE 3: International                                      │
│  ├── Canada: GST/HST/PST provincial variations              │
│  ├── EU: VAT with reverse charge for B2B                    │
│  └── UK: Post-Brexit VAT rules                              │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Design Principle: Always design for the most restrictive jurisdiction (California for US), then enable features where permitted.
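The Phase 1 Virginia rate stack (state 4.3% + local 1% + regional 0.7% = 6% combined) suggests storing per-jurisdiction rate components and summing them, which keeps new jurisdictions additive. A sketch in basis points so the arithmetic stays exact (structure illustrative):

```typescript
// Rates in basis points (1 bp = 0.01%) to avoid floating-point drift.
const virginiaRatesBp = { state: 430, local: 100, regional: 70 }; // 4.3% + 1% + 0.7%

function combinedRateBp(rates: Record<string, number>): number {
  return Object.values(rates).reduce((sum, r) => sum + r, 0);
}

// Tax on a subtotal in cents, rounded to the nearest cent.
function salesTaxCents(subtotalCents: number, rateBp: number): number {
  return Math.round((subtotalCents * rateBp) / 10_000);
}

// combinedRateBp(virginiaRatesBp) -> 600 (6.0% combined)
// salesTaxCents(10_000, 600)      -> 600 ($100.00 subtotal -> $6.00 tax)
```

Adding Oregon (no sales tax) is then an empty rate table, and New York's layered local taxes become additional components rather than a new code path.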

Gift Card Jurisdiction Matrix:

| Jurisdiction | Expiry Allowed | Inactivity Fee | Cash-Out Required |
|---|---|---|---|
| Virginia | Yes (5yr min) | Yes (after 12mo) | No |
| California | No | No | Yes ($10 threshold) |
| New York | No | No | No |
| Default | No | No | No |
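The matrix above, including its restrictive default row, maps directly onto a lookup table with a fallback. A sketch (the rule-object shape is illustrative):

```typescript
interface GiftCardRules {
  expiryAllowed: boolean;
  minExpiryYears?: number;      // only meaningful when expiry is allowed
  inactivityFeeAllowed: boolean;
  cashOutThresholdUsd?: number; // cash-out required for balances under this
}

const giftCardRules: Record<string, GiftCardRules> = {
  VA: { expiryAllowed: true, minExpiryYears: 5, inactivityFeeAllowed: true }, // fee after 12mo inactivity
  CA: { expiryAllowed: false, inactivityFeeAllowed: false, cashOutThresholdUsd: 10 },
  NY: { expiryAllowed: false, inactivityFeeAllowed: false },
};

// Unknown jurisdictions fall back to the most restrictive rule set,
// matching the "design for the most restrictive jurisdiction" principle.
const DEFAULT_RULES: GiftCardRules = { expiryAllowed: false, inactivityFeeAllowed: false };

function rulesFor(jurisdiction: string): GiftCardRules {
  return giftCardRules[jurisdiction] ?? DEFAULT_RULES;
}
```

New states in Phase 2 then become data entries, not code changes.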

L.10A.7 Decision Dependency Graph

┌─────────────────────────────────────────────────────────────┐
│              ARCHITECTURE DECISION DEPENDENCIES              │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│                    ┌──────────────────┐                     │
│                    │ Geographic Scope │                     │
│                    │   (ADR-BRD-006)  │                     │
│                    └────────┬─────────┘                     │
│                             │                                │
│              ┌──────────────┼──────────────┐                │
│              ▼              ▼              ▼                │
│     ┌────────────┐  ┌────────────┐  ┌────────────┐        │
│     │ Tax Engine │  │ Gift Card  │  │ Compliance │        │
│     │(ADR-BRD-002)│  │   Rules    │  │   Rules    │        │
│     └──────┬─────┘  └────────────┘  └────────────┘        │
│            │                                                │
│            ▼                                                │
│     ┌────────────┐                                         │
│     │  Offline   │───────────────────────────┐             │
│     │(ADR-BRD-001)│                          │             │
│     └──────┬─────┘                           │             │
│            │                                  ▼             │
│            ▼                          ┌────────────┐       │
│     ┌────────────┐                    │ Payment    │       │
│     │ Multi-     │                    │(ADR-BRD-003)│       │
│     │ Tenancy    │                    └────────────┘       │
│     │(ADR-BRD-004)│                                        │
│     └────────────┘                                         │
│                                                              │
└─────────────────────────────────────────────────────────────┘

L.11 Style Decision Summary

Final Selection

+------------------------------------------------------------------+
|                   ARCHITECTURE DECISION SUMMARY                    |
|                        (v2.0 - Panel Reviewed)                    |
+------------------------------------------------------------------+
|                                                                   |
|  QUESTION: What is the primary architecture style?                |
|  ANSWER:   Event-Driven Modular Monolith                          |
|                                                                   |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                    SELECTED PATTERNS                         │ |
|  ├─────────────────────────────────────────────────────────────┤ |
|  │  ✅ Modular Monolith      → Central API                     │ |
|  │  ✅ Microkernel (Plugin)  → Nexus POS                       │ |
|  │  ✅ Event-Driven          → PostgreSQL Events (v1.0)        │ |
|  │                             Kafka (v2.0, when justified)    │ |
|  │  ✅ Event Sourcing        → Sales (Full) + Inventory (Audit)│ |
|  │                             + Integrations (Audit-trail)    │ |
|  │  ✅ CQRS                  → Sales Module (Read/Write split) │ |
|  │  ✅ Online-First+Fallback → Nexus POS (API + SQLite cache)  │ |
|  │  ✅ Row-Level with RLS    → Multi-Tenant Isolation          │ |
|  │  ✅ Integration Gateway   → Module 6 (Extractable)          │ |
|  │  ✅ Circuit Breaker       → External API Resilience         │ |
|  │  ✅ Transactional Outbox  → Guaranteed Event Delivery       │ |
|  │  ✅ Provider Abstraction  → IIntegrationProvider Interface  │ |
|  │  ✅ Credential Vault      → HashiCorp Vault                 │ |
|  └─────────────────────────────────────────────────────────────┘ |
|                                                                   |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                    REJECTED PATTERNS                         │ |
|  ├─────────────────────────────────────────────────────────────┤ |
|  │  ❌ Microservices         → Too complex for current scale   │ |
|  │  ❌ Space-Based           → Too complex for financial audit │ |
|  │  ❌ Schema-Per-Tenant     → Replaced by Row-Level with RLS │ |
|  │  ❌ Kafka (v1.0)          → Deferred to v2.0               │ |
|  └─────────────────────────────────────────────────────────────┘ |
|                                                                   |
+------------------------------------------------------------------+

Document Information

| Attribute | Value |
|---|---|
| Version | 7.0.0 |
| Created | 2026-01-24 |
| Updated | 2026-03-02 |
| Source | Architecture Styles Worksheet v2.0, BRD-v18.0, Chapters 02-06 |
| Author | Claude Code |
| Reviewer | Expert Panel (Marcus Chen, Sarah Rodriguez, James O'Brien, Priya Patel) |
| Status | Active |
| Part | II - Architecture |
| Chapter | 04 of 9 |
| Previous | Chapter 12 v2.0.0 (pre-restructure numbering) |

Change Log

| Version | Date | Changes |
|---|---|---|
| 7.0.0 | 2026-03-02 | Unified web app pivot: Tauri desktop wrapper removed, Nexus POS is now a single React web application (Vite). ADR-046 (dual deployment) superseded by ADR-052 (unified web app). Nexus Admin merged into Nexus POS as role-gated screens. better-sqlite3 replaced by SQLite WASM (sql.js + OPFS) for browser-based offline fallback. Hardware layer rewritten: Tauri Rust bridge removed, replaced by Star WebPRNT (receipts), USB HID keyboard wedge (scanners), Stripe Terminal SDK (payments), kick-out cable via printer port (cash drawers). All code samples updated to sql.js WASM API patterns. L.9A client application diagrams consolidated (POS+Admin→single app). Technology stack summary merged. All Tauri, better-sqlite3, and Nexus Admin references updated throughout. ADR-048 (online-first) remains active with WASM runtime change only. |
| 6.3.0 | 2026-03-01 | Comprehensive review: 50 findings resolved. 3 new ADRs (049-051). ADR-015/037 superseded. 6 missing tables added (69 total). 13 FK type fixes. 9 RFID RLS fixes. Ch 03 K.2.1 rewritten. Ch 05: 8 offline locations rewritten, state machine 7.12 updated. 25 BRD-v12→v20 NFR citations. Appendix F: Module 7 traceability added. 51 ADRs total. |
| 6.2.0 | 2026-03-01 | Online-first pivot (ADR-048): Rewrote L.10A.1 from "Offline-First Strategy" to "Online-First with Offline Fallback". SQLite schema reduced from 6 tables to 2 (product_cache + sales_queue). Removed L.10A.1H CRDTs entirely (~350 lines). Replaced complex sync queue with simple FIFO sales flush. Upgraded connection monitor from binary to 3-state (ONLINE/DEGRADED/OFFLINE). Added flag-on-sync price discrepancy detection. Updated all code samples (SyncService→SalesQueueFlush, SaleService→online-first, ConnectionMonitor→3-state). |
| 6.1.0 | 2026-02-28 | Tech stack pivot from .NET/C# to TypeScript/Node.js. Rebranded to "Nexus". Central API: Node.js + Express/Fastify (TypeScript) with Prisma ORM. POS Client: Tauri 2.0 + React/TypeScript. Admin Portal: React + TailwindCSS + shadcn/ui. Raptag Mobile: React Native + Expo. All C# implementation blocks converted to TypeScript (SyncService, SaleService, ConnectionMonitor, CRDTs, Projectors, Tenant Middleware, DbContext). Kafka v2.0 C# blocks annotated for future kafkajs conversion. Naming pass: SignalR→Socket.io, EF Core→Prisma, FluentValidation→Zod, Serilog→Pino, xUnit→Vitest, StackExchange.Redis→ioredis, ArchUnit→dependency-cruiser. Full tech stack summary table rewritten. |
| 3.0.0 | 2026-02-22 | Consolidated implementation references from Chapters 05-09: Added L.4A.1-7 (Event Store schema, Kafka architecture, Schema Registry, DLQ pattern, Domain Events catalog, Projections, Temporal Queries, Snapshots from Ch 08); Added L.9A-9B (System Architecture diagrams, Data Flow patterns from Ch 05); Added L.9C (Domain Model bounded contexts, aggregates, ER diagram from Ch 07); Added L.10A.1A-1H (POS Client architecture, SQLite schema, Sync Queue, Conflict Resolution, Sync Processor, Sale Creation Flow, Connection Monitor, CRDTs from Ch 09); Added L.10A.4A-4D (Multi-Tenancy strategies comparison, Tenant Middleware, Provisioning workflow, Migration strategy from Ch 06) |
| 2.0.0 | 2026-02-19 | Expert panel review (6.50/10): Replaced Schema-Per-Tenant with Row-Level RLS; deferred Kafka to v2.0 (PostgreSQL Events for v1.0); added Extractable Integration Gateway for Module 6; added L.1.9 Integration Patterns (Circuit Breaker, Transactional Outbox, Provider Abstraction, ACL, Saga); added L.4A CQRS/ES Scope per module; added L.4B Integration Architecture Patterns with diagrams; replaced SonarQube-only security with 6-Gate Security Test Pyramid; added HashiCorp Vault credential architecture; updated Style Evaluation Matrix scores; added integration-specific risks and mitigations |
| 1.1.0 | 2026-01-26 | Added Section L.10A (Key Architecture Decisions from BRD-v12) with 6 ADRs |
| 1.0.0 | 2026-01-24 | Initial document |

Next Chapter: Chapter 05: Architecture Components (BRD v20.0)


This chapter is part of the POS Blueprint Book. All content is self-contained.