Chapter 04: Architecture Styles Analysis
Purpose
This chapter documents the formal architecture styles evaluation for the Nexus POS Platform. It provides the decision rationale for selecting the primary architecture style and supporting patterns, updated per expert panel review against BRD v18.0.
- Source: Architecture Styles Worksheet v2.0 (Expert Panel-Reviewed)
- Project: POS Platform (RapOS) - Implementation for Tenant “Nexus”
- Architect/Team: Cloud AI Architecture Agents
- Date: February 19, 2026
- Panel Review Score: 6.50/10 → Updated per 4-member expert panel recommendations
L.1 Candidate Architecture Styles
Based on the identified driving characteristics (Availability, Interoperability, Data Consistency), the following architecture styles were evaluated.
L.1.1 Event-Driven Architecture (EDA)
| Attribute | Value |
|---|---|
| Description | A distributed asynchronous architecture pattern used to produce highly scalable and high-performance applications. |
| Relevance to Nexus | Deeply aligned with “Interoperability” and “Data Consistency” (Sync) requirements. External channels (Amazon, Shopify) and local POS terminals produce disjointed events that must be reconciled eventually. |
| Decision | Selected (Communication Layer) |
| Key Technology | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0, when scale justifies) |
v18.0 Update: The BRD designs around PostgreSQL tables for `idempotency_records` and `integration_dead_letters` (not Kafka topics). Amazon SP-API polls every 2 minutes; Google Merchant batches 2x/day. Streaming infrastructure is not required at launch. PostgreSQL event tables with LISTEN/NOTIFY provide sufficient event notification for v1.0. Kafka adoption is deferred to v2.0, when transaction volume or real-time analytics requirements justify the operational overhead (ZooKeeper/KRaft cluster management).
L.1.2 Microservices Architecture
| Attribute | Value |
|---|---|
| Description | An architecture style that structures an application as a collection of loosely coupled services, each with its own database. |
| Relevance to Nexus | Evaluated for “Scalability,” but rejected as the primary style for the Core API. |
| Decision | Rejected |
| Rationale | The operational complexity of managing separate databases for 50+ services is unnecessary for the current scale. |
L.1.3 Microkernel (Plugin) Architecture
| Attribute | Value |
|---|---|
| Description | A core system with a plugin interface to add additional features. |
| Relevance to Nexus | Directly addresses the “Modifiability” requirement. The Blueprint specifies “Integration Adapters” (Payment, Tax) and a “Hardware Layer” in the client, fitting this pattern. |
| Decision | Selected (Client) |
L.1.4 Modular Monolith (Layered) Architecture
| Attribute | Value |
|---|---|
| Description | A single deployable unit (“Central API”) structured into distinct, loosely coupled modules (Catalog, Sales, Inventory) that enforce strict boundaries. |
| Relevance to Nexus | High Fit. The Blueprint describes a “Central API Layer” (Stateless) containing all core services. This offers the modularity of microservices without the distributed complexity, aligning with the “Simplicity” and “Maintenance” goals. |
| Decision | Selected (Core API) |
v18.0 Update — Extractable Integration Gateway: Module 6 (Integrations, 4,800+ lines) is designed as a logically separate module within the monolith with explicit boundary contracts:
an `IIntegrationProvider` interface, async messaging via the Transactional Outbox, and dedicated error handling (ERR-6xxx range). This module can be extracted to a separate service when scale demands independent deployment, without changing the core POS modules. Circuit breaker isolation ensures external API failures (Amazon, Google, Shopify) cannot cascade to POS checkout operations.
L.1.5 Service-Based Architecture
| Attribute | Value |
|---|---|
| Description | A hybrid style with coarse-grained services (e.g., Inventory, Sales, HR) often sharing a database. |
| Relevance to Nexus | Offers a middle ground. The Blueprint’s “Service Layer” within the Central API follows this structure logically. |
| Decision | Middle ground (influences internal structure) |
L.1.6 Space-Based Architecture
| Attribute | Value |
|---|---|
| Description | Designed for high scalability and concurrency using tuple spaces (distributed caching/in-memory grids). |
| Relevance to Nexus | Could handle “Black Friday” spikes, but data consistency (synchronization to persistent storage) is too complex for the strict financial audit requirements. |
| Decision | Rejected |
| Rationale | Too complex for financial audit requirements |
L.1.7 Event Sourcing (Architecture Pattern)
| Attribute | Value |
|---|---|
| Description | A data persistence pattern where state transitions are stored as a sequence of immutable events (e.g., ItemAdded, PaymentAuthorized) rather than just the current state. |
| Relevance to Nexus | Critical. The Blueprint (Section L.4A below) mandates this for the “Sales” and “Inventory” domains to enable “Offline Conflict Resolution,” “Complete Audit Trails,” and “Temporal Queries” (Time Travel). |
| Decision | Selected (Sales & Inventory Domains) |
| Key Technology | PostgreSQL 16 (Append-Only Event Table); Apache Kafka deferred to v2.0 (see L.1.1) |
L.1.8 Offline-First (Architecture Pattern)
| Attribute | Value |
|---|---|
| Description | Design pattern where the application functions fully offline with local data storage, syncing when connectivity is available. |
| Relevance to Nexus | Critical. POS terminals must operate during network outages. |
| Decision | Selected (Client) |
| Key Technology | SQLite (Local Storage) |
L.1.9 Integration Patterns (BRD v18.0 Module 6)
BRD v18.0 Section 6.2 mandates 5 integration patterns that are architecturally significant. These were evaluated during the expert panel review, and all five were selected.
| Pattern | Description | Decision | BRD Reference |
|---|---|---|---|
| Circuit Breaker | State machine (CLOSED → OPEN → HALF_OPEN) that prevents cascading failures from external APIs. Trips after 5 failures within 60 seconds; 30-second cooldown. | Selected | §6.2.4 |
| Transactional Outbox | Atomic write of business data + outbox event in the same database transaction. A relay process polls the outbox and publishes events, guaranteeing at-least-once delivery without distributed transactions. | Selected | §6.2.3, §6.7.3 |
| Provider Abstraction (Strategy) | IIntegrationProvider interface with 5 standard methods (Connect, Sync, Validate, Publish, HealthCheck) implemented per provider. Enables uniform handling regardless of provider protocol. | Selected | §6.2.1 |
| Anti-Corruption Layer (ACL) | Per-provider translation layer preventing external schema changes from leaking into core domain models. Each provider maps external DTOs to internal domain events. | Selected | §6.2.7 |
| Saga / Orchestration | Cross-platform inventory sync orchestrated as a saga with compensation actions. If a Shopify inventory update succeeds but Amazon fails, the saga compensates by rolling back the Shopify change. | Selected (cross-platform flows) | §6.7 |
Circuit Breaker State Machine:
┌──────────────────────────────────────────────────────────┐
│ CIRCUIT BREAKER STATE MACHINE │
├──────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ 5 failures ┌──────────┐ │
│ │ CLOSED │ ──────────────►│ OPEN │ │
│ │ (Normal) │ in 60 sec │ (Reject) │ │
│ └────┬─────┘ └────┬─────┘ │
│ ▲ │ │
│ │ success │ 30 sec cooldown │
│ │ ▼ │
│ │ ┌───────────┐ │
│ └────────────────────│ HALF_OPEN │ │
│ │ (1 probe) │ │
│ failure ──────────└───────────┘──► OPEN │
│ │
└──────────────────────────────────────────────────────────┘
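The state machine above can be sketched as follows. This is an illustrative Python sketch, not the production .NET implementation; it encodes the thresholds stated in §6.2.4 (trip OPEN after 5 failures within 60 seconds, 30-second cooldown, one HALF_OPEN probe).

```python
# Illustrative circuit breaker sketch (not the production implementation):
# CLOSED -> OPEN after 5 failures within 60 s; OPEN -> HALF_OPEN after a
# 30 s cooldown; HALF_OPEN allows one probe, then closes or re-opens.
import time


class CircuitBreaker:
    def __init__(self, failure_threshold=5, window_sec=60,
                 cooldown_sec=30, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.window_sec = window_sec
        self.cooldown_sec = cooldown_sec
        self.clock = clock
        self.state = "CLOSED"
        self.failures = []          # timestamps of recent failures
        self.opened_at = None

    def allow_request(self):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.cooldown_sec:
                self.state = "HALF_OPEN"   # permit a single probe
                return True
            return False                   # reject while cooling down
        return True

    def record_success(self):
        self.state = "CLOSED"
        self.failures.clear()

    def record_failure(self):
        now = self.clock()
        if self.state == "HALF_OPEN":
            self._trip(now)                # probe failed: back to OPEN
            return
        # Keep only failures inside the rolling 60-second window
        self.failures = [t for t in self.failures
                         if now - t < self.window_sec]
        self.failures.append(now)
        if len(self.failures) >= self.failure_threshold:
            self._trip(now)

    def _trip(self, now):
        self.state = "OPEN"
        self.opened_at = now
        self.failures.clear()
```

An injected clock makes the window and cooldown behavior unit-testable without real waits.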
L.2 Style Evaluation Matrix
Ratings: 1 (Poor) to 5 (Excellent)
Monolithic Styles
| Style | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Layered (Traditional) | ★★☆☆☆ | ★★☆☆☆ | ★★★★☆ | Backend only |
| Modular Monolith | ★★★☆☆ | ★★★☆☆ | ★★★★☆ | Selected (Core) |
| Microkernel (Plugin) | ★★★☆☆ | ★★★★★ | ★★★☆☆ | Selected (Client) |
v18.0 Note: Modular Monolith Interoperability reduced from 4★ to 3★. Module 6 requires 6 provider families with different scaling needs — a monolith cannot independently scale individual providers. Mitigated by Extractable Integration Gateway design.
Distributed Styles
| Style | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Service-Based | ★★★★☆ | ★★★★☆ | ★★★☆☆ | Eventual |
| Event-Driven (EDA) | ★★★★★ | ★★★★★ | ★★☆☆☆ | Selected (Comm Layer) |
| Space-Based | ★★★★★ | ★★★☆☆ | ★☆☆☆☆ | Too Complex |
| Microservices | ★★★★☆ | ★★★★☆ | ★☆☆☆☆ | Hard Sync |
v18.0 Note: Service-Based Interoperability raised from 3★ to 4★. Coarse-grained services can independently deploy integration providers.
Patterns
| Pattern | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Event Sourcing | ★★★☆☆ | ★★★★☆ | ★★★★★ | Selected (Audit/Sync) |
| Offline-First | ★★★★★ | ★★☆☆☆ | ★★★☆☆ | Selected (Client) |
| Integration Patterns | ★★★★☆ | ★★★★★ | ★★★★☆ | Selected (Module 6) |
L.3 Key Trade-off Analysis
Trade-off 1: Availability vs. Consistency
| Aspect | Decision |
|---|---|
| Conflict | The “Offline First” requirement mandates we cannot rely on immediate cloud consistency. |
| Resolution | We must accept Eventual Consistency for inventory sync. |
| Mitigation | Event Sourcing enables deterministic replay to resolve conflicts. |
Trade-off 2: Complexity (Event Sourcing + PostgreSQL Events)
| Aspect | Decision |
|---|---|
| Conflict | Event Sourcing adds complexity compared to standard CRUD. Original design included Apache Kafka for streaming, adding operational burden (ZooKeeper/KRaft). |
| Resolution | Event Sourcing retained for Sales and Inventory domains. Kafka deferred to v2.0. v1.0 uses PostgreSQL event tables with LISTEN/NOTIFY for event notification and Transactional Outbox for guaranteed delivery. |
| Benefit | Preserves event replay capability and audit trail while eliminating Kafka operational complexity. PostgreSQL event tables match BRD’s existing idempotency_records and integration_dead_letters table designs. |
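The Transactional Outbox flow referenced above can be sketched as an in-memory simulation. This is illustrative only: the `sales` and `event_outbox` names mirror the schema designs elsewhere in this chapter, but the "transaction" here is simulated, not a real database transaction.

```python
# Minimal in-memory sketch of the Transactional Outbox pattern: business
# data and the outbox event are written atomically, and a relay process
# polls pending events, giving at-least-once delivery.
class Database:
    def __init__(self):
        self.sales = []
        self.event_outbox = []

    def transaction(self, sale, event):
        # Both writes commit together: either the sale AND its outbox
        # event exist, or neither does (simulated here).
        self.sales.append(sale)
        self.event_outbox.append({"event": event, "status": "pending"})


def relay_poll(db, publish):
    """Relay process: publish pending events, then mark them processed.
    If the relay crashes between publish and mark, the event is sent
    again on the next poll -- hence at-least-once delivery."""
    delivered = 0
    for row in db.event_outbox:
        if row["status"] == "pending":
            publish(row["event"])          # may repeat after a crash
            row["status"] = "processed"
            delivered += 1
    return delivered
```

Consumers must therefore be idempotent, which is what the `idempotency_records` table in the BRD design provides.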
Trade-off 3: Deployment Simplicity (Modular Monolith)
| Aspect | Decision |
|---|---|
| Conflict | Microservices offer independent scaling but add operational overhead. |
| Resolution | Choosing a Modular Monolith (“Central API”) over Microservices. Row-Level Isolation with RLS for multi-tenancy. |
| Benefit | Reduces deployment complexity (one container vs. dozens). Module 6 designed as Extractable Integration Gateway — can be split into a separate service when scale demands it, without changing core POS modules. |
L.4 Selected Architecture Strategy
Primary Declaration
| Attribute | Selection |
|---|---|
| Primary Style | Event-Driven Modular Monolith (Central API) |
| Key Patterns | Event Sourcing (scoped), CQRS (scoped), Offline-First, Row-Level Isolation with RLS |
| Event Infrastructure | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0) |
| Integration Strategy | Extractable Integration Gateway (Module 6) |
| Credential Management | HashiCorp Vault |
Architecture Layer Mapping
| Layer | Style/Pattern | Technology |
|---|---|---|
| POS Client | Microkernel (Plugin) + Offline-First | .NET MAUI, SQLite |
| Central API | Modular Monolith | ASP.NET Core 8.0 |
| Communication | Event-Driven | PostgreSQL Events + LISTEN/NOTIFY (v1.0) |
| Data Persistence | Event Sourcing (scoped) + CQRS (scoped) | PostgreSQL 16 |
| Multi-Tenancy | Row-Level Isolation with RLS | PostgreSQL RLS + tenant_id |
| Integration | Extractable Integration Gateway | Module 6, IIntegrationProvider |
| Secrets | Credential Vault | HashiCorp Vault (Docker) |
L.4A CQRS & Event Sourcing Scope
The expert panel identified that CQRS and Event Sourcing scope was undefined. This section clarifies which modules use which patterns, per user decision.
| Module | CQRS | Event Sourcing | Pattern Description |
|---|---|---|---|
| Module 1: Sales | Full CQRS | Full Event Sourcing | Separate read/write models. Events: SaleCreated, PaymentProcessed, ReturnInitiated, VoidExecuted. Event replay for audit and conflict resolution. |
| Module 2: Customers | Standard CRUD | None | Direct query against current-state tables. Simple read/write through repository pattern. |
| Module 3: Catalog | Standard CRUD | None | Read-heavy workload optimized with caching (Redis). Product data served from current-state tables. |
| Module 4: Inventory | Materialized read model | ES for audit trail | Current inventory levels maintained in materialized view. Event Sourcing captures all stock movements for audit trail and conflict resolution (offline sync). |
| Module 5: Setup | Standard CRUD | None | Configuration data accessed directly. Changes logged but not event-sourced. |
| Module 6: Integrations | Standard CRUD | Audit-trail-only ES | Sync logs stored as event stream for debugging and compliance. No event replay for operational queries — current sync state maintained in tables. |
| Section 7: State Machines | N/A | Events drive transitions | 16 state machines powered by domain events. State transitions recorded as events. Database-driven implementation (see below). |
State Machine Implementation: Database-driven pattern using a state column on the entity table plus a state_transitions reference table. This approach provides:
- State column: Each stateful entity (e.g., `orders.status`, `returns.status`) stores its current state directly
- Transition table: `state_transitions(from_state, to_state, event, guard_condition, action)` defines the allowed transitions per entity type
- Validation: The application layer validates transitions against the table before applying them (preventing invalid state changes)
- Audit: Every transition is logged with timestamp, actor, and triggering event
- Benefits: Declarative (non-code) transition rules, easy to modify without deployment, queryable transition history
Design Note: State machines are NOT implemented via Event Sourcing replay. The `state` column holds current truth; ES events record the history. This separation keeps state lookups O(1) while maintaining a full audit trail.
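The database-driven validation step can be sketched as follows. This is an illustrative Python sketch: the transition table is modeled as a dictionary, and the entity and state names (`draft`, `pending`, etc.) are hypothetical examples, not the 16 production state machines.

```python
# Sketch of database-driven state machine validation: the
# state_transitions table is modeled as (from_state, event) -> to_state
# rows; the application validates before applying and logs every change.
STATE_TRANSITIONS = {
    ("draft", "submit"): "pending",
    ("pending", "authorize"): "authorized",
    ("pending", "void"): "voided",
    ("authorized", "complete"): "completed",
}


class InvalidTransition(Exception):
    pass


def apply_event(entity, event, audit_log, actor):
    """Validate the transition against the table, apply it, and record
    the change in the audit trail."""
    key = (entity["status"], event)
    if key not in STATE_TRANSITIONS:
        raise InvalidTransition(
            f"{event!r} not allowed from state {entity['status']!r}")
    old = entity["status"]
    entity["status"] = STATE_TRANSITIONS[key]
    audit_log.append({"from": old, "to": entity["status"],
                      "event": event, "actor": actor})
    return entity
```

Because the rules live in data rather than code, adding or changing a transition is a row update, not a deployment.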
Event Sourcing vs. Audit Log Relationship: Event Sourcing and the audit log serve separate concerns and are complementary:
- Event Sourcing (Modules 1, 4, 6): Domain events that represent business state changes. Used for: event replay (Sales), conflict resolution (Inventory), sync debugging (Integrations). Stored in event store tables.
- Audit Log: Cross-cutting compliance record of who did what and when. Captures: user identity, IP address, action performed, timestamp, before/after values. Stored in a dedicated `audit_log` table.
- Relationship: ES events feed INTO the audit log (via event handlers), but the audit log also captures non-ES actions (e.g., login attempts, configuration changes, report generation). The audit log is the compliance artifact; ES is the domain modeling tool.
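The two paths into the audit log can be sketched as follows. This is an illustrative Python sketch; the field names are simplified examples of the compliance record described above, not the production schema.

```python
# Sketch of the dual-path audit log: an event handler copies ES domain
# events into the log, while non-event-sourced actions (logins, config
# changes) are written to the same log directly.
audit_log = []


def on_domain_event(event, actor):
    # Path 1: ES events feed into the audit log via event handlers.
    audit_log.append({"source": "event_store",
                      "action": event["type"],
                      "actor": actor})


def record_action(action, actor, ip):
    # Path 2: non-ES actions still land in the compliance record.
    audit_log.append({"source": "direct", "action": action,
                      "actor": actor, "ip": ip})
```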
Event Sourcing Implementation Pattern:
┌──────────────────────────────────────────────────────────┐
│ EVENT SOURCING PATTERN (Sales Module) │
├──────────────────────────────────────────────────────────┤
│ │
│ Command ──► Aggregate ──► Domain Events ──► Event Store │
│ │ │
│ ▼ │
│ Event Handlers │
│ ┌─────────────┐ │
│ │ Read Model │ (CQRS) │
│ │ Projections │ │
│ └─────────────┘ │
│ ┌─────────────┐ │
│ │ Audit Log │ │
│ │ (Immutable) │ │
│ └─────────────┘ │
│ ┌─────────────┐ │
│ │ Integration │ │
│ │ Outbox │ │
│ └─────────────┘ │
│ │
│ Queries ──► Read Model (Materialized View) ──► Response │
│ │
└──────────────────────────────────────────────────────────┘
L.4A.1 Event Store Implementation
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Event Store Schema (PostgreSQL)
The append-only event store is the source of truth:
-- Event Store Schema
CREATE TABLE events (
id BIGSERIAL PRIMARY KEY,
event_id UUID UNIQUE NOT NULL DEFAULT gen_random_uuid(),
aggregate_type VARCHAR(100) NOT NULL, -- 'Sale', 'Inventory', 'Customer'
aggregate_id UUID NOT NULL, -- The entity this event belongs to
event_type VARCHAR(100) NOT NULL, -- 'SaleCreated', 'ItemAdded'
event_data JSONB NOT NULL, -- Full event payload
metadata JSONB NOT NULL DEFAULT '{}', -- Correlation, causation IDs
version INTEGER NOT NULL, -- Aggregate version (for optimistic concurrency)
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID, -- Employee who caused the event
-- Optimistic concurrency: aggregate_id + version must be unique
UNIQUE (aggregate_type, aggregate_id, version)
);
-- Indexes for common queries
CREATE INDEX idx_events_aggregate ON events (aggregate_type, aggregate_id);
CREATE INDEX idx_events_type ON events (event_type);
CREATE INDEX idx_events_created_at ON events USING BRIN (created_at);
CREATE INDEX idx_events_metadata ON events USING GIN (metadata);
-- Snapshots table (for performance on long event streams)
CREATE TABLE snapshots (
id BIGSERIAL PRIMARY KEY,
aggregate_type VARCHAR(100) NOT NULL,
aggregate_id UUID NOT NULL,
version INTEGER NOT NULL,
state JSONB NOT NULL, -- Serialized aggregate state
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE (aggregate_type, aggregate_id)
);
-- Outbox table (for reliable event publishing)
CREATE TABLE event_outbox (
id BIGSERIAL PRIMARY KEY,
event_id UUID NOT NULL REFERENCES events(event_id),
destination VARCHAR(100) NOT NULL, -- 'signalr', 'webhook', 'sync'
status VARCHAR(20) DEFAULT 'pending',
attempts INTEGER DEFAULT 0,
last_error TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
processed_at TIMESTAMPTZ
);
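The optimistic concurrency guarantee provided by the `UNIQUE (aggregate_type, aggregate_id, version)` constraint can be sketched as follows. This is an in-memory Python illustration of the constraint's effect, not the PostgreSQL implementation itself.

```python
# Sketch of optimistic concurrency on the event store: a writer appends
# with the version it last read; if another writer got there first, the
# append is rejected (mirroring the UNIQUE constraint above).
class ConcurrencyConflict(Exception):
    pass


class EventStore:
    def __init__(self):
        self.events = []                   # append-only, like the table

    def current_version(self, agg_type, agg_id):
        versions = [e["version"] for e in self.events
                    if e["aggregate_type"] == agg_type
                    and e["aggregate_id"] == agg_id]
        return max(versions, default=0)

    def append(self, agg_type, agg_id, event_type, data, expected_version):
        # The new version must be exactly one past what the writer read.
        if self.current_version(agg_type, agg_id) != expected_version:
            raise ConcurrencyConflict("aggregate changed since last read")
        self.events.append({
            "aggregate_type": agg_type, "aggregate_id": agg_id,
            "event_type": event_type, "event_data": data,
            "version": expected_version + 1,
        })
```

In PostgreSQL the same check is enforced by the unique index: the losing writer receives a unique-violation error and must re-read and retry.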
Event Sourcing Architecture Diagram
Event Sourcing Architecture
===========================
+-------------------------------------------------------------------------+
| POS CLIENT |
| |
| +------------------+ +-------------------+ +-----------------+ |
| | Command Handler | | Event Store | | Projector | |
| | | | (Local SQLite) | | (Read Model) | |
| | CreateSale |--->| |--->| | |
| | VoidSale | | SaleCreated | | sale_summaries | |
| | AddPayment | | ItemAdded | | inventory_view | |
| +------------------+ | PaymentReceived | +-----------------+ |
| +-------------------+ |
| | |
+-------------------------------------------------------------------------+
| Sync
v
+-------------------------------------------------------------------------+
| CENTRAL API |
| |
| +------------------+ +-------------------+ +-----------------+ |
| | Command Handler | | Event Store | | Projector | |
| | (Validates) | | (PostgreSQL) | | (Read Model) | |
| | |<---| |--->| | |
| | Deduplication | | All tenant events | | sales | |
| | Conflict Check | | Append-only | | inventory_items | |
| +------------------+ | Immutable | | customers | |
| +-------------------+ +-----------------+ |
+-------------------------------------------------------------------------+
CQRS Pattern
CQRS Pattern
============
+----------------------+
| User Action |
+----------+-----------+
|
+----------------------+----------------------+
| |
v v
+-------------------+ +-------------------+
| COMMAND | | QUERY |
| (Write) | | (Read) |
+-------------------+ +-------------------+
| |
v v
+-------------------+ +-------------------+
| Command Handler | | Query Handler |
| - Validate | | - No validation |
| - Business rules | | - Fast lookup |
| - Generate events | | - Denormalized |
+-------------------+ +-------------------+
| ^
v |
+-------------------+ +-------------------+
| Event Store |----------------------->| Read Models |
| (Append-only) | Projections | (Optimized) |
+-------------------+ +-------------------+
Write Side (Commands)
// Commands - Express intent
public record CreateSaleCommand(
Guid SaleId,
Guid LocationId,
Guid EmployeeId,
Guid? CustomerId,
List<SaleLineItemDto> LineItems
);
public record VoidSaleCommand(
Guid SaleId,
Guid EmployeeId,
string Reason
);
public record AddPaymentCommand(
Guid SaleId,
string PaymentMethod,
decimal Amount,
string? Reference
);
Read Side (Queries)
// Queries - Request data
public record GetSaleByIdQuery(Guid SaleId);
public record GetDailySalesQuery(Guid LocationId, DateTime Date);
public record GetInventoryLevelQuery(string Sku, Guid LocationId);
// Read models - Optimized for queries
public class SaleSummaryView
{
public Guid Id { get; set; }
public string SaleNumber { get; set; }
public string CustomerName { get; set; } // Denormalized
public string EmployeeName { get; set; } // Denormalized
public decimal Total { get; set; }
public string Status { get; set; }
public DateTime CreatedAt { get; set; }
}
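The projection step that connects the write side to read models like `SaleSummaryView` can be sketched as a fold over the event stream. This is an illustrative Python sketch; the event shapes are simplified assumptions, not the exact production payloads.

```python
# Sketch of a CQRS projection: fold the Sales event stream into a
# denormalized read model. Event field names here are illustrative.
def project_sale_summary(events):
    summary = {"total": 0.0, "status": "open", "line_items": 0}
    for e in events:
        if e["type"] == "SaleCreated":
            summary["sale_id"] = e["sale_id"]
        elif e["type"] == "ItemAdded":
            summary["line_items"] += 1
            summary["total"] += e["price"] * e["qty"]
        elif e["type"] == "PaymentReceived":
            summary["status"] = "paid"
        elif e["type"] == "SaleVoided":
            summary["status"] = "voided"
    return summary
```

Because the read model is derived, it can be rebuilt at any time by replaying the event stream from version 1, which is exactly what makes temporal queries and conflict resolution possible.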
L.4A.2 Event Streaming (Apache Kafka)
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here). Per the v18.0 update in L.1.1, Kafka adoption is deferred to v2.0; this section is retained as the implementation reference for that phase:
Technology Selection
| Attribute | Selection |
|---|---|
| Platform | Apache Kafka |
| Version | 3.6+ (with KRaft mode) |
| Primary Rationale | Replayability |
Why Kafka over alternatives?
| Alternative | Why Not Selected |
|---|---|
| RabbitMQ | No native replay; messages deleted after consumption |
| Redis Streams | Less durable; not designed for long-term event storage |
| AWS SQS | No replay capability; messages expire |
| PostgreSQL LISTEN/NOTIFY | Notifications are fire-and-forget with no replay; sufficient for v1.0 when paired with persistent event tables, but not for v2.0 streaming scale |
Kafka Replayability
+------------------------------------------------------------------+
| KAFKA REPLAYABILITY |
+------------------------------------------------------------------+
| |
| Event Log (Immutable, Ordered): |
| |
| Partition 0: [E1] -> [E2] -> [E3] -> [E4] -> [E5] -> ... |
| ^ ^ |
| | | |
| Consumer Group A: ─────┘ | (Processed up to E2) |
| Consumer Group B: ────────────────────┘ (Processed up to E4) |
| |
| NEW Consumer Group C can start from E1 and replay ALL events! |
| |
+------------------------------------------------------------------+
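The replay semantics shown above can be sketched with a minimal partition model. This is an illustrative Python sketch of the offset mechanics, under the assumption (as in Kafka) that the log is immutable and each consumer group tracks only its own offset.

```python
# Sketch of Kafka-style replayability: the log is append-only and each
# consumer group tracks an offset into it, so a brand-new group can
# re-read the entire history from offset 0.
class PartitionLog:
    def __init__(self):
        self.log = []                      # immutable, append-only
        self.offsets = {}                  # consumer group -> next offset

    def append(self, event):
        self.log.append(event)

    def poll(self, group, max_records=10):
        start = self.offsets.get(group, 0)  # new groups start at 0
        batch = self.log[start:start + max_records]
        self.offsets[group] = start + len(batch)
        return batch
```

This is the property RabbitMQ and SQS lack: consumption there deletes the message, so a late-arriving consumer cannot reconstruct history.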
Kafka Topics Architecture
POS Kafka Topics
================
┌────────────────────────────────────────────────────────────────┐
│ TOPIC STRUCTURE │
├────────────────────────────────────────────────────────────────┤
│ │
│ pos.events.sales - All sale-related events │
│ ├── Partition 0 (Location A) │
│ ├── Partition 1 (Location B) │
│ └── Partition N (Location N) │
│ │
│ pos.events.inventory - Inventory movements │
│ ├── Partition 0-N (By SKU hash) │
│ │
│ pos.events.customers - Customer activity │
│ ├── Partition 0-N (By customer hash) │
│ │
│ pos.sync.outbound - Events to sync to external systems │
│ ├── Shopify, Amazon, etc. │
│ │
│ pos.sync.inbound - Events from external systems │
│ ├── Online orders, inventory updates │
│ │
└────────────────────────────────────────────────────────────────┘
Kafka Configuration (Docker Compose)
# docker-compose.kafka.yml
services:
kafka:
image: confluentinc/cp-kafka:7.5.0
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LOG_RETENTION_HOURS: 168 # 7 days
KAFKA_LOG_RETENTION_BYTES: 10737418240 # 10GB per partition
KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
ports:
- "9092:9092"
volumes:
- kafka_data:/var/lib/kafka/data
kafka-ui:
image: provectuslabs/kafka-ui:latest
environment:
KAFKA_CLUSTERS_0_NAME: pos-cluster
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
ports:
- "8090:8080"
Event Publishing Pattern
// KafkaEventPublisher.cs
public class KafkaEventPublisher : IEventPublisher
{
private readonly IProducer<string, string> _producer;
private readonly ILogger<KafkaEventPublisher> _logger;
public async Task PublishAsync<T>(T @event, CancellationToken ct = default)
where T : IDomainEvent
{
var topic = GetTopicForEvent(@event);
var key = GetPartitionKey(@event); // e.g., LocationId for ordering
var message = new Message<string, string>
{
Key = key,
Value = JsonSerializer.Serialize(@event),
Headers = new Headers
{
{ "event-type", Encoding.UTF8.GetBytes(@event.GetType().Name) },
{ "correlation-id", Encoding.UTF8.GetBytes(@event.CorrelationId.ToString()) },
{ "tenant-id", Encoding.UTF8.GetBytes(@event.TenantId.ToString()) }
}
};
var result = await _producer.ProduceAsync(topic, message, ct);
_logger.LogDebug(
"Published {EventType} to {Topic}:{Partition}@{Offset}",
@event.GetType().Name,
result.Topic,
result.Partition.Value,
result.Offset.Value
);
}
private string GetTopicForEvent(IDomainEvent @event) => @event switch
{
SaleCreated or SaleCompleted or SaleVoided => "pos.events.sales",
InventoryReceived or InventorySold => "pos.events.inventory",
CustomerCreated or LoyaltyPointsEarned => "pos.events.customers",
_ => "pos.events.general"
};
}
Schema Registry & Event Versioning
Overview
As the POS platform evolves, event schemas will change. Schema Registry provides:
- Schema Validation: Prevent incompatible events from being published
- Schema Evolution: Safe migrations without breaking consumers
- Schema History: Version tracking for all event types
| Attribute | Selection |
|---|---|
| Tool | Confluent Schema Registry |
| Format | Avro (Primary) or Protobuf |
| Strategy | BACKWARD compatibility |
Schema Registry Architecture
┌─────────────────────────────────────────────────────────────────┐
│ SCHEMA REGISTRY FLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌──────────────────┐ ┌─────────────┐ │
│ │ Producer │ │ Schema Registry │ │ Consumer │ │
│ │ (POS API) │ │ (Confluent) │ │ (Analytics) │ │
│ └──────┬──────┘ └────────┬─────────┘ └──────┬──────┘ │
│ │ │ │ │
│ 1. Register/Get Schema │ │ │
│ │ ─────────────────> │ │ │
│ │ │ │ │
│ 2. Schema ID returned │ │ │
│ │ <───────────────── │ │ │
│ │ │ │ │
│ 3. Publish event with │ │ │
│ schema ID prefix │ │ │
│ │ ─────────────────────────────────────────> │ │
│ │ │ │ │
│ │ 4. Consumer fetches │ │
│ │ schema by ID │ │
│ │ <─────────────────── │ │
│ │ │ │
│ │ 5. Deserialize with │ │
│ │ correct schema │ │
│ │
└─────────────────────────────────────────────────────────────────┘
Avro Schema Definition (SaleCreated)
// schemas/sale-created.avsc
{
"type": "record",
"name": "SaleCreated",
"namespace": "io.posplatform.events.sales",
"doc": "Event fired when a new sale is initiated",
"fields": [
{
"name": "eventId",
"type": { "type": "string", "logicalType": "uuid" },
"doc": "Unique event identifier"
},
{
"name": "saleId",
"type": { "type": "string", "logicalType": "uuid" },
"doc": "Sale aggregate identifier"
},
{
"name": "tenantId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "locationId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "employeeId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "customerId",
"type": ["null", { "type": "string", "logicalType": "uuid" }],
"default": null,
"doc": "Optional customer for loyalty"
},
{
"name": "saleNumber",
"type": "string"
},
{
"name": "createdAt",
"type": { "type": "long", "logicalType": "timestamp-millis" }
},
{
"name": "metadata",
"type": {
"type": "map",
"values": "string"
},
"default": {}
}
]
}
Schema Evolution Rules (BACKWARD Compatibility)
| Change | Allowed? | Notes |
|---|---|---|
| Add field with default | Yes | New consumers can read old messages |
| Remove field with default | Yes | Old consumers ignore missing field |
| Add field without default | No | Old messages fail validation |
| Remove required field | No | New messages fail for old consumers |
| Change field type | No | Type mismatch errors |
| Rename field | No | Use aliases instead |
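The BACKWARD-compatibility rules in the table above can be sketched as a simple check. This is an illustrative Python sketch, not the Schema Registry's actual algorithm; schemas are simplified to a `name -> {type, has_default}` mapping.

```python
# Sketch of the BACKWARD compatibility rule: the NEW (reader) schema can
# decode OLD messages only if every field it adds has a default, and no
# shared field changes type. Removed fields are simply ignored.
def is_backward_compatible(old_fields, new_fields):
    for name, spec in new_fields.items():
        if name not in old_fields and not spec["has_default"]:
            return False                   # old messages lack this field
        if name in old_fields and spec["type"] != old_fields[name]["type"]:
            return False                   # type changes break decoding
    return True
```

This is why the v2 `SaleCreated` example below adds `channel` with `"default": "in_store"` and makes `referralCode` nullable with `"default": null`.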
Schema Evolution Example (v2)
// schemas/sale-created-v2.avsc (BACKWARD COMPATIBLE)
{
"type": "record",
"name": "SaleCreated",
"namespace": "io.posplatform.events.sales",
"fields": [
// ... existing fields ...
// NEW FIELD - Added with default value (BACKWARD COMPATIBLE)
{
"name": "channel",
"type": "string",
"default": "in_store",
"doc": "Sales channel: in_store, online, mobile"
},
// NEW OPTIONAL FIELD (BACKWARD COMPATIBLE)
{
"name": "referralCode",
"type": ["null", "string"],
"default": null
}
]
}
Producer Configuration with Schema Registry
// Infrastructure/Messaging/SchemaRegistryProducer.cs
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;
public class SchemaRegistryProducer<TKey, TValue> : IEventPublisher
where TValue : ISpecificRecord
{
    private readonly IProducer<TKey, TValue> _producer;
    private readonly ILogger _logger;

    public SchemaRegistryProducer(
        string bootstrapServers,
        string schemaRegistryUrl,
        ILogger logger)
    {
        _logger = logger;
var schemaRegistryConfig = new SchemaRegistryConfig
{
Url = schemaRegistryUrl
};
var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);
var producerConfig = new ProducerConfig
{
BootstrapServers = bootstrapServers,
Acks = Acks.All, // Wait for all replicas
EnableIdempotence = true
};
_producer = new ProducerBuilder<TKey, TValue>(producerConfig)
.SetKeySerializer(new AvroSerializer<TKey>(schemaRegistry))
.SetValueSerializer(new AvroSerializer<TValue>(schemaRegistry, new AvroSerializerConfig
{
// Fail if schema is not compatible
AutoRegisterSchemas = false,
SubjectNameStrategy = SubjectNameStrategy.TopicRecord
}))
.Build();
}
public async Task PublishAsync(
string topic,
TKey key,
TValue value,
CancellationToken ct = default)
{
var result = await _producer.ProduceAsync(topic, new Message<TKey, TValue>
{
Key = key,
Value = value
}, ct);
        _logger.LogDebug(
            "Published {EventType} to {Topic}:{Partition}@{Offset}",
            typeof(TValue).Name,
            result.Topic,
            result.Partition.Value,
            result.Offset.Value
        );
}
}
CI/CD Schema Validation
# .github/workflows/schema-validation.yml
name: Schema Validation
on:
  pull_request:
    paths:
      - 'schemas/**'
  push:
    branches: [main]
    paths:
      - 'schemas/**'
jobs:
validate-schemas:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Start Schema Registry
run: |
docker compose -f docker/docker-compose.kafka.yml up -d schema-registry
sleep 10
- name: Test Schema Compatibility
run: |
for schema in schemas/*.avsc; do
subject=$(basename "$schema" .avsc)-value
echo "Testing compatibility for $subject"
# Check if schema is BACKWARD compatible with existing
          curl -s -X POST \
            -H "Content-Type: application/vnd.schemaregistry.v1+json" \
            -d "{\"schema\": $(jq -Rs . < "$schema")}" \
            "http://localhost:8081/compatibility/subjects/$subject/versions/latest" \
            | jq -e '.is_compatible == true' || exit 1
done
- name: Register Schemas (on merge to main)
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
run: |
for schema in schemas/*.avsc; do
subject=$(basename "$schema" .avsc)-value
curl -X POST \
-H "Content-Type: application/vnd.schemaregistry.v1+json" \
-d "{\"schema\": $(cat "$schema" | jq -Rs .)}" \
"http://localhost:8081/subjects/$subject/versions"
done
Docker Compose with Schema Registry
# docker/docker-compose.kafka.yml (updated)
services:
schema-registry:
image: confluentinc/cp-schema-registry:7.5.0
container_name: pos-schema-registry
depends_on:
- kafka
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
# Enforce BACKWARD compatibility by default
SCHEMA_REGISTRY_SCHEMA_COMPATIBILITY_LEVEL: BACKWARD
L.4A.3 Dead Letter Queue Pattern
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Overview
When event processing fails (malformed data, business rule violations, transient errors), messages go to a Dead Letter Queue for investigation and replay.
| Attribute | Selection |
|---|---|
| Purpose | Capture failed messages without blocking main flow |
| Retention | 30 days |
| Monitoring | Alert when DLQ depth > threshold |
DLQ Architecture
┌─────────────────────────────────────────────────────────────────┐
│                           DLQ PATTERN                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌───────────────┐   ┌───────────────┐   ┌───────────────┐      │
│  │  pos.events.  │   │   Consumer    │   │    Handler    │      │
│  │     sales     │──>│     Group     │──>│     Logic     │      │
│  │  (Main Topic) │   │               │   │               │      │
│  └───────────────┘   └───────────────┘   └───────┬───────┘      │
│                                                  │              │
│                                          ┌───────┴───────┐      │
│                                          │   Success?    │      │
│                                          └───────┬───────┘      │
│                                      Yes ┌───────┴───────┐ No   │
│                                          │               │      │
│                                          ▼               ▼      │
│                                     ┌──────────┐   ┌───────────┐│
│                                     │  Commit  │   │   Retry   ││
│                                     │  Offset  │   │   Logic   ││
│                                     └──────────┘   └─────┬─────┘│
│                                                          │      │
│                                                   ┌──────┴─────┐│
│                                                   │ Max Retries││
│                                                   │ Exceeded?  ││
│                                                   └──────┬─────┘│
│                                         No ┌─────────────┤ Yes  │
│                                            │             │      │
│                                            ▼             ▼      │
│                                      ┌───────────┐ ┌───────────┐│
│                                      │   Retry   │ │    DLQ    ││
│                                      │   Topic   │ │   Topic   ││
│                                      └───────────┘ └───────────┘│
│                                                    pos.events.  │
│                                                    sales.dlq    │
└─────────────────────────────────────────────────────────────────┘
DLQ Consumer Implementation
// Infrastructure/Messaging/DlqAwareConsumer.cs
// Helper methods (GetRetryCount, PublishToRetryTopicAsync, SerializeValue,
// ExtractHeaders, GetFirstFailedAt) are elided for brevity.
public class DlqAwareConsumer<TKey, TValue>
{
    private readonly IConsumer<TKey, TValue> _consumer;
    private readonly IProducer<string, DeadLetterMessage> _dlqProducer;
    private readonly ILogger _logger;
    private readonly string _consumerGroup;
    private const int MAX_RETRIES = 3;

    // Backoff schedule: one delay per retry attempt
    private readonly TimeSpan[] _retryDelays = new[]
    {
        TimeSpan.FromSeconds(1),
        TimeSpan.FromSeconds(5),
        TimeSpan.FromSeconds(30)
    };

    public async Task ConsumeWithDlqAsync(
        string topic,
        Func<ConsumeResult<TKey, TValue>, Task> handler,
        CancellationToken ct)
    {
        _consumer.Subscribe(topic);
        while (!ct.IsCancellationRequested)
        {
            var result = _consumer.Consume(ct);
            var retryCount = GetRetryCount(result.Message.Headers);
            try
            {
                await handler(result);
                _consumer.Commit(result);
            }
            catch (TransientException ex) when (retryCount < MAX_RETRIES)
            {
                _logger.LogWarning(
                    ex,
                    "Transient error processing message. Retry {Retry}/{Max}",
                    retryCount + 1,
                    MAX_RETRIES);
                await Task.Delay(_retryDelays[retryCount], ct);
                await PublishToRetryTopicAsync(result, retryCount + 1);
                _consumer.Commit(result);
            }
            catch (Exception ex)
            {
                _logger.LogError(
                    ex,
                    "Failed to process message after {Retries} retries. Sending to DLQ.",
                    retryCount);
                await PublishToDlqAsync(result, ex, retryCount);
                _consumer.Commit(result);
            }
        }
    }

    private async Task PublishToDlqAsync(
        ConsumeResult<TKey, TValue> result,
        Exception exception,
        int retryCount)
    {
        var dlqMessage = new DeadLetterMessage
        {
            OriginalTopic = result.Topic,
            OriginalPartition = result.Partition.Value,
            OriginalOffset = result.Offset.Value,
            Key = result.Message.Key?.ToString(),
            Value = SerializeValue(result.Message.Value),
            Headers = ExtractHeaders(result.Message.Headers),
            ErrorType = exception.GetType().FullName,
            ErrorMessage = exception.Message,
            StackTrace = exception.StackTrace,
            RetryCount = retryCount,
            FirstFailedAt = GetFirstFailedAt(result.Message.Headers),
            LastFailedAt = DateTime.UtcNow,
            ConsumerGroup = _consumerGroup,
            ConsumerInstance = Environment.MachineName
        };
        var dlqTopic = $"{result.Topic}.dlq";
        await _dlqProducer.ProduceAsync(dlqTopic, new Message<string, DeadLetterMessage>
        {
            Key = result.Message.Key?.ToString(),
            Value = dlqMessage
        });
    }
}
DLQ Message Structure
// Domain/Events/DeadLetterMessage.cs
public record DeadLetterMessage
{
    /// <summary>Original Kafka topic</summary>
    public string OriginalTopic { get; init; }

    /// <summary>Original partition</summary>
    public int OriginalPartition { get; init; }

    /// <summary>Original offset</summary>
    public long OriginalOffset { get; init; }

    /// <summary>Original message key</summary>
    public string Key { get; init; }

    /// <summary>Original message value (base64 if binary)</summary>
    public string Value { get; init; }

    /// <summary>Original headers</summary>
    public Dictionary<string, string> Headers { get; init; }

    // Error details
    public string ErrorType { get; init; }
    public string ErrorMessage { get; init; }
    public string StackTrace { get; init; }

    // Processing metadata
    public int RetryCount { get; init; }
    public DateTime FirstFailedAt { get; init; }
    public DateTime LastFailedAt { get; init; }
    public string ConsumerGroup { get; init; }
    public string ConsumerInstance { get; init; }
}
DLQ Monitoring & Alerting
# prometheus/alerts/dlq-alerts.yml
groups:
  - name: kafka-dlq-alerts
    rules:
      - alert: DLQMessagesAccumulating
        expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 100
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "DLQ has {{ $value }} unprocessed messages"
          description: "Topic {{ $labels.topic }} has accumulated messages"
      - alert: DLQCriticalBacklog
        expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 1000
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "CRITICAL: DLQ backlog exceeds 1000 messages"
          runbook_url: "https://wiki.internal/runbooks/dlq-overflow"
DLQ Replay Tool
// Tools/DlqReplayService.cs
// Helper methods (CreateDlqConsumer, CreateMainTopicProducer,
// ReadDlqMessagesAsync) are elided for brevity.
public class DlqReplayService
{
    private readonly ILogger<DlqReplayService> _logger;

    public async Task ReplayMessagesAsync(
        string dlqTopic,
        DateTime? from = null,
        DateTime? to = null,
        Func<DeadLetterMessage, bool>? filter = null)
    {
        var consumer = CreateDlqConsumer(dlqTopic);
        var producer = CreateMainTopicProducer();
        var messages = await ReadDlqMessagesAsync(consumer, from, to);

        foreach (var dlqMessage in messages)
        {
            if (filter != null && !filter(dlqMessage))
            {
                _logger.LogDebug("Skipping message by filter: {Key}", dlqMessage.Key);
                continue;
            }

            _logger.LogInformation(
                "Replaying message from DLQ: Topic={Topic}, Offset={Offset}",
                dlqMessage.OriginalTopic,
                dlqMessage.OriginalOffset);

            // Publish back to original topic, tagged so consumers can detect replays
            await producer.ProduceAsync(dlqMessage.OriginalTopic, new Message<string, string>
            {
                Key = dlqMessage.Key,
                Value = dlqMessage.Value,
                Headers = new Headers
                {
                    { "x-dlq-replay", Encoding.UTF8.GetBytes("true") },
                    { "x-dlq-original-offset", Encoding.UTF8.GetBytes(dlqMessage.OriginalOffset.ToString()) }
                }
            });
        }
        _logger.LogInformation("Replayed {Count} messages from DLQ", messages.Count);
    }
}
# CLI usage for DLQ replay
dotnet run --project tools/DlqReplay -- \
--topic pos.events.sales.dlq \
--from "2026-01-20T00:00:00Z" \
--filter "ErrorType contains 'Transient'"
L.4A.4 Domain Events Catalog
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Sale Aggregate Events
Sale Events
===========
SaleCreated
+-----------------------+----------------------------------------+
| Field | Description |
+-----------------------+----------------------------------------+
| sale_id | UUID of the new sale |
| sale_number | Human-readable sale number |
| location_id | Where the sale occurred |
| register_id | Which register |
| employee_id | Who created the sale |
| customer_id | Customer (if any) |
| created_at | Timestamp |
+-----------------------+----------------------------------------+
SaleLineItemAdded
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| line_item_id | UUID of the line item |
| product_id | Product being sold |
| variant_id | Variant (if any) |
| sku | SKU at time of sale |
| name | Product name at time of sale |
| quantity | Quantity sold |
| unit_price | Price per unit |
| discount_amount | Line discount |
| tax_amount | Line tax |
+-----------------------+----------------------------------------+
SaleLineItemRemoved
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| line_item_id | UUID of removed item |
| reason | Why removed |
+-----------------------+----------------------------------------+
PaymentReceived
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| payment_id | UUID of payment |
| payment_method | cash, credit, debit, etc. |
| amount | Payment amount |
| reference | Card last 4, check #, etc. |
| auth_code | Authorization code |
+-----------------------+----------------------------------------+
SaleCompleted
+-----------------------+----------------------------------------+
| sale_id | The sale being completed |
| subtotal | Final subtotal |
| discount_total | Total discounts |
| tax_total | Total tax |
| total | Final total |
| completed_at | Timestamp |
+-----------------------+----------------------------------------+
SaleVoided
+-----------------------+----------------------------------------+
| sale_id | The voided sale |
| voided_by | Employee who voided |
| reason | Void reason |
| voided_at | Timestamp |
+-----------------------+----------------------------------------+
Inventory Aggregate Events
Inventory Events
================
InventoryReceived
+-----------------------+----------------------------------------+
| location_id | Where received |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount received |
| cost | Unit cost |
| reference | PO number, transfer # |
| received_by | Employee |
+-----------------------+----------------------------------------+
InventoryAdjusted
+-----------------------+----------------------------------------+
| location_id | Location |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity_change | +/- amount |
| new_quantity | New on-hand quantity |
| reason | count, damage, theft, return |
| adjusted_by | Employee |
| notes | Additional context |
+-----------------------+----------------------------------------+
InventorySold
+-----------------------+----------------------------------------+
| location_id | Where sold |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount sold (positive) |
| sale_id | Related sale |
+-----------------------+----------------------------------------+
InventoryTransferred
+-----------------------+----------------------------------------+
| transfer_id | Transfer document |
| from_location_id | Source location |
| to_location_id | Destination location |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount transferred |
| transferred_by | Employee |
+-----------------------+----------------------------------------+
InventoryCounted
+-----------------------+----------------------------------------+
| location_id | Location |
| product_id | Product |
| variant_id | Variant |
| expected_quantity | System quantity before count |
| actual_quantity | Physical count |
| variance | Difference |
| counted_by | Employee |
| count_session_id | Batch count session |
+-----------------------+----------------------------------------+
Customer Aggregate Events
Customer Events
===============
CustomerCreated
+-----------------------+----------------------------------------+
| customer_id | New customer UUID |
| customer_number | Human-readable ID |
| first_name | First name |
| last_name | Last name |
| email | Email address |
| phone | Phone number |
| created_by | Employee |
+-----------------------+----------------------------------------+
CustomerUpdated
+-----------------------+----------------------------------------+
| customer_id | Customer UUID |
| changes | Map of field -> {old, new} |
| updated_by | Employee |
+-----------------------+----------------------------------------+
LoyaltyPointsEarned
+-----------------------+----------------------------------------+
| customer_id | Customer |
| points | Points earned |
| sale_id | Related sale |
| new_balance | Updated balance |
+-----------------------+----------------------------------------+
LoyaltyPointsRedeemed
+-----------------------+----------------------------------------+
| customer_id | Customer |
| points | Points redeemed |
| sale_id | Related sale |
| new_balance | Updated balance |
+-----------------------+----------------------------------------+
StoreCreditIssued
+-----------------------+----------------------------------------+
| customer_id | Customer |
| credit_id | Credit UUID |
| amount | Credit amount |
| reason | Why issued |
| issued_by | Employee |
+-----------------------+----------------------------------------+
Employee Aggregate Events
Employee Events
===============
EmployeeClockIn
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| location_id | Where clocking in |
| shift_id | New shift UUID |
| clocked_in_at | Timestamp |
+-----------------------+----------------------------------------+
EmployeeClockOut
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Shift being closed |
| clocked_out_at | Timestamp |
| break_minutes | Total break time |
+-----------------------+----------------------------------------+
EmployeeBreakStarted
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Current shift |
| started_at | Break start time |
+-----------------------+----------------------------------------+
EmployeeBreakEnded
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Current shift |
| ended_at | Break end time |
| duration_minutes | Break duration |
+-----------------------+----------------------------------------+
CashDrawer Aggregate Events
Cash Drawer Events
==================
DrawerOpened
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| register_id | Register UUID |
| employee_id | Who opened |
| opening_balance | Starting cash amount |
| opened_at | Timestamp |
+-----------------------+----------------------------------------+
DrawerCashDrop
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount dropped to safe |
| employee_id | Who dropped |
| dropped_at | Timestamp |
+-----------------------+----------------------------------------+
DrawerPaidIn
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount added |
| reason | Why (petty cash, etc.) |
| employee_id | Who added |
+-----------------------+----------------------------------------+
DrawerPaidOut
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount removed |
| reason | Why (vendor payment, etc.) |
| employee_id | Who removed |
+-----------------------+----------------------------------------+
DrawerClosed
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| employee_id | Who closed |
| closing_balance | Actual cash counted |
| expected_balance | System calculated |
| variance | Difference (over/short) |
| closed_at | Timestamp |
+-----------------------+----------------------------------------+
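The catalog entries above are field-for-field event schemas. As a sketch of how they translate into immutable event records, here are two of them as frozen Python dataclasses (illustrative only; the platform's events are C# records, and the types chosen here are assumptions based on the field descriptions):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from decimal import Decimal
from typing import Optional
from uuid import UUID, uuid4


# Mirrors the SaleCreated row in the catalog above; frozen=True enforces
# the event-sourcing rule that events are immutable facts.
@dataclass(frozen=True)
class SaleCreated:
    sale_id: UUID
    sale_number: str
    location_id: UUID
    register_id: UUID
    employee_id: UUID
    customer_id: Optional[UUID]  # Customer (if any)
    created_at: datetime


# Mirrors the PaymentReceived row; Decimal avoids float rounding on money.
@dataclass(frozen=True)
class PaymentReceived:
    sale_id: UUID
    payment_id: UUID
    payment_method: str          # cash, credit, debit, etc.
    amount: Decimal
    reference: Optional[str] = None   # card last 4, check #, etc.
    auth_code: Optional[str] = None
```

Attempting to assign to a field of a frozen instance raises an error, which is exactly the guarantee an event record needs.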
L.4A.5 Event Projection Patterns
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Projection Architecture
=======================
+-------------------+
| Event Stream |
| |
| SaleCreated |
| ItemAdded |
| ItemAdded |
| PaymentReceived |
| SaleCompleted |
+--------+----------+
|
| Projector reads events
v
+-------------------+ +-------------------+ +-------------------+
| Sale Projector | |Inventory Projector| |Customer Projector |
| | | | | |
| - Build sale view | | - Update stock | | - Update stats |
| - Calculate totals| | - Track movements | | - Loyalty points |
+--------+----------+ +--------+----------+ +--------+----------+
| | |
v v v
+-------------------+ +-------------------+ +-------------------+
| sale_summaries | | inventory_levels | | customer_stats |
| (Read Model) | | (Read Model) | | (Read Model) |
+-------------------+ +-------------------+ +-------------------+
Sale Projector Implementation
// SaleProjector.cs
public class SaleProjector : IEventHandler
{
    private readonly IDbContextFactory<ReadModelDbContext> _dbFactory;

    public SaleProjector(IDbContextFactory<ReadModelDbContext> dbFactory)
    {
        _dbFactory = dbFactory;
    }

    public async Task HandleAsync(SaleCreated @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var view = new SaleSummaryView
        {
            Id = @event.SaleId,
            SaleNumber = @event.SaleNumber,
            LocationId = @event.LocationId,
            EmployeeId = @event.EmployeeId,
            CustomerId = @event.CustomerId,
            Status = "draft",
            Subtotal = 0,
            Total = 0,
            CreatedAt = @event.CreatedAt
        };
        db.SaleSummaries.Add(view);
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleLineItemAdded @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        var lineTotal = @event.Quantity * @event.UnitPrice - @event.DiscountAmount;
        sale.Subtotal += lineTotal;
        sale.ItemCount += @event.Quantity;
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleCompleted @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        sale.Status = "completed";
        sale.DiscountTotal = @event.DiscountTotal;
        sale.TaxTotal = @event.TaxTotal;
        sale.Total = @event.Total;
        sale.CompletedAt = @event.CompletedAt;
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleVoided @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        sale.Status = "voided";
        sale.VoidedAt = @event.VoidedAt;
        sale.VoidedBy = @event.VoidedBy;
        sale.VoidReason = @event.Reason;
        await db.SaveChangesAsync();
    }
}
L.4A.6 Temporal Queries
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Event sourcing enables powerful temporal queries:
-- What was inventory on a specific date?
SELECT
    product_id,
    SUM(CASE
            WHEN event_type = 'InventoryReceived' THEN (event_data->>'quantity')::int
            WHEN event_type = 'InventorySold' THEN -(event_data->>'quantity')::int
            WHEN event_type = 'InventoryAdjusted' THEN (event_data->>'quantity_change')::int
            ELSE 0
        END) AS quantity
FROM events
WHERE aggregate_type = 'Inventory'
  AND (event_data->>'location_id')::uuid = '...'
  AND created_at <= '2025-12-15 15:00:00'
GROUP BY product_id;

-- Sales trend for specific product
SELECT
    date_trunc('day', created_at) AS date,
    SUM((event_data->>'quantity')::int) AS units_sold
FROM events
WHERE event_type = 'InventorySold'
  AND (event_data->>'product_id')::uuid = '...'
  AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY date_trunc('day', created_at)
ORDER BY date;

-- Audit trail for specific sale
SELECT
    event_type,
    event_data,
    created_at,
    created_by
FROM events
WHERE aggregate_type = 'Sale'
  AND aggregate_id = '...'
ORDER BY version;
L.4A.7 Snapshots for Performance
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
For aggregates with many events, snapshots prevent replaying the entire stream:
Snapshot Strategy
=================
Without Snapshots:
Event 1 -> Event 2 -> ... -> Event 5000 -> Current State
(Slow for aggregates with many events)
With Snapshots:
Event 1 -> ... -> Event 1000 -> [Snapshot @ v1000]
|
-> Event 1001 -> ... -> Event 1050 -> Current State
(Load snapshot, then only replay 50 events)
Snapshot Implementation
// AggregateRepository.cs
public class AggregateRepository<T> where T : AggregateRoot
{
    private readonly IEventStore _eventStore;
    private readonly ISnapshotStore _snapshotStore;
    private const int SNAPSHOT_THRESHOLD = 100;

    public async Task<T> LoadAsync(Guid id)
    {
        var aggregate = Activator.CreateInstance<T>();

        // 1. Try to load snapshot
        var snapshot = await _snapshotStore.GetAsync<T>(id);
        int fromVersion = 0;
        if (snapshot != null)
        {
            aggregate.RestoreFromSnapshot(snapshot.State);
            fromVersion = snapshot.Version;
        }

        // 2. Load events after snapshot
        var events = await _eventStore.GetEventsAsync(id, fromVersion);
        foreach (var @event in events)
        {
            aggregate.Apply(@event);
        }
        return aggregate;
    }

    public async Task SaveAsync(T aggregate)
    {
        var newEvents = aggregate.GetUncommittedEvents();

        // 1. Append events
        await _eventStore.AppendAsync(aggregate.Id, newEvents, aggregate.Version);

        // 2. Create snapshot if threshold reached
        if (aggregate.Version % SNAPSHOT_THRESHOLD == 0)
        {
            var snapshot = aggregate.CreateSnapshot();
            await _snapshotStore.SaveAsync(aggregate.Id, aggregate.Version, snapshot);
        }
        aggregate.ClearUncommittedEvents();
    }
}
L.4B Integration Architecture Patterns
BRD v18.0 Module 6 defines integration patterns that are architecturally significant. This section documents their implementation strategy.
Transactional Outbox Pattern
Guarantees that the business data and its corresponding event record are persisted atomically in a single local transaction, so events are published reliably without resorting to distributed transactions.
┌──────────────────────────────────────────────────────────┐
│ TRANSACTIONAL OUTBOX PATTERN │
├──────────────────────────────────────────────────────────┤
│ │
│ Application Outbox Relay │
│ ┌─────────────────┐ ┌──────────────────┐ │
│ │ BEGIN TRANSACTION│ │ Poll outbox table│ │
│ │ │ │ every 5 seconds │ │
│ │ 1. Write to │ └────────┬─────────┘ │
│ │ business table│ │ │
│ │ │ ▼ │
│ │ 2. Write to │ ┌──────────────────┐ │
│ │ outbox table │ │ Publish event │ │
│ │ │ │ via LISTEN/NOTIFY│ │
│ │ COMMIT │ └────────┬─────────┘ │
│ └─────────────────┘ │ │
│ ▼ │
│ ┌──────────────────┐ │
│ │ Mark as published│ │
│ │ (idempotent) │ │
│ └──────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
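The two halves of the diagram (transactional write, then relay poll) can be sketched concretely. This is a minimal illustrative sketch using SQLite for self-containment; the real system uses PostgreSQL with LISTEN/NOTIFY, and the table/column names here are assumptions:

```python
import json
import sqlite3
import uuid


def create_sale(conn, sale_number):
    """Application side: business row and outbox row in ONE transaction."""
    with conn:  # BEGIN ... COMMIT: both inserts succeed or neither does
        sale_id = str(uuid.uuid4())
        conn.execute("INSERT INTO sales (id, sale_number) VALUES (?, ?)",
                     (sale_id, sale_number))
        conn.execute(
            "INSERT INTO outbox (id, event_type, payload, published) VALUES (?, ?, ?, 0)",
            (str(uuid.uuid4()), "SaleCreated", json.dumps({"sale_id": sale_id})))
        return sale_id


def relay_once(conn, publish):
    """Relay side: publish pending outbox rows, then mark them published.
    Marking is idempotent, so a crash between publish and mark only causes
    a duplicate delivery, never a lost event (at-least-once)."""
    rows = conn.execute(
        "SELECT id, event_type, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, event_type, payload in rows:
        publish(event_type, json.loads(payload))  # e.g. NOTIFY in PostgreSQL
        with conn:
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    return len(rows)
```

The key property: if the business insert rolls back, the outbox insert rolls back with it, so no event is ever emitted for data that was never committed.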
Provider Abstraction (Strategy Pattern)
┌──────────────────────────────────────────────────────────┐
│ PROVIDER ABSTRACTION PATTERN │
├──────────────────────────────────────────────────────────┤
│ │
│ IIntegrationProvider │
│ ┌──────────────────┐ │
│ │ + Connect() │ │
│ │ + SyncProducts() │ │
│ │ + SyncInventory()│ │
│ │ + ValidateData() │ │
│ │ + HealthCheck() │ │
│ └────────┬─────────┘ │
│ │ │
│ ┌─────────────┼─────────────┐ │
│ ▼ ▼ ▼ │
│ ┌────────────┐┌────────────┐┌─────────────────┐ │
│ │ Shopify ││ Amazon ││ Google │ │
│ │ Provider ││ Provider ││ Merchant │ │
│ │ ││ ││ Provider │ │
│ │ GraphQL ││ REST/LWA ││ REST/Service Acct│ │
│ │ 50pts/sec ││ Burst+Tok ││ Quota-based │ │
│ │ Webhooks ││ 2min Poll ││ 2x/day Batch │ │
│ └────────────┘└────────────┘└─────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
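The strategy pattern above keeps callers coupled only to the abstraction, never to a concrete channel. A minimal Python sketch (illustrative; the real interface is the C# `IIntegrationProvider`, and the method signature here is an assumption):

```python
from abc import ABC, abstractmethod


class IntegrationProvider(ABC):
    """Abstraction mirroring the IIntegrationProvider diagram above."""

    @abstractmethod
    def sync_inventory(self, items: dict) -> int:
        """Push available quantities to the channel; return count synced."""


class ShopifyProvider(IntegrationProvider):
    def sync_inventory(self, items):
        # Real implementation: GraphQL mutations under the 50 points/sec limit
        return len(items)


class AmazonProvider(IntegrationProvider):
    def sync_inventory(self, items):
        # Real implementation: SP-API with burst + token-bucket throttling
        return len(items)


def sync_all(providers, items):
    # Orchestration code depends only on the abstraction; adding Google
    # Merchant means adding a class, not touching this loop.
    return sum(p.sync_inventory(items) for p in providers)
```

Adding a new channel (e.g. a marketplace added in v2.0) is then a closed-for-modification change: one new provider class, registered alongside the others.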
Safety Buffer Computation
Per BRD Section 6.7.2, channel-available quantity is calculated as:
Channel Available = POS Available - Safety Buffer
┌──────────────────────────────────────────────────────────┐
│ SAFETY BUFFER COMPUTATION │
├──────────────────────────────────────────────────────────┤
│ │
│ 4-Level Priority Resolution: │
│ 1. Product-Level Override (highest priority) │
│ 2. Category-Level Default │
│ 3. Channel-Level Default │
│ 4. Global Default (lowest priority) │
│ │
│ 3 Calculation Modes: │
│ ┌──────────────────────────────────────────────────┐ │
│ │ FIXED: Buffer = fixed_quantity │ │
│ │ PERCENTAGE: Buffer = pos_available * percentage │ │
│ │ MIN_RESERVE: Buffer = pos_available - min_reserve │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ Example (FIXED mode, buffer = 2): │
│ POS Available: 10 → Channel Available: 8 │
│ │
│ Example (PERCENTAGE mode, 20%): │
│ POS Available: 10 → Buffer: 2 → Channel Available: 8 │
│ │
└──────────────────────────────────────────────────────────┘
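The 4-level resolution and 3 calculation modes above can be expressed directly in code. A sketch (illustrative Python; the rule-dictionary shape is an assumption, the formulas and priority order follow BRD 6.7.2 as stated above):

```python
def resolve_rule(rules, product_id, category_id, channel_id):
    """4-level priority: product > category > channel > global."""
    for key in (("product", product_id), ("category", category_id),
                ("channel", channel_id), ("global", None)):
        if key in rules:
            return rules[key]
    raise KeyError("no global default configured")


def channel_available(pos_available, rule):
    """Apply one of the 3 modes, then Channel Available = POS Available - Buffer."""
    mode = rule["mode"]
    if mode == "FIXED":
        buffer = rule["fixed_quantity"]
    elif mode == "PERCENTAGE":
        buffer = int(pos_available * rule["percentage"])
    elif mode == "MIN_RESERVE":
        buffer = pos_available - rule["min_reserve"]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return max(pos_available - buffer, 0)  # never publish negative availability
```

Both worked examples from the box reproduce: FIXED with buffer 2 maps 10 to 8, and PERCENTAGE at 20% also maps 10 to 8.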
L.5 Architecture Documentation & Traceability
Goal: keep the documented (“soft”) architecture consistent with the code and enable rapid root-cause analysis.
| Aspect | Selection |
|---|---|
| Strategy | “Diagrams as Code” to prevent documentation drift |
| Tooling | Structurizr (C4 Model) or Mermaid.js |
| Implementation | Architecture diagrams committed to Git repository alongside source code |
| Automation | Use Claude Code CLI to auto-generate updates to diagrams during refactoring |
C4 Model Levels
+-------------------------------------------------------------------+
| C4 MODEL HIERARCHY |
+-------------------------------------------------------------------+
| |
| Level 1: System Context |
| +------------------+ +------------------+ +-------------+ |
| | POS Client |<--->| Central API |<--->| Shopify | |
| | (Terminals) | | (Cloud) | | Amazon | |
| +------------------+ +------------------+ +-------------+ |
| |
|  Level 2: Container Diagram                                       |
|  +------------------+  +------------------+  +-------------+      |
|  |     POS App      |  |   API Gateway    |  |  PG Event   |      |
|  |     (SQLite)     |  |   Auth Service   |  |   Tables    |      |
|  +------------------+  |   Sales Module   |  +-------------+      |
|                        |  Inventory Mod   |  +-------------+      |
|                        +------------------+  | PostgreSQL  |      |
|                                              +-------------+      |
| |
| Level 3: Component Diagram (per module) |
| Level 4: Code Diagram (class/sequence) |
| |
+-------------------------------------------------------------------+
L.6 Quality Assurance (QA) & Testing Strategy
Goal: ensure end-to-end reliability for financial transactions.
E2E (End-to-End) Testing
| Attribute | Selection |
|---|---|
| Tool | Cypress or Playwright |
| Scope | Full simulation: Cashier login → Scan Item → Process Payment → Print Receipt |
Example Test Flow:
1. Cashier authenticates with PIN
2. Scan barcode (NXJ1078)
3. Apply discount (if applicable)
4. Select payment method (Cash/Card)
5. Process payment
6. Print/email receipt
7. Verify inventory decremented
8. Verify event published (PostgreSQL event table in v1.0; Kafka topic in v2.0)
Load Testing
| Attribute | Selection |
|---|---|
| Tool | k6 or JMeter |
| Scope | Simulate “Black Friday” traffic (500 concurrent transactions) |
Black Friday Scenario:
Concurrent Users: 500
Duration: 30 minutes
Target TPS: 1000 transactions/second
Acceptable Latency: p99 < 500ms
Code Management
| Attribute | Selection |
|---|---|
| Platform | GitHub/GitLab |
| Versioning | Semantic Versioning (tags v1.x.x) |
| Traceability | Exact code version deployed to each POS terminal |
L.7 Observability & Monitoring Strategy
Primary Pattern
| Attribute | Selection |
|---|---|
| Pattern | OpenTelemetry (OTel) “Trace-to-Code” Pipeline |
| Rationale | Industry-standard OTel protocol prevents vendor lock-in and enables tracing an error from a specific store directly to the line of code |
Technology Stack (The “LGTM” Stack)
| Component | Tool | Purpose |
|---|---|---|
| L - Logs | Loki | Log aggregation |
| G - Grafana | Grafana | Visualization dashboards |
| T - Traces | Tempo (or Jaeger) | Distributed tracing |
| M - Metrics | Prometheus | Metrics collection |
Instrumentation
| Layer | Instrumentation |
|---|---|
| API | OpenTelemetry auto-instrumentation (.NET) |
| Database | Query tracing, slow query logging |
| Events | PostgreSQL event tables with LISTEN/NOTIFY (v1.0), correlation IDs for tracing |
| POS Client | Local telemetry buffer, sync on reconnect |
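The correlation IDs in the Events row are what stitch a single sale into one trace across the POS client, the API, and downstream event consumers. A minimal sketch of the propagation mechanic (illustrative Python; the platform uses OpenTelemetry context propagation in .NET, and the header name here is an assumption):

```python
import contextvars
import uuid

# One correlation ID per logical request, visible to everything that runs
# within that request's context (including async continuations).
_correlation_id = contextvars.ContextVar("correlation_id", default=None)


def begin_request(incoming_id=None):
    """Adopt the caller's correlation ID if present, else mint a new one."""
    cid = incoming_id or str(uuid.uuid4())
    _correlation_id.set(cid)
    return cid


def publish_event(event_type, payload, publish):
    """Stamp every outgoing event with the current request's correlation ID."""
    publish({"type": event_type,
             "payload": payload,
             "headers": {"x-correlation-id": _correlation_id.get()}})
```

A consumer that reads `x-correlation-id` and re-enters it via `begin_request` extends the same trace across the asynchronous hop.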
L.8 Security & Compliance Strategy
Primary Pattern
| Attribute | Selection |
|---|---|
| Pattern | 6-Gate Security Test Pyramid with DevSecOps for PCI Compliance |
| Rationale | Claude Code agents generate the full codebase. A single SonarQube gate is insufficient to catch missing authorization checks, incorrect OAuth implementation, SAQ-A violations, architecture drift, or insecure CORS/CSP headers. The 6-gate pyramid ensures defense-in-depth for AI-generated code. |
6-Gate Security Test Pyramid
| Gate | Tool | Purpose | Blocks Merge? |
|---|---|---|---|
| 1. SAST | SonarQube / CodeQL | Static code vulnerability scanning (SQLi, XSS, hardcoded secrets) | Yes |
| 2. SCA | Snyk / OWASP Dependency-Check | Package vulnerability scanning + SBOM generation (PCI-DSS 4.0 Req 6.3.2) | Yes |
| 3. Secrets Detection | GitLeaks / TruffleHog | Credential leak prevention in source code and commit history | Yes |
| 4. Architecture Conformance | ArchUnit / NetArchTest | Module boundary enforcement, dependency rules (e.g., Module 6 cannot directly access Module 1 internals) | Yes |
| 5. Contract Tests | Pact | Shopify/Amazon/Google sandbox API contract verification; webhook signature validation | Yes |
| 6. Manual Security Review | Human reviewer | Security-critical paths: payment flows, credential vault access, OAuth token handling, PCI boundary | Yes (tagged PRs only) |
┌──────────────────────────────────────────────────────────┐
│ 6-GATE SECURITY TEST PYRAMID │
├──────────────────────────────────────────────────────────┤
│ │
│ ┌─────────┐ │
│ │ Manual │ Gate 6 │
│ │ Review │ (Security-critical PRs) │
│ ┌─┴─────────┴─┐ │
│ │ Contract │ Gate 5 │
│ │ Tests │ (Pact + Sandboxes) │
│ ┌─┴─────────────┴─┐ │
│ │ Architecture │ Gate 4 │
│ │ Conformance │ (ArchUnit) │
│ ┌─┴─────────────────┴─┐ │
│ │ Secrets Detection │ Gate 3 │
│ │ (GitLeaks) │ │
│ ┌─┴─────────────────────┴─┐ │
│ │ SCA (Snyk + SBOM) │ Gate 2 │
│ ┌─┴─────────────────────────┴─┐ │
│ │ SAST (SonarQube / CodeQL) │ Gate 1 │
│ └─────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
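Gate 4 deserves a concrete illustration, since it is the least familiar gate. The real gate runs ArchUnit/NetArchTest against compiled C# assemblies; the toy Python sketch below shows the same idea (fail the build when a module imports another module's internals), with the module and rule names invented for illustration:

```python
import ast

# Hypothetical rule: integration code must never import auth-module internals.
FORBIDDEN = {"integrations": ["auth.internal"]}


def violations(module_name, source):
    """Return the forbidden import targets found in a module's source."""
    found = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            for banned in FORBIDDEN.get(module_name, []):
                # Match the banned package itself or any submodule of it
                if target == banned or target.startswith(banned + "."):
                    found.append(target)
    return found
```

In CI, a non-empty violation list fails the gate, which is what makes the rule enforceable rather than advisory.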
FIM (File Integrity Monitoring) - PCI Requirement
| Attribute | Selection |
|---|---|
| Tool | Wazuh or OSSEC |
| Action | Monitors POS terminals and servers for unauthorized file changes |
| PCI Reference | PCI-DSS 4.0 Req 11.5.1 |
| Criticality | Essential for detecting skimmers, tampering, and supply chain compromise |
Credential Vault Architecture
| Attribute | Selection |
|---|---|
| Technology | HashiCorp Vault (Docker container) |
| Deployment | Single Vault instance with auto-unseal; Docker Compose alongside PostgreSQL |
Key Hierarchy:
Master Encryption Key (Vault auto-unseal)
└── Tenant-Specific Keys
├── tenant_nexus_key
│ ├── Shopify OAuth tokens
│ ├── Amazon LWA credentials
│ ├── Google Service Account key
│ ├── Payment processor tokens
│ ├── SMTP credentials
│ └── Webhook signing keys
└── tenant_acme_key
└── ... (same structure)
6 Credential Types:
| # | Credential Type | Provider | Auth Method | Rotation |
|---|---|---|---|---|
| 1 | Shopify OAuth token | Shopify | OAuth 2.0 / PKCE | On expiry + 90-day forced |
| 2 | Amazon LWA credentials | Amazon | Login with Amazon (OAuth) | On expiry + 90-day forced |
| 3 | Google Service Account | Google | Service Account JSON key | 90-day rotation |
| 4 | Payment processor token | Various | API key / OAuth | 90-day rotation |
| 5 | SMTP credentials | Email provider | Username/password | 90-day rotation |
| 6 | Webhook signing keys | All providers | HMAC-SHA256 | On compromise + 90-day |
Access Policy: Least privilege; application-role-based access. Integration services can only read their own provider credentials. Credential writes require admin role with MFA.
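The least-privilege rule reduces to path-scoped read grants over the key hierarchy above. A toy sketch of the policy check (illustrative Python; Vault enforces this natively via its own policy language, and the role and path names here are hypothetical):

```python
import fnmatch

# Hypothetical role -> readable-path-pattern grants, mirroring the rule that
# each integration service may read only its own provider's credentials.
POLICIES = {
    "svc-shopify-sync": {"read": ["tenant_nexus/shopify/*"]},
    "svc-amazon-sync":  {"read": ["tenant_nexus/amazon/*"]},
}


def can_read(role, path):
    """True only if some grant pattern for the role matches the secret path."""
    return any(fnmatch.fnmatch(path, pattern)
               for pattern in POLICIES.get(role, {}).get("read", []))
```

Note there are no write grants at all in the service policies: credential writes go through the separate MFA-protected admin role.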
DevSecOps Pipeline
┌───────────────────────────────────────────────────────────────────┐
│ DEVSECOPS PIPELINE (v2.0) │
├───────────────────────────────────────────────────────────────────┤
│ │
│ Developer / Claude Code Agent │
│ │ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Pre-commit │──►│ Gate 1: │──►│ Gate 2: │ │
│ │ Hooks │ │ SAST │ │ SCA + SBOM │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │ │
│ ┌──────────────────────────────────┘ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Gate 3: │──►│ Gate 4: │──►│ Gate 5: │ │
│ │ Secrets │ │ ArchUnit │ │ Pact Tests │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │ │
│ ┌──────────────────────────────────┘ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ E2E Tests │──►│ Gate 6: │──►│ Deploy │ │
│ │ (Cypress) │ │ Manual │ │ + Wazuh │ │
│ └────────────┘ │ (if tagged)│ │ FIM │ │
│ └────────────┘ └────────────┘ │
│ │
└───────────────────────────────────────────────────────────────────┘
Offline Queue Security
POS terminals operating offline accumulate queued transactions that must be protected against tampering, interception, and replay attacks.
| Control | Implementation | Purpose |
|---|---|---|
| Queue Encryption | AES-256-GCM with device-specific key | Protects queued transactions at rest on SQLite |
| Tamper Detection | HMAC-SHA256 over each queued transaction | Detects modification of queued data before sync |
| Transaction Signing | Device certificate signs each transaction | Non-repudiation; proves transaction originated from authorized terminal |
| Replay Prevention | Monotonic sequence number + timestamp | Prevents re-submission of previously synced transactions |
| Key Storage | Device secure enclave / TPM where available | Protects encryption keys from extraction |
┌──────────────────────────────────────────────────────────┐
│ OFFLINE QUEUE SECURITY MODEL │
├──────────────────────────────────────────────────────────┤
│ │
│ Transaction Created (Offline) │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────┐ │
│ │ Serialize │───►│ HMAC-SHA256 │───►│ AES-256 │ │
│ │ Transaction │ │ (Integrity) │ │ Encrypt │ │
│ └─────────────┘ └──────────────┘ └──────┬─────┘ │
│ │ │
│ ▼ │
│ ┌───────────┐ │
│ │ SQLite │ │
│ │ Queue │ │
│ └───────────┘ │
│ │ │
│ Network Restored │ │
│ ▼ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────┐ │
│ │ Verify │◄───│ Decrypt │◄───│ Read from │ │
│ │ HMAC + Seq │ │ AES-256 │ │ Queue │ │
│ └──────┬──────┘ └──────────────┘ └────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ Sync to │ │
│ │ Central API │ │
│ └─────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
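The tamper-detection and replay-prevention controls above can be sketched as follows. This is a minimal illustration, not the production implementation: `DEVICE_KEY` stands in for the device-specific key held in the secure enclave/TPM, and the AES-256-GCM encryption step is omitted so the example stays self-contained with the standard library.

```python
import hmac, hashlib, json

# Stand-in for the device-specific key held in the secure enclave / TPM.
DEVICE_KEY = b"device-specific-key-from-secure-store"

def protect(txn: dict, seq: int) -> dict:
    """Serialize a transaction, attach a monotonic sequence number,
    and compute an HMAC-SHA256 tag over the canonical bytes."""
    record = {"seq": seq, "txn": txn}
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "hmac": tag}

def verify(entry: dict, last_synced_seq: int) -> dict:
    """Verify integrity (HMAC) and reject replays (sequence must advance)."""
    payload = entry["payload"].encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, entry["hmac"]):
        raise ValueError("tamper detected: HMAC mismatch")
    record = json.loads(payload)
    if record["seq"] <= last_synced_seq:
        raise ValueError("replay detected: stale sequence number")
    return record["txn"]
```

On sync, the server tracks the last accepted sequence number per terminal; any queued entry at or below it is rejected as a replay, and any byte-level modification of the queued payload fails the HMAC check.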
L.9 Diagrammatic Overview
System Architecture (Mermaid)
graph TD
subgraph Client_Device ["POS Client"]
UI[UI Layer]
SL[Service Layer]
DB_Local[(SQLite)]
SL --> DB_Local
end
subgraph Cloud_Infrastructure ["Cloud Infrastructure"]
LB[Load Balancer]
subgraph Central_API ["Central API (Modular Monolith)"]
Auth[Auth Module]
Sales[Sales Module]
Inv[Inventory Module]
end
subgraph Data_Layer ["Data Layer"]
PG[(PostgreSQL)]
Events[(PG Events)]
end
end
subgraph DevOps_Pipeline ["DevSecOps & Traceability"]
Git[GitHub - Semantic Ver]
Struct[Structurizr - Docs]
Sonar[SonarQube - SAST]
Cypress[Cypress - E2E]
Wazuh[Wazuh - FIM/PCI]
end
SL --> LB
LB --> Auth
Auth --> Sales
Sales --> Events
Sales --> PG
Git --> Sonar
Sonar --> Cypress
Cypress --> Struct
Wazuh -.-> Central_API
Wazuh -.-> Client_Device
ASCII Version
+------------------------------------------------------------------+
| NEXUS POS ARCHITECTURE |
+------------------------------------------------------------------+
| |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ POS CLIENT (STORE) │ |
| │ ┌──────────┐ ┌──────────────┐ ┌──────────────────┐ │ |
| │ │ UI │───▶│ Service Layer│───▶│ SQLite (Local) │ │ |
| │ │ (MAUI) │ │ (Plugins) │ │ (Offline Data) │ │ |
| │ └──────────┘ └──────────────┘ └──────────────────┘ │ |
| └──────────────────────────┬──────────────────────────────────┘ |
| │ |
| ▼ (Sync when online) |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ CLOUD INFRASTRUCTURE │ |
| │ │ |
| │ ┌──────────────────────────────────────────────────────┐ │ |
| │ │ CENTRAL API (Modular Monolith) │ │ |
| │ │ ┌────────┐ ┌────────┐ ┌──────────┐ ┌──────────┐ │ │ |
| │ │ │ Auth │ │ Sales │ │Inventory │ │ Catalog │ │ │ |
| │ │ └────────┘ └────────┘ └──────────┘ └──────────┘ │ │ |
| │ └──────────────────────┬───────────────────────────────┘ │ |
| │ │ │ |
| │ ┌───────────────┼───────────────┐ │ |
| │ ▼ ▼ ▼ │ |
| │ ┌───────────┐ ┌───────────┐ ┌───────────────┐ │ |
| │ │PostgreSQL │ │ HashiCorp │ │ External │ │ |
| │ │(Events + │ │ Vault │ │ Systems │ │ |
| │ │ State) │ │(Secrets) │ │(Shopify, etc.)│ │ |
| │ └───────────┘ └───────────┘ └───────────────┘ │ |
| └─────────────────────────────────────────────────────────────┘ |
| |
+------------------------------------------------------------------+
L.9A System Architecture Reference
Detailed Implementation Reference (from former High-Level Architecture chapter, now consolidated here):
Complete System Architecture Diagram
+===========================================================================+
| CLOUD LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Shopify API | | Payment Gateway | | Tax Service | |
| | (E-commerce) | | (Stripe/Square) | | (TaxJar) | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
+===========|=====================|=====================|====================+
| | |
v v v
+===========================================================================+
| API GATEWAY LAYER |
| +---------------------------------------------------------------------+ |
| | Kong / NGINX Gateway | |
| | +-------------+ +-------------+ +-------------+ +-------------+ | |
| | | Rate Limit | | Auth | | Routing | | Logging | | |
| | +-------------+ +-------------+ +-------------+ +-------------+ | |
| +---------------------------------------------------------------------+ |
+===========================================================================+
|
v
+===========================================================================+
| CENTRAL API LAYER |
| (ASP.NET Core 8.0 / Node.js) |
| |
| +------------------+ +------------------+ +------------------+ |
| | Catalog Service | | Sales Service | |Inventory Service| |
| | | | | | | |
| | - Products | | - Transactions | | - Stock Levels | |
| | - Categories | | - Receipts | | - Adjustments | |
| | - Pricing | | - Refunds | | - Transfers | |
| | - Variants | | - Layaways | | - Counts | |
| +------------------+ +------------------+ +------------------+ |
| |
| +------------------+ +------------------+ +------------------+ |
| |Customer Service | |Employee Service | | Sync Service | |
| | | | | | | |
| | - Profiles | | - Users | | - Shopify Sync | |
| | - Loyalty | | - Roles | | - Offline Sync | |
| | - History | | - Permissions | | - Event Queue | |
| | - Credits | | - Shifts | | - Conflict Res | |
| +------------------+ +------------------+ +------------------+ |
| |
+===========================================================================+
|
v
+===========================================================================+
| DATA LAYER |
| +---------------------------------------------------------------------+ |
| | PostgreSQL 16 Cluster | |
| | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| | | shared schema | | tenant_nexus | | tenant_acme | | |
| | | (platform) | | (Nexus Clothing)| | (Acme Retail) | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| | | |
| +---------------------------------------------------------------------+ |
| +------------------+ +------------------+ |
| | Redis | | Event Store | |
| | (Cache/Queue) | | (Append-Only) | |
| +------------------+ +------------------+ |
+===========================================================================+
+===========================================================================+
| CLIENT APPLICATIONS |
| |
| +------------------+ +------------------+ +------------------+ |
| | POS Client | | Admin Portal | | Raptag Mobile | |
| | (Desktop App) | | (React SPA) | | (.NET MAUI) | |
| | | | | | | |
| | - Sales Terminal | | - Dashboard | | - RFID Scanning | |
| | - Offline Mode | | - Reports | | - Inventory | |
| | - Local SQLite | | - Configuration | | - Quick Counts | |
| | - Receipt Print | | - User Mgmt | | - Transfers | |
| +------------------+ +------------------+ +------------------+ |
| |
+===========================================================================+
Three-Tier Architecture Detail
Tier 1: Cloud Layer (External Services)
| Service | Purpose | Protocol | Data Flow |
|---|---|---|---|
| Shopify API | E-commerce sync | REST/GraphQL | Bidirectional |
| Payment Gateway | Card processing | REST + Webhooks | Request/Response |
| Tax Service | Tax calculation | REST | Request/Response |
| Email Service | Notifications | SMTP/API | Outbound only |
| SMS Service | Alerts | API | Outbound only |
Cloud Integration Flow
======================
Shopify Payment Gateway Tax Service
| | |
| Products, Orders | Authorization | Rate Lookup
| Inventory | Capture | Calculation
| | Refund |
v v v
+----------------------------------------------------------------+
| Integration Adapters |
| +---------------+ +------------------+ +------------------+ |
| |ShopifyAdapter | | PaymentAdapter | | TaxAdapter | |
| +---------------+ +------------------+ +------------------+ |
+----------------------------------------------------------------+
|
v
[Central API Services]
Tier 2: Central API Layer (Application Services)
API Gateway
Request Flow Through Gateway
============================
Client Request
|
v
+--------------------------------------------------+
| API GATEWAY |
| |
| 1. [Rate Limiting] -----> 100 req/min/client |
| | |
| v |
| 2. [Authentication] ----> JWT Validation |
| | |
| v |
| 3. [Tenant Resolution] -> Extract tenant_id |
| | |
| v |
| 4. [Request Logging] ---> Correlation ID |
| | |
| v |
| 5. [Route to Service] --> /api/v1/sales/* |
| |
+--------------------------------------------------+
|
v
Service Handler
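The five gateway stages can be sketched as a single pipeline. This is an illustrative sketch only (the `RateLimiter` class, route table, and request shape are assumptions, not a Kong/NGINX API); JWT validation is injected so the example stays self-contained.

```python
import time, uuid

class RateLimiter:
    """Fixed-window limiter: at most `limit` requests per client per minute."""
    def __init__(self, limit=100):
        self.limit, self.windows = limit, {}
    def allow(self, client_id, now=None):
        window = int((now or time.time()) // 60)
        key = (client_id, window)
        self.windows[key] = self.windows.get(key, 0) + 1
        return self.windows[key] <= self.limit

# Hypothetical path-prefix route table.
ROUTES = {"/api/v1/sales": "sales-service", "/api/v1/products": "catalog-service"}

def handle(request, limiter, validate_jwt):
    # 1. Rate limiting (100 req/min/client in the diagram above)
    if not limiter.allow(request["client_id"]):
        return {"status": 429}
    # 2. Authentication (signature/expiry checks delegated to validate_jwt)
    claims = validate_jwt(request["token"])
    if claims is None:
        return {"status": 401}
    # 3. Tenant resolution from the JWT claims
    tenant_id = claims["tenant_id"]
    # 4. Request logging with a correlation ID
    correlation_id = str(uuid.uuid4())
    # 5. Route to the owning service by path prefix
    prefix = "/".join(request["path"].split("/")[:4])
    service = ROUTES.get(prefix)
    if service is None:
        return {"status": 404}
    return {"status": 200, "service": service,
            "tenant_id": tenant_id, "correlation_id": correlation_id}
```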
Core Services
| Service | Responsibilities | Key Endpoints |
|---|---|---|
| Catalog Service | Products, categories, pricing, variants | /api/v1/products/* |
| Sales Service | Transactions, receipts, refunds, holds | /api/v1/sales/* |
| Inventory Service | Stock levels, adjustments, transfers | /api/v1/inventory/* |
| Customer Service | Profiles, loyalty, purchase history | /api/v1/customers/* |
| Employee Service | Users, roles, permissions, shifts | /api/v1/employees/* |
| Sync Service | Offline sync, conflict resolution | /api/v1/sync/* |
Tier 3: Data Layer (Persistence)
Data Layer Architecture
=======================
+------------------+ +------------------+ +------------------+
| PostgreSQL | | Redis | | Event Store |
| (Primary DB) | | (Cache/Queue) | | (Append-Only) |
+------------------+ +------------------+ +------------------+
| | |
| | |
+-------v------------------------v------------------------v--------+
| |
| Schema: shared Cache Keys Events |
| +--------------+ +------------+ +-------------+ |
| | tenants | | product: | | SaleCreated | |
| | plans | | {id} | | ItemAdded | |
| | features | | session: | | PaymentRcvd | |
| +--------------+ | {token} | | StockAdj | |
| | inventory: | +-------------+ |
| Schema: tenant_xxx | {sku} | |
| +--------------+ +------------+ |
| | products | |
| | sales | |
| | inventory | |
| | customers | |
| +--------------+ |
| |
+-------------------------------------------------------------------+
Client Applications
POS Client (Desktop)
POS Client Architecture
=======================
+-------------------------------------------------------------------+
| POS CLIENT (Electron/Tauri) |
| |
| +-----------------------+ +---------------------------+ |
| | UI Layer | | Local Storage | |
| | +----------------+ | | +--------------------+ | |
| | | Sales Screen | | | | SQLite Database | | |
| | +----------------+ | | | | | |
| | | Product Grid | | | | - products_cache | | |
| | +----------------+ | | | - pending_sales | | |
| | | Cart Panel | | | | - sync_queue | | |
| | +----------------+ | | +--------------------+ | |
| | | Payment Dialog | | | | |
| | +----------------+ | +---------------------------+ |
| +-----------------------+ |
| |
| +-----------------------+ +---------------------------+ |
| | Service Layer | | Hardware Layer | |
| | +----------------+ | | +--------------------+ | |
| | | SaleService | | | | Receipt Printer | | |
| | +----------------+ | | +--------------------+ | |
| | | SyncService | | | | Barcode Scanner | | |
| | +----------------+ | | +--------------------+ | |
| | | OfflineService | | | | Cash Drawer | | |
| | +----------------+ | | +--------------------+ | |
| +-----------------------+ | | Card Reader | | |
| | +--------------------+ | |
| +---------------------------+ |
+-------------------------------------------------------------------+
Admin Portal (Web)
Admin Portal Architecture
=========================
+-------------------------------------------------------------------+
| ADMIN PORTAL (React SPA) |
| |
| +------------------------+ +---------------------------+ |
| | Navigation | | Main Content | |
| | +------------------+ | | +---------------------+ | |
| | | Dashboard | | | | Dashboard View | | |
| | +------------------+ | | | - KPIs | | |
| | | Products | | | | - Charts | | |
| | +------------------+ | | | - Alerts | | |
| | | Sales | | | +---------------------+ | |
| | +------------------+ | | +---------------------+ | |
| | | Inventory | | | | Product Manager | | |
| | +------------------+ | | | - CRUD | | |
| | | Customers | | | | - Bulk Import | | |
| | +------------------+ | | | - Sync Status | | |
| | | Employees | | | +---------------------+ | |
| | +------------------+ | | | |
| | | Reports | | | | |
| | +------------------+ | | | |
| | | Settings | | | | |
| | +------------------+ | | | |
| +------------------------+ +---------------------------+ |
| |
| State Management: React Query + Context |
| Routing: React Router |
| UI Framework: TailwindCSS |
+-------------------------------------------------------------------+
Raptag Mobile (RFID)
Raptag Mobile Architecture
==========================
+-------------------------------------------------------------------+
| RAPTAG MOBILE (.NET MAUI) |
| |
| +------------------------+ +---------------------------+ |
| | RFID Layer | | UI Layer | |
| | +------------------+ | | +---------------------+ | |
| | | Zebra SDK | | | | Scan Screen | | |
| | +------------------+ | | +---------------------+ | |
| | | Tag Parser | | | | Inventory Count | | |
| | +------------------+ | | +---------------------+ | |
| | | Batch Processor | | | | Transfer Screen | | |
| | +------------------+ | | +---------------------+ | |
| +------------------------+ +---------------------------+ |
| |
| +------------------------+ +---------------------------+ |
| | Local Storage | | API Client | |
| | +------------------+ | | +---------------------+ | |
| | | SQLite | | | | HTTP Client | | |
| | +------------------+ | | +---------------------+ | |
| | | Scan Buffer | | | | Offline Queue | | |
| | +------------------+ | | +---------------------+ | |
| +------------------------+ +---------------------------+ |
+-------------------------------------------------------------------+
Service Boundaries
Service Boundary Diagram
========================
+-------------------+ +-------------------+ +-------------------+
| Catalog Service | | Sales Service | |Inventory Service |
| | | | | |
| OWNS: | | OWNS: | | OWNS: |
| - products | | - sales | | - inventory_items |
| - categories | | - line_items | | - stock_levels |
| - pricing_rules | | - payments | | - adjustments |
| - product_variants| | - refunds | | - transfers |
| - product_images | | - holds | | - count_sessions |
| | | | | |
| REFERENCES: | | REFERENCES: | | REFERENCES: |
| (none) | | - product_id | | - product_id |
| | | - customer_id | | - location_id |
| | | - employee_id | | |
+-------------------+ +-------------------+ +-------------------+
+-------------------+ +-------------------+
| Customer Service | | Employee Service |
| | | |
| OWNS: | | OWNS: |
| - customers | | - employees |
| - loyalty_cards | | - roles |
| - store_credits | | - permissions |
| - addresses | | - shifts |
| | | - time_entries |
| REFERENCES: | | |
| (none) | | REFERENCES: |
| | | - location_id |
+-------------------+ +-------------------+
Technology Stack Summary
| Layer | Technology | Justification |
|---|---|---|
| API Gateway | Kong or NGINX | Proven, scalable, plugin ecosystem |
| Central API | ASP.NET Core 8.0 | Performance, C# ecosystem, EF Core |
| Database | PostgreSQL 16 | Multi-tenant support, JSON support, reliability |
| Cache | Redis | Session storage, real-time features |
| Event Store | PostgreSQL (append-only) | Simplicity, same DB engine |
| POS Client | Electron or Tauri | Cross-platform desktop, offline SQLite |
| Admin Portal | React + TypeScript | Modern SPA, rich ecosystem |
| Mobile App | .NET MAUI | C# codebase, Zebra RFID SDK support |
| Real-time | SignalR | Inventory broadcasts, notifications |
Deployment Topology
Production Deployment
=====================
+------------------+
| Load Balancer |
| (HAProxy/ALB) |
+--------+---------+
|
+----------------------+----------------------+
| | |
+---------v--------+ +---------v--------+ +---------v--------+
| API Server 1 | | API Server 2 | | API Server 3 |
| | | | | |
| - Central API | | - Central API | | - Central API |
| - Stateless | | - Stateless | | - Stateless |
+--------+---------+ +---------+--------+ +---------+--------+
| | |
+----------+------------+-----------+----------+
| |
+---------v--------+ +---------v--------+
| PostgreSQL | | Redis |
| (Primary) | | (Cluster) |
+--------+---------+ +------------------+
|
+--------v---------+
| PostgreSQL |
| (Replica) |
+------------------+
Store Locations (5 stores):
+----------------+ +----------------+ +----------------+
| GM Store | | HM Store | | LM Store |
| +------------+ | | +------------+ | | +------------+ |
| |POS Client 1| | | |POS Client 1| | | |POS Client 1| |
| +------------+ | | +------------+ | | +------------+ |
| |POS Client 2| | | +------------+ | +----------------+
| +------------+ | | |POS Client 2| |
+----------------+ | +------------+ |
+----------------+
Security Architecture
Security Layers
===============
+------------------------------------------------------------------+
| INTERNET |
+---------------------------+--------------------------------------+
|
v
+---------------------------+--------------------------------------+
| TLS TERMINATION |
| (Let's Encrypt) |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| API GATEWAY |
| +-----------------------+ +-----------------------+ |
| | Rate Limiting | | IP Whitelisting | |
| | 100 req/min/client | | (Admin Portal only) | |
| +-----------------------+ +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHENTICATION |
| +-----------------------+ +-----------------------+ |
| | JWT Validation | | PIN Verification | |
| | - Signature check | | - Employee clock-in | |
| | - Expiry check | | - Sensitive actions | |
| | - Tenant claim | +-----------------------+ |
| +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHORIZATION |
| +-----------------------+ +-----------------------+ |
| | Role-Based (RBAC) | | Permission Policies | |
| | - Admin | | - can:create_sale | |
| | - Manager | | - can:void_sale | |
| | - Cashier | | - can:view_reports | |
| +-----------------------+ +-----------------------+ |
+------------------------------------------------------------------+
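The authorization layer's RBAC check reduces to a role-to-permission lookup. A minimal sketch, using the role names and permission strings from the diagram above (the exact grants per role are assumptions for illustration):

```python
# Hypothetical role-to-permission mapping; actual grants are configured per tenant.
ROLE_PERMISSIONS = {
    "Admin":   {"can:create_sale", "can:void_sale", "can:view_reports"},
    "Manager": {"can:create_sale", "can:void_sale", "can:view_reports"},
    "Cashier": {"can:create_sale"},
}

def is_authorized(role: str, permission: str) -> bool:
    """RBAC check: does the role grant the requested permission?"""
    return permission in ROLE_PERMISSIONS.get(role, set())
```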
L.9B Data Flow Reference
Detailed Implementation Reference (from former High-Level Architecture chapter, now consolidated here):
Pattern 1: Online Sale Flow
Online Sale Flow
================
[POS Client] [Central API] [Database]
| | |
| 1. POST /sales | |
|------------------------------>| |
| | 2. Validate request |
| |------------------------------>|
| | |
| | 3. Begin transaction |
| |------------------------------>|
| | |
| | 4. Create sale record |
| |------------------------------>|
| | |
| | 5. Decrement inventory |
| |------------------------------>|
| | |
| | 6. Log sale event |
| |------------------------------>|
| | |
| | 7. Commit transaction |
| |------------------------------>|
| | |
| 8. Return sale confirmation | |
|<------------------------------| |
| | |
| 9. Print receipt | |
| | |
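Steps 3-7 hinge on the sale record, inventory decrement, and event log committing atomically. A minimal sketch using `sqlite3` (table and column names are simplified assumptions; the production path runs against PostgreSQL):

```python
import sqlite3

def create_sale(conn, sale_id, product_id, qty, total):
    """Insert sale, decrement inventory, and log the event in ONE transaction."""
    try:
        with conn:  # BEGIN ... COMMIT; rolls back automatically on exception
            conn.execute("INSERT INTO sales (id, total) VALUES (?, ?)",
                         (sale_id, total))
            cur = conn.execute(
                "UPDATE inventory SET quantity = quantity - ? "
                "WHERE product_id = ? AND quantity >= ?",
                (qty, product_id, qty))
            if cur.rowcount == 0:
                raise ValueError("insufficient stock")
            conn.execute(
                "INSERT INTO events (event_type, aggregate_id) VALUES (?, ?)",
                ("SaleCreated", sale_id))
    except ValueError:
        return {"status": "rejected"}
    return {"status": "confirmed", "sale_id": sale_id}
```

The guarded `UPDATE ... WHERE quantity >= ?` prevents overselling: if stock is insufficient, zero rows match, the exception fires, and the whole transaction (including the sale insert) rolls back.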
Pattern 2: Offline Sale Flow
Offline Sale Flow
=================
[POS Client] [Local SQLite] [Sync Queue]
| | |
| 1. Create sale locally | |
|------------------------------>| |
| | 2. Generate local UUID |
| | |
| 3. Decrement local inventory | |
|------------------------------>| |
| | |
| 4. Queue for sync | |
|-------------------------------------------------------------->|
| | |
| 5. Print receipt | |
| | |
--- Later, when online ---
[Sync Service] [Central API] [Database]
| | |
| 1. Pop from queue | |
| | |
| 2. POST /sync/sales | |
|------------------------------>| |
| | 3. Validate (check for dupe) |
| |------------------------------>|
| | |
| | 4. Insert with local UUID |
| |------------------------------>|
| | |
| 5. Mark synced | |
|<------------------------------| |
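The duplicate check in step 3 works because the UUID generated offline travels with the sale: the server-side insert is keyed on it, so a retried or re-delivered sync batch cannot create a second sale. A minimal `sqlite3` sketch of the idempotent insert (table shape is a simplification):

```python
import sqlite3

def sync_sale(conn, local_uuid, total):
    """Idempotent server-side insert keyed on the client-generated UUID."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO sales (id, total) VALUES (?, ?)",
        (local_uuid, total))
    conn.commit()
    # rowcount is 0 when the UUID was already present (duplicate delivery)
    return "inserted" if cur.rowcount == 1 else "duplicate"
```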
Pattern 3: Inventory Sync Flow
Inventory Sync from Shopify
===========================
[Shopify] [Webhook Handler] [Inventory Svc]
| | |
| 1. inventory_levels/update | |
|------------------------------>| |
| | 2. Validate webhook |
| | |
| | 3. Parse inventory update |
| |------------------------------>|
| | |
| | 4. Update stock level |
| |------------------------------>|
| | |
| | 5. Log inventory event |
| |------------------------------>|
| | |
| | 6. Broadcast to POS clients |
| |------------------------------>|
| | (SignalR) |
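Step 2 ("Validate webhook") rests on Shopify's webhook signing: Shopify sends a base64-encoded HMAC-SHA256 of the raw request body, computed with the app's shared secret, in the `X-Shopify-Hmac-Sha256` header. A sketch of the handler-side check:

```python
import base64, hashlib, hmac

def verify_shopify_webhook(body: bytes, header_hmac: str, secret: bytes) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    digest = hmac.new(secret, body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, header_hmac)
```

Verification must run against the raw bytes before any JSON parsing; re-serialized bodies will not match the signature.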
L.9C Domain Model Reference
Domain Model Overview (from former Domain Model chapter, now consolidated here). NOTE: Only the bounded contexts, aggregates, and ER diagram are included here; detailed entity field definitions are in the Part III Database chapters.
Bounded Contexts Overview
Domain Bounded Contexts
=======================
+------------------------------------------------------------------+
| POS PLATFORM |
| |
| +-------------+ +-------------+ +-------------+ |
| | CATALOG | | SALES | | INVENTORY | |
| | | | | | | |
| | Products | | Sales | | StockLevels | |
| | Variants | | LineItems | | Adjustments | |
| | Categories | | Payments | | Transfers | |
| | Pricing | | Refunds | | Counts | |
| +-------------+ +-------------+ +-------------+ |
| |
| +-------------+ +-------------+ +-------------+ |
| | CUSTOMER | | EMPLOYEE | | LOCATION | |
| | | | | | | |
| | Customers | | Employees | | Locations | |
| | Addresses | | Roles | | Registers | |
| | Loyalty | | Permissions | | Settings | |
| | Credits | | Shifts | | TaxRates | |
| +-------------+ +-------------+ +-------------+ |
| |
+------------------------------------------------------------------+
Context Summary Table
| Context | Entities | Purpose |
|---|---|---|
| Catalog | Product, Variant, Category, PricingRule | Product management |
| Sales | Sale, LineItem, Payment, Refund | Transaction processing |
| Inventory | InventoryItem, Adjustment, Transfer | Stock management |
| Customer | Customer, Address, Credit, Loyalty | Customer management |
| Employee | Employee, Role, Permission, Shift | Staff management |
| Location | Location, Register, Drawer, TaxRate | Store configuration |
Entity Relationship Diagram
Entity Relationships
====================
+----------+
| Category |
+----+-----+
|
| 1:N
v
+----------+ 1:N +----------+ 1:N +----------------+
| Location |<-------------| Product |------------->| ProductVariant |
+----+-----+ +----+-----+ +-------+--------+
| | |
| | |
| 1:N | |
v | |
+----------+ | |
| Register | v v
+----+-----+ +----------+ +----------------+
| |Inventory | | Adjustment |
| | Item | | Item |
| 1:N +----------+ +----------------+
v
+----------+
|CashDrawer|
+----------+
+----------+ 1:N +----------+ 1:N +----------+
| Customer |------------->| Sale |------------->| LineItem |
+----+-----+ +----+-----+ +----------+
| |
| | 1:N
| 1:N v
v +----------+
+----------+ | Payment |
| Credit | +----------+
+----------+
+----------+ N:1 +----------+ 1:N +----------+
| Employee |------------->| Role |------------->|Permission|
+----+-----+ +----------+ +----------+
|
| 1:N
v
+----------+
| Shift |
+----------+
Aggregate Boundaries
Each aggregate has a root entity and encapsulates related entities:
Aggregate Definitions
=====================
SALE Aggregate
+------------------------------------------+
| Sale (Root) |
| +-- SaleLineItem[] (owned) |
| +-- Payment[] (owned) |
| +-- Refund[] (reference: sale_id) |
+------------------------------------------+
INVENTORY_ADJUSTMENT Aggregate
+------------------------------------------+
| InventoryAdjustment (Root) |
| +-- InventoryAdjustmentItem[] (owned) |
+------------------------------------------+
INVENTORY_TRANSFER Aggregate
+------------------------------------------+
| InventoryTransfer (Root) |
| +-- InventoryTransferItem[] (owned) |
+------------------------------------------+
CUSTOMER Aggregate
+------------------------------------------+
| Customer (Root) |
| +-- CustomerAddress[] (owned) |
| +-- StoreCredit[] (reference) |
| +-- LoyaltyTransaction[] (reference) |
+------------------------------------------+
PRODUCT Aggregate
+------------------------------------------+
| Product (Root) |
| +-- ProductVariant[] (owned) |
+------------------------------------------+
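The point of the aggregate boundary is that invariants are enforced only through the root. A minimal sketch of the SALE aggregate under that rule (field names and the specific invariant shown are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class SaleLineItem:
    sku: str
    qty: int
    unit_price: float

@dataclass
class Sale:
    """Aggregate root: owns its line items and payments, guards invariants."""
    sale_id: str
    line_items: list = field(default_factory=list)
    payments: list = field(default_factory=list)  # amounts received

    @property
    def total(self) -> float:
        return sum(li.qty * li.unit_price for li in self.line_items)

    def add_payment(self, amount: float) -> None:
        self.payments.append(amount)

    def complete(self) -> str:
        # Invariant: a sale cannot complete until payments cover the total.
        if sum(self.payments) < self.total:
            raise ValueError("payments do not cover sale total")
        return "completed"
```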
L.10 Risks & Mitigations
| Risk | Mitigation Strategy |
|---|---|
| Sync Conflicts | Use Event Sourcing to replay conflicting events deterministically. First-commit-wins for inventory with backorder escalation. |
| Observability Overload | LGTM stack with integration-specific dashboards: circuit breaker state, DLQ depth, sync latency, safety buffer violations, disapproval rate per channel. |
| GenAI Code Risks | 6-Gate Security Pyramid: SAST + SCA + Secrets + ArchUnit + Pact + Manual Review. Architecture conformance tests prevent module boundary violations. |
| PCI-DSS Non-Compliance | FIM via Wazuh agents on all POS nodes. SCA via Snyk. SBOM generation. Session management with 15-minute timeout. |
| Supply Chain Attacks | Package firewall at proxy level. Real-time SBOM. Automated dependency updates with vulnerability scanning. |
| External API Cascade Failure | Circuit breaker (5 failures/60s → OPEN). Module 6 as Extractable Integration Gateway with failure isolation. Bulkheaded thread pools. |
| Credential Compromise | HashiCorp Vault with key hierarchy. 90-day automated rotation. Emergency rotation procedures. Least-privilege access policies. |
| Overselling Across Channels | Safety buffer computation with 4-level priority resolution. Transactional Outbox for atomic inventory + event. First-commit-wins with backorder escalation. |
L.10A Key Architecture Decisions (BRD-v12)
This section documents critical architecture decisions derived from BRD-v12 requirements analysis. Each decision follows the Architecture Decision Record (ADR) format.
L.10A.1 Offline Strategy Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-001 |
| Context | POS terminals must operate during network outages without losing transactions |
| Decision | Queue-and-Sync with configurable limits |
| Alternatives Considered | 1) Full local database replica, 2) Degraded mode only, 3) Queue-only (selected) |
| Rationale | Full replica adds sync complexity; degraded mode loses sales; queue-only balances reliability with simplicity |
| Reference | BRD-v12 §1.16, Section L.10A.1 |
Implementation Configuration:
offline_mode:
max_queue_size: 100
sync_interval_seconds: 30
conflict_strategy: "server_wins_with_review"
# Operations ALLOWED offline
allowed_offline:
- sale_new
- return_with_receipt
- price_check
- parked_sale_create
- parked_sale_retrieve
# Operations BLOCKED offline (too risky)
blocked_offline:
- customer_create # Requires uniqueness check
- credit_limit_check # Requires real-time balance
- on_account_payment # Risk of exceeding limit
- multi_store_inventory # Requires network
- gift_card_activation # Must register immediately
- gift_card_reload # Risk of double-load
- transfer_request # Requires other store
- reservation_create # Requires other store
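A terminal-side gate derived from this configuration might look like the following sketch (function name and request shape are assumptions; the operation lists mirror `allowed_offline` / `blocked_offline` above):

```python
# Operation lists mirroring the offline_mode configuration.
ALLOWED_OFFLINE = {"sale_new", "return_with_receipt", "price_check",
                   "parked_sale_create", "parked_sale_retrieve"}
BLOCKED_OFFLINE = {"customer_create", "credit_limit_check", "on_account_payment",
                   "multi_store_inventory", "gift_card_activation",
                   "gift_card_reload", "transfer_request", "reservation_create"}

def can_execute(operation: str, online: bool) -> bool:
    """Online terminals may do anything; offline terminals only the safe subset."""
    if online:
        return True
    if operation in BLOCKED_OFFLINE:
        return False
    return operation in ALLOWED_OFFLINE
```

Note the default-deny stance: an operation absent from both lists is still refused offline, which keeps newly added features safe until explicitly classified.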
Conflict Resolution Strategy:
┌─────────────────────────────────────────────────────────────┐
│ OFFLINE SYNC WORKFLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ Network Lost Network Restored Conflict? │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Queue │─────────►│ Sync │────────►│ Review │ │
│ │ Locally │ │ to API │ │ Manager │ │
│ └─────────┘ └─────────┘ └─────────┘ │
│ │ │ │ │
│ Max 100 txns 30-second Server wins │
│ interval with flag │
│ │
└─────────────────────────────────────────────────────────────┘
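The `server_wins_with_review` strategy reduces to: keep the server value, but never discard a divergence silently. A minimal sketch (function and queue shape are illustrative):

```python
def resolve_conflict(local_value, server_value, review_queue: list, key: str):
    """Server-wins resolution that flags every divergence for manager review."""
    if local_value == server_value:
        return server_value
    review_queue.append({"key": key, "local": local_value, "server": server_value})
    return server_value  # server wins; the conflict is surfaced, not lost
```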
L.10A.1A POS Client Architecture
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
POS Client Architecture
=======================
+-----------------------------------------------------------------------+
| POS CLIENT |
| |
| +------------------------+ +-------------------------------+ |
| | Presentation | | Local Storage | |
| | | | | |
| | +------------------+ | | +-------------------------+ | |
| | | Sales Screen | | | | SQLite Database | | |
| | +------------------+ | | | | | |
| | | Product Grid | | | | +---------------------+ | | |
| | +------------------+ | | | | products_cache | | | |
| | | Cart Panel | | | | +---------------------+ | | |
| | +------------------+ | | | | pending_sales | | | |
| | | Payment Dialog | | | | +---------------------+ | | |
| | +------------------+ | | | | sync_queue | | | |
| | | Receipt Print | | | | +---------------------+ | | |
| | +------------------+ | | | | events (local) | | | |
| +------------------------+ | | +---------------------+ | | |
| | | | | customer_cache | | | |
| v | | +---------------------+ | | |
| +------------------------+ | +-------------------------+ | |
| | Application Layer | | | |
| | | +-------------------------------+ |
| | +------------------+ | ^ |
| | | SaleService |------------------------>| |
| | +------------------+ | | |
| | | InventoryService |------------------------>| |
| | +------------------+ | | |
| | | CustomerService |------------------------>| |
| | +------------------+ | |
| +------------------------+ |
| | |
| v |
| +------------------------+ +-------------------------------+ |
| | Sync Service | | Connection Monitor | |
| | | | | |
| | - Queue Manager |<------>| - Ping Central API | |
| | - Conflict Resolver | | - Track online/offline | |
| | - Retry Handler | | - Trigger sync when online | |
| | - Batch Uploader | | | |
| +------------------------+ +-------------------------------+ |
| | |
+-------------|----------------------------------------------------------+
|
v (when online)
+-----------------------------------------------------------------------+
| CENTRAL API |
+-----------------------------------------------------------------------+
L.10A.1B Local Database Schema (SQLite)
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
-- SQLite Schema for POS Client
-- Product cache (synced from server)
CREATE TABLE products_cache (
id TEXT PRIMARY KEY,
sku TEXT UNIQUE NOT NULL,
barcode TEXT,
name TEXT NOT NULL,
category_name TEXT,
price REAL NOT NULL,
cost REAL,
tax_code TEXT,
is_taxable INTEGER DEFAULT 1,
track_inventory INTEGER DEFAULT 1,
image_url TEXT,
variants_json TEXT, -- JSON array of variants
synced_at TEXT NOT NULL, -- When last synced from server
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX idx_products_barcode ON products_cache(barcode);
CREATE INDEX idx_products_name ON products_cache(name);
-- Inventory cache (synced from server)
CREATE TABLE inventory_cache (
product_id TEXT NOT NULL,
variant_id TEXT,
location_id TEXT NOT NULL,
quantity INTEGER NOT NULL,
synced_at TEXT NOT NULL,
PRIMARY KEY (product_id, variant_id, location_id)
);
-- Customer cache (synced from server)
CREATE TABLE customers_cache (
id TEXT PRIMARY KEY,
customer_number TEXT UNIQUE,
first_name TEXT,
last_name TEXT,
email TEXT,
phone TEXT,
loyalty_points INTEGER DEFAULT 0,
store_credit REAL DEFAULT 0,
synced_at TEXT NOT NULL
);
-- Local sales (created offline, pending sync)
CREATE TABLE local_sales (
id TEXT PRIMARY KEY,
sale_number TEXT UNIQUE NOT NULL,
location_id TEXT NOT NULL,
register_id TEXT NOT NULL,
employee_id TEXT NOT NULL,
customer_id TEXT,
status TEXT DEFAULT 'completed',
subtotal REAL NOT NULL,
discount_total REAL DEFAULT 0,
tax_total REAL DEFAULT 0,
total REAL NOT NULL,
line_items_json TEXT NOT NULL, -- JSON array of line items
payments_json TEXT NOT NULL, -- JSON array of payments
created_at TEXT DEFAULT (datetime('now')),
synced_at TEXT -- NULL until synced
);
CREATE INDEX idx_local_sales_synced ON local_sales(synced_at);
-- Event queue (append-only, sync to server)
CREATE TABLE event_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
event_id TEXT UNIQUE NOT NULL,
aggregate_type TEXT NOT NULL,
aggregate_id TEXT NOT NULL,
event_type TEXT NOT NULL,
event_data TEXT NOT NULL, -- JSON
created_at TEXT NOT NULL,
created_by TEXT,
synced_at TEXT, -- NULL until synced
sync_attempts INTEGER DEFAULT 0,
last_error TEXT
);
CREATE INDEX idx_event_queue_pending ON event_queue(synced_at) WHERE synced_at IS NULL;
-- Sync metadata
CREATE TABLE sync_status (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT DEFAULT (datetime('now'))
);
-- Track what we've synced
INSERT INTO sync_status (key, value) VALUES
('last_product_sync', '1970-01-01T00:00:00Z'),
('last_inventory_sync', '1970-01-01T00:00:00Z'),
('last_customer_sync', '1970-01-01T00:00:00Z'),
('last_event_push', '1970-01-01T00:00:00Z');
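The `event_queue` table is designed to be drained cheaply: the partial index `idx_event_queue_pending` covers only rows where `synced_at IS NULL`, so the pending scan stays fast regardless of total history. A sketch of the client-side drain against this schema (helper names are assumptions):

```python
import sqlite3

def pending_events(conn, limit=50):
    """Fetch the oldest unsynced events (served by the partial index)."""
    rows = conn.execute(
        "SELECT event_id, event_type FROM event_queue "
        "WHERE synced_at IS NULL ORDER BY id LIMIT ?", (limit,))
    return rows.fetchall()

def mark_synced(conn, event_ids):
    """Stamp events as synced after the server acknowledges the batch."""
    conn.executemany(
        "UPDATE event_queue SET synced_at = datetime('now') WHERE event_id = ?",
        [(e,) for e in event_ids])
    conn.commit()
```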
L.10A.1C Sync Queue Design
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
Sync Queue Architecture
=======================
+-------------------+ +-------------------+ +-------------------+
| Sale Created | | Inventory Adj | | Customer Created |
| (Offline) | | (Offline) | | (Offline) |
+--------+----------+ +--------+----------+ +--------+----------+
| | |
v v v
+-----------------------------------------------------------------------+
| SYNC QUEUE |
| |
| Priority | Type | Status | Retries | Last Error |
| --------------------------------------------------------------- |
| 1 | SaleCreated | pending | 0 | |
| 1 | PaymentReceived | pending | 0 | |
| 2 | InventoryAdjusted | pending | 0 | |
| 3 | CustomerCreated | failed | 3 | Timeout |
| 1 | SaleCompleted | pending | 0 | |
| |
| Priority Legend: |
| 1 = Critical (sales, payments) - sync immediately |
| 2 = Important (inventory) - sync within minutes |
| 3 = Normal (customers) - sync when convenient |
+-----------------------------------------------------------------------+
|
| Sync Processor (runs when online)
v
+-----------------------------------------------------------------------+
| CENTRAL API |
| |
| POST /api/sync/events |
| [ |
| { eventType: "SaleCreated", ... }, |
| { eventType: "PaymentReceived", ... }, |
| ... |
| ] |
| |
| Response: { synced: 5, conflicts: 0, errors: [] } |
+-----------------------------------------------------------------------+
Sync Priority Rules
| Priority | Event Types | Sync Timing |
|---|---|---|
| 1 (Critical) | Sales, Payments, Refunds, Voids | Immediate when online |
| 2 (Important) | Inventory adjustments, Transfers | Within 5 minutes |
| 3 (Normal) | Customer updates, Loyalty changes | Within 15 minutes |
| 4 (Low) | Analytics events, Logs | Batch sync hourly |
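The tiers above map naturally to a small ordering helper. A minimal sketch (type, member, and event names are illustrative, not taken from the BRD):

```csharp
// Sketch: priority ordering for the sync queue (names hypothetical).
// Maps event types to the tiers in the table above, then orders
// pending events so critical ones are pushed first.
public static class SyncPriority
{
    public static int For(string eventType) => eventType switch
    {
        "SaleCreated" or "SaleCompleted" or "PaymentReceived"
            or "RefundIssued" or "SaleVoided" => 1,        // Critical
        "InventoryAdjusted" or "TransferCreated" => 2,      // Important
        "CustomerCreated" or "CustomerUpdated"
            or "LoyaltyChanged" => 3,                       // Normal
        _ => 4                                              // Low: analytics, logs
    };

    public static IEnumerable<QueuedEvent> InSyncOrder(IEnumerable<QueuedEvent> pending) =>
        pending.OrderBy(e => For(e.EventType)).ThenBy(e => e.CreatedAt);
}

public record QueuedEvent(string EventId, string EventType, DateTime CreatedAt);
```

Ordering by priority first and creation time second preserves causal order within each tier (a `SaleCreated` always precedes its `SaleCompleted`).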
L.10A.1D Conflict Resolution Strategies

Conflict Resolution Matrix
==========================
+------------------+---------------------+--------------------------------+
| Data Type | Strategy | Reasoning |
+------------------+---------------------+--------------------------------+
| Sales | Append-Only | Each sale is unique, no |
| | (No Conflicts) | conflicts possible |
+------------------+---------------------+--------------------------------+
| Inventory | Last-Write-Wins | Central server is authority, |
| | (Server Wins) | client updates are suggestions |
+------------------+---------------------+--------------------------------+
| Customers | Merge on Key | Merge by email, combine |
| | (Email = Key) | non-conflicting fields |
+------------------+---------------------+--------------------------------+
| Products | Server Authority | Product catalog managed |
| | (Read-Only Client) | centrally, client is cache |
+------------------+---------------------+--------------------------------+
| Employees | Server Authority | HR data managed centrally |
| | (Read-Only Client) | |
+------------------+---------------------+--------------------------------+
| Settings | Server Authority | Config managed by admin |
| | (Read-Only Client) | |
+------------------+---------------------+--------------------------------+
Strategy 1: Append-Only (Sales)
Sale Conflict Resolution: None Required
========================================
Client A (Offline): Client B (Offline):
Sale S-001 created @ 10:15 Sale S-002 created @ 10:16
LineItem: Product X, Qty 2 LineItem: Product Y, Qty 1
Payment: $50 cash Payment: $25 credit
When both sync:
Server: Accepts S-001 (unique ID)
Server: Accepts S-002 (unique ID)
Result: Both sales recorded, no conflict
Strategy 2: Last-Write-Wins (Inventory)
Inventory Conflict Resolution: Server Authority
===============================================
Server State:
Product X @ Location HQ: 100 units
Client A (Offline): Client B (Offline):
Sells 5 units of Product X Sells 3 units of Product X
Local: 95 units Local: 97 units
When both sync:
Server receives: "Sold 5 units" from A
Server receives: "Sold 3 units" from B
Server calculates: 100 - 5 - 3 = 92 units
Server pushes new quantity to all clients
Result:
All clients update to 92 units
Individual decrements preserved
No quantity lost or duplicated
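The server-side arithmetic in this walkthrough amounts to folding client decrements over the authoritative quantity. A minimal sketch, assuming clients report per-sale quantities as operations rather than totals:

```csharp
// Sketch: server applies client decrements to its own state
// (server authority), never trusting client-computed totals.
public static int ApplyDeltas(int serverQty, IEnumerable<int> soldQuantities) =>
    soldQuantities.Aggregate(serverQty, (qty, sold) => qty - sold);

// ApplyDeltas(100, new[] { 5, 3 }) -> 92, matching the walkthrough above
```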
Strategy 3: Merge on Key (Customers)
Customer Conflict Resolution: Merge
===================================
Server State:
Customer email: john@example.com
Name: John Doe
Phone: (blank)
Loyalty: 500 points
Client A (Offline): Client B (Offline):
Updates phone to 555-1234 Updates loyalty to 600 points
When both sync:
Server merges non-conflicting fields:
Name: John Doe (unchanged)
Phone: 555-1234 (from A)
Loyalty: 600 points (from B)
If same field changed:
Server uses timestamp to pick latest
Or prompts admin for resolution
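The merge behavior described above can be sketched as a per-field last-write-wins combine. The types and field representation here are illustrative, not the production model:

```csharp
// Sketch: field-level customer merge keyed on email (types hypothetical).
// Non-conflicting fields combine; when both sides changed the same field,
// the later write wins, as described in the strategy above.
public record FieldValue(string? Value, DateTime UpdatedAt);

public static class CustomerMerge
{
    public static Dictionary<string, FieldValue> Merge(
        Dictionary<string, FieldValue> server,
        Dictionary<string, FieldValue> client)
    {
        var merged = new Dictionary<string, FieldValue>(server);
        foreach (var (field, incoming) in client)
        {
            if (!merged.TryGetValue(field, out var existing) ||
                incoming.UpdatedAt > existing.UpdatedAt)
            {
                merged[field] = incoming; // take the newer write
            }
        }
        return merged;
    }
}
```

Escalation to an admin (the last line of the walkthrough) would replace the timestamp comparison with a conflict record when both writes fall inside the same sync window.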
L.10A.1E Sync Processor Workflow
Sync Processor State Machine
============================
+-------------+
| IDLE |
+------+------+
|
| Connection detected
v
+-------------+
| SYNCING |
+------+------+
|
+------------------+------------------+
| | |
v v v
+-------------+ +-------------+ +-------------+
| PUSH EVENTS | | PULL DATA | | COMPLETE |
| | | | | |
| - Sales | | - Products | | - Update |
| - Payments | | - Inventory | | metadata |
| - Inventory | | - Customers | | - Return |
| changes | | - Settings | | to IDLE |
+------+------+ +------+------+ +-------------+
| |
+------------------+
|
v
+-------------+
| HANDLE |
| CONFLICTS |
+------+------+
|
v
+-------------+
| COMPLETE |
+-------------+
Sync Service Implementation
// SyncService.cs
public class SyncService : IHostedService
{
private readonly ILocalDatabase _localDb;
private readonly IApiClient _apiClient;
private readonly IConnectionMonitor _connectionMonitor;
private readonly IConflictResolver _conflictResolver;
private readonly ILogger<SyncService> _logger;
private Timer? _syncTimer;
private bool _isSyncing = false;
public Task StartAsync(CancellationToken cancellationToken)
{
_connectionMonitor.OnlineStatusChanged += HandleConnectionChange;
// Check for pending sync every 30 seconds
_syncTimer = new Timer(
async _ => await TrySyncAsync(),
null,
TimeSpan.Zero,
TimeSpan.FromSeconds(30)
);
return Task.CompletedTask;
}
private async void HandleConnectionChange(object? sender, bool isOnline)
{
if (isOnline)
{
_logger.LogInformation("Connection restored, starting sync");
await TrySyncAsync();
}
}
private async Task TrySyncAsync()
{
if (_isSyncing) return;
if (!_connectionMonitor.IsOnline) return;
_isSyncing = true;
try
{
// 1. Push local events to server
await PushEventsAsync();
// 2. Pull updated data from server
await PullProductsAsync();
await PullInventoryAsync();
await PullCustomersAsync();
// 3. Update sync timestamps
await UpdateSyncMetadataAsync();
_logger.LogInformation("Sync completed successfully");
}
catch (Exception ex)
{
_logger.LogError(ex, "Sync failed");
}
finally
{
_isSyncing = false;
}
}
private async Task PushEventsAsync()
{
// Get pending events ordered by priority
var pendingEvents = await _localDb.GetPendingEventsAsync();
if (!pendingEvents.Any()) return;
// Batch events (max 100 per request)
var batches = pendingEvents.Chunk(100);
foreach (var batch in batches)
{
try
{
var response = await _apiClient.PostEventsAsync(batch);
// Mark synced events
foreach (var evt in response.Synced)
{
await _localDb.MarkEventSyncedAsync(evt.EventId);
}
// Handle conflicts
foreach (var conflict in response.Conflicts)
{
await _conflictResolver.ResolveAsync(conflict);
}
}
catch (HttpRequestException)
{
// Network error, increment retry count
foreach (var evt in batch)
{
await _localDb.IncrementEventRetryAsync(evt.EventId);
}
throw;
}
}
}
private async Task PullProductsAsync()
{
var lastSync = await _localDb.GetSyncTimestampAsync("products");
var products = await _apiClient.GetProductsUpdatedSinceAsync(lastSync);
foreach (var product in products)
{
await _localDb.UpsertProductCacheAsync(product);
}
}
private async Task PullInventoryAsync()
{
var locationId = await GetCurrentLocationIdAsync();
var lastSync = await _localDb.GetSyncTimestampAsync("inventory");
var inventory = await _apiClient.GetInventoryUpdatedSinceAsync(locationId, lastSync);
foreach (var item in inventory)
{
// Apply server's quantity (server is authority)
await _localDb.UpdateInventoryCacheAsync(item);
}
}
public Task StopAsync(CancellationToken cancellationToken)
{
// Required by IHostedService: stop the background timer on shutdown
_syncTimer?.Dispose();
return Task.CompletedTask;
}
}
L.10A.1F Sale Creation Flow (Offline-Capable)
Offline Sale Flow
=================
1. Cashier scans items
+----------------+
| Local Lookup |
| products_cache |
+----------------+
|
v
2. Add to cart (no network needed)
+----------------+
| In-Memory Cart |
+----------------+
|
v
3. Customer pays
+----------------+
| Payment Dialog |
| (card or cash) |
+----------------+
|
v
4. Save sale locally
+----------------+
| local_sales |
| (SQLite) |
+----------------+
|
v
5. Queue sync events
+----------------+
| event_queue |
| SaleCreated |
| ItemAdded x N |
| PaymentRcvd |
| SaleCompleted |
+----------------+
|
v
6. Decrement local inventory
+----------------+
| inventory_cache|
| (optimistic) |
+----------------+
|
v
7. Print receipt
+----------------+
| Receipt ready |
| (no waiting) |
+----------------+
|
v
8. Background sync (when online)
+----------------+
| SyncService |
| pushes events |
+----------------+
Sale Service Implementation
// SaleService.cs
public class SaleService
{
private readonly ILocalDatabase _localDb;
private readonly IEventQueue _eventQueue;
private readonly IReceiptPrinter _printer;
public async Task<Sale> CompleteSaleAsync(Cart cart, List<Payment> payments)
{
// 1. Generate local IDs
var saleId = Guid.NewGuid();
var saleNumber = GenerateSaleNumber();
// 2. Create sale record
var sale = new Sale
{
Id = saleId,
SaleNumber = saleNumber,
LocationId = GetCurrentLocationId(),
RegisterId = GetCurrentRegisterId(),
EmployeeId = GetCurrentEmployeeId(),
CustomerId = cart.CustomerId,
Status = "completed",
Subtotal = cart.Subtotal,
DiscountTotal = cart.DiscountTotal,
TaxTotal = cart.TaxTotal,
Total = cart.Total,
LineItems = cart.Items.Select(MapToLineItem).ToList(),
Payments = payments,
CreatedAt = DateTime.UtcNow
};
// 3. Save to local database
await _localDb.InsertSaleAsync(sale);
// 4. Queue events for sync
await _eventQueue.EnqueueAsync(new SaleCreated
{
SaleId = saleId,
SaleNumber = saleNumber,
LocationId = sale.LocationId,
EmployeeId = sale.EmployeeId,
CustomerId = sale.CustomerId,
CreatedAt = sale.CreatedAt
});
foreach (var item in sale.LineItems)
{
await _eventQueue.EnqueueAsync(new SaleLineItemAdded
{
SaleId = saleId,
LineItemId = item.Id,
ProductId = item.ProductId,
Sku = item.Sku,
Name = item.Name,
Quantity = item.Quantity,
UnitPrice = item.UnitPrice
});
// 5. Decrement local inventory (optimistic)
await _localDb.DecrementInventoryAsync(
item.ProductId,
item.VariantId,
sale.LocationId,
item.Quantity
);
}
foreach (var payment in payments)
{
await _eventQueue.EnqueueAsync(new PaymentReceived
{
SaleId = saleId,
PaymentId = payment.Id,
PaymentMethod = payment.Method,
Amount = payment.Amount
});
}
await _eventQueue.EnqueueAsync(new SaleCompleted
{
SaleId = saleId,
Total = sale.Total,
CompletedAt = DateTime.UtcNow
});
// 6. Print receipt (async, don't wait)
_ = _printer.PrintReceiptAsync(sale);
return sale;
}
private string GenerateSaleNumber()
{
// Format: HQ-20251229-0001
// Location-Date-Sequence
var location = GetCurrentLocationCode();
var date = DateTime.Now.ToString("yyyyMMdd");
var sequence = GetNextLocalSequence();
return $"{location}-{date}-{sequence:D4}";
}
}
L.10A.1G Connection Monitor
// ConnectionMonitor.cs
public class ConnectionMonitor : IHostedService
{
private readonly IApiClient _apiClient;
private readonly ILogger<ConnectionMonitor> _logger;
private Timer? _pingTimer;
private bool _isOnline = false;
public bool IsOnline => _isOnline;
public event EventHandler<bool>? OnlineStatusChanged;
public Task StartAsync(CancellationToken cancellationToken)
{
// Ping server every 10 seconds
_pingTimer = new Timer(
async _ => await CheckConnectionAsync(),
null,
TimeSpan.Zero,
TimeSpan.FromSeconds(10)
);
return Task.CompletedTask;
}
private async Task CheckConnectionAsync()
{
var wasOnline = _isOnline;
try
{
// Simple health check endpoint
var response = await _apiClient.PingAsync();
_isOnline = response.IsSuccessStatusCode;
}
catch
{
_isOnline = false;
}
if (_isOnline != wasOnline)
{
_logger.LogInformation(
"Connection status changed: {Status}",
_isOnline ? "ONLINE" : "OFFLINE"
);
OnlineStatusChanged?.Invoke(this, _isOnline);
}
}
public Task StopAsync(CancellationToken cancellationToken)
{
_pingTimer?.Dispose();
return Task.CompletedTask;
}
}
Offline UI Indicator
Offline Indicator Design
========================
When ONLINE:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [GM Store] [John D] |
| Status: Connected |
+-----------------------------------------------------------------------+
When OFFLINE:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [!] OFFLINE MODE [GM Store] |
| +-----------------------------------------------------------------+ |
| | Working offline. 5 sales pending sync. | |
| +-----------------------------------------------------------------+ |
+-----------------------------------------------------------------------+
When SYNCING:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [<->] Syncing... 3/5 [GM Store] |
+-----------------------------------------------------------------------+
L.10A.1H CRDTs for Conflict-Free Synchronization
Overview
While event sourcing handles sales conflicts (append-only), other data types benefit from CRDTs (Conflict-free Replicated Data Types): data structures that are mathematically guaranteed to converge without coordination.
Traditional Sync Problem
========================
Terminal A (Offline): Terminal B (Offline):
Inventory: 100 Inventory: 100
Customer purchases 5 Receives shipment +20
Local: 95 Local: 120
When both sync - CONFLICT!
Which value is correct? 95 or 120?
Answer: Neither! Correct is 115 (100 - 5 + 20)
CRDT Solution:
Both terminals track OPERATIONS, not STATE
G-Counter for additions: {A: 0, B: 20}
PN-Counter for decrements: {A: 5, B: 0}
Final value: 100 + 20 - 5 = 115
CRDT Types for POS
| CRDT Type | Use Case | Merge Strategy |
|---|---|---|
| G-Counter | Transaction counts, sales counts | Sum all increments |
| PN-Counter | Inventory levels (+/-) | Sum increments, sum decrements |
| LWW-Register | Price updates, last modified | Highest timestamp wins |
| OR-Set | Cart items, applied discounts | Union with tombstones |
| MV-Register | Customer preferences | Keep all concurrent values |
G-Counter Implementation (Transaction Counts)
// GCounter.cs - Grow-only counter
public class GCounter
{
// Each node tracks its own increment count
private readonly Dictionary<string, long> _counters = new();
private readonly string _nodeId;
public GCounter(string nodeId)
{
_nodeId = nodeId;
_counters[nodeId] = 0;
}
public void Increment(long amount = 1)
{
_counters[_nodeId] += amount;
}
public long Value => _counters.Values.Sum();
// Merge with another G-Counter (associative, commutative, idempotent)
public void Merge(GCounter other)
{
foreach (var (nodeId, count) in other._counters)
{
if (_counters.TryGetValue(nodeId, out var existing))
{
_counters[nodeId] = Math.Max(existing, count);
}
else
{
_counters[nodeId] = count;
}
}
}
public Dictionary<string, long> State => new(_counters);
}
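A short usage sketch of the G-Counter above, showing that merge order does not affect the converged value:

```csharp
// Two terminals increment independently, then exchange state.
// Merge is commutative, associative, and idempotent, so both
// replicas converge to the same total regardless of order.
var a = new GCounter("terminal-a");
var b = new GCounter("terminal-b");
a.Increment(3);
b.Increment(5);

a.Merge(b); // a now holds {terminal-a: 3, terminal-b: 5}
b.Merge(a); // b converges to the same state

// a.Value == 8 and b.Value == 8
```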
PN-Counter for Inventory
// PNCounter.cs - Positive-Negative Counter for inventory
public class PNCounter
{
private readonly GCounter _positive;
private readonly GCounter _negative;
public PNCounter(string nodeId)
{
_positive = new GCounter(nodeId);
_negative = new GCounter(nodeId);
}
public void Increment(long amount = 1)
{
_positive.Increment(amount);
}
public void Decrement(long amount = 1)
{
_negative.Increment(amount);
}
public long Value => _positive.Value - _negative.Value;
public void Merge(PNCounter other)
{
_positive.Merge(other._positive);
_negative.Merge(other._negative);
}
}
// Usage in inventory sync
public class InventoryCRDT
{
private readonly Dictionary<string, PNCounter> _inventory = new();
public void RecordSale(string sku, int quantity, string terminalId)
{
if (!_inventory.ContainsKey(sku))
_inventory[sku] = new PNCounter(terminalId);
_inventory[sku].Decrement(quantity);
}
public void RecordReceiving(string sku, int quantity, string terminalId)
{
if (!_inventory.ContainsKey(sku))
_inventory[sku] = new PNCounter(terminalId);
_inventory[sku].Increment(quantity);
}
public int GetQuantity(string sku) =>
_inventory.TryGetValue(sku, out var counter)
? (int)counter.Value
: 0;
}
LWW-Register for Price Updates
// LWWRegister.cs - Last-Writer-Wins Register
public class LWWRegister<T>
{
private T? _value;
private DateTime _timestamp;
private readonly string _localNodeId;
private string _writerNodeId; // node that wrote the current value (tie-breaker)
public LWWRegister(string nodeId)
{
_localNodeId = nodeId;
_writerNodeId = nodeId;
_timestamp = DateTime.MinValue;
}
public void Set(T value, DateTime? timestamp = null)
{
var ts = timestamp ?? DateTime.UtcNow;
if (ts > _timestamp)
{
_value = value;
_timestamp = ts;
_writerNodeId = _localNodeId;
}
}
public T? Value => _value;
public DateTime Timestamp => _timestamp;
public void Merge(LWWRegister<T> other)
{
// Tie-break on node ID so replicas still converge when two
// concurrent writes carry the exact same timestamp
if (other._timestamp > _timestamp ||
(other._timestamp == _timestamp &&
string.CompareOrdinal(other._writerNodeId, _writerNodeId) > 0))
{
_value = other._value;
_timestamp = other._timestamp;
_writerNodeId = other._writerNodeId;
}
}
}
// Usage for price sync
public class PriceCRDT
{
private readonly Dictionary<string, LWWRegister<decimal>> _prices = new();
public void UpdatePrice(string sku, decimal price, string terminalId)
{
if (!_prices.ContainsKey(sku))
_prices[sku] = new LWWRegister<decimal>(terminalId);
_prices[sku].Set(price);
}
public decimal? GetPrice(string sku) =>
_prices.TryGetValue(sku, out var register)
? register.Value
: null;
}
OR-Set for Cart Items (with Tombstones)
// ORSet.cs - Observed-Remove Set with tombstones
public class ORSet<T>
{
private readonly Dictionary<T, HashSet<string>> _additions = new();
private readonly Dictionary<T, HashSet<string>> _removals = new();
private readonly string _nodeId;
private int _counter = 0;
public ORSet(string nodeId)
{
_nodeId = nodeId;
}
public void Add(T element)
{
var uniqueTag = $"{_nodeId}:{++_counter}";
if (!_additions.ContainsKey(element))
_additions[element] = new HashSet<string>();
_additions[element].Add(uniqueTag);
}
public void Remove(T element)
{
// Only remove tags we've seen (observed-remove semantics)
if (_additions.TryGetValue(element, out var tags))
{
if (!_removals.ContainsKey(element))
_removals[element] = new HashSet<string>();
foreach (var tag in tags)
{
_removals[element].Add(tag);
}
}
}
public IEnumerable<T> Elements =>
_additions
.Where(kv =>
{
var remainingTags = _removals.TryGetValue(kv.Key, out var removed)
? kv.Value.Except(removed)
: kv.Value;
return remainingTags.Any();
})
.Select(kv => kv.Key);
public void Merge(ORSet<T> other)
{
// Merge additions
foreach (var (element, tags) in other._additions)
{
if (!_additions.ContainsKey(element))
_additions[element] = new HashSet<string>();
_additions[element].UnionWith(tags);
}
// Merge removals (tombstones)
foreach (var (element, tags) in other._removals)
{
if (!_removals.ContainsKey(element))
_removals[element] = new HashSet<string>();
_removals[element].UnionWith(tags);
}
}
}
Tombstone Handling Strategy
| Strategy | Retention | Trade-off |
|---|---|---|
| Time-based | 7-14 days | May resurrect if offline longer |
| Epoch-based | Until all nodes sync | Requires sync confirmation |
| Compaction | Periodic cleanup | Best balance for POS |
// TombstoneManager.cs
public class TombstoneManager
{
private readonly TimeSpan _tombstoneTTL = TimeSpan.FromDays(7);
public void CompactTombstones<T>(ORSet<T> set)
{
// Remove tombstones older than TTL
// Requires tracking tombstone timestamps
}
public bool IsSafeToCompact(DateTime tombstoneCreated) =>
DateTime.UtcNow - tombstoneCreated > _tombstoneTTL;
}
CRDT Sync Protocol
CRDT Sync Flow
==============
POS Terminal A Central API
| |
| 1. Push local CRDT state |
| ────────────────────────────────► |
| { |
| "inventory": { |
| "NXJ1078": { |
| "positive": {"A": 5}, |
| "negative": {"A": 2} |
| } |
| }, |
| "prices": {...} |
| } |
| |
| 2. Server merges with global state |
| |
| 3. Return merged state |
| ◄───────────────────────────────── |
| { |
| "inventory": { |
| "NXJ1078": { |
| "positive": {"A": 5, "B": 10, "HQ": 100},
| "negative": {"A": 2, "C": 3}
| } |
| } |
| } |
| |
| 4. Local merge |
| Final inventory = 110 |
When to Use CRDTs vs. Event Sourcing
| Scenario | Approach | Reasoning |
|---|---|---|
| Sales transactions | Event Sourcing | Need full audit trail |
| Inventory counts | PN-Counter CRDT | Frequent concurrent updates |
| Price updates | LWW-Register | Last price wins |
| Cart items | OR-Set | Add/remove operations |
| Customer data | Event Sourcing | Need history |
| Real-time counters | G-Counter CRDT | Dashboard metrics |
Reference Libraries
| Language | Library | Notes |
|---|---|---|
| C# | Akka.DistributedData | Battle-tested, Akka ecosystem |
| C# | Microsoft.FASTER | High-performance state |
| TypeScript | Automerge | Good for client-side |
| Rust | rust-crdt | If building native components |
L.10A.2 Tax Engine Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-002 |
| Context | Need flexible tax calculation supporting multiple jurisdictions |
| Decision | Custom-Built Tax Engine with modular jurisdiction support |
| Alternatives Considered | 1) Third-party service (Avalara/TaxJar), 2) Custom-built (selected) |
| Rationale | Full control over rules; no per-transaction fees; offline support; expansion flexibility |
| Reference | BRD-v12 §1.17 |
Tax Calculation Hierarchy (Priority Order):
┌─────────────────────────────────────────────────────────────┐
│ TAX CALCULATION HIERARCHY │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. PRODUCT-LEVEL OVERRIDE (Highest Priority) │
│ └── Example: "Grocery Food - 1.5%" │
│ └── Example: "Prepared Food - 10%" │
│ └── Example: "Prescription Drugs - 0%" │
│ │
│ 2. CUSTOMER-LEVEL EXEMPTION │
│ └── Example: "Reseller Certificate" │
│ └── Example: "Non-Profit 501(c)(3)" │
│ └── Example: "Diplomatic Status" │
│ │
│ 3. LOCATION-BASED TAX (Default) │
│ └── State Tax + County Tax + City Tax + District Tax │
│ └── Based on store physical address │
│ │
└─────────────────────────────────────────────────────────────┘
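The priority order above reduces to a small resolution function. A sketch, under the assumption that rates are expressed as percentages and customer exemptions zero out the rate:

```csharp
// Sketch of the three-level hierarchy above (signature illustrative).
// Product override beats customer exemption, which beats location rates.
public static decimal ResolveTaxRate(
    decimal? productOverrideRate,   // 1. product-level override, if any
    bool customerIsExempt,          // 2. customer-level exemption
    decimal locationCombinedRate)   // 3. state + county + city + district
{
    if (productOverrideRate.HasValue)
        return productOverrideRate.Value;
    if (customerIsExempt)
        return 0m;
    return locationCombinedRate;
}

// e.g. grocery food in Hampton Roads:
// ResolveTaxRate(1.5m, false, 4.3m + 1.0m + 0.7m) -> 1.5m (override wins)
```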
Virginia Initial Configuration:
tax_jurisdictions:
virginia:
state_rate: 4.3
default_local_rate: 1.0
# Regional additional taxes
regions:
hampton_roads:
counties: ["Norfolk", "Virginia Beach", "Newport News", "Hampton"]
additional_rate: 0.7
northern_virginia:
counties: ["Arlington", "Fairfax", "Loudoun", "Prince William"]
additional_rate: 0.7
central_virginia:
counties: ["Henrico", "Chesterfield", "Richmond City"]
additional_rate: 0.0
# Product exemptions/reduced rates
exemptions:
- category: "grocery_food"
rate: 1.5 # Reduced rate
- category: "prescription_drugs"
rate: 0.0
- category: "medical_equipment"
rate: 0.0
Expansion Roadmap:
jurisdiction_modules:
virginia: { status: "active" }
california: { status: "planned", notes: "Complex district taxes, no gift card expiry" }
oregon: { status: "planned", notes: "No sales tax state" }
canada: { status: "planned", notes: "GST/PST/HST complexity" }
european_union: { status: "planned", notes: "VAT with reverse charge" }
L.10A.3 Payment Integration Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-003 |
| Context | Need PCI-compliant card payment processing with minimal compliance burden |
| Decision | SAQ-A Semi-Integrated terminals (no card data touches our system) |
| Alternatives Considered | 1) Full integration SAQ-D, 2) Semi-integrated SAQ-A (selected), 3) Redirect-only |
| Rationale | Simplest PCI compliance; card data never in our scope; supports offline void via token |
| Reference | BRD-v12 §1.18 |
Payment Flow Architecture:
┌─────────────────────────────────────────────────────────────┐
│ SAQ-A PAYMENT ARCHITECTURE │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ POS UI │────►│ Backend │────►│ Terminal │ │
│ │ │ │ API │ │ │ │
│ └──────────┘ └──────────┘ └────┬─────┘ │
│ ▲ │ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────┐ │
│ │ │ PAYMENT PROCESSOR │ │
│ │ │ (Card data ONLY here) │ │
│ │ └─────────────────────────────────────┘ │
│ │ │ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────┐ │
│ └───────────│ Token + Approval + Masked Card │ │
│ │ (NO full PAN, CVV, or track data) │ │
│ └─────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Data Storage Rules:
┌─────────────────────────────────────────────────────────────┐
│ PAYMENT DATA STORAGE RULES │
├─────────────────────────────────────────────────────────────┤
│ │
│ ✅ STORED (Allowed): ❌ PROHIBITED (Never): │
│ ├── Payment token ├── Full card number │
│ ├── Approval code ├── CVV/CVC │
│ ├── Masked card (****1234) ├── Track data │
│ ├── Card brand (Visa/MC/Amex) ├── PIN block │
│ ├── Entry method (chip/tap) ├── EMV cryptogram (raw) │
│ ├── Terminal ID │ │
│ └── Timestamp │ │
│ │
└─────────────────────────────────────────────────────────────┘
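The allowed column can be captured as a storage model whose shape makes the prohibited fields unrepresentable. A sketch (property names illustrative):

```csharp
// Sketch: the only payment fields persisted under SAQ-A scope.
// Full PAN, CVV, track data, PIN blocks, and raw cryptograms have
// no place in this model, so they cannot be stored by accident.
public record StoredPayment(
    string PaymentToken,    // processor-issued token, not card data
    string ApprovalCode,
    string MaskedCard,      // e.g. "****1234"
    string CardBrand,       // Visa / MC / Amex
    string EntryMethod,     // chip / tap / swipe
    string TerminalId,
    DateTime Timestamp);
```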
L.10A.4 Multi-Tenancy Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-004 (Revised) |
| Context | Platform must support multiple retail tenants with strong data isolation |
| Decision | Row-Level Isolation with PostgreSQL RLS |
| Alternatives Considered | 1) Database-per-tenant, 2) Schema-per-tenant, 3) Row-level isolation with RLS (selected) |
| Rationale | Matches BRD v18.0 data models (135 occurrences of tenant_id FK across all modules). Simpler operations — no per-tenant schema migration tooling. RLS enforces isolation at the database level, preventing accidental cross-tenant data access. |
| Reference | BRD-v18.0, Chapter 03 |
v18.0 Update: The original Architecture Styles Worksheet v1.6 specified Schema-Per-Tenant. Expert panel review identified a contradiction: every data model table in BRD v18.0 includes
tenant_id UUID FK (row-level isolation pattern, 135 occurrences). This revision aligns the architecture decision with the actual BRD data models.
Database Structure:
database: pos_production
│
├── schema: shared
│ ├── tax_rates (global, no tenant_id)
│ ├── system_config (global)
│ └── tenant_registry (tenant metadata)
│
└── schema: public (all tenant data)
├── orders (tenant_id UUID FK + RLS)
├── customers (tenant_id UUID FK + RLS)
├── inventory (tenant_id UUID FK + RLS)
├── products (tenant_id UUID FK + RLS)
├── integration_providers (tenant_id UUID FK + RLS)
└── ... (all other tables with tenant_id + RLS)
RLS Policy Implementation:
-- Enable RLS on every tenant table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
-- Create isolation policy
CREATE POLICY tenant_isolation ON orders
USING (tenant_id = current_setting('app.current_tenant')::uuid);
-- Force RLS for non-superuser roles
ALTER TABLE orders FORCE ROW LEVEL SECURITY;
Connection Pattern:
// Tenant resolution via middleware (sketch: connection and JWT helpers elided)
public class TenantMiddleware
{
private readonly RequestDelegate _next;
public TenantMiddleware(RequestDelegate next) => _next = next;
public async Task InvokeAsync(HttpContext context, IDbConnection connection)
{
var tenantId = ResolveTenantFromJwt(context);
// Set the PostgreSQL session variable for RLS. Note: SET cannot take
// bind parameters, so use set_config() rather than "SET ... = @tenantId"
await connection.ExecuteAsync(
"SELECT set_config('app.current_tenant', @tenantId, false)",
new { tenantId });
await _next(context);
}
}
Benefits:
- Simpler connection pooling (shared pool, not per-schema)
- Standard query patterns (no search_path manipulation)
- Easier migrations (single schema, applied once)
- RLS enforcement at database level (defense-in-depth)
- Matches BRD v18.0 data model conventions
Trade-offs:
- Less physical isolation than schema separation (mitigated by RLS)
- All tenants share same table structure (flexibility limited)
- RLS policies must be applied to every table (automated via migration scripts)
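The last trade-off can be automated in a migration. A PL/pgSQL sketch, assuming every tenant table lives in the `public` schema and carries a `tenant_id` column (policy name follows the example above; re-running would first require dropping existing policies):

```sql
-- Sketch: apply RLS to every tenant table in one migration pass
DO $$
DECLARE t record;
BEGIN
  FOR t IN
    SELECT table_name FROM information_schema.columns
    WHERE table_schema = 'public' AND column_name = 'tenant_id'
  LOOP
    EXECUTE format('ALTER TABLE %I ENABLE ROW LEVEL SECURITY', t.table_name);
    EXECUTE format(
      'CREATE POLICY tenant_isolation ON %I
         USING (tenant_id = current_setting(''app.current_tenant'')::uuid)',
      t.table_name);
    EXECUTE format('ALTER TABLE %I FORCE ROW LEVEL SECURITY', t.table_name);
  END LOOP;
END $$;
```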
L.10A.4A Multi-Tenancy Strategies Comparison
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
Multi-Tenancy Strategies
========================
Strategy 1: Shared Tables (Row-Level)
+----------------------------------+
| products |
| +--------+--------+------------+ |
| | tenant | id | name | |
| +--------+--------+------------+ |
| | nexus | 1 | T-Shirt | |
| | acme | 2 | Jacket | |
| | nexus | 3 | Jeans | |
| +--------+--------+------------+ |
+----------------------------------+
Pros: Simple, low overhead
Cons: Risk of data leakage, complex queries, no isolation
Strategy 2: Separate Databases
+-------------+ +-------------+ +-------------+
| nexus_db | | acme_db | | beta_db |
| +--------+ | | +--------+ | | +--------+ |
| |products| | | |products| | | |products| |
| +--------+ | | +--------+ | | +--------+ |
| |sales | | | |sales | | | |sales | |
| +--------+ | | +--------+ | | +--------+ |
+-------------+ +-------------+ +-------------+
Pros: Complete isolation
Cons: Connection overhead, backup complexity, cost at scale
Strategy 3: Schema-Per-Tenant
+-----------------------------------------------------+
| pos_platform database |
| |
| +-----------+ +--------------+ +--------------+ |
| | shared | | tenant_nexus | | tenant_acme | |
| +-----------+ +--------------+ +--------------+ |
| | tenants | | products | | products | |
| | plans | | sales | | sales | |
| | features | | inventory | | inventory | |
| +-----------+ | customers | | customers | |
| +--------------+ +--------------+ |
+-----------------------------------------------------+
Pros: Isolation + efficiency, easy backup/restore per tenant
Cons: More complex migrations (but manageable)
Decision Matrix
| Requirement | Shared Tables | Separate DBs | Schema-Per-Tenant |
|---|---|---|---|
| Data Isolation | Poor | Excellent | Excellent |
| Performance | Good | Excellent | Very Good |
| Backup/Restore | Complex | Simple | Simple |
| Connection Overhead | Low | High | Low |
| Query Complexity | High | Low | Low |
| Compliance (SOC2) | Difficult | Easy | Easy |
| Cost at Scale | Low | High | Medium |
| Migration Complexity | Low | Low | Medium |
Note: The Architecture Styles analysis (L.10A.4 above) selected Row-Level Isolation with PostgreSQL RLS as the production strategy, which aligns with BRD v18.0 data models (135 occurrences of
tenant_id). The Schema-Per-Tenant comparison above is preserved for reference and as an alternative should physical isolation requirements change.
L.10A.4B Tenant Resolution & Middleware
Tenant Resolution Flow
======================
+---------------------------+
| Incoming Request |
| nexus.pos-platform.com |
+-------------+-------------+
|
v
+---------------------------+
| Extract Subdomain |
| subdomain = "nexus" |
+-------------+-------------+
|
v
+---------------------------+
| Lookup in shared.tenants|
| WHERE subdomain = ? |
+-------------+-------------+
|
+----------------------+----------------------+
| |
[Found] [Not Found]
| |
v v
+---------------------------+ +---------------------------+
| Set PostgreSQL | | Return 404 |
| search_path TO | | "Tenant not found" |
| tenant_nexus, shared | +---------------------------+
+-------------+-------------+
|
v
+---------------------------+
| Continue with request |
| All queries now use |
| tenant_nexus schema |
+---------------------------+
ASP.NET Core Tenant Middleware
// TenantMiddleware.cs
public class TenantMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<TenantMiddleware> _logger;
public TenantMiddleware(RequestDelegate next, ILogger<TenantMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(HttpContext context, ITenantService tenantService, IDbContextFactory<PosDbContext> dbFactory)
{
// 1. Extract subdomain from host
var host = context.Request.Host.Host;
var subdomain = ExtractSubdomain(host);
if (string.IsNullOrEmpty(subdomain))
{
context.Response.StatusCode = 400;
await context.Response.WriteAsJsonAsync(new { error = "Invalid tenant" });
return;
}
// 2. Lookup tenant in shared schema
var tenant = await tenantService.GetBySubdomainAsync(subdomain);
if (tenant == null)
{
context.Response.StatusCode = 404;
await context.Response.WriteAsJsonAsync(new { error = "Tenant not found" });
return;
}
if (tenant.Status == "suspended")
{
context.Response.StatusCode = 403;
await context.Response.WriteAsJsonAsync(new { error = "Account suspended" });
return;
}
// 3. Store tenant in HttpContext for downstream use
context.Items["Tenant"] = tenant;
context.Items["TenantSchema"] = tenant.SchemaName;
_logger.LogDebug("Resolved tenant: {TenantSlug} -> {Schema}", tenant.Slug, tenant.SchemaName);
// 4. Continue pipeline
await _next(context);
}
private string? ExtractSubdomain(string host)
{
// nexus.pos-platform.com -> nexus
// localhost:5000 -> null (development fallback)
var parts = host.Split('.');
if (parts.Length >= 3)
{
return parts[0];
}
// No subdomain (e.g. localhost during development): no tenant resolved from host
return null;
}
}
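For context, the middleware above must run before authentication and endpoint routing so every downstream component sees the resolved tenant. A minimal registration sketch (this Program.cs wiring is an assumption; the DbContext and factory registrations are omitted for brevity):

```csharp
// Program.cs (sketch) -- DbContext/factory registrations omitted
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpContextAccessor();
builder.Services.AddScoped<ITenantService, TenantService>();

var app = builder.Build();

app.UseMiddleware<TenantMiddleware>(); // resolve the tenant first
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.Run();
```

Placing `UseMiddleware<TenantMiddleware>()` first guarantees that `HttpContext.Items["TenantSchema"]` is populated before any tenant-scoped `DbContext` is created.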
// ITenantService.cs
public interface ITenantService
{
Task<Tenant?> GetBySubdomainAsync(string subdomain);
Task<Tenant?> GetBySlugAsync(string slug);
Task<string> CreateTenantAsync(CreateTenantRequest request);
}
// TenantService.cs
public class TenantService : ITenantService
{
private readonly IDbContextFactory<SharedDbContext> _dbFactory;
private readonly ILogger<TenantService> _logger;
public TenantService(IDbContextFactory<SharedDbContext> dbFactory, ILogger<TenantService> logger)
{
_dbFactory = dbFactory;
_logger = logger;
}
public async Task<Tenant?> GetBySubdomainAsync(string subdomain)
{
await using var db = await _dbFactory.CreateDbContextAsync();
return await db.Tenants
.AsNoTracking()
.FirstOrDefaultAsync(t => t.Subdomain == subdomain);
}
public async Task<string> CreateTenantAsync(CreateTenantRequest request)
{
var schemaName = $"tenant_{request.Slug}";
await using var db = await _dbFactory.CreateDbContextAsync();
// 1. Create tenant record
var tenant = new Tenant
{
Slug = request.Slug,
Name = request.Name,
Subdomain = request.Subdomain,
SchemaName = schemaName,
PlanId = request.PlanId,
Status = "active"
};
db.Tenants.Add(tenant);
await db.SaveChangesAsync();
// 2. Create schema (raw SQL). Identifiers cannot be parameterized, so the
// slug must be validated against a strict whitelist before reaching this point.
// Note: steps 1-3 are not atomic; wrap them in a transaction or compensate on failure.
await db.Database.ExecuteSqlRawAsync($"CREATE SCHEMA {schemaName}");
// 3. Run migrations on new schema
await RunMigrationsAsync(schemaName);
_logger.LogInformation("Created tenant: {Slug} with schema {Schema}", request.Slug, schemaName);
return tenant.Id.ToString();
}
private async Task RunMigrationsAsync(string schemaName)
{
// Placeholder: applies the full tenant schema creation script
// (tables, indexes, seed data) to the new schema
await Task.CompletedTask;
}
}
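Because `CreateTenantAsync` interpolates the slug into a `CREATE SCHEMA` statement, and identifiers cannot be parameterized, the slug must be constrained to a safe character set first. A minimal guard (the exact rules are an assumption; the BRD does not define a slug format):

```csharp
using System.Text.RegularExpressions;

public static class SlugValidator
{
    // Lowercase letters, digits, underscores; must start with a letter; the length
    // cap keeps "tenant_" + slug under PostgreSQL's 63-byte identifier limit.
    private static readonly Regex SlugPattern =
        new(@"^[a-z][a-z0-9_]{1,47}$", RegexOptions.Compiled);

    public static bool IsValid(string? slug) =>
        slug is not null && SlugPattern.IsMatch(slug);
}
```

With this guard, `CreateTenantAsync` can reject unsafe slugs before a schema name is ever constructed from them.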
DbContext with Dynamic Schema
// PosDbContext.cs
public class PosDbContext : DbContext
{
private readonly string _schemaName;
public PosDbContext(DbContextOptions<PosDbContext> options, IHttpContextAccessor httpContextAccessor)
: base(options)
{
// Get schema from HttpContext (set by TenantMiddleware)
_schemaName = httpContextAccessor.HttpContext?.Items["TenantSchema"]?.ToString()
?? "tenant_default";
}
public DbSet<Product> Products => Set<Product>();
public DbSet<Sale> Sales => Set<Sale>();
public DbSet<Customer> Customers => Set<Customer>();
public DbSet<Employee> Employees => Set<Employee>();
public DbSet<Location> Locations => Set<Location>();
// ... other DbSets
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Set default schema for all entities.
// Caveat: EF Core caches the compiled model per context type, so varying the
// schema per tenant also requires a custom IModelCacheKeyFactory keyed by schema.
modelBuilder.HasDefaultSchema(_schemaName);
// Apply entity configurations
modelBuilder.ApplyConfigurationsFromAssembly(typeof(PosDbContext).Assembly);
}
}
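One caveat with the per-request schema above: EF Core caches the compiled model per context type, so the first tenant's schema would silently be reused for every tenant unless the model cache is keyed by schema. A sketch of the required `IModelCacheKeyFactory`, assuming `PosDbContext` exposes a `SchemaName` property (not shown in the original):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;

// Register with:
//   optionsBuilder.ReplaceService<IModelCacheKeyFactory, TenantModelCacheKeyFactory>();
public class TenantModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create(DbContext context, bool designTime) =>
        context is PosDbContext pos
            ? (context.GetType(), pos.SchemaName, designTime) // one cached model per tenant schema
            : (object)(context.GetType(), designTime);
}
```

The tuple acts as the cache key, so tenants sharing a schema reuse one compiled model while distinct schemas get their own.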
Connection String with search_path
// TenantDbContextFactory.cs
public class TenantDbContextFactory : IDbContextFactory<PosDbContext>
{
private readonly IConfiguration _config;
private readonly IHttpContextAccessor _httpContextAccessor;
public TenantDbContextFactory(IConfiguration config, IHttpContextAccessor httpContextAccessor)
{
_config = config;
_httpContextAccessor = httpContextAccessor;
}
public PosDbContext CreateDbContext()
{
var schemaName = _httpContextAccessor.HttpContext?.Items["TenantSchema"]?.ToString()
?? throw new InvalidOperationException("No tenant context");
var baseConnectionString = _config.GetConnectionString("DefaultConnection");
// Append search_path to connection string
var connectionString = $"{baseConnectionString};Search Path={schemaName},shared";
var optionsBuilder = new DbContextOptionsBuilder<PosDbContext>();
optionsBuilder.UseNpgsql(connectionString);
// PosDbContext also resolves the schema via IHttpContextAccessor (see above)
return new PosDbContext(optionsBuilder.Options, _httpContextAccessor);
}
}
L.10A.4C Tenant Provisioning
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
New Tenant Signup Flow
======================
[Admin Portal]                       [API]                            [Database]
      |                                |                                  |
      | 1. POST /tenants               |                                  |
      |    { name, slug, plan }        |                                  |
      |------------------------------->|                                  |
      |                                |                                  |
      |                                | 2. Validate slug uniqueness      |
      |                                |--------------------------------->|
      |                                |                                  |
      |                                | 3. Insert into shared.tenants    |
      |                                |--------------------------------->|
      |                                |                                  |
      |                                | 4. CREATE SCHEMA tenant_{slug}   |
      |                                |--------------------------------->|
      |                                |                                  |
      |                                | 5. Run schema migrations         |
      |                                |    (create all tables)           |
      |                                |--------------------------------->|
      |                                |                                  |
      |                                | 6. Seed default data             |
      |                                |    (roles, permissions)          |
      |                                |--------------------------------->|
      |                                |                                  |
      |                                | 7. Create admin user             |
      |                                |--------------------------------->|
      |                                |                                  |
      | 8. Return tenant details       |                                  |
      |    { id, subdomain, status }   |                                  |
      |<-------------------------------|                                  |
      |                                |                                  |
      | 9. Redirect to tenant portal   |                                  |
      |    nexus.pos-platform.com      |                                  |
      |                                |                                  |
L.10A.4D Schema Migration Strategy
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
Applying Migrations to All Tenants
// TenantMigrationService.cs
public class TenantMigrationService
{
private readonly SharedDbContext _sharedDb;
private readonly ILogger<TenantMigrationService> _logger;
public TenantMigrationService(SharedDbContext sharedDb, ILogger<TenantMigrationService> logger)
{
_sharedDb = sharedDb;
_logger = logger;
}
public async Task ApplyMigrationToAllTenantsAsync(string migrationScript)
{
var tenants = await _sharedDb.Tenants.ToListAsync();
foreach (var tenant in tenants)
{
try
{
_logger.LogInformation("Applying migration to {Schema}", tenant.SchemaName);
// Schema names come from our own registry but are still interpolated into
// raw SQL; keep shared.tenants.schema_name restricted to safe identifiers.
await _sharedDb.Database.ExecuteSqlRawAsync(
$"SET search_path TO {tenant.SchemaName}; {migrationScript}"
);
_logger.LogInformation("Migration complete for {Schema}", tenant.SchemaName);
}
catch (Exception ex)
{
_logger.LogError(ex, "Migration failed for {Schema}", tenant.SchemaName);
// Continue with other tenants or abort based on policy
}
}
}
}
Migration Script Example
-- Migration: Add loyalty_tier to customers
-- File: 2025-01-15_add_loyalty_tier.sql
DO $$
DECLARE
tenant_schema TEXT;
BEGIN
FOR tenant_schema IN
SELECT schema_name FROM shared.tenants WHERE status = 'active'
LOOP
EXECUTE format('ALTER TABLE %I.customers ADD COLUMN IF NOT EXISTS loyalty_tier VARCHAR(20) DEFAULT ''bronze''', tenant_schema);
END LOOP;
END $$;
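Because the service above continues past per-tenant failures, it is worth verifying afterwards that every active schema actually received the change. An illustrative catalog check, matching the loyalty_tier example migration:

```sql
-- Tenant schemas still missing the loyalty_tier column
SELECT t.schema_name
FROM shared.tenants t
WHERE t.status = 'active'
  AND NOT EXISTS (
      SELECT 1
      FROM information_schema.columns c
      WHERE c.table_schema = t.schema_name
        AND c.table_name   = 'customers'
        AND c.column_name  = 'loyalty_tier'
  );
```

An empty result set means the migration landed everywhere; any rows returned identify schemas to re-run.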
Shared Schema SQL Reference
-- Schema: shared
-- Tenant Registry
CREATE TABLE shared.tenants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
slug VARCHAR(50) UNIQUE NOT NULL, -- 'nexus', 'acme'
name VARCHAR(255) NOT NULL, -- 'Nexus Clothing'
subdomain VARCHAR(100) UNIQUE NOT NULL, -- 'nexus.pos-platform.com'
schema_name VARCHAR(100) NOT NULL, -- 'tenant_nexus'
plan_id UUID REFERENCES shared.subscription_plans(id),
status VARCHAR(20) DEFAULT 'active', -- active, suspended, trial
trial_ends_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- Subscription Plans
CREATE TABLE shared.subscription_plans (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(100) NOT NULL, -- 'Starter', 'Professional'
code VARCHAR(50) UNIQUE NOT NULL, -- 'starter', 'pro', 'enterprise'
price_monthly DECIMAL(10,2),
price_yearly DECIMAL(10,2),
max_locations INTEGER DEFAULT 1,
max_registers INTEGER DEFAULT 2,
max_employees INTEGER DEFAULT 5,
max_products INTEGER DEFAULT 1000,
features JSONB DEFAULT '{}', -- Feature flags
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Feature Flags
CREATE TABLE shared.feature_flags (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
key VARCHAR(100) UNIQUE NOT NULL, -- 'loyalty_program'
name VARCHAR(255) NOT NULL,
description TEXT,
default_enabled BOOLEAN DEFAULT FALSE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Insert default plans
INSERT INTO shared.subscription_plans (name, code, price_monthly, max_locations, max_registers, max_employees, max_products) VALUES
('Starter', 'starter', 49.00, 1, 2, 5, 1000),
('Professional', 'pro', 149.00, 3, 10, 25, 10000),
('Enterprise', 'enterprise', 499.00, -1, -1, -1, -1); -- -1 = unlimited
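These plan limits are enforced at the application layer; an illustrative lookup of a tenant's effective quota (-1 meaning unlimited, per the seed data above):

```sql
-- Effective product limit for one tenant (-1 = unlimited)
SELECT p.max_products
FROM shared.tenants t
JOIN shared.subscription_plans p ON p.id = t.plan_id
WHERE t.slug = 'nexus';
```

The application then allows an insert when `max_products = -1` or the tenant's current product count (taken from that tenant's own schema) is below the limit.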
L.10A.5 Commission Reversal Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-005 |
| Context | Need fair commission adjustment when sales are voided or items are returned |
| Decision | Proportional Reversal on returns, Full Reversal on voids |
| Alternatives Considered | 1) Full reversal always, 2) Proportional (selected), 3) No reversal |
| Rationale | Fair to employees; maintains incentive alignment; distinguishes mistakes (voids) from returns |
| Reference | BRD-v12 §1.8 |
Commission Reversal Rules:
┌─────────────────────────────────────────────────────────────┐
│                  COMMISSION REVERSAL LOGIC                  │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ VOID (Same day, before drawer close):                       │
│ ├── Reversal: 100% (full)                                   │
│ ├── Rationale: Mistake correction, sale didn't happen       │
│ └── Example: $6 commission → reverse $6                     │
│                                                             │
│ RETURN (After sale completed):                              │
│ ├── Reversal: Proportional to returned value                │
│ ├── Formula: Original Commission × (Returned / Original)    │
│ └── Example:                                                │
│       Sale: $120, Commission: $6 (5%)                       │
│       Return: $80 of items                                  │
│       Reversal: $6 × ($80/$120) = $4.00                     │
│       Net Commission: $6 - $4 = $2.00                       │
│                                                             │
└─────────────────────────────────────────────────────────────┘
Configuration:
commissions:
default_rate_percent: 2.0
category_rates:
electronics: 3.0
services: 5.0
# Reversal rules
reverse_on_void: true
void_reversal_method: "full" # 100%
reduce_on_return: true
return_reversal_method: "proportional" # Based on value
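The reversal rules above reduce to one line of arithmetic each; a sketch matching the worked example ($6 commission, $80 of a $120 sale returned):

```csharp
using System;

public static class CommissionReversal
{
    // RETURN: Original Commission × (Returned / Original), rounded to cents
    public static decimal ForReturn(decimal originalCommission, decimal returnedAmount, decimal originalSale) =>
        Math.Round(originalCommission * returnedAmount / originalSale, 2, MidpointRounding.AwayFromZero);

    // VOID: full (100%) reversal
    public static decimal ForVoid(decimal originalCommission) => originalCommission;
}
```

`ForReturn(6.00m, 80m, 120m)` yields 4.00, leaving a net commission of $2.00 as in the box above; multiplying before dividing avoids losing precision in the ratio.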
L.10A.6 Geographic Expansion Strategy
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-006 |
| Context | Initial deployment in Virginia with planned expansion to other US states and international |
| Decision | Virginia-First with modular jurisdiction architecture |
| Phases | 1) Virginia (Day 1), 2) US expansion (Year 2), 3) International (Year 3+) |
| Reference | BRD-v12 §1.17.3 |
Expansion Strategy:
┌─────────────────────────────────────────────────────────────┐
│                GEOGRAPHIC EXPANSION ROADMAP                 │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ PHASE 1: Virginia (Day 1)                                   │
│ ├── Tax: State 4.3% + Local 1% + Regional 0.7%              │
│ ├── Gift Cards: 5-year minimum expiry allowed               │
│ └── Compliance: Virginia Consumer Protection Act            │
│                                                             │
│ PHASE 2: US Expansion                                       │
│ ├── California: No gift card expiry, $10 cash-out rule      │
│ ├── Oregon: No sales tax                                    │
│ ├── New York: Complex local taxes                           │
│ └── Florida: No income tax, tourism taxes                   │
│                                                             │
│ PHASE 3: International                                      │
│ ├── Canada: GST/HST/PST provincial variations               │
│ ├── EU: VAT with reverse charge for B2B                     │
│ └── UK: Post-Brexit VAT rules                               │
│                                                             │
└─────────────────────────────────────────────────────────────┘
Design Principle: Always design for the most restrictive jurisdiction (California for US), then enable features where permitted.
Gift Card Jurisdiction Matrix:
| Jurisdiction | Expiry Allowed | Inactivity Fee | Cash-Out Required |
|---|---|---|---|
| Virginia | Yes (5yr min) | Yes (after 12mo) | No |
| California | No | No | Yes ($10 threshold) |
| New York | No | No | No |
| Default | No | No | No |
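The matrix above can be encoded as data with the restrictive Default row as the fallback, per the design principle stated earlier. A sketch (rule values copied from the matrix; type and key names are assumptions):

```csharp
using System.Collections.Generic;

public record GiftCardRules(bool ExpiryAllowed, bool InactivityFeeAllowed, bool CashOutRequired);

public static class GiftCardJurisdictions
{
    private static readonly Dictionary<string, GiftCardRules> Rules = new()
    {
        ["VA"] = new(ExpiryAllowed: true,  InactivityFeeAllowed: true,  CashOutRequired: false), // 5yr min / after 12mo
        ["CA"] = new(ExpiryAllowed: false, InactivityFeeAllowed: false, CashOutRequired: true),  // $10 threshold
        ["NY"] = new(ExpiryAllowed: false, InactivityFeeAllowed: false, CashOutRequired: false),
    };

    // Unknown jurisdictions fall back to the matrix's Default row
    private static readonly GiftCardRules Default = new(false, false, false);

    public static GiftCardRules For(string jurisdiction) =>
        Rules.TryGetValue(jurisdiction, out var rules) ? rules : Default;
}
```

Keeping the rules as data (rather than branching logic) makes Phase 2/3 jurisdictions an additive change.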
L.10A.7 Decision Dependency Graph
┌─────────────────────────────────────────────────────────────┐
│             ARCHITECTURE DECISION DEPENDENCIES              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│                   ┌──────────────────┐                      │
│                   │ Geographic Scope │                      │
│                   │  (ADR-BRD-006)   │                      │
│                   └────────┬─────────┘                      │
│                            │                                │
│            ┌───────────────┼───────────────┐                │
│            ▼               ▼               ▼                │
│     ┌─────────────┐ ┌─────────────┐ ┌─────────────┐         │
│     │ Tax Engine  │ │  Gift Card  │ │ Compliance  │         │
│     │(ADR-BRD-002)│ │    Rules    │ │    Rules    │         │
│     └──────┬──────┘ └─────────────┘ └─────────────┘         │
│            │                                                │
│            ▼                                                │
│     ┌─────────────┐                                         │
│     │   Offline   │─────────────────────┐                   │
│     │(ADR-BRD-001)│                     │                   │
│     └──────┬──────┘                     │                   │
│            │                            ▼                   │
│            ▼                    ┌─────────────┐             │
│     ┌─────────────┐             │   Payment   │             │
│     │   Multi-    │             │(ADR-BRD-003)│             │
│     │   Tenancy   │             └─────────────┘             │
│     │(ADR-BRD-004)│                                         │
│     └─────────────┘                                         │
│                                                             │
└─────────────────────────────────────────────────────────────┘
L.11 Style Decision Summary
Final Selection
+------------------------------------------------------------------+
|                  ARCHITECTURE DECISION SUMMARY                   |
|                     (v2.0 - Panel Reviewed)                      |
+------------------------------------------------------------------+
|                                                                  |
|  QUESTION: What is the primary architecture style?               |
|  ANSWER: Event-Driven Modular Monolith                           |
|                                                                  |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                      SELECTED PATTERNS                      │ |
|  ├─────────────────────────────────────────────────────────────┤ |
|  │ ✅ Modular Monolith     → Central API                       │ |
|  │ ✅ Microkernel (Plugin) → POS Client                        │ |
|  │ ✅ Event-Driven         → PostgreSQL Events (v1.0)          │ |
|  │                           Kafka (v2.0, when justified)      │ |
|  │ ✅ Event Sourcing       → Sales (Full) + Inventory (Audit)  │ |
|  │                           + Integrations (Audit-trail)      │ |
|  │ ✅ CQRS                 → Sales Module (Read/Write split)   │ |
|  │ ✅ Offline-First        → POS Client (SQLite)               │ |
|  │ ✅ Row-Level with RLS   → Multi-Tenant Isolation            │ |
|  │ ✅ Integration Gateway  → Module 6 (Extractable)            │ |
|  │ ✅ Circuit Breaker      → External API Resilience           │ |
|  │ ✅ Transactional Outbox → Guaranteed Event Delivery         │ |
|  │ ✅ Provider Abstraction → IIntegrationProvider Interface    │ |
|  │ ✅ Credential Vault     → HashiCorp Vault                   │ |
|  └─────────────────────────────────────────────────────────────┘ |
|                                                                  |
|  ┌─────────────────────────────────────────────────────────────┐ |
|  │                      REJECTED PATTERNS                      │ |
|  ├─────────────────────────────────────────────────────────────┤ |
|  │ ❌ Microservices        → Too complex for current scale     │ |
|  │ ❌ Space-Based          → Too complex for financial audit   │ |
|  │ ❌ Schema-Per-Tenant    → Replaced by Row-Level with RLS    │ |
|  │ ❌ Kafka (v1.0)         → Deferred to v2.0                  │ |
|  └─────────────────────────────────────────────────────────────┘ |
|                                                                  |
+------------------------------------------------------------------+
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2026-01-24 |
| Updated | 2026-02-25 |
| Source | Architecture Styles Worksheet v2.0, BRD-v18.0, Chapters 02-06 |
| Author | Claude Code |
| Reviewer | Expert Panel (Marcus Chen, Sarah Rodriguez, James O’Brien, Priya Patel) |
| Status | Active |
| Part | II - Architecture |
| Chapter | 04 of 32 |
| Previous | Chapter 12 v2.0.0 |
Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-01-24 | Initial document |
| 1.1.0 | 2026-01-26 | Added Section L.10A (Key Architecture Decisions from BRD-v12) with 6 ADRs |
| 2.0.0 | 2026-02-19 | Expert panel review (6.50/10): Replaced Schema-Per-Tenant with Row-Level RLS; deferred Kafka to v2.0 (PostgreSQL Events for v1.0); added Extractable Integration Gateway for Module 6; added L.1.9 Integration Patterns (Circuit Breaker, Transactional Outbox, Provider Abstraction, ACL, Saga); added L.4A CQRS/ES Scope per module; added L.4B Integration Architecture Patterns with diagrams; replaced SonarQube-only security with 6-Gate Security Test Pyramid; added HashiCorp Vault credential architecture; updated Style Evaluation Matrix scores; added integration-specific risks and mitigations |
| 3.0.0 | 2026-02-22 | Consolidated implementation references from Chapters 05-09: Added L.4A.1-7 (Event Store schema, Kafka architecture, Schema Registry, DLQ pattern, Domain Events catalog, Projections, Temporal Queries, Snapshots from Ch 08); Added L.9A-9B (System Architecture diagrams, Data Flow patterns from Ch 05); Added L.9C (Domain Model bounded contexts, aggregates, ER diagram from Ch 07); Added L.10A.1A-1H (POS Client architecture, SQLite schema, Sync Queue, Conflict Resolution, Sync Processor, Sale Creation Flow, Connection Monitor, CRDTs from Ch 09); Added L.10A.4A-4D (Multi-Tenancy strategies comparison, Tenant Middleware, Provisioning workflow, Migration strategy from Ch 06) |
Next Chapter: Chapter 05: Architecture Components (BRD v20.0)
This chapter is part of the POS Blueprint Book. All content is self-contained.