Chapter 18: Development Environment Setup

18.1 Overview

This chapter provides complete, step-by-step instructions for setting up your development environment for the POS platform. By the end, you will have a fully functional local development stack.


18.2 Prerequisites

Required Software

| Software | Version | Purpose |
|----------|---------|---------|
| .NET SDK | 8.0+ | Backend development |
| PostgreSQL | 16+ | Primary database |
| Docker | 24.0+ | Containerization |
| Docker Compose | 2.20+ | Multi-container orchestration |
| Node.js | 20 LTS | Frontend tooling |
| Git | 2.40+ | Version control |

Hardware Requirements

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| RAM | 8 GB | 16 GB |
| Storage | 20 GB free | 50 GB SSD |
| CPU | 4 cores | 8 cores |
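
On Linux you can compare the machine against these minimums from the shell. This sketch relies on `nproc` and `/proc/meminfo`, so it is Linux-specific; macOS users can query `sysctl` instead:

```shell
# Compare this machine against the minimum requirements
# (Linux-specific: uses nproc and /proc/meminfo).
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))

echo "CPU cores: $cores (minimum 4)"
echo "RAM:       ${mem_gb} GB (minimum 8)"

[ "$cores" -ge 4 ] || echo "WARNING: fewer than 4 CPU cores"
[ "$mem_gb" -ge 8 ] || echo "WARNING: less than 8 GB RAM"
```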

18.3 Project Structure

/volume1/docker/pos-platform/
├── CLAUDE.md                          # AI assistant guidance
├── README.md                          # Quick start guide
├── .gitignore                         # Git ignore patterns
├── .env.example                       # Environment template
├── pos-platform.sln                   # .NET solution file
│
├── docker/
│   ├── docker-compose.yml             # Development stack
│   ├── docker-compose.prod.yml        # Production overrides
│   ├── Dockerfile                     # API container build
│   ├── Dockerfile.web                 # Web container build
│   └── .env                           # Docker environment (gitignored)
│
├── src/
│   ├── PosPlatform.Core/              # Domain layer
│   │   ├── Entities/                  # Domain entities
│   │   ├── ValueObjects/              # Immutable value objects
│   │   ├── Events/                    # Domain events
│   │   ├── Exceptions/                # Domain exceptions
│   │   ├── Interfaces/                # Repository interfaces
│   │   └── Services/                  # Domain services
│   │
│   ├── PosPlatform.Infrastructure/    # Infrastructure layer
│   │   ├── Data/                      # EF Core contexts
│   │   ├── Repositories/              # Repository implementations
│   │   ├── Services/                  # External service integrations
│   │   ├── Messaging/                 # Event bus, queues
│   │   └── MultiTenant/               # Tenant resolution
│   │
│   ├── PosPlatform.Api/               # API layer
│   │   ├── Controllers/               # REST endpoints
│   │   ├── Middleware/                # Request pipeline
│   │   ├── Filters/                   # Action filters
│   │   ├── DTOs/                      # Data transfer objects
│   │   └── Program.cs                 # Application entry
│   │
│   └── PosPlatform.Web/               # Blazor frontend
│       ├── Components/                # Blazor components
│       ├── Pages/                     # Routable pages
│       ├── Services/                  # Frontend services
│       └── wwwroot/                   # Static assets
│
├── tests/
│   ├── PosPlatform.Core.Tests/        # Unit tests
│   ├── PosPlatform.Api.Tests/         # API integration tests
│   └── PosPlatform.E2E.Tests/         # End-to-end tests
│
└── database/
    ├── migrations/                    # EF Core migrations
    ├── seed/                          # Seed data scripts
    └── init.sql                       # Database initialization

18.4 Step 1: Install Prerequisites

Linux (Ubuntu/Debian)

# Update package manager
sudo apt update && sudo apt upgrade -y

# Install .NET 8 SDK
wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt update
sudo apt install -y dotnet-sdk-8.0

# Verify .NET installation
dotnet --version

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for group changes

# Verify Docker
docker --version
docker compose version

# Install Node.js 20 LTS
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Verify Node.js
node --version
npm --version

# Install Git
sudo apt install -y git
git --version

macOS

# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install .NET 8 SDK
brew install dotnet-sdk

# Install Docker Desktop
brew install --cask docker

# Install Node.js
brew install node@20

# Install Git
brew install git

Windows

# Install with winget (Windows Package Manager)
winget install Microsoft.DotNet.SDK.8
winget install Docker.DockerDesktop
winget install OpenJS.NodeJS.LTS
winget install Git.Git

# Alternatively, download installers from:
# - https://dotnet.microsoft.com/download
# - https://docker.com/products/docker-desktop
# - https://nodejs.org/
# - https://git-scm.com/

18.5 Step 2: Create Project Structure

Initialize Repository

# Create project directory
mkdir -p /volume1/docker/pos-platform
cd /volume1/docker/pos-platform

# Initialize Git repository
git init
git branch -M main

# Create initial structure
mkdir -p docker src tests database/migrations database/seed

Create .gitignore

cat > .gitignore << 'EOF'
# Build outputs
bin/
obj/
publish/

# IDE
.vs/
.vscode/
.idea/
*.user
*.suo

# Environment
.env
*.env.local
appsettings.*.json
!appsettings.json
!appsettings.Development.json

# Logs
logs/
*.log

# Docker
docker/.env

# Node
node_modules/
dist/

# Database
*.db
*.sqlite

# OS
.DS_Store
Thumbs.db

# Secrets
*.pem
*.key
secrets/
EOF
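
Before the first commit, it is worth confirming that the sensitive patterns actually match. A sketch using `git check-ignore` in a throwaway repository (the paths here are illustrative):

```shell
# Sanity-check .gitignore patterns in a throwaway repository
# before committing anything sensitive.
tmp=$(mktemp -d)
(
    cd "$tmp"
    git init -q .
    printf '%s\n' '.env' 'docker/.env' 'bin/' > .gitignore
    mkdir -p docker bin
    touch docker/.env bin/app.dll README.md

    # git check-ignore exits 0 when the path IS ignored
    git check-ignore -q docker/.env && echo "docker/.env: ignored"
    git check-ignore -q bin/app.dll && echo "bin/app.dll: ignored"
    git check-ignore -q README.md   || echo "README.md: not ignored"
)
rm -rf "$tmp"
```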

Create Solution File

# Create .NET solution
dotnet new sln -n pos-platform

# Create projects
dotnet new classlib -n PosPlatform.Core -o src/PosPlatform.Core
dotnet new classlib -n PosPlatform.Infrastructure -o src/PosPlatform.Infrastructure
dotnet new webapi -n PosPlatform.Api -o src/PosPlatform.Api
# Note: the .NET 8 SDK replaced the "blazorserver" template with the unified
# "blazor" template; on older SDKs, "dotnet new blazorserver" works instead
dotnet new blazor -n PosPlatform.Web -o src/PosPlatform.Web

# Create test projects
dotnet new xunit -n PosPlatform.Core.Tests -o tests/PosPlatform.Core.Tests
dotnet new xunit -n PosPlatform.Api.Tests -o tests/PosPlatform.Api.Tests

# Add projects to solution
dotnet sln add src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet sln add src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj
dotnet sln add src/PosPlatform.Api/PosPlatform.Api.csproj
dotnet sln add src/PosPlatform.Web/PosPlatform.Web.csproj
dotnet sln add tests/PosPlatform.Core.Tests/PosPlatform.Core.Tests.csproj
dotnet sln add tests/PosPlatform.Api.Tests/PosPlatform.Api.Tests.csproj

# Add project references
dotnet add src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add src/PosPlatform.Api/PosPlatform.Api.csproj reference src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj
dotnet add src/PosPlatform.Api/PosPlatform.Api.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add src/PosPlatform.Web/PosPlatform.Web.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add tests/PosPlatform.Core.Tests/PosPlatform.Core.Tests.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add tests/PosPlatform.Api.Tests/PosPlatform.Api.Tests.csproj reference src/PosPlatform.Api/PosPlatform.Api.csproj

18.6 Step 3: Docker Configuration

docker-compose.yml

# /volume1/docker/pos-platform/docker/docker-compose.yml
# The top-level "version" key is obsolete with Docker Compose v2;
# kept here only for compatibility with older Compose releases.
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    container_name: pos-postgres
    environment:
      POSTGRES_USER: ${DB_USER:-pos_admin}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-PosDevPass2025!}
      POSTGRES_DB: ${DB_NAME:-pos_platform}
    ports:
      - "5434:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ../database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-pos_admin} -d ${DB_NAME:-pos_platform}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - pos-network

  # Redis for Caching and Sessions
  redis:
    image: redis:7-alpine
    container_name: pos-redis
    ports:
      - "6380:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - pos-network

  # RabbitMQ for Event Bus
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: pos-rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-pos_user}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS:-PosRabbit2025!}
    ports:
      - "5673:5672"   # AMQP
      - "15673:15672" # Management UI
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "check_running"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      - pos-network

  # POS API (Development)
  api:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    container_name: pos-api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Host=postgres;Port=5432;Database=${DB_NAME:-pos_platform};Username=${DB_USER:-pos_admin};Password=${DB_PASSWORD:-PosDevPass2025!}
      - Redis__ConnectionString=redis:6379
      - RabbitMQ__Host=rabbitmq
      - RabbitMQ__Username=${RABBITMQ_USER:-pos_user}
      - RabbitMQ__Password=${RABBITMQ_PASS:-PosRabbit2025!}
    ports:
      - "5100:8080"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
    volumes:
      - ../src:/app/src:ro
      - api_logs:/app/logs
    networks:
      - pos-network

  # POS Web (Development)
  web:
    build:
      context: ..
      dockerfile: docker/Dockerfile.web
    container_name: pos-web
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:8080
      - ApiBaseUrl=http://api:8080
    ports:
      - "5101:8080"
    depends_on:
      - api
    networks:
      - pos-network

volumes:
  postgres_data:
  redis_data:
  rabbitmq_data:
  api_logs:

networks:
  pos-network:
    driver: bridge
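
On a cold start the database can take several seconds to pass its health check, so scripts that run right after `docker compose up -d` often need to wait. A minimal retry helper (the `wait_for` name and the commented `pg_isready` usage are illustrative, not part of the stack):

```shell
# wait_for <attempts> <command...>: retry a command once per second
# until it succeeds or the attempt budget runs out.
wait_for() {
    attempts=$1
    shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "gave up waiting for: $*" >&2
    return 1
}

# Example: block until PostgreSQL accepts connections
# wait_for 30 docker exec pos-postgres pg_isready -U pos_admin -d pos_platform
```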

Dockerfile for API

# /volume1/docker/pos-platform/docker/Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src

# Copy solution and project files
COPY *.sln ./
COPY src/PosPlatform.Core/*.csproj ./src/PosPlatform.Core/
COPY src/PosPlatform.Infrastructure/*.csproj ./src/PosPlatform.Infrastructure/
COPY src/PosPlatform.Api/*.csproj ./src/PosPlatform.Api/

# Restore dependencies
RUN dotnet restore src/PosPlatform.Api/PosPlatform.Api.csproj

# Copy source code
COPY src/ ./src/

# Build and publish
WORKDIR /src/src/PosPlatform.Api
RUN dotnet publish -c Release -o /app/publish --no-restore

# Runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
WORKDIR /app

# Install culture support
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false

# Copy published app
COPY --from=build /app/publish .

# Create non-root user
RUN adduser -D -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 8080
ENTRYPOINT ["dotnet", "PosPlatform.Api.dll"]

Dockerfile for Web

# /volume1/docker/pos-platform/docker/Dockerfile.web
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src

# Copy solution and project files
COPY *.sln ./
COPY src/PosPlatform.Core/*.csproj ./src/PosPlatform.Core/
COPY src/PosPlatform.Web/*.csproj ./src/PosPlatform.Web/

# Restore dependencies
RUN dotnet restore src/PosPlatform.Web/PosPlatform.Web.csproj

# Copy source code
COPY src/ ./src/

# Build and publish
WORKDIR /src/src/PosPlatform.Web
RUN dotnet publish -c Release -o /app/publish --no-restore

# Runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
WORKDIR /app

RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false

COPY --from=build /app/publish .

RUN adduser -D -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 8080
ENTRYPOINT ["dotnet", "PosPlatform.Web.dll"]

Environment Template

# /volume1/docker/pos-platform/docker/.env.example
# Database
DB_USER=pos_admin
DB_PASSWORD=PosDevPass2025!
DB_NAME=pos_platform

# RabbitMQ
RABBITMQ_USER=pos_user
RABBITMQ_PASS=PosRabbit2025!

# API Keys (development)
JWT_SECRET=dev-jwt-secret-key-min-32-characters-long
ENCRYPTION_KEY=dev-encryption-key-32-chars-long
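
These defaults are placeholders; even locally you may prefer random values. One way to generate them, assuming a POSIX shell with `base64` available (the `rand` helper is illustrative):

```shell
# Generate random secrets for the local .env (values differ every run).
# rand <n>: emit n random alphanumeric characters.
rand() { head -c 64 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | cut -c1-"$1"; }

DB_PASSWORD=$(rand 24)
JWT_SECRET=$(rand 48)       # comfortably over the 32-char minimum
ENCRYPTION_KEY=$(rand 32)   # exactly 32 characters

printf 'DB_PASSWORD=%s\nJWT_SECRET=%s\nENCRYPTION_KEY=%s\n' \
    "$DB_PASSWORD" "$JWT_SECRET" "$ENCRYPTION_KEY"
```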

18.7 Step 4: Database Initialization

init.sql

-- /volume1/docker/pos-platform/database/init.sql

-- Create extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";

-- Create shared schema for platform-wide data
CREATE SCHEMA IF NOT EXISTS shared;

-- Tenants table (platform-wide)
CREATE TABLE shared.tenants (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    code VARCHAR(10) NOT NULL UNIQUE,
    name VARCHAR(100) NOT NULL,
    domain VARCHAR(255),
    status VARCHAR(20) NOT NULL DEFAULT 'active',
    settings JSONB NOT NULL DEFAULT '{}',
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ
);

-- Platform users (super admins)
CREATE TABLE shared.platform_users (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email VARCHAR(255) NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL,
    full_name VARCHAR(100) NOT NULL,
    role VARCHAR(50) NOT NULL DEFAULT 'admin',
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Function to create a tenant schema.
-- tenant_code is interpolated with %s (not %I), so validate it strictly
-- to prevent SQL injection through the schema name.
CREATE OR REPLACE FUNCTION shared.create_tenant_schema(tenant_code VARCHAR)
RETURNS VOID AS $$
BEGIN
    IF tenant_code !~ '^[a-z][a-z0-9_]{0,20}$' THEN
        RAISE EXCEPTION 'Invalid tenant code: %', tenant_code;
    END IF;

    EXECUTE format('CREATE SCHEMA IF NOT EXISTS tenant_%s', tenant_code);

    -- Create tenant-specific tables
    EXECUTE format('
        CREATE TABLE tenant_%s.locations (
            id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
            code VARCHAR(10) NOT NULL UNIQUE,
            name VARCHAR(100) NOT NULL,
            address JSONB,
            is_active BOOLEAN DEFAULT true,
            created_at TIMESTAMPTZ DEFAULT NOW()
        )', tenant_code);

    EXECUTE format('
        CREATE TABLE tenant_%s.users (
            id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
            employee_id VARCHAR(20) UNIQUE,
            full_name VARCHAR(100) NOT NULL,
            email VARCHAR(255),
            pin_hash VARCHAR(255),
            role VARCHAR(50) NOT NULL,
            location_id UUID REFERENCES tenant_%s.locations(id),
            is_active BOOLEAN DEFAULT true,
            created_at TIMESTAMPTZ DEFAULT NOW()
        )', tenant_code, tenant_code);

    EXECUTE format('
        CREATE TABLE tenant_%s.products (
            id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
            sku VARCHAR(50) NOT NULL UNIQUE,
            name VARCHAR(255) NOT NULL,
            description TEXT,
            category_id UUID,
            base_price DECIMAL(10,2) NOT NULL,
            cost DECIMAL(10,2),
            is_active BOOLEAN DEFAULT true,
            created_at TIMESTAMPTZ DEFAULT NOW(),
            updated_at TIMESTAMPTZ
        )', tenant_code);
END;
$$ LANGUAGE plpgsql;

-- Insert default platform admin
INSERT INTO shared.platform_users (email, password_hash, full_name, role)
VALUES (
    'admin@posplatform.local',
    '$2a$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8/X4.vttYqBZq.kxVQ6', -- bcrypt of "admin123" (development only; change before any shared deployment)
    'Platform Administrator',
    'super_admin'
);

-- Insert demo tenant
INSERT INTO shared.tenants (code, name, domain, status, settings)
VALUES (
    'DEMO',
    'Demo Retail Store',
    'demo.posplatform.local',
    'active',
    '{"timezone": "America/New_York", "currency": "USD", "taxRate": 0.07}'
);

-- Create demo tenant schema
SELECT shared.create_tenant_schema('demo');

COMMENT ON SCHEMA shared IS 'Platform-wide shared data';

18.8 Step 5: IDE Setup

VS Code Configuration

// /volume1/docker/pos-platform/.vscode/settings.json
{
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "ms-dotnettools.csharp",
    "omnisharp.enableRoslynAnalyzers": true,
    "omnisharp.enableEditorConfigSupport": true,
    "dotnet.defaultSolution": "pos-platform.sln",
    "files.exclude": {
        "**/bin": true,
        "**/obj": true,
        "**/node_modules": true
    },
    "[csharp]": {
        "editor.defaultFormatter": "ms-dotnettools.csharp"
    }
}
// /volume1/docker/pos-platform/.vscode/launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch API",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build-api",
            "program": "${workspaceFolder}/src/PosPlatform.Api/bin/Debug/net8.0/PosPlatform.Api.dll",
            "args": [],
            "cwd": "${workspaceFolder}/src/PosPlatform.Api",
            "console": "internalConsole",
            "stopAtEntry": false,
            "env": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        },
        {
            "name": "Launch Web",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build-web",
            "program": "${workspaceFolder}/src/PosPlatform.Web/bin/Debug/net8.0/PosPlatform.Web.dll",
            "args": [],
            "cwd": "${workspaceFolder}/src/PosPlatform.Web",
            "console": "internalConsole",
            "stopAtEntry": false
        }
    ]
}
// /volume1/docker/pos-platform/.vscode/extensions.json
{
    "recommendations": [
        "ms-dotnettools.csharp",
        "ms-dotnettools.csdevkit",
        "ms-azuretools.vscode-docker",
        "eamodio.gitlens",
        "streetsidesoftware.code-spell-checker",
        "editorconfig.editorconfig",
        "humao.rest-client",
        "mtxr.sqltools",
        "mtxr.sqltools-driver-pg"
    ]
}

18.9 Step 6: Git Workflow

Branch Strategy

main                    # Production-ready code
  |
  +-- develop           # Integration branch
       |
       +-- feature/*    # New features
       +-- bugfix/*     # Bug fixes
       +-- hotfix/*     # Urgent production fixes
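
A typical feature cycle against this layout looks as follows; the sketch below replays it in a throwaway repository (branch and commit names are illustrative, and a real project would start from a clone rather than `git init`):

```shell
# Sketch of the branch flow in a disposable repository
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "Initial commit"

git checkout -qb develop                    # integration branch off main
git checkout -qb feature/receipt-printing   # feature branch off develop

# ...commit work, then push and open a PR back into develop.
# Hotfixes branch from main and merge into both main and develop.
git branch --list
```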

Initial Commit

cd /volume1/docker/pos-platform

# Stage all files
git add .

# Initial commit
git commit -m "Initial project structure with Docker development stack

- Created .NET 8 solution with 4 projects (Core, Infrastructure, Api, Web)
- Added docker-compose with PostgreSQL 16, Redis, RabbitMQ
- Configured multi-tenant database initialization
- Set up VS Code development environment

Generated with Claude Code"

# Create develop branch
git checkout -b develop

18.10 Quick Reference Commands

Start Development Stack

cd /volume1/docker/pos-platform/docker

# Copy environment file
cp .env.example .env

# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Check status
docker compose ps

Database Access

# Connect to PostgreSQL
docker exec -it pos-postgres psql -U pos_admin -d pos_platform

# List schemas
\dn

# List tables in shared schema
\dt shared.*

# List tables in tenant schema
\dt tenant_demo.*

Build and Run Locally

cd /volume1/docker/pos-platform

# Restore dependencies
dotnet restore

# Build solution
dotnet build

# Run API (from project directory)
cd src/PosPlatform.Api
dotnet run

# Run tests
cd /volume1/docker/pos-platform
dotnet test

Stop and Clean

cd /volume1/docker/pos-platform/docker

# Stop services
docker compose down

# Stop and remove volumes (WARNING: deletes data)
docker compose down -v

# Remove unused images
docker image prune -f

18.11 Verification Checklist

After completing setup, verify each component:

  • dotnet --version shows 8.0.x
  • docker compose ps shows all containers healthy
  • PostgreSQL accepts connections on port 5434
  • Redis responds to ping on port 6380
  • RabbitMQ management UI accessible at http://localhost:15673
  • Solution builds without errors: dotnet build
  • All tests pass: dotnet test
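
The checklist lends itself to scripting. A sketch of a PASS/FAIL helper (the `check` function is illustrative; the commented service checks assume the container names and credentials from docker-compose.yml):

```shell
# check <label> <command...>: run a command and report PASS/FAIL
check() {
    label=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS  $label"
    else
        echo "FAIL  $label"
    fi
}

check ".NET SDK installed" dotnet --version
check "Docker running"     docker info
# check "PostgreSQL reachable" docker exec pos-postgres pg_isready -U pos_admin
# check "Redis reachable"      docker exec pos-redis redis-cli ping
```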

18.12 Diagrams as Code Strategy

Overview

To keep the documented (“soft”) architecture in sync with the actual code and to enable rapid root-cause analysis, all architecture diagrams must be maintained as code alongside the source.

Primary Strategy

| Attribute | Selection |
|-----------|-----------|
| Approach | Diagrams as Code |
| Rationale | Prevent documentation drift; diagrams stay current |
| Storage | Git repository alongside source code |

Tooling Options

| Tool | Best For | Format |
|------|----------|--------|
| Structurizr | C4 Model, professional docs | DSL |
| Mermaid.js | Quick diagrams, GitHub-native | Markdown |
| PlantUML | Detailed UML, sequence diagrams | Text |

# Install Structurizr CLI
docker pull structurizr/cli

# Create workspace directory
mkdir -p /volume1/docker/pos-platform/docs/architecture
// /volume1/docker/pos-platform/docs/architecture/workspace.dsl

workspace "POS Platform" "Multi-tenant Point of Sale System" {

    model {
        // People
        cashier = person "Cashier" "Processes sales transactions"
        manager = person "Store Manager" "Manages inventory and reports"
        admin = person "Platform Admin" "Manages tenants and system"

        // External Systems
        shopify = softwareSystem "Shopify" "E-commerce platform" "External"
        paymentGateway = softwareSystem "Payment Gateway" "Stripe/Square" "External"

        // POS Platform
        posSystem = softwareSystem "POS Platform" "Multi-tenant retail POS" {
            posClient = container "POS Client" "Desktop/tablet app" ".NET MAUI" "Client"
            centralApi = container "Central API" "REST API" "ASP.NET Core" "API"
            webPortal = container "Web Portal" "Admin dashboard" "Blazor" "Web"
            database = container "Database" "PostgreSQL 16" "PostgreSQL" "Database"
            kafka = container "Event Streaming" "Apache Kafka" "Kafka" "Queue"
            redis = container "Cache" "Redis" "Redis" "Cache"
        }

        // Relationships
        cashier -> posClient "Uses"
        manager -> webPortal "Uses"
        admin -> webPortal "Manages tenants"

        posClient -> centralApi "API calls" "HTTPS"
        webPortal -> centralApi "API calls" "HTTPS"
        centralApi -> database "Reads/writes" "PostgreSQL"
        centralApi -> kafka "Publishes events"
        centralApi -> redis "Caches data"
        centralApi -> shopify "Syncs inventory" "REST"
        centralApi -> paymentGateway "Processes payments" "REST"
    }

    views {
        systemContext posSystem "SystemContext" {
            include *
            autoLayout
        }

        container posSystem "Containers" {
            include *
            autoLayout
        }

        theme default
    }
}

Generate Diagrams

# Export to PNG/SVG
docker run --rm -v $(pwd)/docs/architecture:/workspace structurizr/cli \
    export -workspace /workspace/workspace.dsl -format plantuml

# Or use Structurizr Lite for local preview
docker run -it --rm -p 8888:8080 \
    -v $(pwd)/docs/architecture:/usr/local/structurizr \
    structurizr/lite

Alternative: Mermaid.js

For simpler diagrams, use Mermaid directly in markdown:

<!-- /volume1/docker/pos-platform/docs/architecture/system-overview.md -->

# System Overview

```mermaid
graph TB
    subgraph Client["POS Client"]
        UI[UI Layer]
        SL[Service Layer]
        DB[(SQLite)]
    end

    subgraph Cloud["Cloud Infrastructure"]
        API[Central API]
        PG[(PostgreSQL)]
        K((Kafka))
    end

    UI --> SL
    SL --> DB
    SL --> API
    API --> PG
    API --> K
```

Claude Code Integration

Use Claude Code CLI to auto-generate diagram updates during refactoring:

```bash
# After code changes, regenerate diagrams
claude-code /architect-review --update-diagrams

# Or use the dev-team skill
/dev-team update architecture diagrams
```

Diagram Update Workflow

+------------------------------------------------------------------+
|                 DIAGRAM UPDATE WORKFLOW                           |
+------------------------------------------------------------------+
|                                                                   |
|  1. Developer changes code structure                              |
|     ↓                                                             |
|  2. Pre-commit hook or CI checks for diagram drift                |
|     ↓                                                             |
|  3. If drift detected, Claude Code suggests updates               |
|     ↓                                                             |
|  4. Developer reviews and commits updated diagrams                |
|     ↓                                                             |
|  5. CI generates PNG/SVG exports for documentation                |
|                                                                   |
+------------------------------------------------------------------+

18.13 Quality Assurance (QA) & Testing Strategy

Overview

To ensure end-to-end reliability for financial transactions, the platform implements a comprehensive testing strategy covering unit, integration, E2E, and load testing.

Testing Pyramid

                      /\
                     /  \
                    / E2E \      Cypress/Playwright (Few, Slow)
                   /      \
                  /--------\
                 /Integration\   API Tests (Some, Medium)
                /            \
               /--------------\
              /   Unit Tests   \  xUnit (Many, Fast)
             /                  \
            /--------------------\

Unit Testing

| Attribute | Selection |
|-----------|-----------|
| Framework | xUnit |
| Mocking | Moq |
| Assertions | FluentAssertions |
| Coverage Target | 80%+ for Core domain |

# Run unit tests
dotnet test tests/PosPlatform.Core.Tests

# With coverage report
dotnet test --collect:"XPlat Code Coverage"

Integration Testing

| Attribute | Selection |
|-----------|-----------|
| Framework | xUnit + WebApplicationFactory |
| Database | Testcontainers (PostgreSQL) |
| Scope | API endpoints, repository queries |

// tests/PosPlatform.Api.Tests/SalesControllerTests.cs

using System.Net;
using System.Net.Http.Json;
using FluentAssertions;
using Xunit;

public class SalesControllerTests : IClassFixture<PosApiFactory>
{
    private readonly HttpClient _client;

    public SalesControllerTests(PosApiFactory factory)
    {
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task CreateSale_ValidRequest_Returns201()
    {
        // Arrange
        var request = new CreateSaleRequest
        {
            LocationId = Guid.NewGuid(),
            LineItems = new[] { new LineItemDto { Sku = "TEST001", Quantity = 1 } }
        };

        // Act
        var response = await _client.PostAsJsonAsync("/api/v1/sales", request);

        // Assert
        response.StatusCode.Should().Be(HttpStatusCode.Created);
    }
}

E2E (End-to-End) Testing

| Attribute | Selection |
|-----------|-----------|
| Tool | Playwright (primary) or Cypress |
| Scope | Full user flows: Login → Sale → Payment → Receipt |
| Environment | Dockerized test environment |

# Install Playwright
npm init playwright@latest

# Install test dependencies
npm install -D @playwright/test
// tests/e2e/cashier-flow.spec.ts

import { test, expect } from '@playwright/test';

test.describe('Cashier Sales Flow', () => {
    test.beforeEach(async ({ page }) => {
        await page.goto('http://localhost:5101');
        await page.fill('[data-testid="pin-input"]', '1234');
        await page.click('[data-testid="login-button"]');
    });

    test('complete sale with cash payment', async ({ page }) => {
        // Scan item
        await page.fill('[data-testid="barcode-input"]', 'NXJ1078');
        await page.press('[data-testid="barcode-input"]', 'Enter');

        // Verify item added
        await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);
        await expect(page.locator('[data-testid="cart-total"]')).toContainText('$');

        // Process payment
        await page.click('[data-testid="pay-button"]');
        await page.click('[data-testid="cash-payment"]');
        await page.fill('[data-testid="cash-tendered"]', '50.00');
        await page.click('[data-testid="complete-sale"]');

        // Verify receipt
        await expect(page.locator('[data-testid="receipt-modal"]')).toBeVisible();
        await expect(page.locator('[data-testid="change-due"]')).toBeVisible();
    });

    test('void line item from cart', async ({ page }) => {
        // Add items
        await page.fill('[data-testid="barcode-input"]', 'NXJ1078');
        await page.press('[data-testid="barcode-input"]', 'Enter');
        await page.fill('[data-testid="barcode-input"]', 'NXJ1079');
        await page.press('[data-testid="barcode-input"]', 'Enter');

        // Void first item
        await page.click('[data-testid="cart-item"]:first-child [data-testid="void-item"]');
        await page.click('[data-testid="confirm-void"]');

        // Verify removed
        await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);
    });
});

Load Testing

| Attribute | Selection |
|-----------|-----------|
| Tool | k6 (primary) or JMeter |
| Scenario | “Black Friday”: 500 concurrent transactions |
| Targets | p99 < 500ms, no errors |

# Install k6
brew install k6  # macOS
# or
docker pull grafana/k6
// tests/load/black-friday.js

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate } from 'k6/metrics';

const errorRate = new Rate('errors');

export const options = {
    scenarios: {
        black_friday: {
            executor: 'ramping-vus',
            startVUs: 0,
            stages: [
                { duration: '2m', target: 100 },  // Ramp up
                { duration: '5m', target: 500 },  // Peak load
                { duration: '2m', target: 0 },    // Ramp down
            ],
            gracefulRampDown: '30s',
        },
    },
    thresholds: {
        http_req_duration: ['p(99)<500'],  // 99% of requests < 500ms
        errors: ['rate<0.01'],              // Error rate < 1%
    },
};

const BASE_URL = __ENV.API_URL || 'http://localhost:5100';

export default function () {
    // Simulate sale creation
    const salePayload = JSON.stringify({
        locationId: 'b5f8e9a0-1234-5678-9abc-def012345678',
        lineItems: [
            { sku: 'NXJ1078', quantity: 1, unitPrice: 29.99 },
            { sku: 'NXJ1079', quantity: 2, unitPrice: 19.99 },
        ],
    });

    const params = {
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${__ENV.AUTH_TOKEN}`,
            'X-Tenant-Id': 'demo',
        },
    };

    const response = http.post(`${BASE_URL}/api/v1/sales`, salePayload, params);

    const success = check(response, {
        'status is 201': (r) => r.status === 201,
        'response time < 500ms': (r) => r.timings.duration < 500,
    });

    errorRate.add(!success);
    sleep(1);
}
# Run load test
k6 run --env API_URL=http://localhost:5100 --env AUTH_TOKEN=xxx tests/load/black-friday.js

# Run with Docker
docker run --rm -i grafana/k6 run - <tests/load/black-friday.js

Code Versioning & Traceability

| Attribute | Selection |
|-----------|-----------|
| Platform | GitHub/GitLab |
| Versioning | Semantic Versioning (SemVer) |
| Tags | v1.0.0, v1.1.0, v2.0.0 |

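
Release scripts can compute the next tag mechanically from the current one. A sketch that bumps the patch component of a `vMAJOR.MINOR.PATCH` tag (the `bump_patch` helper is illustrative):

```shell
# bump_patch vX.Y.Z -> vX.Y.(Z+1), using only POSIX parameter expansion
bump_patch() {
    ver=${1#v}              # strip the leading "v"
    major=${ver%%.*}
    rest=${ver#*.}
    minor=${rest%%.*}
    patch=${rest#*.}
    echo "v${major}.${minor}.$((patch + 1))"
}

next=$(bump_patch v1.2.3)
echo "$next"   # v1.2.4
# git tag -a "$next" -m "Release $next" && git push origin "$next"
```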
# Version tagging workflow
git tag -a v1.0.0 -m "Version 1.0.0 - Initial Release"
git push origin v1.0.0

# Each POS terminal tracks deployed version
# API returns version in health check
curl http://localhost:5100/health
# {"status":"healthy","version":"1.2.3","commit":"abc123f"}
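
Scripts and CI jobs can pull the deployed version out of that payload without extra tooling; a sketch using `sed` against the response shape shown above:

```shell
# Extract the "version" field from the health-check JSON.
# In practice: health=$(curl -s http://localhost:5100/health)
health='{"status":"healthy","version":"1.2.3","commit":"abc123f"}'

version=$(printf '%s' "$health" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$version"   # 1.2.3

# Fail fast in CI if the field is missing
[ -n "$version" ] || { echo "no version in health payload" >&2; exit 1; }
```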

CI/CD Pipeline Testing

# .github/workflows/test.yml

name: Test Suite

on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test tests/PosPlatform.Core.Tests --logger "trx"

  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test tests/PosPlatform.Api.Tests

  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: docker compose -f docker/docker-compose.yml up -d
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/

  load-test:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - uses: grafana/k6-action@v0.3.1
        with:
          filename: tests/load/black-friday.js

Test Data Management

-- tests/seed/test-data.sql

-- Test tenant
INSERT INTO shared.tenants (id, code, name, status)
VALUES ('11111111-1111-1111-1111-111111111111', 'TEST', 'Test Tenant', 'active');

-- Test products
INSERT INTO tenant_test.products (sku, name, base_price)
VALUES
    ('TEST001', 'Test Product 1', 9.99),
    ('TEST002', 'Test Product 2', 19.99),
    ('TEST003', 'Test Product 3', 29.99);

-- Test user (PIN: 1234)
INSERT INTO tenant_test.users (employee_id, full_name, pin_hash, role)
VALUES ('E001', 'Test Cashier', '$2a$12$...', 'cashier');

Reference

For complete architecture characteristics and style selection rationale, see:


18.14 Chaos Engineering Strategy

Overview

Chaos engineering validates system resilience by intentionally injecting failures. For a POS system handling financial transactions, these experiments verify that the platform gracefully handles network partitions, service failures, and infrastructure issues.

| Attribute   | Selection                                          |
|-------------|----------------------------------------------------|
| Tool        | LitmusChaos (primary) or Gremlin                   |
| Environment | Staging only (never production for POS)            |
| Goal        | Validate offline-first, circuit breakers, failover |

Why Chaos Engineering for POS?

+------------------------------------------------------------------+
|                     RETAIL FAILURE SCENARIOS                     |
+------------------------------------------------------------------+
|                                                                  |
|  Scenario 1: Internet Outage During Sale                         |
|  └── POS must complete transaction offline                       |
|  └── Payment must queue for sync                                 |
|                                                                  |
|  Scenario 2: Payment Processor Down                              |
|  └── Circuit breaker must open                                   |
|  └── Fallback to secondary processor or cash                     |
|                                                                  |
|  Scenario 3: Database Connection Lost                            |
|  └── Read operations from local cache                            |
|  └── Write operations queued in local SQLite                     |
|                                                                  |
|  Scenario 4: Kafka Cluster Failure                               |
|  └── Events stored in outbox table                               |
|  └── Replay on recovery                                          |
|                                                                  |
+------------------------------------------------------------------+
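The scenarios above share one pattern: accept the operation locally, queue it durably, and replay it when the dependency returns, without creating duplicates. An illustrative Python sketch of that pattern (an in-memory stand-in, not the platform's C# implementation; class and field names are hypothetical):

```python
import uuid

class PosClient:
    """Queue sales in a local outbox while offline; replay with client-generated IDs."""
    def __init__(self):
        self.outbox = []                  # durable local queue (SQLite in practice)

    def record_sale(self, payload):
        sale_id = str(uuid.uuid4())       # idempotency key for server-side dedupe
        self.outbox.append({"id": sale_id, **payload})
        return sale_id

    def sync(self, server):
        """Replay the outbox in order; safe to repeat after a partial failure."""
        while self.outbox:
            server.accept(self.outbox[0])
            self.outbox.pop(0)

class PosServer:
    """Deduplicate replayed sales by their client-generated ID."""
    def __init__(self):
        self.sales = {}

    def accept(self, sale):
        self.sales.setdefault(sale["id"], sale)   # a second delivery is a no-op

client, server = PosClient(), PosServer()
client.record_sale({"total": 9.99})
client.record_sale({"total": 19.99})
client.sync(server)
print(len(server.sales), len(client.outbox))  # 2 0
```

If a sync crashes between `accept` and `pop`, the retried sync re-sends that sale, but the server's `setdefault` makes the duplicate harmless — the same invariant the resync probes below this chapter's experiments assert.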

LitmusChaos Installation

# Install LitmusChaos in Kubernetes staging cluster
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v3.0.0.yaml

# Verify installation
kubectl get pods -n litmus

# Install chaos experiments
kubectl apply -f https://hub.litmuschaos.io/api/chaos/3.0.0?file=charts/generic/experiments.yaml

Chaos Experiment: Network Partition

Tests offline-first capability when the POS client loses its connection to the central API.

# chaos-experiments/network-partition.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pos-network-partition
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-client"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-network-partition
      spec:
        components:
          env:
            # Target the central API
            - name: TARGET_SERVICE_PORT
              value: "8080"
            - name: NETWORK_INTERFACE
              value: "eth0"
            # Duration of network partition
            - name: TOTAL_CHAOS_DURATION
              value: "300"  # 5 minutes
            # Affect all traffic to API
            - name: DESTINATION_HOSTS
              value: "pos-api.staging.svc.cluster.local"
        probe:
          - name: pos-offline-mode-check
            type: httpProbe
            mode: Continuous
            runProperties:
              probeTimeout: 5
              interval: 10
            httpProbe/inputs:
              url: "http://pos-client:8080/api/health/offline-status"
              method:
                get:
                  criteria: "=="
                  responseCode: "200"
              responseTimeout: 3
---
# Expected Behavior Validation
apiVersion: v1
kind: ConfigMap
metadata:
  name: network-partition-expected
data:
  expected_behavior: |
    1. POS client detects connection loss within 5 seconds
    2. UI shows "Offline Mode" indicator
    3. Sales can be created and processed locally
    4. Payments queue in local SQLite
    5. Sync resumes automatically when connection restored
    6. No duplicate transactions on resync

Chaos Experiment: Payment Processor Failure

Tests circuit breaker and fallback behavior.

# chaos-experiments/payment-processor-failure.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: payment-processor-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-api"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-http-modify-response
      spec:
        components:
          env:
            # Inject 500 errors for Stripe calls
            - name: TARGET_SERVICE_PORT
              value: "443"
            - name: TARGET_HOSTS
              value: "api.stripe.com"
            - name: RESPONSE_BODY
              value: '{"error": {"type": "api_error", "message": "Chaos injection"}}'
            - name: STATUS_CODE
              value: "500"
            - name: CHAOS_DURATION
              value: "120"
        probe:
          - name: circuit-breaker-open-check
            type: promProbe
            mode: OnChaos
            runProperties:
              probeTimeout: 30
              interval: 10
            promProbe/inputs:
              endpoint: "http://prometheus:9090"
              query: 'polly_circuit_breaker_state{service="stripe"} == 1'
              comparator:
                type: "int"
                criteria: "=="
                value: "1"  # Circuit should be OPEN
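The probe above passes only if sustained 500s actually trip the breaker. An illustrative Python sketch of the open/closed logic being exercised (the real API uses Polly; the failure threshold and processor names here are hypothetical):

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; callers then take the fallback."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.threshold

    def call(self, primary, fallback):
        if self.is_open:
            return fallback()          # e.g. secondary processor or cash
        try:
            result = primary()
            self.failures = 0          # a success closes the breaker again
            return result
        except RuntimeError:
            self.failures += 1
            if self.is_open:
                return fallback()
            raise

def failing_stripe():
    raise RuntimeError("api_error: Chaos injection")   # mimics the injected 500s

breaker = CircuitBreaker(threshold=3)
results = []
for _ in range(5):
    try:
        results.append(breaker.call(failing_stripe, lambda: "fallback"))
    except RuntimeError:
        results.append("error")
print(results)  # ['error', 'error', 'fallback', 'fallback', 'fallback']
```

Polly's circuit breaker also adds a half-open probe state after a cooldown, which this sketch omits for brevity.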

Chaos Experiment: Database Latency

Tests system behavior under slow database conditions.

# chaos-experiments/database-latency.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: db-latency-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-postgres"
    appkind: "statefulset"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            - name: NETWORK_INTERFACE
              value: "eth0"
            - name: NETWORK_LATENCY
              value: "2000"  # 2 second latency
            - name: JITTER
              value: "500"   # +/- 500ms jitter
            - name: TOTAL_CHAOS_DURATION
              value: "180"
        probe:
          - name: api-response-degradation
            type: httpProbe
            mode: Continuous
            httpProbe/inputs:
              url: "http://pos-api:8080/api/v1/products"
              method:
                get:
                  criteria: "<"
                  responseCode: "500"  # Should not fail, just slow
              responseTimeout: 10
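The probe above tolerates slow responses but not failures. A minimal Python sketch of a policy that satisfies it — a query deadline with a stale-cache fallback (function and cache names are hypothetical; the real service would implement this with database command timeouts):

```python
import concurrent.futures
import time

CACHE = {"products": ["TEST001", "TEST002"]}    # last known good result

def query_with_deadline(query, timeout_s):
    """Run `query` with a deadline; serve the stale cache if it is too slow."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(query)
        try:
            result = future.result(timeout=timeout_s)
            CACHE["products"] = result          # refresh cache on success
            return result, "fresh"
        except concurrent.futures.TimeoutError:
            return CACHE["products"], "stale"   # degrade instead of a 500

def slow_db():
    time.sleep(0.5)                             # stands in for the injected latency
    return ["TEST001", "TEST002", "TEST003"]

print(query_with_deadline(slow_db, timeout_s=0.05))
# → (['TEST001', 'TEST002'], 'stale')
```

Serving stale reads keeps the product lookup available during the experiment; writes would instead queue locally, as in the offline-first scenarios above.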

Chaos Experiment: Kafka Broker Failure

Tests event-sourcing resilience and the outbox pattern.

# chaos-experiments/kafka-broker-failure.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kafka-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=kafka"
    appkind: "statefulset"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "120"
            - name: CHAOS_INTERVAL
              value: "30"
            - name: FORCE
              value: "true"
        probe:
          - name: events-queued-in-outbox
            type: cmdProbe
            mode: Edge
            cmdProbe/inputs:
              command: |
                psql -h pos-postgres -U pos_admin -d pos_platform -c \
                "SELECT COUNT(*) FROM event_outbox WHERE status = 'pending'"
              comparator:
                type: "int"
                criteria: ">="
                value: "1"  # Events should queue
          - name: events-replayed-on-recovery
            type: cmdProbe
            mode: EOT
            cmdProbe/inputs:
              command: |
                # After Kafka recovery, outbox should drain
                psql -h pos-postgres -U pos_admin -d pos_platform -c \
                "SELECT COUNT(*) FROM event_outbox WHERE status = 'pending'"
              comparator:
                type: "int"
                criteria: "=="
                value: "0"  # All events processed
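The two probes check the invariant the outbox pattern provides: events accumulate as `pending` while the broker is down and drain to zero on recovery. An illustrative Python sketch (an in-memory stand-in for the `event_outbox` table; the publisher functions are hypothetical):

```python
class EventOutbox:
    """Minimal outbox: events are written as 'pending' alongside the business
    change, then drained to the broker by a background publisher."""
    def __init__(self):
        self.rows = []

    def append(self, event):
        self.rows.append({"status": "pending", "event": event})

    def pending_count(self):
        return sum(1 for r in self.rows if r["status"] == "pending")

    def drain(self, publish):
        """Publish pending rows; a broker failure leaves them pending for retry."""
        for row in self.rows:
            if row["status"] != "pending":
                continue
            try:
                publish(row["event"])
                row["status"] = "published"
            except ConnectionError:
                break                      # broker still down; retry later

outbox = EventOutbox()
outbox.append({"type": "SaleCompleted"})
outbox.append({"type": "PaymentCaptured"})

def broker_down(event):
    raise ConnectionError("kafka unavailable")

outbox.drain(broker_down)
print(outbox.pending_count())  # 2 -- events queue while Kafka is down

outbox.drain(lambda event: None)   # broker recovered
print(outbox.pending_count())  # 0 -- outbox drains on recovery
```

In production the outbox write shares a database transaction with the sale itself, so an event is never lost between committing the sale and reaching Kafka.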

Chaos Testing Schedule

# chaos-experiments/scheduled-chaos.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosSchedule
metadata:
  name: weekly-resilience-tests
  namespace: staging
spec:
  schedule:
    now: false
    repeat:
      timeRange:
        startTime: "2026-01-01T02:00:00Z"  # Run at 2 AM
      properties:
        minChaosInterval: "168h"  # Weekly
  chaosEngineTemplateSpec:
    engineState: "active"
    appinfo:
      appns: "staging"
      applabel: "app=pos-api"
      appkind: "deployment"
    experiments:
      - name: pod-network-partition
        spec:
          components:
            env:
              - name: TOTAL_CHAOS_DURATION
                value: "300"

Chaos Engineering CI/CD Integration

# .github/workflows/chaos-tests.yml

name: Chaos Engineering Tests

on:
  schedule:
    - cron: '0 3 * * 0'  # Weekly on Sunday at 3 AM
  workflow_dispatch:

jobs:
  chaos-tests:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4

      - name: Setup kubectl
        uses: azure/setup-kubectl@v3

      - name: Configure kubectl
        run: |
          echo "${{ secrets.STAGING_KUBECONFIG }}" > kubeconfig
          # `export` does not persist across steps; GITHUB_ENV does
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"

      - name: Run Network Partition Test
        run: |
          kubectl apply -f chaos-experiments/network-partition.yaml
          kubectl wait --for=condition=complete chaosengine/pos-network-partition -n staging --timeout=600s

      - name: Validate Offline Mode Results
        run: |
          # Check experiment status
          RESULT=$(kubectl get chaosresult pos-network-partition-pod-network-partition -n staging -o jsonpath='{.status.experimentStatus.verdict}')
          if [ "$RESULT" != "Pass" ]; then
            echo "Chaos experiment FAILED: Network partition handling"
            kubectl logs -l app=pos-client -n staging --tail=100
            exit 1
          fi

      - name: Run Payment Processor Failure Test
        run: |
          kubectl apply -f chaos-experiments/payment-processor-failure.yaml
          kubectl wait --for=condition=complete chaosengine/payment-processor-chaos -n staging --timeout=300s

      - name: Generate Chaos Report
        run: |
          litmusctl get experiments -n staging -o json > chaos-report.json

      - name: Upload Chaos Report
        uses: actions/upload-artifact@v4
        with:
          name: chaos-engineering-report
          path: chaos-report.json

      - name: Notify on Failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          channel-id: 'chaos-alerts'
          slack-message: 'Chaos engineering tests FAILED in staging'

Chaos Engineering Runbook

# Chaos Engineering Runbook

## Pre-Chaos Checklist

- [ ] Staging environment is isolated from production
- [ ] No active deployments in progress
- [ ] Monitoring dashboards are active
- [ ] On-call engineer is notified
- [ ] Rollback procedures are documented

## During Chaos

1. Monitor Grafana dashboards for:
   - Error rates
   - Latency p99
   - Circuit breaker states
   - Queue depths

2. Validate expected behaviors:
   - Offline mode activates
   - Fallbacks engage
   - No data loss

## Post-Chaos

1. Review chaos experiment results
2. Document any unexpected behaviors
3. Create tickets for resilience improvements
4. Update architecture documentation

18.15 Next Steps

With your development environment ready:

  1. Proceed to Chapter 19: Implementation Roadmap for the full build plan
  2. Begin Phase 1: Foundation in Chapter 20
  3. Reference this chapter (Chapter 18) when adding new developers to the project


Document Information

| Attribute | Value                     |
|-----------|---------------------------|
| Version   | 5.0.0                     |
| Created   | 2025-12-29                |
| Updated   | 2026-02-25                |
| Author    | Claude Code               |
| Status    | Active                    |
| Part      | VI - Implementation Guide |
| Chapter   | 18 of 32                  |

This chapter is part of the POS Blueprint Book. All content is self-contained.