The POS Platform Blueprint
A Complete Guide to Building an Enterprise Multi-Tenant Point of Sale System
Version: 5.0.0
Created: December 29, 2025
Updated: February 25, 2026
Target Platform: /volume1/docker/pos-platform/
╔═══════════════════════════════════════════════════════════════════════════════╗
║ ║
║ ████████╗██╗ ██╗███████╗ ║
║ ╚══██╔══╝██║ ██║██╔════╝ ║
║ ██║ ███████║█████╗ ║
║ ██║ ██╔══██║██╔══╝ ║
║ ██║ ██║ ██║███████╗ ║
║ ╚═╝ ╚═╝ ╚═╝╚══════╝ ║
║ ║
║ ██████╗ ██████╗ ███████╗ ██████╗ ██╗ ██╗ ██╗███████╗ ║
║ ██╔══██╗██╔═══██╗██╔════╝ ██╔══██╗██║ ██║ ██║██╔════╝ ║
║ ██████╔╝██║ ██║███████╗ ██████╔╝██║ ██║ ██║█████╗ ║
║ ██╔═══╝ ██║ ██║╚════██║ ██╔══██╗██║ ██║ ██║██╔══╝ ║
║ ██║ ╚██████╔╝███████║ ██████╔╝███████╗╚██████╔╝███████╗ ║
║ ╚═╝ ╚═════╝ ╚══════╝ ╚═════╝ ╚══════╝ ╚═════╝ ╚══════╝ ║
║ ║
║ ██████╗ ██████╗ ██╗███╗ ██╗████████╗ ║
║ ██╔══██╗██╔══██╗██║████╗ ██║╚══██╔══╝ ║
║ ██████╔╝██████╔╝██║██╔██╗ ██║ ██║ ║
║ ██╔═══╝ ██╔══██╗██║██║╚██╗██║ ██║ ║
║ ██║ ██║ ██║██║██║ ╚████║ ██║ ║
║ ╚═╝ ╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝ ╚═╝ ║
║ ║
║ Enterprise Multi-Tenant Point of Sale ║
║ Architecture & Implementation ║
║ ║
╚═══════════════════════════════════════════════════════════════════════════════╝
How to Use This Book
This Blueprint is a self-contained guide for building a production-grade, multi-tenant POS system using Claude Code’s multi-agent orchestration. Every diagram, code sample, database schema, and implementation detail is included directly in these pages.
Reading Order
| If You Want To… | Start With… |
|---|---|
| Understand the vision | Part I: Foundation |
| Design the system | Part II: Architecture |
| Build the database | Part III: Database |
| Write the backend | Part IV: Backend |
| Create the UI | Part V: Frontend |
| Start implementing | Part VI: Implementation |
| Deploy to production | Part VII: Operations |
| Look up terms | Part VIII: Reference |
Claude Code Commands
Throughout this book, you’ll see commands like:
```
/dev-team implement tenant middleware
```
These are Claude Code multi-agent commands. See Chapter 29: Claude Code Reference for the complete command guide.
Table of Contents
Front Matter
- 00-BOOK-INDEX.md - You are here
- 01-PREFACE.md - Why this book exists
- BLUEPRINT-INSTRUCTIONS.md - How to maintain this blueprint
Part I: Foundation (Chapter 01)
What this book is and how to use it
Part II: Architecture (Chapters 02-05)
System design, quality attributes, and key decisions
- Chapter 02: Architecture Decision Records
- Chapter 03: Architecture Characteristics
- Chapter 04: Architecture Styles Analysis
- Chapter 05: Architecture Components (BRD v20.0)
Note: In v3.0.0, standalone architecture chapters were consolidated into enriched Characteristics and Styles chapters. In v4.0.0, the full BRD v20.0 was integrated. In v5.0.0, Foundation chapters 02-04 were removed and all chapters renumbered.
Part III: Database (Chapters 06-09)
Complete data layer specification
- Chapter 06: Database Strategy
- Chapter 07: Schema Design
- Chapter 08: Entity Specifications
- Chapter 09: Indexes & Performance
Part IV: Backend (Chapters 10-13)
API and service layer implementation
- Chapter 10: API Design
- Chapter 11: Service Layer
- Chapter 12: Security & Authentication
- Chapter 13: Integration Patterns
Part V: Frontend (Chapters 14-17)
User interface specifications
- Chapter 14: POS Client Application
- Chapter 15: Admin Portal
- Chapter 16: Mobile (Raptag)
- Chapter 17: UI Component Library
Part VI: Implementation Guide (Chapters 18-23)
Step-by-step building instructions
- Chapter 18: Development Environment
- Chapter 19: Implementation Roadmap
- Chapter 20: Phase 1 - Foundation
- Chapter 21: Phase 2 - Core Operations
- Chapter 22: Phase 3 - Supporting Systems
- Chapter 23: Phase 4 - Production Ready
Part VII: Operations (Chapters 24-28)
Deployment and ongoing maintenance
- Chapter 24: Deployment Guide
- Chapter 25: Monitoring & Alerting
- Chapter 26: Security Compliance
- Chapter 27: Disaster Recovery
- Chapter 28: Tenant Lifecycle
Part VIII: Reference (Chapters 29-32)
Quick lookup resources
- Chapter 29: Claude Code Command Reference
- Chapter 30: Glossary
- Chapter 31: Checklists
- Chapter 32: Troubleshooting
Appendices
- Appendix A: Complete API Reference
- Appendix B: Database ERD
- Appendix C: Domain Events Catalog
- Appendix D: UI Mockups
- Appendix E: Code Templates
- Appendix F: BRD-to-Code Module Mapping
Book Statistics
| Metric | Value |
|---|---|
| Total Chapters | 32 |
| Parts | 8 |
| Appendices | 6 (A-F) |
| Database Tables | 51 |
| API Endpoints | 75+ |
| Domain Events | 80 |
| Code Services | 142 |
| Target Grade | A (Production-Ready) |
How to Print This Book
Option 1: Markdown to PDF (Recommended)
Use Pandoc to compile all chapters into a single PDF:
```bash
# Install Pandoc (if not installed)
# macOS: brew install pandoc
# Ubuntu: apt install pandoc texlive-xetex

# Navigate to Blueprint folder
cd /volume1/docker/planning/000-POS-Learning/00-Blue-Print

# Compile to PDF
pandoc \
  00-BOOK-INDEX.md \
  01-PREFACE.md \
  Part-I-Foundation/*.md \
  Part-II-Architecture/*.md \
  Part-III-Database/*.md \
  Part-IV-Backend/*.md \
  Part-V-Frontend/*.md \
  Part-VI-Implementation/*.md \
  Part-VII-Operations/*.md \
  Part-VIII-Reference/*.md \
  Appendices/*.md \
  -o POS-Blueprint-Book.pdf \
  --toc \
  --toc-depth=3 \
  --pdf-engine=xelatex \
  -V geometry:margin=1in \
  -V fontsize=11pt \
  -V mainfont="DejaVu Sans" \
  -V monofont="DejaVu Sans Mono"
```
Option 2: VS Code Extension
- Install “Markdown PDF” extension in VS Code
- Open each chapter
- Right-click > “Markdown PDF: Export (pdf)”
- Combine PDFs using any PDF merger
Option 3: Web-Based
Use online tools like:
- Dillinger.io - Paste markdown, export as PDF
- GitHub - Each .md file renders nicely for printing
- Grip - Local GitHub-style markdown preview
Option 4: Build Script (Automated)
```bash
#!/bin/bash
# save as: build-book.sh

BOOK_DIR="/volume1/docker/planning/000-POS-Learning/00-Blue-Print"
OUTPUT="$BOOK_DIR/POS-Blueprint-Book.pdf"

# Collect all markdown files in order
FILES=(
  "$BOOK_DIR/00-BOOK-INDEX.md"
  "$BOOK_DIR/01-PREFACE.md"
)

# Add Parts I-VIII and Appendices
for part in Part-I-Foundation Part-II-Architecture Part-III-Database \
            Part-IV-Backend Part-V-Frontend Part-VI-Implementation \
            Part-VII-Operations Part-VIII-Reference Appendices; do
  for file in "$BOOK_DIR/$part"/*.md; do
    [ -f "$file" ] && FILES+=("$file")
  done
done

# Build PDF
pandoc "${FILES[@]}" -o "$OUTPUT" --toc --toc-depth=3
echo "Book compiled: $OUTPUT"
```
Estimated Print Size
| Format | Pages | Notes |
|---|---|---|
| Full Book | ~400-500 | All chapters and appendices |
| Core (Parts I-IV) | ~200 | Architecture + Backend |
| Quick Reference | ~50 | Part VIII only |
Version History
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2025-12-29 | Initial Blueprint Book (39 chapters) |
| 2.0.0 | 2026-02-19 | Expert panel review, integration module additions |
| 3.0.0 | 2026-02-22 | Chapter consolidation (39 to 34): merged High-Level Architecture, Multi-Tenancy, Domain Model, Event Sourcing, and Offline-First into Architecture Characteristics (Ch 06) and Architecture Styles (Ch 07); full renumbering |
| 3.1.0 | 2026-02-22 | Structural cleanup: section numbering standardized across all 34 chapters + 5 appendices, Ch 08/09 rewritten for RLS, Document Information footers added, cross-references audited and fixed |
| 3.2.0 | 2026-02-24 | Added Appendix F: BRD-to-Code Module Mapping (142 services across 7 modules, 80 domain events, 19 state machines, 107 decisions mapped with full traceability) |
| 3.3.0 | 2026-02-25 | RFID Counting Subsystem: BRD v20 (Section 5.16 RFID Config, Section 4.6.8 multi-operator counting, 6 new decisions #108-113), schema fixes (tenant_id/RLS on all RFID tables, 3 new tables), chunked sync API, Raptag home dashboard + progress tracking + auto-save/recovery |
| 4.0.0 | 2026-02-25 | BRD v20.0 integrated as Chapter 08: Architecture Components; all subsequent chapters renumbered (+1); 26 contradictions reconciled across 12 chapters and 4 appendices |
| 5.0.0 | 2026-02-25 | Removed Ch 02-04 (Foundation), rewritten Ch 01 as Blueprint Purpose, renumbered 35 to 32 chapters |
Contributors
| Role | Contributor |
|---|---|
| Architect | Claude Code Architect Agent |
| Author | Claude Code Editor Agent |
| Reviewer | Claude Code Engineer Agent |
| Research | Claude Code Researcher Agent |
| Coordinator | Claude Code Orchestrator |
Blueprint Maintenance
How to Update This Blueprint
See BLUEPRINT-INSTRUCTIONS.md for complete maintenance procedures.
Quick Reference:
| Task | Action |
|---|---|
| Edit content | Update master file, copy to mdbook-src/src/ |
| Add chapter | Create file, update this index, update SUMMARY.md |
| Add appendix | Create file, update this index, update SUMMARY.md |
| Build PDF | Run ./build-book.sh |
| Deploy web | Run cd mdbook-src && mdbook build && wrangler pages deploy book |
CRITICAL: When adding or removing chapters/appendices, you MUST update:
- This file (00-BOOK-INDEX.md)
- mdbook-src/src/SUMMARY.md
- The copy of this index at mdbook-src/src/00-BOOK-INDEX.md
Live Site
URL: https://pos-blueprint.pages.dev/
Thoughts & Recommendations
Current Status (as of 2026-02-22)
The blueprint is comprehensive and production-ready for Phase 3 implementation. Key observations:
Strengths
- Complete database schema (51 tables across 13 domains)
- Full API specification (75+ endpoints with request/response examples)
- Multi-tenant architecture designed from day one
- Offline-first design with conflict resolution strategies
- 4-phase implementation roadmap with Claude Code commands
- Consolidated architecture chapters with expert panel review and full implementation details
Areas for Future Enhancement
| Area | Recommendation | Priority |
|---|---|---|
| Testing Strategy | Add chapter on testing patterns (unit, integration, E2E) | High |
| CI/CD Pipeline | Add deployment automation chapter | Medium |
| Performance Benchmarks | Add concrete performance targets and test plans | Medium |
| Data Migration Scripts | Add detailed QB POS to New POS migration scripts | High |
| Training Materials | Add end-user training documentation | Low |
| Mobile Screenshots | Add actual Raptag UI screenshots to Appendix D | Low |
Implementation Notes
- Start with Phase 1 - Foundation is critical; don’t skip tenant isolation
- Use the code templates - Appendix E has copy-paste ready patterns
- Follow the ADRs - Chapter 02 documents why decisions were made
- Test offline early - Don’t leave offline mode for the end
- Schema provisioning - Use the SQL functions in Part III (Database)
Open Questions
- Payment processor final selection (Stripe vs Square vs both?)
- RFID hardware vendor confirmation (Zebra confirmed?)
- First pilot store selection for Phase 3 testing
- Data retention policy for audit compliance
Change Log
| Date | Change | Author |
|---|---|---|
| 2025-12-29 | Initial Blueprint v1.0.0 | Claude Code |
| 2026-01-24 | Added maintenance instructions, updated appendix list | Claude Code |
| 2026-02-22 | v3.0.0 - Chapter consolidation (39 to 34), full renumbering | Claude Code |
| 2026-02-23 | v3.1.0 - Structural cleanup: section numbering, RLS fix, footers, cross-refs | Claude Code |
| 2026-02-24 | v3.2.0 - Added Appendix F: BRD-to-Code Module Mapping | Claude Code |
| 2026-02-25 | v4.0.0 - BRD v20.0 as Ch 08, full renumber (34 to 35 chapters), contradiction reconciliation | Claude Code |
| 2026-02-25 | v5.0.0 - Removed Ch 02-04 (Foundation), rewritten Ch 01 as Blueprint Purpose, renumbered 35 to 32 chapters | Claude Code |
“Build it right the first time. This Blueprint is your guide.”
Preface
Why This Book Exists
In late 2025, we embarked on a journey to replace an aging QuickBooks Point of Sale system with a modern, multi-tenant platform. What started as a simple migration project evolved into something much more ambitious: a comprehensive Blueprint for building enterprise-grade retail software.
This book captures everything we learned—the architecture decisions, the database designs, the implementation patterns, and the operational procedures. It’s not a theoretical exercise; it’s a practical guide born from real retail operations across five store locations.
The Three-Phase Philosophy
We adopted a deliberate three-phase approach:
╔═══════════════════════════════════════════════════════════════════════════╗
║ ║
║ PHASE 1: LEARN PHASE 2: DESIGN ║
║ ───────────── ────────────── ║
║ Build Stanly (bridge to QB) Clean room architecture ║
║ Build Raptag (RFID system) Domain models without legacy ║
║ Test with real stores Database schema for scale ║
║ Document every discovery API specifications ║
║ ║
║ ↓ ║
║ ║
║ PHASE 3: BUILD ║
║ ────────────── ║
║ Fresh POS system from scratch ║
║ Using learnings, not legacy code ║
║ Multi-tenant from day one ║
║ Production-grade quality ║
║ ║
╚═══════════════════════════════════════════════════════════════════════════╝
The critical insight: We don’t evolve legacy systems into production. We learn from them, then build fresh. This book is the culmination of Phase 2—the complete design that enables Phase 3.
What Makes This Blueprint Different
1. Self-Contained
Every diagram, code sample, and SQL statement is included directly in these pages. You won’t find “see external documentation” or “refer to file X.” Everything you need is here.
2. Production-Grade
This isn’t a prototype specification. The database schema has 51 tables. The API has 75+ endpoints. The architecture handles offline operations, multi-tenancy, and PCI-DSS compliance. We designed for the edge cases.
3. Claude Code Native
This Blueprint is designed to be built using Claude Code’s multi-agent orchestration. Every chapter includes the exact commands to implement each section. The book and the tool work together.
4. Battle-Tested Patterns
The patterns in this book come from real retail operations—from handling offline sales during network outages to reconciling inventory across multiple locations. These aren’t theoretical; they’re proven.
Who Should Read This Book
| Reader | Focus Areas |
|---|---|
| Architects | Parts II, III, VIII (Architecture, Database, ADRs) |
| Backend Developers | Parts III, IV, VI (Database, Backend, Implementation) |
| Frontend Developers | Parts V, VI (Frontend, Implementation) |
| DevOps Engineers | Parts VII, VIII (Operations, Deployment) |
| Product Managers | Parts I, VI (Foundation, Roadmap) |
| Business Analysts | Parts I, V (Foundation, UI Specifications) |
How to Use This Book
If Starting Fresh
Read sequentially from Part I through Part VI. This gives you the full context before implementation.
If Joining Mid-Project
- Read Part I (Foundation) for context
- Read the relevant Part for your work area
- Use Part VIII (Reference) for quick lookups
If Looking Up Specific Information
Jump directly to:
- Glossary (Chapter 30) for term definitions
- API Reference (Appendix A) for endpoint details
- Checklists (Chapter 31) for procedures
- Troubleshooting (Chapter 32) for problem-solving
The Technology Stack
This Blueprint specifies a complete technology stack:
┌─────────────────────────────────────────────────────────────────────────┐
│ TECHNOLOGY STACK │
│ │
│ BACKEND FRONTEND │
│ ─────── ──────── │
│ ASP.NET Core 8 Blazor Server (Admin Portal) │
│ Entity Framework Core 8 .NET MAUI (POS Client) │
│ PostgreSQL 16 .NET MAUI (Raptag Mobile) │
│ SignalR (Real-time) SQLite (Offline Storage) │
│ │
│ INFRASTRUCTURE INTEGRATIONS │
│ ────────────── ──────────── │
│ Docker + Docker Compose Shopify (E-commerce) │
│ Prometheus + Grafana Stripe/Square (Payments) │
│ Redis (Caching - optional) Zebra (RFID Printers) │
│ Tailscale (VPN Mesh) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Conventions Used
Code Samples
```csharp
// C# code appears in blocks like this
public class OrderService : IOrderService
{
    // Implementation
}
```
SQL Statements
```sql
-- SQL code appears in blocks like this
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid()
);
```
Claude Commands
```
# Claude Code commands appear like this
/dev-team implement OrderService with event sourcing
```
Important Notes
Note: Important information appears in blockquotes like this.
Warning: Critical warnings that could cause issues.
Diagrams
ASCII diagrams are used throughout for portability:
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Client │────►│ API │────►│Database │
└─────────┘ └─────────┘ └─────────┘
Acknowledgments
This Blueprint was created through collaboration between:
- Business stakeholders who defined the requirements
- Store employees who provided real-world feedback
- The Stanly project which taught us what works (and what doesn’t)
- The Raptag project which proved mobile RFID feasibility
- Claude Code agents who helped design, implement, and review
A Note on Quality
We set a goal: this system should be Grade A production quality. That means:
- Availability: 99.9% uptime (less than 9 hours downtime per year)
- Performance: Sub-2-second transaction completion
- Security: PCI-DSS compliant, zero card data storage
- Reliability: Works offline, syncs when connected
- Scalability: Multi-tenant from day one
Every design decision in this book was made with these targets in mind.
Let’s Build
You’re holding the complete Blueprint. The architecture is designed. The database is specified. The APIs are defined. The UI is wireframed. The operations procedures are documented.
All that’s left is to build it.
Turn the page and let’s begin.
December 2025
Document Information
| Attribute | Value |
|---|---|
| Book Title | The POS Platform Blueprint |
| Version | 5.0.0 |
| Created | December 29, 2025 |
| Updated | February 25, 2026 |
| Total Chapters | 32 |
| Total Appendices | 6 (A-F) |
| Target Platform | /volume1/docker/pos-platform/ |
| Print Command | See “How to Print This Book” in Index |
Chapter 01: Blueprint Purpose
1.1 What This Book Is
The POS Platform Blueprint is the complete architecture and implementation guide for building an enterprise multi-tenant retail Point of Sale system. It is the single source of truth for every design decision, database schema, API contract, deployment procedure, and implementation pattern needed to build the platform from scratch.
This book is self-contained. Every diagram, code sample, and specification lives within these chapters. There are no external dependencies or unresolved references. A development team should be able to build the entire system using only this document.
1.2 Who Uses This Book
This Blueprint is designed for Claude Code teams and agents during the coding and development phase. It serves as the foundation plan that AI-assisted development teams follow when implementing the POS platform.
| Audience | How They Use It |
|---|---|
| Claude Code agents | Primary reference during implementation; follow specs exactly |
| Team leads | Assign work by chapter/module; verify implementations against specs |
| Architects | Review ADRs and architecture decisions before implementation |
| Developers | Look up API contracts, database schemas, service patterns |
| DevOps | Follow deployment, monitoring, and DR procedures |
1.3 What the POS Platform Does
The POS Platform is a unified commerce solution for small-to-mid-size retailers operating both online and brick-and-mortar stores. It replaces legacy point-of-sale systems with a modern, multi-tenant SaaS platform.
Core Capabilities
| Capability | Description |
|---|---|
| Point of Sale | Process sales, returns, exchanges across physical locations |
| Inventory Management | Real-time stock tracking across all locations |
| Multi-Location | Support any number of stores per tenant |
| Shopify Integration | Two-way inventory and order synchronization |
| Offline-First | Continue operations during network outages with queue-and-sync |
| RFID Counting | Rapid bulk inventory counting via dedicated Raptag mobile app |
| Multi-Tenant | Row-level isolation with PostgreSQL RLS; one platform, many retailers |
| Payment Processing | PCI SAQ-A semi-integrated via Stripe (no card data touches our system) |
Key Architecture Decisions
- Event-Driven Modular Monolith (Central API) + Microkernel (POS Client)
- PostgreSQL with Row-Level Security for tenant isolation
- Offline-first with SQLite local DB, sync queue, and CRDTs for conflict resolution
- ASP.NET Core 8.0 backend, Blazor web frontend, .NET MAUI mobile
- PostgreSQL LISTEN/NOTIFY for v1.0 events (Kafka deferred to v2.0)
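The RLS decision above can be illustrated with a minimal sketch. The table, column, and policy names here are hypothetical, as is the `app.tenant_id` session setting; the authoritative schema and provisioning functions live in Part III.

```sql
-- Hypothetical illustration of row-level tenant isolation with RLS.
-- Every tenant-scoped table carries a tenant_id column.
CREATE TABLE products (
    id        UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL,
    name      TEXT NOT NULL
);

-- Enable and force RLS so even the table owner is filtered.
ALTER TABLE products ENABLE ROW LEVEL SECURITY;
ALTER TABLE products FORCE ROW LEVEL SECURITY;

-- Policy: rows are visible only when tenant_id matches the
-- per-connection setting established by the API middleware.
CREATE POLICY tenant_isolation ON products
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- The application sets the tenant context at the start of each request:
-- SET app.tenant_id = '<tenant uuid>';
```

With a policy like this in place, ordinary queries need no `WHERE tenant_id = ?` clause; the database enforces isolation regardless of what the application asks for.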
1.4 How to Use This Book
Structure: 32 Chapters, 8 Parts, 6 Appendices
| Part | Chapters | Purpose |
|---|---|---|
| I. Foundation | Ch 01 | This chapter – what the book is and how to navigate it |
| II. Architecture | Ch 02-05 | ADRs, architecture characteristics, styles analysis, and the full BRD |
| III. Database | Ch 06-09 | Database strategy, schema design, entity specifications, indexes |
| IV. Backend | Ch 10-13 | API design, service layer, security/auth, integration patterns |
| V. Frontend | Ch 14-17 | POS client, admin portal, mobile (Raptag), UI component library |
| VI. Implementation | Ch 18-23 | Dev environment, roadmap, Phases 1-4 implementation guides |
| VII. Operations | Ch 24-28 | Deployment, monitoring, security compliance, DR, tenant lifecycle |
| VIII. Reference | Ch 29-32 | Claude commands, glossary, checklists, troubleshooting |
Appendices A-F provide supplementary reference: API specs, ERD, domain events, UI mockups, code templates, and BRD-to-code module mapping.
Reading Guide
Starting a new implementation? Read Parts I-II first for context, then jump to the relevant Part for your work area.
Implementing a specific module? Start with Ch 05 (Architecture Components / BRD) to find your module’s requirements, then check the corresponding backend (Part IV) and frontend (Part V) chapters.
Setting up infrastructure? Go directly to Part VI (Implementation) for dev environment and phased rollout, then Part VII (Operations) for deployment and monitoring.
Looking up a specific pattern? Use Ch 30 (Glossary) for terminology, Ch 31 (Checklists) for process guides, or Ch 32 (Troubleshooting) for common issues.
1.5 Key Mega-References
Two chapters serve as the primary implementation references and contain the bulk of the technical detail:
Chapter 04: Architecture Styles Analysis (~5,000 lines)
The architectural backbone of the system. Key sections include:
| Section | Content |
|---|---|
| L.4 | Selected Architecture Strategy (modular monolith + event-driven) |
| L.4A | CQRS and Event Sourcing scope |
| L.9A | System Architecture Reference (3-tier, service boundaries) |
| L.9B | Data Flow Reference (online/offline sale and sync flows) |
| L.9C | Domain Model Reference (bounded contexts, aggregates) |
| L.10A.1 | Offline-First Strategy (SQLite, sync queue, CRDTs) |
| L.10A.4 | Multi-Tenancy with Row-Level RLS |
| L.6 | QA and Testing Strategy |
| L.7 | Observability (LGTM Stack) |
| L.8 | Security (6-Gate Pyramid) |
Chapter 05: Architecture Components / BRD v20.0 (~19,900 lines)
The complete Business Requirements Document with 7 modules and 113 decisions:
| Module | Scope |
|---|---|
| Module 1 | Multi-Tenant Architecture and User Management |
| Module 2 | Product Catalog and Inventory |
| Module 3 | Sales and Transactions |
| Module 4 | Customer and Loyalty |
| Module 5 | Hardware and Device Management |
| Module 6 | Integrations and External Systems |
| Module 7 | Reporting and Analytics |
Authority rule: When the BRD (Ch 05) conflicts with any other chapter, the BRD wins.
1.6 Conventions Used in This Book
- Section numbering: All chapters use `## X.Y Title` format (e.g., `## 1.1 What This Book Is`)
- Cross-references: Point to chapter numbers and filenames (e.g., “See Chapter 04”)
- Code samples: Complete, copy-paste ready; file paths included as comments
- Document footer: Every chapter ends with a standardized Document Information table
- Dual-file sync: Every chapter exists in both the master directory and `mdbook-src/src/` for web publishing
1.7 Summary
This Blueprint Book is the complete specification for the POS Platform. It is designed to be consumed by Claude Code agents and development teams who will implement the system. Start with the architecture chapters (Part II) for the big picture, then drill into the specific Part relevant to your current task.
Next Chapter: Chapter 02: Architecture Decision Records
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | I - Foundation |
| Chapter | 01 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 02: Architecture Decision Records
Documenting Key Technical Decisions
This chapter documents the major architectural decisions for the POS Platform using Architecture Decision Records (ADRs). Each ADR captures the context, decision, and consequences of a significant technical choice.
What is an ADR?
Architecture Decision Records provide a structured way to document important technical decisions:
ADR Structure
=============
+------------------------------------------------------------------+
| ADR-XXX: [Title] |
+------------------------------------------------------------------+
| Status: [proposed | accepted | deprecated | superseded] |
| Date: YYYY-MM-DD |
| Deciders: [who made the decision] |
+------------------------------------------------------------------+
| |
| CONTEXT |
| - What is the issue? |
| - What forces are at play? |
| - What constraints exist? |
| |
| DECISION |
| - What is the change? |
| - What did we choose? |
| |
| CONSEQUENCES |
| - What are the positive outcomes? |
| - What are the negative outcomes? |
| - What risks are introduced? |
| |
+------------------------------------------------------------------+
ADR-001: Schema-Per-Tenant Multi-Tenancy
Note: This ADR was superseded by the Row-Level Isolation with PostgreSQL RLS decision documented in Chapter 04, Section L.10A.4. The original decision is preserved here for historical context.
+==================================================================+
| ADR-001: Schema-Per-Tenant Multi-Tenancy |
+==================================================================+
| Status: SUPERSEDED (by Row-Level Isolation with RLS, Ch 04 |
| Section L.10A.4) |
| Date: 2025-12-29 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
We are building a multi-tenant POS platform that will serve multiple
independent retail businesses. Each tenant needs:
1. Strong data isolation for security and compliance
2. Easy backup and restore of individual tenant data
3. Ability to scale individual tenants independently
4. Simple data model without tenant_id on every table
5. Compliance with SOC 2 and potential HIPAA requirements
We evaluated three multi-tenancy strategies:
Strategy A: Shared Tables (Row-Level)
- All tenants share tables
- tenant_id column on every table
- WHERE tenant_id = ? on every query
Strategy B: Separate Databases
- Each tenant gets own database
- Complete isolation
- High connection overhead
Strategy C: Schema-Per-Tenant
- Single database, separate schemas
- SET search_path per request
- Logical isolation, shared infrastructure
DECISION
--------
We will use SCHEMA-PER-TENANT multi-tenancy (Strategy C).
Each tenant gets a dedicated PostgreSQL schema:
- shared schema: Platform-wide data (tenants, plans, features)
- tenant_xxx schema: All tenant-specific tables
The tenant is resolved from the subdomain (e.g., nexus.pos-platform.com)
and the database search_path is set accordingly.
CONSEQUENCES
------------
Positive:
+ Strong logical isolation between tenants
+ No tenant_id needed on every table (cleaner data model)
+ Easy per-tenant backup: pg_dump -n tenant_xxx
+ Easy per-tenant restore without affecting other tenants
+ Single connection pool serves all tenants
+ Simpler queries (no WHERE tenant_id = ?)
+ Compliance-friendly for audits and data requests
Negative:
- Migrations must be applied to all tenant schemas
- Cross-tenant queries require explicit schema references
- PostgreSQL has soft limit (~10,000 schemas per database)
- Slight complexity in tenant provisioning
Risks:
- Must ensure search_path is ALWAYS set correctly
- Schema migration failures could leave tenants inconsistent
- Need robust tenant provisioning automation
Mitigations:
- Middleware validates and sets search_path on every request
- Migration runner applies changes atomically per tenant
- Tenant provisioning is scripted and tested
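The middleware mitigation can be sketched as ASP.NET Core middleware. This is illustrative only (the ADR above is superseded); the class name, per-request `NpgsqlConnection` injection, and validation rule are assumptions, not the Chapter 12 implementation.

```csharp
// Hypothetical sketch: resolve the tenant from the request subdomain and
// set search_path before any query runs. Illustrates the superseded
// schema-per-tenant design, not the current RLS approach.
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Npgsql;

public class TenantSchemaMiddleware
{
    private readonly RequestDelegate _next;

    public TenantSchemaMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context, NpgsqlConnection connection)
    {
        // e.g. "nexus.pos-platform.com" -> "nexus"
        var subdomain = context.Request.Host.Host.Split('.')[0];

        // search_path cannot be parameterized, so whitelist-validate the
        // value before interpolating it into SQL.
        if (!Regex.IsMatch(subdomain, "^[a-z0-9_]{1,30}$"))
        {
            context.Response.StatusCode = StatusCodes.Status400BadRequest;
            return;
        }

        if (connection.State != System.Data.ConnectionState.Open)
            await connection.OpenAsync();

        await using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = $"SET search_path TO tenant_{subdomain}, shared";
            await cmd.ExecuteNonQueryAsync();
        }

        await _next(context);
    }
}
```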
ADR-002: Offline-First POS Architecture
+==================================================================+
| ADR-002: Offline-First POS Architecture |
+==================================================================+
| Status: ACCEPTED |
| Date: 2025-12-29 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
POS terminals operate in retail environments where network
connectivity is unreliable:
1. Internet outages occur (ISP issues, weather, accidents)
2. WiFi can be congested during peak shopping hours
3. Store networks may have maintenance windows
4. Rural locations may have poor connectivity
A traditional online-required POS would:
- Block sales during outages (lost revenue)
- Show errors during slow connections (poor UX)
- Require manual workarounds (paper receipts)
Business requirements:
- Sales must NEVER be blocked by network issues
- Receipts must print immediately
- Data must eventually sync to central system
- Inventory should be reasonably accurate
DECISION
--------
We will implement OFFLINE-FIRST architecture for POS clients.
Key design elements:
1. Local SQLite database on each POS terminal
2. All operations work against local database first
3. Event queue for pending changes
4. Background sync when connectivity available
5. Conflict resolution for concurrent changes
Data flow:
User Action -> Local DB -> Event Queue -> [Background] -> Central API
CONSEQUENCES
------------
Positive:
+ Sales never blocked by network issues
+ Instant response time (local operations)
+ Resilient to any connectivity problem
+ Business continues regardless of server status
+ Better user experience for cashiers
Negative:
- Data is eventually consistent (not immediate)
- Inventory counts may drift until sync
- More complex architecture
- Conflict resolution logic required
- Local storage management needed
Risks:
- Data loss if local device fails before sync
- Inventory overselling possible during outages
- Conflict resolution edge cases
Mitigations:
- Aggressive sync when online (every 30 seconds)
- Local database backup to secondary storage
- Conservative inventory thresholds
- Clear offline indicator in UI
- Deterministic conflict resolution rules
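The data flow above (User Action -> Local DB -> Event Queue -> Central API) can be sketched in C#. The `Sale` and `PendingEvent` model classes, the `ISyncApi` client, and the sqlite-net-style calls are assumptions for illustration, not the specified service contracts.

```csharp
// Hypothetical sketch of the offline-first flow: write locally first,
// enqueue an event, and let a background loop drain the queue when online.
using System;
using System.Linq;
using System.Text.Json;
using System.Threading.Tasks;
using SQLite;

public class OfflineSaleRecorder
{
    private readonly SQLiteConnection _local; // terminal-local database
    private readonly ISyncApi _centralApi;    // assumed central API client

    public OfflineSaleRecorder(SQLiteConnection local, ISyncApi centralApi)
    {
        _local = local;
        _centralApi = centralApi;
    }

    // 1. User action -> local DB + event queue; never blocked by the network.
    public void RecordSale(Sale sale)
    {
        _local.Insert(sale);
        _local.Insert(new PendingEvent
        {
            Type = "SaleCompleted",
            Payload = JsonSerializer.Serialize(sale),
            CreatedAt = DateTimeOffset.UtcNow
        });
    }

    // 2. Background sync loop (e.g. every 30 seconds while online).
    public async Task DrainQueueAsync()
    {
        var pending = _local.Table<PendingEvent>().OrderBy(e => e.CreatedAt).ToList();
        foreach (var evt in pending)
        {
            if (!await _centralApi.PushAsync(evt)) break; // retry next cycle
            _local.Delete(evt);                           // remove only after ack
        }
    }
}
```

Deleting a queued event only after the central API acknowledges it is what makes the queue safe against crashes mid-sync: at worst an event is pushed twice, which the server can deduplicate.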
ADR-003: Event Sourcing for Sales Domain
+==================================================================+
| ADR-003: Event Sourcing for Sales Domain |
+==================================================================+
| Status: ACCEPTED |
| Date: 2025-12-29 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
The Sales domain has specific requirements that traditional CRUD
does not adequately address:
1. Complete audit trail required (PCI-DSS compliance)
2. Need to answer "what happened?" not just "what is?"
3. Offline clients need conflict-free merge capability
4. Historical analysis (sales trends, patterns)
5. Debugging production issues by replaying events
Traditional CRUD limitations:
- Only stores current state
- Updates overwrite history
- Hard to reconstruct past states
- Audit logs separate from data model
DECISION
--------
We will use EVENT SOURCING for the Sales aggregate.
Implementation:
1. Append-only event store in PostgreSQL
2. Events are the source of truth
3. Read models (projections) for queries
4. Snapshots for performance on long streams
Events captured:
- SaleCreated, SaleLineItemAdded, PaymentReceived, SaleCompleted
- SaleVoided, RefundProcessed
- All inventory changes (InventorySold, InventoryAdjusted)
NOT event-sourced (traditional CRUD):
- Products (read-heavy, infrequent changes)
- Employees (HR data, simple lifecycle)
- Locations (configuration data)
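The append-only store with snapshots every 100 events (implementation points 1 and 4 above) can be sketched as follows. This is an illustrative in-memory model only: in production the log and snapshots live in PostgreSQL, and the class and field names here are hypothetical.

```python
# Sketch of an append-only event stream with periodic snapshots.
SNAPSHOT_EVERY = 100

class SaleStream:
    def __init__(self):
        self.events = []     # append-only log (the source of truth)
        self.snapshots = {}  # version -> state, for fast rehydration

    def apply(self, state: dict, event: dict) -> dict:
        if event["type"] == "SaleLineItemAdded":
            state = {**state, "total": state["total"] + event["amount"]}
        elif event["type"] == "PaymentReceived":
            state = {**state, "paid": state["paid"] + event["amount"]}
        return state

    def append(self, event: dict) -> None:
        self.events.append(event)
        if len(self.events) % SNAPSHOT_EVERY == 0:
            self.snapshots[len(self.events)] = self.rehydrate()

    def rehydrate(self) -> dict:
        """Start from the latest snapshot (if any), fold only the tail."""
        version = max(self.snapshots, default=0)
        state = self.snapshots.get(version, {"total": 0, "paid": 0})
        for event in self.events[version:]:
            state = self.apply(state, event)
        return state

stream = SaleStream()
for _ in range(150):
    stream.append({"type": "SaleLineItemAdded", "amount": 1})
print(stream.rehydrate())  # folds snapshot@100 plus 50 tail events
```

Rehydration cost stays bounded by the snapshot interval rather than growing with stream length, which is the performance point made in the decision above.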
CONSEQUENCES
------------
Positive:
+ Complete audit trail built into data model
+ Temporal queries ("inventory on Dec 15 at 3pm")
+ Offline sync via event merge (append-only = no conflicts)
+ Debugging by event replay
+ Analytics on event streams
+ Natural fit for CQRS pattern
Negative:
- More complex than CRUD
- Requires event versioning strategy
- Projections must be rebuilt if logic changes
- Storage grows over time (mitigated by snapshots)
- Learning curve for developers
Risks:
- Event schema evolution complexity
- Projection bugs cause stale read models
- Performance degradation without proper snapshotting
Mitigations:
- Event versioning from day one
- Automated projection rebuild process
- Snapshot every 100 events
- Clear documentation and training
ADR-004: JWT + PIN Authentication
+==================================================================+
| ADR-004: JWT + PIN Authentication |
+==================================================================+
| Status: ACCEPTED |
| Date: 2025-12-29 |
| Deciders: Architecture Team, Security Team |
+==================================================================+
CONTEXT
-------
POS systems have unique authentication requirements:
1. API access needs secure, stateless authentication
2. Cashiers need quick clock-in at physical terminals
3. Sensitive actions need additional verification
4. Multiple employees may share a terminal
5. Terminals may be offline
Requirements:
- Strong authentication for API/Admin access
- Fast authentication for cashiers (< 2 seconds)
- Manager override capability
- Works offline for cashier PIN
Industry standards:
- JWT is standard for API authentication
- PINs are standard for POS quick access
- Password + MFA for admin portal access
DECISION
--------
We will implement a HYBRID authentication system:
1. JWT for API Authentication
- Admin portal uses email + password + optional MFA
- Issues JWT token (15 min access, 7 day refresh)
- Standard Bearer token in Authorization header
2. PIN for POS Terminal Access
- 4-6 digit PIN per employee
- Stored as bcrypt hash in database
- Used for: clock-in, sale attribution, drawer access
3. Manager Override
- Sensitive actions require manager PIN
- Void, large discount, price override
- Manager enters their PIN to authorize
4. Offline PIN Validation
- Employee records with PIN hashes cached locally
- Validated against local cache when offline
- Sync employee changes when online
CONSEQUENCES
------------
Positive:
+ Secure API access with industry-standard JWT
+ Fast cashier workflow with PIN
+ Manager oversight on sensitive operations
+ Works offline for POS operations
+ Clear audit trail (who did what)
Negative:
- Two authentication systems to maintain
- PIN is less secure than password (brute force)
- Local PIN cache could be extracted
- Token refresh complexity
Risks:
- PIN guessing attacks
- Stolen JWT tokens
- Stale employee cache (terminated employee)
Mitigations:
- Rate limiting on PIN attempts (3 failures = lockout)
- Short JWT expiry (15 minutes)
- Aggressive employee sync (every 5 minutes)
- PIN attempt logging and alerting
- Secure local storage encryption
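The offline PIN check with the 3-strike lockout from the mitigations above can be sketched like this. The ADR mandates bcrypt; this sketch substitutes salted SHA-256 from the standard library only so it runs dependency-free, and it is not a production hashing choice. All names are hypothetical.

```python
import hashlib
import secrets

MAX_ATTEMPTS = 3  # per the mitigation: 3 failures = lockout

def hash_pin(pin: str, salt: bytes) -> bytes:
    # Stand-in for bcrypt (see lead-in caveat).
    return hashlib.sha256(salt + pin.encode()).digest()

class LocalPinCache:
    def __init__(self):
        self.records = {}   # employee_id -> (salt, pin_hash)
        self.failures = {}  # employee_id -> consecutive failure count

    def store(self, employee_id: str, pin: str) -> None:
        salt = secrets.token_bytes(16)
        self.records[employee_id] = (salt, hash_pin(pin, salt))

    def verify(self, employee_id: str, pin: str) -> bool:
        if self.failures.get(employee_id, 0) >= MAX_ATTEMPTS:
            return False  # locked out until a manager resets or sync clears it
        salt, stored = self.records[employee_id]
        if secrets.compare_digest(stored, hash_pin(pin, salt)):
            self.failures[employee_id] = 0
            return True
        self.failures[employee_id] = self.failures.get(employee_id, 0) + 1
        return False

cache = LocalPinCache()
cache.store("emp-42", "1234")
print(cache.verify("emp-42", "1234"))   # True
for _ in range(3):
    cache.verify("emp-42", "0000")      # three failures -> lockout
print(cache.verify("emp-42", "1234"))   # False: locked out despite correct PIN
```

Note the constant-time comparison (`secrets.compare_digest`) and that lockout applies even to a subsequently correct PIN, which is what blunts brute-force guessing.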
ADR-005: PostgreSQL as Primary Database
+==================================================================+
| ADR-005: PostgreSQL as Primary Database |
+==================================================================+
| Status: ACCEPTED |
| Date: 2025-12-29 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
We need a database that supports:
1. Schema-per-tenant multi-tenancy
2. JSONB for flexible event storage
3. Strong ACID guarantees for financial data
4. Good performance at scale
5. Mature ecosystem and tooling
Options considered:
- PostgreSQL: Schema support, JSONB, mature
- MySQL: Popular, but weaker schema support
- SQL Server: Good, but licensing costs
- MongoDB: Document store, no ACID, no schemas
- CockroachDB: Distributed, but complexity
DECISION
--------
We will use POSTGRESQL 16 as the primary database.
Justifications:
1. Native Row-Level Security (RLS) for multi-tenancy isolation
(Originally: schema support; updated per ADR-001 supersession)
2. Excellent JSONB for event storage
3. Strong ACID for financial transactions
4. Proven at scale (Instagram, Uber, etc.)
5. Rich extension ecosystem (PostGIS, etc.)
6. Open source, no licensing costs
7. Excellent tooling (pgAdmin, pg_dump)
CONSEQUENCES
------------
Positive:
+ Native RLS for multi-tenant data isolation (see ADR-001 supersession)
+ JSONB enables flexible event data
+ Strong consistency guarantees
+ Mature, well-documented
+ No licensing costs
+ Excellent community support
Negative:
- Single point of failure without replication
- Requires PostgreSQL expertise
- Not as horizontally scalable as NoSQL
- Schema migrations need coordination
Mitigations:
- Streaming replication for HA
- Regular backups with pg_dump
- Team training on PostgreSQL
- Migration automation tooling
ADR-006: ASP.NET Core for Central API
+==================================================================+
| ADR-006: ASP.NET Core for Central API |
+==================================================================+
| Status: ACCEPTED |
| Date: 2025-12-29 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
We need a backend framework that supports:
1. High-performance API serving
2. Strong typing for complex domain
3. Entity Framework for database access
4. SignalR for real-time features
5. Docker deployment
6. Team expertise alignment
Options considered:
- ASP.NET Core (C#): Performance, typing, EF Core
- Node.js (Express): Fast dev, but weak typing
- Go (Gin): Performance, but less ecosystem
- Python (FastAPI): ML integration, but slower
- Java (Spring): Enterprise, but verbose
Team context:
- Existing .NET experience from Bridge project
- C# used for MAUI mobile app
- Entity Framework expertise available
DECISION
--------
We will use ASP.NET CORE 8.0 for the Central API.
Justifications:
1. Exceptional performance (near Go levels)
2. Strong typing catches bugs at compile time
3. Entity Framework Core for PostgreSQL
4. Built-in SignalR for real-time
5. Excellent Docker support
6. Team already proficient in C#
7. Same language as POS client and mobile app
CONSEQUENCES
------------
Positive:
+ High performance for API workloads
+ Strong typing reduces runtime errors
+ Seamless EF Core integration
+ Built-in dependency injection
+ Excellent tooling (Visual Studio, Rider)
+ C# across entire stack (API, Client, Mobile)
Negative:
- Larger runtime than Go or Rust
- Windows-centric tooling (though Linux deployment)
- C# developers cost more than Node.js
Mitigations:
- Alpine-based Docker images minimize size
- Use VS Code or Rider on Mac/Linux
- Leverage existing team expertise
ADR Index
| ADR | Title | Status | Date |
|---|---|---|---|
| ADR-001 | Schema-Per-Tenant Multi-Tenancy | Superseded (by Row-Level RLS, Ch 04 L.10A.4) | 2025-12-29 |
| ADR-002 | Offline-First POS Architecture | Accepted | 2025-12-29 |
| ADR-003 | Event Sourcing for Sales Domain | Accepted | 2025-12-29 |
| ADR-004 | JWT + PIN Authentication | Accepted | 2025-12-29 |
| ADR-005 | PostgreSQL as Primary Database | Accepted | 2025-12-29 |
| ADR-006 | ASP.NET Core for Central API | Accepted | 2025-12-29 |
| ADR-013 | RFID Configuration in Tenant Admin | Accepted | 2026-01-01 |
Future ADRs (Planned)
| ADR | Title | Status |
|---|---|---|
| ADR-007 | React for Admin Portal | Proposed |
| ADR-008 | Electron vs Tauri for POS Client | Proposed |
| ADR-009 | Redis for Session & Cache | Proposed |
| ADR-010 | Shopify Sync Strategy | Proposed |
| ADR-011 | Payment Gateway Integration | Proposed |
| ADR-012 | Logging and Monitoring Stack | Proposed |
ADR-013: RFID Configuration Embedded in Tenant Admin Portal
+==================================================================+
| ADR-013: RFID Configuration Embedded in Tenant Admin Portal |
+==================================================================+
| Status: ACCEPTED |
| Date: 2026-01-01 |
| Deciders: Architecture Team |
+==================================================================+
CONTEXT
-------
RapOS includes RFID inventory capabilities via the Raptag mobile app.
The question arose: where should RFID configuration (device management,
printer setup, tag encoding settings, templates) be managed?
We evaluated three options:
Option A: Embed in Tenant Admin Portal (app.rapos.com)
- RFID settings as feature-flagged section in existing portal
- Uses existing authentication, permissions, navigation
- Shared context with products, locations, users
Option B: Separate RFID Portal (rfid.rapos.com)
- Dedicated portal just for RFID configuration
- 4th portal in the architecture
- Independent scaling and development
Option C: Hybrid Approach
- Basic settings in Tenant Admin
- Advanced configuration in separate portal
- Users navigate between portals
Research was conducted on major RFID vendors:
- SML Clarity: Single platform, modular components
- Checkpoint HALO/ItemOptix: Unified SaaS platform
- Avery Dennison atma.io: Role-based dashboards in one platform
- Impinj ItemSense: Single Management Console
Key finding: NO major RFID vendor uses separate portals for RFID
configuration. All embed RFID features within unified platforms.
DECISION
--------
We will EMBED RFID configuration in the Tenant Admin Portal (Option A).
Implementation:
- Settings > RFID section (feature-flagged)
- Devices tab: Claim codes, device list, release
- Printers tab: IP configuration, test connectivity
- Tag Configuration tab: EPC prefix (read-only), variance thresholds
- Templates tab: Label template library
Mobile app downloads configuration from central API on startup.
No RFID configuration in the mobile app itself.
CONSEQUENCES
------------
Positive:
+ Matches industry pattern (SML, Checkpoint, Avery Dennison)
+ Single login/URL for all tenant management
+ Shared context with products, locations, users
+ Lower development cost (one portal, not two)
+ Progressive disclosure manages complexity
+ Same permissions system applies to RFID
Negative:
- Could become bloated if RFID features grow significantly
- Enterprise customers might want dedicated RFID admin
- Feature flags add slight complexity
Risks:
- Tenant Admin may feel "cluttered" with many features
- RFID power users may want more dedicated experience
Mitigations:
- Use progressive disclosure (collapse advanced settings)
- Role-based visibility (hide RFID from non-RFID users)
- Monitor feedback; re-evaluate if enterprise demand grows
- Feature-flagged sections can be extracted later if needed
Re-evaluation Triggers:
- Multiple enterprise customers (100+ stores) request separation
- RFID feature count exceeds 20+ configuration screens
- Evidence that RFID admins are different people than Tenant admins
How to Propose a New ADR
ADR Proposal Process
====================
1. Copy the ADR template
2. Fill in Context, Decision, Consequences
3. Set Status to "proposed"
4. Submit for architecture review
5. Discuss in architecture meeting
6. Update based on feedback
7. Set Status to "accepted" when approved
8. Add to ADR Index
MADR Template (Markdown Any Decision Records)
We use the MADR (Markdown Any Decision Records) format, which is more comprehensive than the basic ADR format and better suited for complex architectural decisions.
Full MADR Template
# ADR-XXX: [Short Title of Solved Problem and Solution]
## Status
[proposed | accepted | deprecated | superseded by ADR-YYY]
## Date
YYYY-MM-DD
## Decision-Makers
- [Name/Role 1]
- [Name/Role 2]
## Technical Story
[Link to ticket/issue: JIRA-123, GitHub Issue #456]
## Context and Problem Statement
[Describe the context and problem statement, e.g., in free form
using two to three sentences or in the form of an illustrative
story. You may want to articulate the problem in form of a question.]
## Decision Drivers
* [Driver 1, e.g., a force, facing concern, …]
* [Driver 2, e.g., a force, facing concern, …]
* [Driver 3, e.g., a force, facing concern, …]
## Considered Options
1. [Option 1]
2. [Option 2]
3. [Option 3]
4. [Option 4]
## Decision Outcome
**Chosen Option**: "[Option X]"
### Justification
[Justification for why this option was chosen. Reference the
decision drivers and explain how this option best addresses them.]
### Positive Consequences
* [e.g., improvement of quality attribute satisfaction, follow-up
decisions required, …]
* …
### Negative Consequences
* [e.g., compromising quality attribute, follow-up decisions required,
technical debt introduced, …]
* …
## Pros and Cons of the Options
### [Option 1]
[Example: Schema-per-tenant multi-tenancy]
**Pros:**
* Good, because [argument a]
* Good, because [argument b]
**Cons:**
* Bad, because [argument c]
* Bad, because [argument d]
### [Option 2]
[Example: Row-level multi-tenancy]
**Pros:**
* Good, because [argument a]
* Good, because [argument b]
**Cons:**
* Bad, because [argument c]
### [Option 3]
[Example: Database-per-tenant]
**Pros:**
* Good, because [argument a]
**Cons:**
* Bad, because [argument b]
* Bad, because [argument c]
## Links
* [Link type] [Link to ADR] <!-- example: Refined by ADR-007 -->
* [Link type] [Link to external resource]
* Supersedes ADR-XXX
* Related to ADR-YYY
## Notes
[Any additional notes, discussion points, or future considerations]
MADR Example: Kafka Selection
# ADR-014: Apache Kafka for Event Streaming
## Status
accepted
## Date
2026-01-15
## Decision-Makers
- Architecture Team
- Infrastructure Team
## Technical Story
ARCH-456: Select event streaming platform for POS event sourcing
## Context and Problem Statement
Our POS platform uses event sourcing for the Sales and Inventory
domains. We need an event streaming platform that supports:
- Event replay for new consumers
- Durable storage for audit compliance
- High throughput during peak retail periods (Black Friday)
- Multi-datacenter replication for disaster recovery
Which event streaming platform should we use?
## Decision Drivers
* Replayability - New analytics services must process historical events
* Durability - Events must survive broker failures (PCI compliance)
* Throughput - Handle 10,000+ events/second during peak
* Ecosystem - Good client libraries for .NET
* Operations - Team can manage without dedicated staff
## Considered Options
1. Apache Kafka
2. RabbitMQ with Shovel plugin
3. Amazon Kinesis
4. Redis Streams
5. PostgreSQL LISTEN/NOTIFY
## Decision Outcome
**Chosen Option**: "Apache Kafka (with KRaft mode)"
### Justification
Kafka is the only option that provides true event replayability with
configurable retention. New consumers can start from the beginning
of the log and process all historical events. This is critical for:
- Adding new analytics modules
- Rebuilding projections after bugs
- Audit investigations
KRaft mode eliminates ZooKeeper dependency, simplifying operations.
### Positive Consequences
* Complete replayability for compliance and analytics
* Proven at massive scale (LinkedIn, Uber)
* Strong .NET client (Confluent.Kafka)
* Schema Registry for event versioning
### Negative Consequences
* More complex than RabbitMQ
* Requires understanding of partitioning
* Higher resource usage than simpler queues
## Pros and Cons of the Options
### Apache Kafka
**Pros:**
* Good, because events are retained for configurable duration
* Good, because consumers can replay from any offset
* Good, because it handles 100K+ messages/second
* Good, because KRaft mode simplifies deployment
**Cons:**
* Bad, because it requires more operational knowledge
* Bad, because partition management adds complexity
### RabbitMQ with Shovel
**Pros:**
* Good, because it's simpler to operate
* Good, because team has existing experience
**Cons:**
* Bad, because messages are deleted after consumption
* Bad, because replay requires external archival
### Amazon Kinesis
**Pros:**
* Good, because it's fully managed
* Good, because it has replay capability
**Cons:**
* Bad, because of vendor lock-in
* Bad, because pricing is complex at scale
### Redis Streams
**Pros:**
* Good, because it's simple
* Good, because it's low latency
**Cons:**
* Bad, because durability is limited
* Bad, because it's not designed for long-term storage
### PostgreSQL LISTEN/NOTIFY
**Pros:**
* Good, because no additional infrastructure
**Cons:**
* Bad, because it doesn't scale
* Bad, because messages are ephemeral
## Links
* Refined by ADR-015 (Schema Registry Selection)
* Related to ADR-003 (Event Sourcing for Sales Domain)
* [Kafka Documentation](https://kafka.apache.org/documentation/)
## Notes
Evaluated during Q1 2026 architecture review. Confluent Cloud was
considered but rejected due to cost; self-hosted Kafka preferred.
**UPDATE (v3.0.0)**: Kafka is **deferred to v2.0**. Per the Architecture
Styles analysis (Chapter 04, Section L.4A.2),
v1.0 uses PostgreSQL event tables with LISTEN/NOTIFY for event notification
and Transactional Outbox for guaranteed delivery. This ADR remains valid
for v2.0 planning when scale justifies the Kafka operational overhead.
ADR Tooling & Automation
Recommended Tools
| Tool | Purpose | Installation |
|---|---|---|
| adr-tools | CLI for creating/managing ADRs | brew install adr-tools |
| Log4brains | ADR documentation site generator | npm install -g log4brains |
| adr-viewer | Web-based ADR viewer | Docker image available |
ADR Tools CLI
# Install adr-tools
brew install adr-tools # macOS
# or
git clone https://github.com/npryce/adr-tools.git  # Linux (add src/ to PATH)
# Initialize ADR directory
adr init docs/adr
# Create new ADR
adr new "Use Kafka for Event Streaming"
# Creates: docs/adr/0014-use-kafka-for-event-streaming.md
# Supersede an ADR
adr new -s 3 "Replace Event Sourcing with Outbox Pattern"
# Creates new ADR that supersedes ADR-003
# List all ADRs
adr list
# Generate ADR index
adr generate toc > docs/adr/README.md
Log4brains Integration
Log4brains generates a searchable documentation website from ADRs:
# Install Log4brains
npm install -g log4brains
# Initialize in project
log4brains init
# Start preview server
log4brains preview
# Build static site
log4brains build
# Deploy to GitHub Pages
log4brains build --basePath /pos-platform-adr
# .github/workflows/adr-docs.yml
name: ADR Documentation
on:
push:
branches: [main]
paths:
- 'docs/adr/**'
jobs:
build-adr-site:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for dates
- uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Log4brains
run: npm install -g log4brains
- name: Build ADR site
run: log4brains build --basePath /pos-platform-adr
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: .log4brains/out
ADR Linting
# .github/workflows/adr-lint.yml
name: ADR Lint
on:
pull_request:
paths:
- 'docs/adr/**'
jobs:
lint-adr:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Validate ADR Format
run: |
for file in docs/adr/*.md; do
# Check required sections
if ! grep -q "## Status" "$file"; then
echo "ERROR: $file missing Status section"
exit 1
fi
if ! grep -q "## Context" "$file" && ! grep -q "## Context and Problem Statement" "$file"; then
echo "ERROR: $file missing Context section"
exit 1
fi
if ! grep -q "## Decision" "$file" && ! grep -q "## Decision Outcome" "$file"; then
echo "ERROR: $file missing Decision section"
exit 1
fi
done
echo "All ADRs pass validation"
- name: Check ADR Numbering
run: |
# Ensure sequential numbering
expected=1
for file in docs/adr/[0-9]*.md; do
num=$((10#$(basename "$file" | grep -o '^[0-9]*')))  # strip leading zeros
if [ "$num" -ne "$expected" ]; then
echo "WARNING: Expected ADR-$expected, found ADR-$num"
fi
expected=$((expected + 1))
done
ADR Review Checklist
# ADR Review Checklist
Before accepting an ADR, verify:
## Structure
- [ ] Uses MADR template
- [ ] Has clear title
- [ ] Status is set correctly
- [ ] Date is current
- [ ] Decision-makers are listed
## Content Quality
- [ ] Context clearly explains the problem
- [ ] Decision drivers are explicit
- [ ] At least 3 options were considered
- [ ] Pros/cons are documented for each option
- [ ] Chosen option justification references drivers
## Completeness
- [ ] Positive consequences listed
- [ ] Negative consequences listed (be honest!)
- [ ] Risks identified
- [ ] Mitigations proposed for risks
- [ ] Links to related ADRs
## Traceability
- [ ] Linked to technical story/ticket
- [ ] References relevant documentation
- [ ] Supersedes/relates to other ADRs if applicable
## Approval
- [ ] Architecture team reviewed
- [ ] Security team reviewed (if applicable)
- [ ] Infrastructure team reviewed (if applicable)
Summary
These Architecture Decision Records capture the foundational technical decisions for the POS Platform:
| ADR | Key Decision | Primary Benefit |
|---|---|---|
| ADR-001 | Tenant isolation via tenant_id + PostgreSQL RLS policies | Strong tenant isolation without per-schema sprawl |
| ADR-002 | Offline-first | Sales never blocked by network |
| ADR-003 | Event sourcing | Complete audit trail and temporal queries |
| ADR-004 | JWT + PIN | Secure API + fast cashier workflow |
| ADR-005 | PostgreSQL | Native RLS support and JSONB flexibility |
| ADR-006 | ASP.NET Core | Performance and unified C# stack |
| ADR-013 | RFID in Tenant Admin | Industry-standard pattern, shared context |
These decisions form the architectural foundation upon which the rest of the system is built.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | II - Architecture |
| Chapter | 02 of 32 |
Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2025-12-29 | Initial ADRs (001-006) |
| 2.0.0 | 2026-01-01 | Added ADR-013 (RFID Configuration), MADR template, tooling section |
| 3.0.0 | 2026-02-22 | ADR-001 marked SUPERSEDED (Schema-Per-Tenant replaced by Row-Level RLS per Ch 04 L.10A.4); added Kafka v2.0 deferral note to ADR-014 example (per Ch 04 L.4A.2); fixed Next Chapter link; renumbered chapter references for v3.0.0 |
Next Chapter: Chapter 03: Architecture Characteristics
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 03: Architecture Characteristics
Purpose
This appendix documents the formal architecture characteristics analysis for the Nexus POS Platform. It identifies the driving quality attributes that shape architectural decisions and provides justification for each characteristic’s priority.
Source: Architecture Characteristics Worksheet v2.0 (Expert Panel-Reviewed)
System/Project: Nexus - Omnichannel Retail POS (Multi-Tenant Cloud)
Architect/Team: Cloud AI Architecture Agents
Domain/Quantum: Retail / Inventory Management / Multi-Platform Integration
Date Modified: February 19, 2026
Panel Review Score: 6.50/10 → Updated per 4-member expert panel recommendations
K.1 Top 9 Driving Characteristics
These are the primary quality attributes that drive architectural decisions. They are listed in priority order.
| Rank | Characteristic | Priority Level | Blueprint Reference |
|---|---|---|---|
| 1 | Availability | Critical | Ch 04: Architecture Styles, Section L.10A.1 |
| 2 | Interoperability | Critical | Ch 13: Integration Patterns |
| 3 | Data Consistency | Critical | Ch 04: Architecture Styles, Section L.4A |
| 4 | Security | Elevated | Ch 12: Security & Auth, Ch 26: Security Compliance |
| 5 | Compliance (NEW) | Critical | Ch 13: Integrations, Ch 26: Security Compliance |
| 6 | Modifiability | High | Ch 04: Architecture Styles, Section L.9A |
| 7 | Scalability | High | Ch 04: Architecture Styles, Section L.10A.4 |
| 8 | Configurability (ELEVATED from implicit) | High | Ch 04: Architecture Styles Section L.10A.4, BRD Module 5 |
| 9 | Performance | High | Ch 09: Indexes & Performance |
K.2 Characteristics with Definitions and Justifications
K.2.1 Availability (Rank 1)
| Attribute | Value |
|---|---|
| Definition | The amount of uptime of a system; usually measured in 9’s (e.g., 99.9%). |
| Priority | Critical |
| Blueprint Reference | Ch 04: Architecture Styles, Section L.10A.1 |
Justification:
“Offline First” capability is non-negotiable. Physical stores must be able to process transactions even if internet connectivity fails.
Architectural Implications:
- Local SQLite database on POS terminals
- Event queue for offline transaction storage
- Automatic sync when connectivity restored
- Graceful degradation patterns
Offline-First Justification
Retail network reality makes offline-first a non-negotiable design constraint:
Retail Network Reality
======================
Internet Outage at 2pm on Black Friday?
Traditional POS: "Network Error - Cannot Process Sale" (DISASTER)
Offline-First POS: Works normally, syncs when online (BUSINESS CONTINUES)
Slow WiFi during holiday rush?
Traditional POS: 5-second delay per sale (FRUSTRATED CUSTOMERS)
Offline-First POS: Instant response, sync in background (HAPPY CUSTOMERS)
Server maintenance window?
Traditional POS: Store closes or uses manual paper (LOST REVENUE)
Offline-First POS: No impact to operations (FULL REVENUE)
The offline-first design is governed by five core principles (from Ch 04, Section L.10A.1):
| Principle | Description |
|---|---|
| Local-First | All operations work against local database first |
| Async Sync | Sync happens in background, not blocking UI |
| Queue Everything | Changes queue when offline, sync when online |
| Conflict Resolution | Deterministic rules for conflicting changes |
| Eventual Consistency | Accept that data may be temporarily out of sync |
Event Sourcing Enables Offline Sync
Event sourcing (Ch 04 Section L.4A) directly supports availability by enabling offline event queues. When the POS client is offline, all business operations (sales, payments, inventory adjustments) are captured as immutable events stored locally. When connectivity is restored, these events are synced to the central server and merged deterministically. Because events are append-only and each has a unique ID, offline sales never conflict – they simply merge into the central event stream. This makes event sourcing a foundational enabler of the offline-first architecture.
K.2.2 Interoperability (Rank 2)
| Attribute | Value |
|---|---|
| Definition | The ability of the system to interface and interact with other systems to complete a business request. |
| Priority | Critical |
| Blueprint Reference | Ch 13: Integration Patterns |
Justification:
BRD v18.0 Module 6 defines integration with 6 provider families across fundamentally different protocols:
| Provider Family | Auth Model | Rate Limiting | Sync Cadence |
|---|---|---|---|
| Shopify | OAuth 2.0 / PKCE | 50 points/second (GraphQL) | Real-time webhooks |
| Amazon SP-API | OAuth / Login with Amazon (LWA) | Burst + restore token bucket | 2-minute polling |
| Google Merchant | API key + Service Account | Quota-based (daily limits) | 2x/day batch + real-time local inventory |
| Payment Processor | API key / OAuth | Per-transaction | Real-time |
| Email (SMTP) | SMTP credentials | Provider-specific | Event-triggered |
| Carrier APIs | API key | Per-request | On-demand |
The system must maintain an Anti-Corruption Layer (ACL) per provider to prevent external schema changes from propagating into the core POS domain. BRD Section 6.2 mandates:
- Provider Abstraction: IIntegrationProvider interface with 5 standard methods per provider
- Circuit Breaker: 5 failures within 60 seconds triggers OPEN state; 30-second cooldown before HALF_OPEN
- Idempotency Framework: 24-hour deduplication windows with SHA-256 keying (Section 6.2.5)
- Transactional Outbox: Atomic inventory reservation + guaranteed event publication
Architectural Implications:
- Anti-Corruption Layer (ACL) per provider preventing external schema leakage
- Provider Abstraction pattern (IIntegrationProvider) for uniform integration interface
- Circuit breaker pattern (CLOSED → OPEN → HALF_OPEN state machine)
- Idempotency framework with configurable dedup windows
- Transactional Outbox for guaranteed event delivery
- Rate limiter per provider with adaptive throttling
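The CLOSED → OPEN → HALF_OPEN state machine above can be sketched as follows, using the thresholds from BRD Section 6.2 (5 failures within 60 seconds opens the circuit; 30-second cooldown before a trial call). The class and method names are hypothetical; the production implementation is C# per ADR-006.

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=5, window=60.0, cooldown=30.0,
                 clock=time.monotonic):
        self.threshold, self.window, self.cooldown = threshold, window, cooldown
        self.clock = clock
        self.failures = []   # timestamps of recent failures
        self.opened_at = None

    @property
    def state(self) -> str:
        if self.opened_at is None:
            return "CLOSED"
        if self.clock() - self.opened_at >= self.cooldown:
            return "HALF_OPEN"  # cooldown elapsed: allow one trial call
        return "OPEN"

    def record_failure(self) -> None:
        now = self.clock()
        if self.state == "HALF_OPEN":
            self.opened_at = now  # trial call failed: re-open
            return
        # Keep only failures inside the sliding window, then count.
        self.failures = [t for t in self.failures if now - t < self.window]
        self.failures.append(now)
        if len(self.failures) >= self.threshold:
            self.opened_at = now

    def record_success(self) -> None:
        self.failures, self.opened_at = [], None  # close the circuit

fake_now = [0.0]
cb = CircuitBreaker(clock=lambda: fake_now[0])
for _ in range(5):
    cb.record_failure()
print(cb.state)      # OPEN: 5 failures inside the 60 s window
fake_now[0] += 30.0
print(cb.state)      # HALF_OPEN: cooldown elapsed, one trial call allowed
cb.record_success()
print(cb.state)      # CLOSED: trial succeeded
```

One breaker instance would be held per provider, so a Shopify outage cannot starve Amazon or Google Merchant syncs.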
K.2.3 Data Consistency (Rank 3)
| Attribute | Value |
|---|---|
| Definition | The data across the system is in sync and consistent across databases and tables. |
| Priority | Critical |
| Blueprint Reference | Ch 04: Architecture Styles, Section L.4A |
Justification:
Critical for handling the “Inventory Sync” race conditions between physical shoppers and online orders to prevent overselling.
Architectural Implications:
- Event Sourcing for Sales and Inventory domains
- Eventual consistency model with conflict resolution
- Optimistic concurrency control
- Idempotent event handlers
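The "idempotent event handlers" implication above pairs with the SHA-256 idempotency keying mandated for integrations: key each event by a digest of its canonical form, and make redelivery a no-op. A minimal Python sketch under those assumptions (names are hypothetical; in production the seen-key set would be a database table with a retention window, not an in-memory set):

```python
import hashlib
import json

def idempotency_key(event: dict) -> str:
    # Canonical JSON (sorted keys) so the same event always hashes the same.
    canonical = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class InventoryProjection:
    def __init__(self):
        self.quantities = {}  # sku -> on-hand quantity
        self.seen = set()     # idempotency keys already applied

    def handle(self, event: dict) -> None:
        key = idempotency_key(event)
        if key in self.seen:
            return            # duplicate delivery: no-op
        self.seen.add(key)
        sku = event["sku"]
        self.quantities[sku] = self.quantities.get(sku, 0) + event["delta"]

proj = InventoryProjection()
sale = {"event_id": "e1", "sku": "NXP001", "delta": -2}
proj.handle(sale)
proj.handle(sale)             # redelivered: applied only once
print(proj.quantities)        # {'NXP001': -2}
```

At-least-once delivery plus idempotent handlers yields effectively exactly-once state changes, which is what prevents a re-synced offline sale from decrementing inventory twice.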
Event Sourcing Justification
Traditional CRUD systems store only current state. Event sourcing stores every change as an immutable event, providing the full history needed for audit trails, temporal queries, and offline conflict resolution (from Ch 04 Section L.4A):
Traditional CRUD vs Event Sourcing
==================================
CRUD Approach:
+------------------+
| inventory_items |
|------------------|
| sku: NXP001 |
| quantity: 45 | <- Only current state
| updated_at: now |
+------------------+
Event Sourcing Approach:
+------------------+
| events |
|------------------|
| InventoryReceived: +100 @ 2025-01-01 09:00 |
| ItemSold: -2 @ 2025-01-01 10:15 |
| ItemSold: -1 @ 2025-01-01 11:30 |
| ItemSold: -3 @ 2025-01-01 14:22 |
| AdjustmentMade: -48 @ 2025-01-01 16:00 | <- Caught discrepancy!
| ItemSold: -1 @ 2025-01-02 09:15 |
| Current State: 45 (sum of all events) |
+------------------+
Benefits for Retail POS:
| Benefit | Description |
|---|---|
| Complete Audit Trail | Every sale, void, refund, adjustment is recorded forever |
| Temporal Queries | “What was our inventory on December 15th at 3pm?” |
| Offline Sync | Events queue locally, merge when online |
| Conflict Resolution | Compare event streams, not states |
| Debugging | Replay events to reproduce issues |
| Compliance | PCI-DSS, SOX require transaction logs |
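The temporal-query benefit above falls out of the event model directly: current state is a fold over all events, and state "as of" any instant is the same fold truncated at that instant. A sketch with invented event data:

```python
# Each event is (ISO-8601 timestamp, quantity delta).
events = [
    ("2025-01-01T09:00", +100),  # InventoryReceived
    ("2025-01-01T10:15", -2),    # ItemSold
    ("2025-01-01T11:30", -1),    # ItemSold
    ("2025-01-01T14:22", -3),    # ItemSold
]

def quantity_at(events, instant: str) -> int:
    """Sum all deltas at or before 'instant'. ISO-8601 strings compare
    correctly in lexicographic order, so no date parsing is needed."""
    return sum(delta for ts, delta in events if ts <= instant)

print(quantity_at(events, "2025-01-01T12:00"))  # 97: before the 14:22 sale
print(quantity_at(events, "2025-01-02T00:00"))  # 94: all events applied
```

No separate history table is needed; the question "what was our inventory on December 15th at 3pm?" is just a different upper bound on the same fold.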
K.2.4 Security (Rank 4 - Elevated)
| Attribute | Value |
|---|---|
| Definition | The ability of the system to prevent malicious actions, protect credentials, and restrict access across all trust boundaries. |
| Priority | Elevated (due to multi-platform OAuth, GenAI code generation, PCI-DSS 4.0) |
| Blueprint Reference | Ch 12: Security & Auth, Ch 26: Security Compliance |
Justification:
BRD v18.0 elevates security from basic PCI-DSS to 5 concrete security sub-domains:
| Security Sub-Domain | Scope | Key Requirements |
|---|---|---|
| 1. Authentication & Authorization | Multi-provider OAuth lifecycle | 3 OAuth providers (Shopify, Amazon LWA, Google), MFA for admin users (PCI-DSS 4.0 Req 8.4.2), role-based access control |
| 2. Credential Lifecycle Management | Secrets vault and rotation | HashiCorp Vault for 6 credential types, automated 90-day rotation, tenant-specific encryption keys, emergency rotation procedures |
| 3. Supply Chain Security | Dependency and package safety | Snyk/OWASP SCA with package firewall, SBOM generation (PCI-DSS 4.0 Req 6.3.2), real-time vulnerability scanning |
| 4. GenAI Governance | AI-generated code safety | 6-gate Security Test Pyramid: SAST + SCA + Secrets Detection + Architecture Conformance (ArchUnit/NetArchTest) + Contract Tests (Pact) + Manual Security Review |
| 5. PCI-DSS 4.0 Compliance | Payment card security | SAQ-A boundaries, FIM via Wazuh/OSSEC (Req 11.5.1), session management, audit trail retention (365 days), vulnerability scanning (Req 11.3.1) |
Architectural Implications:
- HashiCorp Vault (Docker container) for centralized credential management
- 6-gate Security Test Pyramid in CI/CD pipeline
- Wazuh/OSSEC agents on all POS terminals for File Integrity Monitoring
- Architecture conformance tests (ArchUnit/NetArchTest) enforcing module boundaries
- Pact contract tests against Shopify/Amazon/Google sandbox APIs
- Audit trail with INTEGRATION category for OAuth operations and webhook verification
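The webhook verification mentioned above typically means validating an HMAC signature over the raw request body before any processing. A minimal sketch (Shopify-style base64-encoded HMAC-SHA256; the secret value below is illustrative, not a real credential):

```python
import base64
import hashlib
import hmac

def verify_webhook(raw_body: bytes, header_hmac: str, shared_secret: str) -> bool:
    """Verify a webhook signature (Shopify-style: base64-encoded HMAC-SHA256
    of the raw request body). Constant-time comparison avoids timing attacks."""
    digest = hmac.new(shared_secret.encode(), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, header_hmac)

secret = "example-secret"   # illustrative value only
body = b'{"id": 42}'
good = base64.b64encode(hmac.new(secret.encode(), body, hashlib.sha256).digest()).decode()
assert verify_webhook(body, good, secret)
assert not verify_webhook(b'{"id": 43}', good, secret)  # tampered body rejected
```

Each verification outcome would also be written to the audit trail under the INTEGRATION category, per the implication above.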
POS Security Layers
The system implements a 5-layer defense-in-depth security model (from Ch 04 Section L.9A):
Security Layers
===============
+------------------------------------------------------------------+
| INTERNET |
+---------------------------+--------------------------------------+
|
v
+---------------------------+--------------------------------------+
| TLS TERMINATION |
| (Let's Encrypt) |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| API GATEWAY |
| +-----------------------+ +-----------------------+ |
| | Rate Limiting | | IP Whitelisting | |
| | 100 req/min/client | | (Admin Portal only) | |
| +-----------------------+ +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHENTICATION |
| +-----------------------+ +-----------------------+ |
| | JWT Validation | | PIN Verification | |
| | - Signature check | | - Employee clock-in | |
| | - Expiry check | | - Sensitive actions | |
| | - Tenant claim | +-----------------------+ |
| +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHORIZATION |
| +-----------------------+ +-----------------------+ |
| | Role-Based (RBAC) | | Permission Policies | |
| | - Admin | | - can:create_sale | |
| | - Manager | | - can:void_sale | |
| | - Cashier | | - can:view_reports | |
| +-----------------------+ +-----------------------+ |
+------------------------------------------------------------------+
Each layer provides independent protection: TLS encrypts data in transit, the API gateway enforces rate limits and IP restrictions, JWT authentication validates identity and tenant context, PIN verification secures sensitive in-store actions, and RBAC authorization controls access to specific operations.
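The authorization layer can be sketched as a policy lookup keyed by role. The role-to-permission mapping below mirrors the diagram but is an illustrative assumption, not the Blueprint's actual policy store:

```python
# Hypothetical role -> permission mapping, mirroring the RBAC diagram above.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "admin":   {"can:create_sale", "can:void_sale", "can:view_reports"},
    "manager": {"can:create_sale", "can:void_sale", "can:view_reports"},
    "cashier": {"can:create_sale"},
}

def is_authorized(role: str, permission: str) -> bool:
    """RBAC check: the request passes only if the authenticated role's
    policy set contains the required permission. Unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("cashier", "can:create_sale")
assert not is_authorized("cashier", "can:void_sale")   # needs manager override (PIN)
```

In practice the role would come from the validated JWT's claims (after the authentication layer), so authorization never trusts client-supplied role data.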
Tenant Data Isolation as Security Evidence
Schema-per-tenant isolation (from Ch 04 Section L.10A.4) provides a strong security boundary. Unlike row-level tenancy where a missing WHERE tenant_id = ? clause could leak data across tenants, schema-per-tenant ensures that tenant data is physically separated at the PostgreSQL schema level. Even if application code has a bug, PostgreSQL's schema-level privileges (combined with a per-connection search_path that resolves unqualified names to the tenant's schema) block cross-tenant data access:
-- Query from Tenant A's context (search_path = tenant_nexus)
SELECT * FROM products;
-- Returns only Nexus products
-- Even if someone tries:
SELECT * FROM tenant_acme.products;
-- ERROR: permission denied for schema tenant_acme
K.2.5 Compliance (Rank 5 - NEW)
| Attribute | Value |
|---|---|
| Definition | Adherence to regulatory standards, platform marketplace policies, and legal requirements across all operating jurisdictions and external channels. |
| Priority | Critical |
| Blueprint Reference | Ch 13: Integrations, Ch 26: Security Compliance |
Justification:
BRD v18.0 introduces non-negotiable compliance requirements from 3 external platforms plus existing regulatory frameworks:
| Compliance Domain | Requirements | Impact |
|---|---|---|
| PCI-DSS SAQ-A | No card data stored, tokenized payments, FIM on POS terminals | Payment architecture, audit trail, monitoring |
| Amazon SP-API | Product taxonomy compliance, FBA packaging rules, listing quality standards, content policy enforcement | Catalog validation, product data enrichment |
| Google Merchant | Product data specifications, disapproval prevention, local inventory accuracy, API v1 migration (Content API EOL August 2026) | Data quality engine, inventory sync accuracy |
| Shopify | @idempotent mutation mandate (required 2026-04), webhook verification, POS non-native compliance rules (Decision #99) | API client design, idempotency framework |
| State Regulations | Virginia 5-year gift card minimum expiry, consumer protection, data privacy | Configuration per jurisdiction |
Architectural Implications:
- Platform policy validation engine (“strictest-rule-wins” cross-platform validation per BRD Section 6.6)
- Automated compliance checking on product data before channel publication
- Credential rotation policies per platform requirement
- Audit trail for all external interactions with INTEGRATION event category
- Jurisdiction-aware configuration (geographic expansion design from ADR-BRD-006)
Tenant Isolation Compliance Benefits
Schema-per-tenant isolation (from Ch 04 Section L.10A.4) directly supports SOC 2, GDPR, and HIPAA compliance requirements:
SOC 2 / GDPR Compliance
=======================
Requirement: "Customer data must be logically separated"
With schema-per-tenant:
- Each customer's data in isolated schema
- No risk of WHERE clause forgetting tenant_id
- Clear audit trail per schema
- Easy data export for GDPR requests
- Simple data deletion for "right to be forgotten"
Per-tenant backup and restore is trivially achieved (pg_dump -n tenant_nexus), making data portability and right-to-erasure requests straightforward to fulfill. This isolation model ensures compliance without relying on application-level WHERE clause correctness.
K.2.6 Modifiability (Rank 6)
| Attribute | Value |
|---|---|
| Definition | The ease with which a system can adapt to changes in environment and functionality. |
| Priority | High |
| Blueprint Reference | Ch 04: Architecture Styles, Section L.9A |
Justification:
A plugin architecture lets hardware drivers and tax rules be updated frequently without full system rewrites.
Architectural Implications:
- Microkernel (Plugin) architecture for POS Client
- Hardware abstraction layer
- Tax calculation plugins
- Payment processor adapters
K.2.7 Scalability (Rank 7)
| Attribute | Value |
|---|---|
| Definition | Degree to which a product can handle growing or shrinking workloads. |
| Priority | High |
| Blueprint Reference | Ch 04: Architecture Styles, Section L.10A.4 |
Justification:
The system must grow to accommodate an increasing number of tenants without re-architecture.
Architectural Implications:
- Row-Level Isolation with PostgreSQL RLS (tenant_id + RLS policies)
- Horizontal scaling of stateless API layer
- Connection pooling per tenant
- Resource quotas and throttling
K.2.8 Configurability (Rank 8 - ELEVATED from implicit)
| Attribute | Value |
|---|---|
| Definition | The ability of the system to support multiple configurations and customize behavior on-demand per tenant, channel, and product level. |
| Priority | High |
| Blueprint Reference | Ch 04: Architecture Styles, Section L.10A.4, BRD Module 5 (Setup & Configuration) |
Justification:
BRD Module 5 spans 3,000+ lines of setup and configuration requirements. The system must support hierarchical configuration at multiple levels:
| Configuration Layer | Scope | Examples |
|---|---|---|
| Global | All tenants, all channels | System defaults, tax engine rules |
| Tenant | Per-tenant overrides | Feature toggles, branding, business rules |
| Channel | Per-channel per-tenant | Safety buffer modes, sync frequency, listing rules |
| Product | Per-product per-channel | Override safety buffer, pricing rules, visibility |
Key configuration complexity drivers:
- Safety Buffers: 4-level priority resolution (Product → Category → Channel → Global) with 3 calculation modes (FIXED, PERCENTAGE, MIN_RESERVE) per BRD Section 6.7.2
- Integration YAML: Section 6.12 defines 400+ lines of declarative integration configuration
- Feature Toggles: Per-tenant inventory sync strategy (Safe vs. Aggressive), channel enablement
- Tax Jurisdictions: Modular jurisdiction support with geographic expansion (ADR-BRD-006)
Architectural Implications:
- Hierarchical configuration resolution with 4-level priority
- YAML-driven integration rules (machine-readable, version-controlled)
- Per-tenant per-channel safety buffer settings
- Runtime configuration hot-reload without service restart
- Configuration validation engine preventing invalid combinations
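The 4-level priority resolution and the three calculation modes can be sketched as two small functions. The config shapes and the exact MIN_RESERVE semantics below are illustrative assumptions, not the Blueprint's actual schema:

```python
from typing import Optional

def resolve_buffer(levels: list[Optional[dict]]) -> dict:
    """4-level resolution: levels ordered most-specific first
    (Product -> Category -> Channel -> Global); first configured level wins."""
    for cfg in levels:
        if cfg is not None:
            return cfg
    raise ValueError("no safety buffer configured at any level")

def apply_buffer(on_hand: int, cfg: dict) -> int:
    """Quantity safe to publish to a channel after applying the buffer."""
    mode, value = cfg["mode"], cfg["value"]
    if mode == "FIXED":          # hold back a fixed unit count
        return max(0, on_hand - value)
    if mode == "PERCENTAGE":     # hold back a percentage of on-hand stock
        return max(0, on_hand - int(on_hand * value / 100))
    if mode == "MIN_RESERVE":    # assumed: publish full stock unless at/below reserve floor
        return on_hand if on_hand > value else 0
    raise ValueError(f"unknown mode: {mode}")

# Product-level override (FIXED 2) wins over channel and global settings:
cfg = resolve_buffer([
    {"mode": "FIXED", "value": 2},          # product
    None,                                   # category (unset)
    {"mode": "PERCENTAGE", "value": 10},    # channel
    {"mode": "FIXED", "value": 5},          # global
])
print(apply_buffer(10, cfg))  # 8
```

Keeping resolution separate from calculation also makes the NFR-INTG-006 target (< 100ms per product per channel) easy to meet: resolution is a short list scan over cached configuration.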
K.2.9 Performance (Rank 9)
| Attribute | Value |
|---|---|
| Definition | The amount of time it takes for the system to process a business request. |
| Priority | High |
| Blueprint Reference | Ch 09: Indexes & Performance |
Justification:
Low-latency scanning and checkout are required to prevent queues during peak traffic.
Architectural Implications:
- Optimized database indexes
- Read replicas for query-heavy operations
- Caching strategies (Redis)
- Async processing for non-critical operations
Multi-Tenant Performance Considerations
Schema-per-tenant isolation (from Ch 04 Section L.10A.4) has specific performance implications for connection pooling and query execution:
Connection Pooling:
Connection Pool Strategy
========================
+------------------+
| Connection Pool |
| (PgBouncer) |
+--------+---------+
|
+--------------------+--------------------+
| | |
v v v
+-------+-------+ +--------+------+ +---------+-----+
| Connection 1 | | Connection 2 | | Connection 3 |
| search_path: | | search_path: | | search_path: |
| tenant_nexus | | tenant_acme | | tenant_nexus |
+---------------+ +---------------+ +---------------+
Note: search_path is session state, not per-query state. With PgBouncer in
transaction pooling mode, server connections are shared across clients between
transactions, so set the tenant's search_path at the start of every transaction
(SET LOCAL search_path = tenant_nexus) rather than once per connection;
otherwise a pooled connection can leak one tenant's search_path to another.
Query Performance:
Schema-per-tenant eliminates the need for a tenant_id column in every query, resulting in simpler and faster queries:
-- Index per schema (automatically namespaced)
CREATE INDEX idx_products_sku ON tenant_nexus.products(sku);
CREATE INDEX idx_products_sku ON tenant_acme.products(sku);
-- No tenant_id in WHERE clause needed
-- Simpler, faster queries:
SELECT * FROM products WHERE sku = 'NXP0001';
-- vs (row-level tenancy):
SELECT * FROM products WHERE tenant_id = ? AND sku = 'NXP0001';
This eliminates per-query tenant filtering overhead and allows PostgreSQL’s query planner to work with smaller, tenant-scoped indexes for better cache efficiency.
Tenant Performance Isolation
Schema-per-tenant provides natural performance isolation between tenants. A tenant with a large product catalog or high transaction volume does not degrade query performance for other tenants because each schema has its own table statistics, indexes, and can be independently vacuumed and reindexed:
-- Vacuum single tenant without affecting others
VACUUM ANALYZE tenant_nexus.products;
VACUUM ANALYZE tenant_nexus.sales;
-- Reindex single tenant
REINDEX SCHEMA tenant_nexus;
K.3 Implicit Characteristics
These characteristics are inherently required but not explicitly driving architectural decisions.
| Characteristic | Definition | Justification |
|---|---|---|
| Developer Experience (DevEx) | The ease with which developers can interact with the system’s tools, code, and processes. | Security Enabler: High “False Positive” rates from surface-level scanners cause developers to bypass security. We prioritize Deep SAST (accuracy) and AI-Remediation to ensure security does not degrade velocity. |
| Idempotency | The guarantee that repeating the same operation produces the same result without side effects. | BRD Section 6.2.5 mandates an idempotency framework with 24-hour deduplication windows and SHA-256 keying. Shopify @idempotent mutations become mandatory 2026-04. Critical for retry-safe integration operations. |
| Testability | The degree to which the system supports testing at all levels. | BRD v18.0 defines 36 user stories with Gherkin acceptance criteria. Three platform sandboxes (Shopify Dev Store, Amazon SP-API Sandbox, Google Merchant test account) require contract testing. Architecture must support isolation for unit, integration, and E2E tests. |
| Observability | The ability to understand system state from external outputs (logs, metrics, traces). | Multi-platform monitoring across 3 external channels requires first-class treatment. Integration-specific metrics: circuit breaker state, DLQ depth, sync latency, safety buffer violations, disapproval rate. LGTM stack (Loki, Grafana, Tempo, Prometheus). |
| Modularity | Degree to which a system is composed of discrete components. | Update tax logic without breaking inventory system. Module boundaries must be clean enough for independent Claude Code agent development. |
| Fault Tolerance | When fatal errors occur, other parts of the system continue to function. | Local client survival is required; POS must function if cloud crashes. Integration circuit breaker prevents external API failures from cascading to core POS operations. |
| Adaptability | Degree to which a product can be adapted for new environments. | Rapid adoption of new retail trends (social commerce). Module 6 designed as Extractable Integration Gateway for future independent deployment. |
K.4 Others Considered
These characteristics were evaluated but not prioritized as driving characteristics:
| Characteristic | Why Not Selected |
|---|---|
| Recoverability | Covered by Availability + Event Sourcing (replay capability) |
| Safety & Code Quality | Addressed through Security characteristic (6-gate Security Test Pyramid) and DevSecOps pipeline |
K.5 Characteristic Trade-offs
Understanding trade-offs between characteristics is critical for making consistent architectural decisions.
Trade-off Matrix
+------------------+------------------+------------------+------------------+------------------+
| | AVAILABILITY | CONSISTENCY | COMPLIANCE | CONFIGURABILITY |
+------------------+------------------+------------------+------------------+------------------+
| AVAILABILITY | - | TENSION | NEUTRAL | SUPPORTS |
| (Offline-First) | | (Eventual Sync) | | (Local config) |
+------------------+------------------+------------------+------------------+------------------+
| CONSISTENCY | TENSION | - | SUPPORTS | TENSION |
| (Data Sync) | (Offline Mode) | | (Audit Trail) | (Config changes) |
+------------------+------------------+------------------+------------------+------------------+
| COMPLIANCE | NEUTRAL | SUPPORTS | - | SUPPORTS |
| (Regulations) | | (Audit Trail) | | (Jurisdiction) |
+------------------+------------------+------------------+------------------+------------------+
| CONFIGURABILITY | SUPPORTS | TENSION | SUPPORTS | - |
| (Multi-level) | (Local config) | (Config changes) | (Jurisdiction) | |
+------------------+------------------+------------------+------------------+------------------+
| PERFORMANCE | SUPPORTS | TENSION | TENSION | TENSION |
| (Low Latency) | (Local Cache) | (Sync Overhead) | (Validation) | (Resolution) |
+------------------+------------------+------------------+------------------+------------------+
| SECURITY | TENSION | SUPPORTS | SUPPORTS | NEUTRAL |
| (Deep Scans) | (Scan Time) | (Audit Trail) | (PCI-DSS) | |
+------------------+------------------+------------------+------------------+------------------+
Key Trade-off Decisions
| Trade-off | Decision | Rationale |
|---|---|---|
| Availability vs. Consistency | Accept Eventual Consistency | Offline-First is non-negotiable; inventory sync can tolerate short delays |
| Performance vs. Security | 6-gate Security Pyramid with CI/CD gates | Security gates run in CI/CD pipeline, not at runtime; only contract tests add deployment time |
| Performance vs. Compliance | Async platform validation | Cross-platform validation runs asynchronously before channel publication; does not block POS checkout |
| Scalability vs. Simplicity | Row-Level Isolation with RLS in Modular Monolith | Full tenant isolation via PostgreSQL RLS without schema-per-tenant or microservices complexity |
| Compliance vs. Performance | Strictest-rule-wins cached validation | Validation rules cached and applied at publish-time, not checkout-time |
K.6 Characteristic-to-Chapter Mapping
Quick reference for finding characteristic implementations in the blueprint:
| Characteristic | Primary Chapters | Key Sections |
|---|---|---|
| Availability | Ch 04(L.10A.1), Ch 27 | Offline-First Design, Disaster Recovery |
| Interoperability | Ch 13 | Integration Patterns, ACL, Provider Abstraction |
| Data Consistency | Ch 04(L.4A), Ch 07 | Event Sourcing, Schema Design |
| Security | Ch 12, Ch 26 | Authentication, PCI-DSS Compliance, Credential Vault |
| Compliance | Ch 13, Ch 26 | Platform Policy Validation, Regulatory Compliance |
| Modifiability | Ch 04(L.9A), Ch 14 | Plugin Architecture, Hardware Layer |
| Scalability | Ch 04(L.10A.4), Ch 28 | Multi-Tenancy (RLS), Tenant Lifecycle |
| Configurability | Ch 04(L.10A.4), Module 5 | Feature Toggles, Safety Buffers, YAML Config |
| Performance | Ch 09, Ch 25 | Indexes, Monitoring |
| DevEx | Ch 18 | Development Environment |
| Idempotency | Ch 13 | Idempotency Framework, Dedup Windows |
| Testability | Ch 18, Ch 13 | Contract Testing, Platform Sandboxes |
| Observability | Ch 25 | LGTM Stack, Integration Metrics |
| Modularity | Ch 04(L.9A), Ch 04(L.9C) | Domain Model, Module Boundaries |
| Fault Tolerance | Ch 04(L.10A.1), Ch 27 | Offline-First, Circuit Breaker, Disaster Recovery |
| Adaptability | Ch 13 | Integration Adapters, Extractable Gateway |
K.7 Review Schedule
| Review Type | Frequency | Trigger Events |
|---|---|---|
| Scheduled Review | Quarterly | - |
| Event-Driven Review | As needed | New integration requirements, Security incidents, Performance degradation, New tenant requirements |
K.8 Non-Functional Requirements (NFRs)
This section defines measurable targets that validate the architecture characteristics. All NFRs are traceable to BRD requirements.
K.8.1 Performance Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-PERF-001 | Checkout Latency | < 500ms p99 | BRD-v12 (implied) | Load testing |
| NFR-PERF-002 | RFID Bulk Lookup | < 200ms for 50 tags | BRD-v12 §1.1 | E2E testing |
| NFR-PERF-003 | Price Calculation | < 100ms | BRD-v12 §1.2 | Unit testing |
| NFR-PERF-004 | Tax Calculation | < 50ms | BRD-v12 §1.17 | Unit testing |
| NFR-PERF-005 | Product Search | < 300ms | Implicit | Load testing |
| NFR-PERF-006 | Receipt Generation | < 200ms | Implicit | E2E testing |
Performance Budget:
Total Checkout Time Budget: 500ms
├── Item Lookup: 100ms
├── Price Calculation: 100ms
├── Tax Calculation: 50ms
├── Payment Processing: 150ms (excluding terminal wait)
└── Receipt/Finalize: 100ms
K.8.2 Availability Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-AVAIL-001 | Cloud API Uptime | 99.9% (8.76 hrs/year downtime) | Implicit | Monitoring |
| NFR-AVAIL-002 | Offline Queue Size | Max 100 transactions | BRD-v12 §1.16.2 | Configuration |
| NFR-AVAIL-003 | Sync Interval | 30 seconds | BRD-v12 §1.16.2 | Configuration |
| NFR-AVAIL-004 | Parked Sale TTL | 4 hours | BRD-v12 §1.1.1 | Configuration |
| NFR-AVAIL-005 | Parked Sales per Terminal | Max 5 | BRD-v12 §1.1.1 | Configuration |
| NFR-AVAIL-006 | Payment Terminal Timeout | 60 seconds | BRD-v12 §1.18.2 | Configuration |
| NFR-AVAIL-007 | Connection Timeout | 10 seconds | BRD-v12 §1.18.2 | Configuration |
Availability Tiers:
Cloud Services: 99.9% (allows ~8.76 hrs downtime/year)
POS Terminal: 99.99% (via offline-first design)
Database (Primary): 99.95% (with automatic failover)
K.8.3 Scalability Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-SCALE-001 | Concurrent Users | 500 (Black Friday peak) | BRD-v12 (implied) | Load testing |
| NFR-SCALE-002 | Transactions per Second | 1,000 TPS | Chapter 04 L.6 | Load testing |
| NFR-SCALE-003 | Tenant Count | 100+ tenants | Chapter 03 | Architecture |
| NFR-SCALE-004 | Export Row Limit | 1,000 rows max | BRD-v12 §2.5 | Configuration |
| NFR-SCALE-005 | Date Range for Reports | 365 days max | BRD-v12 YAML | Configuration |
| NFR-SCALE-006 | RFID Tags per Request | 50 max | BRD-v12 §1.1 | Configuration |
Scaling Strategy:
Stateless API Layer: Horizontal scaling (Kubernetes HPA)
Database: Vertical scaling + Read replicas + RLS per tenant
Event Stream: PostgreSQL event tables (v1.0), Kafka partitioning (v2.0)
File Storage: Object storage (S3-compatible)
K.8.4 Integration & Timeout Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-INT-001 | Payment Timeout | 60 seconds | BRD-v12 §1.18.2 | Configuration |
| NFR-INT-002 | Connection Timeout | 10 seconds | BRD-v12 §1.18.2 | Configuration |
| NFR-INT-003 | Multi-Store Data Staleness | Max 5 minutes | BRD-v12 §1.7 | Monitoring |
| NFR-INT-004 | Batch Close Time | 23:00 daily | BRD-v12 §1.18.2 | Configuration |
| NFR-INT-005 | External API Retry | 3 attempts with backoff | Implicit | Configuration |
| NFR-INT-006 | Webhook Delivery | At-least-once | Implicit | Architecture |
Integration Patterns:
Synchronous: REST APIs with circuit breaker
Asynchronous: Event-driven via PostgreSQL Events + LISTEN/NOTIFY (v1.0)
Webhooks: Inbound (Shopify/Amazon) + Outbound with Transactional Outbox
File Transfer: SFTP/S3 for bulk imports
K.8.5 Data & Compliance Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-DATA-001 | Consent Audit Retention | 7 years | BRD-v12 YAML | Policy |
| NFR-DATA-002 | Privacy Request Response | 30 days | BRD-v12 §2.5 | Process |
| NFR-DATA-003 | Transaction Data Retention | 7 years (tax compliance) | Implicit | Policy |
| NFR-DATA-004 | Gift Card Minimum Expiry | 5 years (Virginia) | BRD-v12 §1.5.2 | Configuration |
| NFR-DATA-005 | Auto-Anonymize Inactive | Configurable (0 = never) | BRD-v12 YAML | Configuration |
Data Classification:
Level 1 (Restricted): Card data (prohibited storage)
Level 2 (Sensitive): Customer PII, credentials
Level 3 (Internal): Transaction data, inventory
Level 4 (Public): Product catalog, store hours
K.8.6 Security Requirements
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-SEC-001 | PCI Scope | SAQ-A (no card data stored) | BRD-v12 §1.18 | PCI Audit |
| NFR-SEC-002 | Payment Data Storage | Token only, no PAN | BRD-v12 §1.18.1 | Code review |
| NFR-SEC-003 | Manager Auth for Overrides | PIN required | BRD-v12 §1.2 | E2E testing |
| NFR-SEC-004 | Blind Count | Expected not shown | BRD-v12 §1.12 | UI testing |
| NFR-SEC-005 | Variance Tolerance | Configurable ($5 default) | BRD-v12 §1.12 | Configuration |
| NFR-SEC-006 | Session Timeout | 15 minutes inactivity | Implicit | Configuration |
| NFR-SEC-007 | Password Policy | Min 12 chars, complexity | Implicit | Configuration |
SAQ-A Compliance - Data Storage Rules:
STORED (Allowed):
- Payment tokens
- Approval codes
- Masked card number (****1234)
- Card brand (Visa, MC, etc.)
- Terminal ID
PROHIBITED (Never store):
- Full card number (PAN)
- CVV/CVC
- Track data
- PIN block
- EMV cryptogram (raw)
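A pre-persistence guard can enforce these storage rules mechanically. A minimal sketch, assuming illustrative field names; a real implementation would hook into the persistence layer and the PCI audit trail:

```python
import re

# Field names mirroring the PROHIBITED list above -- illustrative assumptions.
PROHIBITED_FIELDS = {"pan", "cvv", "cvc", "track_data", "pin_block", "emv_cryptogram"}

def validate_payment_record(record: dict) -> None:
    """Reject any payment record containing prohibited cardholder data,
    and enforce the allowed mask shape (asterisks + last four digits)."""
    banned = PROHIBITED_FIELDS & set(record)
    if banned:
        raise ValueError(f"prohibited card data fields: {sorted(banned)}")
    masked = record.get("masked_card", "")
    if masked and not re.fullmatch(r"\*+\d{4}", masked):
        raise ValueError("masked_card must expose at most the last four digits")

# Token-only record with allowed fields passes:
validate_payment_record({"token": "tok_abc", "approval_code": "A1",
                         "masked_card": "****1234", "brand": "Visa"})
try:
    validate_payment_record({"token": "tok_abc", "cvv": "123"})
except ValueError as e:
    print(e)  # prohibited card data fields: ['cvv']
```

Running this guard at the persistence boundary (plus in code review, per NFR-SEC-002) keeps the SAQ-A scope claim verifiable rather than aspirational.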
K.8.7 Integration Requirements (BRD v18.0)
| Requirement ID | Category | Target | Source | Validation Method |
|---|---|---|---|---|
| NFR-INTG-001 | Shopify Sync Latency | < 5 seconds (webhook processing) | BRD-v18 §6.3 | Integration testing |
| NFR-INTG-002 | Amazon Sync Latency | < 2 minutes (polling interval) | BRD-v18 §6.4 | Integration testing |
| NFR-INTG-003 | Google Batch Sync | 2x/day + real-time local inventory | BRD-v18 §6.5 | Integration testing |
| NFR-INTG-004 | Circuit Breaker Threshold | 5 failures / 60 seconds → OPEN | BRD-v18 §6.2.4 | Unit testing |
| NFR-INTG-005 | DLQ Retry Policy | 3 attempts, exponential backoff | BRD-v18 §6.2.3 | Integration testing |
| NFR-INTG-006 | Safety Buffer Calculation | < 100ms per product per channel | BRD-v18 §6.7.2 | Performance testing |
| NFR-INTG-007 | Integration Health Check | Every 60 seconds per provider | BRD-v18 §6.11 | Monitoring |
| NFR-INTG-008 | Idempotency Window | 24-hour dedup with SHA-256 key | BRD-v18 §6.2.5 | Unit testing |
| NFR-INTG-009 | Credential Rotation | Automated every 90 days | BRD-v18 §6.2.2 | Operations |
Integration Performance Budget:
Shopify Webhook Processing: < 5s
├── Receive + Validate Signature: 100ms
├── Deserialize + Map to Domain: 200ms
├── Business Logic Processing: 2,000ms
├── Database Persistence: 500ms
└── Outbox Event Publication: 200ms
(Buffer): 2,000ms
Amazon SP-API Polling Cycle: < 2min
├── OAuth Token Refresh (if needed): 500ms
├── API Call (paginated): 5,000ms
├── Response Mapping: 1,000ms
├── Inventory Delta Calculation: 2,000ms
└── Database + Outbox: 1,500ms
(Buffer): 110,000ms
K.8.8 NFR Traceability Matrix
This matrix links NFRs to Architecture Characteristics:
| Characteristic | Related NFRs |
|---|---|
| Availability | NFR-AVAIL-001 through NFR-AVAIL-007 |
| Performance | NFR-PERF-001 through NFR-PERF-006 |
| Scalability | NFR-SCALE-001 through NFR-SCALE-006 |
| Security | NFR-SEC-001 through NFR-SEC-007 |
| Interoperability | NFR-INT-001 through NFR-INT-006 |
| Data Consistency | NFR-DATA-001 through NFR-DATA-005 |
| Compliance | NFR-DATA-001 through NFR-DATA-005, NFR-INTG-001 through NFR-INTG-009 |
| Configurability | NFR-INTG-006 (Safety Buffer), platform-specific targets |
NFR Validation Schedule
| NFR Category | Validation Frequency | Responsible Team |
|---|---|---|
| Performance | Every release + quarterly load test | QA + DevOps |
| Availability | Continuous monitoring | DevOps |
| Scalability | Quarterly load test | DevOps |
| Security | Annual PCI audit + continuous scans | Security |
| Integration | Every release + monthly provider sync | QA + Integration Team |
| Compliance | Annual audit | Compliance |
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2026-01-24 |
| Updated | 2026-02-25 |
| Source | Architecture Characteristics Worksheet v2.0, BRD-v18.0, Chapters 02/03/05/06 |
| Author | Claude Code |
| Reviewer | Expert Panel (Marcus Chen, Sarah Rodriguez, James O’Brien, Priya Patel) |
| Status | Active |
| Part | II - Architecture |
| Chapter | 03 of 32 |
| Previous | Chapter 11 v1.1.0 (backup at Chapter-11-Architecture-Characteristics.md.backup-v18.0) |
Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-01-24 | Initial document |
| 1.1.0 | 2026-01-26 | Added Section K.8 (Non-Functional Requirements) with 37 NFRs from BRD-v12 |
| 2.0.0 | 2026-02-19 | Expert panel review (6.50/10): Expanded to Top 9 driving characteristics (added Compliance, elevated Configurability); rewrote Interoperability with 6 provider families, 3 auth models, 3 rate-limiting paradigms; rewrote Security with 5 concrete sub-domains and 6-gate Security Test Pyramid; added Idempotency, Testability, Observability as implicit; added K.8.7 Integration Requirements (9 NFRs from BRD v18.0 Module 6); updated multi-tenancy from Schema-Per-Tenant to Row-Level with RLS; updated event infrastructure from Kafka to PostgreSQL Events (v1.0) |
| 3.0.0 | 2026-02-22 | Enriched with cross-chapter evidence from former Ch 05/06/08/09; all chapter references renumbered for v3.0.0 (39-chapter to 34-chapter consolidation) |
Next Chapter: Chapter 04: Architecture Styles Analysis
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 04: Architecture Styles Analysis
Purpose
This chapter documents the formal architecture styles evaluation for the Nexus POS Platform. It provides the decision rationale for selecting the primary architecture style and supporting patterns, updated per expert panel review against BRD v18.0.
Source: Architecture Styles Worksheet v2.0 (Expert Panel-Reviewed)
Project: POS Platform (RapOS) - Implementation for Tenant “Nexus”
Architect/Team: Cloud AI Architecture Agents
Date: February 19, 2026
Panel Review Score: 6.50/10 → Updated per 4-member expert panel recommendations
L.1 Candidate Architecture Styles
Based on the identified driving characteristics (Availability, Interoperability, Data Consistency), the following architecture styles were evaluated.
L.1.1 Event-Driven Architecture (EDA)
| Attribute | Value |
|---|---|
| Description | A distributed asynchronous architecture pattern used to produce highly scalable and high-performance applications. |
| Relevance to Nexus | Deeply aligned with “Interoperability” and “Data Consistency” (Sync) requirements. External channels (Amazon, Shopify) and local POS terminals produce disjoint event streams that must eventually be reconciled. |
| Decision | Selected (Communication Layer) |
| Key Technology | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0, when scale justifies) |
v18.0 Update: BRD designs around PostgreSQL tables for idempotency_records and integration_dead_letters (not Kafka topics). Amazon SP-API polls every 2 minutes; Google Merchant batches 2x/day. Streaming infrastructure is not required at launch. PostgreSQL event tables with LISTEN/NOTIFY provide sufficient event notification for v1.0. Kafka adoption is deferred to v2.0, when transaction volume or real-time analytics requirements justify the operational overhead (ZooKeeper/KRaft cluster management).
L.1.2 Microservices Architecture
| Attribute | Value |
|---|---|
| Description | An architecture style that structures an application as a collection of loosely coupled services, each with its own database. |
| Relevance to Nexus | Evaluated for “Scalability,” but rejected as the primary style for the Core API. |
| Decision | Rejected |
| Rationale | The operational complexity of managing separate databases for 50+ services is unnecessary for the current scale. |
L.1.3 Microkernel (Plugin) Architecture
| Attribute | Value |
|---|---|
| Description | A core system with a plugin interface to add additional features. |
| Relevance to Nexus | Directly addresses the “Modifiability” requirement. The Blueprint specifies “Integration Adapters” (Payment, Tax) and a “Hardware Layer” in the client, fitting this pattern. |
| Decision | Selected (Client) |
L.1.4 Modular Monolith (Layered) Architecture
| Attribute | Value |
|---|---|
| Description | A single deployable unit (“Central API”) structured into distinct, loosely coupled modules (Catalog, Sales, Inventory) that enforce strict boundaries. |
| Relevance to Nexus | High Fit. The Blueprint describes a “Central API Layer” (Stateless) containing all core services. This offers the modularity of microservices without the distributed complexity, aligning with the “Simplicity” and “Maintenance” goals. |
| Decision | Selected (Core API) |
v18.0 Update — Extractable Integration Gateway: Module 6 (Integrations, 4,800+ lines) is designed as a logically separate module within the monolith with explicit boundary contracts: the IIntegrationProvider interface, async messaging via the Transactional Outbox, and dedicated error handling (ERR-6xxx range). This module can be extracted to a separate service when scale demands independent deployment, without changing the core POS modules. Circuit breaker isolation ensures external API failures (Amazon, Google, Shopify) cannot cascade to POS checkout operations.
L.1.5 Service-Based Architecture
| Attribute | Value |
|---|---|
| Description | A hybrid style with coarse-grained services (e.g., Inventory, Sales, HR) often sharing a database. |
| Relevance to Nexus | Offers a middle ground. The Blueprint’s “Service Layer” within the Central API follows this structure logically. |
| Decision | Middle ground (influences internal structure) |
L.1.6 Space-Based Architecture
| Attribute | Value |
|---|---|
| Description | Designed for high scalability and concurrency using tuple spaces (distributed caching/in-memory grids). |
| Relevance to Nexus | Could handle “Black Friday” spikes, but data consistency (synchronization to persistent storage) is too complex for the strict financial audit requirements. |
| Decision | Rejected |
| Rationale | Too complex for financial audit requirements |
L.1.7 Event Sourcing (Architecture Pattern)
| Attribute | Value |
|---|---|
| Description | A data persistence pattern where state transitions are stored as a sequence of immutable events (e.g., ItemAdded, PaymentAuthorized) rather than just the current state. |
| Relevance to Nexus | Critical. The Blueprint (Section L.4A below) mandates this for the “Sales” and “Inventory” domains to enable “Offline Conflict Resolution,” “Complete Audit Trails,” and “Temporal Queries” (Time Travel). |
| Decision | Selected (Sales & Inventory Domains) |
| Key Technology | PostgreSQL 16 (Append-Only Event Table), Apache Kafka (Streaming Platform) |
L.1.8 Offline-First (Architecture Pattern)
| Attribute | Value |
|---|---|
| Description | Design pattern where the application functions fully offline with local data storage, syncing when connectivity is available. |
| Relevance to Nexus | Critical. POS terminals must operate during network outages. |
| Decision | Selected (Client) |
| Key Technology | SQLite (Local Storage) |
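As a concrete illustration of the pattern, here is a minimal Python sketch (the client itself is .NET MAUI with SQLite): every write commits to a local queue first, and a sync step drains the queue once connectivity returns. The `OfflineQueue` type and its in-memory lists are hypothetical stand-ins for the local SQLite store and the Central API.

```python
# Minimal offline-first sketch (illustrative; the real client uses .NET MAUI + SQLite).
# Writes always land in a local queue; a sync step drains it when connectivity returns.
from dataclasses import dataclass, field

@dataclass
class OfflineQueue:
    pending: list = field(default_factory=list)   # stand-in for a local SQLite table
    server: list = field(default_factory=list)    # stand-in for the Central API

    def record(self, op: dict) -> None:
        """Every write is committed locally first, regardless of connectivity."""
        self.pending.append(op)

    def sync(self, online: bool) -> int:
        """Drain the local queue to the server when a connection is available."""
        if not online:
            return 0
        sent = len(self.pending)
        self.server.extend(self.pending)
        self.pending.clear()
        return sent

q = OfflineQueue()
q.record({"type": "SaleCreated", "sale_id": "s-1"})
q.record({"type": "ItemAdded", "sale_id": "s-1", "sku": "A100"})
assert q.sync(online=False) == 0      # outage: sale is still captured locally
synced = q.sync(online=True)          # connectivity restored: queue drains
```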
L.1.9 Integration Patterns (BRD v18.0 Module 6)
BRD v18.0 Section 6.2 mandates five integration patterns that are architecturally significant. All five were evaluated during the expert panel review and selected.
| Pattern | Description | Decision | BRD Reference |
|---|---|---|---|
| Circuit Breaker | State machine (CLOSED → OPEN → HALF_OPEN) that prevents cascading failures from external APIs. Trips after 5 failures within 60 seconds; 30-second cooldown. | Selected | §6.2.4 |
| Transactional Outbox | Atomic write of business data + outbox event in the same database transaction. A relay process polls the outbox and publishes events, guaranteeing at-least-once delivery without distributed transactions. | Selected | §6.2.3, §6.7.3 |
| Provider Abstraction (Strategy) | IIntegrationProvider interface with 5 standard methods (Connect, Sync, Validate, Publish, HealthCheck) implemented per provider. Enables uniform handling regardless of provider protocol. | Selected | §6.2.1 |
| Anti-Corruption Layer (ACL) | Per-provider translation layer preventing external schema changes from leaking into core domain models. Each provider maps external DTOs to internal domain events. | Selected | §6.2.7 |
| Saga / Orchestration | Cross-platform inventory sync orchestrated as a saga with compensation actions. If a Shopify inventory update succeeds but Amazon fails, the saga compensates by rolling back the Shopify change. | Selected (cross-platform flows) | §6.7 |
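The saga row above can be sketched in a few lines of Python (the platform code is C#; the provider names and update functions here are hypothetical): each completed step registers a compensation, and compensations are unwound in reverse order when a later step fails.

```python
# Saga sketch: run each step, record its compensation, and unwind on failure.
# Provider names and update functions are hypothetical placeholders.
def run_saga(steps):
    """steps: list of (name, action, compensate). Returns (ok, log)."""
    log, done = [], []
    for name, action, compensate in steps:
        if action():
            log.append(f"{name}: ok")
            done.append((name, compensate))
        else:
            log.append(f"{name}: failed")
            for prev, comp in reversed(done):   # compensate in reverse order
                comp()
                log.append(f"{prev}: compensated")
            return False, log
    return True, log

state = {"shopify": 10, "amazon": 10}

def shopify_set(qty):
    def action():
        state["shopify"] = qty
        return True
    return action

def amazon_fail():
    return False  # simulate the Amazon update failing

ok, log = run_saga([
    ("shopify", shopify_set(7), lambda: state.update(shopify=10)),
    ("amazon", amazon_fail, lambda: None),
])
```

When the Amazon update fails, the Shopify quantity is rolled back to its original value, matching the compensation behavior described in the table.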
Circuit Breaker State Machine:
┌──────────────────────────────────────────────────────────┐
│ CIRCUIT BREAKER STATE MACHINE │
├──────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ 5 failures ┌──────────┐ │
│ │ CLOSED │ ──────────────►│ OPEN │ │
│ │ (Normal) │ in 60 sec │ (Reject) │ │
│ └────┬─────┘ └────┬─────┘ │
│ ▲ │ │
│ │ success │ 30 sec cooldown │
│ │ ▼ │
│ │ ┌───────────┐ │
│ └────────────────────│ HALF_OPEN │ │
│ │ (1 probe) │ │
│ failure ──────────└───────────┘──► OPEN │
│ │
└──────────────────────────────────────────────────────────┘
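A minimal Python sketch of the state machine above, using the stated thresholds (trip after 5 failures within 60 seconds, 30-second cooldown, one HALF_OPEN probe). This is an illustration, not the platform's C# implementation; the injected clock exists only to make the example deterministic.

```python
# Circuit breaker sketch matching the thresholds above: trip after 5 failures
# within 60 s, reject while OPEN, allow one probe after a 30 s cooldown.
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, window=60.0, cooldown=30.0, clock=time.monotonic):
        self.max_failures, self.window, self.cooldown = max_failures, window, cooldown
        self.clock = clock
        self.state = "CLOSED"
        self.failures = []            # timestamps of recent failures
        self.opened_at = None

    def allow(self) -> bool:
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.cooldown:
                self.state = "HALF_OPEN"   # one probe request allowed
                return True
            return False
        return True

    def record(self, success: bool) -> None:
        now = self.clock()
        if success:
            self.state, self.failures = "CLOSED", []
            return
        if self.state == "HALF_OPEN":
            self.state, self.opened_at = "OPEN", now   # probe failed: back to OPEN
            return
        self.failures = [t for t in self.failures if now - t < self.window] + [now]
        if len(self.failures) >= self.max_failures:
            self.state, self.opened_at = "OPEN", now

t = [0.0]
cb = CircuitBreaker(clock=lambda: t[0])
for _ in range(5):
    cb.record(success=False)        # 5 failures inside the window -> OPEN
tripped = cb.state
rejected = not cb.allow()           # calls rejected while OPEN
t[0] += 30.0
probe_allowed = cb.allow()          # cooldown elapsed -> HALF_OPEN probe
cb.record(success=True)             # probe succeeds -> CLOSED
```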
L.2 Style Evaluation Matrix
Ratings: 1 (Poor) to 5 (Excellent)
Monolithic Styles
| Style | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Layered (Traditional) | ★★☆☆☆ | ★★☆☆☆ | ★★★★☆ | Backend only |
| Modular Monolith | ★★★☆☆ | ★★★☆☆ | ★★★★☆ | Selected (Core) |
| Microkernel (Plugin) | ★★★☆☆ | ★★★★★ | ★★★☆☆ | Selected (Client) |
v18.0 Note: Modular Monolith Interoperability reduced from 4★ to 3★. Module 6 requires 6 provider families with different scaling needs — a monolith cannot independently scale individual providers. Mitigated by Extractable Integration Gateway design.
Distributed Styles
| Style | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Service-Based | ★★★★☆ | ★★★★☆ | ★★★☆☆ | Eventual |
| Event-Driven (EDA) | ★★★★★ | ★★★★★ | ★★☆☆☆ | Selected (Comm Layer) |
| Space-Based | ★★★★★ | ★★★☆☆ | ★☆☆☆☆ | Too Complex |
| Microservices | ★★★★☆ | ★★★★☆ | ★☆☆☆☆ | Hard Sync |
v18.0 Note: Service-Based Interoperability raised from 3★ to 4★. Coarse-grained services can independently deploy integration providers.
Patterns
| Pattern | Availability | Interoperability | Data Consistency | Overall Fit |
|---|---|---|---|---|
| Event Sourcing | ★★★☆☆ | ★★★★☆ | ★★★★★ | Selected (Audit/Sync) |
| Offline-First | ★★★★★ | ★★☆☆☆ | ★★★☆☆ | Selected (Client) |
| Integration Patterns | ★★★★☆ | ★★★★★ | ★★★★☆ | Selected (Module 6) |
L.3 Key Trade-off Analysis
Trade-off 1: Availability vs. Consistency
| Aspect | Decision |
|---|---|
| Conflict | The “Offline First” requirement mandates we cannot rely on immediate cloud consistency. |
| Resolution | We must accept Eventual Consistency for inventory sync. |
| Mitigation | Event Sourcing enables deterministic replay to resolve conflicts. |
Trade-off 2: Complexity (Event Sourcing + PostgreSQL Events)
| Aspect | Decision |
|---|---|
| Conflict | Event Sourcing adds complexity compared to standard CRUD. Original design included Apache Kafka for streaming, adding operational burden (ZooKeeper/KRaft). |
| Resolution | Event Sourcing retained for Sales and Inventory domains. Kafka deferred to v2.0. v1.0 uses PostgreSQL event tables with LISTEN/NOTIFY for event notification and Transactional Outbox for guaranteed delivery. |
| Benefit | Preserves event replay capability and audit trail while eliminating Kafka operational complexity. PostgreSQL event tables match BRD’s existing idempotency_records and integration_dead_letters table designs. |
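The outbox mechanics referenced above can be sketched with SQLite in Python (the Blueprint targets PostgreSQL; table and column names here are simplified stand-ins): the business row and the outbox row commit in the same transaction, and a relay then polls pending rows and publishes them, giving at-least-once delivery without a distributed transaction.

```python
# Transactional Outbox sketch using SQLite (the platform uses PostgreSQL):
# the business row and the outbox row commit in the SAME transaction, then a
# relay polls pending rows and publishes them (at-least-once delivery).
import json, sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sales (id TEXT PRIMARY KEY, total REAL);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending');
""")

def create_sale(sale_id: str, total: float) -> None:
    with db:  # single transaction: both rows commit or neither does
        db.execute("INSERT INTO sales VALUES (?, ?)", (sale_id, total))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"type": "SaleCreated", "sale_id": sale_id}),))

published = []

def relay_poll() -> int:
    """Publish pending outbox rows, then mark them processed."""
    rows = db.execute("SELECT id, payload FROM outbox WHERE status = 'pending'").fetchall()
    for row_id, payload in rows:
        published.append(json.loads(payload))          # e.g. push to broker/SignalR
        db.execute("UPDATE outbox SET status = 'processed' WHERE id = ?", (row_id,))
    db.commit()
    return len(rows)

create_sale("s-1", 49.99)
sent = relay_poll()
```

If the relay crashes between publishing and marking a row processed, the row is republished on the next poll, which is why consumers must be idempotent.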
Trade-off 3: Deployment Simplicity (Modular Monolith)
| Aspect | Decision |
|---|---|
| Conflict | Microservices offer independent scaling but add operational overhead. |
| Resolution | Choosing a Modular Monolith (“Central API”) over Microservices. Row-Level Isolation with RLS for multi-tenancy. |
| Benefit | Reduces deployment complexity (one container vs. dozens). Module 6 designed as Extractable Integration Gateway — can be split into a separate service when scale demands it, without changing core POS modules. |
L.4 Selected Architecture Strategy
Primary Declaration
| Attribute | Selection |
|---|---|
| Primary Style | Event-Driven Modular Monolith (Central API) |
| Key Patterns | Event Sourcing (scoped), CQRS (scoped), Offline-First, Row-Level Isolation with RLS |
| Event Infrastructure | PostgreSQL Event Tables + LISTEN/NOTIFY (v1.0); Apache Kafka (v2.0) |
| Integration Strategy | Extractable Integration Gateway (Module 6) |
| Credential Management | HashiCorp Vault |
Architecture Layer Mapping
| Layer | Style/Pattern | Technology |
|---|---|---|
| POS Client | Microkernel (Plugin) + Offline-First | .NET MAUI, SQLite |
| Central API | Modular Monolith | ASP.NET Core 8.0 |
| Communication | Event-Driven | PostgreSQL Events + LISTEN/NOTIFY (v1.0) |
| Data Persistence | Event Sourcing (scoped) + CQRS (scoped) | PostgreSQL 16 |
| Multi-Tenancy | Row-Level Isolation with RLS | PostgreSQL RLS + tenant_id |
| Integration | Extractable Integration Gateway | Module 6, IIntegrationProvider |
| Secrets | Credential Vault | HashiCorp Vault (Docker) |
L.4A CQRS & Event Sourcing Scope
The expert panel identified that CQRS and Event Sourcing scope was undefined. This section clarifies which modules use which patterns, per user decision.
| Module | CQRS | Event Sourcing | Pattern Description |
|---|---|---|---|
| Module 1: Sales | Full CQRS | Full Event Sourcing | Separate read/write models. Events: SaleCreated, PaymentProcessed, ReturnInitiated, VoidExecuted. Event replay for audit and conflict resolution. |
| Module 2: Customers | Standard CRUD | None | Direct query against current-state tables. Simple read/write through repository pattern. |
| Module 3: Catalog | Standard CRUD | None | Read-heavy workload optimized with caching (Redis). Product data served from current-state tables. |
| Module 4: Inventory | Materialized read model | ES for audit trail | Current inventory levels maintained in materialized view. Event Sourcing captures all stock movements for audit trail and conflict resolution (offline sync). |
| Module 5: Setup | Standard CRUD | None | Configuration data accessed directly. Changes logged but not event-sourced. |
| Module 6: Integrations | Standard CRUD | Audit-trail-only ES | Sync logs stored as event stream for debugging and compliance. No event replay for operational queries — current sync state maintained in tables. |
| Section 7: State Machines | N/A | Events drive transitions | 16 state machines powered by domain events. State transitions recorded as events. Database-driven implementation (see below). |
State Machine Implementation: Database-driven pattern using a state column on the entity table plus a state_transitions reference table. This approach provides:
- State column: Each stateful entity (e.g., `orders.status`, `returns.status`) stores its current state directly
- Transition table: `state_transitions(from_state, to_state, event, guard_condition, action)` defines allowed transitions per entity type
- Validation: The application layer validates transitions against the table before applying them (preventing invalid state changes)
- Audit: Every transition is logged with timestamp, actor, and triggering event
- Benefits: Declarative (non-code) transition rules, easy to modify without deployment, queryable transition history
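A minimal Python sketch of this database-driven pattern, with the transition table reduced to an in-memory lookup keyed by (from_state, event). The states and events shown are illustrative, not the Blueprint's full set of 16 machines.

```python
# Database-driven state machine sketch: allowed transitions live in a lookup
# table (here a dict mirroring state_transitions); the application validates
# before applying, and every transition is appended to an audit list.
TRANSITIONS = {                     # (from_state, event) -> to_state
    ("draft", "submit"): "pending",
    ("pending", "pay"): "completed",
    ("pending", "void"): "voided",
}

def apply_event(entity: dict, event: str, actor: str, audit: list) -> dict:
    key = (entity["status"], event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {entity['status']} --{event}-->")
    new_state = TRANSITIONS[key]
    audit.append({"from": entity["status"], "to": new_state,
                  "event": event, "actor": actor})
    return {**entity, "status": new_state}   # the state column holds current truth

audit: list = []
order = {"id": "o-1", "status": "draft"}
order = apply_event(order, "submit", "emp-7", audit)
order = apply_event(order, "pay", "emp-7", audit)
try:
    apply_event(order, "void", "emp-7", audit)   # completed orders cannot be voided
    invalid_allowed = True
except ValueError:
    invalid_allowed = False
```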
Design Note: State machines are NOT implemented via Event Sourcing replay. The `state` column holds current truth; ES events record the history. This separation keeps state lookups O(1) while maintaining a full audit trail.
Event Sourcing vs. Audit Log Relationship: Event Sourcing and the audit log serve separate concerns and are complementary:
- Event Sourcing (Modules 1, 4, 6): Domain events that represent business state changes. Used for event replay (Sales), conflict resolution (Inventory), and sync debugging (Integrations). Stored in event store tables.
- Audit Log: Cross-cutting compliance record of who did what and when. Captures user identity, IP address, action performed, timestamp, and before/after values. Stored in the dedicated `audit_log` table.
- Relationship: ES events feed into the audit log (via event handlers), but the audit log also captures non-ES actions (e.g., login attempts, configuration changes, report generation). The audit log is the compliance artifact; ES is the domain modeling tool.
Event Sourcing Implementation Pattern:
┌──────────────────────────────────────────────────────────┐
│ EVENT SOURCING PATTERN (Sales Module) │
├──────────────────────────────────────────────────────────┤
│ │
│ Command ──► Aggregate ──► Domain Events ──► Event Store │
│ │ │
│ ▼ │
│ Event Handlers │
│ ┌─────────────┐ │
│ │ Read Model │ (CQRS) │
│ │ Projections │ │
│ └─────────────┘ │
│ ┌─────────────┐ │
│ │ Audit Log │ │
│ │ (Immutable) │ │
│ └─────────────┘ │
│ ┌─────────────┐ │
│ │ Integration │ │
│ │ Outbox │ │
│ └─────────────┘ │
│ │
│ Queries ──► Read Model (Materialized View) ──► Response │
│ │
└──────────────────────────────────────────────────────────┘
L.4A.1 Event Store Implementation
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Event Store Schema (PostgreSQL)
The append-only event store is the source of truth:
-- Event Store Schema
CREATE TABLE events (
id BIGSERIAL PRIMARY KEY,
event_id UUID UNIQUE NOT NULL DEFAULT gen_random_uuid(),
aggregate_type VARCHAR(100) NOT NULL, -- 'Sale', 'Inventory', 'Customer'
aggregate_id UUID NOT NULL, -- The entity this event belongs to
event_type VARCHAR(100) NOT NULL, -- 'SaleCreated', 'ItemAdded'
event_data JSONB NOT NULL, -- Full event payload
metadata JSONB NOT NULL DEFAULT '{}', -- Correlation, causation IDs
version INTEGER NOT NULL, -- Aggregate version (for optimistic concurrency)
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID, -- Employee who caused the event
-- Optimistic concurrency: aggregate_id + version must be unique
UNIQUE (aggregate_type, aggregate_id, version)
);
-- Indexes for common queries
CREATE INDEX idx_events_aggregate ON events (aggregate_type, aggregate_id);
CREATE INDEX idx_events_type ON events (event_type);
CREATE INDEX idx_events_created_at ON events USING BRIN (created_at);
CREATE INDEX idx_events_metadata ON events USING GIN (metadata);
-- Snapshots table (for performance on long event streams)
CREATE TABLE snapshots (
id BIGSERIAL PRIMARY KEY,
aggregate_type VARCHAR(100) NOT NULL,
aggregate_id UUID NOT NULL,
version INTEGER NOT NULL,
state JSONB NOT NULL, -- Serialized aggregate state
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE (aggregate_type, aggregate_id)
);
-- Outbox table (for reliable event publishing)
CREATE TABLE event_outbox (
id BIGSERIAL PRIMARY KEY,
event_id UUID NOT NULL REFERENCES events(event_id),
destination VARCHAR(100) NOT NULL, -- 'signalr', 'webhook', 'sync'
status VARCHAR(20) DEFAULT 'pending',
attempts INTEGER DEFAULT 0,
last_error TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
processed_at TIMESTAMPTZ
);
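The optimistic-concurrency guarantee provided by the UNIQUE (aggregate_type, aggregate_id, version) constraint can be illustrated with an in-memory Python sketch: a writer must present the version it last read, and a stale writer is rejected.

```python
# In-memory sketch of the append-only store's optimistic concurrency: the
# UNIQUE (aggregate_type, aggregate_id, version) constraint rejects a writer
# holding a stale version, exactly like the PostgreSQL schema above.
class ConcurrencyError(Exception):
    pass

class EventStore:
    def __init__(self):
        self.events = {}   # (aggregate_type, aggregate_id) -> list of events

    def append(self, agg_type, agg_id, event_type, data, expected_version):
        stream = self.events.setdefault((agg_type, agg_id), [])
        if len(stream) != expected_version:       # someone else appended first
            raise ConcurrencyError(
                f"expected v{expected_version}, stream is at v{len(stream)}")
        stream.append({"version": expected_version + 1,
                       "type": event_type, "data": data})

    def load(self, agg_type, agg_id):
        return self.events.get((agg_type, agg_id), [])

store = EventStore()
store.append("Sale", "s-1", "SaleCreated", {"total": 0}, expected_version=0)
store.append("Sale", "s-1", "ItemAdded", {"sku": "A100"}, expected_version=1)
try:
    # A second writer that also read version 1 loses the race:
    store.append("Sale", "s-1", "ItemAdded", {"sku": "B200"}, expected_version=1)
    conflict = False
except ConcurrencyError:
    conflict = True
```

In PostgreSQL the same rejection arrives as a unique-constraint violation, which the command handler translates into a retry or a conflict-resolution path.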
Event Sourcing Architecture Diagram
Event Sourcing Architecture
===========================
+-------------------------------------------------------------------------+
| POS CLIENT |
| |
| +------------------+ +-------------------+ +-----------------+ |
| | Command Handler | | Event Store | | Projector | |
| | | | (Local SQLite) | | (Read Model) | |
| | CreateSale |--->| |--->| | |
| | VoidSale | | SaleCreated | | sale_summaries | |
| | AddPayment | | ItemAdded | | inventory_view | |
| +------------------+ | PaymentReceived | +-----------------+ |
| +-------------------+ |
| | |
+-------------------------------------------------------------------------+
| Sync
v
+-------------------------------------------------------------------------+
| CENTRAL API |
| |
| +------------------+ +-------------------+ +-----------------+ |
| | Command Handler | | Event Store | | Projector | |
| | (Validates) | | (PostgreSQL) | | (Read Model) | |
| | |<---| |--->| | |
| | Deduplication | | All tenant events | | sales | |
| | Conflict Check | | Append-only | | inventory_items | |
| +------------------+ | Immutable | | customers | |
| +-------------------+ +-----------------+ |
+-------------------------------------------------------------------------+
CQRS Pattern
CQRS Pattern
============
+----------------------+
| User Action |
+----------+-----------+
|
+----------------------+----------------------+
| |
v v
+-------------------+ +-------------------+
| COMMAND | | QUERY |
| (Write) | | (Read) |
+-------------------+ +-------------------+
| |
v v
+-------------------+ +-------------------+
| Command Handler | | Query Handler |
| - Validate | | - No validation |
| - Business rules | | - Fast lookup |
| - Generate events | | - Denormalized |
+-------------------+ +-------------------+
| ^
v |
+-------------------+ +-------------------+
| Event Store |----------------------->| Read Models |
| (Append-only) | Projections | (Optimized) |
+-------------------+ +-------------------+
Write Side (Commands)
// Commands - Express intent
public record CreateSaleCommand(
Guid SaleId,
Guid LocationId,
Guid EmployeeId,
Guid? CustomerId,
List<SaleLineItemDto> LineItems
);
public record VoidSaleCommand(
Guid SaleId,
Guid EmployeeId,
string Reason
);
public record AddPaymentCommand(
Guid SaleId,
string PaymentMethod,
decimal Amount,
string? Reference
);
Read Side (Queries)
// Queries - Request data
public record GetSaleByIdQuery(Guid SaleId);
public record GetDailySalesQuery(Guid LocationId, DateTime Date);
public record GetInventoryLevelQuery(string Sku, Guid LocationId);
// Read models - Optimized for queries
public class SaleSummaryView
{
public Guid Id { get; set; }
public string SaleNumber { get; set; }
public string CustomerName { get; set; } // Denormalized
public string EmployeeName { get; set; } // Denormalized
public decimal Total { get; set; }
public string Status { get; set; }
public DateTime CreatedAt { get; set; }
}
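How the read side is populated can be sketched in Python: a projector folds domain events into a denormalized summary shaped like SaleSummaryView above. The event shapes are simplified for illustration.

```python
# CQRS read-side sketch: a projector folds domain events into a denormalized
# summary row, so queries never touch the write model. Field names mirror
# SaleSummaryView; the event payloads are simplified.
def project_sale_summary(events: list) -> dict:
    view = {"total": 0.0, "status": "open"}
    for e in events:
        if e["type"] == "SaleCreated":
            view.update(id=e["sale_id"], sale_number=e["sale_number"],
                        customer_name=e.get("customer_name", ""))   # denormalized
        elif e["type"] == "ItemAdded":
            view["total"] += e["quantity"] * e["unit_price"]
        elif e["type"] == "PaymentReceived":
            view["status"] = "completed"
    return view

view = project_sale_summary([
    {"type": "SaleCreated", "sale_id": "s-1", "sale_number": "1001",
     "customer_name": "Ada"},
    {"type": "ItemAdded", "quantity": 2, "unit_price": 4.50},
    {"type": "PaymentReceived", "amount": 9.00},
])
```

Because the view is rebuilt purely from events, it can be dropped and re-projected at any time, which is what makes read-model schema changes cheap.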
L.4A.2 Event Streaming (Apache Kafka, v2.0)
Technology Selection
| Attribute | Selection |
|---|---|
| Platform | Apache Kafka |
| Version | 3.6+ (with KRaft mode) |
| Primary Rationale | Replayability |
Why Kafka over alternatives?
| Alternative | Why Not Selected |
|---|---|
| RabbitMQ | No native replay; messages deleted after consumption |
| Redis Streams | Less durable; not designed for long-term event storage |
| AWS SQS | No replay capability; messages expire |
| PostgreSQL LISTEN/NOTIFY | Notifications are fire-and-forget (no persistence, no replay); adequate for v1.0 single-node notification, but not for v2.0 streaming scale |
Kafka Replayability
+------------------------------------------------------------------+
| KAFKA REPLAYABILITY |
+------------------------------------------------------------------+
| |
| Event Log (Immutable, Ordered): |
| |
| Partition 0: [E1] -> [E2] -> [E3] -> [E4] -> [E5] -> ... |
| ^ ^ |
| | | |
| Consumer Group A: ─────┘ | (Processed up to E2) |
| Consumer Group B: ────────────────────┘ (Processed up to E4) |
| |
| NEW Consumer Group C can start from E1 and replay ALL events! |
| |
+------------------------------------------------------------------+
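The diagram's key property (independent per-group offsets over an immutable log) can be shown in a few lines of Python; the log contents, group names, and offsets are illustrative.

```python
# Replayability sketch: the log is immutable; each consumer group only owns an
# offset into it, so a brand-new group can start at 0 and see every event.
log = ["E1", "E2", "E3", "E4", "E5"]      # one partition, append-only
offsets = {"A": 2, "B": 4}                # group A processed E1-E2, B processed E1-E4

def consume(group: str, n: int) -> list:
    """Read up to n events from the group's current offset, then advance it."""
    start = offsets.setdefault(group, 0)
    batch = log[start:start + n]
    offsets[group] = start + len(batch)
    return batch

next_for_a = consume("A", 1)        # group A resumes at E3
replayed = consume("C", len(log))   # new group C replays the entire log
```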
Kafka Topics Architecture
POS Kafka Topics
================
┌────────────────────────────────────────────────────────────────┐
│ TOPIC STRUCTURE │
├────────────────────────────────────────────────────────────────┤
│ │
│ pos.events.sales - All sale-related events │
│ ├── Partition 0 (Location A) │
│ ├── Partition 1 (Location B) │
│ └── Partition N (Location N) │
│ │
│ pos.events.inventory - Inventory movements │
│ ├── Partition 0-N (By SKU hash) │
│ │
│ pos.events.customers - Customer activity │
│ ├── Partition 0-N (By customer hash) │
│ │
│ pos.sync.outbound - Events to sync to external systems │
│ ├── Shopify, Amazon, etc. │
│ │
│ pos.sync.inbound - Events from external systems │
│ ├── Online orders, inventory updates │
│ │
└────────────────────────────────────────────────────────────────┘
Kafka Configuration (Docker Compose)
# docker-compose.kafka.yml
services:
kafka:
image: confluentinc/cp-kafka:7.5.0
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LOG_RETENTION_HOURS: 168 # 7 days
KAFKA_LOG_RETENTION_BYTES: 10737418240 # 10GB per partition
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
ports:
- "9092:9092"
volumes:
- kafka_data:/var/lib/kafka/data
kafka-ui:
image: provectuslabs/kafka-ui:latest
environment:
KAFKA_CLUSTERS_0_NAME: pos-cluster
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
ports:
- "8090:8080"
Event Publishing Pattern
// KafkaEventPublisher.cs
public class KafkaEventPublisher : IEventPublisher
{
private readonly IProducer<string, string> _producer;
private readonly ILogger<KafkaEventPublisher> _logger;
public async Task PublishAsync<T>(T @event, CancellationToken ct = default)
where T : IDomainEvent
{
var topic = GetTopicForEvent(@event);
var key = GetPartitionKey(@event); // e.g., LocationId for ordering
var message = new Message<string, string>
{
Key = key,
Value = JsonSerializer.Serialize(@event),
Headers = new Headers
{
{ "event-type", Encoding.UTF8.GetBytes(@event.GetType().Name) },
{ "correlation-id", Encoding.UTF8.GetBytes(@event.CorrelationId.ToString()) },
{ "tenant-id", Encoding.UTF8.GetBytes(@event.TenantId.ToString()) }
}
};
var result = await _producer.ProduceAsync(topic, message, ct);
_logger.LogDebug(
"Published {EventType} to {Topic}:{Partition}@{Offset}",
@event.GetType().Name,
result.Topic,
result.Partition.Value,
result.Offset.Value
);
}
private string GetTopicForEvent(IDomainEvent @event) => @event switch
{
SaleCreated or SaleCompleted or SaleVoided => "pos.events.sales",
InventoryReceived or InventorySold => "pos.events.inventory",
CustomerCreated or LoyaltyPointsEarned => "pos.events.customers",
_ => "pos.events.general"
};
}
Schema Registry & Event Versioning
Overview
As the POS platform evolves, event schemas will change. Schema Registry provides:
- Schema Validation: Prevent incompatible events from being published
- Schema Evolution: Safe migrations without breaking consumers
- Schema History: Version tracking for all event types
| Attribute | Selection |
|---|---|
| Tool | Confluent Schema Registry |
| Format | Avro (Primary) or Protobuf |
| Strategy | BACKWARD compatibility |
Schema Registry Architecture
┌─────────────────────────────────────────────────────────────────┐
│ SCHEMA REGISTRY FLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌──────────────────┐ ┌─────────────┐ │
│ │ Producer │ │ Schema Registry │ │ Consumer │ │
│ │ (POS API) │ │ (Confluent) │ │ (Analytics) │ │
│ └──────┬──────┘ └────────┬─────────┘ └──────┬──────┘ │
│ │ │ │ │
│ 1. Register/Get Schema │ │ │
│ │ ─────────────────> │ │ │
│ │ │ │ │
│ 2. Schema ID returned │ │ │
│ │ <───────────────── │ │ │
│ │ │ │ │
│ 3. Publish event with │ │ │
│ schema ID prefix │ │ │
│ │ ─────────────────────────────────────────> │ │
│ │ │ │ │
│ │ 4. Consumer fetches │ │
│ │ schema by ID │ │
│ │ <─────────────────── │ │
│ │ │ │
│ │ 5. Deserialize with │ │
│ │ correct schema │ │
│ │
└─────────────────────────────────────────────────────────────────┘
Avro Schema Definition (SaleCreated)
// schemas/sale-created.avsc
{
"type": "record",
"name": "SaleCreated",
"namespace": "io.posplatform.events.sales",
"doc": "Event fired when a new sale is initiated",
"fields": [
{
"name": "eventId",
"type": { "type": "string", "logicalType": "uuid" },
"doc": "Unique event identifier"
},
{
"name": "saleId",
"type": { "type": "string", "logicalType": "uuid" },
"doc": "Sale aggregate identifier"
},
{
"name": "tenantId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "locationId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "employeeId",
"type": { "type": "string", "logicalType": "uuid" }
},
{
"name": "customerId",
"type": ["null", { "type": "string", "logicalType": "uuid" }],
"default": null,
"doc": "Optional customer for loyalty"
},
{
"name": "saleNumber",
"type": "string"
},
{
"name": "createdAt",
"type": { "type": "long", "logicalType": "timestamp-millis" }
},
{
"name": "metadata",
"type": {
"type": "map",
"values": "string"
},
"default": {}
}
]
}
Schema Evolution Rules (BACKWARD Compatibility)
| Change | Allowed? | Notes |
|---|---|---|
| Add field with default | Yes | New consumers can read old messages |
| Remove field with default | Yes | Old consumers ignore missing field |
| Add field without default | No | Old messages fail validation |
| Remove required field | No | New messages fail for old consumers |
| Change field type | No | Type mismatch errors |
| Rename field | No | Use aliases instead |
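The first and third rows of the table can be demonstrated with a small Python sketch: a v2 reader fills missing fields from defaults when reading a v1 message, while a new field without a default makes old messages unreadable. The field names echo the v2 schema example; the validation logic is a deliberate simplification of Avro's actual schema resolution.

```python
# BACKWARD-compatibility sketch: a v2 reader applies defaults for fields the
# old (v1) message lacks. A field registered with no default (sentinel ...)
# fails on old messages -- the change the table above forbids.
V2_DEFAULTS = {"channel": "in_store", "referralCode": None}   # added WITH defaults

def read_with_schema(message: dict, defaults: dict) -> dict:
    missing = [f for f in defaults if f not in message and defaults[f] is ...]
    if missing:
        raise ValueError(f"old message missing required fields: {missing}")
    return {**defaults, **message}   # defaults fill the gaps, message wins

old_msg = {"saleId": "s-1", "saleNumber": "1001"}   # produced under schema v1
decoded = read_with_schema(old_msg, V2_DEFAULTS)     # v2 reader: defaults fill in

try:
    # Adding a field with NO default is not backward compatible:
    read_with_schema(old_msg, {"channel": ...})
    incompatible_read_ok = True
except ValueError:
    incompatible_read_ok = False
```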
Schema Evolution Example (v2)
// schemas/sale-created-v2.avsc (BACKWARD COMPATIBLE)
{
"type": "record",
"name": "SaleCreated",
"namespace": "io.posplatform.events.sales",
"fields": [
// ... existing fields ...
// NEW FIELD - Added with default value (BACKWARD COMPATIBLE)
{
"name": "channel",
"type": "string",
"default": "in_store",
"doc": "Sales channel: in_store, online, mobile"
},
// NEW OPTIONAL FIELD (BACKWARD COMPATIBLE)
{
"name": "referralCode",
"type": ["null", "string"],
"default": null
}
]
}
Producer Configuration with Schema Registry
// Infrastructure/Messaging/SchemaRegistryProducer.cs
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;
using Microsoft.Extensions.Logging;
public class SchemaRegistryProducer<TKey, TValue> : IEventPublisher
    where TValue : ISpecificRecord
{
    private readonly IProducer<TKey, TValue> _producer;
    private readonly ILogger<SchemaRegistryProducer<TKey, TValue>> _logger;
    public SchemaRegistryProducer(
        string bootstrapServers,
        string schemaRegistryUrl,
        ILogger<SchemaRegistryProducer<TKey, TValue>> logger)
    {
        _logger = logger;
        var schemaRegistryConfig = new SchemaRegistryConfig
        {
            Url = schemaRegistryUrl
        };
var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);
var producerConfig = new ProducerConfig
{
BootstrapServers = bootstrapServers,
Acks = Acks.All, // Wait for all replicas
EnableIdempotence = true
};
_producer = new ProducerBuilder<TKey, TValue>(producerConfig)
.SetKeySerializer(new AvroSerializer<TKey>(schemaRegistry))
.SetValueSerializer(new AvroSerializer<TValue>(schemaRegistry, new AvroSerializerConfig
{
// Fail if schema is not compatible
AutoRegisterSchemas = false,
SubjectNameStrategy = SubjectNameStrategy.TopicRecord
}))
.Build();
}
public async Task PublishAsync(
string topic,
TKey key,
TValue value,
CancellationToken ct = default)
{
var result = await _producer.ProduceAsync(topic, new Message<TKey, TValue>
{
Key = key,
Value = value
}, ct);
        _logger.LogDebug(
            "Published {EventType} to {Topic}:{Partition}@{Offset}",
            typeof(TValue).Name,
            result.Topic,
            result.Partition.Value,
            result.Offset.Value
        );
}
}
CI/CD Schema Validation
# .github/workflows/schema-validation.yml
name: Schema Validation
on:
  pull_request:
    paths:
      - 'schemas/**'
  push:
    branches: [main]
    paths:
      - 'schemas/**'
jobs:
validate-schemas:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Start Schema Registry
run: |
docker compose -f docker/docker-compose.kafka.yml up -d schema-registry
sleep 10
- name: Test Schema Compatibility
run: |
for schema in schemas/*.avsc; do
subject=$(basename "$schema" .avsc)-value
echo "Testing compatibility for $subject"
# Check if schema is BACKWARD compatible with existing
curl -X POST \
-H "Content-Type: application/vnd.schemaregistry.v1+json" \
-d @"$schema" \
"http://localhost:8081/compatibility/subjects/$subject/versions/latest" \
| jq -e '.is_compatible == true' || exit 1
done
- name: Register Schemas (on merge to main)
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
run: |
for schema in schemas/*.avsc; do
subject=$(basename "$schema" .avsc)-value
curl -X POST \
-H "Content-Type: application/vnd.schemaregistry.v1+json" \
-d "{\"schema\": $(cat "$schema" | jq -Rs .)}" \
"http://localhost:8081/subjects/$subject/versions"
done
Docker Compose with Schema Registry
# docker/docker-compose.kafka.yml (updated)
services:
schema-registry:
image: confluentinc/cp-schema-registry:7.5.0
container_name: pos-schema-registry
depends_on:
- kafka
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
# Enforce BACKWARD compatibility by default
SCHEMA_REGISTRY_SCHEMA_COMPATIBILITY_LEVEL: BACKWARD
L.4A.3 Dead Letter Queue Pattern
Overview
When event processing fails (malformed data, business rule violations, transient errors), messages go to a Dead Letter Queue for investigation and replay.
| Attribute | Selection |
|---|---|
| Purpose | Capture failed messages without blocking main flow |
| Retention | 30 days |
| Monitoring | Alert when DLQ depth > threshold |
DLQ Architecture
┌─────────────────────────────────────────────────────────────────┐
│ DLQ PATTERN │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
│ │ pos.events. │ │ Consumer │ │ Handler │ │
│ │ sales │───>│ Group │───>│ Logic │ │
│ │ (Main Topic) │ │ │ │ │ │
│ └───────────────┘ └───────────────┘ └───────┬───────┘ │
│ │ │
│ ┌───────┴───────┐ │
│ │ Success? │ │
│ └───────┬───────┘ │
│ Yes ┌───────┴───────┐ No │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────┐ ┌───────────┐ │
│ │ Commit │ │ Retry │ │
│ │ Offset │ │ Logic │ │
│ └──────────┘ └─────┬─────┘ │
│ │ │
│ ┌──────┴─────┐ │
│ │ Max Retries│ │
│ │ Exceeded? │ │
│ └──────┬─────┘ │
│ No ┌────────┴──────┐│
│ │ ││
│ ▼ ▼│
│ ┌──────────┐ ┌────────┴──┐
│ │ Retry │ │ DLQ │
│ │ Topic │ │ Topic │
│ └──────────┘ └───────────┘
│ pos.events.
│ sales.dlq
└─────────────────────────────────────────────────────────────────┘
DLQ Consumer Implementation
// Infrastructure/Messaging/DlqAwareConsumer.cs
public class DlqAwareConsumer<TKey, TValue>
{
    private readonly IConsumer<TKey, TValue> _consumer;
    private readonly IProducer<string, DeadLetterMessage> _dlqProducer;
    private readonly ILogger _logger;
    private readonly string _consumerGroup;   // consumer group id, recorded in DLQ metadata
    private const int MAX_RETRIES = 3;
private readonly TimeSpan[] _retryDelays = new[]
{
TimeSpan.FromSeconds(1),
TimeSpan.FromSeconds(5),
TimeSpan.FromSeconds(30)
};
public async Task ConsumeWithDlqAsync(
string topic,
Func<ConsumeResult<TKey, TValue>, Task> handler,
CancellationToken ct)
{
_consumer.Subscribe(topic);
while (!ct.IsCancellationRequested)
{
var result = _consumer.Consume(ct);
var retryCount = GetRetryCount(result.Message.Headers);
try
{
await handler(result);
_consumer.Commit(result);
}
catch (TransientException ex) when (retryCount < MAX_RETRIES)
{
_logger.LogWarning(
ex,
"Transient error processing message. Retry {Retry}/{Max}",
retryCount + 1,
MAX_RETRIES
);
await Task.Delay(_retryDelays[retryCount], ct);
await PublishToRetryTopicAsync(result, retryCount + 1);
_consumer.Commit(result);
}
catch (Exception ex)
{
_logger.LogError(
ex,
"Failed to process message after {Retries} retries. Sending to DLQ.",
retryCount
);
await PublishToDlqAsync(result, ex, retryCount);
_consumer.Commit(result);
}
}
}
private async Task PublishToDlqAsync(
ConsumeResult<TKey, TValue> result,
Exception exception,
int retryCount)
{
var dlqMessage = new DeadLetterMessage
{
OriginalTopic = result.Topic,
OriginalPartition = result.Partition.Value,
OriginalOffset = result.Offset.Value,
Key = result.Message.Key?.ToString(),
Value = SerializeValue(result.Message.Value),
Headers = ExtractHeaders(result.Message.Headers),
ErrorType = exception.GetType().FullName,
ErrorMessage = exception.Message,
StackTrace = exception.StackTrace,
RetryCount = retryCount,
FirstFailedAt = GetFirstFailedAt(result.Message.Headers),
LastFailedAt = DateTime.UtcNow,
ConsumerGroup = _consumerGroup,
ConsumerInstance = Environment.MachineName
};
var dlqTopic = $"{result.Topic}.dlq";
await _dlqProducer.ProduceAsync(dlqTopic, new Message<string, DeadLetterMessage>
{
Key = result.Message.Key?.ToString(),
Value = dlqMessage
});
}
}
DLQ Message Structure
// Domain/Events/DeadLetterMessage.cs
public record DeadLetterMessage
{
/// <summary>Original Kafka topic</summary>
public string OriginalTopic { get; init; }
/// <summary>Original partition</summary>
public int OriginalPartition { get; init; }
/// <summary>Original offset</summary>
public long OriginalOffset { get; init; }
/// <summary>Original message key</summary>
public string Key { get; init; }
/// <summary>Original message value (base64 if binary)</summary>
public string Value { get; init; }
/// <summary>Original headers</summary>
public Dictionary<string, string> Headers { get; init; }
/// <summary>Error details</summary>
public string ErrorType { get; init; }
public string ErrorMessage { get; init; }
public string StackTrace { get; init; }
/// <summary>Processing metadata</summary>
public int RetryCount { get; init; }
public DateTime FirstFailedAt { get; init; }
public DateTime LastFailedAt { get; init; }
public string ConsumerGroup { get; init; }
public string ConsumerInstance { get; init; }
}
DLQ Monitoring & Alerting
# prometheus/alerts/dlq-alerts.yml
groups:
- name: kafka-dlq-alerts
rules:
- alert: DLQMessagesAccumulating
expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 100
for: 15m
labels:
severity: warning
annotations:
summary: "DLQ has {{ $value }} unprocessed messages"
description: "Topic {{ $labels.topic }} has accumulated messages"
- alert: DLQCriticalBacklog
expr: kafka_consumer_group_lag{topic=~".*\\.dlq"} > 1000
for: 5m
labels:
severity: critical
annotations:
summary: "CRITICAL: DLQ backlog exceeds 1000 messages"
runbook_url: "https://wiki.internal/runbooks/dlq-overflow"
DLQ Replay Tool
// Tools/DlqReplayService.cs
public class DlqReplayService
{
    private readonly ILogger<DlqReplayService> _logger;

    public DlqReplayService(ILogger<DlqReplayService> logger)
    {
        _logger = logger;
    }

    // CreateDlqConsumer, CreateMainTopicProducer, and ReadDlqMessagesAsync
    // are defined elsewhere in this tool.
    public async Task ReplayMessagesAsync(
        string dlqTopic,
        DateTime? from = null,
        DateTime? to = null,
        Func<DeadLetterMessage, bool>? filter = null)
    {
        var consumer = CreateDlqConsumer(dlqTopic);
        var producer = CreateMainTopicProducer();
        var messages = await ReadDlqMessagesAsync(consumer, from, to);

        foreach (var dlqMessage in messages)
        {
            if (filter != null && !filter(dlqMessage))
            {
                _logger.LogDebug("Skipping message by filter: {Key}", dlqMessage.Key);
                continue;
            }

            _logger.LogInformation(
                "Replaying message from DLQ: Topic={Topic}, Offset={Offset}",
                dlqMessage.OriginalTopic,
                dlqMessage.OriginalOffset
            );

            // Publish back to original topic
            await producer.ProduceAsync(dlqMessage.OriginalTopic, new Message<string, string>
            {
                Key = dlqMessage.Key,
                Value = dlqMessage.Value,
                Headers = new Headers
                {
                    { "x-dlq-replay", Encoding.UTF8.GetBytes("true") },
                    { "x-dlq-original-offset", Encoding.UTF8.GetBytes(dlqMessage.OriginalOffset.ToString()) }
                }
            });
        }

        _logger.LogInformation("Replayed {Count} messages from DLQ", messages.Count);
    }
}
# CLI usage for DLQ replay
dotnet run --project tools/DlqReplay -- \
--topic pos.events.sales.dlq \
--from "2026-01-20T00:00:00Z" \
--filter "ErrorType contains 'Transient'"
L.4A.4 Domain Events Catalog
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Sale Aggregate Events
Sale Events
===========
SaleCreated
+-----------------------+----------------------------------------+
| Field | Description |
+-----------------------+----------------------------------------+
| sale_id | UUID of the new sale |
| sale_number | Human-readable sale number |
| location_id | Where the sale occurred |
| register_id | Which register |
| employee_id | Who created the sale |
| customer_id | Customer (if any) |
| created_at | Timestamp |
+-----------------------+----------------------------------------+
SaleLineItemAdded
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| line_item_id | UUID of the line item |
| product_id | Product being sold |
| variant_id | Variant (if any) |
| sku | SKU at time of sale |
| name | Product name at time of sale |
| quantity | Quantity sold |
| unit_price | Price per unit |
| discount_amount | Line discount |
| tax_amount | Line tax |
+-----------------------+----------------------------------------+
SaleLineItemRemoved
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| line_item_id | UUID of removed item |
| reason | Why removed |
+-----------------------+----------------------------------------+
PaymentReceived
+-----------------------+----------------------------------------+
| sale_id | Parent sale |
| payment_id | UUID of payment |
| payment_method | cash, credit, debit, etc. |
| amount | Payment amount |
| reference | Card last 4, check #, etc. |
| auth_code | Authorization code |
+-----------------------+----------------------------------------+
SaleCompleted
+-----------------------+----------------------------------------+
| sale_id | The sale being completed |
| subtotal | Final subtotal |
| discount_total | Total discounts |
| tax_total | Total tax |
| total | Final total |
| completed_at | Timestamp |
+-----------------------+----------------------------------------+
SaleVoided
+-----------------------+----------------------------------------+
| sale_id | The voided sale |
| voided_by | Employee who voided |
| reason | Void reason |
| voided_at | Timestamp |
+-----------------------+----------------------------------------+
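The Sale events above map naturally onto immutable C# records. A sketch of three of them follows; the `DomainEvent` base type and exact property shapes are illustrative assumptions, not the book's canonical definitions:

```csharp
using System;

// Hypothetical base type; the real event infrastructure may differ.
public abstract record DomainEvent(DateTime OccurredAt);

public record SaleCreated(
    Guid SaleId,
    string SaleNumber,
    Guid LocationId,
    Guid RegisterId,
    Guid EmployeeId,
    Guid? CustomerId,
    DateTime CreatedAt) : DomainEvent(CreatedAt);

public record SaleLineItemAdded(
    Guid SaleId,
    Guid LineItemId,
    Guid ProductId,
    Guid? VariantId,
    string Sku,
    string Name,            // product name captured at time of sale
    int Quantity,
    decimal UnitPrice,
    decimal DiscountAmount,
    decimal TaxAmount,
    DateTime OccurredAt) : DomainEvent(OccurredAt);

public record SaleVoided(
    Guid SaleId,
    Guid VoidedBy,
    string Reason,
    DateTime VoidedAt) : DomainEvent(VoidedAt);
```

Records give value equality and non-destructive mutation for free, which suits append-only event streams.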
Inventory Aggregate Events
Inventory Events
================
InventoryReceived
+-----------------------+----------------------------------------+
| location_id | Where received |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount received |
| cost | Unit cost |
| reference | PO number, transfer # |
| received_by | Employee |
+-----------------------+----------------------------------------+
InventoryAdjusted
+-----------------------+----------------------------------------+
| location_id | Location |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity_change | +/- amount |
| new_quantity | New on-hand quantity |
| reason | count, damage, theft, return |
| adjusted_by | Employee |
| notes | Additional context |
+-----------------------+----------------------------------------+
InventorySold
+-----------------------+----------------------------------------+
| location_id | Where sold |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount sold (positive) |
| sale_id | Related sale |
+-----------------------+----------------------------------------+
InventoryTransferred
+-----------------------+----------------------------------------+
| transfer_id | Transfer document |
| from_location_id | Source location |
| to_location_id | Destination location |
| product_id | Product |
| variant_id | Variant (if any) |
| quantity | Amount transferred |
| transferred_by | Employee |
+-----------------------+----------------------------------------+
InventoryCounted
+-----------------------+----------------------------------------+
| location_id | Location |
| product_id | Product |
| variant_id | Variant |
| expected_quantity | System quantity before count |
| actual_quantity | Physical count |
| variance | Difference |
| counted_by | Employee |
| count_session_id | Batch count session |
+-----------------------+----------------------------------------+
Customer Aggregate Events
Customer Events
===============
CustomerCreated
+-----------------------+----------------------------------------+
| customer_id | New customer UUID |
| customer_number | Human-readable ID |
| first_name | First name |
| last_name | Last name |
| email | Email address |
| phone | Phone number |
| created_by | Employee |
+-----------------------+----------------------------------------+
CustomerUpdated
+-----------------------+----------------------------------------+
| customer_id | Customer UUID |
| changes | Map of field -> {old, new} |
| updated_by | Employee |
+-----------------------+----------------------------------------+
LoyaltyPointsEarned
+-----------------------+----------------------------------------+
| customer_id | Customer |
| points | Points earned |
| sale_id | Related sale |
| new_balance | Updated balance |
+-----------------------+----------------------------------------+
LoyaltyPointsRedeemed
+-----------------------+----------------------------------------+
| customer_id | Customer |
| points | Points redeemed |
| sale_id | Related sale |
| new_balance | Updated balance |
+-----------------------+----------------------------------------+
StoreCreditIssued
+-----------------------+----------------------------------------+
| customer_id | Customer |
| credit_id | Credit UUID |
| amount | Credit amount |
| reason | Why issued |
| issued_by | Employee |
+-----------------------+----------------------------------------+
Employee Aggregate Events
Employee Events
===============
EmployeeClockIn
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| location_id | Where clocking in |
| shift_id | New shift UUID |
| clocked_in_at | Timestamp |
+-----------------------+----------------------------------------+
EmployeeClockOut
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Shift being closed |
| clocked_out_at | Timestamp |
| break_minutes | Total break time |
+-----------------------+----------------------------------------+
EmployeeBreakStarted
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Current shift |
| started_at | Break start time |
+-----------------------+----------------------------------------+
EmployeeBreakEnded
+-----------------------+----------------------------------------+
| employee_id | Employee UUID |
| shift_id | Current shift |
| ended_at | Break end time |
| duration_minutes | Break duration |
+-----------------------+----------------------------------------+
CashDrawer Aggregate Events
Cash Drawer Events
==================
DrawerOpened
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| register_id | Register UUID |
| employee_id | Who opened |
| opening_balance | Starting cash amount |
| opened_at | Timestamp |
+-----------------------+----------------------------------------+
DrawerCashDrop
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount dropped to safe |
| employee_id | Who dropped |
| dropped_at | Timestamp |
+-----------------------+----------------------------------------+
DrawerPaidIn
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount added |
| reason | Why (petty cash, etc.) |
| employee_id | Who added |
+-----------------------+----------------------------------------+
DrawerPaidOut
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| amount | Amount removed |
| reason | Why (vendor payment, etc.) |
| employee_id | Who removed |
+-----------------------+----------------------------------------+
DrawerClosed
+-----------------------+----------------------------------------+
| drawer_id | Drawer UUID |
| employee_id | Who closed |
| closing_balance | Actual cash counted |
| expected_balance | System calculated |
| variance | Difference (over/short) |
| closed_at | Timestamp |
+-----------------------+----------------------------------------+
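The `variance` field of DrawerClosed is a simple over/short calculation. A tiny illustrative helper (the `DrawerMath` name is hypothetical):

```csharp
public static class DrawerMath
{
    // Variance is actual cash counted minus the system-expected balance:
    // positive = over, negative = short.
    public static decimal Variance(decimal closingBalance, decimal expectedBalance)
        => closingBalance - expectedBalance;
}

// Example: system expected $482.50, cashier counted $480.00 -> short $2.50.
// DrawerMath.Variance(480.00m, 482.50m) yields -2.50m
```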
L.4A.5 Event Projection Patterns
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Projection Architecture
=======================
+-------------------+
| Event Stream |
| |
| SaleCreated |
| ItemAdded |
| ItemAdded |
| PaymentReceived |
| SaleCompleted |
         +--------+----------+
                  |
                  | Projector reads events
                  v
+-------------------+ +-------------------+ +-------------------+
| Sale Projector | |Inventory Projector| |Customer Projector |
| | | | | |
| - Build sale view | | - Update stock | | - Update stats |
| - Calculate totals| | - Track movements | | - Loyalty points |
+--------+----------+ +--------+----------+ +--------+----------+
| | |
v v v
+-------------------+ +-------------------+ +-------------------+
| sale_summaries | | inventory_levels | | customer_stats |
| (Read Model) | | (Read Model) | | (Read Model) |
+-------------------+ +-------------------+ +-------------------+
Sale Projector Implementation
// SaleProjector.cs
public class SaleProjector : IEventHandler
{
    private readonly IDbContextFactory<ReadModelDbContext> _dbFactory;

    public SaleProjector(IDbContextFactory<ReadModelDbContext> dbFactory)
    {
        _dbFactory = dbFactory;
    }

    public async Task HandleAsync(SaleCreated @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var view = new SaleSummaryView
        {
            Id = @event.SaleId,
            SaleNumber = @event.SaleNumber,
            LocationId = @event.LocationId,
            EmployeeId = @event.EmployeeId,
            CustomerId = @event.CustomerId,
            Status = "draft",
            Subtotal = 0,
            Total = 0,
            CreatedAt = @event.CreatedAt
        };
        db.SaleSummaries.Add(view);
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleLineItemAdded @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        var lineTotal = @event.Quantity * @event.UnitPrice - @event.DiscountAmount;
        sale.Subtotal += lineTotal;
        sale.ItemCount += @event.Quantity;
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleCompleted @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        sale.Status = "completed";
        sale.DiscountTotal = @event.DiscountTotal;
        sale.TaxTotal = @event.TaxTotal;
        sale.Total = @event.Total;
        sale.CompletedAt = @event.CompletedAt;
        await db.SaveChangesAsync();
    }

    public async Task HandleAsync(SaleVoided @event)
    {
        await using var db = await _dbFactory.CreateDbContextAsync();
        var sale = await db.SaleSummaries.FindAsync(@event.SaleId);
        if (sale == null) return;

        sale.Status = "voided";
        sale.VoidedAt = @event.VoidedAt;
        sale.VoidedBy = @event.VoidedBy;
        sale.VoidReason = @event.Reason;
        await db.SaveChangesAsync();
    }
}
L.4A.6 Temporal Queries
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
Event sourcing enables powerful temporal queries:
-- What was inventory on a specific date?
SELECT
    product_id,
    SUM(CASE
            WHEN event_type = 'InventoryReceived' THEN (event_data->>'quantity')::int
            WHEN event_type = 'InventorySold' THEN -(event_data->>'quantity')::int
            WHEN event_type = 'InventoryAdjusted' THEN (event_data->>'quantity_change')::int
            ELSE 0
        END) as quantity
FROM events
WHERE aggregate_type = 'Inventory'
  AND (event_data->>'location_id')::uuid = '...'
  AND created_at <= '2025-12-15 15:00:00'
GROUP BY product_id;

-- Sales trend for specific product
SELECT
    date_trunc('day', created_at) as date,
    SUM((event_data->>'quantity')::int) as units_sold
FROM events
WHERE event_type = 'InventorySold'
  AND (event_data->>'product_id')::uuid = '...'
  AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY date_trunc('day', created_at)
ORDER BY date;

-- Audit trail for specific sale
SELECT
    event_type,
    event_data,
    created_at,
    created_by
FROM events
WHERE aggregate_type = 'Sale'
  AND aggregate_id = '...'
ORDER BY version;
L.4A.7 Snapshots for Performance
Detailed Implementation Reference (from former Event Sourcing & CQRS chapter, now consolidated here):
For aggregates with many events, snapshots prevent replaying the entire stream:
Snapshot Strategy
=================
Without Snapshots:
  Event 1 -> Event 2 -> ... -> Event 5000 -> Current State
  (Slow for aggregates with many events)

With Snapshots:
  Event 1 -> ... -> Event 1000 -> [Snapshot @ v1000]
                                         |
                                         v
  [Snapshot @ v1000] -> Event 1001 -> ... -> Event 1050 -> Current State
  (Load snapshot, then only replay 50 events)
Snapshot Implementation
// AggregateRepository.cs
public class AggregateRepository<T> where T : AggregateRoot
{
    private readonly IEventStore _eventStore;
    private readonly ISnapshotStore _snapshotStore;
    private const int SNAPSHOT_THRESHOLD = 100;

    public AggregateRepository(IEventStore eventStore, ISnapshotStore snapshotStore)
    {
        _eventStore = eventStore;
        _snapshotStore = snapshotStore;
    }

    public async Task<T> LoadAsync(Guid id)
    {
        var aggregate = Activator.CreateInstance<T>();

        // 1. Try to load snapshot
        var snapshot = await _snapshotStore.GetAsync<T>(id);
        int fromVersion = 0;
        if (snapshot != null)
        {
            aggregate.RestoreFromSnapshot(snapshot.State);
            fromVersion = snapshot.Version;
        }

        // 2. Load events after snapshot
        var events = await _eventStore.GetEventsAsync(id, fromVersion);
        foreach (var @event in events)
        {
            aggregate.Apply(@event);
        }
        return aggregate;
    }

    public async Task SaveAsync(T aggregate)
    {
        var newEvents = aggregate.GetUncommittedEvents();

        // 1. Append events
        await _eventStore.AppendAsync(aggregate.Id, newEvents, aggregate.Version);

        // 2. Create snapshot if threshold reached
        if (aggregate.Version % SNAPSHOT_THRESHOLD == 0)
        {
            var snapshot = aggregate.CreateSnapshot();
            await _snapshotStore.SaveAsync(aggregate.Id, aggregate.Version, snapshot);
        }

        aggregate.ClearUncommittedEvents();
    }
}
L.4B Integration Architecture Patterns
BRD v18.0 Module 6 defines integration patterns that are architecturally significant. This section documents their implementation strategy.
Transactional Outbox Pattern
Guarantees atomic business data persistence + event publication without distributed transactions.
┌──────────────────────────────────────────────────────────┐
│ TRANSACTIONAL OUTBOX PATTERN │
├──────────────────────────────────────────────────────────┤
│ │
│ Application Outbox Relay │
│ ┌─────────────────┐ ┌──────────────────┐ │
│ │ BEGIN TRANSACTION│ │ Poll outbox table│ │
│ │ │ │ every 5 seconds │ │
│ │ 1. Write to │ └────────┬─────────┘ │
│ │ business table│ │ │
│ │ │ ▼ │
│ │ 2. Write to │ ┌──────────────────┐ │
│ │ outbox table │ │ Publish event │ │
│ │ │ │ via LISTEN/NOTIFY│ │
│ │ COMMIT │ └────────┬─────────┘ │
│ └─────────────────┘ │ │
│ ▼ │
│ ┌──────────────────┐ │
│ │ Mark as published│ │
│ │ (idempotent) │ │
│ └──────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
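The left-hand side of the diagram, both writes in one database transaction, can be sketched with EF Core. The `Sale` entity, `SaleDbContext`, and property names here are assumptions, not the canonical schema:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;

// Outbox row shape (illustrative).
public class OutboxMessage
{
    public Guid Id { get; set; }
    public string Topic { get; set; } = "";
    public string EventType { get; set; } = "";
    public string Payload { get; set; } = "";
    public DateTime CreatedAt { get; set; }
    public DateTime? PublishedAt { get; set; }   // relay sets this after publishing
}

public partial class SalesService
{
    // Atomic business write + outbox write, assuming an EF Core SaleDbContext
    // exposing Sales and OutboxMessages sets.
    public async Task CompleteSaleAsync(Sale sale, object @event, SaleDbContext db)
    {
        await using var tx = await db.Database.BeginTransactionAsync();

        db.Sales.Update(sale);                    // 1. business table
        db.OutboxMessages.Add(new OutboxMessage   // 2. outbox table, same txn
        {
            Id = Guid.NewGuid(),
            Topic = "pos.events.sales",
            EventType = @event.GetType().Name,
            Payload = JsonSerializer.Serialize(@event),
            CreatedAt = DateTime.UtcNow
        });

        await db.SaveChangesAsync();
        await tx.CommitAsync();                   // both rows commit or neither does
    }
}
```

Because the outbox row rides the same commit as the business row, the relay can publish at-least-once without a distributed transaction; consumers must therefore be idempotent.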
Provider Abstraction (Strategy Pattern)
┌──────────────────────────────────────────────────────────┐
│ PROVIDER ABSTRACTION PATTERN │
├──────────────────────────────────────────────────────────┤
│ │
│ IIntegrationProvider │
│ ┌──────────────────┐ │
│ │ + Connect() │ │
│ │ + SyncProducts() │ │
│ │ + SyncInventory()│ │
│ │ + ValidateData() │ │
│ │ + HealthCheck() │ │
│ └────────┬─────────┘ │
│ │ │
│ ┌─────────────┼─────────────┐ │
│ ▼ ▼ ▼ │
│ ┌────────────┐┌────────────┐┌─────────────────┐ │
│ │ Shopify ││ Amazon ││ Google │ │
│ │ Provider ││ Provider ││ Merchant │ │
│ │ ││ ││ Provider │ │
│ │ GraphQL ││ REST/LWA ││ REST/Service Acct│ │
│ │ 50pts/sec ││ Burst+Tok ││ Quota-based │ │
│ │ Webhooks ││ 2min Poll ││ 2x/day Batch │ │
│ └────────────┘└────────────┘└─────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
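The contract in the diagram might look like this in C#; the method signatures are assumptions extrapolated from the box above:

```csharp
using System.Threading;
using System.Threading.Tasks;

public interface IIntegrationProvider
{
    string ProviderName { get; }
    Task ConnectAsync(CancellationToken ct = default);
    Task SyncProductsAsync(CancellationToken ct = default);
    Task SyncInventoryAsync(CancellationToken ct = default);
    Task<bool> ValidateDataAsync(CancellationToken ct = default);
    Task<bool> HealthCheckAsync(CancellationToken ct = default);
}

// Each concrete provider encapsulates its own protocol and rate-limit model.
// Stubbed here for illustration only.
public sealed class ShopifyProvider : IIntegrationProvider
{
    public string ProviderName => "shopify";   // GraphQL, 50 points/sec, webhooks
    public Task ConnectAsync(CancellationToken ct = default) => Task.CompletedTask;
    public Task SyncProductsAsync(CancellationToken ct = default) => Task.CompletedTask;
    public Task SyncInventoryAsync(CancellationToken ct = default) => Task.CompletedTask;
    public Task<bool> ValidateDataAsync(CancellationToken ct = default) => Task.FromResult(true);
    public Task<bool> HealthCheckAsync(CancellationToken ct = default) => Task.FromResult(true);
}
```

Callers depend only on `IIntegrationProvider`, so adding a marketplace means adding a class, not touching sync orchestration.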
Safety Buffer Computation
Per BRD Section 6.7.2, channel-available quantity is calculated as:
Channel Available = POS Available - Safety Buffer
┌──────────────────────────────────────────────────────────┐
│ SAFETY BUFFER COMPUTATION │
├──────────────────────────────────────────────────────────┤
│ │
│ 4-Level Priority Resolution: │
│ 1. Product-Level Override (highest priority) │
│ 2. Category-Level Default │
│ 3. Channel-Level Default │
│ 4. Global Default (lowest priority) │
│ │
│ 3 Calculation Modes: │
│ ┌──────────────────────────────────────────────────┐ │
│ │ FIXED: Buffer = fixed_quantity │ │
│ │ PERCENTAGE: Buffer = pos_available * percentage │ │
│ │ MIN_RESERVE: Buffer = pos_available - min_reserve │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ Example (FIXED mode, buffer = 2): │
│ POS Available: 10 → Channel Available: 8 │
│ │
│ Example (PERCENTAGE mode, 20%): │
│ POS Available: 10 → Buffer: 2 → Channel Available: 8 │
│ │
└──────────────────────────────────────────────────────────┘
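After the 4-level priority resolution yields a single policy, the three calculation modes reduce to a small function. Type names here are illustrative; MIN_RESERVE is implemented exactly as the box states (buffer = pos_available - min_reserve, which caps what channels see at the reserve amount), with results clamped at zero:

```csharp
using System;

public enum BufferMode { Fixed, Percentage, MinReserve }

public record SafetyBufferPolicy(BufferMode Mode, decimal Value);

public static class ChannelAvailability
{
    // Channel Available = POS Available - Safety Buffer (never below zero).
    public static int Compute(int posAvailable, SafetyBufferPolicy policy)
    {
        decimal buffer = policy.Mode switch
        {
            BufferMode.Fixed      => policy.Value,
            BufferMode.Percentage => posAvailable * policy.Value,   // e.g. 0.20m = 20%
            BufferMode.MinReserve => posAvailable - policy.Value,   // as specified above
            _                     => 0m
        };
        buffer = Math.Max(0m, buffer);
        return Math.Max(0, posAvailable - (int)Math.Ceiling(buffer));
    }
}

// ChannelAvailability.Compute(10, new SafetyBufferPolicy(BufferMode.Fixed, 2m))        -> 8
// ChannelAvailability.Compute(10, new SafetyBufferPolicy(BufferMode.Percentage, 0.20m)) -> 8
```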
L.5 Architecture Documentation & Traceability
Goal: ensure the documented (“soft”) architecture matches the code and enables rapid root-cause analysis.
| Aspect | Selection |
|---|---|
| Strategy | “Diagrams as Code” to prevent documentation drift |
| Tooling | Structurizr (C4 Model) or Mermaid.js |
| Implementation | Architecture diagrams committed to Git repository alongside source code |
| Automation | Use Claude Code CLI to auto-generate updates to diagrams during refactoring |
C4 Model Levels
+-------------------------------------------------------------------+
| C4 MODEL HIERARCHY |
+-------------------------------------------------------------------+
| |
| Level 1: System Context |
| +------------------+ +------------------+ +-------------+ |
| | POS Client |<--->| Central API |<--->| Shopify | |
| | (Terminals) | | (Cloud) | | Amazon | |
| +------------------+ +------------------+ +-------------+ |
| |
| Level 2: Container Diagram |
| +------------------+ +------------------+ +-------------+ |
| | POS App | | API Gateway | | Kafka | |
| | (SQLite) | | Auth Service | | Cluster | |
| +------------------+ | Sales Module | +-------------+ |
| | Inventory Mod | +-------------+ |
| +------------------+ | PostgreSQL | |
| +-------------+ |
| |
| Level 3: Component Diagram (per module) |
| Level 4: Code Diagram (class/sequence) |
| |
+-------------------------------------------------------------------+
L.6 Quality Assurance (QA) & Testing Strategy
Goal: ensure end-to-end reliability for financial transactions.
E2E (End-to-End) Testing
| Attribute | Selection |
|---|---|
| Tool | Cypress or Playwright |
| Scope | Full simulation: Cashier login → Scan Item → Process Payment → Print Receipt |
Example Test Flow:
1. Cashier authenticates with PIN
2. Scan barcode (NXJ1078)
3. Apply discount (if applicable)
4. Select payment method (Cash/Card)
5. Process payment
6. Print/email receipt
7. Verify inventory decremented
8. Verify event published to Kafka
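Since Playwright is one of the listed options, the flow above might be scripted with its .NET bindings. The selectors, URL, PIN, and element IDs below are hypothetical; the real suite would target the actual POS UI:

```csharp
using System.Threading.Tasks;
using Microsoft.Playwright;

public class CheckoutFlowTests
{
    public async Task CashSale_DecrementsInventory()
    {
        using var playwright = await Playwright.CreateAsync();
        await using var browser = await playwright.Chromium.LaunchAsync();
        var page = await browser.NewPageAsync();

        await page.GotoAsync("https://pos.local/login");
        await page.FillAsync("#pin", "1234");          // 1. cashier PIN auth
        await page.ClickAsync("#login");

        await page.FillAsync("#barcode", "NXJ1078");   // 2. scan barcode
        await page.Keyboard.PressAsync("Enter");

        await page.ClickAsync("#pay-cash");            // 4-5. cash payment
        await page.ClickAsync("#complete-sale");

        // 7. verify inventory decremented in the UI; the event-published
        //    assertion (step 8) would use a test consumer, omitted here.
        var qty = await page.InnerTextAsync("#on-hand-NXJ1078");
    }
}
```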
Load Testing
| Attribute | Selection |
|---|---|
| Tool | k6 or JMeter |
| Scope | Simulate “Black Friday” traffic (500 concurrent users) |
Black Friday Scenario:
Concurrent Users: 500
Duration: 30 minutes
Target TPS: 1000 transactions/second
Acceptable Latency: p99 < 500ms
Code Management
| Attribute | Selection |
|---|---|
| Platform | GitHub/GitLab |
| Versioning | Semantic Versioning (tags v1.x.x) |
| Traceability | Exact code version deployed to each POS terminal |
L.7 Observability & Monitoring Strategy
Primary Pattern
| Attribute | Selection |
|---|---|
| Pattern | OpenTelemetry (OTel) “Trace-to-Code” Pipeline |
| Rationale | Industry-standard OTel protocol prevents vendor lock-in and enables tracing an error from a specific store directly to the line of code |
Technology Stack (The “LGTM” Stack)
| Component | Tool | Purpose |
|---|---|---|
| L - Logs | Loki | Log aggregation |
| G - Grafana | Grafana | Visualization dashboards |
| T - Traces | Tempo (or Jaeger) | Distributed tracing |
| M - Metrics | Prometheus | Metrics collection |
Instrumentation
| Layer | Instrumentation |
|---|---|
| API | OpenTelemetry auto-instrumentation (.NET) |
| Database | Query tracing, slow query logging |
| Events | PostgreSQL event tables with LISTEN/NOTIFY (v1.0), correlation IDs for tracing |
| POS Client | Local telemetry buffer, sync on reconnect |
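For the API layer, the OTel wiring in ASP.NET Core might look like the sketch below. The extension methods are the standard OpenTelemetry .NET ones (Hosting, AspNetCore/HttpClient instrumentation, Npgsql, OTLP, and Prometheus exporter packages); the service name and Tempo endpoint are assumptions:

```csharp
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("pos-central-api"))
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()     // incoming HTTP spans
        .AddHttpClientInstrumentation()     // outbound calls (Shopify, etc.)
        .AddNpgsql()                        // PostgreSQL query spans
        .AddOtlpExporter(o => o.Endpoint = new Uri("http://tempo:4317")))
    .WithMetrics(m => m
        .AddAspNetCoreInstrumentation()
        .AddPrometheusExporter());          // scraped by Prometheus

var app = builder.Build();
app.MapPrometheusScrapingEndpoint();        // exposes /metrics
app.Run();
```

With OTLP as the wire protocol, swapping Tempo for Jaeger is a configuration change, not a code change.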
L.8 Security & Compliance Strategy
Primary Pattern
| Attribute | Selection |
|---|---|
| Pattern | 6-Gate Security Test Pyramid with DevSecOps for PCI Compliance |
| Rationale | Claude Code agents generate the full codebase. A single SonarQube gate is insufficient to catch missing authorization checks, incorrect OAuth implementation, SAQ-A violations, architecture drift, or insecure CORS/CSP headers. The 6-gate pyramid ensures defense-in-depth for AI-generated code. |
6-Gate Security Test Pyramid
| Gate | Tool | Purpose | Blocks Merge? |
|---|---|---|---|
| 1. SAST | SonarQube / CodeQL | Static code vulnerability scanning (SQLi, XSS, hardcoded secrets) | Yes |
| 2. SCA | Snyk / OWASP Dependency-Check | Package vulnerability scanning + SBOM generation (PCI-DSS 4.0 Req 6.3.2) | Yes |
| 3. Secrets Detection | GitLeaks / TruffleHog | Credential leak prevention in source code and commit history | Yes |
| 4. Architecture Conformance | ArchUnit / NetArchTest | Module boundary enforcement, dependency rules (e.g., Module 6 cannot directly access Module 1 internals) | Yes |
| 5. Contract Tests | Pact | Shopify/Amazon/Google sandbox API contract verification; webhook signature validation | Yes |
| 6. Manual Security Review | Human reviewer | Security-critical paths: payment flows, credential vault access, OAuth token handling, PCI boundary | Yes (tagged PRs only) |
┌──────────────────────────────────────────────────────────┐
│ 6-GATE SECURITY TEST PYRAMID │
├──────────────────────────────────────────────────────────┤
│ │
│ ┌─────────┐ │
│ │ Manual │ Gate 6 │
│ │ Review │ (Security-critical PRs) │
│ ┌─┴─────────┴─┐ │
│ │ Contract │ Gate 5 │
│ │ Tests │ (Pact + Sandboxes) │
│ ┌─┴─────────────┴─┐ │
│ │ Architecture │ Gate 4 │
│ │ Conformance │ (ArchUnit) │
│ ┌─┴─────────────────┴─┐ │
│ │ Secrets Detection │ Gate 3 │
│ │ (GitLeaks) │ │
│ ┌─┴─────────────────────┴─┐ │
│ │ SCA (Snyk + SBOM) │ Gate 2 │
│ ┌─┴─────────────────────────┴─┐ │
│ │ SAST (SonarQube / CodeQL) │ Gate 1 │
│ └─────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
FIM (File Integrity Monitoring) - PCI Requirement
| Attribute | Selection |
|---|---|
| Tool | Wazuh or OSSEC |
| Action | Monitors POS terminals and servers for unauthorized file changes |
| PCI Reference | PCI-DSS 4.0 Req 11.5.1 |
| Criticality | Essential for detecting skimmers, tampering, and supply chain compromise |
Credential Vault Architecture
| Attribute | Selection |
|---|---|
| Technology | HashiCorp Vault (Docker container) |
| Deployment | Single Vault instance with auto-unseal; Docker Compose alongside PostgreSQL |
Key Hierarchy:
Master Encryption Key (Vault auto-unseal)
└── Tenant-Specific Keys
├── tenant_nexus_key
│ ├── Shopify OAuth tokens
│ ├── Amazon LWA credentials
│ ├── Google Service Account key
│ ├── Payment processor tokens
│ ├── SMTP credentials
│ └── Webhook signing keys
└── tenant_acme_key
└── ... (same structure)
6 Credential Types:
| # | Credential Type | Provider | Auth Method | Rotation |
|---|---|---|---|---|
| 1 | Shopify OAuth token | Shopify | OAuth 2.0 / PKCE | On expiry + 90-day forced |
| 2 | Amazon LWA credentials | Amazon | Login with Amazon (OAuth) | On expiry + 90-day forced |
| 3 | Google Service Account | Google | Service Account JSON key | 90-day rotation |
| 4 | Payment processor token | Various | API key / OAuth | 90-day rotation |
| 5 | SMTP credentials | Email provider | Username/password | 90-day rotation |
| 6 | Webhook signing keys | All providers | HMAC-SHA256 | On compromise + 90-day |
Access Policy: Least privilege; application-role-based access. Integration services can only read their own provider credentials. Credential writes require admin role with MFA.
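A minimal read path against Vault's KV v2 HTTP API (GET `/v1/<mount>/data/<path>`) could look like this. The mount and secret paths are illustrative, and production code would use a maintained client library with short-lived, renewable tokens:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;
using System.Threading.Tasks;

public class VaultCredentialReader
{
    private readonly HttpClient _http;

    public VaultCredentialReader(Uri vaultAddr, string token)
    {
        _http = new HttpClient { BaseAddress = vaultAddr };
        _http.DefaultRequestHeaders.Add("X-Vault-Token", token);
    }

    public async Task<string?> GetSecretAsync(string tenant, string name, string field)
    {
        // KV v2 read; Vault nests the payload under data.data.
        var doc = await _http.GetFromJsonAsync<JsonDocument>(
            $"v1/secret/data/{tenant}/{name}");
        return doc?.RootElement
            .GetProperty("data").GetProperty("data")
            .GetProperty(field).GetString();
    }
}

// var reader = new VaultCredentialReader(new Uri("http://vault:8200"), appToken);
// var shopifyToken = await reader.GetSecretAsync("tenant_nexus", "shopify", "access_token");
```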
DevSecOps Pipeline
┌───────────────────────────────────────────────────────────────────┐
│ DEVSECOPS PIPELINE (v2.0) │
├───────────────────────────────────────────────────────────────────┤
│ │
│ Developer / Claude Code Agent │
│ │ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Pre-commit │──►│ Gate 1: │──►│ Gate 2: │ │
│ │ Hooks │ │ SAST │ │ SCA + SBOM │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │ │
│ ┌──────────────────────────────────┘ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Gate 3: │──►│ Gate 4: │──►│ Gate 5: │ │
│ │ Secrets │ │ ArchUnit │ │ Pact Tests │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │ │
│ ┌──────────────────────────────────┘ │
│ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ E2E Tests │──►│ Gate 6: │──►│ Deploy │ │
│ │ (Cypress) │ │ Manual │ │ + Wazuh │ │
│ └────────────┘ │ (if tagged)│ │ FIM │ │
│ └────────────┘ └────────────┘ │
│ │
└───────────────────────────────────────────────────────────────────┘
Offline Queue Security
POS terminals operating offline accumulate queued transactions that must be protected against tampering, interception, and replay attacks.
| Control | Implementation | Purpose |
|---|---|---|
| Queue Encryption | AES-256-GCM with device-specific key | Protects queued transactions at rest on SQLite |
| Tamper Detection | HMAC-SHA256 over each queued transaction | Detects modification of queued data before sync |
| Transaction Signing | Device certificate signs each transaction | Non-repudiation; proves transaction originated from authorized terminal |
| Replay Prevention | Monotonic sequence number + timestamp | Prevents re-submission of previously synced transactions |
| Key Storage | Device secure enclave / TPM where available | Protects encryption keys from extraction |
┌──────────────────────────────────────────────────────────┐
│ OFFLINE QUEUE SECURITY MODEL │
├──────────────────────────────────────────────────────────┤
│ │
│ Transaction Created (Offline) │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────┐ │
│ │ Serialize │───►│ HMAC-SHA256 │───►│ AES-256 │ │
│ │ Transaction │ │ (Integrity) │ │ Encrypt │ │
│ └─────────────┘ └──────────────┘ └──────┬─────┘ │
│ │ │
│ ▼ │
│ ┌───────────┐ │
│ │ SQLite │ │
│ │ Queue │ │
│ └───────────┘ │
│ │ │
│ Network Restored │ │
│ ▼ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────┐ │
│ │ Verify │◄───│ Decrypt │◄───│ Read from │ │
│ │ HMAC + Seq │ │ AES-256 │ │ Queue │ │
│ └──────┬──────┘ └──────────────┘ └────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ Sync to │ │
│ │ Central API │ │
│ └─────────────┘ │
│ │
└──────────────────────────────────────────────────────────┘
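The encrypt-and-MAC step in the diagram can be sketched with .NET primitives. Key sourcing from the secure enclave/TPM is out of scope here, and the helper name is illustrative:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class OfflineQueueProtector
{
    public static (byte[] Nonce, byte[] Cipher, byte[] Tag, byte[] Hmac) Protect(
        string transactionJson, long sequence, byte[] aesKey, byte[] hmacKey)
    {
        // 1. HMAC-SHA256 over sequence + payload detects tampering and,
        //    with the monotonic sequence number, replay/reordering.
        byte[] plain = Encoding.UTF8.GetBytes($"{sequence}|{transactionJson}");
        byte[] hmac = HMACSHA256.HashData(hmacKey, plain);

        // 2. AES-256-GCM encrypts the record at rest; fresh nonce per record.
        byte[] nonce = RandomNumberGenerator.GetBytes(AesGcm.NonceByteSizes.MaxSize);
        byte[] cipher = new byte[plain.Length];
        byte[] tag = new byte[AesGcm.TagByteSizes.MaxSize];
        using var gcm = new AesGcm(aesKey, tag.Length);
        gcm.Encrypt(nonce, plain, cipher, tag);

        return (nonce, cipher, tag, hmac);
    }
}
```

On sync, the terminal decrypts, re-computes the HMAC, and rejects any record whose MAC or sequence number does not verify.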
L.9 Diagrammatic Overview
System Architecture (Mermaid)
graph TD
    subgraph Client_Device ["POS Client"]
        UI[UI Layer]
        SL[Service Layer]
        DB_Local[(SQLite)]
        SL --> DB_Local
    end

    subgraph Cloud_Infrastructure ["Cloud Infrastructure"]
        LB[Load Balancer]
        subgraph Central_API ["Central API (Modular Monolith)"]
            Auth[Auth Module]
            Sales[Sales Module]
            Inv[Inventory Module]
        end
        subgraph Data_Layer ["Data Layer"]
            PG[(PostgreSQL)]
            Events[(PG Events)]
        end
    end

    subgraph DevOps_Pipeline ["DevSecOps & Traceability"]
        Git[GitHub - Semantic Ver]
        Struct[Structurizr - Docs]
        Sonar[SonarQube - SAST]
        Cypress[Cypress - E2E]
        Wazuh[Wazuh - FIM/PCI]
    end

    SL --> LB
    LB --> Auth
    Auth --> Sales
    Sales --> Events
    Sales --> PG
    Git --> Sonar
    Sonar --> Cypress
    Cypress --> Struct
    Wazuh -.-> Central_API
    Wazuh -.-> Client_Device
ASCII Version
+------------------------------------------------------------------+
| NEXUS POS ARCHITECTURE |
+------------------------------------------------------------------+
| |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ POS CLIENT (STORE) │ |
| │ ┌──────────┐ ┌──────────────┐ ┌──────────────────┐ │ |
| │ │ UI │───▶│ Service Layer│───▶│ SQLite (Local) │ │ |
| │ │ (MAUI) │ │ (Plugins) │ │ (Offline Data) │ │ |
| │ └──────────┘ └──────────────┘ └──────────────────┘ │ |
| └──────────────────────────┬──────────────────────────────────┘ |
| │ |
| ▼ (Sync when online) |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ CLOUD INFRASTRUCTURE │ |
| │ │ |
| │ ┌──────────────────────────────────────────────────────┐ │ |
| │ │ CENTRAL API (Modular Monolith) │ │ |
| │ │ ┌────────┐ ┌────────┐ ┌──────────┐ ┌──────────┐ │ │ |
| │ │ │ Auth │ │ Sales │ │Inventory │ │ Catalog │ │ │ |
| │ │ └────────┘ └────────┘ └──────────┘ └──────────┘ │ │ |
| │ └──────────────────────┬───────────────────────────────┘ │ |
| │ │ │ |
| │ ┌───────────────┼───────────────┐ │ |
| │ ▼ ▼ ▼ │ |
| │ ┌───────────┐ ┌───────────┐ ┌───────────────┐ │ |
| │ │PostgreSQL │ │ HashiCorp │ │ External │ │ |
| │ │(Events + │ │ Vault │ │ Systems │ │ |
| │ │ State) │ │(Secrets) │ │(Shopify, etc.)│ │ |
| │ └───────────┘ └───────────┘ └───────────────┘ │ |
| └─────────────────────────────────────────────────────────────┘ |
| |
+------------------------------------------------------------------+
L.9A System Architecture Reference
Detailed Implementation Reference (from former High-Level Architecture chapter, now consolidated here):
Complete System Architecture Diagram
+===========================================================================+
| CLOUD LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Shopify API | | Payment Gateway | | Tax Service | |
| | (E-commerce) | | (Stripe/Square) | | (TaxJar) | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
+===========|=====================|=====================|====================+
| | |
v v v
+===========================================================================+
| API GATEWAY LAYER |
| +---------------------------------------------------------------------+ |
| | Kong / NGINX Gateway | |
| | +-------------+ +-------------+ +-------------+ +-------------+ | |
| | | Rate Limit | | Auth | | Routing | | Logging | | |
| | +-------------+ +-------------+ +-------------+ +-------------+ | |
| +---------------------------------------------------------------------+ |
+===========================================================================+
|
v
+===========================================================================+
| CENTRAL API LAYER |
| (ASP.NET Core 8.0 / Node.js) |
| |
| +------------------+ +------------------+ +------------------+ |
| | Catalog Service | | Sales Service | |Inventory Service| |
| | | | | | | |
| | - Products | | - Transactions | | - Stock Levels | |
| | - Categories | | - Receipts | | - Adjustments | |
| | - Pricing | | - Refunds | | - Transfers | |
| | - Variants | | - Layaways | | - Counts | |
| +------------------+ +------------------+ +------------------+ |
| |
| +------------------+ +------------------+ +------------------+ |
| |Customer Service | |Employee Service | | Sync Service | |
| | | | | | | |
| | - Profiles | | - Users | | - Shopify Sync | |
| | - Loyalty | | - Roles | | - Offline Sync | |
| | - History | | - Permissions | | - Event Queue | |
| | - Credits | | - Shifts | | - Conflict Res | |
| +------------------+ +------------------+ +------------------+ |
| |
+===========================================================================+
|
v
+===========================================================================+
| DATA LAYER |
| +---------------------------------------------------------------------+ |
| | PostgreSQL 16 Cluster | |
| | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| | | shared schema | | tenant_nexus | | tenant_acme | | |
| | | (platform) | | (Nexus Clothing)| | (Acme Retail) | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| | | |
| +---------------------------------------------------------------------+ |
| +------------------+ +------------------+ |
| | Redis | | Event Store | |
| | (Cache/Queue) | | (Append-Only) | |
| +------------------+ +------------------+ |
+===========================================================================+
+===========================================================================+
| CLIENT APPLICATIONS |
| |
| +------------------+ +------------------+ +------------------+ |
| | POS Client | | Admin Portal | | Raptag Mobile | |
| | (Desktop App) | | (React SPA) | | (.NET MAUI) | |
| | | | | | | |
| | - Sales Terminal | | - Dashboard | | - RFID Scanning | |
| | - Offline Mode | | - Reports | | - Inventory | |
| | - Local SQLite | | - Configuration | | - Quick Counts | |
| | - Receipt Print | | - User Mgmt | | - Transfers | |
| +------------------+ +------------------+ +------------------+ |
| |
+===========================================================================+
Three-Tier Architecture Detail
Tier 1: Cloud Layer (External Services)
| Service | Purpose | Protocol | Data Flow |
|---|---|---|---|
| Shopify API | E-commerce sync | REST/GraphQL | Bidirectional |
| Payment Gateway | Card processing | REST + Webhooks | Request/Response |
| Tax Service | Tax calculation | REST | Request/Response |
| Email Service | Notifications | SMTP/API | Outbound only |
| SMS Service | Alerts | API | Outbound only |
Cloud Integration Flow
======================
Shopify Payment Gateway Tax Service
| | |
| Products, Orders | Authorization | Rate Lookup
| Inventory | Capture | Calculation
| | Refund |
v v v
+----------------------------------------------------------------+
| Integration Adapters |
| +---------------+ +------------------+ +------------------+ |
| |ShopifyAdapter | | PaymentAdapter | | TaxAdapter | |
| +---------------+ +------------------+ +------------------+ |
+----------------------------------------------------------------+
|
v
[Central API Services]
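The adapter layer above exists so core services never depend on a vendor SDK directly. A minimal sketch of that boundary, in Python for illustration (the `IntegrationAdapter` interface, `TaxAdapter` class, and in-memory rate table are hypothetical stand-ins for a real tax-service client):

```python
from abc import ABC, abstractmethod

# Hypothetical adapter interface: each external service is wrapped so the
# core services call a stable contract, not a vendor SDK.
class IntegrationAdapter(ABC):
    @abstractmethod
    def health_check(self) -> bool: ...

class TaxAdapter(IntegrationAdapter):
    """Normalizes a tax service's rate lookup into a single call."""
    def __init__(self, rates_by_region: dict):
        self._rates = rates_by_region  # stand-in for the remote API

    def health_check(self) -> bool:
        return True

    def calculate_tax(self, region: str, subtotal: float) -> float:
        rate = self._rates.get(region, 0.0)
        return round(subtotal * rate, 2)

adapter = TaxAdapter({"CA": 0.0725})
print(adapter.calculate_tax("CA", 100.00))  # 7.25
```

Swapping tax providers then means writing a new adapter, not touching the Central API services.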
Tier 2: Central API Layer (Application Services)
API Gateway
Request Flow Through Gateway
============================
Client Request
|
v
+--------------------------------------------------+
| API GATEWAY |
| |
| 1. [Rate Limiting] -----> 100 req/min/client |
| | |
| v |
| 2. [Authentication] ----> JWT Validation |
| | |
| v |
| 3. [Tenant Resolution] -> Extract tenant_id |
| | |
| v |
| 4. [Request Logging] ---> Correlation ID |
| | |
| v |
| 5. [Route to Service] --> /api/v1/sales/* |
| |
+--------------------------------------------------+
|
v
Service Handler
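The five stages above can be sketched as an ordered middleware chain. This is illustrative only (a real Kong/NGINX gateway implements each stage as a plugin, and the field names here are hypothetical); the JWT is shown pre-decoded since signature and expiry checks belong to the authentication stage:

```python
import uuid

def gateway(request, handlers):
    # 1. Rate limiting (100 req/min/client)
    if request.get("requests_this_minute", 0) >= 100:
        return {"status": 429, "error": "rate limit exceeded"}
    # 2. Authentication: a real gateway validates the JWT signature/expiry here
    if not request.get("jwt"):
        return {"status": 401, "error": "missing token"}
    # 3. Tenant resolution from the token's tenant claim
    request["tenant_id"] = request["jwt"].get("tenant_id")
    if not request["tenant_id"]:
        return {"status": 403, "error": "no tenant claim"}
    # 4. Request logging with a correlation ID
    request["correlation_id"] = str(uuid.uuid4())
    # 5. Route by path prefix to the owning service
    prefix = "/".join(request["path"].split("/")[:4])  # e.g. /api/v1/sales
    handler = handlers.get(prefix)
    return handler(request) if handler else {"status": 404}

handlers = {"/api/v1/sales": lambda req: {"status": 200, "tenant": req["tenant_id"]}}
resp = gateway({"jwt": {"tenant_id": "nexus"}, "path": "/api/v1/sales/123"}, handlers)
print(resp)  # {'status': 200, 'tenant': 'nexus'}
```

Note the ordering: rate limiting runs before authentication so unauthenticated floods are rejected cheaply.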
Core Services
| Service | Responsibilities | Key Endpoints |
|---|---|---|
| Catalog Service | Products, categories, pricing, variants | /api/v1/products/* |
| Sales Service | Transactions, receipts, refunds, holds | /api/v1/sales/* |
| Inventory Service | Stock levels, adjustments, transfers | /api/v1/inventory/* |
| Customer Service | Profiles, loyalty, purchase history | /api/v1/customers/* |
| Employee Service | Users, roles, permissions, shifts | /api/v1/employees/* |
| Sync Service | Offline sync, conflict resolution | /api/v1/sync/* |
Tier 3: Data Layer (Persistence)
Data Layer Architecture
=======================
+------------------+ +------------------+ +------------------+
| PostgreSQL | | Redis | | Event Store |
| (Primary DB) | | (Cache/Queue) | | (Append-Only) |
+------------------+ +------------------+ +------------------+
| | |
| | |
+-------v------------------------v------------------------v--------+
| |
| Schema: shared Cache Keys Events |
| +--------------+ +------------+ +-------------+ |
| | tenants | | product: | | SaleCreated | |
| | plans | | {id} | | ItemAdded | |
| | features | | session: | | PaymentRcvd | |
| +--------------+ | {token} | | StockAdj | |
| | inventory: | +-------------+ |
| Schema: tenant_xxx | {sku} | |
| +--------------+ +------------+ |
| | products | |
| | sales | |
| | inventory | |
| | customers | |
| +--------------+ |
| |
+-------------------------------------------------------------------+
Client Applications
POS Client (Desktop)
POS Client Architecture
=======================
+-------------------------------------------------------------------+
| POS CLIENT (Electron/Tauri) |
| |
| +-----------------------+ +---------------------------+ |
| | UI Layer | | Local Storage | |
| | +----------------+ | | +--------------------+ | |
| | | Sales Screen | | | | SQLite Database | | |
| | +----------------+ | | | | | |
| | | Product Grid | | | | - products_cache | | |
| | +----------------+ | | | - pending_sales | | |
| | | Cart Panel | | | | - sync_queue | | |
| | +----------------+ | | +--------------------+ | |
| | | Payment Dialog | | | | |
| | +----------------+ | +---------------------------+ |
| +-----------------------+ |
| |
| +-----------------------+ +---------------------------+ |
| | Service Layer | | Hardware Layer | |
| | +----------------+ | | +--------------------+ | |
| | | SaleService | | | | Receipt Printer | | |
| | +----------------+ | | +--------------------+ | |
| | | SyncService | | | | Barcode Scanner | | |
| | +----------------+ | | +--------------------+ | |
| | | OfflineService | | | | Cash Drawer | | |
| | +----------------+ | | +--------------------+ | |
| +-----------------------+ | | Card Reader | | |
| | +--------------------+ | |
| +---------------------------+ |
+-------------------------------------------------------------------+
Admin Portal (Web)
Admin Portal Architecture
=========================
+-------------------------------------------------------------------+
| ADMIN PORTAL (React SPA) |
| |
| +------------------------+ +---------------------------+ |
| | Navigation | | Main Content | |
| | +------------------+ | | +---------------------+ | |
| | | Dashboard | | | | Dashboard View | | |
| | +------------------+ | | | - KPIs | | |
| | | Products | | | | - Charts | | |
| | +------------------+ | | | - Alerts | | |
| | | Sales | | | +---------------------+ | |
| | +------------------+ | | +---------------------+ | |
| | | Inventory | | | | Product Manager | | |
| | +------------------+ | | | - CRUD | | |
| | | Customers | | | | - Bulk Import | | |
| | +------------------+ | | | - Sync Status | | |
| | | Employees | | | +---------------------+ | |
| | +------------------+ | | | |
| | | Reports | | | | |
| | +------------------+ | | | |
| | | Settings | | | | |
| | +------------------+ | | | |
| +------------------------+ +---------------------------+ |
| |
| State Management: React Query + Context |
| Routing: React Router |
| UI Framework: TailwindCSS |
+-------------------------------------------------------------------+
Raptag Mobile (RFID)
Raptag Mobile Architecture
==========================
+-------------------------------------------------------------------+
| RAPTAG MOBILE (.NET MAUI) |
| |
| +------------------------+ +---------------------------+ |
| | RFID Layer | | UI Layer | |
| | +------------------+ | | +---------------------+ | |
| | | Zebra SDK | | | | Scan Screen | | |
| | +------------------+ | | +---------------------+ | |
| | | Tag Parser | | | | Inventory Count | | |
| | +------------------+ | | +---------------------+ | |
| | | Batch Processor | | | | Transfer Screen | | |
| | +------------------+ | | +---------------------+ | |
| +------------------------+ +---------------------------+ |
| |
| +------------------------+ +---------------------------+ |
| | Local Storage | | API Client | |
| | +------------------+ | | +---------------------+ | |
| | | SQLite | | | | HTTP Client | | |
| | +------------------+ | | +---------------------+ | |
| | | Scan Buffer | | | | Offline Queue | | |
| | +------------------+ | | +---------------------+ | |
| +------------------------+ +---------------------------+ |
+-------------------------------------------------------------------+
Service Boundaries
Service Boundary Diagram
========================
+-------------------+ +-------------------+ +-------------------+
| Catalog Service | | Sales Service | |Inventory Service |
| | | | | |
| OWNS: | | OWNS: | | OWNS: |
| - products | | - sales | | - inventory_items |
| - categories | | - line_items | | - stock_levels |
| - pricing_rules | | - payments | | - adjustments |
| - product_variants| | - refunds | | - transfers |
| - product_images | | - holds | | - count_sessions |
| | | | | |
| REFERENCES: | | REFERENCES: | | REFERENCES: |
| (none) | | - product_id | | - product_id |
| | | - customer_id | | - location_id |
| | | - employee_id | | |
+-------------------+ +-------------------+ +-------------------+
+-------------------+ +-------------------+
| Customer Service | | Employee Service |
| | | |
| OWNS: | | OWNS: |
| - customers | | - employees |
| - loyalty_cards | | - roles |
| - store_credits | | - permissions |
| - addresses | | - shifts |
| | | - time_entries |
| REFERENCES: | | |
| (none) | | REFERENCES: |
| | | - location_id |
+-------------------+ +-------------------+
Technology Stack Summary
| Layer | Technology | Justification |
|---|---|---|
| API Gateway | Kong or NGINX | Proven, scalable, plugin ecosystem |
| Central API | ASP.NET Core 8.0 | Performance, C# ecosystem, EF Core |
| Database | PostgreSQL 16 | Multi-tenant support, JSON support, reliability |
| Cache | Redis | Session storage, real-time features |
| Event Store | PostgreSQL (append-only) | Simplicity, same DB engine |
| POS Client | Electron or Tauri | Cross-platform desktop, offline SQLite |
| Admin Portal | React + TypeScript | Modern SPA, rich ecosystem |
| Mobile App | .NET MAUI | C# codebase, Zebra RFID SDK support |
| Real-time | SignalR | Inventory broadcasts, notifications |
Deployment Topology
Production Deployment
=====================
+------------------+
| Load Balancer |
| (HAProxy/ALB) |
+--------+---------+
|
+----------------------+----------------------+
| | |
+---------v--------+ +---------v--------+ +---------v--------+
| API Server 1 | | API Server 2 | | API Server 3 |
| | | | | |
| - Central API | | - Central API | | - Central API |
| - Stateless | | - Stateless | | - Stateless |
+--------+---------+ +---------+--------+ +---------+--------+
| | |
+----------+------------+-----------+----------+
| |
+---------v--------+ +---------v--------+
| PostgreSQL | | Redis |
| (Primary) | | (Cluster) |
+--------+---------+ +------------------+
|
+--------v---------+
| PostgreSQL |
| (Replica) |
+------------------+
Store Locations (5 stores; 3 shown):
+----------------+  +----------------+  +----------------+
|    GM Store    |  |    HM Store    |  |    LM Store    |
| +------------+ |  | +------------+ |  | +------------+ |
| |POS Client 1| |  | |POS Client 1| |  | |POS Client 1| |
| +------------+ |  | +------------+ |  | +------------+ |
| |POS Client 2| |  | |POS Client 2| |  +----------------+
| +------------+ |  | +------------+ |
+----------------+  +----------------+
Security Architecture
Security Layers
===============
+------------------------------------------------------------------+
| INTERNET |
+---------------------------+--------------------------------------+
|
v
+---------------------------+--------------------------------------+
| TLS TERMINATION |
| (Let's Encrypt) |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| API GATEWAY |
| +-----------------------+ +-----------------------+ |
| | Rate Limiting | | IP Whitelisting | |
| | 100 req/min/client | | (Admin Portal only) | |
| +-----------------------+ +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHENTICATION |
| +-----------------------+ +-----------------------+ |
| | JWT Validation | | PIN Verification | |
| | - Signature check | | - Employee clock-in | |
| | - Expiry check | | - Sensitive actions | |
| | - Tenant claim | +-----------------------+ |
| +-----------------------+ |
+---------------------------+--------------------------------------+
|
v
+------------------------------------------------------------------+
| AUTHORIZATION |
| +-----------------------+ +-----------------------+ |
| | Role-Based (RBAC) | | Permission Policies | |
| | - Admin | | - can:create_sale | |
| | - Manager | | - can:void_sale | |
| | - Cashier | | - can:view_reports | |
| +-----------------------+ +-----------------------+ |
+------------------------------------------------------------------+
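The authorization layer above resolves roles into `can:*` policy strings. A sketch of that check, using the role and permission names from the diagram (the role-to-permission mapping itself is illustrative, not the platform's final policy):

```python
# Illustrative RBAC mapping; a real deployment loads this from the
# permissions tables owned by the Employee Service.
ROLE_PERMISSIONS = {
    "admin":   {"can:create_sale", "can:void_sale", "can:view_reports"},
    "manager": {"can:create_sale", "can:void_sale", "can:view_reports"},
    "cashier": {"can:create_sale"},
}

def authorize(role: str, permission: str) -> bool:
    # Unknown roles get an empty set, so the check fails closed.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(authorize("cashier", "can:void_sale"))  # False
print(authorize("manager", "can:void_sale"))  # True
```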
L.9B Data Flow Reference
Detailed Implementation Reference (from the former High-Level Architecture chapter, now consolidated here):
Pattern 1: Online Sale Flow
Online Sale Flow
================
[POS Client] [Central API] [Database]
| | |
| 1. POST /sales | |
|------------------------------>| |
| | 2. Validate request |
| |------------------------------>|
| | |
| | 3. Begin transaction |
| |------------------------------>|
| | |
| | 4. Create sale record |
| |------------------------------>|
| | |
| | 5. Decrement inventory |
| |------------------------------>|
| | |
| | 6. Log sale event |
| |------------------------------>|
| | |
| | 7. Commit transaction |
| |------------------------------>|
| | |
| 8. Return sale confirmation | |
|<------------------------------| |
| | |
| 9. Print receipt | |
| | |
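Steps 3 through 7 above (begin transaction, create sale, decrement inventory, log event, commit) must be atomic: either everything persists or nothing does. A sketch using SQLite for brevity (table and column names are illustrative stand-ins for the PostgreSQL schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sales (id TEXT PRIMARY KEY, total REAL);
    CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER);
    CREATE TABLE events (id INTEGER PRIMARY KEY, type TEXT, data TEXT);
    INSERT INTO inventory VALUES ('SKU-1', 10);
""")

try:
    with db:  # BEGIN ... COMMIT; rolls back on any exception
        db.execute("INSERT INTO sales VALUES ('S-001', 49.99)")
        cur = db.execute(
            "UPDATE inventory SET qty = qty - 2 WHERE sku = 'SKU-1' AND qty >= 2")
        if cur.rowcount == 0:
            raise ValueError("insufficient stock")
        db.execute("INSERT INTO events (type, data) VALUES ('SaleCreated', 'S-001')")
except ValueError:
    pass  # sale rejected; the rollback leaves no partial records

print(db.execute("SELECT qty FROM inventory").fetchone()[0])  # 8
```

The `qty >= 2` guard in the UPDATE is what prevents overselling inside the transaction: if stock is insufficient, zero rows change and the whole sale rolls back.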
Pattern 2: Offline Sale Flow
Offline Sale Flow
=================
[POS Client] [Local SQLite] [Sync Queue]
| | |
| 1. Create sale locally | |
|------------------------------>| |
| | 2. Generate local UUID |
| | |
| 3. Decrement local inventory | |
|------------------------------>| |
| | |
| 4. Queue for sync | |
|-------------------------------------------------------------->|
| | |
| 5. Print receipt | |
| | |
--- Later, when online ---
[Sync Service] [Central API] [Database]
| | |
| 1. Pop from queue | |
| | |
| 2. POST /sync/sales | |
|------------------------------>| |
| | 3. Validate (check for dupe) |
| |------------------------------>|
| | |
| | 4. Insert with local UUID |
| |------------------------------>|
| | |
| 5. Mark synced | |
|<------------------------------| |
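Step 3 of the sync ("check for dupe") is naturally expressed as an idempotent insert keyed on the client-generated UUID: if a retry replays the same sale, the second insert is a no-op. A sketch (schema and the `sync_sale` helper are illustrative; PostgreSQL uses the same `ON CONFLICT` idiom):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id TEXT PRIMARY KEY, total REAL)")

def sync_sale(sale: dict) -> bool:
    # ON CONFLICT DO NOTHING makes the insert idempotent on the local UUID.
    cur = db.execute(
        "INSERT INTO sales VALUES (:id, :total) ON CONFLICT(id) DO NOTHING", sale)
    return cur.rowcount == 1  # True = inserted, False = duplicate ignored

sale = {"id": "local-uuid-3f1c", "total": 25.00}
print(sync_sale(sale))  # True  (first attempt)
print(sync_sale(sale))  # False (retry is a no-op)
```

This is why the offline flow generates the UUID on the client: the server never needs to guess whether a retried upload is a new sale or a replay.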
Pattern 3: Inventory Sync Flow
Inventory Sync from Shopify
===========================
[Shopify] [Webhook Handler] [Inventory Svc]
| | |
| 1. inventory_levels/update | |
|------------------------------>| |
| | 2. Validate webhook |
| | |
| | 3. Parse inventory update |
| |------------------------------>|
| | |
| | 4. Update stock level |
| |------------------------------>|
| | |
| | 5. Log inventory event |
| |------------------------------>|
| | |
| | 6. Broadcast to POS clients |
| |------------------------------>|
| | (SignalR) |
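Step 2 ("Validate webhook") for Shopify typically means recomputing an HMAC-SHA256 of the raw request body with the shared app secret and comparing it against the `X-Shopify-Hmac-SHA256` header. A sketch (the secret and body below are example values):

```python
import base64
import hashlib
import hmac

def verify_webhook(raw_body: bytes, header_hmac: str, secret: str) -> bool:
    digest = hmac.new(secret.encode(), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    # compare_digest avoids timing side-channels on the comparison
    return hmac.compare_digest(expected, header_hmac)

secret = "example-shared-secret"  # illustrative value
body = b'{"inventory_item_id": 1, "available": 42}'
good = base64.b64encode(
    hmac.new(secret.encode(), body, hashlib.sha256).digest()).decode()
print(verify_webhook(body, good, secret))        # True
print(verify_webhook(body, "tampered", secret))  # False
```

Verification must run against the raw bytes before any JSON parsing, since re-serialized JSON will not reproduce the signed payload.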
L.9C Domain Model Reference
Domain Model Overview (from the former Domain Model chapter, now consolidated here). NOTE: Only the bounded contexts, aggregates, and entity relationship diagram appear in this section; detailed entity field definitions live in the Part III database chapters.
Bounded Contexts Overview
Domain Bounded Contexts
=======================
+------------------------------------------------------------------+
| POS PLATFORM |
| |
| +-------------+ +-------------+ +-------------+ |
| | CATALOG | | SALES | | INVENTORY | |
| | | | | | | |
| | Products | | Sales | | StockLevels | |
| | Variants | | LineItems | | Adjustments | |
| | Categories | | Payments | | Transfers | |
| | Pricing | | Refunds | | Counts | |
| +-------------+ +-------------+ +-------------+ |
| |
| +-------------+ +-------------+ +-------------+ |
| | CUSTOMER | | EMPLOYEE | | LOCATION | |
| | | | | | | |
| | Customers | | Employees | | Locations | |
| | Addresses | | Roles | | Registers | |
| | Loyalty | | Permissions | | Settings | |
| | Credits | | Shifts | | TaxRates | |
| +-------------+ +-------------+ +-------------+ |
| |
+------------------------------------------------------------------+
Context Summary Table
| Context | Entities | Purpose |
|---|---|---|
| Catalog | Product, Variant, Category, PricingRule | Product management |
| Sales | Sale, LineItem, Payment, Refund | Transaction processing |
| Inventory | InventoryItem, Adjustment, Transfer | Stock management |
| Customer | Customer, Address, Credit, Loyalty | Customer management |
| Employee | Employee, Role, Permission, Shift | Staff management |
| Location | Location, Register, Drawer, TaxRate | Store configuration |
Entity Relationship Diagram
Entity Relationships
====================
+----------+
| Category |
+----+-----+
|
| 1:N
v
+----------+ 1:N +----------+ 1:N +----------------+
| Location |<-------------| Product |------------->| ProductVariant |
+----+-----+ +----+-----+ +-------+--------+
| | |
| | |
| 1:N | |
v | |
+----------+ | |
| Register | v v
+----+-----+ +----------+ +----------------+
| |Inventory | | Adjustment |
| | Item | | Item |
| 1:N +----------+ +----------------+
v
+----------+
|CashDrawer|
+----------+
+----------+ 1:N +----------+ 1:N +----------+
| Customer |------------->| Sale |------------->| LineItem |
+----+-----+ +----+-----+ +----------+
| |
| | 1:N
| 1:N v
v +----------+
+----------+ | Payment |
| Credit | +----------+
+----------+
+----------+ N:1 +----------+ 1:N +----------+
| Employee |------------->| Role |------------->|Permission|
+----+-----+ +----------+ +----------+
|
| 1:N
v
+----------+
| Shift |
+----------+
Aggregate Boundaries
Each aggregate has a root entity and encapsulates related entities:
Aggregate Definitions
=====================
SALE Aggregate
+------------------------------------------+
| Sale (Root) |
| +-- SaleLineItem[] (owned) |
| +-- Payment[] (owned) |
| +-- Refund[] (reference: sale_id) |
+------------------------------------------+
INVENTORY_ADJUSTMENT Aggregate
+------------------------------------------+
| InventoryAdjustment (Root) |
| +-- InventoryAdjustmentItem[] (owned) |
+------------------------------------------+
INVENTORY_TRANSFER Aggregate
+------------------------------------------+
| InventoryTransfer (Root) |
| +-- InventoryTransferItem[] (owned) |
+------------------------------------------+
CUSTOMER Aggregate
+------------------------------------------+
| Customer (Root) |
| +-- CustomerAddress[] (owned) |
| +-- StoreCredit[] (reference) |
| +-- LoyaltyTransaction[] (reference) |
+------------------------------------------+
PRODUCT Aggregate
+------------------------------------------+
| Product (Root) |
| +-- ProductVariant[] (owned) |
+------------------------------------------+
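The point of these boundaries is that outside code mutates owned entities only through the root, which enforces invariants. A sketch of the SALE aggregate under that rule (field names and the payments-cover-total invariant shown here are illustrative, not the full rule set):

```python
from dataclasses import dataclass, field

@dataclass
class Sale:
    id: str
    line_items: list = field(default_factory=list)  # owned by the root
    payments: list = field(default_factory=list)    # owned by the root
    status: str = "open"

    @property
    def total(self) -> float:
        return sum(qty * price for _, qty, price in self.line_items)

    def add_item(self, sku: str, qty: int, price: float):
        # Invariant: completed sales are immutable
        if self.status != "open":
            raise ValueError("cannot modify a completed sale")
        self.line_items.append((sku, qty, price))

    def complete(self):
        # Invariant: payments must cover the total before completion
        if sum(self.payments) < self.total:
            raise ValueError("payments do not cover total")
        self.status = "completed"

sale = Sale("S-001")
sale.add_item("SKU-1", 2, 10.00)
sale.payments.append(20.00)
sale.complete()
print(sale.status, sale.total)  # completed 20.0
```

Refunds sit outside the boundary (reference by `sale_id`) precisely so a refund can be created without reopening and mutating the original sale.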
L.10 Risks & Mitigations
| Risk | Mitigation Strategy |
|---|---|
| Sync Conflicts | Use Event Sourcing to replay conflicting events deterministically. First-commit-wins for inventory with backorder escalation. |
| Observability Overload | LGTM stack with integration-specific dashboards: circuit breaker state, DLQ depth, sync latency, safety buffer violations, disapproval rate per channel. |
| GenAI Code Risks | 6-Gate Security Pyramid: SAST + SCA + Secrets + ArchUnit + Pact + Manual Review. Architecture conformance tests prevent module boundary violations. |
| PCI-DSS Non-Compliance | FIM via Wazuh agents on all POS nodes. SCA via Snyk. SBOM generation. Session management with 15-minute timeout. |
| Supply Chain Attacks | Package firewall at proxy level. Real-time SBOM. Automated dependency updates with vulnerability scanning. |
| External API Cascade Failure | Circuit breaker (5 failures/60s → OPEN). Module 6 as Extractable Integration Gateway with failure isolation. Bulkheaded thread pools. |
| Credential Compromise | HashiCorp Vault with key hierarchy. 90-day automated rotation. Emergency rotation procedures. Least-privilege access policies. |
| Overselling Across Channels | Safety buffer computation with 4-level priority resolution. Transactional Outbox for atomic inventory + event. First-commit-wins with backorder escalation. |
L.10A Key Architecture Decisions (BRD-v12)
This section documents critical architecture decisions derived from BRD-v12 requirements analysis. Each decision follows the Architecture Decision Record (ADR) format.
L.10A.1 Offline Strategy Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-001 |
| Context | POS terminals must operate during network outages without losing transactions |
| Decision | Queue-and-Sync with configurable limits |
| Alternatives Considered | 1) Full local database replica, 2) Degraded mode only, 3) Queue-only (selected) |
| Rationale | Full replica adds sync complexity; degraded mode loses sales; queue-only balances reliability with simplicity |
| Reference | BRD-v12 §1.16, Section L.10A.1 |
Implementation Configuration:
offline_mode:
max_queue_size: 100
sync_interval_seconds: 30
conflict_strategy: "server_wins_with_review"
# Operations ALLOWED offline
allowed_offline:
- sale_new
- return_with_receipt
- price_check
- parked_sale_create
- parked_sale_retrieve
# Operations BLOCKED offline (too risky)
blocked_offline:
- customer_create # Requires uniqueness check
- credit_limit_check # Requires real-time balance
- on_account_payment # Risk of exceeding limit
- multi_store_inventory # Requires network
- gift_card_activation # Must register immediately
- gift_card_reload # Risk of double-load
- transfer_request # Requires other store
- reservation_create # Requires other store
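The allowed/blocked lists above can be enforced with a single guard that every POS command passes through before executing while offline. A sketch (the `can_execute` helper is illustrative):

```python
# Mirrors the offline_mode configuration above.
ALLOWED_OFFLINE = {"sale_new", "return_with_receipt", "price_check",
                   "parked_sale_create", "parked_sale_retrieve"}
BLOCKED_OFFLINE = {"customer_create", "credit_limit_check", "on_account_payment",
                   "multi_store_inventory", "gift_card_activation",
                   "gift_card_reload", "transfer_request", "reservation_create"}

def can_execute(operation: str, is_online: bool) -> bool:
    if is_online:
        return True
    if operation in BLOCKED_OFFLINE:
        return False
    # Operations not explicitly allowed fail closed while offline.
    return operation in ALLOWED_OFFLINE

print(can_execute("sale_new", is_online=False))              # True
print(can_execute("gift_card_activation", is_online=False))  # False
```

Failing closed for unlisted operations matters: a newly added command should require an explicit decision before it is trusted offline.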
Conflict Resolution Strategy:
┌─────────────────────────────────────────────────────────────┐
│ OFFLINE SYNC WORKFLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ Network Lost Network Restored Conflict? │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Queue │─────────►│ Sync │────────►│ Review │ │
│ │ Locally │ │ to API │ │ Manager │ │
│ └─────────┘ └─────────┘ └─────────┘ │
│ │ │ │ │
│ Max 100 txns 30-second Server wins │
│ interval with flag │
│ │
└─────────────────────────────────────────────────────────────┘
L.10A.1A POS Client Architecture
Detailed Implementation Reference (from the former Offline-First Design chapter, now consolidated here):
POS Client Architecture
=======================
+-----------------------------------------------------------------------+
| POS CLIENT |
| |
| +------------------------+ +-------------------------------+ |
| | Presentation | | Local Storage | |
| | | | | |
| | +------------------+ | | +-------------------------+ | |
| | | Sales Screen | | | | SQLite Database | | |
| | +------------------+ | | | | | |
| | | Product Grid | | | | +---------------------+ | | |
| | +------------------+ | | | | products_cache | | | |
| | | Cart Panel | | | | +---------------------+ | | |
| | +------------------+ | | | | pending_sales | | | |
| | | Payment Dialog | | | | +---------------------+ | | |
| | +------------------+ | | | | sync_queue | | | |
| | | Receipt Print | | | | +---------------------+ | | |
| | +------------------+ | | | | events (local) | | | |
| +------------------------+ | | +---------------------+ | | |
| | | | | customer_cache | | | |
| v | | +---------------------+ | | |
| +------------------------+ | +-------------------------+ | |
| | Application Layer | | | |
| | | +-------------------------------+ |
| | +------------------+ | ^ |
| | | SaleService |------------------------>| |
| | +------------------+ | | |
| | | InventoryService |------------------------>| |
| | +------------------+ | | |
| | | CustomerService |------------------------>| |
| | +------------------+ | |
| +------------------------+ |
| | |
| v |
| +------------------------+ +-------------------------------+ |
| | Sync Service | | Connection Monitor | |
| | | | | |
| | - Queue Manager |<------>| - Ping Central API | |
| | - Conflict Resolver | | - Track online/offline | |
| | - Retry Handler | | - Trigger sync when online | |
| | - Batch Uploader | | | |
| +------------------------+ +-------------------------------+ |
| | |
+-------------|----------------------------------------------------------+
|
v (when online)
+-----------------------------------------------------------------------+
| CENTRAL API |
+-----------------------------------------------------------------------+
L.10A.1B Local Database Schema (SQLite)
Detailed Implementation Reference (from the former Offline-First Design chapter, now consolidated here):
-- SQLite Schema for POS Client
-- Product cache (synced from server)
CREATE TABLE products_cache (
id TEXT PRIMARY KEY,
sku TEXT UNIQUE NOT NULL,
barcode TEXT,
name TEXT NOT NULL,
category_name TEXT,
price REAL NOT NULL,
cost REAL,
tax_code TEXT,
is_taxable INTEGER DEFAULT 1,
track_inventory INTEGER DEFAULT 1,
image_url TEXT,
variants_json TEXT, -- JSON array of variants
synced_at TEXT NOT NULL, -- When last synced from server
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX idx_products_barcode ON products_cache(barcode);
CREATE INDEX idx_products_name ON products_cache(name);
-- Inventory cache (synced from server)
CREATE TABLE inventory_cache (
product_id TEXT NOT NULL,
variant_id TEXT,
location_id TEXT NOT NULL,
quantity INTEGER NOT NULL,
synced_at TEXT NOT NULL,
PRIMARY KEY (product_id, variant_id, location_id)
);
-- Customer cache (synced from server)
CREATE TABLE customers_cache (
id TEXT PRIMARY KEY,
customer_number TEXT UNIQUE,
first_name TEXT,
last_name TEXT,
email TEXT,
phone TEXT,
loyalty_points INTEGER DEFAULT 0,
store_credit REAL DEFAULT 0,
synced_at TEXT NOT NULL
);
-- Local sales (created offline, pending sync)
CREATE TABLE local_sales (
id TEXT PRIMARY KEY,
sale_number TEXT UNIQUE NOT NULL,
location_id TEXT NOT NULL,
register_id TEXT NOT NULL,
employee_id TEXT NOT NULL,
customer_id TEXT,
status TEXT DEFAULT 'completed',
subtotal REAL NOT NULL,
discount_total REAL DEFAULT 0,
tax_total REAL DEFAULT 0,
total REAL NOT NULL,
line_items_json TEXT NOT NULL, -- JSON array of line items
payments_json TEXT NOT NULL, -- JSON array of payments
created_at TEXT DEFAULT (datetime('now')),
synced_at TEXT -- NULL until synced
);
CREATE INDEX idx_local_sales_synced ON local_sales(synced_at);
-- Event queue (append-only, sync to server)
CREATE TABLE event_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
event_id TEXT UNIQUE NOT NULL,
aggregate_type TEXT NOT NULL,
aggregate_id TEXT NOT NULL,
event_type TEXT NOT NULL,
event_data TEXT NOT NULL, -- JSON
created_at TEXT NOT NULL,
created_by TEXT,
synced_at TEXT, -- NULL until synced
sync_attempts INTEGER DEFAULT 0,
last_error TEXT
);
CREATE INDEX idx_event_queue_pending ON event_queue(synced_at) WHERE synced_at IS NULL;
-- Sync metadata
CREATE TABLE sync_status (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT DEFAULT (datetime('now'))
);
-- Track what we've synced
INSERT INTO sync_status (key, value) VALUES
('last_product_sync', '1970-01-01T00:00:00Z'),
('last_inventory_sync', '1970-01-01T00:00:00Z'),
('last_customer_sync', '1970-01-01T00:00:00Z'),
('last_event_push', '1970-01-01T00:00:00Z');
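The `local_sales` table above can be exercised directly with Python's `sqlite3` module: record an offline sale, then list the rows still awaiting sync (`synced_at IS NULL`). The inserted values are example data:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE local_sales (
    id TEXT PRIMARY KEY, sale_number TEXT UNIQUE NOT NULL,
    location_id TEXT NOT NULL, register_id TEXT NOT NULL,
    employee_id TEXT NOT NULL, customer_id TEXT,
    status TEXT DEFAULT 'completed', subtotal REAL NOT NULL,
    discount_total REAL DEFAULT 0, tax_total REAL DEFAULT 0,
    total REAL NOT NULL, line_items_json TEXT NOT NULL,
    payments_json TEXT NOT NULL,
    created_at TEXT DEFAULT (datetime('now')), synced_at TEXT)""")

# An offline sale: line items and payments are stored as JSON blobs,
# matching the schema's *_json columns.
db.execute(
    """INSERT INTO local_sales (id, sale_number, location_id, register_id,
       employee_id, subtotal, tax_total, total, line_items_json, payments_json)
       VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
    ("uuid-1", "S-0001", "loc-gm", "reg-1", "emp-7",
     20.00, 1.45, 21.45,
     json.dumps([{"sku": "SKU-1", "qty": 2, "price": 10.00}]),
     json.dumps([{"method": "cash", "amount": 21.45}])))

pending = db.execute(
    "SELECT sale_number FROM local_sales WHERE synced_at IS NULL").fetchall()
print(pending)  # [('S-0001',)]
```

The `idx_local_sales_synced` index in the schema exists exactly for this query: the sync processor polls for `synced_at IS NULL` rows on every cycle.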
L.10A.1C Sync Queue Design
Detailed Implementation Reference (from the former Offline-First Design chapter, now consolidated here):
Sync Queue Architecture
=======================
+-------------------+ +-------------------+ +-------------------+
| Sale Created | | Inventory Adj | | Customer Created |
| (Offline) | | (Offline) | | (Offline) |
+--------+----------+ +--------+----------+ +--------+----------+
| | |
v v v
+-----------------------------------------------------------------------+
| SYNC QUEUE |
| |
| Priority | Type | Status | Retries | Last Error |
| --------------------------------------------------------------- |
| 1 | SaleCreated | pending | 0 | |
| 1 | PaymentReceived | pending | 0 | |
| 2 | InventoryAdjusted | pending | 0 | |
| 3 | CustomerCreated | failed | 3 | Timeout |
| 1 | SaleCompleted | pending | 0 | |
| |
| Priority Legend: |
| 1 = Critical (sales, payments) - sync immediately |
| 2 = Important (inventory) - sync within minutes |
| 3 = Normal (customers) - sync when convenient |
+-----------------------------------------------------------------------+
|
| Sync Processor (runs when online)
v
+-----------------------------------------------------------------------+
| CENTRAL API |
| |
| POST /api/sync/events |
| [ |
| { eventType: "SaleCreated", ... }, |
| { eventType: "PaymentReceived", ... }, |
| ... |
| ] |
| |
| Response: { synced: 5, conflicts: 0, errors: [] } |
+-----------------------------------------------------------------------+
Sync Priority Rules
| Priority | Event Types | Sync Timing |
|---|---|---|
| 1 (Critical) | Sales, Payments, Refunds, Voids | Immediate when online |
| 2 (Important) | Inventory adjustments, Transfers | Within 5 minutes |
| 3 (Normal) | Customer updates, Loyalty changes | Within 15 minutes |
| 4 (Low) | Analytics events, Logs | Batch sync hourly |
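The priority rules above amount to a priority queue: lower numbers sync first, FIFO within a priority. A sketch with `heapq` (the event-type-to-priority mapping follows the table; the tie-breaker counter preserves insertion order for equal priorities):

```python
import heapq
import itertools

PRIORITY = {"SaleCreated": 1, "PaymentReceived": 1, "RefundIssued": 1,
            "InventoryAdjusted": 2, "TransferCreated": 2,
            "CustomerUpdated": 3, "AnalyticsEvent": 4}

counter = itertools.count()  # tie-breaker: FIFO within a priority
queue = []
for event in ["CustomerUpdated", "SaleCreated", "InventoryAdjusted",
              "PaymentReceived", "AnalyticsEvent"]:
    heapq.heappush(queue, (PRIORITY[event], next(counter), event))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)
# ['SaleCreated', 'PaymentReceived', 'InventoryAdjusted',
#  'CustomerUpdated', 'AnalyticsEvent']
```

In the real client the queue is the persistent `event_queue` table rather than in-memory, but the ordering rule (priority, then arrival) is the same.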
L.10A.1D Conflict Resolution Strategies
Detailed Implementation Reference (from the former Offline-First Design chapter, now consolidated here):
Conflict Resolution Matrix
==========================
+------------------+---------------------+--------------------------------+
| Data Type | Strategy | Reasoning |
+------------------+---------------------+--------------------------------+
| Sales | Append-Only | Each sale is unique, no |
| | (No Conflicts) | conflicts possible |
+------------------+---------------------+--------------------------------+
| Inventory | Last-Write-Wins | Central server is authority, |
| | (Server Wins) | client updates are suggestions |
+------------------+---------------------+--------------------------------+
| Customers | Merge on Key | Merge by email, combine |
| | (Email = Key) | non-conflicting fields |
+------------------+---------------------+--------------------------------+
| Products | Server Authority | Product catalog managed |
| | (Read-Only Client) | centrally, client is cache |
+------------------+---------------------+--------------------------------+
| Employees | Server Authority | HR data managed centrally |
| | (Read-Only Client) | |
+------------------+---------------------+--------------------------------+
| Settings | Server Authority | Config managed by admin |
| | (Read-Only Client) | |
+------------------+---------------------+--------------------------------+
Strategy 1: Append-Only (Sales)
Sale Conflict Resolution: None Required
========================================
Client A (Offline): Client B (Offline):
Sale S-001 created @ 10:15 Sale S-002 created @ 10:16
LineItem: Product X, Qty 2 LineItem: Product Y, Qty 1
Payment: $50 cash Payment: $25 credit
When both sync:
Server: Accepts S-001 (unique ID)
Server: Accepts S-002 (unique ID)
Result: Both sales recorded, no conflict
Strategy 2: Last-Write-Wins (Inventory)
Inventory Conflict Resolution: Server Authority
===============================================
Server State:
Product X @ Location HQ: 100 units
Client A (Offline): Client B (Offline):
Sells 5 units of Product X Sells 3 units of Product X
Local: 95 units Local: 97 units
When both sync:
Server receives: "Sold 5 units" from A
Server receives: "Sold 3 units" from B
Server calculates: 100 - 5 - 3 = 92 units
Server pushes new quantity to all clients
Result:
All clients update to 92 units
Individual decrements preserved
No quantity lost or duplicated
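Note that the worked example resolves correctly because clients report *decrements as deltas* and the server, as the authority, applies them to its own quantity rather than accepting either client's absolute count. The arithmetic as a one-line sketch:

```python
def apply_inventory_deltas(server_qty: int, client_deltas: list) -> int:
    # Deltas are commutative, so sync order between clients doesn't matter.
    return server_qty - sum(client_deltas)

new_qty = apply_inventory_deltas(100, [5, 3])  # Client A sold 5, B sold 3
print(new_qty)  # 92
```

Had the clients synced absolute counts (95 and 97), a naive last-write-wins would have silently lost one client's sales.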
Strategy 3: Merge on Key (Customers)
Customer Conflict Resolution: Merge
===================================
Server State:
Customer email: john@example.com
Name: John Doe
Phone: (blank)
Loyalty: 500 points
Client A (Offline): Client B (Offline):
Updates phone to 555-1234 Updates loyalty to 600 points
When both sync:
Server merges non-conflicting fields:
Name: John Doe (unchanged)
Phone: 555-1234 (from A)
Loyalty: 600 points (from B)
If same field changed:
Server uses timestamp to pick latest
Or prompts admin for resolution
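The field-level merge above can be sketched as follows: updates from different clients combine when they touch different fields, and when both touch the same field the later timestamp wins. The `merge_customer` helper and update structure are illustrative:

```python
def merge_customer(server: dict, updates: list) -> dict:
    merged = dict(server)
    latest = {}  # field -> timestamp of the update that last set it
    for upd in sorted(updates, key=lambda u: u["ts"]):
        for field_name, value in upd["fields"].items():
            # Later timestamp wins when the same field was changed twice
            if field_name not in latest or upd["ts"] >= latest[field_name]:
                merged[field_name] = value
                latest[field_name] = upd["ts"]
    return merged

server = {"email": "john@example.com", "name": "John Doe",
          "phone": "", "loyalty": 500}
result = merge_customer(server, [
    {"ts": 1, "fields": {"phone": "555-1234"}},  # Client A
    {"ts": 2, "fields": {"loyalty": 600}},       # Client B
])
print(result["phone"], result["loyalty"])  # 555-1234 600
```

Merging keyed on email is what makes this safe: both clients were editing the same logical customer, so the result is one record, not a duplicate.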
L.10A.1E Sync Processor Workflow
Detailed Implementation Reference (from the former Offline-First Design chapter, now consolidated here):
Sync Processor State Machine
============================
+-------------+
| IDLE |
+------+------+
|
| Connection detected
v
+-------------+
| SYNCING |
+------+------+
|
+------------------+------------------+
| | |
v v v
+-------------+ +-------------+ +-------------+
| PUSH EVENTS | | PULL DATA | | COMPLETE |
| | | | | |
| - Sales | | - Products | | - Update |
| - Payments | | - Inventory | | metadata |
| - Inventory | | - Customers | | - Return |
| changes | | - Settings | | to IDLE |
+------+------+ +------+------+ +-------------+
| |
+------------------+
|
v
+-------------+
| HANDLE |
| CONFLICTS |
+------+------+
|
v
+-------------+
| COMPLETE |
+-------------+
Sync Service Implementation
// SyncService.cs
public class SyncService : IHostedService
{
private readonly ILocalDatabase _localDb;
private readonly IApiClient _apiClient;
private readonly IConnectionMonitor _connectionMonitor;
private readonly IConflictResolver _conflictResolver;
private readonly ILogger<SyncService> _logger;
private Timer? _syncTimer;
private bool _isSyncing = false;
public Task StartAsync(CancellationToken cancellationToken)
{
_connectionMonitor.OnlineStatusChanged += HandleConnectionChange;
// Check for pending sync every 30 seconds
_syncTimer = new Timer(
async _ => await TrySyncAsync(),
null,
TimeSpan.Zero,
TimeSpan.FromSeconds(30)
);
return Task.CompletedTask;
}
private async void HandleConnectionChange(object? sender, bool isOnline)
{
if (isOnline)
{
_logger.LogInformation("Connection restored, starting sync");
await TrySyncAsync();
}
}
private async Task TrySyncAsync()
{
if (_isSyncing) return;
if (!_connectionMonitor.IsOnline) return;
_isSyncing = true;
try
{
// 1. Push local events to server
await PushEventsAsync();
// 2. Pull updated data from server
await PullProductsAsync();
await PullInventoryAsync();
await PullCustomersAsync();
// 3. Update sync timestamps
await UpdateSyncMetadataAsync();
_logger.LogInformation("Sync completed successfully");
}
catch (Exception ex)
{
_logger.LogError(ex, "Sync failed");
}
finally
{
_isSyncing = false;
}
}
private async Task PushEventsAsync()
{
// Get pending events ordered by priority
var pendingEvents = await _localDb.GetPendingEventsAsync();
if (!pendingEvents.Any()) return;
// Batch events (max 100 per request)
var batches = pendingEvents.Chunk(100);
foreach (var batch in batches)
{
try
{
var response = await _apiClient.PostEventsAsync(batch);
// Mark synced events
foreach (var evt in response.Synced)
{
await _localDb.MarkEventSyncedAsync(evt.EventId);
}
// Handle conflicts
foreach (var conflict in response.Conflicts)
{
await _conflictResolver.ResolveAsync(conflict);
}
}
catch (HttpRequestException)
{
// Network error, increment retry count
foreach (var evt in batch)
{
await _localDb.IncrementEventRetryAsync(evt.EventId);
}
throw;
}
}
}
private async Task PullProductsAsync()
{
var lastSync = await _localDb.GetSyncTimestampAsync("products");
var products = await _apiClient.GetProductsUpdatedSinceAsync(lastSync);
foreach (var product in products)
{
await _localDb.UpsertProductCacheAsync(product);
}
}
private async Task PullInventoryAsync()
{
var locationId = await GetCurrentLocationIdAsync();
var lastSync = await _localDb.GetSyncTimestampAsync("inventory");
var inventory = await _apiClient.GetInventoryUpdatedSinceAsync(locationId, lastSync);
foreach (var item in inventory)
{
// Apply server's quantity (server is authority)
await _localDb.UpdateInventoryCacheAsync(item);
}
}
public Task StopAsync(CancellationToken cancellationToken)
{
_syncTimer?.Dispose();
return Task.CompletedTask;
}
}
L.10A.1F Sale Creation Flow (Offline-Capable)
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
Offline Sale Flow
=================
1. Cashier scans items
+----------------+
| Local Lookup |
| products_cache |
+----------------+
|
v
2. Add to cart (no network needed)
+----------------+
| In-Memory Cart |
+----------------+
|
v
3. Customer pays
+----------------+
| Payment Dialog |
| (card or cash) |
+----------------+
|
v
4. Save sale locally
+----------------+
| local_sales |
| (SQLite) |
+----------------+
|
v
5. Queue sync events
+----------------+
| event_queue |
| SaleCreated |
| ItemAdded x N |
| PaymentRcvd |
| SaleCompleted |
+----------------+
|
v
6. Decrement local inventory
+----------------+
| inventory_cache|
| (optimistic) |
+----------------+
|
v
7. Print receipt
+----------------+
| Receipt ready |
| (no waiting) |
+----------------+
|
v
8. Background sync (when online)
+----------------+
| SyncService |
| pushes events |
+----------------+
Sale Service Implementation
// SaleService.cs
public class SaleService
{
private readonly ILocalDatabase _localDb;
private readonly IEventQueue _eventQueue;
private readonly IReceiptPrinter _printer;
public async Task<Sale> CompleteSaleAsync(Cart cart, List<Payment> payments)
{
// 1. Generate local IDs
var saleId = Guid.NewGuid();
var saleNumber = GenerateSaleNumber();
// 2. Create sale record
var sale = new Sale
{
Id = saleId,
SaleNumber = saleNumber,
LocationId = GetCurrentLocationId(),
RegisterId = GetCurrentRegisterId(),
EmployeeId = GetCurrentEmployeeId(),
CustomerId = cart.CustomerId,
Status = "completed",
Subtotal = cart.Subtotal,
DiscountTotal = cart.DiscountTotal,
TaxTotal = cart.TaxTotal,
Total = cart.Total,
LineItems = cart.Items.Select(MapToLineItem).ToList(),
Payments = payments,
CreatedAt = DateTime.UtcNow
};
// 3. Save to local database
await _localDb.InsertSaleAsync(sale);
// 4. Queue events for sync
await _eventQueue.EnqueueAsync(new SaleCreated
{
SaleId = saleId,
SaleNumber = saleNumber,
LocationId = sale.LocationId,
EmployeeId = sale.EmployeeId,
CustomerId = sale.CustomerId,
CreatedAt = sale.CreatedAt
});
foreach (var item in sale.LineItems)
{
await _eventQueue.EnqueueAsync(new SaleLineItemAdded
{
SaleId = saleId,
LineItemId = item.Id,
ProductId = item.ProductId,
Sku = item.Sku,
Name = item.Name,
Quantity = item.Quantity,
UnitPrice = item.UnitPrice
});
// 5. Decrement local inventory (optimistic)
await _localDb.DecrementInventoryAsync(
item.ProductId,
item.VariantId,
sale.LocationId,
item.Quantity
);
}
foreach (var payment in payments)
{
await _eventQueue.EnqueueAsync(new PaymentReceived
{
SaleId = saleId,
PaymentId = payment.Id,
PaymentMethod = payment.Method,
Amount = payment.Amount
});
}
await _eventQueue.EnqueueAsync(new SaleCompleted
{
SaleId = saleId,
Total = sale.Total,
CompletedAt = DateTime.UtcNow
});
// 6. Print receipt (async, don't wait)
_ = _printer.PrintReceiptAsync(sale);
return sale;
}
private string GenerateSaleNumber()
{
// Format: HQ-20251229-0001
// Location-Date-Sequence
var location = GetCurrentLocationCode();
var date = DateTime.Now.ToString("yyyyMMdd");
var sequence = GetNextLocalSequence();
return $"{location}-{date}-{sequence:D4}";
}
}
L.10A.1G Connection Monitor
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
// ConnectionMonitor.cs
public class ConnectionMonitor : IHostedService
{
private readonly IApiClient _apiClient;
private readonly ILogger<ConnectionMonitor> _logger;
private Timer? _pingTimer;
private bool _isOnline = false;
public bool IsOnline => _isOnline;
public event EventHandler<bool>? OnlineStatusChanged;
public Task StartAsync(CancellationToken cancellationToken)
{
// Ping server every 10 seconds
_pingTimer = new Timer(
async _ => await CheckConnectionAsync(),
null,
TimeSpan.Zero,
TimeSpan.FromSeconds(10)
);
return Task.CompletedTask;
}
private async Task CheckConnectionAsync()
{
var wasOnline = _isOnline;
try
{
// Simple health check endpoint
var response = await _apiClient.PingAsync();
_isOnline = response.IsSuccessStatusCode;
}
catch
{
_isOnline = false;
}
if (_isOnline != wasOnline)
{
_logger.LogInformation(
"Connection status changed: {Status}",
_isOnline ? "ONLINE" : "OFFLINE"
);
OnlineStatusChanged?.Invoke(this, _isOnline);
}
}
public Task StopAsync(CancellationToken cancellationToken)
{
_pingTimer?.Dispose();
return Task.CompletedTask;
}
}
Offline UI Indicator
Offline Indicator Design
========================
When ONLINE:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [GM Store] [John D] |
| Status: Connected |
+-----------------------------------------------------------------------+
When OFFLINE:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [!] OFFLINE MODE [GM Store] |
| +-----------------------------------------------------------------+ |
| | Working offline. 5 sales pending sync. | |
| +-----------------------------------------------------------------+ |
+-----------------------------------------------------------------------+
When SYNCING:
+-----------------------------------------------------------------------+
| [=] NEXUS POS [<->] Syncing... 3/5 [GM Store] |
+-----------------------------------------------------------------------+
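The three indicator states above can be driven by a small view model that binds the ConnectionMonitor's status and the event queue's pending count. This is a hypothetical sketch; the property names are illustrative:

```csharp
// Sketch only: backing state for the online/offline/syncing banner
public class SyncStatusViewModel
{
    public bool IsOnline { get; set; }
    public bool IsSyncing { get; set; }
    public int PendingEvents { get; set; }  // unsynced rows in event_queue
    public int SyncedEvents { get; set; }   // progress within current sync

    public string Banner =>
        IsSyncing ? $"[<->] Syncing... {SyncedEvents}/{PendingEvents}"
        : IsOnline ? "Status: Connected"
        : $"[!] OFFLINE MODE - {PendingEvents} sales pending sync";
}
```

Subscribing the view model to `ConnectionMonitor.OnlineStatusChanged` keeps the banner accurate without polling from the UI layer.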
L.10A.1H CRDTs for Conflict-Free Synchronization
Detailed Implementation Reference (from former Offline-First Design chapter, now consolidated here):
Overview
While event sourcing handles sales conflicts (append-only), other data types benefit from CRDTs (Conflict-free Replicated Data Types): data structures mathematically guaranteed to converge without coordination.
Traditional Sync Problem
========================
Terminal A (Offline): Terminal B (Offline):
Inventory: 100 Inventory: 100
Customer purchases 5 Receives shipment +20
Local: 95 Local: 120
When both sync - CONFLICT!
Which value is correct? 95 or 120?
Answer: Neither! Correct is 115 (100 - 5 + 20)
CRDT Solution:
Both terminals track OPERATIONS, not STATE
G-Counter for additions: {A: 0, B: 20}
PN-Counter for decrements: {A: 5, B: 0}
Final value: 100 + 20 - 5 = 115
CRDT Types for POS
| CRDT Type | Use Case | Merge Strategy |
|---|---|---|
| G-Counter | Transaction counts, sales counts | Sum all increments |
| PN-Counter | Inventory levels (+/-) | Sum increments, sum decrements |
| LWW-Register | Price updates, last modified | Highest timestamp wins |
| OR-Set | Cart items, applied discounts | Union with tombstones |
| MV-Register | Customer preferences | Keep all concurrent values |
G-Counter Implementation (Transaction Counts)
// GCounter.cs - Grow-only counter
public class GCounter
{
// Each node tracks its own increment count
private readonly Dictionary<string, long> _counters = new();
private readonly string _nodeId;
public GCounter(string nodeId)
{
_nodeId = nodeId;
_counters[nodeId] = 0;
}
public void Increment(long amount = 1)
{
_counters[_nodeId] += amount;
}
public long Value => _counters.Values.Sum();
// Merge with another G-Counter (associative, commutative, idempotent)
public void Merge(GCounter other)
{
foreach (var (nodeId, count) in other._counters)
{
if (_counters.TryGetValue(nodeId, out var existing))
{
_counters[nodeId] = Math.Max(existing, count);
}
else
{
_counters[nodeId] = count;
}
}
}
public Dictionary<string, long> State => new(_counters);
}
PN-Counter for Inventory
// PNCounter.cs - Positive-Negative Counter for inventory
public class PNCounter
{
private readonly GCounter _positive;
private readonly GCounter _negative;
private readonly string _nodeId;
public PNCounter(string nodeId)
{
_nodeId = nodeId;
_positive = new GCounter(nodeId);
_negative = new GCounter(nodeId);
}
public void Increment(long amount = 1)
{
_positive.Increment(amount);
}
public void Decrement(long amount = 1)
{
_negative.Increment(amount);
}
public long Value => _positive.Value - _negative.Value;
public void Merge(PNCounter other)
{
_positive.Merge(other._positive);
_negative.Merge(other._negative);
}
}
// Usage in inventory sync
public class InventoryCRDT
{
private readonly Dictionary<string, PNCounter> _inventory = new();
public void RecordSale(string sku, int quantity, string terminalId)
{
if (!_inventory.ContainsKey(sku))
_inventory[sku] = new PNCounter(terminalId);
_inventory[sku].Decrement(quantity);
}
public void RecordReceiving(string sku, int quantity, string terminalId)
{
if (!_inventory.ContainsKey(sku))
_inventory[sku] = new PNCounter(terminalId);
_inventory[sku].Increment(quantity);
}
public int GetQuantity(string sku) =>
_inventory.TryGetValue(sku, out var counter)
? (int)counter.Value
: 0;
}
LWW-Register for Price Updates
// LWWRegister.cs - Last-Writer-Wins Register
public class LWWRegister<T>
{
private T? _value;
private DateTime _timestamp;
private string _writerId; // node that wrote the current value
private readonly string _nodeId;
public LWWRegister(string nodeId)
{
_nodeId = nodeId;
_writerId = nodeId;
_timestamp = DateTime.MinValue;
}
public void Set(T value, DateTime? timestamp = null)
{
var ts = timestamp ?? DateTime.UtcNow;
if (ts > _timestamp)
{
_value = value;
_timestamp = ts;
_writerId = _nodeId;
}
}
public T? Value => _value;
public DateTime Timestamp => _timestamp;
public void Merge(LWWRegister<T> other)
{
// Tie-break equal timestamps on writer ID so all replicas converge
// to the same value (plain timestamp comparison diverges on ties)
if (other._timestamp > _timestamp ||
(other._timestamp == _timestamp &&
string.CompareOrdinal(other._writerId, _writerId) > 0))
{
_value = other._value;
_timestamp = other._timestamp;
_writerId = other._writerId;
}
}
}
// Usage for price sync
public class PriceCRDT
{
private readonly Dictionary<string, LWWRegister<decimal>> _prices = new();
public void UpdatePrice(string sku, decimal price, string terminalId)
{
if (!_prices.ContainsKey(sku))
_prices[sku] = new LWWRegister<decimal>(terminalId);
_prices[sku].Set(price);
}
public decimal? GetPrice(string sku) =>
_prices.TryGetValue(sku, out var register)
? register.Value
: null;
}
OR-Set for Cart Items (with Tombstones)
// ORSet.cs - Observed-Remove Set with tombstones
public class ORSet<T>
{
private readonly Dictionary<T, HashSet<string>> _additions = new();
private readonly Dictionary<T, HashSet<string>> _removals = new();
private readonly string _nodeId;
private int _counter = 0;
public ORSet(string nodeId)
{
_nodeId = nodeId;
}
public void Add(T element)
{
var uniqueTag = $"{_nodeId}:{++_counter}";
if (!_additions.ContainsKey(element))
_additions[element] = new HashSet<string>();
_additions[element].Add(uniqueTag);
}
public void Remove(T element)
{
// Only remove tags we've seen (observed-remove semantics)
if (_additions.TryGetValue(element, out var tags))
{
if (!_removals.ContainsKey(element))
_removals[element] = new HashSet<string>();
foreach (var tag in tags)
{
_removals[element].Add(tag);
}
}
}
public IEnumerable<T> Elements =>
_additions
.Where(kv =>
{
var remainingTags = _removals.TryGetValue(kv.Key, out var removed)
? kv.Value.Except(removed)
: kv.Value;
return remainingTags.Any();
})
.Select(kv => kv.Key);
public void Merge(ORSet<T> other)
{
// Merge additions
foreach (var (element, tags) in other._additions)
{
if (!_additions.ContainsKey(element))
_additions[element] = new HashSet<string>();
_additions[element].UnionWith(tags);
}
// Merge removals (tombstones)
foreach (var (element, tags) in other._removals)
{
if (!_removals.ContainsKey(element))
_removals[element] = new HashSet<string>();
_removals[element].UnionWith(tags);
}
}
}
Tombstone Handling Strategy
| Strategy | Retention | Trade-off |
|---|---|---|
| Time-based | 7-14 days | May resurrect if offline longer |
| Epoch-based | Until all nodes sync | Requires sync confirmation |
| Compaction | Periodic cleanup | Best balance for POS |
// TombstoneManager.cs
public class TombstoneManager
{
private readonly TimeSpan _tombstoneTTL = TimeSpan.FromDays(7);
public void CompactTombstones<T>(ORSet<T> set)
{
// Remove tombstones older than TTL
// Requires tracking tombstone timestamps
}
public bool IsSafeToCompact(DateTime tombstoneCreated) =>
DateTime.UtcNow - tombstoneCreated > _tombstoneTTL;
}
CRDT Sync Protocol
CRDT Sync Flow
==============
POS Terminal A Central API
| |
| 1. Push local CRDT state |
| ────────────────────────────────► |
| { |
| "inventory": { |
| "NXJ1078": { |
| "positive": {"A": 5}, |
| "negative": {"A": 2} |
| } |
| }, |
| "prices": {...} |
| } |
| |
| 2. Server merges with global state |
| |
| 3. Return merged state |
| ◄───────────────────────────────── |
| { |
| "inventory": { |
| "NXJ1078": { |
| "positive": {"A": 5, "B": 10, "HQ": 100}, |
| "negative": {"A": 2, "C": 3} |
| } |
| } |
| } |
| |
| 4. Local merge |
| Final inventory = 110 |
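Steps 2-3 of the protocol above can be sketched as a server-side merge that reuses the PNCounter type defined earlier in this section. The endpoint shape and node name "server" are illustrative assumptions:

```csharp
// Sketch only: server merges a pushed counter into global state and
// returns the merged result for the client to merge back (step 4)
public class CrdtSyncEndpoint
{
    private readonly Dictionary<string, PNCounter> _global = new();

    public PNCounter MergeAndReturn(string sku, PNCounter pushed)
    {
        if (!_global.TryGetValue(sku, out var counter))
        {
            counter = new PNCounter("server");
            _global[sku] = counter;
        }
        // Merge is associative, commutative, and idempotent, so
        // re-pushing the same state changes nothing
        counter.Merge(pushed);
        return counter;
    }
}
```

Because merge is idempotent, a terminal that loses the response and retries the push cannot corrupt the count: the walkthrough's final inventory of 110 (115 received minus 5 sold) is reached regardless of retries or ordering.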
When to Use CRDTs vs. Event Sourcing
| Scenario | Approach | Reasoning |
|---|---|---|
| Sales transactions | Event Sourcing | Need full audit trail |
| Inventory counts | PN-Counter CRDT | Frequent concurrent updates |
| Price updates | LWW-Register | Last price wins |
| Cart items | OR-Set | Add/remove operations |
| Customer data | Event Sourcing | Need history |
| Real-time counters | G-Counter CRDT | Dashboard metrics |
Reference Libraries
| Language | Library | Notes |
|---|---|---|
| C# | Akka.DistributedData | Battle-tested, Akka ecosystem |
| C# | Microsoft.FASTER | High-performance state |
| TypeScript | Automerge | Good for client-side |
| Rust | rust-crdt | If building native components |
L.10A.2 Tax Engine Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-002 |
| Context | Need flexible tax calculation supporting multiple jurisdictions |
| Decision | Custom-Built Tax Engine with modular jurisdiction support |
| Alternatives Considered | 1) Third-party service (Avalara/TaxJar), 2) Custom-built (selected) |
| Rationale | Full control over rules; no per-transaction fees; offline support; expansion flexibility |
| Reference | BRD-v12 §1.17 |
Tax Calculation Hierarchy (Priority Order):
┌─────────────────────────────────────────────────────────────┐
│ TAX CALCULATION HIERARCHY │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. PRODUCT-LEVEL OVERRIDE (Highest Priority) │
│ └── Example: "Grocery Food - 1.5%" │
│ └── Example: "Prepared Food - 10%" │
│ └── Example: "Prescription Drugs - 0%" │
│ │
│ 2. CUSTOMER-LEVEL EXEMPTION │
│ └── Example: "Reseller Certificate" │
│ └── Example: "Non-Profit 501(c)(3)" │
│ └── Example: "Diplomatic Status" │
│ │
│ 3. LOCATION-BASED TAX (Default) │
│ └── State Tax + County Tax + City Tax + District Tax │
│ └── Based on store physical address │
│ │
└─────────────────────────────────────────────────────────────┘
Virginia Initial Configuration:
tax_jurisdictions:
virginia:
state_rate: 4.3
default_local_rate: 1.0
# Regional additional taxes
regions:
hampton_roads:
counties: ["Norfolk", "Virginia Beach", "Newport News", "Hampton"]
additional_rate: 0.7
northern_virginia:
counties: ["Arlington", "Fairfax", "Loudoun", "Prince William"]
additional_rate: 0.7
central_virginia:
counties: ["Henrico", "Chesterfield", "Richmond City"]
additional_rate: 0.0
# Product exemptions/reduced rates
exemptions:
- category: "grocery_food"
rate: 1.5 # Reduced rate
- category: "prescription_drugs"
rate: 0.0
- category: "medical_equipment"
rate: 0.0
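The hierarchy and the Virginia configuration above can be combined into a resolver like the following sketch. The parameter names are illustrative; in the real engine these values would come from product, customer, and location lookups:

```csharp
// Sketch only: apply the three-level priority from the hierarchy diagram
public static class TaxRateResolver
{
    public static decimal Resolve(
        decimal? productOverrideRate,  // e.g. 1.5m for grocery_food
        bool customerIsExempt,         // e.g. reseller certificate on file
        decimal stateRate,             // 4.3 in Virginia
        decimal localRate,             // 1.0 default local rate
        decimal regionalRate)          // 0.7 in Hampton Roads / NoVA
    {
        if (productOverrideRate.HasValue)
            return productOverrideRate.Value;        // 1. product override
        if (customerIsExempt)
            return 0m;                               // 2. customer exemption
        return stateRate + localRate + regionalRate; // 3. location default
    }
}
```

For a Norfolk sale of general merchandise this resolves to 4.3 + 1.0 + 0.7 = 6.0%; the same sale of grocery_food resolves to the 1.5% product-level override instead.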
Expansion Roadmap:
jurisdiction_modules:
virginia: { status: "active" }
california: { status: "planned", notes: "Complex district taxes, no gift card expiry" }
oregon: { status: "planned", notes: "No sales tax state" }
canada: { status: "planned", notes: "GST/PST/HST complexity" }
european_union: { status: "planned", notes: "VAT with reverse charge" }
L.10A.3 Payment Integration Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-003 |
| Context | Need PCI-compliant card payment processing with minimal compliance burden |
| Decision | SAQ-A Semi-Integrated terminals (no card data touches our system) |
| Alternatives Considered | 1) Full integration SAQ-D, 2) Semi-integrated SAQ-A (selected), 3) Redirect-only |
| Rationale | Simplest PCI compliance; card data never in our scope; supports offline void via token |
| Reference | BRD-v12 §1.18 |
Payment Flow Architecture:
┌─────────────────────────────────────────────────────────────┐
│ SAQ-A PAYMENT ARCHITECTURE │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ POS UI │────►│ Backend │────►│ Terminal │ │
│ │ │ │ API │ │ │ │
│ └──────────┘ └──────────┘ └────┬─────┘ │
│ ▲ │ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────┐ │
│ │ │ PAYMENT PROCESSOR │ │
│ │ │ (Card data ONLY here) │ │
│ │ └─────────────────────────────────────┘ │
│ │ │ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────┐ │
│ └───────────│ Token + Approval + Masked Card │ │
│ │ (NO full PAN, CVV, or track data) │ │
│ └─────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Data Storage Rules:
┌─────────────────────────────────────────────────────────────┐
│ PAYMENT DATA STORAGE RULES │
├─────────────────────────────────────────────────────────────┤
│ │
│ ✅ STORED (Allowed): ❌ PROHIBITED (Never): │
│ ├── Payment token ├── Full card number │
│ ├── Approval code ├── CVV/CVC │
│ ├── Masked card (****1234) ├── Track data │
│ ├── Card brand (Visa/MC/Amex) ├── PIN block │
│ ├── Entry method (chip/tap) ├── EMV cryptogram (raw) │
│ ├── Terminal ID │ │
│ └── Timestamp │ │
│ │
└─────────────────────────────────────────────────────────────┘
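One way to enforce the storage rules above is structurally: model the persisted payment so that prohibited data has nowhere to live. This record is an illustrative sketch, not the platform's actual entity:

```csharp
// Sketch only: holds ONLY the allowed fields from the table above.
// There is deliberately no property that could store a full PAN,
// CVV/CVC, track data, PIN block, or raw EMV cryptogram.
public record StoredPayment(
    string Token,        // processor-issued payment token
    string ApprovalCode,
    string MaskedCard,   // e.g. "****1234"
    string CardBrand,    // Visa / MC / Amex
    string EntryMethod,  // chip / tap / swipe
    string TerminalId,
    DateTime Timestamp);
```

Making the prohibited fields unrepresentable turns a PCI policy into a compile-time guarantee, which is the whole point of the SAQ-A posture.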
L.10A.4 Multi-Tenancy Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-004 (Revised) |
| Context | Platform must support multiple retail tenants with strong data isolation |
| Decision | Row-Level Isolation with PostgreSQL RLS |
| Alternatives Considered | 1) Database-per-tenant, 2) Schema-per-tenant, 3) Row-level isolation with RLS (selected) |
| Rationale | Matches BRD v18.0 data models (135 occurrences of tenant_id FK across all modules). Simpler operations — no per-tenant schema migration tooling. RLS enforces isolation at the database level, preventing accidental cross-tenant data access. |
| Reference | BRD-v18.0, Chapter 03 |
v18.0 Update: The original Architecture Styles Worksheet v1.6 specified Schema-Per-Tenant. Expert panel review identified a contradiction: every data model table in BRD v18.0 includes a tenant_id UUID FK (the row-level isolation pattern, 135 occurrences). This revision aligns the architecture decision with the actual BRD data models.
Database Structure:
database: pos_production
│
├── schema: shared
│ ├── tax_rates (global, no tenant_id)
│ ├── system_config (global)
│ └── tenant_registry (tenant metadata)
│
└── schema: public (all tenant data)
├── orders (tenant_id UUID FK + RLS)
├── customers (tenant_id UUID FK + RLS)
├── inventory (tenant_id UUID FK + RLS)
├── products (tenant_id UUID FK + RLS)
├── integration_providers (tenant_id UUID FK + RLS)
└── ... (all other tables with tenant_id + RLS)
RLS Policy Implementation:
-- Enable RLS on every tenant table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
-- Create isolation policy
CREATE POLICY tenant_isolation ON orders
USING (tenant_id = current_setting('app.current_tenant')::uuid);
-- Force RLS for non-superuser roles
ALTER TABLE orders FORCE ROW LEVEL SECURITY;
Connection Pattern:
// Tenant resolution via middleware
public class TenantMiddleware
{
private readonly RequestDelegate _next;
public TenantMiddleware(RequestDelegate next) => _next = next;
public async Task InvokeAsync(HttpContext context, IDbConnection connection)
{
var tenantId = ResolveTenantFromJwt(context);
// Set the PostgreSQL session variable that the RLS policies read.
// SET cannot take bind parameters, so use set_config() instead.
await connection.ExecuteAsync(
"SELECT set_config('app.current_tenant', @tenantId, false)",
new { tenantId = tenantId.ToString() });
await _next(context);
}
}
Benefits:
- Simpler connection pooling (shared pool, not per-schema)
- Standard query patterns (no search_path manipulation)
- Easier migrations (single schema, applied once)
- RLS enforcement at database level (defense-in-depth)
- Matches BRD v18.0 data model conventions
Trade-offs:
- Less physical isolation than schema separation (mitigated by RLS)
- All tenants share same table structure (flexibility limited)
- RLS policies must be applied to every table (automated via migration scripts)
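The last trade-off, applying RLS policies to every table, can be scripted in the same DO-block style used for migrations elsewhere in this book. This sketch assumes all tenant tables live in the public schema and carry a tenant_id column, and that the policy does not already exist:

```sql
-- Sketch only: enable and force RLS, and attach the isolation policy,
-- on every public-schema table that has a tenant_id column
DO $$
DECLARE
  tbl TEXT;
BEGIN
  FOR tbl IN
    SELECT table_name FROM information_schema.columns
    WHERE table_schema = 'public' AND column_name = 'tenant_id'
  LOOP
    EXECUTE format('ALTER TABLE public.%I ENABLE ROW LEVEL SECURITY', tbl);
    EXECUTE format(
      'CREATE POLICY tenant_isolation ON public.%I USING (tenant_id = current_setting(''app.current_tenant'')::uuid)',
      tbl);
    EXECUTE format('ALTER TABLE public.%I FORCE ROW LEVEL SECURITY', tbl);
  END LOOP;
END $$;
```

Running this as part of every migration keeps new tables from silently shipping without isolation; a production version would also guard against re-creating an existing policy.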
L.10A.4A Multi-Tenancy Strategies Comparison
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
Multi-Tenancy Strategies
========================
Strategy 1: Shared Tables (Row-Level)
+----------------------------------+
| products |
| +--------+--------+------------+ |
| | tenant | id | name | |
| +--------+--------+------------+ |
| | nexus | 1 | T-Shirt | |
| | acme | 2 | Jacket | |
| | nexus | 3 | Jeans | |
| +--------+--------+------------+ |
+----------------------------------+
Pros: Simple, low overhead
Cons: Risk of data leakage, complex queries, no isolation
Strategy 2: Separate Databases
+-------------+ +-------------+ +-------------+
| nexus_db | | acme_db | | beta_db |
| +--------+ | | +--------+ | | +--------+ |
| |products| | | |products| | | |products| |
| +--------+ | | +--------+ | | +--------+ |
| |sales | | | |sales | | | |sales | |
| +--------+ | | +--------+ | | +--------+ |
+-------------+ +-------------+ +-------------+
Pros: Complete isolation
Cons: Connection overhead, backup complexity, cost at scale
Strategy 3: Schema-Per-Tenant
+-----------------------------------------------------+
| pos_platform database |
| |
| +-----------+ +--------------+ +--------------+ |
| | shared | | tenant_nexus | | tenant_acme | |
| +-----------+ +--------------+ +--------------+ |
| | tenants | | products | | products | |
| | plans | | sales | | sales | |
| | features | | inventory | | inventory | |
| +-----------+ | customers | | customers | |
| +--------------+ +--------------+ |
+-----------------------------------------------------+
Pros: Isolation + efficiency, easy backup/restore per tenant
Cons: More complex migrations (but manageable)
Decision Matrix
| Requirement | Shared Tables | Separate DBs | Schema-Per-Tenant |
|---|---|---|---|
| Data Isolation | Poor | Excellent | Excellent |
| Performance | Good | Excellent | Very Good |
| Backup/Restore | Complex | Simple | Simple |
| Connection Overhead | Low | High | Low |
| Query Complexity | High | Low | Low |
| Compliance (SOC2) | Difficult | Easy | Easy |
| Cost at Scale | Low | High | Medium |
| Migration Complexity | Low | Low | Medium |
Note: The Architecture Styles analysis (L.10A.4 above) selected Row-Level Isolation with PostgreSQL RLS as the production strategy, which aligns with BRD v18.0 data models (135 occurrences of
tenant_id). The Schema-Per-Tenant comparison above is preserved for reference and as an alternative should physical isolation requirements change.
L.10A.4B Tenant Resolution & Middleware
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
Tenant Resolution Flow
Tenant Resolution Flow
======================
+---------------------------+
| Incoming Request |
| nexus.pos-platform.com |
+-------------+-------------+
|
v
+---------------------------+
| Extract Subdomain |
| subdomain = "nexus" |
+-------------+-------------+
|
v
+---------------------------+
| Lookup in shared.tenants|
| WHERE subdomain = ? |
+-------------+-------------+
|
+----------------------+----------------------+
| |
[Found] [Not Found]
| |
v v
+---------------------------+ +---------------------------+
| Set PostgreSQL | | Return 404 |
| search_path TO | | "Tenant not found" |
| tenant_nexus, shared | +---------------------------+
+-------------+-------------+
|
v
+---------------------------+
| Continue with request |
| All queries now use |
| tenant_nexus schema |
+---------------------------+
ASP.NET Core Tenant Middleware
// TenantMiddleware.cs
public class TenantMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<TenantMiddleware> _logger;
public TenantMiddleware(RequestDelegate next, ILogger<TenantMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(HttpContext context, ITenantService tenantService, IDbContextFactory<PosDbContext> dbFactory)
{
// 1. Extract subdomain from host
var host = context.Request.Host.Host;
var subdomain = ExtractSubdomain(host);
if (string.IsNullOrEmpty(subdomain))
{
context.Response.StatusCode = 400;
await context.Response.WriteAsJsonAsync(new { error = "Invalid tenant" });
return;
}
// 2. Lookup tenant in shared schema
var tenant = await tenantService.GetBySubdomainAsync(subdomain);
if (tenant == null)
{
context.Response.StatusCode = 404;
await context.Response.WriteAsJsonAsync(new { error = "Tenant not found" });
return;
}
if (tenant.Status == "suspended")
{
context.Response.StatusCode = 403;
await context.Response.WriteAsJsonAsync(new { error = "Account suspended" });
return;
}
// 3. Store tenant in HttpContext for downstream use
context.Items["Tenant"] = tenant;
context.Items["TenantSchema"] = tenant.SchemaName;
_logger.LogDebug("Resolved tenant: {TenantSlug} -> {Schema}", tenant.Slug, tenant.SchemaName);
// 4. Continue pipeline
await _next(context);
}
private string? ExtractSubdomain(string host)
{
// nexus.pos-platform.com -> nexus
// localhost:5000 -> null (development fallback)
var parts = host.Split('.');
if (parts.Length >= 3)
{
return parts[0];
}
// No subdomain found (e.g., localhost); a development build could
// fall back to a tenant header here before rejecting the request
return null;
}
}
// ITenantService.cs
public interface ITenantService
{
Task<Tenant?> GetBySubdomainAsync(string subdomain);
Task<Tenant?> GetBySlugAsync(string slug);
Task<string> CreateTenantAsync(CreateTenantRequest request);
}
// TenantService.cs
public class TenantService : ITenantService
{
private readonly IDbContextFactory<SharedDbContext> _dbFactory;
private readonly ILogger<TenantService> _logger;
public TenantService(IDbContextFactory<SharedDbContext> dbFactory, ILogger<TenantService> logger)
{
_dbFactory = dbFactory;
_logger = logger;
}
public async Task<Tenant?> GetBySubdomainAsync(string subdomain)
{
await using var db = await _dbFactory.CreateDbContextAsync();
return await db.Tenants
.AsNoTracking()
.FirstOrDefaultAsync(t => t.Subdomain == subdomain);
}
public async Task<string> CreateTenantAsync(CreateTenantRequest request)
{
// Slug is interpolated into DDL below, so restrict it to safe characters
if (!System.Text.RegularExpressions.Regex.IsMatch(request.Slug, "^[a-z0-9_]+$"))
throw new ArgumentException("Invalid tenant slug", nameof(request));
var schemaName = $"tenant_{request.Slug}";
await using var db = await _dbFactory.CreateDbContextAsync();
// 1. Create tenant record
var tenant = new Tenant
{
Slug = request.Slug,
Name = request.Name,
Subdomain = request.Subdomain,
SchemaName = schemaName,
PlanId = request.PlanId,
Status = "active"
};
db.Tenants.Add(tenant);
await db.SaveChangesAsync();
// 2. Create schema (raw SQL)
await db.Database.ExecuteSqlRawAsync($"CREATE SCHEMA {schemaName}");
// 3. Run migrations on new schema
await RunMigrationsAsync(schemaName);
_logger.LogInformation("Created tenant: {Slug} with schema {Schema}", request.Slug, schemaName);
return tenant.Id.ToString();
}
private async Task RunMigrationsAsync(string schemaName)
{
// Apply all tenant schema tables
// This would run the full schema creation script
}
}
DbContext with Dynamic Schema
// PosDbContext.cs
public class PosDbContext : DbContext
{
private readonly string _schemaName;
public PosDbContext(DbContextOptions<PosDbContext> options, IHttpContextAccessor httpContextAccessor)
: base(options)
{
// Get schema from HttpContext (set by TenantMiddleware)
_schemaName = httpContextAccessor.HttpContext?.Items["TenantSchema"]?.ToString()
?? "tenant_default";
}
public DbSet<Product> Products => Set<Product>();
public DbSet<Sale> Sales => Set<Sale>();
public DbSet<Customer> Customers => Set<Customer>();
public DbSet<Employee> Employees => Set<Employee>();
public DbSet<Location> Locations => Set<Location>();
// ... other DbSets
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Set default schema for all entities
modelBuilder.HasDefaultSchema(_schemaName);
// Apply entity configurations
modelBuilder.ApplyConfigurationsFromAssembly(typeof(PosDbContext).Assembly);
}
}
Connection String with search_path
// TenantDbContextFactory.cs
public class TenantDbContextFactory : IDbContextFactory<PosDbContext>
{
private readonly IConfiguration _config;
private readonly IHttpContextAccessor _httpContextAccessor;
public TenantDbContextFactory(IConfiguration config, IHttpContextAccessor httpContextAccessor)
{
_config = config;
_httpContextAccessor = httpContextAccessor;
}
public PosDbContext CreateDbContext()
{
var schemaName = _httpContextAccessor.HttpContext?.Items["TenantSchema"]?.ToString()
?? throw new InvalidOperationException("No tenant context");
var baseConnectionString = _config.GetConnectionString("DefaultConnection");
// Append search_path to connection string
var connectionString = $"{baseConnectionString};Search Path={schemaName},shared";
var optionsBuilder = new DbContextOptionsBuilder<PosDbContext>();
optionsBuilder.UseNpgsql(connectionString);
// PosDbContext's constructor also needs the accessor to resolve the schema
return new PosDbContext(optionsBuilder.Options, _httpContextAccessor);
}
}
L.10A.4C Tenant Provisioning
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
New Tenant Signup Flow
======================
[Admin Portal] [API] [Database]
| | |
| 1. POST /tenants | |
| { name, slug, plan } | |
|------------------------------>| |
| | |
| | 2. Validate slug uniqueness |
| |--------------------------------->|
| | |
| | 3. Insert into shared.tenants |
| |--------------------------------->|
| | |
| | 4. CREATE SCHEMA tenant_{slug} |
| |--------------------------------->|
| | |
| | 5. Run schema migrations |
| | (create all tables) |
| |--------------------------------->|
| | |
| | 6. Seed default data |
| | (roles, permissions) |
| |--------------------------------->|
| | |
| | 7. Create admin user |
| |--------------------------------->|
| | |
| 8. Return tenant details | |
| { id, subdomain, status } | |
|<------------------------------| |
| | |
| 9. Redirect to tenant portal | |
| nexus.pos-platform.com | |
| | |
L.10A.4D Schema Migration Strategy
Detailed Implementation Reference (from former Multi-Tenancy Design chapter, now consolidated here):
Applying Migrations to All Tenants
// TenantMigrationService.cs
public class TenantMigrationService
{
private readonly SharedDbContext _sharedDb;
private readonly ILogger<TenantMigrationService> _logger;
public async Task ApplyMigrationToAllTenantsAsync(string migrationScript)
{
var tenants = await _sharedDb.Tenants.ToListAsync();
foreach (var tenant in tenants)
{
try
{
_logger.LogInformation("Applying migration to {Schema}", tenant.SchemaName);
// Schema names originate from shared.tenants (trusted), but quote the
// identifier defensively; never interpolate untrusted input into raw SQL.
await _sharedDb.Database.ExecuteSqlRawAsync(
    $"SET search_path TO \"{tenant.SchemaName}\"; {migrationScript}"
);
_logger.LogInformation("Migration complete for {Schema}", tenant.SchemaName);
}
catch (Exception ex)
{
_logger.LogError(ex, "Migration failed for {Schema}", tenant.SchemaName);
// Continue with other tenants or abort based on policy
}
}
}
}
Migration Script Example
-- Migration: Add loyalty_tier to customers
-- File: 2025-01-15_add_loyalty_tier.sql
DO $$
DECLARE
tenant_schema TEXT;
BEGIN
FOR tenant_schema IN
SELECT schema_name FROM shared.tenants WHERE status = 'active'
LOOP
EXECUTE format('ALTER TABLE %I.customers ADD COLUMN IF NOT EXISTS loyalty_tier VARCHAR(20) DEFAULT ''bronze''', tenant_schema);
END LOOP;
END $$;
Shared Schema SQL Reference
-- Schema: shared
-- Tenant Registry
CREATE TABLE shared.tenants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
slug VARCHAR(50) UNIQUE NOT NULL, -- 'nexus', 'acme'
name VARCHAR(255) NOT NULL, -- 'Nexus Clothing'
subdomain VARCHAR(100) UNIQUE NOT NULL, -- 'nexus.pos-platform.com'
schema_name VARCHAR(100) NOT NULL, -- 'tenant_nexus'
plan_id UUID REFERENCES shared.subscription_plans(id),
status VARCHAR(20) DEFAULT 'active', -- active, suspended, trial
trial_ends_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- Subscription Plans
CREATE TABLE shared.subscription_plans (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(100) NOT NULL, -- 'Starter', 'Professional'
code VARCHAR(50) UNIQUE NOT NULL, -- 'starter', 'pro', 'enterprise'
price_monthly DECIMAL(10,2),
price_yearly DECIMAL(10,2),
max_locations INTEGER DEFAULT 1,
max_registers INTEGER DEFAULT 2,
max_employees INTEGER DEFAULT 5,
max_products INTEGER DEFAULT 1000,
features JSONB DEFAULT '{}', -- Feature flags
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Feature Flags
CREATE TABLE shared.feature_flags (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
key VARCHAR(100) UNIQUE NOT NULL, -- 'loyalty_program'
name VARCHAR(255) NOT NULL,
description TEXT,
default_enabled BOOLEAN DEFAULT FALSE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Insert default plans
INSERT INTO shared.subscription_plans (name, code, price_monthly, max_locations, max_registers, max_employees, max_products) VALUES
('Starter', 'starter', 49.00, 1, 2, 5, 1000),
('Professional', 'pro', 149.00, 3, 10, 25, 10000),
('Enterprise', 'enterprise', 499.00, -1, -1, -1, -1); -- -1 = unlimited
L.10A.5 Commission Reversal Decision
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-005 |
| Context | Need fair commission adjustment when sales are voided or items are returned |
| Decision | Proportional Reversal on returns, Full Reversal on voids |
| Alternatives Considered | 1) Full reversal always, 2) Proportional (selected), 3) No reversal |
| Rationale | Fair to employees; maintains incentive alignment; distinguishes mistakes (voids) from returns |
| Reference | BRD-v12 §1.8 |
Commission Reversal Rules:
┌─────────────────────────────────────────────────────────────┐
│ COMMISSION REVERSAL LOGIC │
├─────────────────────────────────────────────────────────────┤
│ │
│ VOID (Same day, before drawer close): │
│ ├── Reversal: 100% (full) │
│ ├── Rationale: Mistake correction, sale didn't happen │
│ └── Example: $6 commission → reverse $6 │
│ │
│ RETURN (After sale completed): │
│ ├── Reversal: Proportional to returned value │
│ ├── Formula: Original Commission × (Returned / Original) │
│ └── Example: │
│ Sale: $120, Commission: $6 (5%) │
│ Return: $80 of items │
│ Reversal: $6 × ($80/$120) = $4.00 │
│ Net Commission: $6 - $4 = $2.00 │
│ │
└─────────────────────────────────────────────────────────────┘
Configuration:
commissions:
default_rate_percent: 2.0
category_rates:
electronics: 3.0
services: 5.0
# Reversal rules
reverse_on_void: true
void_reversal_method: "full" # 100%
reduce_on_return: true
return_reversal_method: "proportional" # Based on value
L.10A.6 Geographic Expansion Strategy
| Attribute | Value |
|---|---|
| Decision ID | ADR-BRD-006 |
| Context | Initial deployment in Virginia with planned expansion to other US states and international |
| Decision | Virginia-First with modular jurisdiction architecture |
| Phases | 1) Virginia (Day 1), 2) US expansion (Year 2), 3) International (Year 3+) |
| Reference | BRD-v12 §1.17.3 |
Expansion Strategy:
┌─────────────────────────────────────────────────────────────┐
│ GEOGRAPHIC EXPANSION ROADMAP │
├─────────────────────────────────────────────────────────────┤
│ │
│ PHASE 1: Virginia (Day 1) │
│ ├── Tax: State 4.3% + Local 1% + Regional 0.7% │
│ ├── Gift Cards: 5-year minimum expiry allowed │
│ └── Compliance: Virginia Consumer Protection Act │
│ │
│ PHASE 2: US Expansion │
│ ├── California: No gift card expiry, $10 cash-out rule │
│ ├── Oregon: No sales tax │
│ ├── New York: Complex local taxes │
│ └── Florida: No income tax, tourism taxes │
│ │
│ PHASE 3: International │
│ ├── Canada: GST/HST/PST provincial variations │
│ ├── EU: VAT with reverse charge for B2B │
│ └── UK: Post-Brexit VAT rules │
│ │
└─────────────────────────────────────────────────────────────┘
Design Principle: Always design for the most restrictive jurisdiction (California for US), then enable features where permitted.
Gift Card Jurisdiction Matrix:
| Jurisdiction | Expiry Allowed | Inactivity Fee | Cash-Out Required |
|---|---|---|---|
| Virginia | Yes (5yr min) | Yes (after 12mo) | No |
| California | No | No | Yes ($10 threshold) |
| New York | No | No | No |
| Default | No | No | No |
L.10A.7 Decision Dependency Graph
┌─────────────────────────────────────────────────────────────┐
│ ARCHITECTURE DECISION DEPENDENCIES │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ │
│ │ Geographic Scope │ │
│ │ (ADR-BRD-006) │ │
│ └────────┬─────────┘ │
│ │ │
│ ┌──────────────┼──────────────┐ │
│ ▼ ▼ ▼ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Tax Engine │ │ Gift Card │ │ Compliance │ │
│ │(ADR-BRD-002)│ │ Rules │ │ Rules │ │
│ └──────┬─────┘ └────────────┘ └────────────┘ │
│ │ │
│ ▼ │
│ ┌────────────┐ │
│ │ Offline │───────────────────────────┐ │
│ │(ADR-BRD-001)│ │ │
│ └──────┬─────┘ │ │
│ │ ▼ │
│ ▼ ┌────────────┐ │
│ ┌────────────┐ │ Payment │ │
│ │ Multi- │ │(ADR-BRD-003)│ │
│ │ Tenancy │ └────────────┘ │
│ │(ADR-BRD-004)│ │
│ └────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
L.11 Style Decision Summary
Final Selection
+------------------------------------------------------------------+
| ARCHITECTURE DECISION SUMMARY |
| (v2.0 - Panel Reviewed) |
+------------------------------------------------------------------+
| |
| QUESTION: What is the primary architecture style? |
| ANSWER: Event-Driven Modular Monolith |
| |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ SELECTED PATTERNS │ |
| ├─────────────────────────────────────────────────────────────┤ |
| │ ✅ Modular Monolith → Central API │ |
| │ ✅ Microkernel (Plugin) → POS Client │ |
| │ ✅ Event-Driven → PostgreSQL Events (v1.0) │ |
| │ Kafka (v2.0, when justified) │ |
| │ ✅ Event Sourcing → Sales (Full) + Inventory (Audit)│ |
| │ + Integrations (Audit-trail) │ |
| │ ✅ CQRS → Sales Module (Read/Write split) │ |
| │ ✅ Offline-First → POS Client (SQLite) │ |
| │ ✅ Row-Level Security → Multi-Tenant Isolation │ |
| │ ✅ Integration Gateway → Module 6 (Extractable) │ |
| │ ✅ Circuit Breaker → External API Resilience │ |
| │ ✅ Transactional Outbox → Guaranteed Event Delivery │ |
| │ ✅ Provider Abstraction → IIntegrationProvider Interface │ |
| │ ✅ Credential Vault → HashiCorp Vault │ |
| └─────────────────────────────────────────────────────────────┘ |
| |
| ┌─────────────────────────────────────────────────────────────┐ |
| │ REJECTED PATTERNS │ |
| ├─────────────────────────────────────────────────────────────┤ |
| │ ❌ Microservices → Too complex for current scale │ |
| │ ❌ Space-Based → Too complex for financial audit │ |
| │ ❌ Schema-Per-Tenant → Replaced by Row-Level Security │ |
| │ ❌ Kafka (v1.0) → Deferred to v2.0 │ |
| └─────────────────────────────────────────────────────────────┘ |
| |
+------------------------------------------------------------------+
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2026-01-24 |
| Updated | 2026-02-25 |
| Source | Architecture Styles Worksheet v2.0, BRD-v18.0, Chapters 02-06 |
| Author | Claude Code |
| Reviewer | Expert Panel (Marcus Chen, Sarah Rodriguez, James O’Brien, Priya Patel) |
| Status | Active |
| Part | II - Architecture |
| Chapter | 04 of 32 |
| Previous | Chapter 12 v2.0.0 |
Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-01-24 | Initial document |
| 1.1.0 | 2026-01-26 | Added Section L.10A (Key Architecture Decisions from BRD-v12) with 6 ADRs |
| 2.0.0 | 2026-02-19 | Expert panel review (6.50/10): Replaced Schema-Per-Tenant with Row-Level RLS; deferred Kafka to v2.0 (PostgreSQL Events for v1.0); added Extractable Integration Gateway for Module 6; added L.1.9 Integration Patterns (Circuit Breaker, Transactional Outbox, Provider Abstraction, ACL, Saga); added L.4A CQRS/ES Scope per module; added L.4B Integration Architecture Patterns with diagrams; replaced SonarQube-only security with 6-Gate Security Test Pyramid; added HashiCorp Vault credential architecture; updated Style Evaluation Matrix scores; added integration-specific risks and mitigations |
| 3.0.0 | 2026-02-22 | Consolidated implementation references from Chapters 05-09: Added L.4A.1-7 (Event Store schema, Kafka architecture, Schema Registry, DLQ pattern, Domain Events catalog, Projections, Temporal Queries, Snapshots from Ch 08); Added L.9A-9B (System Architecture diagrams, Data Flow patterns from Ch 05); Added L.9C (Domain Model bounded contexts, aggregates, ER diagram from Ch 07); Added L.10A.1A-1H (POS Client architecture, SQLite schema, Sync Queue, Conflict Resolution, Sync Processor, Sale Creation Flow, Connection Monitor, CRDTs from Ch 09); Added L.10A.4A-4D (Multi-Tenancy strategies comparison, Tenant Middleware, Provisioning workflow, Migration strategy from Ch 06) |
Next Chapter: Chapter 05: Architecture Components (BRD v20.0)
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 05: Architecture Components (BRD v20.0)
5.0 About This Chapter
This chapter contains the complete Business Requirements Document (BRD) v20.0 for the POS Platform. It serves as the authoritative source for all functional requirements, business rules, state machines, and module specifications.
Authority Rule: When any other chapter in this Blueprint conflicts with content in this chapter, this chapter (BRD) wins.
Module Overview
| Module | Scope | BRD Sections |
|---|---|---|
| 1. Sales | POS workflows, payments, returns, offline operations | 1.1-1.20 |
| 2. Customers | Profiles, groups, tiers, communication | 2.1-2.8 |
| 3. Catalog | Products, variants, pricing, multi-channel | 3.1-3.15 |
| 4. Inventory | Stock management, transfers, counting, RFID | 4.1-4.19 |
| 5. Setup & Configuration | Tenant config, roles, registers, RFID config | 5.1-5.21 |
| 6. Integrations | Shopify, Amazon, Google, payments, shipping | 6.1-6.13 |
| 7. State Machines | All 16+ state machine definitions | 7.1-7.16 |
How to Use This Chapter
- For implementation: Find the relevant Module and Section number
- For business rules: Check the YAML business rules in each module
- For state machines: See Module 7 (State Machine Reference)
- For decisions: 113 decisions documented throughout, summarized in Appendix
- For user stories: Gherkin acceptance criteria at the end of each module
This document consolidates all features into a modular architecture.
- Module 1 (Sales): Covers Scanner, Financials, Discounts, Post-Sale corrections, Gift Cards, Exchanges, Special Orders, Multi-Store, Commissions, Cash Management, Offline Operations, Tax Engine, Payment Integration, and more.
- Module 2 (Customers): Covers Profiles, Merging, Tax Logic, Data Management, Customer Groups, Notes, Communication Preferences, and Privacy Compliance.
- Module 3 (Catalog): Covers Product Types & Data Model (with retail attributes, custom fields, UoM, shipping, templates, matrix management), Product Lifecycle, Pricing Engine (price hierarchy, price books, promotions, markdowns), Barcode Management, Categories/Seasons/Collections, Multi-Channel Management, Shopify Integration, Vendor Management, Search & Discovery, Label Printing, Media Management, Notes & Attachments, Permissions & Approvals, Product Analytics, and comprehensive User Stories with Gherkin acceptance criteria.
- Module 4 (Inventory): Covers Inventory Status Model, Purchase Orders & Procurement, Receiving & Inspection, Reorder Management, Inventory Counting & Auditing, Inventory Adjustments, Inter-Store Transfers, Vendor RMA & Returns, Serial & Lot Tracking, Landed Cost & Costing, Product Movement History, POS & Sales Integration, Online Order Fulfillment, Offline Operations, Alerts & Notifications, Dashboard & Reports, Business Rules, and comprehensive User Stories with Gherkin acceptance criteria.
- Module 5 (Setup & Configuration): Covers System Settings, Multi-Currency, Location Management, Supplier Configuration, User Profiles & Permissions, Time Tracking (Clock-In/Clock-Out), Register Management, Printers & Peripherals, Tax Configuration, UoM Management, Payment Methods, Custom Fields, Approval Workflows, Receipt Configuration, Email Templates, Loyalty Settings, Audit Logging, Consolidated Business Rules (YAML), Tenant Onboarding Wizard, Field Specifications Reference with Technical User Stories, and comprehensive User Stories with Gherkin acceptance criteria.
- Module 6 (Integrations & External Systems): Covers Integration Architecture (provider abstraction, retry/backoff, circuit breaker, idempotency, rate limiting, webhook pipeline), Shopify Integration (enhanced with GraphQL, bulk operations, third-party POS rules, BOPIS, omnichannel), Amazon SP-API Integration (OAuth/LWA, catalog/listings/orders/FBA APIs, FBA+FBM fulfillment, compliance, safety buffers), Google Merchant API Integration (product data management, local inventory, disapproval prevention, image requirements, GBP integration), Cross-Platform Product Data Requirements (unified validation matrix, strictest-rule-wins), Cross-Platform Inventory Sync Rules (safety buffers, oversell prevention, channel-specific rules), Payment Processor Integration, Email Provider Integration, Carrier & Shipping Integration, Integration Hub, Integration Business Rules (YAML), and comprehensive User Stories with Gherkin acceptance criteria.
1. Sales Module
1.1 Core Sales Workflow (Item Entry & Logic)
Scope: Scanner/Manual Entry, Inventory Checks, Parking, and Customer Association.
Cross-Reference: See Module 4, Section 4.13 for inventory reservation model during sales.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant SC as Scanner
participant API as Backend
participant DB as DB
participant PROMO as Promo Engine
Note over U, PROMO: Phase 1: Initiation & Entry
loop Action Loop
U->>UI: Toggle Mode (Sale / Return / Exchange)
alt Input Methods
U->>UI: Scan Barcode / Search SKU
UI->>API: GET /product/{sku}
else Scanner Broadcast
SC->>UI: Tags Detected (Array)
UI->>API: POST /products/bulk-lookup
Note right of API: Max 50 tags per request
end
API->>DB: Fetch Price, Stock, Tax
DB-->>UI: Return Product Data
alt Stock Validation
opt Mode == Sale
UI->>UI: Check Stock > 0
alt Low Stock
UI-->>U: Warning / Block Item
end
end
end
alt Mode Check
opt Mode == Exchange
U->>UI: Load Original Sale for Exchange
UI->>API: GET /orders/{id}
API-->>UI: Return Original Sale Items
UI->>UI: Select Items to Exchange OUT
UI->>UI: Scan/Add New Items IN
UI->>UI: Calculate Price Difference
Note right of UI: Links to Exchange flow in Section 1.4
end
end
UI->>UI: Add to Cart
par Intelligence & Context
UI->>PROMO: Analyze Cart Context
PROMO-->>UI: Trigger Upsell Alert ("Buy 1 more for 10% off")
and Customer Attachment
opt Attach Customer
UI->>API: GET /customer/{id} (Loyalty/Debt/Price Tier)
API-->>UI: Return Profile (Credit Limit, Tax Class, Price Level)
UI->>UI: Recalculate Prices (if Price Tier applies)
UI->>UI: Recalculate Tax (if Exempt)
end
end
end
opt Session Management
U->>UI: Click "Park Sale"
UI->>API: POST /sales/park
API->>DB: Serialize State & Release Locks
U->>UI: Click "Retrieve Sale"
UI->>API: GET /sales/parked
API-->>UI: Restore Cart
end
1.1.1 Parked Sale State Machine
stateDiagram-v2
[*] --> ACTIVE: Cart Created
ACTIVE --> PARKED: Staff Parks Sale
PARKED --> ACTIVE: Staff Retrieves Sale
PARKED --> EXPIRED: TTL Exceeded (4 hours)
EXPIRED --> [*]: Inventory Released
ACTIVE --> PENDING: Proceed to Payment
Parked Sale Rules:
- Maximum parked sales per terminal: 5
- TTL (Time-to-Live): 4 hours
- Inventory is soft-reserved while parked (visible to other terminals with warning)
- Expired parked sales auto-release inventory and log reason
1.1.2 Reports: Core Sales
| Report | Purpose | Key Data Fields |
|---|---|---|
| Daily Sales Summary | End-of-day overview of all transactions | Date, total sales, transaction count, avg transaction value, payment method breakdown |
| Item Entry Method Report | Track how items are entered into the system | Scanner vs manual entry counts, scanner success rate, failed scan count |
| Parked Sales Report | Monitor parked sales activity and expirations | Parked count, retrieved count, expired count, avg park duration |
| Hourly Sales Heatmap | Identify peak sales hours for staffing | Hour, transaction count, total revenue, avg items per transaction |
1.2 Discounts & Pricing Logic
Scope: Line-item overrides, Global discounts, Promotion application, Price Tiers, and Loyalty Redemptions.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Phase 2: Modification & Pricing
loop Pricing Actions
alt Line Item Modification
U->>UI: Select Item -> Override Price
opt Manager Auth
UI-->>U: Prompt PIN
U->>UI: Enter PIN
end
UI->>UI: Apply New Price & Reason Code (e.g., "Damaged")
end
alt Discounting
U->>UI: Apply Global Discount / Promo Code / Coupon
UI->>API: Validate Code (Expiry/Stacking/Single-Use)
API-->>UI: Valid
Note right of UI: Critical Calculation Order:
UI->>UI: 1. Apply Price Tier (Wholesale/VIP)
UI->>UI: 2. Apply Line Discounts
UI->>UI: 3. Apply Global % (Excluding Non-Discountable)
UI->>UI: 4. Apply Coupons
UI->>UI: 5. Calculate Tax (on discounted subtotal)
UI->>UI: 6. Apply Loyalty Redemptions (after tax)
end
end
1.2.1 Discount Calculation Order (Definitive)
The system applies discounts in this strict order:
| Order | Type | Example | Applies To |
|---|---|---|---|
| 1 | Price Tier | Wholesale pricing | Base price replacement |
| 2 | Line Discounts | “Damaged - 20% off” | Individual items |
| 3 | Automatic Promos | “Buy 2 Get 1 Free” | Qualifying items |
| 4 | Global Discount | “10% off entire order” | Subtotal (excl. non-discountable) |
| 5 | Coupons | “SAVE10” code | After global discount |
| 6 | Tax Calculation | State + Local tax | On final discounted amount |
| 7 | Loyalty Redemption | “500 points = $5 off” | Final subtotal after tax |
Non-Discountable Items: Gift cards, deposits, and items flagged is_discountable = false are excluded from global discounts.
1.2.2 Reports: Discounts & Pricing
| Report | Purpose | Key Data Fields |
|---|---|---|
| Discount Usage Report | Track all discounts applied | Discount type, frequency, total value, avg discount %, top discounted items |
| Promotion Effectiveness | Measure promo campaign success | Promo code, redemption count, revenue impact, items sold via promo |
| Coupon Performance | Track coupon redemption rates | Coupon code, issued count, redeemed count, expired count, revenue impact |
| Price Override Audit | Monitor manual price changes | Override count, avg override %, reason codes, authorizing manager |
1.3 Financial Settlement (Payments, Layaway, Third-Party Financing)
Scope: Split tenders, Credit Limits, Gift Cards, Third-Party Financing (Affirm), and Finalization.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
participant PAY as Payment Gateway
Note over U, DB: Phase 3: Settlement
U->>UI: Click "Pay"
loop Tender Processing
U->>UI: Select Method
Note right of UI: Multiple tenders allowed: cash + card(s), multiple cards, etc.
alt Cash
UI-->>U: Show "Collect: $20.03"
U->>UI: Enter Amount Received
UI->>UI: Calculate Change Due
else Gift Card
U->>UI: Scan/Enter Gift Card Number
UI->>API: GET /giftcards/{number}/balance
API-->>UI: Return Balance & Expiry
alt Sufficient Balance
UI->>UI: Deduct from Gift Card
UI->>UI: Add to Tender List
else Insufficient Balance
UI-->>U: "Card Balance: $X. Apply partial?"
U->>UI: Apply Partial Amount
end
else On-Account (Store Credit)
UI->>API: Check Credit Limit (Balance + Pending Layaways + Cart)
alt Exceeds Limit
API-->>UI: Block Transaction
else Approved
UI->>UI: Add to Tender List
end
else Layaway Deposit
U->>UI: Select Layaway Mode
UI->>UI: Validate Min Deposit %
Note right of UI: Sale Status -> LAYAWAY
else Credit Card (SAQ-A Flow)
UI->>API: POST /payments/initiate {amount, order_id}
API->>PAY: Send amount to terminal
Note over PAY: Customer taps/inserts card
PAY-->>API: Token + Approval Code
API->>DB: Store token (never card data)
API-->>UI: Payment Approved
else Third-Party Financing (Affirm)
U->>UI: Select "Pay with Affirm"
UI->>API: POST /payments/affirm/initiate {amount, order_id, customer}
API->>PAY: Create Affirm Checkout Session
PAY-->>API: Redirect URL / QR Code
API-->>UI: Display QR Code or Redirect
Note over PAY: Customer completes Affirm application on their device
PAY-->>API: Webhook: Loan Approved + Charge ID
API->>DB: Store Affirm charge_id, loan_id, status
API-->>UI: Payment Approved (Affirm)
Note right of API: Full amount received from Affirm; customer pays Affirm directly
end
UI->>UI: Update Remaining Balance
end
UI->>API: POST /orders/finalize
par Backend Operations
API->>DB: Write Order Record
API->>DB: Update Inventory (Decrement Sale / Increment Return)
API->>DB: Update Customer (Loyalty Points / Increase Debt)
API->>DB: Update Gift Card Balance (if used)
API->>DB: Record Commission (Employee ID + Amount)
end
API-->>UI: Transaction Success
opt Receipt Printing
U->>UI: Select Template (Thermal / A4 Invoice / Gift Receipt)
UI->>U: Print / Email Receipt
end
1.3.1 Order State Machine
stateDiagram-v2
[*] --> DRAFT: Cart Created
DRAFT --> PENDING: Click Pay
PENDING --> PARTIAL_PAID: Partial Payment
PARTIAL_PAID --> PENDING: More Payment Needed
PENDING --> PAID: Full Payment
PARTIAL_PAID --> PAID: Full Payment
PAID --> COMPLETED: Finalized
PAID --> HOLD_FOR_PICKUP: Hold Requested
HOLD_FOR_PICKUP --> READY_FOR_PICKUP: Items Staged
READY_FOR_PICKUP --> COMPLETED: Customer Picked Up
READY_FOR_PICKUP --> HOLD_EXPIRED: Deadline Passed
HOLD_EXPIRED --> CONTACT_CUSTOMER: Staff Notified
CONTACT_CUSTOMER --> READY_FOR_PICKUP: Deadline Extended
CONTACT_CUSTOMER --> REFUNDED: Customer Wants Refund
COMPLETED --> VOIDED: Void Action (Same Day)
COMPLETED --> PARTIALLY_RETURNED: Partial Return
PARTIALLY_RETURNED --> FULLY_RETURNED: All Items Returned
VOIDED --> [*]
FULLY_RETURNED --> [*]
REFUNDED --> [*]
1.3.2 Layaway State Machine
stateDiagram-v2
[*] --> DEPOSIT_PAID: Min Deposit Received
DEPOSIT_PAID --> RESERVED: Inventory Reserved
RESERVED --> RESERVED: Additional Payment
RESERVED --> PAID_IN_FULL: Final Payment
PAID_IN_FULL --> COMPLETED: Items Released
RESERVED --> CANCELLED: Customer Cancels
RESERVED --> FORFEITED: Payment Deadline Missed
CANCELLED --> [*]
FORFEITED --> [*]
COMPLETED --> [*]
1.3.3 Credit Limit Calculation
When checking if a customer can use On-Account payment:
Available Credit = Credit Limit - (Current Debt + Pending Layaway Balances + Current Cart Total)
Example:
- Credit Limit: $500
- Current Debt: $150
- Pending Layaway (remaining balance): $100
- Current Cart: $400
- Available Credit: $500 - ($150 + $100 + $400) = -$150
- Result: Blocked (cart exceeds available credit by $150)
1.3.4 Reports: Financial Settlement
| Report | Purpose | Key Data Fields |
|---|---|---|
| Payment Method Breakdown | Analyze tender mix | Cash total, card total, gift card total, on-account total, Affirm total, split tender count |
| Affirm Financing Summary | Track third-party financing usage | Affirm transaction count, total financed, avg loan amount, approval rate |
| Layaway Status Report | Monitor active layaways | Active count, total deposits, total remaining balances, overdue count |
| On-Account Aging Report | Track customer credit usage | Customer, balance, credit limit, days outstanding, aging buckets (30/60/90) |
1.4 Post-Sale Management (History, Void, Returns, Exchanges)
Scope: Corrections, History, Returns, Dedicated Exchanges, Receipt Reprinting, and Data Export.
sequenceDiagram
autonumber
participant U as Manager
participant UI as POS UI
participant API as Backend
participant DB as DB
U->>UI: Open Sales History
U->>UI: Apply Filters (Date, User, Status)
UI->>API: GET /sales/history
alt Action: Void (Same-Day Correction Only)
Note right of U: "Oops, wrong item - same day"
UI->>API: GET /orders/{id}/void-eligibility
API-->>UI: Check if same business day & drawer still open
alt Eligible for Void
U->>UI: Click Void -> Confirm
UI->>API: POST /sales/{id}/void
par Reversal
API->>DB: Reverse Inventory & Loyalty
API->>DB: Reverse Commission (Full)
API->>DB: Set Status "VOIDED"
end
UI-->>U: Alert: "Manually Refund Card Terminal"
else Not Eligible (Different Day)
UI-->>U: "Cannot void - use Return instead"
end
else Action: Return (with Policy Check)
Note right of U: "Customer brought it back"
U->>UI: Scan Receipt Barcode
UI->>API: POST /receipts/validate {barcode_data}
API->>DB: Verify receipt authenticity & match to order
alt Receipt Valid
API-->>UI: Receipt Verified - Load Original Sale
else Receipt Invalid
API-->>UI: "Invalid Receipt - Cannot Process Return"
UI-->>U: "Receipt validation failed. Verify receipt."
end
UI->>API: GET /sales/{id}/return-eligibility
API-->>UI: Return Policy Result
alt Within Policy (Receipt + Time)
UI->>UI: Select Items to Return
UI->>UI: Process Refund to Original Payment
API->>DB: Reverse Commission (Proportional)
else Outside Policy (No Receipt / Expired)
UI-->>U: "Policy Exception - Store Credit Only"
opt Manager Override
UI-->>U: Prompt Manager PIN
U->>UI: Enter PIN + Reason Code
end
UI->>UI: Issue Store Credit
end
else Action: Exchange (Dedicated Flow)
Note right of U: "Customer wants different size"
U->>UI: Load Original Sale -> Click "Exchange"
UI->>UI: Select Item(s) to Exchange OUT
UI->>UI: Scan/Add New Item(s) IN
UI->>UI: Calculate Price Difference
alt Customer Owes Money
UI-->>U: "Collect: $15.00 difference"
U->>UI: Process Payment
else Store Owes Refund
UI-->>U: "Refund: $10.00 to customer"
U->>UI: Process Refund
else Even Exchange
UI->>UI: No Payment Required
end
UI->>API: POST /sales/exchange
API->>DB: Create Exchange Record (Links Old & New)
API->>DB: Update Inventory (Both Directions)
API->>DB: Adjust Commission (if price difference)
else Action: Reprint Receipt
U->>UI: Click "Reprint Receipt"
UI->>API: GET /orders/{id}/receipt
API-->>UI: Return Receipt Data
U->>UI: Select Format (Thermal / A4 / Email)
opt Email to Different Address
U->>UI: Enter Email Address
UI->>API: POST /orders/{id}/email-receipt
end
else Action: Pay Off Layaway
U->>UI: Retrieve Layaway
UI->>UI: Pay Remaining Balance
UI->>API: Finalize (Status: COMPLETED)
API->>DB: Release Reserve -> Sold
end
Cross-Reference: See Module 4, Section 4.13 for POS inventory integration details.
1.4.1 Void vs. Return Rules
| Aspect | Void | Return |
|---|---|---|
| When allowed | Same business day, drawer open | Any time within policy |
| Inventory | Reversed immediately | Reversed on completion |
| Commission | Full reversal | Proportional reversal |
| Card refund | Manual on terminal only | Staff chooses: Manual on terminal OR Automatic via token |
| Cash refund | Cash returned from drawer | Cash returned from drawer |
| Audit trail | “VOIDED” status | “RETURNED” line items |
| Use case | Cashier mistake | Customer return |
Card Refund Method Selection (Returns Only):
- Manual on terminal: Customer presents physical card. Staff processes refund on the payment terminal. Use when customer is present with their card.
- Automatic via token: System uses the stored payment token to process refund without card present. Use when customer does not have their card or for remote returns.
- Cash payments: Refund is always cash from drawer. No token option available for cash transactions.
1.4.2 Reports: Post-Sale
| Report | Purpose | Key Data Fields |
|---|---|---|
| Void Summary | Track voided transactions | Void count, total value, reason codes, voiding employee, authorizing manager |
| Return Summary | Track returns by period | Return count, total refund value, refund method breakdown, top returned items |
| Exchange Summary | Track exchange activity | Exchange count, price difference (net), upgrade vs downgrade ratio |
| Refund Method Report | Analyze refund processing | Manual on terminal count, automatic via token count, cash refund count |
Email Template: TMPL-REFUND-CONFIRMATION
| Field | Value |
|---|---|
| Trigger | Refund processed (card or store credit) |
| Recipient | Customer (if email on file) |
| Content | Refund amount, method, expected processing time, original order reference |
1.5 Gift Card Management
Scope: Selling, Activating, Redeeming, Balance Management, and Jurisdiction Compliance.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Gift Card Operations
alt Sell New Gift Card
U->>UI: Scan Gift Card (or Enter Number)
U->>UI: Enter Load Amount
UI->>API: POST /giftcards/activate
API->>DB: Create Gift Card Record (Number, Balance, Expiry)
API->>DB: Set Jurisdiction Rules (based on store location)
API-->>UI: Activation Success
UI->>UI: Add Gift Card to Cart as Product
Note right of UI: Proceeds through normal checkout
end
alt Check Balance
U->>UI: Click "Gift Card Balance"
U->>UI: Scan/Enter Card Number
UI->>API: GET /giftcards/{number}/balance
API-->>UI: Display Balance & Expiry Date (if applicable)
end
alt Reload Existing Card
U->>UI: Scan Gift Card
UI->>API: GET /giftcards/{number}
API-->>UI: Card Found - Current Balance
U->>UI: Enter Reload Amount
UI->>UI: Add Reload to Cart
Note right of UI: Balance updated after payment
end
alt Cash Out (California Compliance)
U->>UI: Scan Gift Card
UI->>API: GET /giftcards/{number}
API-->>UI: Return Balance & Jurisdiction
alt Balance <= Cash Out Threshold
UI-->>U: "Eligible for Cash Out: $8.50"
U->>UI: Process Cash Out
UI->>API: Request Cash Out
API->>DB: Zero Balance, Record Cash Out
else Balance > Threshold
UI-->>U: "Not eligible for cash out"
end
end
1.5.1 Gift Card State Machine
stateDiagram-v2
[*] --> INACTIVE: Card Manufactured
INACTIVE --> ACTIVE: Activated at POS (Sold)
ACTIVE --> ACTIVE: Partial Redemption
ACTIVE --> ACTIVE: Reload
ACTIVE --> DEPLETED: Balance = $0.00
ACTIVE --> EXPIRED: Past Expiry Date (where allowed)
DEPLETED --> ACTIVE: Reloaded
ACTIVE --> CASHED_OUT: Cash Out Processed (Balance <= Threshold)
EXPIRED --> [*]
CASHED_OUT --> [*]
note right of ACTIVE
Balance > $0
Within expiry (if applicable)
end note
note right of DEPLETED
Balance = $0
Can be reloaded
end note
1.5.2 Gift Card Jurisdiction Compliance
| Rule | Virginia | California | New York | Default |
|---|---|---|---|---|
| Expiry Allowed | Yes (5yr min) | No | No | No |
| Inactivity Fees | Yes (after 12mo) | No | No | No |
| Cash Out Threshold | None | $10.00 | None | None |
| Cash Out Required | No | Yes | No | No |
Implementation: Store location determines which jurisdiction rules apply. System defaults to most restrictive (California-style) and enables features where permitted.
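The jurisdiction matrix above can be resolved at runtime with a lookup that falls back to the most restrictive rule set. A minimal Python sketch (the rule dictionaries and function names are illustrative, not part of the specification):

```python
# Illustrative sketch: resolve gift card rules by store jurisdiction,
# falling back to the most restrictive (California-style) defaults.
from decimal import Decimal

JURISDICTION_RULES = {
    "VA": {"expiry_allowed": True,  "inactivity_fees": True,
           "cash_out_threshold": None, "cash_out_required": False},
    "CA": {"expiry_allowed": False, "inactivity_fees": False,
           "cash_out_threshold": Decimal("10.00"), "cash_out_required": True},
    "NY": {"expiry_allowed": False, "inactivity_fees": False,
           "cash_out_threshold": None, "cash_out_required": False},
}
# Default mirrors the most restrictive (California-style) behavior.
DEFAULT_RULES = JURISDICTION_RULES["CA"]

def resolve_rules(store_state):
    """Return the gift card rule set for a store's state."""
    return JURISDICTION_RULES.get(store_state, DEFAULT_RULES)

def cash_out_eligible(store_state, balance):
    """Cash out applies where the jurisdiction requires it and the
    balance is at or below the configured threshold."""
    rules = resolve_rules(store_state)
    threshold = rules["cash_out_threshold"]
    return bool(rules["cash_out_required"]
                and threshold is not None
                and balance <= threshold)
```

With these example rules, an $8.50 balance in California is eligible for cash out, matching the sequence diagram above.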
1.5.3 Reports: Gift Cards
| Report | Purpose | Key Data Fields |
|---|---|---|
| Gift Card Liability Report | Track outstanding gift card balances | Total active cards, total outstanding balance, avg balance per card |
| Gift Card Activity Report | Monitor gift card transactions | Activations, reloads, redemptions, cash-outs, expired cards by period |
| Gift Card Aging | Identify dormant cards | Card number, last activity date, balance, days inactive |
1.6 Special Orders & Back Orders
Scope: Ordering out-of-stock items with customer deposits.
sequenceDiagram
autonumber
participant U as Staff
participant C as Customer
participant UI as POS UI
participant API as Backend
participant DB as DB
participant INV as Inventory/Purchasing
Note over U, INV: Special Order Flow
U->>UI: Search Product -> Out of Stock
UI-->>U: "Available for Special Order"
U->>UI: Click "Create Special Order"
UI->>UI: Attach Customer (Required)
UI->>UI: Enter Quantity Needed
UI->>UI: Calculate Deposit (Min 50% or Full)
C->>U: Agrees to Deposit
U->>UI: Process Deposit Payment
UI->>API: POST /special-orders/create
par Order Creation
API->>DB: Create Special Order Record
API->>DB: Link Customer & Deposit Payment
API->>DB: Set Status: DEPOSIT_PAID
API->>INV: Notify Purchasing Team
end
API-->>UI: Order #SO-12345 Created
UI->>U: Print Special Order Receipt
Note over INV, DB: Later - Item Arrives
INV->>API: POST /special-orders/{id}/received
API->>DB: Update Status: READY_FOR_PICKUP
API-->>C: Send Notification (SMS/Email)
Note over U, C: Customer Pickup
U->>UI: Retrieve Special Order
UI->>UI: Show Deposit Already Paid
UI->>UI: Calculate Remaining Balance
U->>UI: Collect Remaining Payment
UI->>API: POST /special-orders/{id}/complete
API->>DB: Status: COMPLETED
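The deposit step above ("Min 50% or Full") can be sketched as a validation rule. This is illustrative only; the 50% minimum is the rate named in the flow, and the helper names are assumptions:

```python
# Illustrative sketch: special order deposit validation.
# Assumption: the minimum deposit is 50% of the order total; any
# amount from that minimum up to the full total is acceptable.
from decimal import Decimal, ROUND_HALF_UP

MIN_DEPOSIT_RATE = Decimal("0.50")

def minimum_deposit(order_total):
    """Smallest deposit accepted for a special order (50% of total)."""
    return (order_total * MIN_DEPOSIT_RATE).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

def validate_deposit(order_total, deposit):
    """Deposit must be at least the minimum and no more than the total."""
    return minimum_deposit(order_total) <= deposit <= order_total

def remaining_balance(order_total, deposit):
    """Balance collected at customer pickup."""
    return order_total - deposit
```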
1.6.1 Special Order State Machine
stateDiagram-v2
[*] --> CREATED: Order Initiated
CREATED --> DEPOSIT_PAID: Deposit Received
DEPOSIT_PAID --> ORDERED: Sent to Vendor
ORDERED --> RECEIVED: Item Arrived at Store
RECEIVED --> READY_FOR_PICKUP: Inspected & Staged
READY_FOR_PICKUP --> COMPLETED: Customer Picked Up
READY_FOR_PICKUP --> ABANDONED: No Pickup (30+ days)
CREATED --> CANCELLED: Customer Cancels (No Deposit)
DEPOSIT_PAID --> CANCELLED_REFUND: Customer Cancels (Refund Deposit)
CANCELLED --> [*]
CANCELLED_REFUND --> [*]
COMPLETED --> [*]
ABANDONED --> [*]
1.6.2 Reports: Special Orders
| Report | Purpose | Key Data Fields |
|---|---|---|
| Special Order Status Report | Track all special orders by status | Order ID, customer, item, status, deposit amount, days in current status |
| Special Order Aging | Identify overdue or stalled orders | Orders past expected date, orders approaching abandonment threshold |
Email Template: TMPL-SPECIAL-ORDER-READY
| Field | Value |
|---|---|
| Trigger | Special order status changes to READY_FOR_PICKUP |
| Recipient | Customer |
| Content | Order details, remaining balance, store address, pickup deadline |
1.7 Multi-Store Inventory & Transfers
Scope: Cross-store inventory lookup, transfers, and reservations (requires full payment).
Cross-Reference: See Module 4, Section 4.8 for inter-store transfer details.
sequenceDiagram
autonumber
participant U as Staff
participant C as Customer
participant UI as POS UI
participant API as Backend
participant DB as DB
participant S2 as Store B
Note over U, S2: Multi-Store Inventory Check
U->>UI: Search Product -> Low/No Stock
U->>UI: Click "Check Other Stores"
UI->>API: GET /inventory/multi-store/{sku}
Note right of API: Eventually consistent (max 5 min stale)
API-->>UI: Return Stock Levels (All Locations)
UI-->>U: Display: "Store B: 5 units, Store C: 2 units"
alt Request Transfer to This Store
U->>UI: Select Store B -> "Request Transfer"
UI->>UI: Enter Quantity
UI-->>U: "Customer must pay in full to process"
C->>U: Agrees to Pay
U->>UI: Add Item to Cart (Status: TRANSFER_PENDING)
U->>UI: Complete Full Payment
UI->>API: POST /transfers/request
API->>DB: Create Transfer Record (PAID)
API->>S2: Notify Store B to Ship
API-->>UI: Transfer #TRF-789 Created
UI-->>U: "Item will arrive in 2-3 days"
UI->>U: Print Transfer Receipt for Customer
else Reserve at Other Store (Customer Pickup)
U->>UI: Select Store B -> "Reserve for Pickup"
UI-->>U: "Customer must pay in full to reserve"
C->>U: Agrees to Pay
U->>UI: Process Full Payment
UI->>API: POST /reservations/create
API->>DB: Create Reservation (PAID)
API->>S2: Reserve Item at Store B
API-->>UI: Reservation #RES-456 Created
UI-->>U: "Reserved at Store B until [date]"
UI->>U: Print Pickup Voucher for Customer
end
Note over S2, DB: Store B Fulfillment
S2->>API: POST /transfers/{id}/picked
API->>DB: Update Status: PICKING
S2->>API: POST /transfers/{id}/shipped
API->>DB: Update Status: SHIPPED
Note right of DB: Carrier scan triggers IN_TRANSIT
API-->>U: Notification: "Transfer Shipped"
1.7.1 Transfer State Machine
stateDiagram-v2
[*] --> REQUESTED: Transfer Initiated
REQUESTED --> PAID: Customer Paid in Full
PAID --> PICKING: Source Store Processing
PICKING --> SHIPPED: Handed to Carrier
SHIPPED --> IN_TRANSIT: Carrier Scan Confirmed
IN_TRANSIT --> RECEIVED: Arrived at Destination
RECEIVED --> COMPLETED: Customer Notified & Picked Up
REQUESTED --> CANCELLED: Cancelled Before Payment
PAID --> CANCELLED_REFUND: Cancelled After Payment
CANCELLED --> [*]
CANCELLED_REFUND --> [*]
COMPLETED --> [*]
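The transfer state machine above can be enforced in the backend with an allowed-transitions table. A hypothetical sketch (the `TRANSFER_TRANSITIONS` map simply transcribes the diagram; the `transition` helper is an assumption, not a specified API):

```python
# Illustrative sketch: enforce the transfer state machine with an
# allowed-transitions table transcribed from the diagram above.
TRANSFER_TRANSITIONS = {
    "REQUESTED":  {"PAID", "CANCELLED"},
    "PAID":       {"PICKING", "CANCELLED_REFUND"},
    "PICKING":    {"SHIPPED"},
    "SHIPPED":    {"IN_TRANSIT"},
    "IN_TRANSIT": {"RECEIVED"},
    "RECEIVED":   {"COMPLETED"},
    # Terminal states have no outgoing transitions.
    "COMPLETED": set(), "CANCELLED": set(), "CANCELLED_REFUND": set(),
}

def transition(current, target):
    """Return the new state, or raise if the move is not allowed."""
    if target not in TRANSFER_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transfer transition: {current} -> {target}")
    return target
```

The same pattern applies to the reservation and ship-to-customer machines that follow.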
1.7.2 Reservation State Machine
stateDiagram-v2
[*] --> REQUESTED: Reservation Initiated
REQUESTED --> PAID: Customer Paid in Full
PAID --> RESERVED: Item Held at Store
RESERVED --> PICKED_UP: Customer Collected
RESERVED --> EXPIRED: Reservation Deadline Passed
EXPIRED --> REFUND_PENDING: Auto-Refund Triggered
REFUND_PENDING --> REFUNDED: Refund Processed
REQUESTED --> CANCELLED: Cancelled Before Payment
CANCELLED --> [*]
PICKED_UP --> [*]
REFUNDED --> [*]
1.7.3 Ship to Customer from Other Location
Scope: Direct shipping from a source store to the customer’s address, with carrier integration for real-time shipping cost calculation.
sequenceDiagram
autonumber
participant U as Staff
participant C as Customer
participant UI as POS UI
participant API as Backend
participant DB as DB
participant S2 as Source Store
participant SHIP as Carrier API
Note over U, SHIP: Ship to Customer from Another Store
U->>UI: Search Product -> Low/No Stock
U->>UI: Click "Check Other Stores"
UI->>API: GET /inventory/multi-store/{sku}
API-->>UI: Return Stock Levels (All Locations)
U->>UI: Select Source Store -> "Ship to Customer"
UI-->>U: "Enter Customer Shipping Address"
C->>U: Provides Shipping Address
U->>UI: Enter Address Details
UI->>API: POST /shipping/calculate
Note right of API: {origin_store, destination_address, items, weight}
API->>SHIP: Request Shipping Rates
SHIP-->>API: Return Shipping Options
API-->>UI: Display Shipping Options
UI-->>U: "Standard (3-5 days): $8.99 | Express (1-2 days): $15.99"
C->>U: Selects Shipping Option
U->>UI: Add Item + Shipping to Cart
UI->>UI: Total = Item Price + Shipping Cost
UI-->>U: "Customer must pay in full"
C->>U: Pays Full Amount (Item + Shipping)
U->>UI: Process Payment
UI->>API: POST /shipments/create
API->>DB: Create Shipment Record (PAID)
API->>S2: Notify Source Store to Pack & Ship
API-->>UI: Shipment #SHP-101 Created
UI-->>U: "Item will be shipped to customer"
UI->>U: Print Shipment Receipt for Customer
Note over S2, SHIP: Source Store Fulfillment
S2->>API: POST /shipments/{id}/packed
API->>DB: Update Status: PACKED
S2->>SHIP: Request Shipping Label
SHIP-->>S2: Return Label + Tracking Number
S2->>API: POST /shipments/{id}/shipped {tracking_number}
API->>DB: Update Status: SHIPPED
API-->>C: Send Tracking Email to Customer
Note over SHIP, DB: Delivery
SHIP->>API: Webhook: Delivered
API->>DB: Update Status: DELIVERED
API-->>C: Send Delivery Confirmation Email
1.7.4 Ship-to-Customer State Machine
stateDiagram-v2
[*] --> REQUESTED: Shipment Initiated
REQUESTED --> PAID: Customer Paid (Item + Shipping)
PAID --> PICKING: Source Store Processing
PICKING --> PACKED: Items Packed
PACKED --> SHIPPED: Label Generated & Handed to Carrier
SHIPPED --> IN_TRANSIT: Carrier Pickup Confirmed
IN_TRANSIT --> DELIVERED: Delivery Confirmed
REQUESTED --> CANCELLED: Cancelled Before Payment
PAID --> CANCELLED_REFUND: Cancelled After Payment (Full Refund)
CANCELLED --> [*]
CANCELLED_REFUND --> [*]
DELIVERED --> [*]
1.7.5 Reports: Multi-Store & Shipping
| Report | Purpose | Key Data Fields |
|---|---|---|
| Transfer Status Report | Track inter-store transfers | Transfer ID, source/destination, status, days in transit, customer |
| Shipping Fulfillment Report | Monitor ship-to-customer orders | Shipment ID, carrier, tracking, status, delivery date, shipping cost |
| Reservation Report | Track cross-store reservations | Reservation ID, store, item, status, expiry date, customer |
| Multi-Store Inventory Discrepancy | Flag stock mismatches after sync | SKU, expected vs actual, location, last sync time |
Email Template: TMPL-SHIPMENT-TRACKING
| Field | Value |
|---|---|
| Trigger | Shipment status changes to SHIPPED |
| Recipient | Customer |
| Content | Tracking number, carrier, estimated delivery, order details |
Email Template: TMPL-DELIVERY-CONFIRMATION
| Field | Value |
|---|---|
| Trigger | Shipment status changes to DELIVERED |
| Recipient | Customer |
| Content | Delivery confirmation, order summary, return/exchange policy link |
1.8 Sales Commissions
Scope: Track employee sales for commission calculation with proportional reversal on returns.
Cross-Reference: See Module 5, Section 5.5 for user commission rate configuration.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Commission Tracking (Per Transaction)
U->>UI: Login to POS (Employee ID Captured)
Note right of UI: Throughout Sale...
UI->>UI: Employee ID attached to session
U->>UI: Complete Sale
UI->>API: POST /orders/finalize
par Commission Recording
API->>DB: Calculate Commission Amount
Note right of API: Based on: Sale Total, Product Categories, Employee Tier
API->>DB: Insert Commission Record
Note right of DB: {employee_id, order_id, amount, date, line_items[]}
end
Note over U, DB: Commission Reversal on Return
U->>UI: Process Return (2 of 3 items)
UI->>API: POST /returns/create
par Proportional Reversal
API->>DB: Calculate Return Value / Original Sale Value
Note right of API: $80 returned / $120 sale = 66.7%
API->>DB: Reverse 66.7% of Commission
API->>DB: Update Commission Record
end
Note over U, DB: Commission Reporting
U->>UI: Manager -> Reports -> Commissions
UI->>API: GET /reports/commissions?date_range&employee
API->>DB: Aggregate Commission Data
API-->>UI: Return Commission Summary
UI-->>U: Display: Employee | Sales | Returns | Net Commission
1.8.1 Commission Calculation Rules
commission_calculation:
# Base calculation
base_method: "percentage_of_sale"
# Reversal rules
void_reversal: "full" # 100% reversal on void
return_reversal: "proportional" # Based on returned value
# Proportional calculation
# Commission Adjustment = Original Commission × (Returned Value / Original Sale Value)
# Example:
# Original Sale: $120, Commission: $6.00 (5%)
# Return: $80 worth of items
# Reversal: $6.00 × ($80/$120) = $4.00
# Net Commission: $6.00 - $4.00 = $2.00
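The proportional reversal formula above can be sketched directly. Function names are illustrative:

```python
# Illustrative sketch of the proportional commission reversal:
# Adjustment = Original Commission x (Returned Value / Original Sale Value)
from decimal import Decimal, ROUND_HALF_UP

def commission_reversal(original_commission, returned_value, original_sale_value):
    """Portion of the commission to reverse on a partial return."""
    ratio = returned_value / original_sale_value
    return (original_commission * ratio).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

def net_commission(original_commission, returned_value, original_sale_value):
    """Commission remaining after the proportional reversal."""
    return original_commission - commission_reversal(
        original_commission, returned_value, original_sale_value)
```

Running the worked example ($6.00 commission, $80 of a $120 sale returned) reverses $4.00, leaving a $2.00 net commission.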
1.8.2 Reports: Commissions
| Report | Purpose | Key Data Fields |
|---|---|---|
| Commission Summary | Period overview of all commissions | Total sales, total commissions, avg commission rate, top earners |
| Commission by Employee | Individual employee performance | Employee, sales count, sales total, commission earned, returns impact |
| Commission Reversal Log | Track commission adjustments | Order ID, original commission, reversal amount, reversal type (void/return), date |
1.9 Return Policy Engine
Configuration Note: Store return and exchange policies are manually configured in the application’s Settings/Setup module. Policies are NOT hardcoded in the application. Each tenant can configure different policies per store location and per sales channel (online vs in-store).
Scope: Configurable return rules based on receipt, time, and item type.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Return Policy Evaluation
U->>UI: Start Return
alt Has Receipt
U->>UI: Scan Receipt Barcode
UI->>API: POST /receipts/validate {barcode_data}
API->>DB: Verify receipt authenticity & match to order
alt Receipt Valid
API-->>UI: Receipt Verified
else Receipt Invalid
API-->>UI: "Invalid Receipt"
UI-->>U: "Receipt validation failed"
Note right of UI: Retry scan or process as no-receipt return
end
UI->>API: GET /orders/{id}
API-->>UI: Return Original Sale Data
UI->>UI: Calculate Days Since Purchase
alt Within 30 Days
UI-->>U: "Full Refund Eligible"
UI->>UI: Refund to Original Payment Method
else 31-90 Days
UI-->>U: "Store Credit Only"
UI->>UI: Issue Store Credit
else Beyond 90 Days
UI-->>U: "Manager Approval Required"
U->>UI: Enter Manager PIN
U->>UI: Select Exception Reason
end
else No Receipt
U->>UI: Scan Item Barcode
UI->>API: GET /products/{sku}/current-price
API-->>UI: Return Current Selling Price
UI-->>U: "No Receipt - Store Credit at Current Price"
UI-->>U: "Manager Approval Required"
U->>UI: Enter Manager PIN
UI->>UI: Issue Store Credit (Current Price)
end
alt Item-Specific Rules
opt Final Sale Item
UI-->>U: "BLOCKED: Final Sale - No Returns"
end
opt Opened Electronics
UI-->>U: "Restocking Fee: 15%"
UI->>UI: Deduct Restocking Fee
end
end
1.9.1 Default Policy Configuration Examples
Note: These are example default values only. Actual policies are configured per tenant in the application’s Settings/Setup module and are NOT hardcoded.
Online Sales Policy
| Policy | Timeframe | Refund Method | Conditions |
|---|---|---|---|
| Return | 30 days from delivery | Original payment method minus shipping & processing fees | Item in original condition, receipt required |
| Exchange | 30 days from delivery | Price difference applies; shipping & processing fees excluded from refund | Same category items preferred |
In-Store Sales Policy
| Policy | Timeframe | Refund Method | Conditions |
|---|---|---|---|
| Return | 24 hours from purchase | Original payment method | Receipt required (scanned for validation) |
| Exchange | 24 hours from purchase | Price difference applies | Receipt required (scanned for validation) |
Policy Configuration Fields (Settings/Setup):
- Return window (days/hours per channel)
- Exchange window (days/hours per channel)
- Refund method options (original method, store credit, etc.)
- Restocking fee percentage and applicable categories
- Final sale categories
- Manager override permissions
- Online shipping/processing fee exclusion rules
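The time-window branching in the sequence above (full refund, store credit, manager approval) can be sketched against a tenant-configured policy. The values here are the example defaults only; as noted, real policies come from Settings/Setup, and the function name is an assumption:

```python
# Illustrative sketch: evaluate a receipted return against
# configurable windows. These values mirror the example defaults
# above and are NOT hardcoded in the application.
from datetime import datetime, timedelta

EXAMPLE_POLICY = {
    "full_refund_window": timedelta(days=30),   # original payment method
    "store_credit_window": timedelta(days=90),  # store credit only
}

def evaluate_return(purchased_at, now, policy=EXAMPLE_POLICY):
    """Map elapsed time since purchase to a return outcome."""
    elapsed = now - purchased_at
    if elapsed <= policy["full_refund_window"]:
        return "FULL_REFUND"
    if elapsed <= policy["store_credit_window"]:
        return "STORE_CREDIT"
    return "MANAGER_APPROVAL_REQUIRED"
```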
1.9.2 Reports: Return Policy
| Report | Purpose | Key Data Fields |
|---|---|---|
| Return Policy Exception Report | Track manager overrides on return policy | Order ID, exception type, reason code, authorizing manager, refund amount |
| Return Reason Analysis | Understand why customers return items | Reason code, frequency, product categories, avg refund value |
| Channel Return Comparison | Compare online vs in-store returns | Channel, return count, return rate, avg processing time |
1.10 Serial Number Tracking
Scope: Capture serial numbers for designated high-value items.
Cross-Reference: See Module 4, Section 4.10 for serial number tracking lifecycle.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Serial Number Capture at Sale
U->>UI: Scan Product (Serial Required Flag)
UI-->>U: "Enter Serial Number"
U->>UI: Scan/Enter Serial Number
UI->>API: POST /serials/validate
alt Serial Already Sold
API-->>UI: "ERROR: Serial already in system"
UI-->>U: Block Item - Investigate
else Serial Valid/New
API-->>UI: Valid
UI->>UI: Attach Serial to Line Item
end
U->>UI: Complete Sale
UI->>API: POST /orders/finalize
API->>DB: Store Serial with Order Line
Note right of DB: {order_id, line_id, serial_number, product_sku}
Note over U, DB: Serial Lookup (Returns/Warranty)
U->>UI: Search by Serial Number
UI->>API: GET /serials/{number}
API-->>UI: Return Purchase History
UI-->>U: "Sold on [date] to [customer] - Order #123"
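The duplicate-serial check in the flow above reduces to a set-membership test. A minimal sketch (the helper name and in-memory set stand in for the real lookup against order lines):

```python
# Illustrative sketch: reject a serial number already attached to a
# prior sale, per the validation step in the diagram above.
def validate_serial(serial, sold_serials):
    """Raise if the serial was already sold; otherwise record it."""
    if serial in sold_serials:
        raise ValueError(f"Serial already in system: {serial}")
    sold_serials.add(serial)  # mark as sold / attach to the line item
```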
1.10.1 Reports: Serial Number Tracking
| Report | Purpose | Key Data Fields |
|---|---|---|
| Serial Number Audit Trail | Full history of serial-tracked items | Serial number, product SKU, sale date, customer, return status |
| Missing Serial Report | Flag transactions missing required serials | Order ID, product, serial required flag, serial captured (Y/N) |
1.11 Hold for Pickup (Including BOPIS)
Scope: Fully paid items held for customer pickup at the CURRENT store. This includes both in-store holds and BOPIS (Buy Online, Pick Up In Store) orders. Different from Layaway (partial payment) and Reservation (item at another store).
Cross-Reference: See Module 4, Section 4.13 for inventory reservation model.
Reservation vs Hold for Pickup - Key Distinction:
| Aspect | Reservation (Section 1.7.2) | Hold for Pickup (Section 1.11) |
|---|---|---|
| What it is | Reserve item at a DIFFERENT store for customer pickup | Pay for items at THIS store, pick up later |
| Origin | In-store POS (staff-initiated) | In-store POS or Online order (BOPIS) |
| Payment | Full payment at originating store | Full payment required upfront |
| Inventory location | At the remote store | At the current store |
| Customer picks up at | The remote store | The same store (or originating store for BOPIS) |
| BOPIS | No | Yes - this is the BOPIS flow |
| Use case | “Store B has it, I’ll drive there to get it” | “I’ll pay now, come back Saturday” or “Order online, pick up in store” |
Examples:
Reservation Example: Customer is at Store A. Item is out of stock. Store B has 3 units. Customer pays at Store A. Item is reserved at Store B. Customer drives to Store B to pick it up with their pickup voucher.
Hold for Pickup Example 1 (In-Store): Customer at Store A buys a large piece of furniture. Pays in full. Asks store to hold it until Saturday when they can bring a truck. Store stages the item. Customer returns Saturday to pick up.
Hold for Pickup Example 2 (Online/BOPIS): Customer browses the online store. Selects “Pick Up at Store A.” Pays online. Store A receives the order, stages the items. Customer receives “Ready for Pickup” notification. Customer walks into Store A and picks up.
sequenceDiagram
autonumber
participant U as Staff
participant C as Customer
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Hold for Pickup Flow
U->>UI: Add Items to Cart
U->>UI: Attach Customer
U->>UI: Click "Hold for Pickup"
UI->>UI: Set Pickup Deadline (Default: 7 days)
UI-->>U: "Customer must pay in full"
C->>U: Pays Full Amount
U->>UI: Process Payment
UI->>API: POST /orders/finalize
API->>DB: Create Order (Status: HOLD_FOR_PICKUP)
API->>DB: Set Pickup Deadline
API->>DB: Reserve Inventory
API-->>UI: Order Complete
UI->>U: Print Pickup Slip
Note over U, DB: Customer Returns to Pickup
U->>UI: Retrieve Held Order
UI->>API: GET /orders/held/{id}
API-->>UI: Display Order Details
U->>UI: Verify Customer ID
U->>UI: Click "Release to Customer"
UI->>API: PATCH /orders/{id}/pickup-complete
API->>DB: Status: COMPLETED
API->>DB: Release Inventory Hold
Note over API, DB: Expiry Handling (Background Job)
API->>DB: Check Overdue Holds Daily
alt Hold Expired
API->>DB: Status: HOLD_EXPIRED
API-->>U: Alert: "Hold #123 expired - contact customer"
end
1.11.1 Hold for Pickup State Machine
stateDiagram-v2
[*] --> HOLD_FOR_PICKUP: Full Payment + Hold Request
HOLD_FOR_PICKUP --> READY_FOR_PICKUP: Items Staged
READY_FOR_PICKUP --> COMPLETED: Customer Picked Up
READY_FOR_PICKUP --> HOLD_EXPIRED: Deadline Passed
HOLD_EXPIRED --> CONTACT_CUSTOMER: Staff Notified
CONTACT_CUSTOMER --> READY_FOR_PICKUP: Deadline Extended
CONTACT_CUSTOMER --> REFUNDED: Customer Wants Refund
REFUNDED --> [*]
COMPLETED --> [*]
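The daily expiry job described in the sequence diagram can be sketched as a sweep over staged holds. The data shape and function name are assumptions for illustration:

```python
# Illustrative sketch of the daily expiry job: any hold past its
# pickup deadline moves to HOLD_EXPIRED so staff can contact the
# customer (CONTACT_CUSTOMER in the state machine above).
from datetime import datetime, timedelta

DEFAULT_HOLD_WINDOW = timedelta(days=7)  # default pickup deadline

def expire_overdue_holds(holds, now):
    """Mutate overdue holds to HOLD_EXPIRED and return them."""
    expired = []
    for hold in holds:
        if hold["status"] == "READY_FOR_PICKUP" and now > hold["deadline"]:
            hold["status"] = "HOLD_EXPIRED"
            expired.append(hold)  # caller alerts staff per expired hold
    return expired
```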
1.12 Cash Drawer Management
Scope: Opening float, blind counts, variance tracking, X-reports (mid-shift), and Z-reports (end-of-shift).
Cross-Reference: See Module 5, Section 5.7 for register configuration and Section 5.6 for clock-in/clock-out time tracking.
sequenceDiagram
autonumber
participant U as Staff
participant M as Manager
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Shift Start - Open Drawer
M->>UI: Open Cash Drawer Session
M->>UI: Enter Opening Float Amount
UI->>API: POST /cash-drawer/open
API->>DB: Create Drawer Session (opening_float, start_time)
API-->>UI: Drawer Session Started
Note over U, DB: During Shift - Transactions
loop Cash Transactions
U->>UI: Process Cash Sale/Refund
UI->>API: Record Cash Movement
API->>DB: Record Cash In/Out
end
opt Mid-Shift Check (X-Report)
U->>UI: Click "X-Report"
UI->>API: GET /cash-drawer/x-report
API->>DB: Calculate Current Expected Amount
Note right of API: Expected = Float + Cash Sales So Far - Cash Refunds - Payouts
API-->>UI: Return X-Report Data
UI->>U: Print/Display X-Report
Note right of UI: X-Report does NOT close the drawer
Note right of UI: Can be run multiple times per shift
end
Note over U, DB: Shift End - Close Drawer
U->>UI: Click "Close Drawer"
UI-->>U: "Perform Blind Count"
U->>UI: Enter Counted Cash (Blind - no expected shown)
UI->>API: POST /cash-drawer/count
API->>DB: Calculate Expected Amount
Note right of API: Expected = Float + Cash Sales - Cash Refunds - Payouts
API->>DB: Calculate Variance (Counted - Expected)
alt Variance Within Tolerance
API-->>UI: "Drawer Balanced"
else Variance Outside Tolerance
API-->>UI: "Variance: -$5.00 - Manager Approval Required"
M->>UI: Enter PIN + Variance Reason
end
API->>DB: Close Drawer Session
API->>DB: Record Final Counts & Variance
UI->>U: Print Z-Report
Note over M, DB: Z-Report Contents
Note right of UI: - Opening Float
Note right of UI: - Cash Sales Total
Note right of UI: - Cash Refunds Total
Note right of UI: - Expected Cash
Note right of UI: - Counted Cash
Note right of UI: - Variance (+/-)
Note right of UI: - Transaction Count
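The drawer math noted in the diagram (Expected = Float + Cash Sales - Cash Refunds - Payouts) can be sketched directly. The $2.00 tolerance is an illustrative assumption; the real value is store-configured:

```python
# Illustrative sketch of the cash drawer reconciliation math above.
from decimal import Decimal

def expected_cash(opening_float, cash_sales, cash_refunds, payouts):
    """Expected = Float + Cash Sales - Cash Refunds - Payouts."""
    return opening_float + cash_sales - cash_refunds - payouts

def variance(counted, expected):
    """Positive = overage, negative = shortage (Counted - Expected)."""
    return counted - expected

def needs_manager_approval(var, tolerance=Decimal("2.00")):
    """Tolerance here is an assumed example; stores configure their own."""
    return abs(var) > tolerance
```

For example, a $200 float with $950 cash sales, $45 refunds, and $30 payouts yields $1,075 expected; a blind count of $1,070 is a -$5.00 variance, outside the example tolerance.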
1.12.1 Cash Drawer State Machine
stateDiagram-v2
[*] --> CLOSED: Drawer Secured
CLOSED --> OPENING: Manager Opens
OPENING --> OPEN: Float Entered
OPEN --> OPEN: Transactions Processed
OPEN --> COUNTING: Close Initiated
COUNTING --> BALANCED: Variance Within Tolerance
COUNTING --> VARIANCE_DETECTED: Variance Outside Tolerance
VARIANCE_DETECTED --> MANAGER_REVIEW: Awaiting Approval
MANAGER_REVIEW --> BALANCED: Manager Approved
BALANCED --> CLOSED: Z-Report Printed
CLOSED --> [*]
1.12.2 X-Report vs Z-Report
| Aspect | X-Report | Z-Report |
|---|---|---|
| When | Any time during shift | End of shift only |
| Closes Drawer | No | Yes |
| Resets Counters | No | Yes |
| Frequency | Unlimited per shift | Once per shift |
| Use Cases | Mid-shift audit, shift handoff check, manager spot-check | End-of-day close, final reconciliation |
| Content | Opening float, cash sales, cash refunds, payouts, expected cash | X-Report content plus final blind count and variance |
X-Report Use Cases:
- Manager wants to verify cash during a busy period
- Shift handoff between employees (outgoing staff checks before handing off)
- Routine mid-day audit required by store policy
- Investigating a suspected cash handling issue
1.12.3 Reports: Cash Drawer
| Report | Purpose | Key Data Fields |
|---|---|---|
| X-Report | Mid-shift cash snapshot (does not close drawer) | Opening float, cash sales, cash refunds, payouts, expected cash |
| Z-Report | End-of-shift final reconciliation | Same as X-Report + blind count, variance, manager approval |
| Variance History Report | Track cash variances over time | Date, shift, employee, expected, counted, variance, reason code |
| Cash Movement Log | Detailed cash in/out record | Timestamp, type (sale/refund/payout/float), amount, employee |
1.13 Price Check Mode
Scope: Quick price lookup without adding to cart.
sequenceDiagram
autonumber
participant U as Staff
participant C as Customer
participant UI as POS UI
participant API as Backend
C->>U: "How much is this?"
U->>UI: Click "Price Check" Mode
UI->>UI: Switch to Price Check Display
U->>UI: Scan Item Barcode
UI->>API: GET /products/{sku}
API-->>UI: Return Product Info
UI-->>U: Display Large Price
UI-->>U: Show: Name, SKU, Price, Stock Level
opt Promotion Active
UI-->>U: "ON SALE: Was $50, Now $39.99"
end
U->>UI: Press Any Key / Timeout
UI->>UI: Return to Normal Sale Mode
1.14 Coupon System
Scope: Single-use and multi-use coupons (separate from promo codes).
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Coupon Application
U->>UI: Click "Apply Coupon"
U->>UI: Scan/Enter Coupon Code
UI->>API: POST /coupons/validate
API->>DB: Lookup Coupon
alt Single-Use Coupon (e.g., Birthday)
API->>DB: Check if Already Redeemed
alt Already Used
API-->>UI: "Coupon Already Redeemed"
else Valid
API-->>UI: Return Discount Details
end
else Multi-Use Coupon (e.g., SAVE10)
API->>DB: Check Usage Limit & Expiry
API-->>UI: Return Discount Details
end
alt Valid Coupon
UI->>UI: Apply Discount
UI->>UI: Display Savings
Note right of UI: Coupon marked for redemption at finalize
end
U->>UI: Complete Sale
UI->>API: POST /orders/finalize
par Coupon Processing
API->>DB: Mark Single-Use Coupon as REDEEMED
API->>DB: Increment Multi-Use Coupon Counter
API->>DB: Link Coupon to Order
end
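The validation branches above can be sketched as a single check covering both coupon kinds. The coupon record shape and return codes are illustrative assumptions:

```python
# Illustrative sketch of coupon validation: single-use coupons check
# prior redemption; multi-use coupons check usage limit; both check expiry.
from datetime import date

def validate_coupon(coupon, today):
    """Return 'VALID' or a rejection reason matching the flow above."""
    if coupon.get("expires") and today > coupon["expires"]:
        return "EXPIRED"
    if coupon["kind"] == "single_use":
        return "ALREADY_REDEEMED" if coupon["redeemed"] else "VALID"
    # Multi-use: compare the usage counter against its limit.
    if coupon["uses"] >= coupon["limit"]:
        return "DEPLETED"
    return "VALID"
```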
1.14.1 Coupon State Machine
stateDiagram-v2
[*] --> CREATED: Coupon Generated
CREATED --> ACTIVE: Published/Distributed
state ACTIVE {
[*] --> AVAILABLE
AVAILABLE --> APPLIED: Added to Cart
APPLIED --> AVAILABLE: Removed from Cart
APPLIED --> REDEEMED: Order Finalized
}
ACTIVE --> EXPIRED: Past Expiry Date
ACTIVE --> DEPLETED: Usage Limit Reached (Multi-Use)
REDEEMED --> [*]: Single-Use Complete
EXPIRED --> [*]
DEPLETED --> [*]
1.15 Flexible Loyalty Programs
Scope: Configurable loyalty: points-per-dollar, punch cards, spend thresholds.
Cross-Reference: See Module 5, Section 5.17 for loyalty program settings configuration (point rates, tier thresholds, gift card settings).
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Loyalty Program Types
U->>UI: Attach Customer to Sale
UI->>API: GET /customers/{id}/loyalty
API-->>UI: Return Loyalty Profile & Active Programs
alt Points Per Dollar Program
Note right of UI: Earn 1 point per $1 spent
UI->>UI: Calculate Points to Earn
UI-->>U: "Customer earns 45 points"
opt Redeem Points
U->>UI: Click "Redeem Points"
UI-->>U: "500 points = $5 off"
U->>UI: Apply Redemption
end
else Punch Card Program
Note right of UI: Buy 10, Get 1 Free
UI->>UI: Check Qualifying Items in Cart
UI-->>U: "Coffee Purchase: Punch 3 of 10"
opt Card Complete
UI-->>U: "FREE ITEM EARNED!"
UI->>UI: Auto-Apply Free Item Discount
end
else Spend Threshold Program
Note right of UI: Spend $100, Get $10 Off
UI->>UI: Check Customer's Period Spend
UI-->>U: "Customer has spent $85 this month"
opt Threshold Reached This Sale
UI-->>U: "$10 Reward Unlocked!"
UI->>UI: Apply or Save for Next Visit
end
end
U->>UI: Complete Sale
UI->>API: POST /orders/finalize
par Loyalty Updates
API->>DB: Award Points Earned
API->>DB: Update Punch Card Count
API->>DB: Update Spend Totals
API->>DB: Check Tier Upgrades
end
1.15.1 Customer Tier State Machine
stateDiagram-v2
[*] --> BRONZE: New Customer
BRONZE --> SILVER: Spend >= $1,000/year
SILVER --> GOLD: Spend >= $5,000/year
GOLD --> GOLD: Maintains Spend
GOLD --> SILVER: Annual Spend < $5,000
SILVER --> BRONZE: Annual Spend < $1,000
note right of BRONZE
1x points
Standard pricing
end note
note right of SILVER
1.5x points
5% discount
end note
note right of GOLD
2x points
10% discount
Early access
end note
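The tier thresholds and point multipliers in the diagram notes can be sketched together. Remember these are the example values from the diagram; actual rates come from the Module 5 loyalty settings, and the function names are assumptions:

```python
# Illustrative sketch: tier assignment and point earning using the
# example thresholds and multipliers from the state machine above.
TIER_THRESHOLDS = [("GOLD", 5000), ("SILVER", 1000), ("BRONZE", 0)]
TIER_MULTIPLIERS = {"BRONZE": 1.0, "SILVER": 1.5, "GOLD": 2.0}

def tier_for_annual_spend(annual_spend):
    """Highest tier whose threshold the annual spend meets."""
    for tier, threshold in TIER_THRESHOLDS:
        if annual_spend >= threshold:
            return tier
    return "BRONZE"

def points_earned(sale_total, tier):
    """Base rate of 1 point per $1, scaled by the tier multiplier."""
    return int(sale_total * TIER_MULTIPLIERS[tier])
```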
1.15.2 Reports: Loyalty Programs
| Report | Purpose | Key Data Fields |
|---|---|---|
| Loyalty Points Summary | Track points economy | Total points issued, redeemed, expired, outstanding balance |
| Tier Distribution Report | Customer breakdown by tier | Tier, customer count, avg spend, upgrade/downgrade count |
| Points Expiry Forecast | Predict upcoming point expirations | Customer, points expiring, expiry date, days remaining |
| Punch Card Activity | Track punch card completions | Program, cards started, cards completed, avg completion time |
| Loyalty ROI Analysis | Measure loyalty program value | Points cost, additional revenue from loyalty customers, retention rate |
1.16 Offline Operations
Scope: Queue-and-sync architecture for network resilience.
Cross-Reference: See Module 4, Section 4.15 for offline inventory operations.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant Q as Local Queue
participant API as Backend
participant DB as DB
Note over U, DB: Network Loss Detected
UI->>UI: Detect Network Unavailable
UI-->>U: Display "OFFLINE MODE" Indicator
Note over U, Q: Processing Sales Offline
U->>UI: Process Sale
UI->>Q: Queue Transaction Locally
Note right of Q: Store: items, customer_id, payments, timestamp
Q-->>UI: Transaction Queued (#1)
UI-->>U: "Sale Complete (Pending Sync)"
UI->>U: Print Receipt with "OFFLINE" watermark
loop Continue Offline Sales
U->>UI: Process More Sales
UI->>Q: Queue Each Transaction
UI-->>U: Show Queue Count (e.g., "3 pending")
end
Note over UI, DB: Network Restored
UI->>UI: Detect Network Available
UI-->>U: Display "SYNCING..." Indicator
loop Sync Queue
Q->>API: POST /orders/sync
API->>DB: Validate & Write
alt Conflict: Item Out of Stock
API-->>UI: Conflict Detected
UI-->>U: Alert: "Item X sold out - review order #1"
Note right of UI: Flag for manager review
else Success
API-->>UI: Synced
Q->>Q: Remove from Queue
end
end
UI-->>U: "All transactions synced"
1.16.1 Offline Mode State Machine
stateDiagram-v2
[*] --> ONLINE: Network Available
ONLINE --> OFFLINE: Network Lost
OFFLINE --> SYNCING: Network Restored
SYNCING --> ONLINE: Queue Empty
SYNCING --> CONFLICT_REVIEW: Conflicts Detected
CONFLICT_REVIEW --> SYNCING: Conflicts Resolved
CONFLICT_REVIEW --> ONLINE: Manager Override
note right of OFFLINE
Queue transactions locally
Limited operations only
end note
note right of SYNCING
Processing queue
Show progress
end note
1.16.2 Offline Operations Rules
offline_mode:
# Maximum transactions to queue locally
max_queue_size: 100
# Auto-sync interval when online (seconds)
sync_interval_seconds: 30
# Conflict resolution strategy
conflict_strategy: "server_wins_with_review"
# Operations ALLOWED offline
allowed_offline:
- sale_new
- return_with_receipt
- price_check
- gift_card_balance_check # Cached balance, show warning
- parked_sale_create
- parked_sale_retrieve
# Operations BLOCKED offline
blocked_offline:
- customer_create # Requires uniqueness check
- credit_limit_check # Requires real-time balance
- on_account_payment # Risk of exceeding limit
- multi_store_inventory # Requires network
- gift_card_activation # Must register immediately
- gift_card_reload # Risk of double-load
- transfer_request # Requires other store
- reservation_create # Requires other store
# Gift card handling offline
gift_card_offline:
balance_check: "use_cached" # Show cached with warning
redemption: "block" # Too risky - balance could be stale
# Inventory conflict handling
inventory_conflict:
# If offline sale sold item that's now out of stock
resolution: "manager_review"
auto_backorder: false
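The allow/block rules above can be enforced with a single gate in front of the operation dispatcher. A minimal sketch, assuming the operation names from the config; the `can_run_offline` helper is illustrative, not part of the actual API.

```python
# Mirrors the allowed_offline list in the config above; anything
# not in this set (customer_create, gift_card_activation, etc.) is blocked.
ALLOWED_OFFLINE = {
    "sale_new", "return_with_receipt", "price_check",
    "gift_card_balance_check", "parked_sale_create", "parked_sale_retrieve",
}

def can_run_offline(operation: str, queue_size: int,
                    max_queue_size: int = 100) -> tuple[bool, str]:
    """Return (allowed, reason). Unknown operations are blocked by default."""
    if operation not in ALLOWED_OFFLINE:
        return False, f"'{operation}' requires a network connection"
    if queue_size >= max_queue_size:
        return False, "offline queue full - sync required before new transactions"
    return True, "ok"
```

Defaulting to "blocked" for any operation not explicitly whitelisted keeps new features safe offline until someone deliberately clears them.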
1.16.3 Offline Sync Conflict Resolution
| Conflict Type | Resolution | Action |
|---|---|---|
| Item out of stock | Manager review | Flag order, allow override or backorder |
| Price changed | Use offline price | Honor price at time of sale |
| Customer deleted | Use anonymous | Reassign to “Walk-in Customer” |
| Promotion expired | Manager review | Allow or remove discount |
| Gift card insufficient | Block sync | Require additional payment |
| Transfer item sold at source | Customer notification | Email customer (TMPL-OFFLINE-SOLD), ask to contact store to discuss options |
| Ship-to-customer item sold at source | Customer notification | Email customer (TMPL-OFFLINE-SOLD), ask to contact store to discuss options |
| Reservation item sold at source | Customer notification | Email customer (TMPL-OFFLINE-SOLD), ask to contact store to discuss options |
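The table above collapses naturally into a resolution lookup. A sketch; the strategy strings are illustrative labels for the table rows, not established system constants.

```python
# Conflict type -> resolution strategy, mirroring the table above.
CONFLICT_RESOLUTIONS = {
    "item_out_of_stock": "manager_review",
    "price_changed": "use_offline_price",
    "customer_deleted": "reassign_walk_in",
    "promotion_expired": "manager_review",
    "gift_card_insufficient": "block_sync",
    "transfer_item_sold_at_source": "notify_customer",
    "ship_item_sold_at_source": "notify_customer",
    "reservation_item_sold_at_source": "notify_customer",
}

def resolve_conflict(conflict_type: str) -> str:
    # Unknown conflict types fall back to manager review, the safest default.
    return CONFLICT_RESOLUTIONS.get(conflict_type, "manager_review")
```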
1.16.4 Offline Conflict Email Templates
When the system comes back online and discovers that a Transfer, Ship-to-Customer, or Reservation request cannot be fulfilled because the item was sold from the source location during the offline period, the system must automatically notify the customer.
Email Template: TMPL-OFFLINE-SOLD (Item Availability Change Notification)
| Field | Value |
|---|---|
| Template ID | TMPL-OFFLINE-SOLD |
| Trigger | Offline sync detects item sold from source location for Transfer, Ship-to-Customer, or Reserve requests |
| Recipient | Customer (email on file) |
| Subject | “Update Regarding Your Order - Action Required” |
| Content | Informs customer that the requested item is no longer available at the source location. Asks the customer to contact the store at their earliest convenience to discuss available options (alternative locations, backorder, refund, etc.) |
| Fallback | If no customer email on file, create staff alert for manual outreach |
Offline Conflict Notification Rules:
offline_conflict_notifications:
# Notify customer when item sold from source location
item_sold_at_source:
notify_customer: true
email_template: "TMPL-OFFLINE-SOLD"
fallback_if_no_email: "staff_alert"
# Staff alert appears in manager dashboard
staff_alert_priority: "high"
1.17 Tax Calculation Engine
Scope: Custom tax engine with jurisdiction support, hierarchy rules, and exemptions.
Cross-Reference: See Module 5, Section 5.9 for tax jurisdiction setup, compound rate configuration (State/County/City), and rate assignment per location.
sequenceDiagram
autonumber
participant UI as POS UI
participant API as Backend
participant TAX as Tax Engine
participant DB as DB
Note over UI, DB: Tax Calculation Flow
UI->>API: Calculate Tax for Cart
API->>TAX: Submit Cart + Store Location + Customer
TAX->>DB: Get Store Tax Jurisdiction
Note right of DB: Store in Virginia: State 4.3% + Local 1%
TAX->>DB: Get Customer Tax Status
Note right of DB: Customer: Regular (no exemption)
loop For Each Line Item
TAX->>DB: Get Product Tax Category
alt Product Override (e.g., Food)
TAX->>TAX: Apply Product Tax Rate (reduced grocery rate)
else Customer Exempt
TAX->>TAX: Apply 0% Tax
else Standard
TAX->>TAX: Apply Jurisdiction Rate (5.3%)
end
end
TAX-->>API: Return Tax Breakdown
API-->>UI: Display Tax Summary
Note right of UI: Tax Breakdown:
Note right of UI: State Tax (4.3%): $4.30
Note right of UI: Local Tax (1.0%): $1.00
Note right of UI: Total Tax: $5.30
1.17.1 Tax Hierarchy (Priority Order)
1. Product-Level Override (Highest Priority)
└── Example: "Grocery - Tax Exempt", "Prepared Food - 10%"
2. Customer-Level Exemption
└── Example: "Reseller Certificate", "Diplomatic Status", "Non-Profit"
3. Location-Based Tax (Default)
└── Based on store physical address
└── Includes: State + County + City + Special District
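The priority order reduces to a single resolution function: first match wins. A minimal sketch; parameter names like `product_rate_override` are assumptions, not the actual schema.

```python
from typing import Optional

def resolve_tax_rate(
    product_rate_override: Optional[float],  # e.g. 0.0 for prescription drugs
    customer_exempt: bool,                   # valid exemption certificate on file
    jurisdiction_rate: float,                # state + county + city + district
) -> float:
    """Tax hierarchy: product override > customer exemption > location default."""
    if product_rate_override is not None:
        return product_rate_override
    if customer_exempt:
        return 0.0
    return jurisdiction_rate
```

Note that a product-level override beats a customer exemption even when the override is nonzero, which is exactly the ordering the hierarchy specifies.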
1.17.2 Virginia Tax Configuration
tax_jurisdictions:
virginia:
state_rate: 4.3
# Regional taxes (in addition to state)
regions:
hampton_roads:
counties: ["Norfolk", "Virginia Beach", "Newport News", "Hampton"]
additional_rate: 0.7
northern_virginia:
counties: ["Arlington", "Fairfax", "Loudoun", "Prince William"]
additional_rate: 0.7
central_virginia:
counties: ["Henrico", "Chesterfield", "Richmond City"]
additional_rate: 0.0
# Local tax (most localities)
default_local_rate: 1.0
# Product exemptions
exemptions:
- category: "grocery_food"
rate: 1.5 # Reduced rate for groceries
- category: "prescription_drugs"
rate: 0.0
- category: "medical_equipment"
rate: 0.0
tax_exemption_types:
- code: "RESALE"
description: "Reseller Certificate"
requires_certificate: true
certificate_expiry: true
- code: "NONPROFIT"
description: "501(c)(3) Non-Profit"
requires_certificate: true
certificate_expiry: true
- code: "DIPLOMAT"
description: "Diplomatic Exemption"
requires_certificate: true
certificate_expiry: false
- code: "NATIVE"
description: "Native American Tribal Member"
requires_certificate: true
certificate_expiry: false
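As a worked example of the configuration above: a $100 standard item sold in Hampton Roads compounds to 4.3% state + 0.7% regional + 1.0% local = 6.0%, or $6.00 tax; in Central Virginia (no regional surcharge) the same item taxes at 5.3%, matching the sequence diagram. A sketch using `Decimal` to avoid float rounding; the function names are illustrative.

```python
from decimal import Decimal, ROUND_HALF_UP

def virginia_rate(regional_rate: Decimal,
                  local_rate: Decimal = Decimal("1.0")) -> Decimal:
    """Total percentage: 4.3 state + regional surcharge + local."""
    return Decimal("4.3") + regional_rate + local_rate

def tax_for(subtotal: Decimal, rate_pct: Decimal) -> Decimal:
    """Round to cents, half-up, as on the receipt."""
    return (subtotal * rate_pct / Decimal("100")).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

# $100.00 item in Hampton Roads (regional 0.7): 6.0% -> $6.00
print(tax_for(Decimal("100.00"), virginia_rate(Decimal("0.7"))))  # 6.00
```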
1.17.3 Tax Engine Design for Expansion
Note: The Virginia compound tax jurisdiction model is the active reference implementation. See Section 5.9 for the `tax_jurisdictions` and `tax_rates` tables that implement the 3-level compound model (State + County + City).
jurisdiction_modules:
virginia:
status: "active" # Reference implementation
model: "compound_3_level" # State + County/Regional + City
sales_tax: true
use_tax: false
california:
status: "planned"
sales_tax: true
district_taxes: true # Complex district overlay
oregon:
status: "planned"
sales_tax: false # No sales tax state
canada:
status: "planned"
gst: true
pst: true # Varies by province
hst: true # Harmonized in some provinces
european_union:
status: "planned"
vat: true
reverse_charge: true # B2B cross-border
1.18 Payment Integration (SAQ-A)
Scope: Semi-integrated payment terminal architecture with PCI SAQ-A compliance.
Cross-Reference: Payment data storage rules, terminal communication protocol, processor configuration, and failure handling details have been consolidated into Module 6, Section 6.8. The payment flow sequence diagram below remains in this section as it is part of the core sales workflow.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Your Backend
participant TERM as Payment Terminal
participant PROC as Payment Processor
Note over U, PROC: Card Payment Flow (SAQ-A)
U->>UI: Click "Pay by Card"
UI->>API: POST /payments/initiate
Note right of API: {order_id, amount, terminal_id}
API->>TERM: Send Payment Request
Note right of TERM: Amount: $45.00
TERM-->>U: "Insert/Tap Card"
Note over TERM: Customer interacts with terminal only
U->>TERM: Customer taps card
TERM->>PROC: Encrypted Card Data
Note right of PROC: Card data NEVER touches your system
PROC-->>TERM: Authorization Response
TERM-->>API: Token + Approval Code + Masked Card
API->>API: Store Token (NOT card data)
Note right of API: Stored: token, approval_code, ****1234, VISA
API-->>UI: Payment Approved
UI-->>U: "Approved - $45.00"
Note over U, PROC: Refund Flow (Using Token)
U->>UI: Process Refund
UI->>API: POST /refunds/create
Note right of API: {order_id, amount, reason}
API->>API: Retrieve Stored Token
API->>PROC: Refund Request with Token
PROC-->>API: Refund Approved
API-->>UI: Refund Complete
1.18.1 Payment Data Storage Rules
payment_data:
# Data your system STORES
stored:
- transaction_id # Your internal ID
- payment_token # Processor token for refunds
- approval_code # Authorization code
- masked_card_number # Last 4 digits only (****1234)
- card_brand # Visa, Mastercard, Amex, Discover
- entry_method # chip, tap, swipe, manual
- terminal_id # Which terminal processed
- timestamp # When processed
- amount # Transaction amount
# Data your system NEVER stores (PCI prohibited)
prohibited:
- full_card_number # 16-digit PAN
- cvv_cvc # Security code
- track_data # Magnetic stripe data
- pin_block # Encrypted PIN
- emv_data # Chip cryptogram (raw)
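One way to enforce the storage rules above is at the type level: model only the permitted fields, so prohibited data has nowhere to live. A sketch; the `StoredPayment` dataclass name and field types are assumptions, not the actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from decimal import Decimal

@dataclass(frozen=True)
class StoredPayment:
    """Only PCI-permitted fields exist. Full PAN, CVV/CVC, track data,
    PIN blocks, and raw EMV data have no field to be stored in."""
    transaction_id: str       # your internal ID
    payment_token: str        # processor token, used for refunds
    approval_code: str
    masked_card_number: str   # last 4 only, e.g. "****1234"
    card_brand: str           # Visa, Mastercard, Amex, Discover
    entry_method: str         # chip, tap, swipe, manual
    terminal_id: str
    timestamp: datetime
    amount: Decimal

def mask_pan(last_four: str) -> str:
    """The terminal reports only the last 4 digits; we never see the full PAN."""
    return f"****{last_four}"
```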
1.18.2 Terminal Communication
terminal_integration:
protocol: "cloud_api" # Terminal vendor's cloud service
# Timeout settings
payment_timeout_seconds: 60
connection_timeout_seconds: 10
# Terminal states
states:
- IDLE: "Ready for transaction"
- WAITING_FOR_CARD: "Display amount, await tap/insert"
- PROCESSING: "Communicating with processor"
- APPROVED: "Transaction successful"
- DECLINED: "Transaction declined"
- ERROR: "Communication or hardware error"
- CANCELLED: "Customer or staff cancelled"
# Error handling
on_timeout: "prompt_retry_or_cancel"
on_decline: "display_reason_allow_retry"
on_error: "log_and_alert_manager"
# Void window (same-day before batch)
same_day_void: true
batch_close_time: "23:00" # Auto-batch at 11 PM
1.18.3 Payment Terminal Failure Handling
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant TERM as Payment Terminal
U->>UI: Click "Pay by Card"
UI->>API: POST /payments/initiate
API->>TERM: Send Payment Request
alt Terminal Timeout
TERM--xAPI: No Response (60s)
API-->>UI: "Terminal not responding"
UI-->>U: Options: Retry | Different Terminal | Cash | Cancel
else Terminal Declined
TERM-->>API: Declined (Insufficient Funds)
API-->>UI: "Card Declined: Insufficient Funds"
UI-->>U: Options: Try Another Card | Cash | Cancel
else Terminal Error
TERM-->>API: Error (Hardware Issue)
API-->>UI: "Terminal Error"
UI-->>U: Options: Different Terminal | Cash | Cancel
API->>API: Log Error, Alert Manager
end
1.18.4 Reports: Payment Integration
| Report | Purpose | Key Data Fields |
|---|---|---|
| Payment Terminal Performance | Monitor terminal health | Terminal ID, transaction count, avg response time, error rate, decline rate |
| Decline Rate Report | Track payment failures | Decline reason, frequency, terminal, time of day, retry success rate |
| Batch Settlement Report | Daily batch close summary | Batch date, transaction count, total amount, settlement status |
1.19 Sales User Stories (Epics)
Epic 1.A: Core Sales & Inventory
- Story 1.A.1 (Hybrid Entry): Staff can add items via Scanner (bulk array, max 50 tags) or Barcode (single SKU), with immediate stock validation.
- Story 1.A.2 (Parking): Staff can “Park” a sale (max 5 per terminal, 4-hour TTL) to serve another customer and “Retrieve” it later. Parked items soft-reserve inventory.
- Story 1.A.3 (Mixed Cart): A single transaction can contain both Sales (positive price) and Returns (negative price), calculating a Net Total.
- Story 1.A.4 (Inventory Checks): System validates stock > 0 before adding to cart. Returns trigger an `INCREMENT` stock event; Sales trigger a `DECREMENT`.
- Story 1.A.5 (Price Check Mode): Staff can scan items in “Price Check” mode to display price without adding to cart. Useful for customer inquiries.
Epic 1.B: Pricing & Promotion
- Story 1.B.1 (Smart Promos): The system alerts staff (“Upsell Opportunity”) when a cart is eligible for a promo (e.g., “Buy 2 Get 1 Free”).
- Story 1.B.2 (Granular Discounts): Line-item discounts apply before global discounts. Manager PIN is required for overrides above a certain %.
- Story 1.B.3 (Discount Stacking): System prevents invalid stacking (e.g., cannot use Promo Code on “Clearance” items).
- Story 1.B.4 (Price Tiers): Customers can be assigned price tiers (Retail, Wholesale, VIP, Employee) that apply different base prices before any discounts.
- Story 1.B.5 (Coupon System): System supports both single-use coupons (birthday, referral) and multi-use coupons (promotional codes). Single-use coupons are marked redeemed after use.
- Story 1.B.6 (Discount Order): Discounts apply in strict order: Price Tier → Line Discounts → Auto Promos → Global % → Coupons → Tax → Loyalty Redemptions. Loyalty redemptions apply after tax calculation.
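The strict ordering in Story 1.B.6 can be sketched as a pipeline in which each stage consumes the previous stage's subtotal, with loyalty redemptions applied only after tax. A minimal sketch under stated assumptions: the parameter names are illustrative, and line discounts, promos, coupons, and loyalty are modeled as flat amounts for brevity.

```python
from decimal import Decimal, ROUND_HALF_UP

def cart_total(
    tier_price_sum: Decimal,      # 1. price-tier base prices already applied
    line_discounts: Decimal,      # 2. sum of line-item discounts
    auto_promos: Decimal,         # 3. automatic promotion amounts
    global_pct: Decimal,          # 4. global discount, e.g. Decimal("10") = 10%
    coupons: Decimal,             # 5. coupon amounts
    tax_rate_pct: Decimal,        # 6. combined jurisdiction rate
    loyalty_redemption: Decimal,  # 7. applied AFTER tax
) -> Decimal:
    subtotal = tier_price_sum - line_discounts - auto_promos
    subtotal -= subtotal * global_pct / Decimal("100")
    subtotal -= coupons
    taxed = subtotal * (Decimal("1") + tax_rate_pct / Decimal("100"))
    return (taxed - loyalty_redemption).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Because the global percentage applies after line discounts and promos, and tax applies before loyalty, reordering any two stages changes the customer's total; the fixed pipeline is what makes receipts reproducible.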
Epic 1.C: Payments & Financials
- Story 1.C.1 (On-Account): Trusted customers can buy on credit. System validates `Current Debt + Pending Layaways + Cart <= Credit Limit`. Paying off debt requires Cash/Card (prevents circular credit).
- Story 1.C.2 (Layaway): Customers can reserve items with a partial deposit. Inventory is reserved immediately (Status: `RESERVED`), but revenue is not fully recognized until completion.
- Story 1.C.3 (Split Tender): Transactions support mixed payments (e.g., $20 Cash + Remaining on Card). Customers can use multiple credit cards and combine cash + card(s) in any combination. Each card’s token is stored separately for refund processing.
- Story 1.C.4 (Gift Cards): Staff can sell gift cards, reload existing cards, check balances, and accept gift cards as payment. Partial redemption is supported. Jurisdiction rules apply.
- Story 1.C.5 (Cash Drawer Management): Each shift requires opening float entry, blind cash counts at close, variance tracking, X-report (mid-shift) and Z-report (end-of-shift) generation. Variances outside tolerance require manager approval.
- Story 1.C.6 (Credit Card Payments - SAQ-A): Card payments use semi-integrated terminals. Card data never touches our system. Only tokens, approval codes, and masked card numbers are stored.
- Story 1.C.7 (Payment Failures): When terminal times out, declines, or errors, staff can retry, switch terminals, accept cash, or cancel. All failures are logged.
- Story 1.C.8 (Third-Party Financing - Affirm): Staff can offer Affirm as a payment option. Customer completes financing on their device. Store receives full payment from Affirm immediately. Customer repays Affirm per their loan terms.
- Story 1.C.9 (Multiple Payment Methods): Customers can split payment across multiple credit cards or combine cash + card(s). System tracks each card’s token separately for individual refund processing.
Epic 1.D: Post-Sale & Data
- Story 1.D.1 (Voiding): Voiding is only allowed same business day with drawer open. Voiding reverses inventory, loyalty, and commissions (full reversal). Record is flagged `VOIDED`, not deleted.
- Story 1.D.2 (Returns): Returns require staff to scan the receipt for system validation before processing. Commission is reversed proportionally based on returned value. Refunds use stored payment tokens or manual terminal processing (staff chooses).
- Story 1.D.3 (Receipts): Staff can choose between Thermal (standard), A4 (invoice), or Gift Receipts (hidden price) at print time.
- Story 1.D.4 (Receipt Reprint): Staff can reprint any historical receipt or email to a different address.
- Story 1.D.5 (History & Export): Managers can filter history by Date/User/Status and export to CSV (limit 1,000 rows for performance).
- Story 1.D.6 (Return Policy Engine): System enforces return and exchange policies that are manually configured in the application’s Settings/Setup module. Policies are per-tenant, per-store, and per-channel (online vs in-store). Example defaults: 30 days full refund with receipt, 31-90 days store credit, no receipt = store credit at current price with manager approval.
- Story 1.D.7 (Dedicated Exchanges): Staff can process exchanges as a single transaction showing item out, item in, and price difference. Exchange records link the original and new items. Commission adjusts for price difference.
Epic 1.E: Special Orders & Transfers
- Story 1.E.1 (Special Orders): Staff can create special orders for out-of-stock items. Customer pays deposit (minimum 50% or configurable). System tracks order status and notifies customer on arrival.
- Story 1.E.2 (Multi-Store Inventory): Staff can view inventory at all store locations (eventually consistent, max 5 min stale). Can request transfers or reserve items at other stores for customer pickup.
- Story 1.E.3 (Transfer/Reserve Payment): Transfers and reservations require customer to pay in full before the request is submitted to the system. This prevents unpaid phantom requests.
- Story 1.E.4 (Hold for Pickup): Fully paid orders can be held for customer pickup with a configurable time limit (default 7 days). System alerts staff when holds expire.
- Story 1.E.5 (Ship to Customer): Staff can ship items from other store locations directly to the customer’s address. System integrates with carrier APIs to calculate real-time shipping costs based on origin store and destination address. Customer pays item price + shipping in full before shipment is initiated. Tracking number is shared with customer via email.
Epic 1.F: Tracking & Commissions
- Story 1.F.1 (Serial Numbers): Designated products require serial number capture at sale. System validates serial hasn’t been previously sold. Serial numbers are searchable for warranty/return lookup.
- Story 1.F.2 (Sales Commissions): Each transaction records the employee ID. Commission amounts are calculated based on sale total and stored for reporting.
- Story 1.F.3 (Commission Reversal): Voided sales reverse commission fully. Returns reverse commission proportionally (returned value / original sale value).
- Story 1.F.4 (Commission Reports): Managers can view commission reports by date range and employee, showing total sales, returns, and net commission earned.
Epic 1.G: Loyalty Programs
- Story 1.G.1 (Points Program): Customers earn points per dollar spent. Points can be redeemed for discounts at configurable rates (e.g., 100 points = $1).
- Story 1.G.2 (Punch Cards): Digital punch cards track qualifying purchases (e.g., 10 coffees = 1 free). Punches are automatically applied based on product categories.
- Story 1.G.3 (Spend Thresholds): Customers earn rewards when spending thresholds are met (e.g., spend $100/month, get $10 off). Rewards can be auto-applied or saved.
- Story 1.G.4 (Loyalty Tiers): Customers automatically upgrade tiers (Bronze → Silver → Gold) based on spend. Higher tiers earn more points or get better rewards.
Epic 1.H: Offline & Resilience
- Story 1.H.1 (Offline Detection): System automatically detects network loss and switches to offline mode with clear visual indicator.
- Story 1.H.2 (Offline Sales): Staff can process sales, returns with receipt, and price checks while offline. Transactions queue locally (max 100).
- Story 1.H.3 (Blocked Offline): System blocks risky operations offline: new customer creation, credit checks, gift card activation/reload, multi-store operations.
- Story 1.H.4 (Auto Sync): When network restores, system automatically syncs queued transactions. Conflicts are flagged for manager review.
- Story 1.H.5 (Conflict Resolution): Manager can review sync conflicts (out of stock, price changes) and approve, modify, or cancel affected transactions.
Epic 1.I: Tax Calculation
- Story 1.I.1 (Jurisdiction Taxes): System calculates tax based on store physical address, applying state + county + city + district rates as applicable.
- Story 1.I.2 (Tax Hierarchy): Product-level tax overrides take priority over customer exemptions, which take priority over default store rates.
- Story 1.I.3 (Tax Exemptions): Customers with valid exemption certificates (resale, non-profit, diplomatic) can be flagged for tax-exempt purchases.
- Story 1.I.4 (Tax Display): Receipt shows tax breakdown by jurisdiction (State Tax: $X, Local Tax: $Y).
1.20 Sales Acceptance Criteria (Gherkin)
Feature: Gift Card Operations
Feature: Gift Card Management
As a retail staff member
I need to sell, reload, and redeem gift cards
So that customers can purchase and use store credit
Background:
Given I am logged into the POS system
And the cash drawer is open
Scenario: Sell a new gift card
Given I have a physical gift card with number "GC-000000001234"
And the gift card status is "INACTIVE"
When I scan the gift card
And I enter load amount "$50.00"
And the customer pays "$50.00"
Then the gift card status should be "ACTIVE"
And the gift card balance should be "$50.00"
And a receipt should print showing "Gift Card Activated: $50.00"
Scenario: Check gift card balance
Given a gift card "GC-000000001234" with balance "$35.50"
When I click "Gift Card Balance"
And I scan the gift card
Then the display should show "Balance: $35.50"
And the display should show expiry info based on jurisdiction
Scenario: Partial redemption
Given a gift card "GC-000000001234" with balance "$50.00"
And a cart total of "$30.00"
When I apply the gift card as payment
Then the gift card balance should become "$20.00"
And the order should be marked "PAID"
And the gift card status should remain "ACTIVE"
Scenario: Full redemption depletes card
Given a gift card "GC-000000001234" with balance "$25.00"
And a cart total of "$25.00"
When I apply the gift card as payment
Then the gift card balance should become "$0.00"
And the gift card status should be "DEPLETED"
Scenario: Insufficient balance prompts partial
Given a gift card "GC-000000001234" with balance "$20.00"
And a cart total of "$50.00"
When I apply the gift card as payment
Then the system should display "Card Balance: $20.00. Apply partial?"
When I confirm partial application
Then "$20.00" should be applied from the gift card
And remaining balance should show "$30.00"
Scenario: Reload existing gift card
Given a gift card "GC-000000001234" with balance "$10.00"
When I scan the gift card
And I select "Reload"
And I enter reload amount "$25.00"
And the customer pays "$25.00"
Then the gift card balance should be "$35.00"
Scenario: Cash out in California
Given the store is located in California
And a gift card "GC-000000001234" with balance "$8.50"
When I scan the gift card
Then the system should show "Eligible for Cash Out: $8.50"
When I process cash out
Then the gift card balance should be "$0.00"
And "$8.50" cash should be given to customer
Feature: Exchange Transactions
Feature: Dedicated Exchange Flow
As a retail staff member
I need to process exchanges efficiently
So that customers can swap items in a single transaction
Background:
Given I am logged into the POS system
And order "ORD-001" exists with item "Blue Shirt" at "$40.00"
And the original sale had commission "$2.00"
Scenario: Even exchange - same price items
Given I load order "ORD-001" for exchange
When I select "Blue Shirt" to exchange OUT
And I scan "Red Shirt" priced at "$40.00" to exchange IN
Then the price difference should show "$0.00"
And I should see "Even Exchange - No Payment Required"
When I complete the exchange
Then a new exchange record should link "ORD-001" to the new order
And inventory for "Blue Shirt" should INCREMENT by 1
And inventory for "Red Shirt" should DECREMENT by 1
And commission should remain unchanged
Scenario: Customer owes money - upgrade
Given I load order "ORD-001" for exchange
When I select "Blue Shirt" ($40.00) to exchange OUT
And I scan "Premium Jacket" priced at "$80.00" to exchange IN
Then the price difference should show "Customer Owes: $40.00"
When the customer pays "$40.00"
And I complete the exchange
Then the exchange should be recorded successfully
And additional commission should be recorded for the $40.00 difference
Scenario: Store owes refund - downgrade
Given I load order "ORD-001" for exchange
When I select "Blue Shirt" ($40.00) to exchange OUT
And I scan "Basic Tee" priced at "$25.00" to exchange IN
Then the price difference should show "Refund to Customer: $15.00"
When I process the refund
And I complete the exchange
Then "$15.00" should be refunded to original payment method
And commission should be reduced proportionally
Feature: Multi-Store Transfer (Full Payment Required)
Feature: Multi-Store Inventory Transfer
As a retail staff member
I need to request transfers from other stores
So that customers can get items not available locally
Background:
Given I am logged into POS at "Store A"
And "Store B" has 5 units of "SKU-12345"
And "Store A" has 0 units of "SKU-12345"
Scenario: Transfer request requires full payment
Given a customer wants "SKU-12345" priced at "$75.00"
When I search for "SKU-12345"
And I click "Check Other Stores"
Then I should see "Store B: 5 units"
And I should see "Last updated: X minutes ago"
When I click "Request Transfer" from "Store B"
Then I should see "Customer must pay in full to process"
When the customer pays "$75.00"
Then a transfer record should be created with status "PAID"
And "Store B" should receive a transfer notification
And I should print a transfer receipt for the customer
Scenario: Transfer blocked without payment
Given a customer wants "SKU-12345" priced at "$75.00"
When I click "Request Transfer" from "Store B"
And the customer declines to pay
Then no transfer record should be created
And "Store B" inventory should remain unchanged
Scenario: Reservation at other store requires full payment
Given a customer wants to pick up "SKU-12345" at "Store B"
When I click "Reserve at Store B"
Then I should see "Customer must pay in full to reserve"
When the customer pays "$75.00"
Then a reservation should be created at "Store B"
And the customer should receive a pickup voucher
And the reservation should expire in 7 days
Feature: Return Policy Engine
Feature: Return Policy Enforcement
As a retail staff member
I need the system to enforce return policies
So that returns are handled consistently
Scenario: Full refund within 30 days with receipt
Given order "ORD-001" was placed 15 days ago
And the customer has the receipt
When I scan the receipt
And I select items to return
Then the system should show "Full Refund Eligible"
And the refund should go to the original payment method
And commission should be reversed proportionally
Scenario: Store credit for 31-90 days
Given order "ORD-001" was placed 45 days ago
And the customer has the receipt
When I scan the receipt
And I select items to return
Then the system should show "Store Credit Only"
And I should issue store credit for the return amount
Scenario: Manager override required beyond 90 days
Given order "ORD-001" was placed 120 days ago
And the customer has the receipt
When I scan the receipt
And I select items to return
Then the system should show "Manager Approval Required"
When a manager enters their PIN
And selects exception reason "Customer Goodwill"
Then the return should proceed
Scenario: No receipt - store credit at current price
Given a customer wants to return "Blue Shirt"
And the customer has no receipt
And "Blue Shirt" current price is "$35.00"
When I scan the item barcode
Then the system should show "No Receipt - Store Credit at Current Price"
And the system should show "Manager Approval Required"
When a manager approves
Then store credit for "$35.00" should be issued
Scenario: Final sale items blocked
Given order "ORD-001" contains item "Clearance Item" marked as "Final Sale"
When I attempt to return "Clearance Item"
Then the system should show "BLOCKED: Final Sale - No Returns"
And the return should not proceed
Scenario: Restocking fee for opened electronics
Given order "ORD-001" contains "Headphones" in category "Electronics"
And the item has been opened
When I process the return
Then the system should show "Restocking Fee: 15%"
And the refund should be reduced by 15%
Feature: Void vs Return Distinction
Feature: Void vs Return Rules
As a retail staff member
I need clear rules for when to void vs return
So that corrections are handled properly
Scenario: Void allowed same day with drawer open
Given order "ORD-001" was completed today
And the cash drawer is still open
When I select order "ORD-001"
And I click "Void"
Then the void should be allowed
And inventory should be reversed immediately
And commission should be fully reversed
And the order status should be "VOIDED"
Scenario: Void blocked after drawer close
Given order "ORD-001" was completed today
And the cash drawer has been closed
When I select order "ORD-001"
And I click "Void"
Then the system should show "Cannot void - drawer closed. Use Return instead."
Scenario: Void blocked next day
Given order "ORD-001" was completed yesterday
When I select order "ORD-001"
And I click "Void"
Then the system should show "Cannot void - different business day. Use Return instead."
Scenario: Return uses proportional commission reversal
Given order "ORD-001" has 3 items totaling "$120.00"
And commission was "$6.00" (5%)
When I return 1 item worth "$40.00"
Then commission should be reduced by "$2.00" (40/120 × $6)
And net commission should be "$4.00"
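The proportional reversal rule in the scenario above (returned value / original sale value) can be sketched as a single function. A minimal sketch; rounding reversals to cents half-up is an assumption.

```python
from decimal import Decimal, ROUND_HALF_UP

def commission_reversal(original_commission: Decimal,
                        returned_value: Decimal,
                        original_sale_value: Decimal) -> Decimal:
    """Returns reverse commission in proportion to the returned value;
    a void is the degenerate case where returned_value == original_sale_value."""
    ratio = returned_value / original_sale_value
    return (original_commission * ratio).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

# $40 returned from a $120 order with $6.00 commission: reverse $2.00
print(commission_reversal(Decimal("6.00"), Decimal("40"), Decimal("120")))  # 2.00
```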
Feature: Special Orders
Feature: Special Order Management
As a retail staff member
I need to create special orders for out-of-stock items
So that customers can order items not currently available
Scenario: Create special order with deposit
Given "SKU-99999" is out of stock
And it is available for special order at "$100.00"
When I create a special order for customer "John Doe"
And I enter quantity "1"
Then the system should calculate deposit as "$50.00" (50%)
When the customer pays "$50.00" deposit
Then special order "SO-12345" should be created
And the status should be "DEPOSIT_PAID"
And the purchasing team should be notified
Scenario: Complete special order on arrival
Given special order "SO-12345" exists with deposit "$50.00"
And the remaining balance is "$50.00"
When the item arrives and status becomes "READY_FOR_PICKUP"
Then customer "John Doe" should receive a notification
When the customer arrives and pays "$50.00"
Then the special order status should be "COMPLETED"
Feature: Cash Drawer Management
Feature: Cash Drawer Operations
As a retail staff member
I need to manage the cash drawer
So that cash is tracked accurately
Scenario: Open drawer with float
Given the cash drawer is closed
When a manager opens the drawer
And enters opening float "$200.00"
Then the drawer session should start
And the drawer status should be "OPEN"
Scenario: Close drawer with balanced count
Given the drawer is open with float "$200.00"
And cash sales today total "$350.00"
And cash refunds today total "$50.00"
When I click "Close Drawer"
And I perform blind count entering "$500.00"
Then expected amount should be "$500.00"
And variance should be "$0.00"
And the system should show "Drawer Balanced"
And a Z-report should print
Scenario: Variance requires manager approval
Given the drawer is open with float "$200.00"
And expected cash is "$500.00"
When I perform blind count entering "$493.00"
Then variance should be "-$7.00"
And the system should show "Variance: -$7.00 - Manager Approval Required"
When a manager enters PIN and reason "Counting Error"
Then the drawer should close
And the variance should be logged
Scenario: Variance within tolerance auto-approves
Given variance tolerance is set to "$5.00"
And expected cash is "$500.00"
When I perform blind count entering "$497.00"
Then variance should be "-$3.00"
And the system should show "Drawer Balanced" (within tolerance)
Scenario: Run X-Report mid-shift
Given the drawer is open with float "$200.00"
And cash sales so far total "$150.00"
And cash refunds so far total "$20.00"
When I click "X-Report"
Then the X-Report should show opening float "$200.00"
And the X-Report should show cash sales "$150.00"
And the X-Report should show cash refunds "$20.00"
And the X-Report should show expected cash "$330.00"
And the drawer should remain OPEN
And I should be able to continue processing transactions
Scenario: Run multiple X-Reports in one shift
Given the drawer is open
When I run an X-Report at 10:00 AM
And I process more sales
And I run another X-Report at 2:00 PM
Then both X-Reports should complete successfully
And the second X-Report should reflect updated totals
And the drawer should remain OPEN
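The drawer-close arithmetic running through these scenarios (expected = opening float + cash sales − cash refunds, with the variance checked against a tolerance) can be sketched as follows. The function name and tuple shape are illustrative; the $5.00 default tolerance matches the auto-approve scenario.

```python
from decimal import Decimal

def drawer_close(opening_float: Decimal, cash_sales: Decimal,
                 cash_refunds: Decimal, counted: Decimal,
                 tolerance: Decimal = Decimal("5.00")
                 ) -> tuple[Decimal, Decimal, bool]:
    """Return (expected, variance, needs_manager_approval) for a blind count."""
    expected = opening_float + cash_sales - cash_refunds
    variance = counted - expected
    needs_approval = abs(variance) > tolerance
    return expected, variance, needs_approval
```

With float $200, sales $350, refunds $50, and a count of $493, expected is $500, variance is -$7.00, and manager approval is required; a count of $497 lands inside tolerance and auto-approves.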
Feature: Coupon System
Feature: Coupon Redemption
As a retail staff member
I need to apply coupons to transactions
So that customers receive their discounts
Scenario: Apply single-use birthday coupon
Given coupon "BDAY-JOHN-2026" exists
And it is a single-use coupon for "$10 off"
And it has not been redeemed
When I scan the coupon
Then "$10.00" discount should apply
When I complete the sale
Then the coupon status should be "REDEEMED"
Scenario: Reject already-used single-use coupon
Given coupon "BDAY-JOHN-2026" has been redeemed
When I scan the coupon
Then the system should show "Coupon Already Redeemed"
And no discount should apply
Scenario: Apply multi-use promotional coupon
Given coupon "SAVE10" exists
And it is a multi-use coupon for "10% off"
And it has been used 50 times (limit 1000)
When I scan the coupon
Then "10% off" discount should apply
When I complete the sale
Then the coupon usage count should be 51
Scenario: Reject expired coupon
Given coupon "SUMMER2025" expired on "2025-08-31"
When I scan the coupon
Then the system should show "Coupon Expired"
And no discount should apply
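The four coupon scenarios reduce to an ordered validation: expiry first, then single-use redemption state, then the multi-use counter. A sketch assuming a simple coupon record shape (field names are illustrative):

```python
from datetime import date

def validate_coupon(coupon: dict, today: date) -> tuple[bool, str]:
    """Return (accepted, message) for a scanned coupon."""
    if coupon.get("expires_on") and today > coupon["expires_on"]:
        return False, "Coupon Expired"
    if coupon["single_use"] and coupon["redeemed"]:
        return False, "Coupon Already Redeemed"
    if not coupon["single_use"] and coupon["use_count"] >= coupon["use_limit"]:
        return False, "Coupon Usage Limit Reached"
    return True, "OK"

today = date(2026, 1, 1)
# Already-redeemed single-use birthday coupon is rejected
assert validate_coupon({"single_use": True, "redeemed": True, "expires_on": None}, today) == \
    (False, "Coupon Already Redeemed")
# Multi-use coupon at 50 of 1000 uses is accepted
assert validate_coupon({"single_use": False, "redeemed": False, "use_count": 50,
                        "use_limit": 1000, "expires_on": None}, today) == (True, "OK")
# SUMMER2025 expired 2025-08-31
assert validate_coupon({"single_use": False, "redeemed": False, "use_count": 0,
                        "use_limit": 1000, "expires_on": date(2025, 8, 31)}, today) == \
    (False, "Coupon Expired")
```

On a successful sale the caller would mark a single-use coupon REDEEMED or increment the multi-use counter, atomically with the transaction commit.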
Feature: Loyalty Programs
Feature: Flexible Loyalty Programs
As a retail staff member
I need to manage customer loyalty
So that customers are rewarded for purchases
Scenario: Earn points per dollar
Given customer "Jane Doe" is attached to the sale
And the loyalty program awards 1 point per $1
And the cart total is "$45.00"
When I complete the sale
Then "Jane Doe" should earn 45 points
And the receipt should show "Points Earned: 45"
Scenario: Redeem points for discount
Given customer "Jane Doe" has 500 points
And redemption rate is 100 points = $1
When I click "Redeem Points"
And I redeem 500 points
Then "$5.00" discount should apply
And "Jane Doe" points should become 0
Scenario: Punch card completion
Given customer "Jane Doe" has a coffee punch card
And she has 9 of 10 punches
And the cart contains 1 coffee
When I complete the sale
Then the punch card should show "FREE ITEM EARNED!"
And the 10th coffee should be free
And a new punch card should start
Scenario: Tier upgrade on spend threshold
Given customer "Jane Doe" is "BRONZE" tier
And she has spent "$950" this year
And the cart total is "$100"
When I complete the sale
Then "Jane Doe" should be upgraded to "SILVER"
And the system should show "Customer upgraded to Silver!"
And she should now earn 1.5x points
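The earn and redeem math in these scenarios is linear. A sketch, assuming whole points with partial dollars truncated (the scenarios do not specify rounding) and the tier multipliers from Section 2.2:

```python
from decimal import Decimal

TIER_MULTIPLIER = {"BRONZE": Decimal("1.0"), "SILVER": Decimal("1.5"), "GOLD": Decimal("2.0")}

def points_earned(cart_total: Decimal, points_per_dollar: int, tier: str) -> int:
    # Truncation to whole points is an assumption, not a spec decision
    return int(cart_total * points_per_dollar * TIER_MULTIPLIER[tier])

def redemption_value(points: int, points_per_dollar_back: int) -> Decimal:
    # e.g., 100 points = $1
    return Decimal(points) / Decimal(points_per_dollar_back)

# $45.00 cart at 1 point per $1, Bronze (1x): 45 points
assert points_earned(Decimal("45.00"), 1, "BRONZE") == 45
# Same cart at Silver (1.5x): 67 points (truncated from 67.5)
assert points_earned(Decimal("45.00"), 1, "SILVER") == 67
# 500 points at 100 points = $1: $5.00 discount
assert redemption_value(500, 100) == Decimal("5")
```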
Feature: Offline Operations
Feature: Offline Mode Operations
As a retail staff member
I need to continue serving customers when network is down
So that business is not interrupted
Background:
Given I am logged into the POS system
And the cash drawer is open
Scenario: Detect offline and show indicator
Given the network connection is lost
Then the UI should show "OFFLINE MODE" indicator
And the indicator should be prominently visible
Scenario: Process sale while offline
Given I am in offline mode
When I scan items and complete a sale
Then the transaction should be queued locally
And the receipt should print with "OFFLINE" watermark
And I should see "1 transaction pending sync"
Scenario: Block risky operations offline
Given I am in offline mode
When I try to create a new customer
Then the system should show "Not available offline"
When I try to activate a gift card
Then the system should show "Not available offline"
When I try to check multi-store inventory
Then the system should show "Not available offline"
Scenario: Auto-sync when network restores
Given I am in offline mode
And I have 3 queued transactions
When the network connection is restored
Then the UI should show "SYNCING..."
And transactions should sync automatically
When sync completes
Then the UI should show "All transactions synced"
Scenario: Handle sync conflict
Given I am in offline mode
And I sold the last unit of "SKU-123"
When the network restores
And another store also sold the last unit
Then the system should show "Conflict: SKU-123 out of stock"
And I should be prompted for manager review
When the manager approves backorder
Then the transaction should complete with backorder status
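The queue-and-replay behavior in these scenarios can be sketched as a local "outbox". This assumes a per-register SQLite store and a simple replay-in-order sync; the platform's actual storage and conflict protocol may differ:

```python
import json
import sqlite3

def make_outbox() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)")
    return db

def queue_offline_sale(db: sqlite3.Connection, sale: dict) -> int:
    cur = db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(sale),))
    db.commit()
    return cur.lastrowid

def pending_count(db: sqlite3.Connection) -> int:
    return db.execute("SELECT COUNT(*) FROM outbox WHERE synced = 0").fetchone()[0]

def sync_all(db: sqlite3.Connection, upload) -> None:
    # Replay queued sales in insertion order once the network is restored
    rows = db.execute("SELECT id, payload FROM outbox WHERE synced = 0 ORDER BY id").fetchall()
    for row_id, payload in rows:
        upload(json.loads(payload))  # a rejected upload would enter the conflict workflow
        db.execute("UPDATE outbox SET synced = 1 WHERE id = ?", (row_id,))
    db.commit()

db = make_outbox()
queue_offline_sale(db, {"sku": "SKU-123", "qty": 1})
assert pending_count(db) == 1          # "1 transaction pending sync"
sync_all(db, upload=lambda sale: None)
assert pending_count(db) == 0          # "All transactions synced"
```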
Feature: Payment Integration
Feature: SAQ-A Payment Processing
As a retail staff member
I need to process card payments securely
So that customer card data is protected
Scenario: Successful card payment
Given a cart total of "$45.00"
When I click "Pay by Card"
Then the terminal should display "$45.00"
When the customer taps their card
And the payment is approved
Then I should see "Approved - $45.00"
And the receipt should show "VISA ****1234"
And no full card number should be stored
Scenario: Card declined - insufficient funds
Given a cart total of "$200.00"
When I click "Pay by Card"
And the customer's card is declined for insufficient funds
Then I should see "Card Declined: Insufficient Funds"
And I should have options: "Try Another Card" | "Cash" | "Cancel"
Scenario: Terminal timeout
Given a cart total of "$45.00"
When I click "Pay by Card"
And the terminal does not respond within 60 seconds
Then I should see "Terminal not responding"
And I should have options: "Retry" | "Different Terminal" | "Cash" | "Cancel"
Scenario: Refund using stored token
Given order "ORD-001" was paid by card
And the payment token is stored
When I process a refund for "ORD-001"
Then the refund should use the stored token
And I should not need to re-enter card details
And the receipt should show "Refund to VISA ****1234"
Scenario: Pay with Affirm financing
Given a cart total of "$500.00"
When I click "Pay with Affirm"
Then a QR code or redirect should display for the customer
When the customer completes Affirm application on their device
And Affirm approves the loan
Then I should see "Payment Approved (Affirm)"
And the full $500.00 should be received from Affirm
And no card data should be stored
And an Affirm charge_id should be stored
Scenario: Split payment across two credit cards
Given a cart total of "$200.00"
When I click "Pay by Card"
And the customer pays "$100.00" with first card
Then remaining balance should show "$100.00"
When I click "Pay by Card" again
And the customer pays "$100.00" with second card
Then the order should be marked "PAID"
And two separate payment tokens should be stored
Scenario: Combine cash and card payment
Given a cart total of "$150.00"
When I accept "$50.00" cash
Then remaining balance should show "$100.00"
When the customer pays "$100.00" by card
Then the order should be marked "PAID"
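Split tenders reduce to tracking a running remaining balance against the cart total. A sketch (the record shape is illustrative):

```python
from decimal import Decimal

def apply_tenders(total: Decimal, tenders: list[Decimal]) -> dict:
    """Sum accepted tenders and report remaining balance and order status."""
    paid = sum(tenders, Decimal("0"))
    remaining = total - paid
    return {"remaining": max(remaining, Decimal("0")),
            "status": "PAID" if remaining <= 0 else "PARTIAL"}

# $200 cart, first card pays $100: $100 remaining
assert apply_tenders(Decimal("200.00"), [Decimal("100.00")]) == \
    {"remaining": Decimal("100.00"), "status": "PARTIAL"}
# $150 cart, $50 cash + $100 card: marked PAID
assert apply_tenders(Decimal("150.00"), [Decimal("50.00"), Decimal("100.00")]) == \
    {"remaining": Decimal("0.00"), "status": "PAID"}
```

Each card tender would additionally store its own payment token, as the split-payment scenario requires.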
Feature: Tax Calculation
Feature: Tax Calculation Engine
As a retail staff member
I need accurate tax calculation
So that the correct tax is collected
Scenario: Standard Virginia tax
Given the store is located in Richmond, Virginia
And the cart contains taxable items totaling "$100.00"
When I calculate tax
Then state tax should be "$4.30" (4.3%)
And local tax should be "$1.00" (1.0%)
And total tax should be "$5.30"
Scenario: Northern Virginia regional tax
Given the store is located in Fairfax, Virginia
And the cart contains taxable items totaling "$100.00"
When I calculate tax
Then state tax should be "$4.30" (4.3%)
And local tax should be "$1.00" (1.0%)
And regional tax should be "$0.70" (0.7%)
And total tax should be "$6.00"
Scenario: Product-level tax override
Given the cart contains "Grocery Item" in tax category "grocery_food"
And "Grocery Item" price is "$20.00"
When I calculate tax
Then tax on "Grocery Item" should be "$0.30" (1.5% reduced rate)
Scenario: Customer tax exemption
Given customer "ABC Nonprofit" has tax exemption status "NONPROFIT"
And the cart contains taxable items totaling "$100.00"
When I attach customer "ABC Nonprofit" to the sale
And I calculate tax
Then total tax should be "$0.00"
And the receipt should show "Tax Exempt: 501(c)(3)"
Scenario: Tax hierarchy - product override takes priority
Given customer "ABC Nonprofit" has tax exemption status
And the cart contains "Prepared Food" (10% tax rate)
When I calculate tax
Then the product-level 10% rate should apply
# Note: product-level overrides take priority over customer exemption
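The Virginia rates and the override hierarchy in these scenarios fit in a few lines. The rates come straight from the scenarios above; the resolution function's shape is illustrative:

```python
from decimal import Decimal

def resolve_tax_rate(product_rate, customer_exempt: bool, store_rate: Decimal) -> Decimal:
    # Hierarchy from the scenarios: product-level override wins, then customer
    # exemption, then the store-location rate
    if product_rate is not None:
        return product_rate
    if customer_exempt:  # e.g., 501(c)(3) nonprofit
        return Decimal("0")
    return store_rate

RICHMOND = Decimal("0.043") + Decimal("0.010")   # state 4.3% + local 1.0% = 5.3%
FAIRFAX = RICHMOND + Decimal("0.007")            # + regional 0.7% = 6.0%

assert Decimal("100.00") * RICHMOND == Decimal("5.30")
assert Decimal("100.00") * FAIRFAX == Decimal("6.00")
# Exempt customer, no product override: no tax
assert resolve_tax_rate(None, True, RICHMOND) == Decimal("0")
# Product-level 1.5% grocery rate applies even for an exempt customer
assert resolve_tax_rate(Decimal("0.015"), True, RICHMOND) == Decimal("0.015")
```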
2. Customers Module
2.1 Customer Management Workflow
Scope: Creating Profiles, Merging Duplicates, Tax Logic, Groups, and Deletion Guards.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Phase 1: Creation & Maintenance
U->>UI: Search Customer (Name / Phone / Email)
UI->>API: GET /customers/search
alt Customer Found
API-->>UI: Return Profile
UI-->>U: Display Customer Details + Group + Price Tier
else Create New
U->>UI: Enter Details (Name, Phone, Email)
U->>UI: Enter Physical Address (Shipping)
U->>UI: Enter Billing Address (if different)
opt Customer Group Assignment
U->>UI: Select Group (Retail/Wholesale/VIP/Staff)
Note right of UI: Group determines Price Tier
end
opt Tax Assignment
U->>UI: Select Custom Tax Rate (e.g., "Tax Exempt")
Note right of UI: Overrides Default Store Tax
end
opt Communication Preferences
U->>UI: Set Email Opt-In (Y/N)
U->>UI: Set SMS Opt-In (Y/N)
U->>UI: Set Preferred Contact Method
end
opt Customer Notes
U->>UI: Enter Size Preferences
U->>UI: Enter Free-Form Notes
end
UI->>API: POST /customers/create
API->>DB: Insert Record
end
2.2 Customer Groups & Tiers
Scope: Automatic and manual group assignment with price tier implications.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Customer Group Management
alt Manual Group Assignment
U->>UI: Open Customer Profile
U->>UI: Click "Change Group"
U->>UI: Select Group (Retail/Wholesale/VIP/Staff)
UI->>API: PATCH /customers/{id}/group
API->>DB: Update Customer Group
API->>DB: Update Price Tier (Based on Group)
API-->>UI: Group Updated
end
Note over API, DB: Automatic Tier Upgrades (Background Job)
API->>DB: Check Customer Spend Totals
alt Spend >= Gold Threshold ($5,000/year)
API->>DB: Upgrade to Gold Tier
API-->>UI: Notify: "Customer upgraded to Gold!"
else Spend >= Silver Threshold ($1,000/year)
API->>DB: Upgrade to Silver Tier
end
Note right of DB: Tier Benefits:
Note right of DB: Bronze: 1x points
Note right of DB: Silver: 1.5x points + 5% discount
Note right of DB: Gold: 2x points + 10% discount
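The background job's threshold logic, using the $1,000 Silver and $5,000 Gold figures from the diagram (the function name is illustrative):

```python
from decimal import Decimal

def evaluate_tier(annual_spend: Decimal) -> str:
    # Highest threshold met wins; thresholds are per the diagram above
    if annual_spend >= Decimal("5000"):
        return "GOLD"
    if annual_spend >= Decimal("1000"):
        return "SILVER"
    return "BRONZE"

assert evaluate_tier(Decimal("950")) == "BRONZE"    # below Silver threshold
assert evaluate_tier(Decimal("1050")) == "SILVER"   # crossed $1,000
assert evaluate_tier(Decimal("5050")) == "GOLD"     # crossed $5,000
```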
2.3 Customer Notes & Preferences
Scope: Structured fields and free-form notes for customer preferences.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
U->>UI: Open Customer Profile
U->>UI: Click "Notes & Preferences"
Note over UI, DB: Structured Preferences
U->>UI: Enter Clothing Size (S/M/L/XL)
U->>UI: Enter Shoe Size
U->>UI: Select Color Preferences
U->>UI: Enter Brand Preferences
Note over UI, DB: Free-Form Notes
U->>UI: Enter Notes (e.g., "Prefers classic styles, allergic to wool")
UI->>API: PATCH /customers/{id}/preferences
API->>DB: Update Preference Fields
API-->>UI: Saved
Note over U, UI: Notes Display at POS
U->>UI: Attach Customer to Sale
UI->>API: GET /customers/{id}
API-->>UI: Return Profile with Notes
UI-->>U: Display: "Notes: Prefers classic styles, Size M"
2.4 Communication Preferences
Scope: Marketing consent, contact preferences, and opt-out management.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Communication Preference Management
U->>UI: Open Customer Profile -> Communications
UI-->>U: Display Current Settings:
Note right of UI: Email Marketing: [ON/OFF]
Note right of UI: SMS Marketing: [ON/OFF]
Note right of UI: Preferred Contact: [Email/Phone/SMS]
Note right of UI: Do Not Contact: [Y/N]
U->>UI: Toggle Email Marketing ON
U->>UI: Toggle SMS Marketing OFF
U->>UI: Set Preferred: Email
UI->>API: PATCH /customers/{id}/communication-prefs
API->>DB: Update Communication Preferences
API->>DB: Log Consent Change (Audit Trail)
API-->>UI: Preferences Saved
Note over API, DB: Privacy Compliance
Note right of DB: All consent changes logged with timestamp
Note right of DB: Customer can request full data export
Note right of DB: "Do Not Contact" blocks all outreach
2.5 Advanced Customer Actions
Scope: Merge duplicates, safe deletion, data export, and privacy compliance.
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
Note over U, DB: Phase 2: Advanced Actions (Merge & Delete)
alt Action: Merge Duplicates
U->>UI: Select "Source" (Duplicate) & "Target" (Primary)
U->>UI: Click "Merge Customers"
UI->>API: POST /customers/merge
par Data Transfer
API->>DB: Move History, Loyalty, Balance to Target
API->>DB: Merge Notes (Append Source to Target)
API->>DB: Keep Higher Tier
API->>DB: Soft-Delete "Source" Profile
end
API-->>UI: Merge Success
end
alt Action: Delete Customer
U->>UI: Click "Delete Customer"
UI->>API: GET /customers/{id}/balance-check
alt Has Debt or Open Layaway
API-->>UI: Error: "Cannot Delete - Outstanding Balance"
UI-->>U: Alert: "Settle Balance First"
else Safe to Delete
UI->>API: DELETE /customers/{id}
API->>DB: Anonymize Personal Data
API->>DB: Retain Sales History (Linked to "Anonymous")
API-->>UI: Deletion Success
end
end
alt Action: Export Data
U->>UI: Filter List -> Click "Export CSV"
UI->>API: POST /customers/export
Note right of UI: Limit 1000 rows
API-->>UI: Download CSV File
end
alt Action: Data Subject Request (Privacy)
U->>UI: Click "Privacy Request"
U->>UI: Select Type: Export / Delete / Restrict
UI->>API: POST /customers/{id}/privacy-request
API->>DB: Log Request with Timestamp
API-->>UI: Request ID Generated
Note right of API: Must complete within 30 days
end
2.6 Customer Self-Service
Scope: Customer-facing loyalty balance and preference management.
sequenceDiagram
autonumber
participant C as Customer
participant UI as Self-Service Kiosk / App
participant API as Backend
participant DB as DB
Note over C, DB: Customer Self-Service Options
alt Check Loyalty Balance
C->>UI: Enter Phone Number or Scan Card
UI->>API: GET /customers/lookup?phone={phone}
API->>DB: Find Customer
API-->>UI: Return Loyalty Summary
UI-->>C: Display: "Points: 450 | Tier: Silver | $4.50 available"
end
alt Update Preferences
C->>UI: Login (Phone + PIN)
UI->>API: GET /customers/{id}/preferences
API-->>UI: Return Current Preferences
C->>UI: Update Email / SMS Opt-In
UI->>API: PATCH /customers/{id}/communication-prefs
API->>DB: Update & Log Consent Change
API-->>UI: Saved
end
alt Request Data Export
C->>UI: Click "Download My Data"
UI->>API: POST /customers/{id}/data-export
API->>DB: Queue Export Job
API-->>UI: "Export will be emailed within 24 hours"
end
2.6.1 Reports: Customer Module
| Report | Purpose | Key Data Fields |
|---|---|---|
| Customer Activity Report | Track customer engagement | Customer, last purchase date, total spend, visit frequency, loyalty tier |
| New Customer Report | Monitor customer acquisition | New customers by period, acquisition source, first purchase value |
| Customer Merge Audit Log | Track merge operations | Source ID, target ID, merge date, merged by, data transferred |
| Customer Group Distribution | Breakdown by group/tier | Group, customer count, avg spend, revenue contribution |
| Inactive Customer Report | Identify disengaged customers | Customer, last activity, days inactive, lifetime value, tier |
Email Template: TMPL-WELCOME
| Field | Value |
|---|---|
| Trigger | New customer profile created |
| Recipient | Customer (if email provided and opt-in) |
| Content | Welcome message, loyalty program details, store locations |
Email Template: TMPL-TIER-UPGRADE
| Field | Value |
|---|---|
| Trigger | Customer tier upgraded (e.g., Bronze → Silver) |
| Recipient | Customer |
| Content | New tier name, benefits unlocked, points multiplier, discount percentage |
2.7 Customer User Stories (Epics)
Epic 2.A: Profile & Data Management
- Story 2.A.1 (Detailed Profile): Staff can store distinct “Shipping” (Physical) and “Billing” addresses for a customer to support delivery and invoicing.
- Story 2.A.2 (Duplicate Handling / Merge): Managers can merge two customer profiles. The system must transfer all Sales History, Loyalty Points, Account Balances, and Notes to the “Primary” profile and archive the “Duplicate”.
- Story 2.A.3 (Safe Deletion): The system must block deletion if the customer has an active “On Account” debt or an open “Layaway”. If deleted, their historical sales must remain but be anonymized.
- Story 2.A.4 (Export): Staff can export the customer list to CSV for marketing, limited to 1,000 rows per batch for system stability.
- Story 2.A.5 (Customer Notes): Staff can record structured preferences (size, color, brand) and free-form notes on customer profiles. Notes are displayed when customer is attached to a sale.
Epic 2.B: Financial & Tax Settings
- Story 2.B.1 (Custom Tax Rates): Managers can assign a specific tax exemption status (e.g., “Reseller”, “Non-Profit”) to a customer profile. This status overrides the Store Default Tax when the hierarchy allows.
- Story 2.B.2 (Credit Limits): Managers can set a hard “Credit Limit”. The POS must block any “On Account” sale that pushes (Current Debt + Pending Layaways + Cart) over this limit.
- Story 2.B.3 (Loyalty Adjustments): Managers can manually adjust loyalty points (e.g., +50 “Sorry for wait”). A mandatory reason note is required for audit trails.
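Story 2.B.2's guard is a one-line exposure check: current debt plus pending layaways plus the cart must not exceed the hard limit. A sketch (names illustrative):

```python
from decimal import Decimal

def on_account_sale_allowed(credit_limit: Decimal, current_debt: Decimal,
                            pending_layaways: Decimal, cart_total: Decimal) -> bool:
    # Total exposure after this sale must stay within the hard credit limit
    return current_debt + pending_layaways + cart_total <= credit_limit

# $600 debt + $300 layaways + $100 cart = $1,000 exposure: exactly at limit, allowed
assert on_account_sale_allowed(Decimal("1000"), Decimal("600"), Decimal("300"), Decimal("100"))
# $150 cart pushes exposure to $1,050: blocked
assert not on_account_sale_allowed(Decimal("1000"), Decimal("600"), Decimal("300"), Decimal("150"))
```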
Epic 2.C: Customer Groups & Tiers
- Story 2.C.1 (Customer Groups): Customers can be assigned to groups (Retail, Wholesale, VIP, Staff). Each group maps to a price tier that determines base pricing.
- Story 2.C.2 (Automatic Tier Upgrades): Customers automatically upgrade tiers (Bronze → Silver → Gold) when spend thresholds are met. Upgrades can also be manually assigned by managers.
- Story 2.C.3 (Tier Benefits): Each tier has configurable benefits: point multipliers, automatic discounts, and early access to sales.
- Story 2.C.4 (Price Tiers): Different customer groups see different base prices (not just discounts). Wholesale customers may see cost+markup pricing while retail sees standard pricing.
Epic 2.D: Communication & Preferences
- Story 2.D.1 (Communication Consent): Customers can opt-in or opt-out of email and SMS marketing. All consent changes are logged for compliance.
- Story 2.D.2 (Preferred Contact): Customers can specify their preferred contact method (Email, Phone, SMS). Staff can see this preference when reaching out.
- Story 2.D.3 (Do Not Contact): Customers can be flagged as “Do Not Contact” which blocks all marketing outreach while preserving transaction notifications.
Epic 2.E: Privacy & Compliance
- Story 2.E.1 (Data Export): Customers can request a full export of their personal data. Export must be provided within 30 days.
- Story 2.E.2 (Right to Deletion): Customers can request deletion of their personal data. System anonymizes records while preserving transaction history for accounting.
- Story 2.E.3 (Consent Audit Trail): All marketing consent changes are logged with timestamp and source for regulatory compliance.
- Story 2.E.4 (Data Retention): Customer data is retained according to configurable retention policies. Inactive customers can be auto-anonymized after retention period.
Epic 2.F: Self-Service
- Story 2.F.1 (Loyalty Balance Check): Customers can check their loyalty points balance via kiosk, app, or website using phone number lookup.
- Story 2.F.2 (Preference Update): Customers can update their communication preferences (opt-in/opt-out) via self-service channels.
- Story 2.F.3 (Data Request): Customers can submit data export or deletion requests via self-service, which are queued for staff processing.
2.8 Customer Acceptance Criteria (Gherkin)
Feature: Customer Groups and Tiers
Feature: Customer Group Management
As a retail manager
I need to assign customers to groups
So that they receive appropriate pricing and benefits
Scenario: Assign customer to wholesale group
Given customer "ABC Corp" exists
When I open the customer profile
And I click "Change Group"
And I select "Wholesale"
Then the customer group should be "Wholesale"
And the price tier should update to "Wholesale Pricing"
Scenario: Automatic tier upgrade on spend
Given customer "Jane Doe" is "BRONZE" tier
And she has spent "$4,900" this year
And annual spend threshold for Silver is "$1,000"
And annual spend threshold for Gold is "$5,000"
When she makes a purchase of "$150"
Then her annual spend becomes "$5,050"
And she should be upgraded to "GOLD" tier
And she should now earn 2x points
And she should receive 10% automatic discount
Feature: Customer Merge
Feature: Merge Duplicate Customers
As a retail manager
I need to merge duplicate customer profiles
So that customer data is consolidated
Scenario: Merge two customer profiles
Given customer "John Doe" (ID: 100) has:
| Loyalty Points | 500 |
| Account Balance | $50.00 |
| Tier | Silver |
And customer "J. Doe" (ID: 101) has:
| Loyalty Points | 200 |
| Account Balance | $25.00 |
| Tier | Bronze |
When I select "J. Doe" as source and "John Doe" as target
And I click "Merge Customers"
Then "John Doe" should have:
| Loyalty Points | 700 |
| Account Balance | $75.00 |
| Tier | Silver |
And "J. Doe" profile should be archived
And sales history from both should be under "John Doe"
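The merge tables above pin down the arithmetic: points and balances are summed, and the higher tier wins. A sketch of just that core (history re-linking, note appending, and archiving the source are assumed to happen around it in one transaction):

```python
from decimal import Decimal

TIER_RANK = {"BRONZE": 0, "SILVER": 1, "GOLD": 2}

def merge_customers(target: dict, source: dict) -> dict:
    # Per the merge scenario: sum loyalty points and account balances,
    # keep whichever tier ranks higher; the target profile survives
    return {
        **target,
        "points": target["points"] + source["points"],
        "balance": target["balance"] + source["balance"],
        "tier": max(target["tier"], source["tier"], key=TIER_RANK.__getitem__),
    }

john = {"points": 500, "balance": Decimal("50.00"), "tier": "SILVER"}
jdoe = {"points": 200, "balance": Decimal("25.00"), "tier": "BRONZE"}
merged = merge_customers(john, jdoe)
assert (merged["points"], merged["balance"], merged["tier"]) == (700, Decimal("75.00"), "SILVER")
```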
Feature: Safe Customer Deletion
Feature: Customer Deletion Guards
As a retail manager
I need deletion to be blocked when unsafe
So that we don't lose important data
Scenario: Block deletion with outstanding debt
Given customer "John Doe" has account balance "$150.00"
When I click "Delete Customer"
Then the system should show "Cannot Delete - Outstanding Balance"
And the deletion should be blocked
Scenario: Block deletion with open layaway
Given customer "John Doe" has an active layaway order
When I click "Delete Customer"
Then the system should show "Cannot Delete - Open Layaway"
And the deletion should be blocked
Scenario: Safe deletion anonymizes data
Given customer "John Doe" has no debt or layaway
And "John Doe" has 5 historical orders
When I click "Delete Customer"
And I confirm the deletion
Then personal data should be anonymized
And the 5 orders should remain linked to "Anonymous Customer"
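The anonymization path can be sketched as a pure transform: personal fields are blanked while the customer ID survives, so historical orders stay linked to the anonymized record. Field names here are illustrative:

```python
def anonymize_customer(customer: dict) -> dict:
    """Safe-deletion path: blank personal data, keep the ID for order linkage."""
    redacted = {field: None for field in ("name", "email", "phone", "address")}
    return {**customer, **redacted,
            "display_name": "Anonymous Customer", "anonymized": True}

c = anonymize_customer({"id": 100, "name": "John Doe", "email": "j@example.com",
                        "phone": "555-0100", "address": "1 Main St"})
assert c["id"] == 100                 # orders remain linked via the surviving ID
assert c["name"] is None
assert c["display_name"] == "Anonymous Customer"
```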
Feature: Communication Preferences
Feature: Communication Preference Management
As a retail staff member
I need to manage customer communication preferences
So that we comply with privacy regulations
Scenario: Update email opt-in
Given customer "Jane Doe" has email marketing OFF
When I open communications preferences
And I toggle email marketing ON
Then email marketing should be ON
And a consent change should be logged with timestamp
Scenario: Do Not Contact blocks outreach
Given customer "John Doe" is flagged "Do Not Contact"
When the marketing system attempts to send email
Then the email should be blocked
And no marketing should be sent
Scenario: Transaction notifications still sent
Given customer "John Doe" is flagged "Do Not Contact"
When he makes a purchase
Then a receipt email should still be sent
# Note: transaction notifications are not marketing
Feature: Privacy Compliance
Feature: Customer Privacy Rights
As a customer
I need to exercise my privacy rights
So that my data is protected
Scenario: Request data export
Given I am customer "Jane Doe"
When I request a data export
Then a request should be logged
And I should receive confirmation
And my data should be emailed within 30 days
Scenario: Request data deletion
Given I am customer "Jane Doe"
And I have no outstanding balance or layaway
When I request data deletion
Then my personal data should be anonymized
And my transaction history should be preserved (anonymized)
And I should receive confirmation
Scenario: Deletion blocked with balance
Given I am customer "Jane Doe"
And I have account balance "$50.00"
When I request data deletion
Then the system should show "Please settle outstanding balance first"
And my data should not be deleted
Feature: Customer Self-Service
Feature: Customer Self-Service
As a customer
I want to check my loyalty status
So that I know my rewards
Scenario: Check loyalty balance
Given I am customer with phone "555-1234"
And I have 450 loyalty points
And I am Silver tier
When I enter my phone number at the kiosk
Then I should see "Points: 450"
And I should see "Tier: Silver"
And I should see "$4.50 available for redemption"
Scenario: Update marketing preferences
Given I am logged into self-service
And email marketing is ON
When I toggle email marketing OFF
Then email marketing should be OFF
And a consent change should be logged
3. Catalog Module
3.1 Product Types & Data Model
Scope: Core POS Catalog – all product entries, types, data fields, and the relationships between them. Every item sold, bundled, or serviced flows through the catalog. The data model supports four distinct product types that cover the full range of retail merchandise: individually tracked goods, multi-dimension variants, composite bundles, and non-inventory services. Beyond the standard field set, the model supports tenant-defined custom attributes, product templates for rapid creation, and a matrix management interface for efficient variant operations.
Cross-Reference: See Module 5, Section 5.10 for UoM configuration and Section 5.12 for custom fields.
3.1.1 Product Types
| Type | Description | Inventory Tracked | Example |
|---|---|---|---|
| Standard | Single product, one SKU, one price | Yes | A branded t-shirt, a candle, a phone case |
| Variant (Parent + Children) | Parent product with up to 3 variant dimensions (e.g., Size, Color, Material). Each combination creates a child SKU with independent inventory and optional price overrides | Yes (per child) | “Classic Oxford Shirt” parent with children: S/Blue, M/Blue, L/White, etc. |
| Composite / Bundle | A kit of component products sold together. Bundle price is set independently – it does NOT need to equal the sum of component prices | Yes (per component, decremented on sale) | “Gift Set” = Candle + Soap + Box for $39.99 (components total $48 individually) |
| Service | Non-inventory item representing labor or a service. No stock tracking, no physical attributes | No | Alterations, gift wrapping, custom engraving, hemming |
3.1.2 Product Class Diagram
classDiagram
class Product {
+UUID id
+String sku
+String name
+String product_type
+Decimal base_price
+Decimal cost
+String lifecycle_status
+Boolean shippable
+UUID package_type_id
+Enum selling_uom
+Enum purchasing_uom
+Decimal uom_conversion_factor
+UUID season_id
+UUID brand_id
+DateTime created_at
+DateTime updated_at
}
class StandardProduct {
+String barcode
+String[] alternate_barcodes
+Boolean track_inventory
+Integer low_stock_threshold
}
class VariantParent {
+String[] dimension_names
+Integer dimension_count
+String style_number
+String[] demographics_age_group
+String[] demographics_gender
+String origin
+String fabric
+UUID season_id
}
class VariantChild {
+UUID parent_id
+String dimension_1_value
+String dimension_2_value
+String dimension_3_value
+String barcode
+Decimal price_override
+Decimal msrp
+Boolean track_inventory
+Boolean deletion_protected
}
class CompositeProduct {
+Decimal bundle_price
+Boolean allow_component_substitution
}
class BundleComponent {
+UUID composite_id
+UUID component_product_id
+Integer quantity
+Decimal component_cost_allocation
}
class ServiceProduct {
+Integer estimated_minutes
+Boolean requires_appointment
+String service_category
}
class CustomAttributeDefinition {
+UUID id
+String name
+Enum type
+Boolean required
+UUID tenant_id
}
class ProductCustomAttribute {
+UUID product_id
+UUID definition_id
+String value
}
class PackageType {
+UUID id
+String name
+Decimal length
+Decimal width
+Decimal height
+Decimal empty_weight
+UUID tenant_id
}
Product <|-- StandardProduct
Product <|-- VariantParent
Product <|-- CompositeProduct
Product <|-- ServiceProduct
VariantParent "1" --> "*" VariantChild : has children
CompositeProduct "1" --> "*" BundleComponent : contains
BundleComponent "*" --> "1" Product : references component
Product "0..*" --> "0..*" ProductCustomAttribute : has custom values
ProductCustomAttribute "*" --> "1" CustomAttributeDefinition : defined by
Product "*" --> "0..1" PackageType : ships in
3.1.3 Full Product Data Model
| Group | Field | Type | Required | Description |
|---|---|---|---|---|
| Identity | id | UUID | Yes | Primary key, system-generated |
| | sku | String | Yes | Unique stock keeping unit per tenant |
| | barcode | String | No | Primary barcode (UPC-A, EAN-13, or internal) |
| | alternate_barcodes[] | String[] | No | Additional barcodes (vendor SKU, alternate UPC, etc.) |
| Description | name | String | Yes | Display name (max 255 chars) |
| | short_description | String | No | Brief summary for POS display (max 500 chars) |
| | long_description | String | No | Full marketing description (max 5000 chars) |
| | tags[] | String[] | No | Freeform tags for search and filtering |
| | category_id | UUID | Yes | Reference to category hierarchy |
| Pricing | base_price | Decimal(10,2) | Yes | Default selling price |
| | cost | Decimal(10,2) | No | Cost of goods (landed cost) |
| | compare_at_price | Decimal(10,2) | No | Original price for “was/now” display |
| | tax_code | String | No | Tax category override (e.g., “grocery_food”, “clothing_exempt”) |
| Physical | weight | Decimal(8,3) | No | Product weight for shipping calculation |
| | weight_unit | Enum | No | lb, oz, kg, g |
| | length | Decimal(8,2) | No | Package length |
| | width | Decimal(8,2) | No | Package width |
| | height | Decimal(8,2) | No | Package height |
| | dimension_unit | Enum | No | in, cm |
| Inventory | track_inventory | Boolean | Yes | Whether to track stock levels (default: true) |
| | allow_negative | Boolean | Yes | Allow sales when stock is zero (default: false) |
| | low_stock_threshold | Integer | No | Alert threshold per location (default: 5) |
| Media | images[] | URL[] | No | Array of image URLs |
| | primary_image_id | UUID | No | Reference to primary display image |
| Status | lifecycle_status | Enum | Yes | DRAFT, ACTIVE, DISCONTINUED, ARCHIVED |
| Timestamps | created_at | DateTime | Yes | Record creation timestamp |
| | updated_at | DateTime | Yes | Last modification timestamp |
| | published_at | DateTime | No | When product went Active |
| | discontinued_at | DateTime | No | When product was discontinued |
| | archived_at | DateTime | No | When product was archived |
| Audit | created_by | UUID | Yes | User who created the record |
| | updated_by | UUID | Yes | User who last modified the record |
| Retail Attributes | style_number | String | No | Manufacturer or internal style identifier (e.g., “NXJ1078”) |
| | demographics_age_group | String[] | No | Target age groups: Adult, Youth, Kids, Infant, Toddler |
| | demographics_gender | String[] | No | Target genders: Men, Women, Unisex, Boys, Girls |
| | origin | String | No | Country or region of manufacture: US, Imported, EU, or custom value |
| | fabric | String | No | Primary material composition (e.g., “100% Cotton”, “60% Polyester / 40% Cotton”) |
| | season_id | UUID | No | FK to Season – assigns product to a buying season (null = year-round core) |
| | brand_id | UUID | No | FK to Brand – the brand label on the product (distinct from vendor/supplier) |
| Shipping | shippable | Boolean | Yes | Whether this product can be shipped (default: false). When true, weight and dimensions become mandatory |
| | package_type_id | UUID | No | FK to PackageType – pre-defined package dimensions (Box, Envelope, Satchel, etc.) |
| Unit of Measure | selling_uom | Enum | Yes | Unit customers buy in (default: EACH). Values: EACH, PAIR, SET, YARD, FOOT, METER, LB, KG, OZ, LITER |
| | purchasing_uom | Enum | No | Unit purchased from vendor – can differ from selling_uom (e.g., buy by CASE, sell by EACH) |
| | uom_conversion_factor | Decimal(10,4) | No | Conversion ratio from purchasing_uom to selling_uom (e.g., 12.0000 means 1 case = 12 each) |
| Variant-Specific (VariantChild) | msrp | Decimal(10,2) | No | Manufacturer’s Suggested Retail Price – used for “Compare at MSRP” display and margin analysis |
Business Rules – Shipping:
- IF `shippable = true` THEN `weight`, `length`, `width`, and `height` are all mandatory. The system rejects save attempts where shippable is enabled but physical dimensions are missing.
- IF `package_type_id` is set, the package dimensions auto-populate from the PackageType record but can be overridden at the product level.
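The shippable rule above amounts to a save-time validation. A sketch in Python; the field names come from the product data model, while the error strings are illustrative:

```python
def validate_shippable(product: dict) -> list[str]:
    """Return validation errors; empty list means the save may proceed."""
    errors = []
    if product.get("shippable"):
        # shippable = true makes all four physical dimensions mandatory
        for field in ("weight", "length", "width", "height"):
            if product.get(field) is None:
                errors.append(f"{field} is required when shippable = true")
    return errors

# Non-shippable products need no dimensions
assert validate_shippable({"shippable": False}) == []
# Shippable product missing height is rejected
assert validate_shippable({"shippable": True, "weight": 1.2, "length": 10,
                           "width": 5, "height": None}) == \
    ["height is required when shippable = true"]
```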
Business Rules – Unit of Measure:
- IF `purchasing_uom` differs from `selling_uom`, then `uom_conversion_factor` is mandatory.
- Receiving inventory via Purchase Order uses `purchasing_uom`; inventory levels and POS transactions use `selling_uom`. The system automatically converts quantities using the conversion factor.
3.1.4 Custom Attributes
Scope: Tenant-defined key/value custom fields that extend the standard product data model. Custom attributes allow each tenant to capture business-specific data (e.g., “Certification Level”, “Country of Origin Detail”, “Season Year”) without schema changes.
Custom Attribute Definition Table
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Display label (e.g., “Certification”, “Import Code”) |
| type | Enum | Yes | TEXT, NUMBER, LIST, BOOLEAN |
| list_values[] | String[] | No | Allowed values when type = LIST (e.g., [“Organic”, “Fair Trade”, “None”]) |
| required | Boolean | Yes | Whether this attribute must be filled on every product (default: false) |
| searchable | Boolean | Yes | Whether this attribute is indexed for catalog search (default: false) |
| filterable | Boolean | Yes | Whether this attribute appears as a filter in the catalog UI (default: false) |
| display_order | Integer | Yes | Sort position in the product edit form |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
ProductCustomAttribute Junction Table
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | FK to products table |
| definition_id | UUID | Yes | FK to custom_attribute_definitions table |
| text_value | String | No | Value when definition type = TEXT |
| number_value | Decimal(15,4) | No | Value when definition type = NUMBER |
| list_value | String | No | Selected value when definition type = LIST (must match one of list_values[]) |
| boolean_value | Boolean | No | Value when definition type = BOOLEAN |
Business Rules:
- Limit: Up to 50 custom attribute definitions per tenant. Attempting to create a 51st returns an error with guidance to archive unused attributes.
- Indexing: All attributes marked `searchable = true` are indexed via a GIN index on a materialized JSONB representation for fast full-text search.
- Filtering: Attributes marked `filterable = true` appear as sidebar filters in the Admin Portal product list and can be used in smart collection rules.
- Validation: When type = LIST, the system rejects any value not present in `list_values[]`. When type = NUMBER, the system validates numeric format. When `required = true`, product save is blocked until a value is provided.
- Inheritance: Custom attributes are set at the parent product level. Variant children inherit parent custom attributes and cannot override them individually.
3.1.5 Product Templates & Cloning
Scope: Accelerating product creation by cloning existing products and by applying category-based templates with pre-filled default values. Templates reduce repetitive data entry for categories with consistent attributes (e.g., all t-shirts share the same weight range, tax code, and fabric type).
Product Cloning
- Clone any product to create a new DRAFT with a new auto-generated SKU. All fields are copied except identity fields (`id`, `sku`, `barcode`), timestamps (`created_at`, `updated_at`, `published_at`), and audit fields (`created_by` is set to the cloning user).
- Cloned products always start in `DRAFT` status regardless of the source product’s status.
- For variant products, cloning copies the parent AND all child variants. Each child receives a new auto-generated SKU. Inventory levels are NOT copied (all set to zero).
- Cloned products receive a default name of “{Original Name} (Copy)” which staff must rename before publishing.
Category-Based Templates
Templates save pre-filled default values per category so that new products created in that category start with sensible defaults already populated.
Template Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Template display name (e.g., “Men’s T-Shirt Defaults”) |
| category_id | UUID | Yes | FK to product_categories – the category this template applies to |
| default_values | JSON | Yes | Key-value map of field names to default values |
| tenant_id | UUID | Yes | Owning tenant |
| created_by | UUID | Yes | User who created the template |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Example `default_values` JSON:

```json
{
  "fabric": "100% Cotton",
  "weight": 0.35,
  "weight_unit": "lb",
  "tax_code": "clothing_exempt",
  "selling_uom": "EACH",
  "shippable": true,
  "track_inventory": true,
  "low_stock_threshold": 5,
  "demographics_gender": ["Men"],
  "demographics_age_group": ["Adult"],
  "origin": "Imported",
  "custom_attributes": {
    "Certification": "None",
    "Care Instructions": "Machine wash cold"
  }
}
```
Business Rules:
- One template per category per tenant. If a template already exists for a category, staff must edit the existing template rather than create a duplicate.
- When a product is created and assigned to a category that has a template, the system auto-fills all fields from `default_values`. Staff can override any auto-filled value before saving.
- Templates do not retroactively update existing products. They apply only at creation time.
- Template fields that conflict with required product fields are ignored (e.g., a template cannot set `name` or `sku`).
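The merge behavior implied by these rules can be sketched as a simple dictionary merge. The field names in `PROTECTED_FIELDS` beyond `name` and `sku` are assumptions, and `apply_template` is an illustrative helper, not the platform's API:

```python
# Identity fields a template may never set, per the rule above.
PROTECTED_FIELDS = {"id", "sku", "name", "barcode"}

def apply_template(default_values: dict, user_input: dict) -> dict:
    """Seed a new product from a category template.
    Protected fields in the template are dropped; explicit staff input wins."""
    merged = {k: v for k, v in default_values.items() if k not in PROTECTED_FIELDS}
    merged.update(user_input)  # staff overrides beat template defaults
    return merged
```

Because the merge happens only at creation time, editing a template later leaves existing products untouched, as the business rules require.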
3.1.6 Product Matrix Management
Scope: A grid-based interface for managing variant products efficiently. The matrix view presents all variant combinations in a spreadsheet-like layout, enabling rapid price/cost/stock editing and bulk variant creation.
Matrix Grid View
For variant products with two dimensions, the matrix displays:
- Rows = Dimension 1 values (e.g., sizes: S, M, L, XL, XXL)
- Columns = Dimension 2 values (e.g., colors: Red, Blue, Navy, Black)
- Each cell shows: price, cost, and current stock level for that combination
```
     |    Red    |   Blue    |   Navy    |   Black   |
---------------------------------------------------------------------------
S    | $29 / $12 | $29 / $12 | $29 / $12 | $29 / $12 |
     | Stock: 14 | Stock: 11 | Stock: 9  | Stock: 7  |
---------------------------------------------------------------------------
M    | $29 / $12 | $29 / $12 | $29 / $12 | $29 / $12 |
     | Stock: 22 | Stock: 18 | Stock: 15 | Stock: 13 |
---------------------------------------------------------------------------
L    | $29 / $12 | $29 / $12 | $29 / $12 | $29 / $12 |
     | Stock: 19 | Stock: 16 | Stock: 12 | Stock: 10 |
---------------------------------------------------------------------------
XL   | $32 / $13 | $32 / $13 | $32 / $13 | $32 / $13 |
     | Stock: 8  | Stock: 6  | Stock: 5  | Stock: 4  |
```
Inline Editing:
- Click any cell to edit price, cost, or stock level directly in the grid.
- Tab key moves to the next cell. Enter confirms the edit.
- Changed cells are highlighted until saved. A single “Save All Changes” action persists all edits in one batch.
Bulk-Create Variants:
- Select dimension value combinations via checkboxes (e.g., check “XXL” row and all color columns) to generate all child SKUs in one operation.
- The system auto-generates SKUs using the configured SKU pattern, assigns the parent’s base price and cost as defaults, and sets initial inventory to zero.
- Staff can review generated variants before confirming creation.
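The bulk-create step is essentially a Cartesian product of the selected dimension values. A minimal sketch, assuming an illustrative `"{parent}-{dim1}-{dim2}"` SKU pattern (real tenants configure their own pattern):

```python
from itertools import product

def generate_variant_skus(parent_sku: str,
                          dim1_values: list[str],
                          dim2_values: list[str]) -> list[str]:
    """Generate one child SKU per selected dimension combination."""
    return [f"{parent_sku}-{d1}-{d2}" for d1, d2 in product(dim1_values, dim2_values)]
```

In the real flow each generated SKU would also receive the parent's base price and cost and a zero inventory level, and staff review the list before confirming.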
Matrix Import via CSV:
- Upload a CSV file with columns matching dimension values and data fields.
- CSV format: `dimension_1_value, dimension_2_value, price, cost, barcode`
- The system validates all rows, reports errors (duplicate barcodes, invalid dimension values, non-numeric prices), and applies valid rows in batch.
- Supports both CREATE (new variants) and UPDATE (existing variants matched by dimension values) modes.
Business Rules:
- Matrix view is available only for products with exactly 2 variant dimensions. Products with 1 or 3 dimensions use the standard list editor.
- Cells for non-existent variant combinations display as empty with a “+” icon to create that specific variant on demand.
- Deleting a variant from the matrix is only permitted when the variant has zero inventory across all locations and no open orders referencing it.
3.2 Product Lifecycle
Scope: Managing products from initial creation through active selling, discontinuation, archival, and potential re-creation from archived templates. The lifecycle enforces data completeness before publishing and protects against premature archival while stock remains on hand.
3.2.1 Product Lifecycle State Machine
```mermaid
stateDiagram-v2
    [*] --> DRAFT: Product Created
    DRAFT --> ACTIVE: Publish
    ACTIVE --> DISCONTINUED: Discontinue
    DISCONTINUED --> ACTIVE: Reactivate
    DISCONTINUED --> ARCHIVED: Archive (stock = 0 all locations)
    ARCHIVED --> DRAFT: Clone as New (new SKU)
    note right of DRAFT
        Incomplete product
        Not visible at POS
        No sales allowed
    end note
    note right of ACTIVE
        Fully configured
        Available for sale
        Visible at POS
    end note
    note right of DISCONTINUED
        Sell-through mode
        No restock / no new POs
        Still sellable until stock = 0
    end note
    note right of ARCHIVED
        Read-only historical record
        Not visible at POS
        Can clone to create new product
    end note
```
3.2.2 Lifecycle Transition Rules
| Transition | Preconditions | Actions | Post-Conditions |
|---|---|---|---|
| Draft -> Active (Publish) | Name is set, SKU is set, at least one price assigned, category assigned | Set published_at timestamp, make visible at POS, enable inventory tracking | Product appears in POS search and barcode lookup |
| Active -> Discontinued (Discontinue) | Product must be in ACTIVE status | Set discontinued_at timestamp, flag as sell-through only, block from new purchase orders, remove from “New Arrivals” collections | Product remains sellable but will not be restocked |
| Discontinued -> Active (Reactivate) | Product must be in DISCONTINUED status | Clear discontinued_at, re-enable for purchase orders | Product fully available for sale and restocking |
| Discontinued -> Archived (Archive) | qty_on_hand = 0 at ALL locations (stores + warehouse) | Set archived_at timestamp, remove from POS visibility, mark record read-only | Product preserved for historical reporting but invisible to POS operations |
| Archived -> Draft (Clone as New) | Source product must be in ARCHIVED status | Create new product record with new auto-generated SKU, copy all fields except identity and timestamps, set status to DRAFT | New draft product exists independently; original archive unchanged |
Business Rules:
- A product cannot be archived while any location holds stock. The system checks `qty_on_hand` across all locations before allowing the transition.
- Cloning an archived product does NOT restore the original. It creates a new, separate product that can be edited independently.
- Discontinued products continue to appear in POS barcode lookup and search so that remaining stock can be sold.
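The transition table and rules above can be condensed into a small guard function. This is a sketch of the state machine, not the platform's code; ARCHIVED → DRAFT is omitted because "Clone as New" creates a new record rather than transitioning the existing one:

```python
# In-place transitions from the lifecycle table in 3.2.2.
ALLOWED_TRANSITIONS = {
    ("DRAFT", "ACTIVE"),
    ("ACTIVE", "DISCONTINUED"),
    ("DISCONTINUED", "ACTIVE"),
    ("DISCONTINUED", "ARCHIVED"),
}

def can_transition(current: str, target: str, qty_on_hand_by_location: dict) -> bool:
    """Return True if the status change is allowed, enforcing the
    zero-stock precondition on archival."""
    if (current, target) not in ALLOWED_TRANSITIONS:
        return False
    # Archival is blocked while ANY location (store or warehouse) holds stock.
    if target == "ARCHIVED" and any(q > 0 for q in qty_on_hand_by_location.values()):
        return False
    return True
```

The Publish preconditions (name, SKU, price, category) would be checked the same way before allowing DRAFT → ACTIVE.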
3.3 Pricing Engine
Scope: Managing product pricing across channels, customer groups, and promotional periods. The engine supports centralized pricing with cascading overrides, named price books, four promotion types, formal markdown workflows, and best-price conflict resolution. Every price determination is auditable – the system logs which rules were evaluated, which rule won, and the final price applied.
3.3.1 Price Hierarchy
Centralized pricing with cascading override resolution. When determining the price for a product at checkout, the system evaluates five priority levels and applies the highest-priority match:
| Priority | Level | Source | Description |
|---|---|---|---|
| 1 (Highest) | Manual Override | Staff at POS | Staff manually enters a price during the sale. Requires reason selection and manager PIN if discount exceeds configurable threshold |
| 2 | Active Promotion | Promotion Engine | Scheduled or triggered promotions currently in effect for this product |
| 3 | Price Book | Price Book Engine | Customer-group-specific or date-bound pricing from a named price list |
| 4 | Channel Price | Channel Configuration | Per-channel pricing: In-Store, Online, or Wholesale |
| 5 (Lowest) | Global Default | Product Record | The base_price field from the product data model |
```mermaid
flowchart TD
    A[Start: Determine Price] --> B{Manual Override?}
    B -->|Yes| C[Use Manual Override Price]
    B -->|No| D{Active Promotion?}
    D -->|Yes| E[Use Promotion Price]
    D -->|No| F{Price Book Match?}
    F -->|Yes| G[Use Price Book Price]
    F -->|No| H{Channel Price Set?}
    H -->|Yes| I[Use Channel Price]
    H -->|No| J[Use base_price from Product]
    C --> K[Log Price Resolution to Audit Trail]
    E --> K
    G --> K
    I --> K
    J --> K
    K --> L[Return Final Price]
```
3.3.2 Price Books
Scope: Named price lists that override global pricing for specific customer groups, channels, or date ranges. Price books enable scenarios such as wholesale pricing for B2B customers, employee discount pricing, and seasonal price adjustments without modifying the base product price.
Price Book Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Display name (e.g., “Wholesale”, “Employee Discount”, “Holiday Special”) |
| description | String(500) | No | Purpose and scope of this price book |
| customer_group_id | UUID | No | FK to customer_groups – restrict this book to a specific customer group. NULL = any customer |
| channel | Enum | No | Restrict to channel: IN_STORE, ONLINE, WHOLESALE. NULL = all channels |
| start_date | DateTime | Yes | When this price book becomes active |
| end_date | DateTime | Yes | When this price book expires |
| is_active | Boolean | Yes | Master toggle (default: true). Allows manual deactivation before end_date |
| priority | Integer | Yes | When multiple price books match, the highest priority value wins (default: 10) |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
| created_by | UUID | Yes | User who created the price book |
Price Book Entry Table
| Field | Type | Required | Description |
|---|---|---|---|
| price_book_id | UUID | Yes | FK to price_books table |
| product_id | UUID | Yes | FK to products table (parent or standard product) |
| override_price | Decimal(10,2) | Yes | The price to charge when this price book is active |
| override_cost | Decimal(10,2) | No | Optional cost override for margin reporting accuracy |
Business Rules:
- Exclusivity: Only one price book can be active per customer group per channel at any given time. If a new price book overlaps with an existing active book for the same group/channel combination, the system rejects creation and prompts the user to deactivate the conflicting book first.
- No stacking: Price books do not stack. When multiple price books match (e.g., one by customer group and one by channel), only the book with the highest `priority` value applies.
- Audit: Every price book activation, deactivation, and entry modification is logged with the acting user and timestamp.
- Bulk entry: Staff can import price book entries via CSV with columns: `sku, override_price, override_cost`. The system validates SKU existence and numeric formats before applying.
3.3.3 Promotions
Scope: Four promotion types that cover the most common retail discount scenarios. Promotions can be automatic (applied when conditions are met) or code-based (requiring manual entry at POS).
Promotion Types
| Type | Description | Key Fields |
|---|---|---|
| Basic Discount | Percentage or fixed amount off a single item | discount_type (PERCENT / AMOUNT), discount_value, min_qty (optional), max_uses (optional) |
| Tiered / Volume | Quantity breaks – price per unit decreases as quantity increases | tiers[] array of {min_qty, max_qty, price_per_unit} |
| BOGO / Cross-Item | Buy X get Y at a discount | buy_product_id, buy_qty, get_product_id, get_qty, get_discount_type, get_discount_value |
| Scheduled / Automatic | Activate on date/time without manual intervention | schedule_type (DATE_RANGE / RECURRING), start_datetime, end_datetime, recurrence_pattern |
Tiered / Volume Example:
| Quantity | Price Per Unit |
|---|---|
| 1 | $10.00 |
| 3+ | $8.00 |
| 10+ | $6.00 |
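The tier lookup above amounts to picking the highest tier whose minimum quantity is reached. A minimal sketch using the `{min_qty, price_per_unit}` shape from the promotion table (the `unit_price` helper is illustrative):

```python
def unit_price(qty: int, tiers: list[dict]) -> float:
    """Return the per-unit price for the highest tier the quantity qualifies for."""
    applicable = [t for t in tiers if qty >= t["min_qty"]]
    return max(applicable, key=lambda t: t["min_qty"])["price_per_unit"]

# Tiers mirroring the Tiered / Volume example table.
TIERS = [
    {"min_qty": 1,  "price_per_unit": 10.00},
    {"min_qty": 3,  "price_per_unit": 8.00},
    {"min_qty": 10, "price_per_unit": 6.00},
]
```

So 5 units price at $8.00 each, and 12 units at $6.00 each.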
BOGO / Cross-Item Example:
- Buy 2 Shirts (`buy_product_id` = shirts category, `buy_qty` = 2), Get 1 Belt 50% off (`get_product_id` = belts category, `get_qty` = 1, `get_discount_type` = PERCENT, `get_discount_value` = 50)

Scheduled / Automatic Example:
- Happy Hour: `schedule_type` = RECURRING, `recurrence_pattern` = “MON-FRI 15:00-17:00” – 15% off all drinks every weekday from 3 PM to 5 PM
Promotion Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Display name (e.g., “Summer BOGO”, “Volume Discount”) |
| type | Enum | Yes | BASIC, TIERED, BOGO, SCHEDULED |
| stackable | Boolean | Yes | Whether this promotion can combine with other promotions (default: false) |
| exclusive | Boolean | Yes | When true, this promotion replaces ALL other pricing – no other rules evaluated (default: false) |
| product_scope | Enum | Yes | ALL, CATEGORY, PRODUCT_LIST |
| product_ids[] | UUID[] | No | Applicable product IDs when scope = PRODUCT_LIST |
| category_ids[] | UUID[] | No | Applicable category IDs when scope = CATEGORY |
| start_date | DateTime | Yes | Earliest date promotion can activate |
| end_date | DateTime | Yes | Latest date promotion remains active |
| is_active | Boolean | Yes | Master toggle (default: false – starts as draft) |
| priority | Integer | Yes | Tiebreaker when multiple promotions match (higher wins, default: 10) |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
| created_by | UUID | Yes | User who created the promotion |
Promotion Lifecycle
```mermaid
stateDiagram-v2
    [*] --> DRAFT : Create Promotion
    DRAFT --> SCHEDULED : Set dates & activate
    DRAFT --> CANCELLED : Cancel before scheduling
    SCHEDULED --> ACTIVE : Start date reached
    SCHEDULED --> CANCELLED : Cancel before start
    ACTIVE --> EXPIRED : End date passed
    ACTIVE --> CANCELLED : Manually deactivate
    CANCELLED --> [*]
    EXPIRED --> [*]
```
Lifecycle Transition Rules:
- DRAFT: Promotion is being configured. Not visible to POS. Can be edited freely.
- SCHEDULED: Promotion is locked for editing (except cancellation). Awaiting start date.
- ACTIVE: Promotion is live. Applied automatically or via code at POS. Only the `is_active` toggle and `end_date` can be modified.
- EXPIRED: Terminal state. Promotion has passed its end date. Preserved for reporting. Cannot be reactivated – must be cloned to create a new promotion.
- CANCELLED: Terminal state. Promotion was manually stopped. Preserved for reporting.
3.3.4 Markdown & Clearance Management
Scope: Formal workflows for reducing prices, tracking clearance merchandise, and handling end-of-life write-offs. Every price reduction is accountable – the system enforces approval workflows and logs all changes for audit.
Markdown Request Workflow
```mermaid
sequenceDiagram
    autonumber
    participant S as Staff
    participant UI as Admin Portal
    participant API as Backend
    participant M as Manager
    participant DB as DB
    participant POS as POS Client
    Note over S, POS: Markdown Request Workflow
    S->>UI: Create Markdown Request
    Note right of UI: Product, new_price, effective_date, reason
    UI->>API: POST /markdowns
    API->>DB: Save request (status: PENDING)
    API-->>UI: Request #MD-001 created
    API->>M: Notification: Markdown request pending
    M->>UI: Review request details
    Note right of M: Current price, proposed price,<br/>margin impact, reason
    alt Approved
        M->>API: PATCH /markdowns/MD-001 {status: APPROVED}
        API->>DB: Update status, schedule price change
        Note right of DB: effective_date triggers price update
        DB-->>API: Price updated on effective_date
        API->>POS: Push updated price to all terminals
        API->>DB: Log to price_audit_trail
        Note right of DB: who, when, old_price,<br/>new_price, reason, approval_id
    else Rejected
        M->>API: PATCH /markdowns/MD-001 {status: REJECTED, reject_reason: "..."}
        API-->>S: Notification: Markdown rejected
    end
```
Manual Price Changes:
- Staff with appropriate permissions can change a product’s price directly without the formal markdown workflow.
- Every manual change requires selecting a reason from a configurable list: `Damaged`, `Price Match`, `Manager Discretion`, `Competitive Adjustment`, `Cost Change`, `Seasonal Reduction`, `Error Correction`.
- All manual changes are logged to the price audit trail: `user_id`, `timestamp`, `product_id`, `old_price`, `new_price`, `reason`, `location_id`.
Automatic Markdown Rules:
- Configurable rules engine that evaluates nightly and flags or automatically applies markdowns:
| Rule | Condition | Action |
|---|---|---|
| Slow Mover | No sale in X days AND stock > Y units | Reduce price by Z% |
| Aging Inventory | days_since_receive > N AND stock > threshold | Move to clearance collection, apply clearance_price |
| Season End | Season status = CLEARANCE AND product.season_id matches | Apply configured season clearance discount |
- Automatic rules can be configured to either flag for review (creates a pending markdown request) or apply immediately (executes the price change and logs it as auto-rule).
Liquidation / Write-Off:
- When a product cannot sell at any price (damaged beyond sale, expired, recalled), staff create a write-off record.
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | Product being written off |
| quantity | Integer | Yes | Number of units written off |
| write_off_value | Decimal(10,2) | Yes | Total cost value of written-off inventory |
| reason | Enum | Yes | DAMAGED, EXPIRED, RECALLED, SHRINKAGE, OBSOLETE |
| approved_by | UUID | Yes | Manager who authorized the write-off |
| location_id | UUID | Yes | Location where inventory is removed |
| notes | String | No | Additional context |
| created_at | DateTime | Yes | Timestamp of write-off |
Clearance Rack Tracking:
- Each product can be flagged as clearance on a per-location basis.
- Clearance fields: `is_clearance` (Boolean), `clearance_price` (Decimal), `clearance_started_at` (DateTime).
- Products flagged as clearance appear in a dedicated “Clearance” collection on the POS browse screen and can be filtered in reports.
- Clearance pricing takes precedence over the product’s `base_price` but is overridden by active promotions and manual overrides per the standard price hierarchy.
3.3.5 Conflict Resolution
Scope: Determining the final price when multiple pricing rules match the same product at checkout. The resolution algorithm ensures predictability, transparency, and customer-friendly outcomes.
Resolution Algorithm:
1. Check exclusivity: If any matched promotion has `exclusive = true`, use ONLY that promotion. If multiple exclusive promotions match, the one with the highest `priority` wins. All other pricing rules are ignored.
2. If no exclusive promotion: Apply the best price for the customer (lowest final price) from among:
   - The winning price book entry (if any)
   - The winning non-exclusive promotion (if any)
   - The channel price (if set)
   - The base price
3. Stackable promotions: Stackable promotions combine ONLY when both are marked `stackable = true`. The combined discount must not exceed the tenant’s `max_discount_percent` setting (default: 75%). If stacking would exceed the cap, the system applies the single best promotion instead.
4. Manual override: A manual override entered by staff at the POS always wins regardless of all other rules. It bypasses the algorithm entirely.
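The stacking cap in step 3 can be sketched as follows. Combining percentage discounts additively is an assumption here (multiplicative stacking is also common in retail systems), and the function name is illustrative:

```python
def stacked_discount(discounts: list[float], max_discount_percent: float = 75.0) -> float:
    """Combine stackable percentage discounts additively; if the total would
    exceed the tenant cap, fall back to the single best discount instead."""
    combined = sum(discounts)
    if combined > max_discount_percent:
        return max(discounts)
    return combined
```

For example, 20% + 15% stacks to 35%, while 50% + 40% would breach the default 75% cap, so only the 50% discount applies.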
Conflict Resolution Flowchart:
```mermaid
flowchart TD
    A[Checkout: Evaluate Price] --> B[Gather all matching rules]
    B --> C{Any exclusive promotion?}
    C -->|Yes| D[Use highest-priority exclusive promo]
    C -->|No| E{Multiple stackable promos?}
    E -->|Yes| F{Combined discount <= max_discount_percent?}
    F -->|Yes| G[Apply stacked promotions]
    F -->|No| H[Apply single best promotion]
    E -->|No| I[Compare: Promo vs Price Book vs Channel vs Base]
    I --> J[Use lowest final price for customer]
    D --> K[Log: rules evaluated, winner, reason]
    G --> K
    H --> K
    J --> K
```
Audit: Every price resolution is logged with the following data:
- `sale_id`, `line_item_id`, `product_id`
- `rules_evaluated[]` – list of all pricing rules that were considered
- `winning_rule_id` and `winning_rule_type` (MANUAL / PROMOTION / PRICE_BOOK / CHANNEL / BASE)
- `original_price` (base_price) and `final_price`
- `total_discount_amount` and `total_discount_percent`
- `timestamp`
3.3.6 Reports: Pricing
| Report | Purpose | Key Data Fields |
|---|---|---|
| Price Book Usage | Track which price books are active and how often applied | Price book name, times applied, avg discount %, revenue impact, active date range |
| Promotion Performance | Measure promotion effectiveness against business goals | Promo name, type, redemptions, revenue lift vs. pre-promo period, margin impact, cost of discount given |
| Markdown History | Track all price reductions with full accountability | Product SKU, old price, new price, reason, who requested, who approved, when, approval status |
| Margin Impact Analysis | Understand how pricing decisions affect profitability | Product/category, base margin %, discounted margin %, volume at each price point, total margin dollars lost/gained |
| Conflict Resolution Log | Audit which pricing rules win at checkout | Transaction ID, product, rules evaluated, winning rule, final price, discount applied |
| Clearance Tracking | Monitor clearance inventory and sell-through | Product, original price, clearance price, days on clearance, units remaining, sell-through % |
3.4 Barcode Management
Scope: Supporting multiple barcode formats per product, auto-generating internal barcodes when no manufacturer code exists, enforcing uniqueness per tenant, and enabling fast barcode-to-product lookup from any scanner or manual entry.
3.4.1 Barcode Types
| Type | Format | Source | Example | Use Case |
|---|---|---|---|---|
| UPC-A | 12-digit numeric | Manufacturer-assigned | 012345678901 | Standard North American retail products |
| EAN-13 | 13-digit numeric | Manufacturer-assigned (EU/International) | 4006381333931 | International products, European imports |
| Internal | Configurable prefix + auto-increment sequence | System-generated | INT-000001, INT-000002 | Products without manufacturer barcodes (custom items, local goods) |
| Alternate | Any format, free-form | Manual entry | VENDOR-SKU-X99, OLD-UPC-123 | Vendor SKUs, legacy barcodes, secondary identifiers |
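Both UPC-A and EAN-13 carry a trailing check digit computed with the standard GS1 mod-10 algorithm, which a scanner-input validator can verify before lookup. A minimal sketch (the function name is illustrative; the sample barcodes in the table above are for format illustration only):

```python
def is_valid_gtin(code: str) -> bool:
    """Validate the check digit of a UPC-A (12-digit) or EAN-13 (13-digit)
    barcode. Digits are weighted 3, 1, 3, 1, ... from the RIGHT of the
    payload, summed, and the check digit is (10 - sum mod 10) mod 10."""
    if not code.isdigit() or len(code) not in (12, 13):
        return False
    digits = [int(d) for d in code]
    check = digits.pop()  # last digit is the check digit
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10 == check
```

Because a UPC-A is an EAN-13 with an implied leading zero, one weighting-from-the-right routine covers both formats. Internal and alternate barcodes are free-form and skip this check.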
3.4.2 Barcode Features
- Multiple barcodes per product: Each product has one primary barcode and unlimited alternate barcodes. Scanning ANY barcode (primary or alternate) resolves to the same product.
- Auto-generate internal barcode: When a product is created without a manufacturer barcode, the system auto-generates an internal barcode using the tenant’s configured prefix and the next available sequence number.
- Uniqueness enforced per tenant: No two products within the same tenant can share a barcode (primary or alternate). The system rejects duplicates at creation and import time.
- Universal scan resolution: POS barcode lookup checks the primary barcode first, then searches alternate barcodes. The lookup returns the same product regardless of which barcode was scanned.
- Bulk barcode import via CSV: Staff can upload a CSV file mapping barcodes to existing SKUs. The system validates uniqueness, reports conflicts, and applies valid mappings in batch.
3.4.3 Barcode Lookup Flow
```mermaid
sequenceDiagram
    autonumber
    participant U as Staff
    participant SC as Scanner
    participant UI as POS UI
    participant API as Backend
    participant DB as DB
    Note over U, DB: Barcode Scan to Product Lookup
    alt Barcode Scanner
        U->>SC: Scan Item Barcode
        SC->>UI: Barcode Value (e.g., "012345678901")
    else Manual Entry
        U->>UI: Type Barcode / SKU
    end
    UI->>API: POST /products/barcode-lookup
    Note right of API: {barcode: "012345678901", tenant_id: "..."}
    API->>DB: SELECT * FROM products WHERE barcode = ?
    Note right of DB: Check primary barcode first
    alt Primary Barcode Match
        DB-->>API: Product Found
    else No Primary Match
        API->>DB: SELECT * FROM alternate_barcodes WHERE barcode = ?
        Note right of DB: Search alternate barcodes
        alt Alternate Barcode Match
            DB-->>API: Product Found (via alternate)
        else No Match Found
            DB-->>API: No Results
            API-->>UI: "Product Not Found"
            UI-->>U: "No product matches this barcode"
            Note right of UI: Option: Create new product or re-scan
        end
    end
    API-->>UI: Return Product Data
    UI-->>U: Display: Name, SKU, Price, Stock Level
    Note right of UI: Product ready to add to cart
```
3.4.4 Reports: Barcode Management
| Report | Purpose | Key Data Fields |
|---|---|---|
| Barcode Coverage Report | Identify products missing barcodes | Products without primary barcode, products with only internal barcodes, total coverage % |
| Barcode Scan Failure Log | Track unrecognized scans | Scanned value, timestamp, terminal, resolution (created new / manual lookup / abandoned) |
| Duplicate Barcode Audit | Detect and prevent barcode conflicts | Barcode value, conflicting products, resolution status |
3.5 Categories, Seasons & Collections
Scope: Organizing products into a navigable hierarchy for POS browsing, reporting, and rule application (tax defaults, commission rates). The system supports up to four levels of nesting, freeform tags, named collections, auto-tagging rules, formal buying seasons with lifecycle management, and multi-dimensional reporting hierarchies for financial analysis.
3.5.1 Category Hierarchy
The catalog supports a 4-level hierarchy. Products are assigned to the most specific level applicable.
```
Level 1: Department
|-- Level 2: Category
|   |-- Level 3: Subcategory
|   |   |-- Level 4: Sub-subcategory (max depth)
```
Example:
```
Men's Apparel (Department)
|-- Tops (Category)
|   |-- T-Shirts (Subcategory)
|   |   |-- Graphic Tees (Sub-subcategory)
|   |   |-- Plain Tees (Sub-subcategory)
|   |-- Dress Shirts (Subcategory)
|   |-- Polos (Subcategory)
|-- Bottoms (Category)
|   |-- Jeans (Subcategory)
|   |-- Chinos (Subcategory)
|-- Outerwear (Category)
    |-- Jackets (Subcategory)
    |-- Coats (Subcategory)
Accessories (Department)
|-- Bags (Category)
|-- Hats (Category)
|-- Jewelry (Category)
Services (Department)
|-- Alterations (Category)
|-- Gift Services (Category)
```
3.5.2 Tags & Collections
Tags: Unlimited freeform tags can be applied to any product. Tags are tenant-scoped and support search filtering.
| Feature | Description | Example |
|---|---|---|
| Freeform Tags | Any text, lowercase normalized, no hierarchy | summer, bestseller, organic, limited-edition |
| Named Collections | Curated product groups, manual or rule-based | “Summer Essentials”, “Staff Picks”, “New Arrivals” |
| Manual Collections | Staff manually adds/removes products | “Staff Picks” – staff curates the list |
| Rule-Based Collections | Auto-populated based on conditions | “New Arrivals” = products where published_at is within last 30 days |
| Auto-Tagging Rules | IF condition THEN add tag automatically | IF category = “Outerwear” AND created_at within last 14 days THEN add tag new-outerwear |
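The auto-tagging example in the last table row can be sketched as a simple predicate over product fields. This is an illustration of that one rule, not the platform's rules engine; the `auto_tags` helper and dict shape are assumptions:

```python
from datetime import datetime, timedelta

def auto_tags(product: dict, now: datetime) -> set[str]:
    """Evaluate the example rule: IF category = "Outerwear" AND created_at
    within the last 14 days THEN add tag 'new-outerwear'."""
    tags = set(product.get("tags", []))
    if (product.get("category") == "Outerwear"
            and now - product["created_at"] <= timedelta(days=14)):
        tags.add("new-outerwear")
    return tags
```

A production rules engine would store the condition and tag in configuration and re-evaluate on product create/update, so tags stay current as products age out of the 14-day window.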
3.5.3 Category Features
- Drag-and-drop reordering: Staff can reorder categories and products within categories via drag-and-drop in the catalog management UI.
- Category-level default tax code: Each category can specify a default tax code (e.g., “clothing_exempt”, “grocery_food”). Products inherit the category tax code unless overridden at the product level.
- Category-level default commission rate: Each category can specify a default commission percentage. Used for commission calculation unless overridden at the product level.
- Bulk move products: Staff can select multiple products and move them to a different category in a single action.
- Category image/icon: Each category can have an assigned image or icon for display in POS browse mode and kiosk interfaces.
3.5.4 Reports: Category Management
| Report | Purpose | Key Data Fields |
|---|---|---|
| Category Sales Report | Revenue breakdown by category | Category path, product count, units sold, revenue, avg margin |
| Uncategorized Products | Find products missing a category | Product SKU, name, created date, status |
| Category Depth Report | Audit hierarchy usage | Depth level, category count, product count per level |
3.5.5 Formal Seasons
Scope: Named buying seasons with lifecycle dates that track merchandise from initial buy planning through active selling, clearance, and close-out. Seasons provide a temporal dimension to inventory analysis – enabling sell-through tracking, carryover identification, and season-over-season comparison.
Season Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Display name (e.g., “Spring 2026”, “Holiday 2025”, “Back to School 2026”) |
| start_date | Date | Yes | Season merchandise begins arriving / becomes active |
| end_date | Date | Yes | Season officially closes – remaining inventory is carryover |
| status | Enum | Yes | PLANNING, ACTIVE, CLEARANCE, CLOSED |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Example Seasons:
| Season Name | Start Date | End Date | Status |
|---|---|---|---|
| Spring 2026 | 2026-02-01 | 2026-07-31 | PLANNING |
| Holiday 2025 | 2025-10-15 | 2026-01-15 | CLEARANCE |
| Fall 2025 | 2025-08-01 | 2025-12-31 | CLOSED |
| Summer 2026 | 2026-05-01 | 2026-09-30 | PLANNING |
Season Lifecycle State Machine
stateDiagram-v2
[*] --> PLANNING : Create Season
PLANNING --> ACTIVE : Season start date reached
ACTIVE --> CLEARANCE : Triggered by staff or auto-rule
CLEARANCE --> CLOSED : Season end date passed
PLANNING --> CLOSED : Cancel season (no products received)
CLOSED --> [*]
Lifecycle Transition Rules:
- PLANNING: Season is being prepared. Products can be assigned to this season. Purchase orders can reference this season. No products are expected on the sales floor yet.
- ACTIVE: Season merchandise is on the sales floor and selling. Transition occurs automatically when start_date is reached, or manually by staff. Products assigned to this season appear in season-filtered reports.
- CLEARANCE: Triggered manually by a buyer/manager OR by an automatic rule (e.g., “30 days before end_date, move to CLEARANCE”). Products in this season become eligible for automatic markdown rules. The clearance collection auto-populates with this season’s products.
- CLOSED: Terminal state. Season has ended. Remaining inventory is flagged as carryover. No further price changes tied to this season. Season data is preserved for historical reporting.
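The transition rules above reduce to an allowed-transitions map. A minimal Python sketch (status names come from the state machine; everything else is illustrative):

```python
# Allowed season transitions, mirroring the state diagram above.
# PLANNING -> CLOSED covers the cancel path (no products received).
ALLOWED_TRANSITIONS = {
    "PLANNING": {"ACTIVE", "CLOSED"},
    "ACTIVE": {"CLEARANCE"},
    "CLEARANCE": {"CLOSED"},
    "CLOSED": set(),  # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the season may move from current to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

Guarding writes with a check like this keeps automatic triggers (start date reached, clearance rules) and manual staff actions on the same validated path.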
Product-Season Assignment:
- Products are assigned to seasons via the season_id FK on the product record.
- A product can belong to at most ONE season. Products with season_id = NULL are year-round core items not tied to any season.
- When a season transitions to CLOSED, products still assigned to it can be reassigned to a new season (carryover) or have their season_id set to NULL (promoted to core).
3.5.6 Reporting Dimensions
Scope: Structured classification hierarchies used exclusively for financial reporting and buying analysis. Reporting dimensions are separate from display categories – a product’s display category determines where it appears in the POS browse screen, while reporting dimensions determine how it appears in financial reports, open-to-buy analysis, and margin summaries.
Brand
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Brand name (e.g., “Nike”, “Levi’s”, “Nexus Premier”) |
| logo_url | String(500) | No | URL to brand logo image |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
- Products are assigned to brands via the brand_id FK on the product record.
- Brand is distinct from Vendor. Nike the brand (what the customer sees on the label) is different from Nike Direct the vendor (the company you send purchase orders to). A single brand may be sourced from multiple vendors. A single vendor may supply multiple brands.
Merchandise Hierarchy
A 3-level reporting hierarchy independent of display categories:
Department (Level 1)
|-- Class (Level 2)
|-- Subclass (Level 3)
Example:
Footwear (Department)
|-- Athletic Shoes (Class)
| |-- Running (Subclass)
| |-- Basketball (Subclass)
| |-- Training (Subclass)
|-- Casual Shoes (Class)
| |-- Sneakers (Subclass)
| |-- Loafers (Subclass)
|-- Dress Shoes (Class)
|-- Oxfords (Subclass)
|-- Derby (Subclass)
Merchandise Hierarchy Fields on Product:
| Field | Type | Required | Description |
|---|---|---|---|
| merch_department_id | UUID | No | FK to merchandise_departments table |
| merch_class_id | UUID | No | FK to merchandise_classes table (must be child of selected department) |
| merch_subclass_id | UUID | No | FK to merchandise_subclasses table (must be child of selected class) |
- The merchandise hierarchy enables financial reporting by buying category rather than display category. A “Galaxy V-Neck Tee” might display under “Men’s > Casual > T-Shirts” but report under “Apparel > Knit Tops > V-Necks” for buying analysis.
- Merchandise hierarchy assignment is optional. Products without a merchandise hierarchy assignment are grouped under “Unclassified” in financial reports.
Custom Dimensions
Tenant-defined reporting dimensions for analysis needs beyond brand and merchandise hierarchy.
| Field | Type | Required | Description |
|---|---|---|---|
| dimension_id | UUID | Yes | Primary key |
| name | String(100) | Yes | Dimension name (e.g., “Buyer”, “Margin Tier”, “Velocity Class”) |
| values[] | String[] | Yes | List of allowed values (e.g., [“Sarah”, “Mike”, “Unassigned”] for Buyer dimension) |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
Product-Dimension Junction Table:
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | FK to products table |
| dimension_id | UUID | Yes | FK to custom_dimensions table |
| dimension_value | String | Yes | Selected value (must be one of values[] from the dimension definition) |
Example Custom Dimensions:
| Dimension | Allowed Values | Purpose |
|---|---|---|
| Buyer | Sarah, Mike, Jenny, Unassigned | Track which buyer selected this product for reporting by buyer performance |
| Margin Tier | High (>60%), Medium (40-60%), Low (<40%) | Quick filtering for margin-focused analysis |
| Velocity Class | A (top 20%), B (middle 60%), C (bottom 20%) | ABC analysis classification for inventory planning |
| Sourcing Region | Domestic, Asia, Europe, South America | Supply chain analysis and lead time planning |
3.5.7 Reports: Seasons & Dimensions
| Report | Purpose | Key Data Fields |
|---|---|---|
| Season Sell-Through | Track sell-through rate per season to evaluate buying accuracy | Season name, total units received, total units sold, sell-through %, revenue, avg margin % |
| Season Carryover | Identify unsold season inventory for markdown or transfer decisions | Season name, product SKU, product name, qty remaining, original cost, current retail value, days since season close |
| Brand Performance | Revenue and margin analysis by brand for vendor negotiation and buying decisions | Brand name, product count, units sold, revenue, margin %, return rate, avg selling price |
| Merchandise Hierarchy Report | Financial reporting by Dept/Class/Subclass for open-to-buy and assortment planning | Department, Class, Subclass, revenue, margin %, inventory value at cost, inventory turns, weeks of supply |
| Custom Dimension Report | Flexible analysis by any tenant-defined dimension | Dimension name, dimension value, product count, revenue, margin %, units sold, avg price |
3.6 Multi-Channel Management
Scope: Managing product visibility, inventory allocation, and pricing across multiple sales channels (In-Store POS, Online/Shopify, Wholesale). Each product can be configured independently per channel, enabling retailers to control where products appear, how much stock each channel can sell, and at what price.
Cross-Reference: See Module 4, Section 4.2 for inventory allocation across channels.
3.6.1 Channel Definition
The system ships with three built-in channels. Tenants can define additional custom channels to match their sales operations.
| Channel | Type | Default | Description |
|---|---|---|---|
| IN_STORE | PHYSICAL | Yes | Point-of-sale transactions at physical store locations |
| ONLINE | DIGITAL | No | E-commerce sales via Shopify or other web storefront |
| WHOLESALE | B2B | No | Bulk/wholesale orders from business customers |
Channel Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key, system-generated |
| name | String | Yes | Display name (e.g., “In-Store POS”, “Shopify Online”, “Wholesale Portal”) |
| type | Enum | Yes | PHYSICAL, DIGITAL, B2B |
| is_default | Boolean | Yes | Whether this is the default channel for new products (one per tenant) |
| is_system | Boolean | Yes | System-defined channels cannot be deleted |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
3.6.2 Channel Visibility Controls
Each product is independently toggled for visibility on each channel. Visibility can be immediate or scheduled with date windows.
Channel Visibility Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | Reference to product |
| channel_id | UUID | Yes | Reference to channel |
| is_visible | Boolean | Yes | Whether the product appears on this channel |
| available_from | DateTime | No | Scheduled visibility start (null = immediately visible when toggled on) |
| available_until | DateTime | No | Scheduled visibility end (null = indefinite) |
| tenant_id | UUID | Yes | Owning tenant |
Business Rules:
- A product must be visible on at least one channel to remain in ACTIVE lifecycle status. If a product is removed from all channels, the system warns: “Product will be hidden from all sales channels. Consider setting to DISCONTINUED.”
- Scheduled visibility windows are evaluated in real time. A product with available_from in the future does not appear on the channel until that timestamp.
- Bulk operations: staff can select multiple products and toggle channel visibility in batch via the catalog management UI.
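The real-time window evaluation in the second rule can be sketched as a pure function over a visibility record. Field names follow the data model above; the function name `is_visible_now` is an assumption for illustration:

```python
from datetime import datetime
from typing import Optional

def is_visible_now(is_visible: bool,
                   available_from: Optional[datetime],
                   available_until: Optional[datetime],
                   now: datetime) -> bool:
    # null available_from = visible as soon as toggled on;
    # null available_until = visible indefinitely.
    if not is_visible:
        return False
    if available_from is not None and now < available_from:
        return False
    if available_until is not None and now > available_until:
        return False
    return True
```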
Channel Visibility Sequence
sequenceDiagram
autonumber
participant U as Staff
participant UI as Catalog UI
participant API as Backend
participant DB as DB
Note over U, DB: Single Product Channel Toggle
U->>UI: Open Product Detail
UI->>API: GET /products/{id}/channels
API-->>UI: Return Channel Visibility Settings
U->>UI: Toggle "Online" Channel ON
U->>UI: Set available_from "2026-03-01"
U->>UI: Click "Save"
UI->>API: PATCH /products/{id}/channels
API->>DB: Upsert Channel Visibility Record
API-->>UI: Channel Updated
Note over U, DB: Bulk Channel Toggle
U->>UI: Select 25 Products (Checkbox)
U->>UI: Click "Bulk Actions" -> "Channel Visibility"
UI-->>U: Show Channel Toggle Panel
U->>UI: Toggle "Wholesale" ON for All Selected
U->>UI: Click "Apply"
UI->>API: POST /products/bulk-channel-update
Note right of API: {product_ids: [...], channel_id: "...", is_visible: true}
API->>DB: Upsert 25 Channel Visibility Records
API-->>UI: "25 products updated"
3.6.3 Channel Inventory Allocation
Each tenant selects one of two inventory allocation modes. The mode applies globally per tenant.
| Mode | Description | Use Case |
|---|---|---|
| Shared Pool (Default) | All channels sell from the same inventory pool. An online sale decrements the same stock as an in-store sale. | Small-to-medium retailers with unified stock |
| Dedicated Allocation | Reserve specific quantities per channel per location. Each channel has its own available pool. | Large retailers needing channel-specific stock control |
Dedicated Allocation Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | Reference to product |
| location_id | UUID | Yes | Reference to store/warehouse location |
| channel_id | UUID | Yes | Reference to channel |
| allocated_qty | Integer | Yes | Quantity reserved for this channel at this location |
| sold_qty | Integer | Yes | Quantity sold through this channel (starts at 0) |
| available_qty | Integer | Computed | Calculated: allocated_qty - sold_qty |
| tenant_id | UUID | Yes | Owning tenant |
| last_updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Sum of allocated_qty across all channels at a location cannot exceed total physical inventory at that location.
- When one channel sells out (available_qty = 0), the system flags the product as “Channel Stockout” in the dashboard. Staff can manually release allocation from another channel to replenish.
- Reallocation is not automatic. Staff must approve each reallocation to prevent unintended stock shifts.
- In Shared Pool mode, the dedicated allocation table is not used. All channels reference the location’s total qty_on_hand.
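The first rule (allocations capped by physical stock) and the computed available_qty column can be sketched as follows; function names are illustrative:

```python
def validate_allocations(qty_on_hand: int, allocated: dict) -> None:
    # Sum of allocated_qty across channels at a location must not
    # exceed the location's physical stock.
    total = sum(allocated.values())
    if total > qty_on_hand:
        raise ValueError(f"allocated {total} exceeds {qty_on_hand} on hand")

def available_qty(allocated_qty: int, sold_qty: int) -> int:
    # Computed column from the data model above.
    return allocated_qty - sold_qty
```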
3.6.4 Channel Pricing
Each product can carry a different price per channel. Channel pricing ties into the Price Tier hierarchy defined in the pricing engine.
Channel Pricing Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| product_id | UUID | Yes | Reference to product |
| channel_id | UUID | Yes | Reference to channel |
| channel_price | Decimal(10,2) | No | Override price for this channel (null = use base_price) |
| channel_compare_at_price | Decimal(10,2) | No | “Was” price for this channel (used for strikethrough display) |
| tenant_id | UUID | Yes | Owning tenant |
Business Rules:
- If no channel_price is set for a product-channel combination, the system falls through to the product’s global base_price.
- Channel pricing is evaluated BEFORE customer price tiers. The resolution order is: Channel Price -> Base Price -> Price Tier Override.
- Wholesale channel pricing typically represents a cost-plus markup and may be lower than in-store pricing.
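The fall-through from channel price to base price can be sketched as a one-line resolver (the function name is an assumption; the later customer price-tier step is omitted):

```python
from decimal import Decimal
from typing import Optional

def resolve_channel_price(channel_price: Optional[Decimal],
                          base_price: Decimal) -> Decimal:
    # A channel override wins; otherwise fall through to the
    # product's global base_price.
    return channel_price if channel_price is not None else base_price
```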
3.6.5 Reports: Multi-Channel
| Report | Purpose | Key Data Fields |
|---|---|---|
| Channel Sales Comparison | Compare revenue performance across channels | Channel, units sold, revenue, gross margin, avg order value, period |
| Channel Inventory Status | Stock availability by channel | Product, channel, allocated qty, available qty, sold qty, stockout risk flag |
| Channel Visibility Audit | Identify products missing from channels | Product, channels currently visible, channels not visible, last change date, recommended action |
| Channel Price Variance | Price differences across channels for same product | Product, in-store price, online price, wholesale price, variance % |
3.7 Shopify Integration
MOVED TO MODULE 6: This section has been consolidated into Module 6: Integrations & External Systems, Section 6.3 (Shopify Integration). The full Shopify integration specification – including sync modes, field-level ownership, conflict resolution, sync constraints, GraphQL API preference, @idempotent directive, Bulk Operations API, third-party POS integration rules, omnichannel/BOPIS requirements, and hardware compatibility – is now maintained in Section 6.3.
See: Module 6, Section 6.3 for the complete Shopify integration specification.
3.8 Vendor Management
Scope: Tracking supplier information, payment terms, and the many-to-many relationship between vendors and products. A single product can be sourced from multiple vendors, each with vendor-specific cost, SKU, and lead time.
Cross-Reference: See Module 4, Section 4.3 for purchase orders and Section 4.9 for vendor RMA.
Cross-Reference: See Module 5, Section 5.19 for supplier payment terms and lead time configuration.
3.8.1 Vendor Data Model
| Group | Field | Type | Required | Description |
|---|---|---|---|---|
| Identity | id | UUID | Yes | Primary key, system-generated |
| | name | String | Yes | Vendor company name |
| | code | String | Yes | Short unique code (e.g., “NIKE”, “LEVI”) |
| | tax_id | String | No | Vendor tax identification number |
| Contact | email | String | No | Primary contact email |
| | phone | String | No | Primary phone number |
| | address | Object | No | Full mailing address (street, city, state, zip, country) |
| | contact_person | String | No | Name of primary contact |
| Terms | payment_terms | Enum | Yes | NET_30, NET_60, NET_90, COD, PREPAID |
| | currency | String | Yes | Default currency code (e.g., “USD”) |
| | minimum_order | Decimal(10,2) | No | Minimum order amount required |
| Logistics | default_lead_time_days | Integer | No | Standard delivery lead time in days |
| | preferred_carrier | String | No | Default shipping carrier |
| Status | status | Enum | Yes | ACTIVE, INACTIVE |
| Timestamps | created_at | DateTime | Yes | Record creation timestamp |
| | updated_at | DateTime | Yes | Last modification timestamp |
3.8.2 Vendor-Product Relationship
Vendors and products share a many-to-many relationship through the vendor_product junction table. Each link carries vendor-specific pricing, SKU mapping, and lead time overrides.
| Field | Type | Required | Description |
|---|---|---|---|
| vendor_id | UUID | Yes | Reference to vendor |
| product_id | UUID | Yes | Reference to product |
| vendor_sku | String | No | The vendor’s own SKU for this product |
| vendor_cost | Decimal(10,2) | Yes | Purchase cost from this vendor |
| vendor_barcode | String | No | Vendor-specific barcode (can differ from product barcode) |
| is_primary_vendor | Boolean | Yes | Whether this is the preferred vendor for this product (one per product) |
| lead_time_override_days | Integer | No | Vendor-product specific lead time (overrides vendor default) |
| minimum_order_qty | Integer | No | Minimum quantity per order for this product from this vendor |
| last_ordered_at | DateTime | No | Timestamp of most recent PO to this vendor for this product |
3.8.3 Vendor-Product Entity Relationship
erDiagram
VENDOR {
UUID id PK
String name
String code
String tax_id
String email
String phone
String payment_terms
String currency
Decimal minimum_order
Integer default_lead_time_days
String status
}
PRODUCT {
UUID id PK
String sku
String name
String product_type
Decimal base_price
Decimal cost
String lifecycle_status
}
VENDOR_PRODUCT {
UUID vendor_id FK
UUID product_id FK
String vendor_sku
Decimal vendor_cost
String vendor_barcode
Boolean is_primary_vendor
Integer lead_time_override_days
Integer minimum_order_qty
}
VENDOR ||--o{ VENDOR_PRODUCT : "supplies"
PRODUCT ||--o{ VENDOR_PRODUCT : "sourced from"
3.8.4 Reports: Vendor Management
| Report | Purpose | Key Data Fields |
|---|---|---|
| Vendor Product List | All products supplied by a vendor | Vendor, product SKU, vendor SKU, vendor cost, is primary, last ordered |
| Vendor Performance Report | Track delivery and quality | Vendor, PO count, on-time %, avg lead time, variance count, return rate |
| Primary Vendor Coverage | Ensure all products have a primary vendor | Products without primary vendor, products with only one vendor source |
| Vendor Cost Comparison | Compare pricing across vendors for same product | Product SKU, vendor name, vendor cost, lead time, minimum qty |
3.9 Product Search & Discovery
Scope: Enabling fast product lookup at the POS terminal through multiple search methods: full-text search, category browsing, advanced filters, favorites, and product recommendations. Search must return results in under 200ms for responsive POS operation.
3.9.1 Full-Text Search
The POS search engine indexes the following fields for fast retrieval:
| Search Field | Weight (Relevance) | Index Type | Example Match |
|---|---|---|---|
| sku | 10 (highest) | Exact match | Search “BLK-TEE-001” returns exact product |
| barcode / alternate_barcodes | 10 (highest) | Exact match | Search “012345678901” returns exact product |
| name | 8 | Full-text, prefix | Search “Oxford” matches “Oxford Button-Down Shirt” |
| brand_name | 6 | Full-text | Search “Nike” matches all Nike products |
| vendor_name | 5 | Full-text | Search “Levis” matches products from Levi’s vendor |
| tags[] | 4 | Exact token match | Search “bestseller” matches tagged products |
| short_description | 3 | Full-text, contains | Search “moisture wicking” matches description |
| long_description | 2 | Full-text, contains | Lowest priority, broad match |
| custom_attributes | 3 | Key-value match | Search “organic” matches custom attribute values |
Relevance Ranking Order:
1. Exact SKU match
2. Exact barcode match (primary or alternate)
3. Name starts with search term
4. Name contains search term
5. Brand name match
6. Vendor name match
7. Tag exact match
8. Description contains search term
9. Custom attribute value match
Fuzzy Matching:
- Handles typos using Levenshtein distance (edit distance <= 2 for words >= 5 characters)
- Examples: “oxfrd” matches “Oxford”, “clasic” matches “Classic”, “niike” matches “Nike”
- Fuzzy results ranked below exact matches
- Disabled for SKU and barcode fields (exact match only)
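The fuzzy thresholds above (edit distance <= 2, words of 5+ characters, case-insensitive) can be illustrated with a textbook Levenshtein implementation; this is a sketch, not the production search index’s matcher:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(query: str, candidate: str) -> bool:
    # Fuzzy matching only applies to words of 5+ characters;
    # shorter queries fall back to exact matching elsewhere.
    if len(query) < 5:
        return False
    return levenshtein(query.lower(), candidate.lower()) <= 2
```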
Auto-Complete:
- Suggestions appear after 2 characters typed
- Returns top 8 suggestions combining product names, SKUs, and recent searches
- Debounced at 150ms to prevent excessive API calls
- Keyboard navigation supported (arrow keys + Enter to select)
Recent Searches:
| Field | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Staff member who performed the search |
| search_term | String(255) | The search query text |
| result_count | Integer | Number of results returned |
| selected_product_id | UUID (nullable) | Product selected from results (null if no selection) |
| tenant_id | UUID | Tenant scope |
| created_at | DateTime | When the search was performed |
- Last 10 searches stored per user, displayed in a dropdown when the search field is focused
- Tapping a recent search re-executes the query
- Clear individual or all recent searches
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant IDX as Search Index
participant DB as DB
U->>UI: Type "oxfrd" in search bar
Note right of UI: Debounce 150ms
UI->>API: GET /products/search?q=oxfrd&limit=20
API->>IDX: Full-text query with fuzzy matching
par Search Execution
IDX->>IDX: Exact SKU/barcode check (no match)
IDX->>IDX: Full-text name search (fuzzy: "oxfrd" → "Oxford")
IDX->>IDX: Brand/vendor/tag search
IDX->>IDX: Description search
end
IDX-->>API: Ranked results (Oxford Button-Down first)
API->>DB: Fetch stock levels for results
API-->>UI: Return products with price, stock, images
UI-->>U: Display results grid (name, SKU, price, stock, image)
U->>UI: Tap "Oxford Button-Down Shirt"
UI->>UI: Add to cart
par Background
UI->>API: POST /search-history
API->>DB: Save recent search "oxfrd" for user
end
3.9.2 Category Browsing
Staff can browse the catalog visually through the 4-level category hierarchy defined in Section 3.4.
Visual Grid View:
- Categories displayed as image tiles (using category_image from Section 3.4.3)
- Each tile shows: category name, product count badge, category image (or placeholder icon)
- Grid layout: configurable 3x3, 4x4, or 5x5 per screen (setting per terminal)
Drill-Down Navigation:
Department → Category → Subcategory → Products
Example:
Men's Apparel [42 items] →
Tops [28 items] →
T-Shirts [15 items] →
[Product Grid: Classic Tee, Graphic Tee, V-Neck, ...]
Breadcrumb Navigation:
Always visible at the top of the browse screen:
Home > Men's Apparel > Tops > T-Shirts
Tapping any breadcrumb segment navigates back to that level.
Sort Within Category:
| Sort Option | Direction | Default |
|---|---|---|
| Name | A-Z, Z-A | A-Z (default) |
| Price | Low-High, High-Low | – |
| Newest | Most recently published first | – |
| Best-Selling | Highest sales velocity first | – |
3.9.3 Advanced Filters
Filters are combinable using AND logic. Each active filter narrows the result set.
| Filter | Type | Values | UI Control |
|---|---|---|---|
| Price range | Range | Min-Max price (tenant currency) | Dual-handle slider with manual entry |
| Brand | Multi-select | List of all brands in tenant catalog | Searchable checkbox list |
| Vendor | Multi-select | List of all active vendors | Searchable checkbox list |
| Stock status | Multi-select | In Stock, Low Stock, Out of Stock | Checkbox group |
| Season | Multi-select | Active season tags/collections | Checkbox list |
| Size | Multi-select | Available sizes (from variant dimension values) | Checkbox grid |
| Color | Multi-select | Available colors (from variant dimension values) | Color swatch grid |
| Channel | Multi-select | In-Store, Online, Wholesale | Checkbox group |
| Lifecycle status | Multi-select | Active, Discontinued | Checkbox group |
| Category | Tree-select | Full category hierarchy | Expandable tree |
| Custom attributes | Dynamic | Based on tenant’s custom attribute definitions | Auto-generated per attribute type |
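The AND-combination of active filters can be sketched as below. This illustration covers multi-select filters only (range filters such as price need their own comparison), and the dict-based product shape is an assumption:

```python
def matches_filters(product: dict, active_filters: dict) -> bool:
    # AND logic: every active filter must match; an empty selection
    # means the filter is inactive and does not narrow results.
    for field_name, selected_values in active_filters.items():
        if selected_values and product.get(field_name) not in selected_values:
            return False
    return True
```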
Saved Filters:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Display name (e.g., “Nike Low Stock”, “Clearance Items”) |
| filter_definition | JSON | Yes | Serialized filter state: {"brand": ["Nike"], "stock_status": ["LOW_STOCK"]} |
| created_by | UUID | Yes | Staff member who created the filter |
| is_shared | Boolean | Yes | Whether other staff can see and use this filter (default: false) |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
- Staff can save current filter combination as a named view
- Maximum 20 saved filters per user, 50 shared filters per tenant
- Shared filters visible to all staff in the tenant
3.9.4 Quick-Add & Favorites
Favorites:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| user_id | UUID | Yes | Staff member |
| product_id | UUID | Yes | Favorited product |
| sort_order | Integer | Yes | Display position |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | When favorited |
- Each staff member can pin up to 50 products
- One-tap add to cart from favorites panel
- Drag-drop reordering of favorites
- Favorites persist across terminals (tied to user, not device)
Quick-Add Buttons:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| location_id | UUID | Yes | Location this configuration applies to |
| product_id | UUID | Yes | Product assigned to the button |
| grid_position | Integer | Yes | Position in the grid (1-20) |
| button_color | String(7) | No | Hex color for button background (e.g., “#FF5733”) |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
- Configurable grid of up to 20 high-velocity items on POS home screen
- Each button displays: product image thumbnail, product name (truncated), price
- Configurable per location (e.g., store-specific quick items)
- Manager or Admin role required to configure quick-add buttons
Recent Products:
- Last 20 products viewed or added to cart, displayed in a “Recents” side panel
- Per-user, per-session (resets on logout)
- One-tap to add to cart
- Stored in local state (not persisted to database)
Department Quick-Keys:
- Configurable shortcut buttons for top-level categories (departments)
- Up to 8 department quick-keys displayed in a horizontal bar above the search field
- Tapping a quick-key navigates directly to that department’s category browse view
- Configurable per location by Manager or Admin
3.9.5 Product Substitutions & Recommendations
Out-of-Stock Alternatives:
When a scanned or searched product has zero available stock at the current location, the system presents alternatives in priority order:
| Priority | Suggestion Type | Logic | Display |
|---|---|---|---|
| 1 | Same product at another location | Query inventory where product_id matches AND qty_available > 0 at other locations | Location name, stock qty, distance (if configured) |
| 2 | Similar products | Same category_id AND base_price within +/- 20% of original AND lifecycle_status = ACTIVE AND qty_available > 0 | Product name, price, stock qty |
| 3 | Same brand alternatives | Same brand AND lifecycle_status = ACTIVE AND qty_available > 0, ordered by sales velocity | Product name, price, stock qty |
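The priority-2 “Similar products” logic can be sketched as a predicate; the dict-based record shapes and function name are assumptions for illustration:

```python
from decimal import Decimal

def is_similar_product(candidate: dict, original: dict) -> bool:
    # Same category, ACTIVE, in stock, and price within +/- 20%
    # of the original (per the table above).
    if candidate["category_id"] != original["category_id"]:
        return False
    if candidate["lifecycle_status"] != "ACTIVE" or candidate["qty_available"] <= 0:
        return False
    base = original["base_price"]
    return base * Decimal("0.8") <= candidate["base_price"] <= base * Decimal("1.2")
```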
Cross-Sell / Upsell Configuration:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Source product |
| related_product_id | UUID | Yes | Related product |
| relationship_type | Enum | Yes | SUBSTITUTE, CROSS_SELL, UPSELL, ACCESSORY |
| priority | Integer | Yes | Display order (1 = highest priority) |
| auto_generated | Boolean | Yes | true if system-generated from order history, false if manually defined |
| confidence_score | Decimal(3,2) | No | For auto-generated: co-purchase frequency score (0.00-1.00) |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
Manual Relationships:
- Staff defines “related products” per product via the admin product detail page
- Relationship types: SUBSTITUTE (alternative), CROSS_SELL (complementary), UPSELL (upgrade), ACCESSORY (add-on)
- Maximum 10 manual relationships per product per type
Auto-Generated Relationships:
- Background job analyzes completed order history (rolling 90 days)
- Identifies products frequently purchased together (co-purchase frequency >= 3 occurrences)
- Generates CROSS_SELL relationships with a confidence score based on frequency
- Auto-generated relationships are refreshed weekly
- Staff can promote an auto-generated relationship to manual (persists beyond refresh)
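The mining step above can be sketched as a pair-counting pass over completed orders. The >= 3 occurrence threshold comes from the bullets; normalizing the confidence score by the most frequent pair is an assumption for illustration:

```python
from collections import Counter
from itertools import combinations

def co_purchase_scores(orders: list, min_count: int = 3) -> dict:
    """Count product pairs bought together; each order is a set of SKUs.

    Pairs below min_count are dropped. Confidence is the pair count
    divided by the max pair count, yielding a 0.00-1.00 score.
    """
    counts = Counter()
    for order in orders:
        for pair in combinations(sorted(order), 2):
            counts[pair] += 1
    qualifying = {p: c for p, c in counts.items() if c >= min_count}
    if not qualifying:
        return {}
    max_count = max(qualifying.values())
    return {p: round(c / max_count, 2) for p, c in qualifying.items()}
```

The weekly refresh job would upsert these pairs as CROSS_SELL rows, skipping any pair a staff member has already promoted to manual.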
POS Display:
- “Customers Also Bought” panel shown during checkout when cart contains products with cross-sell relationships
- Maximum 4 suggestions displayed, ordered by priority then confidence score
- Staff can dismiss suggestions or tap to add to cart
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
U->>UI: Scan barcode for "Classic Tee"
UI->>API: POST /products/barcode-lookup
API->>DB: Fetch product + inventory
DB-->>API: Product found, qty_available = 0
API-->>UI: Product data (OUT OF STOCK)
UI-->>U: "Classic Tee is out of stock at this location"
par Fetch Alternatives
API->>DB: Same product at other locations
DB-->>API: Store B: 12 units, Store C: 5 units
and
API->>DB: Similar products (same category, ±20% price)
DB-->>API: "V-Neck Tee" $27.99 (8 in stock), "Crew Neck" $31.99 (15 in stock)
and
API->>DB: Same brand alternatives
DB-->>API: "Classic Tee V2" $29.99 (20 in stock)
end
UI-->>U: Display alternatives panel
Note right of UI: 1. Same item at Store B (12) / Store C (5)
Note right of UI: 2. V-Neck Tee $27.99 (8) / Crew Neck $31.99 (15)
Note right of UI: 3. Classic Tee V2 $29.99 (20)
alt Staff selects alternative
U->>UI: Tap "V-Neck Tee"
UI->>UI: Add to cart
else Staff initiates transfer
U->>UI: Tap "Request from Store B"
UI->>API: POST /transfers/request
end
3.9.6 Reports: Search & Discovery
| Report | Purpose | Key Data Fields |
|---|---|---|
| Search Failure Log | Searches that returned zero results, identifying catalog gaps | Search term, timestamp, terminal, staff member, location |
| Top Searches | Most frequent search terms for merchandising insight | Search term, frequency, avg results returned, conversion rate (searches that led to cart add) |
| Search Conversion Funnel | Track search-to-sale effectiveness | Total searches, searches with results, searches with cart add, searches leading to sale |
| Substitute Offered vs Accepted | Effectiveness of substitution suggestions | Original product, substitute offered, offered count, accepted count, accepted %, declined % |
| Favorites Usage | Staff utilization of favorites feature | Staff member, favorite count, favorites used in sales, top 10 favorited products |
| Quick-Add Button Performance | Click-through rate of quick-add buttons | Button position, product, click count, resulting sales, revenue from quick-add |
3.10 Label & Price Tag Printing
Scope: Generating and printing barcode labels, shelf price tags, and clearance stickers from the catalog. Supports multiple label formats, batch printing, and integration with standard label printers (Zebra, DYMO, Brother).
3.10.1 Label Types
| Label Type | Content Fields | Use Case | Standard Size |
|---|---|---|---|
| Barcode Label | SKU, barcode (scannable), product name, price | Tagging individual items for checkout scanning | 50mm x 25mm |
| Shelf Price Tag | Product name, price, compare_at_price (if on sale), category, SKU | Shelf-edge display showing current price | 60mm x 40mm |
| Clearance Sticker | Markdown price, original price (strikethrough), discount %, “CLEARANCE” badge | Identifying marked-down clearance items | 40mm x 30mm |
| Variant Label | Parent name, size, color (variant dimensions), SKU, barcode (scannable), price | Per-variant tagging for variant products | 50mm x 25mm |
| Bin Label | SKU, barcode (scannable), product name, location code, bin/shelf position | Warehouse bin identification for inventory management | 100mm x 50mm |
3.10.2 Label Templates
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| name | String(100) | Yes | Template display name (e.g., “Standard Barcode 50x25”) |
| label_type | Enum | Yes | BARCODE, SHELF_TAG, CLEARANCE, VARIANT, BIN |
| width_mm | Integer | Yes | Label width in millimeters |
| height_mm | Integer | Yes | Label height in millimeters |
| layout_definition | JSON | Yes | Field positions, font sizes, barcode format, margins (see below) |
| barcode_format | Enum | Yes | CODE128, EAN13, UPCA, QR_CODE |
| is_default | Boolean | Yes | Default template for this label type (one per type per tenant) |
| printer_language | Enum | Yes | ZPL (Zebra), DYMO_XML, BROTHER_ESC, RECEIPT_ESC_POS |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Layout Definition JSON Structure:
{
"fields": [
{ "name": "product_name", "x": 2, "y": 2, "font_size": 10, "max_width": 46, "bold": true },
{ "name": "barcode", "x": 2, "y": 10, "height": 8, "format": "CODE128" },
{ "name": "sku", "x": 2, "y": 20, "font_size": 7, "bold": false },
{ "name": "price", "x": 35, "y": 2, "font_size": 12, "bold": true, "prefix": "$" }
],
"margins": { "top": 1, "right": 1, "bottom": 1, "left": 1 }
}
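To make the layout concrete, here is a minimal sketch of how the Print Service might translate a layout_definition into ZPL. Assumptions (not part of the spec): a 203 dpi Zebra printer at 8 dots/mm, CODE128 barcodes only, and only the field attributes shown in the JSON above; the function name and constant are illustrative.

```python
# Sketch: render a label layout_definition (Section 3.10.2) to ZPL.
# Assumes a 203 dpi printer (8 dots per mm); only the basic ZPL commands
# needed for the sample layout are emitted here.

DOTS_PER_MM = 8  # 203 dpi Zebra printers

def render_zpl(layout: dict, product: dict) -> str:
    lines = ["^XA"]  # start of label format
    for field in layout["fields"]:
        x = field["x"] * DOTS_PER_MM
        y = field["y"] * DOTS_PER_MM
        if field["name"] == "barcode":
            height = field.get("height", 8) * DOTS_PER_MM
            # ^BY sets module width; ^BC = Code 128 with interpretation line
            lines.append(f"^FO{x},{y}^BY2^BCN,{height},Y,N,N^FD{product['barcode']}^FS")
        else:
            value = str(product[field["name"]])
            if field.get("prefix"):
                value = field["prefix"] + value
            size = field.get("font_size", 8) * DOTS_PER_MM // 2
            lines.append(f"^FO{x},{y}^A0N,{size},{size}^FD{value}^FS")
    lines.append("^XZ")  # end of label format
    return "\n".join(lines)

label = render_zpl(
    {"fields": [
        {"name": "product_name", "x": 2, "y": 2, "font_size": 10},
        {"name": "barcode", "x": 2, "y": 10, "height": 8},
        {"name": "price", "x": 35, "y": 2, "font_size": 12, "prefix": "$"},
    ]},
    {"product_name": "Classic Tee", "barcode": "0123456789012", "price": "29.99"},
)
print(label)
```

A real renderer would also honor the margins block, bold flags, and the template's barcode_format field; this sketch only shows the coordinate-to-dots translation.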
Supported Printers:
| Printer | Language | Connection | Notes |
|---|---|---|---|
| Zebra ZD/ZT Series | ZPL (Zebra Programming Language) | USB, Network (TCP/IP) | Industry standard for retail labels |
| DYMO LabelWriter | DYMO XML (via DYMO SDK) | USB | Desktop label printing |
| Brother QL Series | Brother ESC/P | USB, Network | Versatile label printing |
| Receipt Printer (fallback) | ESC/POS | USB, Network | Print labels on 80mm receipt paper |
3.10.3 Batch Printing Workflow
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI / Admin
participant API as Backend
participant PS as Print Service
participant PR as Label Printer
Note over U, PR: Batch Label Printing Workflow
U->>UI: Select products (search, category, or bulk select)
UI-->>U: Display selected products (count, names)
U->>UI: Click "Print Labels"
UI-->>U: Show template selection dialog
U->>UI: Choose label template (e.g., "Standard Barcode 50x25")
UI-->>U: Show quantity options
U->>UI: Set quantity per product
Note right of UI: Default: 1 per product
Note right of UI: Option: "Match stock qty" auto-fills
U->>UI: Click "Preview"
UI->>API: POST /labels/preview
API-->>UI: Return rendered label previews (first 5)
UI-->>U: Display label preview grid
U->>UI: Select target printer from configured list
U->>UI: Click "Print"
UI->>API: POST /labels/print
API->>API: Generate label data for all products
API->>PS: Send print job (template + product data)
PS->>PS: Render labels in printer language (ZPL/DYMO/ESC)
PS->>PR: Send to printer
PR-->>PS: Print confirmation
PS-->>API: Job complete (labels_printed count)
API->>API: Log print job to print_log
API-->>UI: "Printed 45 labels on Zebra-Stockroom"
UI-->>U: Success confirmation
Print Job Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| template_id | UUID | Yes | Label template used |
| printer_name | String(100) | Yes | Target printer identifier |
| product_count | Integer | Yes | Number of distinct products |
| label_count | Integer | Yes | Total labels printed (sum of quantities) |
| status | Enum | Yes | QUEUED, PRINTING, COMPLETED, FAILED |
| error_message | String | No | Error details if status is FAILED |
| initiated_by | UUID | Yes | Staff member who started the print job |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | When the job was created |
| completed_at | DateTime | No | When printing finished |
3.10.4 Print Triggers
| Trigger Event | Action | Prompt Behavior | Default Setting |
|---|---|---|---|
| On PO Receive | Prompt to print barcode labels for received items | Modal: “Print labels for 48 received items?” with template selection | Enabled (configurable per tenant) |
| On Transfer Receive | Prompt to print barcode labels for transferred items | Same modal as PO receive | Enabled |
| On Price Change | Prompt to print new shelf tags for price-changed products | Modal: “Price changed for 3 products. Print new shelf tags?” | Enabled |
| On Markdown | Prompt to print clearance stickers for marked-down items | Modal: “Print clearance stickers for 12 items?” with clearance template | Enabled |
| On New Product Published | Prompt to print barcode labels for newly published products | Modal: “Product published. Print labels?” | Disabled (configurable) |
| Manual | Staff initiates from product detail, category view, or bulk selection | No prompt – direct template selection | Always available |
Business Rules:
- Print triggers are configurable per tenant (enable/disable each trigger)
- Staff can dismiss any auto-prompt without printing
- All print events are logged regardless of trigger type
- Print triggers fire only at the location where the event occurred
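The per-tenant trigger configuration described above can be sketched as a defaults-plus-overrides lookup. The key names and function are illustrative; they mirror the trigger table and its "Default Setting" column.

```python
# Sketch: per-tenant print-trigger settings (Section 3.10.4).
# Defaults match the "Default Setting" column of the trigger table.

DEFAULT_TRIGGERS = {
    "PO_RECEIVE": True,
    "TRANSFER_RECEIVE": True,
    "PRICE_CHANGE": True,
    "MARKDOWN": True,
    "NEW_PRODUCT_PUBLISHED": False,
}

def should_prompt(event: str, tenant_overrides: dict[str, bool]) -> bool:
    """Return True if the print prompt should appear for this event."""
    return tenant_overrides.get(event, DEFAULT_TRIGGERS.get(event, False))
```

Unknown events default to no prompt, which keeps the behavior safe when new trigger types are added before a tenant has configured them.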
3.10.5 Reports: Label Printing
| Report | Purpose | Key Data Fields |
|---|---|---|
| Label Print Log | Track all label printing activity | Template used, product count, labels printed, printer, staff member, timestamp, trigger type |
| Reprint Needed | Products with price changes since last label print | Product, current price, price at last label print, last print date, price delta |
| Printer Usage Report | Monitor printer workload and failures | Printer name, jobs processed, labels printed, failure count, avg job size |
| Label Cost Estimate | Estimate label media consumption | Label type, labels printed (period), estimated media cost (based on configured cost per label) |
3.11 Product Media
Scope: Managing product images and video content for POS display, catalog browsing, and Shopify synchronization. Each product supports multiple images with one designated primary, plus optional video links. Images are optimized for fast POS terminal rendering.
3.11.1 Image Management
Product Image Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Parent product reference |
| variant_id | UUID | No | Variant reference (null = product-level image) |
| url | String(500) | Yes | Full-resolution image URL |
| thumbnail_urls | JSON | Yes | Generated thumbnails: {"sm": "url_64px", "md": "url_128px", "lg": "url_256px"} |
| alt_text | String(255) | No | Accessibility description |
| sort_order | Integer | Yes | Display position in gallery (1-based) |
| is_primary | Boolean | Yes | Primary display image (one per product, one per variant) |
| file_size_bytes | Integer | Yes | Original file size for storage tracking |
| width_px | Integer | Yes | Original image width |
| height_px | Integer | Yes | Original image height |
| uploaded_by | UUID | Yes | Staff member who uploaded |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Upload timestamp |
Image Rules:
| Rule | Specification |
|---|---|
| Primary image | One per product; one per variant (optional). Displayed in POS search results, cart line items, and product detail. |
| Gallery images | Up to 20 additional images per product (max configurable per tenant, default 20) |
| Per-variant images | Variants can have their own images (e.g., different image per color). If no variant image, falls back to parent product primary image. |
| Supported formats | JPEG, PNG, WebP |
| Maximum file size | 5MB per image |
| Minimum resolution | 256 x 256 pixels |
| Maximum resolution | 4096 x 4096 pixels (larger images auto-resized on upload) |
| Drag-drop reordering | Staff reorders gallery images by dragging; sort_order updated in batch |
Thumbnail Generation:
On upload, the system auto-generates three thumbnail sizes:
| Size Key | Dimensions | Use Case |
|---|---|---|
| sm | 64 x 64 px | POS cart line item, compact list view |
| md | 128 x 128 px | POS search results grid |
| lg | 256 x 256 px | POS product detail, category browse tile |
Thumbnails are generated asynchronously via a background job. Original image is stored as-is; thumbnails are derived copies.
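The upload-time image rules above reduce to a small validation routine. This is a sketch: the constants restate the rules table, and the function name and return shape are illustrative, a real service would inspect the actual file bytes rather than trust caller-supplied metadata.

```python
# Sketch of the upload-time image rules from Section 3.11.1.

MAX_BYTES = 5 * 1024 * 1024           # 5MB per image
MIN_PX, MAX_PX = 256, 4096            # resolution bounds
FORMATS = {"image/jpeg", "image/png", "image/webp"}

def validate_upload(content_type: str, size_bytes: int, width: int, height: int):
    """Return (accepted, resize_needed, reason)."""
    if content_type not in FORMATS:
        return False, False, "unsupported format"
    if size_bytes > MAX_BYTES:
        return False, False, "file exceeds 5MB"
    if min(width, height) < MIN_PX:
        return False, False, "below 256x256 minimum"
    # Oversized images are accepted but auto-resized, per the rules table
    return True, max(width, height) > MAX_PX, "ok"
```

Note the asymmetry in the rules: undersized images are rejected outright, while oversized ones are accepted and resized, since downscaling loses nothing the POS needs.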
3.11.2 Video Support
Product Video Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Parent product reference |
| video_url | String(500) | Yes | Video URL (YouTube, Vimeo, or self-hosted) |
| video_provider | Enum | No | YOUTUBE, VIMEO, SELF_HOSTED, OTHER |
| title | String(255) | Yes | Video display title |
| description | String(1000) | No | Brief description of video content |
| thumbnail_url | String(500) | No | Custom thumbnail (auto-fetched from provider if not set) |
| sort_order | Integer | Yes | Display position |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
Video Rules:
- URL-based video links only (no direct video file upload)
- Use cases: product demos, styling guides, assembly instructions, care guides
- Display: video tab on product detail page in admin portal
- Not displayed at POS terminal (bandwidth and performance consideration)
- Maximum 5 videos per product
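Since video_provider is optional, the backend can infer it from the URL host when the staff member doesn't set it. A minimal sketch, assuming the hostname lists shown (they are illustrative, not exhaustive):

```python
# Sketch: derive video_provider (Section 3.11.2) from the URL host.
from urllib.parse import urlparse

def detect_provider(video_url: str) -> str:
    host = (urlparse(video_url).hostname or "").lower().removeprefix("www.")
    if host in {"youtube.com", "youtu.be"}:
        return "YOUTUBE"
    if host == "vimeo.com":
        return "VIMEO"
    return "OTHER"  # caller may override to SELF_HOSTED for own-domain URLs
```

SELF_HOSTED cannot be inferred from the host alone, so it stays a manual choice (or a tenant-domain allowlist check).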
3.11.3 Media Sync
Shopify Integration:
| Direction | Behavior |
|---|---|
| POS to Shopify (Initial Publish) | Primary image pushed to Shopify on first product publish. Sets as Shopify product featured image. |
| POS to Shopify (Updates) | Primary image updates push to Shopify. Additional gallery images are NOT auto-synced (managed in Shopify separately). |
| Shopify to POS | Not synced. Shopify-managed images remain in Shopify only. POS images are POS-authoritative. |
Image Optimization Pipeline:
sequenceDiagram
autonumber
participant U as Staff
participant UI as Admin UI
participant API as Backend
participant IMG as Image Service
participant CDN as CDN
participant DB as DB
U->>UI: Upload product image (drag-drop or file select)
UI->>UI: Client-side validation (format, size < 5MB)
UI->>API: POST /products/{id}/images (multipart upload)
API->>IMG: Process image
IMG->>IMG: Validate dimensions (min 256x256)
IMG->>IMG: Auto-resize if > 4096x4096
IMG->>IMG: Compress (quality 85%, strip EXIF metadata)
IMG->>IMG: Convert to WebP (if not already)
par Thumbnail Generation
IMG->>IMG: Generate 64x64 thumbnail (sm)
IMG->>IMG: Generate 128x128 thumbnail (md)
IMG->>IMG: Generate 256x256 thumbnail (lg)
end
IMG->>CDN: Upload original + 3 thumbnails
CDN-->>IMG: Return CDN URLs
IMG-->>API: Return URLs (original + thumbnails)
API->>DB: Save image record with URLs
API-->>UI: Image uploaded successfully
UI-->>U: Display new image in gallery
CDN Delivery:
- All images served via CDN URL for fast display across all locations
- CDN cache TTL: 30 days (images are immutable; new upload = new URL)
- Fallback: if CDN is unavailable, images served from origin storage
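The "new upload = new URL" property falls out naturally if objects are keyed by a digest of their bytes, so CDN entries never need invalidation. A sketch; the path scheme and helper name are illustrative, not the platform's actual storage layout:

```python
# Sketch: content-addressed CDN keys (Section 3.11.3).
# Identical bytes always map to the same key; any change yields a new key,
# so the 30-day CDN cache never serves a stale image.
import hashlib

def cdn_key(tenant_id: str, image_bytes: bytes, ext: str = "webp") -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()[:16]
    return f"{tenant_id}/images/{digest}.{ext}"
```

The tenant prefix keeps object storage partitioned per tenant, matching the tenant_id scoping used throughout the data models.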
3.12 Product Notes & Attachments
Scope: Supporting structured internal notes and file attachments on products for buying decisions, vendor communication, quality tracking, and staff communication. Notes and attachments are visible only in the admin portal, not at the POS terminal.
3.12.1 Structured Note Types
| Note Type | Purpose | Typical Authors | Icon |
|---|---|---|---|
| Buying Note | Purchasing decisions, reorder plans, vendor negotiation details | Buyers, Managers | Shopping cart icon |
| Vendor Note | Vendor communication, lead time updates, quality issues, terms changes | Buyers, Admin | Truck icon |
| Quality Note | Quality inspections, defect reports, customer complaints about product | Staff, Managers | Checkmark/shield icon |
| Staff Note | General internal communication about the product | Any staff | Chat bubble icon |
Product Note Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Parent product reference |
| note_type | Enum | Yes | BUYING, VENDOR, QUALITY, STAFF |
| content | Text | Yes | Note text (max 5,000 characters) |
| is_pinned | Boolean | Yes | Pinned notes display at top (default: false) |
| created_by | UUID | Yes | Staff member who created the note |
| updated_by | UUID | No | Staff member who last edited (null if never edited) |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Only the note creator or an Admin can edit or delete a note
- Pinned notes always sort to the top, regardless of date
- Maximum 1 pinned note per note type per product (pinning a new note of the same type unpins the previous)
- Notes are never hard-deleted; they are soft-deleted with a deleted_at timestamp for audit
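The "one pinned note per type" rule can be sketched as follows. This is an in-memory stand-in for what would be a transactional database update; the function name is illustrative.

```python
# Sketch of the pinning rule (Section 3.12.1): at most one pinned note per
# note type per product; pinning a new note of the same type unpins the
# previous one.

def pin_note(notes: list[dict], note_id: str) -> None:
    target = next(n for n in notes if n["id"] == note_id)
    for note in notes:
        if note["note_type"] == target["note_type"]:
            note["is_pinned"] = False  # unpin any prior note of this type
    target["is_pinned"] = True

notes = [
    {"id": "n1", "note_type": "BUYING", "is_pinned": True},
    {"id": "n2", "note_type": "BUYING", "is_pinned": False},
    {"id": "n3", "note_type": "QUALITY", "is_pinned": True},
]
pin_note(notes, "n2")  # n1 loses its pin; n3 (different type) is untouched
```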
3.12.2 File Attachments
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Parent product reference |
| file_name | String(255) | Yes | Original uploaded file name |
| file_url | String(500) | Yes | Storage URL for download |
| file_type | Enum | Yes | PDF, JPEG, PNG, XLSX, DOCX, CSV |
| file_size_bytes | Integer | Yes | File size for storage tracking |
| description | String(500) | No | Brief description of the attachment |
| uploaded_by | UUID | Yes | Staff member who uploaded |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Upload timestamp |
Attachment Rules:
| Rule | Specification |
|---|---|
| Supported file types | PDF, JPEG, PNG, XLSX, DOCX, CSV |
| Maximum file size | 10MB per attachment |
| Maximum attachments per product | 20 |
| Common use cases | Spec sheets, certificates of authenticity, vendor catalogs, care instructions, import documents |
| Access control | Any staff can view; Buyer, Manager, or Admin can upload/delete |
3.12.3 Display
Admin Product Detail Page – Notes Tab:
┌─────────────────────────────────────────────────────────┐
│ Notes (7) [+ Add Note ▼] │
│ ─────────────────────────────────────────────────────── │
│ Filter: [All Types ▼] [Show Pinned Only ☐] │
│ │
│ 📌 BUYING NOTE — Jan 15, 2026 by Sarah (Buyer) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Reorder 200 units for Spring. Vendor confirmed │ │
│ │ lead time of 21 days. Negotiate 5% volume disc. │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ QUALITY NOTE — Jan 12, 2026 by Mike (Manager) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Customer complaint: stitching loose on collar. │ │
│ │ Inspected 5 units from last batch - 2 defective. │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ STAFF NOTE — Jan 10, 2026 by Jane (Staff) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ This item sells best when displayed near the │ │
│ │ front entrance. Move to endcap for weekends. │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ ─────────────────────────────────────────────────────── │
│ Attachments (3) [+ Upload File] │
│ │
│ 📎 Nike-SS26-Spec-Sheet.pdf (1.2 MB) [Download] │
│ 📎 Care-Instructions-EN.pdf (340 KB) [Download] │
│ 📎 Vendor-Quote-Jan2026.xlsx (85 KB) [Download] │
└─────────────────────────────────────────────────────────┘
Product List View:
- Note count badge displayed on product rows (e.g., “3 notes”) as a small indicator
- Badge color: grey for staff notes only, yellow if any buying/vendor notes exist, red if any quality notes exist
3.13 Catalog Permissions & Approvals
Scope: Controlling access to catalog features through role-based permissions, field-level edit restrictions, and approval workflows for sensitive changes (pricing, cost, lifecycle transitions). All permission-governed actions are logged to the audit trail defined in Section 3.13.4.
3.13.1 Role-Based Catalog Access
| Permission | Admin | Buyer | Manager | Staff |
|---|---|---|---|---|
| View Products | Yes | Yes | Yes | Yes |
| Create Products | Yes | Yes | Yes | No |
| Edit Products | Yes | Yes | Yes | No (view only) |
| Change Price | Yes | No | Yes | No |
| Change Cost | Yes | Yes | No | No |
| Change Lifecycle Status | Yes | No | Yes | No |
| Delete Products | Yes | No | No | No |
| Approve Changes | Yes | No | Yes | No |
| Manage Categories | Yes | No | Yes | No |
| Manage Vendors | Yes | Yes | No | No |
| Create Purchase Orders | Yes | Yes | Yes | No |
| Submit Purchase Orders | Yes | Yes | No | No |
| Receive Inventory | Yes | Yes | Yes | No |
| Configure Templates/Buttons | Yes | No | Yes | No |
| Export Catalog Data | Yes | Yes | Yes | No |
| Import Catalog Data | Yes | Yes | No | No |
Role Assignment:
- Roles are assigned per user per tenant (a user can have different roles in different tenants)
- A user can hold exactly one catalog role per tenant
- Role assignment requires Admin permission
- Role changes take effect on the user’s next login (or session refresh)
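A few rows of the permission matrix above, sketched as a lookup table. The action keys and helper are illustrative; a production system would load the full matrix from per-tenant configuration rather than hard-code it.

```python
# Sketch of the role matrix in Section 3.13.1 (subset of rows).

CATALOG_PERMISSIONS = {
    "CHANGE_PRICE":    {"ADMIN", "MANAGER"},
    "CHANGE_COST":     {"ADMIN", "BUYER"},
    "DELETE_PRODUCT":  {"ADMIN"},
    "APPROVE_CHANGES": {"ADMIN", "MANAGER"},
}

def can(role: str, action: str) -> bool:
    """True if the given catalog role may perform the action."""
    return role in CATALOG_PERMISSIONS.get(action, set())
```

Note the deliberate split between price and cost authority: Managers change prices but not costs, Buyers the reverse, which is why the default approval rule routes cost changes to "Buyer or Admin".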
3.13.2 Field-Level Permissions
Configurable per role per tenant. The tenant Admin can customize which fields each role can edit versus view as read-only.
Default Field-Level Restrictions:
| Field(s) | Admin | Buyer | Manager | Staff |
|---|---|---|---|---|
| base_price, compare_at_price | Editable | Read-only | Editable | Read-only |
| cost, vendor_cost | Editable | Editable | Read-only | Read-only |
| lifecycle_status | Editable | Read-only | Editable | Read-only |
| category_id | Editable | Editable | Editable | Read-only |
| name, description, tags | Editable | Editable | Editable | Read-only |
| barcode, alternate_barcodes | Editable | Editable | Editable | Read-only |
| images, media | Editable | Editable | Editable | Read-only |
| tax_code | Editable | Read-only | Editable | Read-only |
| track_inventory, allow_negative | Editable | Read-only | Editable | Read-only |
| low_stock_threshold | Editable | Editable | Editable | Read-only |
| Custom attributes | Editable | Editable | Editable | Read-only |
Field Permission Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| role_id | UUID | Yes | Reference to role |
| field_name | String(100) | Yes | Product field name (e.g., “base_price”, “cost”) |
| permission | Enum | Yes | READ_ONLY or EDITABLE |
| tenant_id | UUID | Yes | Tenant scope |
| updated_at | DateTime | Yes | Last modification timestamp |
Constraint: Unique on (role_id, field_name, tenant_id).
3.13.3 Approval Workflows
Configurable approval rules determine which catalog changes require manager or admin sign-off before taking effect.
Default Approval Rules:
| Change Type | Condition | Approval Required From | Priority |
|---|---|---|---|
| Price decrease | > 10% reduction from current price | Manager | Medium |
| Price decrease | > 30% reduction from current price | Admin | High |
| Price increase | > 50% increase from current price | Manager | Medium |
| Cost change | Any modification to cost or vendor_cost | Buyer or Admin | Medium |
| Product activation | Draft to Active transition | Manager | Low |
| Product deactivation | Active to Discontinued transition | Manager | Medium |
| Product deletion | Any deletion of Active or Discontinued product | Admin | High |
| Bulk price change | Any bulk operation affecting base_price | Manager | High |
| Bulk status change | Any bulk operation affecting lifecycle_status | Manager | High |
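The threshold conditions in the rules above can be evaluated directly from the condition JSON. A sketch covering only the percentage operators used by the default price rules; the operator names mirror the JSON example in the Approval Rule Data Model:

```python
# Sketch: evaluate an approval-rule condition (Section 3.13.3) against a
# proposed price change.

def needs_approval(condition: dict, old: float, new: float) -> bool:
    op, threshold = condition["operator"], condition["value"]
    pct = (new - old) / old * 100 if old else 0.0
    if op == "decrease_pct_gt":
        return -pct > threshold  # e.g. a 25% cut vs a 10% threshold
    if op == "increase_pct_gt":
        return pct > threshold
    raise ValueError(f"unknown operator: {op}")

# A 25% price cut trips the "> 10% decrease -> Manager" rule
rule = {"field": "base_price", "operator": "decrease_pct_gt", "value": 10}
print(needs_approval(rule, 29.99, 22.49))
```

With tiered rules (Manager at >10%, Admin at >30%), the backend would evaluate all matching rules and require approval from the highest role whose threshold is exceeded.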
Approval Rule Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| change_type | Enum | Yes | PRICE_DECREASE, PRICE_INCREASE, COST_CHANGE, STATUS_CHANGE, PRODUCT_DELETION, BULK_PRICE, BULK_STATUS |
| condition | JSON | Yes | Threshold definition: {"field": "base_price", "operator": "decrease_pct_gt", "value": 10} |
| required_role | Enum | Yes | Minimum role required to approve: MANAGER, ADMIN, BUYER_OR_ADMIN |
| is_active | Boolean | Yes | Whether this rule is currently enforced |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Approval Request Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| approval_rule_id | UUID | Yes | Rule that triggered this approval |
| change_type | Enum | Yes | Type of change requested |
| product_id | UUID | Yes | Product affected |
| field_name | String(100) | Yes | Field being changed |
| old_value | String(500) | Yes | Current value (serialized) |
| new_value | String(500) | Yes | Proposed value (serialized) |
| reason | String(1000) | No | Requester’s justification |
| requested_by | UUID | Yes | Staff member requesting the change |
| approved_by | UUID | No | Staff member who approved (null if pending or rejected) |
| status | Enum | Yes | PENDING, APPROVED, REJECTED, EXPIRED |
| rejection_reason | String(1000) | No | Why the change was rejected |
| expires_at | DateTime | No | Auto-expire if not acted on (default: 7 days) |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | When the request was created |
| resolved_at | DateTime | No | When the request was approved or rejected |
Approval Workflow:
sequenceDiagram
autonumber
participant U as Staff / Buyer
participant UI as Admin UI
participant API as Backend
participant DB as DB
participant N as Notification Service
participant A as Approver (Manager/Admin)
U->>UI: Edit product field (e.g., reduce price by 25%)
UI->>API: PUT /products/{id}
API->>API: Check approval rules for this change
alt Approval Required
API->>DB: Create approval_request (status: PENDING)
API->>DB: Store proposed change (do NOT apply yet)
API->>N: Send notification to eligible approvers
N-->>A: "Price change requires your approval"
API-->>UI: "Change submitted for approval"
UI-->>U: "Pending manager approval. Price unchanged until approved."
Note over A, DB: Approver Reviews
A->>UI: Open "Pending Approvals" queue
UI->>API: GET /approvals?status=PENDING
API-->>UI: List of pending approval requests
A->>UI: Review change details
Note right of UI: Shows: product, field, old value, new value, who requested, reason
alt Approve
A->>UI: Click "Approve"
UI->>API: POST /approvals/{id}/approve
API->>DB: Update approval_request (status: APPROVED, approved_by, resolved_at)
API->>DB: Apply the change to the product
API->>DB: Log to audit_trail (with approval_id reference)
API->>N: Notify requester "Your change was approved"
API-->>UI: "Change approved and applied"
else Reject
A->>UI: Click "Reject" + enter reason
UI->>API: POST /approvals/{id}/reject
API->>DB: Update approval_request (status: REJECTED, rejection_reason, resolved_at)
API->>N: Notify requester "Your change was rejected: [reason]"
API-->>UI: "Change rejected"
end
else No Approval Required
API->>DB: Apply change directly
API->>DB: Log to audit_trail
API-->>UI: "Change saved"
end
Business Rules:
- Pending approval requests expire after 7 days (configurable per tenant) and are auto-set to EXPIRED
- A requester cannot approve their own change request
- If the product is edited again while a pending approval exists for the same field, the older request is auto-cancelled
- Admin can bypass approval requirements (self-approving)
- Approval rules can be enabled or disabled per tenant without deleting the rule definition
3.13.4 Audit Trail
Every field-level change to any catalog product is logged to an immutable audit trail.
Audit Trail Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Product that was changed |
| field_name | String(100) | Yes | Field that changed |
| old_value | Text | No | Previous value (null for new records) |
| new_value | Text | Yes | New value |
| changed_by | UUID | Yes | User who made the change |
| change_source | Enum | Yes | MANUAL, IMPORT, SYNC, BULK, SYSTEM, APPROVAL |
| approval_id | UUID | No | Reference to approval request (if approval was required) |
| ip_address | String(45) | No | IP address of the client |
| user_agent | String(500) | No | Client user-agent string |
| tenant_id | UUID | Yes | Tenant scope |
| created_at | DateTime | Yes | Timestamp of the change |
Audit Trail Rules:
| Rule | Specification |
|---|---|
| Immutability | Audit records cannot be edited or deleted by any user, including Admin |
| Retention | Minimum 7 years (configurable per tenant; can be increased, never decreased) |
| Sensitive field highlighting | Changes to base_price, cost, compare_at_price, and lifecycle_status are visually flagged with a warning icon in the UI |
| Searchable | Audit trail is searchable per product, per user, per field, per date range, and per change source |
| Export | Audit trail exportable as CSV for compliance and external audit |
| Bulk change tracking | Bulk operations create one audit entry per product per field changed (not a single aggregate entry) |
Audit Trail View (Admin Product Detail Page):
┌─────────────────────────────────────────────────────────────────┐
│ Change History (42 changes) [Export CSV] [Filter ▼] │
│ ───────────────────────────────────────────────────────────── │
│ │
│ ⚠ Jan 20, 2026 14:32 — Mike (Manager) — APPROVAL │
│ base_price: $29.99 → $22.99 (approved by Sarah) │
│ │
│ Jan 18, 2026 09:15 — Jane (Buyer) — MANUAL │
│ tags: ["summer"] → ["summer", "clearance"] │
│ │
│ ⚠ Jan 15, 2026 11:00 — SYSTEM — SYNC │
│ lifecycle_status: ACTIVE → DISCONTINUED │
│ │
│ Jan 10, 2026 16:45 — Import Bot — IMPORT │
│ cost: $12.50 → $13.00 │
│ │
│ [Load More...] │
└─────────────────────────────────────────────────────────────────┘
3.13.5 Reports: Permissions & Audit
| Report | Purpose | Key Data Fields |
|---|---|---|
| Pending Approvals | Changes currently awaiting manager/admin approval | Product, change type, field, old value, new value, requested by, requested at, days pending, approver required |
| Approval History | Completed approval decisions with turnaround metrics | Product, change type, requested by, approved/rejected by, resolution time (hours), reason |
| Approval SLA Report | Measure approval response times against targets | Avg resolution time, % resolved within 24h, % expired, by approver |
| Change Audit Log | Complete catalog change history (filterable) | Product, field, old value, new value, changed by, change source, approval ID, timestamp |
| Change Volume Report | Aggregate change counts for workload analysis | Date, change count by source (manual/import/sync/bulk), change count by field, top changers |
| Permission Violations | Attempted unauthorized actions that were blocked | User, role, attempted action, product, timestamp, blocking rule |
3.14 Product Performance Analytics
Scope: Providing real-time product performance metrics embedded on the product detail page and a dedicated catalog analytics dashboard. These metrics drive inventory optimization, markdown decisions, and merchandising strategy.
3.14.1 Embedded Product Metrics
Displayed on each product’s detail page in the admin portal:
| Metric | Calculation | Display Format | Color Coding |
|---|---|---|---|
| Sell-Through Rate | (Units Sold / Units Received) x 100 over selected period | Percentage with trend arrow (up/down vs prior period) | Green >= 70%, Yellow 40-69%, Red < 40% |
| Days of Supply | Current Stock / Avg Daily Sales (rolling 30 days) | Integer with unit “days” | Red < 14 days, Yellow 14-30 days, Green > 30 days |
| Gross Margin % | ((Selling Price - Weighted Avg Cost) / Selling Price) x 100 | Percentage | Red < 30%, Yellow 30-50%, Green > 50% |
| Sales Velocity | Units sold per week (rolling 4-week average) | Decimal units/week with sparkline chart (8-week trend) | No color coding; sparkline shows trend |
| Inventory Aging | Days since last receive at each location | Days per location | Green < 60 days, Yellow 60-120 days, Red > 120 days |
| ABC Classification | Revenue-based Pareto analysis (see 3.14.2) | Badge: A / B / C / NEW | A = Green, B = Blue, C = Grey, NEW = Purple |
| Stock Turn Rate | (COGS / Avg Inventory Value) annualized | Decimal turns/year | Red < 2, Yellow 2-4, Green > 4 |
| Revenue (Period) | Sum of (qty_sold x selling_price) in selected date range | Currency with period selector | No color coding |
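The calculation column of the table above translates directly into formulas. A sketch with plain-number inputs (the real dashboard aggregates these from the sales and inventory tables); function names are illustrative:

```python
# Sketch implementations of the metric formulas in Section 3.14.1.

def sell_through_rate(units_sold: int, units_received: int) -> float:
    """(Units Sold / Units Received) x 100 over the selected period."""
    return units_sold / units_received * 100 if units_received else 0.0

def days_of_supply(current_stock: int, avg_daily_sales: float) -> float:
    """Current Stock / Avg Daily Sales (rolling 30 days)."""
    return current_stock / avg_daily_sales if avg_daily_sales else float("inf")

def gross_margin_pct(selling_price: float, weighted_avg_cost: float) -> float:
    """((Selling Price - Weighted Avg Cost) / Selling Price) x 100."""
    return (selling_price - weighted_avg_cost) / selling_price * 100

def stock_turn_rate(annual_cogs: float, avg_inventory_value: float) -> float:
    """(COGS / Avg Inventory Value), annualized."""
    return annual_cogs / avg_inventory_value

print(round(sell_through_rate(72, 100)))        # 72% -> Green band
print(round(gross_margin_pct(29.99, 13.74), 1)) # matches the 54.2% mockup
```

The zero-denominator guards matter in practice: a product with no receipts or no sales history should render as "n/a" rather than crash the metric card.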
Metric Display Layout (Admin Product Detail):
┌─────────────────────────────────────────────────────────────┐
│ Performance Metrics Period: [Last 30 Days ▼] │
│ ───────────────────────────────────────────────────────── │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Sell-Thru │ │ Days of │ │ Margin │ │ Velocity │ │
│ │ 72% ▲ │ │ Supply │ │ 54.2% │ │ 8.3/wk │ │
│ │ (Green) │ │ 18 days │ │ (Green) │ │ ~~~~~~~~ │ │
│ │ │ │ (Yellow) │ │ │ │ sparkline│ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Aging │ │ ABC │ │ Turns │ │ Revenue │ │
│ │ 45 days │ │ [A] │ │ 6.2/yr │ │ $12,450 │ │
│ │ (Green) │ │ (Green) │ │ (Green) │ │ │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────┘
3.14.2 ABC Classification
Calculation Method: Revenue-based Pareto analysis, calculated monthly via background job.
| Class | Revenue Contribution | Product Population | Attention Level |
|---|---|---|---|
| A | Top 20% of products generating ~80% of revenue | ~20% of active SKUs | High: frequent cycle counts, optimal stock levels, priority reorder |
| B | Next 30% of products generating ~15% of revenue | ~30% of active SKUs | Moderate: standard stock levels, regular review |
| C | Bottom 50% of products generating ~5% of revenue | ~50% of active SKUs | Low: review for markdown or discontinuation, minimal safety stock |
| NEW | Products active for < 60 days (insufficient data) | Varies | Exempt from classification until data accumulates |
ABC Classification Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | Product reference |
| classification | Enum | Yes | A, B, C, NEW |
| revenue_total | Decimal(12,2) | Yes | Revenue in the analysis period |
| revenue_rank_pct | Decimal(5,2) | Yes | Percentile rank (0-100) by revenue |
| analysis_period_start | Date | Yes | Start of the analysis period |
| analysis_period_end | Date | Yes | End of the analysis period |
| tenant_id | UUID | Yes | Tenant scope |
| calculated_at | DateTime | Yes | When this classification was computed |
Business Rules:
- Recalculated on the 1st of each month using the trailing 12-month sales period
- Products with fewer than 60 days since published_at are classified as NEW
- Classification is stored on the product record as an abc_classification field for fast query
- Classification is tenant-scoped (each tenant’s products are ranked independently)
- ABC drives: reorder priority (A items reordered first), count scheduling (A items counted more frequently), display prominence in analytics
ABC Classification Impact on Operations:
| Area | A Items | B Items | C Items |
|---|---|---|---|
| Cycle Count Frequency | Weekly | Monthly | Quarterly |
| Safety Stock | 2 weeks of supply | 1 week of supply | Minimum (3 days) |
| Reorder Priority | First in auto-PO generation | Second priority | Only if manually flagged |
| Stockout Alerting | Immediate alert to Manager | End-of-day alert | Weekly report only |
| Markdown Review | Quarterly review | Semi-annual review | Monthly review (candidates for clearance) |
3.14.3 Catalog Analytics Dashboard
A dedicated dashboard accessible from the admin portal navigation.
Summary Cards (Top Row):
| Card | Metric | Calculation |
|---|---|---|
| Total Active SKUs | Count of products where lifecycle_status = ACTIVE | Real-time |
| Total Inventory Value | Sum of (qty_on_hand x weighted_avg_cost) across all locations | Refreshed hourly |
| Avg Gross Margin % | Average margin across all active products (weighted by revenue) | Refreshed daily |
| Avg Days of Supply | Average days of supply across all active products | Refreshed daily |
Charts:
| Chart | Type | Data | Purpose |
|---|---|---|---|
| Top 10 by Revenue | Horizontal bar chart | Top 10 products by revenue in selected period | Identify best sellers |
| Bottom 10 by Sell-Through | Horizontal bar chart | Bottom 10 products by sell-through rate | Candidates for markdown or discontinuation |
| ABC Distribution | Pie chart (dual-ring) | Inner ring: product count per class. Outer ring: revenue per class | Visualize catalog health |
| Inventory Aging Histogram | Stacked bar chart | Buckets: 0-30, 30-60, 60-90, 90-120, 120+ days | Identify aging inventory |
| Category Margin Comparison | Horizontal bar chart | Avg margin by top-level category | Compare category profitability |
| Seasonal Sell-Through | Multi-line chart | Sell-through rate per season over time | Compare seasonal performance |
| Velocity vs Stock Scatter | Scatter plot | X = velocity, Y = stock qty, color = ABC class | Spot overstock/understock |
Dashboard Filters:
| Filter | Type | Scope |
|---|---|---|
| Date Range | Date picker | All charts and metrics |
| Location | Multi-select | Filter to specific store(s) or all |
| Category | Tree-select | Filter to specific category branch |
| Brand | Multi-select | Filter to specific brand(s) |
| Season | Multi-select | Filter to season collection |
| ABC Class | Multi-select | Filter by A, B, C, or NEW |
3.14.4 Reports: Analytics
| Report | Purpose | Key Data Fields |
|---|---|---|
| Product Scorecard | Full performance summary per product, exportable | Product, SKU, revenue, units sold, margin %, velocity, aging, ABC class, sell-through %, days of supply, stock turn |
| Slow Movers | Products with low velocity and high stock (markdown candidates) | Product, stock qty, days of supply, velocity, last sale date, ABC class, recommended action |
| Overstock Alert | Products exceeding target days of supply threshold | Product, location, stock qty, target DOS, actual DOS, excess units, excess value at cost |
| Dead Stock | Zero sales in configurable period (default 90 days) | Product, last sale date, stock qty, stock value at cost, days without sale, ABC class |
| Margin Erosion | Products where margin has declined over time | Product, margin 90 days ago, current margin, margin delta, cause (cost increase / price decrease / promo) |
| ABC Migration | Products that changed classification between periods | Product, previous class, current class, revenue change, velocity change |
3.15 Catalog User Stories & Acceptance Criteria (EXPANDED)
Note: Epics 3.A through 3.E and their existing Gherkin scenarios remain unchanged. The following adds new Epics 3.F through 3.S and new Gherkin acceptance criteria for key features.
Epic 3.F: Pricing Engine
- Story 3.F.1 (Price Hierarchy): System resolves product price using cascading hierarchy: Manual Override > Promotion > Price Book > Channel Price > Global Default. The customer always receives the best applicable price.
- Story 3.F.2 (Price Books): Admin can create named price books restricted by customer group, channel, and date range. Price book entries override base price for matched products.
- Story 3.F.3 (Promotions): Staff can create four promotion types: Basic (% or $ off), Tiered (volume pricing), BOGO (cross-item), and Scheduled (time-based). Promotions follow a Draft > Scheduled > Active > Expired lifecycle.
- Story 3.F.4 (Markdown Workflow): Price reductions follow a formal workflow: request > manager approval > scheduled price change. All price changes logged with who/when/why for accountability.
- Story 3.F.5 (Conflict Resolution): When multiple pricing rules match, system applies best-price-for-customer logic. Exclusive promotions override all other pricing. Stackable promotions combine up to configurable max discount.
Epic 3.G: Multi-Channel
- Story 3.G.1 (Channel Visibility): Staff can toggle product visibility per channel (In-Store, Online, Wholesale). Products must be visible on at least one channel to be Active.
- Story 3.G.2 (Channel Inventory): Admin can choose shared pool (default) or dedicated allocation per channel. Dedicated mode reserves specific quantities per channel.
- Story 3.G.3 (Channel Pricing): Products can have different prices per channel. Channel price overrides base price per the price hierarchy.
Epic 3.H: Shopify Integration
- Story 3.H.1 (POS-Master Sync): Product changes in POS automatically push to Shopify for POS-owned fields. Shopify-only fields (SEO, metafields) are preserved.
- Story 3.H.2 (Bidirectional Mode): Admin can enable bidirectional sync per tenant. POS-priority conflict resolution applies to shared fields.
- Story 3.H.3 (Sync Monitoring): Admin dashboard shows sync status, pending changes, failed syncs, and conflict log.
Epic 3.I: Search & Discovery
- Story 3.I.1 (Full-Text Search): POS search supports name, SKU, barcode, tags, vendor, brand, and custom attributes with fuzzy matching and auto-complete. Results return in under 200ms.
- Story 3.I.2 (Favorites & Quick-Add): Staff can pin up to 50 favorite products for one-tap access and configure up to 20 quick-add buttons on the POS home screen.
- Story 3.I.3 (Substitutions): When a product is out of stock, system suggests alternatives: same product at other locations, similar products in same category, and related products from cross-sell configuration.
Epic 3.J: Labels & Printing
- Story 3.J.1 (Label Printing): Staff can select products and print barcode labels, shelf tags, or clearance stickers using configurable templates. Supports Zebra, DYMO, Brother, and receipt printer output.
- Story 3.J.2 (Auto-Print Triggers): System prompts label printing on PO receive, transfer receive, price change, and markdown events. Triggers are configurable per tenant.
Epic 3.K: Media Management
- Story 3.K.1 (Product Images): Products support one primary image plus a gallery of up to 20 images. Per-variant images are supported. Drag-drop reorder. Auto-generated thumbnails (64px, 128px, 256px).
- Story 3.K.2 (Video): Products can link to video URLs (YouTube, Vimeo, self-hosted) for demos and styling guides. Videos displayed on admin product detail page only (not POS terminal).
Epic 3.L: Permissions & Approvals
- Story 3.L.1 (Role-Based Access): Four catalog roles (Admin, Buyer, Manager, Staff) with configurable field-level permissions. Staff is view-only. Admin has full access.
- Story 3.L.2 (Approval Workflows): Price decreases > 10% require Manager approval. Price decreases > 30% require Admin approval. Cost changes require Buyer/Admin approval. Rules are configurable per tenant.
- Story 3.L.3 (Audit Trail): Every field change logged with who, when, old value, new value, and change source. Searchable per product. Minimum 7-year retention. Exportable as CSV.
Epic 3.M: Analytics
- Story 3.M.1 (Product Metrics): Product detail page shows sell-through rate, days of supply, gross margin, sales velocity (sparkline), inventory aging, ABC classification, stock turn rate, and period revenue.
- Story 3.M.2 (ABC Classification): Monthly Pareto analysis classifies products as A (top 20% by revenue), B (next 30%), C (bottom 50%). New products exempt until 60 days of data.
- Story 3.M.3 (Analytics Dashboard): Dedicated catalog dashboard with summary cards, top/bottom performers, ABC distribution chart, aging histogram, category margin comparison, and seasonal sell-through trends.
Catalog Acceptance Criteria: New Gherkin Scenarios
Feature: Price Hierarchy Resolution
As a POS system
I need to resolve the correct price using the pricing hierarchy
So that customers always receive the best applicable price
Background:
Given product "Classic Tee" has base_price "$29.99"
And product "Classic Tee" is in "ACTIVE" lifecycle status
Scenario: Base price used when no overrides exist
Given no channel price, price book, or promotion applies to "Classic Tee"
When any customer adds "Classic Tee" to cart
Then the price should be "$29.99"
Scenario: Channel price overrides base price
Given channel "WHOLESALE" has price "$19.99" for "Classic Tee"
When a wholesale customer adds "Classic Tee" to cart
Then the price should be "$19.99"
Scenario: Price book overrides channel price
Given price book "Employee Discount" is active for customer group "Employees"
And "Employee Discount" has "Classic Tee" at "$14.99"
And channel "IN_STORE" has price "$29.99" for "Classic Tee"
When employee "John" adds "Classic Tee" to cart
Then the price should be "$14.99"
Scenario: Promotion overrides price book when better for customer
Given promotion "Summer Sale" is active with 30% off "Classic Tee"
And price book "Employee Discount" has "Classic Tee" at "$14.99"
When employee "John" adds "Classic Tee" to cart
Then the price should be "$14.99"
And the system should apply best-price-for-customer logic
And the applied pricing source should be "PRICE_BOOK"
Scenario: Promotion wins when it gives the lower price
Given promotion "Flash Sale" is active with 60% off "Classic Tee"
And price book "Employee Discount" has "Classic Tee" at "$14.99"
When employee "John" adds "Classic Tee" to cart
Then the price should be "$12.00"
And the applied pricing source should be "PROMOTION"
Scenario: Manual override beats all other pricing
Given promotion "Summer Sale" is active with 30% off "Classic Tee"
And price book "Employee Discount" has "Classic Tee" at "$14.99"
When manager "Mike" applies manual override price "$10.00" to "Classic Tee"
Then the price should be "$10.00"
And the applied pricing source should be "MANUAL_OVERRIDE"
And the override should be logged to the audit trail
Scenario: Exclusive promotion overrides stackable promotions
Given exclusive promotion "VIP Members Only" is active with 40% off "Classic Tee"
And stackable promotion "Summer Sale" is active with 10% off "Classic Tee"
When a VIP customer adds "Classic Tee" to cart
Then only the exclusive promotion should apply
And the price should be "$17.99"
Scenario: Stackable promotions combine up to max discount
Given stackable promotion "Summer Sale" is active with 10% off "Classic Tee"
And stackable promotion "Newsletter Signup" is active with 5% off "Classic Tee"
And tenant max discount is configured at 50%
When a qualifying customer adds "Classic Tee" to cart
Then both promotions should apply
And the combined discount should be 15%
And the price should be "$25.49"
Feature: Markdown Workflow with Approval
As a catalog manager
I need price reductions to follow an approval workflow
So that markdowns are controlled and accountable
Background:
Given product "Slow Seller" has base_price "$49.99"
And approval rule exists: price decrease > 10% requires Manager approval
And approval rule exists: price decrease > 30% requires Admin approval
Scenario: Small price decrease requires no approval
When staff "Jane" changes price to "$47.99" (4% decrease)
Then the price should change immediately to "$47.99"
And the change should be logged to the audit trail
And no approval request should be created
Scenario: Moderate price decrease requires manager approval
When staff "Jane" requests markdown to "$39.99" (20% decrease) with reason "Low sell-through"
Then an approval request should be created with status "PENDING"
And the product price should remain "$49.99" until approved
And manager "Mike" should receive a notification
Scenario: Manager approves the markdown
Given a pending approval exists for "Slow Seller" price change to "$39.99"
When manager "Mike" approves the markdown
Then the product price should change to "$39.99"
And the approval status should be "APPROVED"
And the audit log should record the change with requester "Jane", approver "Mike", and reason "Low sell-through"
And staff "Jane" should receive a notification "Your markdown was approved"
Scenario: Manager rejects the markdown
Given a pending approval exists for "Slow Seller" price change to "$39.99"
When manager "Mike" rejects the markdown with reason "Wait for end-of-season clearance"
Then the product price should remain "$49.99"
And the approval status should be "REJECTED"
And staff "Jane" should receive a notification with the rejection reason
Scenario: Large price decrease escalates to admin
When staff "Jane" requests markdown to "$29.99" (40% decrease) with reason "Clearance"
Then an approval request should be created requiring Admin approval
And manager approval should NOT be sufficient
Scenario: Approval request expires after 7 days
Given a pending approval was created 8 days ago for "Slow Seller"
When the expiration job runs
Then the approval status should change to "EXPIRED"
And the product price should remain unchanged
And the requester should be notified of expiration
Scenario: Requester cannot approve their own change
When staff "Jane" requests markdown to "$39.99"
And "Jane" also has Manager role
Then "Jane" should not be able to approve her own request
And the system should show "Cannot approve your own change request"
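The threshold routing and self-approval guard in these scenarios reduce to two small checks. A sketch under assumed names (`required_approval`, `can_approve`, and the dict shapes are hypothetical; per-tenant configuration would supply the percentages):

```python
def required_approval(old_price, new_price, manager_pct=10, admin_pct=30):
    """Route a markdown request by decrease percentage (illustrative)."""
    if new_price >= old_price:
        return None  # not a markdown; increases follow a separate path
    decrease_pct = (old_price - new_price) / old_price * 100
    if decrease_pct > admin_pct:
        return "ADMIN"
    if decrease_pct > manager_pct:
        return "MANAGER"
    return None  # small decreases apply immediately, audit-logged

def can_approve(request, approver):
    """Requesters may never approve their own change, even with the role."""
    if approver["id"] == request["requester_id"]:
        return False
    allowed = {"MANAGER": {"MANAGER", "ADMIN"}, "ADMIN": {"ADMIN"}}
    return approver["role"] in allowed[request["level"]]
```

Note that a 20% decrease routes to Manager and a 40% decrease escalates to Admin, matching the Background rules above.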
Feature: POS-Shopify Catalog Sync
As a multi-channel retailer
I need product changes to sync between POS and Shopify
So that online and in-store catalogs stay consistent
Background:
Given product "Classic Tee" exists in POS with SKU "BLK-TEE-001"
And product "Classic Tee" is synced with Shopify product ID "shop_12345"
Scenario: POS price change pushes to Shopify in POS-Master mode
Given sync mode is "POS-Master"
When staff changes price from "$29.99" to "$24.99" in POS
Then the price should update in Shopify within the sync interval
And Shopify SEO title should remain unchanged
And Shopify metafields should remain unchanged
And sync log should record: field "price", direction "POS_TO_SHOPIFY", status "SUCCESS"
Scenario: POS-owned fields are protected from Shopify overwrite
Given sync mode is "POS-Master"
When someone edits the price to "$27.99" directly in Shopify
Then the next sync cycle should overwrite Shopify price back to "$24.99"
And a conflict entry should be logged with source "SHOPIFY", rejected value "$27.99"
Scenario: Shopify description change syncs in bidirectional mode
Given sync mode is "Bidirectional with POS Priority"
And "long_description" is configured with sync direction "Configurable"
And "long_description" is set to "Shopify-to-POS" direction
When an SEO agency updates the description in Shopify
Then the new description should sync to POS on the next cycle
And POS-owned fields (price, SKU, variants) should not be affected
And sync log should record: field "long_description", direction "SHOPIFY_TO_POS", status "SUCCESS"
Scenario: Conflict resolution when both sides change the same field
Given sync mode is "Bidirectional with POS Priority"
And "base_price" is a POS-priority field
When POS changes price to "$24.99" between sync cycles
And Shopify changes price to "$27.99" between the same sync cycles
Then the POS price "$24.99" should win
And Shopify should be updated to "$24.99"
And a conflict audit entry should be created
And the conflict log should show: POS value "$24.99" (applied), Shopify value "$27.99" (rejected)
Scenario: New product publish creates Shopify listing
Given product "New Arrival Shirt" is in "DRAFT" status in POS
And the product has a primary image
When staff publishes the product (Draft to Active)
And channel "Online" is enabled for this product
Then a new Shopify product should be created
And the primary image should be pushed to Shopify
And Shopify product status should be set to "active"
And the Shopify product ID should be stored in the sync mapping table
Scenario: Sync failure is logged and retried
Given sync mode is "POS-Master"
When staff changes price in POS
And the Shopify API returns a 500 error
Then the sync should be marked as "FAILED" in the sync log
And the change should be queued for retry
And after 3 failed retries, the sync should be flagged for manual review
And the admin sync dashboard should show the failure
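The retry and conflict behaviors in the scenarios above can be sketched in a few lines. This is a minimal sketch, assuming dict-shaped sync-log entries and a configured set of POS-priority fields; none of these names come from the Shopify API.

```python
def apply_sync_result(entry, success, max_retries=3):
    """Update a sync-log entry after a push attempt (illustrative shape)."""
    if success:
        entry["status"] = "SUCCESS"
        entry["retries"] = 0
        return entry
    entry["retries"] = entry.get("retries", 0) + 1
    if entry["retries"] >= max_retries:
        entry["status"] = "NEEDS_MANUAL_REVIEW"  # surfaces on the sync dashboard
    else:
        entry["status"] = "FAILED"               # re-queued for retry
    return entry

def resolve_conflict(field, pos_value, shopify_value, pos_priority_fields):
    """POS-priority conflict resolution for a shared field (sketch).

    Returns a conflict-log record showing applied and rejected values.
    """
    if field in pos_priority_fields:
        return {"applied": pos_value, "rejected": shopify_value, "winner": "POS"}
    return {"applied": shopify_value, "rejected": pos_value, "winner": "SHOPIFY"}
```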
Feature: Label Printing
As a retail staff member
I need to print barcode labels and price tags
So that products are properly tagged for sale
Scenario: Print barcode labels for received PO items
Given PO "PO-2026-00042" has just been received with 48 units of "Air Max 90"
When the receive is confirmed
Then the system should prompt "Print labels for 48 received items?"
When staff clicks "Print Labels"
And selects template "Standard Barcode 50x25"
And sets quantity to 48
And selects printer "Zebra-Stockroom"
And clicks "Print"
Then 48 labels should be sent to "Zebra-Stockroom"
And the print log should record the job
Scenario: Auto-prompt on price change
Given product "Classic Tee" has a printed shelf tag at "$29.99"
When manager changes price to "$24.99"
Then the system should prompt "Price changed. Print new shelf tags?"
And the "Reprint Needed" report should include "Classic Tee"
Scenario: Batch print clearance stickers
Given 12 products have been marked down for clearance
When staff selects all 12 products from the clearance collection
And clicks "Print Labels"
And selects template "Clearance Sticker 40x30"
Then each label should show markdown price, original price (strikethrough), discount %, and "CLEARANCE" badge
And 12 labels should be printed
Scenario: Print labels on receipt printer fallback
Given no dedicated label printer is configured at "Store C"
When staff attempts to print labels
Then the system should offer the receipt printer as fallback
And labels should be formatted for 80mm receipt paper width
Feature: Product Search at POS
As a POS operator
I need to find products quickly using multiple search methods
So that checkout is fast and efficient
Scenario: Fuzzy search handles typos
Given product "Oxford Button-Down Shirt" exists
When staff types "oxfrd" in the search bar
Then "Oxford Button-Down Shirt" should appear in search results
And results should return within 200ms
Scenario: SKU exact match ranks highest
Given product "Classic Tee" has SKU "BLK-TEE-001"
And product "Black Tee Dress" has name containing "Tee"
When staff searches "BLK-TEE-001"
Then "Classic Tee" should be the first result (exact SKU match)
Scenario: Auto-complete shows suggestions after 2 characters
When staff types "Cl" in the search bar
Then auto-complete should show suggestions including "Classic Tee", "Classic Oxford", "Clearance Items"
And suggestions should appear within 150ms
Scenario: Out-of-stock product shows substitution suggestions
Given product "Classic Tee" has 0 units available at current location
And "Store B" has 12 units of "Classic Tee"
And "V-Neck Tee" is in the same category at "$27.99"
When staff searches and selects "Classic Tee"
Then the system should show "Out of stock at this location"
And suggest: "Available at Store B (12 units)"
And suggest: "Similar: V-Neck Tee $27.99 (8 in stock)"
Scenario: Quick-add button adds product to cart in one tap
Given "Classic Tee" is configured as quick-add button at position 1
When staff taps the quick-add button
Then "Classic Tee" should be added to cart with qty 1
And no search or product detail view should be required
Scenario: Saved filter retrieves matching products
Given staff saved a filter named "Nike Low Stock" with brand "Nike" and stock status "Low Stock"
When staff selects "Nike Low Stock" from saved filters
Then only Nike products with stock below low_stock_threshold should be displayed
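The ranking behavior in these scenarios, exact SKU/barcode matches first, then typo-tolerant name matches, can be sketched with the standard library. Illustrative only: production search would sit on a real full-text index, not `difflib`, and all names here are assumptions.

```python
from difflib import SequenceMatcher

def search_products(query, products, threshold=0.5):
    """Rank results: exact identifier matches first, then fuzzy name matches."""
    q = query.lower()
    exact, fuzzy = [], []
    for p in products:
        if q in (p["sku"].lower(), p.get("barcode", "").lower()):
            exact.append(p)  # exact SKU/barcode match always ranks first
            continue
        # Best fuzzy score against any word of the product name
        # tolerates typos like "oxfrd" for "Oxford".
        score = max(SequenceMatcher(None, q, w).ratio()
                    for w in p["name"].lower().split())
        if score >= threshold:
            fuzzy.append((score, p))
    fuzzy.sort(key=lambda sp: sp[0], reverse=True)
    return exact + [p for _, p in fuzzy]
```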
Feature: Catalog Permissions
Feature: Catalog Role-Based Permissions
As a tenant admin
I need to control who can edit catalog fields
So that sensitive data is protected from unauthorized changes
Scenario: Staff role is view-only
Given user "Tom" has role "Staff"
When "Tom" opens product "Classic Tee" detail page
Then all fields should be displayed as read-only
And no "Edit" or "Save" buttons should be visible
And "Tom" should not see "Create Product" option in navigation
Scenario: Buyer cannot change price
Given user "Sarah" has role "Buyer"
When "Sarah" edits product "Classic Tee"
Then the "base_price" field should be read-only
And the "compare_at_price" field should be read-only
But the "cost" field should be editable
And the "vendor_cost" field should be editable
Scenario: Manager cannot change cost
Given user "Mike" has role "Manager"
When "Mike" edits product "Classic Tee"
Then the "cost" field should be read-only
And the "vendor_cost" field should be read-only
But the "base_price" field should be editable
And the "lifecycle_status" field should be editable
Scenario: Unauthorized action is blocked and logged
Given user "Tom" has role "Staff"
When "Tom" attempts to call PUT /products/{id} via API
Then the request should return 403 Forbidden
And a permission violation should be logged with user, role, and attempted action
And the violation should appear in the "Permission Violations" report
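The field-level rules exercised above (Staff view-only, Buyer edits cost but not price, Manager edits price but not cost) can be modeled as a per-role set of editable fields, with blocked attempts logged. The table contents and names below are illustrative defaults; per the stories, tenants can reconfigure them.

```python
# Editable fields per catalog role (illustrative defaults).
# Staff is view-only, so it has no editable fields.
EDITABLE_FIELDS = {
    "ADMIN":   {"base_price", "compare_at_price", "cost", "vendor_cost", "lifecycle_status"},
    "BUYER":   {"cost", "vendor_cost"},
    "MANAGER": {"base_price", "compare_at_price", "lifecycle_status"},
    "STAFF":   set(),
}

def check_edit(role, field, audit_log):
    """Allow or block a field edit; blocked attempts are logged (403 path)."""
    allowed = field in EDITABLE_FIELDS.get(role, set())
    if not allowed:
        audit_log.append({"role": role, "field": field, "result": "403_FORBIDDEN"})
    return allowed
```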
Feature: Product Performance Analytics
As a merchandising manager
I need to see product performance metrics
So that I can make data-driven inventory and pricing decisions
Scenario: Product detail shows embedded metrics
Given product "Classic Tee" has been active for 90 days
And has sold 450 units out of 600 received
And current stock is 150 units across all locations
And avg daily sales is 5 units
When manager opens the product detail page
Then sell-through rate should show "75%"
And days of supply should show "30 days"
And sales velocity should show "35.0/wk" with sparkline
And ABC classification badge should be displayed
Scenario: ABC classification calculated monthly
Given it is the 1st of the month
And the monthly ABC job runs
Then the top 20% of products by trailing 12-month revenue should be classified "A"
And the next 30% should be classified "B"
And the bottom 50% should be classified "C"
And products active less than 60 days should be classified "NEW"
Scenario: Dead stock report identifies zero-sale products
Given product "Forgotten Widget" has had zero sales for 95 days
And product "Forgotten Widget" has 30 units on hand
And the dead stock threshold is configured at 90 days
When manager runs the "Dead Stock" report
Then "Forgotten Widget" should appear with last_sale_date, stock_qty 30, and days_without_sale 95
And recommended action should be "Review for markdown or discontinuation"
Scenario: Overstock alert fires for excess inventory
Given product "Winter Coat" has target days of supply of 30
And "Store A" has 200 units and sells 1/day (200 days of supply)
When the overstock alert report runs
Then "Winter Coat" at "Store A" should be flagged
And excess units should show 170 (200 - 30 days x 1/day)
And excess value should show the cost of 170 units
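The metric arithmetic in these scenarios (75% sell-through, 30 days of supply, 35/wk velocity, 170 excess units) follows directly from the definitions. A sketch, with hypothetical function names:

```python
def product_metrics(units_sold, units_received, stock_qty, avg_daily_sales):
    """Compute the embedded product-detail metrics (illustrative)."""
    sell_through_pct = units_sold / units_received * 100 if units_received else 0.0
    days_of_supply = stock_qty / avg_daily_sales if avg_daily_sales else float("inf")
    weekly_velocity = avg_daily_sales * 7
    return {"sell_through_pct": round(sell_through_pct, 1),
            "days_of_supply": round(days_of_supply, 1),
            "weekly_velocity": round(weekly_velocity, 1)}

def overstock_excess(stock_qty, daily_sales, target_dos):
    """Units beyond the target days-of-supply threshold."""
    return max(0, stock_qty - int(target_dos * daily_sales))
```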
4. Inventory Module
Module 4: Inventory Management (Sections 4.1 – 4.7)
4.1 Overview & Scope
The Inventory Management module governs the complete lifecycle of physical stock within the multi-tenant POS system – from procurement through vendor purchase orders, to warehouse and store receiving, through internal logistics and transfers, and into auditing via physical counts and manual adjustments. It is the operational backbone that ensures every unit of merchandise is tracked, accounted for, and available for sale at the right location at the right time.
4.1.1 Executive Summary
Retail clothing operations across five stores and one HQ warehouse demand real-time, accurate inventory visibility. A single garment may be ordered from a vendor, received at the warehouse, transferred to a retail store, reserved for a customer’s online order, counted during a cycle count, adjusted after discovery of damage, and ultimately sold at the point of sale. Each of these events must be captured, validated, and reflected in the system’s inventory balances within seconds.
Module 4 provides the business rules, workflows, data models, and integration points that make this possible. It covers seven functional domains:
- Procurement – Creating, approving, submitting, and tracking purchase orders to vendors (Section 4.3).
- Receiving – Inspecting and accepting inbound inventory from any source – PO shipments, transfers, customer returns, and vendor RMA replacements (Section 4.4).
- Logistics – Inter-store and warehouse-to-store transfers are documented in Module 5 (Transfers & Logistics). Module 4 provides the inventory status and reservation primitives that Module 5 depends on.
- Auditing – Physical stock counts and manual adjustments that reconcile system quantities with reality (Sections 4.6 and 4.7).
- Costing – Inventory valuation via weighted average cost is applied at receiving time and propagated through the system. Cost data feeds into Module 1 (Sales) for margin calculation.
- Integration – Inventory events trigger real-time updates to the POS terminals (Module 1), the catalog (Module 3), and the movement history audit trail.
- Operations – Reorder management, dead stock detection, and minimum display quantity monitoring ensure proactive inventory health (Section 4.5).
4.1.2 Module Dependencies
Module 4 does not operate in isolation. It depends on and is depended upon by multiple other modules in the system.
flowchart LR
M1["Module 1\nSales & POS"]
M3["Module 3\nCatalog"]
M4["Module 4\nInventory"]
M5["Module 5\nTransfers & Logistics"]
M6["Module 6\nReporting"]
M3 -->|Product data, variants,\nvendor links, barcodes| M4
M4 -->|Available qty per location,\nreservation status| M1
M1 -->|Sale committed → decrement,\nVoid → release reservation| M4
M4 -->|Inventory status,\nreservation holds| M5
M5 -->|Transfer receive → increment,\nTransfer ship → decrement| M4
M4 -->|Stock levels, velocity,\ncost data| M6
M1 -->|Sales velocity data| M4
style M4 fill:#2d6a4f,stroke:#1b4332,color:#fff
style M1 fill:#264653,stroke:#1d3557,color:#fff
style M3 fill:#264653,stroke:#1d3557,color:#fff
style M5 fill:#264653,stroke:#1d3557,color:#fff
style M6 fill:#264653,stroke:#1d3557,color:#fff
Upstream dependencies (Module 4 consumes):
| Source Module | Data Consumed | Purpose |
|---|---|---|
| Module 3 (Catalog) | Product ID, variant ID, SKU, barcode, vendor-product links, vendor cost | Identify what is being counted, received, or ordered. Vendor cost used for PO line items. |
| Module 1 (Sales) | Sales velocity per product per location, sale events (commit, void, cancel) | Drive reorder point calculations and inventory decrements/releases. |
| Module 5 (Transfers) | Transfer ship and receive events | Trigger IN_TRANSIT status changes and inventory increments at destination. |
Downstream consumers (Module 4 provides):
| Consumer Module | Data Provided | Purpose |
|---|---|---|
| Module 1 (Sales) | Available quantity per product per location, reservation status | POS checks available qty before completing a sale. Displays stock info to staff. |
| Module 5 (Transfers) | Inventory status, available qty, reservation holds | Transfer system checks available qty before allowing outbound shipment. |
| Module 6 (Reporting) | Stock levels, cost data, velocity, count variances, adjustment history | Inventory reports, shrinkage analysis, days-of-supply calculations. |
4.1.3 Functional Scope
The following table enumerates the functional areas covered by Module 4 and their section references.
| Domain | Section | Description |
|---|---|---|
| Inventory Status Model | 4.2 | Six-status state machine governing what can be sold, transferred, or must be held. |
| Reservation Model | 4.2 | Reserve inventory for carts, parked transactions, transfers, online orders, and hold-for-pickup. |
| Minimum Display Quantity | 4.2 | Advisory warnings when stock drops below configured floor display minimums. |
| Purchase Orders | 4.3 | Full PO lifecycle from draft through receiving and close. |
| PO Approval Workflow | 4.3 | Threshold-based approval routing for high-value purchase orders. |
| Receiving & Inspection | 4.4 | Unified receiving workflow for POs, transfers, returns, and RMA replacements. |
| Discrepancy Handling | 4.4 | Triple-approach handling of receiving variances: note, RMA draft, quarantine. |
| Non-PO Receiving | 4.4 | Accept inventory without a purchase order using mandatory reason codes. |
| Return-to-Stock | 4.4 | Customer return items re-enter available inventory. |
| Reorder Management | 4.5 | Velocity-based reorder points with auto-generated draft POs. |
| Static Override | 4.5 | Manager-locked manual reorder points overriding dynamic calculations. |
| Dead Stock Detection | 4.5 | Alert on products with zero sales velocity over configurable period. |
| Inventory Counting | 4.6 | Five count types with configurable freeze/snapshot modes. |
| Inventory Adjustments | 4.7 | Manual corrections with mandatory manager approval and custom reason codes. |
4.1.4 Key Business Rules Summary
The following rules apply across all Module 4 operations:
- Only AVAILABLE stock can be sold at POS. Inventory in any other status (QUARANTINE, DAMAGED, RESERVED, IN_TRANSIT, PENDING_INSPECTION) is excluded from the sellable quantity displayed to cashiers.
- Only AVAILABLE stock can be transferred between locations. Transfer requests that would reduce available stock below zero are rejected.
- Every inventory change is logged. All status transitions, quantity changes, and cost updates create movement records in the audit trail (see Module 3, Section 3.16 – Movement History).
- Inventory is tracked per product, per variant, per location, per status. A single product may have quantities spread across multiple statuses at a single location simultaneously.
- All monetary values use the tenant’s configured currency. Multi-currency is not supported in v1.
- Tenant isolation is enforced at the data layer. Every inventory record carries a `tenant_id` foreign key. Cross-tenant queries are impossible by design.
4.1.5 Inventory Balance Equation
The system maintains the following balance equation at all times for each product-variant-location combination:
Available = On-Hand - Reserved - In-Transit - Quarantine - Damaged
Where:
| Term | Definition |
|---|---|
| On-Hand | Total physical units at the location (all statuses combined). This is what you would count if you physically counted every item. |
| Available | Units that can be sold or transferred right now. This is the number displayed to POS staff. |
| Reserved | Units allocated to a pending sale cart, parked transaction, outbound transfer, online order, or hold-for-pickup. Not yet physically moved but committed. |
| In-Transit | Units that have shipped from this location to another location but have not yet been received at the destination. Decremented from source available, not yet incremented at destination. |
| Quarantine | Units held for inspection or quality review. Cannot be sold or transferred. |
| Damaged | Units identified as damaged. Cannot be sold. May be written off or returned to vendor via RMA. |
The system does not store Available as a separate field. It is always computed from the status-based quantity fields. This ensures the balance equation is always consistent and cannot drift due to bugs in update logic.
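A minimal sketch of this derived-field rule, mirroring the balance equation term for term (the class and field names are illustrative, not the actual schema):

```python
from dataclasses import dataclass

@dataclass
class InventoryBalance:
    """Quantity fields for one product-variant-location combination.

    `available` is computed, never stored, so the balance equation
    cannot drift out of sync with the status quantities.
    """
    on_hand: int = 0
    reserved: int = 0
    in_transit: int = 0
    quarantine: int = 0
    damaged: int = 0

    @property
    def available(self) -> int:
        # Available = On-Hand - Reserved - In-Transit - Quarantine - Damaged
        return (self.on_hand - self.reserved - self.in_transit
                - self.quarantine - self.damaged)
```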
4.2 Inventory Status Model
4.2.1 Inventory Status State Machine
Each unit of inventory at each location carries a status that controls whether it can be sold, transferred, or must be held for inspection. The system supports six statuses organized into a state machine with well-defined transitions.
stateDiagram-v2
[*] --> AVAILABLE: Stock Received & Inspected
AVAILABLE --> QUARANTINE: Quality Concern Flagged
AVAILABLE --> RESERVED: Allocated to Order/Transfer
AVAILABLE --> DAMAGED: Damage Identified
QUARANTINE --> AVAILABLE: Inspection Passed
QUARANTINE --> DAMAGED: Inspection Failed
PENDING_INSPECTION --> AVAILABLE: Inspection Passed
PENDING_INSPECTION --> QUARANTINE: Needs Further Review
PENDING_INSPECTION --> DAMAGED: Inspection Failed
DAMAGED --> WRITE_OFF: Unrepairable
DAMAGED --> VENDOR_RMA: Return to Vendor
IN_TRANSIT --> AVAILABLE: Transfer Received & OK
IN_TRANSIT --> PENDING_INSPECTION: Received - Needs Inspection
RESERVED --> AVAILABLE: Reservation Released
note right of AVAILABLE
Sellable at POS
Transferable between locations
end note
note right of QUARANTINE
Blocked from sale
Blocked from transfer
Awaiting inspection
end note
note right of DAMAGED
Blocked from sale
Can be written off or returned to vendor
end note
note right of RESERVED
Allocated but not yet shipped/sold
Decremented from available count
end note
note right of IN_TRANSIT
Moving between locations
Not available at source or destination
end note
Status Definitions:
| Status | Sellable | Transferable | Description |
|---|---|---|---|
AVAILABLE | Yes | Yes | Stock is on the shelf and ready for sale or transfer. |
QUARANTINE | No | No | Stock is held pending quality inspection. Triggered by a staff member flagging a quality concern, or by a receiving inspection that requires further review. |
DAMAGED | No | No | Stock is identified as damaged and cannot be sold. Terminal states from here are WRITE_OFF (removed from inventory) or VENDOR_RMA (returned to vendor for credit or replacement). |
PENDING_INSPECTION | No | No | Stock has arrived (from transfer or PO) and needs inspection before it can be placed on the sales floor. |
RESERVED | No | No | Stock is allocated to a specific purpose (sale cart, parked transaction, outbound transfer, online order, or hold-for-pickup) but has not yet been physically moved or sold. |
IN_TRANSIT | No | No | Stock has shipped from the source location but has not yet arrived at the destination location. It is not available at either location during transit. |
Business Rules:
- Only `AVAILABLE` stock can be sold at POS. The POS terminal displays only the AVAILABLE quantity as the sellable count.
- Only `AVAILABLE` stock can be transferred between locations. Transfer requests that would reduce AVAILABLE stock below zero at the source location are rejected.
- `QUARANTINE` and `DAMAGED` stock is blocked from sale and transfer. It must be inspected and resolved before it can re-enter the sellable pool.
- `RESERVED` stock is decremented from the available count but not yet physically moved. If the reservation is released (e.g., cart abandoned, parked transaction voided), the stock returns to AVAILABLE.
- All status changes require a reason code and are logged to the movement history audit trail.
- Status changes can only follow the transitions defined in the state machine above. Any attempt to make an invalid transition (e.g., QUARANTINE directly to RESERVED) is rejected by the API.
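The transitions in the state machine above can be expressed as an allowed-transitions table that the API consults before accepting a change. A minimal sketch (names follow the state diagram; the validation function itself is illustrative, not the platform’s actual code):

```python
# Allowed next-states per current status, mirroring the state diagram above.
ALLOWED_TRANSITIONS: dict[str, set[str]] = {
    "AVAILABLE": {"QUARANTINE", "RESERVED", "DAMAGED"},
    "QUARANTINE": {"AVAILABLE", "DAMAGED"},
    "PENDING_INSPECTION": {"AVAILABLE", "QUARANTINE", "DAMAGED"},
    "DAMAGED": {"WRITE_OFF", "VENDOR_RMA"},
    "IN_TRANSIT": {"AVAILABLE", "PENDING_INSPECTION"},
    "RESERVED": {"AVAILABLE"},
}

def validate_transition(current: str, target: str, reason_code: str) -> None:
    """Reject invalid transitions; every status change needs a reason code."""
    if not reason_code:
        raise ValueError("status change requires a reason code")
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid transition {current} -> {target}")
```

For example, `QUARANTINE -> RESERVED` is absent from the table and is rejected, matching the rule above.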
Inventory Status Data Model
| Field | Type | Required | Description |
|---|---|---|---|
product_id | UUID | Yes | Reference to product (FK to catalog) |
variant_id | UUID | No | Reference to specific variant, if applicable (FK to catalog) |
location_id | UUID | Yes | Reference to store/warehouse location |
status | Enum | Yes | AVAILABLE, QUARANTINE, DAMAGED, PENDING_INSPECTION, RESERVED, IN_TRANSIT |
qty | Integer | Yes | Quantity in this status at this location |
last_status_change_at | DateTime | Yes | Timestamp of most recent status change |
changed_by | UUID | Yes | User who made the status change |
reason_code | String | Yes | Reason for current status (e.g., QUALITY_CONCERN, TRANSFER_ALLOCATED, TRANSIT_DAMAGE, CART_RESERVE, PARKED_RESERVE) |
tenant_id | UUID | Yes | Owning tenant |
4.2.2 Reservation Model
Reservations temporarily hold inventory for a specific purpose, preventing it from being sold or transferred to another customer or location. The reservation model is central to ensuring that the POS system does not oversell stock in a multi-terminal, multi-channel environment.
When a reservation is created, the specified quantity is moved from AVAILABLE status to RESERVED status. When the reservation is committed (sale completed, transfer shipped), the reserved quantity is decremented from inventory. When the reservation is released (cart abandoned, transaction voided), the reserved quantity returns to AVAILABLE.
Reservation Types
The system supports five distinct reservation types, each with its own lifecycle and rules:
| Type | Trigger | Hold Duration | Behavior | Release Trigger |
|---|---|---|---|---|
| Sale Cart | Item added to POS cart | Until payment or void | Hard reserve. Other terminals see reduced available qty. | Payment completes (commit) or cart voided/abandoned (release). |
| Parked Transaction | Sale saved as parked | Until recalled or expired | Soft reserve. Other terminals see reduced available qty but with a visual warning: “2 units reserved by parked sale P-0045.” Staff can still sell through the soft reserve if they override the warning. | Parked transaction recalled and completed (commit), or voided (release), or expired after configurable timeout (release). |
| Transfer | Transfer approved and picking starts | Until transfer shipped or cancelled | Hard reserve at source location. Items being picked for an outbound transfer are reserved to prevent them from being sold before they ship. | Transfer shipped (status moves to IN_TRANSIT) or transfer cancelled (release). |
| Online Order | Online order placed and allocated to nearest store | Until fulfilled or cancelled | Hard reserve at the assigned store. The nearest-store allocation algorithm (see Module 1, Section 1.10 if applicable) assigns the order to the store with the most available stock. | Order fulfilled (commit) or order cancelled (release). |
| Hold-for-Pickup | Staff places a hold for a customer | Configurable expiry (default: 48 hours) | Hard reserve with auto-release on expiry. Customer has a window to pick up. If not picked up, system auto-releases the hold and notifies the store. | Customer picks up (commit), or expiry timer elapses (auto-release), or staff manually releases. |
Reservation Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
product_id | UUID | Yes | FK to product |
variant_id | UUID | No | FK to variant (if applicable) |
location_id | UUID | Yes | Location where the stock is reserved |
qty | Integer | Yes | Quantity reserved |
type | Enum | Yes | SALE_CART, PARKED_TRANSACTION, TRANSFER, ONLINE_ORDER, HOLD_FOR_PICKUP |
status | Enum | Yes | ACTIVE, COMMITTED, RELEASED, EXPIRED |
source_document_id | UUID | Yes | FK to the source document (sale ID, parked transaction ID, transfer ID, online order ID, or hold ID) |
source_document_type | String | Yes | Type of source document for polymorphic FK resolution |
reserved_by | UUID | Yes | User who created the reservation |
reserved_at | DateTime | Yes | Timestamp when the reservation was created |
expires_at | DateTime | No | Expiry timestamp for time-limited reservations (parked transactions, hold-for-pickup). Null for reservations without expiry. |
committed_at | DateTime | No | Timestamp when the reservation was committed (sale completed, transfer shipped). |
released_at | DateTime | No | Timestamp when the reservation was released (void, cancel, expiry). |
release_reason | String | No | Reason for release: VOID, CANCEL, EXPIRY, OVERRIDE |
tenant_id | UUID | Yes | Owning tenant |
Reservation Lifecycle State Machine
stateDiagram-v2
[*] --> ACTIVE: Reserve Created
ACTIVE --> COMMITTED: Sale Paid / Transfer Shipped / Pickup Completed
ACTIVE --> RELEASED: Void / Cancel / Manual Release
ACTIVE --> EXPIRED: Expiry Timer Elapsed
COMMITTED --> [*]
RELEASED --> [*]
EXPIRED --> [*]
note right of ACTIVE
Qty moved from AVAILABLE to RESERVED
Other terminals see reduced available qty
end note
note right of COMMITTED
Qty decremented from inventory
Reservation fulfilled
end note
note right of RELEASED
Qty moved from RESERVED back to AVAILABLE
Stock returned to sellable pool
end note
note right of EXPIRED
Auto-triggered by background job
Qty returned to AVAILABLE
Notification sent to staff
end note
Business Rules:
- When a reservation is created, the system atomically decrements the AVAILABLE status qty and increments the RESERVED status qty for the product-variant-location combination.
- When a reservation is committed, the RESERVED qty is decremented (stock leaves inventory via sale, or moves to IN_TRANSIT for transfer).
- When a reservation is released or expired, the RESERVED qty is decremented and the AVAILABLE qty is incremented (stock returns to sellable pool).
- Reservations are checked by a background job every 5 minutes for expiry. Expired reservations are auto-released and a notification is sent to the staff member who created the reservation.
- Parked transaction override: If a staff member at another terminal attempts to sell a product that has units reserved by a parked transaction, the system shows a warning: “2 of 5 units reserved by parked sale P-0045 at Terminal 2. Proceed anyway?” If the staff member confirms, the system sells through the available stock and the parked transaction’s reserved quantity is reduced when it is recalled (the system reconciles at recall time).
- Concurrent reservation conflict: If two terminals attempt to reserve the last available unit simultaneously, the first transaction to commit the database write wins. The second terminal receives an error: “Insufficient available stock. 0 units available.” This is enforced by database-level row locking on the inventory status record.
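The AVAILABLE↔RESERVED bookkeeping described in these rules can be sketched as follows. This is an in-memory illustration only: in the real system the same invariant is enforced atomically inside a database transaction with row-level locking, and the function names are not from the platform.

```python
# Per product-variant-location status counters (illustrative shape).
def reserve(stock: dict[str, int], qty: int) -> None:
    """Create a reservation: move qty from AVAILABLE to RESERVED."""
    if qty > stock["AVAILABLE"]:
        # Mirrors the conflict error shown to the losing terminal.
        raise ValueError(
            f"Insufficient available stock. {stock['AVAILABLE']} units available.")
    stock["AVAILABLE"] -= qty
    stock["RESERVED"] += qty

def release(stock: dict[str, int], qty: int) -> None:
    """Void/cancel/expiry: reserved qty returns to the sellable pool."""
    stock["RESERVED"] -= qty
    stock["AVAILABLE"] += qty

def commit(stock: dict[str, int], qty: int) -> None:
    """Sale completed or transfer shipped: reserved stock leaves inventory."""
    stock["RESERVED"] -= qty
```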
4.2.4 Minimum Display Quantity
Retail clothing stores rely on visual merchandising – an empty rack or sparse display reduces sales. The minimum display quantity feature provides an advisory warning system that alerts store staff when the available inventory at a location drops below a configured floor display minimum.
Key Behaviors:
- Minimum display quantity is configured per product (or variant) per location. Not all products require a minimum display – the field is optional and defaults to null (no warning).
- The warning is advisory only. It does not block sales, transfers, or any other operation. It is a soft alert that appears on the store dashboard and in the inventory list view.
- When available qty at a location drops below the configured minimum display qty, the system creates a dashboard alert: “Product XYZ at Store A has 1 unit remaining (minimum display: 3). Consider replenishment.”
- The alert clears automatically when stock is replenished above the minimum display qty (via receiving, transfer, or adjustment).
- Minimum display qty is distinct from the reorder point (Section 4.5). The reorder point triggers purchase order generation. The minimum display qty triggers a store-level visual merchandising alert.
Minimum Display Quantity Data Model
| Field | Type | Required | Description |
|---|---|---|---|
product_id | UUID | Yes | FK to product |
variant_id | UUID | No | FK to variant (if applicable). When set, the min display applies to the specific variant. When null, it applies to the product aggregate. |
location_id | UUID | Yes | FK to location. Min display is set per location since different stores may have different display requirements. |
min_display_qty | Integer | Yes | The minimum number of units that should be on display at this location. |
is_active | Boolean | Yes | Whether this minimum display rule is active. Allows temporary disabling without deleting the configuration. |
set_by | UUID | Yes | User who configured the minimum display qty. |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Minimum display quantity alerts appear on the store dashboard under a “Low Display Stock” section.
- Alerts are generated when `AVAILABLE qty < min_display_qty` at a location. The check runs whenever inventory changes at the location (sale, transfer, adjustment, receive).
- Alerts include a suggested action: “Request transfer from [location with highest available qty]” with a one-click “Request Transfer” button.
- Minimum display quantity does NOT factor into the reorder point calculation (Section 4.5). They are independent systems.
- Setting a minimum display quantity of 0 is equivalent to disabling the alert for that product-location combination.
4.3 Purchase Orders & Procurement
Scope: Creating, approving, submitting, receiving, and closing purchase orders to replenish inventory from vendors. The PO workflow supports approval routing for high-value orders, partial receives, variance tracking, inspection steps, overdue alerts, and auto-generation from low-stock alerts.
4.3.1 Purchase Order State Machine
stateDiagram-v2
[*] --> DRAFT: PO Created
DRAFT --> PENDING_APPROVAL: Submit for Approval (above threshold)
DRAFT --> SUBMITTED: Submit to Vendor (below threshold / auto-approved)
PENDING_APPROVAL --> SUBMITTED: Manager Approves
PENDING_APPROVAL --> REJECTED: Manager Rejects
REJECTED --> DRAFT: Revise and Resubmit
SUBMITTED --> PARTIALLY_RECEIVED: Partial Shipment Arrived
PARTIALLY_RECEIVED --> PARTIALLY_RECEIVED: Additional Shipment
PARTIALLY_RECEIVED --> FULLY_RECEIVED: All Items Received
SUBMITTED --> FULLY_RECEIVED: Full Shipment Arrived
FULLY_RECEIVED --> CLOSED: PO Closed
DRAFT --> CANCELLED: Cancel Before Submit
SUBMITTED --> CANCELLED: Cancel After Submit
CANCELLED --> [*]
CLOSED --> [*]
note right of DRAFT
Editable line items
No inventory impact
Not sent to vendor
end note
note right of PENDING_APPROVAL
PO total exceeds approval threshold
Awaiting manager/owner approval
Line items locked for review
end note
note right of SUBMITTED
Sent to vendor (email/EDI/manual)
Awaiting shipment
Line items locked
end note
note right of PARTIALLY_RECEIVED
Some items received
Inventory incremented for received qty
Remaining items still expected
end note
note right of FULLY_RECEIVED
All line items received
Pending final review
Ready to close
end note
4.3.2 PO Approval Workflow
Purchase orders above a configurable dollar threshold require manager or owner approval before submission to the vendor. This prevents unauthorized large purchases while allowing routine restocking to flow without friction.
Approval Threshold Configuration:
| Setting | Type | Default | Description |
|---|---|---|---|
po_auto_approve_threshold | Decimal(10,2) | $2,000.00 | PO total value at or below this amount is auto-approved and moves directly to SUBMITTED. |
po_approval_role | Enum | MANAGER | Minimum role required to approve POs above threshold. Options: MANAGER, OWNER. |
po_approval_notify | Boolean | true | Whether to send push notification to approvers when a PO is pending. |
Approval Rules:
- When a staff member clicks “Submit” on a PO whose total value (`SUM of line_total`) is at or below the `po_auto_approve_threshold`, the PO moves directly from DRAFT to SUBMITTED. No approval step is needed.
- When a staff member clicks “Submit” on a PO whose total value exceeds the `po_auto_approve_threshold`, the PO moves from DRAFT to PENDING_APPROVAL. A notification is sent to all users with the configured approval role at the PO’s destination location.
- The approver can APPROVE (moves to SUBMITTED), REJECT with a reason (moves to REJECTED), or request modifications (the PO creator is notified to revise).
- A REJECTED PO can be revised (returns to DRAFT status with editable line items) and resubmitted.
- The approval threshold is configurable per tenant in tenant settings. Different tenants may have different spending limits.
- Auto-generated draft POs from the reorder engine (Section 4.5) follow the same approval rules – they are not exempt from the threshold check.
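The submit-time decision reduces to one threshold comparison. A minimal sketch (function name and signature are illustrative; the threshold default matches the configuration table above):

```python
from decimal import Decimal

def submit_po(line_totals: list[Decimal],
              auto_approve_threshold: Decimal = Decimal("2000.00")) -> str:
    """Decide the post-submit status per the tenant's approval threshold."""
    total = sum(line_totals, Decimal("0"))
    # At or below the threshold: auto-approved, goes straight to the vendor.
    if total <= auto_approve_threshold:
        return "SUBMITTED"
    # Above the threshold: routed to manager/owner approval.
    return "PENDING_APPROVAL"
```

Note the boundary: a PO totaling exactly the threshold is auto-approved (“at or below”).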
Approval Workflow Sequence
sequenceDiagram
autonumber
participant S as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
participant NOTIF as Notification Service
participant M as Manager/Owner
S->>UI: Click "Submit PO"
UI->>API: POST /purchase-orders/{id}/submit
API->>DB: Calculate PO Total (SUM of line_total)
API->>DB: Lookup Tenant's po_auto_approve_threshold
alt PO Total <= Threshold (Auto-Approve)
API->>DB: Update Status: SUBMITTED
API->>DB: Lock Line Items
API-->>UI: "PO Submitted to Vendor"
Note right of API: PO proceeds to vendor submission
else PO Total > Threshold (Requires Approval)
API->>DB: Update Status: PENDING_APPROVAL
API->>DB: Lock Line Items for Review
API->>NOTIF: Send Approval Request to Manager(s)
API-->>UI: "PO Sent for Manager Approval"
NOTIF-->>M: "PO #PO-2026-00042 ($4,500) awaiting your approval"
alt Manager Approves
M->>UI: Review PO -> Click "Approve"
UI->>API: POST /purchase-orders/{id}/approve
API->>DB: Update Status: SUBMITTED
API->>DB: Record approved_by, approved_at
API->>NOTIF: Notify Creator: "PO Approved"
NOTIF-->>S: "Your PO #PO-2026-00042 was approved"
else Manager Rejects
M->>UI: Review PO -> Click "Reject"
M->>UI: Enter Rejection Reason
UI->>API: POST /purchase-orders/{id}/reject
API->>DB: Update Status: REJECTED
API->>DB: Record rejection_reason
API->>NOTIF: Notify Creator: "PO Rejected"
NOTIF-->>S: "Your PO #PO-2026-00042 was rejected: reason"
end
end
4.3.3 Purchase Order Lifecycle
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant DB as DB
participant V as Vendor
Note over U, V: Step 1: Create Purchase Order
U->>UI: Click "New Purchase Order"
UI->>UI: Select Vendor from List
UI->>API: GET /vendors/{id}/products
API-->>UI: Return Vendor's Product Catalog with Vendor Costs
loop Add Line Items
U->>UI: Select Product
UI->>UI: Auto-Fill Vendor SKU, Vendor Cost
U->>UI: Enter Quantity Ordered
U->>UI: Set Expected Delivery Date
UI->>UI: Calculate Line Total (qty x unit_cost)
end
UI->>UI: Display PO Summary (line count, total cost)
U->>UI: Add Notes (optional)
U->>UI: Click "Save Draft"
UI->>API: POST /purchase-orders
API->>DB: Create PO Record (Status: DRAFT)
Note right of DB: Auto-generated PO Number: PO-2026-00042
API-->>UI: PO #PO-2026-00042 Created
Note over U, V: Step 2: Submit to Vendor
U->>UI: Review PO -> Click "Submit"
UI->>API: POST /purchase-orders/{id}/submit
Note over API: Approval check runs here (see Section 4.3.2)
alt Email Submission
API->>V: Send PO via Email (PDF attachment)
Note right of V: Vendor receives PO email
else EDI Submission
API->>V: Transmit PO via EDI
else Manual Submission
API-->>UI: "PO marked Submitted - send manually"
Note right of UI: Staff prints PO and calls/faxes vendor
end
API->>DB: Update Status: SUBMITTED
API->>DB: Lock Line Items (no edits)
API-->>UI: PO Submitted Successfully
Note over U, V: Step 3: Receive Inventory
V-->>U: Shipment Arrives at Store/Warehouse
U->>UI: Open PO #PO-2026-00042 -> Click "Receive"
loop Receive Line Items
U->>UI: Enter Qty Received per Line Item
opt Variance Detected
UI-->>U: "Ordered: 50, Received: 48 - Enter Variance Note"
U->>UI: Enter Note: "2 units damaged in transit"
end
end
U->>UI: Click "Confirm Receive"
UI->>API: POST /purchase-orders/{id}/receive
par Inventory Updates
API->>DB: Increment Inventory (received qty per location)
API->>DB: Update PO Line Items (qty_received)
API->>DB: Record Variance Notes
end
alt All Items Received
API->>DB: Update Status: FULLY_RECEIVED
API-->>UI: "All items received"
else Partial Receive
API->>DB: Update Status: PARTIALLY_RECEIVED
API-->>UI: "Partial receive recorded - awaiting remaining"
end
Note over U, DB: Step 4 (Optional): Inspect Received Goods
opt Quality Inspection
U->>UI: Open Received Items -> Click "Inspect"
U->>UI: Mark Items as Passed / Failed
Note right of UI: Failed items logged for vendor claim
UI->>API: POST /purchase-orders/{id}/inspection
API->>DB: Record Inspection Results
end
Note over U, DB: Step 5: Close PO
U->>UI: Click "Close PO"
UI->>API: POST /purchase-orders/{id}/close
API->>DB: Update Status: CLOSED
API->>DB: Finalize Cost Records
API-->>UI: PO Closed
4.3.4 PO Header Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
po_number | String | Yes | Auto-generated: PO-{YEAR}-{SEQ} per tenant |
vendor_id | UUID | Yes | FK to vendor |
status | Enum | Yes | DRAFT, PENDING_APPROVAL, SUBMITTED, PARTIALLY_RECEIVED, FULLY_RECEIVED, CLOSED, CANCELLED, REJECTED |
destination_location_id | UUID | Yes | FK to the location where goods will be received |
total_value | Decimal(12,2) | Yes | Calculated: SUM of all line_total values |
expected_delivery_date | Date | No | Overall expected delivery date for the PO |
overdue_alert_buffer_days | Integer | No | Number of buffer days after expected_delivery_date before overdue alert triggers. Default: 3 days. |
submission_method | Enum | No | EMAIL, EDI, MANUAL. How the PO is sent to the vendor. |
notes | Text | No | Free-text notes for the PO |
auto_generated | Boolean | Yes | Whether this PO was auto-generated by the reorder engine (Section 4.5). Default: false. |
approved_by | UUID | No | Manager who approved the PO (if approval was required) |
approved_at | DateTime | No | Timestamp of approval |
rejection_reason | Text | No | Reason for rejection (if rejected) |
created_by | UUID | Yes | Staff member who created the PO |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
closed_at | DateTime | No | Timestamp when PO was closed |
4.3.5 PO Line Item Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Line item primary key |
purchase_order_id | UUID | Yes | Reference to parent PO |
product_id | UUID | Yes | Reference to product being ordered |
variant_id | UUID | No | Reference to specific variant (if applicable) |
vendor_sku | String | No | Vendor’s SKU (auto-filled from vendor-product link) |
qty_ordered | Integer | Yes | Quantity ordered from vendor |
qty_received | Integer | Yes | Quantity received so far (starts at 0) |
unit_cost | Decimal(10,2) | Yes | Cost per unit from vendor |
line_total | Decimal(10,2) | Yes | Calculated: qty_ordered x unit_cost |
expected_date | Date | No | Expected delivery date for this line |
received_date | Date | No | Actual date items were received |
variance_notes | String | No | Notes on quantity/quality discrepancies |
inspection_status | Enum | No | PENDING, PASSED, FAILED |
4.3.6 Expected Delivery Date & Overdue Alerts
Each purchase order has an expected_delivery_date field representing when the vendor is expected to deliver the goods. The system uses this date, combined with a configurable buffer period, to generate overdue alerts when a PO has not been received within the expected timeframe.
Overdue Alert Logic:
overdue_trigger_date = expected_delivery_date + overdue_alert_buffer_days
- If the current date exceeds `overdue_trigger_date` and the PO status is still `SUBMITTED` (nothing received), the system generates an overdue alert.
- If the PO status is `PARTIALLY_RECEIVED` and the current date exceeds `overdue_trigger_date`, the system generates a different alert: “PO partially received but remaining items overdue.”
- Overdue alerts appear on the purchasing dashboard and are sent as push notifications to the PO creator and the destination location’s manager.
- The `overdue_alert_buffer_days` defaults to 3 days but can be overridden per PO when the expected delivery date is uncertain (e.g., international shipments).
- Overdue alerts auto-clear when the PO reaches `FULLY_RECEIVED` or `CLOSED` status.
Business Rules:
- If `expected_delivery_date` is not set on the PO, the system falls back to the vendor’s default `lead_time_days` (from the vendor record) plus `overdue_alert_buffer_days`.
- Overdue POs appear in the Open PO Report with a visual indicator (red highlight) and are sorted to the top by default.
- The background job that checks for overdue POs runs daily at a configurable time (default: 8:00 AM local time per tenant timezone).
4.3.7 Purchase Order Features
- Auto-generate PO from low-stock alerts: When inventory drops below `reorder_point` at any location, the system generates a suggested draft PO with the primary vendor and recommended quantities based on sales velocity (see Section 4.5 for reorder management details).
- PO templates: Staff can save frequently ordered product sets as templates (e.g., “Weekly Nike Restock”) and generate new POs from templates with one click. Templates store vendor, product list, and default quantities but not dates or notes.
- Partial receives: Each receive operation records the quantity received per line item. Multiple receives accumulate until all items arrive. Each partial receive increments inventory at the destination location immediately.
- Variance tracking: When received quantity differs from ordered quantity, staff must enter a variance note. Variances are tracked for vendor performance reporting (see Section 4.3.8).
- Auto-increment PO number per tenant: PO numbers follow the format `PO-{YEAR}-{SEQUENCE}` and auto-increment per tenant. Example: `PO-2026-00001`, `PO-2026-00002`. Sequence resets annually.
- Receive to specific location: When receiving, staff selects the destination location (store or warehouse). Inventory increments at that location. The destination is pre-filled from the PO header’s `destination_location_id` but can be overridden during receiving.
- PO duplication: Staff can duplicate an existing PO (any status) to create a new DRAFT with the same vendor and line items. Useful for recurring orders.
- Approval threshold: POs above a configurable dollar threshold require manager approval before vendor submission (see Section 4.3.2).
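The per-tenant, per-year numbering scheme can be sketched as follows. In production the sequence would live in the database (so concurrent creators never collide); this in-memory class and its name are illustrative only:

```python
from collections import defaultdict
from itertools import count

class PoNumberGenerator:
    """Sketch: PO-{YEAR}-{SEQ}, auto-incrementing per tenant, resetting yearly."""
    def __init__(self) -> None:
        # One independent counter per (tenant, year) pair.
        self._seq = defaultdict(lambda: count(1))

    def next_number(self, tenant_id: str, year: int) -> str:
        return f"PO-{year}-{next(self._seq[(tenant_id, year)]):05d}"
```

Keying the counter on `(tenant_id, year)` gives both properties at once: tenants never share a sequence, and each tenant’s sequence restarts at 00001 every year.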
4.3.8 Reports: Purchase Orders
| Report | Purpose | Key Data Fields |
|---|---|---|
| Open PO Report | Track all non-closed purchase orders | PO number, vendor, status, total value, expected date, days since submitted, overdue flag |
| PO Receiving Report | Monitor receiving activity | PO number, line items, qty ordered vs received, variance %, receive date |
| Vendor Lead Time Report | Measure actual vs expected delivery | Vendor, PO count, avg expected lead time, avg actual lead time, on-time %, overdue count |
| PO Variance Report | Track quantity and quality discrepancies | PO number, line item, qty ordered, qty received, variance, variance notes |
| Cost Analysis Report | Review purchasing spend | Vendor, total PO value, product categories, avg unit cost, cost trends over time |
| Approval Pipeline Report | Monitor POs awaiting approval | PO number, total value, created by, created date, days pending, approver assigned |
| Overdue PO Report | Track POs past expected delivery | PO number, vendor, expected date, days overdue, last contact notes, status |
4.4 Receiving & Inspection
Scope: A single unified receiving workflow handles all inbound inventory regardless of source type – PO shipments, inter-store transfers, customer returns, vendor RMA replacements, and non-PO receives. This section documents the open receive mode, discrepancy handling, non-PO receiving, over-shipment handling, return-to-stock processing, and scanner-primary receiving operations.
4.4.1 Receiving Source Types
| Source Type | Origin | Example |
|---|---|---|
PO_RECEIVE | Purchase order from vendor | PO-2026-00042 shipment arrives |
TRANSFER_RECEIVE | Inter-store transfer | Transfer from Store A received at Store B |
RETURN_TO_STOCK | Customer return | Returned item added back to inventory |
RMA_REPLACEMENT | Vendor RMA replacement | Vendor sent replacement items |
NON_PO_RECEIVE | Stock received without a PO | Vendor sample, found stock, replacement |
4.4.2 Receiving Data Model – Header
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
receive_number | String | Yes | Auto-generated: RCV-{YEAR}-{SEQ} |
source_type | Enum | Yes | PO_RECEIVE, TRANSFER_RECEIVE, RETURN_TO_STOCK, RMA_REPLACEMENT, NON_PO_RECEIVE |
source_document_id | UUID | Conditional | FK to source document (PO, transfer, return, RMA). Required for all types except NON_PO_RECEIVE. |
non_po_reason_code | Enum | Conditional | Required when source_type = NON_PO_RECEIVE. See Section 4.4.6. |
location_id | UUID | Yes | Destination location where stock is received |
status | Enum | Yes | PENDING, IN_PROGRESS, COMPLETED |
received_by | UUID | Yes | Staff member processing the receive |
notes | Text | No | General notes for the receiving session |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
completed_at | DateTime | No | Timestamp when receiving was completed |
4.4.3 Receiving Data Model – Line Items
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
receive_id | UUID | Yes | Reference to parent receive record |
product_id | UUID | Yes | Reference to product |
variant_id | UUID | No | Reference to specific variant (if applicable) |
expected_qty | Integer | Yes | Quantity expected from source document. 0 for non-PO receive lines. |
received_qty | Integer | Yes | Actual quantity received |
variance | Integer | Computed | Calculated: received_qty - expected_qty |
condition | Enum | Yes | GOOD, DAMAGED, WRONG_ITEM |
condition_notes | Text | No | Notes describing the condition (especially for DAMAGED or WRONG_ITEM) |
initial_status | Enum | Yes | Inventory status assigned on receive: AVAILABLE (default for GOOD), DAMAGED, PENDING_INSPECTION |
serial_numbers[] | String[] | No | Serial numbers captured (if serial tracked) |
lot_number | String | No | Lot/batch number (if lot tracked) |
notes | Text | No | Notes on received items |
4.4.4 Open Receive Mode
Open receive mode is the primary workflow for receiving inventory against a purchase order. Staff sees the expected quantities from the PO and records the actual received quantities. Variances are automatically calculated and documented.
Workflow:
- Staff opens the PO in the receiving screen and clicks “Start Receiving.”
- The system displays all PO line items with their `qty_ordered` and current `qty_received` (from any prior partial receives).
- For each line item, staff enters (or scans – see Section 4.4.9) the actual quantity received in this shipment.
- The system calculates the variance for each line: `received_qty_this_session - (qty_ordered - qty_previously_received)`.
- If the variance is negative (short-shipped), the system highlights the line and prompts for a variance note.
- If the variance is positive (over-shipped), the system applies over-shipment rules (see Section 4.4.7).
- Staff confirms the receive. Inventory is incremented at the destination location for all GOOD items. DAMAGED items are placed in DAMAGED status. WRONG_ITEM items are flagged for RMA processing.
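The per-line variance formula in the workflow above can be sketched directly (function and parameter names are illustrative):

```python
def receive_variance(qty_ordered: int, qty_previously_received: int,
                     received_this_session: int) -> int:
    """Variance for one line in this receiving session.

    Negative = short-shipped (line is highlighted, variance note required);
    positive = over-shipped (over-shipment rules, Section 4.4.7, apply);
    zero     = line received exactly as expected.
    """
    expected_remaining = qty_ordered - qty_previously_received
    return received_this_session - expected_remaining

# Ordered 50, nothing received yet, 48 arrive: 2 units short.
print(receive_variance(50, 0, 48))  # -2
```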
Open Receive Sequence Diagram
sequenceDiagram
autonumber
participant U as Staff
participant UI as Receiving UI
participant API as Backend
participant DB as DB
U->>UI: Open PO #PO-2026-00042
UI->>API: GET /purchase-orders/{id}/lines
API-->>UI: Return Line Items with Expected Qty
U->>UI: Click "Start Receiving"
UI->>API: POST /receiving/start
API->>DB: Create Receive Record (Status: IN_PROGRESS)
API-->>UI: Receive Session #RCV-2026-00108 Started
Note over U, UI: Staff sees expected qty for each line
loop Receive Each Line Item
alt Scanner Mode (Default)
U->>UI: Scan Item Barcode
UI->>UI: Match to PO Line Item
UI->>UI: Increment Received Qty by 1
Note right of UI: Each scan = +1 unit
else Manual Entry
U->>UI: Enter Received Qty for Line
end
UI->>UI: Calculate Variance (received - expected remaining)
opt Short-Shipped (Negative Variance)
UI-->>U: "Expected: 50, Received: 48 — 2 units short"
U->>UI: Enter Variance Note
end
opt Over-Shipped (Positive Variance)
UI-->>U: "Expected: 50, Received: 55 — 5 units over"
Note right of UI: Over-shipment rules apply (Section 4.4.7)
end
opt Damaged Item Found
U->>UI: Mark Item as DAMAGED
U->>UI: Enter Condition Notes
end
opt Wrong Item Found
U->>UI: Mark Item as WRONG_ITEM
U->>UI: Enter Condition Notes
end
end
U->>UI: Click "Confirm Receive"
UI->>API: POST /receiving/{id}/confirm
par Post-Receive Updates
API->>DB: Increment AVAILABLE Inventory (GOOD items)
API->>DB: Set DAMAGED items to DAMAGED Status
API->>DB: Flag WRONG_ITEM for RMA Processing
API->>DB: Update PO Line Items (qty_received)
API->>DB: Record Variance Notes
API->>DB: Log Movement Records (RECEIVE movement type)
API->>DB: Update Receive Status: COMPLETED
end
alt All PO Items Now Received
API->>DB: Update PO Status: FULLY_RECEIVED
API-->>UI: "PO fully received"
else Remaining Items Outstanding
API->>DB: Update PO Status: PARTIALLY_RECEIVED
API-->>UI: "Partial receive recorded — awaiting remaining items"
end
4.4.5 Discrepancy Handling (Triple Approach)
When receiving reveals discrepancies between expected and actual quantities or conditions, the system applies a triple approach to ensure nothing falls through the cracks:
- Note variance on PO line (always): Every variance is recorded on the PO line item's variance_notes field with the quantity difference and the staff member's explanation. This is mandatory for all discrepancies regardless of type.
- Auto-create RMA draft for wrong/defective items: When items are marked as WRONG_ITEM or DAMAGED with a condition indicating a vendor fault (not transit damage), the system auto-creates an RMA draft record linked to the PO and the vendor. The RMA draft appears in the returns management queue for staff to review and submit to the vendor.
- Quarantine damaged items: Items marked as DAMAGED during receiving are placed in the DAMAGED inventory status at the receiving location. They are blocked from sale and transfer until resolved (write-off or vendor RMA).
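The routing logic of the triple approach can be sketched as follows. This is a minimal illustration of the decision rules, not the actual implementation; the function name and return shape are assumptions:

```python
def handle_discrepancy(condition: str, variance: int, vendor_fault: bool) -> dict:
    """Apply the triple approach to one received line.

    condition: "GOOD", "DAMAGED", or "WRONG_ITEM".
    vendor_fault: True when damage is vendor-attributable (not transit damage).
    Returns which of the three actions apply.
    """
    actions = {
        # Every discrepancy gets a variance note on the PO line (non-negotiable).
        "variance_note": variance != 0 or condition != "GOOD",
        "rma_draft": False,
        "quarantine": False,
    }
    if condition == "DAMAGED":
        actions["quarantine"] = True          # blocked from sale and transfer
        actions["rma_draft"] = vendor_fault   # transit damage -> carrier claim instead
    elif condition == "WRONG_ITEM":
        actions["rma_draft"] = True           # wrong item is always vendor-attributable
    return actions
```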
Discrepancy Decision Flowchart
flowchart TD
A[Receive Line Item] --> B{Qty Matches Expected?}
B -->|Yes| C{Condition OK?}
B -->|No - Short| D[Note Variance on PO Line]
B -->|No - Over| E[Apply Over-Shipment Rules\nSection 4.4.7]
D --> F{All Items in Good Condition?}
F -->|Yes| G[Accept Short Shipment\nRecord Variance Note]
F -->|No| H{What Condition?}
C -->|Yes - GOOD| I[Accept to AVAILABLE Status]
C -->|No| H
H -->|DAMAGED| J[Move to DAMAGED Status]
H -->|WRONG_ITEM| K[Flag for RMA]
J --> L{Vendor Fault?}
L -->|Yes| M[Auto-Create RMA Draft\nLinked to PO & Vendor]
L -->|No - Transit Damage| N[Record Damage Note\nFile Carrier Claim if Applicable]
K --> M
E --> O{Within Over-Shipment Threshold?}
O -->|Yes - Accept| P[Accept Overage\nNote Variance]
O -->|No - Above Threshold| Q[Require Manager Approval\nto Accept Overage]
G --> R[Log Movement Record]
I --> R
M --> R
N --> R
P --> R
Q --> R
Business Rules:
- Every discrepancy, regardless of type, generates a variance note on the PO line item. This is non-negotiable.
- RMA drafts are auto-created only for vendor-attributable issues (wrong item, defective item). Transit damage is handled separately through carrier claims.
- Damaged items are immediately placed in DAMAGED status. They do not count toward the PO’s “received in good condition” tally.
- The PO Variance Report (Section 4.3.8) aggregates all discrepancies for vendor performance analysis.
- Discrepancy records include: PO number, line item, expected qty, received qty, variance, condition, notes, and whether an RMA was auto-created.
4.4.6 Non-PO Receiving
In some situations, stock arrives at a location without an associated purchase order. The system supports receiving without a PO, provided the staff member selects a mandatory reason code explaining why the stock is being received outside the normal procurement workflow.
Non-PO Reason Codes:
| Reason Code | Description | Example |
|---|---|---|
VENDOR_SAMPLE | Vendor sent sample merchandise for evaluation | New season sample box from Nike |
REPLACEMENT | Vendor sent replacement for previously defective/returned items outside the RMA process | Vendor shipped replacement directly without formal RMA |
RETURN_TO_STOCK | Items being re-entered into inventory after being temporarily removed (not a customer return – use RETURN_TO_STOCK source type for that) | Items returned from a photo shoot or trade show |
FOUND_STOCK | Stock discovered that was not in the system (e.g., found in back room, mislabeled) | Unscanned box found during warehouse cleanup |
OTHER | None of the above. Requires free-text explanation in notes field. | Unusual circumstance requiring documentation |
Non-PO Receive Data Model
Non-PO receives use the same receiving header and line item data models (Sections 4.4.2 and 4.4.3) with the following specifics:
- source_type = NON_PO_RECEIVE
- source_document_id = null
- non_po_reason_code is required (one of the codes above)
- expected_qty on line items is set to 0 (since there is no source document to establish expectations)
- Variance is not calculated for non-PO receives (there is no expected baseline)
Business Rules:
- Non-PO receives always require a reason code. The system rejects a non-PO receive without a reason code.
- When reason_code = OTHER, the notes field on the receive header becomes mandatory. Staff must provide a free-text explanation.
- Non-PO receives are flagged in the Receiving Log report for visibility. Management can filter the report to show only non-PO receives for audit purposes.
- Inventory incremented via non-PO receiving is costed at the product’s current weighted average cost (from the catalog). If no cost data exists, the system prompts staff to enter a unit cost.
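The validation rules above can be sketched as a small check. The function name and error strings are illustrative assumptions:

```python
VALID_NON_PO_REASONS = {
    "VENDOR_SAMPLE", "REPLACEMENT", "RETURN_TO_STOCK", "FOUND_STOCK", "OTHER",
}

def validate_non_po_receive(reason_code: str, notes: str) -> list[str]:
    """Return validation errors for a non-PO receive header (empty list = valid)."""
    errors = []
    if reason_code not in VALID_NON_PO_REASONS:
        # The system rejects a non-PO receive without a valid reason code.
        errors.append("A reason code is required for non-PO receives.")
    elif reason_code == "OTHER" and not notes.strip():
        # OTHER makes the free-text notes field mandatory.
        errors.append("A free-text explanation is required when reason_code = OTHER.")
    return errors
```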
4.4.7 Over-Shipment Handling
When a vendor ships more units than ordered, the system applies configurable rules to determine whether the overage is automatically accepted or requires manager approval.
Over-Shipment Configuration:
| Setting | Type | Default | Description |
|---|---|---|---|
over_shipment_threshold_pct | Decimal(5,2) | 10.00 | Maximum percentage above ordered qty that can be auto-accepted. |
over_shipment_approval_role | Enum | MANAGER | Role required to approve over-shipments above the threshold. |
Business Rules:
- Within threshold: If the received quantity exceeds the ordered quantity by up to the configured threshold percentage, the overage is auto-accepted. Inventory is incremented for the full received quantity. The variance is noted on the PO line item.
- Example: Ordered 100 units, threshold is 10%. Receiving up to 110 units is auto-accepted.
- Above threshold: If the received quantity exceeds the ordered quantity by more than the configured threshold percentage, the system blocks acceptance of the overage and requires manager approval.
- Example: Ordered 100 units, threshold is 10%. Receiving 115 units triggers manager approval for the 15-unit overage.
- Until approved, the over-threshold units are held in PENDING_INSPECTION status. The units within the threshold (110) are accepted immediately.
- Manager approval flow: The manager receives a notification: “Over-shipment on PO #PO-2026-00042, line 3: Ordered 100, Received 115 (15% over, threshold 10%). Approve acceptance?” The manager can approve (units move to AVAILABLE) or reject (units are flagged for return to vendor).
- Per-line calculation: The threshold is applied per line item, not per PO total. Each line item’s overage is evaluated independently.
- Cost impact: Over-shipped units are accepted at the same unit cost as the PO line item. The PO total value is recalculated to reflect the actual received quantity.
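The threshold split described above can be sketched per line item. This is an illustrative function under the defaults in the configuration table (10% threshold); the name and return shape are assumptions:

```python
def classify_overage(qty_ordered: int, qty_received: int,
                     threshold_pct: float = 10.0) -> dict:
    """Split a received quantity into auto-accepted units and units held
    in PENDING_INSPECTION awaiting manager approval (per-line calculation)."""
    # Maximum quantity that can be auto-accepted, e.g. ordered 100 -> 110 at 10%.
    max_auto = qty_ordered + int(qty_ordered * threshold_pct / 100)
    auto_accepted = min(qty_received, max_auto)
    pending_approval = max(0, qty_received - max_auto)
    return {"auto_accepted": auto_accepted, "pending_approval": pending_approval}
```

With 100 units ordered, receiving 110 is auto-accepted in full, while receiving 115 auto-accepts 110 and holds 5 for approval, matching the examples above.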
4.4.8 Return-to-Stock
When a customer returns merchandise (processed through Module 1, Sales – Returns), the returned items re-enter the inventory system through the return-to-stock workflow.
Default Behavior:
- Customer returns automatically move to AVAILABLE status. No inspection is required by default.
- The rationale: clothing returns in this retail context are typically tried-on garments, not defective products. The default assumption is that returned items are saleable.
- Staff has the option to mark any returned item as DAMAGED during the return process if the item is visibly damaged, soiled, or otherwise unsaleable.
Business Rules:
- Return-to-stock creates a receiving record with source_type = RETURN_TO_STOCK and source_document_id pointing to the return/refund transaction.
- The returned item is added to inventory at the location where the return was processed (the store where the customer brought the item back).
- If the item is marked DAMAGED during return, it enters DAMAGED status instead of AVAILABLE. The staff member must enter a condition note.
- Return-to-stock inventory is costed at the original sale's cost basis (from the sale transaction), not at current weighted average cost. This ensures accurate margin reporting.
- A RETURN_TO_STOCK movement record is logged to the audit trail.
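A minimal sketch of building the return-to-stock inventory entry per these rules; the function name and dictionary keys are illustrative, not the actual schema:

```python
def return_to_stock_entry(damaged: bool, sale_cost_basis: float) -> dict:
    """Build the inventory entry for a customer return re-entering stock."""
    return {
        "source_type": "RETURN_TO_STOCK",
        # Default assumption: returned items are saleable unless marked damaged.
        "status": "DAMAGED" if damaged else "AVAILABLE",
        # Costed at the original sale's cost basis, not current weighted avg cost.
        "unit_cost": sale_cost_basis,
        "condition_note_required": damaged,
    }
```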
4.4.9 Scanner-Primary Receiving
The default receiving workflow is scanner-primary: staff uses a barcode scanner to scan each individual item as it is unpacked. Each scan auto-increments the received count for the matching PO line item by one unit.
Workflow:
- Staff opens the PO receiving screen and clicks “Start Receiving.”
- The system enters scanner mode (default). The cursor focus is on the barcode scan input field.
- Staff scans an item’s barcode. The system:
- Looks up the barcode in the catalog (Module 3).
- Matches it to a PO line item.
- Increments the received_qty for that line by 1.
- Plays an audible confirmation beep.
- Displays a running count: “Item XYZ: 23 of 50 received.”
- If the barcode does not match any PO line item, the system shows an alert: “Barcode not found on this PO. Wrong item?” Staff can flag it as WRONG_ITEM or search manually.
- Staff repeats scanning until all items are processed.
- Staff clicks “Confirm Receive” to finalize.
Manual Override:
- For items with damaged or missing barcodes, staff can switch to manual entry mode for individual line items.
- In manual mode, staff selects the product from the PO line item list and enters the quantity directly.
- The system logs whether each line was received via scanner or manual entry (for accuracy tracking).
Business Rules:
- Scanner mode is the default. The receiving screen opens in scanner mode unless the staff member explicitly switches to manual.
- Each barcode scan increments the count by exactly 1. There is no “scan and enter quantity” mode in scanner-primary workflow – every physical unit is scanned individually.
- If the same barcode is scanned more than the expected quantity for that line, over-shipment rules (Section 4.4.7) apply.
- Scanning speed is optimized for high throughput: the system processes each scan in under 200ms and immediately updates the on-screen count.
- The receiving screen shows a progress summary at all times: total items expected, total scanned so far, and lines remaining.
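The core scan-handling loop can be sketched as follows. This is an illustrative, in-memory model (the po_lines structure and status strings are our assumptions), showing the +1-per-scan rule, the not-on-PO alert, and the hand-off to over-shipment rules:

```python
def handle_scan(barcode: str, po_lines: dict) -> str:
    """Process one barcode scan against a PO's line items.

    po_lines maps barcode -> {"expected": int, "received": int}.
    Returns a status string the UI would translate into a beep or alert.
    """
    line = po_lines.get(barcode)
    if line is None:
        # "Barcode not found on this PO. Wrong item?" — flag or search manually.
        return "NOT_ON_PO"
    line["received"] += 1  # each scan = exactly +1 unit, never a quantity prompt
    if line["received"] > line["expected"]:
        return "OVER_EXPECTED"  # over-shipment rules (Section 4.4.7) apply
    return "OK"
```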
4.4.10 Unified Receiving Sequence (All Source Types)
sequenceDiagram
autonumber
participant U as Staff
participant UI as Receiving UI
participant API as Backend
participant DB as DB
Note over U, DB: Unified Receiving Flow
U->>UI: Select Source Document (PO / Transfer / Return / RMA / Non-PO)
UI->>API: GET /receiving/source/{type}/{id}
API-->>UI: Return Expected Line Items (empty for Non-PO)
U->>UI: Click "Start Receiving"
UI->>API: POST /receiving/start
API->>DB: Create Receive Record (Status: IN_PROGRESS)
loop Receive Each Line Item
alt Scanner Verification (Default)
U->>UI: Scan Item Barcode
UI->>UI: Match to Expected Line Item
UI->>UI: Increment Received Qty
else Manual Entry
U->>UI: Enter Received Qty per Line
end
opt Variance Detected
UI-->>U: "Expected: 50, Received: 48"
U->>UI: Enter Variance Notes
end
opt Damaged Item
U->>UI: Mark Condition: DAMAGED
U->>UI: Enter Damage Notes
end
opt Wrong Item
U->>UI: Mark Condition: WRONG_ITEM
U->>UI: Enter Notes
end
opt Serial Tracked Product
U->>UI: Scan/Enter Serial Number for Each Unit
end
opt Lot Tracked Product
U->>UI: Enter Lot Number
end
end
U->>UI: Click "Confirm Receive"
UI->>API: POST /receiving/{id}/confirm
par Post-Receive Updates
API->>DB: Increment Inventory at Location (Received Qty - GOOD → AVAILABLE)
API->>DB: Set Damaged Items to DAMAGED Status
API->>DB: Flag Wrong Items for RMA Processing
API->>DB: Update Source Document (PO/Transfer/Return/RMA status)
API->>DB: Log Movement Records (per source type)
API->>DB: Update Receive Status: COMPLETED
end
API-->>UI: Receiving Complete
4.4.11 Reports: Receiving & Inspection
| Report | Purpose | Key Data Fields |
|---|---|---|
| Receiving Log | All receiving activity across all source types | Receive number, source type, source document, location, items expected, items received, variances, receive date |
| Non-PO Receiving Report | Track inventory received outside of PO process | Receive number, reason code, products, qty, staff member, date, notes |
| Damaged Goods Report | Track items received in damaged condition | Receive number, source, product, qty damaged, condition notes, RMA created (Y/N) |
| Over-Shipment Report | Track vendor over-shipments and approval outcomes | PO number, line item, qty ordered, qty received, overage %, auto-accepted (Y/N), approval status |
| Receiving Accuracy Report | Measure scanner vs manual receive accuracy | Location, total items received, scanner-received count, manual-received count, variance rate by method |
4.5 Reorder Management
Scope: Automating inventory replenishment through velocity-based reorder points, seasonal demand adjustments, and auto-generated draft purchase orders for staff review and approval. The reorder engine reduces stockouts by proactively identifying when products need replenishment and pre-building purchase orders for staff to review. This section also covers static override of reorder points and dead stock detection.
4.5.1 Velocity-Based Reorder Points
The system calculates dynamic reorder points per product per location using sales velocity, vendor lead time, and configurable safety stock. This ensures that reorder triggers adapt automatically to changing demand patterns without requiring manual intervention.
Formula:
reorder_point = (avg_daily_sales x lead_time_days x seasonal_factor) + safety_stock
Components:
| Component | Source | Description |
|---|---|---|
avg_daily_sales | Calculated from rolling 90-day sales velocity per product per location. | The average number of units sold per day over the trailing 90 days. New products with less than 90 days of history use the available history. Products with zero sales use 0. |
lead_time_days | Sourced from vendor-product relationship (vendor default or per-product override). | The number of days between placing a PO with the vendor and receiving the goods. |
safety_stock | Configurable multiplier applied to the standard deviation of daily sales. Default: 1.5 sigma. | Buffer stock to account for demand variability and supply uncertainty. Higher sigma values provide more protection against stockouts at the cost of higher inventory. |
seasonal_factor | Multiplier derived from historical same-period data (e.g., December velocity in prior years). | Applied to avg_daily_sales before reorder point calculation. A factor of 1.5 means the system expects 50% higher sales than the rolling average suggests. Default: 1.00 (no seasonal adjustment). |
Recalculation: Weekly via background job (configurable schedule per tenant). The job recalculates reorder points for all active products at all locations and updates the reorder point data model.
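The formula and its components can be sketched directly. This is a minimal illustration; the function signature is ours, and safety stock is computed as the configured sigma multiplier times the standard deviation of daily sales, per the table above:

```python
import statistics

def reorder_point(daily_sales: list[float], lead_time_days: int,
                  seasonal_factor: float = 1.0, safety_sigma: float = 1.5) -> int:
    """reorder_point = (avg_daily_sales x lead_time_days x seasonal_factor)
                       + safety_stock
    where safety_stock = safety_sigma x stdev of daily sales (default 1.5 sigma).
    daily_sales is the trailing window (up to 90 days; shorter for new products)."""
    avg_daily_sales = statistics.fmean(daily_sales)
    safety_stock = safety_sigma * statistics.pstdev(daily_sales)
    return round(avg_daily_sales * lead_time_days * seasonal_factor + safety_stock)
```

For a product selling a steady 2 units/day with a 7-day lead time and no seasonal adjustment, this yields a reorder point of 14; a 1.5 seasonal factor raises it to 21.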
Reorder Point Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
product_id | UUID | Yes | Reference to product |
location_id | UUID | Yes | Reference to store/warehouse location |
avg_daily_velocity | Decimal(8,3) | Yes | Rolling 90-day average daily sales |
lead_time_days | Integer | Yes | Vendor lead time for this product |
safety_stock_units | Integer | Yes | Calculated safety stock buffer |
reorder_point | Integer | Yes | Stock level that triggers reorder |
reorder_qty | Integer | Yes | Economic order quantity (recommended purchase qty) |
seasonal_factor | Decimal(5,2) | No | Seasonal adjustment multiplier (default: 1.00) |
override_reorder_point | Integer | No | Manager-set manual override. When not null, takes precedence over the calculated reorder_point. See Section 4.5.3. |
override_reason | Text | No | Documentation for why the override was set. Required when override_reorder_point is not null. |
override_set_by | UUID | No | User who set the override. |
override_set_at | DateTime | No | Timestamp when the override was set. |
last_calculated_at | DateTime | Yes | Timestamp of last recalculation |
tenant_id | UUID | Yes | Owning tenant |
4.5.2 Auto-Generated Draft POs
When stock at any location drops below the calculated reorder_point (or the override_reorder_point if set), the system creates a draft purchase order for staff review.
Auto-PO Features:
- Pre-filled with the primary vendor for each product below reorder point.
- Recommended quantity set to reorder_qty (economic order quantity).
- Vendor consolidation: If multiple products from the same vendor hit reorder simultaneously, the system combines them into a single draft PO. This reduces the number of POs and may help reach vendor minimum order values.
- Staff receives notification: “3 draft POs auto-generated for review.”
- Staff can edit quantities, add/remove products, then submit – or discard the draft.
- Auto-generated draft POs are marked with auto_generated = true on the PO header (Section 4.3.4) so they can be filtered and reported on separately.
- Auto-generated POs follow the same approval threshold rules as manually created POs (Section 4.3.2). They are not exempt from the approval workflow.
Auto-PO Generation Sequence
sequenceDiagram
autonumber
participant JOB as Background Job
participant DB as DB
participant API as Backend
participant NOTIF as Notification Service
participant U as Staff
Note over JOB, U: Reorder Point Check (Runs on Schedule)
JOB->>DB: Query Products Below Reorder Point (or Override)
DB-->>JOB: Return List (Product, Location, Current Qty, Reorder Point)
JOB->>JOB: Exclude products with existing DRAFT/SUBMITTED PO for same vendor
loop For Each Product Below Reorder
JOB->>DB: Lookup Primary Vendor
JOB->>DB: Get Reorder Qty (Economic Order Quantity)
alt Existing Draft PO for Same Vendor
JOB->>DB: Add Line Item to Existing Draft PO
else No Existing Draft PO
JOB->>DB: Create New Draft PO for Vendor
JOB->>DB: Add Line Item
end
end
JOB->>DB: Finalize Draft POs (calculate totals)
JOB->>NOTIF: "3 draft POs auto-generated for review"
NOTIF-->>U: Push Notification / Dashboard Alert
Note over U, DB: Staff Review
U->>API: GET /purchase-orders?status=DRAFT&auto_generated=true
API-->>U: Return Auto-Generated Draft POs
alt Approve as-is
U->>API: POST /purchase-orders/{id}/submit
Note right of API: Approval threshold check applies (Section 4.3.2)
API->>DB: Update Status: SUBMITTED (or PENDING_APPROVAL)
else Modify and Submit
U->>API: PATCH /purchase-orders/{id}
Note right of U: Edit quantities, add/remove lines
U->>API: POST /purchase-orders/{id}/submit
else Discard
U->>API: DELETE /purchase-orders/{id}
API->>DB: Delete Draft PO
end
Business Rules:
- The reorder check job does not create duplicate draft POs. If a product already has an open draft or submitted PO for the same vendor, it is excluded from auto-PO generation.
- The reorder check evaluates the override_reorder_point first. If set, it uses the override value. If null, it uses the calculated reorder_point.
- Auto-PO generation uses the product's AVAILABLE quantity (not total on-hand) to determine if reorder is needed. RESERVED, IN_TRANSIT, and other non-available statuses are excluded.
- The system accounts for in-transit stock when calculating whether to reorder. If a product has an open PO with expected delivery within the lead time window, the system may skip reorder to avoid over-ordering (configurable behavior: account_for_open_pos setting, default: true).
4.5.3 Static Override
In some cases, the velocity-based reorder point calculation does not reflect the manager’s knowledge of the business. For example, a product may have low sales velocity (suggesting a low reorder point) but the manager knows it will be featured in an upcoming promotion and needs extra stock. Or a product may have high velocity but the manager knows it is being discontinued and does not want to reorder.
The static override feature allows a manager to lock any product at any location to a manual reorder point that overrides the dynamic velocity calculation.
Behavior:
- When override_reorder_point is set (not null), the reorder engine uses this value instead of the calculated reorder_point.
- The calculated reorder_point continues to be recalculated weekly by the background job, but it is ignored for reorder triggering as long as the override is active.
- The override is visible in the product's inventory detail screen with a visual indicator: “Reorder point manually set to 25 by [Manager Name] on [Date]. Calculated value: 12.”
- Managers can remove the override at any time, returning the product to dynamic reorder point calculation.
Business Rules:
- Only users with MANAGER or OWNER role can set or remove reorder point overrides.
- When setting an override, the override_reason field is mandatory. The manager must document why the override is being set (e.g., “Upcoming Black Friday promotion – need extra stock”, “Discontinuing product – do not reorder”).
- Overrides are per product per location. A manager can override the reorder point at Store A without affecting the calculation at Store B.
- The Reorder Alerts report (Section 4.5.5) shows which products are using overridden reorder points vs. calculated reorder points, so management can audit active overrides.
- There is no expiry on overrides. They remain active until manually removed. A periodic review reminder can be configured (e.g., “3 reorder overrides have been active for 90+ days – review?”).
4.5.4 Dead Stock Detection
Dead stock (also called slow-moving or stagnant inventory) represents products that have not sold at a location for an extended period. These items tie up capital, occupy shelf space, and may need to be marked down, transferred to a higher-traffic store, or written off.
Detection Logic:
- The system monitors the sales velocity of every active product at every location.
- When a product’s sales velocity at a location is zero for a configurable number of consecutive days (default: 90 days), it is flagged as dead stock.
- The dead stock flag is recalculated by the same weekly background job that recalculates reorder points.
Alert Behavior:
- Dead stock items appear on the Dead Stock Report (dashboard and exportable).
- A dashboard alert is displayed: “15 products at Store A have had zero sales for 90+ days.”
- The alert is informational only. No automatic action is taken. The manager decides the appropriate action for each item.
Manager Actions (Manual):
| Action | Description |
|---|---|
| Markdown | Reduce the price to accelerate sales. Handled via Module 3 (Catalog) price update. |
| Transfer | Move stock to a location with higher demand. Handled via Module 5 (Transfers). |
| Write-Off | Remove from inventory as a loss. Handled via Section 4.7 (Adjustments) with reason code WRITE_OFF. |
| Dismiss Alert | Acknowledge the alert without taking action. The product remains flagged but the alert is silenced for a configurable period (default: 30 days). |
Business Rules:
- Dead stock detection only applies to products with lifecycle_status = ACTIVE in the catalog. Discontinued or inactive products are excluded.
- Dead stock is evaluated per location. A product may be dead stock at Store A but selling well at Store B. The system highlights this imbalance and suggests transfer as an action.
- Products that have been in inventory for less than the threshold period (e.g., a new product received 30 days ago) are excluded from dead stock detection regardless of sales velocity.
- The Dead Stock Report includes: product, location, current qty, last sale date, days since last sale, total value at cost, suggested action.
Dead Stock Data Model
| Field | Type | Required | Description |
|---|---|---|---|
product_id | UUID | Yes | FK to product |
location_id | UUID | Yes | FK to location |
days_since_last_sale | Integer | Yes | Number of days since the last sale of this product at this location |
last_sale_date | Date | No | Date of the most recent sale. Null if never sold at this location. |
qty_on_hand | Integer | Yes | Current AVAILABLE quantity at this location |
value_at_cost | Decimal(10,2) | Yes | qty_on_hand x weighted_avg_cost |
is_flagged | Boolean | Yes | Whether this product-location is currently flagged as dead stock |
flagged_at | DateTime | No | Timestamp when the dead stock flag was set |
alert_dismissed_at | DateTime | No | Timestamp when a manager dismissed the alert. Alert is silenced until dismissed_at + dismiss_duration. |
dismiss_duration_days | Integer | No | Number of days to silence the alert after dismissal. Default: 30. |
tenant_id | UUID | Yes | Owning tenant |
4.5.5 Reports: Reorder Management
| Report | Purpose | Key Data Fields |
|---|---|---|
| Reorder Alerts | Products below reorder point needing attention | Product, location, current qty, reorder point (calculated vs override), days of supply remaining, suggested qty, primary vendor, override active (Y/N) |
| Auto-PO Performance | Effectiveness of automatic reorder system | Auto-generated PO count, submitted as-is count, modified count, cancelled count, avg fill rate, avg time from draft to submit |
| Velocity Trends | Sales velocity changes over time per product | Product, 30-day velocity, 60-day velocity, 90-day velocity, trend direction (increasing/stable/decreasing), seasonal factor |
| Days of Supply | How long current stock will last at current velocity | Product, location, qty on hand, avg daily velocity, days of supply, reorder urgency (Critical/Low/OK) |
| Dead Stock Report | Products with zero velocity over threshold period | Product, location, current qty, last sale date, days since last sale, value at cost, suggested action, alert status |
| Override Audit Report | Active reorder point overrides for review | Product, location, override value, calculated value, override reason, set by, set date, days active |
4.6 Inventory Counting & Auditing
Scope: Maintaining inventory accuracy through structured counting workflows that reconcile system quantities with physical reality. The system supports five counting methods, two count modes (freeze and snapshot), scanner-primary counting, and a complete review-and-approve workflow for count variances.
4.6.1 Count Types
The system supports five counting methods to maintain inventory accuracy. Each method is suited to different operational needs.
| Method | Description | Frequency | Scope |
|---|---|---|---|
| Full Physical Count | All products at a location counted | Annual or semi-annual | Entire location |
| Cycle Count | Rolling partial counts by category | Weekly (configurable schedule) | Category rotation |
| Scanner-Assisted Count | Barcode/RFID scanner used to tally items | As needed | Configurable scope |
| Monthly Scan | Scheduled full-location scan | Auto-created 1st of each month (configurable) | Entire location |
| On-Demand Count | Ad-hoc count triggered by manager | As needed | Specific products |
4.6.2 Count Data Model – Header
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
count_number | String | Yes | Auto-generated: CNT-{YEAR}-{SEQ} |
type | Enum | Yes | FULL, CYCLE, SCANNER, MONTHLY, ON_DEMAND |
location_id | UUID | Yes | Location being counted |
status | Enum | Yes | CREATED, IN_PROGRESS, REVIEW, APPROVED, CANCELLED |
count_mode | Enum | Yes | FREEZE or SNAPSHOT. See Section 4.6.4. |
scope | Enum | Yes | ALL, CATEGORY, PRODUCT_LIST |
category_ids[] | UUID[] | No | Categories included (when scope = CATEGORY) |
product_ids[] | UUID[] | No | Specific products included (when scope = PRODUCT_LIST) |
snapshot_taken_at | DateTime | No | Timestamp when the inventory snapshot was taken (SNAPSHOT mode only). |
created_by | UUID | Yes | Manager who initiated the count |
assigned_to | UUID | No | Staff member assigned to perform the count |
approved_by | UUID | No | Manager who approved adjustments |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
completed_at | DateTime | No | Timestamp when count was approved/cancelled |
4.6.3 Count Data Model – Line Items
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
count_id | UUID | Yes | Reference to parent count |
product_id | UUID | Yes | Product being counted |
variant_id | UUID | No | Specific variant (if applicable) |
expected_qty | Integer | Yes | System’s recorded AVAILABLE quantity at count start (or snapshot time in SNAPSHOT mode) |
counted_qty | Integer | No | Actual quantity counted by staff |
variance | Integer | Computed | Calculated: counted_qty - expected_qty |
variance_pct | Decimal(5,2) | Computed | Calculated: (variance / expected_qty) x 100. Null if expected_qty is 0. |
count_method | Enum | No | SCANNER or MANUAL – indicates how this line was counted |
adjustment_approved | Boolean | No | Whether the variance adjustment was approved by the reviewing manager |
notes | Text | No | Staff notes on variance explanation |
4.6.4 Configurable Count Freeze
When initiating a stock count, the manager chooses one of two counting modes. Each mode has different trade-offs between accuracy and operational impact.
| Aspect | FREEZE Mode | SNAPSHOT Mode |
|---|---|---|
| POS Sales | Blocked at the counting location for the duration of the count. Other locations continue to sell normally. | Continue normally. Sales are recorded and reconciled after the count. |
| Transfers | Outbound transfers blocked. Inbound transfers held until count completes. | Continue normally. Transfers are recorded and reconciled after the count. |
| Accuracy | Highest accuracy – no inventory movement during count ensures perfect reconciliation. | Slightly lower accuracy – reconciliation must account for sales and transfers that occurred during the count window. |
| Business Impact | High – store cannot sell during count. Best for after-hours or slow periods. | Low – store operates normally. Suitable for counts during business hours. |
| Duration Limit | No system limit, but operational pressure to complete quickly since sales are blocked. | No limit. Count can span hours or even days if needed. |
| Reconciliation | Direct comparison: counted_qty vs expected_qty. No adjustment needed for concurrent activity. | System calculates: adjusted_expected = snapshot_qty - sales_during_count + receives_during_count. Variance = counted_qty - adjusted_expected. |
| Recommended For | Full physical counts, high-value inventory sections, annual audits. | Cycle counts, monthly scans, routine counting during business hours. |
Business Rules:
- The count mode is selected at count creation time and cannot be changed after the count starts (status = IN_PROGRESS).
- FREEZE mode: When a count is started in FREEZE mode, the POS at the counting location displays a message: “Inventory count in progress. Sales temporarily suspended at this location.” POS terminals at other locations are unaffected. Transfers to/from the counting location are queued and processed after the count is approved.
- SNAPSHOT mode: When a count is started in SNAPSHOT mode, the system takes a point-in-time snapshot of all in-scope product quantities. This snapshot is stored as the expected_qty for each count line item. Sales and transfers that occur after the snapshot continue normally. At review time, the system recalculates the expected values by applying all inventory movements that occurred between the snapshot time and the count submission time.
- Only users with MANAGER or OWNER role can create counts and select the count mode.
- The system defaults to SNAPSHOT mode. FREEZE mode must be explicitly selected.
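The SNAPSHOT-mode reconciliation from the comparison table can be sketched as follows (an illustrative function; the name is ours):

```python
def snapshot_variance(snapshot_qty: int, sales_during_count: int,
                      receives_during_count: int, counted_qty: int) -> int:
    """SNAPSHOT-mode reconciliation:
    adjusted_expected = snapshot_qty - sales_during_count + receives_during_count
    variance          = counted_qty - adjusted_expected
    """
    adjusted_expected = snapshot_qty - sales_during_count + receives_during_count
    return counted_qty - adjusted_expected
```

For example, a snapshot of 100 units with 5 sold and 2 received during the count window yields an adjusted expectation of 97, so a physical count of 97 reconciles to zero variance.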
4.6.5 Scanner-Primary Counting
The default counting workflow is scanner-primary: staff uses a barcode scanner to scan each physical item on the shelf. Each scan increments the count for the matching product by one unit.
Scanner-Assisted Count Workflow
sequenceDiagram
autonumber
participant M as Manager
participant U as Staff
participant SC as Scanner Device
participant UI as Count UI
participant API as Backend
participant DB as DB
Note over M, DB: Step 1: Create Count Session
M->>UI: Click "New Stock Count"
M->>UI: Select Type, Location, Scope, Mode
UI->>API: POST /stock-counts
API->>DB: Create Count Header (Status: CREATED)
alt SNAPSHOT Mode
API->>DB: Snapshot Current Qty for All In-Scope Products
Note right of DB: Snapshot stored as expected_qty per line
else FREEZE Mode
API->>DB: Set Location POS to Count Mode (sales blocked)
end
API->>DB: Create Count Line Items (one per in-scope product-variant)
API-->>UI: Count #CNT-2026-00031 Created
Note over M, DB: Step 2: Assign & Start Counting
M->>UI: Assign Count to Staff Member
API-->>U: Notification: "Count assigned to you"
U->>UI: Open Count -> Click "Start Counting"
API->>DB: Update Status: IN_PROGRESS
Note over U, SC: Step 3: Scan Items
loop For Each Physical Item on Shelf
U->>SC: Scan Item Barcode
SC-->>UI: Barcode Data
UI->>UI: Lookup Product by Barcode
UI->>UI: Increment counted_qty by 1 for Matching Line
alt Barcode Matched
UI-->>U: Beep + "Item XYZ: 23 counted"
else Barcode Not Found
UI-->>U: Alert "Unknown barcode — enter product manually?"
U->>UI: Search Product by SKU/Name
U->>UI: Confirm Match
UI->>UI: Increment counted_qty by 1
Note right of UI: Line marked as count_method = MANUAL
end
end
opt Items with Damaged/Missing Barcodes
U->>UI: Switch to Manual Entry for Specific Line
U->>UI: Enter counted_qty Directly
Note right of UI: Line marked as count_method = MANUAL
end
Note over U, DB: Step 4: Submit for Review
U->>UI: Click "Submit for Review"
UI->>API: POST /stock-counts/{id}/submit
API->>DB: Calculate Variances per Line
API->>DB: Update Status: REVIEW
Note over M, DB: Step 5: Review & Approve
M->>UI: Open Count for Review
UI->>API: GET /stock-counts/{id}/variances
API-->>UI: Return Variance Report
M->>UI: Review Each Variance Line
Note right of M: Accept or reject each line adjustment
M->>UI: Click "Approve Adjustments"
UI->>API: POST /stock-counts/{id}/approve
par Inventory Updates
API->>DB: Apply Approved Adjustments to Inventory
API->>DB: Log Each Adjustment as COUNT_ADJUST Movement
API->>DB: Update Count Status: APPROVED
end
alt FREEZE Mode
API->>DB: Release Location POS from Count Mode (sales resume)
API->>DB: Process Queued Transfers
end
API-->>UI: Count Approved — Inventory Updated
Business Rules:
- Scanner mode is the default. The count screen opens in scanner-listening mode when the count is started.
- Each barcode scan increments the count by exactly 1. Staff scans every physical unit individually.
- Manual quantity entry is available as a fallback for items with damaged or missing barcodes. Staff can switch between scanner and manual mode on a per-line basis.
- Items not scanned during the count have `counted_qty = 0` at submission time. If the expected qty was greater than 0, this is flagged as a variance (potential shrinkage or miscount).
- Products scanned that are not in the count scope (e.g., wrong category during a cycle count) are flagged with a warning but can be added to the count at the staff member’s discretion.
- Count line items track whether they were counted via scanner or manual entry (`count_method` field) for accuracy auditing.
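The per-scan increment and manual fallback above can be sketched as follows. This is illustrative only — the names (`applyScan`, `applyManualEntry`, `CountLine`) are hypothetical, not the actual count UI implementation:

```typescript
type CountMethod = "SCANNER" | "MANUAL";

interface CountLine {
  barcode: string;
  countedQty: number;
  countMethod: CountMethod;
}

// Each scan increments the matching line by exactly 1 (scanner-primary rule).
// Returns the updated line, or null when the barcode is unknown and the UI
// should prompt: "Unknown barcode — enter product manually?"
function applyScan(lines: CountLine[], barcode: string): CountLine | null {
  const line = lines.find((l) => l.barcode === barcode);
  if (!line) return null;
  line.countedQty += 1;
  return line;
}

// Manual fallback: staff enters a quantity directly; the line is flagged
// count_method = MANUAL for accuracy auditing.
function applyManualEntry(line: CountLine, qty: number): void {
  line.countedQty = qty;
  line.countMethod = "MANUAL";
}
```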
4.6.6 Count Workflow Sequence
sequenceDiagram
autonumber
participant M as Manager
participant U as Staff
participant UI as Inventory UI
participant API as Backend
participant DB as DB
Note over M, DB: Step 1: Create Count
M->>UI: Click "New Stock Count"
M->>UI: Select Type (Full/Cycle/On-Demand)
M->>UI: Select Location
M->>UI: Select Count Mode (Freeze/Snapshot)
M->>UI: Define Scope (All / Category / Product List)
UI->>API: POST /stock-counts
API->>DB: Create Count (Status: CREATED)
API->>DB: Snapshot Expected Qty for All In-Scope Products
API-->>UI: Count #CNT-2026-00031 Created
Note over M, DB: Step 2: Assign & Count
M->>UI: Assign Count to Staff Member
API-->>U: Notification: "Count assigned to you"
U->>UI: Open Count -> Start Counting
API->>DB: Update Status: IN_PROGRESS
loop Count Each Product
alt Scanner-Assisted (Default)
U->>UI: Scan Item Barcode
UI->>UI: Increment Counted Qty for Product
else Manual Entry (Fallback)
U->>UI: Enter Counted Qty per Product
end
end
U->>UI: Click "Submit for Review"
UI->>API: POST /stock-counts/{id}/submit
API->>DB: Update Status: REVIEW
Note over M, DB: Step 3: Review & Approve
M->>UI: Open Count for Review
UI->>API: GET /stock-counts/{id}/variances
API-->>UI: Return Variance Report
M->>UI: Review Each Variance
Note right of M: Accept or reject each line adjustment
M->>UI: Click "Approve Adjustments"
UI->>API: POST /stock-counts/{id}/approve
par Inventory Updates
API->>DB: Apply Approved Adjustments to Inventory
API->>DB: Log Each Adjustment as COUNT_ADJUST Movement
API->>DB: Update Status: APPROVED
end
API-->>UI: Count Approved - Inventory Updated
4.6.7 Reports: Inventory Counting
| Report | Purpose | Key Data Fields |
|---|---|---|
| Count Variance Report | Variances discovered during stock counts | Count number, type, mode (Freeze/Snapshot), product, expected qty, counted qty, variance, variance %, count method (Scanner/Manual), adjustment status |
| Count Schedule Report | Upcoming and overdue scheduled counts | Count type, location, scheduled date, status, assigned to, overdue flag |
| Count Accuracy Trend | Track counting accuracy over time | Month, location, total counts, avg variance %, scanner-counted %, manual-counted %, accuracy trend |
| Shrinkage by Location | Inventory loss detected through counts per location | Location, period, total negative variances, value at cost, shrinkage % of total inventory value |
4.6.8 RFID-Assisted Counting (Raptag)
RFID counting operates as a dedicated subsystem separate from barcode-scanner counting. While scanner-primary counting (Section 4.6.5) handles individual barcode scans at the POS, RFID counting enables bulk tag reads using handheld RFID readers via the Raptag mobile application (Chapter 16).
Scope: RFID counting is for inventory counts ONLY. It does not participate in receiving (Section 4.4), transfers (Section 4.5), or sales (Module 1). Configuration for the RFID subsystem is in Section 5.16.
RFID Count Session Lifecycle
Manager Creates Session → Assigns Sections to Operators → Operators Join via Raptag
→ Parallel Scanning (offline-capable) → Upload Chunks → Server Merges & Deduplicates
→ Variance Calculation → Manager Reviews → Approve / Recount
Multi-Operator Sessions
A single RFID count session can have multiple operators (up to 10), each assigned to a section of the store. This enables parallel counting for enterprise-scale inventories (100,000+ items).
Workflow:
- Manager creates a count session in the Admin Portal or Raptag app, selecting count type (full_inventory, cycle_count, spot_check) and location
- Manager assigns sections to operators (e.g., “Sarah: Men’s Tops”, “James: Women’s Bottoms”)
- Each operator launches Raptag, sees the active session on their Home Dashboard, and taps “Join Session”
- Each operator’s device scans independently using the Zebra reader, recording tag reads locally in SQLite (offline-capable)
- On upload, the system merges all operator reads into one session via chunked upload (5,000 events per chunk)
- Server-side deduplication: If two operators scan the same tag (same EPC), the system keeps the read with the strongest RSSI (closest proximity = most accurate location)
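The chunked upload in the workflow above (5,000 events per chunk) amounts to slicing the locally buffered SQLite events before transmission. A minimal sketch, with hypothetical names:

```typescript
// Split locally buffered scan events into upload chunks of at most
// CHUNK_SIZE events each (5,000 per the sync design above).
const CHUNK_SIZE = 5000;

function chunkEvents<T>(events: T[], size: number = CHUNK_SIZE): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < events.length; i += size) {
    chunks.push(events.slice(i, i + size));
  }
  return chunks;
}
```

A 12,000-event session would upload as three chunks (5,000 + 5,000 + 2,000).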
Multi-Operator Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to shared.tenants |
session_id | UUID | Yes | FK to rfid_scan_sessions |
operator_id | UUID | Yes | FK to shared.users |
assigned_section | Text | No | Section/area label assigned by manager |
device_id | UUID | No | FK to devices — reader device used by operator |
joined_at | Timestamp | Yes | When operator joined the session |
left_at | Timestamp | No | When operator left or session ended |
Deduplication Rules:
- Same EPC scanned by multiple operators → keep the read with highest RSSI (strongest signal indicates closest proximity)
- Merge happens server-side during chunk upload processing
- `rfid_scan_events` records which operator reported each tag (via the `session_operators` link)
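The RSSI-based rule can be sketched as a server-side merge. Names here are illustrative; only the keep-strongest-RSSI-per-EPC logic comes from the rules above:

```typescript
interface TagRead {
  epc: string;        // Electronic Product Code (unique per tagged item)
  operatorId: string;
  rssi: number;       // signal strength in dBm; higher (less negative) = closer
}

// Merge reads from all operators, keeping one read per EPC — the one with
// the strongest RSSI (closest proximity = most accurate location).
function dedupeReads(reads: TagRead[]): TagRead[] {
  const best = new Map<string, TagRead>();
  for (const read of reads) {
    const current = best.get(read.epc);
    if (!current || read.rssi > current.rssi) {
      best.set(read.epc, read);
    }
  }
  return [...best.values()];
}
```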
RFID-Specific Business Rules:
- Maximum 10 operators per session
- Each operator can only participate in one active session at a time
- Manager must create the session and assign at least one section before operators can join
- Session cannot be completed until all operators have submitted their chunks (or been removed)
- Operators can leave a session (voluntarily or by manager removal) without losing already-uploaded data
- If an operator’s device loses power mid-scan, auto-save preserves data locally; they can resume on the same or different device
RFID Counting vs Scanner Counting
| Aspect | Scanner-Primary (Section 4.6.5) | RFID-Assisted (This Section) |
|---|---|---|
| Input Device | Barcode scanner (USB HID) | Zebra RFID reader (MC3390R, RFD40) |
| Application | POS terminal | Raptag mobile app (.NET MAUI) |
| Read Speed | 1 item per scan | 40+ tags per second |
| Tracking Level | SKU-level (barcode → product lookup) | EPC-level (individual item tracking) |
| Offline | Connected to POS (online) | Fully offline with SQLite + sync |
| Operators | Single operator per count | Up to 10 operators per session |
| Data Flow | Real-time via POS API | Batch upload via chunked sync |
| Count Method Value | SCANNER or MANUAL | RFID |
4.7 Inventory Adjustments
Scope: Manual inventory adjustments handle corrections outside of stock counts. Adjustments are used when a staff member discovers a discrepancy between the system quantity and physical reality and needs to correct the system immediately, without waiting for a scheduled count. All adjustments – regardless of direction or size – require manager approval before stock levels change.
4.7.1 Adjustment Approval Workflow
All adjustments require manager approval. Unlike the previous threshold-based approach, this module enforces a universal approval requirement for every inventory adjustment. This ensures that no stock level change bypasses management oversight, which is critical for loss prevention and financial accuracy in a multi-store retail environment.
Workflow:
- A staff member identifies a discrepancy (e.g., found 3 extra units on a shelf, or 2 units are missing and believed stolen).
- The staff member creates an adjustment request in the system, specifying the product, location, quantity change (positive or negative), reason code, and notes.
- The adjustment is created with `approval_status = PENDING`. Inventory is NOT changed yet.
- A notification is sent to all users with MANAGER or OWNER role at the adjustment’s location.
- The manager reviews the adjustment request. They can:
  - Approve: The adjustment is applied to inventory. The status changes to `APPROVED`. Inventory quantity is updated. A movement record is logged.
  - Reject: The adjustment is not applied. The status changes to `REJECTED`. The requesting staff member is notified with the rejection reason. Inventory is unchanged.
- Rejected adjustments can be revised and resubmitted as new adjustment requests.
Adjustment Approval Sequence
sequenceDiagram
autonumber
participant S as Staff
participant UI as Inventory UI
participant API as Backend
participant DB as DB
participant NOTIF as Notification Service
participant M as Manager
S->>UI: Identify Discrepancy
S->>UI: Click "New Adjustment"
S->>UI: Select Product, Location
S->>UI: Enter Qty Change (+3 or -2)
S->>UI: Select Reason Code
S->>UI: Enter Notes
UI->>API: POST /adjustments
API->>DB: Create Adjustment (Status: PENDING)
Note right of DB: ADJ-2026-00015 created
Note right of DB: Inventory NOT changed yet
API->>NOTIF: Send Approval Request to Manager(s)
API-->>UI: "Adjustment submitted for approval"
NOTIF-->>M: "Adjustment ADJ-2026-00015: +3 units of SKU NXJ1078 at Store A — awaiting approval"
alt Manager Approves
M->>UI: Review Adjustment -> Click "Approve"
UI->>API: POST /adjustments/{id}/approve
API->>DB: Update Status: APPROVED
API->>DB: Update Inventory Qty (apply qty_change)
API->>DB: Record approved_by, approved_at
API->>DB: Log Movement Record (ADJUSTMENT_UP or ADJUSTMENT_DOWN)
API->>NOTIF: Notify Staff: "Adjustment approved"
NOTIF-->>S: "Your adjustment ADJ-2026-00015 was approved"
API-->>UI: Adjustment Approved — Inventory Updated
else Manager Rejects
M->>UI: Review Adjustment -> Click "Reject"
M->>UI: Enter Rejection Reason
UI->>API: POST /adjustments/{id}/reject
API->>DB: Update Status: REJECTED
API->>DB: Record rejection_reason
API->>NOTIF: Notify Staff: "Adjustment rejected"
NOTIF-->>S: "Your adjustment ADJ-2026-00015 was rejected: reason"
API-->>UI: Adjustment Rejected — Inventory Unchanged
end
4.7.2 Adjustment Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
adjustment_number | String | Yes | Auto-generated: ADJ-{YEAR}-{SEQ} |
product_id | UUID | Yes | Reference to product |
variant_id | UUID | No | Reference to specific variant (if applicable) |
location_id | UUID | Yes | Location where adjustment applies |
qty_change | Integer | Yes | Positive (found stock) or negative (shrinkage/damage). Must not be 0. |
reason_code | String | Yes | Standard reason code or custom reason code (see Section 4.7.3). |
notes | Text | No | Explanation of adjustment. Mandatory for certain reason codes (e.g., OTHER). |
requested_by | UUID | Yes | Staff member who requested the adjustment |
approved_by | UUID | No | Manager who approved (null until approved) |
rejected_by | UUID | No | Manager who rejected (null unless rejected) |
approval_status | Enum | Yes | PENDING, APPROVED, REJECTED |
rejection_reason | Text | No | Manager’s reason for rejection (if rejected) |
cost_impact | Decimal(10,2) | Computed | qty_change x weighted_avg_cost. Positive for found stock, negative for shrinkage. Calculated at approval time. |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
approved_at | DateTime | No | Timestamp of approval |
rejected_at | DateTime | No | Timestamp of rejection |
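The `cost_impact` computation is a single multiplication, rounded to two decimals to match the `Decimal(10,2)` column; it runs at approval time so it reflects the current weighted average cost, not the cost at request time. A hedged sketch (function name is illustrative):

```typescript
// cost_impact = qty_change × weighted_avg_cost, calculated at approval time.
// Positive for found stock, negative for shrinkage. Rounded to 2 decimals
// to match the Decimal(10,2) column.
function costImpact(qtyChange: number, weightedAvgCost: number): number {
  return Math.round(qtyChange * weightedAvgCost * 100) / 100;
}
```

For example, a -2 unit shrinkage adjustment on a product with a weighted average cost of 19.99 records a cost impact of -39.98.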
4.7.3 Custom Reason Codes
The system provides a standard set of reason codes for inventory adjustments. In addition, tenants can define custom reason codes to capture business-specific adjustment scenarios.
Standard Reason Codes
| Code | Direction | Description |
|---|---|---|
DAMAGED | Negative | Item identified as damaged and removed from sellable inventory. |
THEFT | Negative | Item believed to be stolen (shoplifting, employee theft). |
COUNT_CORRECTION | Either | Correction resulting from a stock count variance that was not captured in the count approval process. |
SAMPLE | Negative | Item removed from inventory and given as a sample (to vendor, customer, or for marketing). |
WRITE_OFF | Negative | Item permanently removed from inventory as a loss. Requires cost documentation for accounting. |
FOUND_STOCK | Positive | Stock discovered that was not in the system (e.g., found behind a shelf, unscanned box). |
RETURN_TO_STOCK | Positive | Item returned to inventory outside the normal return-to-stock workflow (e.g., item used for display purposes and now returned to sellable stock). |
OTHER | Either | None of the above. Notes field becomes mandatory. |
Custom Reason Code Data Model
Tenants can create additional reason codes beyond the standard set. Custom reason codes behave identically to standard codes in all workflows – they appear in the reason code dropdown, are logged in movement records, and are available in reports.
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
code | String(50) | Yes | Unique code identifier (e.g., EMPLOYEE_PURCHASE, PHOTO_SHOOT, CHARITY_DONATION). Must be uppercase, alphanumeric with underscores. Must not conflict with standard reason codes. |
display_name | String(100) | Yes | Human-readable name shown in the UI dropdown (e.g., “Employee Purchase”, “Photo Shoot”, “Charity Donation”). |
description | Text | No | Optional description of when this reason code should be used. |
direction | Enum | Yes | POSITIVE, NEGATIVE, or BOTH. Controls whether this code can be used for positive adjustments, negative adjustments, or both. |
requires_notes | Boolean | Yes | Whether the notes field is mandatory when this reason code is selected. Default: false. |
is_active | Boolean | Yes | Whether this custom reason code is available for selection. Inactive codes are hidden from the dropdown but preserved in historical records. |
sort_order | Integer | Yes | Display order in the reason code dropdown (after standard codes). |
tenant_id | UUID | Yes | Owning tenant |
created_by | UUID | Yes | User who created the custom reason code |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Custom reason codes are tenant-specific. Each tenant manages their own set of custom codes.
- Custom code identifiers must be unique within a tenant and must not duplicate any standard code.
- Custom codes can be deactivated (soft delete) but not hard-deleted, since historical adjustment records may reference them.
- Only users with MANAGER or OWNER role can create, edit, or deactivate custom reason codes.
- When a custom reason code has `requires_notes = true`, the adjustment form enforces a mandatory notes field when that code is selected.
- Custom reason codes appear in all reports alongside standard codes. Reports can filter by standard vs. custom codes.
- Standard reason codes cannot be deactivated or modified by tenants. They are system-level constants.
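The code-format and conflict rules above could be enforced with a validator like the following sketch (the function name and error strings are illustrative, not part of the spec):

```typescript
// Standard codes are system-level constants; custom codes must not collide.
const STANDARD_CODES = new Set([
  "DAMAGED", "THEFT", "COUNT_CORRECTION", "SAMPLE",
  "WRITE_OFF", "FOUND_STOCK", "RETURN_TO_STOCK", "OTHER",
]);

// Code must be uppercase alphanumeric with underscores, max 50 chars
// (String(50)), and must not duplicate a standard reason code.
// Returns an error message, or null when the code is valid.
function validateCustomCode(code: string): string | null {
  if (!/^[A-Z0-9_]{1,50}$/.test(code)) {
    return "Code must be uppercase alphanumeric with underscores (max 50 chars)";
  }
  if (STANDARD_CODES.has(code)) {
    return `Code conflicts with standard reason code ${code}`;
  }
  return null;
}
```

Uniqueness within the tenant would additionally be enforced by a database constraint; the sketch covers only the format and standard-code checks.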
4.7.4 Business Rules Summary
- All adjustments require manager approval. Positive adjustments (found stock), negative adjustments (shrinkage, damage), and zero-net adjustments (reclassification) all require explicit manager approval before inventory quantities change. There is no auto-approval threshold.
- Adjustment status flow: `PENDING` (created, awaiting approval) -> `APPROVED` (manager approved, inventory updated) or `REJECTED` (manager rejected, inventory unchanged).
- Inventory is not changed until approval. The `PENDING` adjustment is a request only. The physical inventory quantity in the system remains unchanged until a manager explicitly approves the adjustment.
- `WRITE_OFF` adjustments require cost documentation. When the reason code is `WRITE_OFF`, the system calculates and records the `cost_impact` field for accounting reconciliation. The manager can see the dollar impact before approving.
- All approved adjustments are logged as `ADJUSTMENT_UP` (positive qty_change) or `ADJUSTMENT_DOWN` (negative qty_change) movements in the movement history audit trail.
- Rejected adjustments are preserved. Rejected adjustments remain in the system for audit purposes. They are not deleted. The rejection reason is recorded.
- Concurrent adjustment protection: If two staff members submit adjustments for the same product at the same location, the manager sees both pending adjustments and can approve or reject each independently. The system recalculates the inventory impact at approval time based on the current quantity, not the quantity at request time.
4.7.5 Reports: Inventory Adjustments
| Report | Purpose | Key Data Fields |
|---|---|---|
| Adjustment History | All manual inventory adjustments | Adjustment number, product, location, qty change, reason code (standard/custom), requested by, approved/rejected by, date, cost impact |
| Pending Adjustments | Adjustments awaiting manager approval | Adjustment number, product, location, qty change, reason code, requested by, request date, days pending |
| Shrinkage Report | Track inventory loss by reason code and location | Period, location, reason code, qty lost, value at cost, % of total inventory value |
| Reason Code Analysis | Frequency and impact of each reason code | Reason code, adjustment count, total qty impact, total cost impact, avg approval time, rejection rate |
| Custom Reason Code Usage | Track usage of tenant-defined reason codes | Custom code, display name, adjustment count, total qty impact, last used date |
4.8 Inter-Store Transfers
Scope: Moving inventory between store locations and HQ warehouse with full workflow tracking, variance detection on receipt, and auto-rebalancing recommendations based on sales velocity analysis. This section covers bi-directional transfer initiation (HQ push and store pull), manual allocation for scarce items, and auto-suggest workflows. The customer-facing paid transfer request (from Section 1.7) feeds into this system as the triggering event.
4.8.1 Transfer State Machine
The transfer lifecycle supports 10 states covering the full journey from request through completion, including rejection and cancellation paths.
stateDiagram-v2
[*] --> REQUESTED: Transfer Requested
REQUESTED --> APPROVED: Source Manager Approves
REQUESTED --> REJECTED: Source Manager Rejects
APPROVED --> PICKING: Pick List Generated
PICKING --> SHIPPED: Items Shipped
SHIPPED --> IN_TRANSIT: Carrier Confirmed Pickup
IN_TRANSIT --> RECEIVED: Destination Receives
RECEIVED --> COMPLETED: All Items Verified
APPROVED --> CANCELLED: Cancelled After Approval
REJECTED --> CLOSED: No Further Action
CANCELLED --> CLOSED: No Further Action
note right of REQUESTED
Destination store or HQ requests stock
OR HQ pushes stock to store
Awaiting source approval
end note
note right of APPROVED
Source manager authorized the transfer
Pick list ready for warehouse/store
end note
note right of PICKING
Source store picking items
Inventory not yet decremented
end note
note right of SHIPPED
Items handed to carrier or internal transport
Source inventory decremented
Tracking number recorded
end note
note right of IN_TRANSIT
Carrier confirmed pickup
Items between locations
end note
note right of RECEIVED
Destination received and is verifying
Variances being recorded
end note
note right of COMPLETED
All items accounted for
Destination inventory incremented
end note
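The state machine above can be encoded as a transition table so the backend rejects illegal status changes. A sketch — the names are illustrative; only the transitions themselves come from the diagram:

```typescript
type TransferStatus =
  | "REQUESTED" | "APPROVED" | "REJECTED" | "PICKING" | "SHIPPED"
  | "IN_TRANSIT" | "RECEIVED" | "COMPLETED" | "CANCELLED" | "CLOSED";

// Allowed transitions, mirroring the state diagram above.
const TRANSITIONS: Record<TransferStatus, TransferStatus[]> = {
  REQUESTED:  ["APPROVED", "REJECTED"],
  APPROVED:   ["PICKING", "CANCELLED"],
  REJECTED:   ["CLOSED"],
  PICKING:    ["SHIPPED"],
  SHIPPED:    ["IN_TRANSIT"],
  IN_TRANSIT: ["RECEIVED"],
  RECEIVED:   ["COMPLETED"],
  COMPLETED:  [],          // terminal
  CANCELLED:  ["CLOSED"],
  CLOSED:     [],          // terminal
};

function canTransition(from: TransferStatus, to: TransferStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Keeping the table as data (rather than scattered `if` checks) makes the diagram and the enforcement logic easy to keep in sync.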
4.8.2 Transfer Initiation Directions
Transfers can be initiated in two directions. Both directions enter the state machine at the same REQUESTED state.
Pull Model (Store Requests from Source)
A destination store identifies a need (low stock, customer demand, auto-suggest alert) and creates a transfer request specifying the source location and desired items. The source manager reviews and approves or rejects.
Use cases:
- Store manager sees low stock on a bestseller and requests units from HQ or another store.
- Customer requests an item available at another location (paid transfer from Section 1.7).
- System auto-suggest identifies an imbalance and the destination manager initiates the transfer.
Push Model (HQ Pushes to Stores)
HQ warehouse staff or a regional manager initiates a transfer from HQ to one or more stores. The HQ manager acts as both requester and approver, so the transfer can skip the approval wait if the same user has both roles.
Use cases:
- New seasonal inventory arrives at HQ and is distributed to stores.
- HQ manager reviews rebalancing suggestions and pushes stock to understocked stores.
- Vendor replacement shipment received at HQ needs distribution to affected stores.
Initiation Direction Data
The transfer header includes a direction field to distinguish the two models:
| Field | Type | Required | Description |
|---|---|---|---|
direction | Enum | Yes | PULL (destination requests) or PUSH (source initiates) |
initiated_by_location_id | UUID | Yes | Location that created the transfer (source for PUSH, destination for PULL) |
Business Rules for Direction:
- PULL transfers require source manager approval before proceeding to PICKING.
- PUSH transfers from HQ may be auto-approved if the initiating user holds the `inventory_manager` or `admin` role at the source location.
- PUSH transfers between peer stores (non-HQ) still require source manager approval.
- Both directions produce identical downstream workflow (PICKING through COMPLETED).
4.8.3 Transfer Data Model
Transfer Header
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
transfer_number | String | Yes | Auto-generated: TRF-{YEAR}-{SEQ} (e.g., TRF-2026-00088) |
source_location_id | UUID | Yes | Location sending stock |
destination_location_id | UUID | Yes | Location receiving stock |
direction | Enum | Yes | PULL (destination requests) or PUSH (source initiates) |
initiated_by_location_id | UUID | Yes | Location that created the transfer |
status | Enum | Yes | REQUESTED, APPROVED, REJECTED, PICKING, SHIPPED, IN_TRANSIT, RECEIVED, COMPLETED, CANCELLED, CLOSED |
priority | Enum | No | NORMAL, URGENT, CUSTOMER_REQUEST (default: NORMAL) |
requested_by | UUID | Yes | Staff member who initiated the request |
approved_by | UUID | No | Manager who approved/rejected |
shipped_date | Date | No | Date items were shipped |
received_date | Date | No | Date items were received |
tracking_number | String | No | Carrier tracking number |
carrier | String | No | Carrier name (e.g., internal, UPS, FedEx) |
estimated_arrival | Date | No | Expected delivery date |
auto_suggest_id | UUID | No | FK to auto-suggest recommendation that triggered this transfer (null if manually created) |
notes | Text | No | Transfer notes |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Transfer Line Items
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
transfer_id | UUID | Yes | Reference to parent transfer |
product_id | UUID | Yes | Product being transferred |
variant_id | UUID | No | Specific variant (if applicable) |
qty_requested | Integer | Yes | Quantity requested by destination |
qty_shipped | Integer | No | Actual quantity shipped by source |
qty_received | Integer | No | Quantity verified at destination |
variance | Integer | Computed | Calculated: qty_received - qty_shipped |
variance_notes | Text | No | Explanation of any variance |
condition_on_receive | Enum | No | GOOD, DAMAGED, WRONG_ITEM |
4.8.4 Transfer Workflow (Pull Model)
sequenceDiagram
autonumber
participant DST as Destination Staff
participant SRC_M as Source Manager
participant SRC as Source Staff
participant UI as Transfer UI
participant API as Backend
participant DB as DB
Note over DST, DB: Step 1: Request Transfer (Pull)
DST->>UI: Click "Request Transfer"
DST->>UI: Select Source Location
DST->>UI: Add Products & Quantities
UI->>API: POST /transfers
API->>DB: Create Transfer (Status: REQUESTED, Direction: PULL)
API-->>DST: Transfer #TRF-2026-00088 Created
API-->>SRC_M: Notification: "Transfer request from Store B"
Note over SRC_M, DB: Step 2: Approve/Reject
SRC_M->>UI: Review Transfer Request
SRC_M->>UI: Check Source Stock Availability
alt Approve
SRC_M->>UI: Click "Approve"
UI->>API: POST /transfers/{id}/approve
API->>DB: Update Status: APPROVED
API->>DB: Generate Pick List
else Reject
SRC_M->>UI: Click "Reject" + Enter Reason
UI->>API: POST /transfers/{id}/reject
API->>DB: Update Status: REJECTED -> CLOSED
end
Note over SRC, DB: Step 3: Pick & Ship
SRC->>UI: Open Pick List
SRC->>UI: Pick Items (Scan to Verify)
SRC->>UI: Enter Qty Shipped per Line
SRC->>UI: Enter Tracking Number (if applicable)
SRC->>UI: Click "Ship"
UI->>API: POST /transfers/{id}/ship
API->>DB: Decrement Source Inventory
API->>DB: Update Status: SHIPPED
API->>DB: Log TRANSFER_OUT Movement (Section 4.12)
Note over DST, DB: Step 4: Receive & Verify
DST->>UI: Open Transfer -> Click "Receive"
loop Verify Each Line Item
DST->>UI: Scan/Count Received Items
DST->>UI: Enter Qty Received per Line
opt Variance
UI-->>DST: "Shipped: 20, Received: 18"
DST->>UI: Enter Variance Notes
end
opt Damaged Items
DST->>UI: Mark Condition: DAMAGED
end
end
DST->>UI: Click "Confirm Receive"
UI->>API: POST /transfers/{id}/receive
par Post-Receive
API->>DB: Increment Destination Inventory
API->>DB: Log TRANSFER_IN Movement (Section 4.12)
API->>DB: Record Variances
API->>DB: Update Status: COMPLETED
end
API-->>DST: Transfer Complete
4.8.6 Transfer Workflow (Push Model)
sequenceDiagram
autonumber
participant HQ as HQ Manager
participant SRC as HQ Warehouse Staff
participant UI as Transfer UI
participant API as Backend
participant DB as DB
participant DST as Destination Staff
Note over HQ, DST: Step 1: HQ Initiates Push Transfer
HQ->>UI: Click "Push Stock to Store"
HQ->>UI: Select Destination Store(s)
HQ->>UI: Add Products & Quantities
UI->>API: POST /transfers
API->>DB: Create Transfer (Status: REQUESTED, Direction: PUSH)
alt HQ Manager Has Approval Authority
API->>DB: Auto-Approve (Status: APPROVED)
API->>DB: Generate Pick List
API-->>HQ: Transfer Auto-Approved, Pick List Ready
else Requires Separate Approval
API-->>HQ: Transfer Created, Awaiting Approval
end
Note over SRC, DB: Step 2: Pick & Ship (same as Pull model)
SRC->>UI: Open Pick List
SRC->>UI: Pick Items (Scan to Verify)
SRC->>UI: Enter Qty Shipped per Line
SRC->>UI: Enter Tracking Number
SRC->>UI: Click "Ship"
UI->>API: POST /transfers/{id}/ship
API->>DB: Decrement Source Inventory
API->>DB: Update Status: SHIPPED
API->>DB: Log TRANSFER_OUT Movement (Section 4.12)
API-->>DST: Notification: "Incoming transfer from HQ"
Note over DST, DB: Step 3: Receive & Verify (same as Pull model)
DST->>UI: Open Transfer -> Click "Receive"
DST->>UI: Verify Items, Enter Qty Received
UI->>API: POST /transfers/{id}/receive
API->>DB: Increment Destination Inventory
API->>DB: Log TRANSFER_IN Movement (Section 4.12)
API->>DB: Update Status: COMPLETED
API-->>DST: Transfer Complete
4.8.7 Auto-Suggest Transfers
The system continuously monitors inventory distribution relative to sales velocity across all locations and generates transfer suggestions when significant imbalances are detected.
Auto-Suggest Algorithm
Step 1: Calculate Days of Supply per Product per Location
days_of_supply = qty_on_hand / avg_daily_velocity
Where avg_daily_velocity is the trailing 30-day sales average (configurable per tenant).
Step 2: Detect Imbalances
An imbalance is flagged when:
- One location has > 60 days of supply (overstocked threshold, configurable)
- Another location has < 15 days of supply (understocked threshold, configurable)
- Both locations are active retail stores (HQ warehouse uses separate thresholds)
Step 3: Calculate Suggested Quantity
target_days_of_supply = 30 (configurable per tenant)
qty_needed = (target_days_of_supply - current_days_of_supply) x avg_daily_velocity
qty_available_to_send = qty_on_hand - (target_days_of_supply x avg_daily_velocity)
suggested_qty = MIN(qty_needed at destination, qty_available_to_send from source)
The algorithm ensures the source location retains at least target_days_of_supply worth of stock after the transfer.
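Steps 1 through 3 combine into a single suggested-quantity function. A sketch under the stated defaults (30-day target; names are illustrative). Two details are assumptions the formulas above leave implicit: a guard for zero-velocity products, and clamping the result to a non-negative whole number:

```typescript
interface LocationStock {
  qtyOnHand: number;
  avgDailyVelocity: number; // trailing 30-day sales average
}

// days_of_supply = qty_on_hand / avg_daily_velocity
function daysOfSupply(loc: LocationStock): number {
  // Guard (assumption): a zero-velocity product has effectively
  // infinite supply and should never trigger a suggestion.
  if (loc.avgDailyVelocity === 0) return Infinity;
  return loc.qtyOnHand / loc.avgDailyVelocity;
}

// suggested_qty = MIN(qty_needed at destination,
//                     qty_available_to_send from source)
function suggestedQty(
  source: LocationStock,
  destination: LocationStock,
  targetDays: number = 30, // configurable per tenant
): number {
  const qtyNeeded =
    (targetDays - daysOfSupply(destination)) * destination.avgDailyVelocity;
  const qtyAvailableToSend =
    source.qtyOnHand - targetDays * source.avgDailyVelocity;
  // Clamp to a non-negative integer (assumption: the spec formulas
  // leave rounding unspecified).
  return Math.max(0, Math.floor(Math.min(qtyNeeded, qtyAvailableToSend)));
}
```

For example, a source holding 170 units at 2 units/day (85 days of supply) and a destination holding 8 units at 1 unit/day (8 days of supply) yields a suggestion of 22 units, leaving the source with well over its 30-day target.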
Step 4: Generate Suggestion
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
product_id | UUID | Yes | Product with detected imbalance |
variant_id | UUID | No | Specific variant (if applicable) |
source_location_id | UUID | Yes | Overstocked location (sender) |
destination_location_id | UUID | Yes | Understocked location (receiver) |
suggested_qty | Integer | Yes | Recommended transfer quantity |
source_days_of_supply | Decimal(8,1) | Yes | Current days of supply at source |
destination_days_of_supply | Decimal(8,1) | Yes | Current days of supply at destination |
source_velocity | Decimal(8,2) | Yes | Avg daily sales at source |
destination_velocity | Decimal(8,2) | Yes | Avg daily sales at destination |
status | Enum | Yes | PENDING, APPROVED, REJECTED, EXPIRED, CONVERTED |
reviewed_by | UUID | No | Manager who reviewed the suggestion |
reviewed_at | DateTime | No | Timestamp of review |
transfer_id | UUID | No | FK to transfer created from this suggestion (if approved) |
batch_id | UUID | Yes | Groups suggestions from the same analysis run |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Timestamp of suggestion generation |
Auto-Suggest Workflow
sequenceDiagram
autonumber
participant CRON as Scheduler (Weekly/On-Demand)
participant SVC as Rebalancing Service
participant DB as DB
participant MGR as Manager
participant UI as Dashboard UI
participant API as Backend
Note over CRON, API: Phase 1: Generate Suggestions
CRON->>SVC: Trigger Rebalancing Analysis
SVC->>DB: Query qty_on_hand per product per location
SVC->>DB: Query trailing 30-day sales velocity per product per location
SVC->>SVC: Calculate days_of_supply for each product/location
SVC->>SVC: Detect imbalances (>60 days vs <15 days)
SVC->>SVC: Calculate suggested transfer quantities
SVC->>DB: Insert suggestions (Status: PENDING, batch_id: {batch})
SVC-->>MGR: Notification: "12 rebalancing suggestions ready for review"
Note over MGR, API: Phase 2: Manager Review
MGR->>UI: Open Rebalancing Dashboard
UI->>API: GET /transfer-suggestions?status=PENDING
API-->>UI: Return suggestion list with velocity data
loop Review Each Suggestion
UI-->>MGR: Show: Product, Source (85 days supply), Dest (8 days supply), Suggested Qty: 15
alt Approve Suggestion
MGR->>UI: Click "Approve" (may adjust qty)
UI->>API: POST /transfer-suggestions/{id}/approve
API->>DB: Update suggestion status: APPROVED
API->>DB: Create Transfer (Status: REQUESTED, auto_suggest_id: {suggestion_id})
API-->>MGR: Transfer #TRF-2026-00090 Created
else Reject Suggestion
MGR->>UI: Click "Reject" + Enter Reason
UI->>API: POST /transfer-suggestions/{id}/reject
API->>DB: Update suggestion status: REJECTED
end
end
Note over MGR, API: Phase 3: Approve All (Batch)
opt Batch Approve
MGR->>UI: Click "Approve All Remaining"
UI->>API: POST /transfer-suggestions/batch/{batch_id}/approve
API->>DB: Update all PENDING to APPROVED
API->>DB: Create Transfer records for each
API-->>MGR: "8 transfers created from suggestions"
end
Note over SVC, DB: Phase 4: Expiration
SVC->>DB: Expire PENDING suggestions older than 7 days
SVC->>DB: Update status: EXPIRED
Configuration:
| Setting | Default | Description |
|---|---|---|
rebalance_schedule | Weekly (Monday 6:00 AM) | When auto-suggest analysis runs |
overstocked_threshold_days | 60 | Days of supply above which a location is considered overstocked |
understocked_threshold_days | 15 | Days of supply below which a location is considered understocked |
target_days_of_supply | 30 | Target days of supply after rebalancing |
velocity_lookback_days | 30 | Trailing days used to calculate average daily velocity |
suggestion_expiry_days | 7 | Days before unreviewed suggestions expire |
min_suggested_qty | 1 | Minimum quantity for a suggestion to be generated |
hq_overstocked_threshold_days | 90 | Separate overstocked threshold for HQ warehouse |
Business Rules:
- Auto-suggest never creates transfers automatically. All suggestions require manager review.
- Manager can modify the suggested quantity before approving (e.g., reduce from 15 to 10).
- Suggestions expire after `suggestion_expiry_days` if not reviewed. Expired suggestions are excluded from the next run to avoid duplicates.
- The algorithm excludes products with zero velocity at both source and destination (dead stock requires manual review, not rebalancing).
- HQ warehouse uses separate thresholds because HQ holds distribution stock, not retail selling stock.
- If multiple destinations need the same product from the same source, the system generates one suggestion per source-destination pair.
4.8.8 Manual Allocation for Scarce Items
When multiple stores need the same scarce item and HQ has limited stock, the system does not attempt to automatically split the available quantity. Instead, a manager manually decides the allocation based on business judgment.
Scarce Item Scenario
A scarce item condition exists when:
- Two or more stores have submitted transfer requests (or auto-suggest has flagged multiple destinations) for the same product.
- The source location (typically HQ) does not have enough stock to fulfill all requests in full.
Allocation Dashboard
When a scarce item condition is detected, the system presents an Allocation Dashboard that consolidates all competing requests:
| Column | Description |
|---|---|
| Product | SKU, name, variant |
| HQ Available Qty | Current on-hand at HQ (source) |
| Store | Each requesting store listed as a row |
| Requested Qty | Quantity each store is requesting |
| Current On-Hand | Quantity each store currently holds |
| Days of Supply | Calculated days of supply at each store |
| 30-Day Velocity | Average daily sales at each store |
| Allocated Qty | Editable field – manager enters allocation per store |
Example:
| Product | HQ Available | Store | Requested | On-Hand | Days of Supply | 30-Day Velocity | Allocated |
|---|---|---|---|---|---|---|---|
| BLK-TEE-M | 20 | Store GM | 12 | 2 | 4 days | 0.5/day | ___ |
| BLK-TEE-M | 20 | Store HM | 15 | 1 | 2 days | 0.5/day | ___ |
| BLK-TEE-M | 20 | Store NM | 8 | 5 | 12 days | 0.4/day | ___ |
| Total Requested | | | 35 | | | | ___ / 20 |
The manager sees that total requests (35) exceed available stock (20) and enters allocations that sum to at most 20. The system validates that SUM(allocated_qty) <= available_qty.
Business Rules:
- No automated splitting. The system presents data; the human decides.
- Manager can allocate zero to any store (decline that store’s request entirely).
- The allocation creates individual transfers for each store receiving stock.
- Priority field on each transfer request (`NORMAL`, `URGENT`, `CUSTOMER_REQUEST`) is displayed to help inform the manager’s decision.
- Stores with `CUSTOMER_REQUEST` priority (paid customer transfers from Section 1.7) should generally be prioritized to avoid customer disappointment.
- The allocation dashboard is accessible from the Transfer Management screen when the system detects competing requests for the same product.
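The `SUM(allocated_qty) <= available_qty` validation can be sketched as follows (a minimal illustration; the function name and error wording are assumptions):

```python
def validate_allocation(available_qty: int, allocations: dict[str, int]) -> list[str]:
    """Validate a manual scarce-item allocation; returns a list of error messages."""
    errors = []
    if any(q < 0 for q in allocations.values()):
        errors.append("Allocated quantities cannot be negative.")
    total = sum(allocations.values())
    if total > available_qty:
        errors.append(
            f"Total allocated ({total}) exceeds available stock ({available_qty})."
        )
    return errors  # an empty list means the allocation can be saved


# The BLK-TEE-M example above: HQ has 20 units; stores requested 35 in total
print(validate_allocation(20, {"Store GM": 10, "Store HM": 8, "Store NM": 2}))  # []
print(validate_allocation(20, {"Store GM": 12, "Store HM": 15, "Store NM": 8}))
```

Allocating zero to a store is valid, matching the rule that a manager may decline a store’s request entirely.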
4.8.9 Variance Handling
When the destination receives a different quantity than was shipped, a variance is recorded.
Variance Types:
| Variance | Description | Resolution |
|---|---|---|
| Short | Received less than shipped (e.g., shipped 20, received 18) | Record variance notes. Investigate: lost in transit, miscount at source, or carrier damage. Source location does not get stock back automatically; requires adjustment (Section 4.7) if items are confirmed lost. |
| Over | Received more than shipped (e.g., shipped 20, received 22) | Rare. Likely miscount at source. Record variance. Destination inventory reflects actual received count. |
| Damaged | Items received in damaged condition | Record condition as DAMAGED. Damaged items enter damaged inventory status at destination. May trigger RMA (Section 4.9) or write-off. |
| Wrong Item | Different product received than expected | Record condition as WRONG_ITEM. Requires follow-up: return to source or create adjustment at both locations. |
Business Rules:
- Variance percentage > 10% triggers a notification to both source and destination managers.
- All variances are logged in the Product Movement History (Section 4.12) with reason codes.
- Unresolved variances appear on the Transfer Variance Report for management review.
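The 10% notification threshold above can be expressed as a small check (a sketch; names are illustrative, and the threshold would come from tenant configuration in practice):

```python
def variance_pct(qty_shipped: int, qty_received: int) -> float:
    """Variance as an absolute percentage of the shipped quantity."""
    if qty_shipped == 0:
        return 0.0
    return abs(qty_received - qty_shipped) / qty_shipped * 100


def needs_manager_notification(qty_shipped: int, qty_received: int,
                               threshold_pct: float = 10.0) -> bool:
    """True when the variance exceeds the notification threshold."""
    return variance_pct(qty_shipped, qty_received) > threshold_pct


print(variance_pct(20, 18))                # 10.0 -> exactly at threshold, no alert
print(needs_manager_notification(20, 17))  # 15% short -> True
```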
4.8.10 Carrier Tracking
For transfers shipped via external carriers (not internal transport), tracking information is recorded.
| Field | Type | Required | Description |
|---|---|---|---|
carrier | String | No | Carrier name: INTERNAL, UPS, FEDEX, USPS, DHL, OTHER |
tracking_number | String | No | Carrier tracking number |
estimated_arrival | Date | No | Expected delivery date |
actual_arrival | Date | No | Actual delivery date (set on receive) |
shipping_cost | Decimal(10,2) | No | Cost of shipment (for internal cost tracking) |
Business Rules:
- Carrier and tracking number are required when `carrier` is not `INTERNAL`.
- The `INTERNAL` carrier indicates the transfer is hand-delivered by staff or via company vehicle.
- The system does not integrate with carrier tracking APIs in v1. Tracking numbers are recorded for manual lookup.
4.8.11 Reports: Transfers
| Report | Purpose | Key Data Fields |
|---|---|---|
| Open Transfer Report | Track in-progress transfers | Transfer number, source, destination, direction (Push/Pull), status, item count, total units, days in transit, priority |
| Transfer Variance Report | Discrepancies between shipped and received | Transfer number, product, qty shipped, qty received, variance, variance %, condition, notes |
| Transfer Volume Report | Volume of transfers between locations | Source, destination, transfer count, total units transferred, total value, period, direction breakdown |
| Rebalancing Suggestions | Auto-generated transfer recommendations | Product, source location (days of supply), destination location (days of supply), suggested qty, current velocity data, suggestion status |
| Scarce Item Allocation Log | Audit trail of manual allocation decisions | Product, HQ available qty, stores requesting, qty allocated per store, manager who allocated, date |
| Transfer Lead Time | Average time from request to completion | Source, destination, carrier, avg days requested-to-shipped, avg days shipped-to-received, avg total lead time |
4.9 Vendor RMA & Returns
Scope: Managing the return of defective, damaged, incorrect, or overstock merchandise to vendors. The RMA (Return Merchandise Authorization) workflow tracks the complete lifecycle from initial request through vendor approval, shipment back to the vendor, and receipt of credit or replacement inventory. This section covers both defective/quality RMA returns and overstock returns, which follow distinct workflows.
4.9.1 RMA State Machine
stateDiagram-v2
[*] --> DRAFT: RMA Created
DRAFT --> SUBMITTED: Submit to Vendor
SUBMITTED --> VENDOR_APPROVED: Vendor Approves Return
SUBMITTED --> VENDOR_REJECTED: Vendor Rejects Return
VENDOR_APPROVED --> SHIPPED_BACK: Items Shipped to Vendor
SHIPPED_BACK --> CREDIT_RECEIVED: Vendor Issues Credit Memo
SHIPPED_BACK --> REPLACEMENT_RECEIVED: Vendor Sends Replacement
CREDIT_RECEIVED --> CLOSED: RMA Finalized
REPLACEMENT_RECEIVED --> CLOSED: RMA Finalized
VENDOR_REJECTED --> CLOSED: RMA Closed (No Action)
note right of DRAFT
Staff assembles return list
Line items editable
No inventory impact
end note
note right of SUBMITTED
Sent to vendor for review
Awaiting vendor response
Line items locked
end note
note right of VENDOR_APPROVED
Vendor authorized return
Ready to ship back
end note
note right of SHIPPED_BACK
Items in transit to vendor
Inventory decremented at source
Tracking number recorded
end note
note right of CLOSED
Credit applied or replacement received
Audit trail complete
end note
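The state machine above can be enforced with a simple transition table (a minimal sketch; the map mirrors the diagram, while names like `transition_rma` are illustrative):

```python
# Allowed RMA status transitions, mirroring the state diagram above
RMA_TRANSITIONS: dict[str, set[str]] = {
    "DRAFT": {"SUBMITTED"},
    "SUBMITTED": {"VENDOR_APPROVED", "VENDOR_REJECTED"},
    "VENDOR_APPROVED": {"SHIPPED_BACK"},
    "SHIPPED_BACK": {"CREDIT_RECEIVED", "REPLACEMENT_RECEIVED"},
    "CREDIT_RECEIVED": {"CLOSED"},
    "REPLACEMENT_RECEIVED": {"CLOSED"},
    "VENDOR_REJECTED": {"CLOSED"},
    "CLOSED": set(),  # terminal state
}


def transition_rma(current: str, target: str) -> str:
    """Return the new status, or raise if the transition is not allowed."""
    if target not in RMA_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal RMA transition: {current} -> {target}")
    return target


print(transition_rma("SHIPPED_BACK", "CREDIT_RECEIVED"))  # CREDIT_RECEIVED
```

Centralizing the table this way keeps the API endpoints (`/submit`, `/approve`, `/ship`, `/credit`, `/close`) from each hard-coding their own status checks.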
4.9.2 RMA Data Model
RMA Header
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
rma_number | String | Yes | Auto-generated: RMA-{YEAR}-{SEQ} (e.g., RMA-2026-00015) |
vendor_id | UUID | Yes | Reference to vendor |
source_location_id | UUID | Yes | Location from which items are being returned |
status | Enum | Yes | DRAFT, SUBMITTED, VENDOR_APPROVED, VENDOR_REJECTED, SHIPPED_BACK, CREDIT_RECEIVED, REPLACEMENT_RECEIVED, CLOSED |
reason | Enum | Yes | DEFECTIVE, DAMAGED, WRONG_ITEM, OVERSTOCK |
rma_type | Enum | Yes | DEFECTIVE_RETURN or OVERSTOCK_RETURN (see Section 4.9.6) |
notes | Text | No | Free-form notes about the return |
vendor_agreement_ref | String | No | Reference to vendor agreement or pre-authorization (required for OVERSTOCK returns) |
created_by | UUID | Yes | Staff member who created the RMA |
approved_by | UUID | No | Vendor contact or reference who approved |
ship_date | Date | No | Date items were shipped back to vendor |
tracking_number | String | No | Carrier tracking number for return shipment |
credit_amount | Decimal(10,2) | No | Credit amount issued by vendor |
restocking_fee_pct | Decimal(5,2) | No | Vendor restocking fee percentage (applicable to OVERSTOCK returns) |
restocking_fee_amount | Decimal(10,2) | No | Calculated restocking fee deducted from credit |
net_credit_amount | Decimal(10,2) | No | Calculated: credit_amount - restocking_fee_amount |
replacement_po_id | UUID | No | FK to replacement purchase order (if vendor sends replacement) |
tenant_id | UUID | Yes | Owning tenant |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
RMA Line Items
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
rma_id | UUID | Yes | Reference to parent RMA |
product_id | UUID | Yes | Reference to product being returned |
variant_id | UUID | No | Reference to specific variant (if applicable) |
qty | Integer | Yes | Quantity being returned to vendor |
unit_cost | Decimal(10,2) | Yes | Cost per unit (from original PO or weighted avg cost) |
line_total | Decimal(10,2) | Yes | Calculated: qty x unit_cost |
condition_notes | Text | No | Description of item condition |
inspection_result | Enum | No | CONFIRMED_DEFECTIVE, COSMETIC_DAMAGE, NOT_AS_DESCRIBED, NOT_INSPECTED (for overstock items) |
original_po_id | UUID | No | Reference to the original purchase order that delivered this item (for traceability) |
4.9.3 RMA Workflow (Defective Returns)
sequenceDiagram
autonumber
participant U as Staff
participant UI as RMA UI
participant API as Backend
participant DB as DB
participant V as Vendor
Note over U, V: Step 1: Create RMA
U->>UI: Click "New Vendor RMA"
UI->>UI: Select Vendor
UI->>API: GET /vendors/{id}/products
API-->>UI: Return Vendor's Products
loop Add RMA Line Items
U->>UI: Select Product
U->>UI: Enter Quantity to Return
U->>UI: Select Reason (Defective/Damaged/Wrong Item)
U->>UI: Enter Condition Notes
U->>UI: Record Inspection Result
end
U->>UI: Click "Save Draft"
UI->>API: POST /vendor-rma
API->>DB: Create RMA Record (Status: DRAFT, Type: DEFECTIVE_RETURN)
API-->>UI: RMA #RMA-2026-00015 Created
Note over U, V: Step 2: Submit to Vendor
U->>UI: Review RMA -> Click "Submit"
UI->>API: POST /vendor-rma/{id}/submit
alt Email Submission
API->>V: Send RMA via Email (PDF attachment)
else Manual Submission
API-->>UI: "RMA marked Submitted - contact vendor manually"
end
API->>DB: Update Status: SUBMITTED
API-->>UI: RMA Submitted
Note over V, U: Step 3: Vendor Response
alt Vendor Approves
U->>UI: Mark RMA as "Vendor Approved"
UI->>API: POST /vendor-rma/{id}/approve
API->>DB: Update Status: VENDOR_APPROVED
else Vendor Rejects
U->>UI: Mark RMA as "Vendor Rejected"
U->>UI: Enter Rejection Reason
UI->>API: POST /vendor-rma/{id}/reject
API->>DB: Update Status: VENDOR_REJECTED
API->>DB: Update Status: CLOSED
end
Note over U, V: Step 4: Ship Items Back
U->>UI: Enter Tracking Number & Ship Date
UI->>API: POST /vendor-rma/{id}/ship
API->>DB: Decrement Inventory at Source Location
API->>DB: Log RMA_OUT Movement (Section 4.12)
API->>DB: Update Status: SHIPPED_BACK
API-->>UI: Items Shipped
Note over V, DB: Step 5: Vendor Resolution
alt Credit Memo
V-->>U: Vendor Issues Credit Memo
U->>UI: Enter Credit Amount
UI->>API: POST /vendor-rma/{id}/credit
API->>DB: Record Credit Amount
API->>DB: Update Vendor Account Balance
API->>DB: Update Status: CREDIT_RECEIVED
else Replacement Shipment
V-->>U: Vendor Sends Replacement
U->>UI: Click "Receive Replacement"
UI->>API: POST /vendor-rma/{id}/replacement
API->>DB: Create Linked Purchase Order
API->>DB: Increment Inventory (Replacement Items)
API->>DB: Log RMA_IN Movement (Section 4.12)
API->>DB: Update Status: REPLACEMENT_RECEIVED
end
Note over U, DB: Step 6: Close RMA
U->>UI: Click "Close RMA"
UI->>API: POST /vendor-rma/{id}/close
API->>DB: Update Status: CLOSED
API-->>UI: RMA Closed
4.9.4 Business Rules (Defective Returns)
- RMA can only be created for products with an active vendor relationship (exists in the `vendor_product` table with an `ACTIVE` vendor).
- Items must be in `AVAILABLE` or `DAMAGED` inventory status to be placed on an RMA. Items in `IN_TRANSIT`, `RESERVED`, or `QUARANTINE` cannot be returned until their status resolves.
- Inventory is decremented from the source location when the RMA status changes to `SHIPPED_BACK`, not before. This ensures accurate on-hand counts until items physically leave.
- Credit amounts are reconciled with the vendor’s account balance. If the tenant tracks payables, the credit reduces the outstanding amount owed to the vendor.
- Replacement purchase orders link back to the original RMA via `replacement_po_id` for a complete audit trail.
- RMA numbers auto-increment per tenant: `RMA-{YEAR}-{SEQUENCE}`.
- Each RMA line item must have an `inspection_result` recorded before the RMA can be submitted. This ensures quality documentation accompanies the return request.
- All inventory movements (out for RMA, in for replacement) are recorded in the Product Movement History (Section 4.12) with event types `RMA_OUT` and `RMA_IN`.
4.9.5 Reports: Vendor RMA
| Report | Purpose | Key Data Fields |
|---|---|---|
| Open RMA Report | Track outstanding vendor returns | RMA number, vendor, status, rma_type, total value, days open, last action date |
| Vendor Return Rate | Quality tracking per vendor | Vendor, RMA count, units returned, return % of total purchased, top reasons, defective vs overstock breakdown |
| RMA Aging | Identify stalled returns | RMA number, vendor, current status, days in current status, last action, escalation flag |
| RMA Credit Reconciliation | Track credits received vs expected | RMA number, vendor, expected credit, actual credit, restocking fees, net credit, variance, reconciliation status |
| Overstock Return Summary | Track overstock-specific returns | Vendor, period, units returned as overstock, gross credit, restocking fees paid, net credit received |
4.9.6 Overstock Return Workflow
Overstock returns are a fundamentally different business process from defective RMAs. They cover the negotiated return of unsold seasonal, end-of-line, or slow-moving merchandise, where the vendor has pre-agreed to accept the goods back, often subject to a restocking fee.
Key Differences from Defective RMA
| Aspect | Defective RMA | Overstock Return |
|---|---|---|
| Trigger | Quality issue discovered in stock | Excess inventory / end of season |
| Inspection | Required – each item inspected for defect | Not required – items are in sellable condition |
| Vendor Pre-Agreement | Not always required (warranty claims) | Always required – must reference agreement |
| Restocking Fee | Typically none (vendor’s quality failure) | Common – vendor charges 10-25% restocking fee |
| Credit Calculation | Full original cost or replacement | Negotiated – may differ from original cost |
| Urgency | High (defective items tie up shelf space) | Moderate (planned seasonal transition) |
| Reason Code | DEFECTIVE, DAMAGED, WRONG_ITEM | OVERSTOCK |
| RMA Type | DEFECTIVE_RETURN | OVERSTOCK_RETURN |
Overstock Return Process
sequenceDiagram
autonumber
participant MGR as Store/HQ Manager
participant UI as RMA UI
participant API as Backend
participant DB as DB
participant V as Vendor
Note over MGR, V: Pre-Condition: Vendor Agreement Exists
MGR->>UI: Click "New Overstock Return"
UI->>UI: Select Vendor
UI->>UI: Prompt: "Enter Vendor Agreement Reference"
MGR->>UI: Enter Agreement Ref (e.g., "Email 2026-01-15, 20% restocking agreed")
loop Add Overstock Items
MGR->>UI: Select Product (filter: in-stock, this vendor)
MGR->>UI: Enter Quantity to Return
UI-->>MGR: Show: Unit Cost, Extended Total
end
MGR->>UI: Enter Restocking Fee % (e.g., 20%)
UI->>UI: Calculate: Gross Credit, Restocking Fee, Net Credit
UI-->>MGR: "Gross: $2,500 | Restocking Fee (20%): $500 | Net Credit: $2,000"
MGR->>UI: Click "Save Draft"
UI->>API: POST /vendor-rma
API->>DB: Create RMA (Status: DRAFT, Type: OVERSTOCK_RETURN, Reason: OVERSTOCK)
API-->>UI: RMA #RMA-2026-00032 Created
Note over MGR, V: Submit, Vendor Response, Ship, Credit (same state machine)
MGR->>UI: Submit RMA
UI->>API: POST /vendor-rma/{id}/submit
API->>DB: Update Status: SUBMITTED
V-->>MGR: Vendor Confirms Acceptance
MGR->>UI: Mark Vendor Approved
UI->>API: POST /vendor-rma/{id}/approve
API->>DB: Update Status: VENDOR_APPROVED
MGR->>UI: Ship Items Back (enter tracking)
UI->>API: POST /vendor-rma/{id}/ship
API->>DB: Decrement Inventory
API->>DB: Log RMA_OUT Movement (Section 4.12)
API->>DB: Update Status: SHIPPED_BACK
V-->>MGR: Vendor Issues Credit (net of restocking fee)
MGR->>UI: Enter Credit Received
UI->>API: POST /vendor-rma/{id}/credit
API->>DB: Record credit_amount, restocking_fee_amount, net_credit_amount
API->>DB: Update Vendor Account Balance (net credit)
API->>DB: Update Status: CREDIT_RECEIVED
MGR->>UI: Close RMA
UI->>API: POST /vendor-rma/{id}/close
API->>DB: Update Status: CLOSED
Overstock Return Business Rules
- Vendor agreement required: The `vendor_agreement_ref` field must be populated before an overstock RMA can be submitted. This is a free-text field documenting the pre-agreement (e.g., email reference, contract clause, verbal confirmation date).
- No inspection step: Overstock items set `inspection_result` to `NOT_INSPECTED`. The items are in sellable condition; they are simply excess.
- Restocking fee handling:
  - `restocking_fee_pct` is entered by the user (e.g., 20.00 for 20%).
  - `restocking_fee_amount` is calculated: `SUM(line_totals) x (restocking_fee_pct / 100)`.
  - `net_credit_amount` is calculated: `credit_amount - restocking_fee_amount`.
  - The vendor account balance is credited with the `net_credit_amount`, not the gross amount.
- Credit may differ from cost: The `credit_amount` field on the header is the negotiated total credit from the vendor. This may be less than the sum of line item costs if the vendor negotiated a reduced rate. The system displays both the calculated cost total and the actual credit for variance tracking.
- Reason code enforcement: When `rma_type` is `OVERSTOCK_RETURN`, the `reason` field is automatically set to `OVERSTOCK` and cannot be changed.
- Eligibility: Only items in `AVAILABLE` status can be placed on an overstock return. `DAMAGED` items should go through the defective RMA workflow instead.
- Seasonal timing: Overstock returns are typically created in bulk at end-of-season. The system supports adding many line items (50+) to a single overstock RMA.
4.10 Serial & Lot Tracking
Scope: Tracking individual high-value items by serial number and managing batch/lot numbers for recall readiness and FIFO inventory management. Serial tracking captures the full chain of custody from receiving through sale. Lot tracking enables batch-level recall identification.
4.10.1 Serial Number Tracking
Serial tracking is enabled per product via the serial_tracked boolean flag on the product record. When enabled, serial numbers are captured at two critical points: receiving (inbound) and sale (outbound).
Serial Number Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
product_id | UUID | Yes | Reference to product |
serial_number | String | Yes | Unique serial number (unique per tenant) |
status | Enum | Yes | IN_STOCK, SOLD, RETURNED, RMA, WRITE_OFF |
location_id | UUID | No | Current location (null if sold or written off) |
received_at | DateTime | Yes | Timestamp when serial was first received |
received_via | Enum | Yes | PO_RECEIVE, TRANSFER_RECEIVE, RETURN_TO_STOCK, RMA_REPLACEMENT |
source_document_id | UUID | No | FK to the receiving source document |
sold_at | DateTime | No | Timestamp when sold |
sold_to_customer_id | UUID | No | Customer who purchased (if customer attached to sale) |
sale_order_id | UUID | No | Reference to the sale order |
tenant_id | UUID | Yes | Owning tenant |
Serial Number State Machine
stateDiagram-v2
[*] --> IN_STOCK: Received (PO/Transfer/Return)
IN_STOCK --> SOLD: Sold to Customer
SOLD --> RETURNED: Customer Returns Item
RETURNED --> IN_STOCK: Returned to Available Stock
IN_STOCK --> RMA: Sent Back to Vendor
IN_STOCK --> WRITE_OFF: Damaged Beyond Repair
RMA --> [*]: Vendor Received
WRITE_OFF --> [*]: Removed from Inventory
note right of IN_STOCK
Available for sale
Location tracked
end note
note right of SOLD
Customer association
Order reference
end note
Business Rules:
- Cannot sell a serial-tracked product at POS without scanning or entering the serial number.
- Cannot receive a serial-tracked product without assigning a serial number to each unit.
- Serial numbers are unique per tenant. Duplicate serial numbers for the same product are rejected.
- Customer association: serial -> customer link enables after-sale lookup (“Customer X purchased serial Y on date Z at location W”).
- Serial history is immutable. Status changes are appended, never overwritten.
- Serial number transfers between locations update the `location_id` field and create a movement record in the Product Movement History (Section 4.12).
4.10.2 Lot/Batch Tracking
Lot tracking is enabled per product via the lot_tracked boolean flag. Lot numbers are assigned at receiving and tracked through the sales lifecycle.
Lot Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
product_id | UUID | Yes | Reference to product |
lot_number | String | Yes | Lot/batch identifier (from vendor or manually assigned) |
qty_received | Integer | Yes | Total quantity received in this lot |
qty_on_hand | Integer | Yes | Current quantity remaining |
qty_sold | Integer | Yes | Total quantity sold from this lot |
received_date | Date | Yes | Date lot was received |
expiry_date | Date | No | Expiration date (optional; supported for future non-clothing use cases) |
source_po_id | UUID | No | Reference to purchase order that delivered this lot |
location_id | UUID | Yes | Location holding this lot |
tenant_id | UUID | Yes | Owning tenant |
Business Rules:
- FIFO enforcement: when selling lot-tracked items, the system selects from the oldest lot first (by `received_date`).
- Recall support: given a lot number, the system identifies all units – in stock (with location) and sold (with customer and date) – for recall action.
- Lot numbers can be entered manually or scanned from vendor packaging during receiving.
- Multiple lots of the same product can exist at the same location simultaneously.
- Lot quantities are decremented on sale and incremented on return. The return process associates the returned item back to its original lot where possible.
- Lot transfers between locations create a new lot record at the destination (same lot number, new location) and decrement the source lot’s `qty_on_hand`.
4.10.3 Reports: Serial & Lot
| Report | Purpose | Key Data Fields |
|---|---|---|
| Serial Number Lookup | Find current status and full history of a serial | Serial number, product, current status, current location, customer (if sold), purchase date, receiving source |
| Lot Inventory | Stock on hand by lot | Lot number, product, qty received, qty on hand, qty sold, received date, age (days), location |
| Lot Trace (Recall) | Find all units from a specific lot | Lot number, product, units in stock (by location), units sold (customer, sale date, order number), units returned |
| Serial Warranty Lookup | Customer and purchase info for warranty claims | Serial number, product, customer name, purchase date, purchase location, order number |
4.11 Landed Cost & Costing
Scope: Tracking the true cost of inventory by calculating landed cost (purchase price plus all additional costs to get product to the selling floor) and maintaining weighted average cost across multiple purchase orders. Accurate costing is critical for margin reporting and inventory valuation.
4.11.1 Landed Cost Calculation
Landed cost captures the total acquisition cost per unit, including all expenses beyond the vendor’s unit price.
Components:
| # | Component | Description | Example |
|---|---|---|---|
| 1 | Unit Cost | Vendor price per unit (from PO line item) | $25.00 per unit |
| 2 | Freight/Shipping | Carrier charges allocated per unit | $500 total / 200 units = $2.50/unit |
| 3 | Duties/Tariffs | Import duties allocated per unit (based on product category and origin country) | $300 total / 200 units = $1.50/unit |
| 4 | Customs/Brokerage | Customs clearance fees allocated per unit | $100 total / 200 units = $0.50/unit |
| 5 | Handling | Warehouse handling fees allocated per unit | $80 total / 200 units = $0.40/unit |
Formula:
landed_cost_per_unit = unit_cost + (freight / units) + (duties / units) + (customs / units) + (handling / units)
Example: $25.00 + $2.50 + $1.50 + $0.50 + $0.40 = $29.90 landed cost per unit
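The formula and worked example translate directly into code. A minimal sketch of the `BY_UNIT` case (function and parameter names are illustrative):

```python
def landed_cost_per_unit(unit_cost: float, units: int,
                         freight: float = 0.0, duties: float = 0.0,
                         customs: float = 0.0, handling: float = 0.0) -> float:
    """BY_UNIT landed cost: spread each PO-level charge evenly across all units."""
    if units <= 0:
        raise ValueError("units must be positive")
    return unit_cost + (freight + duties + customs + handling) / units


# The worked example above: 200 units, $500 freight, $300 duties,
# $100 customs, $80 handling on a $25.00 unit cost
print(round(landed_cost_per_unit(25.00, 200, 500, 300, 100, 80), 2))  # 29.9
```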
Cost Allocation Methods
The system supports three methods for distributing PO-level costs across individual line items:
| Method | Allocation Logic | Best For |
|---|---|---|
| BY_UNIT | Equal cost per unit across all lines | Uniform-size items (e.g., t-shirts in poly bags) |
| BY_VALUE | Proportional to unit cost (higher-cost items absorb more) | Mixed-value POs (e.g., accessories + outerwear) |
| BY_WEIGHT | Proportional to item weight | Heavy items driving freight cost (e.g., denim vs. silk) |
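`BY_VALUE` is the least obvious of the three methods, so a sketch may help: each line absorbs the charge in proportion to its extended cost, then that share is spread over the line's units. (Names and the dict representation are assumptions; `BY_WEIGHT` would be identical with weight substituted for value.)

```python
def allocate_by_value(lines: list[dict], charge_total: float) -> dict[str, float]:
    """Allocate a PO-level charge per unit, proportional to each line's extended cost."""
    extended = {l["po_line_id"]: l["unit_cost"] * l["qty"] for l in lines}
    total_value = sum(extended.values())
    if total_value == 0:
        raise ValueError("Cannot allocate by value: PO has zero total value")
    # Line's value fraction of the charge, spread over that line's units
    return {
        l["po_line_id"]: charge_total * extended[l["po_line_id"]] / total_value / l["qty"]
        for l in lines
    }


lines = [
    {"po_line_id": "accessories", "unit_cost": 5.00, "qty": 100},  # $500 extended value
    {"po_line_id": "outerwear", "unit_cost": 50.00, "qty": 10},    # $500 extended value
]
# $200 freight splits 50/50 by value: $100 over 100 units vs. $100 over 10 units
print(allocate_by_value(lines, 200.0))  # {'accessories': 1.0, 'outerwear': 10.0}
```

The example shows why `BY_VALUE` suits mixed-value POs: each outerwear unit absorbs ten times the freight of an accessory, matching its share of the shipment's value.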
Cost Allocation Data Model (per PO)
| Field | Type | Required | Description |
|---|---|---|---|
po_id | UUID | Yes | Reference to purchase order |
freight_total | Decimal(10,2) | No | Total freight cost for PO shipment |
duties_total | Decimal(10,2) | No | Total duties and tariffs |
customs_total | Decimal(10,2) | No | Customs brokerage fees |
handling_total | Decimal(10,2) | No | Warehouse handling fees |
allocation_method | Enum | Yes | BY_UNIT (equal per unit), BY_VALUE (proportional to unit cost), BY_WEIGHT |
tenant_id | UUID | Yes | Owning tenant |
Per-Line Landed Cost Data Model
| Field | Type | Required | Description |
|---|---|---|---|
po_line_id | UUID | Yes | Reference to PO line item |
unit_cost | Decimal(10,2) | Yes | Base vendor price per unit |
freight_per_unit | Decimal(10,2) | No | Allocated freight per unit |
duties_per_unit | Decimal(10,2) | No | Allocated duties per unit |
customs_per_unit | Decimal(10,2) | No | Allocated customs per unit |
handling_per_unit | Decimal(10,2) | No | Allocated handling per unit |
landed_cost_per_unit | Decimal(10,2) | Yes | Sum of all cost components |
4.11.2 Weighted Average Cost
The system maintains a weighted average cost per product per location. This cost is recalculated on every PO receive event.
Formula:
new_avg_cost = ((existing_qty x existing_avg_cost) + (received_qty x landed_cost_per_unit))
/ (existing_qty + received_qty)
Example:
- Existing: 100 units at $28.00 avg cost = $2,800.00
- Received: 50 units at $29.90 landed cost = $1,495.00
- New avg cost: ($2,800 + $1,495) / (100 + 50) = $28.63
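The recalculation above is a one-liner in code. A sketch (the function name is illustrative; the real system would persist the result at `Decimal(10,4)` precision per the data model below this is stored against):

```python
def new_weighted_avg(existing_qty: int, existing_avg: float,
                     received_qty: int, landed_cost: float) -> float:
    """Recalculate weighted average cost on a PO receive event."""
    total_qty = existing_qty + received_qty
    if total_qty == 0:
        raise ValueError("Cannot average over zero units")
    return (existing_qty * existing_avg + received_qty * landed_cost) / total_qty


# The example above: 100 units @ $28.00 plus 50 units received @ $29.90 landed
print(round(new_weighted_avg(100, 28.00, 50, 29.90), 2))  # 28.63
```

Note that when `existing_qty` is zero (a new product), the formula reduces to the received landed cost, matching the initial-cost rule below.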
Weighted Average Cost Data Model
| Field | Type | Required | Description |
|---|---|---|---|
product_id | UUID | Yes | Reference to product |
location_id | UUID | Yes | Reference to store/warehouse location |
weighted_avg_cost | Decimal(10,4) | Yes | Current weighted average cost (4 decimal places for precision) |
last_updated_at | DateTime | Yes | Timestamp of last recalculation |
Business Rules:
- Recalculated on every PO receive event using landed cost (not raw vendor cost).
- Used for: COGS calculation, margin reporting, inventory valuation, shrinkage valuation.
- Historical cost snapshots are preserved for each receive event to support audit and retroactive analysis.
- Initial weighted average cost for a new product is set to the first PO’s landed cost.
- Transfers between locations do not change the weighted average cost. The receiving location inherits the sending location’s cost for those units.
- Inventory adjustments (Section 4.7) use the current weighted average cost for valuation of adjusted quantities.
- Write-offs and shrinkage are valued at the weighted average cost at the time of the event.
4.11.3 Reports: Costing
| Report | Purpose | Key Data Fields |
|---|---|---|
| Landed Cost Analysis | Breakdown of cost components per PO | PO number, product, unit cost, freight, duties, customs, handling, total landed cost, allocation method |
| Margin Analysis | True margin using landed cost | Product, selling price, weighted avg cost, gross margin $, gross margin %, comparison to target margin |
| Inventory Valuation | Total inventory value at cost | Location, product count, total units, total value (at weighted avg cost), value by status |
| Cost Trend | Cost changes over time per product | Product, vendor, landed cost per PO over time, cost trend direction, % change |
4.12 Product Movement History & Stock Ledger
Scope: Maintaining a complete audit trail of every inventory movement for each product across all locations. The movement history serves as the authoritative record for inventory reconciliation, shrinkage analysis, and regulatory compliance. Every change to inventory quantity must be traced to a source document. This is the single source of truth for “what happened to inventory and why.”
4.12.1 Movement Audit Trail
Every inventory change creates a movement record. No inventory quantity changes without a corresponding movement entry.
| Event Type | Description | Qty Impact | Source Document |
|---|---|---|---|
| PO_RECEIVE | Received from vendor via purchase order | +qty | Purchase Order |
| TRANSFER_OUT | Shipped to another location (Section 4.8) | -qty | Transfer |
| TRANSFER_IN | Received from another location (Section 4.8) | +qty | Transfer |
| SALE | Sold to customer | -qty | Sale Order |
| RETURN | Customer return to stock | +qty | Return Order |
| ADJUSTMENT_UP | Manual positive adjustment (Section 4.7) | +qty | Adjustment |
| ADJUSTMENT_DOWN | Manual negative adjustment (Section 4.7) | -qty | Adjustment |
| WRITE_OFF | Inventory write-off (damaged, expired, theft) | -qty | Write-Off |
| RMA_OUT | Shipped back to vendor via RMA (Section 4.9) | -qty | Vendor RMA |
| RMA_IN | Replacement received from vendor via RMA (Section 4.9) | +qty | Vendor RMA |
| COUNT_ADJUST | Adjustment from stock count variance | +/- qty | Stock Count |
| RESERVE | Reserved for order or transfer | -available | Reservation |
| UNRESERVE | Reservation released | +available | Reservation |
4.12.2 Movement Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key, system-generated |
| product_id | UUID | Yes | Reference to product |
| variant_id | UUID | No | Reference to specific variant (if applicable) |
| location_id | UUID | Yes | Location where movement occurred |
| event_type | Enum | Yes | One of the 13 event types listed above |
| qty_change | Integer | Yes | Positive (inbound) or negative (outbound) quantity change |
| running_balance | Integer | Yes | Running balance at this location after this movement |
| source_document_type | String | Yes | Type of source document (e.g., “PurchaseOrder”, “Transfer”, “SaleOrder”, “Adjustment”, “VendorRMA”, “StockCount”) |
| source_document_id | UUID | Yes | FK to the source document |
| reference_number | String | No | Human-readable reference (e.g., PO-2026-00042, TRF-2026-00088, ADJ-2026-00005) |
| actor_id | UUID | Yes | User or system process that caused the movement |
| reason_code | String | No | Reason code for adjustments and write-offs (see Section 4.7 for adjustment reason codes) |
| notes | Text | No | Additional context |
| tenant_id | UUID | Yes | Owning tenant |
| created_at | DateTime | Yes | Timestamp of movement (immutable) |
Business Rules:
- Movement records are immutable. Once created, they cannot be edited or deleted. Corrections are made by creating a new, opposite movement (e.g., an `ADJUSTMENT_UP` to correct an erroneous `ADJUSTMENT_DOWN`).
- The `running_balance` is computed at insert time: `previous_running_balance + qty_change`. This provides an instant snapshot of the inventory level at any point in time without recalculating from the beginning.
- Every module that changes inventory quantity (PO receiving, sales, transfers, adjustments, RMAs, stock counts) must insert a movement record as part of the same database transaction. No inventory change may be committed without its corresponding movement.
- The `actor_id` field distinguishes between human actions (staff user ID) and system actions (system process ID for automated events such as auto-suggest transfers or scheduled adjustments).
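These rules can be sketched with a minimal in-memory stand-in for the movement table (illustrative only; the real table is a database relation written inside the same transaction as the inventory change):

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Append-only movement log; records are never edited or deleted."""
    movements: list = field(default_factory=list)

    def balance(self, product_id, location_id):
        # Latest running_balance for this product/location, or 0 if none exists
        for m in reversed(self.movements):
            if m["product_id"] == product_id and m["location_id"] == location_id:
                return m["running_balance"]
        return 0

    def record(self, product_id, location_id, event_type, qty_change, **extra):
        # running_balance = previous_running_balance + qty_change, set at insert time
        entry = {
            "product_id": product_id,
            "location_id": location_id,
            "event_type": event_type,
            "qty_change": qty_change,
            "running_balance": self.balance(product_id, location_id) + qty_change,
            **extra,
        }
        self.movements.append(entry)  # append-only
        return entry
```

A correction is a new, opposite movement: after a `record(..., "SALE", -3)` made in error, a `record(..., "ADJUSTMENT_UP", +3)` restores the running balance without touching the original record.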
4.12.3 Stock Ledger
The stock ledger provides a chronological, per-product, per-location view of all movements with running balances.
Features:
- Running balance updated with every movement. The `running_balance` field on each movement record represents the inventory level immediately after that movement was applied.
- Ledger view: chronological list of all movements for a product at a location, showing date, event type, quantity change, running balance, source document, and actor.
- Drill-down: clicking any ledger entry navigates to the source document (PO, sale, transfer, adjustment, RMA, etc.) for full context.
- Reconciliation: compare the latest ledger running balance to a physical count to identify discrepancies. Any difference indicates missing or undocumented movements.
- Date-range filtering: view movements for a specific period (e.g., last 30 days, last quarter) to analyze trends.
- Event-type filtering: view only specific movement types (e.g., show only adjustments and write-offs to analyze shrinkage).
4.12.4 Stock Ledger Entity Relationship
erDiagram
PRODUCT {
UUID id PK
String sku
String name
}
LOCATION {
UUID id PK
String name
String type
}
MOVEMENT {
UUID id PK
UUID product_id FK
UUID variant_id FK
UUID location_id FK
String event_type
Integer qty_change
Integer running_balance
String source_document_type
UUID source_document_id
String reference_number
UUID actor_id
String reason_code
DateTime created_at
}
PURCHASE_ORDER {
UUID id PK
String po_number
}
TRANSFER {
UUID id PK
String transfer_number
}
SALE_ORDER {
UUID id PK
String order_number
}
ADJUSTMENT {
UUID id PK
String adjustment_number
}
VENDOR_RMA {
UUID id PK
String rma_number
}
STOCK_COUNT {
UUID id PK
String count_number
}
PRODUCT ||--o{ MOVEMENT : "has movements"
LOCATION ||--o{ MOVEMENT : "at location"
MOVEMENT }o--|| PURCHASE_ORDER : "source: PO_RECEIVE"
MOVEMENT }o--|| TRANSFER : "source: TRANSFER_IN/OUT"
MOVEMENT }o--|| SALE_ORDER : "source: SALE/RETURN"
MOVEMENT }o--|| ADJUSTMENT : "source: ADJUSTMENT"
MOVEMENT }o--|| VENDOR_RMA : "source: RMA_IN/OUT"
MOVEMENT }o--|| STOCK_COUNT : "source: COUNT_ADJUST"
4.12.5 Reports: Movement History
| Report | Purpose | Key Data Fields |
|---|---|---|
| Product Movement Log | Full chronological history for one product at one location | Product, location, event type, qty change, running balance, source document, reference number, actor, timestamp |
| Location Movement Summary | Aggregate all movements at a location for a period | Location, period, event type, total events, total qty in, total qty out, net change |
| Shrinkage Analysis | Identify unexplained inventory loss | Location, period, expected balance (from ledger), actual balance (from count), unexplained variance, shrinkage %, shrinkage value (at weighted avg cost from Section 4.11) |
| Movement by Source | Volume of movements grouped by source document type | Source type (PO, Transfer, Sale, Adjustment, RMA), event count, total qty moved, period |
| Adjustment Audit Trail | All manual adjustments with reason codes | Adjustment reference, product, location, qty change, reason code, actor, timestamp, notes (see Section 4.7 for adjustment workflow) |
4.13 POS & Sales Integration
Scope: Real-time interaction between inventory management and point-of-sale operations. This section defines how inventory quantities are affected at each stage of a sales transaction, including cart operations, payment, voids, returns, and serial number capture.
4.13.1 Reserve on Add to Cart
When a cashier adds an item to a transaction, the system creates a soft reservation against the selling location’s available quantity. The reservation is tied to the terminal and transaction session.
Reservation behavior:
- Available qty at the selling location is decremented immediately in the UI and API layer.
- Other terminals at the same location see the reduced available quantity in real time.
- The reservation is temporary and tied to the active transaction session.
- If the item is removed from the cart, the reservation releases instantly.
- If the entire transaction is voided before payment, all reservations release instantly.
Data written on Add to Cart:
| Field | Value |
|---|---|
| reservation_type | SOFT |
| reservation_source | POS_CART |
| terminal_id | Current terminal |
| transaction_session_id | Active session |
| product_variant_id | Selected variant |
| location_id | Selling location |
| qty_reserved | Quantity added |
| created_at | Timestamp |
| expires_at | NULL (cleared on payment or void) |
Cross-reference: See Section 1.1 (Core Sales Workflow) for the full item entry flow. Reservation logic integrates with the cart state machine defined there.
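The add-to-cart reservation behavior can be sketched as follows, using dict-based stand-ins for the inventory level and reservation records (names and shapes are illustrative, not the production schema):

```python
# Inventory level keyed by (variant, location); this example starts with 4 units.
inventory = {("SKU-1001", "STORE-1"): {"available_qty": 4}}
# Soft reservations keyed by (terminal, session, variant, location).
reservations = {}

def reserve_on_add_to_cart(variant_id, location_id, qty, terminal_id, session_id):
    """Decrement available qty immediately and record a soft reservation."""
    level = inventory[(variant_id, location_id)]
    if level["available_qty"] < qty:
        raise ValueError("insufficient stock")
    level["available_qty"] -= qty  # other terminals see the reduced qty
    key = (terminal_id, session_id, variant_id, location_id)
    reservations[key] = reservations.get(key, 0) + qty
    return level["available_qty"]

def release_on_remove(variant_id, location_id, qty, terminal_id, session_id):
    """Release the reservation instantly when the item leaves the cart."""
    key = (terminal_id, session_id, variant_id, location_id)
    reservations[key] -= qty
    inventory[(variant_id, location_id)]["available_qty"] += qty
```

Voiding an entire transaction before payment would call `release_on_remove` for every reservation held by that session.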
4.13.2 Commit on Payment
When payment completes successfully, the soft reservation converts to a permanent inventory decrement. A SALE movement is logged in the stock ledger (See Section 4.12).
Payment completion rules:
- On successful payment, the reservation record is deleted and replaced with a finalized `SALE` movement record.
- The Weighted Average Cost (WAC) is captured at the moment of sale for COGS calculation.
- If payment fails (card decline, insufficient funds, terminal timeout), the reservation holds for 30 seconds and then auto-releases.
- The 30-second hold prevents a race condition in which another terminal could claim the last unit while the cashier retries payment.
- If the cashier retries payment within 30 seconds, the existing reservation is reused.
- After 30 seconds with no retry, the reservation releases and the item returns to available stock.
On payment success, the system writes:
| Record | Fields |
|---|---|
| Stock Movement | type: SALE, qty: -N, location_id, product_variant_id, reference_type: ORDER, reference_id: order_id, cost_at_time: WAC, created_by: staff_id |
| Inventory Level Update | available_qty -= N at selling location |
| Reservation Cleanup | Delete soft reservation record |
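A sketch of the commit step, under the assumption (from Section 4.13.1) that `available_qty` was already decremented when the reservation was taken, so finalizing means dropping the hold and logging the movement; the dict/list stand-ins are illustrative:

```python
def finalize_sale(reservations, ledger, key, order_id, wac, staff_id):
    """Convert a soft reservation into a permanent SALE movement.

    `reservations` is a dict keyed by (terminal, session, variant, location);
    `ledger` is a plain list standing in for the stock movement table. In the
    real system, all writes here share one database transaction.
    """
    terminal_id, session_id, variant_id, location_id = key
    qty = reservations.pop(key)  # reservation cleanup
    ledger.append({
        "type": "SALE",
        "qty": -qty,
        "location_id": location_id,
        "product_variant_id": variant_id,
        "reference_type": "ORDER",
        "reference_id": order_id,
        "cost_at_time": wac,      # WAC snapshot for COGS
        "created_by": staff_id,
    })
    return ledger[-1]
```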
4.13.3 Reserve for Parked Transactions
Parked (suspended) sales maintain soft reservations throughout their lifecycle. The parked sale state machine (See Section 1.1.1) governs the reservation lifecycle.
Parked sale reservation rules:
| Rule | Value | Configurable |
|---|---|---|
| Reservation type | SOFT | No |
| Visibility to other terminals | Available with warning | No |
| Warning message format | “N units available, M reserved by Terminal X” | No |
| Maximum parked sales per terminal | 5 | Yes |
| Time-to-live (TTL) | 4 hours | Yes |
| On TTL expiry | Auto-release all reservations | No |
| On retrieval | Reservations transfer to active cart session | No |
Parked sale warning display example:
Product: Classic Fit Tee - Navy / M
Available: 2 units
Warning: 1 unit reserved by Terminal 3 (Parked Sale)
When a parked sale expires:
- All reservations release to available stock.
- A `RESERVATION_EXPIRED` event is logged with the parked sale ID.
- The expired sale is archived with reason `TTL_EXCEEDED`.
Cross-reference: See Section 1.1.1 (Parked Sale State Machine) for TTL configuration and the `parked_sales` YAML config in Section 4.18.
4.13.4 Reserve for Hold-for-Pickup
Fully paid orders designated for customer pickup create hard reservations. These items are not visible to other terminals as available stock.
Hold-for-pickup reservation rules:
| Rule | Value | Configurable |
|---|---|---|
| Reservation type | HARD | No |
| Visibility to other terminals | Not visible as available | No |
| Inventory status | RESERVED | No |
| Default hold period | 7 days | Yes |
| Maximum hold extension | 30 days | Yes |
| Reminder before expiry | 2 days | Yes |
| On expiry | Auto-refund process triggers | Yes |
Hard reservation behavior:
- Available qty is decremented. The qty is moved to `reserved_qty` in the inventory level record.
- Other terminals see only the `available_qty` (excluding reserved).
- The POS dashboard shows held orders with countdown timers.
- When the customer picks up, the reservation clears and the sale is marked `COMPLETED`.
- When the hold expires without pickup, the system initiates the auto-refund workflow and the inventory moves back to `AVAILABLE` status.
Cross-reference: See Section 1.11 (Hold for Pickup) for the full pickup workflow and BOPIS integration. See the `hold_for_pickup` YAML config in Section 4.18.
4.13.5 Inventory Decrement on Sale
Each completed sale logs a SALE movement in the stock ledger. This is the authoritative record of inventory leaving the business through a customer transaction.
Movement record for a sale:
| Field | Value |
|---|---|
| movement_type | SALE |
| qty_change | Negative (e.g., -1) |
| location_id | Selling location |
| product_variant_id | Sold variant |
| reference_type | ORDER |
| reference_id | Order ID |
| unit_cost_at_time | WAC at moment of sale |
| created_by | Staff ID |
| created_at | Transaction timestamp |
| notes | NULL (auto-generated from sale) |
Multi-item transactions: Each line item in a transaction generates its own movement record. All movements for a single transaction share the same reference_id (order ID).
WAC capture: The system snapshots the current Weighted Average Cost at the moment of sale. This ensures COGS calculations remain accurate even if costs change later due to new receiving events.
Cross-reference: See Section 4.12 (Stock Ledger & Movement Log) for the complete movement type taxonomy.
4.13.6 Inventory Increment on Return
Customer returns automatically restore inventory at the return location. The default behavior places returned items into AVAILABLE status, but staff can override to DAMAGED if the item is not resalable.
Return-to-stock rules:
| Rule | Behavior |
|---|---|
| Default return status | AVAILABLE |
| Staff override options | DAMAGED, QUARANTINE |
| Movement type logged | RETURN |
| Location | Return location (may differ from sale location) |
| Cross-store returns | Allowed; inventory incremented at return location, not original sale location |
| WAC impact | No WAC recalculation on return (original cost basis preserved) |
| Serial-tracked items | Serial status reverts to IN_STOCK at return location |
| Lot-tracked items | Lot assignment restored; FIFO position maintained |
Movement record for a return:
| Field | Value |
|---|---|
| movement_type | RETURN |
| qty_change | Positive (e.g., +1) |
| location_id | Return location |
| product_variant_id | Returned variant |
| reference_type | RETURN |
| reference_id | Return transaction ID |
| unit_cost_at_time | Original sale cost |
| created_by | Staff ID |
| notes | Reason code (e.g., DEFECTIVE, WRONG_SIZE, CHANGED_MIND) |
Cross-reference: See Section 1.4.1 (Void vs. Return Rules) for return eligibility logic and Section 4.9 (Vendor RMA) for items returned to vendor instead of restocked.
4.13.7 Serial Number Capture at POS
For products flagged as serial-tracked, the POS enforces serial number capture during both sale and return transactions.
Serial capture at sale:
- Cashier scans or adds a serial-tracked product to the cart.
- POS prompts: “Scan or enter serial number.”
- Cashier scans barcode or manually enters serial number.
- System validates:
  - Serial number exists in the system (was recorded at receiving).
  - Serial status is `IN_STOCK` at the selling location.
  - Serial is not already linked to another active sale.
- On validation pass, serial is attached to the line item.
- On payment completion, serial status changes to `SOLD` and is linked to the customer record.
Serial capture at return:
- Staff initiates return and scans the serial number.
- System looks up the serial and retrieves the original sale record.
- Serial status reverts to `IN_STOCK` at the return location.
- If staff marks the item as damaged, serial status changes to `DAMAGED`.
Serial validation errors:
| Error | Message | Action |
|---|---|---|
| Serial not found | “Serial number not recognized. Verify and retry.” | Block sale line |
| Serial already sold | “Serial already linked to Order #X. Investigate.” | Block sale line |
| Serial at wrong location | “Serial is at [Location]. Transfer required.” | Block sale line |
| Serial in quarantine | “Serial is quarantined. Manager override required.” | Require manager PIN |
Cross-reference: See Section 4.10 (Serial & Lot Tracking) for the full serial lifecycle and Section 1.10 (Serial Number Tracking) for POS-side serial workflows.
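The validation table above maps naturally to a guard function. A sketch (the `serials` dict shape is an assumption for illustration, not the production schema):

```python
def validate_serial_for_sale(serial, serials, selling_location):
    """Return None if the serial may be sold at this location, else the blocking error."""
    record = serials.get(serial)
    if record is None:
        return "Serial number not recognized. Verify and retry."
    if record.get("active_order"):
        return f"Serial already linked to Order #{record['active_order']}. Investigate."
    if record["location_id"] != selling_location:
        return f"Serial is at {record['location_id']}. Transfer required."
    if record["status"] == "QUARANTINE":
        return "Serial is quarantined. Manager override required."
    if record["status"] != "IN_STOCK":
        return "Serial is not in stock at this location."
    return None
```

In the real flow, the quarantine case requires a manager PIN rather than a hard block; the other errors block the sale line.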
4.13.8 Sale Flow Sequence Diagram
sequenceDiagram
autonumber
participant U as Cashier
participant POS as POS Terminal
participant API as Inventory API
participant DB as Database
participant LED as Stock Ledger
Note over U, LED: Phase 1: Item Added to Cart
U->>POS: Scan Item (SKU-1001)
POS->>API: POST /inventory/reserve
API->>DB: Create soft reservation (terminal_id, session_id, product_variant_id, qty)
API->>DB: Decrement available_qty at location
DB-->>API: Reservation confirmed
API-->>POS: Reserved (available: 4 → 3)
POS-->>U: Item added to cart
Note over U, LED: Phase 2: Payment Processing
U->>POS: Tender Payment
POS->>API: POST /orders/finalize
API->>DB: Validate reservation still active
API->>DB: Delete soft reservation
API->>LED: INSERT movement (type: SALE, qty: -1, cost: WAC)
API->>DB: Update available_qty (permanent decrement)
DB-->>API: Sale committed
API-->>POS: Order finalized (Order #ORD-5678)
POS-->>U: Print receipt
Note over U, LED: Phase 3: Payment Failure Path
Note right of POS: If payment fails:
Note right of POS: Reservation holds 30 seconds
Note right of POS: Cashier retries → reuse reservation
Note right of POS: No retry within 30s → auto-release
4.13.9 Return Flow Sequence Diagram
sequenceDiagram
autonumber
participant U as Staff
participant POS as POS Terminal
participant API as Inventory API
participant DB as Database
participant LED as Stock Ledger
Note over U, LED: Phase 1: Return Initiated
U->>POS: Initiate Return
POS-->>U: "Scan receipt or enter order number"
U->>POS: Scan Receipt Barcode
POS->>API: GET /orders/{order_id}
API-->>POS: Order details + line items
Note over U, LED: Phase 2: Validate & Process
U->>POS: Select items to return
POS->>API: POST /returns/validate
API->>API: Check return policy (window, final sale, etc.)
API-->>POS: Return eligible
U->>POS: Confirm return + select condition
alt Item is resalable
POS->>API: POST /returns/process (status: AVAILABLE)
else Item is damaged
POS->>API: POST /returns/process (status: DAMAGED)
end
Note over U, LED: Phase 3: Inventory Updated
API->>DB: Increment available_qty (or damaged_qty) at return location
API->>LED: INSERT movement (type: RETURN, qty: +1, cost: original_cost)
API->>DB: Update order line status to RETURNED
DB-->>API: Return processed
API-->>POS: Return complete (Refund: $29.99)
POS-->>U: Process refund to original payment method
POS-->>U: Print return receipt
4.14 Online Order Fulfillment
Scope: How online orders placed through Shopify interact with physical store inventory, including store assignment, inventory reservation, bidirectional sync, and the pick-pack-ship workflow.
4.14.1 Reserve from Nearest Store
When a customer places an online order through Shopify, the system identifies the optimal fulfillment location based on proximity to the customer’s shipping address and stock availability.
Reservation flow:
- Shopify webhook fires the `orders/create` event.
- System receives the order and extracts the shipping address.
- Store assignment algorithm (Section 4.14.2) selects the fulfillment location.
- Inventory is hard reserved at the selected store (status: `RESERVED`, source: `ONLINE_ORDER`).
- The order appears on the selected store’s fulfillment queue.
- If no store has sufficient stock, the order is flagged for manual assignment.
Online order reservation record:
| Field | Value |
|---|---|
| reservation_type | HARD |
| reservation_source | ONLINE_ORDER |
| shopify_order_id | Shopify order reference |
| assigned_location_id | Selected store |
| product_variant_id | Ordered variant |
| qty_reserved | Order quantity |
| status | PENDING_FULFILLMENT |
| created_at | Order timestamp |
| expires_at | NULL (does not expire; requires manual cancel) |
4.14.2 Store Assignment Algorithm
The store assignment algorithm determines which physical store fulfills an online order. The algorithm prioritizes proximity while ensuring stock availability.
Algorithm steps:
1. FILTER: Stores where available_qty >= ordered_qty for ALL line items
2. IF no single store has all items:
a. IF split_fulfillment_enabled = true:
- Find minimum set of stores to cover all items
- Prefer fewer splits (2 stores over 3)
- Within equal splits, prefer nearest stores
b. IF split_fulfillment_enabled = false:
- Flag order for manual assignment
- Notify HQ manager
3. CALCULATE: Distance from each qualifying store to customer shipping address
- Uses haversine formula on store lat/lng vs. shipping address lat/lng
4. SORT: By distance ascending
5. SELECT: Nearest qualifying store
6. RESERVE: Inventory at selected store
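Steps 1 and 3-5 can be sketched directly; the haversine distance is computed from store and shipping coordinates, and the `stores` record shape is an assumption for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometres between two lat/lng points."""
    dlat, dlng = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

def assign_store(stores, order_lines, ship_lat, ship_lng):
    """Filter stores holding every line item, then pick the nearest.

    `stores`: list of {"id", "lat", "lng", "stock": {variant_id: qty}}.
    Returns a store id, or None (caller flags for manual assignment or split).
    """
    qualifying = [s for s in stores
                  if all(s["stock"].get(v, 0) >= q for v, q in order_lines.items())]
    if not qualifying:
        return None
    nearest = min(qualifying,
                  key=lambda s: haversine_km(s["lat"], s["lng"], ship_lat, ship_lng))
    return nearest["id"]
```

Split fulfillment (step 2a) is omitted here; finding the minimum store set covering all items is a separate set-cover-style problem.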
Store assignment decision matrix:
| Scenario | Single Store Available | Multiple Stores Available | No Store Available |
|---|---|---|---|
| Full order at one store | Assign nearest | Assign nearest with full stock | Flag for manual |
| Partial at multiple stores | Flag for manual (split disabled) | Split across nearest stores (split enabled) | Flag for manual |
| HQ warehouse only | Assign HQ | Prefer retail store over HQ | Backorder or cancel |
flowchart TD
A[Online Order Received] --> B{Any store has\nall items?}
B -->|Yes| C[Filter stores with full stock]
C --> D[Calculate distance to each store]
D --> E[Select nearest store]
E --> F[Reserve inventory at store]
F --> G[Order appears on\nstore fulfillment queue]
B -->|No| H{Split fulfillment\nenabled?}
H -->|Yes| I[Find minimum store\nset covering all items]
I --> J[Create split shipments]
J --> K[Reserve at each store]
K --> G
H -->|No| L{HQ has stock?}
L -->|Yes| M[Assign to HQ warehouse]
M --> F
L -->|No| N[Flag for manual\nassignment]
N --> O[Notify HQ manager]
4.14.3 Inventory Sync with Shopify
MOVED TO MODULE 6: Shopify inventory sync triggers, architecture, and reconciliation details have been consolidated into Module 6, Section 6.3.14 (Inventory Sync Triggers) and Section 6.7 (Cross-Platform Inventory Sync Rules).
See: Module 6, Section 6.3.14 for Shopify-specific inventory sync triggers and Section 6.7 for cross-platform inventory sync architecture including safety buffers and oversell prevention.
4.14.4 Pick-Pack-Ship Workflow
Once an online order is assigned to a store, the store staff fulfills it through a structured pick-pack-ship process.
Workflow stages:
| Stage | Action | System Effect |
|---|---|---|
| 1. Order Received | Order appears on store’s fulfillment queue | Status: PENDING_FULFILLMENT. Inventory reserved. |
| 2. Pick | Staff locates and scans each item | Status: PICKING. System validates each scanned item against the order. |
| 3. Pack | Staff packages items for shipping | Status: PACKING. Staff selects box size and records weight. |
| 4. Ship | Staff enters carrier and tracking number | Status: SHIPPED. Inventory decremented (SALE movement logged). Shopify order marked fulfilled with tracking. |
| 5. Deliver | Carrier delivers to customer | Status: DELIVERED. Updated via carrier webhook or manual confirmation. |
Pick validation rules:
- Each item must be scanned individually.
- If scanned item does not match order line, system rejects with “Item not on this order.”
- If item is serial-tracked, serial number must be captured during pick.
- If item is lot-tracked, system auto-selects FIFO lot and records lot number.
- Staff can flag a “short pick” if an item is not found. This triggers a recount at the store and potential reassignment to another store.
sequenceDiagram
autonumber
participant SH as Shopify
participant API as POS API
participant DB as Database
participant ST as Store Staff
participant CR as Carrier
Note over SH, CR: Phase 1: Order Assignment
SH->>API: Webhook: orders/create
API->>API: Run store assignment algorithm
API->>DB: Reserve inventory at selected store
API->>DB: Create fulfillment record (PENDING_FULFILLMENT)
API-->>ST: New order on fulfillment queue
Note over SH, CR: Phase 2: Pick & Pack
ST->>API: Start picking (fulfillment_id)
API->>DB: Status: PICKING
loop Each order line
ST->>API: Scan item barcode
API->>DB: Validate item matches order line
API-->>ST: Item confirmed
end
ST->>API: Picking complete
API->>DB: Status: PACKING
ST->>API: Record package details (weight, dimensions)
API->>DB: Status: READY_TO_SHIP
Note over SH, CR: Phase 3: Ship & Track
ST->>API: Enter carrier + tracking number
API->>DB: Status: SHIPPED
API->>DB: Log SALE movement for each line item
API->>DB: Decrement inventory (release reservation, permanent decrement)
API->>SH: POST fulfillment with tracking number
SH-->>API: Fulfillment confirmed
CR->>API: Webhook: delivered
API->>DB: Status: DELIVERED
4.15 Offline Inventory Operations
Scope: How inventory operations function when a store loses network connectivity to the central server. This section defines which operations continue locally, which are blocked, and how conflicts are resolved upon reconnection.
4.15.1 Queue All Changes
During offline mode, the POS client maintains a local inventory cache and queues all changes for synchronization when connectivity restores. The local cache is updated immediately so staff can continue working.
Queued operations (allowed offline):
| Operation | Local Cache Effect | Queue Entry |
|---|---|---|
| Sale decrement | available_qty -= N in local cache | { type: SALE, product_variant_id, location_id, qty: -N, order_ref, timestamp } |
| Return increment | available_qty += N in local cache | { type: RETURN, product_variant_id, location_id, qty: +N, return_ref, timestamp } |
| Inventory adjustment | available_qty += delta in local cache | { type: ADJUSTMENT, product_variant_id, location_id, qty: +/-N, reason_code, timestamp } |
| Stock count entry | Stored in local count session | { type: COUNT, count_session_id, product_variant_id, counted_qty, timestamp } |
| Parked sale create | Reservation in local cache | { type: PARK, session_id, items: [...], terminal_id, timestamp } |
| Parked sale retrieve | Reservation transferred in local cache | { type: UNPARK, session_id, terminal_id, timestamp } |
Local cache structure:
local_inventory_cache = {
product_variant_id: {
available_qty: number, // Last synced value +/- local changes
reserved_qty: number, // Local reservations
last_synced_at: timestamp, // When cache was last updated from server
pending_changes: [ // Ordered list of unsynced changes
{ type, qty, ref, timestamp }
]
}
}
Queue limits:
- Maximum queued inventory changes: 500 entries (configurable).
- Maximum offline duration before warning: 4 hours.
- If queue reaches 90% capacity, POS displays warning: “Offline queue nearly full. Reconnect soon.”
- If queue reaches 100%, POS blocks further inventory-modifying operations.
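The queue limits above can be enforced with a thin wrapper around the queue itself. A minimal sketch:

```python
class OfflineQueue:
    """Offline change queue with the 90% warning and 100% block described above."""

    def __init__(self, max_entries=500):  # 500 is the configurable default
        self.max_entries = max_entries
        self.entries = []

    def push(self, change):
        """Queue a change; return a warning string near capacity, None otherwise."""
        if len(self.entries) >= self.max_entries:
            # At 100% capacity, inventory-modifying operations are blocked
            raise RuntimeError("Offline queue full: inventory changes blocked.")
        self.entries.append(change)
        if len(self.entries) >= 0.9 * self.max_entries:
            return "Offline queue nearly full. Reconnect soon."
        return None
```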
4.15.2 Blocked Operations
The following operations require server connectivity and are blocked during offline mode. The POS displays a clear message explaining why the operation is unavailable.
| Blocked Operation | Reason | User Message |
|---|---|---|
| Multi-store inventory lookup | Requires real-time data from all locations | “Multi-store lookup unavailable offline. Check local stock only.” |
| Cross-store transfer request | Requires server to coordinate with other stores | “Transfers unavailable offline. Queue request when online.” |
| Online order fulfillment | Requires Shopify sync and server coordination | “Online fulfillment unavailable offline.” |
| Shopify inventory sync | Requires Shopify API connectivity | Syncs automatically on reconnect. |
| PO submission to vendor | Requires email delivery and server logging | “PO submission unavailable offline. Save as draft.” |
| PO receiving | Requires WAC recalculation and server-side validation | “Receiving unavailable offline. Wait for connectivity.” |
| Gift card activation/reload | Requires server validation of card status | “Gift card operations unavailable offline.” |
| Customer account creation | Requires server-side deduplication | “Customer creation unavailable offline.” |
| RMA creation | Requires server-side RMA number generation | “RMA creation unavailable offline.” |
Cross-reference: See Section 1.16 (Offline Operations) for the complete offline mode state machine and the `offline_mode` YAML config in the BRD Section 4 Business Rules.
4.15.3 Conflict Resolution on Reconnect
When connectivity restores, the system uploads all queued changes and reconciles them with changes that occurred at other locations during the offline period.
Sync process (step by step):
1. POS detects network restored → Status: SYNCING
2. Upload queued changes in strict chronological order (oldest first)
3. For each queued change:
a. Server calculates expected qty = (last_synced_qty + all server-side changes since sync)
b. Server applies the offline change
c. IF resulting qty < 0 AND allow_negative_inventory = false:
- Flag as CONFLICT
- Record: { product_variant_id, expected_qty, offline_change, resulting_qty, conflict_type: NEGATIVE_INVENTORY }
d. IF resulting qty >= 0:
- Apply change normally
- Log movement with offline_synced: true flag
4. After all changes processed:
a. If conflicts exist → Status: CONFLICT_REVIEW
b. If no conflicts → Status: ONLINE
5. Manager reviews and resolves each conflict
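The sync loop can be sketched as follows, under the assumption (consistent with the sequence diagram in this section) that conflicting changes are held for manager review rather than applied automatically; names are illustrative:

```python
def sync_offline_queue(queued, server_qty, allow_negative_inventory=False):
    """Apply queued offline changes oldest-first; flag negative-qty conflicts.

    `queued`: list of {"variant", "qty", "timestamp", ...} entries.
    `server_qty`: dict variant -> current server-side quantity.
    Returns (applied, conflicts).
    """
    applied, conflicts = [], []
    for change in sorted(queued, key=lambda c: c["timestamp"]):
        variant = change["variant"]
        resulting = server_qty.get(variant, 0) + change["qty"]
        if resulting < 0 and not allow_negative_inventory:
            conflicts.append({**change, "resulting_qty": resulting,
                              "conflict_type": "NEGATIVE_INVENTORY"})
            continue  # held for manager review, not applied automatically
        server_qty[variant] = resulting
        applied.append({**change, "offline_synced": True})
    return applied, conflicts
```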
Conflict resolution rules:
| Conflict Type | Resolution Strategy | Manager Action |
|---|---|---|
| Negative inventory (same item sold at two stores) | Last-write-wins for quantities | Review and approve negative balance, or adjust |
| Count submitted offline while count also submitted at server | Server count wins; offline count flagged for re-review | Accept server count or request recount |
| Adjustment conflicts | Both adjustments applied; resulting qty may need review | Verify physical count matches |
| Parked sale item no longer available | Auto-release parked sale reservation | Notify customer, offer alternatives |
Conflict review dashboard fields:
| Field | Description |
|---|---|
| conflict_id | Unique conflict identifier |
| product_variant_id | Affected product |
| location_id | Affected store |
| offline_terminal_id | Terminal that made the offline change |
| offline_change | The queued change (type, qty, timestamp) |
| server_qty_at_sync | Server’s qty when sync occurred |
| resulting_qty | Calculated result after applying offline change |
| conflict_type | NEGATIVE_INVENTORY, COUNT_CONFLICT, ADJUSTMENT_CONFLICT |
| resolution | PENDING, ACCEPTED, ADJUSTED, REVERSED |
| resolved_by | Manager who resolved |
| resolved_at | Resolution timestamp |
sequenceDiagram
autonumber
participant POS as POS Terminal
participant Q as Local Queue
participant API as Server API
participant DB as Database
participant MGR as Manager
Note over POS, MGR: Phase 1: Offline Operations
POS->>POS: Detect network lost → OFFLINE MODE
POS->>Q: Queue sale: SKU-1001, qty: -1, 10:15 AM
POS->>Q: Queue sale: SKU-1001, qty: -2, 10:32 AM
POS->>Q: Queue adjustment: SKU-2005, qty: -3, 11:00 AM
Note right of Q: Meanwhile, Store B sells 2x SKU-1001 on server
Note over POS, MGR: Phase 2: Reconnect & Sync
POS->>POS: Detect network restored → SYNCING
POS->>API: Upload queue (3 entries, chronological)
API->>DB: Apply sale SKU-1001 qty: -1 (10:15 AM)
DB-->>API: OK (server qty was 5, now 4)
API->>DB: Apply sale SKU-1001 qty: -2 (10:32 AM)
DB-->>API: OK (server qty was 4, now 2)
API->>DB: Apply adjustment SKU-2005 qty: -3 (11:00 AM)
DB-->>API: CONFLICT: resulting qty = -1
API-->>POS: Sync result: 2 OK, 1 conflict
Note over POS, MGR: Phase 3: Conflict Resolution
POS->>POS: Status → CONFLICT_REVIEW
POS-->>MGR: Alert: 1 inventory conflict requires review
MGR->>API: Review conflict: SKU-2005, expected qty 2, adjustment -3 = -1
MGR->>API: Resolution: Accept negative, schedule recount
API->>DB: Apply adjustment, flag for investigation
API-->>POS: All conflicts resolved → ONLINE
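The chronological replay in Phase 2 can be sketched in Python. This is a minimal illustration, not the spec’s implementation; `QueuedChange` and the dict-based `server_qty` store are hypothetical stand-ins for the local queue and the server database:

```python
from dataclasses import dataclass

@dataclass
class QueuedChange:
    sku: str
    qty_delta: int   # negative for sales and downward adjustments
    queued_at: str   # timestamp used for chronological ordering

def replay_offline_queue(queue, server_qty):
    """Apply queued offline changes in chronological order.

    Per the default policy, changes are applied last-write-wins; any
    change that drives a quantity negative is still applied but flagged
    as a NEGATIVE_INVENTORY conflict for manager review.
    """
    applied, conflicts = [], []
    for change in sorted(queue, key=lambda c: c.queued_at):
        resulting = server_qty.get(change.sku, 0) + change.qty_delta
        server_qty[change.sku] = resulting
        if resulting < 0:
            conflicts.append((change, resulting))
        else:
            applied.append(change)
    return applied, conflicts
```

Replaying the three queued entries from the diagram against server quantities of 5 (SKU-1001) and 2 (SKU-2005) yields two applied changes and one conflict at a resulting qty of -1, matching the sync result above.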
4.16 Alerts, Notifications & Email Templates
Scope: Proactive inventory alerts that notify staff and managers of conditions requiring attention, plus automated email templates for key inventory events.
4.16.1 Alert Types
The system supports five inventory alert types, each with configurable thresholds, severity levels, and delivery channels.
| Alert | Trigger | Severity | Delivery | Configurable Parameters |
|---|---|---|---|---|
| Low Stock | Qty falls below reorder point for product at location | WARNING | Dashboard + daily email digest | Reorder point per product per location; digest send time |
| Overstock | Days of supply exceeds threshold | INFO | Dashboard only | Days of supply threshold (default: 90 days) |
| Shrinkage Threshold | Count variance exceeds % threshold of expected qty | CRITICAL | Dashboard + immediate email | Variance % threshold (default: 5%) |
| Aging Inventory | No sales for product at location in X days | WARNING | Dashboard + weekly email digest | Days threshold (default: 90 days); weekly digest day (default: Monday) |
| PO Overdue | PO not received within vendor lead time + buffer days | WARNING | Dashboard + email to buyer | Buffer days beyond lead time (default: 3 days) |
Alert trigger logic (detailed):
Low Stock:
FOR each (product_variant_id, location_id):
IF available_qty <= reorder_point
AND no active OPEN alert exists for this product/location combo
THEN create LOW_STOCK alert
Overstock:
FOR each (product_variant_id, location_id):
days_of_supply = available_qty / avg_daily_sales_velocity_90d
IF days_of_supply > overstock_threshold_days
AND avg_daily_sales_velocity_90d > 0 -- exclude dead stock (handled separately)
THEN create OVERSTOCK alert
Shrinkage Threshold:
ON count finalization:
FOR each counted product:
IF expected_qty = 0
THEN treat any nonzero counted_qty as a 100% variance -- avoids divide-by-zero
ELSE variance_pct = ABS(counted_qty - expected_qty) / expected_qty * 100
IF variance_pct > shrinkage_threshold_pct
THEN create SHRINKAGE alert (CRITICAL)
Aging Inventory:
WEEKLY job:
FOR each (product_variant_id, location_id):
last_sale_date = MAX(sale_date) for this product at this location
days_since_sale = TODAY - last_sale_date
IF days_since_sale > aging_threshold_days
AND available_qty > 0
THEN create AGING_INVENTORY alert
PO Overdue:
DAILY job:
FOR each PO in status OPEN or PARTIAL:
expected_date = po.created_at + vendor.lead_time_days + buffer_days
IF TODAY > expected_date
AND no active PO_OVERDUE alert exists for this PO
THEN create PO_OVERDUE alert
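As one example, the Low Stock rule above can be expressed as a small pure function. The input shapes here are assumptions for illustration: `levels` keyed by (variant, location) tuples, and `open_alerts` as the set of keys that already have an OPEN alert, which suppresses duplicates as the pseudocode requires:

```python
def evaluate_low_stock(levels, open_alerts):
    """Return the (variant_id, location_id) keys needing a new LOW_STOCK
    alert: available qty at or below reorder point, and no active OPEN
    alert already exists for that combo."""
    new_alerts = []
    for key, (available_qty, reorder_point) in levels.items():
        if available_qty <= reorder_point and key not in open_alerts:
            new_alerts.append(key)
    return new_alerts
```

The other four rules follow the same pattern: a scheduled or event-driven scan, a threshold test, and a dedup check against existing alerts.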
4.16.2 Alert Data Model
| Field | Type | Description |
|---|---|---|
| alert_id | UUID | Unique alert identifier |
| tenant_id | UUID | Tenant scope |
| alert_type | ENUM | LOW_STOCK, OVERSTOCK, SHRINKAGE, AGING_INVENTORY, PO_OVERDUE |
| severity | ENUM | INFO, WARNING, CRITICAL |
| product_variant_id | UUID | Affected product (nullable for PO alerts) |
| location_id | UUID | Affected location |
| reference_type | VARCHAR | INVENTORY_LEVEL, COUNT_SESSION, PURCHASE_ORDER |
| reference_id | UUID | ID of the triggering entity |
| message | TEXT | Human-readable alert description |
| data_snapshot | JSONB | Key metrics at alert time (e.g., { "available_qty": 2, "reorder_point": 5 }) |
| triggered_at | TIMESTAMP | When alert was created |
| acknowledged_by | UUID | Staff who acknowledged (nullable) |
| acknowledged_at | TIMESTAMP | When acknowledged (nullable) |
| resolved_at | TIMESTAMP | When condition cleared (nullable) |
| auto_resolved | BOOLEAN | true if resolved by system (e.g., stock replenished) |
4.16.3 Alert Lifecycle
stateDiagram-v2
[*] --> TRIGGERED: Condition detected
TRIGGERED --> ACKNOWLEDGED: Staff views/clicks alert
ACKNOWLEDGED --> RESOLVED: Condition cleared (manual or auto)
TRIGGERED --> RESOLVED: Condition auto-clears (e.g., stock received)
RESOLVED --> [*]
note right of TRIGGERED
Appears on dashboard
May send email/notification
end note
note right of ACKNOWLEDGED
Staff is aware
Working on resolution
end note
note right of RESOLVED
Condition no longer active
Retained for history/reporting
end note
Auto-resolution rules:
| Alert Type | Auto-Resolves When |
|---|---|
| Low Stock | available_qty > reorder_point (stock received or transferred in) |
| Overstock | days_of_supply <= overstock_threshold (stock sold or transferred out) |
| Shrinkage | Never auto-resolves; requires manual acknowledgment |
| Aging Inventory | Sale occurs for the product at the location |
| PO Overdue | PO status changes to RECEIVED or COMPLETED |
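A sketch of how the auto-resolution table might be evaluated in code. The `metrics` dict keys are assumed names for illustration, not fields from the data model above; SHRINKAGE deliberately has no rule and so never auto-resolves:

```python
def should_auto_resolve(alert_type, metrics):
    """Check whether an alert's triggering condition has cleared,
    per the auto-resolution rules table."""
    rules = {
        "LOW_STOCK": lambda m: m["available_qty"] > m["reorder_point"],
        "OVERSTOCK": lambda m: m["days_of_supply"] <= m["overstock_threshold"],
        "AGING_INVENTORY": lambda m: m["sale_occurred"],
        "PO_OVERDUE": lambda m: m["po_status"] in ("RECEIVED", "COMPLETED"),
    }
    rule = rules.get(alert_type)  # SHRINKAGE has no entry: always False
    return bool(rule and rule(metrics))
```

When this returns true, the system would set `resolved_at` and `auto_resolved = true` on the alert record.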
4.16.4 Email Templates
Four email templates cover the primary inventory communication needs. Each template uses dynamic field substitution.
Template 1: TMPL_INV_PO_VENDOR
| Property | Value |
|---|---|
| Template ID | TMPL_INV_PO_VENDOR |
| Trigger | PO status changes to OPEN (submitted to vendor) |
| Recipient | Vendor email address (from vendor record) |
| Subject | Purchase Order {po_number} from {tenant_name} |
| Dynamic Fields | vendor_name, ship_to_address (store or HQ), po_number, po_date, expected_delivery_date, line_items_table (SKU, description, qty, unit cost, line total), po_total, special_instructions, buyer_name, buyer_email, buyer_phone |
Template 2: TMPL_INV_TRANSFER_ALERT
| Property | Value |
|---|---|
| Template ID | TMPL_INV_TRANSFER_ALERT |
| Trigger | Transfer status changes to SHIPPED |
| Recipient | Destination store manager email |
| Subject | Incoming Transfer {transfer_number} from {source_store_name} |
| Dynamic Fields | source_store_name, destination_store_name, transfer_number, shipped_date, expected_arrival_date, manifest_table (SKU, description, qty shipped), total_items, tracking_number, carrier_name, shipper_name |
Template 3: TMPL_INV_LOW_STOCK
| Property | Value |
|---|---|
| Template ID | TMPL_INV_LOW_STOCK |
| Trigger | Daily job (configurable time, default: 7:00 AM) |
| Recipient | Store manager and/or HQ inventory manager (configurable) |
| Subject | Low Stock Alert: {count} items below reorder point at {location_name} |
| Dynamic Fields | location_name, report_date, count (number of items), items_table (SKU, product name, current qty, reorder point, reorder qty, primary vendor, vendor lead time), total_reorder_value |
Template 4: TMPL_INV_COUNT_REMINDER
| Property | Value |
|---|---|
| Template ID | TMPL_INV_COUNT_REMINDER |
| Trigger | Scheduled count is N days away (configurable, default: 2 days) |
| Recipient | Store manager |
| Subject | Inventory Count Reminder: {count_type} scheduled for {scheduled_date} |
| Dynamic Fields | count_type (Full, Cycle, Spot Check), scheduled_date, scheduled_time, location_name, scope_description (e.g., “Category: Tops” or “Full Store”), assigned_staff_names, estimated_duration, special_instructions |
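Dynamic field substitution for subjects like `Purchase Order {po_number} from {tenant_name}` maps naturally onto Python’s `str.format`. A minimal sketch (the tenant name is a made-up example value); it fails loudly on a missing field rather than sending a half-rendered email:

```python
def render(template: str, fields: dict) -> str:
    """Substitute {field} placeholders in an email template.
    Raises KeyError if a required dynamic field is missing, so an
    incomplete email is never sent."""
    return template.format(**fields)

subject = render(
    "Purchase Order {po_number} from {tenant_name}",
    {"po_number": "PO-2026-00142", "tenant_name": "Acme Retail"},
)
```

A production implementation would layer HTML bodies and digest batching on top, but the substitution contract stays the same.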
4.17 Inventory Dashboard & Reports
Scope: A dedicated inventory dashboard providing at-a-glance KPIs and a comprehensive reporting suite consolidating all reports referenced across Sections 4.3 through 4.16.
4.17.1 Dashboard KPIs
The inventory dashboard displays eight primary KPI cards in a responsive grid layout. Each card shows the current value, trend indicator, and drill-down link.
| KPI Card | Metric | Trend | Drill-Down |
|---|---|---|---|
| Total Inventory Value (WAC) | Sum of (available_qty x WAC) across all locations | 30-day trend arrow (up/down/flat) + % change | Inventory Valuation report |
| Low Stock Items | Count of products where available_qty <= reorder_point | Delta from prior week | Low Stock Alert report |
| Pending PO Count | Count of POs in OPEN or PARTIAL status | Total pending PO value in parentheses | Open PO Report |
| Open Transfers | Count of transfers in REQUESTED, APPROVED, PICKING, or SHIPPED status | Total in-transit items count | Open Transfer Report |
| Upcoming Counts | Count of scheduled counts in next 7 days | Next count date and type | Count schedule calendar |
| Shrinkage % | (Total variance value / Total inventory value) x 100 for last 30 days | vs. prior 30-day period | Shrinkage Analysis report |
| Dead Stock Count | Count of products with zero sales velocity in last 90 days | Delta from prior month | Dead Stock Report |
| Avg Days of Supply | Average days_of_supply across all active products by category | Top 3 categories with lowest supply | Days of Supply report |
Dashboard filters:
- Location: All Stores, specific store, or HQ
- Category: All, or specific category
- Date range: Applies to trend calculations
- Brand: All, or specific brand
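Two of the KPI formulas above, sketched as Python helpers; the input shapes are assumptions for illustration:

```python
def total_inventory_value(levels):
    """Total Inventory Value (WAC) card: sum of available_qty x WAC
    across all locations. `levels` is an iterable of (qty, wac) pairs."""
    return sum(qty * wac for qty, wac in levels)

def shrinkage_pct(total_variance_value, total_value):
    """Shrinkage % card: (total variance value / total inventory value)
    x 100 over the trailing 30 days."""
    if total_value == 0:
        return 0.0
    return total_variance_value / total_value * 100
```

The remaining cards are simple counts or averages over the same inventory-level data, scoped by the dashboard filters.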
4.17.2 Master Report Suite
The following table consolidates all inventory reports from across the module. Each report includes its source section, purpose, key data fields, grouping keys, and export formats.
| # | Report Name | Source Section | Purpose | Key Data Fields | Grouping Keys | Export |
|---|---|---|---|---|---|---|
| 1 | Inventory Snapshot | 4.3 | Current QoH and total value at a point in time | SKU, product name, variant, location, available qty, reserved qty, total qty, WAC, extended value | Location, Category, Brand, Vendor | CSV, PDF |
| 2 | Low Stock Alert | 4.5 | Items below reorder point requiring restock action | SKU, product name, location, available qty, reorder point, reorder qty, primary vendor, lead time | Location, Vendor, Category | CSV, PDF, Dashboard |
| 3 | Stock Movement Log | 4.12 | Full audit trail of all inventory ins and outs | Movement ID, timestamp, type, SKU, product name, qty change, location, reference type, reference ID, staff, notes | Movement Type, Location, Date Range, Staff | CSV |
| 4 | Inventory Valuation (WAC) | 4.11 | Total inventory value using Weighted Average Cost | SKU, product name, location, qty, WAC per unit, extended value, % of total value | Location, Category, Brand | CSV, PDF |
| 5 | Shrinkage Analysis | 4.6 | Expected vs. actual quantities from count sessions | Count session, date, location, SKU, expected qty, counted qty, variance, variance %, variance value | Location, Date Range, Category | CSV, PDF |
| 6 | Vendor Performance Scorecard | 4.4 | Vendor reliability metrics across POs | Vendor name, total POs, on-time %, avg lead time, fill rate %, defect rate %, cost variance %, total spend | Vendor, Date Range | CSV, PDF |
| 7 | Inventory Turnover | 4.5 | Stock efficiency metrics | SKU, product name, category, avg inventory, COGS, turnover rate, days of supply, sell-through % | Category, Brand, Location | CSV, PDF |
| 8 | ABC Classification | 4.5 | Pareto analysis of products by revenue contribution | SKU, product name, revenue (period), cumulative revenue %, classification (A/B/C), qty sold, avg margin | Category, Brand, Location | CSV, PDF |
| 9 | Aging Analysis | 4.5 | Inventory age distribution by time buckets | SKU, product name, location, qty, receive date, age (days), age bucket (0-30, 31-60, 61-90, 90+), value | Location, Category, Age Bucket | CSV, PDF |
| 10 | Dead Stock Report | 4.5 | Items with zero sales velocity over threshold period | SKU, product name, location, available qty, value, last sale date, days since last sale, receive date | Location, Category, Vendor | CSV, PDF |
| 11 | Open PO Report | 4.4 | Purchase orders pending receipt | PO number, vendor, status, created date, expected date, total lines, received lines, total value, outstanding value | Vendor, Status, Location | CSV, PDF |
| 12 | PO Receiving Report | 4.4 | Details of received PO line items | PO number, receive date, SKU, ordered qty, received qty, variance, unit cost, cost variance, receiver staff | PO Number, Vendor, Date Range | CSV, PDF |
| 13 | Vendor Lead Time Report | 4.4 | Actual vs. expected delivery times by vendor | Vendor name, PO number, ordered date, expected date, received date, actual lead time, variance (days) | Vendor, Date Range | CSV, PDF |
| 14 | PO Variance Report | 4.4 | Discrepancies between ordered and received | PO number, SKU, ordered qty, received qty, qty variance, ordered cost, actual cost, cost variance | PO Number, Vendor | CSV, PDF |
| 15 | Cost Analysis Report | 4.11 | WAC trends and cost changes over time | SKU, product name, WAC (current), WAC (30d ago), WAC (90d ago), cost change %, last PO cost, vendor | Category, Vendor, Date Range | CSV, PDF |
| 16 | Reorder Alerts | 4.5 | Products approaching or at reorder point | SKU, product name, location, available qty, reorder point, days of supply, velocity, recommended qty, primary vendor | Location, Vendor, Category | CSV, PDF, Dashboard |
| 17 | Auto-PO Performance | 4.5 | Effectiveness of automatic PO generation | Month, auto-PO count, manual PO count, auto-PO accuracy %, stock-out events, avg days to stock-out | Month, Location | CSV, PDF |
| 18 | Velocity Trends | 4.5 | Sales velocity over time per product | SKU, product name, velocity (7d), velocity (30d), velocity (90d), trend direction, seasonality index | Category, Brand, Location | CSV, PDF |
| 19 | Days of Supply | 4.5 | How long current stock will last at current velocity | SKU, product name, location, available qty, avg daily velocity, days of supply, classification | Location, Category, Risk Level | CSV, PDF |
| 20 | Open RMA Report | 4.7 | Vendor returns in progress | RMA number, vendor, status, created date, total lines, total units, total value, expected credit | Vendor, Status | CSV, PDF |
| 21 | Vendor Return Rate | 4.7 | Defect and return rates by vendor | Vendor name, total received units, returned units, return rate %, top return reasons, credit recovered | Vendor, Date Range | CSV, PDF |
| 22 | RMA Aging | 4.7 | RMA age analysis for follow-up | RMA number, vendor, status, created date, age (days), last action date, total value, expected credit | Vendor, Status, Age Bucket | CSV, PDF |
| 23 | RMA Credit Reconciliation | 4.7 | Expected vs. received vendor credits | RMA number, vendor, expected credit, received credit, variance, credit date, reconciliation status | Vendor, Date Range, Status | CSV, PDF |
| 24 | Open Transfer Report | 4.8 | Transfers in progress between stores | Transfer number, source, destination, status, created date, shipped date, total items, expected arrival | Status, Source, Destination | CSV, PDF |
| 25 | Transfer Variance Report | 4.8 | Discrepancies between shipped and received quantities | Transfer number, SKU, shipped qty, received qty, variance, variance reason, source, destination | Transfer Number, Location | CSV, PDF |
| 26 | Transfer Volume Report | 4.8 | Transfer activity metrics over time | Period, total transfers, total units moved, avg transit days, top source locations, top destination locations | Date Range, Location Pair | CSV, PDF |
| 27 | Rebalancing Suggestions | 4.8 | System-generated transfer recommendations | SKU, product name, source location, source qty, source days of supply, destination location, destination qty, destination days of supply, suggested transfer qty | Category, Priority | CSV, PDF |
| 28 | Serial Number Lookup | 4.10 | Complete history of a serial-tracked unit | Serial number, SKU, product name, current status, current location, receive date, PO number, sale date, order ID, customer, return history | Status, Location | CSV, PDF |
| 29 | Lot Inventory Report | 4.10 | Current stock by lot number | Lot number, SKU, product name, location, qty available, receive date, expiry date (if applicable), age (days), PO number | Location, SKU, Expiry Status | CSV, PDF |
| 30 | Lot Trace (Recall) | 4.10 | Full traceability for recall management | Lot number, SKU, received qty, receive date, PO number, vendor, sold qty, remaining qty, customer list (with order IDs), locations distributed to | Lot Number | CSV, PDF |
| 31 | Landed Cost Analysis | 4.11 | Cost breakdown including freight, duty, and handling | PO number, SKU, base unit cost, freight allocation, duty allocation, handling allocation, total landed cost, allocation method | Vendor, PO Number | CSV, PDF |
| 32 | Margin Analysis | 4.11 | Gross margin by product and category | SKU, product name, category, sell price, WAC (landed), gross margin $, gross margin %, units sold, total margin | Category, Brand, Location | CSV, PDF |
| 33 | Cost Trend Report | 4.11 | Historical cost changes per product | SKU, product name, vendor, cost (current), cost (3m ago), cost (6m ago), cost (12m ago), trend, % change (YoY) | Vendor, Category, Trend Direction | CSV, PDF |
4.17.3 Report Access Control
| Role | View | Export | Schedule | Create Custom |
|---|---|---|---|---|
| Admin | All reports | All formats | Yes | Yes |
| HQ Manager | All reports | All formats | Yes | No |
| Store Manager | Store-scoped reports | CSV, PDF | Yes | No |
| Buyer | PO and vendor reports | CSV, PDF | Yes | No |
| Staff | Inventory Snapshot, Serial Lookup only | CSV only | No | No |
4.18 Inventory Business Rules — YAML Configuration
Cross-Reference: All inventory business rules configuration has been consolidated into Module 5: Setup & Configuration, Section 5.19.4 (Inventory Configuration). See Module 5, Section 5.19 for the complete YAML configuration reference covering all modules.
4.19 Inventory User Stories & Acceptance Criteria
Scope: All user stories for the Inventory Module, organized by epic, with Gherkin acceptance criteria. Epics 4.A through 4.F are moved from BRD Section 3.23 (renumbered). Epics 4.G through 4.P are new.
Epic 4.A: Vendor RMA
(Moved from Epic 3.I, renumbered)
Story 4.A.1: Create RMA
- As a Store Manager, I want to create a Return Merchandise Authorization with line items (product, qty, reason) so that I can return defective or overstock items to the vendor.
- Constraint: RMA numbers auto-increment per tenant using format RMA-{YEAR}-{SEQUENCE}.
Story 4.A.2: RMA Workflow
- As a Buyer, I want an RMA to follow Draft > Submitted > Vendor Approved > Shipped Back > Credit/Replacement Received > Closed lifecycle so that vendor returns are tracked through every stage.
- Constraint: Each status transition is logged with timestamp and staff ID.
Story 4.A.3: Credit/Replacement
- As a Buyer, I want to record vendor credits against the RMA and track replacement shipments so that I can reconcile vendor credits with expected amounts.
- Constraint: When vendor sends replacement, a linked PO is created for receiving. Credit variance > 5% triggers alert.
Epic 4.B: Reorder Management
(Moved from Epic 3.J, renumbered)
Story 4.B.1: Dynamic Reorder Points
- As an Inventory Manager, I want the system to calculate reorder points from 90-day sales velocity, lead time, and safety stock so that reorder points stay current without manual maintenance.
- Constraint: Recalculated weekly via background job. Safety stock uses 1.65 sigma (95% service level).
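Under the constraint above, the weekly recalculation might look like the standard reorder-point formula. The spec fixes only the 1.65-sigma service factor; the sqrt(lead time) scaling of daily demand variability is the textbook form and an assumption here:

```python
import math

def reorder_point(avg_daily_velocity, lead_time_days,
                  daily_demand_stddev, service_z=1.65):
    """Reorder point = expected demand during lead time + safety stock.

    service_z = 1.65 corresponds to roughly a 95% service level,
    per the Story 4.B.1 constraint.
    """
    demand_during_lead = avg_daily_velocity * lead_time_days
    safety_stock = service_z * daily_demand_stddev * math.sqrt(lead_time_days)
    return math.ceil(demand_during_lead + safety_stock)
```

For example, a product selling 3 units/day with a 4-day lead time and a daily demand standard deviation of 1 yields a reorder point of ceil(12 + 3.3) = 16.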
Story 4.B.2: Auto-PO Generation
- As a Buyer, I want the system to auto-create draft POs when stock drops below reorder point so that I can review and submit without manually building each PO.
- Constraint: Staff reviews before submission. POs consolidated by vendor. Minimum PO value enforced.
Story 4.B.3: Seasonal Adjustment
- As an Inventory Manager, I want reorder velocity to adjust based on historical same-period data so that seasonal demand patterns are accounted for in reorder calculations.
- Constraint: Uses trailing 3-year same-month average when available. New products without history use category-level seasonality.
Epic 4.C: Inventory Control
(Moved from Epic 3.K, renumbered)
Story 4.C.1: Inventory Status
- As a Store Manager, I want each inventory unit to have a status (Available, Quarantine, Damaged, In-Transit, Reserved, On-Hold) so that only sellable stock is available for sale or transfer.
- Constraint: Only AVAILABLE status allows sale. Status transitions are logged to the movement history.
Story 4.C.2: Stock Counting
- As a Store Manager, I want five counting methods (full physical count, cycle count, scanner-assisted count, monthly spot check, on-demand count) so that I can choose the right method for each situation.
- Constraint: Workflow is Count > Review Variances > Approve Adjustments. High-variance items require manager approval.
Story 4.C.3: Adjustments
- As a Staff Member, I want to submit manual inventory adjustments with a reason code so that discrepancies can be corrected and tracked.
- Constraint: Reason codes required (DAMAGED, THEFT, COUNT_CORRECTION, VENDOR_RETURN, OTHER). Adjustments above configurable threshold require manager approval.
Story 4.C.4: Unified Receiving
- As a Warehouse Clerk, I want a single receiving workflow that handles all receiving types (PO receive, transfer receive, customer return, RMA replacement) so that the process is consistent regardless of source.
- Constraint: Barcode scanner verification. Variance tracking against expected quantities.
Story 4.C.5: Bulk Operations
- As a Buyer, I want to import products via CSV, export catalog data to CSV/XLSX, and make bulk changes to price/cost/status/category so that large-scale updates are efficient.
- Constraint: Bulk changes above configurable thresholds require approval workflow integration.
Epic 4.D: Inter-Store Transfers
(Moved from Epic 3.L, renumbered)
Story 4.D.1: Transfer Workflow
- As a Store Manager, I want inter-store transfers to follow Request > Approve > Pick > Ship > Receive > Complete lifecycle with variance tracking at each stage so that transfer accuracy is maintained.
- Constraint: Shipped qty vs. received qty variance logged. Destination must scan-confirm received items.
Story 4.D.2: Auto-Rebalancing
- As an HQ Manager, I want the system to analyze velocity vs. stock across locations and suggest transfers to equalize days of supply so that no store is overstocked while another is understocked.
- Constraint: Staff reviews and approves suggested transfers. Minimum imbalance threshold is configurable (default: 14 days difference).
Epic 4.E: Serial & Lot Tracking
(Moved from Epic 3.M, renumbered)
Story 4.E.1: Serial Tracking
- As a Cashier, I want serial-tracked products to require serial number entry at both receive and sale so that each unit is individually tracked for warranty and after-sale support.
- Constraint: Serial number linked to customer on sale. Serial validated as IN_STOCK at selling location before sale proceeds.
Story 4.E.2: Lot Tracking
- As a Warehouse Clerk, I want lot numbers assigned at receiving with FIFO enforcement on sale so that lot-tracked products are sold in order and full traceability is available for recall management.
- Constraint: FIFO enforced automatically. Lot trace report shows all customers who received items from a specific lot.
Epic 4.F: Landed Cost & Costing
(Moved from Epic 3.N, renumbered)
Story 4.F.1: Landed Cost
- As a Buyer, I want PO receiving to include cost allocation for freight, duties, customs, and handling so that the true per-unit cost is known for accurate margin calculations.
- Constraint: Three allocation methods supported (By Value, By Quantity, Manual). Landed cost stored as the true cost basis.
Story 4.F.2: Weighted Average Cost
- As an Inventory Manager, I want the system to maintain weighted average cost per product per location, recalculated on each receive, so that COGS calculations and margin reporting are accurate.
- Constraint: WAC formula: New WAC = ((Existing Qty x Existing WAC) + (Received Qty x Received Cost)) / (Existing Qty + Received Qty). WAC is used for all COGS and margin calculations.
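The WAC formula from Story 4.F.2 as a worked helper, with a zero-quantity guard added as a defensive assumption:

```python
def new_wac(existing_qty, existing_wac, received_qty, received_cost):
    """Recalculate Weighted Average Cost on receive:
    ((existing qty x existing WAC) + (received qty x received cost))
    / (existing qty + received qty)."""
    total_qty = existing_qty + received_qty
    if total_qty == 0:
        return 0.0  # guard: nothing on hand and nothing received
    return ((existing_qty * existing_wac) +
            (received_qty * received_cost)) / total_qty
```

Receiving 10 units at $12.00 on top of 10 units held at a $10.00 WAC yields a new WAC of $11.00; a first receive simply takes the received cost.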
Epic 4.G: Receiving & Inspection
(New)
Story 4.G.1: Open Receive with Discrepancy Handling
- As a Warehouse Clerk, I want to receive a quantity different from the PO ordered quantity so that I can handle partial shipments, over-shipments, and damaged goods.
- Constraint: Uses the “triple approach” – (1) accept what arrived, (2) quarantine damaged items, (3) auto-create RMA for damages. PO status updates to PARTIAL if not all lines received.
Story 4.G.2: Non-PO Receiving
- As a Store Manager, I want to receive stock without a linked PO (e.g., consignment, found stock, vendor replacement) so that all inventory entering the store is tracked regardless of source.
- Constraint: Reason code required. Creates a standalone receiving record with movement log entry.
Story 4.G.3: Over-Shipment Threshold
- As a Buyer, I want the system to enforce a configurable over-receive threshold so that warehouses cannot accept significantly more than what was ordered without authorization.
- Constraint: Default threshold is 10% above PO line qty. Over-receive above threshold requires manager approval. Over-receive below threshold is accepted and logged.
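The over-receive rule from Story 4.G.3 as a small decision function; the return labels are illustrative names, not spec enums:

```python
def over_receive_action(ordered_qty, received_qty, threshold_pct=10):
    """Decide how to handle a receive against a PO line:
    at or under ordered qty -> accept; overage within the threshold ->
    accept and log; overage above the threshold -> manager approval."""
    overage = received_qty - ordered_qty
    if overage <= 0:
        return "ACCEPT"
    allowed = ordered_qty * threshold_pct / 100
    return "ACCEPT_AND_LOG" if overage <= allowed else "REQUIRE_APPROVAL"
```

With the 10% default, receiving 22 against an order of 20 is accepted and logged, while 23 escalates to a manager.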
Epic 4.H: POS Integration
(New)
Story 4.H.1: Reserve on Cart Add
- As a Cashier, I want inventory to be soft-reserved when I add an item to the cart so that another terminal at the same store does not sell the last unit before I complete payment.
- Constraint: Soft reservation decrements available qty immediately. Removing item from cart releases reservation instantly. Other terminals see reduced qty.
Story 4.H.2: Commit on Payment
- As a Cashier, I want the inventory reservation to convert to a permanent decrement when payment completes so that the stock ledger accurately reflects completed sales.
- Constraint: SALE movement logged. WAC captured at time of sale. If payment fails, reservation holds for 30 seconds then auto-releases.
Story 4.H.3: Return to Stock
- As a Store Staff Member, I want customer returns to automatically restore inventory to AVAILABLE status at the return location so that returned items are immediately available for resale.
- Constraint: Default status is AVAILABLE. Staff can override to DAMAGED if item is not resalable. RETURN movement logged. WAC is not recalculated (original cost preserved).
Epic 4.I: Online Fulfillment
(New)
Story 4.I.1: Nearest Store Reservation
- As an Online Operations Manager, I want online orders to automatically reserve inventory at the nearest store with stock so that orders ship quickly from the closest location.
- Constraint: Store assignment uses distance calculation from store to customer shipping address. If no single store has full stock and split fulfillment is disabled, order flags for manual assignment.
Story 4.I.2: Shopify Inventory Sync
- As an Online Operations Manager, I want inventory quantities to always sync bidirectionally between POS and Shopify so that online customers see accurate availability.
- Constraint: Webhook-driven sync (< 5 second target). Reconciliation every 15 minutes. POS is source of truth. Sync operates independently of catalog sync mode.
Story 4.I.3: Pick-Pack-Ship Workflow
- As a Store Staff Member, I want online orders assigned to my store to appear on a fulfillment queue with a guided pick-pack-ship workflow so that fulfillment is accurate and tracked.
- Constraint: Each item scanned during pick. Serial/lot numbers captured. Carrier and tracking entered at ship. Shopify order updated with tracking automatically.
Epic 4.J: Offline Operations
(New)
Story 4.J.1: Queue Inventory Changes Offline
- As a Store Staff Member, I want all inventory changes (sales, returns, adjustments, counts) to be queued locally during network outage so that I can continue working without interruption.
- Constraint: Local cache updated immediately. Maximum 500 queued changes. Queue warning at 90% capacity.
Story 4.J.2: Conflict Resolution on Reconnect
- As a Store Manager, I want the system to reconcile offline changes with server state on reconnect so that inventory accuracy is maintained across all locations.
- Constraint: Changes uploaded in chronological order. Negative inventory conflicts flagged for manager review. Last-write-wins for quantities by default.
Epic 4.K: Alerts & Notifications
(New)
Story 4.K.1: Configurable Inventory Alerts
- As an HQ Manager, I want configurable alerts for low stock, overstock, shrinkage, aging inventory, and overdue POs so that I am proactively informed of conditions requiring action.
- Constraint: Each alert type has configurable thresholds, severity, and delivery channels. Alert thresholds can be set per product per location.
Story 4.K.2: Email Templates
- As a Buyer, I want automated email templates for PO submission, transfer notifications, low stock digests, and count reminders so that stakeholders receive timely, formatted communications.
- Constraint: Four templates with dynamic field substitution. Templates support HTML formatting. Digest emails consolidate multiple alerts.
Story 4.K.3: Alert Acknowledgment
- As a Store Manager, I want to acknowledge alerts on the dashboard so that my team knows I am aware of and working on the issue.
- Constraint: Alert lifecycle: TRIGGERED > ACKNOWLEDGED > RESOLVED. Alerts auto-resolve when the triggering condition clears (except shrinkage, which requires manual resolution).
Epic 4.L: Dashboard & Reporting
(New)
Story 4.L.1: Dedicated Inventory Dashboard
- As a Store Manager, I want a dedicated inventory dashboard with KPI cards so that I can see the health of my store’s inventory at a glance.
- Constraint: Eight KPI cards (total value, low stock count, pending POs, open transfers, upcoming counts, shrinkage %, dead stock, avg days of supply). Filterable by location, category, brand.
Story 4.L.2: Analytics Report Suite
- As an HQ Manager, I want a comprehensive suite of 33 inventory reports so that I can analyze every aspect of inventory performance.
- Constraint: Reports exportable to CSV and PDF. Role-based access. Store managers see only their store’s data unless granted multi-store access.
Story 4.L.3: ABC Classification
- As a Buyer, I want monthly Pareto analysis that classifies products as A (top 20% revenue), B (next 30%), or C (bottom 50%) so that I can prioritize inventory management efforts.
- Constraint: New products exempt until 60 days of sales data. Classification drives cycle count frequency (A = 30 days, B = 60 days, C = 90 days).
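A sketch of the cumulative-revenue Pareto classification from Story 4.L.3. The exact boundary handling (inclusive cutoffs at 20% and 50% cumulative revenue) is an assumption the spec does not pin down:

```python
def classify_abc(revenue_by_sku):
    """Rank SKUs by period revenue, then classify by cumulative revenue
    contribution: A = top 20%, B = next 30% (to 50% cumulative), C = rest."""
    total = sum(revenue_by_sku.values())
    classes, cumulative = {}, 0.0
    for sku, revenue in sorted(revenue_by_sku.items(),
                               key=lambda kv: kv[1], reverse=True):
        cumulative += revenue
        pct = cumulative * 100 / total
        classes[sku] = "A" if pct <= 20 else "B" if pct <= 50 else "C"
    return classes
```

The resulting class then drives cycle-count frequency (A = 30 days, B = 60 days, C = 90 days) per the constraint.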
Epic 4.M: PO Approval Workflow
(New)
Story 4.M.1: Threshold-Based PO Approval
- As an Owner, I want purchase orders above a configurable dollar threshold to require approval before submission so that large purchases are reviewed before committing funds.
- Constraint: Two tiers – manager approval at $500+, admin/owner approval at $5,000+. POs below $500 auto-approve.
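The two-tier routing from Story 4.M.1 reduces to a simple threshold check; the tier labels are illustrative, not spec enums:

```python
def approval_tier(po_total):
    """Route a PO by total value: below $500 auto-approves, $500+ needs
    manager approval, $5,000+ needs admin/owner approval."""
    if po_total >= 5000:
        return "ADMIN_APPROVAL"
    if po_total >= 500:
        return "MANAGER_APPROVAL"
    return "AUTO_APPROVED"
```

In a real implementation both thresholds would come from tenant configuration rather than constants.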
Story 4.M.2: Auto-Approve Below Threshold
- As a Buyer, I want POs below the approval threshold to be submitted directly to the vendor without waiting for approval so that routine restocking is not delayed.
- Constraint: Auto-approved POs are still logged in the audit trail with source “AUTO_APPROVED”. Notification sent to manager for visibility.
Story 4.M.3: Approval Expiry
- As a Buyer, I want pending approvals to expire after a configurable period so that stale PO requests do not block the workflow indefinitely.
- Constraint: Default expiry is 7 days. Expired approvals notify the requester. Requester can resubmit.
Epic 4.N: Dead Stock Management
(New)
Story 4.N.1: Dead Stock Detection
- As an Inventory Manager, I want the system to automatically identify products with zero sales velocity over a configurable period so that I can take action on non-performing inventory.
- Constraint: Default threshold is 90 days of zero sales. Detection runs daily. Products with qty > 0 and velocity = 0 are flagged.
Story 4.N.2: Dead Stock Alerting
- As a Store Manager, I want to receive dashboard alerts for dead stock items so that I am aware of products that need markdowns, transfers, or vendor returns.
- Constraint: Alert includes product name, location, qty on hand, value (at WAC), and last sale date. Grouped by category.
Story 4.N.3: Dead Stock Reporting
- As a Buyer, I want a dead stock report showing all zero-velocity items with their value and age so that I can make informed decisions about clearance, vendor returns, or donation.
- Constraint: Report filterable by location, category, vendor, and value range. Exportable to CSV and PDF.
Epic 4.P: Overstock Vendor Returns
(New)
Story 4.P.1: Negotiate Vendor Return
- As a Buyer, I want to create overstock vendor return requests with proposed quantities and negotiate terms with the vendor so that excess inventory can be returned before it becomes dead stock.
- Constraint: Overstock returns follow the RMA workflow (Section 4.7). Must be enabled in config (overstock_returns_enabled: true). Return window configurable per vendor.
Story 4.P.2: Restocking Fee Handling
- As a Buyer, I want to record restocking fees charged by the vendor so that the net credit is accurately tracked.
- Constraint: Restocking fee recorded as a percentage of unit cost. Default is 0%. Maximum configurable (default max: 25%). Net credit = (qty x cost) - restocking fee.
Story 4.P.3: Overstock Return Reporting
- As an HQ Manager, I want reports showing overstock return volume, restocking fees paid, and net credits recovered by vendor so that I can evaluate vendor return programs.
- Constraint: Includes vendor return rate, total credits recovered, total restocking fees, net recovery ratio. Filterable by vendor, date range, and category.
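The restocking-fee arithmetic in Story 4.P.2 can be sketched as follows. This is a minimal illustration; the function name and signature are hypothetical, not the platform's actual API:

```python
def net_credit(qty: int, unit_cost: float, fee_pct: float,
               max_fee_pct: float = 25.0) -> float:
    """Net vendor credit = (qty x cost) - restocking fee.

    The fee is a percentage of the gross credit, capped at a
    configurable maximum (default max: 25%, per the constraint above).
    """
    if fee_pct > max_fee_pct:
        raise ValueError(f"Restocking fee {fee_pct}% exceeds maximum {max_fee_pct}%")
    gross = qty * unit_cost
    return round(gross - gross * fee_pct / 100, 2)

# 30 units at $18.00 with a 15% fee: $540.00 gross - $81.00 fee = $459.00 net
print(net_credit(30, 18.00, 15.0))
```

With the default 0% fee, the net credit simply equals the gross credit.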
Inventory Acceptance Criteria: Gherkin Scenarios
Feature: Receiving with Discrepancy
As a Warehouse Clerk
I need to handle partial shipments and damaged goods during receiving
So that inventory records accurately reflect what was received
Background:
Given a Purchase Order "PO-2026-00142" exists in status "OPEN"
And the PO contains the following lines:
| SKU | Product | Ordered Qty | Unit Cost |
| SKU-1001 | Classic Fit Tee Navy M | 20 | $12.50 |
| SKU-1002 | Classic Fit Tee Navy L | 15 | $12.50 |
| SKU-1003 | Slim Chino Khaki 32 | 10 | $24.00 |
And the receiving location is "HQ Warehouse"
Scenario: Partial receive with all items in good condition
When I receive the following quantities:
| SKU | Received Qty |
| SKU-1001 | 20 |
| SKU-1002 | 10 |
And I do not receive SKU-1003
Then the stock level for "SKU-1001" at "HQ Warehouse" should increase by 20
And the stock level for "SKU-1002" at "HQ Warehouse" should increase by 10
And the stock level for "SKU-1003" should remain unchanged
And the PO status should update to "PARTIAL"
And the remaining qty for "SKU-1002" should be 5
And the remaining qty for "SKU-1003" should be 10
And WAC for "SKU-1001" should be recalculated using $12.50
And WAC for "SKU-1002" should be recalculated using $12.50
Scenario: Receive with damaged items quarantined
When I receive 20 units of "SKU-1001"
And I mark 3 units of "SKU-1001" as "DAMAGED" with reason "Stained in transit"
Then the stock level for "SKU-1001" at "HQ Warehouse" should show:
| Status | Qty |
| AVAILABLE | 17 |
| QUARANTINE | 3 |
And a movement log entry should be created with type "RECEIVE" and qty 20
And an RMA should be auto-created for 3 units of "SKU-1001"
And the RMA should reference "PO-2026-00142"
And the RMA reason should be "Stained in transit"
Scenario: Over-receive within threshold
Given the over-receive threshold is 10%
When I receive 22 units of "SKU-1001" (PO ordered 20)
Then the system should accept the over-receive (10% = 2 units, within threshold)
And the stock level for "SKU-1001" should increase by 22
And the PO line received qty should show 22
And an over-receive note should be logged
Scenario: Over-receive exceeds threshold requires approval
Given the over-receive threshold is 10%
When I attempt to receive 25 units of "SKU-1001" (PO ordered 20)
Then the system should block the receive with message "Over-receive of 25% exceeds 10% threshold"
And the system should prompt "Manager approval required for over-receive"
When manager "Mike" approves the over-receive
Then the stock level for "SKU-1001" should increase by 25
And the approval should be logged to the audit trail
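The WAC recalculation and over-receive checks exercised in the scenarios above can be sketched as follows (illustrative helpers under assumed names, not the platform's actual API):

```python
def recalc_wac(on_hand_qty: int, current_wac: float,
               received_qty: int, unit_cost: float) -> float:
    """Weighted average cost after a receive:
    new WAC = (on-hand value + received value) / total qty."""
    total = on_hand_qty + received_qty
    if total == 0:
        return current_wac  # nothing on hand or received; keep prior WAC
    return round((on_hand_qty * current_wac + received_qty * unit_cost) / total, 4)

def check_over_receive(ordered_qty: int, received_qty: int,
                       threshold_pct: float = 10.0):
    """Return (accepted, over_pct). Overages within the threshold are
    accepted; larger overages require manager approval."""
    if received_qty <= ordered_qty:
        return True, 0.0
    over_pct = (received_qty - ordered_qty) / ordered_qty * 100
    return over_pct <= threshold_pct, over_pct

# Receiving 20 units at $12.50 onto 10 units held at $10.00:
print(recalc_wac(10, 10.00, 20, 12.50))  # -> 11.6667
print(check_over_receive(20, 22))        # -> (True, 10.0): accepted
print(check_over_receive(20, 25))        # -> (False, 25.0): needs approval
```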
Feature: POS Inventory Reservation
As a POS system
I need to manage inventory reservations throughout the sale lifecycle
So that multiple terminals do not oversell available stock
Background:
Given product "Classic Fit Tee Navy M" (SKU-1001) has 5 units available at "Store A"
And Terminal 1 and Terminal 2 are active at "Store A"
Scenario: Reserve on add to cart
When cashier on Terminal 1 adds 1 unit of "SKU-1001" to the cart
Then a soft reservation should be created for Terminal 1
And available qty for "SKU-1001" at "Store A" should show 4
And Terminal 2 should see available qty as 4
Scenario: Release on item removal
Given Terminal 1 has 1 unit of "SKU-1001" in the cart (reserved)
And available qty shows 4
When cashier on Terminal 1 removes "SKU-1001" from the cart
Then the soft reservation should be released
And available qty for "SKU-1001" at "Store A" should show 5
Scenario: Commit on successful payment
Given Terminal 1 has 1 unit of "SKU-1001" in the cart (reserved)
When payment completes successfully on Terminal 1
Then the soft reservation should be deleted
And a SALE movement should be logged with qty -1
And available qty for "SKU-1001" at "Store A" should show 4 (permanent)
And WAC should be captured on the movement record
Scenario: Hold on payment failure then auto-release
Given Terminal 1 has 1 unit of "SKU-1001" in the cart (reserved)
And available qty shows 4
When payment fails on Terminal 1 (card declined)
Then the reservation should hold for 30 seconds
And available qty should remain 4 during the hold
When 30 seconds elapse without payment retry
Then the reservation should auto-release
And available qty should return to 5
Scenario: Hold on payment failure with retry
Given Terminal 1 has 1 unit of "SKU-1001" in the cart (reserved)
When payment fails on Terminal 1 (card declined)
And cashier retries payment within 30 seconds
And the retry succeeds
Then the existing reservation should be used (no new reservation)
And a SALE movement should be logged
And available qty should show 4 (permanent)
Scenario: Void releases all reservations
Given Terminal 1 has 2 units of "SKU-1001" in the cart (reserved)
And available qty shows 3
When cashier voids the entire transaction
Then all reservations for the session should be released
And available qty for "SKU-1001" should return to 5
And no movement records should be created
Scenario: Last unit contention between terminals
Given available qty for "SKU-1001" is 1
When cashier on Terminal 1 adds 1 unit of "SKU-1001" to cart
Then available qty should show 0
When cashier on Terminal 2 attempts to add "SKU-1001" to cart
Then Terminal 2 should see "SKU-1001 is out of stock at this location"
And the item should not be added to Terminal 2's cart
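A minimal sketch of the soft-reservation ledger these scenarios describe (class and method names are illustrative assumptions): available quantity is on-hand minus the sum of active reservations, so every terminal sees the same availability.

```python
class ReservationLedger:
    """Per-SKU, per-location soft reservations for POS carts."""

    def __init__(self, on_hand: int):
        self.on_hand = on_hand
        self.reserved = {}  # terminal id -> reserved qty

    def available(self) -> int:
        return self.on_hand - sum(self.reserved.values())

    def reserve(self, terminal: str, qty: int) -> bool:
        """Reserve on add-to-cart; refuse when availability is exhausted."""
        if qty > self.available():
            return False  # "out of stock at this location"
        self.reserved[terminal] = self.reserved.get(terminal, 0) + qty
        return True

    def release(self, terminal: str) -> None:
        """Release on item removal, void, or payment-failure timeout."""
        self.reserved.pop(terminal, None)

    def commit(self, terminal: str) -> None:
        """Commit on successful payment: permanent decrement
        (the SALE movement record is logged elsewhere)."""
        self.on_hand -= self.reserved.pop(terminal, 0)

# Last-unit contention between two terminals:
ledger = ReservationLedger(on_hand=1)
print(ledger.reserve("T1", 1))  # True: available drops to 0
print(ledger.reserve("T2", 1))  # False: Terminal 2 sees out of stock
```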
Feature: Inventory Count with Scanner
As a Store Manager
I need scanner-primary counting with variance detection
So that counts are accurate and discrepancies are reviewed
Background:
Given a cycle count session is created for category "Tops" at "Store A"
And the following expected quantities exist:
| SKU | Product | Expected Qty |
| SKU-1001 | Classic Fit Tee Navy M | 25 |
| SKU-1002 | Classic Fit Tee Navy L | 18 |
| SKU-1003 | V-Neck Tee Black M | 12 |
And blind count mode is enabled (expected qty hidden from counter)
And scanner mode is "scan_primary"
And variance approval threshold is 10 units
Scenario: Scanner-primary count with no variance
When staff scans 25 barcodes for "SKU-1001"
And staff scans 18 barcodes for "SKU-1002"
And staff scans 12 barcodes for "SKU-1003"
And staff submits the count
Then all items should show zero variance
And the count status should change to "COMPLETED"
And no adjustments should be created
And the count should be logged to history
Scenario: Count with minor variance auto-adjusts
When staff scans 23 barcodes for "SKU-1001" (expected 25)
And staff scans 18 barcodes for "SKU-1002"
And staff scans 12 barcodes for "SKU-1003"
And staff submits the count
Then variance for "SKU-1001" should show -2 units
And the variance is below the 10-unit threshold
And "SKU-1001" inventory should be auto-adjusted to 23
And an adjustment record should be created with reason "COUNT_CORRECTION"
And the movement log should record qty change of -2 for "SKU-1001"
Scenario: Count with major variance requires approval
When staff scans 12 barcodes for "SKU-1001" (expected 25)
And staff submits the count
Then variance for "SKU-1001" should show -13 units
And the variance exceeds the 10-unit threshold
And the adjustment should be set to "PENDING_APPROVAL" status
And manager should receive an approval notification
And "SKU-1001" inventory should remain at 25 until approved
Scenario: Manager approves high-variance adjustment
Given a pending adjustment exists for "SKU-1001" with counted qty 12 (variance -13)
When manager "Mike" reviews and approves the adjustment
Then "SKU-1001" inventory should be adjusted to 12
And the adjustment reason should be "COUNT_CORRECTION"
And the approval should be logged with approver "Mike"
And a SHRINKAGE alert should be triggered (variance 52% exceeds 5% threshold)
Scenario: Recount required for extreme variance
Given variance approval threshold for recount is 20%
When staff counts 5 units for "SKU-1001" (expected 25, variance 80%)
And staff submits the count
Then the system should flag "SKU-1001" for mandatory recount
And the count status for "SKU-1001" should be "RECOUNT_REQUIRED"
And a different staff member should be assigned the recount
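The variance decision tree above can be sketched as a single classifier. The unit threshold comes from the Background (10 units); the recount percentage default used here (80%) is an assumption for illustration, since the spec leaves the interplay between the approval and recount thresholds configurable:

```python
def count_outcome(expected: int, counted: int,
                  unit_threshold: int = 10, recount_pct: float = 80.0) -> str:
    """Classify a cycle-count line:
    - extreme percentage variance   -> mandatory recount
    - variance above unit threshold -> pending manager approval
    - otherwise                     -> auto-adjust
    """
    variance = counted - expected
    pct = abs(variance) / expected * 100 if expected else 0.0
    if pct >= recount_pct:
        return "RECOUNT_REQUIRED"
    if abs(variance) > unit_threshold:
        return "PENDING_APPROVAL"
    return "AUTO_ADJUST"

print(count_outcome(25, 23))  # AUTO_ADJUST: variance -2, within 10 units
print(count_outcome(25, 12))  # PENDING_APPROVAL: variance -13 (52%)
print(count_outcome(25, 5))   # RECOUNT_REQUIRED: variance 80%
```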
Feature: PO Approval Workflow
As an Owner
I need purchase orders above a dollar threshold to require approval
So that large purchases are reviewed before funds are committed
Background:
Given the following PO approval thresholds are configured:
| Threshold | Required Approver |
| < $500 | Auto-approve |
| >= $500 | Manager |
| >= $5000 | Admin/Owner |
And approval requests expire after 7 days
Scenario: Auto-approve PO below threshold
When buyer "Sarah" creates PO "PO-2026-00201" with total value $350.00
And submits the PO
Then the PO should be auto-approved
And the PO status should change to "OPEN"
And the audit log should record source "AUTO_APPROVED"
And an email should be sent to the vendor
And manager "Mike" should receive a visibility notification
Scenario: Manager approval required
When buyer "Sarah" creates PO "PO-2026-00202" with total value $1,200.00
And submits the PO for approval
Then an approval request should be created with status "PENDING"
And the PO status should remain "DRAFT"
And manager "Mike" should receive an approval notification
Scenario: Manager approves PO
Given a pending approval exists for PO "PO-2026-00202" ($1,200.00)
When manager "Mike" approves the PO
Then the PO status should change to "OPEN"
And the vendor should receive the PO email (TMPL_INV_PO_VENDOR)
And the audit log should record approver "Mike"
Scenario: Manager rejects PO
Given a pending approval exists for PO "PO-2026-00202" ($1,200.00)
When manager "Mike" rejects the PO with reason "Wait for vendor sale next month"
Then the PO status should remain "DRAFT"
And buyer "Sarah" should receive a notification with the rejection reason
And the rejection should be logged to the audit trail
Scenario: Large PO escalates to admin
When buyer "Sarah" creates PO "PO-2026-00203" with total value $7,500.00
And submits the PO for approval
Then an approval request should require "ADMIN" level approval
And manager approval should NOT be sufficient
And admin/owner should receive the approval notification
Scenario: Approval request expires
Given a pending approval was created 8 days ago for PO "PO-2026-00204"
When the expiration job runs
Then the approval status should change to "EXPIRED"
And the PO status should remain "DRAFT"
And buyer "Sarah" should be notified of the expiration
And "Sarah" should be able to resubmit the PO for approval
Scenario: Requester cannot approve own PO
Given buyer "Sarah" also has Manager role
When "Sarah" creates and submits PO "PO-2026-00205" ($800.00)
Then "Sarah" should not be able to approve her own PO
And the system should show "Cannot approve your own purchase order"
And a different manager must approve
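The threshold routing and self-approval rule above reduce to two small checks (a sketch; function names are illustrative):

```python
def required_approver(po_total: float) -> str:
    """Map a PO total to the required approval level, per the
    Background table: < $500 auto, >= $500 manager, >= $5000 admin."""
    if po_total >= 5000:
        return "ADMIN"
    if po_total >= 500:
        return "MANAGER"
    return "AUTO_APPROVED"

def can_approve(approver: str, requester: str) -> bool:
    """Requesters may never approve their own POs, even with the role."""
    return approver != requester

print(required_approver(350.00))      # AUTO_APPROVED
print(required_approver(1200.00))     # MANAGER
print(required_approver(7500.00))     # ADMIN
print(can_approve("sarah", "sarah"))  # False: self-approval blocked
```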
Feature: Transfer Auto-Suggestion
As an HQ Manager
I need the system to detect inventory imbalances and suggest transfers
So that stock is distributed optimally across stores
Background:
Given the auto-suggest imbalance threshold is 14 days of supply
And minimum transfer quantity is 2 units
And minimum source qty after transfer is 2 units
And the following inventory state exists:
| Product | Store A (qty/velocity) | Store B (qty/velocity) | Store C (qty/velocity) |
| Classic Tee Navy | 30 / 1.0 per day | 2 / 1.5 per day | 15 / 0.5 per day |
And days of supply:
| Product | Store A | Store B | Store C |
| Classic Tee Navy | 30 days | 1.3 days | 30 days |
Scenario: Imbalance detected and suggestion generated
When the daily auto-suggest job runs
Then a transfer suggestion should be generated
And the suggestion should recommend moving stock from "Store A" to "Store B"
And the suggested qty should equalize days of supply across stores
And the suggestion should not reduce "Store A" below minimum qty (2 units)
Scenario: Manager reviews and approves suggestion
Given a transfer suggestion exists: 10 units of "Classic Tee Navy" from "Store A" to "Store B"
When HQ manager "Alex" reviews the suggestion
And approves the transfer
Then a transfer record should be created in "APPROVED" status
And "Store A" staff should be notified to pick and ship 10 units
And the suggestion status should change to "ACCEPTED"
Scenario: Manager modifies suggestion before approval
Given a transfer suggestion exists: 10 units from "Store A" to "Store B"
When HQ manager "Alex" changes the quantity to 8 units
And approves the modified transfer
Then a transfer record should be created for 8 units
And the suggestion should be marked "ACCEPTED_MODIFIED"
Scenario: Manager rejects suggestion
Given a transfer suggestion exists: 10 units from "Store A" to "Store B"
When HQ manager "Alex" rejects the suggestion with reason "Seasonal event at Store A"
Then no transfer should be created
And the suggestion should be marked "REJECTED" with the reason
Scenario: No suggestion when imbalance below threshold
Given all stores have days of supply within 14 days of each other
When the daily auto-suggest job runs
Then no transfer suggestions should be generated
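The equalization logic can be sketched as below. Setting post-transfer days of supply equal, (Qs - q)/Vs = (Qd + q)/Vd, gives q = (src_days - dst_days) / (1/Vs + 1/Vd). Names and the rounding choice are illustrative assumptions, not the platform's actual algorithm:

```python
def suggest_transfer(source_qty: int, source_velocity: float,
                     dest_qty: int, dest_velocity: float,
                     imbalance_days: float = 14.0,
                     min_transfer: int = 2,
                     min_source_after: int = 2) -> int:
    """Suggested qty that moves two stores toward equal days of supply,
    or 0 when the imbalance is below threshold or constraints fail."""
    src_days = source_qty / source_velocity
    dst_days = dest_qty / dest_velocity
    if src_days - dst_days <= imbalance_days:
        return 0  # imbalance below threshold: no suggestion
    qty = int(round((src_days - dst_days) /
                    (1 / source_velocity + 1 / dest_velocity)))
    qty = min(qty, source_qty - min_source_after)  # keep minimum at source
    return qty if qty >= min_transfer else 0

# Store A: 30 units @ 1.0/day (30 days); Store B: 2 units @ 1.5/day (~1.3 days)
print(suggest_transfer(30, 1.0, 2, 1.5))  # 17 -> A: 13 days, B: ~12.7 days
```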
Feature: Offline Inventory Sync
As a POS system
I need to queue inventory changes offline and reconcile on reconnect
So that stores can continue operating during network outages
Background:
Given "Store A" has the following inventory:
| SKU | Available Qty |
| SKU-1001 | 10 |
| SKU-2005 | 5 |
And the conflict resolution strategy is "last_write_wins"
And maximum queue size is 500
Scenario: Queue sales offline and sync on reconnect
Given "Store A" loses network connectivity
And POS enters OFFLINE mode
When cashier sells 2 units of "SKU-1001" at 10:15 AM
And cashier sells 1 unit of "SKU-2005" at 10:30 AM
Then local cache should show:
| SKU | Available Qty |
| SKU-1001 | 8 |
| SKU-2005 | 4 |
And the offline queue should contain 2 entries
When network connectivity restores
Then POS should enter SYNCING mode
And all queued changes should upload in chronological order
And server should apply: SKU-1001 qty -2, SKU-2005 qty -1
And POS should enter ONLINE mode
And queue should be empty
Scenario: Conflict detected on reconnect (negative inventory)
Given "Store A" loses network connectivity
And while offline, Store A cashier sells 4 units of "SKU-2005" (local cache: 1)
And while offline, Store B sells 3 units of "SKU-2005" on the server (server qty: 2)
When "Store A" reconnects and syncs the offline sale of 4 units
Then server calculates: server qty (2) - offline change (4) = -2
And a conflict should be flagged:
| Field | Value |
| product | SKU-2005 |
| offline_change | -4 |
| server_qty_at_sync | 2 |
| resulting_qty | -2 |
| conflict_type | NEGATIVE_INVENTORY |
And POS should enter CONFLICT_REVIEW mode
And manager should be notified of the conflict
Scenario: Manager resolves negative inventory conflict
Given a conflict exists for "SKU-2005" with resulting qty -2
When manager "Mike" reviews the conflict
And accepts the negative balance with note "Schedule recount"
Then the adjustment should be applied (qty = -2)
And a recount should be scheduled for "SKU-2005" at "Store A"
And the conflict status should change to "ACCEPTED"
And POS should return to ONLINE mode
Scenario: Blocked operations show clear messages
Given "Store A" is in OFFLINE mode
When staff attempts to check inventory at other stores
Then the system should display "Multi-store lookup unavailable offline. Check local stock only."
When staff attempts to create a transfer request
Then the system should display "Transfers unavailable offline. Queue request when online."
When staff attempts to receive a PO
Then the system should display "Receiving unavailable offline. Wait for connectivity."
Scenario: Queue capacity warning
Given "Store A" is in OFFLINE mode
And the offline queue has 450 of 500 entries (90%)
Then POS should display warning "Offline queue nearly full. Reconnect soon."
When the queue reaches 500 entries
Then POS should block further inventory-modifying operations
And display "Offline queue full. Cannot process more transactions until reconnected."
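On reconnect, each queued change is applied against the current server quantity; a result that would go negative is flagged for review rather than silently applied. A sketch of that reconciliation step (hypothetical names):

```python
def apply_offline_change(server_qty: int, offline_change: int) -> dict:
    """Reconcile one queued offline change against the server quantity.
    Negative results produce a NEGATIVE_INVENTORY conflict for manager
    review instead of being applied automatically."""
    resulting = server_qty + offline_change
    if resulting < 0:
        return {
            "conflict_type": "NEGATIVE_INVENTORY",
            "offline_change": offline_change,
            "server_qty_at_sync": server_qty,
            "resulting_qty": resulting,
        }
    return {"conflict_type": None, "resulting_qty": resulting}

# Store B sold the server down to 2 while Store A queued an offline sale of 4:
print(apply_offline_change(2, -4))   # NEGATIVE_INVENTORY, resulting_qty -2
print(apply_offline_change(10, -2))  # clean apply, resulting_qty 8
```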
Feature: Dead Stock Detection
As an Inventory Manager
I need to identify products with zero sales velocity
So that dead stock can be addressed through markdowns, transfers, or vendor returns
Background:
Given the dead stock threshold is 90 days
And the following inventory exists at "Store A":
| SKU | Product | Qty | Last Sale Date |
| SKU-3001 | Printed Scarf | 15 | 2025-10-01 |
| SKU-3002 | Wool Beanie | 8 | 2026-01-15 |
| SKU-3003 | Linen Blazer | 3 | 2025-08-20 |
And today's date is 2026-02-04
Scenario: Product flagged as dead stock after 90 days
When the daily dead stock detection job runs
Then "SKU-3001" should be flagged as dead stock (126 days since last sale)
And "SKU-3003" should be flagged as dead stock (168 days since last sale)
And "SKU-3002" should NOT be flagged (20 days since last sale)
Scenario: Dead stock alert created
When "SKU-3001" is flagged as dead stock
Then an AGING_INVENTORY alert should be created with severity "WARNING"
And the alert data snapshot should include:
| Field | Value |
| available_qty | 15 |
| last_sale_date | 2025-10-01 |
| days_since_sale | 126 |
| value_at_wac | calculated |
Scenario: Dead stock report generated
When manager requests the Dead Stock Report for "Store A"
Then the report should include "SKU-3001" and "SKU-3003"
And the report should NOT include "SKU-3002"
And each row should show: SKU, product name, qty, WAC value, last sale date, days since sale
And the report should be exportable to CSV and PDF
Scenario: Dead stock auto-resolves when sale occurs
Given "SKU-3001" has an active AGING_INVENTORY alert
When a customer purchases 1 unit of "SKU-3001" at "Store A"
Then the AGING_INVENTORY alert for "SKU-3001" should auto-resolve
And the resolved_at timestamp should be set
And auto_resolved should be true
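The detection rule (qty > 0 and no sale within the threshold) is a one-line predicate; a sketch using the dates from the Background (function name is illustrative):

```python
from datetime import date

def is_dead_stock(qty: int, last_sale: date, today: date,
                  threshold_days: int = 90) -> bool:
    """Flag items with stock on hand and zero sales velocity for the
    configured period (default 90 days)."""
    return qty > 0 and (today - last_sale).days >= threshold_days

today = date(2026, 2, 4)
print(is_dead_stock(15, date(2025, 10, 1), today))  # True:  126 days
print(is_dead_stock(8, date(2026, 1, 15), today))   # False: 20 days
print(is_dead_stock(3, date(2025, 8, 20), today))   # True:  168 days
```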
Feature: Online Order Fulfillment
As an Online Operations Manager
I need online orders to be assigned to the nearest store and fulfilled
So that customers receive orders quickly and shipping costs are minimized
Background:
Given the following stores with locations:
| Store | City | Lat | Lng | SKU-1001 Qty |
| Store A | Richmond, VA | 37.54 | -77.44 | 10 |
| Store B | Norfolk, VA | 36.85 | -76.29 | 0 |
| Store C | Virginia Beach | 36.85 | -75.98 | 5 |
| HQ | Glen Allen, VA | 37.66 | -77.51 | 25 |
And store assignment strategy is "nearest"
And split fulfillment is disabled
Scenario: Nearest store with stock is selected
Given a customer in "Chesapeake, VA" (lat 36.77, lng -76.29) places an online order for 2 units of "SKU-1001"
When the store assignment algorithm runs
Then "Store C" should be selected (nearest with stock, ~22 miles)
And 2 units of "SKU-1001" should be hard-reserved at "Store C"
And the order should appear on "Store C" fulfillment queue
Scenario: Nearest store has no stock, next nearest selected
Given "Store C" has 0 units of "SKU-1001" (sold out)
And a customer in "Chesapeake, VA" places an online order for 2 units of "SKU-1001"
When the store assignment algorithm runs
Then "Store A" should be selected (next nearest with stock)
And 2 units should be hard-reserved at "Store A"
Scenario: No store has stock, HQ fulfills when not excluded
Given all stores have 0 units of "SKU-1001"
And HQ has 25 units of "SKU-1001"
And exclude_hq is false
When the store assignment algorithm runs
Then "HQ" should be selected for fulfillment
And 2 units should be hard-reserved at "HQ"
Scenario: Pick-pack-ship workflow completes
Given order "ORD-SHOP-9001" is assigned to "Store A"
And the order contains 2 units of "SKU-1001"
When staff starts picking
And scans 2 barcodes for "SKU-1001"
Then picking should be complete
When staff packs the order and enters weight
And enters carrier "UPS" with tracking "1Z999AA10123456784"
Then the order status should be "SHIPPED"
And a SALE movement should be logged for 2 units at "Store A"
And Shopify should be updated with fulfillment and tracking number
And available qty for "SKU-1001" at "Store A" should decrease by 2
Scenario: Inventory sync from Shopify online sale
Given "Store A" has 10 units of "SKU-1001" in POS
And Shopify shows 10 units for "Store A"
When a customer purchases 1 unit online (assigned to Store A)
Then Shopify webhook fires orders/create
And POS should decrement "SKU-1001" at "Store A" by 1
And POS should show 9 units available
And the movement type should be "ONLINE_SALE"
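The "nearest" assignment strategy can be sketched with a great-circle distance and a filtered minimum. The store list mirrors the Background table; names and the exclude_hq default are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance in miles between two lat/lng points."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2)
    return 3956 * 2 * asin(sqrt(a))

def assign_store(order_lat, order_lng, qty, stores, exclude_hq=True):
    """Pick the nearest store with sufficient stock; HQ is a candidate
    only when exclude_hq is false. Returns None when no candidate has
    stock (order falls back to manual handling)."""
    candidates = [s for s in stores
                  if s["qty"] >= qty and not (exclude_hq and s.get("is_hq"))]
    if not candidates:
        return None
    return min(candidates,
               key=lambda s: haversine_miles(order_lat, order_lng,
                                             s["lat"], s["lng"]))["name"]

stores = [
    {"name": "Store A", "lat": 37.54, "lng": -77.44, "qty": 10},
    {"name": "Store B", "lat": 36.85, "lng": -76.29, "qty": 0},
    {"name": "Store C", "lat": 36.85, "lng": -75.98, "qty": 5},
    {"name": "HQ", "lat": 37.66, "lng": -77.51, "qty": 25, "is_hq": True},
]
# Customer in Chesapeake, VA: Store B is nearest but has no stock.
print(assign_store(36.77, -76.29, 2, stores))  # Store C
```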
Feature: Overstock Vendor Return
As a Buyer
I need to return overstock items to vendors with restocking fee tracking
So that excess inventory can be managed and credits recovered
Background:
Given overstock returns are enabled
And the default restocking fee is 0%
And vendor "StyleCo" has a negotiated restocking fee of 15%
And the following overstock inventory exists:
| SKU | Product | Qty | Days of Supply | Vendor |
| SKU-4001 | Floral Dress S | 50 | 250 days | StyleCo |
| SKU-4002 | Floral Dress M | 35 | 175 days | StyleCo |
Scenario: Create overstock vendor return
When buyer "Sarah" creates an overstock RMA for vendor "StyleCo"
And adds the following return lines:
| SKU | Return Qty | Unit Cost |
| SKU-4001 | 30 | $18.00 |
| SKU-4002 | 20 | $18.00 |
Then an RMA should be created with type "OVERSTOCK"
And the RMA status should be "DRAFT"
And the gross return value should be $900.00
Scenario: Restocking fee applied on vendor approval
Given RMA "RMA-2026-00050" is submitted to vendor "StyleCo"
When vendor approves the return with 15% restocking fee
Then the restocking fee should be $135.00 (15% of $900.00)
And the net credit expected should be $765.00
And the RMA status should change to "VENDOR_APPROVED"
And the restocking fee should be recorded on each line:
| SKU | Gross Credit | Restocking Fee | Net Credit |
| SKU-4001 | $540.00 | $81.00 | $459.00 |
| SKU-4002 | $360.00 | $54.00 | $306.00 |
Scenario: Inventory decremented on shipment back to vendor
Given RMA "RMA-2026-00050" is vendor-approved
When warehouse ships the return items back to vendor
And enters carrier "FedEx" with tracking "794644790132"
Then the RMA status should change to "SHIPPED_BACK"
And a RETURN_TO_VENDOR movement should be logged:
| SKU | Qty Change | Location |
| SKU-4001 | -30 | HQ Warehouse |
| SKU-4002 | -20 | HQ Warehouse |
And inventory at "HQ Warehouse" should decrease accordingly
Scenario: Credit received and reconciled
Given RMA "RMA-2026-00050" was shipped back to vendor
When vendor issues credit of $765.00
And buyer records the credit received
Then the RMA status should change to "CREDIT_RECEIVED"
And credit reconciliation should show:
| Expected Credit | Received Credit | Variance |
| $765.00 | $765.00 | $0.00 |
And the RMA status should change to "CLOSED"
Scenario: Credit variance detected
Given RMA "RMA-2026-00050" was shipped back with expected credit $765.00
When vendor issues credit of $700.00 (less than expected)
And buyer records the credit received
Then credit reconciliation should show variance of -$65.00
And a reconciliation alert should be created
And buyer should investigate the discrepancy before closing the RMA
Feature: Inventory Adjustment Approval
As a Store Manager
I need inventory adjustments above a threshold to require approval
So that significant inventory changes are reviewed for accuracy
Background:
Given adjustment approval mode is "threshold"
And the approval threshold is 10 units or $100.00 value
And product "SKU-1001" has 25 units available at "Store A"
And WAC for "SKU-1001" is $12.50
Scenario: Small adjustment auto-applies
When staff "Jane" submits an adjustment for "SKU-1001" at "Store A"
And the adjustment is -3 units (value: $37.50) with reason "DAMAGED"
Then the adjustment should be applied immediately
And "SKU-1001" qty at "Store A" should change to 22
And a movement record should be created:
| Type | Qty | Reason | Staff |
| ADJUSTMENT | -3 | DAMAGED | Jane |
And no approval request should be created
Scenario: Large adjustment requires approval
When staff "Jane" submits an adjustment for "SKU-1001" at "Store A"
And the adjustment is -15 units (value: $187.50) with reason "THEFT"
Then the adjustment should be set to "PENDING_APPROVAL" status
And "SKU-1001" qty should remain at 25 until approved
And manager "Mike" should receive an approval notification
And the notification should include: product, qty change, value, reason, requester
Scenario: Manager approves the adjustment
Given a pending adjustment exists: "SKU-1001" at "Store A", -15 units, reason "THEFT"
When manager "Mike" reviews and approves the adjustment
Then "SKU-1001" qty at "Store A" should change to 10
And a movement record should be created with approver "Mike"
And the adjustment status should be "APPROVED"
And staff "Jane" should be notified of the approval
Scenario: Manager rejects the adjustment
Given a pending adjustment exists: "SKU-1001" at "Store A", -15 units, reason "THEFT"
When manager "Mike" rejects the adjustment with reason "Recount needed first"
Then "SKU-1001" qty should remain at 25
And the adjustment status should be "REJECTED"
And staff "Jane" should be notified with the rejection reason
Scenario: Reason code is required
When staff "Jane" submits an adjustment for "SKU-1001" without selecting a reason code
Then the system should display "Reason code is required for inventory adjustments"
And the adjustment should be blocked
Scenario: Custom reason code with required note
When staff "Jane" submits an adjustment with reason code "OTHER"
And does not provide a note
Then the system should display "A note is required for reason code 'Other'"
And the adjustment should be blocked
When "Jane" provides note "Found box behind shelf during cleaning"
And resubmits
Then the adjustment should proceed (subject to threshold rules)
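The "threshold" approval mode above compares both the unit count and the dollar value (at WAC); exceeding either routes the adjustment to approval. A sketch with illustrative names:

```python
def needs_approval(qty_change: int, wac: float,
                   qty_threshold: int = 10,
                   value_threshold: float = 100.00) -> bool:
    """True when an adjustment exceeds the unit OR dollar threshold
    and must be set to PENDING_APPROVAL instead of auto-applying."""
    value = abs(qty_change) * wac
    return abs(qty_change) > qty_threshold or value > value_threshold

print(needs_approval(-3, 12.50))   # False: 3 units / $37.50 auto-applies
print(needs_approval(-15, 12.50))  # True:  15 units / $187.50 needs approval
```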
5. Setup & Configuration Module
5.1 Overview & Scope
Module 5 centralizes all tenant-level system configuration for the POS platform. Every operational behavior in Modules 1 through 4 – how a sale is processed, how a customer is identified, how a product is priced, how inventory is tracked – is governed by configuration defined here. Module 5 is the control plane: it does not process transactions, manage catalog records, or move inventory. It defines the rules, structures, and parameters that those modules consume at runtime.
5.1.1 Executive Summary
A multi-tenant POS system serving five retail stores and one HQ warehouse requires a single, authoritative source for all system-wide configuration. Without centralized setup, configuration drifts across locations, roles are inconsistently enforced, and operational rules become embedded in application logic rather than tenant-controlled settings. Module 5 eliminates this by providing a structured configuration layer that every other module references.
The scope of Module 5 encompasses: system identity and branding, currency and localization, physical location definitions, user identity and role-based access, shift and scheduling configuration, register and terminal management, hardware peripherals, tax rules, receipt templates, payment processing integration, financial accounting codes, operational business rules, inter-store transfer policies, notification and alert routing, RFID hardware integration, system integrations with external platforms, data import/export tooling, audit logging configuration, and tenant onboarding workflows.
Design principle: Module 5 defines how things are configured, not how things operate day-to-day. For example, Module 5 defines that a location exists, its timezone, and its tax rate. Module 1 (Sales) uses that tax rate when calculating a transaction total. Module 5 defines that a user has the MANAGER role with permission to void transactions. Module 1 enforces that permission at the point of sale.
5.1.2 Module Dependencies
Module 5 is the foundational configuration layer consumed by all operational modules. It has no upstream module dependencies – it is configured directly by tenant administrators.
flowchart TD
M5["Module 5\nSetup & Configuration"]
M1["Module 1\nSales & POS"]
M2["Module 2\nCustomers"]
M3["Module 3\nCatalog"]
M4["Module 4\nInventory"]
M5 -->|Users, roles, registers,\ntax rules, payment config,\nreceipt templates, clock-in/out config| M1
M5 -->|Users, roles,\nlocation assignments,\ncustomer data policies| M2
M5 -->|Users, roles,\nbarcode config,\nvendor settings,\nlabel printers| M3
M5 -->|Users, roles,\nlocations,\ntransfer rules,\nRFID config| M4
style M5 fill:#7b2d8e,stroke:#5a1d6e,color:#fff
style M1 fill:#264653,stroke:#1d3557,color:#fff
style M2 fill:#264653,stroke:#1d3557,color:#fff
style M3 fill:#264653,stroke:#1d3557,color:#fff
style M4 fill:#264653,stroke:#1d3557,color:#fff
Downstream consumers (Module 5 provides):
| Consumer Module | Configuration Provided | Purpose |
|---|---|---|
| Module 1 (Sales) | Registers, profiles, tax rates, payment processors, receipt templates, user roles, clock-in/out configuration, cash drawer settings, discount limits | Controls POS terminal behavior, payment routing, receipt output, and staff permissions during sales. |
| Module 2 (Customers) | User roles, data retention policies, communication preferences defaults, location assignments | Governs who can view/edit customer data, default privacy settings, and location-scoped customer association. |
| Module 3 (Catalog) | Barcode format, label printer config, vendor registry, user roles, approval thresholds, Shopify integration settings | Controls barcode generation, label printing, vendor management permissions, and external catalog sync. |
| Module 4 (Inventory) | Locations, transfer rules, RFID reader config, user roles, reorder thresholds, count policies | Defines physical topology, transfer approval rules, and counting schedules. |
5.1.3 Functional Scope
The following table enumerates all functional areas covered by Module 5 and their section references.
| # | Section | Domain | Description |
|---|---|---|---|
| 5.1 | Overview & Scope | Foundation | Module purpose, dependencies, and section index |
| 5.2 | System Settings & Branding | Identity | Tenant identity, operational defaults, and visual branding |
| 5.3 | Multi-Currency Configuration | Localization | Currency definitions, exchange rates, and display formatting |
| 5.4 | Locations | Topology | Physical locations and location type definitions |
| 5.5 | Users & Roles | Access Control | User profiles, role definitions, and feature toggle matrix |
| 5.6 | Time Tracking (Clock-In / Clock-Out) | Time Tracking | Simple clock-in/clock-out time recording for payroll |
| 5.7 | Registers & Terminals | Hardware | Register registry, device pairing, profiles, and peripheral assignments |
| 5.8 | Printer Configuration | Hardware | Printer registry, driver settings, and printer-location assignments |
| 5.9 | Tax Configuration | Financial | Tax rates, tax classes, and location-level tax rules |
| 5.10 | Notification & Alert Rules | Communication | Alert routing, escalation paths, and notification channel preferences |
| 5.11 | Payment Processing | Financial | Payment processor integration, terminal pairing, and gateway configuration |
| 5.12 | Accounting & GL Mapping | Financial | Chart of accounts, GL code assignments, and financial period definitions |
| 5.13 | Operational Business Rules | Rules Engine | Configurable thresholds, approval limits, and policy toggles |
| 5.14 | Receipt Templates | Output | Receipt layout configuration, template variables, and format options |
| 5.15 | Transfer & Logistics Rules | Operations | Transfer approval thresholds, routing rules, and carrier configuration |
| 5.16 | RFID Configuration | Hardware | Reader registration, antenna settings, and scan session parameters |
| 5.17 | System Integrations | External | Shopify, QuickBooks, and third-party API connection management |
| 5.18 | Data Import / Export | Data | Bulk data import templates, export scheduling, and format configuration |
| 5.19 | Audit & Compliance | Governance | Audit log retention, compliance settings, and data purge policies |
| 5.20 | Tenant Onboarding | Lifecycle | Initial setup wizard, seed data provisioning, and go-live checklist |
| 5.21 | User Stories | Acceptance | Gherkin-format acceptance criteria for all Module 5 functionality |
5.2 System Settings & Branding
Scope: Tenant-level identity, operational defaults, and visual branding configuration. These settings establish the foundational parameters that all other modules reference – the tenant’s name, timezone, date formatting, session policies, and customer-facing visual identity. Settings are organized into three categories: Core (identity and localization), Operational (runtime behavior), and Branding (visual presentation).
5.2.1 Core Settings
Core settings define the tenant’s identity and localization defaults. These values are established during onboarding (Section 5.20) and rarely change after initial configuration.
| Setting | Key | Type | Required | Default | Description |
|---|---|---|---|---|---|
| Tenant Name | tenant_name | String(100) | Yes | – | Trading name displayed in UI headers and reports |
| Legal Entity Name | legal_entity_name | String(200) | Yes | – | Registered business name for invoices and legal documents |
| Company Logo | company_logo_url | String(500) | No | System default | URL to uploaded logo image (PNG/SVG, max 2MB, min 200x200px) |
| Default Timezone | default_timezone | String(50) | Yes | America/New_York | IANA timezone identifier; applies to all locations unless overridden at location level |
| Default Currency | default_currency | String(3) | Yes | USD | ISO 4217 currency code; set at onboarding, immutable after first transaction |
| Date Format | date_format | Enum | Yes | MM/DD/YYYY | Display format: MM/DD/YYYY or DD/MM/YYYY; tenant-wide preference |
| Time Format | time_format | Enum | Yes | 12h | Display format: 12h (3:00 PM) or 24h (15:00); tenant-wide preference |
| Fiscal Year Start | fiscal_year_start_month | Integer(1-12) | Yes | 1 (January) | Month number when the fiscal year begins; affects financial reporting periods |
5.2.2 Operational Settings
Operational settings control runtime behavior across the POS system. These are actively tuned by administrators as business needs evolve.
| Setting | Key | Type | Required | Default | Description |
|---|---|---|---|---|---|
| Auto-Logout Timeout | auto_logout_minutes | Integer | Yes | 30 | Minutes of inactivity before automatic POS session logout (range: 5-120) |
| Max Session Duration | max_session_hours | Integer | Yes | 8 | Maximum hours a POS session can remain active before forced re-authentication (range: 1-24) |
| Barcode Format | barcode_format | Enum | Yes | CODE128 | Preferred barcode symbology for system-generated barcodes: CODE128, EAN13, UPCA |
| Default Print Mode | default_print_mode | Enum | Yes | THERMAL | Default receipt output: THERMAL (80mm roll), A4 (full page), EMAIL_ONLY |
| Failed Login Lockout | failed_login_max | Integer | Yes | 5 | Number of consecutive failed PIN/password attempts before temporary lockout |
| Lockout Duration | lockout_duration_minutes | Integer | Yes | 15 | Minutes a user account is locked after exceeding failed login threshold |
Cross-Reference: See Module 5, Section 5.14 for receipt template configuration including default printer assignment per location.
5.2.3 Business Hours & Holiday Calendar
Business hours are configured per location, supporting different schedules per day of week. Holidays override normal business hours for specific dates.
Business Hours Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
location_id | UUID | Yes | – | FK to locations table |
day_of_week | Integer(0-6) | Yes | – | 0 = Sunday, 1 = Monday, …, 6 = Saturday |
open_time | Time | No | 09:00 | Store opening time (null = closed this day) |
close_time | Time | No | 21:00 | Store closing time (null = closed this day) |
is_closed | Boolean | Yes | false | Overrides open/close times; true = location closed this day |
Holiday Calendar Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
holiday_date | Date | Yes | – | Calendar date of the holiday |
name | String(100) | Yes | – | Holiday name (e.g., “Thanksgiving”, “Independence Day”) |
applies_to | Enum | Yes | ALL | ALL (all locations), STORES_ONLY, SPECIFIC (use junction table) |
is_closed | Boolean | Yes | true | Whether locations are closed on this date |
modified_open_time | Time | No | null | Override open time if not fully closed (e.g., holiday shortened hours) |
modified_close_time | Time | No | null | Override close time if not fully closed |
is_recurring | Boolean | Yes | false | If true, repeats annually on the same month/day |
Business Rules:
- Holiday entries override normal business hours for matching dates.
- When `is_closed = true`, POS terminals at affected locations display a "Location Closed" banner and block new transactions.
- When modified hours are set, the system uses those hours instead of normal business hours for that date.
- Recurring holidays auto-generate entries for the current fiscal year during onboarding and can be manually adjusted per year.
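The override logic above (holiday entry wins over the weekly schedule) can be sketched as follows. This is a minimal illustration, assuming simplified in-memory record shapes whose field names mirror the business hours and holiday calendar models; it is not the platform's actual implementation.

```python
from dataclasses import dataclass
from datetime import date, time
from typing import Optional

# Hypothetical simplified row shapes mirroring the data models above.
@dataclass
class BusinessHours:
    day_of_week: int              # 0 = Sunday ... 6 = Saturday
    open_time: Optional[time]
    close_time: Optional[time]
    is_closed: bool

@dataclass
class Holiday:
    holiday_date: date
    is_closed: bool
    modified_open_time: Optional[time]
    modified_close_time: Optional[time]

def effective_hours(d: date, weekly: list[BusinessHours], holidays: list[Holiday]):
    """Return (open, close) for date d, or None if the location is closed."""
    # A holiday entry overrides the normal weekly schedule for matching dates.
    holiday = next((h for h in holidays if h.holiday_date == d), None)
    if holiday:
        if holiday.is_closed:
            return None  # POS shows the "Location Closed" banner
        return (holiday.modified_open_time, holiday.modified_close_time)
    # Python's weekday() uses Monday=0; the model uses Sunday=0, so shift.
    dow = (d.weekday() + 1) % 7
    day = next((b for b in weekly if b.day_of_week == dow), None)
    if day is None or day.is_closed or day.open_time is None:
        return None
    return (day.open_time, day.close_time)
```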
5.2.4 Branding Settings
Branding settings control the visual presentation of the POS system, customer-facing displays, printed receipts, and exported reports.
| Setting | Key | Type | Required | Default | Description |
|---|---|---|---|---|---|
| Primary Color | brand_primary_color | String(7) | Yes | #1A1A2E | Hex color code for primary UI elements (headers, buttons, navigation) |
| Accent Color | brand_accent_color | String(7) | Yes | #E94560 | Hex color code for accent elements (highlights, active states, badges) |
| Login Background | login_bg_image_url | String(500) | No | System default | URL to custom login page background image (JPG/PNG, max 5MB, 1920x1080 recommended) |
| Login Tagline | login_tagline | String(200) | No | null | Custom text displayed below the company logo on the login screen |
| Receipt Logo | receipt_logo_url | String(500) | No | company_logo_url | Logo printed/displayed on receipts; falls back to company logo if not set |
| Report Header Logo | report_header_logo_url | String(500) | No | company_logo_url | Logo displayed in the header of printed and exported reports |
| Report Header Address | report_header_address | Text | No | null | Company address block printed on report headers |
| Customer Display Welcome | customer_display_welcome | String(200) | No | "Welcome!" | Welcome message shown on customer-facing displays |
| Customer Display Promo Images | customer_display_promo_urls | JSONB | No | [] | Array of image URLs for rotating promotional display on customer-facing screens |
Cross-Reference: See Module 5, Section 5.14 for receipt-specific branding (receipt logo placement, footer text, color printing support).
5.2.5 System Settings Data Model
All settings are stored in a key-value table with JSONB values, enabling flexible schema evolution without database migrations.
system_settings Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
setting_key | String(100) | Yes | – | Unique setting identifier within tenant (e.g., tenant_name, auto_logout_minutes) |
setting_value | JSONB | Yes | – | Setting value; JSONB supports strings, numbers, booleans, arrays, and objects |
category | Enum | Yes | – | CORE, OPERATIONAL, BRANDING |
updated_by | UUID | Yes | – | FK to users table; last user who modified this setting |
updated_at | DateTime | Yes | Auto | Timestamp of last modification |
Unique constraint: (tenant_id, setting_key)
location_settings Table (Per-Location Overrides)
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
location_id | UUID | Yes | – | FK to locations table |
setting_key | String(100) | Yes | – | Setting identifier being overridden at location level |
setting_value | JSONB | Yes | – | Location-specific override value |
updated_by | UUID | Yes | – | FK to users table |
updated_at | DateTime | Yes | Auto | Timestamp of last modification |
Unique constraint: (tenant_id, location_id, setting_key)
Resolution order: Location-level setting overrides tenant-level setting. If no location override exists, the tenant-level value is used.
Overridable settings: Not all settings support location-level override. The following settings are overridable per location: default_timezone, auto_logout_minutes, max_session_hours, default_print_mode, barcode_format.
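The resolution order can be expressed as a small lookup helper. This is a sketch against hypothetical in-memory views of the two tables; in practice the values would be queried by `(tenant_id, setting_key)` and `(tenant_id, location_id, setting_key)`.

```python
# Only these keys support location-level override (per Section 5.2.5).
OVERRIDABLE = {"default_timezone", "auto_logout_minutes", "max_session_hours",
               "default_print_mode", "barcode_format"}

def resolve_setting(key, tenant_settings, location_settings, location_id):
    """Location-level override wins; otherwise fall back to the tenant value.

    tenant_settings: dict of setting_key -> value (system_settings view)
    location_settings: dict of (location_id, setting_key) -> value
    """
    if key in OVERRIDABLE:
        override = location_settings.get((location_id, key))
        if override is not None:
            return override
    return tenant_settings[key]
```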
5.3 Multi-Currency Configuration
Scope: Defining the base currency and optional additional currencies for vendor purchase order support. All POS sales transactions and financial reports operate exclusively in the tenant’s base currency. Multi-currency support exists solely to enable purchase orders denominated in a vendor’s native currency, with manual exchange rate management and date-stamped rate history.
5.3.1 Currency Model
Each tenant has exactly one base currency, established at onboarding and immutable after the first transaction is recorded. Additional currencies are activated to support vendor procurement workflows where the vendor invoices in a foreign currency.
Currency Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
code | String(3) | Yes | – | ISO 4217 currency code (e.g., USD, EUR, GBP, CAD) |
name | String(50) | Yes | – | Display name (e.g., “US Dollar”, “Euro”, “British Pound”) |
symbol | String(5) | Yes | – | Currency symbol (e.g., $, €, £) |
decimal_places | Integer | Yes | 2 | Number of decimal places for amounts (0-4) |
symbol_position | Enum | Yes | BEFORE | Symbol placement: BEFORE ($100.00) or AFTER (100.00€) |
thousands_separator | String(1) | Yes | , | Thousands grouping character: , or . or (space) |
decimal_separator | String(1) | Yes | . | Decimal point character: . or , |
is_base | Boolean | Yes | false | Whether this is the tenant’s base currency (exactly one per tenant) |
is_active | Boolean | Yes | true | Whether this currency is available for selection on new POs |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, code)
Business Rules:
- Exactly one currency per tenant must have `is_base = true`. This constraint is enforced at the application level.
- The base currency cannot be deactivated (`is_active` cannot be set to `false` for the base currency).
- The base currency code cannot be changed after the first transaction is recorded in the system.
- Deactivating a non-base currency prevents it from being selected on new POs but does not affect existing POs already denominated in that currency.
5.3.2 Exchange Rates
Exchange rates are manually entered by an administrator. The system does not integrate with external rate feeds or auto-update rates. Each rate entry is date-stamped; the system uses the most recent rate on or before the PO date when converting vendor currency amounts to the base currency.
Exchange Rate Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
from_currency_id | UUID | Yes | – | FK to currencies table; the source currency |
to_currency_id | UUID | Yes | – | FK to currencies table; the target currency (typically the base currency) |
rate | Decimal(12,6) | Yes | – | Exchange rate: 1 unit of from_currency = rate units of to_currency |
effective_date | Date | Yes | – | Date from which this rate is effective |
created_by | UUID | Yes | – | FK to users table; administrator who entered the rate |
created_at | DateTime | Yes | Auto | Record creation timestamp |
Unique constraint: (tenant_id, from_currency_id, to_currency_id, effective_date)
Rate Resolution Logic:
- When a PO is created or received in a foreign currency, the system looks up the most recent `exchange_rate` entry where `effective_date <= PO date`.
- If no rate exists for the currency pair, the system blocks the PO with an error: "No exchange rate found for [CURRENCY] as of [DATE]. Enter a rate in Setup > Currencies."
- Exchange rate changes do NOT retroactively affect existing POs. The rate is captured and stored on the PO at creation time.
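The rate resolution logic above can be sketched as a single lookup. The `rates` list is a hypothetical stand-in for rows of the `exchange_rates` table; this is an illustration of the "most recent rate on or before the PO date" rule, not the platform's implementation.

```python
from datetime import date
from decimal import Decimal

def resolve_rate(rates, from_code, to_code, po_date):
    """Return the most recent rate with effective_date <= po_date.

    rates: list of (from_code, to_code, effective_date, rate) tuples.
    Raises ValueError with the spec's error wording if no rate applies.
    """
    candidates = [r for r in rates
                  if r[0] == from_code and r[1] == to_code and r[2] <= po_date]
    if not candidates:
        raise ValueError(
            f"No exchange rate found for {from_code} as of {po_date}. "
            "Enter a rate in Setup > Currencies.")
    # The latest effective date on or before the PO date wins.
    return max(candidates, key=lambda r: r[2])[3]
```

Because the resolved rate is captured on the PO at creation time, later rate entries never change an existing PO's costing.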
5.3.3 Currency Use Cases
| Use Case | Currency Behavior | Module Reference |
|---|---|---|
| Vendor Purchase Orders | PO can be denominated in vendor’s currency; line totals display in both vendor currency and base currency | Module 4, Section 4.3 |
| PO Receiving / Landed Cost | Received goods are costed in base currency using the exchange rate captured at PO creation | Module 4, Section 4.4 |
| Sales Transactions | Always in the tenant's base currency. No foreign currency tender support. | Module 1, Section 1.3 |
| Financial Reports | Always in base currency | Module 5, Section 5.12 |
| Customer Accounts / Credit | Always in base currency | Module 2 |
5.3.4 Currency Display Format Examples
| Currency | Format | Example |
|---|---|---|
| USD (default) | $1,234.56 | Symbol before, comma thousands, period decimal |
| EUR | 1.234,56 € | Symbol after, period thousands, comma decimal |
| GBP | £1,234.56 | Symbol before, comma thousands, period decimal |
| JPY | ¥1,234 | Symbol before, comma thousands, zero decimal places |
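The display examples above follow directly from the currency model's formatting fields. A minimal formatter sketch (parameter names mirror the currency table columns; this is illustrative, not the platform's renderer):

```python
def format_amount(amount, symbol, decimal_places=2, symbol_position="BEFORE",
                  thousands_separator=",", decimal_separator="."):
    """Format an amount per the currency display fields in Section 5.3.1."""
    formatted = f"{amount:,.{decimal_places}f}"   # e.g. "1,234.56"
    # Swap the default US separators for the configured ones, using a
    # placeholder so the two replacements don't collide.
    formatted = (formatted.replace(",", "\0")
                          .replace(".", decimal_separator)
                          .replace("\0", thousands_separator))
    if symbol_position == "BEFORE":
        return f"{symbol}{formatted}"
    return f"{formatted} {symbol}"
```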
5.4 Locations
Scope: Defining the physical topology of the tenant’s retail operation – store locations and warehouse facilities. Locations are the organizational unit for inventory, staffing, registers, and reporting. Every inventory balance, every register, every user assignment, and every transaction is scoped to a location.
5.4.1 Location Types
Two location types are supported. The type determines which operational capabilities are available at the location.
| Type | Code | Description | Capabilities |
|---|---|---|---|
| Store | STORE | Retail store location; customer-facing with POS registers and staff | Sales, returns, exchanges, customer service, inventory counts, receiving, transfers (send and receive) |
| Warehouse | WAREHOUSE | Distribution or HQ facility; receives vendor shipments, distributes to stores | Receiving, transfers (send only to stores), inventory counts, purchase order management. No customer-facing POS. |
Business Rules:
- A warehouse location cannot have registers assigned (no POS capability).
- A warehouse location does not receive inbound transfers from stores. Transfer direction is one-way: Warehouse -> Stores, and bidirectional between Stores (Store <-> Store).
- A tenant must have at least one `STORE` location to process sales.
- The `HQ` warehouse receives vendor shipments and distributes stock to retail stores.
Cross-Reference: The HQ-as-warehouse pattern is documented in the SalesSight inventory analysis methodology. HQ is the distribution hub, not a retail location. Online orders displayed as “HQ sales” are fulfilled from physical stores.
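The one-way transfer direction rule can be captured in a small guard. This is a sketch under the assumption that location types are passed as the `STORE`/`WAREHOUSE` enum codes; the function name is illustrative.

```python
def transfer_allowed(source_type: str, dest_type: str) -> bool:
    """Validate transfer direction per the location business rules:
    Warehouse -> Store and Store <-> Store are permitted; warehouses
    never receive inbound transfers."""
    if dest_type == "WAREHOUSE":
        return False  # warehouses do not receive inbound transfers
    return source_type in ("WAREHOUSE", "STORE") and dest_type == "STORE"
```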
5.4.2 Location Data Model
locations Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
code | String(10) | Yes | – | Unique short code within tenant (e.g., GM, HM, LM, NM, HQ) |
name | String(100) | Yes | – | Display name (e.g., “Garden Mall”, “Heritage Mall”, “Headquarters”) |
type | Enum | Yes | – | STORE or WAREHOUSE |
address_line_1 | String(200) | Yes | – | Street address |
address_line_2 | String(200) | No | null | Suite, unit, floor |
city | String(100) | Yes | – | City |
state | String(50) | Yes | – | State or province |
zip | String(20) | Yes | – | Postal / ZIP code |
country | String(2) | Yes | US | ISO 3166-1 alpha-2 country code |
phone | String(20) | No | null | Location phone number |
email | String(200) | No | null | Location email address |
timezone | String(50) | Yes | Tenant default | IANA timezone identifier; overrides tenant default timezone |
tax_jurisdiction_id | UUID | Yes | – | FK to tax_jurisdictions table; defines the compound tax jurisdiction for this location. See Section 5.9 for tax jurisdiction and rate configuration. |
is_active | Boolean | Yes | true | Whether this location is operational |
is_franchise | Boolean | Yes | false | Indicates whether this location operates as a franchise (true) or is company-owned (false). Franchise locations may have different operational rules, reporting requirements, and fee structures. |
sort_order | Integer | Yes | 0 | Display ordering in dropdowns and reports |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, code)
Cross-Reference: See Module 5, Section 5.9 for tax jurisdiction and compound rate configuration. Each location references a `tax_jurisdictions` record via `tax_jurisdiction_id`, which defines the compound tax rates (State + County + City) applied at that location. See Section 5.9.1 for the `tax_jurisdictions` and `tax_rates` tables.
5.5 Users & Roles
Scope: Defining user profiles, authentication credentials, role-based access control, location assignments, and the feature toggle matrix that governs what each role can and cannot do within the system. Users are the human operators of the POS platform; roles determine their permissions. Every action in the system is attributable to a specific user, and every capability is gated by that user’s assigned role and feature toggles.
5.5.1 User Profile
Each user represents a staff member, manager, or administrator who interacts with the POS system. Users authenticate via email/password for the admin portal and via PIN for the POS terminal.
users Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
email | String(200) | Yes | – | Login email; unique per tenant |
password_hash | String(255) | Yes | – | Bcrypt-hashed password for admin portal login |
display_name | String(100) | Yes | – | Full name displayed in UI, receipts, and reports |
pin | String(6) | Yes | – | 4-6 digit numeric PIN for POS terminal login; stored as bcrypt hash |
role_id | UUID | Yes | – | FK to roles table |
employee_id | String(20) | No | null | Tenant-assigned employee identifier (e.g., badge number, payroll ID) |
commission_rate_percent | Decimal(5,2) | No | null | Default commission rate for this user (e.g., 5.00 for 5%). Null = no commission. |
default_register_id | UUID | No | null | FK to registers table; preferred register for auto-assignment at login |
hire_date | Date | No | null | Employee hire date for reporting and tenure tracking |
phone | String(20) | No | null | Contact phone number |
avatar_url | String(500) | No | null | URL to user avatar image for POS display |
is_active | Boolean | Yes | true | Whether user can log in; deactivated users cannot authenticate |
last_login_at | DateTime | No | null | Timestamp of most recent successful login |
failed_login_count | Integer | Yes | 0 | Consecutive failed login attempts; resets on successful login |
locked_until | DateTime | No | null | If set, user is locked out until this timestamp |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, email), (tenant_id, pin)
Business Rules:
- PIN must be unique within a tenant. Two users at the same tenant cannot share a PIN.
- Deactivating a user (`is_active = false`) immediately invalidates all active sessions. The user cannot log in until reactivated.
- Deleting a user is not supported; users are deactivated. All historical transactions, audit entries, and reports retain the user reference.
- `commission_rate_percent` is the user's default rate. Module 1, Section 1.14 details how commission is calculated per transaction and how the rate can be overridden per sale.
Cross-Reference: See Module 1, Section 1.14 for commission calculation logic including split commissions and override rates.
5.5.2 User-Location Assignment
Users can be assigned to one or more locations. The primary location determines which location the POS terminal defaults to at login. Multi-location users (e.g., managers overseeing two stores) can switch locations within the UI.
user_locations Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
user_id | UUID | Yes | – | FK to users table |
location_id | UUID | Yes | – | FK to locations table |
is_primary | Boolean | Yes | false | Whether this is the user’s primary/default location (exactly one per user) |
created_at | DateTime | Yes | Auto | Record creation timestamp |
Unique constraint: (user_id, location_id)
Business Rules:
- Every active user must have at least one location assignment.
- Exactly one location assignment per user must have `is_primary = true`.
- Location assignments are informational and used for default location selection, reporting filters, and organizational grouping. They do not restrict transaction processing: any user can process transactions at any location within their tenant.
- Removing a user’s last location assignment automatically deactivates the user.
5.5.3 Predefined Roles
The system ships with five predefined roles. These cover the standard organizational structure of a multi-store retail operation. Roles are tenant-scoped: each tenant gets their own copy of the predefined roles at onboarding, which they can then customize via the feature toggle matrix.
roles Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
code | String(20) | Yes | – | Unique role code within tenant |
name | String(50) | Yes | – | Display name |
description | String(500) | No | null | Role description |
is_system | Boolean | Yes | true | System roles cannot be deleted (but can be customized via feature toggles) |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, code)
Predefined Role Definitions
| Role | Code | Description | Typical Users |
|---|---|---|---|
| Staff | STAFF | POS operator; processes sales, returns, exchanges, and basic inventory tasks | Sales associates, cashiers |
| Manager | MANAGER | Store manager; approves adjustments, refunds, and price overrides; accesses location-level reports | Store managers, assistant managers |
| Admin | ADMIN | System administrator; configures all Module 5 settings, manages users and locations | IT staff, operations director |
| Buyer | BUYER | Procurement specialist; creates and manages purchase orders, vendor relationships | Purchasing agents, buyers |
| Owner | OWNER | Full access; all permissions including financial reports, audit logs, and read-only access to all configuration | Business owner, CEO |
5.5.4 Feature Toggle Matrix
The feature toggle matrix provides granular control over which capabilities each role has access to. Toggles are configured at the tenant level per role – changing a toggle affects all users with that role across the tenant.
role_feature_toggles Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
role_id | UUID | Yes | – | FK to roles table |
feature_code | String(50) | Yes | – | Feature identifier (e.g., process_sale, void_transaction) |
is_enabled | Boolean | Yes | – | Whether this feature is enabled for this role |
updated_by | UUID | No | null | FK to users table; last user who modified this toggle |
updated_at | DateTime | Yes | Auto | Timestamp of last modification |
Unique constraint: (tenant_id, role_id, feature_code)
Default Feature Toggle Configuration
The following matrix defines the default state of each feature toggle per role. Administrators can override any toggle. Y indicates enabled by default; N indicates disabled by default. All toggles are mutable.
| Feature Code | Description | Staff | Manager | Admin | Buyer | Owner |
|---|---|---|---|---|---|---|
process_sale | Create and complete a sale transaction | Y | Y | Y | N | Y |
process_return | Process merchandise returns with refund | Y | Y | Y | N | Y |
process_exchange | Process merchandise exchanges | Y | Y | Y | N | Y |
apply_discount | Apply line-item or cart-level discounts | Y | Y | Y | N | Y |
void_transaction | Void a completed same-day transaction | N | Y | Y | N | Y |
process_layaway | Create and manage layaway transactions | Y | Y | Y | N | Y |
inventory_count | Participate in physical inventory counts | Y | Y | Y | N | N |
inventory_adjust | Create manual inventory adjustments | N | Y | Y | N | N |
create_transfer | Initiate inter-store inventory transfers | N | Y | Y | N | Y |
approve_transfer | Approve outbound transfer requests | N | Y | Y | N | Y |
create_po | Create vendor purchase orders | N | N | Y | Y | Y |
approve_po | Approve purchase orders for submission | N | Y | Y | N | Y |
receive_shipment | Process inbound receiving (PO or transfer) | Y | Y | Y | Y | N |
price_change | Modify product pricing in the catalog | N | Y | Y | N | Y |
view_reports | Access reporting dashboards and exports | N | Y | Y | Y | Y |
manage_users | Create, edit, deactivate user accounts | N | N | Y | N | Y |
manage_settings | Modify Module 5 configuration settings | N | N | Y | N | Y |
view_cost_data | View product cost, margin, and vendor pricing | N | Y | Y | Y | Y |
manage_customers | Create and edit customer profiles | Y | Y | Y | N | Y |
view_audit_log | Access system audit trail | N | N | Y | N | Y |
manage_gift_cards | Issue and adjust gift card balances | N | Y | Y | N | Y |
override_price | Override selling price at POS beyond discount limits | N | Y | Y | N | Y |
cash_drawer_operations | Open cash drawer, perform cash drops, reconcile drawer | Y | Y | Y | N | Y |
end_of_day | Execute end-of-day close procedures | N | Y | Y | N | Y |
Business Rules:
- Feature toggles are evaluated at runtime. Changing a toggle takes effect immediately for all active sessions of users with that role.
- The `OWNER` role cannot have `manage_settings` or `manage_users` toggled off; these are locked to `true` for the OWNER role to prevent lockout.
- Custom roles can be created by duplicating an existing role's toggle configuration and modifying it. Custom roles have `is_system = false`.
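The runtime evaluation rule, including the OWNER lockout protection, can be sketched as follows. The `toggles` mapping is a hypothetical in-memory view of `role_feature_toggles`; the function name is illustrative.

```python
# Toggles locked to True for the OWNER role to prevent administrative lockout.
OWNER_LOCKED = {"manage_settings", "manage_users"}

def is_feature_enabled(role_code: str, feature_code: str, toggles: dict) -> bool:
    """Evaluate a feature toggle at runtime.

    toggles maps (role_code, feature_code) -> bool. Missing entries
    default to disabled; OWNER-locked features are always enabled.
    """
    if role_code == "OWNER" and feature_code in OWNER_LOCKED:
        return True
    return toggles.get((role_code, feature_code), False)
```

Because the check reads current toggle state on every call, changing a toggle takes effect immediately for all active sessions, as the first business rule requires.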
5.5.5 User Authentication Flow
```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant POS as POS Terminal
    participant API as Backend
    participant DB as DB
    Note over U,DB: POS Terminal Login (PIN-based)
    U->>POS: Enter 4-6 Digit PIN
    POS->>API: POST /auth/pin-login {pin, register_id}
    API->>DB: Lookup user by PIN hash + tenant_id
    alt User Found & Active
        API->>DB: Check failed_login_count < max threshold
        alt Not Locked
            API->>DB: Reset failed_login_count = 0
            API->>DB: Update last_login_at
            API->>DB: Load role + feature toggles
            API-->>POS: Auth Token + User Profile + Permissions
            POS->>POS: Render POS UI with role-appropriate menus
        else Account Locked
            API-->>POS: "Account locked. Try again in X minutes."
        end
    else User Not Found or Inactive
        API->>DB: Increment failed_login_count (by register/IP)
        API-->>POS: "Invalid PIN"
    end
```
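The lockout branch of the flow above combines the `failed_login_max` and `lockout_duration_minutes` settings (Section 5.2.2) with the `failed_login_count` and `locked_until` user fields (Section 5.5.1). A minimal sketch of that check, assuming `user` is a dict mirroring the users table (function name and return shape are illustrative):

```python
from datetime import datetime, timedelta

def check_pin_login(user, now, failed_login_max=5, lockout_minutes=15):
    """Return (ok, message) for a PIN login attempt.

    On failure the caller would increment failed_login_count; on success
    it would reset the counter and update last_login_at.
    """
    if not user or not user["is_active"]:
        return False, "Invalid PIN"
    locked_until = user.get("locked_until")
    if locked_until and locked_until > now:
        minutes = int((locked_until - now).total_seconds() // 60) + 1
        return False, f"Account locked. Try again in {minutes} minutes."
    if user["failed_login_count"] >= failed_login_max:
        # Threshold exceeded: start the lockout window.
        user["locked_until"] = now + timedelta(minutes=lockout_minutes)
        return False, f"Account locked. Try again in {lockout_minutes} minutes."
    return True, "OK"
```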
Cross-Reference: See Module 5, Section 5.6 for clock-in/clock-out time tracking.
5.6 Time Tracking (Clock-In / Clock-Out)
Scope: Recording staff clock-in and clock-out times for basic time tracking and payroll reporting. The system provides a simple punch-clock model — staff clock in via the POS terminal using their PIN, and clock out at the end of their work period. This section does not implement shift scheduling, shift types, or workforce management.
5.6.1 Clock-In / Clock-Out
clock_records Table:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | FK to tenants table; owning tenant |
user_id | UUID | Yes | – | FK to users table; employee who clocked in/out |
location_id | UUID | Yes | – | FK to locations table; location where clock event occurred |
clock_in | DateTime | Yes | – | Timestamp when user clocked in |
clock_out | DateTime | No | null | Timestamp when user clocked out (null = currently clocked in) |
notes | Text | No | null | Optional notes (e.g., reason for late clock-out, manager override note) |
created_at | DateTime | Yes | Auto | Record creation timestamp |
Business Rules:
- A user cannot clock in if they are already clocked in at any location (must clock out first).
- Clock-out is required before end-of-day close procedures can complete at the location.
- If a user forgets to clock out, a manager can manually enter the clock-out time with an audit note in the `notes` field.
- Clock-in records are retained indefinitely for payroll and audit purposes.
- Maximum clock-in duration: 16 hours. If no clock-out is recorded within 16 hours, the system sends an alert to the location manager.
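Two of the rules above (single open punch per user, 16-hour maximum) can be sketched as guards over open `clock_records` rows. The record shape here is a hypothetical dict mirroring the table; function names are illustrative.

```python
from datetime import datetime, timedelta

MAX_SHIFT = timedelta(hours=16)  # maximum clock-in duration before alerting

def can_clock_in(open_records, user_id):
    """A user cannot clock in while already clocked in at any location."""
    return not any(r["user_id"] == user_id and r["clock_out"] is None
                   for r in open_records)

def overdue_clock_outs(open_records, now):
    """Open records past the 16-hour maximum; these trigger an alert
    to the location manager."""
    return [r for r in open_records
            if r["clock_out"] is None and now - r["clock_in"] > MAX_SHIFT]
```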
Cross-Reference: See Module 1, Section 1.8 for end-of-day cash drawer procedures that typically coincide with clock-out.
5.7 Registers & Terminals
Scope: Defining the register registry, device pairing, register profiles that control available functionality, and peripheral device assignments. A register is the logical unit representing a point-of-sale station at a location. Each register is paired with one or more physical devices and linked to peripherals (printers, scanners, payment terminals, cash drawers). Register profiles determine which POS functions are available on each terminal type.
5.7.1 Register Registry
Each location maintains a numbered set of registers. Registers are logical entities that persist across hardware replacements – when a device is swapped, the register retains its identity, transaction history, and peripheral assignments.
registers Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
location_id | UUID | Yes | – | FK to locations table |
register_number | String(20) | Yes | – | Unique register identifier within location (e.g., REG-001, REG-002) |
name | String(100) | No | null | Friendly name (e.g., “Main Counter”, “Back Register”, “Floor Mobile 1”) |
profile_id | UUID | Yes | – | FK to register_profiles table; determines available functions |
status | Enum | Yes | ACTIVE | ACTIVE, MAINTENANCE, RETIRED |
ip_address | String(45) | No | null | Network IP address of the physical device paired to this register. Supports IPv4 and IPv6. |
notes | Text | No | null | Administrative notes (e.g., “New iPad deployed 2026-01-15”) |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, location_id, register_number)
Business Rules:
- A register in MAINTENANCE status cannot accept new transactions. Active sessions are preserved but no new sales can be initiated.
- A register in RETIRED status is permanently decommissioned. It cannot be reactivated. Its transaction history is preserved.
- Registers cannot be deleted; only retired. This ensures audit trail integrity.
- Warehouse-type locations cannot have registers assigned.
- A register’s network IP address (ip_address) can be modified a maximum of 2 times within any rolling 365-day period. IP changes are automatically tracked in the register_ip_changes audit table. Before updating the IP address, the system queries: SELECT COUNT(*) FROM register_ip_changes WHERE register_id = :id AND changed_at >= NOW() - INTERVAL '365 days'. If COUNT >= 2, the update is rejected with error: [ERR-5071] IP address change limit reached. A register's IP address can only be changed 2 times per year. Contact the system owner for an override.
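The rolling-window check above can be sketched as follows (Python; the helper name and in-memory timestamp list are illustrative — the production check runs the SQL query shown against register_ip_changes):

```python
from datetime import datetime, timedelta

IP_CHANGE_LIMIT = 2          # max IP changes per rolling 365-day window
WINDOW = timedelta(days=365)

def can_change_ip(change_timestamps, now=None):
    """Return True if another IP change is allowed for this register.

    change_timestamps: changed_at values of prior register_ip_changes rows.
    Mirrors: SELECT COUNT(*) ... WHERE changed_at >= NOW() - INTERVAL '365 days'.
    """
    now = now or datetime.utcnow()
    recent = [t for t in change_timestamps if t >= now - WINDOW]
    return len(recent) < IP_CHANGE_LIMIT
```

When this returns False, the caller rejects the update with [ERR-5071] as specified above.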
register_ip_changes Table:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | FK to tenants table; owning tenant |
register_id | UUID | Yes | – | FK to registers table |
old_ip | String(45) | No | null | Previous IP address (null for first assignment) |
new_ip | String(45) | Yes | – | New IP address being assigned |
changed_by | UUID | Yes | – | FK to users table; user who made the change |
changed_at | DateTime | Yes | Auto | Timestamp of the IP change |
5.7.2 Register State Machine
stateDiagram-v2
[*] --> ACTIVE: Register Created
ACTIVE --> MAINTENANCE: take_offline
MAINTENANCE --> ACTIVE: bring_online
ACTIVE --> RETIRED: decommission
MAINTENANCE --> RETIRED: decommission
RETIRED --> [*]
note right of ACTIVE
Accepting transactions
Device paired and online
Peripherals connected
end note
note right of MAINTENANCE
Temporarily offline
No new transactions
Active sessions preserved
Hardware swap / repair
end note
note right of RETIRED
Permanently decommissioned
Cannot reactivate
History preserved
end note
State Transition Rules:
| Transition | From | To | Trigger | Authorization | Side Effects |
|---|---|---|---|---|---|
take_offline | ACTIVE | MAINTENANCE | Manual (admin/manager) | ADMIN or MANAGER role | Active sessions warned; no new sales |
bring_online | MAINTENANCE | ACTIVE | Manual (admin/manager) | ADMIN or MANAGER role | Register available for transactions |
decommission | ACTIVE or MAINTENANCE | RETIRED | Manual (owner only) | OWNER role only | Register permanently disabled; device pairing cleared; requires type-to-confirm (see below) |
Register Retirement Safety: Register retirement (decommission) is restricted to the OWNER role only. When the owner initiates retirement, the system displays a confirmation dialog with the following warning:
‘This action permanently retires this register. Retired registers cannot be reactivated. All transaction history will be preserved but no new transactions can be processed. This action cannot be undone or reverted.’
The owner must type the word RETIRE (case-sensitive, exact match) in a confirmation text field before the system proceeds. If the typed text does not match exactly, the action is blocked with error: [ERR-5072] Confirmation text does not match. Type RETIRE to confirm.
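A minimal sketch of the decommission guard described above, assuming the role string and error wording from this section (the function and exception names are hypothetical):

```python
class RetireError(Exception):
    """Raised when the decommission guard blocks the action."""

def confirm_retire(user_role, typed_text):
    """Guard for the decommission transition: OWNER-only, and the typed
    confirmation must match 'RETIRE' exactly (case-sensitive)."""
    if user_role != "OWNER":
        raise RetireError("Only the OWNER role may retire a register.")
    if typed_text != "RETIRE":
        raise RetireError("[ERR-5072] Confirmation text does not match. "
                          "Type RETIRE to confirm.")
    return True
```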
5.7.3 Device Pairing
Each register is associated with one or more physical devices. Devices are the hardware (iPad, PC terminal, mobile phone) on which the POS application runs. A register can have multiple paired devices (e.g., a backup iPad) but only one may be the active/primary device at any time.
devices Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
register_id | UUID | Yes | – | FK to registers table |
hardware_id | String(100) | Yes | – | Unique device identifier (serial number, UDID, or system-generated fingerprint) |
device_type | Enum | Yes | – | IPAD, PC_TERMINAL, MOBILE, ANDROID_TABLET |
device_name | String(100) | No | null | Friendly name (e.g., “iPad Pro 12.9 - Main Counter”) |
os_version | String(50) | No | null | Operating system version (e.g., “iPadOS 17.4”, “Windows 11”) |
app_version | String(20) | No | null | POS application version installed (e.g., “2.3.1”) |
is_primary | Boolean | Yes | false | Whether this is the active device for the register (exactly one per register) |
last_seen_at | DateTime | No | null | Last successful heartbeat or API call from this device |
last_sync_at | DateTime | No | null | Last successful data synchronization timestamp |
paired_at | DateTime | Yes | Auto | When this device was first paired to the register |
paired_by | UUID | Yes | – | FK to users table; admin who paired the device |
Unique constraint: (tenant_id, hardware_id)
Business Rules:
- A hardware_id can only be paired to one register at a time across the entire tenant. Pairing to a new register automatically unpairs from the previous register.
- Exactly one device per register must be is_primary = true. When a new device is set as primary, the previous primary is automatically set to is_primary = false.
- A device that has not sent a heartbeat in 5 minutes is flagged as “Offline” in the admin dashboard. After 24 hours without contact, the device status is escalated to “Disconnected” with an alert to the admin.
- app_version is reported by the device at each heartbeat. The admin dashboard highlights devices running outdated versions.
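The heartbeat thresholds above can be expressed as a small status-derivation helper (a sketch; the “Online” label and the treatment of never-seen devices are assumptions):

```python
from datetime import datetime, timedelta

def device_status(last_seen_at, now=None):
    """Map a device's last_seen_at heartbeat to a dashboard status,
    using the 5-minute and 24-hour thresholds from the rules above."""
    now = now or datetime.utcnow()
    if last_seen_at is None:
        return "Disconnected"   # never seen — assumption for this sketch
    age = now - last_seen_at
    if age > timedelta(hours=24):
        return "Disconnected"   # triggers an alert to the admin
    if age > timedelta(minutes=5):
        return "Offline"
    return "Online"
```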
5.7.4 Register Profiles
Register profiles define which POS functions are available on a terminal. Two profiles are provided by default; tenants cannot create custom profiles (this prevents an explosion of untested UI configurations).
register_profiles Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
code | String(20) | Yes | – | Unique profile code within tenant |
name | String(50) | Yes | – | Display name |
description | String(500) | No | null | Profile description |
allowed_functions | JSONB | Yes | – | Array of function codes available on this profile |
is_system | Boolean | Yes | true | System profiles cannot be deleted |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, code)
Default Profile Definitions
| Profile | Code | Description | Available Functions |
|---|---|---|---|
| Full POS | FULL_POS | Standard counter terminal with complete POS capability | sale, return, exchange, layaway, hold, void, inventory_lookup, price_check, cash_drawer, customer_management, gift_card, reports, end_of_day, park_sale, special_order |
| Mobile Checkout | MOBILE | Handheld device for floor-based sales assistance | sale, price_check, inventory_lookup, customer_lookup, park_sale |
Function availability comparison:
| Function | Full POS | Mobile |
|---|---|---|
sale | Y | Y |
return | Y | N |
exchange | Y | N |
layaway | Y | N |
hold | Y | N |
void | Y | N |
inventory_lookup | Y | Y |
price_check | Y | Y |
cash_drawer | Y | N |
customer_management | Y | N |
customer_lookup | Y | Y |
gift_card | Y | N |
reports | Y | N |
end_of_day | Y | N |
park_sale | Y | Y |
special_order | Y | N |
Business Rules:
- The register profile controls which menu items and action buttons are rendered on the POS UI. Functions not in the profile’s allowed_functions array are hidden from the interface entirely.
- User role permissions (Section 5.5.4) are enforced IN ADDITION TO profile restrictions. A function must be allowed by BOTH the register profile AND the user’s role feature toggles. For example, a Staff user on a Full POS terminal cannot void a transaction because their role toggle is void_transaction = false, even though the profile allows the void function.
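The two-layer check reduces to a single predicate (illustrative names; role toggles are modeled here as a plain dict keyed by function code):

```python
def function_available(function_code, profile_functions, role_toggles):
    """A function is usable only when BOTH the register profile allows it
    AND the user's role toggle enables it."""
    return (function_code in profile_functions
            and role_toggles.get(function_code, False))
```

For the example above: a Staff user with void_transaction disabled is denied void even on a FULL_POS terminal, while a Mobile terminal hides void regardless of the user's role.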
5.7.5 Peripheral Assignments
Each register has linked peripheral devices that provide physical I/O capabilities. Peripherals are assigned to registers via a junction table that references the device registry from the appropriate configuration section.
register_peripherals Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key, system-generated |
tenant_id | UUID | Yes | – | Owning tenant |
register_id | UUID | Yes | – | FK to registers table |
peripheral_type | Enum | Yes | – | Type of peripheral (see enumeration below) |
peripheral_ref_id | UUID | No | null | FK to the peripheral’s registry table (e.g., printers.id, payment_terminals.id) |
connection_type | Enum | Yes | – | USB, BLUETOOTH, NETWORK, BUILT_IN |
is_active | Boolean | Yes | true | Whether this peripheral is currently connected and operational |
last_status_check | DateTime | No | null | Last successful peripheral status check |
created_at | DateTime | Yes | Auto | Record creation timestamp |
Unique constraint: (register_id, peripheral_type) – one peripheral per type per register
Peripheral Type Enumeration
| Peripheral Type | Code | Required (Full POS) | Required (Mobile) | Source Configuration |
|---|---|---|---|---|
| Receipt Printer | RECEIPT_PRINTER | Yes | No | Section 5.8 (Printer Configuration) |
| Label Printer | LABEL_PRINTER | No | No | Section 5.8 (Printer Configuration) |
| Barcode Scanner | BARCODE_SCANNER | Yes | Yes | Direct pairing (USB/Bluetooth) |
| Payment Terminal | PAYMENT_TERMINAL | Yes | Yes | Section 5.11 (Payment Processing) |
| Cash Drawer | CASH_DRAWER | Yes | No | Direct pairing (connected via receipt printer kick cable) |
| Customer Display | CUSTOMER_DISPLAY | No | No | Direct pairing (secondary screen, pole display) |
| RFID Reader | RFID_READER | No | No | Section 5.16 (RFID Configuration) |
Business Rules:
- A register with the FULL_POS profile must have at minimum: receipt printer, barcode scanner, payment terminal, and cash drawer assigned and active before it can process transactions.
- A register with the MOBILE profile must have at minimum: barcode scanner and payment terminal assigned.
- Missing required peripherals are flagged on the admin dashboard. The POS terminal displays a warning on login: “Register [REG-001] is missing required peripheral: [Cash Drawer]. Some functions may be unavailable.”
- Cash drawers are typically connected to the receipt printer via a kick cable (RJ12 connector). The cash drawer opens when the receipt printer sends a drawer kick signal. For this reason, the cash drawer’s operational status is dependent on the receipt printer’s status.
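The minimum-peripheral rules above can be sketched as a validation helper (profile codes and peripheral type codes are taken from the tables in this section; the function name is illustrative):

```python
REQUIRED_PERIPHERALS = {
    "FULL_POS": {"RECEIPT_PRINTER", "BARCODE_SCANNER",
                 "PAYMENT_TERMINAL", "CASH_DRAWER"},
    "MOBILE":   {"BARCODE_SCANNER", "PAYMENT_TERMINAL"},
}

def missing_peripherals(profile_code, assigned_active_types):
    """Return the required peripheral types that are not assigned and
    active for this register, sorted for stable warning messages."""
    required = REQUIRED_PERIPHERALS.get(profile_code, set())
    return sorted(required - set(assigned_active_types))
```

A non-empty result drives both the admin-dashboard flag and the login warning shown above.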
5.7.6 Register-Peripheral Entity Relationship
erDiagram
REGISTERS ||--o{ DEVICES : "paired with"
REGISTERS ||--|| REGISTER_PROFILES : "uses profile"
REGISTERS ||--o{ REGISTER_PERIPHERALS : "has peripherals"
LOCATIONS ||--o{ REGISTERS : "contains"
REGISTERS {
UUID id PK
UUID tenant_id FK
UUID location_id FK
String register_number
UUID profile_id FK
Enum status
}
DEVICES {
UUID id PK
UUID register_id FK
String hardware_id
Enum device_type
Boolean is_primary
DateTime last_seen_at
}
REGISTER_PROFILES {
UUID id PK
String code
String name
JSONB allowed_functions
}
REGISTER_PERIPHERALS {
UUID id PK
UUID register_id FK
Enum peripheral_type
UUID peripheral_ref_id
Enum connection_type
Boolean is_active
}
LOCATIONS {
UUID id PK
String code
String name
Enum type
}
Cross-Reference: See Module 1 for POS transaction flow and how register context (profile, peripherals) affects the sales workflow. See Module 5, Section 5.8 for the printer registry that peripheral_ref_id references for receipt and label printers. See Module 5, Section 5.11 for the payment terminal configuration that peripheral_ref_id references for payment devices.
5.8 Printers & Peripherals
Scope: Central registry of all printers across all tenant locations, linking printers to registers, and managing network printer discovery. This section covers receipt printers, label printers, and the register-to-printer assignment model.
Cross-Reference: See Module 5, Section 5.7 for register profile definitions and peripheral assignments. See Module 5, Section 5.14 for receipt layout and content configuration.
5.8.1 Printer Registry
Every physical printer in the organization is registered in a central table. Printers are scoped to a specific location and classified by type and connection method.
Printer Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
location_id | UUID | Yes | FK to locations table – physical location where the printer resides |
name | String(100) | Yes | Human-readable name (e.g., “Main Counter Printer”, “Back Office Label Printer”) |
type | Enum | Yes | RECEIPT, LABEL |
connection_type | Enum | Yes | USB, NETWORK_IP, BLUETOOTH |
connection_address | String(255) | Yes | Connection target: IP:port for NETWORK_IP (e.g., “192.168.1.50:9100”), device path for USB (e.g., “/dev/usb/lp0”), MAC address for BLUETOOTH |
model | String(100) | No | Manufacturer model identifier (e.g., “Epson TM-T88VI”, “Zebra ZD421”, “Star TSP143IV”) |
paper_width | Enum | Yes | Receipt: 58MM, 80MM. Label: 25x50MM, 50x25MM, 50x75MM, CUSTOM |
is_shared | Boolean | Yes | Whether multiple registers can use this printer simultaneously (default: false) |
is_active | Boolean | Yes | Soft-delete flag (default: true) |
last_health_check | DateTime | No | Timestamp of the most recent successful health check ping |
last_health_status | Enum | No | ONLINE, OFFLINE, ERROR, UNKNOWN |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
5.8.2 Receipt Printers
Receipt printers produce transaction receipts, X/Z-reports, and end-of-day summaries. Every register with a Full POS profile (see Section 5.7) requires exactly one primary receipt printer.
Receipt Printer Specifications:
| Attribute | Options | Notes |
|---|---|---|
| Paper Width | 58mm (compact), 80mm (standard) | Configured per printer; determines receipt layout column width |
| Connection | USB (direct), Network IP (shared) | USB printers are 1:1 with a register; Network IP printers can be shared |
| Print Speed | Varies by model | Minimum recommended: 200mm/sec for high-volume registers |
| Auto-Cutter | Required for all receipt printers | Full or partial cut supported |
| Cash Drawer Kick | Supported via printer relay | Printer sends electrical pulse to open drawer on receipt print |
Business Rules:
- Each register with a Full POS profile must be linked to exactly one primary receipt printer. Registers with a Mobile profile do not require a receipt printer.
- A single receipt printer may serve multiple registers only if is_shared = true and connection_type = NETWORK_IP. USB printers cannot be shared.
- When a receipt printer is marked is_active = false, any register linked to it as its primary receipt printer will display a configuration warning on the POS terminal dashboard.
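The sharing rule reduces to a one-line predicate (a sketch; enum values written as strings):

```python
def can_share_printer(is_shared, connection_type):
    """A receipt printer may serve multiple registers only when it is
    flagged shared AND network-attached; USB printers are always 1:1."""
    return is_shared and connection_type == "NETWORK_IP"
```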
5.8.3 Label Printers
Label printers produce barcode labels, price tags, and shelf tags. Label printers are typically shared resources used from the back office or receiving area, though they may also be linked to individual registers.
Supported Label Sizes:
| Size Code | Dimensions | Common Use |
|---|---|---|
25x50MM | 1" x 2" | Small barcode labels, jewelry tags
50x25MM | 2" x 1" | Standard shelf tags, price labels
50x75MM | 2" x 3" | Hang tags with barcode + price + description
CUSTOM | User-defined | Tenant-configured custom dimensions
Business Rules:
- Label printers are shared by default (is_shared = true) and can be linked to multiple registers.
- Label template selection is driven by the label size configured on the printer and the template definitions in Module 3, Section 3.10.
- A location may have zero or more label printers. Label printing is optional – stores without a label printer can still operate but cannot print labels locally.
Cross-Reference: See Module 3, Section 3.10 for label template definitions, barcode symbology, and print queue management.
5.8.4 Register-Printer Linking
The register_printers junction table defines which printers are available to each register and in what role.
Register-Printer Assignment Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
register_id | UUID | Yes | FK to registers table |
printer_id | UUID | Yes | FK to printers table |
printer_role | Enum | Yes | PRIMARY_RECEIPT, LABEL, SECONDARY_RECEIPT |
is_default | Boolean | Yes | Whether this is the default printer for the given role (default: true) |
created_at | DateTime | Yes | Record creation timestamp |
Printer Role Definitions:
| Role | Required | Max Per Register | Description |
|---|---|---|---|
PRIMARY_RECEIPT | Yes (Full POS) | 1 | Main receipt printer for transactions, X/Z-reports |
LABEL | No | 1 | Label printer for barcode/price tag printing |
SECONDARY_RECEIPT | No | 1 | Backup receipt printer; used if primary is offline |
Uniqueness Constraint: A register may have at most one printer per role. The composite key (register_id, printer_role) is unique.
5.8.5 Network Printer Discovery
Administrators can scan the local network subnet for printers to streamline the registration process.
Discovery Flow:
sequenceDiagram
autonumber
participant A as Admin
participant UI as Admin Portal
participant API as Backend
participant NET as Local Network
A->>UI: Click "Discover Printers"
UI->>API: POST /printers/discover
API->>NET: Scan subnet for devices on port 9100 (RAW), 631 (IPP)
NET-->>API: Respond with discovered IPs and device info
API-->>UI: Return discovered printer list
Note right of UI: Display: IP, hostname, model (if available), port
A->>UI: Select discovered printer
A->>UI: Assign name, type (RECEIPT/LABEL), paper width
UI->>API: POST /printers
API-->>UI: Printer registered successfully
Discovery Rules:
- Scan is limited to the local subnet of the location’s network.
- Discovery returns IP address, hostname (if resolvable), and model string (if the printer supports SNMP or IPP device identification).
- Discovered printers are presented as candidates – the administrator must explicitly add them to the registry with a name and type assignment.
- Discovery does not modify any existing printer records.
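The per-host probe behind the discovery scan might look like the following sketch (it assumes plain TCP connect checks on ports 9100 and 631; the real implementation may additionally query SNMP or IPP for hostname and model strings):

```python
import socket

def probe_printer_port(host, port, timeout=0.5):
    """Attempt a TCP connect; True means something is listening on the
    port and the host is a printer candidate."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts, ports=(9100, 631)):
    """Return {host: [open ports]} for hosts that answered on any
    candidate port (9100 = RAW, 631 = IPP)."""
    found = {}
    for host in hosts:
        open_ports = [p for p in ports if probe_printer_port(host, p)]
        if open_ports:
            found[host] = open_ports
    return found
```

Results are candidates only: per the rules above, the administrator must still explicitly register each printer with a name and type.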
5.8.6 Printer Health Monitoring
The system performs periodic health checks on all active network printers.
| Setting | Value | Description |
|---|---|---|
| Health check interval | Every 5 minutes | Background ping to NETWORK_IP printers only |
| Offline threshold | 3 consecutive failures | Printer status changes to OFFLINE after 3 failed pings |
| Alert trigger | On status change to OFFLINE | Dashboard notification sent to location manager |
| USB printers | Not health-checked | USB status determined at print time |
Business Rules:
- Health checks apply only to printers with connection_type = NETWORK_IP.
- When a primary receipt printer goes offline, the register automatically attempts to use the secondary receipt printer (if configured).
- Printer health status is visible on the Admin Portal dashboard per location.
- Printer health status is visible on the Admin Portal dashboard per location.
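The three-consecutive-failure threshold can be sketched as a small per-printer state tracker (illustrative; alerting is reduced to a comment, and status values match the last_health_status enum):

```python
OFFLINE_THRESHOLD = 3  # consecutive failed pings before status flips

class PrinterHealth:
    """Tracks consecutive ping failures for one NETWORK_IP printer."""

    def __init__(self):
        self.failures = 0
        self.status = "UNKNOWN"

    def record_ping(self, ok):
        """Record one health-check result and return the current status."""
        if ok:
            self.failures = 0
            self.status = "ONLINE"
        else:
            self.failures += 1
            if self.failures >= OFFLINE_THRESHOLD:
                self.status = "OFFLINE"  # notify location manager here
        return self.status
```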
5.9 Tax Configuration
Scope: Location-level compound tax configuration using a 3-level jurisdiction model (State / County / City) with support for product-level and customer-level exemptions. Each location references a tax jurisdiction; all active rates within that jurisdiction are summed at time of sale to produce the effective compound rate.
Cross-Reference: See Module 1, Section 1.17 for the tax calculation engine and line-item tax computation. See Module 2 for customer tax exemption fields. See Module 3, Section 3.1 for product-level tax exemption.
5.9.1 Tax Jurisdiction and Rate Setup
Tax is modeled as a 3-level compound system: State, County, and City. Each location references a tax jurisdiction, and each jurisdiction defines up to three rate levels that are summed at time of sale to produce the effective compound tax rate.
tax_jurisdictions Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key |
tenant_id | UUID | Yes | – | FK to tenants table |
code | String(20) | Yes | – | Unique jurisdiction code (e.g., “VA-NFK”, “VA-VB”, “CA-LA”) |
name | String(100) | Yes | – | Human-readable name (e.g., “Norfolk, Virginia”) |
state_name | String(50) | Yes | – | State or province name |
description | String(500) | No | null | Additional notes |
is_active | Boolean | Yes | true | Whether available for assignment |
created_at | DateTime | Yes | Auto | Record creation timestamp |
updated_at | DateTime | Yes | Auto | Last modification timestamp |
Unique constraint: (tenant_id, code)
tax_rates Table
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | Auto | Primary key |
tenant_id | UUID | Yes | – | FK to tenants table |
jurisdiction_id | UUID | Yes | – | FK to tax_jurisdictions table |
level | Enum | Yes | – | Tax level: STATE, COUNTY, CITY |
rate_name | String(100) | Yes | – | Display name (e.g., “Virginia State Tax”, “Norfolk City Tax”) |
rate_percent | Decimal(5,3) | Yes | – | Tax rate as percentage (e.g., 4.300 for 4.3%) |
effective_date | Date | Yes | – | Date this rate becomes active |
end_date | Date | No | null | Date this rate expires (null = no expiry) |
created_by | UUID | Yes | – | FK to users table |
is_active | Boolean | Yes | true | Whether currently in effect |
notes | String(500) | No | null | Reason for rate or change |
created_at | DateTime | Yes | Auto | Record creation timestamp |
Unique constraint: (jurisdiction_id, level, effective_date)
Business Rules:
- A jurisdiction may have up to 3 active tax rate levels (STATE, COUNTY, CITY). Not all levels are required.
- At time of sale, all active rates for the location’s jurisdiction are summed to produce the effective compound tax rate.
- When a new rate’s effective_date arrives, the system deactivates any existing rate at the same level without an end_date.
- Future rates can be scheduled by setting effective_date in the future. A background job activates the rate at midnight on the effective date.
- Rate changes never modify historical records. All past rates are preserved for audit and historical transaction recalculation if needed.
- The system does not support tax-inclusive pricing. All product prices are tax-exclusive; tax is computed and displayed separately.
- Example: Norfolk, VA = State 4.3% + County (regional) 0.7% + City 1.0% = 6.0% compound rate.
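Selecting and summing the active rates for a jurisdiction might be sketched as follows (rates modeled as dicts mirroring tax_rates rows; names are illustrative):

```python
from datetime import date

def effective_compound_rate(rates, on=None):
    """Sum all rates in effect on a given date for one jurisdiction:
    is_active, effective_date reached, and end_date (if any) not passed."""
    on = on or date.today()
    total = 0.0
    for r in rates:
        if not r["is_active"]:
            continue
        if r["effective_date"] > on:
            continue  # scheduled future rate, not yet activated
        if r["end_date"] is not None and r["end_date"] < on:
            continue  # expired rate
        total += r["rate_percent"]
    return round(total, 3)
```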
5.9.2 Tax Calculation Priority
Tax determination follows a strict priority order. The first matching rule wins:
flowchart TD
A[Line Item Added to Cart] --> B{Product tax_exempt = true?}
B -->|Yes| C[Tax Amount = $0.00]
B -->|No| D{Customer attached?}
D -->|No| G[Apply Location Jurisdiction Compound Rate]
D -->|Yes| E{Customer tax_exempt = true AND certificate valid?}
E -->|Yes| F[Tax Amount = $0.00]
E -->|No| G
G --> H[Sum all active rates for jurisdiction]
H --> I["Tax = line_subtotal × sum_of_rates / 100"]
Priority Order (highest first):
| Priority | Condition | Result |
|---|---|---|
| 1 | Product tax_exempt = true | No tax on this line item |
| 2 | Customer tax_exempt = true AND exemption_certificate_expiry >= today | No tax on any line item for this customer |
| 3 | Location jurisdiction compound rate | Sum all active rates for location’s jurisdiction (State + County + City). Apply sum_of_rates to taxable line subtotal. Formula: Tax = line_subtotal × sum_of_rates / 100 |
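The priority order can be sketched as a single resolution function (illustrative names; rounding is simplified here — the authoritative rounding rules live in Module 1, Section 1.17):

```python
from datetime import date

def line_item_tax(subtotal, product_tax_exempt, customer, compound_rate, today=None):
    """First matching rule wins, per the priority table above.

    customer: None, or a dict with tax_exempt and
    exemption_certificate_expiry (a date or None).
    """
    today = today or date.today()
    if product_tax_exempt:                       # Priority 1
        return 0.0
    if customer and customer.get("tax_exempt"):  # Priority 2
        expiry = customer.get("exemption_certificate_expiry")
        if expiry is not None and expiry >= today:
            return 0.0
        # expired certificate: fall through, customer treated as taxable
    return round(subtotal * compound_rate / 100, 2)  # Priority 3
```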
5.9.3 Tax Exemption
Product-Level Exemption:
- The tax_exempt boolean flag on the product record (Module 3, Section 3.1) exempts individual products from tax regardless of customer or location.
- Common use: food items, certain clothing categories in jurisdictions with clothing exemptions.
Customer-Level Exemption:
- Customer records (Module 2) include three exemption fields:
| Field | Type | Description |
|---|---|---|
tax_exempt | Boolean | Whether this customer is tax-exempt (default: false) |
exemption_certificate_number | String(50) | State or federal tax exemption certificate number |
exemption_certificate_expiry | Date | Expiration date of the certificate – system checks validity at time of sale |
- When a tax-exempt customer is attached to a transaction, the system validates that the certificate has not expired. If expired, the customer is treated as taxable and the cashier sees a warning: “Tax exemption certificate expired – tax will be applied.”
- Exemption applies to all line items in the transaction (unless the product itself is tax-exempt, in which case it remains exempt regardless).
5.9.4 Tax Display and Reporting
Receipt Display:
- Tax is calculated per line item and aggregated at the transaction level.
- Receipt shows: Subtotal (pre-tax) + Tax Amount = Total.
- Compound tax rate is printed on the receipt (e.g., “Tax (6.000%): $4.50”). Optionally, the breakdown by level can be shown (e.g., State 4.3%, County 0.7%, City 1.0%).
- Tax-exempt transactions display “Tax Exempt” with the certificate number.
Tax Reporting Period:
| Setting | Options | Default | Description |
|---|---|---|---|
tax_reporting_period | MONTHLY, QUARTERLY | QUARTERLY | Determines aggregation period for tax liability reports |
- Tax liability reports aggregate taxable sales, exempt sales, and tax collected by reporting period.
- Reports are available per location and consolidated across all locations.
Cross-Reference: See Module 1, Section 1.17 for detailed tax calculation engine, rounding rules, and tax line-item storage.
5.10 Units of Measure
Scope: Predefined and tenant-customizable units of measure (UoMs) used for selling, purchasing, and inventory tracking. The UoM system supports conversion factors between related units, enabling scenarios where products are purchased in bulk units (cases, dozens) and sold in individual units (each, pair).
Cross-Reference: See Module 3, Section 3.1 for product UoM assignment fields (selling_uom, purchasing_uom, uom_conversion_factor). See Module 4, Section 4.2 for purchase order UoM handling.
5.10.1 System-Predefined UoMs
The following UoMs are provided out-of-the-box and cannot be deleted or modified. They are available to all tenants.
| Code | Name | Category | Base Unit | Conversion to Base | Example Use |
|---|---|---|---|---|---|
EACH | Each | QUANTITY | EACH | 1 (base) | Individual garments, accessories |
PAIR | Pair | QUANTITY | EACH | 2 | Shoes, gloves, earrings, socks |
PACK | Pack | QUANTITY | EACH | Varies (set per product) | Multi-pack underwear, sock bundles |
BOX | Box | QUANTITY | EACH | Varies (set per product) | Boxed gift sets, assortments |
DOZEN | Dozen | QUANTITY | EACH | 12 | Bulk socks, accessories wholesale |
CASE | Case | QUANTITY | EACH | Varies (set per product) | Vendor case packs |
YARD | Yard | LENGTH | YARD | 1 (base) | Fabric, ribbon, trim |
METER | Meter | LENGTH | YARD | 1.0936 | Fabric (metric suppliers) |
FOOT | Foot | LENGTH | YARD | 0.3333 | Chain, cord, elastic |
KG | Kilogram | WEIGHT | KG | 1 (base) | Bulk items by weight |
LB | Pound | WEIGHT | KG | 0.4536 | Bulk items (imperial) |
OZ | Ounce | WEIGHT | KG | 0.02835 | Small items, jewelry |
5.10.2 Custom UoMs
Tenants can create additional UoMs to match their specific business needs. Custom UoMs must belong to an existing category and define a conversion factor to the category’s base unit.
UoM Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | No | FK to tenants table (NULL for system-predefined UoMs) |
code | String(20) | Yes | Unique code within tenant scope (e.g., “ROLL”, “SPOOL”, “BUNDLE”, “SET”) |
name | String(50) | Yes | Display name (e.g., “Roll”, “Spool”, “Bundle”, “Set of 3”) |
category | Enum | Yes | QUANTITY, LENGTH, WEIGHT |
conversion_factor | Decimal(12,6) | Yes | Number of base units in one of this UoM (e.g., ROLL = 25 YARD, so factor = 25) |
base_uom_id | UUID | Yes | FK to uom table – the base unit this converts to (EACH, YARD, or KG) |
is_system | Boolean | Yes | true for predefined UoMs, false for tenant-created (default: false) |
is_active | Boolean | Yes | Soft-delete flag (default: true) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Custom UoM Examples:
| Code | Name | Category | Base UoM | Conversion Factor | Meaning |
|---|---|---|---|---|---|
ROLL | Roll | LENGTH | YARD | 25 | 1 Roll = 25 Yards |
SPOOL | Spool | LENGTH | YARD | 100 | 1 Spool = 100 Yards |
BUNDLE | Bundle | QUANTITY | EACH | 5 | 1 Bundle = 5 Each |
SET3 | Set of 3 | QUANTITY | EACH | 3 | 1 Set = 3 Each |
HALFYD | Half Yard | LENGTH | YARD | 0.5 | 1 Half Yard = 0.5 Yards |
5.10.3 UoM Conversion Table
For complex multi-step conversions (e.g., converting between two non-base units), the system maintains an explicit conversion table.
UoM Conversion Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
from_uom_id | UUID | Yes | FK to uom table – source unit |
to_uom_id | UUID | Yes | FK to uom table – target unit |
factor | Decimal(12,6) | Yes | Multiply source quantity by this factor to get target quantity |
tenant_id | UUID | Yes | FK to tenants table |
created_at | DateTime | Yes | Record creation timestamp |
Conversion Examples:
| From | To | Factor | Explanation |
|---|---|---|---|
| DOZEN | EACH | 12 | 1 Dozen = 12 Each |
| CASE | EACH | 24 | 1 Case = 24 Each (varies per product) |
| PAIR | EACH | 2 | 1 Pair = 2 Each |
| METER | YARD | 1.0936 | 1 Meter = 1.0936 Yards |
| LB | KG | 0.4536 | 1 Pound = 0.4536 Kilograms |
| ROLL | YARD | 25 | 1 Roll = 25 Yards |
Uniqueness Constraint: The composite key (from_uom_id, to_uom_id, tenant_id) is unique. The system auto-generates the inverse conversion (e.g., if DOZEN->EACH = 12, then EACH->DOZEN = 0.083333) so both directions are always available.
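The auto-generated inverse can be sketched with an in-memory stand-in for the conversion table (illustrative helper names):

```python
def register_conversion(table, from_uom, to_uom, factor):
    """Store a conversion and auto-generate its inverse, so both
    directions are always available (factors kept to 6 decimals,
    matching Decimal(12,6))."""
    table[(from_uom, to_uom)] = factor
    table[(to_uom, from_uom)] = round(1.0 / factor, 6)

def convert(table, qty, from_uom, to_uom):
    """Convert a quantity between two registered units."""
    if from_uom == to_uom:
        return qty
    return qty * table[(from_uom, to_uom)]
```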
5.10.4 Product UoM Assignment
Each product specifies how it is sold and how it is purchased. The conversion factor bridges these two units for inventory tracking.
Product UoM Fields (on Product Record):
| Field | Type | Required | Description |
|---|---|---|---|
selling_uom_id | UUID | Yes | FK to uom table – unit used at POS (e.g., EACH, PAIR, YARD) |
purchasing_uom_id | UUID | Yes | FK to uom table – unit used on purchase orders (e.g., CASE, DOZEN, ROLL) |
uom_conversion_factor | Decimal(12,6) | Yes | Number of selling units per purchasing unit (e.g., 24 EACH per CASE) |
Conversion in Practice:
Purchase Order: 5 CASES of "Classic V-Neck Tee"
uom_conversion_factor = 24 (1 CASE = 24 EACH)
→ Receiving adds 5 × 24 = 120 EACH to inventory
POS Sale: Customer buys 3 EACH of "Classic V-Neck Tee"
→ Inventory decremented by 3 EACH
→ Remaining: 117 EACH (or 4.875 CASES)
Business Rules:
- The selling_uom_id determines how inventory quantities are displayed at the POS and in stock reports.
- The purchasing_uom_id determines the unit on purchase orders and receiving documents.
- When receiving a PO, the system multiplies the received quantity by uom_conversion_factor to compute the inventory increment in selling units.
- If selling_uom_id equals purchasing_uom_id, then uom_conversion_factor must be 1.
- UoM changes on a product with existing inventory require manager approval and trigger an inventory adjustment record.
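The receiving and sale arithmetic from the "Conversion in Practice" example can be sketched as (illustrative helpers; stock held in selling units):

```python
def receive(on_hand_selling, qty_received, conversion_factor):
    """Add a PO receipt to on-hand stock, converting purchasing units
    (e.g., CASE) to selling units (e.g., EACH)."""
    return on_hand_selling + qty_received * conversion_factor

def sell(on_hand_selling, qty_sold):
    """Decrement stock by a quantity expressed in selling units."""
    return on_hand_selling - qty_sold

# Worked example from above: receive 5 CASES of 24, then sell 3 EACH
stock = receive(0, 5, 24)   # 120 EACH
stock = sell(stock, 3)      # 117 EACH
```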
Cross-Reference: See Module 3, Section 3.1 for full product data model. See Module 4, Section 4.2 for purchase order line-item UoM handling. See Module 4, Section 4.3 for receiving UoM conversion.
5.11 Payment Methods & Processors
Scope: Configuration of accepted payment methods per location, payment processor integrations, terminal management, and cash rounding rules. This section defines the payment method registry and processor setup – the transactional payment flow is documented in Module 1.
Cross-Reference: See Module 1, Section 1.18 for payment integration flow and split-payment logic. See Module 5, Section 5.7 for register payment terminal assignment.
5.11.1 Payment Methods
The system supports a fixed set of payment method types. Each method has inherent capabilities (processor requirement, split eligibility, offline support) that cannot be overridden.
Payment Method Registry
| Code | Name | Requires Processor | Can Split | Offline Capable | Description |
|---|---|---|---|---|---|
CASH | Cash | No | Yes | Yes | Physical currency; change calculated automatically |
CREDIT_CARD | Credit/Debit Card | Yes (external) | Yes (multi-card) | No | Chip, swipe, tap, or manual entry via payment terminal |
GIFT_CARD | Gift Card | Internal | Yes | No | Store-issued gift cards with balance tracking |
STORE_CREDIT | Store Credit | Internal | Yes | No | Credit balance on customer account (from returns, adjustments) |
LAYAWAY_PAYMENT | Layaway Payment | Via card/cash | Yes | No | Partial payment applied to layaway balance |
FINANCING | Third-Party Financing | Yes (external) | No | No | Affirm, Klarna, or similar buy-now-pay-later provider |
Payment Method Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
code | String(20) | Yes | Method code from the table above |
name | String(50) | Yes | Display name (customizable by tenant, e.g., “Visa/MC/Amex” instead of “Credit/Debit Card”) |
requires_processor | Boolean | Yes | Whether an external or internal processor is required |
can_split | Boolean | Yes | Whether this method can be combined with other methods in a single transaction |
offline_capable | Boolean | Yes | Whether this method can be used when the terminal is offline |
is_active | Boolean | Yes | Global enable/disable for the tenant (default: true) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
5.11.2 Per-Location Payment Configuration
Each payment method can be independently enabled or disabled at each location. This allows tenants to offer different payment options at different store locations (e.g., financing only at the flagship store, no gift cards at popup locations).
Location Payment Method Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
location_id | UUID | Yes | FK to locations table |
payment_method_id | UUID | Yes | FK to payment_methods table |
is_enabled | Boolean | Yes | Whether this payment method is accepted at this location (default: true) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Uniqueness Constraint: The composite key (location_id, payment_method_id) is unique.
Business Rules:
- A payment method must be active at the tenant level (`payment_methods.is_active = true`) AND enabled at the location level (`location_payment_methods.is_enabled = true`) to appear as an option on the POS terminal at that location.
- CASH is always enabled and cannot be disabled at any location.
- When a payment method is disabled at the tenant level, it is automatically hidden at all locations regardless of the location-level setting.
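The availability rules above reduce to a small predicate (an illustrative sketch; the function name is ours):

```python
def is_method_available(tenant_active: bool, location_enabled: bool, code: str) -> bool:
    """A method is offered on the POS only when active at the tenant level
    AND enabled at the location level; CASH can never be disabled."""
    if code == "CASH":
        return True
    return tenant_active and location_enabled

# Financing disabled at this popup location, even though the tenant allows it:
print(is_method_available(True, False, "FINANCING"))  # False
# CASH is available regardless of configuration:
print(is_method_available(False, False, "CASH"))      # True
```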
5.11.3 Payment Processor Configuration
MOVED TO MODULE 6: Payment processor data model, terminal mapping, processor type details, and business rules have been consolidated into Module 6, Section 6.8.3 (Processor Configuration).
See: Module 6, Section 6.8 for the complete payment processor integration specification including SAQ-A architecture, terminal communication, failure handling, and batch settlement.
5.11.4 Cash Rounding Rules
When the total transaction amount results in a fractional cent, rounding rules determine how the final amount is adjusted for cash payments. Card payments are always exact (no rounding applied).
Rounding Rule Options
| Rule | Code | Description | Example |
|---|---|---|---|
| Nearest Cent | NEAREST_CENT | Round to nearest $0.01 (standard – no visible rounding) | $12.347 → $12.35 |
| Nearest Nickel | NEAREST_NICKEL | Round to nearest $0.05 (cash only) | $12.32 → $12.30; $12.33 → $12.35 |
| Nearest Dime | NEAREST_DIME | Round to nearest $0.10 (cash only) | $12.34 → $12.30; $12.36 → $12.40 |
Configuration:
| Field | Type | Required | Description |
|---|---|---|---|
tenant_id | UUID | Yes | FK to tenants table |
cash_rounding_rule | Enum | Yes | NEAREST_CENT, NEAREST_NICKEL, NEAREST_DIME (default: NEAREST_CENT) |
Business Rules:
- Rounding applies ONLY to the total amount when the payment method is CASH or includes a CASH component in a split payment.
- Card payments, gift cards, and store credits are always exact – no rounding.
- The rounding adjustment (positive or negative) is recorded as a separate line on the receipt (e.g., “Cash Rounding: -$0.02”) and tracked in the `cash_rounding_amount` field on the transaction record.
- Rounding adjustments are excluded from tax calculations – tax is computed on the pre-rounding subtotal.
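A minimal sketch of the rounding computation, assuming half-up rounding at the midpoint (consistent with the nickel example above, where $12.33 rounds up to $12.35):

```python
from decimal import Decimal, ROUND_HALF_UP

# Rounding step per rule, from the options table above
STEP = {
    "NEAREST_CENT": Decimal("0.01"),
    "NEAREST_NICKEL": Decimal("0.05"),
    "NEAREST_DIME": Decimal("0.10"),
}

def round_cash_total(total: Decimal, rule: str) -> tuple[Decimal, Decimal]:
    """Return (rounded_total, adjustment). The adjustment is what is stored
    in cash_rounding_amount; tax is computed on the pre-rounding subtotal
    and is unaffected."""
    step = STEP[rule]
    rounded = (total / step).quantize(Decimal("1"), rounding=ROUND_HALF_UP) * step
    return rounded, rounded - total

print(round_cash_total(Decimal("12.33"), "NEAREST_NICKEL"))  # 12.35, adjustment +0.02
print(round_cash_total(Decimal("12.32"), "NEAREST_NICKEL"))  # 12.30, adjustment -0.02
```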
5.12 Custom Fields
Scope: Tenant-defined custom fields that extend the standard data model for products, customers, orders, and vendors. Custom fields provide schema flexibility without database migrations, enabling each tenant to capture business-specific attributes unique to their operation.
Cross-Reference: See Module 3, Section 3.1.4 for the original product custom attribute specification. This section generalizes that pattern to all supported entity types.
5.12.1 Supported Entity Types
| Entity Type | Module | Max Fields Per Tenant | Use Cases |
|---|---|---|---|
PRODUCT | Module 3 | 50 | Care instructions, fabric composition, country of origin, certification level, custom sizing notes |
CUSTOMER | Module 2 | 50 | Preferred size, allergies/sensitivities, referral source, VIP notes, personal shopper assignment |
ORDER | Module 1 | 20 | Delivery instructions, gift wrap preference, event name, sales associate notes |
VENDOR | Module 3 | 20 | Internal account number, EDI trading partner code, preferred contact method, payment terms notes |
5.12.2 Custom Field Definition
Each custom field is defined once at the tenant level and then applied to individual entity records.
Custom Field Definition Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | – | Primary key |
tenant_id | UUID | Yes | – | FK to tenants table – owning tenant |
entity_type | Enum | Yes | – | PRODUCT, CUSTOMER, ORDER, VENDOR |
field_name | String(50) | Yes | – | Internal key in snake_case (e.g., “care_instructions”, “referral_source”). Must be unique within entity_type + tenant. |
label | String(100) | Yes | – | Human-readable display name (e.g., “Care Instructions”, “Referral Source”) |
field_type | Enum | Yes | – | TEXT, NUMBER, DATE, DROPDOWN, BOOLEAN |
is_required | Boolean | Yes | false | Whether this field must be filled when saving the entity |
default_value | String(500) | No | NULL | Default value applied when a new entity is created (must match field_type validation) |
sort_order | Integer | Yes | 0 | Display position in the entity edit form (lower values appear first) |
is_active | Boolean | Yes | true | Soft-delete flag; inactive fields are hidden from forms but data is preserved |
show_on_pos | Boolean | Yes | false | Whether this field is visible on the POS terminal (useful for quick customer notes or product care info) |
validation_min | Decimal(12,4) | No | NULL | Minimum value for NUMBER fields |
validation_max | Decimal(12,4) | No | NULL | Maximum value for NUMBER fields |
validation_max_length | Integer | No | 500 | Maximum character length for TEXT fields |
created_at | DateTime | Yes | – | Record creation timestamp |
updated_at | DateTime | Yes | – | Last modification timestamp |
5.12.3 Field Type Specifications
| Field Type | Stored Column | Validation Rules | Example Value |
|---|---|---|---|
TEXT | value_text (VARCHAR 500) | Max length enforced; blank allowed unless is_required | “Dry clean only” |
NUMBER | value_number (DECIMAL 12,4) | Must be numeric; validation_min / validation_max enforced if set | 42.5000 |
DATE | value_date (DATE) | Must be a valid ISO 8601 date | “2026-03-15” |
DROPDOWN | value_text (VARCHAR 100) | Value must match one of the defined options in custom_field_options | “Cotton” |
BOOLEAN | value_boolean (BOOLEAN) | Must be true or false | true |
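The validation rules in the table above can be sketched as a single dispatch function (illustrative only; the function name and signature are ours, not part of the spec):

```python
from datetime import date
from decimal import Decimal, InvalidOperation

def validate_value(field_type, raw, *, vmin=None, vmax=None, max_len=500, options=None):
    """Validate a raw custom-field value per its field_type; returns the
    typed value or raises ValueError, mirroring the rules table above."""
    if field_type == "TEXT":
        if len(raw) > max_len:
            raise ValueError(f"exceeds max length {max_len}")
        return raw
    if field_type == "NUMBER":
        try:
            num = Decimal(raw)
        except InvalidOperation:
            raise ValueError("not numeric") from None
        if vmin is not None and num < vmin:
            raise ValueError("below validation_min")
        if vmax is not None and num > vmax:
            raise ValueError("above validation_max")
        return num
    if field_type == "DATE":
        return date.fromisoformat(raw)  # raises ValueError for non-ISO-8601 input
    if field_type == "DROPDOWN":
        if raw not in (options or []):
            raise ValueError("not a defined option")
        return raw
    if field_type == "BOOLEAN":
        if raw not in ("true", "false"):
            raise ValueError("must be true or false")
        return raw == "true"
    raise ValueError(f"unknown field type {field_type}")
```

The typed return value maps directly onto the stored column for that type (`value_text`, `value_number`, `value_date`, or `value_boolean`).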
5.12.4 Dropdown Options
When field_type = DROPDOWN, the allowed values are stored in a separate options table.
Custom Field Options Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
field_id | UUID | Yes | FK to custom_field_definitions table |
option_value | String(100) | Yes | The selectable value (e.g., “Cotton”, “Polyester”, “Silk”) |
sort_order | Integer | Yes | Display position in the dropdown (lower values appear first) |
is_active | Boolean | Yes | Soft-delete flag (default: true); inactive options are hidden from new selections but preserved on existing records |
created_at | DateTime | Yes | Record creation timestamp |
Example Dropdown Configuration:
| Field Label | Entity Type | Options |
|---|---|---|
| Material | PRODUCT | Cotton, Polyester, Silk, Wool, Linen, Blend, Leather, Synthetic |
| Referral Source | CUSTOMER | Walk-in, Website, Social Media, Friend/Family, Google, Event, Other |
| Gift Wrap Style | ORDER | None, Standard, Premium, Holiday |
| Payment Terms | VENDOR | Net 30, Net 60, Net 90, COD, Prepaid |
5.12.5 Custom Field Values
Custom field values are stored in a generic key-value table using typed columns. Only the column matching the field’s type is populated; the others remain NULL.
Custom Field Values Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
entity_type | Enum | Yes | PRODUCT, CUSTOMER, ORDER, VENDOR – matches the definition’s entity_type |
entity_id | UUID | Yes | FK to the entity record (product, customer, order, or vendor) |
field_id | UUID | Yes | FK to custom_field_definitions table |
value_text | String(500) | No | Populated when field_type = TEXT or DROPDOWN |
value_number | Decimal(12,4) | No | Populated when field_type = NUMBER |
value_date | Date | No | Populated when field_type = DATE |
value_boolean | Boolean | No | Populated when field_type = BOOLEAN |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Uniqueness Constraint: The composite key (entity_type, entity_id, field_id) is unique – each entity record has at most one value per custom field.
Indexing Strategy:
- Composite B-tree index on `(entity_type, entity_id)` for fast retrieval of all custom field values for a given entity.
- B-tree index on `(field_id, value_text)` for custom fields marked `searchable = true` (PRODUCT entity type only).
5.12.6 Business Rules
- Field Limit: Maximum 50 custom field definitions per entity type per tenant for PRODUCT and CUSTOMER. Maximum 20 per entity type per tenant for ORDER and VENDOR. Attempting to exceed the limit returns error: “Maximum custom fields reached for [entity_type]. Archive unused fields to create new ones.”
- Archival: Setting `is_active = false` on a field definition hides it from all forms and POS screens but preserves existing values. Reactivating the field restores visibility and all previously stored values.
- Deletion: Custom field definitions cannot be hard-deleted. Only soft-delete via `is_active = false` is supported. This ensures data integrity and audit compliance.
- POS Visibility: Fields with `show_on_pos = true` appear in a “Custom Info” panel on the POS terminal. Maximum 5 fields per entity type can have `show_on_pos = true` to prevent POS screen clutter.
- Required Field Enforcement: When `is_required = true`, the entity cannot be saved without a value for this field. For PRODUCT entities, this applies when transitioning from DRAFT to ACTIVE status (drafts may have incomplete custom fields).
- Dropdown Integrity: If an active option is deactivated, existing records that reference that option retain their value (displayed with a “deprecated” indicator). New records cannot select the deactivated option.
5.13 Approval Workflows
Scope: Configurable approval rules that gate sensitive business actions behind manager or administrator review. Each approvable action has its own rule defining whether approval is required, the threshold that triggers it, who can approve, and how notifications are delivered.
Cross-Reference: See Module 1 for refund and void transaction workflows. See Module 3 for price markdown workflows. See Module 4, Section 4.3 for purchase order approval. See Module 4, Section 4.7 for inventory adjustment approval.
5.13.1 Approvable Actions
The system supports the following approvable actions. Each action is identified by a unique code and linked to a specific module.
| Action Code | Module | Description | Threshold Type | Default Threshold |
|---|---|---|---|---|
PO_CREATE | Module 4 | Purchase order creation and submission | Dollar amount | $5,000 |
PO_ABOVE_THRESHOLD | Module 4 | PO exceeding the tenant’s auto-approve limit | Dollar amount | $10,000 |
INVENTORY_ADJUSTMENT | Module 4 | Manual inventory quantity or value adjustment | Unit count or dollar value | 50 units or $500 |
PRICE_MARKDOWN | Module 3 | Price reduction on one or more products | Percentage or dollar amount | 30% or $50 |
REFUND_ABOVE_THRESHOLD | Module 1 | Refund exceeding the per-transaction refund limit | Dollar amount | $200 |
VOID_TRANSACTION | Module 1 | Voiding a completed, finalized transaction | Always (no threshold) | N/A – always requires approval |
INTER_STORE_TRANSFER | Module 4 | Transfer of inventory between locations | Unit count | 100 units |
VENDOR_RMA | Module 4 | Return merchandise authorization to vendor | Dollar amount | $1,000 |
DISCOUNT_OVERRIDE | Module 1 | Discount exceeding the maximum allowed percentage | Percentage | 25% |
5.13.2 Approval Rule Configuration
Each tenant configures one rule per approvable action. Rules can be enabled or disabled independently.
Approval Rule Data Model
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
id | UUID | Yes | – | Primary key |
tenant_id | UUID | Yes | – | FK to tenants table – owning tenant |
action_code | String(50) | Yes | – | Action code from the approvable actions table (unique per tenant) |
is_enabled | Boolean | Yes | true | Whether this approval rule is active |
threshold_value | Decimal(12,2) | No | NULL | Numeric threshold that triggers the approval requirement (NULL when threshold_type = ALWAYS) |
threshold_type | Enum | Yes | – | AMOUNT (dollar), UNITS (count), PERCENT (percentage), ALWAYS (no threshold – always requires approval) |
approver_role | Enum | Yes | MANAGER | Minimum role required to approve: MANAGER, ADMIN, OWNER |
notification_method | Enum | Yes | BOTH | How the approver is notified: IN_APP, EMAIL, BOTH |
escalation_timeout_hours | Integer | No | 24 | Hours before a pending request escalates to the next higher role (NULL = no escalation) |
auto_reject_on_timeout | Boolean | Yes | false | If true, requests that exceed escalation timeout without action are auto-rejected |
created_at | DateTime | Yes | – | Record creation timestamp |
updated_at | DateTime | Yes | – | Last modification timestamp |
5.13.3 Approval Request Lifecycle
When an action triggers an approval requirement, the system creates an approval request and routes it through the following state machine.
stateDiagram-v2
[*] --> CHECK_THRESHOLD : Action Initiated
CHECK_THRESHOLD --> AUTO_APPROVED : Below Threshold
CHECK_THRESHOLD --> PENDING_APPROVAL : At or Above Threshold
PENDING_APPROVAL --> APPROVED : Approver Accepts
PENDING_APPROVAL --> REJECTED : Approver Rejects
PENDING_APPROVAL --> ESCALATED : Escalation Timeout Reached
ESCALATED --> APPROVED : Higher-Role Approver Accepts
ESCALATED --> REJECTED : Higher-Role Approver Rejects
ESCALATED --> AUTO_REJECTED : Auto-Reject on Timeout (if enabled)
AUTO_APPROVED --> [*]
APPROVED --> [*]
REJECTED --> [*]
AUTO_REJECTED --> [*]
State Definitions:
| State | Description |
|---|---|
CHECK_THRESHOLD | System evaluates the action value against the rule’s threshold |
AUTO_APPROVED | Action value is below the threshold – no human approval needed; action proceeds immediately |
PENDING_APPROVAL | Waiting for a user with the required role to review and accept or reject |
APPROVED | An authorized approver accepted the request – action proceeds |
REJECTED | An authorized approver rejected the request – action is blocked and the requester is notified with the rejection reason |
ESCALATED | The escalation_timeout_hours elapsed without action; the request is re-routed to users with the next higher role |
AUTO_REJECTED | The escalation timeout elapsed AND auto_reject_on_timeout = true – request is automatically rejected |
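The CHECK_THRESHOLD step above can be sketched as a pure function (illustrative; the function name is ours — note the state machine routes values *at or above* the threshold to PENDING_APPROVAL):

```python
from decimal import Decimal

def initial_status(threshold_type: str, threshold_value, action_value) -> str:
    """Route a newly initiated action to AUTO_APPROVED or PENDING_APPROVAL,
    mirroring the CHECK_THRESHOLD state above."""
    if threshold_type == "ALWAYS":
        return "PENDING_APPROVAL"  # e.g. VOID_TRANSACTION: no threshold, always gated
    # AMOUNT, UNITS, and PERCENT all compare numerically against the rule's threshold
    if Decimal(str(action_value)) >= Decimal(str(threshold_value)):
        return "PENDING_APPROVAL"
    return "AUTO_APPROVED"

print(initial_status("AMOUNT", "5000.00", "4999.99"))  # AUTO_APPROVED
print(initial_status("AMOUNT", "5000.00", "5000.00"))  # PENDING_APPROVAL (at threshold)
print(initial_status("ALWAYS", None, "10.00"))         # PENDING_APPROVAL
```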
5.13.4 Escalation Chain
When a request escalates, the system promotes the required approver role one level up.
| Original Role | Escalates To | Final Escalation |
|---|---|---|
MANAGER | ADMIN | OWNER |
ADMIN | OWNER | No further escalation – remains pending until OWNER acts or auto-reject triggers |
OWNER | N/A | Cannot escalate; remains pending or auto-rejects |
5.13.5 Notification Behavior
| Method | Behavior |
|---|---|
IN_APP | Dashboard notification badge and entry in the “Pending Approvals” queue visible to all users with the required role at the relevant location(s) |
EMAIL | Email sent to all users with the approver role at the relevant location(s). Email includes: action description, requested by, threshold value, and a direct link to approve/reject in the Admin Portal. |
BOTH | Dashboard notification AND email are sent simultaneously |
Notification Rules:
- Notifications are scoped to the location where the action originated. If the action is tenant-wide (e.g., a PO for the entire organization), notifications go to all users with the approver role across all locations.
- When a request escalates, a new notification is sent to users with the escalated role. The original notification is updated to show “Escalated.”
- Upon approval or rejection, the requester receives a notification with the decision and any rejection reason.
5.13.6 Approval Request Data Model
Approval Request Table
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
action_code | String(50) | Yes | Action code that triggered this request |
status | Enum | Yes | PENDING, APPROVED, REJECTED, ESCALATED, AUTO_APPROVED, AUTO_REJECTED |
requested_by | UUID | Yes | FK to users table – user who initiated the action |
requested_at | DateTime | Yes | Timestamp when the request was created |
approved_by | UUID | No | FK to users table – user who approved or rejected (NULL while pending) |
approved_at | DateTime | No | Timestamp of approval/rejection |
rejection_reason | String(500) | No | Free-text reason provided by the approver when rejecting |
reference_type | Enum | Yes | Entity type the request relates to: PO, ADJUSTMENT, TRANSACTION, TRANSFER, RMA, PRODUCT, DISCOUNT |
reference_id | UUID | Yes | FK to the specific entity record (purchase order, transaction, adjustment, etc.) |
threshold_value_at_time | Decimal(12,2) | No | The actual value that triggered the approval (e.g., PO total, refund amount) – captured at request time for audit |
location_id | UUID | No | FK to locations table – location where the action originated (NULL for tenant-wide actions) |
escalated_at | DateTime | No | Timestamp when the request was escalated (NULL if not escalated) |
created_at | DateTime | Yes | Record creation timestamp |
Business Rules:
- Approval requests are immutable once resolved (APPROVED, REJECTED, or AUTO_REJECTED). The status cannot be changed after resolution.
- A user cannot approve their own request – the `approved_by` user must be different from the `requested_by` user.
- When `VOID_TRANSACTION` is the action, approval is always required regardless of transaction amount (`threshold_type = ALWAYS`).
- Approval requests older than 90 days in PENDING or ESCALATED status are automatically rejected with reason: “Request expired – no action taken within 90 days.”
- The `threshold_value_at_time` field captures the actual value at request creation, ensuring accurate audit even if the underlying rule’s threshold is later changed.
5.14 Receipt Configuration
Scope: Full customization of receipt layout, content, and formatting for both printed thermal receipts and email receipts. Receipt configuration is set at the tenant level with optional location-level overrides.
Cross-Reference: See Module 5, Section 5.8 for receipt printer hardware configuration. See Module 1 for transaction receipt generation and print trigger logic.
5.14.1 Receipt Field Registry
Each field on the receipt can be independently toggled (shown or hidden) and reordered. The following fields are available.
| Field Code | Default Show | Category | Description |
|---|---|---|---|
store_name | Yes | Header | Store or location name |
store_address | Yes | Header | Store street address, city, state, zip (from location configuration, Section 5.3) |
store_phone | Yes | Header | Store phone number (from location configuration) |
cashier_name | Yes | Transaction | Name of the staff member who processed the sale |
register_number | No | Transaction | Register identifier (e.g., “Register 3”) |
transaction_number | Yes | Transaction | Unique transaction ID (e.g., “TXN-2026-001234”) |
transaction_date | Yes | Transaction | Date and time of the transaction |
barcode | Yes | Transaction | Scannable CODE-128 barcode encoding the transaction number (for easy lookup on returns) |
item_list | Yes | Line Items | Itemized list showing: item name, SKU, quantity, unit price, line discount (if any), line total |
subtotal | Yes | Totals | Pre-tax total of all line items |
discount_total | Yes | Totals | Total discount amount applied (shown only if > $0.00) |
tax_amount | Yes | Totals | Tax line showing rate and amount (e.g., “Tax (6.000%): $4.50”) |
total | Yes | Totals | Grand total (subtotal - discounts + tax) |
payment_details | Yes | Payment | Payment method(s) used and amount per method (e.g., “Visa ****1234: $45.00, Cash: $10.00”) |
change_due | Yes | Payment | Change amount returned to customer (shown only for cash payments with overpayment) |
loyalty_points | No | Loyalty | Points earned on this transaction and current balance (shown only if loyalty module is enabled) |
customer_name | No | Customer | Customer name (shown only if a customer is attached to the transaction) |
savings_total | No | Totals | “You saved $X.XX” message showing total promotional and discount savings |
5.14.2 Layout Settings
Receipt Layout Data Model
| Setting | Type | Options | Default | Description |
|---|---|---|---|---|
paper_width | Enum | 58MM, 80MM | 80MM | Paper width – determines character-per-line limit (58mm = ~32 chars, 80mm = ~48 chars) |
font_size | Enum | SMALL, MEDIUM, LARGE | MEDIUM | Print font size – affects line density and readability |
field_order | JSON Array | Array of field_code strings | Default order from field registry | Ordered list defining top-to-bottom print sequence |
line_separator | Enum | DASH, EQUALS, BLANK, STAR | DASH | Character used to separate receipt sections |
alignment | Enum | LEFT, CENTER | CENTER | Header and footer text alignment |
print_density | Enum | LIGHT, NORMAL, DARK | NORMAL | Thermal print darkness (affects readability and paper consumption) |
Line Separator Examples:
| Option | Rendered As |
|---|---|
DASH | -------------------------------- |
EQUALS | ================================ |
BLANK | (empty line) |
STAR | ******************************** |
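The separator and alignment settings combine with the paper-width character limits (~32 chars at 58mm, ~48 at 80mm, per the layout table) roughly as follows (a rendering sketch; names are ours):

```python
WIDTH = {"58MM": 32, "80MM": 48}               # approximate chars per line
SEP_CHAR = {"DASH": "-", "EQUALS": "=", "STAR": "*"}

def separator_line(rule: str, paper_width: str) -> str:
    """Render a section separator at the configured paper width."""
    if rule == "BLANK":
        return ""                              # printed as an empty line
    return SEP_CHAR[rule] * WIDTH[paper_width]

def aligned_line(text: str, paper_width: str, alignment: str = "CENTER") -> str:
    """Fit a header/footer line to the paper width with the configured alignment."""
    cols = WIDTH[paper_width]
    text = text[:cols]
    return text.center(cols) if alignment == "CENTER" else text.ljust(cols)

print(separator_line("DASH", "58MM"))
print(aligned_line("Thank you!", "58MM"))
```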
5.14.3 Header Configuration
The receipt header appears at the top of every printed receipt and supports up to 3 customizable text lines plus an optional logo.
| Field | Type | Max Length | Default | Description |
|---|---|---|---|---|
header_line_1 | String | 100 chars | Tenant name | Primary header text (typically the company name) |
header_line_2 | String | 100 chars | “Thank you for shopping with us!” | Secondary header text (tagline, greeting, or blank) |
header_line_3 | String | 100 chars | (empty) | Tertiary header text (promotional message, seasonal greeting, or blank) |
header_logo | Image URL | – | NULL | Uploaded logo image (max 300px wide; auto-scaled to paper width; monochrome recommended for thermal printers) |
Logo Specifications:
- Format: PNG or BMP (monochrome 1-bit BMP preferred for thermal printers).
- Maximum width: 300 pixels. Height auto-scales proportionally.
- The logo prints above `header_line_1`.
- If no logo is uploaded, the header begins with `header_line_1`.
5.14.4 Footer Configuration
The receipt footer appears at the bottom of every printed receipt and supports up to 3 customizable text lines.
| Field | Type | Max Length | Default | Description |
|---|---|---|---|---|
footer_line_1 | String | 200 chars | “Returns accepted within 30 days with receipt.” | Primary footer text (typically return policy) |
footer_line_2 | String | 200 chars | (empty) | Secondary footer text (website URL, social media handles) |
footer_line_3 | String | 200 chars | “Thank you!” | Tertiary footer text (closing message) |
Business Rules:
- Footer lines can be blank (empty string). Blank lines are omitted from the printed receipt – no empty space is printed.
- Footer text should fit within the character-per-line limit of the configured paper width. Text exceeding the limit is word-wrapped automatically.
5.14.5 Receipt Configuration Data Model
Receipt Config Table
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
location_id | UUID | No | FK to locations table – NULL for tenant-wide default; non-NULL for location-specific override |
paper_width | Enum | Yes | 58MM, 80MM (default: 80MM) |
font_size | Enum | Yes | SMALL, MEDIUM, LARGE (default: MEDIUM) |
line_separator | Enum | Yes | DASH, EQUALS, BLANK, STAR (default: DASH) |
alignment | Enum | Yes | LEFT, CENTER (default: CENTER) |
print_density | Enum | Yes | LIGHT, NORMAL, DARK (default: NORMAL) |
header_lines | JSON | Yes | {"line_1": "...", "line_2": "...", "line_3": "..."} |
footer_lines | JSON | Yes | {"line_1": "...", "line_2": "...", "line_3": "..."} |
header_logo_url | String(500) | No | URL to uploaded header logo image |
field_order | JSON Array | Yes | Ordered array of field_code strings defining print sequence |
show_fields | JSON Object | Yes | Map of field_code: boolean controlling visibility (e.g., {"store_name": true, "register_number": false, ...}) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Configuration Inheritance:
- The system first looks for a location-specific receipt configuration (`location_id` = target location).
- If no location-specific configuration exists, it falls back to the tenant-wide default (`location_id = NULL`).
- Every tenant is initialized with a tenant-wide default receipt configuration using the default values from the field registry.
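The inheritance lookup above amounts to a two-step resolution (illustrative sketch; the function name and dict shape are ours):

```python
def resolve_receipt_config(configs: list[dict], tenant_id: str, location_id: str) -> dict:
    """Pick the effective receipt configuration: location-specific override
    first, then the tenant-wide default (location_id is None). Onboarding
    guarantees a tenant-wide default exists, so the fallback always resolves."""
    rows = [c for c in configs if c["tenant_id"] == tenant_id]
    for c in rows:
        if c["location_id"] == location_id:
            return c
    return next(c for c in rows if c["location_id"] is None)

configs = [
    {"tenant_id": "t1", "location_id": None, "paper_width": "80MM"},     # tenant default
    {"tenant_id": "t1", "location_id": "loc-2", "paper_width": "58MM"},  # override
]
print(resolve_receipt_config(configs, "t1", "loc-2")["paper_width"])  # 58MM
print(resolve_receipt_config(configs, "t1", "loc-9")["paper_width"])  # 80MM (fallback)
```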
5.14.6 Email Receipt Template
Email receipts are HTML-formatted and sent when a customer provides an email address at checkout or explicitly requests an email receipt.
Email Receipt Template Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
subject_line | String(200) | Yes | Email subject (default: “Your receipt from {{store_name}}”) |
html_template | Text | Yes | HTML template body with merge fields |
is_active | Boolean | Yes | Whether email receipts are enabled (default: true) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Supported Merge Fields:
| Merge Field | Resolves To |
|---|---|
{{store_name}} | Location name |
{{store_address}} | Location full address |
{{store_phone}} | Location phone number |
{{transaction_id}} | Transaction number |
{{transaction_date}} | Formatted date and time |
{{items}} | HTML table of line items (name, qty, price, discount, total) |
{{subtotal}} | Pre-tax subtotal |
{{discount_total}} | Total discounts applied |
{{tax_amount}} | Tax amount with rate |
{{total}} | Grand total |
{{payment_method}} | Payment method(s) used |
{{customer_name}} | Customer name (if attached) |
{{loyalty_points_earned}} | Points earned on this transaction |
{{loyalty_balance}} | Current loyalty point balance |
{{barcode_image}} | Inline barcode image of transaction number |
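Merge-field substitution over the table above can be sketched with a regex replace (illustrative; the spec does not mandate an implementation — leaving unknown tokens in place is our assumption, chosen so misspelled fields stay visible in a preview rather than being silently dropped):

```python
import re

def render_template(template: str, context: dict) -> str:
    """Substitute {{merge_field}} tokens with values from the transaction
    context; unresolved tokens are left intact."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )

subject = "Your receipt from {{store_name}}"
print(render_template(subject, {"store_name": "Main Street Boutique"}))
# Your receipt from Main Street Boutique
```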
Email Receipt Business Rules:
- Email receipts use the tenant’s branding colors (from Section 5.2) for header background, button colors, and accent elements.
- The company logo from the receipt header configuration is placed at the top of the email template.
- Every email receipt includes an unsubscribe link at the bottom: “Unsubscribe from receipt emails.” Clicking this sets the customer’s `email_receipt_opt_out = true`.
- Email receipts are queued asynchronously – the POS terminal does not wait for email delivery confirmation before completing the transaction.
- If the email fails to send (invalid address, mailbox full), the failure is logged but does not affect the transaction. The staff member sees no error; the customer simply does not receive the email.
- Email receipts are retained in the system for 7 years for audit and compliance purposes.
5.14.7 Receipt Preview
The Admin Portal provides a live preview of the receipt configuration, rendering a sample receipt with placeholder data so the administrator can verify layout, field order, and branding before saving.
Preview Behavior:
- Preview updates in real-time as the administrator toggles fields, reorders sections, or modifies header/footer text.
- Preview renders at the configured paper width (58mm or 80mm) using a monospace font to simulate thermal printer output.
- A “Send Test Email” button sends a sample email receipt to the administrator’s email address using the current email template configuration.
Cross-Reference: See Module 5, Section 5.8 for receipt printer hardware configuration and register-printer linking. See Module 1 for the transaction completion flow that triggers receipt printing.
5.15 Email Templates & Communications
Scope: Centralized email provider configuration and template registry for all automated communications sent by the POS system. This section covers SMTP/API provider setup, the complete email template catalog consolidated from all modules, merge field definitions, and per-template enablement controls.
Cross-Reference: See Module 1, Section 1.13 for sales-triggered email events. See Module 2, Section 2.9 for customer communication preferences. See Module 4, Section 4.16 for inventory alert email templates.
5.15.1 Email Provider Configuration
MOVED TO MODULE 6: Email provider configuration, data model, and business rules have been consolidated into Module 6, Section 6.9.1 (Provider Configuration).
See: Module 6, Section 6.9 for the complete email provider integration specification including SMTP/SendGrid/Mailgun configuration and delivery monitoring.
5.15.2 Template Registry
Every automated email sent by the POS system is defined as a template in a central registry. Templates are pre-seeded during tenant onboarding and can be individually enabled or disabled by the tenant administrator.
Consolidated Email Template Catalog
| Template Code | Source Module | Trigger Event | Default Recipients | Description |
|---|---|---|---|---|
| `TMPL-REFUND-CONFIRMATION` | Sales (M1) | Refund processed | Customer email | Confirms refund amount, method, and expected processing time |
| `TMPL-SPECIAL-ORDER-READY` | Sales (M1) | Special order arrived | Customer email | Notifies customer that their special order is ready for pickup |
| `TMPL-SHIPMENT-TRACKING` | Sales (M1) | Ship-to-customer dispatched | Customer email | Provides carrier name and tracking number |
| `TMPL-DELIVERY-CONFIRMATION` | Sales (M1) | Delivery confirmed | Customer email | Confirms package delivery with order summary |
| `TMPL-OFFLINE-SOLD` | Sales (M1) | Reserved item sold offline | Customer email | Informs customer when an offline-sold transfer/ship/reserve item is unavailable |
| `TMPL-WELCOME` | Customers (M2) | New customer created | Customer email | Welcome message with loyalty program introduction |
| `TMPL-TIER-UPGRADE` | Customers (M2) | Loyalty tier change | Customer email | Congratulates customer on tier upgrade with new benefits summary |
| `TMPL-PO-VENDOR` | Inventory (M4) | PO submitted to vendor | Vendor email | Formatted purchase order with line items, quantities, and expected delivery |
| `TMPL-TRANSFER-ALERT` | Inventory (M4) | Transfer shipped | Destination manager | Notifies destination store that a transfer is in transit |
| `TMPL-LOW-STOCK` | Inventory (M4) | Daily low stock digest | Store manager | Consolidated list of products below reorder point at each location |
| `TMPL-COUNT-REMINDER` | Inventory (M4) | Upcoming count scheduled | Assigned counters | Reminder email with count date, location, and scope (full/cycle) |
| `TMPL-RECEIPT-EMAIL` | Setup (M5) | Customer requests email receipt | Customer email | Full transaction receipt in HTML format with scannable barcode |
| `TMPL-PASSWORD-RESET` | Setup (M5) | User password reset request | User email | Secure password reset link with 24-hour expiry |
5.15.3 Template Data Model
Email Template Table
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | UUID | Yes | Primary key |
| `tenant_id` | UUID | Yes | FK to tenants table – owning tenant |
| `code` | String(50) | Yes | Unique template code (e.g., `TMPL-REFUND-CONFIRMATION`). Immutable after creation. |
| `name` | String(100) | Yes | Human-readable template name (e.g., “Refund Confirmation”) |
| `subject_template` | String(255) | Yes | Email subject with merge fields (e.g., “Your refund of {refund_amount} has been processed”) |
| `body_template` | Text | Yes | HTML email body with merge fields. Supports inline CSS for styling. Maximum 50KB. |
| `trigger_event` | String(100) | Yes | System event that triggers this email (e.g., `REFUND_PROCESSED`, `SPECIAL_ORDER_ARRIVED`) |
| `default_recipient_type` | Enum | Yes | CUSTOMER, VENDOR, STORE_MANAGER, ASSIGNED_USER, CUSTOM |
| `custom_recipient_email` | String(255) | No | Static email address when `default_recipient_type = CUSTOM` |
| `is_enabled` | Boolean | Yes | Whether this template is active and will be sent when triggered (default: true) |
| `is_system` | Boolean | Yes | true for pre-seeded templates, false for tenant-created (default: true) |
| `created_at` | DateTime | Yes | Record creation timestamp |
| `updated_at` | DateTime | Yes | Last modification timestamp |
5.15.4 Merge Fields
Each template uses merge fields enclosed in curly braces. The system resolves merge fields at send time by injecting context-specific values from the triggering event.
Common Merge Fields (available in all templates):
| Merge Field | Type | Description |
|---|---|---|
| `{tenant_name}` | String | Tenant trading name |
| `{store_name}` | String | Location name where the event occurred |
| `{store_address}` | String | Full store address |
| `{store_phone}` | String | Store phone number |
| `{current_date}` | Date | Date the email is generated |
| `{current_time}` | Time | Time the email is generated |
Transaction Merge Fields (sales-triggered templates):
| Merge Field | Type | Description |
|---|---|---|
| `{customer_name}` | String | Customer full name |
| `{transaction_id}` | String | Transaction reference number |
| `{order_total}` | Currency | Total transaction amount |
| `{refund_amount}` | Currency | Refund amount (refund templates only) |
| `{payment_method}` | String | Payment method used |
| `{tracking_number}` | String | Carrier tracking number (shipment templates only) |
| `{carrier_name}` | String | Shipping carrier name (shipment templates only) |
| `{pickup_deadline}` | Date | Pickup deadline date (special order and hold templates) |
Inventory Merge Fields (inventory-triggered templates):
| Merge Field | Type | Description |
|---|---|---|
| `{po_number}` | String | Purchase order number |
| `{vendor_name}` | String | Vendor company name |
| `{transfer_number}` | String | Transfer reference number |
| `{source_location}` | String | Transfer origin location name |
| `{destination_location}` | String | Transfer destination location name |
| `{count_date}` | Date | Scheduled count date |
| `{count_type}` | String | Count type (Full Physical, Cycle, On-Demand, etc.) |
| `{low_stock_items}` | HTML Table | Rendered table of low-stock products (digest templates only) |
Business Rules:
- Unresolved merge fields are replaced with an empty string and logged as a warning. They do not prevent the email from sending.
- Tenant administrators can customize the `subject_template` and `body_template` of system templates but cannot modify the `code`, `trigger_event`, or `default_recipient_type`.
- Disabling a template (`is_enabled = false`) prevents the email from being sent when the trigger event fires. The event itself still processes normally.
- Email receipts (`TMPL-RECEIPT-EMAIL`) render the same field data as printed receipts, formatted in responsive HTML with a scannable CODE-128 barcode image.
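The merge-field resolution rules above can be sketched in a few lines (an illustrative Python sketch, not the production implementation; the function and logger names are hypothetical):

```python
import logging
import re

log = logging.getLogger("email.templates")

def resolve_merge_fields(template: str, context: dict) -> str:
    """Substitute {merge_field} tokens with values from the triggering event.

    Per the business rules above, unresolved fields become an empty
    string and are logged as a warning; they never block the send.
    """
    def substitute(match: re.Match) -> str:
        field = match.group(1)
        if field in context:
            return str(context[field])
        log.warning("Unresolved merge field: {%s}", field)
        return ""

    return re.sub(r"\{(\w+)\}", substitute, template)
```

For example, resolving `"Your refund of {refund_amount} has been processed"` against `{"refund_amount": "$25.00"}` fills in the amount, while a missing `{tracking_number}` resolves to an empty string.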
5.16 RFID Configuration
Scope: Configuration of RFID hardware, EPC encoding parameters, tag printing, and scan session settings for the dedicated inventory counting subsystem. RFID is a counting-only system — it counts inventory through bulk tag reads. It does NOT participate in sales transactions, receiving, or transfers. Barcode Scanners remain the input device for those workflows (see Section 4.4 Receiving, Section 1.A.1 Item Entry).
Terminology Distinction:
- Scanner = barcode input device used at the POS register for sales, returns, receiving, and item lookup (Modules 1, 3, 4). Operates one-item-at-a-time via USB HID keyboard wedge.
- RFID = dedicated counting subsystem using radio-frequency readers for bulk inventory counting and auditing (Module 4, Section 4.6). Operates via the Raptag mobile application, reading 40+ tags per second.
- These are separate abstractions that coexist. Decision #11 (“Scanner Terminology”) applies to barcode input. RFID has its own configuration and workflow documented here.
Cross-References:
- Chapter 16 (Raptag Mobile Application) — mobile RFID counting interface
- Section 4.6.8 (RFID-Assisted Counting) — counting workflow integration
- Module 6, Section 6.11 (Integration Hub) — external system integrations (Shopify, Amazon, Google)
5.16.1 Reader Registration
RFID readers are registered as enterprise devices, paired to a location, and managed via claim codes generated in the Admin Portal.
Supported Reader Models:
| Model | Form Factor | Read Range | Use Case | Connectivity |
|---|---|---|---|---|
| Zebra MC3390R | Handheld gun | 20 ft | Full store inventory counts | WiFi, Bluetooth |
| Zebra RFD40 | Phone sled attachment | 12 ft | Zone/section counts | Bluetooth |
| Zebra FX9600 | Fixed (dock door) | 30 ft | Receiving dock verification | Ethernet, WiFi |
Reader Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | UUID | Yes | Primary key |
| `tenant_id` | UUID | Yes | FK to `shared.tenants` |
| `name` | VARCHAR(100) | Yes | Human-readable name (e.g., “Store GM Handheld #1”) |
| `model` | VARCHAR(50) | Yes | Reader model (MC3390R, RFD40, FX9600) |
| `serial_number` | VARCHAR(50) | Yes | Manufacturer serial number |
| `location_id` | INT | Yes | FK to locations — reader’s assigned location |
| `connection_type` | VARCHAR(20) | Yes | wifi, bluetooth, ethernet |
| `claim_code` | VARCHAR(6) | No | One-time registration code (e.g., “X7K9M2”) |
| `status` | VARCHAR(20) | Yes | active, offline, maintenance, retired |
| `registered_at` | TIMESTAMP | Yes | When device was first registered |
| `last_seen_at` | TIMESTAMP | No | Last heartbeat from device |
Claim Code Registration Workflow:
- Admin generates a 6-character alphanumeric claim code in the Admin Portal (`Settings > RFID > Devices`)
- Claim code is valid for 24 hours from generation
- Operator enters claim code on the Raptag mobile app during setup
- System validates claim code, registers device to the tenant and location
- Claim code is consumed (one-time use)
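Claim-code generation can be sketched as follows (illustrative only; the helper name and alphabet are assumptions, and persistence/validation are omitted):

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

# Uppercase letters and digits, matching codes like "X7K9M2".
CLAIM_ALPHABET = string.ascii_uppercase + string.digits

def generate_claim_code(length: int = 6) -> tuple:
    """Return a one-time claim code plus its expiry, 24 hours from generation."""
    code = "".join(secrets.choice(CLAIM_ALPHABET) for _ in range(length))
    expires_at = datetime.now(timezone.utc) + timedelta(hours=24)
    return code, expires_at
```

Using `secrets` rather than `random` keeps the code unguessable, which matters because the claim code alone authorizes device registration.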
Business Rules:
- One reader can be registered to one location at a time
- A reader can be reassigned to a different location by an Admin
- Readers with `status = 'offline'` for >15 minutes trigger an alert in the Admin Portal
- Retired readers cannot be re-activated — a new claim code must be generated
5.16.2 EPC Encoding Configuration
All RFID tags use the SGTIN-96 standard (96-bit EPC, encoded as 24 hexadecimal characters). Each tenant configures their EPC encoding parameters once during onboarding.
SGTIN-96 Structure:
Header (8 bits) | Filter (3) | Partition (3) | Company Prefix (20-40) | Item Ref (4-24) | Serial (38)
Tenant EPC Configuration:
| Field | Type | Default | Description |
|---|---|---|---|
| `epc_company_prefix` | VARCHAR(24) | — | GS1-assigned company prefix (set during onboarding) |
| `epc_indicator` | CHAR(1) | 0 | SGTIN indicator digit |
| `epc_filter` | CHAR(1) | 3 | Filter value (3 = individual trade item, per GS1 spec) |
| `epc_partition` | INT | 5 | Partition value (5 = 24-bit company prefix + 20-bit item reference, per the GS1 partition table) |
| `min_rssi_threshold` | SMALLINT | -70 | Minimum RSSI in dBm to accept a tag read; weaker reads are filtered as phantom reads |
Serial Number Management:
- Serial numbers use a PostgreSQL SEQUENCE per tenant (not a column counter)
- Sequence: `CREATE SEQUENCE rfid_epc_serial_{tenant_short_id} START 1 INCREMENT 1 NO CYCLE`
- Application calls `nextval()` during tag encoding — guarantees uniqueness under concurrent printing
- 38-bit serial field supports up to 274 billion unique tags per company prefix
EPC Format Validation:
- All EPCs must match `^[0-9A-F]{24}$` (exactly 24 uppercase hexadecimal characters)
- Enforced via database CHECK constraint on `rfid_tags.epc`
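As an illustration of the bit layout above, here is a minimal SGTIN-96 packer (a sketch, not the production encoder; it assumes the partition-5 field widths from the GS1 partition table — 24-bit company prefix, 20-bit item reference — and skips range checks on the inputs):

```python
import re

SGTIN96_HEADER = 0x30  # GS1 header value for SGTIN-96

def encode_sgtin96(company_prefix: int, item_ref: int, serial: int,
                   filter_value: int = 3, partition: int = 5) -> str:
    """Pack the six SGTIN-96 fields into 24 uppercase hex characters.

    Bit widths: 8 (header) + 3 (filter) + 3 (partition) + 24 (company
    prefix) + 20 (item reference) + 38 (serial) = 96 bits total.
    """
    value = SGTIN96_HEADER
    value = (value << 3) | filter_value
    value = (value << 3) | partition
    value = (value << 24) | company_prefix
    value = (value << 20) | item_ref
    value = (value << 38) | serial
    epc = f"{value:024X}"
    # Same constraint the database enforces on rfid_tags.epc
    if not re.fullmatch(r"[0-9A-F]{24}", epc):
        raise ValueError("EPC failed format validation")
    return epc
```

In practice the serial argument would come from `nextval()` on the per-tenant sequence described above.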
Scope Constraint: RFID is counting-only. The rfid_tags table tracks tag status as active, void, or lost. There are no sold_at, transferred_at, or sold_order_id fields — sales and transfers are tracked by the core inventory system via barcode, not RFID.
5.16.3 Tag Printing Parameters
RFID tags are printed and encoded using dedicated RFID-enabled label printers. The POS system manages the full tag printing lifecycle: template design, job queue, encoding, and verification.
Supported Printer Models:
| Model | Manufacturer | DPI | Connection | RFID Position |
|---|---|---|---|---|
| ZD621R | Zebra | 300 | Network, USB | Center |
| ZD500R | Zebra | 203 | Network, USB, Bluetooth | Center |
| CL4NX | SATO | 305 | Network, USB | Left |
| MX240P | TSC | 203 | Network, USB | Right |
Template Types:
| Type | Use Case | Typical Size |
|---|---|---|
| `hang_tag` | Clothing hang tags with price and size | 2" x 3" |
| `price_tag` | Shelf price labels with EPC | 1.5" x 1" |
| `label` | Adhesive labels for boxes/bins | 4" x 2" |
Templates use ZPL (Zebra Programming Language) format. SATO and TSC printers accept ZPL via built-in translation.
Print Job Configuration:
| Setting | Default | Description |
|---|---|---|
| `default_priority` | 5 | Job priority (1=highest, 10=lowest) |
| `max_retry_attempts` | 3 | Retries for failed tag encoding |
| `job_timeout_minutes` | 30 | Max time before job is marked failed |
| Default printer per location | — | Set in Admin Portal > RFID > Printers |
Business Rules:
- Large print jobs (>1,000 tags) should be split into sub-jobs of 500-1,000 tags for progress tracking
- Failed tags within a job can be retried individually without resubmitting the entire job
- If the assigned printer goes offline mid-job, the job pauses until the printer recovers (no automatic failover in v1.0)
5.16.4 Scan Session Configuration
RFID scan sessions are the core counting operation. A session represents a single counting activity — from starting the reader to submitting results.
Session Types (Counting Only):
| Type Code | Name | Scope | Typical Items |
|---|---|---|---|
| `full_inventory` | Full Store Count | All products at a location | 2,000 – 100,000+ |
| `cycle_count` | Cycle Count | Rolling partial count by category | 200 – 2,000 |
| `spot_check` | Spot Check | Discrepancy verification | 10 – 50 |
| `find_item` | Find Item | Locate a specific SKU using reader | 1 |
Note: `receiving` is NOT an RFID session type. Receiving uses barcode scanners (Section 4.4).
Session Parameters:
| Setting | Key | Default | Description |
|---|---|---|---|
| Session Timeout | session_timeout_minutes | 480 (8 hours) | Max session duration before auto-expire |
| Auto-Save Interval | auto_save_interval_seconds | 30 | Frequency of SQLite checkpoint writes on mobile device |
| Chunk Upload Size | chunk_upload_size | 5,000 | Events per upload chunk when syncing to server |
| RSSI Threshold | min_rssi_threshold | -70 dBm | Tag reads below this are filtered (phantom read prevention) |
Variance Thresholds:
| Variance % | Color | Action |
|---|---|---|
| 0% | Green | Auto-approve — no discrepancy |
| 1–2% | Yellow | Review recommended |
| 3–5% | Orange | Manager review required |
| >5% | Red | Mandatory recount with different operator |
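The tier mapping above can be sketched as a single function (illustrative; it assumes variance is compared as an absolute percentage, so sub-1% variances fall into the Yellow review band, and the function and tier names are hypothetical):

```python
def classify_variance(variance_percent: float) -> str:
    """Map an absolute count variance % onto the action tiers above."""
    v = abs(variance_percent)
    if v == 0:
        return "auto_approve"        # Green: no discrepancy
    if v <= 2:
        return "review_recommended"  # Yellow
    if v <= 5:
        return "manager_review"      # Orange
    return "mandatory_recount"       # Red: recount with a different operator
```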
Multi-Operator Support:
- A single session can have multiple operators (up to 10), each assigned to a section of the store
- Each operator scans independently using their own device
- Server merges results and deduplicates by EPC (keeps highest RSSI read)
- See Section 4.6.8 for the full multi-operator workflow
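The merge-and-deduplicate step above can be sketched as follows (illustrative; each read is assumed to be a record carrying its EPC and RSSI in dBm):

```python
def merge_operator_reads(reads: list) -> list:
    """Merge tag reads from all operators in a session, keeping the
    single strongest (highest RSSI) read per EPC."""
    best = {}
    for read in reads:
        epc = read["epc"]
        if epc not in best or read["rssi"] > best[epc]["rssi"]:
            best[epc] = read
    return list(best.values())
```

Two operators reading the same tag at -60 dBm and -48 dBm collapse to the single -48 dBm read, so a tag seen from two store sections is counted once.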
5.16.5 RFID Business Rules (YAML)
The following RFID-specific business rules are part of the consolidated configuration system (Section 5.19). They are documented here for reference and cross-referenced from the YAML block.
rfid_config:
  # EPC Encoding
  epc:
    company_prefix: ""            # Tenant-specific, set during onboarding
    partition: 5                  # 24-bit company prefix + 20-bit item reference
    filter: 3                     # Individual trade item (GS1 standard)
    indicator: "0"                # SGTIN indicator digit
    format: "SGTIN-96"            # 96-bit EPC standard
    serial_strategy: "sequence"   # PostgreSQL SEQUENCE (not column counter)
    format_regex: "^[0-9A-F]{24}$"
  # Reader Hardware
  readers:
    supported_models:
      - "MC3390R"                 # Handheld gun, 20 ft range
      - "RFD40"                   # Phone sled, 12 ft range
      - "FX9600"                  # Fixed reader, 30 ft range
    claim_code_length: 6
    claim_code_expiry_hours: 24
    heartbeat_interval_seconds: 300
    offline_threshold_minutes: 15
  # Scanning Sessions
  scanning:
    session_types:
      - "full_inventory"
      - "cycle_count"
      - "spot_check"
      - "find_item"
      # NOTE: "receiving" is NOT an RFID session type
    min_rssi_threshold: -70       # dBm, tags weaker than this are filtered
    auto_save_interval_seconds: 30
    session_timeout_minutes: 480  # 8 hours max
    chunk_upload_size: 5000       # Events per upload chunk
    max_operators_per_session: 10
  # Tag Printing
  printing:
    default_priority: 5           # 1=highest, 10=lowest
    max_retry_attempts: 3
    job_timeout_minutes: 30
    max_tags_per_sub_job: 1000    # Large jobs split into sub-jobs
  # Variance Thresholds
  variance:
    auto_approve_threshold_percent: 0
    review_threshold_percent: 2
    manager_review_threshold_percent: 5
    recount_required_threshold_percent: 5   # >5% = mandatory recount (Section 5.16.4)
5.16.6 Integration Hub Reference
Note: The Integration Hub (integration registry, credentials storage, Shopify/Amazon/Google configurations, sync logging, and health dashboard) has been consolidated into Module 6, Section 6.11. RFID is a first-party subsystem and does NOT use the Integration Hub — it connects directly to the central API via REST endpoints documented in Chapter 18 and Appendix A.
5.17 Loyalty & Rewards Settings
Scope: Configurable parameters for the loyalty program, tier thresholds, reward redemption rates, and gift card settings. This section defines the settings – the configurable values that govern loyalty behavior. The loyalty rules (tier upgrade/downgrade logic, point accrual timing, redemption application in the payment flow) are defined in Module 2 (Customers).
Cross-Reference: See Module 2, Section 2.6 for loyalty tier upgrade/downgrade rules, point accrual logic, and redemption flow. See Module 1, Section 1.15 for loyalty redemption in the payment calculation sequence.
5.17.1 Point Configuration
| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| Base Earn Rate | points_per_dollar | Integer | 1 | Points earned per dollar spent (before tier multiplier). Applied to the post-tax total. |
| Points Expiry | points_expiry_months | Integer | 12 | Months after last earning activity before points expire. 0 = never expire. |
| Exclude Tax | exclude_tax_from_points | Boolean | false | If true, points are calculated on the pre-tax subtotal. |
| Exclude Discounted Amount | exclude_discounts_from_points | Boolean | false | If true, points are calculated on the original price, not the discounted price. |
5.17.2 Tier Thresholds
Tier definitions are configurable. The system supports up to 4 tiers. Each tier defines a spend threshold, point multiplier, and automatic discount percentage.
| Tier | Code | Annual Spend Threshold | Point Multiplier | Auto Discount % | Description |
|---|---|---|---|---|---|
| Bronze | BRONZE | $0 (default tier) | 1.0x | 0% | Entry tier – all new customers start here |
| Silver | SILVER | $1,000 | 1.5x | 5% | Mid-tier – 50% more points per dollar, 5% discount on all purchases |
| Gold | GOLD | $5,000 | 2.0x | 10% | Premium tier – double points, 10% automatic discount |
| Platinum | PLATINUM | $10,000 | 3.0x | 15% | Top tier – triple points, 15% automatic discount |
Business Rules:
- Tier thresholds are evaluated against the customer’s rolling 12-month spend total. The evaluation period resets annually on the customer’s enrollment anniversary date.
- The automatic discount is applied before any manual or promotional discounts in the calculation order (see Module 1 discount application order).
- Point multiplier applies to the base `points_per_dollar` rate. A Gold customer earning 1 point per dollar receives 2 points per dollar.
Cross-Reference: See Module 2, Section 2.6 for tier upgrade trigger logic, downgrade grace period, and tier evaluation cadence.
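Putting the base earn rate and tier multiplier together (a worked sketch; rounding down to whole points is an assumption here — the authoritative accrual rules live in Module 2):

```python
def points_earned(post_tax_total: float, tax: float,
                  points_per_dollar: int = 1, tier_multiplier: float = 1.0,
                  exclude_tax_from_points: bool = False) -> int:
    """Points for one sale: base rate x tier multiplier, applied to the
    post-tax total (or the pre-tax subtotal when exclude_tax is set)."""
    basis = post_tax_total - tax if exclude_tax_from_points else post_tax_total
    return int(basis * points_per_dollar * tier_multiplier)
```

For example, a Gold customer (2.0x) on a $50.00 post-tax sale earns 100 points.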
5.17.3 Reward Redemption
| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| Redemption Rate | redemption_rate | Integer | 100 | Points required for $1.00 discount |
| Minimum Redemption | minimum_redemption | Integer | 100 | Minimum points that can be redeemed in a single transaction |
| Maximum Redemption % | max_redemption_percent | Integer | 50 | Maximum percentage of the transaction total payable by points (0-100) |
| Allow Partial Redemption | allow_partial_redemption | Boolean | true | Whether customers can redeem a subset of their available points |
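How these four settings interact can be sketched as follows (an illustrative helper; it assumes whole-point redemption and leaves `allow_partial_redemption` handling aside):

```python
def max_redeemable_points(available_points: int, transaction_total: float,
                          redemption_rate: int = 100,
                          minimum_redemption: int = 100,
                          max_redemption_percent: int = 50) -> int:
    """Largest point redemption permitted on a transaction, or 0 when
    the minimum redemption cannot be met."""
    # Dollar cap from the max-percent rule, converted back into points
    cap_dollars = transaction_total * max_redemption_percent / 100
    cap_points = int(cap_dollars * redemption_rate)
    redeemable = min(available_points, cap_points)
    return redeemable if redeemable >= minimum_redemption else 0
```

With 2,500 available points on a $30.00 sale under the defaults, the 50% cap limits redemption to 1,500 points ($15.00); a balance of 80 points redeems nothing because it is below the 100-point minimum.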
5.17.4 Gift Card Settings
| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| Predefined Denominations | denominations | Array[Decimal] | [10, 25, 50, 100] | Quick-select amounts shown at POS during gift card purchase |
| Allow Custom Amount | allow_custom_amount | Boolean | true | Whether cashiers can enter an arbitrary gift card amount |
| Minimum Load | minimum_load | Decimal(10,2) | 10.00 | Minimum dollar amount for initial activation or reload |
| Maximum Load | maximum_load | Decimal(10,2) | 500.00 | Maximum dollar amount for initial activation or reload |
| Expiry Months | expiry_months | Integer | 0 | Months from activation before the gift card expires. 0 = no expiry (most restrictive jurisdiction default). |
| Allow Reload | allow_reload | Boolean | true | Whether depleted or partially-used gift cards can be reloaded |
Jurisdiction Rules:
- Default expiry is `0` (no expiry), conforming to the most restrictive US jurisdiction (California).
- Tenants operating in states that permit expiry can override `expiry_months` to a compliant value (e.g., Virginia: minimum 60 months).
- Jurisdiction-specific cash-out rules (e.g., California requires cash redemption below $10.00) are enforced at the POS transaction level per Module 1 gift card rules.
5.17.5 Loyalty Settings Data Model
Loyalty Settings Table
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | UUID | Yes | Primary key |
| `tenant_id` | UUID | Yes | FK to tenants table – owning tenant (unique constraint) |
| `points_per_dollar` | Integer | Yes | Base earn rate (default: 1) |
| `points_expiry_months` | Integer | Yes | Months to expiry, 0 = never (default: 12) |
| `exclude_tax_from_points` | Boolean | Yes | Default: false |
| `exclude_discounts_from_points` | Boolean | Yes | Default: false |
| `redemption_rate` | Integer | Yes | Points per $1.00 discount (default: 100) |
| `minimum_redemption` | Integer | Yes | Minimum redeemable points (default: 100) |
| `max_redemption_percent` | Integer | Yes | Max % of transaction payable by points (default: 50) |
| `allow_partial_redemption` | Boolean | Yes | Default: true |
| `tier_config` | JSON | Yes | JSON array of tier objects: [{code, name, threshold, multiplier, discount_percent}] |
| `gift_card_denominations` | JSON | Yes | JSON array of decimal amounts (default: [10, 25, 50, 100]) |
| `gift_card_allow_custom` | Boolean | Yes | Default: true |
| `gift_card_min_load` | Decimal(10,2) | Yes | Default: 10.00 |
| `gift_card_max_load` | Decimal(10,2) | Yes | Default: 500.00 |
| `gift_card_expiry_months` | Integer | Yes | Default: 0 |
| `gift_card_allow_reload` | Boolean | Yes | Default: true |
| `created_at` | DateTime | Yes | Record creation timestamp |
| `updated_at` | DateTime | Yes | Last modification timestamp |
5.18 Audit Configuration
Scope: Configuration of the audit logging system – which event categories are tracked, how long logs are retained, archival policies, and export capabilities. The audit log provides a tamper-evident record of every significant action in the system for compliance, investigation, and operational accountability.
Cross-Reference: See Module 5, Section 5.13 for approval workflow audit trails. See Module 4, Section 4.9 for inventory movement history (the inventory-specific audit trail).
5.18.1 Audit Categories
Each audit category can be independently toggled on or off. Disabling a category stops new log entries from being created for events in that category. Existing log entries are never deleted by disabling a category.
| Category Code | Description | Default | Example Events |
|---|---|---|---|
| `LOGIN` | User login and logout events | On | Login success, login failure, logout, session timeout |
| `SALE` | Transaction completed | On | Sale finalized, split payment processed |
| `RETURN` | Return processed | On | Return with receipt, return without receipt, exchange |
| `VOID` | Transaction voided | On | Same-day void, void with manager approval |
| `ADJUSTMENT` | Inventory adjustment | On | Manual qty adjustment, count correction applied |
| `PRICE_CHANGE` | Product price modified | On | Retail price change, cost update, markdown applied |
| `DISCOUNT` | Discount applied | On | Line discount, global discount, coupon applied, loyalty redemption |
| `PO` | Purchase order actions | On | PO created, PO approved, PO submitted, PO received, PO closed |
| `TRANSFER` | Inter-store transfer actions | On | Transfer requested, approved, shipped, received, completed |
| `USER_MGMT` | User account actions | On | User created, role changed, user deactivated, password reset |
| `SETTINGS` | System settings changed | On | Tax rate changed, business rule modified, integration updated |
| `INVENTORY_COUNT` | Count session actions | On | Count started, count submitted, variance approved, count finalized |
5.18.2 Retention Configuration
| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| Retention Period | retention_days | Integer | 365 | Days to keep detailed audit log entries in the primary database |
| Archive Enabled | archive_enabled | Boolean | true | Whether entries older than retention_days are moved to archive storage |
| Archive Format | archive_format | Enum | COMPRESSED_JSON | Format for archived records: COMPRESSED_JSON (gzip), CSV |
| Purge Archived | purge_archived_after_days | Integer | 2190 | Days to retain archived records before permanent deletion (2190 = 6 years). Set to 0 to retain archives indefinitely. |
Retention Lifecycle:
flowchart LR
A["Active Log\n(Primary DB)"] -->|After retention_days| B["Archive Storage\n(Compressed JSON)"]
B -->|After purge_archived_after_days| C["Permanently Deleted"]
style A fill:#2d6a4f,stroke:#1b4332,color:#fff
style B fill:#264653,stroke:#1d3557,color:#fff
style C fill:#6c757d,stroke:#495057,color:#fff
Business Rules:
- Minimum `retention_days` is 90. The system rejects values below 90 to ensure basic operational audit capability.
- Minimum `purge_archived_after_days` is 365 (or 0 for indefinite). Values between 1 and 364 are rejected.
- The archival background job runs daily at 02:00 AM (tenant timezone). It processes entries older than `retention_days` in batches of 10,000 records.
- Archived records are stored with the same data fidelity as active records – no fields are dropped during archival.
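A guard implementing the two bounds above might look like this (a sketch; the function name and error messages are illustrative):

```python
def validate_retention_settings(retention_days: int,
                                purge_archived_after_days: int) -> None:
    """Reject retention values outside the documented bounds."""
    if retention_days < 90:
        raise ValueError("retention_days must be at least 90")
    # 0 means retain archives indefinitely; otherwise 365 is the floor
    if 1 <= purge_archived_after_days <= 364:
        raise ValueError(
            "purge_archived_after_days must be 0 (indefinite) or >= 365")
```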
5.18.3 Export Configuration
| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| Supported Formats | export_formats | Array[Enum] | ["CSV", "JSON", "PDF"] | Formats available for audit log export |
| Max Export Rows | max_export_rows | Integer | 10000 | Maximum rows per single export request. Larger exports must be split by date range. |
| Max Date Range | max_export_date_range_days | Integer | 365 | Maximum date range span allowed in a single export request |
| Include Archived | include_archived_in_export | Boolean | true | Whether exports can pull from archived records (slower but comprehensive) |
5.18.4 Audit Log Data Model
Audit Config Table
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | UUID | Yes | Primary key |
| `tenant_id` | UUID | Yes | FK to tenants table – owning tenant (unique constraint) |
| `categories_enabled` | JSON | Yes | Map of `category_code: boolean` (e.g., `{"LOGIN": true, "SALE": true, "VOID": true, ...}`) |
| `retention_days` | Integer | Yes | Default: 365 |
| `archive_enabled` | Boolean | Yes | Default: true |
| `archive_format` | Enum | Yes | COMPRESSED_JSON, CSV (default: COMPRESSED_JSON) |
| `purge_archived_after_days` | Integer | Yes | Default: 2190 |
| `max_export_rows` | Integer | Yes | Default: 10000 |
| `max_export_date_range_days` | Integer | Yes | Default: 365 |
| `created_at` | DateTime | Yes | Record creation timestamp |
| `updated_at` | DateTime | Yes | Last modification timestamp |
Audit Log Table
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | UUID | Yes | Primary key |
| `tenant_id` | UUID | Yes | FK to tenants table – owning tenant |
| `category_code` | Enum | Yes | One of the 12 category codes defined above |
| `action` | String(100) | Yes | Specific action (e.g., SALE_COMPLETED, USER_CREATED, PO_APPROVED) |
| `actor_user_id` | UUID | Yes | FK to users table – user who performed the action |
| `actor_role` | String(50) | Yes | Role of the user at time of action (captured for audit, not FK) |
| `location_id` | UUID | No | FK to locations table – location where the action occurred (NULL for tenant-wide actions) |
| `register_id` | UUID | No | FK to registers table – register involved (NULL for non-POS actions) |
| `entity_type` | String(50) | Yes | Type of entity affected (e.g., TRANSACTION, PRODUCT, USER, PO) |
| `entity_id` | UUID | Yes | FK to the affected entity record |
| `details` | JSON | No | Structured JSON capturing before/after values, amounts, reason codes, and other context |
| `ip_address` | String(45) | No | IP address of the client that initiated the action |
| `occurred_at` | DateTime | Yes | Timestamp when the action occurred (event time, not log write time) |
| `created_at` | DateTime | Yes | Record creation timestamp (log write time) |
Business Rules:
- Audit log entries are immutable. No UPDATE or DELETE operations are permitted on the `audit_log` table. Only INSERT and SELECT are allowed.
- The `details` JSON field captures change context. For settings changes, it records `{"old_value": ..., "new_value": ...}`. For transactions, it records key financial fields (`total`, `tax`, `discount`, `payment_method`).
- Audit logs are queryable by category, date range, user, location, and entity type. The Admin Portal provides a searchable, filterable audit log viewer.
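As a sketch of how one settings change lands in the append-only log (field names follow the Audit Log Table above; the helper and the `SETTINGS_CHANGED` action name are illustrative):

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

def settings_change_entry(tenant_id: str, actor_user_id: str, actor_role: str,
                          entity_id: str, old_value, new_value) -> dict:
    """Compose one immutable audit_log row for a settings change.

    The details field records the before/after values; rows are
    INSERT-only, so corrections are new entries, never updates.
    """
    now = datetime.now(timezone.utc).isoformat()
    return {
        "id": str(uuid4()),
        "tenant_id": tenant_id,
        "category_code": "SETTINGS",
        "action": "SETTINGS_CHANGED",
        "actor_user_id": actor_user_id,
        "actor_role": actor_role,
        "entity_type": "SETTINGS",
        "entity_id": entity_id,
        "details": json.dumps({"old_value": old_value, "new_value": new_value}),
        "occurred_at": now,   # event time
        "created_at": now,    # log write time
    }
```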
5.19 Business Rules Configuration – Consolidated YAML
Scope: Single, authoritative source for all configurable business rules across the entire POS system. This section consolidates business rules from Module 1 (Sales), Module 2 (Customers), Module 3 (Catalog), and Module 4 (Inventory) into one centralized YAML configuration. All values shown are defaults and can be overridden at tenant or store level.
Cross-Reference: Individual module sections define the behavior that these rules govern. This section defines the configurable values.
5.19.1 Sales Configuration
# ============================================
# SALES MODULE BUSINESS RULES
# ============================================
# All values shown are defaults and can be
# overridden at tenant or store level.
# ============================================
sales_config:
  # ------------------------------------------
  # RETURN POLICY
  # ------------------------------------------
  return_policy:
    # Full refund period (with receipt)
    full_refund_days: 30
    # Store credit only period (with receipt)
    store_credit_days: 90
    # Restocking fee for opened items (percentage)
    restocking_fee_percent: 15
    # Categories exempt from restocking fee
    restocking_fee_exempt_categories:
      - "clothing"
      - "accessories"
    # Categories marked as final sale (no returns)
    final_sale_categories:
      - "clearance"
      - "as-is"
    # Channel-specific policies
    online_policy:
      return_days: 30
      exchange_days: 30
      exclude_shipping_fees: true
      exclude_processing_fees: true
      receipt_required: true
    in_store_policy:
      return_hours: 24
      exchange_hours: 24
      receipt_required: true
      receipt_scan_validation: true
  # ------------------------------------------
  # PARKED SALES
  # ------------------------------------------
  parked_sales:
    # Maximum parked sales per terminal
    max_per_terminal: 5
    # Time-to-live before auto-expiry (hours)
    ttl_hours: 4
    # Inventory reservation type
    reservation_type: "soft"  # soft = visible with warning, hard = blocked
  # ------------------------------------------
  # SPECIAL ORDERS
  # ------------------------------------------
  special_orders:
    # Minimum deposit percentage
    minimum_deposit_percent: 50
    # Maximum days to hold after arrival
    pickup_deadline_days: 30
    # Auto-cancel after missed pickup (days)
    abandonment_days: 45
  # ------------------------------------------
  # HOLD FOR PICKUP
  # ------------------------------------------
  hold_for_pickup:
    # Default hold duration
    default_days: 7
    # Maximum hold extension allowed
    max_days: 30
    # Days before expiry to send reminder
    reminder_days_before: 2
  # ------------------------------------------
  # TRANSFERS & RESERVATIONS
  # ------------------------------------------
  transfers:
    # Require full payment before processing
    require_full_payment: true
    # Estimated transit days (for display)
    estimated_transit_days: 3
  reservations:
    # Require full payment before reserving
    require_full_payment: true
    # Default reservation duration
    default_hold_days: 7
    # Auto-refund after expiry
    auto_refund_on_expiry: true
  # ------------------------------------------
  # SHIP TO CUSTOMER
  # ------------------------------------------
  ship_to_customer:
    enabled: true
    require_full_payment: true
    include_shipping_in_total: true
    # Carrier integration
    carriers:
      - provider: "configured_per_tenant"
        api_key: "configured_per_tenant"
        test_mode: true
    # Shipping options
    shipping_options:
      standard:
        label: "Standard (3-5 business days)"
        enabled: true
      express:
        label: "Express (1-2 business days)"
        enabled: true
      overnight:
        label: "Overnight"
        enabled: false
  # ------------------------------------------
  # CASH DRAWER
  # ------------------------------------------
  cash_drawer:
    # Variance tolerance before manager approval required
    variance_tolerance: 5.00
    # Require blind count (staff can't see expected)
    blind_count_enabled: true
    # Maximum float amount
    max_opening_float: 500.00
  # ------------------------------------------
  # DISCOUNTS & PRICING
  # ------------------------------------------
  discounts:
    # Maximum line item discount without manager approval
    max_line_discount_percent: 20
    # Maximum global discount without manager approval
    max_global_discount_percent: 15
    # Reason codes required for discounts
    require_reason_code: true
    # Discount application order
    application_order:
      - "price_tier"
      - "line_discount"
      - "auto_promo"
      - "global_discount"
      - "coupon"
      - "tax"
      - "loyalty_redemption"
  # ------------------------------------------
  # COMMISSIONS
  # ------------------------------------------
  commissions:
    # Default commission rate (percentage of sale)
    default_rate_percent: 2.0
    # Higher rate for specific categories
    category_rates:
      electronics: 3.0
      services: 5.0
    # Void reverses commission (full)
    reverse_on_void: true
    void_reversal_method: "full"
    # Return reduces commission (proportional)
    reduce_on_return: true
    return_reversal_method: "proportional"
  # ------------------------------------------
  # OFFLINE MODE
  # ------------------------------------------
  offline_mode:
    # Maximum transactions to queue locally
    max_queue_size: 100
    # Auto-sync interval when online (seconds)
    sync_interval_seconds: 30
    # Conflict resolution strategy
    conflict_strategy: "server_wins_with_review"
    # Operations allowed offline
    allowed_offline:
      - "sale_new"
      - "return_with_receipt"
      - "price_check"
      - "parked_sale_create"
      - "parked_sale_retrieve"
    # Operations blocked offline
    blocked_offline:
      - "customer_create"
      - "on_account_payment"
      - "gift_card_activation"
      - "gift_card_reload"
      - "multi_store_inventory"
      - "transfer_request"
      - "reservation_create"
  # ------------------------------------------
  # PAYMENT INTEGRATION
  # ------------------------------------------
  payment_integration:
    # PCI compliance level
    pci_scope: "SAQ-A"
    # Integration type
    integration_type: "semi_integrated"
    # Terminal timeout (seconds)
    payment_timeout_seconds: 60
    connection_timeout_seconds: 10
    # Batch close time (24-hour format)
    batch_close_time: "23:00"
    # Same-day void allowed
    same_day_void: true
  # ------------------------------------------
# THIRD-PARTY FINANCING
# ------------------------------------------
third_party_financing:
affirm:
enabled: true
minimum_order_amount: 50.00
maximum_order_amount: 5000.00
merchant_id: "configured_per_tenant"
webhook_url: "/api/webhooks/affirm"
Sales Configuration Field Reference
| Key | Type | Default | Description |
|---|---|---|---|
| return_policy.full_refund_days | Integer | 30 | Calendar days from purchase for full original-method refund |
| return_policy.store_credit_days | Integer | 90 | Calendar days from purchase for store credit refund |
| return_policy.restocking_fee_percent | Integer | 15 | Percentage deducted as restocking fee on non-exempt items |
| parked_sales.max_per_terminal | Integer | 5 | Maximum concurrent parked sales per register |
| parked_sales.ttl_hours | Integer | 4 | Hours before an unrecalled parked sale auto-expires |
| special_orders.minimum_deposit_percent | Integer | 50 | Minimum deposit required at special order creation |
| special_orders.abandonment_days | Integer | 45 | Days after arrival before abandoned special order is forfeited |
| hold_for_pickup.default_days | Integer | 7 | Default hold duration in days |
| cash_drawer.variance_tolerance | Decimal | 5.00 | Dollar variance before manager review is required at close |
| cash_drawer.blind_count_enabled | Boolean | true | Hide expected drawer total from staff during count |
| discounts.max_line_discount_percent | Integer | 20 | Maximum per-line discount before manager approval |
| discounts.max_global_discount_percent | Integer | 15 | Maximum transaction-wide discount before manager approval |
| commissions.default_rate_percent | Decimal | 2.0 | Default commission percentage of sale amount |
| offline_mode.max_queue_size | Integer | 100 | Maximum offline-queued transactions before blocking new sales |
| payment_integration.payment_timeout_seconds | Integer | 60 | Seconds to wait for payment terminal response |
| payment_integration.batch_close_time | String | “23:00” | Daily batch settlement time (24-hour format) |
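The discounts.application_order list above defines a fixed pipeline: each stage adjusts the running total before the next stage runs, which is why tax lands after all discounts but before loyalty redemption. The sketch below illustrates that ordering with hypothetical handler callables; it is not the platform's pricing engine.

```python
from decimal import Decimal

# Mirrors discounts.application_order from the configuration above.
APPLICATION_ORDER = [
    "price_tier", "line_discount", "auto_promo",
    "global_discount", "coupon", "tax", "loyalty_redemption",
]

def apply_adjustments(subtotal: Decimal, adjustments: dict) -> Decimal:
    """Run each registered adjustment stage in the configured order.

    `adjustments` maps stage name -> callable(amount) -> amount;
    stages without a registered handler are skipped.
    """
    total = subtotal
    for stage in APPLICATION_ORDER:
        handler = adjustments.get(stage)
        if handler is not None:
            total = handler(total)
    return total

# A 10% line discount is applied first, then 6% tax on the discounted amount:
# 100.00 -> 90.00 -> 95.40.
total = apply_adjustments(
    Decimal("100.00"),
    {
        "line_discount": lambda amt: amt * Decimal("0.90"),
        "tax": lambda amt: amt * Decimal("1.06"),
    },
)
```

Because the order is configuration-driven, reordering the list (for example, taxing before coupons in jurisdictions that require it) changes the result without code changes.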
5.19.2 Customer Configuration
# ============================================
# CUSTOMER MODULE BUSINESS RULES
# ============================================
customer_config:
# ------------------------------------------
# LOYALTY PROGRAM
# ------------------------------------------
loyalty:
# Points per dollar spent
points_per_dollar: 1
# Points required for $1 redemption
redemption_rate: 100
# Minimum points to redeem
minimum_redemption: 100
# Points expiry (months, 0 = never)
points_expiry_months: 12
# Maximum % of transaction payable by points
max_redemption_percent: 50
# ------------------------------------------
# CUSTOMER TIERS
# ------------------------------------------
customer_tiers:
bronze:
# No threshold - default tier
point_multiplier: 1.0
automatic_discount_percent: 0
silver:
annual_spend_threshold: 1000
point_multiplier: 1.5
automatic_discount_percent: 5
gold:
annual_spend_threshold: 5000
point_multiplier: 2.0
automatic_discount_percent: 10
platinum:
annual_spend_threshold: 10000
point_multiplier: 3.0
automatic_discount_percent: 15
# ------------------------------------------
# LAYAWAY
# ------------------------------------------
layaway:
# Minimum deposit percentage
minimum_deposit_percent: 20
# Maximum layaway duration (days)
max_duration_days: 90
# Minimum payment frequency (days)
payment_frequency_days: 30
# Forfeiture fee percentage (of deposit)
forfeiture_fee_percent: 10
# ------------------------------------------
# GIFT CARDS
# ------------------------------------------
gift_cards:
# Minimum load amount
minimum_load: 10.00
# Maximum load amount
maximum_load: 500.00
# Default expiry period (months from activation)
default_expiry_months: 0 # No expiry (most restrictive)
# Allow reload of depleted cards
allow_reload: true
# Jurisdiction-specific overrides
jurisdiction_rules:
virginia:
expiry_allowed: true
expiry_months: 60
inactivity_fee_allowed: true
inactivity_fee_months: 12
cash_out_threshold: 0
california:
expiry_allowed: false
inactivity_fee_allowed: false
cash_out_threshold: 10.00
cash_out_required: true
# ------------------------------------------
# PRIVACY & DATA RETENTION
# ------------------------------------------
privacy:
# Days to fulfill data export request
export_request_days: 30
# Days to fulfill deletion request
deletion_request_days: 30
# Auto-anonymize inactive customers (months, 0 = never)
auto_anonymize_months: 0
# Consent change audit retention (years)
consent_audit_retention_years: 7
# ------------------------------------------
# EXPORT LIMITS
# ------------------------------------------
exports:
# Maximum rows per CSV export
max_rows: 1000
# Maximum date range for reports (days)
max_date_range_days: 365
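The jurisdiction_rules block above overrides the tenant default per state. A minimal sketch of resolving the effective expiry, assuming a dict that mirrors the YAML (the helper itself is hypothetical):

```python
# Mirrors gift_cards.jurisdiction_rules from the configuration above.
JURISDICTION_RULES = {
    "virginia": {"expiry_allowed": True, "expiry_months": 60},
    "california": {"expiry_allowed": False},
}
DEFAULT_EXPIRY_MONTHS = 0  # tenant default: 0 = never expires

def effective_expiry_months(jurisdiction: str) -> int:
    """Months until expiry for a card sold in this jurisdiction (0 = never)."""
    rules = JURISDICTION_RULES.get(jurisdiction, {})
    if not rules.get("expiry_allowed", False):
        return 0  # jurisdictions that ban expiry fall back to "never"
    return rules.get("expiry_months", DEFAULT_EXPIRY_MONTHS)

va = effective_expiry_months("virginia")    # 60 months
ca = effective_expiry_months("california")  # 0 = never
```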
Customer Configuration Field Reference
| Key | Type | Default | Description |
|---|---|---|---|
| loyalty.points_per_dollar | Integer | 1 | Base points earned per dollar (before tier multiplier) |
| loyalty.redemption_rate | Integer | 100 | Points required to redeem $1.00 discount |
| loyalty.minimum_redemption | Integer | 100 | Minimum points a customer can redeem per transaction |
| loyalty.points_expiry_months | Integer | 12 | Months since last earn before points expire (0 = never) |
| loyalty.max_redemption_percent | Integer | 50 | Maximum percentage of transaction total payable by points |
| customer_tiers.silver.annual_spend_threshold | Integer | 1000 | Annual spend in dollars to qualify for Silver tier |
| customer_tiers.gold.annual_spend_threshold | Integer | 5000 | Annual spend in dollars to qualify for Gold tier |
| customer_tiers.platinum.annual_spend_threshold | Integer | 10000 | Annual spend in dollars to qualify for Platinum tier |
| layaway.minimum_deposit_percent | Integer | 20 | Minimum deposit as percentage of layaway total |
| layaway.max_duration_days | Integer | 90 | Maximum calendar days for layaway completion |
| layaway.forfeiture_fee_percent | Integer | 10 | Fee deducted from deposit on layaway cancellation |
| gift_cards.minimum_load | Decimal | 10.00 | Minimum dollar amount per activation or reload |
| gift_cards.maximum_load | Decimal | 500.00 | Maximum dollar amount per activation or reload |
| gift_cards.default_expiry_months | Integer | 0 | Months to expiry (0 = never, California baseline) |
| privacy.export_request_days | Integer | 30 | Days to fulfill customer data export request |
| privacy.auto_anonymize_months | Integer | 0 | Months of inactivity before auto-anonymization (0 = disabled) |
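The loyalty defaults above imply two calculations: earning (points per dollar times the tier multiplier) and redemption (capped both by the customer's balance and by max_redemption_percent of the transaction). A sketch under those defaults, with hypothetical helper names:

```python
import math
from decimal import Decimal

# Defaults from customer_config.loyalty above.
POINTS_PER_DOLLAR = 1
REDEMPTION_RATE = 100        # points required per $1.00 of discount
MAX_REDEMPTION_PERCENT = 50  # max % of the transaction payable by points

def points_earned(amount: Decimal, tier_multiplier: float) -> int:
    """Whole points earned on a purchase, after the tier multiplier."""
    return math.floor(float(amount) * POINTS_PER_DOLLAR * tier_multiplier)

def max_points_redeemable(total: Decimal, balance: int) -> int:
    """Largest point spend allowed on this transaction."""
    cap_dollars = total * Decimal(MAX_REDEMPTION_PERCENT) / 100
    cap_points = int(cap_dollars * REDEMPTION_RATE)
    return min(balance, cap_points)

# A Gold customer (2.0x multiplier) spending $84.50 earns 169 points.
earned = points_earned(Decimal("84.50"), 2.0)
# On a $40.00 sale with a 2,500-point balance, at most 2,000 points
# ($20.00, i.e. 50% of the total) may be redeemed.
redeemable = max_points_redeemable(Decimal("40.00"), 2_500)
```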
5.19.3 Catalog Configuration
# ============================================
# CATALOG MODULE BUSINESS RULES
# ============================================
catalog_config:
# ------------------------------------------
# PRODUCT LIFECYCLE
# ------------------------------------------
product_lifecycle:
# Default status for newly created products
default_status: "DRAFT"
# Valid status transitions
# DRAFT -> ACTIVE -> DISCONTINUED
# ACTIVE -> DRAFT (revert to draft for editing)
allowed_transitions:
DRAFT: ["ACTIVE"]
ACTIVE: ["DRAFT", "DISCONTINUED"]
DISCONTINUED: ["ACTIVE"] # Re-activate if needed
# Require at least one image before activation
require_image_on_activate: false
# Require barcode before activation
require_barcode_on_activate: true
# Require price before activation
require_price_on_activate: true
# ------------------------------------------
# PRICING
# ------------------------------------------
pricing:
# Price hierarchy levels (highest priority first)
price_hierarchy:
- "price_book_override"
- "channel_price"
- "sale_price"
- "default_price"
- "msrp"
# Require manager approval for markdown
markdown_approval_required: true
# Maximum markdown percentage without owner approval
max_markdown_percent: 50
# Minimum margin threshold (warn if below)
minimum_margin_warning_percent: 10
# Allow selling below cost
allow_below_cost_sale: false
# ------------------------------------------
# BARCODE
# ------------------------------------------
barcode:
# Default barcode format
default_format: "CODE128" # Options: "CODE128", "EAN13", "UPC_A"
# Auto-generate barcode if none provided
auto_generate: true
# Auto-generated barcode prefix (tenant-specific)
auto_prefix: "POS"
# Barcode uniqueness scope
uniqueness_scope: "tenant" # Options: "tenant", "global"
# ------------------------------------------
# CATEGORIES
# ------------------------------------------
categories:
# Maximum category nesting depth
max_depth: 4
# Require every product to have a category
require_category_on_product: true
# Allow product in multiple categories
allow_multi_category: false
# Default category sort order
default_sort: "name_asc"
# ------------------------------------------
# SHOPIFY SYNC
# ------------------------------------------
shopify_sync:
# Scheduled reconciliation interval (minutes)
reconciliation_interval_minutes: 15
# Field ownership model (determines which system can edit each field)
field_ownership:
pos_owned:
- "title"
- "vendor"
- "product_type"
- "variants.price"
- "variants.compare_at_price"
- "variants.sku"
- "variants.barcode"
- "variants.cost"
- "variants.weight"
shopify_owned:
- "body_html"
- "seo_title"
- "seo_description"
- "images"
- "tags"
bidirectional:
- "status"
- "variants.inventory_quantity"
# Conflict resolution for bidirectional fields
conflict_resolution: "pos_wins" # Options: "pos_wins", "shopify_wins", "latest_wins"
# Delete behavior when product discontinued in POS
delete_on_discontinue: false # Set to draft in Shopify, not delete
# Webhook retry on failure
webhook_retry_count: 3
webhook_retry_backoff_seconds: [5, 15, 45]
# ------------------------------------------
# SCANNER CONFIGURATION
# ------------------------------------------
scanner:
# Maximum tags per bulk lookup request
max_tags_per_request: 50
# Tag read timeout (milliseconds)
read_timeout_ms: 5000
# Auto-lookup on scan
auto_lookup_on_scan: true
# Scan confirmation beep
confirmation_beep_enabled: true
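The field_ownership model in shopify_sync above determines whether an inbound Shopify webhook edit is applied: POS-owned fields are rejected, Shopify-owned and bidirectional fields are accepted (with bidirectional conflicts then resolved by conflict_resolution). A sketch of that routing decision, using a subset of the configured fields; the function is illustrative, not the sync engine:

```python
# Subset of shopify_sync.field_ownership from the configuration above.
POS_OWNED = {"title", "vendor", "product_type", "variants.price",
             "variants.sku", "variants.barcode"}
SHOPIFY_OWNED = {"body_html", "seo_title", "seo_description", "images", "tags"}
BIDIRECTIONAL = {"status", "variants.inventory_quantity"}

def accept_from_shopify(field: str) -> bool:
    """True if a Shopify-side edit to this field should be applied in POS."""
    if field in POS_OWNED:
        return False  # POS is the source of truth; ignore the inbound edit
    return field in SHOPIFY_OWNED or field in BIDIRECTIONAL

ok_tags = accept_from_shopify("tags")             # Shopify-owned: accepted
ok_price = accept_from_shopify("variants.price")  # POS-owned: rejected
```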
Catalog Configuration Field Reference
| Key | Type | Default | Description |
|---|---|---|---|
| product_lifecycle.default_status | Enum | DRAFT | Status assigned to newly created products |
| product_lifecycle.require_barcode_on_activate | Boolean | true | Barcode must exist before product can transition to ACTIVE |
| pricing.markdown_approval_required | Boolean | true | Whether manager must approve markdown price reductions |
| pricing.max_markdown_percent | Integer | 50 | Maximum markdown before owner-level approval |
| pricing.allow_below_cost_sale | Boolean | false | Whether POS allows sale price below product cost |
| barcode.default_format | Enum | CODE128 | Default barcode symbology for auto-generated barcodes |
| barcode.auto_generate | Boolean | true | Auto-create barcode if product has no barcode at creation |
| categories.max_depth | Integer | 4 | Maximum category tree nesting levels |
| categories.require_category_on_product | Boolean | true | Products must be assigned to at least one category |
| shopify_sync.reconciliation_interval_minutes | Integer | 15 | Minutes between scheduled full reconciliation |
| shopify_sync.conflict_resolution | Enum | pos_wins | Tie-breaking strategy for bidirectional field conflicts |
| scanner.max_tags_per_request | Integer | 50 | Maximum scanner tags per bulk API request |
| scanner.read_timeout_ms | Integer | 5000 | Milliseconds before scanner read times out |
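The pricing.price_hierarchy above resolves to the first level that has a defined price. A minimal sketch of that first-match resolution, assuming prices are available as a per-level map (the lookup shape is illustrative):

```python
from typing import Optional
from decimal import Decimal

# Mirrors pricing.price_hierarchy, highest priority first.
PRICE_HIERARCHY = [
    "price_book_override", "channel_price",
    "sale_price", "default_price", "msrp",
]

def resolve_price(prices: dict) -> Optional[Decimal]:
    """Return the highest-priority non-null price, or None if unpriced."""
    for level in PRICE_HIERARCHY:
        price = prices.get(level)
        if price is not None:
            return price
    return None

# No override or channel price exists, so the sale price wins
# over the default price and MSRP.
price = resolve_price({
    "sale_price": Decimal("19.99"),
    "default_price": Decimal("24.99"),
    "msrp": Decimal("29.99"),
})
```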
5.19.4 Inventory Configuration
# ============================================
# INVENTORY MODULE BUSINESS RULES
# ============================================
# All values shown are defaults and can be
# overridden at tenant or store level.
# ============================================
inventory_config:
# ------------------------------------------
# GENERAL SETTINGS
# ------------------------------------------
general:
# Allow inventory to go negative (e.g., sell without stock)
allow_negative_inventory: false
# Costing method for COGS and valuation
costing_method: "weighted_average" # Options: "weighted_average", "fifo"
# Default currency for all inventory valuations
default_currency: "USD"
# Enforce reorder point monitoring
enforce_reorder_points: true
# Minimum display quantity on POS (show "Low Stock" below this)
min_display_qty: 3
# Show exact quantity or generic "In Stock" / "Low Stock"
show_exact_qty_on_pos: true
# ------------------------------------------
# INVENTORY STATUS MODEL
# ------------------------------------------
status_model:
# Valid inventory statuses
statuses:
- "AVAILABLE"
- "RESERVED"
- "QUARANTINE"
- "DAMAGED"
- "IN_TRANSIT"
- "ON_HOLD"
# Only these statuses allow sale
sellable_statuses:
- "AVAILABLE"
# Only these statuses allow transfer out
transferable_statuses:
- "AVAILABLE"
# ------------------------------------------
# PURCHASE ORDERS
# ------------------------------------------
purchase_orders:
# Auto-close PO threshold (% received to auto-complete)
auto_close_threshold_percent: 100
# Allow editing cost at time of receiving
allow_cost_editing_on_receive: true
# Default tax rate applied to PO lines
default_tax_rate: 0.0
# PO approval thresholds (value-based)
approval:
auto_approve_below: 500.00
manager_approval_threshold: 500.00
admin_approval_threshold: 5000.00
approval_expiry_days: 7
# Auto-PO generation
auto_po:
enabled: true
mode: "draft" # Options: "draft", "auto_submit"
minimum_po_value: 50.00
consolidate_by_vendor: true
# PO number format
number_format: "PO-{YEAR}-{SEQUENCE:5}"
# ------------------------------------------
# RECEIVING
# ------------------------------------------
receiving:
# Receiving mode: "open" or "strict"
mode: "open"
# Over-receive threshold (% above PO qty allowed)
over_receive_threshold_percent: 10
# Scanner mode for receiving
scanner_mode: "scan_primary" # Options: "scan_required", "scan_optional", "scan_primary"
# Auto-print labels on receive
auto_print_labels: true
# Label template for received items
label_template: "standard_barcode"
# Discrepancy handling (the "triple approach")
discrepancy:
allow_partial_receive: true
auto_quarantine_damaged: true
auto_create_rma_on_damage: true
cost_variance_alert_percent: 5
# Non-PO receiving
non_po_receiving:
enabled: true
require_reason: true
valid_reasons:
- "CUSTOMER_RETURN"
- "VENDOR_REPLACEMENT"
- "CONSIGNMENT"
- "FOUND_STOCK"
- "OTHER"
# ------------------------------------------
# REORDER & VELOCITY
# ------------------------------------------
reorder:
# Sales velocity calculation window (days)
velocity_window_days: 90
# Safety stock calculation (standard deviations)
safety_stock_sigma: 1.65 # ~95% service level
# Dead stock threshold (days with zero sales)
dead_stock_days: 90
# Seasonal adjustment
seasonal_adjustment_enabled: true
seasonal_history_years: 3
# Reorder point recalculation frequency
recalculation_frequency: "weekly" # Options: "daily", "weekly"
recalculation_day: "sunday"
# Minimum reorder quantity
min_reorder_qty: 1
# Round reorder qty to vendor's case pack size
round_to_case_pack: true
# ------------------------------------------
# COUNTING (STOCKTAKE)
# ------------------------------------------
counting:
# Freeze mode during full physical count
default_freeze_mode: "soft" # Options: "hard", "soft", "none"
# Blind count (hide expected qty from counter)
blind_count_enabled: true
# Scanner mode for counting
scanner_mode: "scan_primary"
# Variance threshold requiring manager approval (units)
variance_approval_threshold_units: 10
# Variance threshold requiring manager approval (%)
variance_approval_threshold_percent: 5
# Auto-adjust variances below threshold
auto_adjust_below_threshold: true
# Require second count for high-variance items
require_recount_above_percent: 20
# Supported count types
count_types:
- "full_physical"
- "cycle_count"
- "spot_check"
- "scanner_assisted"
- "on_demand"
# Cycle count frequency (days between counts per product)
cycle_count_interval_days:
A_items: 30
B_items: 60
C_items: 90
# ------------------------------------------
# ADJUSTMENTS
# ------------------------------------------
adjustments:
# Approval mode for manual adjustments
approval_mode: "threshold" # Options: "all", "threshold", "none"
# Threshold above which approval is required (units)
approval_threshold_units: 10
# Threshold above which approval is required (value)
approval_threshold_value: 100.00
# Default reason codes (built-in)
default_reason_codes:
- code: "DAMAGED"
label: "Damaged / Defective"
requires_note: false
- code: "THEFT"
label: "Theft / Shrinkage"
requires_note: false
- code: "COUNT_CORRECTION"
label: "Count Correction"
requires_note: false
- code: "VENDOR_RETURN"
label: "Returned to Vendor"
requires_note: false
- code: "FOUND_STOCK"
label: "Found Stock (Positive Adjustment)"
requires_note: true
- code: "SAMPLE"
label: "Removed for Sample / Display"
requires_note: false
- code: "DONATION"
label: "Donated"
requires_note: true
- code: "OTHER"
label: "Other"
requires_note: true
# Allow tenants to create custom reason codes
allow_custom_reason_codes: true
# Require note for all adjustments
require_note_for_all: false
# ------------------------------------------
# TRANSFERS
# ------------------------------------------
transfers:
# Require acceptance scan at destination
require_acceptance_at_destination: true
# Allow partial transfer receive
allow_partial_receive: true
# Transfer auto-suggestion
auto_suggest:
enabled: true
imbalance_threshold_days: 14
min_transfer_qty: 2
min_source_qty_after_transfer: 2
frequency: "daily"
# Transfer number format
number_format: "TRF-{YEAR}-{SEQUENCE:5}"
# Transit time estimate (default days)
default_transit_days: 3
# Auto-cancel unfulfilled transfer requests after N days
auto_cancel_unfulfilled_days: 14
# ------------------------------------------
# VENDOR RMA
# ------------------------------------------
rma:
# Allow overstock returns to vendor
overstock_returns_enabled: true
# Default restocking fee (% of cost)
default_restocking_fee_percent: 0
# Maximum restocking fee allowed
max_restocking_fee_percent: 25
# RMA expiry (days to ship back after vendor approval)
ship_back_deadline_days: 30
# Auto-create RMA for damaged items on receive
auto_create_on_damaged_receive: true
# RMA number format
number_format: "RMA-{YEAR}-{SEQUENCE:5}"
# Credit reconciliation reminder (days after shipment)
credit_followup_days: 30
# ------------------------------------------
# SERIAL & LOT TRACKING
# ------------------------------------------
serial_tracking:
# Require serial scan at POS sale
require_serial_at_sale: true
# Require serial scan at receiving
require_serial_at_receive: true
# Serial number format validation (regex, optional)
format_validation: null
lot_tracking:
# FIFO enforcement on sale
fifo_enforcement: true
# Lot number format
number_format: "LOT-{YEAR}{MONTH}-{SEQUENCE:4}"
# Track expiry dates
expiry_tracking_enabled: false # Clothing typically doesn't expire
# ------------------------------------------
# ALERTS & NOTIFICATIONS
# ------------------------------------------
alerts:
low_stock:
enabled: true
delivery: ["dashboard", "email_digest"]
email_digest_time: "07:00"
severity: "warning"
overstock:
enabled: true
delivery: ["dashboard"]
days_of_supply_threshold: 90
severity: "info"
shrinkage:
enabled: true
delivery: ["dashboard", "email_immediate"]
variance_threshold_percent: 5
severity: "critical"
aging_inventory:
enabled: true
delivery: ["dashboard", "email_digest"]
days_threshold: 90
email_digest_day: "monday"
severity: "warning"
po_overdue:
enabled: true
delivery: ["dashboard", "email"]
buffer_days: 3
severity: "warning"
# ------------------------------------------
# OFFLINE INVENTORY
# ------------------------------------------
offline:
# Conflict resolution strategy
conflict_resolution_strategy: "last_write_wins"
# Maximum queued inventory changes
max_queue_size: 500
# Warning threshold (% of queue capacity)
queue_warning_threshold_percent: 90
# Maximum offline duration before forcing reconnect (hours)
max_offline_hours: 24
# Operations allowed offline
allowed_offline:
- "sale_decrement"
- "return_increment"
- "adjustment"
- "count_entry"
- "parked_sale"
# Operations blocked offline
blocked_offline:
- "multi_store_lookup"
- "transfer_request"
- "online_fulfillment"
- "shopify_sync"
- "po_submission"
- "po_receiving"
- "rma_creation"
# ------------------------------------------
# ONLINE FULFILLMENT
# ------------------------------------------
online_fulfillment:
# Store assignment strategy
strategy: "nearest" # Options: "nearest", "round_robin", "priority"
# Allow split fulfillment across multiple stores
split_fulfillment_enabled: false
max_splits: 3
# Exclude HQ warehouse from online fulfillment
exclude_hq: false
# Minimum stock to retain at store after online order reservation
min_retain_qty: 1
# Shopify sync settings
shopify_sync:
reconciliation_interval_minutes: 15
webhook_retry_count: 3
webhook_retry_backoff: [5, 15, 45]
source_of_truth: "pos"
# ------------------------------------------
# POS INTEGRATION
# ------------------------------------------
pos_integration:
# Soft reservation behavior on add to cart
cart_reservation:
enabled: true
type: "soft"
payment_failure_hold_seconds: 30
# Parked sale reservation
parked_sale_reservation:
type: "soft"
show_warning_to_other_terminals: true
# Hold-for-pickup reservation
hold_for_pickup_reservation:
type: "hard"
# Return to stock default status
return_default_status: "AVAILABLE"
# Allow staff to override return status
return_status_override_allowed: true
# Override options
return_status_options:
- "AVAILABLE"
- "DAMAGED"
- "QUARANTINE"
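The weighted_average costing method named in general.costing_method blends each receipt into a single per-unit cost. A sketch of the standard calculation (the rounding precision here is an illustrative assumption):

```python
from decimal import Decimal, ROUND_HALF_UP

def weighted_average_cost(on_hand_qty: int, current_cost: Decimal,
                          received_qty: int, received_cost: Decimal) -> Decimal:
    """New per-unit cost after receiving stock at a different cost."""
    total_value = on_hand_qty * current_cost + received_qty * received_cost
    total_qty = on_hand_qty + received_qty
    return (total_value / total_qty).quantize(
        Decimal("0.0001"), rounding=ROUND_HALF_UP)

# 10 units on hand at $8.00 plus 30 received at $10.00
# -> ($80 + $300) / 40 = $9.50 average unit cost.
new_cost = weighted_average_cost(10, Decimal("8.00"), 30, Decimal("10.00"))
```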
Inventory Configuration Field Reference
| Key | Type | Default | Description |
|---|---|---|---|
| general.allow_negative_inventory | Boolean | false | Whether sales can proceed when stock is zero |
| general.costing_method | Enum | weighted_average | Valuation method for COGS calculation |
| general.min_display_qty | Integer | 3 | POS shows “Low Stock” warning below this threshold |
| purchase_orders.approval.auto_approve_below | Decimal | 500.00 | PO total below which no approval is needed |
| purchase_orders.approval.admin_approval_threshold | Decimal | 5000.00 | PO total requiring admin/owner approval |
| purchase_orders.auto_po.enabled | Boolean | true | Auto-generate draft POs when stock hits reorder point |
| receiving.mode | Enum | open | Staff sees expected qty; can receive different amount |
| receiving.over_receive_threshold_percent | Integer | 10 | Maximum over-receive without manager approval |
| reorder.velocity_window_days | Integer | 90 | Days of sales history for velocity calculation |
| reorder.safety_stock_sigma | Decimal | 1.65 | Standard deviations for safety stock (~95% service level) |
| reorder.dead_stock_days | Integer | 90 | Days of zero sales before flagging as dead stock |
| counting.default_freeze_mode | Enum | soft | Inventory freeze behavior during physical counts |
| counting.blind_count_enabled | Boolean | true | Hide expected quantities from counters |
| counting.variance_approval_threshold_percent | Integer | 5 | Variance percentage requiring manager approval |
| adjustments.approval_mode | Enum | threshold | When manual adjustments require manager approval |
| transfers.auto_suggest.imbalance_threshold_days | Integer | 14 | Days-of-supply imbalance to trigger auto-suggestion |
| transfers.default_transit_days | Integer | 3 | Default estimated transit time between locations |
| rma.ship_back_deadline_days | Integer | 30 | Days to ship RMA items after vendor approval |
| offline.max_queue_size | Integer | 500 | Maximum queued offline inventory changes |
| offline.max_offline_hours | Integer | 24 | Hours before system forces reconnect attempt |
| online_fulfillment.strategy | Enum | nearest | Store selection algorithm for online orders |
| online_fulfillment.min_retain_qty | Integer | 1 | Minimum stock to keep at store after online reservation |
| pos_integration.cart_reservation.type | Enum | soft | Reservation type when item is added to POS cart |
| pos_integration.return_default_status | Enum | AVAILABLE | Default inventory status for returned items |
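The reorder.safety_stock_sigma default (1.65, roughly a 95% service level) feeds the textbook reorder-point formula: expected demand over lead time plus a safety buffer proportional to demand variability. This is a hedged sketch of that standard formula; the platform's actual calculation also layers on seasonal adjustment and case-pack rounding, which are omitted here.

```python
import math

def reorder_point(avg_daily_demand: float,
                  demand_std_dev: float,
                  lead_time_days: float,
                  sigma: float = 1.65) -> int:
    """Demand over lead time plus safety stock at the given service level."""
    demand = avg_daily_demand * lead_time_days
    # Safety stock scales with daily demand variability and sqrt(lead time).
    safety_stock = sigma * demand_std_dev * math.sqrt(lead_time_days)
    return math.ceil(demand + safety_stock)

# Selling ~4 units/day (std dev 1.5) with a 7-day vendor lead time:
# 28 expected units + ~6.5 safety stock -> reorder at 35 units.
rp = reorder_point(avg_daily_demand=4.0, demand_std_dev=1.5, lead_time_days=7)
```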
5.20 Tenant Onboarding Wizard
Scope: Step-by-step guided setup workflow for provisioning a new tenant from initial registration through operational go-live readiness. The onboarding wizard ensures that every required configuration area is addressed before the tenant begins processing transactions.
Cross-Reference: All configuration sections referenced below (5.2 through 5.19) contain the detailed data models and business rules for each step.
5.20.1 Onboarding Flow
The onboarding wizard presents 13 sequential steps. Steps 1, 2, 3, 5, 7, 9, and 13 are mandatory for go-live; the remaining steps (4, 6, 8, 10, 11, and 12) are recommended but can be deferred.
flowchart TD
S1["Step 1: Company Info\n(5.2, 5.3)"] --> S2["Step 2: Locations\n(5.4)"]
S2 --> S3["Step 3: Registers\n(5.7)"]
S3 --> S4["Step 4: Printers\n(5.8)"]
S4 --> S5["Step 5: Users & Roles\n(5.5)"]
S5 --> S6["Step 6: Clock-In/Out\n(5.6)"]
S6 --> S7["Step 7: Tax\n(5.9)"]
S7 --> S8["Step 8: Units of Measure\n(5.10)"]
S8 --> S9["Step 9: Payment Methods\n(5.11)"]
S9 --> S10["Step 10: Email\n(5.15)"]
S10 --> S11["Step 11: Integrations\n(5.16)"]
S11 --> S12["Step 12: Business Rules\n(5.19)"]
S12 --> S13["Step 13: Go-Live Checklist\n(Validation)"]
S13 -->|All checks pass| GL["GO LIVE"]
S13 -->|Checks failed| FIX["Review Failures\n(Return to failing step)"]
FIX --> S13
style S1 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S2 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S3 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S4 fill:#264653,stroke:#1d3557,color:#fff
style S5 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S6 fill:#264653,stroke:#1d3557,color:#fff
style S7 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S8 fill:#264653,stroke:#1d3557,color:#fff
style S9 fill:#2d6a4f,stroke:#1b4332,color:#fff
style S10 fill:#264653,stroke:#1d3557,color:#fff
style S11 fill:#264653,stroke:#1d3557,color:#fff
style S12 fill:#264653,stroke:#1d3557,color:#fff
style S13 fill:#7b2d8e,stroke:#5a1d6e,color:#fff
style GL fill:#2d6a4f,stroke:#1b4332,color:#fff
style FIX fill:#c0392b,stroke:#922b21,color:#fff
5.20.2 Step Details
| Step | Name | Section Ref | Required | Description |
|---|---|---|---|---|
| 1 | Company Info | 5.2, 5.3 | Yes | Set tenant name, legal entity name, logo, timezone, base currency, date/time format, fiscal year start |
| 2 | Locations | 5.4 | Yes | Create all physical locations (stores and warehouses). Set address, phone, timezone override, and location type. |
| 3 | Registers | 5.7 | Yes | Add registers to each store location. Select profile (Full POS, Mobile POS, Inventory-Only). Pair physical devices. |
| 4 | Printers | 5.8 | No | Register receipt and label printers. Link printers to registers. Run network discovery if applicable. |
| 5 | Users & Roles | 5.5 | Yes | Create user accounts. Assign roles (Owner, Admin, Manager, Cashier, Inventory Clerk). Assign users to locations. |
| 6 | Clock-In/Out | 5.6 | No | Configure clock-in/clock-out settings. Enable time tracking for staff if needed. Skip for operations that do not require time tracking. |
| 7 | Tax | 5.9 | Yes | Assign tax jurisdiction to each store location. Review compound tax rates (State/County/City), tax calculation priority, and exemption handling. |
| 8 | Units of Measure | 5.10 | No | Review predefined UoMs. Create custom UoMs if needed (e.g., BUNDLE, SET, ROLL). Skip if standard UoMs are sufficient. |
| 9 | Payment Methods | 5.11 | Yes | Enable payment methods per location (Cash, Credit Card, Gift Card, Store Credit, etc.). Configure payment processor credentials. |
| 10 | Email | 5.15 | No | Configure SMTP or API email provider. Send test email. Enable/disable individual email templates. |
| 11 | Integrations | 5.16 | No | Connect Shopify store (enter shop URL, API key, access token). Verify connection. Configure sync mode. |
| 12 | Business Rules | 5.19 | No | Review all business rule defaults. Customize return policy, discount limits, offline mode settings, and inventory rules as needed. |
| 13 | Go-Live Checklist | – | Yes | Automated validation of all mandatory requirements. Displays pass/fail for each check. |
5.20.3 Go-Live Validation Rules
The go-live checklist runs automated validation against all mandatory configuration requirements. The tenant cannot begin processing transactions until all mandatory checks pass.
Mandatory Checks (must all pass):
| # | Validation Rule | Failure Message |
|---|---|---|
| 1 | At least 1 location created | “No locations configured. Create at least one store location.” |
| 2 | At least 1 location of type STORE | “No store locations found. At least one location must be a retail store.” |
| 3 | At least 1 register per store location | “Store ‘{location_name}’ has no registers. Add at least one register.” |
| 4 | At least 1 user with OWNER role | “No Owner user found. At least one user must have the Owner role.” |
| 5 | Tax jurisdiction assigned to each store location | “Store ‘{location_name}’ has no tax jurisdiction assigned. Assign a tax jurisdiction with at least one active rate.” |
| 6 | At least 1 payment method enabled per store location | “Store ‘{location_name}’ has no payment methods enabled. Enable at least one.” |
| 7 | Email sender configured | “No email provider configured. Configure SMTP or API email for receipts.” |
| 8 | Base currency set | “Base currency is not configured. Set the tenant’s base currency.” |
| 9 | Default timezone set | “Default timezone is not configured. Set the tenant’s timezone.” |
Recommended Checks (advisory, do not block go-live):
| # | Validation Rule | Advisory Message |
|---|---|---|
| 1 | Shopify integration connected | “Shopify integration is not connected. Online orders will not sync.” |
| 2 | At least 1 label printer per location | “No label printers configured. Barcode printing will be unavailable.” |
| 3 | Receipt printer linked to each Full POS register | “Register ‘{register_name}’ has no receipt printer linked.” |
| 4 | Custom fields configured (if applicable) | “No custom fields defined. Custom fields can be added later.” |
| 5 | Business rules reviewed | “Business rules are using system defaults. Review and customize as needed.” |
| 6 | Clock-in/out reviewed | “Clock-in/out settings not reviewed. Time tracking will use system defaults.” |
| 7 | Receipt header/footer customized | “Receipt template is using system defaults. Customize header and footer.” |
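The mandatory checks above can be sketched as a single validation pass that collects failure messages. This is a minimal illustration in Python, assuming a simplified view of tenant configuration — the field names (`type`, `registers`, `tax_jurisdiction`, `payment_methods`) are illustrative, not the platform's actual schema.

```python
def run_go_live_checks(locations, has_owner, email_configured, base_currency, timezone_name):
    """Return the list of mandatory-check failure messages; empty means go-live ready."""
    failures = []
    if not locations:
        failures.append("No locations configured. Create at least one store location.")
    stores = [loc for loc in locations if loc.get("type") == "STORE"]
    if locations and not stores:
        failures.append("No store locations found. At least one location must be a retail store.")
    for store in stores:
        name = store["name"]
        if store.get("registers", 0) < 1:
            failures.append(f"Store '{name}' has no registers. Add at least one register.")
        if not store.get("tax_jurisdiction"):
            failures.append(f"Store '{name}' has no tax jurisdiction assigned. "
                            "Assign a tax jurisdiction with at least one active rate.")
        if not store.get("payment_methods"):
            failures.append(f"Store '{name}' has no payment methods enabled. Enable at least one.")
    if not has_owner:
        failures.append("No Owner user found. At least one user must have the Owner role.")
    if not email_configured:
        failures.append("No email provider configured. Configure SMTP or API email for receipts.")
    if not base_currency:
        failures.append("Base currency is not configured. Set the tenant's base currency.")
    if not timezone_name:
        failures.append("Default timezone is not configured. Set the tenant's timezone.")
    return failures
```

Advisory checks would follow the same pattern but feed a separate, non-blocking list.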
5.20.4 Onboarding State Tracking
Onboarding Progress Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants table – owning tenant (unique constraint) |
| current_step | Integer | Yes | Current wizard step (1-14) |
| steps_completed | JSON | Yes | Map of step_number: {completed: boolean, completed_at: DateTime, skipped: boolean} |
| go_live_ready | Boolean | Yes | Whether all mandatory checks pass (default: false) |
| go_live_at | DateTime | No | Timestamp when the tenant was activated for live operations |
| started_at | DateTime | Yes | Timestamp when onboarding began |
| completed_at | DateTime | No | Timestamp when the wizard was fully completed (all 14 steps addressed) |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Steps can be completed in any order, though the wizard presents them sequentially as a recommended progression.
- Steps can be revisited and modified at any time before go-live.
- Steps marked as “No” in the Required column can be explicitly skipped. Skipping sets skipped: true for that step.
- After go-live, the onboarding wizard is no longer displayed. All configuration is managed through the standard Admin Portal settings pages.
- The onboarding wizard state is preserved so that if the tenant administrator abandons the wizard mid-way and returns later, progress is restored.
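The steps_completed map can be maintained with a small helper like the following sketch. Whether a skipped step records completed: false is an assumption here; the data model above only specifies that both flags exist.

```python
import json
from datetime import datetime, timezone

def mark_step(steps_completed_json, step_number, skipped=False):
    """Record one wizard step in the steps_completed JSON map (sketch)."""
    steps = json.loads(steps_completed_json)
    steps[str(step_number)] = {
        "completed": not skipped,  # assumption: a skipped step does not count as completed
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "skipped": skipped,
    }
    return json.dumps(steps)
```

Because the full map is persisted per tenant, restoring an abandoned wizard session is just a matter of reloading this JSON and resuming at current_step.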
5.21 User Stories & Acceptance Criteria
Scope: All user stories and Gherkin acceptance criteria for Module 5 (Setup & Configuration). Stories are organized into 17 epics covering all functional areas. Each epic includes user stories in standard format and one or more Gherkin feature files with acceptance scenarios.
Epic 5.A: System Settings & Branding
US-5.A.1: Configure Company Identity
- As a tenant administrator, I want to configure the company name, legal entity name, logo, and timezone so that the system reflects our brand identity across all interfaces and reports.
- Constraint: Logo must be PNG or SVG, max 2MB, min 200x200px. Timezone uses IANA identifiers.
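The logo constraints above translate into a straightforward validation, sketched here in Python; dimension extraction from the uploaded file is assumed to happen elsewhere.

```python
MAX_LOGO_BYTES = 2 * 1024 * 1024   # 2MB limit from the constraint
ALLOWED_FORMATS = {"png", "svg"}
MIN_DIMENSION_PX = 200

def validate_logo(fmt, size_bytes, width_px, height_px):
    """Return an error message, or None if the logo is acceptable."""
    if fmt.lower() not in ALLOWED_FORMATS:
        return "Logo must be PNG or SVG"
    if size_bytes > MAX_LOGO_BYTES:
        return "Logo must be under 2MB"
    if width_px < MIN_DIMENSION_PX or height_px < MIN_DIMENSION_PX:
        return "Logo must be at least 200x200px"
    return None
```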
US-5.A.2: Set Business Hours per Location
- As a tenant administrator, I want to set operating hours per location so that shift schedules and report periods align with actual store hours.
- Constraint: Hours are defined per day of week. Locations can have different hours.
US-5.A.3: Customize Branding
- As a tenant administrator, I want to customize the login page branding, primary color scheme, and admin portal header so that the system looks consistent with our brand.
- Constraint: Primary and secondary colors must be valid hex codes. A preview is available before saving.
Feature: System Settings Configuration
As a tenant administrator
I need to configure company identity and branding
So that the system reflects our business across all touchpoints
Background:
Given I am logged in as a user with "OWNER" role
And I am on the System Settings page
Scenario: Configure company identity on initial setup
When I enter "Nexus Clothing" as the tenant name
And I enter "Nexus Clothing LLC" as the legal entity name
And I upload a valid PNG logo "nexus-logo.png" (500x500px, 150KB)
And I select timezone "America/New_York"
And I select currency "USD"
And I click "Save Settings"
Then the settings should be saved successfully
And the header should display "Nexus Clothing"
And the logo should appear in the top-left corner
Scenario: Reject invalid logo upload
When I attempt to upload a logo file "oversized.png" (5MB)
Then the system should reject the upload with message "Logo must be under 2MB"
And the existing logo should remain unchanged
Scenario: Currency becomes immutable after first transaction
Given at least one transaction has been processed
When I attempt to change the base currency from "USD" to "CAD"
Then the system should block the change with message "Currency cannot be changed after transactions have been processed"
Epic 5.B: Location Management
US-5.B.1: Create Store and Warehouse Locations
- As a tenant administrator, I want to create store and warehouse locations with addresses and contact information so that the system knows the physical topology of the business.
- Constraint: Each location must have a unique name within the tenant. Location type is STORE or WAREHOUSE.
Feature: Location Management
As a tenant administrator
I need to define physical locations
So that the system maps to our real-world store topology
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: Create a new store location
When I navigate to Locations management
And I click "Add Location"
And I enter name "Georgetown Store"
And I select type "STORE"
And I enter address "3100 M Street NW, Washington, DC 20007"
And I enter phone "(202) 555-0100"
And I click "Save Location"
Then the location "Georgetown Store" should be created
Scenario: Prevent duplicate location names
Given a location named "Georgetown Store" already exists
When I attempt to create another location named "Georgetown Store"
Then the system should reject with message "A location with this name already exists"
Epic 5.C: User & Role Management
US-5.C.1: Create User Accounts
- As a tenant administrator, I want to create user accounts with profiles including name, email, phone, and PIN so that staff can access the system with appropriate credentials.
- Constraint: Email must be unique within the tenant. PIN must be exactly 4 digits. Username auto-generated from email.
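Two of these constraints are easy to pin down in code. A minimal sketch follows — note that the exact username-derivation rule is an assumption; the story only says the username is auto-generated from the email.

```python
import re

def valid_pin(pin):
    """True only for exactly four digits, per the constraint above."""
    return bool(re.fullmatch(r"\d{4}", pin))

def generate_username(email):
    """Illustrative rule: lowercase local part, stripped to letters, digits, and dots."""
    local_part = email.split("@", 1)[0]
    return re.sub(r"[^a-z0-9.]", "", local_part.lower())
```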
US-5.C.2: Configure Feature Toggles per Role
- As a tenant administrator, I want to configure which features each role can access so that staff only see functionality relevant to their job.
- Constraint: Feature toggles are grouped by module. Changes take effect on next login.
US-5.C.3: Assign Users to Multiple Locations
- As a tenant administrator, I want to assign a user to one or more locations so that I can set their default location, control reporting filters, and organize staff by store.
- Constraint: A user must be assigned to at least one location. Location assignments are informational — they set defaults and reporting scope but do not restrict which locations a user can process transactions at.
Feature: User and Role Management
As a tenant administrator
I need to manage staff accounts, roles, and location assignments
So that employees have appropriate defaults and reporting scope
Background:
Given I am logged in as a user with "OWNER" role
And locations "Georgetown Store" and "HQ Warehouse" exist
Scenario: Create a new cashier user
When I navigate to User Management
And I click "Add User"
And I enter first name "Sarah" and last name "Johnson"
And I enter email "sarah.johnson@nexus.com"
And I enter PIN "4521"
And I assign role "CASHIER"
And I assign location "Georgetown Store"
And I click "Create User"
Then user "Sarah Johnson" should be created with role "CASHIER"
And she should be assigned to "Georgetown Store"
And a welcome email should be queued
Scenario: Prevent duplicate email
Given user "sarah.johnson@nexus.com" already exists
When I attempt to create a user with email "sarah.johnson@nexus.com"
Then the system should reject with message "A user with this email already exists"
Scenario: Assign user to multiple locations (informational)
Given user "Mark Chen" exists with role "MANAGER"
When I edit user "Mark Chen"
And I add location assignment "HQ Warehouse"
And I click "Save"
Then "Mark Chen" should be assigned to both "Georgetown Store" and "HQ Warehouse"
And his primary location should be used as the default at login
And he should be able to process transactions at any tenant location regardless of assignment
Epic 5.D: Register & Device Management
US-5.D.1: Add Registers to a Location
- As a tenant administrator, I want to add registers to a store location so that the store can process transactions.
- Constraint: Register name must be unique within the location. Register number auto-increments.
US-5.D.2: Pair Devices with Registers
- As a tenant administrator, I want to pair a physical device (iPad, PC, terminal) with a register so that the device operates as that register.
- Constraint: Device pairing uses a one-time PIN code. One device per register at a time.
US-5.D.3: Assign Register Profiles
- As a tenant administrator, I want to assign a profile (Full POS, Mobile POS, Inventory-Only) to each register so that the interface matches the register’s purpose.
- Constraint: Full POS requires a receipt printer link. Mobile POS and Inventory-Only do not require printers.
Feature: Register and Device Management
As a tenant administrator
I need to configure registers and pair physical devices
So that store staff can process transactions
Background:
Given I am logged in as a user with "ADMIN" role
And location "Georgetown Store" exists
Scenario: Add a Full POS register
When I navigate to Registers for "Georgetown Store"
And I click "Add Register"
And I enter name "Main Counter"
And I select profile "FULL_POS"
And I click "Save Register"
Then register "Main Counter" should be created at "Georgetown Store"
And a configuration warning should show "No receipt printer linked"
And the register status should be "INACTIVE" until a device is paired
Scenario: Pair a device using PIN code
Given register "Main Counter" exists at "Georgetown Store"
When I click "Generate Pairing PIN" on the register
Then a 6-digit pairing PIN should be displayed with 5-minute expiry
When the POS device enters the pairing PIN "847293"
Then the device should be paired to "Main Counter"
And the register status should change to "ACTIVE"
Scenario: Prevent dual device pairing
Given register "Main Counter" is already paired to "iPad-001"
When I attempt to pair "iPad-002" to "Main Counter"
Then the system should prompt "This register is already paired to iPad-001. Unpair first?"
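The pairing flow above hinges on a short-lived one-time PIN. A minimal sketch, assuming the register service handles storage and single-use invalidation:

```python
import secrets
from datetime import datetime, timedelta, timezone

PAIRING_PIN_TTL = timedelta(minutes=5)  # 5-minute expiry from the scenario

def generate_pairing_pin(now=None):
    """Issue a zero-padded 6-digit PIN and its expiry timestamp."""
    now = now or datetime.now(timezone.utc)
    pin = f"{secrets.randbelow(1_000_000):06d}"
    return pin, now + PAIRING_PIN_TTL

def pin_is_valid(entered, issued_pin, expires_at, now):
    """A PIN pairs the device only if it matches and has not expired."""
    return entered == issued_pin and now < expires_at
```

Using `secrets` rather than `random` keeps the PIN unpredictable, which matters since it gates device access to a register.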
Epic 5.E: Printer Configuration
US-5.E.1: Register Printers
- As a tenant administrator, I want to register receipt and label printers at each location so that transactions can print receipts and staff can print barcode labels.
- Constraint: Printers are classified as RECEIPT or LABEL. Connection types: USB, NETWORK_IP, BLUETOOTH.
US-5.E.2: Link Printers to Registers
- As a tenant administrator, I want to link printers to specific registers so that each register knows which printer to use.
- Constraint: Each Full POS register requires exactly one PRIMARY_RECEIPT printer. LABEL and SECONDARY_RECEIPT are optional.
Feature: Printer Configuration
As a tenant administrator
I need to register and link printers to registers
So that receipts and labels print to the correct devices
Background:
Given I am logged in as a user with "ADMIN" role
And location "Georgetown Store" exists with register "Main Counter"
Scenario: Register a network receipt printer
When I navigate to Printers for "Georgetown Store"
And I click "Add Printer"
And I enter name "Main Counter Printer"
And I select type "RECEIPT"
And I select connection "NETWORK_IP"
And I enter address "192.168.1.50:9100"
And I select paper width "80MM"
And I click "Save Printer"
Then printer "Main Counter Printer" should be registered
And a health check should be initiated
And status should display "ONLINE" or "OFFLINE"
Scenario: Link receipt printer to register
Given printer "Main Counter Printer" exists and is ONLINE
When I navigate to register "Main Counter" printer assignments
And I assign "Main Counter Printer" as "PRIMARY_RECEIPT"
And I click "Save"
Then the configuration warning "No receipt printer linked" should be cleared
And "Main Counter" should print to "Main Counter Printer"
Scenario: USB printer cannot be shared
Given a USB printer "USB Receipt Printer" is linked to "Main Counter"
When I attempt to link "USB Receipt Printer" to register "Side Counter"
Then the system should reject with message "USB printers cannot be shared between registers"
Epic 5.F: Tax Configuration
US-5.F.1: Assign Tax Jurisdiction to Location
- As a tenant administrator, I want to assign a tax jurisdiction to each store location so that compound sales tax (State + County + City) is correctly calculated on transactions.
- Constraint: Each location references exactly one jurisdiction. A jurisdiction defines up to 3 rate levels. Rate changes per level are scheduled with an effective date.
US-5.F.2: Automatic Compound Tax Calculation
- As a cashier, I want tax to automatically calculate on each line item by summing all active rates for the store location’s jurisdiction so that I do not need to manually compute tax.
- Constraint: Tax follows priority: product exemption > customer exemption > jurisdiction compound rate.
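The priority chain and compound summation can be sketched as follows. Half-up rounding to the cent is an assumption here; the authoritative rounding rule lives with the transaction engine.

```python
from decimal import Decimal, ROUND_HALF_UP

def line_item_tax(amount, rates, product_exempt=False, customer_exempt=False):
    """Apply the stated priority: product exemption > customer exemption >
    sum of active jurisdiction rates (e.g. STATE + COUNTY + CITY)."""
    if product_exempt or customer_exempt:
        return Decimal("0.00")
    compound_percent = sum(rates, Decimal("0"))
    tax = amount * compound_percent / Decimal("100")
    return tax.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

For the VA-NFK jurisdiction in the scenarios below (4.300% + 0.700% + 1.000%), a $100.00 line yields $6.00 of tax.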
Feature: Tax Jurisdiction Configuration
As a tenant administrator
I need to configure tax jurisdictions and assign them to store locations
So that compound sales tax is correctly applied to all transactions
Background:
Given I am logged in as a user with "ADMIN" role
And location "Georgetown Store" exists in Virginia
Scenario: Create a tax jurisdiction and assign to location
When I navigate to Tax Jurisdictions
And I create jurisdiction "VA-NFK" named "Norfolk, Virginia"
And I add a STATE rate "Virginia State Tax" at 4.300%
And I add a COUNTY rate "Hampton Roads Regional Tax" at 0.700%
And I add a CITY rate "Norfolk City Tax" at 1.000%
And I assign jurisdiction "VA-NFK" to "Georgetown Store"
Then the effective compound tax rate for "Georgetown Store" should be 6.000%
And all future transactions at this location should apply 6.000% compound tax
Scenario: Schedule a future rate change at one level
Given "Georgetown Store" has jurisdiction "VA-NFK" with compound rate 6.000%
When I add a new STATE rate of "4.500%" with effective date "2027-01-01"
And I click "Save Tax Rate"
Then the system should show the current compound rate as 6.000%
And the scheduled STATE rate as 4.500% effective January 1, 2027
And the compound rate should automatically change to 6.200% on the effective date
Scenario: Tax exemption applied for exempt customer
Given "Georgetown Store" has a compound tax rate of 6.000%
And customer "City of Alexandria" is marked as tax-exempt with valid certificate
When a sale is processed for "City of Alexandria"
Then all line items should show tax as $0.00
And the receipt should display "Tax Exempt" with the certificate number
Epic 5.G: UoM Management
US-5.G.1: Manage Units of Measure
- As a tenant administrator, I want to view and manage the available units of measure so that products use the correct selling and purchasing units.
- Constraint: System-predefined UoMs cannot be modified or deleted. Custom UoMs can be deactivated.
US-5.G.2: Create Custom UoMs
- As a tenant administrator, I want to create custom units of measure with conversion factors so that I can handle specialty product measurements.
- Constraint: Custom UoMs must specify a category (QUANTITY, LENGTH, WEIGHT) and conversion factor to the base unit.
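Conversion between UoMs in the same category reduces to multiplying and dividing by each unit's factor to the category base unit. A sketch with illustrative sample data (base unit EACH for the QUANTITY category):

```python
from decimal import Decimal

# Illustrative factor table; real factors come from the tenant's UoM records.
FACTORS_TO_EACH = {
    "EACH": Decimal("1"),
    "PAIR": Decimal("2"),
    "DOZEN": Decimal("12"),
    "BUNDLE": Decimal("5"),
}

def convert(qty, from_uom, to_uom):
    """Convert via the base unit: qty * factor(from) / factor(to)."""
    return qty * FACTORS_TO_EACH[from_uom] / FACTORS_TO_EACH[to_uom]
```

This is also why the inverse conversion (1 EACH = 0.2 BUNDLE) can be auto-generated: it is just the reciprocal of the stored factor.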
Feature: Unit of Measure Management
As a tenant administrator
I need to manage units of measure
So that products are sold and purchased in the correct units
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: View predefined UoMs
When I navigate to Units of Measure management
Then I should see system UoMs including "Each", "Pair", "Dozen", "Case", "Yard"
And system UoMs should be marked as non-editable
Scenario: Create a custom UoM
When I click "Add Custom UoM"
And I enter code "BUNDLE" and name "Bundle of 5"
And I select category "QUANTITY"
And I enter conversion factor 5 to base unit "Each"
And I click "Save"
Then UoM "BUNDLE" should be created
And the conversion table should show "1 BUNDLE = 5 EACH"
And the inverse "1 EACH = 0.2 BUNDLE" should be auto-generated
Scenario: Prevent deleting a UoM in use
Given UoM "BUNDLE" is assigned to product "Sock Pack"
When I attempt to delete UoM "BUNDLE"
Then the system should reject with message "Cannot delete UoM in use. Deactivate instead."
Epic 5.H: Payment Methods Setup
US-5.H.1: Enable Payment Methods per Location
- As a tenant administrator, I want to enable or disable payment methods per location so that each store only accepts configured payment types.
- Constraint: At least one payment method must be enabled per store location. Cash cannot be disabled.
US-5.H.2: Configure Payment Processor
- As a tenant administrator, I want to configure the payment processor credentials so that credit card transactions are processed.
- Constraint: Credentials are encrypted at rest. Test transaction available for validation.
Feature: Payment Methods Setup
As a tenant administrator
I need to configure payment methods per location
So that each store can accept the correct payment types
Background:
Given I am logged in as a user with "ADMIN" role
And location "Georgetown Store" exists
Scenario: Enable standard payment methods
When I navigate to Payment Methods for "Georgetown Store"
And I enable "Cash", "Credit Card", "Gift Card", and "Store Credit"
And I disable "Affirm"
And I click "Save"
Then "Georgetown Store" should accept Cash, Credit Card, Gift Card, and Store Credit
And Affirm should not be available as a payment option at this location
Scenario: Prevent disabling all payment methods
When I attempt to disable all payment methods for "Georgetown Store"
Then the system should reject with message "At least one payment method must be enabled"
Scenario: Configure payment processor credentials
When I navigate to Payment Processor settings
And I enter merchant ID "MID_12345"
And I enter API key and secret
And I click "Test Connection"
Then the system should perform a $0.00 authorization test
And the result should display "Connection Successful" or an error message
Epic 5.I: Custom Fields
US-5.I.1: Define Custom Fields per Entity
- As a tenant administrator, I want to define custom fields for products, customers, and transactions so that I can capture business-specific data not covered by standard fields.
- Constraint: Max 20 custom fields per entity type. Supported types: TEXT, NUMBER, DATE, BOOLEAN, SELECT.
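Both limits are simple preconditions on field creation, sketched here:

```python
ALLOWED_FIELD_TYPES = {"TEXT", "NUMBER", "DATE", "BOOLEAN", "SELECT"}
MAX_FIELDS_PER_ENTITY = 20

def can_add_custom_field(existing_count, field_type):
    """Return (allowed, error_message) for a proposed custom field."""
    if field_type not in ALLOWED_FIELD_TYPES:
        return False, f"Unsupported field type: {field_type}"
    if existing_count >= MAX_FIELDS_PER_ENTITY:
        return False, "Maximum of 20 custom fields per entity type"
    return True, None
```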
US-5.I.2: Custom Fields on Forms
- As a staff member, I want custom fields to appear on the relevant entity forms so that I can enter the custom data during normal workflows.
- Constraint: Custom fields appear in a dedicated “Custom Fields” section. Sort order controls display sequence.
Feature: Custom Fields Configuration
As a tenant administrator
I need to define custom fields for business entities
So that we can capture data specific to our business
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: Create a custom text field for products
When I navigate to Custom Fields management
And I select entity type "PRODUCT"
And I click "Add Custom Field"
And I enter label "Fabric Composition" and field type "TEXT"
And I set max length to 200
And I mark it as required
And I click "Save"
Then custom field "Fabric Composition" should be added to products
And it should appear on the product edit form
Scenario: Create a select field with options
When I add a custom field for "CUSTOMER"
And I enter label "Preferred Contact Method" and field type "SELECT"
And I add options: "Phone", "Email", "SMS", "None"
And I click "Save"
Then the field should appear as a dropdown on customer forms
And only the defined options should be selectable
Scenario: Enforce custom field limit
Given 20 custom fields already exist for entity type "PRODUCT"
When I attempt to add a 21st custom field
Then the system should reject with message "Maximum of 20 custom fields per entity type"
Epic 5.J: Approval Workflows
US-5.J.1: Configure Approval Rules
- As a tenant administrator, I want to configure which actions require approval and the threshold values so that high-value or sensitive operations are reviewed before proceeding.
- Constraint: Approval rules support threshold-based, always-required, and never-required modes per action type.
US-5.J.2: Approve or Reject Requests
- As a manager, I want to view pending approval requests and approve or reject them with an optional reason so that the workflow continues without delay.
- Constraint: A user cannot approve their own request. Pending requests expire after 90 days.
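Threshold-based routing can be sketched as below. In this simplification, the always-required and never-required modes reduce to thresholds of zero and infinity respectively.

```python
from decimal import Decimal

def required_approver(amount, manager_threshold, admin_threshold):
    """Route an action by value: auto-approve below the manager threshold,
    manager approval up to the admin threshold, admin approval above it."""
    if amount >= admin_threshold:
        return "ADMIN"
    if amount >= manager_threshold:
        return "MANAGER"
    return "AUTO_APPROVED"
```

With the $1,000 / $10,000 thresholds from the scenario below, a $500 PO auto-approves, a $2,500 PO routes to a manager, and a $10,000 PO routes to an admin.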
Feature: Approval Workflows
As a tenant administrator
I need to configure approval rules for sensitive actions
So that high-value operations receive proper authorization
Background:
Given I am logged in as a user with "OWNER" role
And approval rules are configured with defaults
Scenario: Configure PO approval threshold
When I navigate to Approval Rules
And I set "PURCHASE_ORDER" manager threshold to $1000.00
And I set "PURCHASE_ORDER" admin threshold to $10000.00
And I click "Save"
Then POs below $1000 should auto-approve
And POs between $1000 and $9999.99 should require manager approval
And POs at $10000+ should require admin approval
Scenario: Approve a pending PO request
Given a PO "PO-2026-00200" for $2500.00 is pending manager approval
And I am logged in as a "MANAGER"
When I navigate to Pending Approvals
And I select "PO-2026-00200"
And I click "Approve"
Then the PO status should change to "APPROVED"
And the requester should receive a notification
Scenario: Self-approval is prevented
Given user "John" submitted PO "PO-2026-00201"
When "John" attempts to approve his own PO
Then the system should reject with message "You cannot approve your own request"
Epic 5.K: Receipt Builder
US-5.K.1: Configure Receipt Layout
- As a tenant administrator, I want to configure the receipt paper width, font size, field order, and separator style so that receipts match our brand and printer capabilities.
- Constraint: Preview must be available before saving. Changes apply to all registers at the affected location.
US-5.K.2: Customize Header and Footer
- As a tenant administrator, I want to customize the receipt header (company name, tagline, logo) and footer (return policy, website, thank-you message) so that receipts include our business information.
- Constraint: Header supports 3 text lines plus logo. Footer supports 3 text lines. Blank lines are omitted from print.
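The blank-line rule amounts to a filter at render time, as in this sketch:

```python
def printable_lines(configured_lines):
    """Keep at most 3 header or footer lines and omit blanks from print,
    per the constraint above."""
    return [line for line in configured_lines[:3] if line and line.strip()]
```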
Feature: Receipt Builder
As a tenant administrator
I need to configure receipt layout and content
So that printed receipts match our brand and include required information
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: Configure receipt header
When I navigate to Receipt Configuration
And I set header line 1 to "Nexus Clothing"
And I set header line 2 to "Fashion for Everyone"
And I upload a monochrome logo
And I set footer line 1 to "Returns accepted within 30 days with receipt."
And I set footer line 2 to "www.nexusclothing.com"
And I click "Save"
Then the receipt preview should show the updated header and footer
And all future receipts should use this configuration
Scenario: Toggle receipt fields
When I navigate to Receipt Field Configuration
And I disable "register_number" and "savings_total"
And I enable "loyalty_points" and "customer_name"
And I click "Save"
Then receipts should show loyalty points and customer name
And receipts should not show register number or savings total
Scenario: Location-level receipt override
Given a tenant-wide receipt configuration exists
When I create a location-specific override for "Georgetown Store"
And I set header line 2 to "Georgetown Location - Since 2015"
And I click "Save"
Then "Georgetown Store" should use the override header
And all other locations should use the tenant-wide default
Epic 5.L: Email & Communications
US-5.L.1: Configure Email Provider
- As a tenant administrator, I want to configure the SMTP or API email provider so that the system can send transactional and notification emails.
- Constraint: Configuration must be verified via test email before activation.
US-5.L.2: Enable or Disable Templates
- As a tenant administrator, I want to enable or disable individual email templates so that I control which automated emails are sent.
- Constraint: Disabling a template prevents sending but does not affect the triggering event.
Feature: Email Configuration
As a tenant administrator
I need to configure email delivery and manage templates
So that customers and staff receive appropriate automated communications
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: Configure SMTP email provider
When I navigate to Email Configuration
And I select provider type "SMTP"
And I enter host "smtp.gmail.com" and port 587
And I enter username "pos@nexusclothing.com"
And I enter password and sender name "Nexus Clothing"
And I click "Send Test Email"
Then a test email should be sent to "pos@nexusclothing.com"
And the configuration should be marked as "Verified"
Scenario: Disable a template
Given all email templates are enabled by default
When I navigate to Email Templates
And I disable template "TMPL-TIER-UPGRADE"
And I click "Save"
Then when a customer's tier changes, no email should be sent
And the tier upgrade should still be applied normally
Scenario: Unverified configuration shows warning
Given no email provider has been configured
When I navigate to the Admin Portal dashboard
Then a warning banner should display "Email provider not configured -- email receipts and notifications will not be sent"
Epic 5.M: Integration Hub
US-5.M.1: Connect Shopify Integration
- As a tenant administrator, I want to connect my Shopify store by entering API credentials so that products and inventory sync between the POS and online store.
- Constraint: Connection verified via test API call. Webhook endpoints auto-registered on successful connection.
US-5.M.2: View Integration Health
- As a tenant administrator, I want to view a dashboard showing the health status of all integrations so that I can identify and troubleshoot sync issues.
- Constraint: Dashboard shows status, last sync time, error count, and latency per integration.
Feature: Integration Hub
As a tenant administrator
I need to connect and monitor external system integrations
So that data flows correctly between the POS and external platforms
Background:
Given I am logged in as a user with "ADMIN" role
Scenario: Connect Shopify store
When I navigate to Integration Hub
And I click "Connect Shopify"
And I enter shop URL "nexus-clothes.myshopify.com"
And I enter API key, API secret, and access token
And I select sync mode "pos_master"
And I click "Verify Connection"
Then the system should make a test API call to Shopify
And the Shopify integration status should change to "CONNECTED"
And webhook endpoints should be auto-registered
Scenario: Integration health alert on errors
Given the Shopify integration has accumulated 6 errors in the past 24 hours
Then the Integration Hub dashboard should show Shopify health as "Red"
And a dashboard alert should be sent to all ADMIN and OWNER users
And the error log should be accessible from the integration detail page
Scenario: Disable integration temporarily
Given Shopify integration is currently "CONNECTED" and "ENABLED"
When I toggle the integration to "DISABLED"
Then all sync operations should halt immediately
And queued webhooks should be preserved
And the status should show "CONNECTED" but "DISABLED"
Epic 5.N: Loyalty Settings
US-5.N.1: Configure Loyalty Rates and Tiers
- As a tenant administrator, I want to configure the points-per-dollar rate, tier thresholds, and tier multipliers so that the loyalty program matches our business strategy.
- Constraint: Tier thresholds must be in ascending order. Point multiplier must be >= 1.0.
US-5.N.2: Configure Gift Card Settings
- As a tenant administrator, I want to configure gift card denominations, load limits, and expiry rules so that gift cards comply with our jurisdiction requirements.
- Constraint: Minimum load must be >= $1.00. Maximum load must be >= minimum load. Expiry must comply with state law.
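The numeric constraints from both stories can be checked together, as in this sketch (state-law expiry compliance is out of scope here and would need a jurisdiction lookup):

```python
def validate_loyalty_settings(tiers, min_load, max_load):
    """tiers: list of (name, threshold, multiplier) in configured order.
    Returns a list of validation error messages; empty means valid."""
    errors = []
    thresholds = [t[1] for t in tiers]
    if any(b <= a for a, b in zip(thresholds, thresholds[1:])):
        errors.append("Tier thresholds must be in ascending order")
    for name, _, multiplier in tiers:
        if multiplier < 1.0:
            errors.append(f"{name}: point multiplier must be >= 1.0")
    if min_load < 1.00:
        errors.append("Minimum load must be >= $1.00")
    if max_load < min_load:
        errors.append("Maximum load must be >= minimum load")
    return errors
```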
Feature: Loyalty and Rewards Settings
As a tenant administrator
I need to configure loyalty program parameters
So that the rewards program operates according to our business rules
Background:
Given I am logged in as a user with "OWNER" role
Scenario: Configure loyalty point rates
When I navigate to Loyalty Settings
And I set points per dollar to 2
And I set redemption rate to 200 points per dollar
And I set minimum redemption to 200 points
And I set max redemption percent to 25
And I click "Save"
Then customers should earn 2 base points per dollar
And 200 points should be required for $1.00 discount
And customers should be able to pay up to 25% of their total with points
Scenario: Configure tier thresholds
When I set Silver threshold to $500 with 1.25x multiplier
And I set Gold threshold to $2500 with 1.75x multiplier
And I set Platinum threshold to $7500 with 2.5x multiplier
And I click "Save"
Then the tier thresholds should be updated
And existing customers should be re-evaluated against new thresholds
Scenario: Validate tier threshold ordering
When I attempt to set Silver threshold to $5000 and Gold threshold to $2000
Then the system should reject with message "Tier thresholds must be in ascending order"
Epic 5.O: Audit Configuration
US-5.O.1: Toggle Audit Categories
- As a tenant administrator, I want to enable or disable specific audit log categories so that I can control the volume and focus of audit logging.
- Constraint: At least LOGIN and VOID categories must remain enabled. Other categories can be toggled freely.
US-5.O.2: Set Audit Retention Period
- As a tenant administrator, I want to set the audit log retention period and archival policy so that logs are retained long enough for compliance but do not consume unlimited storage.
- Constraint: Minimum retention is 90 days. Minimum archive retention is 365 days.
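The retention floors are a two-line validation, sketched here:

```python
MIN_RETENTION_DAYS = 90
MIN_ARCHIVE_DAYS = 365

def validate_retention(retention_days, archive_purge_days):
    """Enforce the minimum retention and archive-retention periods above."""
    errors = []
    if retention_days < MIN_RETENTION_DAYS:
        errors.append("Retention must be at least 90 days")
    if archive_purge_days < MIN_ARCHIVE_DAYS:
        errors.append("Archive retention must be at least 365 days")
    return errors
```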
Feature: Audit Configuration
As a tenant administrator
I need to configure audit logging categories and retention
So that audit data is captured appropriately and stored compliantly
Background:
Given I am logged in as a user with "OWNER" role
Scenario: Disable a non-mandatory audit category
When I navigate to Audit Configuration
And I disable category "DISCOUNT"
And I click "Save"
Then discount events should no longer generate audit log entries
And existing discount audit entries should remain unchanged
Scenario: Prevent disabling mandatory categories
When I attempt to disable category "LOGIN"
Then the system should reject with message "LOGIN audit category cannot be disabled"
When I attempt to disable category "VOID"
Then the system should reject with message "VOID audit category cannot be disabled"
Scenario: Configure retention and archival
When I set retention period to 180 days
And I enable archival with format "COMPRESSED_JSON"
And I set archive purge to 2555 days (7 years)
And I click "Save"
Then logs older than 180 days should be archived during the next nightly job
And archived logs older than 7 years should be permanently deleted
Epic 5.P: Business Rules Configuration
US-5.P.1: Review and Modify Defaults
- As a tenant administrator, I want to review all business rule defaults and modify values that do not fit our operations so that the system behavior matches our policies.
- Constraint: Changes are validated against business rule constraints (e.g., max_line_discount_percent must be 0-100). Changes take effect immediately.
US-5.P.2: Override Rules at Store Level
- As a tenant administrator, I want to override specific business rules at the store level so that different locations can have different policies where needed.
- Constraint: Store-level overrides take precedence over tenant-level defaults. Only overridden values differ; all others inherit from tenant level.
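The precedence rule is a one-step lookup, shown here as a sketch:

```python
def resolve_rule(key, tenant_defaults, store_overrides):
    """A store-level override wins; every other key inherits the tenant default."""
    return store_overrides.get(key, tenant_defaults[key])
```

Keeping overrides sparse (only the keys that differ) means tenant-level changes flow through automatically to every store that has not opted out of the default.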
Feature: Business Rules Configuration
As a tenant administrator
I need to review and customize business rules
So that the system enforces our specific operational policies
Background:
Given I am logged in as a user with "OWNER" role
Scenario: Modify return policy
When I navigate to Business Rules > Sales Configuration
And I change "full_refund_days" from 30 to 14
And I change "store_credit_days" from 90 to 60
And I click "Save"
Then the return policy should update immediately
And new transactions should use 14-day refund and 60-day store credit policies
Scenario: Store-level override for discount limits
When I navigate to Business Rules for "Georgetown Store"
And I override "max_line_discount_percent" to 25 (tenant default is 20)
And I click "Save"
Then "Georgetown Store" should allow up to 25% line discounts
And all other stores should continue using the 20% tenant default
Scenario: Validate business rule constraints
When I attempt to set "max_line_discount_percent" to 150
Then the system should reject with message "Value must be between 0 and 100"
When I attempt to set "cash_drawer.variance_tolerance" to -5.00
Then the system should reject with message "Value must be greater than or equal to 0"
Epic 5.Q: Tenant Onboarding
US-5.Q.1: Provision New Tenant via Wizard
- As a platform administrator, I want to provision a new tenant using the step-by-step onboarding wizard so that all required configuration is completed before the tenant goes live.
- Constraint: Wizard tracks progress across sessions. Steps can be revisited. Mandatory steps cannot be skipped.
US-5.Q.2: Validate Go-Live Readiness
- As a platform administrator, I want the system to validate all go-live requirements and display a pass/fail checklist so that I can confirm the tenant is ready for production operations.
- Constraint: All 9 mandatory checks must pass. Advisory checks are informational only. Go-live timestamp is recorded permanently.
Feature: Tenant Onboarding Wizard
As a platform administrator
I need to provision and validate new tenants
So that they are fully configured before processing transactions
Background:
Given I am provisioning a new tenant "Nexus Clothing"
And I am on the onboarding wizard
Scenario: Complete mandatory onboarding steps
When I complete Step 1 (Company Info) with name "Nexus Clothing", timezone "America/New_York", currency "USD"
And I complete Step 2 (Locations) by creating "Georgetown Store" (type: STORE)
And I complete Step 4 (Registers) by adding "Main Counter" (profile: FULL_POS) to "Georgetown Store"
And I complete Step 6 (Users) by creating user "Will" with role "OWNER" at "Georgetown Store"
And I complete Step 8 (Tax) by setting rate 6.000% for "Georgetown Store"
And I complete Step 10 (Payment Methods) by enabling "Cash" and "Credit Card" at "Georgetown Store"
And I complete Step 11 (Email) by configuring SMTP provider
Then the wizard should show steps 1, 2, 4, 6, 8, 10, and 11 as completed
Scenario: Go-live validation passes with all mandatory requirements
Given all mandatory onboarding steps are completed
When I navigate to Step 14 (Go-Live Checklist)
Then all 9 mandatory checks should show "PASS"
And the "Go Live" button should be enabled
When I click "Go Live"
Then the tenant status should change to "ACTIVE"
And the go-live timestamp should be recorded
And the onboarding wizard should no longer be displayed
Scenario: Go-live validation fails with missing requirements
Given Step 8 (Tax) has not been completed for "Georgetown Store"
When I navigate to Step 14 (Go-Live Checklist)
Then mandatory check "Tax jurisdiction assigned to each store location" should show "FAIL"
And the failure message should read "Store 'Georgetown Store' has no tax jurisdiction assigned. Assign a tax jurisdiction with at least one active rate."
And the "Go Live" button should be disabled
And a link should navigate to Step 8 (Tax Configuration)
End of Module 5: Setup & Configuration (Sections 5.15 – 5.21)
6. Integrations & External Systems Module
6.1 Overview & Scope
6.1.1 Module Purpose
Module 6 consolidates every external-system integration into a single, dedicated module. Rather than scattering API credentials, retry logic, webhook handlers, and protocol-specific code across the Sales, Catalog, Inventory, and Setup modules, the Integration Module provides a unified abstraction layer that all other modules call when they need to communicate with the outside world.
This design delivers three key benefits:
- Single Responsibility – Each business module owns its domain logic; Module 6 owns the wire protocol, authentication, error handling, and data mapping for every external provider.
- Swap-ability – Adding a new payment processor or replacing an email provider requires changes only inside Module 6. Upstream modules remain untouched.
- Operational Visibility – Centralised logging, circuit breakers, rate-limit tracking, and webhook ingestion give the operations team one place to monitor every integration.
6.1.2 Scope Statement
In scope – handled inside Module 6:
| Concern | Examples |
|---|---|
| Provider authentication & credential storage | OAuth 2.0 flows, API-key vaults, token refresh |
| Request construction & serialisation | REST, GraphQL, SOAP envelope building |
| Response parsing & normalisation | Mapping provider-specific DTOs to internal canonical models |
| Retry, back-off & circuit-breaker logic | Exponential retry, dead-letter queues, half-open probes |
| Rate-limit management | Leaky-bucket tracking, queue throttling, cost estimation |
| Webhook ingestion & verification | HMAC signature checks, idempotent handler dispatch |
| Idempotency framework | Dedup window, idempotency-key generation, record storage |
| Provider health monitoring | Heartbeat checks, latency histograms, uptime SLAs |
Out of scope – remains in the originating module:
| Concern | Owning Module |
|---|---|
| Deciding when to sync a product listing | Module 3 (Catalog) |
| Deciding which items need restocking from a channel | Module 4 (Inventory) |
| Applying business rules to a payment result (e.g., partial-capture policy) | Module 1 (Sales) |
| Configuring which integrations are enabled per tenant | Module 5 (Setup) |
| Rendering integration status in the Admin Portal UI | Module 5 (Setup) / Front-end layer |
6.1.3 Cross-References to Source Modules
Cross-Reference: See Module 1, Section 1.6 for payment-processing business rules that Module 6 executes against the configured payment provider.
Cross-Reference: See Module 3, Section 3.9 for the product-sync lifecycle that triggers Module 6 outbound calls to Shopify, Amazon, and Google Merchant.
Cross-Reference: See Module 4, Section 4.5 for inventory-level sync events that Module 6 pushes to connected channels.
Cross-Reference: See Module 5, Section 5.4 for tenant-level integration configuration (credentials, enabled providers, sync schedules).
6.1.4 Integration Types
| Code | Provider | Direction | Primary Data |
|---|---|---|---|
SHOPIFY | Shopify (REST + GraphQL) | Bi-directional | Products, orders, inventory levels, fulfillments |
AMAZON | Amazon SP-API | Bi-directional | Listings, orders, FBA inventory, reports |
GOOGLE_MERCHANT | Google Merchant Center | Outbound + webhooks | Product feeds, local inventory, promotions |
PAYMENT_PROCESSOR | Stripe / Square / Adyen | Bi-directional | Charges, refunds, disputes, payouts |
EMAIL_PROVIDER | SendGrid / Postmark / SES | Outbound + webhooks | Transactional email, delivery events |
SHIPPING_CARRIER | EasyPost / ShipStation | Bi-directional | Rate quotes, labels, tracking events |
6.1.5 Module Interconnection Diagram
flowchart TD
M1[Module 1: Sales] -->|Payment flow| M6
M3[Module 3: Catalog] -->|Product sync| M6
M4[Module 4: Inventory] -->|Stock sync| M6
M5[Module 5: Setup] -->|Configuration| M6
M6[Module 6: Integrations]
M6 -->|Product listings| SHOP[Shopify]
M6 -->|Product listings| AMZ[Amazon SP-API]
M6 -->|Local inventory| GOOG[Google Merchant]
M6 -->|Card processing| PAY[Payment Processors]
M6 -->|Transactional email| EMAIL[Email Providers]
M6 -->|Rate/Label/Track| CARRIER[Shipping Carriers]
6.2 Integration Architecture
6.2.1 Provider Abstraction Layer
Every external provider – regardless of protocol – implements a common interface so that calling code never depends on provider-specific details.
Abstract Interface: IIntegrationProvider
public interface IIntegrationProvider
{
Task<ConnectionResult> Connect(ProviderCredentials credentials);
Task<DisconnectResult> Disconnect(string connectionId);
Task<SyncResult> Sync(SyncRequest request);
Task<ProviderStatus> GetStatus(string connectionId);
Task<ValidationResult> ValidateCredentials(ProviderCredentials credentials);
}
| Method | Purpose | Idempotent | Timeout |
|---|---|---|---|
Connect | Establish a new authenticated session or exchange OAuth tokens | No | 30 s |
Disconnect | Revoke tokens and mark the connection inactive | Yes | 15 s |
Sync | Execute a data-synchronisation operation (push or pull) | Yes (via idempotency key) | 120 s |
GetStatus | Return current health, last-sync timestamp, error counts | Yes | 10 s |
ValidateCredentials | Test credentials without persisting a connection | Yes | 15 s |
Provider Registry Pattern
The system maintains a runtime registry of all available provider implementations. At startup, each provider self-registers via dependency injection. Tenant configuration (Module 5) determines which providers are enabled for a given tenant at runtime.
# Example provider registry configuration
integration:
providers:
shopify:
enabled: true
implementation: ShopifyProvider
max_connections_per_tenant: 3
amazon:
enabled: true
implementation: AmazonSpApiProvider
max_connections_per_tenant: 2
google_merchant:
enabled: true
implementation: GoogleMerchantProvider
max_connections_per_tenant: 1
payment_processor:
enabled: true
implementation: StripeProvider # swappable
max_connections_per_tenant: 2
email_provider:
enabled: true
implementation: SendGridProvider # swappable
max_connections_per_tenant: 1
shipping_carrier:
enabled: true
implementation: EasyPostProvider # swappable
max_connections_per_tenant: 5
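The registry pattern above can be sketched as a small lookup that combines startup-time self-registration with the tenant configuration from Module 5. This is a minimal illustration, not the production implementation; the class and method names are illustrative.

```python
class ProviderRegistry:
    """Runtime registry: providers self-register at startup; tenant
    configuration decides which of them may be resolved at runtime."""

    def __init__(self):
        self._providers = {}

    def register(self, provider_type, implementation):
        # Called once per provider during application startup (DI hook).
        self._providers[provider_type] = implementation

    def resolve(self, provider_type, tenant_config):
        # Tenant config (from Module 5) gates access to registered providers.
        if not tenant_config.get(provider_type, {}).get("enabled", False):
            raise LookupError(f"Provider '{provider_type}' not enabled for tenant")
        return self._providers[provider_type]
```

A tenant with `shopify.enabled: true` resolves the registered Shopify implementation; a disabled provider raises rather than silently returning a handle.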
Data Model: integration_providers
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants.id |
provider_type | VARCHAR(50) | Yes | Enum: SHOPIFY, AMAZON, GOOGLE_MERCHANT, PAYMENT_PROCESSOR, EMAIL_PROVIDER, SHIPPING_CARRIER |
provider_name | VARCHAR(100) | Yes | Human-readable name (e.g., “Nexus Shopify Store”) |
status | VARCHAR(20) | Yes | ACTIVE, INACTIVE, ERROR, PENDING_AUTH |
credentials_vault_ref | VARCHAR(255) | Yes | Reference to encrypted credential store (never plaintext) |
config_json | JSONB | No | Provider-specific configuration overrides |
last_sync_at | TIMESTAMPTZ | No | Timestamp of most recent successful sync |
last_error_at | TIMESTAMPTZ | No | Timestamp of most recent error |
error_count | INTEGER | Yes | Rolling error count (reset on success) – default 0 |
circuit_state | VARCHAR(15) | Yes | CLOSED, OPEN, HALF_OPEN – default CLOSED |
created_at | TIMESTAMPTZ | Yes | Row creation timestamp |
updated_at | TIMESTAMPTZ | Yes | Last modification timestamp |
Cross-Reference: See Module 5, Section 5.4.3 for the admin UI screens that manage rows in this table.
6.2.2 Authentication Patterns
Each provider family uses a different authentication mechanism. The table below summarises the patterns; subsequent subsections detail the flows.
| Provider | Auth Method | Token Lifetime | Refresh Mechanism | Credential Storage |
|---|---|---|---|---|
| Shopify | OAuth 2.0 (offline access tokens) | Permanent (until revoked) | N/A – token does not expire | Encrypted vault, single token per store |
| Amazon SP-API | OAuth 2.0 via Login with Amazon (LWA) | 1 hour | refresh_token grant to LWA endpoint | Vault stores refresh_token; access token cached in memory |
| Google Merchant | OAuth 2.0 (Service Account) | 1 hour | Self-signed JWT assertion exchanged for access token | Vault stores service-account JSON key file |
| Payment Processor | API Key + Secret | Permanent (until rotated) | Manual key rotation via provider dashboard | Vault stores key pair; rotation tracked in audit log |
| Email Provider | API Key or SMTP credentials | Permanent (until rotated) | Manual key rotation | Vault stores API key or SMTP user/pass |
| Shipping Carrier | API Key | Permanent (until rotated) | Manual key rotation | Vault stores API key |
Token refresh is automatic. For providers with expiring tokens (Amazon, Google), the Integration Module schedules a background refresh 5 minutes before expiry. If the refresh fails, the provider status is set to ERROR, the circuit breaker records the failure, and the retry mechanism takes over.
Cross-Reference: See Module 5, Section 5.4.5 for the credential-rotation workflow that triggers re-validation of stored credentials.
6.2.3 Retry & Backoff Strategy
All outbound calls from Module 6 pass through a shared retry pipeline. The strategy uses exponential backoff with jitter to avoid thundering-herd effects when a provider recovers from an outage.
Retry Parameters
| Parameter | Value | Notes |
|---|---|---|
| Max retries | 3 | After initial attempt |
| Base delay | 5 seconds | First retry wait |
| Multiplier | 3x | 5 s -> 15 s -> 45 s |
| Jitter | +/- 20 % | Randomised to spread load |
| Retryable status codes | 429, 500, 502, 503, 504 | Non-retryable: 400, 401, 403, 404, 409 |
| Dead-letter after | 3 failed retries | Message persisted for manual review |
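The parameter table above translates into a simple delay schedule: 5 s, 15 s, 45 s, each randomised by ±20 %. A minimal sketch of the delay computation (function and parameter names are illustrative):

```python
import random

def backoff_delays(base=5.0, multiplier=3.0, max_retries=3, jitter_pct=0.20):
    """Return the wait (seconds) before each retry, with +/- jitter applied."""
    delays = []
    for attempt in range(max_retries):
        nominal = base * multiplier ** attempt          # 5 s, 15 s, 45 s
        jitter = nominal * jitter_pct                   # +/- 20 % spread
        delays.append(random.uniform(nominal - jitter, nominal + jitter))
    return delays
```

The jitter spreads retry traffic so that many queued requests do not hammer a recovering provider at the same instant.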
Retry Sequence Diagram
sequenceDiagram
participant Caller as Business Module
participant RM as Retry Manager
participant Provider as External Provider
participant DLQ as Dead Letter Queue
Caller->>RM: Send request
RM->>Provider: Attempt 1
Provider-->>RM: 503 Service Unavailable
Note over RM: Wait 5s (+/- jitter)
RM->>Provider: Attempt 2
Provider-->>RM: 503 Service Unavailable
Note over RM: Wait 15s (+/- jitter)
RM->>Provider: Attempt 3
Provider-->>RM: 503 Service Unavailable
Note over RM: Wait 45s (+/- jitter)
RM->>Provider: Attempt 4 (final)
Provider-->>RM: 503 Service Unavailable
RM->>DLQ: Persist failed message
RM-->>Caller: SyncResult { Success: false, Reason: "DLQ" }
Dead Letter Queue Processing
Failed messages land in the integration_dead_letters table. An operations dashboard (Module 5 Admin Portal) surfaces these records. Operators can:
- Retry – re-enqueue the message for another attempt cycle.
- Skip – mark as resolved without retrying (e.g., stale data that has since been corrected).
- Escalate – flag for engineering investigation.
6.2.4 Circuit Breaker Pattern
The circuit breaker prevents a failing provider from consuming retry budget and delaying upstream callers. Each integration_providers row maintains its own circuit state.
State Machine
| Current State | Condition | Next State | Action |
|---|---|---|---|
CLOSED | Fewer than 5 failures in 60 s | CLOSED | Pass requests through normally |
CLOSED | 5 or more failures in 60 s | OPEN | Reject all requests immediately; log alert |
OPEN | Less than 30 s since transition | OPEN | Return cached error; do not attempt call |
OPEN | 30 s cooldown elapsed | HALF_OPEN | Allow a single probe request |
HALF_OPEN | Probe succeeds | CLOSED | Reset failure counter; resume normal traffic |
HALF_OPEN | Probe fails | OPEN | Restart 30 s cooldown; increment alert severity |
stateDiagram-v2
[*] --> CLOSED
CLOSED --> OPEN : 5 failures in 60s
OPEN --> HALF_OPEN : 30s cooldown elapsed
HALF_OPEN --> CLOSED : Probe succeeds
HALF_OPEN --> OPEN : Probe fails
CLOSED --> CLOSED : Request succeeds / failures < threshold
Circuit Breaker Configuration
circuit_breaker:
failure_threshold: 5
failure_window_seconds: 60
cooldown_seconds: 30
half_open_max_probes: 1
alert_on_open: true
alert_channel: "ops-integrations"
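The state machine and configuration above can be sketched as a minimal per-provider breaker. This is an illustrative Python sketch (single-probe accounting is simplified; names are not from the production codebase):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker matching the CLOSED/OPEN/HALF_OPEN table above."""

    def __init__(self, failure_threshold=5, failure_window=60.0, cooldown=30.0):
        self.failure_threshold = failure_threshold
        self.failure_window = failure_window
        self.cooldown = cooldown
        self.state = "CLOSED"
        self.failures = []        # timestamps of recent failures
        self.opened_at = None

    def allow_request(self, now=None):
        now = time.monotonic() if now is None else now
        if self.state == "OPEN":
            if now - self.opened_at >= self.cooldown:
                self.state = "HALF_OPEN"   # cooldown elapsed: permit a probe
                return True
            return False                   # still cooling down: reject fast
        return True                        # CLOSED / HALF_OPEN pass through

    def record_success(self):
        self.state = "CLOSED"              # probe (or normal call) succeeded
        self.failures.clear()

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        if self.state == "HALF_OPEN":
            self.state = "OPEN"            # failed probe restarts the cooldown
            self.opened_at = now
            return
        # Keep only failures inside the rolling window, then count this one.
        self.failures = [t for t in self.failures if now - t < self.failure_window]
        self.failures.append(now)
        if len(self.failures) >= self.failure_threshold:
            self.state = "OPEN"
            self.opened_at = now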
Cross-Reference: See Module 5, Section 5.18 for the alerting configuration that fires when a circuit opens.
6.2.5 Idempotency Framework
All mutating operations (product creates, inventory updates, payment captures) are wrapped in an idempotency layer. This guarantees exactly-once semantics even when retries or duplicate webhooks occur.
Dedup Window
| Parameter | Value |
|---|---|
| Window duration | 24 hours |
| Key algorithm | SHA-256 |
| Key input | provider + entity_type + entity_id + operation + timestamp_bucket |
| Timestamp bucket | Truncated to nearest hour |
| Cleanup schedule | Daily at 03:00 UTC – purge records older than 48 hours |
Idempotency Key Generation
idempotency_key = SHA-256(
provider = "SHOPIFY"
entity_type = "PRODUCT"
entity_id = "prod_abc123"
operation = "UPDATE"
timestamp_bucket = "2026-02-17T14:00:00Z" // truncated to hour
)
If a record with the same key already exists in the idempotency_records table, the system returns the cached response without re-executing the operation.
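A minimal sketch of the key derivation, assuming ISO-8601 UTC timestamps and a `|` separator between components (both illustrative choices; the production separator and encoding may differ):

```python
import hashlib

def idempotency_key(provider, entity_type, entity_id, operation, event_time):
    """Derive the SHA-256 dedup key; event_time is an ISO-8601 UTC string."""
    # Truncate to the hour bucket: "2026-02-17T14:23:09Z" -> "2026-02-17T14:00:00Z"
    bucket = event_time[:13] + ":00:00Z"
    material = "|".join([provider, entity_type, entity_id, operation, bucket])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()
```

Two retries of the same operation within the same hour bucket produce the same key and are therefore deduplicated; an attempt in the next hour produces a fresh key.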
Data Model: idempotency_records
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants.id |
idempotency_key | VARCHAR(64) | Yes | SHA-256 hex digest (unique index) |
provider_type | VARCHAR(50) | Yes | Provider enum value |
entity_type | VARCHAR(50) | Yes | E.g., PRODUCT, ORDER, INVENTORY_LEVEL |
entity_id | VARCHAR(100) | Yes | Domain entity identifier |
operation | VARCHAR(30) | Yes | CREATE, UPDATE, DELETE, SYNC |
request_hash | VARCHAR(64) | Yes | SHA-256 of the serialised request body |
response_status | INTEGER | No | HTTP status code of cached response |
response_body | JSONB | No | Cached response payload |
executed_at | TIMESTAMPTZ | Yes | When the operation was executed |
expires_at | TIMESTAMPTZ | Yes | executed_at + 24 hours |
created_at | TIMESTAMPTZ | Yes | Row creation timestamp |
6.2.6 Rate Limit Management
Each provider enforces its own rate limits. Module 6 tracks consumption in real time and throttles outbound requests to stay within published quotas.
Rate Limits by Provider
| Provider | Limit Type | Rate | Strategy |
|---|---|---|---|
| Shopify REST | Leaky bucket | 40-request bucket, leaks at 2 req/s | Queue with inter-request delay; monitor X-Shopify-Shop-Api-Call-Limit header |
| Shopify GraphQL | Calculated query cost | 1,000-point bucket; restores at 50 pts/s (regular) or 500 pts/s (Plus) | Pre-calculate query cost; defer if bucket < required cost |
| Amazon SP-API | Token bucket | Per-endpoint (e.g., getOrders 0.0167 req/s burst 20) | Read x-amzn-RateLimit-Limit response header; adjust dynamically |
| Google Merchant | Daily quota | Products.insert: 2 calls/product/day; batch up to 10,000 entries | Batch mutations into single custombatch call; schedule during off-peak |
| Payment Processor (Stripe) | Sliding window | 100 read/s, 100 write/s (live mode) | Token bucket in-memory; back-off on 429 |
| Email Provider (SendGrid) | Sliding window | Varies by plan (100-10,000 emails/s) | In-memory counter; degrade gracefully |
| Shipping Carrier (EasyPost) | Per-second | 50 req/s | In-memory token bucket |
Rate Limit Tracking
The module maintains an in-memory sliding-window counter per (tenant_id, provider_type) pair. When the counter approaches the limit (80 % threshold), requests are queued rather than rejected. If the queue depth exceeds 500 items, new requests receive a 429 back-pressure signal to the calling module.
rate_limiter:
warning_threshold_pct: 80
max_queue_depth: 500
queue_drain_interval_ms: 100
metrics_export_interval_s: 10
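The queue-before-reject behaviour described above can be sketched as a sliding-window counter. This is an illustrative Python sketch (class and method names are not from the production codebase; queue-depth back-pressure is omitted for brevity):

```python
from collections import deque

class SlidingWindowLimiter:
    """Per-(tenant, provider) sliding window: queue once usage crosses
    the warning threshold instead of rejecting outright."""

    def __init__(self, limit, window_s=1.0, warning_pct=80):
        self.limit = limit
        self.window_s = window_s
        self.warning = limit * warning_pct / 100.0
        self.events = deque()          # timestamps of recent sends

    def decide(self, now):
        """Return 'SEND', or 'QUEUE' when inside the warning band."""
        # Evict events that have aged out of the window.
        while self.events and now - self.events[0] >= self.window_s:
            self.events.popleft()
        if len(self.events) >= self.warning:
            return "QUEUE"             # throttle before hitting the hard limit
        self.events.append(now)
        return "SEND"
```

With a limit of 10 req/s and an 80 % threshold, the ninth request in a one-second window is queued rather than sent.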
6.2.7 Webhook Processing Pipeline
Inbound webhooks from external providers flow through a multi-stage pipeline that verifies authenticity, deduplicates, and dispatches to the appropriate handler.
Signature Verification by Provider
| Provider | Verification Method | Header / Field | Algorithm |
|---|---|---|---|
| Shopify | HMAC signature | X-Shopify-Hmac-Sha256 | HMAC-SHA256 of raw body with app secret |
| Amazon SP-API | SQS message validation | SignatureVersion, Signature | RSA-SHA1 of canonical message string using Amazon certificate |
| Google Merchant | Push endpoint SSL + JWT | Authorization: Bearer <token> | Verify JWT against Google public keys; validate audience claim |
| Stripe | HMAC signature | Stripe-Signature | HMAC-SHA256 with whsec_ signing secret; timestamp tolerance 300 s |
| SendGrid | Basic auth or OAuth | Authorization header | Compare against stored webhook signing key |
| EasyPost | HMAC signature | X-Hmac-Signature | HMAC-SHA256 of raw body with webhook secret |
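For the HMAC-based providers in the table, verification recomputes the digest over the raw request body and compares it against the header in constant time. A sketch for the Shopify case, which base64-encodes the HMAC-SHA256 digest (the function name and secret value are illustrative):

```python
import base64
import hashlib
import hmac

def verify_shopify_hmac(raw_body: bytes, header_value: str, app_secret: str) -> bool:
    """Recompute HMAC-SHA256 of the raw body and compare in constant time."""
    digest = hmac.new(app_secret.encode("utf-8"), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("ascii")
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(expected, header_value)
```

Note the comparison must run against the raw bytes as received; re-serialising the parsed JSON first will silently break verification.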
Webhook Processing Flow
sequenceDiagram
participant EXT as External Provider
participant GW as API Gateway
participant SIG as Signature Verifier
participant DEDUP as Idempotency Check
participant HANDLER as Webhook Handler
participant DLQ as Dead Letter Queue
participant BUS as Domain Event Bus
EXT->>GW: POST /webhooks/{provider}
GW->>SIG: Verify signature
alt Signature invalid
SIG-->>GW: 401 Unauthorized
GW-->>EXT: 401
else Signature valid
SIG->>DEDUP: Check idempotency key
alt Duplicate webhook
DEDUP-->>GW: 200 OK (cached)
GW-->>EXT: 200
else New webhook
DEDUP->>HANDLER: Dispatch to typed handler
alt Handler succeeds
HANDLER->>BUS: Publish domain event
HANDLER-->>DEDUP: Cache result
DEDUP-->>GW: 200 OK
GW-->>EXT: 200
else Handler fails
HANDLER->>DLQ: Persist failed webhook
HANDLER-->>GW: 200 OK (accepted for retry)
GW-->>EXT: 200
end
end
end
Idempotent Handler Pattern
Every webhook handler follows this template:
1. Extract idempotency key – from the webhook payload (e.g., Shopify’s X-Shopify-Webhook-Id, Amazon’s notificationId, Stripe’s event.id).
2. Check idempotency_records – if a record exists and has not expired, return the cached response immediately.
3. Execute business logic – map the webhook payload to a domain event and publish it to the internal event bus.
4. Persist idempotency record – store the key and response so future duplicates are short-circuited.
5. Return 200 – always return 200 OK to the provider promptly. Processing failures are handled asynchronously via the dead-letter queue. Returning non-200 codes to providers like Shopify triggers exponential re-delivery, which compounds the problem.
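The handler template boils down to a check-execute-persist wrapper. A minimal Python sketch (the `store` here stands in for the idempotency_records table; names are illustrative):

```python
def handle_webhook(store, key, payload, process):
    """Idempotent dispatch: duplicates get the cached response,
    new events are processed exactly once."""
    cached = store.get(key)
    if cached is not None:
        return cached                 # duplicate delivery: short-circuit
    response = process(payload)       # map payload -> domain event, publish
    store[key] = response             # persist so future duplicates skip work
    return response
```

Delivering the same event twice invokes the business logic only once; the second delivery returns the stored response.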
Dead Letter Queue for Webhooks
Failed webhooks are stored in integration_dead_letters with full payload and error context. The operations dashboard allows:
| Action | Description |
|---|---|
| Replay | Re-inject the webhook into the processing pipeline |
| Discard | Mark as resolved; no further action |
| Investigate | Attach notes; assign to engineering team |
Cross-Reference: See Module 5, Section 5.18.4 for the admin UI that manages dead-letter webhook records.
6.3 Shopify Integration (Enhanced)
Scope: End-to-end synchronization of product catalog, inventory, orders, and customer data between the Nexus POS system and Shopify e-commerce. This section consolidates all Shopify integration requirements previously distributed across Module 3 (Section 3.7 – Catalog Sync), Module 4 (Section 4.14.3 – Inventory Sync), and Module 5 (Section 5.16.3 – Integration Configuration), and adds new material covering GraphQL API preferences, bulk operations, idempotency, rate limits, webhook topics, POS UI extensions, hardware compatibility, omnichannel requirements, and third-party POS compliance rules.
Cross-Reference: See Section 6.1 for the Integration Hub architecture. See Section 6.2 for integration credentials and security model. See Module 3, Section 3.6 for multi-channel management. See Module 4, Section 4.14 for online order fulfillment workflows.
Consolidation Note: This section replaces and supersedes the following legacy sections:
- Old Section 3.7 (Shopify Integration) – catalog sync modes, field ownership, conflict resolution
- Old Section 4.14.3 (Inventory Sync with Shopify) – inventory sync triggers and architecture
- Old Section 5.16.3 (Shopify Integration) – configuration fields, webhook endpoints
6.3.1 Sync Modes
Two tenant-configurable modes control how data flows between the POS and Shopify:
| Mode | Direction | Default | Use Case |
|---|---|---|---|
| POS-Master | POS -> Shopify (one-way) | Yes | All product data managed in POS. Changes in Shopify for POS-owned fields are overwritten on next sync. Recommended for retailers who manage their catalog exclusively through the POS. |
| Bidirectional with POS Priority | POS <-> Shopify (two-way) | No | Changes flow both directions. POS-owned fields: POS wins on conflict. Shopify-owned fields: Shopify wins. Configurable fields: POS wins with audit trail. Supports retailers whose external teams (SEO agencies, product photographers, copywriters) work directly in Shopify Admin. |
Rationale: Industry standard (Lightspeed, Retail Pro, SKU IQ) uses POS-master as default. Bidirectional supports retailers whose external teams work directly in Shopify Admin. ADR #24 documents this decision.
Important: Inventory sync is ALWAYS bidirectional regardless of catalog sync mode. Even in POS-Master mode, Shopify online sales decrement POS inventory and POS sales decrement Shopify inventory.
Sync Decision Flowchart
flowchart TD
A[Change Detected] --> B{Which System?}
B -->|POS Change| C{Field Ownership?}
B -->|Shopify Change| D{Field Ownership?}
C -->|POS-Owned| E[Push to Shopify via GraphQL Mutation]
C -->|Shopify-Owned| F[Ignore -- Not POS Managed]
C -->|Configurable| G{Sync Mode?}
D -->|POS-Owned| H{Sync Mode?}
D -->|Shopify-Owned| I[Pull to POS as Read-Only]
D -->|Configurable| J{Sync Mode?}
G -->|POS-Master| E
G -->|Bidirectional| E
H -->|POS-Master| K[Overwrite on Next Sync]
H -->|Bidirectional| L[Reject Shopify Change + Log Conflict]
J -->|POS-Master| K
J -->|Bidirectional| M{Conflict?}
M -->|Same Field Changed Both Sides| N[POS Wins + Log Audit Entry]
M -->|Different Fields Changed| O[Merge Both Changes]
E --> P[Log Sync Event to Integration Sync Log]
I --> P
K --> P
L --> P
N --> P
O --> P
6.3.2 Field-Level Ownership
Three ownership categories determine which system has authority over each product field. This model eliminates the majority of sync conflicts by design.
| Category | Fields | Direction | Rationale |
|---|---|---|---|
| POS-Owned | SKU, barcode, base_price, cost, compare_at_price, variants (options/values), vendor, weight, dimensions, tax_code, product_type, lifecycle_status, inventory_qty | POS -> Shopify | Core retail operations data managed exclusively in POS. These fields drive pricing, costing, and inventory accuracy. |
| Shopify-Owned | SEO title (meta_title), SEO description (meta_description), URL handle (slug), metafields, additional images (after primary), image alt text, online collections, sales channel publishing | Shopify -> POS (read-only) | Online-only fields with no POS equivalent. Managed by e-commerce team or SEO agencies directly in Shopify Admin. |
| Configurable | long_description, tags, primary_image | Default: POS -> Shopify. Bidirectional when enabled. | May require external editing when retailer uses agencies for product content. Tenant can toggle per field. |
Business Rules:
- POS-Owned fields changed in Shopify (in POS-Master mode) are silently overwritten on the next sync cycle.
- Shopify-Owned fields are stored in POS as read-only reference data for display purposes (e.g., showing SEO title in admin dashboard).
- Configurable field direction is set per tenant in the Integration Configuration (Section 6.2).
- Field ownership is enforced at the API layer – the sync engine checks ownership before applying any inbound change.
6.3.3 Conflict Resolution
Per-field ownership eliminates the majority of sync conflicts. The remaining edge cases – primarily configurable fields modified on both sides in bidirectional mode – are handled by the conflict resolution engine.
Conflict Audit Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
product_id | UUID | Yes | FK to products table – product where conflict occurred |
shopify_product_id | String(50) | Yes | Shopify product GID (e.g., gid://shopify/Product/123456) |
field_name | String(100) | Yes | The specific field in conflict (e.g., long_description, tags, primary_image) |
pos_value | Text | Yes | Value from POS at time of conflict |
shopify_value | Text | Yes | Value from Shopify at time of conflict |
resolved_value | Text | Yes | Final value written to both systems |
resolution_method | Enum | Yes | AUTO_POS_WINS, AUTO_SHOPIFY_WINS, AUTO_MERGE, MANUAL |
resolved_by | UUID | No | FK to users table – user who manually resolved (null for auto-resolution) |
resolved_at | DateTime | Yes | Timestamp of resolution |
created_at | DateTime | Yes | Conflict detection timestamp |
Conflict Resolution Sequence
sequenceDiagram
autonumber
participant SH as Shopify Webhook
participant GW as Webhook Gateway
participant API as POS Sync Engine
participant DB as POS Database
participant Q as Conflict Review Queue
SH->>GW: Product Updated Webhook (HMAC-SHA256 signed)
GW->>GW: Verify HMAC Signature
GW->>API: Forward Verified Payload
API->>API: Parse Changed Fields from Webhook Body
API->>DB: Fetch Current POS Product State
loop For Each Changed Field
API->>API: Determine Field Ownership Category
alt POS-Owned Field Changed in Shopify
API->>DB: Log Conflict Audit (resolution = AUTO_POS_WINS)
API->>SH: Push POS Value Back to Shopify (GraphQL Mutation)
else Shopify-Owned Field
API->>DB: Update POS Record (read-only reference copy)
else Configurable Field (Bidirectional Mode)
API->>DB: Check POS last_modified Timestamp
alt POS Also Changed Same Field (within sync window)
API->>DB: POS Wins -- Log Conflict Audit
API->>SH: Push POS Value to Shopify
else Only Shopify Changed
API->>DB: Accept Shopify Value into POS
end
end
end
opt Manual Review Flagged
API->>Q: Add to Admin Review Queue
API->>DB: Set conflict status = PENDING_REVIEW
end
Business Rules:
- Configurable fields in bidirectional mode: POS value wins automatically. The overridden Shopify value is preserved in the conflict audit table.
- Non-conflicting changes (different fields modified on each side) merge automatically with no conflict entry created.
- Optional manual review queue: admin dashboard shows pending conflicts flagged for human review. Staff with ADMIN or OWNER role can override auto-resolution.
- Conflict audit records are retained for 12 months for compliance and debugging, then archived.
- Conflicts exceeding 10 per product per day trigger an alert to the admin dashboard (possible integration loop).
6.3.4 Sync Constraints & Technical Notes
| Constraint | Limit | Handling Strategy |
|---|---|---|
| Max variants per product | 100 (Shopify hard limit) | Products exceeding 100 variants are flagged and excluded from Shopify sync with admin notification. Tracked in Sync Coverage report. |
| Max option dimensions | 3 (Shopify hard limit) | POS supports 3 dimensions, aligned with Shopify. Products with >3 dimensions cannot be synced. |
| API rate limit (REST) | 40 req bucket (regular) / 400 req bucket (Plus) | Request queue with leaky bucket tracking; batch operations throttled to stay within budget. |
| API rate limit (GraphQL) | 50 points/sec (regular) / 500 points/sec (Plus) | Query cost pre-calculation before execution; throttle queue when approaching limit. |
| Webhook reliability | HMAC-SHA256 signature verification required | Idempotent handlers with deduplication key. Dead letter queue for failed processing. |
| Product title max length | 255 characters (Shopify limit) | Truncate with ellipsis and log warning. |
| Tag max count | 250 tags per product (Shopify limit) | Excess tags dropped with admin notification. |
Image Sync:
- POS sends the primary image on first product publish only.
- All subsequent image management is performed in Shopify.
- Additional images added in Shopify are pulled to POS as read-only references for display in the POS UI.
- Image URLs from Shopify CDN are stored – images are NOT downloaded to POS storage.
Inventory Sync:
- Inventory sync is ALWAYS bidirectional regardless of catalog sync mode.
- Sales on Shopify decrement POS inventory. POS sales decrement Shopify inventory.
- Uses the Shopify Inventory API (inventorySetQuantities GraphQL mutation) with location-level granularity.
- Each POS physical location maps 1:1 to a Shopify location.
Sync Timing:
- Real-time for individual changes (webhook-driven + API push).
- Batch processing for bulk operations (imports, stock takes) queued and processed during off-peak hours.
- Periodic reconciliation job runs every 15 minutes to detect and correct drift.
- Daily full reconciliation at 02:00 local time as a safety net.
Variant Limit Handling:
- Products exceeding 100 variants or 3 option dimensions are automatically excluded from Shopify sync.
- Admin receives notification: “Product [SKU] excluded from Shopify sync: exceeds variant limits.”
- Excluded products are tracked in the Sync Coverage report with exclusion reason.
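The exclusion check above amounts to a single predicate against Shopify's hard limits. A minimal Python sketch (the function name and return shape are illustrative, not part of the spec):

```python
def shopify_sync_eligible(variant_count, option_dimensions):
    """Check a product against Shopify's hard limits (100 variants, 3 option dimensions).

    Returns (eligible, exclusion_reason). Excluded products are flagged for the
    Sync Coverage report and skipped by the sync engine -- never deleted.
    """
    if variant_count > 100:
        return False, "exceeds variant limit (max 100)"
    if option_dimensions > 3:
        return False, "exceeds option dimension limit (max 3)"
    return True, None
```

A product with 120 variants would return `(False, "exceeds variant limit (max 100)")`, which feeds the admin notification and the Sync Coverage exclusion reason.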
6.3.5 Reports: Shopify Integration
| Report | Purpose | Key Data Fields |
|---|---|---|
| Sync Status Dashboard | Monitor overall sync health in real-time | Products synced, products pending, products failed, last sync time, error count (24h), average sync latency (ms) |
| Conflict Log | Review and resolve sync conflicts | Product SKU, field name, POS value, Shopify value, resolution method, timestamp, resolved by |
| Sync Coverage | Identify products NOT synced to Shopify | Product SKU, product name, exclusion reason (exceeds limits / manually excluded / new / sync error), action needed |
| Sync Activity Log | Detailed sync event history for troubleshooting | Product SKU, direction (POS->Shopify / Shopify->POS), fields synced, timestamp, duration (ms), result (success/failed/partial) |
Report Access: All reports available in Admin Portal under Integrations > Shopify. Exportable to CSV. Sync Status Dashboard refreshes every 60 seconds.
6.3.6 GraphQL API Preference & Query Cost Model
Shopify recommends GraphQL over REST for all new development. The Nexus POS integration MUST use the Shopify GraphQL Admin API as the primary interface.
Rationale:
- GraphQL returns only requested fields, reducing payload size and bandwidth.
- Single request can fetch related resources (product + variants + images) without multiple round trips.
- GraphQL supports bulk operations not available in REST.
- Shopify is actively deprecating REST endpoints in favor of GraphQL.
Query Cost Calculation:
Shopify computes query cost by summing per-field costs before execution: scalar and enum fields are free, object fields cost one point, and connection fields cost in proportion to the number of objects requested (the first/last argument). Cost therefore scales with the number of objects a query touches, not the number of scalar fields selected.
Each query returns an extensions.cost object with requestedQueryCost, actualQueryCost, and throttleStatus (including currentlyAvailable points).
Query Cost Budget:
| Plan | Points per Second | Max Single Query Cost | Restore Rate |
|---|---|---|---|
| Regular (Development/Basic/Shopify/Advanced) | 50 | 1,000 | 50 points/sec |
| Shopify Plus | 500 | 10,000 | 500 points/sec |
Common Operation Costs:
| Operation | Estimated Cost (points) | Notes |
|---|---|---|
| Fetch single product with variants | 12-15 | Depends on variant count |
| Fetch 50 products (list) | 50-80 | Paginated with cursor |
| Update single product | 10 | productUpdate mutation |
| Update inventory at location | 10 | inventorySetQuantities mutation |
| Fetch 250 inventory levels | 30-50 | inventoryItems query with locations |
| Create product with 5 variants | 15-20 | productCreate mutation |
| Bulk product query (JSONL) | 10 (submit) | Actual processing is async |
Implementation Rules:
- All Shopify API calls MUST use the GraphQL Admin API (version 2025-04 or later).
- The REST API is permitted only for endpoints not yet available in GraphQL (e.g., certain carrier service endpoints).
- Every query MUST request the extensions.cost field to monitor budget consumption.
- The sync engine MUST track currentlyAvailable points and pause requests when below 20% of the maximum budget.
- Query complexity MUST be pre-estimated before execution; queries exceeding 80% of the max single query cost must be split.
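The 20% pause rule can be enforced directly from the throttleStatus that accompanies every GraphQL response. A sketch, assuming the response's extensions.cost object has been parsed into a dict (the should_pause helper is illustrative):

```python
def should_pause(cost_extensions, pause_threshold=0.20):
    """Return True when the remaining GraphQL cost budget falls below the threshold.

    `cost_extensions` is the extensions.cost object Shopify attaches to every
    GraphQL response: requestedQueryCost, actualQueryCost, and throttleStatus
    with maximumAvailable / currentlyAvailable / restoreRate.
    """
    status = cost_extensions["throttleStatus"]
    remaining_fraction = status["currentlyAvailable"] / status["maximumAvailable"]
    return remaining_fraction < pause_threshold

# Example extensions.cost fragment (shape follows Shopify's response format;
# the numbers are illustrative for a regular-plan store):
cost = {
    "requestedQueryCost": 15,
    "actualQueryCost": 12,
    "throttleStatus": {
        "maximumAvailable": 1000.0,
        "currentlyAvailable": 150.0,
        "restoreRate": 50.0,
    },
}
```

With 150 of 1,000 points remaining (15%), `should_pause(cost)` returns True and the sync engine holds queued requests until the restore rate refills the budget.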
6.3.7 Idempotency Directive
Starting with API version 2026-04, Shopify requires the @idempotent directive on all mutations to prevent duplicate operations during retries and network failures.
Implementation:
All Shopify mutations issued by the POS sync engine MUST include an idempotencyKey parameter:
idempotency:
key_generation:
algorithm: SHA-256
input_template: "{tenant_id}:{mutation_name}:{entity_id}:{timestamp_bucket}"
timestamp_bucket: 5_minutes # Floor timestamp to 5-minute windows
retry_behavior:
same_key_returns: cached_response
cache_ttl: 60_minutes # Shopify caches idempotent responses for 60 min
example:
tenant_id: "t_abc123"
mutation: "productUpdate"
entity_id: "gid://shopify/Product/789"
timestamp_bucket: "2026-02-17T14:30" # Floored to 5-min boundary
key: "SHA256(t_abc123:productUpdate:gid://shopify/Product/789:2026-02-17T14:30)"
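The key derivation above can be sketched in a few lines; the helper name is illustrative, but the input template and 5-minute flooring follow the spec block:

```python
import hashlib
from datetime import datetime

BUCKET_MINUTES = 5

def idempotency_key(tenant_id, mutation_name, entity_id, ts):
    """Derive the SHA-256 idempotency key from the documented template.

    The timestamp is floored to a 5-minute bucket so retries of the same
    logical operation share a key, while a later edit of the same entity
    falls into a new bucket and gets a fresh key.
    """
    floored = ts.replace(minute=ts.minute - ts.minute % BUCKET_MINUTES,
                         second=0, microsecond=0)
    bucket = floored.strftime("%Y-%m-%dT%H:%M")
    payload = f"{tenant_id}:{mutation_name}:{entity_id}:{bucket}"
    return hashlib.sha256(payload.encode()).hexdigest()
```

Two calls at 14:31 and 14:34 floor to the same 14:30 bucket and therefore produce identical keys; a call at 14:36 falls into the 14:35 bucket and gets a new key.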
Business Rules:
- Every mutation wrapper function MUST generate and attach an idempotency key before submission.
- If a mutation fails with a network timeout, the sync engine retries with the SAME idempotency key. Shopify returns the cached result if the original mutation succeeded.
- Idempotency keys are logged in the Integration Sync Log for audit and debugging.
- The 5-minute timestamp bucket prevents accidental deduplication of legitimate rapid updates (e.g., price change followed by description change within seconds).
6.3.8 Bulk Operations API
For large data syncs (initial catalog onboarding, full inventory reconciliation, daily product exports), the POS system uses Shopify’s Bulk Operations API instead of individual GraphQL queries.
Bulk Operation Lifecycle:
sequenceDiagram
autonumber
participant POS as POS Sync Engine
participant GQL as Shopify GraphQL API
participant S3 as Shopify Storage (JSONL)
POS->>GQL: Submit bulkOperationRunQuery / bulkOperationRunMutation
GQL-->>POS: Return operation ID + status: CREATED
loop Poll for Completion (every 10 seconds)
POS->>GQL: query { currentBulkOperation { status url } }
GQL-->>POS: status: RUNNING / COMPLETED / FAILED
end
alt Operation Completed
GQL-->>POS: status: COMPLETED, url: "https://storage.shopify.com/result.jsonl"
POS->>S3: Download JSONL Result File
S3-->>POS: Stream JSONL Data
POS->>POS: Parse JSONL, Update Local Database
else Operation Failed
GQL-->>POS: status: FAILED, error details
POS->>POS: Log Error, Queue for Retry
end
Concurrency Limits:
| Operation Type | Max Concurrent | Scope |
|---|---|---|
| QUERY (bulk export) | 5 | Per app per shop |
| MUTATION (bulk import) | 5 | Per app per shop |
| Combined | 10 | 5 queries + 5 mutations simultaneously |
Use Cases:
| Use Case | Operation Type | Frequency | Estimated Duration |
|---|---|---|---|
| Initial catalog sync (new tenant onboarding) | QUERY (export all products from Shopify) | Once | 2-15 min depending on catalog size |
| Full inventory reconciliation | QUERY + MUTATION | Daily at 02:00 | 5-30 min |
| Daily product export to Shopify | MUTATION (staged uploads) | Daily at 03:00 | 5-20 min |
| Price book sync | MUTATION | On demand | 1-10 min |
| Variant bulk update | MUTATION | On demand | 1-10 min |
Implementation Rules:
- Bulk operations MUST be used for any sync involving more than 50 products or 100 inventory levels.
- Individual GraphQL queries are used for real-time, single-entity syncs (e.g., sale completed, product updated).
- JSONL results MUST be streamed (not loaded fully into memory) to handle large catalogs.
- Failed bulk operations are retried up to 3 times with 5-minute intervals before alerting admin.
- Bulk operation results are stored locally for 7 days for debugging.
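The streaming requirement above can be met with a line-by-line generator. A minimal sketch (the helper name is illustrative; in production, `lines` would be a streaming HTTP response body rather than an in-memory list):

```python
import json

def stream_jsonl(lines):
    """Parse a bulk-operation JSONL result one record at a time.

    `lines` is any iterable of text lines, so the full result file is never
    held in memory -- the rule above for large catalogs. Blank lines are
    skipped; each non-blank line is a standalone JSON document.
    """
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)
```

Because the generator yields records lazily, the local-database update loop can process a multi-gigabyte catalog export with constant memory usage.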
6.3.9 POS UI Extensions
Shopify POS provides extension points for embedded apps. The Nexus POS integration leverages these APIs for enhanced in-store experiences.
Supported POS Extension APIs:
| API | Minimum Version | Purpose | Nexus Usage |
|---|---|---|---|
| Camera API | POS v10.0+ | Access device camera for barcode scanning | Scan product barcodes within embedded Nexus app view on Shopify POS hardware |
| Translation API | POS v10.19+ | Multi-language support for POS UI | Support bilingual staff (English/Spanish) at retail locations |
| Session Token API | POS v9.0+ | Authenticate embedded app sessions | Secure communication between Nexus embedded app and POS backend without separate login |
| Cart API | POS v10.0+ | Read and modify the active POS cart | Inject custom line items, apply Nexus-managed discounts |
| Customer API | POS v10.0+ | Access customer data in POS context | Display unified customer profile with cross-channel purchase history |
Session Token Authentication:
session_token:
flow: "Shopify POS -> Embedded App"
mechanism: JWT
issued_by: Shopify
validated_by: POS Backend (verify HS256 signature with the app's shared secret)
claims:
- iss: "https://{shop}.myshopify.com/admin"
- dest: "https://{shop}.myshopify.com"
- sub: "{staff_member_id}"
- exp: "{expiry_timestamp}"
token_refresh: automatic (before expiry)
no_separate_login: true # Staff authenticates via Shopify POS PIN
Business Rules:
- The Nexus POS app MUST function as a Shopify POS UI extension when deployed alongside Shopify POS hardware.
- Session tokens replace API key authentication for all POS-embedded interactions.
- Camera API usage requires explicit permission grant during app installation.
- Translation API strings are managed in the Nexus localization system and pushed to the extension.
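After the JWT signature is verified, the claim checks above reduce to a few comparisons. A sketch of the claim-validation step only (the helper name and error strings are illustrative, and signature verification is assumed to have already happened):

```python
import time

def validate_session_claims(claims, shop_domain, now=None):
    """Check decoded session-token claims after signature verification.

    Claim names (iss, dest, exp) follow the session_token block above:
    dest must match the shop, and iss is the shop's admin URL, so iss
    always starts with dest for a legitimate token.
    """
    now = time.time() if now is None else now
    if claims["exp"] <= now:
        return False, "token expired"
    if claims["dest"] != f"https://{shop_domain}":
        return False, "destination mismatch"
    if not claims["iss"].startswith(claims["dest"]):
        return False, "issuer does not match destination"
    return True, None
```

The `sub` claim (staff member ID) is then used for sale attribution, which is why no separate login is needed on the POS terminal.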
6.3.10 Rate Limits 2026
Shopify enforces rate limits across all API surfaces. The POS sync engine MUST respect these limits to avoid request rejection (HTTP 429) and potential app throttling.
Rate Limit Summary:
| API Type | Regular Store | Shopify Plus | Burst Capacity | Leak Rate / Restore |
|---|---|---|---|---|
| REST Admin API | 40-request bucket | 400-request bucket | Full bucket available immediately | 2 req/sec (regular), 20 req/sec (Plus) |
| GraphQL Admin API | 50 points/sec | 500 points/sec | N/A (cost-based) | Cost-based, restored continuously |
| Bulk Operations | 5 concurrent queries + 5 concurrent mutations | Same | N/A | Poll-based completion |
| Storefront API | 100 req/sec per app per store | Same | N/A | Fixed rate |
| Webhook Delivery | No outbound limit from Shopify | Same | N/A | Must respond within 5 seconds |
Throttle Handling Strategy:
rate_limit_strategy:
monitoring:
track_remaining_points: true # From X-Shopify-Shop-Api-Call-Limit header (REST) or extensions.cost (GraphQL)
alert_threshold: 20_percent # Alert when remaining capacity drops below 20%
backoff:
initial_delay_ms: 500
max_delay_ms: 30000
multiplier: 2.0
jitter: true # Add random jitter to prevent thundering herd
queue:
max_queue_size: 1000
priority_levels:
- CRITICAL: inventory_updates, order_syncs # Process first
- NORMAL: product_updates, customer_syncs
- LOW: bulk_exports, reconciliation
overflow_action: reject_with_retry_after
Business Rules:
- All API calls MUST check remaining rate limit budget before execution.
- When rate limited (HTTP 429), the sync engine MUST use the Retry-After header value for backoff timing.
- Critical operations (inventory updates after sales) receive priority queue placement over non-critical operations (product description syncs).
- Rate limit consumption is logged per minute for capacity planning and plan upgrade recommendations.
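The backoff parameters above translate directly into a delay function. A sketch (the function name is illustrative; a Retry-After header, when present on an HTTP 429, should take precedence over the computed delay):

```python
import random

def backoff_delay_ms(attempt, initial=500, maximum=30000, multiplier=2.0, jitter=True):
    """Compute the retry delay for a 0-based attempt number.

    Implements the strategy above: exponential growth from 500 ms, capped
    at 30 s. Jitter picks a uniform delay in [0, computed] so that many
    workers rate-limited at the same moment do not retry in lockstep
    (the thundering-herd problem the config calls out).
    """
    delay = min(initial * (multiplier ** attempt), maximum)
    return random.uniform(0, delay) if jitter else delay
```

Without jitter the sequence is 500, 1000, 2000, 4000, ... ms, reaching the 30 s cap by the seventh retry.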
6.3.11 Webhook Topics Catalog
The POS system registers for the following Shopify webhook topics. All webhooks are verified using HMAC-SHA256 before processing.
Registered Webhook Topics:
| Topic | Direction | Trigger | POS Action |
|---|---|---|---|
| orders/create | Shopify -> POS | Customer places online order | Create fulfillment task at assigned store. Reserve inventory. Notify store staff. |
| orders/updated | Shopify -> POS | Order modified (address change, note added) | Update local order record. Refresh fulfillment queue. |
| orders/cancelled | Shopify -> POS | Online order cancelled | Release inventory reservation. Remove from fulfillment queue. Log cancellation. |
| orders/fulfilled | Shopify -> POS | Order marked fulfilled (may be by third-party) | Update local order status. Confirm inventory decrement. |
| orders/partially_fulfilled | Shopify -> POS | Partial shipment completed | Update line-item fulfillment status. Adjust remaining reservation. |
| products/create | Shopify -> POS | Product created in Shopify Admin | Import as read-only reference (bidirectional mode). Ignore in POS-Master mode. |
| products/update | Shopify -> POS | Product fields modified in Shopify | Apply field-ownership conflict resolution (Section 6.3.3). |
| products/delete | Shopify -> POS | Product deleted from Shopify | Flag local product as SHOPIFY_DELETED. Do NOT delete from POS. Admin notification. |
| inventory_levels/update | Shopify -> POS | Inventory adjusted in Shopify Admin or by another app | Sync adjusted quantity to POS at corresponding location. Log as EXTERNAL_ADJUSTMENT. |
| inventory_levels/connect | Shopify -> POS | Product stocked at a new Shopify location | Create location-level inventory record in POS if location is mapped. |
| inventory_levels/disconnect | Shopify -> POS | Product removed from a Shopify location | Set POS inventory to zero at that location. Flag for admin review. |
| customers/create | Shopify -> POS | New customer registers online | Create or link customer profile in POS. Merge if email/phone matches existing. |
| customers/update | Shopify -> POS | Customer profile updated | Sync updated fields to POS customer record (email, phone, address). |
| refunds/create | Shopify -> POS | Online order refunded | Process refund in POS. Increment inventory if items restocked. Update order status. |
| app/uninstalled | Shopify -> POS | Merchant uninstalls Nexus app | Disable all sync operations. Preserve local data. Alert admin. Set integration status to DISCONNECTED. |
| shop/update | Shopify -> POS | Shop settings changed (currency, timezone, name) | Update cached shop metadata. Validate currency alignment. |
| bulk_operations/finish | Shopify -> POS | A bulk operation completes | Download and process JSONL result file. Update sync status. |
Webhook Security & Reliability:
| Parameter | Value |
|---|---|
| Verification method | HMAC-SHA256 using app secret as key |
| Header | X-Shopify-Hmac-Sha256 (Base64-encoded) |
| Mandatory response time | < 5 seconds (return HTTP 200/201/202) |
| Shopify retry policy | 19 retries over 48 hours with exponential backoff |
| Failure threshold | After 19 failed deliveries, webhook is automatically removed by Shopify |
| Deduplication | X-Shopify-Webhook-Id header used as idempotency key for handler deduplication |
| Processing model | Acknowledge immediately (HTTP 200), process asynchronously via background job queue |
Business Rules:
- Webhook handlers MUST return HTTP 200 within 5 seconds. All processing happens asynchronously after acknowledgment.
- Every incoming webhook is deduplicated using X-Shopify-Webhook-Id before queuing for processing.
- Failed webhook processing (after acknowledgment) is retried 3 times internally before moving to the dead letter queue.
- Webhook registrations are verified daily; any missing registrations are automatically re-created.
- All webhook payloads are logged (with PII redaction) for 30 days for debugging.
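The HMAC verification step can be implemented with the standard library alone; the header name and algorithm follow the security table above, while the function name is illustrative:

```python
import base64
import hashlib
import hmac

def verify_shopify_webhook(raw_body, header_hmac, app_secret):
    """Verify the X-Shopify-Hmac-Sha256 header against the raw request body.

    Shopify signs the raw (un-parsed) body with HMAC-SHA256 keyed by the app
    secret and Base64-encodes the digest. compare_digest performs a
    constant-time comparison to prevent timing attacks. Verification must
    run BEFORE the 5-second acknowledgment and any queuing.
    """
    digest = hmac.new(app_secret.encode(), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, header_hmac)
```

Note the digest is computed over the raw bytes of the request body; re-serializing parsed JSON would change whitespace and break verification.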
6.3.12 Third-Party POS Integration Rules
The Nexus POS is a non-native, custom POS system connecting to Shopify as a third-party application. This imposes specific compliance requirements and architectural constraints that differ from Shopify’s own native POS product.
Integration Architecture:
integration_type: third_party_pos
authentication: OAuth 2.0 (mandatory -- no API key bypass)
app_listing: Shopify App Store (or custom/unlisted app for private deployment)
data_authority: POS is source of truth for products and inventory
checkout_model: Must NOT bypass standard Shopify checkout for online orders
Compliance Requirements:
| Requirement | Description | Implementation |
|---|---|---|
| OAuth Authentication | All API access MUST use OAuth 2.0 access tokens obtained through Shopify’s authorization flow. Direct API key authentication is not permitted for production apps. | Implement OAuth install flow with PKCE. Store encrypted access tokens in Integration Credentials table (Section 6.2). |
| App Security | App MUST follow Shopify’s mandatory security requirements including HTTPS, secure credential storage, and regular security audits. | TLS 1.2+ for all API calls. AES-256 encryption for stored tokens. Annual security review. |
| Checkout Integrity | Third-party POS MUST NOT bypass Shopify’s standard checkout process for online orders. In-store transactions process through the POS payment system. | Online orders flow through Shopify checkout. POS handles in-store payments via its own payment integration (Module 1, Section 1.18). |
| Data Overwrite Awareness | Data pushed from POS potentially overwrites Shopify data. The field ownership model (Section 6.3.2) controls which fields POS is authorized to modify. | Sync engine enforces field ownership before every write operation. Shopify-owned fields are never overwritten by POS. |
| Real-Time Integration | Must support real-time API integration to avoid data delays that cause overselling or price mismatches. | Webhook-driven architecture with < 5 second latency target. Scheduled reconciliation as safety net. |
| Partner Program Compliance | Must comply with Shopify Partner Program requirements including app review, privacy policy, and terms of service. | Maintain active Shopify Partner account. Submit app for review before merchant deployment. |
| Data Privacy | Must handle customer data according to Shopify’s data protection requirements. Implement data deletion webhooks. | Process customers/data_request, customers/redact, and shop/redact mandatory compliance webhooks. |
| API Versioning | Must use a supported API version (within 12 months of release). Deprecated versions result in app rejection. | Track Shopify API version calendar. Update to latest stable version within 6 months of release. Test against release candidate versions. |
Compliance Checklist:
| # | Item | Status | Verification Method |
|---|---|---|---|
| 1 | OAuth 2.0 implementation with PKCE | Required | Shopify app review |
| 2 | HTTPS on all endpoints (TLS 1.2+) | Required | SSL certificate check |
| 3 | HMAC webhook verification on all webhook handlers | Required | Code review |
| 4 | Mandatory compliance webhooks implemented (customers/data_request, customers/redact, shop/redact) | Required | Shopify app review |
| 5 | Access token encryption at rest (AES-256) | Required | Security audit |
| 6 | No Shopify checkout bypass for online orders | Required | Functional test |
| 7 | Field ownership enforcement (no unauthorized overwrites) | Required | Integration test suite |
| 8 | API version within supported window | Required | Automated version check |
| 9 | Privacy policy URL configured in app settings | Required | Shopify Partner Dashboard |
| 10 | App terms of service URL configured | Required | Shopify Partner Dashboard |
| 11 | Rate limit compliance (no brute-force retry loops) | Required | Log analysis |
| 12 | Idempotency keys on all mutations | Required (2026-04+) | Code review |
6.3.13 Shopify Sync Rules & Best Practices
This section documents the operational rules and best practices that govern the day-to-day Shopify integration. These rules reflect both Shopify platform requirements and Nexus POS architectural decisions.
Single Source of Truth
- POS is the inventory master (consistent with ADR #24: POS-Master default sync mode).
- Product data (titles, descriptions, variants, pricing) syncs FROM POS TO Shopify in the default mode.
- Inventory quantities ALWAYS sync bidirectionally regardless of catalog sync mode. This is non-negotiable – Shopify online sales must decrement POS inventory in real time.
Location Configuration
Every POS physical location maps 1:1 to a Shopify location. Online sales decrement inventory at the fulfillment location, NOT a global pool.
Location Mapping Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants table |
| pos_location_id | UUID | Yes | FK to locations table (POS location) |
| shopify_location_id | String(50) | Yes | Shopify location GID (e.g., gid://shopify/Location/12345) |
| shopify_location_name | String(100) | Yes | Shopify location display name (cached) |
| is_fulfillment_location | Boolean | Yes | Whether this location fulfills online orders (default: true) |
| is_active | Boolean | Yes | Whether sync is active for this location (default: true) |
| last_synced_at | DateTime | No | Last successful inventory sync for this location |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Every active POS location MUST have a corresponding Shopify location before inventory sync is enabled.
- Location mapping is configured during tenant onboarding (Step 12 of onboarding wizard, Module 5, Section 5.20).
- If a POS location has no Shopify mapping, inventory changes at that location are NOT synced to Shopify. A warning is displayed in the Integration Health Dashboard.
- “Track Inventory” MUST be enabled for all synced products in Shopify. The sync engine verifies this during reconciliation and enables tracking automatically if missing.
Real-Time Sync Requirements
| Sync Event | Latency Target | Method |
|---|---|---|
| In-store sale completed -> Shopify inventory update | < 5 seconds | Direct GraphQL mutation (fire-and-forget with retry) |
| Shopify online order -> POS fulfillment queue | < 10 seconds | Webhook receipt + async processing |
| Product update in POS -> Shopify product update | < 30 seconds | Queued GraphQL mutation (batched for rate limit efficiency) |
| Customer profile change -> cross-system sync | < 60 seconds | Queued mutation (lower priority) |
| Full inventory reconciliation | Every 15 minutes | Scheduled comparison job |
| Full catalog reconciliation | Daily at 02:00 | Bulk operations API |
Omnichannel Requirements
Customer Profile Unification:
- In-store purchases are linked to the same Shopify customer profile using email or phone match.
- Customer merge logic (Module 2, Section 2.2) handles deduplication when an online customer first visits a physical store.
- Staff can view a customer’s complete purchase history (online + in-store) from the POS terminal.
Cross-Channel Returns:
- Items bought online can be returned in-store. The POS retrieves the Shopify order and processes the return locally.
- Items bought in-store can be returned via Shopify online return flow (if enabled by the retailer).
- Return policy engine (Module 1, Section 1.3) validates return eligibility regardless of purchase channel.
BOPIS (Buy Online, Pick Up In-Store):
sequenceDiagram
autonumber
participant C as Customer
participant SF as Shopify Storefront
participant WH as Shopify Webhook
participant POS as POS Backend
participant ST as Store Staff
participant N as Notification Service
C->>SF: Place Order (select "Local Pickup" at checkout)
SF->>SF: Create Order (fulfillment_status: unfulfilled, delivery_method: local_pickup)
SF->>WH: Emit orders/create webhook
WH->>POS: Deliver orders/create payload (HMAC verified)
POS->>POS: Identify assigned pickup location
POS->>POS: Reserve inventory at pickup location
POS->>ST: Display order on Fulfillment Queue (tagged: PICKUP)
POS->>N: Send "Order Ready for Prep" email to customer
ST->>POS: Pick items, mark as READY_FOR_PICKUP
POS->>N: Send "Your Order is Ready for Pickup" email to customer
POS->>SF: Update Shopify order (fulfillment: ready_for_pickup)
C->>ST: Arrive at store, present order confirmation
ST->>POS: Scan order barcode or search by order number
POS->>POS: Verify customer identity
ST->>POS: Mark as PICKED_UP (complete fulfillment)
POS->>SF: Update Shopify order (fulfillment_status: fulfilled)
POS->>POS: Decrement inventory (SALE movement type)
POS->>N: Send "Pickup Confirmed" email to customer
BOPIS Business Rules:
- BOPIS orders appear in the store's fulfillment queue with a PICKUP tag for visual distinction.
- The pickup window is configurable per location (default: 48 hours). After expiry, staff is notified and the order may be cancelled.
- Inventory is reserved (not decremented) until customer picks up. If pickup expires and order is cancelled, reservation is released.
- Customer receives three notifications: order confirmed, ready for pickup, pickup completed.
Staff & Security
| Requirement | Implementation |
|---|---|
| Staff PINs | 4-6 digit PINs for POS terminal access and sale attribution (Module 5, Section 5.5) |
| OAuth authentication | All Shopify API access uses OAuth tokens – no API key shortcuts in production |
| Cycle counts | Weekly recommended, monthly minimum. Keeps inventory accurate even with real-time auto-sync. Discrepancies logged and investigated. |
| Audit trail | Every sync operation, conflict resolution, and manual override is logged with user ID and timestamp |
| Token rotation | Access tokens are monitored for expiry. Re-authorization flow triggered 30 days before token expiration. |
6.3.14 Inventory Sync Triggers
Inventory quantities sync bidirectionally between the POS system and Shopify, regardless of catalog sync mode. This ensures online customers see accurate availability at all times.
Cross-Reference: See Module 4, Section 4.12 for the POS inventory reservation model (reserve on cart add, commit on sale complete). See Module 4, Section 4.14 for online order fulfillment and pick-pack-ship workflows.
POS -> Shopify Sync Triggers:
| POS Event | Shopify Update | Mutation Used |
|---|---|---|
| Sale completed | Decrement inventory at sale location | inventorySetQuantities |
| Return processed | Increment inventory at return location | inventorySetQuantities |
| Purchase order received | Increment inventory at receiving location | inventorySetQuantities |
| Inventory adjustment posted | Adjust inventory at location (increment or decrement) | inventorySetQuantities |
| Transfer completed | Decrement source location, increment destination location | inventorySetQuantities (two calls) |
| Stock count finalized | Set inventory to physical count result at location | inventorySetQuantities |
| Reservation expired | Release reserved quantity (no Shopify update – reservation is POS-only) | None |
Shopify -> POS Sync Triggers:
| Shopify Event | POS Update | Webhook Topic |
|---|---|---|
| Online order placed | Decrement available qty at assigned fulfillment store | orders/create |
| Online order cancelled | Release reservation; increment available qty | orders/cancelled |
| Online return processed | Increment available qty at return location | refunds/create |
| Manual Shopify adjustment | Sync to POS with EXTERNAL_ADJUSTMENT movement type | inventory_levels/update |
| Product stocked at new location | Create location-level inventory record | inventory_levels/connect |
| Product removed from location | Zero out inventory at that location | inventory_levels/disconnect |
Sync Architecture Parameters:
| Parameter | Value |
|---|---|
| Sync method | Webhook-driven (near real-time) + scheduled reconciliation |
| Webhook latency target | < 5 seconds from event to POS database update |
| Reconciliation frequency | Every 15 minutes (incremental) + daily at 02:00 (full) |
| Conflict resolution | POS is source of truth; POS value wins on discrepancy |
| Retry on failure | 3 retries with exponential backoff (5s, 15s, 45s) |
| Dead letter queue | Failed syncs after 3 retries queued for manual review |
| Reconciliation tolerance | Zero tolerance: any non-zero difference triggers correction |
Business Rule: Inventory sync does NOT depend on catalog sync mode. Even if a tenant uses POS-Master catalog sync (where POS owns product data), inventory quantities always flow bidirectionally. The POS system is the authoritative source for inventory levels – if a discrepancy is detected during reconciliation, the POS value overwrites the Shopify value.
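The zero-tolerance, POS-wins rule can be sketched as a simple comparison over per-location quantity maps (the data shapes and helper name are illustrative; in production the corrections would be pushed via inventorySetQuantities):

```python
def reconcile(pos_levels, shopify_levels):
    """Compare per-location inventory and emit corrections (POS wins).

    Both inputs map (sku, location_id) -> quantity. Zero tolerance: any
    difference, including a level missing on the Shopify side, produces a
    correction that sets Shopify to the POS value.
    """
    corrections = []
    for key, pos_qty in pos_levels.items():
        shopify_qty = shopify_levels.get(key)
        if shopify_qty != pos_qty:
            corrections.append({"key": key, "set_to": pos_qty, "was": shopify_qty})
    return corrections
```

The 15-minute incremental job would run this over recently changed SKUs; the daily 02:00 job runs it over the full catalog via the Bulk Operations API.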
6.3.15 Shopify Hardware Compatibility
When the Nexus POS is deployed alongside or integrated with Shopify POS hardware, the following devices are compatible. This section covers hardware that can be shared between Shopify POS and the Nexus POS application.
Card Readers:
| Device | Connection | Shopify POS | Nexus POS | Notes |
|---|---|---|---|---|
| Shopify Tap & Chip Reader | Bluetooth | Yes | No (uses own payment integration) | Shopify-exclusive; cannot process payments for third-party POS |
| Shopify POS Terminal (Chipper 2X BT) | Bluetooth | Yes | No | Shopify-exclusive hardware |
| WisePOS E (Stripe Terminal) | Internet / USB | No | Yes | Recommended for Nexus POS payment processing |
| BBPOS Chipper 2X BT | Bluetooth | Yes | Via Stripe SDK | Dual-compatible with configuration |
Note: Shopify-branded card readers process payments exclusively through Shopify Payments. The Nexus POS uses its own payment integration (Module 1, Section 1.18) with Stripe Terminal or equivalent semi-integrated devices. Card readers are NOT shared between the two POS systems.
Receipt Printers:
| Device | Connection | Protocol | Shopify POS | Nexus POS | Notes |
|---|---|---|---|---|---|
| Star Micronics TSP143IV | USB / LAN / Bluetooth | StarPRNT | Yes | Yes | Recommended. Shared between both POS systems. |
| Star Micronics mPOP | Bluetooth | StarPRNT | Yes | Yes | Combined printer + cash drawer. Compact form factor. |
| Star Micronics SM-L200 | Bluetooth | StarPRNT | Yes | Yes | Mobile receipt printer for floor sales. |
| Epson TM-m30III | USB / LAN / Bluetooth | ESC/POS | Yes | Yes | Alternative to Star Micronics. Industry standard protocol. |
| Epson TM-T88VII | USB / LAN | ESC/POS | No (not Shopify certified) | Yes | High-speed thermal printer. Nexus POS only. |
Barcode Scanners:
| Device | Connection | Shopify POS | Nexus POS | Notes |
|---|---|---|---|---|
| Socket Mobile S700 | Bluetooth | Yes | Yes | 1D barcode scanner. Compact, retail-grade. |
| Socket Mobile S740 | Bluetooth | Yes | Yes | 2D barcode scanner (supports QR codes). |
| Shopify Retail Scanner | Bluetooth | Yes | No | Shopify-exclusive accessory. |
| Any HID-compliant USB scanner | USB | Yes | Yes | Generic USB scanners work with both systems via keyboard wedge. |
| Zebra DS2208 | USB | No | Yes | High-performance 2D imager. Nexus POS recommended. |
Cash Drawers:
| Device | Connection | Shopify POS | Nexus POS | Notes |
|---|---|---|---|---|
| Star Micronics Cash Drawer (via mPOP) | Integrated | Yes | Yes | Opens via mPOP printer signal. |
| APG Vasario Cash Drawer | RJ-12 (via printer) | Yes | Yes | Standard cash drawer. Triggered by receipt printer kick signal. |
| Any RJ-12 compatible drawer | RJ-12 (via printer) | Yes | Yes | Opened by ESC/POS or StarPRNT printer kick command. |
Device Compatibility Matrix (Summary):
| Peripheral Type | Shared Between Shopify POS & Nexus POS | Nexus POS Only | Shopify POS Only |
|---|---|---|---|
| Card Readers | None | WisePOS E, Stripe Terminal devices | Shopify Tap & Chip, Chipper 2X BT |
| Receipt Printers | Star TSP143IV, Star mPOP, Star SM-L200, Epson TM-m30III | Epson TM-T88VII | None |
| Barcode Scanners | Socket Mobile S700/S740, USB HID scanners | Zebra DS2208 | Shopify Retail Scanner |
| Cash Drawers | All RJ-12 compatible (via shared printer) | None | None |
Business Rules:
- Receipt printers and cash drawers can be shared between Shopify POS and Nexus POS because they connect via standard protocols (ESC/POS, StarPRNT).
- Card readers are NOT shared – each POS system uses its own payment processing hardware.
- Barcode scanners using USB HID (keyboard wedge) mode work with any POS system that accepts keyboard input.
- Hardware compatibility is validated during tenant onboarding. Incompatible devices are flagged with recommended alternatives.
- The Nexus POS Register Management screen (Module 5, Section 5.7) tracks which peripherals are paired to each register.
6.4 Amazon SP-API Integration
Scope: Integrating the POS system with Amazon’s Selling Partner API (SP-API) to enable multi-channel retail operations across Amazon marketplaces. This section covers authentication, catalog synchronization, listing management, order fulfillment (FBA and FBM), inventory tracking, push notifications, rate limiting, compliance requirements, and reporting.
Cross-Reference: See Module 3, Section 3.7 for Shopify integration patterns (field ownership, conflict resolution, sync modes). See Module 4, Section 4.14 for the pick-pack-ship fulfillment workflow reused by Amazon FBM orders. See Module 5, Section 5.16 for the Integration Hub registry and health dashboard where Amazon is registered as an integration provider.
6.4.1 Authentication & Authorization
Amazon SP-API uses OAuth 2.0 via Login with Amazon (LWA) for all API access. The POS system acts as a registered SP-API application and stores per-tenant seller credentials.
OAuth 2.0 Token Flow:
- Access tokens are valid for 1 hour and must be refreshed using the refresh_token grant type before expiry.
- The POS backend handles all token lifecycle management transparently – store staff and admin users never interact with OAuth directly.
- Regional endpoints determine which Amazon marketplace cluster the API calls target.
Regional Endpoints:
| Region | Endpoint | Marketplaces |
|---|---|---|
| North America (NA) | sellingpartnerapi-na.amazon.com | US (ATVPDKIKX0DER), CA (A2EUQ1WTGCTBG2), MX (A1AM78C64UM0Y8) |
| Europe (EU) | sellingpartnerapi-eu.amazon.com | UK (A1F83G8C2ARO7P), DE (A1PA6795UKMFR9), FR, IT, ES |
| Far East (FE) | sellingpartnerapi-fe.amazon.com | JP (A1VC38T7YXB528), AU (A39IBJ37TRP1C6) |
Amazon Credential Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key, system-generated |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
integration_id | UUID | Yes | FK to integrations table (Module 5, Section 5.16) |
selling_partner_id | String(50) | Yes | Amazon Seller Central account identifier |
marketplace_id | String(20) | Yes | Amazon marketplace identifier (e.g., ATVPDKIKX0DER for US) |
client_id | String(100) | Yes | LWA application client ID |
client_secret_encrypted | String(500) | Yes | AES-256 encrypted LWA client secret. Never returned in API responses. |
refresh_token_encrypted | String(500) | Yes | AES-256 encrypted OAuth refresh token. Long-lived credential. |
access_token_encrypted | String(1000) | No | AES-256 encrypted current access token. Null when expired. |
token_expiry | DateTime | No | Expiration timestamp of the current access token |
region | Enum | Yes | NA, EU, FE – determines API endpoint |
is_active | Boolean | Yes | Whether this Amazon connection is actively processing (default: false) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
OAuth Token Refresh Sequence:
sequenceDiagram
autonumber
participant POS as POS Backend
participant CACHE as Token Cache
participant LWA as Login with Amazon
participant SPAPI as SP-API Endpoint
Note over POS, SPAPI: Token Refresh Flow
POS->>CACHE: Request access_token for tenant
alt Token Valid (expiry > now + 5 min)
CACHE-->>POS: Return cached access_token
else Token Expired or Near-Expiry
POS->>LWA: POST /auth/o2/token
Note right of LWA: grant_type=refresh_token<br/>refresh_token=***<br/>client_id=***<br/>client_secret=***
LWA-->>POS: access_token (1 hour TTL)
POS->>CACHE: Store encrypted access_token + expiry
end
POS->>SPAPI: API Request + Authorization: Bearer {access_token}
alt Success
SPAPI-->>POS: 200 OK + Response Data
else Token Rejected (401)
POS->>LWA: Force refresh token
LWA-->>POS: New access_token
POS->>SPAPI: Retry API Request
SPAPI-->>POS: 200 OK + Response Data
end
Business Rules:
- The POS system refreshes the access token proactively when the current token has less than 5 minutes remaining, avoiding mid-request expiration.
- If the refresh token itself becomes invalid (seller revokes access or Amazon rotates credentials), the integration status transitions to DISCONNECTED and an alert is raised to all ADMIN/OWNER users.
- Each tenant may connect to at most one Amazon Seller Central account per marketplace. Multiple marketplaces within the same region are supported (e.g., US + CA + MX under NA).
- All credentials are encrypted at rest using AES-256. The client_secret_encrypted, refresh_token_encrypted, and access_token_encrypted fields are write-only – API responses redact these to "***".
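The proactive-refresh rule can be sketched as a small helper. This is an illustrative Python fragment: the function name and the buffer constant are ours, not part of the SP-API.

```python
from datetime import datetime, timedelta, timezone

# Refresh proactively when less than 5 minutes remain on the cached token.
REFRESH_BUFFER = timedelta(minutes=5)

def needs_refresh(token_expiry, now=None):
    """Return True when the cached LWA access token should be refreshed.

    A missing token/expiry always triggers a refresh; otherwise the
    5-minute buffer avoids mid-request expiration.
    """
    if token_expiry is None:
        return True
    now = now or datetime.now(timezone.utc)
    return token_expiry - now < REFRESH_BUFFER
```

The token cache would call this before every outbound SP-API request and, on `True`, perform the `refresh_token` grant against LWA before dispatching.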
6.4.2 Catalog Items API
The Catalog Items API resolves Amazon Standard Identification Numbers (ASINs) from POS product identifiers, enabling product matching and listing creation.
Endpoint: GET /catalog/2022-04-01/items
Primary Use Cases:
- Look up existing ASINs by UPC/EAN barcode before creating a new listing.
- Retrieve Amazon product detail pages for enrichment (titles, bullet points, images).
- Validate that a POS product maps to the correct Amazon catalog entry.
Rate Limit: 5 requests per second, burst of 5.
Field Mapping: POS to Amazon Catalog
| POS Field | Amazon Field | Notes |
|---|---|---|
sku | seller_sku | Unique per seller account. POS SKU used as seller_sku unless overridden. |
name | item_name | Max 500 characters for Amazon. Truncated with ellipsis if POS name exceeds limit. |
long_description | product_description | HTML allowed. Max 2,000 characters. |
brand | brand | Required for most Amazon categories. Must match Amazon Brand Registry if enrolled. |
barcode | external_id (UPC/EAN) | Used for ASIN lookup. UPC-A (12 digits) or EAN-13 (13 digits). |
primary_image_url | main_image_url | Minimum 1000x1000px. Pure white background required. No watermarks or text overlays. |
base_price | price.amount | Per-marketplace pricing. Currency determined by marketplace. |
weight | item_weight | Required for FBA. Must include unit (lb, kg, oz, g). |
product_type | product_type | Amazon Browse Node taxonomy. Mapped via Amazon Product Type Definition API. |
color | color_name | Amazon standard color values. Custom colors must map to nearest Amazon standard. |
size | size_name | Amazon standard size values. Apparel uses specific size systems (US, EU, UK). |
material | material_type | Required for certain categories (apparel, jewelry). |
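The truncation and mapping rules in the table above can be sketched as follows. The helper names and dict shapes are illustrative assumptions, not SP-API structures; only the length limits and field pairings come from the table.

```python
# Length limits from the field mapping table above.
AMAZON_LIMITS = {"item_name": 500, "product_description": 2000}

def truncate(value, limit):
    """Truncate with a trailing ellipsis, per the item_name rule."""
    if len(value) <= limit:
        return value
    return value[: limit - 1] + "…"

def map_to_catalog(pos_product):
    """Map a POS product dict onto the Amazon-side fields above.

    `pos_product` is a plain dict keyed by the POS field names; the
    returned dict uses the Amazon field names from the table.
    """
    return {
        "seller_sku": pos_product["sku"],
        "item_name": truncate(pos_product["name"], AMAZON_LIMITS["item_name"]),
        "product_description": truncate(
            pos_product.get("long_description", ""),
            AMAZON_LIMITS["product_description"],
        ),
        "brand": pos_product["brand"],
        "external_id": pos_product.get("barcode"),
    }
```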
ASIN Resolution Workflow:
flowchart TD
A[POS Product Selected for Amazon Listing] --> B{Has UPC/EAN Barcode?}
B -->|Yes| C[Search Catalog Items API by barcode]
B -->|No| D[Search by product name + brand]
C --> E{ASIN Found?}
D --> E
E -->|Yes, Single Match| F[Auto-link ASIN to POS product]
E -->|Yes, Multiple Matches| G[Present matches to admin for selection]
E -->|No Match| H[Create new listing - ASIN assigned by Amazon]
F --> I[Store ASIN in amazon_product_mapping table]
G --> I
H --> I
I --> J[Product ready for listing creation]
Amazon Product Mapping Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
product_id | UUID | Yes | FK to POS products table |
variant_id | UUID | No | FK to POS product_variants table (null for simple products) |
marketplace_id | String(20) | Yes | Amazon marketplace (e.g., ATVPDKIKX0DER) |
asin | String(10) | No | Amazon Standard Identification Number. Null until resolved. |
seller_sku | String(40) | Yes | Seller SKU on Amazon. Defaults to POS SKU. |
fnsku | String(10) | No | Fulfillment Network SKU. Assigned by Amazon for FBA items. |
fulfillment_type | Enum | Yes | FBA, FBM, BOTH – determines fulfillment method |
listing_status | Enum | Yes | DRAFT, ACTIVE, INACTIVE, SUPPRESSED, DELETED |
last_synced_at | DateTime | No | Timestamp of most recent sync with Amazon |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
6.4.3 Listings Items API
The Listings Items API manages the creation, update, and deletion of product listings on Amazon marketplaces. POS serves as the system of record for listing data, pushing changes to Amazon.
Endpoints:
| Operation | Method | Endpoint | Use Case |
|---|---|---|---|
| Create/Update | PUT | /listings/2021-08-01/items/{sellerId}/{sellerSku} | Full listing creation or complete overwrite |
| Partial Update | PATCH | /listings/2021-08-01/items/{sellerId}/{sellerSku} | Update specific attributes only |
| Delete | DELETE | /listings/2021-08-01/items/{sellerId}/{sellerSku} | Remove listing from marketplace |
| Bulk Submit | POST | /feeds/2021-06-30/feeds | JSON_LISTINGS_FEED for bulk operations (up to 10,000 items per feed) |
Listing Lifecycle State Machine:
stateDiagram-v2
[*] --> DRAFT: Admin selects product for Amazon
DRAFT --> PENDING_REVIEW: Submit listing to Amazon
PENDING_REVIEW --> ACTIVE: Amazon approves listing
PENDING_REVIEW --> SUPPRESSED: Amazon rejects listing
ACTIVE --> INACTIVE: Admin deactivates or out of stock
ACTIVE --> SUPPRESSED: Amazon policy violation
INACTIVE --> ACTIVE: Admin reactivates with stock
SUPPRESSED --> PENDING_REVIEW: Fix issues and resubmit
ACTIVE --> DELETED: Admin permanently removes
INACTIVE --> DELETED: Admin permanently removes
DELETED --> [*]
Listing Attribute Mapping: POS to Amazon
| POS Field | Amazon Listing Attribute | Constraints | Notes |
|---|---|---|---|
name | item_name | Max 500 chars | Title formula: Brand + Product Type + Key Feature + Size/Color |
long_description | product_description | Max 2,000 chars, HTML allowed | Sanitized before push – no JavaScript or external links |
bullet_points[] | bullet_point (x5) | Max 1,000 chars each, max 5 bullets | POS stores as array. Truncated if exceeds limit. |
search_terms | generic_keyword | Max 250 bytes total | Backend keywords. No brand names, ASINs, or profanity. |
base_price | purchasable_offer.our_price | Per marketplace, currency auto-set | Converted to marketplace currency if multi-currency enabled |
compare_at_price | purchasable_offer.list_price | Must be > our_price | Used for “was/now” pricing on Amazon |
quantity | fulfillment_availability.quantity | Integer >= 0 | FBM quantity only. FBA managed by Amazon. |
condition | condition_type | new_new, used_*, refurbished | Default: new_new for retail POS |
handling_time | fulfillment_availability.handling_time | Integer (business days) | FBM only. Default: 2 days. |
Bulk Feed Processing:
For initial catalog push or large-scale updates, the system uses the Feeds API with JSON_LISTINGS_FEED type:
- POS collects up to 10,000 product listings into a single feed document.
- The feed is submitted via POST /feeds/2021-06-30/feeds with feedType=JSON_LISTINGS_FEED.
- POS polls GET /feeds/2021-06-30/feeds/{feedId} until processingStatus=DONE.
- The processing report is downloaded to identify per-item success/failure.
- Failed items are logged to the Integration Sync Log (Module 5, Section 5.16.4) with Amazon error codes.
Business Rules:
- All required Amazon fields are validated in the POS before submission. Missing fields block the listing with a clear error message (e.g., “Brand is required for Amazon category ‘Clothing’”).
- Search terms must not contain brand names, ASINs, competitor names, or offensive language. POS applies a deny-list filter before submission.
- Bullet points are strongly recommended (Amazon penalizes listings without them in search ranking). POS displays a completeness score for Amazon-bound products.
- Price changes are applied via PATCH to avoid overwriting other listing attributes.
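The search-term deny-list rule can be sketched like this. The deny-list entries and the ASIN regex (which matches the common `B0…` shape, not every ASIN) are illustrative assumptions; the 250-byte cap comes from the attribute table above.

```python
import re

DENY_TERMS = {"free", "sale", "best seller"}      # illustrative deny-list entries
ASIN_PATTERN = re.compile(r"\bB0[A-Z0-9]{8}\b")   # most non-media ASINs look like B0XXXXXXXX

def filter_search_terms(raw_terms, brand_names=()):
    """Drop deny-listed words, brand names, and ASIN-shaped tokens,
    then enforce the 250-byte generic_keyword budget."""
    blocked = {t.lower() for t in DENY_TERMS} | {b.lower() for b in brand_names}
    kept = []
    for term in raw_terms:
        t = term.strip()
        if not t or t.lower() in blocked or ASIN_PATTERN.search(t):
            continue
        kept.append(t)
    # Amazon caps generic_keyword at 250 bytes total (space-separated).
    out, size = [], 0
    for t in kept:
        nbytes = len(t.encode("utf-8")) + (1 if out else 0)
        if size + nbytes > 250:
            break
        out.append(t)
        size += nbytes
    return out
```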
6.4.4 Orders API
The Orders API enables the POS system to import Amazon marketplace orders for tracking and fulfillment (FBM). FBA orders are tracked for visibility but fulfilled by Amazon.
Core Endpoints:
| Operation | Method | Endpoint | Purpose |
|---|---|---|---|
| Poll Orders | GET | /orders/v0/orders | Retrieve new/updated orders every 2 minutes |
| Get Order Items | GET | /orders/v0/orders/{orderId}/orderItems | Retrieve line item details for a specific order |
| Confirm Shipment | POST | /orders/v0/orders/{orderId}/shipment | Provide tracking for FBM fulfillment |
Order Status Mapping:
| Amazon Status | POS Status | Action |
|---|---|---|
Pending | PENDING_FULFILLMENT | Wait for Amazon payment confirmation. Do not begin fulfillment. |
Unshipped | ASSIGNED | Route to nearest fulfillment-capable store for FBM. |
PartiallyShipped | PARTIALLY_SHIPPED | Some line items shipped, remainder pending. |
Shipped | SHIPPED | Tracking provided to Amazon. Inventory decremented. |
Canceled | CANCELLED | Release reserved inventory. Log cancellation reason. |
Unfulfillable | UNFULFILLABLE | FBA cannot fulfill (e.g., out of stock at FC). Alert admin. |
InvoiceUnconfirmed | PENDING_INVOICE | EU VAT invoice required before shipment. |
PendingAvailability | BACKORDER | Pre-order. Fulfill when stock arrives. |
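The status mapping table above translates directly into a lookup. A minimal sketch in Python; failing loudly on unknown statuses is a design choice here so new Amazon values surface rather than being silently mis-mapped.

```python
# Mirrors the Amazon-to-POS status mapping table above.
AMAZON_TO_POS_STATUS = {
    "Pending": "PENDING_FULFILLMENT",
    "Unshipped": "ASSIGNED",
    "PartiallyShipped": "PARTIALLY_SHIPPED",
    "Shipped": "SHIPPED",
    "Canceled": "CANCELLED",
    "Unfulfillable": "UNFULFILLABLE",
    "InvoiceUnconfirmed": "PENDING_INVOICE",
    "PendingAvailability": "BACKORDER",
}

def map_order_status(amazon_status):
    """Translate an Amazon order status to the internal POS status."""
    try:
        return AMAZON_TO_POS_STATUS[amazon_status]
    except KeyError:
        raise ValueError(f"Unmapped Amazon order status: {amazon_status}")
```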
FBA vs FBM Fulfillment Routing:
| Fulfillment Type | Who Ships | Inventory Source | POS Action |
|---|---|---|---|
| FBA (Fulfilled by Amazon) | Amazon warehouse | Amazon Fulfillment Center (FC) | POS monitors order status only. No pick-pack-ship required. Inventory tracked as “FBA” channel. |
| FBM (Fulfilled by Merchant) | Our stores | POS physical inventory | Full pick-pack-ship workflow (Section 4.14.4). Store assignment algorithm routes to nearest location. |
| SFP (Seller Fulfilled Prime) | Our stores (Prime SLA) | POS physical inventory | Same as FBM but with Prime delivery SLA (1-2 day shipping). Requires carrier integration. |
Order Import and Fulfillment Sequence:
sequenceDiagram
autonumber
participant AMZ as Amazon SP-API
participant POLL as Order Poller (2 min)
participant API as POS Backend
participant DB as Database
participant STAFF as Store Staff
participant CARRIER as Carrier
Note over AMZ, CARRIER: Phase 1: Order Import
POLL->>AMZ: GET /orders/v0/orders?CreatedAfter={lastPoll}
AMZ-->>POLL: Order list (new + updated)
loop Each New Order
POLL->>AMZ: GET /orders/v0/orders/{orderId}/orderItems
AMZ-->>POLL: Line items with ASIN, seller_sku, qty, price
POLL->>API: Map seller_sku to POS product_id
API->>DB: Create amazon_order record
API->>DB: Create order_line_items
alt FBA Order
API->>DB: Status: TRACKING_ONLY (no fulfillment action)
else FBM Order
API->>DB: Status: PENDING_FULFILLMENT
API->>API: Run store assignment algorithm (Section 4.14.2)
API->>DB: Reserve inventory at selected store
API-->>STAFF: New Amazon FBM order on fulfillment queue
end
end
Note over AMZ, CARRIER: Phase 2: FBM Fulfillment
STAFF->>API: Begin pick-pack-ship (Section 4.14.4)
API->>DB: Status: PICKING -> PACKING -> SHIPPED
STAFF->>API: Enter carrier + tracking number
API->>AMZ: POST /orders/v0/orders/{orderId}/shipment
Note right of AMZ: carrier_code, tracking_number,<br/>ship_date, line_items
AMZ-->>API: Shipment confirmation
API->>DB: Log SALE movement, decrement inventory
Note over AMZ, CARRIER: Phase 3: Delivery Tracking
CARRIER->>API: Delivery webhook
API->>DB: Status: DELIVERED
Amazon Order Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
amazon_order_id | String(20) | Yes | Amazon order identifier (e.g., 113-1234567-1234567) |
marketplace_id | String(20) | Yes | Marketplace where order was placed |
order_status | Enum | Yes | Current Amazon order status |
pos_status | Enum | Yes | Internal POS fulfillment status |
fulfillment_channel | Enum | Yes | AFN (Amazon Fulfillment Network / FBA) or MFN (Merchant Fulfillment Network / FBM) |
assigned_location_id | UUID | No | FK to locations table. FBM orders only. |
purchase_date | DateTime | Yes | When the customer placed the order |
buyer_name | String(100) | No | Buyer display name (PII – encrypted at rest) |
shipping_address | JSONB | No | Encrypted shipping address object. Used for store assignment. |
order_total | Decimal(12,2) | Yes | Total order amount |
currency_code | String(3) | Yes | Order currency (e.g., USD) |
items_shipped | Integer | Yes | Count of shipped line items |
items_unshipped | Integer | Yes | Count of unshipped line items |
carrier_code | String(20) | No | Shipping carrier (e.g., UPS, USPS, FedEx) |
tracking_number | String(50) | No | Shipment tracking number |
shipped_at | DateTime | No | When the order was shipped |
delivered_at | DateTime | No | When the order was delivered |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Order polling runs every 2 minutes using the CreatedAfter parameter set to the last successful poll timestamp.
- Pending orders are imported for visibility but NOT queued for fulfillment until Amazon confirms payment (status transitions to Unshipped).
- FBM orders reuse the existing pick-pack-ship workflow (Section 4.14.4) with Amazon-specific carrier code and tracking submission.
- Amazon buyer PII (name, address, email) is encrypted at rest and purged after 30 days per Amazon’s Acceptable Use Policy.
- If a seller_sku in an incoming order does not map to any POS product, the order is flagged as UNMAPPED and an alert is raised for admin resolution.
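Building the poll window can be sketched as below. The 60-second overlap is an assumption of this sketch (a common safeguard against clock skew, with deduplication on amazon_order_id upsert), not a rule from the spec above.

```python
from datetime import datetime, timedelta, timezone

# Re-read a small overlap window to tolerate clock skew (assumption of
# this sketch); duplicates are absorbed by upserting on amazon_order_id.
OVERLAP = timedelta(seconds=60)

def next_poll_params(last_successful_poll):
    """Build query parameters for the next GET /orders/v0/orders poll."""
    created_after = (last_successful_poll - OVERLAP).replace(microsecond=0)
    return {"CreatedAfter": created_after.isoformat()}
```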
6.4.5 FBA Inventory API
The FBA Inventory API provides visibility into stock held at Amazon Fulfillment Centers. The POS system reads FBA inventory levels for unified stock reporting but does not directly control FBA quantities (Amazon manages FC inventory).
Core Endpoint: GET /fba/inventory/v1/summaries
Identifier Mapping:
| Identifier | Source | Purpose |
|---|---|---|
SKU (seller_sku) | POS system | Our internal product identifier used across all channels |
ASIN | Amazon catalog | Amazon’s product catalog identifier. One ASIN can map to multiple seller SKUs. |
FNSKU | Amazon FBA | Fulfillment Network SKU. Amazon’s internal label for FBA units. Unique per seller + product. |
UPC / EAN | Manufacturer | Universal barcode. Used for ASIN resolution during listing creation. |
MSKU | Amazon (legacy) | Merchant SKU. Equivalent to seller_sku. Used in older API versions. |
FBA Inventory States:
| State | Description | POS Tracking |
|---|---|---|
Fulfillable | Available for customer orders at Amazon FC | Shown as “FBA Available” in POS inventory view |
Inbound Working | Shipment plan created, not yet shipped to FC | Shown as “FBA Inbound” – subtotal of all inbound stages |
Inbound Shipped | In transit to FC, not yet received | Shown as “FBA Inbound” |
Inbound Receiving | Arrived at FC, being processed and stowed | Shown as “FBA Receiving” |
Reserved | Allocated to pending customer orders or FC transfers | Shown as “FBA Reserved” (not available for new orders) |
Unfulfillable | Defective, customer-damaged, carrier-damaged, or expired at FC | Triggers alert to admin – requires removal order or disposal decision |
Researching | Discrepancy under investigation by Amazon | Triggers alert to admin – monitor and follow up if unresolved > 30 days |
FBA Inventory Sync Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
product_mapping_id | UUID | Yes | FK to amazon_product_mapping table |
fnsku | String(10) | Yes | Amazon Fulfillment Network SKU |
fulfillable_qty | Integer | Yes | Units available for sale at Amazon FC |
inbound_working_qty | Integer | Yes | Units in shipment plans not yet shipped |
inbound_shipped_qty | Integer | Yes | Units in transit to FC |
inbound_receiving_qty | Integer | Yes | Units being processed at FC |
reserved_qty | Integer | Yes | Units allocated to orders or FC transfers |
unfulfillable_qty | Integer | Yes | Defective or damaged units at FC |
researching_qty | Integer | Yes | Units under investigation |
last_updated_at | DateTime | Yes | Timestamp of last data refresh from Amazon |
created_at | DateTime | Yes | Record creation timestamp |
Inbound Shipment Creation Workflow:
When the POS system needs to send inventory to Amazon FBA, it uses the Inbound Shipment API to create a shipment plan:
flowchart TD
A[Admin selects products for FBA replenishment] --> B[POS calculates quantities per SKU]
B --> C[POST createInboundShipmentPlan]
C --> D{Amazon splits into shipment groups?}
D -->|Single FC| E[One shipment plan created]
D -->|Multiple FCs| F[Multiple shipment plans created]
E --> G[Print FNSKU labels for each unit]
F --> G
G --> H[Pack and ship to assigned FC addresses]
H --> I[Enter carrier + tracking per shipment]
I --> J[POS moves inventory to FBA Inbound status]
J --> K[Poll inventory summaries for receiving confirmation]
K --> L[Amazon confirms receipt -- FBA Fulfillable updated]
Business Rules:
- FBA inventory is read-only in the POS inventory view. Staff cannot adjust FBA quantities manually – all FBA adjustments originate from Amazon.
- The POS displays a unified inventory view: Total Available = POS On-Hand + FBA Fulfillable. Channel-specific breakdowns are shown on the product detail screen.
- FBA inventory is synced every 15 minutes via getInventorySummaries. More frequent polling is unnecessary because Amazon updates FBA quantities asynchronously.
- Unfulfillable inventory exceeding a configurable threshold (default: 5 units per SKU) triggers a removal recommendation alert to the admin.
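The unified-availability and inbound-subtotal rules reduce to simple arithmetic over the FBA Inventory Sync row. A sketch (the dict keys follow the data model above; the function names are ours):

```python
def unified_available(pos_on_hand, fba):
    """Total Available = POS On-Hand + FBA Fulfillable.

    Only fulfillable units count toward sellable stock; inbound and
    reserved quantities are shown separately.
    """
    return pos_on_hand + fba["fulfillable_qty"]

def fba_inbound_total(fba):
    """The 'FBA Inbound' figure aggregates all three inbound stages."""
    return (fba["inbound_working_qty"]
            + fba["inbound_shipped_qty"]
            + fba["inbound_receiving_qty"])
```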
6.4.6 Notifications (SQS / EventBridge)
Amazon SP-API supports push notifications for key events, eliminating the need for constant polling. The POS system subscribes to relevant notification types and processes them via an Amazon SQS queue.
Key Notification Types:
| Notification Type | Trigger | POS Action |
|---|---|---|
ORDER_CHANGE | Order status changes (new, shipped, cancelled, returned) | Update POS order status. Trigger fulfillment queue for new FBM orders. |
ITEM_INVENTORY_EVENT_CHANGE | FBA inventory level changes (receipt, sale, adjustment) | Update FBA stock display in POS. Recalculate unified available quantity. |
LISTINGS_ITEM_STATUS_CHANGE | Listing approved, suppressed, or policy-flagged by Amazon | Update product listing_status. Alert admin for suppressed listings. |
REPORT_PROCESSING_FINISHED | A requested report (settlement, inventory) is ready for download | Download and process report. Update POS financial or inventory records. |
FBA_OUTBOUND_SHIPMENT_STATUS | FBA order shipped or delivery status updated | Update FBA order tracking info. Mark order as SHIPPED or DELIVERED. |
FEED_PROCESSING_FINISHED | A submitted feed (listings, pricing, inventory) has been processed | Download processing report. Log successes and failures per item. |
BRANDED_ITEM_CONTENT_CHANGE | A+ Content or brand content changes on a shared ASIN | Log change for awareness. No auto-action (POS does not manage A+ Content). |
SQS Queue Configuration Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
integration_id | UUID | Yes | FK to integrations table |
sqs_queue_url | String(500) | Yes | Amazon SQS queue URL for receiving notifications |
sqs_queue_arn | String(200) | Yes | SQS queue ARN for subscription registration |
aws_access_key_id_encrypted | String(200) | Yes | Encrypted AWS IAM access key for SQS polling |
aws_secret_key_encrypted | String(500) | Yes | Encrypted AWS IAM secret key |
aws_region | String(20) | Yes | AWS region where SQS queue is provisioned (e.g., us-east-1) |
subscribed_notifications | String[] | Yes | Array of notification types subscribed (e.g., ["ORDER_CHANGE", "ITEM_INVENTORY_EVENT_CHANGE"]) |
polling_interval_seconds | Integer | Yes | How often to poll SQS (default: 30 seconds) |
is_active | Boolean | Yes | Whether notification processing is enabled |
last_polled_at | DateTime | No | Timestamp of last successful SQS poll |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Notification Processing Flow:
sequenceDiagram
autonumber
participant AMZ as Amazon SP-API
participant SQS as Amazon SQS Queue
participant WORKER as POS Notification Worker
participant API as POS Backend
participant DB as Database
Note over AMZ, DB: Push Notification Flow
AMZ->>SQS: Publish notification (ORDER_CHANGE, etc.)
loop Every 30 seconds
WORKER->>SQS: ReceiveMessage (max 10 messages)
SQS-->>WORKER: Notification batch
loop Each Notification
WORKER->>WORKER: Parse notification type + payload
alt ORDER_CHANGE
WORKER->>API: Update order status
API->>DB: Write order update + log
else ITEM_INVENTORY_EVENT_CHANGE
WORKER->>API: Update FBA inventory
API->>DB: Write inventory snapshot
else LISTINGS_ITEM_STATUS_CHANGE
WORKER->>API: Update listing status
API->>DB: Write status change + alert if suppressed
else REPORT_PROCESSING_FINISHED
WORKER->>AMZ: Download completed report
WORKER->>API: Process report data
API->>DB: Write report results
end
WORKER->>SQS: DeleteMessage (acknowledge)
end
end
Business Rules:
- SQS messages are processed at-least-once. All notification handlers are idempotent – duplicate messages produce the same result without side effects.
- Failed notification processing retries up to 3 times with exponential backoff (30s, 60s, 120s). After 3 failures, the message moves to a Dead Letter Queue (DLQ) and an admin alert is raised.
- The DLQ is monitored daily. Messages in the DLQ older than 7 days are automatically logged and purged.
- When ORDER_CHANGE notifications are active, the order polling interval (Section 6.4.4) can be relaxed from 2 minutes to 15 minutes, with polling retained only as a fallback mechanism.
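The at-least-once/idempotency rule can be sketched as a dispatcher that deduplicates on notification ID. This is illustrative only: the message shape is simplified, and a real deployment would persist seen IDs (e.g., a DB table with a TTL) rather than an in-memory set.

```python
class NotificationDispatcher:
    """At-least-once SQS delivery made safe by notification-id dedup."""

    def __init__(self, handlers):
        self.handlers = handlers   # notification_type -> callable(payload)
        self.seen_ids = set()      # in-memory for the sketch; persist in production

    def dispatch(self, message):
        note_id = message["notificationId"]
        if note_id in self.seen_ids:
            return "duplicate"     # SQS redelivery: already handled, just acknowledge
        handler = self.handlers.get(message["notificationType"])
        if handler is None:
            return "unhandled"     # unknown type: log and acknowledge
        handler(message["payload"])
        self.seen_ids.add(note_id) # mark done only after the handler succeeds
        return "processed"
```

Marking the ID as seen only after the handler succeeds means a crash mid-handler leads to a retry, which the idempotent handler absorbs.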
6.4.7 Rate Limits & Throttling
Amazon SP-API enforces per-endpoint rate limits using a token bucket model. The POS system must respect these limits to avoid throttling (HTTP 429) and potential API access suspension.
Per-Endpoint Rate Limits:
| Endpoint | Rate Limit | Burst | Recovery Rate | Notes |
|---|---|---|---|---|
Catalog Items (GET /catalog/...) | 5 req/sec | 5 | 1 req/sec | Shared across all catalog operations |
Listings Items (PUT/PATCH/DELETE) | 5 req/sec | 5 | 1 req/sec | Per selling partner ID |
Orders (GET /orders) | 1 req/sec | 1 | 0.5 req/sec | Shared across getOrders + getOrder |
Order Items (GET .../orderItems) | 2 req/sec | 2 | 1 req/sec | Per order ID lookup |
Feeds (POST /feeds) | 1 req/sec | 1 | 0.5 req/sec | Feed submission only |
Feed Results (GET /feeds/{id}) | 2 req/sec | 2 | 1 req/sec | Polling feed status |
Reports (POST /reports) | 1 req/sec | 1 | 0.5 req/sec | Report request submission |
| Report Download | 15 req/sec | 15 | 1 req/sec | Downloading completed reports |
FBA Inventory (GET /summaries) | 2 req/sec | 2 | 1 req/sec | Per marketplace |
Notifications (POST /subscriptions) | 1 req/sec | 1 | 0.5 req/sec | Subscription management |
Throttle Response Headers:
| Header | Purpose |
|---|---|
x-amzn-RateLimit-Limit | Current rate limit for the endpoint (requests per second) |
x-amzn-RequestId | Unique request identifier for troubleshooting with Amazon support |
Retry-After | Seconds to wait before retrying (present on 429 responses) |
Rate Limit Handling Strategy:
amazon_rate_limiting:
strategy: token_bucket_with_dynamic_adjustment
# Pre-request: check available tokens before sending
pre_request_check: true
# Track remaining capacity from response headers
header_tracking:
rate_limit_header: "x-amzn-RateLimit-Limit"
request_id_header: "x-amzn-RequestId"
# Throttle response handling (HTTP 429)
throttle_handling:
respect_retry_after: true # Wait for Retry-After seconds
fallback_wait_seconds: 30 # If no Retry-After header
max_retries: 5 # Per individual request
backoff_strategy: exponential # 1s, 2s, 4s, 8s, 16s
jitter: true # Add random 0-500ms to prevent thundering herd
# Request prioritization during high load
priority_queue:
high: # Processed first
- order_import # Customer-facing
- shipment_confirmation # Time-sensitive
- inventory_sync # Prevents overselling
medium:
- listing_updates # Can tolerate delay
- catalog_lookups # Admin-initiated
low:
- report_requests # Background tasks
- feed_status_polling # Non-urgent
# Circuit breaker: disable endpoint temporarily on repeated failures
circuit_breaker:
failure_threshold: 10 # Consecutive 429s to trigger
open_duration_seconds: 300 # Wait 5 min before retrying
half_open_requests: 2 # Test requests before fully reopening
# Utilization target (% of rate limit to use)
utilization_plans:
peak_hours: 60% # Conservative during business hours
off_peak: 90% # Aggressive during overnight sync
bulk_operations: 80% # Feed submissions and bulk updates
Business Rules:
- The POS never exceeds 90% of any endpoint’s rate limit, even during bulk operations, to leave headroom for manual admin actions.
- All API calls are routed through a centralized HTTP client that enforces rate limits before dispatch. Direct API calls bypassing the rate limiter are prohibited.
- When a 429 response is received, the system logs the event, waits the specified Retry-After duration (or 30 seconds if absent), and retries with exponential backoff.
- Rate limit utilization metrics are displayed on the Integration Health Dashboard (Module 5, Section 5.16.5) with per-endpoint graphs.
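The token bucket model behind the pre-request check can be sketched in a few lines. The class is illustrative; the rate/burst values in the usage example mirror the Orders endpoint row above.

```python
import time

class TokenBucket:
    """Token bucket matching Amazon's model: a burst capacity that
    refills continuously at the endpoint's recovery rate."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate            # tokens added per second
        self.burst = burst          # bucket capacity
        self.tokens = float(burst)  # start full
        self.clock = clock          # injectable for testing
        self.last = clock()

    def try_acquire(self):
        """Consume one token if available; otherwise the caller queues
        the request rather than dispatching it."""
        now = self.clock()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The centralized HTTP client would hold one bucket per (endpoint, selling partner) pair and consult `try_acquire` before every dispatch, feeding refused requests into the priority queue described above.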
6.4.8 Amazon Compliance & Seller Requirements
This section defines the compliance checks, packaging standards, and business rules the POS system must enforce to maintain good standing on Amazon’s marketplace.
6.4.8.1 Seller Code of Conduct
Amazon enforces strict seller performance standards. The POS system validates data quality before pushing to Amazon to prevent listing suppressions, account warnings, and potential suspension.
Pre-Submission Validation Checklist:
| Validation | Rule | Action on Failure |
|---|---|---|
| Product title | Non-empty, <= 500 chars, no ALL CAPS, no promotional phrases (“FREE”, “SALE”) | Block listing. Show specific error. |
| Brand name | Must match Amazon Brand Registry if enrolled | Block listing. Suggest registry lookup. |
| UPC/EAN | Valid check digit, not on Amazon’s blocked barcode list | Block listing. Prompt for GS1 verification. |
| Product images | Min 1000x1000px, pure white background (RGB 255,255,255), JPEG/PNG/TIFF | Block listing. Show image requirements. |
| Price | > $0.00, within Amazon category price range, no extreme markups vs. other channels | Warning. Admin override allowed with reason code. |
| Description | Non-empty, no HTML injection, no external links, no competitor references | Sanitize and warn. Strip disallowed content. |
| Bullet points | Non-empty for at least 3 of 5 bullets, no HTML, no promotional claims | Warning. Listing proceeds but flagged as “incomplete”. |
| Condition | Valid condition type for the category | Block listing. Show valid options. |
| Weight/Dimensions | Required for FBA. Must be positive values with valid units. | Block FBA listing. Allow FBM without. |
| Restricted category | Check if product type requires Amazon approval (e.g., jewelry, pesticides) | Block listing. Provide approval application link. |
Pricing Compliance:
- Amazon monitors pricing across channels. If a product is listed significantly cheaper on the retailer’s own website (Shopify), Amazon may suppress the Buy Box or flag the listing.
- The POS displays a cross-channel price comparison alert when Amazon price differs from Shopify price by more than a configurable threshold (default: 10%).
- The system does NOT enforce price parity automatically – it alerts the admin who makes the business decision.
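The cross-channel alert check is a one-liner worth pinning down, since "differs by more than 10%" needs a reference price. A sketch, assuming the deviation is measured relative to the Shopify price (an assumption of this example):

```python
def price_parity_alert(amazon_price, shopify_price, threshold_pct=10.0):
    """Return True when the Amazon/Shopify deviation exceeds the
    configurable threshold (default 10%, per the rule above).

    Alerts only; the system never adjusts prices automatically.
    """
    if shopify_price <= 0:
        return False  # no meaningful deviation can be computed
    deviation = abs(amazon_price - shopify_price) / shopify_price * 100
    return deviation > threshold_pct
```

Note that a deviation of exactly 10% does not alert ("more than" the threshold); tune `threshold_pct` per tenant.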
6.4.8.2 Packaging & Labeling Guidelines
FBA Label Requirements:
| Specification | Requirement |
|---|---|
| Label format | 4" x 6" thermal label (Zebra-compatible) |
| Barcode type | FNSKU barcode (Code 128 symbology) |
| Label placement | One label per sellable unit, covering any existing UPC |
| Readability | Minimum 1" barcode height, no damage, smudging, or obstruction |
| Suffocation warning | Required on poly bags with opening > 5" |
FBA Prep Requirements by Product Category:
| Category | Prep Required | Label Required | Special Notes |
|---|---|---|---|
| Apparel | Poly bag with suffocation warning | FNSKU barcode on bag exterior | Transparent bag preferred. No hangers for FBA. |
| Shoes | Individual shoe box | FNSKU barcode on box exterior | Each pair in separate box. No loose shoes in poly bags. |
| Accessories (belts, scarves) | Poly bag or bubble wrap | FNSKU barcode on outer packaging | Small items must be in bag to prevent loss. |
| Jewelry | Bubble wrap + rigid box | FNSKU barcode on box exterior | High-value handling. Minimum 2" cushion padding. |
| Fragile items | Bubble wrap + rigid box | FNSKU barcode on box exterior | “FRAGILE” label optional but recommended. |
| Oversized items | None (ship as-is) | FNSKU barcode on product | Box dimensions must not exceed FBA limits. |
Shipping Box Requirements (FBA Inbound):
| Specification | Limit |
|---|---|
| Maximum box weight | 50 lbs (23 kg) |
| Maximum box dimensions | 25“ x 25“ x 25“ (standard), oversize varies |
| Box material | New corrugated cardboard, minimum 200# burst strength |
| Contents per box | Single SKU (preferred) or mixed SKU with manifest |
| Pallet shipments | Standard 40" x 48" pallets, max 72" stack height |
6.4.8.3 FBA vs FBM Support
The POS system supports both Amazon fulfillment methods with per-product configuration. A single product can use FBA in one marketplace and FBM in another, or both simultaneously.
Fulfillment Method Configuration:
| Setting | Options | Default | Description |
|---|---|---|---|
| fulfillment_type | FBA, FBM, BOTH | FBM | How this product is fulfilled on Amazon |
| fba_location_id | UUID | null | Virtual POS location representing Amazon FBA stock (Section 6.4.8.4) |
| fbm_location_ids | UUID[] | all stores | Which POS locations can fulfill FBM orders for this product |
| auto_replenish_fba | Boolean | false | Whether POS auto-generates FBA inbound shipment plans when FBA stock drops below threshold |
| fba_reorder_point | Integer | 0 | FBA stock level that triggers replenishment alert or auto-plan |
| fba_reorder_qty | Integer | 0 | Quantity to send per FBA replenishment shipment |
FBA Shipment Creation Workflow:
sequenceDiagram
autonumber
participant ADMIN as Admin
participant POS as POS Backend
participant AMZ as Amazon SP-API
participant STORE as Store Staff
Note over ADMIN, STORE: FBA Inbound Shipment Creation
ADMIN->>POS: Select products + quantities for FBA replenishment
POS->>POS: Validate stock available at source location
POS->>AMZ: POST createInboundShipmentPlan
Note right of AMZ: SKUs, quantities,<br/>ship-from address,<br/>label preference
AMZ-->>POS: Shipment plan(s) with FC destination(s)
Note left of POS: Amazon may split into<br/>multiple shipments to<br/>different FCs
loop Each Shipment Plan
POS->>POS: Generate FNSKU labels (4x6 thermal)
POS-->>ADMIN: Display FC destination + packing instructions
ADMIN->>STORE: Assign packing task to store staff
STORE->>STORE: Apply FNSKU labels, pack boxes
STORE->>POS: Confirm shipment packed + carrier + tracking
POS->>AMZ: PUT updateInboundShipment (carrier, tracking, box contents)
POS->>POS: Move inventory: POS On-Hand -> FBA Inbound
end
Note over ADMIN, STORE: Amazon receives and processes
AMZ-->>POS: Notification: ITEM_INVENTORY_EVENT_CHANGE
POS->>POS: Move inventory: FBA Inbound -> FBA Fulfillable
6.4.8.4 Safety Buffer Rules
To prevent overselling when sync delays exist between the POS and Amazon, configurable safety buffers reduce the quantity advertised on Amazon.
Buffer Calculation:
Amazon Available = POS Allocatable Quantity - Safety Buffer
Where:
- POS Allocatable Quantity = On-hand at designated Amazon fulfillment location(s), minus reserved, minus safety stock.
- Safety Buffer = Greater of (percentage buffer) or (minimum unit buffer).
Buffer Configuration:
| Setting | Type | Default | Description |
|---|---|---|---|
| buffer_percentage | Decimal | 10% | Percentage of allocatable quantity to withhold |
| buffer_minimum_units | Integer | 2 | Minimum units to always withhold, regardless of percentage |
| buffer_scope | Enum | PER_PRODUCT | PER_PRODUCT (individual) or GLOBAL (all products same rule) |
| amazon_location_id | UUID | null | Dedicated virtual location for Amazon inventory (recommended) |
| use_specific_location | Boolean | false | If true, only inventory at amazon_location_id is available for Amazon. If false, network-wide available. |
Examples:
| Scenario | POS Available | Buffer % | Min Units | Buffer Applied | Amazon Qty |
|---|---|---|---|---|---|
| Normal stock | 50 | 10% | 2 | 5 (10% of 50) | 45 |
| Low stock | 10 | 10% | 2 | 2 (min units > 10% of 10) | 8 |
| Very low stock | 3 | 10% | 2 | 2 (min units) | 1 |
| Minimal stock | 2 | 10% | 2 | 2 (min units) | 0 |
| Out of stock | 0 | 10% | 2 | 0 | 0 |
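The buffer rule from the table above can be sketched as a small function (the function name and use of Python's `round` are illustrative assumptions, not part of the spec):

```python
def amazon_available(pos_allocatable: int, buffer_pct: float = 10.0,
                     buffer_min_units: int = 2) -> int:
    """Quantity to advertise on Amazon after applying the safety buffer.

    Buffer = greater of (percentage of allocatable) or (minimum units),
    per Section 6.4.8.4. The result is clamped at zero.
    """
    pct_buffer = round(pos_allocatable * buffer_pct / 100)
    buffer = max(pct_buffer, buffer_min_units)
    return max(pos_allocatable - buffer, 0)
```

Running this against the example scenarios reproduces the Amazon Qty column (50 → 45, 10 → 8, 3 → 1, 2 → 0, 0 → 0).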
Business Rules:
- When Amazon Available drops to 0, the listing quantity is set to 0 on Amazon. The listing remains ACTIVE but shows “Currently unavailable.”
- Safety buffers are recalculated on every inventory change event (sale, receiving, adjustment, transfer).
- Admins can override the buffer for individual products (e.g., set buffer to 0 for high-velocity items where overselling risk is acceptable).
- The dedicated Amazon virtual location approach is recommended for retailers with significant Amazon volume. It provides a clear physical segregation of Amazon-allocated inventory.
6.4.8.5 Order Routing Rules
Amazon FBM orders are integrated into the existing fulfillment infrastructure alongside Shopify orders.
Routing Logic:
flowchart TD
A[Amazon Order Received] --> B{Fulfillment Channel?}
B -->|AFN / FBA| C[Track status only]
C --> D[Display in POS as FBA order]
D --> E[Monitor via ITEM_INVENTORY_EVENT_CHANGE]
B -->|MFN / FBM| F{SFP / Prime?}
F -->|Prime SLA| G[Flag as HIGH PRIORITY]
F -->|Standard FBM| H[Standard priority]
G --> I[Run store assignment algorithm]
H --> I
I --> J{Stock available at nearest store?}
J -->|Yes| K[Assign to nearest store]
J -->|No| L{Stock available at ANY store?}
L -->|Yes| M[Assign to store with stock - warn about shipping cost]
L -->|No| N{Split fulfillment enabled?}
N -->|Yes| O[Split across multiple stores]
N -->|No| P[Alert admin: cannot fulfill]
K --> Q[Add to store fulfillment queue]
M --> Q
O --> Q
Q --> R[Pick-Pack-Ship workflow<br/>Section 4.14.4]
R --> S[POST shipment confirmation to Amazon]
Priority Rules for Mixed Channel Fulfillment:
| Priority | Source | SLA | Notes |
|---|---|---|---|
| 1 (Highest) | Amazon Prime (SFP) | 1-2 business days | Must ship same day if ordered before cutoff |
| 2 | Amazon FBM Standard | 3-5 business days | Ship within handling_time (default: 2 days) |
| 3 | Shopify orders | Per shipping method selected | Existing Shopify fulfillment SLA applies |
| 4 | In-store pickup | Customer-defined | Lower urgency, customer comes to store |
Business Rules:
- Amazon FBM orders appear in the same fulfillment queue as Shopify orders, with priority badges indicating the channel and SLA.
- The store assignment algorithm (Section 4.14.2) is reused for Amazon FBM orders with the same proximity + stock availability logic.
- If an FBM order cannot be fulfilled within the stated handling time, the system alerts the admin to either fulfill from an alternate location or request a cancellation from the buyer.
- Late shipments are tracked as a seller performance metric. Amazon penalizes sellers with >4% late shipment rate.
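The priority table above implies a simple ordering rule for the shared queue. A minimal sketch (the channel keys and helper name are illustrative, not part of the spec):

```python
# Lower number = higher priority, per the Priority Rules table.
CHANNEL_PRIORITY = {
    "amazon_prime_sfp": 1,
    "amazon_fbm": 2,
    "shopify": 3,
    "in_store_pickup": 4,
}

def queue_order(queue: list, order: dict) -> list:
    """Insert an order into the shared fulfillment queue, sorted by
    channel priority first, then by order creation time (FIFO within
    a channel)."""
    queue.append(order)
    queue.sort(key=lambda o: (CHANNEL_PRIORITY[o["channel"]],
                              o["created_at"]))
    return queue
```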
6.4.9 Reports: Amazon Integration
All Amazon integration reports are accessible from the Admin Portal reporting module with date range filtering, export to CSV/PDF, and drill-down capability.
| Report | Purpose | Key Data Fields |
|---|---|---|
| Amazon Sales Summary | Daily/weekly/monthly sales performance across Amazon channel | Date range, total revenue, order count, units sold, avg order value, FBA vs FBM breakdown, refund rate, marketplace breakdown |
| Amazon Inventory Status | Current stock levels across FBA and FBM channels | SKU, product name, FBA fulfillable qty, FBA inbound qty, FBA reserved qty, FBA unfulfillable qty, FBM available qty, total Amazon available |
| Amazon Listing Health | Status of all Amazon product listings | SKU, ASIN, listing status (active/inactive/suppressed), suppression reason, Buy Box %, listing completeness score, last sync timestamp |
| Amazon Order Fulfillment | Fulfillment performance metrics | Order count, avg fulfillment time (hours), on-time shipment %, late shipment %, carrier breakdown, FBA vs FBM split, return rate |
| Amazon Fee Analysis | Total Amazon costs and profitability per SKU | SKU, ASIN, referral fee %, referral fee $, FBA fee (pick + pack + weight), storage fee (monthly + long-term), total Amazon fees, POS cost, net margin after fees |
| Amazon Sync Health | Integration connectivity and sync reliability | Sync success rate (24h), failed syncs (with error codes), avg sync latency, API quota utilization %, notification delivery rate, DLQ message count |
| Amazon Compliance Scorecard | Seller account health metrics from Amazon | Order defect rate (target <1%), late shipment rate (target <4%), pre-fulfillment cancel rate (target <2.5%), valid tracking rate (target >95%), policy violations |
Cross-Channel Comparison View:
A dedicated cross-channel report enables the admin to compare performance across POS in-store, Shopify online, and Amazon channels:
| Metric | In-Store (POS) | Shopify Online | Amazon FBM | Amazon FBA |
|---|---|---|---|---|
| Revenue | Sum of in-store sales | Sum of Shopify orders | Sum of FBM order revenue | Sum of FBA order revenue |
| Units Sold | POS transaction items | Shopify line items | FBM shipped units | FBA shipped units |
| Avg Order Value | POS avg | Shopify avg | FBM avg | FBA avg |
| Margin % | After POS costs | After Shopify fees | After Amazon referral + shipping | After Amazon referral + FBA fees |
| Return Rate | In-store returns | Shopify returns | FBM returns | FBA returns |
Business Rules:
- Amazon fee data is sourced from Amazon Settlement Reports, requested bi-weekly and reconciled against POS order records.
- The Amazon Compliance Scorecard polls Amazon’s Seller Performance API daily and surfaces warnings when any metric approaches the threshold (yellow at 75% of limit, red at 90%).
- All reports respect the tenant’s timezone for date boundaries and the tenant’s currency for financial values.
- Report data is cached for 1 hour. An admin can force-refresh any report to pull the latest data from Amazon.
6.4.10 Amazon Integration Configuration (YAML Reference)
The following YAML reference consolidates all Amazon integration business rules for the rules engine:
amazon_integration:
version: "1.0"
# Connection settings
connection:
oauth_token_refresh_buffer_minutes: 5
max_marketplaces_per_tenant: 3
credential_encryption: AES-256
pii_retention_days: 30
# Sync intervals
sync_intervals:
order_polling_minutes: 2
order_polling_with_notifications_minutes: 15
fba_inventory_sync_minutes: 15
listing_sync_on_change: true
daily_reconciliation_hour: 3 # 3 AM tenant-local
# Safety buffer defaults
safety_buffer:
default_percentage: 10
default_minimum_units: 2
scope: PER_PRODUCT
recalculate_on: [SALE, RECEIVING, ADJUSTMENT, TRANSFER, FBA_UPDATE]
# Order routing
order_routing:
fbm_use_store_assignment_algorithm: true
prime_same_day_cutoff_hour: 14 # 2 PM local
default_handling_time_days: 2
late_shipment_alert_threshold_hours: 36
# Notification processing
notifications:
sqs_polling_interval_seconds: 30
max_messages_per_poll: 10
retry_max_attempts: 3
retry_backoff: [30, 60, 120] # seconds
dlq_retention_days: 7
# Compliance thresholds (Amazon seller performance)
compliance:
order_defect_rate_warning: 0.75 # Yellow at 0.75%
order_defect_rate_critical: 0.90 # Red at 0.90% (limit: 1%)
late_shipment_rate_warning: 3.0 # Yellow at 3%
late_shipment_rate_critical: 3.6 # Red at 3.6% (limit: 4%)
cancel_rate_warning: 1.875 # Yellow at 1.875%
cancel_rate_critical: 2.25 # Red at 2.25% (limit: 2.5%)
valid_tracking_warning: 96.25 # Yellow below 96.25%
valid_tracking_critical: 95.0 # Red below 95% (limit: 95%)
# Listing validation
listing_validation:
title_max_chars: 500
description_max_chars: 2000
bullet_points_max: 5
bullet_point_max_chars: 1000
search_terms_max_bytes: 250
image_min_dimension_px: 1000
price_cross_channel_alert_threshold_percent: 10
# FBA settings
fba:
unfulfillable_alert_threshold_units: 5
researching_followup_days: 30
label_format: "4x6_thermal"
label_barcode_symbology: "CODE128"
max_box_weight_lbs: 50
max_box_dimension_inches: 25
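The compliance thresholds above follow a yellow-at-warning / red-at-critical pattern, inverted for valid-tracking where higher is better. A minimal sketch of that mapping (function name is an assumption):

```python
def scorecard_status(value: float, warning: float, critical: float,
                     higher_is_better: bool = False) -> str:
    """Map a seller-performance metric to a traffic-light status.

    For rate metrics (lower is better) the value turns yellow at the
    warning threshold and red at the critical one; for valid tracking
    (higher is better) the comparisons are inverted.
    """
    if higher_is_better:
        if value < critical:
            return "red"
        if value < warning:
            return "yellow"
        return "green"
    if value >= critical:
        return "red"
    if value >= warning:
        return "yellow"
    return "green"
```

For example, an order defect rate of 0.8% lands in yellow (warning 0.75, critical 0.90), while a valid tracking rate of 94.5% lands in red (warning 96.25, critical 95.0).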
End of Section 6.4: Amazon SP-API Integration
6.5 Google Merchant API Integration
Scope: Outbound product-feed management, local inventory advertising, Google Business Profile linkage, push notification handling, and Content API-to-Merchant API migration for Google Shopping and Local Inventory Ads.
Cross-Reference: See Module 3, Section 3.9 for the product-sync lifecycle that triggers outbound calls to Google Merchant Center.
Cross-Reference: See Module 4, Section 4.5 for inventory-level sync events that Module 6 pushes to connected channels including Google local inventory.
Cross-Reference: See Module 5, Section 5.4 for tenant-level integration configuration (credentials, enabled providers, sync schedules).
CRITICAL: The Content API for Shopping reaches end-of-life on August 18, 2026. All new development MUST target the Merchant API (v1beta / v1). See Section 6.5.10 for the migration plan.
6.5.1 Authentication & Authorization
Google Merchant Center uses OAuth 2.0 with service accounts for server-to-server authentication. Unlike Shopify’s long-lived access tokens, the POS must build and self-sign a short-lived JWT assertion, then exchange it for an access token, repeating the exchange every 60 minutes.
Service Account Auth Flow
sequenceDiagram
autonumber
participant POS as POS Backend
participant VAULT as Credential Vault
participant JWT as JWT Builder
participant GAUTH as Google OAuth2 Endpoint
participant GAPI as Google Merchant API
Note over POS, GAPI: Phase 1: Credential Retrieval
POS->>VAULT: Fetch service account key (AES-256 decrypted)
VAULT-->>POS: Return private_key, client_email, project_id
Note over POS, GAPI: Phase 2: JWT Assertion
POS->>JWT: Build self-signed JWT
Note right of JWT: iss = client_email<br/>scope = content, merchantapi<br/>aud = https://oauth2.googleapis.com/token<br/>iat = now, exp = now + 3600
JWT->>JWT: Sign with RSA-SHA256 (private_key)
JWT-->>POS: Return signed JWT assertion
Note over POS, GAPI: Phase 3: Token Exchange
POS->>GAUTH: POST /token (grant_type=jwt-bearer, assertion=JWT)
GAUTH-->>POS: Return access_token (expires_in: 3600)
POS->>POS: Cache access_token (refresh at T-5min)
Note over POS, GAPI: Phase 4: API Call
POS->>GAPI: GET /accounts/{merchantId}/products<br/>Authorization: Bearer {access_token}
GAPI-->>POS: 200 OK (product data)
Note over POS, GAPI: Phase 5: Auto-Refresh (background)
POS->>POS: Timer fires at T-5min before expiry
POS->>GAUTH: POST /token (new JWT assertion)
GAUTH-->>POS: Return fresh access_token
Merchant Center Account Setup
Before API access is possible, the tenant must:
- Create a Merchant Center account at https://merchants.google.com
- Create a Google Cloud project and enable the Merchant API
- Create a service account in the Cloud Console with the Merchant Center Admin role
- Download the JSON key file and upload it to the POS Admin Portal (Module 5)
- Link the service account to the Merchant Center account via the MC settings panel
- Verify and claim the website URL associated with the product feed
Data Model: google_merchant_credentials
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants.id |
| merchant_account_id | VARCHAR(20) | Yes | Google Merchant Center account ID (numeric) |
| service_account_email | VARCHAR(255) | Yes | Service account email (e.g., pos-sync@project.iam.gserviceaccount.com) |
| private_key_encrypted | TEXT | Yes | RSA private key, AES-256-GCM encrypted at rest |
| project_id | VARCHAR(100) | Yes | Google Cloud project ID |
| token_endpoint | VARCHAR(255) | Yes | Default: https://oauth2.googleapis.com/token |
| access_token | TEXT | No | Current cached access token (encrypted) |
| token_expiry | TIMESTAMPTZ | No | Expiry timestamp of current access token |
| scopes | TEXT[] | Yes | Granted scopes: content, merchantapi |
| is_active | BOOLEAN | Yes | Whether this credential set is enabled – default true |
| last_auth_at | TIMESTAMPTZ | No | Timestamp of last successful authentication |
| last_auth_error | TEXT | No | Last authentication error message (if any) |
| created_at | TIMESTAMPTZ | Yes | Row creation timestamp |
| updated_at | TIMESTAMPTZ | Yes | Last modification timestamp |
Security Note: The private_key_encrypted field uses AES-256-GCM with a tenant-specific key derived from the master encryption key. The plaintext private key is never written to logs, error messages, or API responses. See Module 5, Section 5.4.5 for the credential vault architecture.
6.5.2 Product Data Management
The Google Merchant API separates product writes and reads into distinct resources:
- ProductInput – the write resource, used to create or update product data. Endpoint: accounts/{account}/productInputs:insert
- Product – the read-only processed resource, representing the product after Google’s validation, enrichment, and policy review. Endpoint: accounts/{account}/products/{product}
This distinction matters because the data you submit (ProductInput) may differ from the data Google stores (Product) after processing. The POS must read the Product resource to check approval status, supplemental attributes, and policy violations.
API Endpoint Reference
| Operation | Merchant API Endpoint | HTTP Method | Notes |
|---|---|---|---|
| Create/update product | accounts/{account}/productInputs:insert | POST | Upsert by offerId + feedLabel + contentLanguage |
| Get processed product | accounts/{account}/products/{product} | GET | Returns Google-enriched version with status |
| List products | accounts/{account}/products | GET | Paginated, max 250 per page |
| Delete product input | accounts/{account}/productInputs/{productInput}:delete | DELETE | Removes from feed; may take up to 24h to delist |
| Get product status | accounts/{account}/productStatuses/{product} | GET | Approval status, disapproval reasons, warnings |
Content API End-of-Life
| Milestone | Date | Impact |
|---|---|---|
| Merchant API v1beta GA | Available now | New features only in Merchant API |
| Content API deprecation announcement | 2025 | No new features; maintenance only |
| v1beta migration deadline | February 28, 2026 | Must begin migration |
| Content API full EOL | August 18, 2026 | All Content API endpoints cease functioning |
CRITICAL: Any integration built against the Content API (/content/v2.1/) will stop working on August 18, 2026. The POS MUST use the Merchant API from day one. See Section 6.5.10 for legacy migration details.
Product Field Mapping: POS to Google Merchant
| POS Field | Google Field | API Path | Notes |
|---|---|---|---|
| sku | offerId | productInput.offerId | Max 50 chars, unique per feed. Used as dedup key. |
| name | title | productInput.title | Max 150 chars. No promotional text (“Sale!”, “Free Shipping” prohibited). |
| long_description | description | productInput.description | Max 5,000 chars. Plain text preferred; HTML tags stripped by Google. |
| primary_image_url | imageLink | productInput.imageLink | HTTPS required. Min 100x100px (apparel: 250x250px). |
| base_price + currency | price | productInput.price | Object: { amountMicros: 29990000, currencyCode: "USD" }. Price in micros (1 USD = 1,000,000 micros). |
| Computed from inventory | availability | productInput.availability | Enum: in_stock, out_of_stock, preorder, backorder. Derived from real-time inventory qty. |
| brand | brand | productInput.brand | Required for most categories. Must be manufacturer brand, not store name. |
| barcode (UPC/EAN) | gtin | productInput.gtin | Valid GTIN-8/12/13/14. No dashes. Check digit validated (mod-10 algorithm). |
| product_condition | condition | productInput.condition | Enum: new, refurbished, used. Default: new. |
| website_url | link | productInput.link | Must be live, HTTPS, product data on page must match feed exactly. |
| manufacturer_part_number | mpn | productInput.mpn | Required if no GTIN available. Manufacturer’s part number. |
| product_type_taxonomy | googleProductCategory | productInput.googleProductCategory | Full Google taxonomy path (e.g., “Apparel & Accessories > Clothing > Shirts”). |
| weight + weight_unit | shippingWeight | productInput.shippingWeight | Object: { value: 0.5, unit: "lb" }. |
| additional_images[] | additionalImageLinks | productInput.additionalImageLinks | Up to 10 additional images. Same quality requirements as primary. |
| variant_color | color | productInput.color | Required for apparel. Google-normalised colour names. |
| variant_size | size | productInput.size | Required for apparel. Use standard sizing (S, M, L, XL or numeric). |
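The mod-10 check-digit validation mentioned for the gtin field can be sketched as follows (function name is an assumption; the weighting is the standard GS1 scheme — right to left, alternating weights 3 and 1):

```python
def gtin_check_digit_valid(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14 check digit (mod-10 algorithm).

    Working right to left from the digit before the check digit,
    positions are weighted 3, 1, 3, 1, ...; the check digit must
    bring the weighted total to a multiple of 10.
    """
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    check = digits.pop()
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (total + check) % 10 == 0
```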
Price Format Conversion
The POS stores prices as DECIMAL(10,2) but Google requires prices in micros (millionths of the currency unit):
POS price: $29.99 (DECIMAL)
Google micros: 29990000 (INT64)
Conversion: price_micros = ROUND(pos_price * 1_000_000)
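A sketch of the conversion using Python's `decimal` module to avoid binary floating-point drift (function names are assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_micros(pos_price: Decimal) -> int:
    """Convert a POS DECIMAL(10,2) price to Google's INT64 micros."""
    return int((pos_price * 1_000_000).to_integral_value(ROUND_HALF_UP))

def from_micros(micros: int) -> Decimal:
    """Convert micros back to a two-decimal POS price."""
    return (Decimal(micros) / 1_000_000).quantize(Decimal("0.01"))
```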
6.5.3 Local Inventory
Local Inventory Ads (LIA) allow the POS to surface real-time store-level availability directly in Google Shopping results. When a shopper searches for a product near a physical store, Google can display “In stock at [Store Name]” with options for in-store pickup or same-day delivery.
Local Inventory API
| Operation | Endpoint | Method | Notes |
|---|---|---|---|
| Insert local inventory | accounts/{account}/products/{product}/localInventories:insert | POST | Upsert by storeCode |
| List local inventories | accounts/{account}/products/{product}/localInventories | GET | Returns all store-level records for a product |
| Delete local inventory | accounts/{account}/products/{product}/localInventories/{storeCode}:delete | DELETE | Remove store-level entry |
Processing time: Updates take up to 30 minutes to appear in Google Shopping results. The POS should not expect real-time reflection.
POS Location to Google Store Code Mapping
Each POS location must be mapped to a Google storeCode (the unique identifier from Google Business Profile). This mapping is configured in Module 5 and stored in the google_store_mappings table.
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants.id |
| pos_location_id | UUID | Yes | FK to locations.id (Module 5) |
| google_store_code | VARCHAR(50) | Yes | Google Business Profile store code |
| google_merchant_account_id | VARCHAR(20) | Yes | FK reference to google_merchant_credentials.merchant_account_id |
| store_name | VARCHAR(200) | Yes | Display name for the store |
| is_lia_enrolled | BOOLEAN | Yes | Whether Local Inventory Ads are enabled for this location – default false |
| pickup_method | VARCHAR(30) | No | Default: buy. Options: buy, reserve, ship_to_store, not_supported |
| pickup_sla | VARCHAR(20) | No | Default: same_day. Options: same_day, next_day, multi_day, multi_week |
| is_active | BOOLEAN | Yes | Whether this mapping is active – default true |
| last_sync_at | TIMESTAMPTZ | No | Timestamp of most recent local inventory sync |
| created_at | TIMESTAMPTZ | Yes | Row creation timestamp |
| updated_at | TIMESTAMPTZ | Yes | Last modification timestamp |
Local Inventory Data Payload
For each product-location combination, the POS pushes:
| Field | Type | Required | Description |
|---|---|---|---|
| storeCode | STRING | Yes | Mapped from google_store_mappings.google_store_code |
| availability | STRING | Yes | in_stock, out_of_stock, limited_availability |
| price | PRICE | No | Store-specific price override (if different from online). Object: { amountMicros, currencyCode } |
| salePrice | PRICE | No | Store-specific sale price. Object: { amountMicros, currencyCode } |
| salePriceEffectiveDate | STRING | No | ISO 8601 interval: 2026-03-01T00:00:00Z/2026-03-15T23:59:59Z |
| quantity | INT64 | No | Exact quantity in stock at this location. Google recommends providing this. |
| pickupMethod | STRING | No | buy, reserve, ship_to_store, not_supported |
| pickupSla | STRING | No | same_day, next_day, multi_day, multi_week |
Local Inventory Sync Architecture
flowchart TD
INV_EVENT[Inventory Change Event<br/>Module 4] -->|qty changed| SYNC_EVAL{Sync Evaluation}
SYNC_EVAL -->|Google Merchant enabled<br/>& location enrolled| BUILD[Build Local Inventory Payload]
SYNC_EVAL -->|Not enrolled or<br/>provider disabled| SKIP[Skip -- No Action]
BUILD --> MAP[Map POS location_id<br/>to Google storeCode]
MAP --> AVAIL{Compute Availability}
AVAIL -->|qty > threshold| IN_STOCK[availability = in_stock]
AVAIL -->|qty > 0 AND<br/>qty <= threshold| LIMITED[availability = limited_availability]
AVAIL -->|qty == 0| OOS[availability = out_of_stock]
IN_STOCK --> BATCH
LIMITED --> BATCH
OOS --> BATCH
BATCH[Batch Queue<br/>Max 10 per request] -->|Flush every 60s<br/>or batch full| GAPI[POST localInventories:insert]
GAPI -->|200 OK| LOG_OK[Log Success<br/>Update last_sync_at]
GAPI -->|4xx / 5xx| RETRY[Retry Pipeline<br/>Section 6.2.3]
RETRY -->|Exhausted| DLQ[Dead Letter Queue]
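The availability computation and payload-build steps in the flowchart can be sketched as follows. The default limited-availability threshold of 3 units is an illustrative assumption, as are the function names:

```python
def local_availability(qty: int, limited_threshold: int = 3) -> str:
    """Map on-hand quantity to Google's local availability enum,
    per the Compute Availability branch of the sync flowchart."""
    if qty == 0:
        return "out_of_stock"
    if qty <= limited_threshold:
        return "limited_availability"
    return "in_stock"

def build_local_inventory_entry(store_code: str, qty: int) -> dict:
    """Minimal localInventories:insert payload for one store, using
    the mapped Google storeCode and the recommended quantity field."""
    return {
        "storeCode": store_code,
        "availability": local_availability(qty),
        "quantity": qty,
    }
```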
Cross-Reference: See Module 4, Section 4.5.2 for the inventory change event schema that triggers local inventory sync.
6.5.4 Push Notifications
Google Merchant Center can send push notifications to a registered HTTPS callback endpoint when product statuses change. This eliminates the need to poll the Product Status API repeatedly.
Notification Configuration
| Parameter | Requirement |
|---|---|
| Callback URL | Must be publicly accessible HTTPS endpoint |
| SSL Certificate | Must be signed by a trusted CA (no self-signed certificates) |
| Response time | Must respond with 200 OK within 10 seconds |
| Authentication | Google sends a JWT in the Authorization header; validate against Google public keys |
Supported Notification Types
| Notification Type | Trigger | Payload Fields |
|---|---|---|
| PRODUCT_STATUS_CHANGE | Product approval status changes (approved, disapproved, pending) | productId, accountId, attribute, previousValue, newValue, timestamp |
| ACCOUNT_STATUS_CHANGE | Merchant Center account status changes | accountId, attribute, previousValue, newValue, timestamp |
Callback Data Model: google_merchant_notifications
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants.id |
| notification_id | VARCHAR(100) | Yes | Google-provided unique notification ID (idempotency key) |
| notification_type | VARCHAR(50) | Yes | PRODUCT_STATUS_CHANGE, ACCOUNT_STATUS_CHANGE |
| product_id | VARCHAR(100) | No | Google product ID (present for product notifications) |
| attribute | VARCHAR(100) | Yes | The attribute that changed (e.g., status, approvalStatus) |
| previous_value | TEXT | No | Value before the change |
| new_value | TEXT | Yes | Value after the change |
| raw_payload | JSONB | Yes | Complete raw notification payload for audit |
| processed | BOOLEAN | Yes | Whether the notification has been handled – default false |
| processed_at | TIMESTAMPTZ | No | When the notification was processed |
| received_at | TIMESTAMPTZ | Yes | When the POS received the notification |
Notification Processing Flow
When a PRODUCT_STATUS_CHANGE notification indicates a disapproval:
- Parse the notification payload and extract the productId and disapproval reason.
- Map the Google productId back to the POS sku using the offerId field.
- Update the product’s Google sync status in the POS database.
- If the disapproval reason is actionable (e.g., missing GTIN, low image quality), create an alert for the catalog team (Module 3).
- Log the notification in google_merchant_notifications for audit.
6.5.5 Rate Limits
Google Merchant API enforces rate limits at multiple levels. The POS must respect these limits to avoid request rejection and potential account suspension.
Rate Limits by Operation
| Operation | Limit | Strategy |
|---|---|---|
| Product insert/update (productInputs:insert) | 2 updates per product per day | Batch all changes for a product; push at most 2x daily. Use change-detection to skip unchanged products. |
| Product get (products.get) | Generous (1,000+ per day) | Cache responses locally for 15 minutes. Use bulk products.list instead of individual gets. |
| Product list (products.list) | 250 results per page | Implement cursor-based pagination. Store nextPageToken for continuation. |
| Local inventory insert (localInventories:insert) | No strict per-product limit | Update on inventory change events. Batch up to 10 entries per request. |
| Account-level daily quota | Varies by account tier | Monitor X-RateLimit-Remaining headers. Schedule bulk syncs during 02:00-06:00 UTC off-peak window. |
| Requests per minute | ~600 for standard accounts | Implement in-memory token bucket per tenant. Queue excess requests. |
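The per-tenant in-memory token bucket mentioned in the last row can be sketched as follows (class name and the refill rate of 10 tokens/second, i.e. ~600/minute, are assumptions):

```python
class TokenBucket:
    """Per-tenant in-memory rate limiter (~600 requests/min default)."""

    def __init__(self, capacity: int = 600, refill_per_sec: float = 10.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # timestamp of the previous allow() call

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then try to take one token.
        Callers that receive False should queue the request."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```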
Batching Strategy
To stay within the 2-updates-per-product-per-day limit:
- Change Detection: Before syncing, compare the current product data hash against the last-synced hash. Skip unchanged products.
- Batch Window: Accumulate product changes during the business day. Execute sync at two scheduled windows (e.g., 06:00 and 18:00 UTC).
- Priority Queue: If a product is updated more than twice in a day, only the most recent state is synced. Intermediate states are discarded.
- Emergency Override: Critical changes (price corrections, safety-related updates) bypass the batch window and sync immediately, consuming one of the two daily slots.
google_merchant_sync:
batch_windows:
- "06:00 UTC"
- "18:00 UTC"
max_updates_per_product_per_day: 2
change_detection: sha256_hash
emergency_override_enabled: true
emergency_reasons:
- price_correction
- safety_recall
- legal_compliance
off_peak_bulk_window: "02:00-06:00 UTC"
6.5.6 Google Required Product Data Fields
Google Shopping enforces strict data requirements. Products missing required fields or containing policy-violating content will be disapproved and will not appear in Shopping results.
Mandatory Feed Fields – POS Must Provide
| Google Field | POS Source Field | Required | Validation Rule |
|---|---|---|---|
| id (offerId) | sku | Yes | Unique per product, max 50 chars, alphanumeric + hyphens + underscores only |
| title | name | Yes | Max 150 chars. Prohibited: “Sale!”, “Free Shipping”, “Best Price”, all-caps words, excessive punctuation |
| description | long_description | Yes | Max 5,000 chars. Plain text preferred. No HTML tags (Google strips them). No promotional language. |
| imageLink | primary_image_url | Yes | Min 100x100px (apparel: 250x250px). No watermarks, no promotional overlays, no placeholder images. Must be publicly accessible HTTPS URL. |
| price | base_price + currency_code | Yes | Must match website/landing page price EXACTLY. Object: { amountMicros, currencyCode }. Price discrepancies cause immediate disapproval. |
| availability | Computed from inventory | Yes | Enum: in_stock, out_of_stock, preorder, backorder. Must match actual stock. Mismatches cause disapproval. |
| brand | brand | Yes (most categories) | Manufacturer brand name, NOT store name. Required for all products with a known brand. |
| gtin | barcode (UPC/EAN) | Yes (if exists) | Valid GTIN-8/12/13/14. No dashes or spaces. Check digit validated via mod-10 algorithm. |
| condition | product_condition | Yes | Enum: new, refurbished, used. Default: new. Must accurately reflect product condition. |
| link | Website product URL | Yes | Must resolve to a live HTTPS page. Product data on page must match feed data. Broken links cause disapproval. |
| mpn | manufacturer_part_number | Conditional | Required if no GTIN available. Must be the manufacturer’s own part number. |
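An illustrative pre-sync validator for the two most disapproval-prone fields in the table, offerId and title (the prohibited-phrase list here is a subset, and the function name is an assumption):

```python
import re

PROHIBITED_PHRASES = ("sale!", "free shipping", "best price")
OFFER_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,50}$")

def validate_feed_fields(offer_id: str, title: str) -> list[str]:
    """Return a list of validation errors; empty list means pass."""
    errors = []
    if not OFFER_ID_RE.match(offer_id):
        errors.append("offerId: max 50 chars, alphanumeric/hyphen/underscore only")
    if len(title) > 150:
        errors.append("title: exceeds 150 characters")
    lowered = title.lower()
    errors.extend(f"title: prohibited phrase '{p}'"
                  for p in PROHIBITED_PHRASES if p in lowered)
    return errors
```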
Optional but Recommended Fields
| Google Field | POS Source Field | Benefit |
|---|---|---|
| additionalImageLink | additional_images[] (up to 10) | Better product presentation; higher click-through rate |
| productHighlight | bullet_points[] (up to 10, max 150 chars each) | Enhanced listing appearance in Google Shopping |
| color | variant_color | Required for apparel; improves search matching |
| size | variant_size | Required for apparel; enables size-based filtering |
| material | material_type | Improves product matching and filtering accuracy |
| pattern | pattern_type | Improves matching for patterned products (striped, plaid, etc.) |
| ageGroup | target_age_group | Required for apparel. Enum: newborn, infant, toddler, kids, adult |
| gender | target_gender | Required for apparel. Enum: male, female, unisex |
| itemGroupId | parent_product_id | Groups variants together in Shopping results |
| salePrice | sale_price + currency_code | Displays original + sale price; strikethrough pricing |
| salePriceEffectiveDate | sale_start / sale_end | ISO 8601 interval for automatic sale price activation |
| shippingWeight | weight + weight_unit | Required for carrier-calculated shipping rates |
6.5.7 Google Image Requirements
Product images are the single most important factor in Google Shopping performance. Google enforces strict image quality rules; violations result in product disapproval.
Image Specification Table
| Requirement | Specification | POS Validation |
|---|---|---|
| Minimum size | 100x100 pixels (apparel: 250x250px) | Validate dimensions on upload; block submission if below minimum |
| Maximum size | 64 megapixels / 16 MB file size | Validate on upload; reject files exceeding limits |
| Format | JPEG, PNG, GIF (non-animated), BMP, TIFF, WebP | Validate file extension AND MIME type (prevent extension spoofing) |
| Background | White or transparent preferred | Advisory warning if background is non-white; do not block |
| Watermarks | PROHIBITED – causes immediate disapproval | Block upload if watermark metadata detected; warn if image analysis flags overlay text |
| Promotional text | PROHIBITED – no “Sale”, “Free Shipping”, logos, badges | Advisory warning to user; flag for manual review before sync |
| Borders | No decorative borders allowed | Advisory warning if border detected in image analysis |
| Product visibility | Product must occupy 75-90% of image frame | Advisory warning based on object-detection analysis (if available) |
| URL accessibility | Must be publicly accessible via HTTPS | Validate URL reachability (HTTP HEAD request) before every sync |
| Image alt text | Descriptive, non-keyword-stuffed | Auto-generate from product name + colour + size if empty |
| Placeholders | No “image coming soon” or generic placeholder images | Block sync if image URL matches known placeholder patterns |
| Product accuracy | Image must show the exact product being sold | Manual review flag; cannot be automated reliably |
Image Validation Workflow
The POS validates images at two stages:
- Upload time (immediate feedback): File size, dimensions, format, MIME type.
- Pre-sync time (before pushing to Google): URL accessibility, watermark detection, placeholder detection, completeness check.
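For illustration, the upload-time stage can be sketched as follows. This minimal example reads dimensions from PNG headers only and checks Google's 100x100px / 16 MB limits from the table above; a production implementation would use an imaging library to cover every accepted format, and the function names here are illustrative, not part of the POS codebase:

```python
import struct

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data: bytes):
    """Return (width, height) from a PNG byte stream, or None if not a PNG."""
    if len(data) < 24 or not data.startswith(PNG_MAGIC):
        return None
    # The IHDR chunk is always first: length(4) + b"IHDR"(4) + width(4) + height(4)
    if data[12:16] != b"IHDR":
        return None
    width, height = struct.unpack(">II", data[16:24])
    return width, height

def validate_upload(data: bytes, min_px: int = 100, max_bytes: int = 16 * 1024 * 1024):
    """Upload-time checks: format magic bytes, dimensions, and file size."""
    errors = []
    if len(data) > max_bytes:
        errors.append("FILE_TOO_LARGE")
    dims = png_dimensions(data)
    if dims is None:
        errors.append("UNSUPPORTED_FORMAT")
    elif dims[0] < min_px or dims[1] < min_px:
        errors.append("BELOW_MIN_DIMENSIONS")
    return errors
```

Checking magic bytes rather than the file extension is what prevents the extension-spoofing problem noted in the specification table.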
6.5.8 Google Disapproval Prevention Rules
Product disapprovals directly reduce revenue by removing products from Google Shopping. The POS must implement proactive validation to prevent disapprovals before they occur.
Common Disapproval Reasons & POS Prevention
| Disapproval Reason | Google Policy | POS Prevention Rule |
|---|---|---|
| Price mismatch | Feed price MUST match landing page price exactly | Cross-check feed price vs. website price before every sync. Block sync if prices diverge by > $0.01. |
| Availability mismatch | Feed availability MUST match actual stock | Real-time inventory sync. Automatically set out_of_stock when qty == 0. Never show in_stock for zero-qty items. |
| Missing required attributes | All mandatory fields must be populated | Pre-sync validation gate blocks submission if any required field is empty or null. |
| Low image quality | Below minimum resolution, watermarks, promotional overlays | Image validation on upload (dimensions, format). Pre-sync URL accessibility check. |
| Misrepresentation | No fake urgency, fake scarcity, misleading claims | Block promotional text patterns in title and description: regex filter for “Sale!”, “Limited Time”, “Act Now”, etc. |
| Prohibited content | Restricted product categories (weapons, drugs, counterfeit) | Flag restricted Google product categories during product setup. Require manager approval before sync. |
| Missing landing page | Product URL must resolve to a live page with matching data | HTTP HEAD request to link URL before every sync. Block sync on 404/500 responses. |
| Invalid GTIN | Wrong format, invalid check digit, or mismatched product | GTIN validation algorithm: verify length (8/12/13/14 digits), compute mod-10 check digit, reject on mismatch. |
| Missing business info | Merchant Center must have contact info, return policy | Validate Merchant Center profile completeness via API during initial setup. Alert if incomplete. |
| Insufficient product identifiers | Products with known UPC must include gtin | Require barcode field when product has_manufacturer_upc = true. Warn if GTIN is empty for branded products. |
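The price-mismatch rule above ("block sync if prices diverge by > $0.01") is safest with decimal arithmetic rather than floats, so that values like 19.99 compare exactly. A hedged sketch (the function name is illustrative):

```python
from decimal import Decimal

def prices_diverge(feed_price: str, page_price: str, tolerance: str = "0.01") -> bool:
    """True if the feed price and landing-page price differ by more than
    the tolerance. Decimal avoids binary floating-point drift on cents."""
    return abs(Decimal(feed_price) - Decimal(page_price)) > Decimal(tolerance)
```

A divergence of exactly $0.01 is tolerated under the stated rule; anything larger blocks the sync.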
POS Pre-Sync Validation Checklist (Automated)
Every product must pass all 10 validation checks before being submitted to Google Merchant:
- All required fields populated (non-null, non-empty)?
- Primary image meets quality requirements (dimensions, format, HTTPS)?
- Price matches across all channels (POS, website, feed)?
- GTIN/UPC valid format (correct length, check digit passes mod-10)?
- Product category not in prohibited list?
- Landing page URL accessible (HTTP 200 response)?
- Title length <= 150 chars, no promotional text detected?
- Description length <= 5,000 chars, no HTML tags?
- Brand field populated (non-empty, not equal to store name)?
- Availability computed from current inventory (not stale > 1 hour)?
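Check 4 (GTIN format and check digit) uses the GS1 mod-10 algorithm referenced in the disapproval table above: weight the payload digits 3 and 1 alternately from the right, then verify the check digit. A minimal sketch (the function name is illustrative):

```python
def is_valid_gtin(gtin: str) -> bool:
    """Validate GTIN-8/12/13/14: digits only, correct length,
    and a GS1 mod-10 check digit that matches."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    check = digits.pop()
    # GS1 weighting: rightmost payload digit gets weight 3, alternating 1/3 leftward
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10 == check
```

Dashes and spaces should be stripped before calling this, per the `gtin` field rule in Section 6.5.6.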
Pre-Sync Validation Flowchart
flowchart TD
START[Product Queued for<br/>Google Sync] --> V1{1. Required fields<br/>populated?}
V1 -->|No| FAIL_REQ[BLOCK: Missing required<br/>fields -- log which fields]
V1 -->|Yes| V2{2. Image meets<br/>quality requirements?}
V2 -->|No| FAIL_IMG[BLOCK: Image quality<br/>failure -- log reason]
V2 -->|Yes| V3{3. Price matches<br/>across channels?}
V3 -->|No| FAIL_PRICE[BLOCK: Price mismatch<br/>detected -- log discrepancy]
V3 -->|Yes| V4{4. GTIN valid<br/>format & check digit?}
V4 -->|No GTIN & no MPN| FAIL_ID[BLOCK: Missing product<br/>identifier -- GTIN or MPN required]
V4 -->|Invalid check digit| FAIL_GTIN[BLOCK: Invalid GTIN<br/>check digit -- log expected vs actual]
V4 -->|Valid or N/A| V5{5. Category not<br/>prohibited?}
V5 -->|Prohibited| FAIL_CAT[BLOCK: Prohibited<br/>category -- requires override]
V5 -->|Allowed| V6{6. Landing page<br/>URL accessible?}
V6 -->|404 / 500 / timeout| FAIL_URL[BLOCK: Landing page<br/>not accessible -- log HTTP status]
V6 -->|200 OK| V7{7. Title ≤ 150 chars<br/>& no promo text?}
V7 -->|Fails| FAIL_TITLE[BLOCK: Title validation<br/>failure -- log reason]
V7 -->|Passes| V8{8. Description ≤ 5K<br/>chars & no HTML?}
V8 -->|Fails| FAIL_DESC[BLOCK: Description<br/>validation failure]
V8 -->|Passes| V9{9. Brand field<br/>populated & valid?}
V9 -->|Empty or equals store name| FAIL_BRAND[BLOCK: Invalid brand<br/>-- must be manufacturer brand]
V9 -->|Valid| V10{10. Availability<br/>fresh < 1 hour?}
V10 -->|Stale| REFRESH[Refresh inventory<br/>count from Module 4]
REFRESH --> RECOMPUTE[Recompute availability]
RECOMPUTE --> PASS
V10 -->|Fresh| PASS[ALL CHECKS PASSED]
PASS --> SUBMIT[Submit to Google<br/>Merchant API]
SUBMIT -->|200 OK| SUCCESS[Log success<br/>Update sync timestamp]
SUBMIT -->|Error| RETRY[Retry Pipeline<br/>Section 6.2.3]
FAIL_REQ --> LOG[Log validation failure<br/>to google_sync_log]
FAIL_IMG --> LOG
FAIL_PRICE --> LOG
FAIL_ID --> LOG
FAIL_GTIN --> LOG
FAIL_CAT --> LOG
FAIL_URL --> LOG
FAIL_TITLE --> LOG
FAIL_DESC --> LOG
FAIL_BRAND --> LOG
6.5.9 Google Business Profile Integration
Google Business Profile (GBP) integration is required for Local Inventory Ads. The Merchant Center account must be linked to verified GBP listings so that Google can associate product availability with physical store locations.
GBP-to-Merchant Center Linkage
- Verify store ownership in Google Business Profile for each physical location.
- Link GBP to Merchant Center via the Merchant Center settings panel.
- Map POS locations to GBP listings using the google_store_code assigned by GBP.
- Enrol locations in Local Inventory Ads via the Merchant Center LIA program.
- Sync store hours from POS location settings to GBP to ensure accuracy.
Data Model: google_business_profile_locations
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants.id |
pos_location_id | UUID | Yes | FK to locations.id (Module 5) |
gbp_place_id | VARCHAR(100) | Yes | Google Places API place ID (e.g., ChIJ...) |
google_store_code | VARCHAR(50) | Yes | Store code used in Local Inventory feed (must match Merchant Center) |
store_name | VARCHAR(200) | Yes | Business name as it appears in GBP |
address_line1 | VARCHAR(255) | Yes | Street address |
address_line2 | VARCHAR(255) | No | Suite, unit, floor |
city | VARCHAR(100) | Yes | City name |
state_province | VARCHAR(100) | Yes | State or province |
postal_code | VARCHAR(20) | Yes | ZIP or postal code |
country_code | CHAR(2) | Yes | ISO 3166-1 alpha-2 (e.g., US, CA) |
phone | VARCHAR(20) | Yes | Primary phone number in E.164 format |
hours_json | JSONB | Yes | Structured store hours (see format below) |
is_verified | BOOLEAN | Yes | Whether GBP ownership is verified – default false |
is_enrolled_lia | BOOLEAN | Yes | Whether Local Inventory Ads are enabled – default false |
last_sync_at | TIMESTAMPTZ | No | Timestamp of most recent GBP sync |
created_at | TIMESTAMPTZ | Yes | Row creation timestamp |
updated_at | TIMESTAMPTZ | Yes | Last modification timestamp |
Store Hours JSON Format
# Example hours_json structure
hours:
monday: { open: "09:00", close: "21:00" }
tuesday: { open: "09:00", close: "21:00" }
wednesday: { open: "09:00", close: "21:00" }
thursday: { open: "09:00", close: "21:00" }
friday: { open: "09:00", close: "22:00" }
saturday: { open: "10:00", close: "22:00" }
sunday: { open: "11:00", close: "18:00" }
special_hours:
- date: "2026-12-25"
closed: true
- date: "2026-12-24"
open: "09:00"
close: "15:00"
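For illustration, here is how the hours_json structure above could be evaluated at runtime, with special_hours overriding the regular weekly schedule (a sketch; the helper name is not part of the POS schema):

```python
from datetime import datetime, time

def is_open(hours: dict, when: datetime) -> bool:
    """Check whether the store is open at `when`.
    A matching special_hours entry takes precedence over the weekly schedule."""
    iso_date = when.date().isoformat()
    for special in hours.get("special_hours", []):
        if special.get("date") == iso_date:
            if special.get("closed"):
                return False
            return (time.fromisoformat(special["open"]) <= when.time()
                    < time.fromisoformat(special["close"]))
    # Fall back to the regular schedule keyed by lowercase weekday name
    regular = hours.get("hours", {}).get(when.strftime("%A").lower())
    if regular is None:
        return False
    return (time.fromisoformat(regular["open"]) <= when.time()
            < time.fromisoformat(regular["close"]))
```

With the example data above, December 25, 2026 evaluates as closed all day, while December 24 is open only between 09:00 and 15:00.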
Cross-Reference: See Module 5, Section 5.3 for the location management screens where POS locations are configured and mapped to Google Business Profile listings.
6.5.10 Content API Migration Plan
The Content API for Shopping (/content/v2.1/) is being fully replaced by the Merchant API. All POS integrations must target the Merchant API exclusively. This section documents the migration path for any legacy Content API usage.
Migration Timeline
| Step | Action | Deadline | Status |
|---|---|---|---|
| 1 | Audit current Content API usage across all endpoints | January 2026 | Required |
| 2 | Map Content API calls to Merchant API v1 equivalents | February 2026 | Required |
| 3 | Update authentication to use Merchant API token endpoints | March 2026 | Required |
| 4 | Update product data submission to ProductInput resource | April 2026 | Required |
| 5 | Update local inventory to new localInventories endpoints | May 2026 | Required |
| 6 | Full regression testing (product sync, local inventory, notifications) | June 2026 | Required |
| 7 | Production cutover: switch all tenants to Merchant API | July 2026 | Required |
| 8 | Content API fully deprecated – all endpoints cease | August 18, 2026 | Hard deadline |
Key API Mapping: Content API to Merchant API
| Content API (v2.1) | Merchant API (v1) | Breaking Changes |
|---|---|---|
products.insert | productInputs:insert | Write resource renamed to ProductInput; read resource is now Product |
products.get | products.get | Response schema changed; now returns Google-processed version only |
products.list | products.list | Pagination uses pageToken instead of startToken |
products.delete | productInputs:delete | Must target ProductInput resource, not Product |
products.custombatch | Individual calls or batch API | custombatch removed; use standard batch request pattern |
localinventory.insert | localInventories:insert | New URL structure under products/{product}/localInventories |
pos.inventory (legacy) | localInventories:insert | POS-specific endpoint removed; use standard local inventory |
productstatuses.get | productStatuses.get | Response schema updated; new status categories |
Risk Mitigation
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Migration not completed before August 18, 2026 | Medium | Critical – all Google Shopping listings go dark | Start migration in Q1 2026; track weekly in sprint reviews |
| Breaking changes in Merchant API v1 before GA | Low | Medium – requires rework of integration code | Pin to v1beta; monitor Google Merchant API changelog weekly |
| Tenant data inconsistency during cutover | Medium | Medium – temporary product disapprovals | Run Content API and Merchant API in parallel for 2 weeks before final cutover |
| Rate limit changes in new API | Low | Low – may require batching strategy adjustment | Monitor rate limit headers during testing; update google_merchant_sync config as needed |
6.5.11 Reports: Google Merchant Integration
The POS provides five standard reports for monitoring Google Merchant integration health, product approval status, and shopping performance.
Report Catalogue
| Report | Purpose | Key Data Fields | Refresh Frequency |
|---|---|---|---|
| Google Product Status | Track approved, pending, and disapproved products across the feed | sku, google_product_id, status (approved/pending/disapproved), disapproval_reasons[], last_updated | Every 6 hours |
| Google Local Inventory | Monitor store-level availability accuracy and sync latency | store_code, store_name, products_synced, avg_sync_latency_min, stale_count (not synced in 24h), accuracy_pct | Every 4 hours |
| Google Shopping Performance | Surface click, impression, and CTR data from Performance Max campaigns (if available) | product_id, sku, impressions, clicks, ctr_pct, avg_cpc, conversions, revenue | Daily (data from Google Ads API) |
| Google Disapproval Tracker | Track disapproved products with reasons, remediation status, and re-approval dates | sku, product_name, disapproval_reason, disapproved_at, remediation_action, remediation_status (open/in_progress/resolved), re_approved_at | Real-time (via push notifications, Section 6.5.4) |
| Google Feed Health | Overall feed quality score with missing-field analysis and image quality audit | total_products, products_with_all_fields_pct, missing_gtin_count, missing_description_count, low_quality_image_count, feed_health_score (0-100), recommendations[] | Daily |
Feed Health Score Calculation
The Feed Health Score is a composite metric (0–100) calculated as follows:
| Component | Weight | Scoring |
|---|---|---|
| Required fields completeness | 40% | 100 if all products have all required fields; deduct 1 point per product with missing fields (min 0) |
| Image quality compliance | 25% | 100 if all images pass validation; deduct 2 points per product with image issues (min 0) |
| GTIN coverage | 15% | 100 if all products with UPCs have valid GTINs; deduct 1 point per missing GTIN (min 0) |
| Price accuracy | 10% | 100 if all feed prices match website prices; 0 if any mismatch detected |
| Disapproval rate | 10% | 100 if 0% disapproved; 0 if > 10% disapproved; linear between |
feed_health_score:
components:
required_fields:
weight: 0.40
deduction_per_violation: 1
image_quality:
weight: 0.25
deduction_per_violation: 2
gtin_coverage:
weight: 0.15
deduction_per_violation: 1
price_accuracy:
weight: 0.10
scoring: binary # 100 or 0
disapproval_rate:
weight: 0.10
max_acceptable_rate: 0.10
thresholds:
excellent: 90
good: 75
needs_attention: 50
critical: 0
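The weighted calculation above can be expressed directly. A sketch that mirrors the component table (signature and parameter names are illustrative; violation counts come from the feed audit):

```python
def feed_health_score(missing_fields: int, image_issues: int, missing_gtin: int,
                      price_mismatch: bool, disapproval_rate: float) -> float:
    """Composite 0-100 Feed Health Score per the weighting table above."""
    def deducted(violations: int, per_violation: int) -> int:
        return max(0, 100 - violations * per_violation)

    required = deducted(missing_fields, 1)     # weight 0.40, -1 per violation
    images = deducted(image_issues, 2)         # weight 0.25, -2 per violation
    gtin = deducted(missing_gtin, 1)           # weight 0.15, -1 per violation
    price = 0 if price_mismatch else 100       # weight 0.10, binary
    # weight 0.10: 100 at 0% disapproved, linear down to 0 at 10%
    disapproval = max(0.0, 100 * (1 - min(disapproval_rate, 0.10) / 0.10))

    return round(0.40 * required + 0.25 * images + 0.15 * gtin
                 + 0.10 * price + 0.10 * disapproval, 1)
```

A clean feed scores 100; a single price mismatch alone caps the score at 90 (the full price-accuracy component is lost), which already drops the feed out of the "excellent" band.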
Cross-Reference: See Module 5, Section 5.18 for dashboard widget configuration that surfaces the Feed Health Score on the admin portal home screen.
6.6 Cross-Platform Product Data Requirements
Scope: Ensuring that all product data managed in the POS system meets the validation requirements of every connected external platform simultaneously. Rather than validating per-platform at sync time, the POS enforces a unified “strictest-rule-wins” validation policy at data entry. This guarantees that any product passing POS validation is immediately eligible for listing on Shopify, Amazon, and Google Merchant Center without remediation.
Cross-Reference: See Module 3, Section 3.6 for multi-channel management and channel visibility. See Module 3, Section 3.7 for Shopify-specific sync and field ownership. See Module 5, Section 5.16 for Integration Hub configuration and health monitoring.
Design Principle: The POS system acts as the single source of truth for product data. By enforcing the strictest requirement from any connected platform at the point of data entry, we eliminate the common pattern of “create now, fix later” that leads to suppressed listings, disapproved products, and lost revenue.
6.6.1 Unified Product Data Validation Matrix
The following matrix compares field-level requirements across all three platforms and documents the POS-enforced rule derived from the strictest constraint.
| POS Field | Shopify Requirement | Amazon Requirement | Google Requirement | POS Enforced Rule (Strictest) |
|---|---|---|---|---|
name (title) | Max 255 chars | Max 500 chars | Max 150 chars | Max 150 chars (Google strictest) |
long_description | HTML allowed, no max | HTML allowed, max 2,000 chars (category-dependent) | Max 5,000 chars, plain text preferred | Max 5,000 chars (Google cap; HTML sanitized for Google feed) |
short_description | N/A (uses body) | Max 1,000 chars (bullet points) | N/A | Max 1,000 chars (Amazon bullet point limit) |
primary_image | Any format, 2048x2048 recommended | Min 1000x1000px for zoom eligibility | Min 250x250px (apparel), no watermarks | Min 1000x1000px, no watermarks (Amazon + Google combined) |
base_price | Required | Required per marketplace | Must match landing page price | Required, must match across all channels |
compare_at_price | Optional (strikethrough) | List price (optional) | Optional (sale_price / sale_price_effective_date) | Optional, must be > base_price if set |
barcode (UPC/EAN) | Optional | Required for most categories | Required (GTIN) if exists for product | Required (treat as mandatory for channel eligibility) |
brand | Optional (vendor field) | Required | Required (most categories) | Required |
weight | Optional | Required for FBA fulfillment | Optional (but needed for shipping) | Required (needed for FBA and shipping calculations) |
weight_unit | g, kg, lb, oz | Pounds or kilograms | g, kg, lb, oz | Required, stored in grams, converted per platform |
condition | N/A | Required (New, Refurbished, Used) | Required (new, refurbished, used) | Required (default: new) |
product_type | Optional (free-text) | Required (Amazon Browse Node ID) | Optional (Google Product Category ID) | Required (must map to both Amazon and Google taxonomies) |
sku | Required, unique per store | Required (seller_sku), unique | Required (offerId), max 50 chars | Required, unique per tenant, max 50 chars (Google strictest) |
manufacturer_part_number | N/A | Optional (MPN) | Required if no GTIN assigned | Required if no barcode/GTIN |
color | Optional (variant option) | Required for apparel | Required for apparel | Required for apparel categories |
size | Optional (variant option) | Required for apparel | Required for apparel | Required for apparel categories |
gender | N/A | Required for apparel | Required for apparel | Required for apparel categories |
age_group | N/A | Required for apparel | Required for apparel | Required for apparel categories (adult, kids, toddler, infant, newborn) |
material | N/A | Optional | Recommended for apparel | Recommended (improves listing quality) |
country_of_origin | N/A | Required for some categories | N/A | Required (needed for customs and Amazon compliance) |
Business Rules:
- All text fields are trimmed of leading/trailing whitespace before validation.
- Title (name) must not contain promotional text (e.g., “FREE SHIPPING”, “SALE”, “BUY NOW”) per Amazon and Google policies.
- Price must be > $0.00 for all channel-listed products. Zero-price items are blocked from channel sync.
- If a product fails unified validation, it can still be used for in-store POS sales but is blocked from external channel sync.
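The title rules above (150-character cap, no promotional text) lend themselves to a small validator. A sketch — the promo-phrase list is a starting point for the regex filter, not an exhaustive policy match:

```python
import re

# Phrases that violate Amazon/Google title policy (illustrative, not exhaustive)
PROMO_PATTERN = re.compile(
    r"\b(free shipping|sale|buy now|limited time|act now)\b", re.IGNORECASE)

def validate_title(title: str, max_len: int = 150) -> list:
    """Return failure codes for a channel-bound product title (empty list = pass)."""
    title = title.strip()  # all text fields are trimmed before validation
    errors = []
    if not title:
        errors.append("EMPTY_TITLE")
    if len(title) > max_len:
        errors.append("TITLE_TOO_LONG")
    if PROMO_PATTERN.search(title):
        errors.append("PROMO_TEXT_IN_TITLE")
    return errors
```

The word-boundary anchors matter: they flag “SALE” without falsely flagging words like “wholesale”.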
6.6.2 Image Requirements Matrix
Product images are the most common reason for listing suppression across platforms. The POS enforces a unified image standard that satisfies all platforms simultaneously.
| Requirement | Shopify | Amazon | Google | POS Enforced Rule |
|---|---|---|---|---|
| Min resolution | 2048x2048 recommended | 1000x1000 min (zoom eligible) | 250x250 (apparel) / 100x100 (other) | 1000x1000 minimum |
| Max resolution | 4472x4472 | 10000x10000 | N/A | 10000x10000 maximum |
| Max file size | 20MB | 10MB | 16MB | 10MB maximum (Amazon strictest) |
| Formats | JPEG, PNG, GIF | JPEG, PNG, TIFF, GIF | JPEG, PNG, GIF, WebP, BMP, TIFF | JPEG or PNG (universally supported) |
| Color space | sRGB | sRGB | sRGB | sRGB required |
| Background | Any | Pure white (RGB 255,255,255) required for main | White/transparent preferred | White background required (for main image) |
| Watermarks | Allowed | PROHIBITED | PROHIBITED | PROHIBITED |
| Text overlay | Allowed | PROHIBITED on main image | PROHIBITED | PROHIBITED on main image |
| Borders/frames | Allowed | PROHIBITED | PROHIBITED | PROHIBITED |
| Product coverage | N/A | 85% of frame recommended | 75-90% of frame | 85% of frame (Amazon guideline) |
| Main image count | 1 required | 1 required (up to 9 total) | 1 required (up to 10 additional) | 1 required, up to 9 recommended |
| Aspect ratio | Any | 1:1 preferred | Any (1:1 preferred) | 1:1 recommended (square) |
Image Upload Workflow:
flowchart TD
A[Staff Uploads Image] --> B{File Format Check}
B -->|Not JPEG/PNG| C[REJECT: Convert to JPEG or PNG]
B -->|JPEG or PNG| D{Resolution Check}
D -->|< 1000x1000| E[REJECT: Minimum 1000x1000px required]
D -->|>= 1000x1000| F{File Size Check}
F -->|> 10MB| G[WARN: Compress to under 10MB]
F -->|<= 10MB| H{Main Image?}
H -->|Yes| I{Background Check}
H -->|No - Additional| J[PASS: Store Image]
I -->|Not White| K[WARN: White background recommended for main image]
I -->|White| J
G --> L[Auto-Compress & Re-Check]
L --> H
K --> M[Staff Acknowledges Warning]
M --> J
J --> N[Generate Platform Variants]
N --> O[Store Original + Variants]
style C fill:#d32f2f,color:#fff
style E fill:#d32f2f,color:#fff
style G fill:#ff9800,color:#fff
style K fill:#ff9800,color:#fff
style J fill:#2e7d32,color:#fff
Image Storage Strategy:
- Original image stored at upload resolution.
- Platform-optimized variants generated asynchronously: Shopify (2048x2048), Amazon (2000x2000), Google (1200x1200).
- Variant generation uses lossy JPEG compression at quality 85 for file size compliance.
- Images are served via CDN with platform-specific URL patterns.
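Variant generation itself would use an imaging library, but the target-dimension math is easy to show: fit the original inside each platform's square bound, preserving aspect ratio and never upscaling (platform sizes from the storage strategy above; the function name is illustrative):

```python
# Square bounds per the platform-optimized variants listed above
PLATFORM_VARIANTS = {"shopify": 2048, "amazon": 2000, "google": 1200}

def variant_size(orig_w: int, orig_h: int, target: int) -> tuple:
    """Scale (orig_w, orig_h) to fit inside a target x target square.
    Capping the scale at 1.0 means small originals are never upscaled."""
    scale = min(target / max(orig_w, orig_h), 1.0)
    return round(orig_w * scale), round(orig_h * scale)
```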
6.6.3 Pre-Sync Validation Engine
Before pushing any product to any external platform, the validation engine evaluates the product against the unified rules defined above. Validation runs automatically on product save and on-demand before sync operations.
Validation Result Levels:
| Level | Code | Behavior | Description |
|---|---|---|---|
| PASS | PASS | Sync allowed | All required fields present and valid for this platform |
| WARN | WARN | Sync allowed with advisory | Non-blocking issues detected (e.g., recommended fields missing, suboptimal image) |
| FAIL | FAIL | Sync blocked | Required fields missing or invalid; product cannot be listed on this platform |
Product Sync Validation Record:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
product_id | UUID | Yes | FK to products table |
variant_id | UUID | No | FK to product_variants table (NULL for parent-level validation) |
tenant_id | UUID | Yes | FK to tenants table |
platform | Enum | Yes | SHOPIFY, AMAZON, GOOGLE_MERCHANT |
validation_status | Enum | Yes | PASS, WARN, FAIL |
validation_errors | JSON | No | Array of {field, rule, message, severity} objects |
validation_warnings | JSON | No | Array of {field, rule, message, recommendation} objects |
last_validated_at | DateTime | Yes | Timestamp of last validation run |
last_synced_at | DateTime | No | Timestamp of last successful sync to this platform |
sync_status | Enum | Yes | PENDING, SYNCED, FAILED, BLOCKED, NOT_CONFIGURED |
sync_error_message | String(500) | No | Error message from last failed sync attempt |
external_id | String(100) | No | ID on external platform (Shopify product_id, Amazon ASIN, Google offerId) |
external_status | String(50) | No | Status on external platform (active, suppressed, disapproved, pending) |
external_status_reason | Text | No | Platform-reported reason for non-active status |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Validation Error Object Schema:
validation_error:
field: "barcode"
rule: "REQUIRED_FOR_AMAZON"
message: "Barcode (UPC/EAN) is required for Amazon listings in this category"
severity: "FAIL"
platform: "AMAZON"
remediation: "Add a valid UPC or EAN barcode in the product details"
Validation Engine Flow:
flowchart TD
A[Product Change Detected] --> B[Load Product Data]
B --> C[Load Platform Configurations]
C --> D{For Each Enabled Platform}
D --> E[Check Required Fields]
E --> F[Check Field Length Constraints]
F --> G[Check Image Rules]
G --> H[Check Price Consistency]
H --> I[Check Category Mapping]
I --> J[Check Platform-Specific Attributes]
J --> K[Generate Validation Report]
K --> L{Any FAIL Results?}
L -->|Yes| M[Set sync_status = BLOCKED]
L -->|No| N{Any WARN Results?}
N -->|Yes| O[Set validation_status = WARN<br/>sync_status = PENDING]
N -->|No| P[Set validation_status = PASS<br/>sync_status = PENDING]
M --> Q[Store Validation Record]
O --> Q
P --> Q
Q --> R[Notify Admin Dashboard]
R --> S{Auto-Sync Enabled?}
S -->|Yes, status = PASS/WARN| T[Queue for Platform Sync]
S -->|No or BLOCKED| U[Await Manual Action]
style M fill:#d32f2f,color:#fff
style O fill:#ff9800,color:#fff
style P fill:#2e7d32,color:#fff
Validation Triggers:
- Product created or updated (automatic)
- Image added, replaced, or removed (automatic)
- Price changed (automatic)
- Manual “Validate All” button in Admin Portal (on-demand)
- Bulk validation via scheduled job (nightly at 2:00 AM tenant time)
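Per-rule results roll up to the record-level statuses in the table above with a simple severity precedence (FAIL > WARN > PASS). A sketch:

```python
def aggregate_status(rule_results: list) -> tuple:
    """Roll per-rule results up to (validation_status, sync_status):
    any FAIL blocks the sync; any WARN still allows it with an advisory."""
    if any(r == "FAIL" for r in rule_results):
        return "FAIL", "BLOCKED"
    if any(r == "WARN" for r in rule_results):
        return "WARN", "PENDING"
    return "PASS", "PENDING"
```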
Admin Dashboard - Validation Summary View:
| Metric | Description |
|---|---|
| Total Products | Count of all active products in tenant catalog |
| Shopify Ready | Count where Shopify validation = PASS or WARN |
| Amazon Ready | Count where Amazon validation = PASS or WARN |
| Google Ready | Count where Google Merchant validation = PASS or WARN |
| Blocked (per platform) | Count where validation = FAIL, with top 5 failure reasons |
| Needs Attention | Products with WARN status grouped by warning type |
6.6.4 Platform-Specific Product Attributes
Beyond the unified fields, each platform requires or supports additional attributes that are stored in platform-specific extension fields on the product record.
Amazon-Specific Attributes:
| Attribute | Description | POS Storage | Validation |
|---|---|---|---|
product_type | Amazon Browse Node taxonomy classification | amazon_product_type (String) | Must map to valid Amazon category node ID |
bullet_points | Up to 5 key feature bullet points | amazon_bullet_points (JSON array) | Max 5 entries, max 1,000 chars each |
search_terms | Backend search keywords (not visible to customers) | amazon_search_terms (String) | Max 250 bytes total, no ASINs or brand names |
a_plus_content | Enhanced brand content eligibility | amazon_a_plus_eligible (Boolean) | Brand registered sellers only |
fulfillment_channel | Fulfillment method | amazon_fulfillment (Enum) | FBA, FBM, or BOTH |
item_condition_note | Condition details for non-new items | amazon_condition_note (Text) | Required if condition != new, max 1,000 chars |
max_handling_time | Days to ship after order | amazon_handling_days (Integer) | 1-30 days; required for FBM |
restock_date | Expected restock date if out of stock | amazon_restock_date (Date) | Optional; future date only |
Amazon Attribute Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
product_id | UUID | Yes | FK to products table |
tenant_id | UUID | Yes | FK to tenants table |
amazon_product_type | String(100) | Yes | Amazon Browse Node category |
amazon_bullet_points | JSON | No | Array of up to 5 strings, 1000 chars each |
amazon_search_terms | String(250) | No | Backend keywords, space-separated |
amazon_a_plus_eligible | Boolean | Yes | Default: false |
amazon_fulfillment | Enum | Yes | FBA, FBM, BOTH |
amazon_condition_note | Text | No | Required if condition is not new |
amazon_handling_days | Integer | No | Max handling time for FBM orders |
amazon_restock_date | Date | No | Expected restock date |
amazon_asin | String(10) | No | Amazon Standard Identification Number (assigned by Amazon) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Google-Specific Attributes:
| Attribute | Description | POS Storage | Validation |
|---|---|---|---|
google_product_category | Google taxonomy path (numeric ID) | google_category_id (Integer) | Must map to valid Google Product Category |
mpn | Manufacturer part number | manufacturer_part_number (String) | Required if no GTIN; max 70 chars |
additional_image_link | Up to 10 additional images | product_images array (positions 2-11) | Same quality rules as main image |
product_highlight | Up to 10 key feature bullet points | google_highlights (JSON array) | Max 150 chars each, max 10 entries |
local_inventory_attrs | Store-level availability for Local Inventory Ads | Computed from POS inventory per location | Per-location mapping via storeCode |
pickup_method | How customer picks up in-store | google_pickup_method (Enum) | buy, reserve, ship_to_store, not_supported |
pickup_sla | Pickup time estimate | google_pickup_sla (Enum) | same_day, next_day, 2-day, 3-day, 4-day, 5-day, 6-day, multi-week |
custom_label_0 through custom_label_4 | Custom grouping labels for Shopping campaigns | google_custom_labels (JSON) | Max 100 chars each, up to 5 labels |
ads_redirect | Tracking URL for Google Ads | google_ads_redirect (String) | Valid URL, max 2,000 chars |
Google Attribute Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
product_id | UUID | Yes | FK to products table |
tenant_id | UUID | Yes | FK to tenants table |
google_category_id | Integer | Yes | Google Product Category taxonomy ID |
google_category_path | String(500) | No | Human-readable category path (e.g., “Apparel & Accessories > Clothing > Shirts”) |
google_highlights | JSON | No | Array of up to 10 strings, 150 chars each |
google_pickup_method | Enum | No | buy, reserve, ship_to_store, not_supported |
google_pickup_sla | Enum | No | same_day, next_day, 2-day through 6-day, multi-week |
google_custom_labels | JSON | No | Object with keys label_0 through label_4, max 100 chars each |
google_ads_redirect | String(2000) | No | Tracking URL for Google Ads campaigns |
google_offer_id | String(50) | No | Google Merchant Center offer ID (auto-generated from SKU if blank) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Shopify-Specific Attributes:
These fields are owned by Shopify in bidirectional sync mode (see Module 3, Section 3.7). The POS stores them as read-only references.
| Attribute | Description | POS Storage | Ownership |
|---|---|---|---|
seo_title | Meta title for search engine results | shopify_meta_title (String, max 70 chars) | Shopify-Owned |
seo_description | Meta description for search engine results | shopify_meta_description (String, max 320 chars) | Shopify-Owned |
url_handle | URL slug for the product page | shopify_handle (String, max 255 chars) | Shopify-Owned |
metafields | Custom structured data (JSON metafields) | shopify_metafields (JSON) | Shopify-Owned |
collections | Product collection memberships | shopify_collections (JSON array of IDs) | Shopify-Owned |
sales_channels | Channel publishing scope | shopify_channels (JSON array) | Shopify-Owned |
tags | Comma-separated product tags | shopify_tags (Text) | Configurable (POS or Shopify) |
template_suffix | Product page template override | shopify_template_suffix (String) | Shopify-Owned |
Shopify Attribute Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| product_id | UUID | Yes | FK to products table |
| tenant_id | UUID | Yes | FK to tenants table |
| shopify_product_id | BigInt | No | Shopify internal product ID (assigned after first sync) |
| shopify_meta_title | String(70) | No | SEO title |
| shopify_meta_description | String(320) | No | SEO description |
| shopify_handle | String(255) | No | URL handle/slug |
| shopify_metafields | JSON | No | Custom metafield key-value pairs |
| shopify_collections | JSON | No | Array of Shopify collection IDs |
| shopify_channels | JSON | No | Array of Shopify sales channel names |
| shopify_tags | Text | No | Comma-separated tags |
| shopify_template_suffix | String(100) | No | Theme template override |
| shopify_published_at | DateTime | No | When product was published on Shopify |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Cross-Reference: See Module 3, Section 3.7.2 for field-level ownership model defining which system (POS or Shopify) is authoritative for each field in bidirectional sync mode.
6.7 Cross-Platform Inventory Sync Rules
Scope: Defining the rules, architecture, and failure handling for real-time inventory synchronization between the POS system (source of truth) and all connected external sales channels. This section covers sync latency targets, safety buffer configuration, oversell prevention, channel-specific inventory rules, and failure recovery procedures.
Cross-Reference: See Module 4, Section 4.1 for the POS inventory status model. See Module 4, Section 4.14 for Shopify-specific inventory sync. See Module 3, Section 3.6.3 for channel inventory allocation modes. See Module 5, Section 5.16 for Integration Hub health monitoring.
Design Principle: The POS system maintains a single, authoritative inventory count per product per location. All external channels receive computed available quantities derived from the POS count minus safety buffers and reservations. No external channel can directly modify POS inventory – all inbound changes (e.g., Shopify admin adjustments) are processed through the sync engine with conflict resolution.
6.7.1 Real-Time Inventory Sync Architecture
flowchart TD
POS["POS System\n(Source of Truth)\nSingle Inventory Record\nPer Product Per Location"]
INV_ENGINE["Inventory Sync Engine\n(Event-Driven)"]
SHOP["Shopify\nTarget: < 5s latency\nBidirectional Webhooks"]
AMZ_FBM["Amazon FBM\nTarget: < 2min latency\nAPI Push + SQS Pull"]
AMZ_FBA["Amazon FBA\nRead-Only Monitoring\nAmazon Manages Stock"]
GOOG["Google Merchant Center\nTarget: < 30min processing\nAPI Push (Content API)"]
POS -->|Inventory Event| INV_ENGINE
INV_ENGINE -->|Webhook Push| SHOP
INV_ENGINE -->|Feeds API Push| AMZ_FBM
INV_ENGINE -->|Content API Push| GOOG
INV_ENGINE -.->|Read via SP-API| AMZ_FBA
SHOP -->|Webhook: inventory_levels/update| INV_ENGINE
AMZ_FBM -->|SQS: ANY_OFFER_CHANGED notification| INV_ENGINE
INV_ENGINE -->|Reconciliation| POS
style POS fill:#1565c0,stroke:#0d47a1,color:#fff
style INV_ENGINE fill:#7b2d8e,stroke:#5a1d6e,color:#fff
style SHOP fill:#2e7d32,stroke:#1b5e20,color:#fff
style AMZ_FBM fill:#e65100,stroke:#bf360c,color:#fff
style AMZ_FBA fill:#6c757d,stroke:#495057,color:#fff
style GOOG fill:#c62828,stroke:#b71c1c,color:#fff
Inventory Events That Trigger Sync:
| Event | Source | Channels Notified |
|---|---|---|
| POS Sale completed | POS Terminal | All enabled channels |
| POS Return processed | POS Terminal | All enabled channels |
| Inventory adjustment | Admin Portal | All enabled channels |
| Inventory count reconciliation | Admin Portal | All enabled channels |
| Inter-store transfer (shipped) | Source location | All enabled channels (source location) |
| Inter-store transfer (received) | Destination location | All enabled channels (destination location) |
| Purchase order received | Receiving module | All enabled channels (receiving location) |
| Online order reserved | Shopify/Amazon | Remaining channels |
| Online order cancelled | Shopify/Amazon | All enabled channels (restore qty) |
Sync Latency Targets:
| Channel | Sync Method | Target Latency | Reconciliation Frequency | Max Acceptable Lag |
|---|---|---|---|---|
| Shopify | Webhooks (bidirectional) | < 5 seconds | Every 15 minutes | 60 seconds |
| Amazon FBM | SP-API push + SQS pull | < 2 minutes | Every 30 minutes | 10 minutes |
| Amazon FBA | Read-only monitoring (SP-API) | N/A (Amazon manages) | Every 4 hours | N/A |
| Google Merchant | Content API push | < 30 minutes (Google processing time) | Every 6 hours | 60 minutes |
Reconciliation Process:
- At each reconciliation interval, the sync engine compares POS quantities against platform-reported quantities.
- Discrepancies are logged in the Integration Sync Log (Module 5, Section 5.16.4).
- If discrepancy exceeds the configured threshold (default: 5 units), an admin alert is triggered.
- Auto-correction pushes POS quantity to the platform (POS always wins in reconciliation).
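The reconciliation pass above can be sketched as a simple comparison loop. This is a hypothetical Python sketch — the `reconcile` function, its dictionary inputs, and the `ReconcileResult` container are illustrative names, not a prescribed implementation; the 5-unit alert threshold follows the stated default:

```python
# Hypothetical sketch of the reconciliation pass: compare POS quantities
# against platform-reported quantities, flag large discrepancies for an
# admin alert, and auto-correct by pushing the POS value (POS always wins).
from dataclasses import dataclass, field

ALERT_THRESHOLD = 5  # default discrepancy threshold (units)

@dataclass
class ReconcileResult:
    corrections: dict = field(default_factory=dict)  # sku -> qty to push to platform
    alerts: list = field(default_factory=list)       # skus exceeding the threshold

def reconcile(pos_qty: dict, platform_qty: dict,
              threshold: int = ALERT_THRESHOLD) -> ReconcileResult:
    result = ReconcileResult()
    for sku, pos in pos_qty.items():
        reported = platform_qty.get(sku, 0)
        diff = abs(pos - reported)
        if diff == 0:
            continue  # in sync; nothing to do
        if diff > threshold:
            result.alerts.append(sku)  # trigger admin alert for large discrepancy
        result.corrections[sku] = pos  # auto-correct: push POS quantity to platform
    return result
```

Any discrepancy, large or small, results in a correction push; only discrepancies above the threshold additionally raise an admin alert.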
6.7.2 Safety Buffer Configuration
Safety buffers prevent overselling by withholding a configurable number of units from external channel listings. This provides a cushion for in-store sales, processing delays, and inventory inaccuracies.
Primary Formula:
Channel Available Qty = POS Available Qty - Safety Buffer
If Channel Available Qty < min_channel_qty → Show as out_of_stock on that channel
If max_channel_qty is set → Channel Available Qty = MIN(Channel Available Qty, max_channel_qty)
Safety Buffer Settings:
| Setting | Description | Default | Per-Product Override | Per-Channel Override |
|---|---|---|---|---|
| safety_buffer_qty | Fixed units withheld from channel listing | 0 | Yes | Yes |
| safety_buffer_pct | Percentage of POS qty withheld (alternative to fixed) | NULL | Yes | Yes |
| buffer_calculation | How the buffer is computed | FIXED | Yes | Yes |
| channel_warehouse_id | Specific POS location(s) feeding this channel | NULL (all locations) | No | Yes |
| min_channel_qty | Minimum qty to display on channel; below this = out_of_stock | 1 | Yes | Yes |
| max_channel_qty | Maximum qty shown on channel (cap) | NULL (no cap) | Yes | Yes |
Buffer Calculation Modes:
| Mode | Formula | Use Case | Example (POS Qty = 20, Buffer = 3) |
|---|---|---|---|
| FIXED | Channel Qty = POS Qty - buffer_qty | Simple fixed reserve for walk-in customers | 20 - 3 = 17 listed |
| PERCENTAGE | Channel Qty = POS Qty - CEIL(POS Qty * buffer_pct / 100) | Proportional reserve that scales with stock level | 20 - CEIL(20 * 15%) = 20 - 3 = 17 listed |
| MIN_RESERVE | Channel Qty = MAX(0, POS Qty - buffer_qty) | Floor-based reserve (never goes negative) | 20 - 3 = 17 listed (or 0 if POS Qty < buffer) |
Buffer Priority Resolution:
- Product-specific + Channel-specific override (highest priority)
- Product-specific override (applies to all channels)
- Channel-specific default (applies to all products on that channel)
- Tenant-wide default (lowest priority)
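The four-level priority resolution can be expressed as a specificity ranking over buffer rules. A hypothetical sketch — `resolve_buffer_rule` and the dictionary-based rule shape are illustrative stand-ins for the data model rows defined below in this section:

```python
# Hypothetical sketch of buffer rule resolution: the most specific active
# rule wins, following the four-level priority order listed above.
def resolve_buffer_rule(rules, product_id, channel):
    def priority(rule):
        matches_product = rule.get("product_id") == product_id
        matches_channel = rule.get("channel") == channel
        if matches_product and matches_channel:
            return 0  # product + channel override (highest priority)
        if matches_product and rule.get("channel") == "ALL":
            return 1  # product-specific override, all channels
        if rule.get("product_id") is None and matches_channel:
            return 2  # channel-specific default, all products
        if rule.get("product_id") is None and rule.get("channel") == "ALL":
            return 3  # tenant-wide default (lowest priority)
        return 99  # rule does not apply to this product/channel pair
    candidates = [r for r in rules if r.get("is_active", True) and priority(r) < 99]
    return min(candidates, key=priority, default=None)
```

This mirrors the auto-calculated `priority` column in the data model: lower values win.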
Safety Buffer Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants table |
| product_id | UUID | No | FK to products table; NULL = tenant-wide or channel-wide default |
| variant_id | UUID | No | FK to product_variants; NULL = applies to all variants of product |
| channel | Enum | Yes | SHOPIFY, AMAZON_FBM, GOOGLE_MERCHANT, ALL |
| safety_buffer_qty | Integer | Yes | Fixed units to withhold (default: 0) |
| safety_buffer_pct | Decimal(5,2) | No | Percentage buffer (alternative to fixed); NULL if using fixed |
| buffer_calculation | Enum | Yes | FIXED, PERCENTAGE, MIN_RESERVE |
| min_channel_qty | Integer | Yes | Below this threshold = show as out_of_stock (default: 1) |
| max_channel_qty | Integer | No | Cap on listed quantity; NULL = unlimited |
| channel_warehouse_id | UUID | No | FK to locations; specific location for this channel; NULL = aggregate all locations |
| is_active | Boolean | Yes | Whether this buffer rule is active (default: true) |
| priority | Integer | Yes | Resolution priority (lower = higher priority); auto-calculated |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Recommended Buffer Defaults by Channel:
| Channel | Recommended Buffer | Rationale |
|---|---|---|
| Shopify | 0-2 units (fixed) | Low latency (< 5s) reduces oversell risk |
| Amazon FBM | 5-10% (percentage) | 2-minute sync lag + high velocity = higher risk |
| Google Merchant | 10-15% (percentage) | 30-minute processing delay = highest risk |
6.7.3 Oversell Prevention Rules
Overselling occurs when two or more channels sell the last available units simultaneously before inventory sync propagates the decrement. The POS system uses a reserve-on-order model with first-commit-wins conflict resolution.
Core Principles:
- All channels sync from a single POS inventory source (no shadow inventory).
- When an order is received from ANY channel, the POS immediately creates a reservation (soft lock) against the inventory.
- If two channels attempt to reserve the last unit simultaneously, the first transaction to commit wins; the second receives an insufficient stock response.
- Safety buffers provide a cushion during sync propagation windows.
- During offline mode, channels receive the last-known inventory quantity; the safety buffer provides protection against stale data.
Oversell Prevention Sequence:
sequenceDiagram
autonumber
participant SHOP as Shopify Store
participant POS as POS Inventory Engine
participant AMZ as Amazon Marketplace
participant DB as Database
Note over SHOP,AMZ: Scenario: 1 unit available, 2 simultaneous orders
SHOP->>POS: Order webhook (product X, qty: 1)
AMZ->>POS: Order notification (product X, qty: 1)
POS->>DB: BEGIN TRANSACTION (Shopify order)
DB-->>POS: Lock acquired on inventory row
POS->>DB: Reserve 1 unit (qty_available: 1 → 0, qty_reserved: +1)
DB-->>POS: COMMIT SUCCESS
POS->>DB: BEGIN TRANSACTION (Amazon order)
DB-->>POS: Lock acquired on inventory row
POS->>DB: Attempt reserve 1 unit (qty_available: 0)
DB-->>POS: INSUFFICIENT STOCK - ROLLBACK
POS-->>SHOP: Order confirmed - fulfillment pending
POS-->>AMZ: Reject order - insufficient stock
POS->>SHOP: Inventory update: qty = 0
POS->>AMZ: Inventory update: qty = 0
Note over SHOP,AMZ: Amazon order auto-cancelled or backordered per tenant config
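The first-commit-wins behavior in the sequence above hinges on an atomic inventory decrement. A minimal sketch, using SQLite and a conditional `UPDATE` for illustration — a production PostgreSQL implementation would typically use `SELECT ... FOR UPDATE` row locks, as the diagram's "lock acquired" steps suggest; the `reserve` function and table layout are assumptions, not the book's schema:

```python
# Hypothetical sketch of the reserve-on-order step: an atomic conditional
# UPDATE decrements availability only if enough stock remains. The first
# order to commit wins; a second attempt on the same last unit matches
# zero rows and is rejected as insufficient stock.
import sqlite3

def reserve(conn, product_id: str, qty: int) -> bool:
    cur = conn.execute(
        """UPDATE inventory
           SET qty_available = qty_available - ?, qty_reserved = qty_reserved + ?
           WHERE product_id = ? AND qty_available >= ?""",
        (qty, qty, product_id, qty))
    conn.commit()
    return cur.rowcount == 1  # True = reservation committed; False = insufficient stock

# Demo: 1 unit available, two competing orders for the last unit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (product_id TEXT PRIMARY KEY,"
             " qty_available INT, qty_reserved INT)")
conn.execute("INSERT INTO inventory VALUES ('X', 1, 0)")
first = reserve(conn, "X", 1)   # e.g. the Shopify order: commits, wins
second = reserve(conn, "X", 1)  # e.g. the Amazon order: rejected
```

The losing order then falls through to the configured backorder policy (auto-cancel, backorder, or manual review).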
Conflict Resolution Policies:
| Scenario | Resolution | Tenant Configurable |
|---|---|---|
| Two channels sell last unit simultaneously | First-commit wins (database row lock) | No (system behavior) |
| Losing channel order | Auto-cancel with customer notification OR backorder | Yes (per channel) |
| POS sale conflicts with online order | POS sale always wins (staff has physical product) | No (system behavior) |
| Inventory goes negative (edge case) | Alert admin, freeze channel sync, require manual resolution | No (safety mechanism) |
| Stale inventory during offline mode | Safety buffer absorbs; reconcile on reconnection | Yes (buffer size) |
Backorder Policy Options (per channel, per tenant):
| Policy | Behavior | Use Case |
|---|---|---|
| AUTO_CANCEL | Automatically cancel the losing order and notify customer | Default for most retailers |
| BACKORDER | Accept the order as backorder; fulfill when stock arrives | High-value or made-to-order products |
| MANUAL_REVIEW | Hold the order for staff decision | Conservative approach |
6.7.4 Channel-Specific Inventory Rules
Each external channel has unique inventory sync behaviors, API constraints, and recommended configurations.
Amazon Inventory Rules:
| Rule | Value | Notes |
|---|---|---|
| FBA inventory tracking | Separate – Amazon manages physical stock | POS monitors FBA levels via SP-API getInventorySummaries; does not push to FBA |
| FBM inventory sync | From POS locations via Feeds API | Uses JSON_LISTINGS_FEED for inventory updates |
| FBM sync trigger | Every POS inventory event | Push via SP-API submitFeed or patchListingsItem |
| Order polling frequency | Every 2 minutes | New orders checked via getOrders SP-API endpoint |
| Safety buffer recommendation | 5-10% (percentage mode) | Higher buffer for slow-sync or high-velocity items |
| Handling time | Configurable per product (default: 2 days) | Affects customer delivery expectation |
| Multi-location support | Channel warehouse mapping | Map specific POS location(s) to Amazon FBM fulfillment |
| Throttling limits | 10 requests/sec for Feeds API | Batch updates to stay within rate limits |
| Quantity cap | Amazon shows “In Stock” for qty > 0 | Exact quantity not displayed to customers on most categories |
Amazon Inventory Sync Flow:
sequenceDiagram
autonumber
participant POS as POS System
participant ENGINE as Sync Engine
participant SP as Amazon SP-API
participant SQS as Amazon SQS
Note over POS,SQS: Outbound: POS → Amazon
POS->>ENGINE: Inventory event (product X, location A, new qty: 15)
ENGINE->>ENGINE: Calculate buffer (15 - 10% = 13)
ENGINE->>SP: patchListingsItem (sku: X, qty: 13)
SP-->>ENGINE: 200 OK
Note over POS,SQS: Inbound: Amazon → POS
SQS->>ENGINE: ANY_OFFER_CHANGED notification
ENGINE->>SP: getListingsItem (sku: X)
SP-->>ENGINE: Listing data with fulfillable_qty
ENGINE->>POS: Reconcile if discrepancy detected
Google Merchant Inventory Rules:
| Rule | Value | Notes |
|---|---|---|
| Sync scope | Local inventory per storeCode | Each POS location maps to a Google storeCode for Local Inventory Ads |
| Online inventory | Aggregated or warehouse-specific | Configurable: aggregate all locations or use designated warehouse |
| Processing delay | Up to 30 minutes after API submission | Google processes updates asynchronously |
| Safety buffer recommendation | 10-15% (percentage mode) | Higher buffer to account for processing delay |
| Availability values | in_stock, out_of_stock, preorder, backorder | Computed from POS qty and buffer rules |
| Update method | Content API localInventory.insert for local; products.insert for online | Different endpoints for local vs. online inventory |
| Update frequency | On every POS inventory event + 2x daily full sync | Event-driven + full reconciliation ensures accuracy |
| Quantity precision | Whole numbers only | Fractional quantities rounded down |
| Sale price sync | sale_price + sale_price_effective_date | Must match actual price on landing page |
Google Merchant Availability Mapping:
| POS Qty (after buffer) | Google Availability | Additional Fields |
|---|---|---|
| qty >= min_channel_qty | in_stock | quantity: actual qty |
| qty = 0 and restock date set | backorder | availability_date: restock date |
| qty = 0 and preorder flag | preorder | availability_date: release date |
| qty = 0 (default) | out_of_stock | – |
Shopify Inventory Rules:
| Rule | Value | Notes |
|---|---|---|
| Sync mode | Real-time bidirectional via webhooks | < 5s target latency |
| Granularity | Per-location | Each POS location maps 1:1 to a Shopify location |
| Location mapping | pos_location_id ↔ shopify_location_id | Configured in Integration Hub (Module 5, Section 5.16.3) |
| Webhook events | inventory_levels/update (inbound), Inventory Level Set API (outbound) | Bidirectional sync |
| Reconciliation | Every 15 minutes | Full inventory comparison per location to catch missed webhooks |
| Safety buffer | Optional (low latency reduces need) | Default: 0 for Shopify channel |
| Track inventory | MUST be enabled for all synced products | Products without inventory tracking are skipped |
| Multi-location | Supported natively | Shopify supports multiple inventory locations |
| Negative inventory | Blocked in POS; Shopify setting must match | Ensure “Allow negative inventory” is disabled in Shopify |
| Inventory policy | deny (stop selling at 0) or continue (oversell allowed) | Synced from POS channel config; deny recommended |
Shopify Location Mapping Data Model:
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants table |
| pos_location_id | UUID | Yes | FK to POS locations table |
| shopify_location_id | BigInt | Yes | Shopify location ID |
| shopify_location_name | String(100) | No | Cached Shopify location name for display |
| sync_enabled | Boolean | Yes | Whether inventory syncs for this location pair (default: true) |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
6.7.5 Sync Failure Handling
Inventory sync failures are critical because stale inventory data directly causes overselling or lost sales (showing out-of-stock when units are available). The system implements a tiered failure response with automatic escalation.
Retry Strategy:
- Attempt 1: Immediate retry after 5 seconds
- Attempt 2: Retry after 15 seconds (3x backoff)
- Attempt 3: Retry after 45 seconds (3x backoff)
- After 3 failures: Message moved to dead letter queue (DLQ)
- DLQ processor retries every 5 minutes for up to 2 hours
- After 2 hours: Admin escalation and channel freeze
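The in-line retry schedule above (initial push, then retries at 5, 15, and 45 seconds before DLQ hand-off) can be sketched as follows. The `push` callable and `dlq` list are hypothetical stand-ins for the real channel client and queue; the injectable `sleep` makes the schedule testable:

```python
# Hypothetical sketch of the retry strategy: one initial push, then three
# retries with 3x backoff (5s, 15s, 45s); after the third failed retry the
# message moves to the dead letter queue for the 5-minute DLQ processor.
import time

BACKOFF_SCHEDULE = (5, 15, 45)  # seconds before retry attempts 1-3

def sync_with_retry(push, dlq, message, sleep=time.sleep) -> bool:
    try:
        push(message)  # initial push
        return True
    except Exception:
        pass  # transient failure; fall through to backoff retries
    for delay in BACKOFF_SCHEDULE:
        sleep(delay)
        try:
            push(message)
            return True
        except Exception:
            continue  # still failing; back off further
    dlq.append(message)  # after 3 failed retries: dead letter queue
    return False
```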
Failure Escalation Timeline:
| Failure Duration | Action | Channel Effect | Admin Notification |
|---|---|---|---|
| 0-5 seconds | Automatic retry (attempt 1) | No impact | None |
| 5-15 seconds | Retry with backoff (attempt 2) | No impact | None |
| 15-45 seconds | Final retry (attempt 3) | Minimal delay | None |
| 45s - 5 minutes | Dead letter queue, auto-retry every 5 min | Slight staleness possible | None |
| 5 - 30 minutes | DLQ retries continue | Channel qty may be stale | Warning badge on Integration Hub |
| 30 min - 2 hours | Admin alert (email + in-app notification) | Channel qty is stale | Alert: “Inventory sync failing for [channel]” |
| > 2 hours | Channel freeze: set all products to out_of_stock | Products shown as unavailable | Critical alert: “Channel [X] frozen - manual intervention required” |
Sync Failure Sequence Diagram:
sequenceDiagram
autonumber
participant POS as POS System
participant ENGINE as Sync Engine
participant PLATFORM as External Platform
participant DLQ as Dead Letter Queue
participant ADMIN as Admin Dashboard
POS->>ENGINE: Inventory change event
ENGINE->>PLATFORM: Push inventory update (attempt 1)
PLATFORM-->>ENGINE: ERROR (timeout/5xx)
Note over ENGINE: Wait 5 seconds
ENGINE->>PLATFORM: Retry (attempt 2)
PLATFORM-->>ENGINE: ERROR (timeout/5xx)
Note over ENGINE: Wait 15 seconds
ENGINE->>PLATFORM: Retry (attempt 3)
PLATFORM-->>ENGINE: ERROR (timeout/5xx)
ENGINE->>DLQ: Move to dead letter queue
ENGINE->>ENGINE: Log failure in sync_log
loop Every 5 minutes for up to 2 hours
DLQ->>ENGINE: Dequeue message
ENGINE->>PLATFORM: Retry sync
alt Success
PLATFORM-->>ENGINE: 200 OK
ENGINE->>POS: Update sync_status = SYNCED
else Still failing
PLATFORM-->>ENGINE: ERROR
ENGINE->>DLQ: Re-queue message
end
end
Note over ENGINE: 30 minutes elapsed
ENGINE->>ADMIN: Warning: Sync failing for 30+ minutes
Note over ENGINE: 2 hours elapsed
ENGINE->>ADMIN: CRITICAL: Channel frozen
ENGINE->>PLATFORM: Set all products to out_of_stock
ENGINE->>POS: Update sync_status = FROZEN for all products on channel
Failure Types and Handling:
| Failure Type | Detection | Recovery | Auto-Resolve |
|---|---|---|---|
| Network timeout | HTTP timeout (30s) | Retry with backoff | Yes (transient) |
| Rate limiting (429) | HTTP 429 response | Exponential backoff, respect Retry-After header | Yes |
| Authentication expired | HTTP 401/403 | Refresh OAuth token; if fails, alert admin | Partial (token refresh is automatic) |
| Platform outage | HTTP 5xx repeated | DLQ + escalation timeline | Yes (when platform recovers) |
| Invalid data (400) | HTTP 400 with error details | Log error, skip item, alert admin | No (requires data fix) |
| Webhook delivery failure | No acknowledgment from POS | Platform retries (Shopify: up to 19 times over 48h) | Yes (platform-managed retry) |
Manual Recovery Tools:
| Tool | Location | Function |
|---|---|---|
| Resync Single Product | Product Detail > Integration Tab | Push current POS inventory to all channels for one product |
| Resync All Products | Integration Hub > Channel Card | Full inventory push to one channel |
| Unfreeze Channel | Integration Hub > Channel Card | Remove out_of_stock freeze and resume normal sync |
| View Failed Syncs | Integration Hub > Sync Log | Filter by status = FAILED, view error details, retry individually |
| Force Reconciliation | Integration Hub > Channel Card | Trigger immediate full reconciliation outside normal schedule |
Sync Health Monitoring Dashboard Metrics:
| Metric | Description | Alert Threshold |
|---|---|---|
| Sync success rate (1h) | Percentage of successful syncs in last hour | < 95% = Warning, < 80% = Critical |
| Average sync latency | Mean time from POS event to platform confirmation | > 2x target latency = Warning |
| DLQ depth | Number of messages in dead letter queue | > 50 = Warning, > 200 = Critical |
| Reconciliation discrepancies | Count of qty mismatches found in last reconciliation | > 10 = Warning, > 50 = Critical |
| Time since last successful sync | Elapsed time since last confirmed sync per channel | > 15 min (Shopify), > 30 min (Amazon), > 2h (Google) = Warning |
Business Rules:
- Sync failures for a single product do not affect other products on the same channel.
- Channel freeze (out_of_stock) is a safety mechanism, not a punitive one. It prevents overselling during extended outages.
- Unfreezing a channel triggers an immediate full reconciliation to ensure accuracy before resuming normal sync.
- All sync failures are logged in the Integration Sync Log (Module 5, Section 5.16.4) with full error details for troubleshooting.
- Tenants can configure per-channel freeze thresholds (override the default 2-hour threshold) in the Integration Hub settings.
6.7.6 Reports: Cross-Platform Inventory Sync
| Report | Purpose | Key Data Fields |
|---|---|---|
| Sync Health Summary | Overall sync performance across all channels | Channel, success rate %, avg latency, DLQ depth, last successful sync, status |
| Oversell Incident Report | Track instances where overselling occurred despite safeguards | Date, channel, product, POS qty at time, channel qty at time, order details, root cause |
| Safety Buffer Effectiveness | Analyze whether buffers are appropriately sized | Channel, buffer setting, oversell incidents, missed sales (buffer too high), recommendation |
| Sync Failure Analysis | Detailed breakdown of sync failures by type and channel | Channel, failure type, count, avg resolution time, auto-resolved %, manual intervention count |
| Inventory Discrepancy Log | Reconciliation findings showing POS vs. channel qty differences | Product, channel, POS qty, channel qty, discrepancy, reconciliation action, timestamp |
| Channel Freeze History | Track channel freeze events and their impact | Channel, freeze start, freeze end, duration, products affected, estimated lost revenue |
End of Module 6: Integrations – Sections 6.6 and 6.7
6.8 Payment Processor Integration
Scope: Consolidation of payment integration architecture, terminal communication, processor configuration, failure handling, and batch settlement. This section brings together payment-related content previously distributed across Module 1 (Sections 1.18.1-1.18.3) and Module 5 (Section 5.11.3) into a unified integration specification.
Cross-Reference: The payment flow sequence diagram (card tap/insert, refund via token) remains in Module 1, Section 1.18. Payment method registry (CASH, CREDIT_CARD, GIFT_CARD, etc.) and per-location payment configuration remain in Module 5, Section 5.11.1-5.11.2.
6.8.1 SAQ-A Architecture
The POS system uses a semi-integrated payment architecture that achieves PCI SAQ-A compliance – the simplest and least burdensome PCI self-assessment level. In this architecture, card data never enters or traverses the POS application. The payment terminal communicates directly with the payment processor, and only non-sensitive tokens and metadata flow back to the POS system.
Data the POS System Stores (Safe)
| Field | Type | Example | Purpose |
|---|---|---|---|
| transaction_id | UUID | a1b2c3d4-... | Internal payment reference |
| payment_token | String(255) | tok_1Nq... | Processor-issued token for refunds and voids |
| approval_code | String(20) | AUTH4829 | Authorization code from issuing bank |
| masked_card_number | String(10) | ****1234 | Last 4 digits only |
| card_brand | Enum | VISA | Visa, Mastercard, Amex, Discover |
| entry_method | Enum | TAP | CHIP, TAP, SWIPE, MANUAL |
| terminal_id | String(50) | TRM-001-GM | Which physical terminal processed the payment |
| timestamp | DateTime | 2026-02-17T14:32:00Z | When the authorization was obtained |
| amount | Decimal(10,2) | 45.00 | Authorized or settled amount |
PCI Prohibited Data (NEVER Stored)
| Data Element | PCI Requirement | Risk if Stored |
|---|---|---|
| Full PAN (16-digit card number) | Prohibited under SAQ-A | Immediate PCI non-compliance; liability for fraud |
| CVV / CVC (3-4 digit security code) | Prohibited post-authorization | Stored CVV is grounds for PCI ban by acquirer |
| Track 1 / Track 2 data (magnetic stripe) | Prohibited post-authorization | Enables card cloning |
| PIN block (encrypted PIN) | Prohibited post-authorization | Enables unauthorized transactions |
| Raw EMV data (chip cryptogram) | Prohibited post-authorization | Enables replay attacks |
Architecture Principle: The POS backend sends the payment amount and order reference to the terminal. The terminal handles all card interaction. The processor returns a token. The POS stores only the token. This ensures the POS application, its database, and its network are entirely out of PCI cardholder data scope.
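One practical safeguard for keeping the POS out of cardholder data scope is a defensive check on the record before it is persisted. A hypothetical sketch, assuming a simplified record shape — the `PaymentRecord` dataclass and the 13-19 digit PAN heuristic are illustrative, not part of the specified data model:

```python
# Hypothetical sketch of the SAQ-A storage rule: the payment record holds
# only the processor token and non-sensitive metadata, and a guard rejects
# anything that looks like a full PAN before it reaches the database.
import re
from dataclasses import dataclass

PAN_PATTERN = re.compile(r"\b\d{13,19}\b")  # 13-19 consecutive digits

@dataclass
class PaymentRecord:
    payment_token: str       # processor token, e.g. "tok_1Nq..."
    approval_code: str       # e.g. "AUTH4829"
    masked_card_number: str  # last 4 only, e.g. "****1234"
    card_brand: str          # e.g. "VISA"
    amount: str              # e.g. "45.00"

    def __post_init__(self):
        for name, value in vars(self).items():
            if PAN_PATTERN.search(value):
                raise ValueError(f"possible full PAN in field {name}; refusing to store")
```

The guard is a belt-and-suspenders check: in the semi-integrated architecture the PAN should never reach the application at all.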
6.8.2 Terminal Communication Protocol
terminal_integration:
# Communication method
protocol: "cloud_api" # Terminal vendor's cloud service
# Timeout settings
payment_timeout_seconds: 60
connection_timeout_seconds: 10
# Terminal state machine
states:
- IDLE: "Ready for transaction"
- WAITING_FOR_CARD: "Display amount, await tap/insert"
- PROCESSING: "Communicating with processor"
- APPROVED: "Transaction successful"
- DECLINED: "Transaction declined"
- ERROR: "Communication or hardware error"
- CANCELLED: "Customer or staff cancelled"
# Error handling
on_timeout: "prompt_retry_or_cancel"
on_decline: "display_reason_allow_retry"
on_error: "log_and_alert_manager"
# Void window (same-day before batch)
same_day_void: true
batch_close_time: "23:00" # Auto-batch at 11 PM
Terminal State Machine
stateDiagram-v2
[*] --> IDLE: Terminal powered on
IDLE --> WAITING_FOR_CARD: Payment request received
WAITING_FOR_CARD --> PROCESSING: Card presented
WAITING_FOR_CARD --> CANCELLED: Staff/customer cancels
WAITING_FOR_CARD --> IDLE: Timeout (60s)
PROCESSING --> APPROVED: Processor approves
PROCESSING --> DECLINED: Processor declines
PROCESSING --> ERROR: Communication failure
APPROVED --> IDLE: Transaction complete
DECLINED --> IDLE: Staff acknowledges
CANCELLED --> IDLE: Reset terminal
ERROR --> IDLE: Staff acknowledges
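The state diagram above can be enforced with a transition table so that illegal state changes fail loudly. A minimal sketch — the event names (`payment_request`, `card_presented`, etc.) are illustrative labels for the diagram's edges:

```python
# Hypothetical sketch of the terminal state machine: a transition table
# mirroring the diagram above; any transition not in the table is rejected.
ALLOWED = {
    ("IDLE", "payment_request"): "WAITING_FOR_CARD",
    ("WAITING_FOR_CARD", "card_presented"): "PROCESSING",
    ("WAITING_FOR_CARD", "cancel"): "CANCELLED",
    ("WAITING_FOR_CARD", "timeout"): "IDLE",        # 60s payment timeout
    ("PROCESSING", "approved"): "APPROVED",
    ("PROCESSING", "declined"): "DECLINED",
    ("PROCESSING", "comm_failure"): "ERROR",
    ("APPROVED", "complete"): "IDLE",
    ("DECLINED", "acknowledge"): "IDLE",
    ("CANCELLED", "reset"): "IDLE",
    ("ERROR", "acknowledge"): "IDLE",
}

class Terminal:
    def __init__(self):
        self.state = "IDLE"  # terminal powered on

    def handle(self, event: str) -> str:
        key = (self.state, event)
        if key not in ALLOWED:
            raise ValueError(f"illegal transition: {event!r} in state {self.state}")
        self.state = ALLOWED[key]
        return self.state
```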
6.8.3 Processor Configuration (from 5.11.3)
External payment processors handle card transactions and third-party financing. Each processor is configured with encrypted credentials, terminal mappings, and environment settings.
Payment Processor Data Model
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| tenant_id | UUID | Yes | FK to tenants table – owning tenant |
| name | String(100) | Yes | Processor display name (e.g., “CardConnect”, “Square”, “Affirm”) |
| processor_type | Enum | Yes | CARD, FINANCING |
| api_key | String(500) | Yes | Encrypted API key or token (AES-256 encrypted at rest) |
| api_secret | String(500) | No | Encrypted API secret (for processors requiring key + secret) |
| merchant_id | String(100) | Yes | Merchant account identifier with the processor |
| webhook_url | String(500) | No | URL for processor to send asynchronous notifications (refund confirmations, chargebacks) |
| test_mode | Boolean | Yes | true for sandbox/test environment, false for production (default: true) |
| is_active | Boolean | Yes | Soft-delete flag (default: true) |
| config_json | JSON | No | Processor-specific configuration (e.g., Affirm min/max order amounts, supported card brands) |
| created_at | DateTime | Yes | Record creation timestamp |
| updated_at | DateTime | Yes | Last modification timestamp |
Processor Terminal Mapping
Each payment terminal (physical card reader) is associated with a processor and a location.
| Field | Type | Required | Description |
|---|---|---|---|
| id | UUID | Yes | Primary key |
| processor_id | UUID | Yes | FK to payment_processors table |
| terminal_id | String(50) | Yes | Terminal serial number or identifier assigned by the processor |
| ip_address | String(45) | No | Local IP address of the terminal (for IP-connected terminals) |
| port | Integer | No | Port number for terminal communication (default: 9000) |
| location_id | UUID | Yes | FK to locations table – physical location of this terminal |
| is_active | Boolean | Yes | Soft-delete flag (default: true) |
| created_at | DateTime | Yes | Record creation timestamp |
Business Rules:
- A tenant may have multiple processors of the same type (e.g., one CARD processor for in-store terminals and another for e-commerce).
- `test_mode` allows the administrator to toggle between sandbox and production without changing API keys (processors typically issue separate sandbox and production keys).
- Credentials (`api_key`, `api_secret`) are encrypted at rest using AES-256 and never returned in API responses. The Admin Portal displays only a masked preview (e.g., `sk_live_****ABCD`).
- Before activating a processor (`test_mode = false`), the system performs a validation handshake with the processor API to confirm credentials are valid.
6.8.4 Failure Handling
sequenceDiagram
autonumber
participant U as Staff
participant UI as POS UI
participant API as Backend
participant TERM as Payment Terminal
participant PROC as Processor
U->>UI: Click "Pay by Card"
UI->>API: POST /payments/initiate
API->>TERM: Send Payment Request
alt Terminal Timeout (60s)
TERM--xAPI: No Response
API-->>UI: "Terminal not responding"
UI-->>U: Options: Retry | Different Terminal | Cash | Cancel
alt Staff selects Retry
U->>UI: Click "Retry"
UI->>API: POST /payments/initiate (same order)
API->>TERM: Send Payment Request (attempt 2)
else Staff selects Different Terminal
U->>UI: Select alternate terminal
UI->>API: POST /payments/initiate (new terminal_id)
API->>TERM: Send to alternate terminal
else Staff selects Cash
U->>UI: Switch to Cash payment
Note right of UI: Proceeds to cash drawer flow
else Staff selects Cancel
U->>UI: Return to cart
end
else Card Declined
TERM->>PROC: Card data (encrypted)
PROC-->>TERM: DECLINED (reason: Insufficient Funds)
TERM-->>API: Decline response
API-->>UI: "Card Declined: Insufficient Funds"
UI-->>U: Options: Try Another Card | Cash | Cancel
API->>API: Log decline (reason, terminal, timestamp)
else Terminal Hardware Error
TERM-->>API: ERROR (Hardware Issue)
API-->>UI: "Terminal Error"
UI-->>U: Options: Different Terminal | Cash | Cancel
API->>API: Log error, create manager alert
Note right of API: Alert sent to on-duty manager
end
Failure Logging:
- All payment failures are recorded in the `payment_attempts` log with: `order_id`, `terminal_id`, `attempt_number`, `failure_type` (TIMEOUT, DECLINED, ERROR), `decline_reason`, `timestamp`.
- Consecutive failures on the same terminal (3+ within 15 minutes) trigger a terminal health alert visible on the Integration Health Dashboard.
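The consecutive-failure alert can be implemented with a sliding window over failure timestamps. A hypothetical sketch — `TerminalHealth` and its method names are illustrative; timestamps are plain epoch seconds for simplicity:

```python
# Hypothetical sketch of the terminal health check: count failures per
# terminal within a 15-minute sliding window and flag 3 or more.
from collections import defaultdict

WINDOW_SECONDS = 15 * 60
FAILURE_THRESHOLD = 3

class TerminalHealth:
    def __init__(self):
        self._failures = defaultdict(list)  # terminal_id -> failure timestamps

    def record_failure(self, terminal_id: str, ts: float) -> bool:
        """Record a failure; return True if a health alert should fire."""
        # Keep only failures still inside the 15-minute window, then add this one.
        window = [t for t in self._failures[terminal_id] if ts - t < WINDOW_SECONDS]
        window.append(ts)
        self._failures[terminal_id] = window
        return len(window) >= FAILURE_THRESHOLD
```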
6.8.5 Batch Settlement
Batch settlement closes the day’s card transactions and initiates fund transfer from the payment processor to the merchant’s bank account.
Settlement Schedule:
- Auto-batch executes at a configurable time (default: 23:00 local time per location timezone).
- Manual batch close is available to managers via the Admin Portal.
- Batch close is per-processor, per-location (each location settles its own terminals).
Settlement Process:
| Step | Action | System Behavior |
|---|---|---|
| 1 | Trigger batch close | System sends batch close command to processor API |
| 2 | Processor responds | Processor returns batch summary (transaction count, total amount) |
| 3 | Reconciliation | POS compares processor batch total against local transaction records |
| 4 | Variance detection | If totals differ by more than $0.01, a reconciliation alert is created |
| 5 | Settlement report | Daily settlement report generated and available in Reports module |
| 6 | Fund transfer | Processor initiates ACH/wire to merchant bank (1-3 business days) |
Reconciliation Rules:
- POS-side total is calculated as: SUM(authorized amounts) - SUM(voided amounts) - SUM(refunded amounts) for the batch period.
- Processor-side total is received via the batch close API response.
- Variance <= $0.01 is auto-accepted (rounding tolerance).
- Variance > $0.01 generates a RECONCILIATION_VARIANCE alert and requires manager review.
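The reconciliation rules above can be sketched as follows. The function names and record shapes are illustrative assumptions, not the platform's actual API; only the total formula and the $0.01 tolerance come from this section.

```python
from decimal import Decimal

VARIANCE_TOLERANCE = Decimal("0.01")  # auto-accept rounding differences up to one cent

def pos_batch_total(authorized, voided, refunded):
    """POS-side total: SUM(authorized) - SUM(voided) - SUM(refunded)."""
    return (sum(authorized, Decimal("0"))
            - sum(voided, Decimal("0"))
            - sum(refunded, Decimal("0")))

def reconcile_batch(authorized, voided, refunded, processor_total):
    """Compare the POS total against the processor's batch-close total.

    Returns (status, variance): "SETTLED" when the variance is within
    tolerance, otherwise "VARIANCE", which should raise a
    RECONCILIATION_VARIANCE alert for manager review.
    """
    pos_total = pos_batch_total(authorized, voided, refunded)
    variance = pos_total - Decimal(processor_total)
    if abs(variance) <= VARIANCE_TOLERANCE:
        return "SETTLED", variance
    return "VARIANCE", variance
```

For example, a batch with $100.00 and $25.50 authorizations and a $10.00 refund reconciles cleanly against a processor total of $115.50.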
6.8.6 Reports: Payment Integration
| Report | Purpose | Key Data Fields |
|---|---|---|
| Payment Terminal Performance | Monitor terminal health and throughput | Terminal ID, transaction count, avg response time (ms), error rate (%), decline rate (%), uptime (%) |
| Decline Rate Report | Analyze payment failure patterns | Decline reason, frequency, terminal ID, time of day, retry success rate (%), card brand breakdown |
| Batch Settlement Report | Daily batch close summary and reconciliation | Batch date, transaction count, total amount, processor total, variance, settlement status (SETTLED / PENDING / VARIANCE) |
| Chargeback Tracking | Monitor disputes and chargebacks | Chargeback ID, original transaction, amount, reason code, deadline, status (OPEN / WON / LOST) |
6.9 Email Provider Integration
Scope: Consolidation of email provider configuration for all outbound transactional and notification emails. This section consolidates the provider setup previously documented in Module 5, Section 5.15.1.
Cross-Reference: Email template registry, template data model, merge field definitions, and per-template enablement controls remain in Module 5, Section 5.15.2-5.15.4. See Module 2, Section 2.9 for customer communication preferences.
6.9.1 Provider Configuration (from 5.15.1)
Each tenant configures exactly one email provider. The provider handles all outbound transactional and notification emails for the tenant. Three provider types are supported.
Email Config Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
provider_type | Enum | Yes | SMTP, SENDGRID, MAILGUN |
smtp_host | String(255) | Conditional | SMTP server hostname (required when provider_type = SMTP) |
smtp_port | Integer | Conditional | SMTP server port: 587 (TLS) or 465 (SSL) (required when provider_type = SMTP) |
smtp_username | String(255) | Conditional | SMTP authentication username (required when provider_type = SMTP) |
smtp_password_encrypted | String(500) | Conditional | AES-256 encrypted SMTP password (required when provider_type = SMTP) |
api_key_encrypted | String(500) | Conditional | AES-256 encrypted API key (required when provider_type = SENDGRID or MAILGUN) |
api_region | Enum | No | US, EU – Mailgun region (default: US) |
sender_email | String(255) | Yes | “From” address for all outbound emails (e.g., noreply@nexusclothing.com) |
sender_name | String(100) | Yes | “From” display name (e.g., “Nexus Clothing”) |
reply_to_email | String(255) | No | “Reply-To” address; defaults to sender_email if NULL |
daily_send_limit | Integer | No | Maximum emails per 24-hour period (0 = unlimited; default: 0) |
is_verified | Boolean | Yes | Whether the configuration has been verified via test email (default: false) |
verified_at | DateTime | No | Timestamp of last successful verification |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Business Rules:
- Only one active email configuration per tenant. Updating the provider replaces the previous configuration; the old record is soft-deleted for audit.
- The is_verified flag is set to true only after a successful test email delivery. The Admin Portal displays a warning banner if the configuration is unverified.
- All sensitive fields (smtp_password_encrypted, api_key_encrypted) are encrypted at rest using AES-256. They are never returned in API GET responses; only a masked indicator (e.g., ********) is shown.
- When provider_type = SMTP, the system attempts a TLS handshake during verification. If TLS fails, verification fails with a specific error message.
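The masking rule for sensitive fields can be sketched as a response serializer. The field names come from the data model above; the function itself and the dict-based shape are illustrative assumptions.

```python
SENSITIVE_FIELDS = {"smtp_password_encrypted", "api_key_encrypted"}
MASK = "********"

def to_api_response(email_config: dict) -> dict:
    """Serialize an email config for an API GET response.

    Sensitive fields are never echoed back: a populated secret is replaced
    with a masked indicator so the Admin Portal can show that a credential
    is configured without revealing the stored value.
    """
    response = {}
    for key, value in email_config.items():
        if key in SENSITIVE_FIELDS:
            response[key] = MASK if value else None
        else:
            response[key] = value
    return response
```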
6.9.2 Delivery Monitoring
The system tracks delivery outcomes for all outbound emails to detect provider issues early and ensure transactional emails reach their intended recipients.
Delivery Status Tracking
| Status | Description | Source |
|---|---|---|
QUEUED | Email accepted by POS, awaiting provider submission | Internal |
SENT | Email accepted by provider for delivery | Provider API response |
DELIVERED | Email delivered to recipient inbox | Provider webhook |
BOUNCED | Email rejected by recipient mail server | Provider webhook |
SOFT_BOUNCE | Temporary delivery failure (mailbox full, server down) | Provider webhook |
SPAM_COMPLAINT | Recipient marked email as spam | Provider webhook |
FAILED | Provider rejected email (invalid sender, quota exceeded) | Provider API response |
Delivery Log Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
template_code | String(50) | Yes | Email template code (e.g., TMPL-REFUND-CONFIRMATION) |
recipient_email | String(255) | Yes | Recipient email address |
status | Enum | Yes | Delivery status from table above |
provider_message_id | String(255) | No | Message ID returned by provider (for tracking) |
error_message | String(500) | No | Error details for failed/bounced emails |
sent_at | DateTime | Yes | Timestamp when email was submitted to provider |
delivered_at | DateTime | No | Timestamp when delivery was confirmed |
created_at | DateTime | Yes | Record creation timestamp |
Provider Webhook Integration:
| Provider | Webhook Mechanism | Endpoint | Events Tracked |
|---|---|---|---|
| SendGrid | Event Webhook | /api/webhooks/sendgrid/events | delivered, bounced, dropped, spam_report |
| Mailgun | Events API / Webhooks | /api/webhooks/mailgun/events | delivered, failed, complained |
| SMTP | None (poll-based) | N/A | Delivery inferred from absence of bounce within 24 hours |
Monitoring Alerts:
- Bounce rate exceeding 5% in a rolling 24-hour window triggers a dashboard alert to tenant administrators.
- Three consecutive delivery failures to the same recipient suppress further emails to that address until an administrator reviews and clears the suppression.
- Daily send limit approaching 80% capacity triggers a warning notification.
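The bounce-rate alert above can be sketched as a rolling-window calculation over the delivery log. The 5% threshold and the status values come from this section; the tuple-based log shape is an assumption for illustration.

```python
from datetime import datetime, timedelta

BOUNCE_STATUSES = {"BOUNCED", "SOFT_BOUNCE"}
BOUNCE_RATE_ALERT_PCT = 5.0  # matches bounce_rate_alert_threshold_pct

def bounce_rate_alert(delivery_log, now):
    """Return (rate_pct, alert) for a rolling 24-hour window.

    delivery_log is an iterable of (sent_at, status) tuples; the alert
    fires when the bounce rate in the window exceeds 5%.
    """
    window_start = now - timedelta(hours=24)
    recent = [status for sent_at, status in delivery_log if sent_at >= window_start]
    if not recent:
        return 0.0, False
    bounces = sum(1 for status in recent if status in BOUNCE_STATUSES)
    rate = 100.0 * bounces / len(recent)
    return rate, rate > BOUNCE_RATE_ALERT_PCT
```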
6.10 Carrier & Shipping Integration
Scope: Abstract carrier interface, supported carrier configuration, and shipping data model for ship-to-customer fulfillment. This section defines the integration framework for rate lookup, label generation, address validation, and shipment tracking.
Cross-Reference: The ship-to-customer sales workflow, store assignment logic, and pick-pack-ship process remain in Module 1, Section 1.7.3. See Module 4, Section 4.14 for online order fulfillment inventory logic.
6.10.1 Abstract Carrier Interface
All carrier integrations implement a common interface that abstracts provider-specific API differences. This enables the POS system to support multiple carriers without changes to business logic.
classDiagram
class ICarrierProvider {
<<interface>>
+GetRates(origin, destination, package) ShippingRate[]
+CreateLabel(shipment) Label
+GetTracking(trackingNumber) TrackingStatus
+CancelShipment(shipmentId) bool
+ValidateAddress(address) AddressValidation
}
class UPSProvider {
+GetRates()
+CreateLabel()
+GetTracking()
+CancelShipment()
+ValidateAddress()
}
class FedExProvider {
+GetRates()
+CreateLabel()
+GetTracking()
+CancelShipment()
+ValidateAddress()
}
class USPSProvider {
+GetRates()
+CreateLabel()
+GetTracking()
+CancelShipment()
+ValidateAddress()
}
class AmazonShippingProvider {
+GetRates()
+CreateLabel()
+GetTracking()
+CancelShipment()
+ValidateAddress()
}
ICarrierProvider <|.. UPSProvider
ICarrierProvider <|.. FedExProvider
ICarrierProvider <|.. USPSProvider
ICarrierProvider <|.. AmazonShippingProvider
Interface Methods:
| Method | Input | Output | Description |
|---|---|---|---|
GetRates | Origin address, destination address, package dimensions/weight | ShippingRate[] (service level, cost, estimated days) | Returns available shipping rates for the given package |
CreateLabel | Shipment details (addresses, package, service level) | Label (label URL, tracking number, cost) | Generates a shipping label and returns tracking number |
GetTracking | Tracking number | TrackingStatus (status, location, events[]) | Returns current shipment tracking status and history |
CancelShipment | Shipment ID | bool (success/failure) | Cancels a shipment and voids the label |
ValidateAddress | Address (street, city, state, zip, country) | AddressValidation (is_valid, suggested_address, corrections[]) | Validates and standardizes a shipping address |
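In Python terms, the interface above could be expressed as an abstract base class. The method names mirror the class diagram and table; the concrete parameter and return types are simplified assumptions, since the Blueprint does not prescribe an implementation language here.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ShippingRate:
    service_level: str
    cost: float
    estimated_days: int

class ICarrierProvider(ABC):
    """Common carrier abstraction; business logic depends only on this."""

    @abstractmethod
    def get_rates(self, origin, destination, package) -> list[ShippingRate]: ...

    @abstractmethod
    def create_label(self, shipment) -> dict: ...

    @abstractmethod
    def get_tracking(self, tracking_number) -> dict: ...

    @abstractmethod
    def cancel_shipment(self, shipment_id) -> bool: ...

    @abstractmethod
    def validate_address(self, address) -> dict: ...

def cheapest_rate(provider: ICarrierProvider, origin, destination, package) -> ShippingRate:
    """Carrier-agnostic business logic: pick the lowest-cost rate."""
    return min(provider.get_rates(origin, destination, package), key=lambda r: r.cost)
```

Because `cheapest_rate` depends only on the interface, swapping UPS for FedEx (or adding Amazon Buy Shipping later) requires no change to business logic.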
6.10.2 Supported Carriers (Future)
All carrier integrations are planned for v2.0. The integration framework is designed to support the following carriers.
| Carrier | API | Rate Lookup | Label Generation | Tracking | Update Method | Status |
|---|---|---|---|---|---|---|
| UPS | UPS Developer API | Yes | Yes | Yes | Webhook | Planned v2.0 |
| FedEx | FedEx API | Yes | Yes | Yes | Webhook | Planned v2.0 |
| USPS | USPS Web Tools | Yes | Yes | Yes | Poll (hourly) | Planned v2.0 |
| Amazon Buy Shipping | SP-API Shipping | Yes | Yes | Yes | Push (SQS) | Planned v2.0 |
Carrier Configuration Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
carrier_code | Enum | Yes | UPS, FEDEX, USPS, AMAZON_SHIPPING |
display_name | String(100) | Yes | Admin-friendly name (e.g., “UPS Ground”) |
api_key_encrypted | String(500) | Yes | AES-256 encrypted API credentials |
api_secret_encrypted | String(500) | No | AES-256 encrypted API secret |
account_number | String(50) | Yes | Carrier account number |
is_active | Boolean | Yes | Whether this carrier is available for label generation (default: false) |
default_service_level | String(50) | No | Default shipping service (e.g., GROUND, EXPRESS, PRIORITY) |
config_json | JSON | No | Carrier-specific settings (e.g., insurance defaults, signature requirements) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
6.10.3 Shipping Data Model
Shipment Table
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table |
order_id | UUID | Yes | FK to orders table – the sales order being shipped |
carrier_code | Enum | Yes | UPS, FEDEX, USPS, AMAZON_SHIPPING |
service_level | String(50) | Yes | Service level selected (e.g., GROUND, 2DAY, OVERNIGHT) |
tracking_number | String(100) | No | Carrier tracking number (populated after label creation) |
label_url | String(500) | No | URL to printable shipping label (populated after label creation) |
ship_date | Date | No | Actual ship date (populated when carrier scans package) |
estimated_delivery | Date | No | Carrier-estimated delivery date |
actual_delivery | DateTime | No | Confirmed delivery date and time |
shipping_cost | Decimal(10,2) | Yes | Cost charged to customer for shipping |
carrier_cost | Decimal(10,2) | No | Actual cost charged by carrier (for margin reporting) |
insurance_amount | Decimal(10,2) | No | Declared value for insurance (default: 0.00) |
weight_oz | Decimal(8,2) | Yes | Package weight in ounces |
dimensions_json | JSON | No | Package dimensions: { "length": 12, "width": 8, "height": 4, "unit": "in" } |
ship_from_location_id | UUID | Yes | FK to locations table – store fulfilling the shipment |
ship_to_address_json | JSON | Yes | Destination address: { "name", "street1", "street2", "city", "state", "zip", "country" } |
status | Enum | Yes | Shipment lifecycle status (see table below) |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Shipment Status Values
| Status | Description | Trigger |
|---|---|---|
PENDING | Shipment record created, no label yet | Order marked for shipping |
LABEL_CREATED | Shipping label generated | CreateLabel API call succeeds |
SHIPPED | Package handed to carrier | Carrier first scan event |
IN_TRANSIT | Package moving through carrier network | Carrier tracking update |
OUT_FOR_DELIVERY | Package on delivery vehicle | Carrier tracking update |
DELIVERED | Package delivered to recipient | Carrier delivery confirmation |
EXCEPTION | Delivery issue (weather, address error, refused) | Carrier exception event |
RETURNED | Package returned to sender | Carrier return scan |
CANCELLED | Shipment cancelled before carrier pickup | CancelShipment API call |
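The lifecycle above implies a set of legal status transitions. The sketch below enforces them before persisting a carrier event; the exact transition map is inferred from the trigger column and is an illustrative assumption, not a normative rule from this section.

```python
# Allowed shipment status transitions, inferred from the trigger column
# above; the precise map is an assumption for illustration.
SHIPMENT_TRANSITIONS = {
    "PENDING": {"LABEL_CREATED", "CANCELLED"},
    "LABEL_CREATED": {"SHIPPED", "CANCELLED"},
    "SHIPPED": {"IN_TRANSIT", "EXCEPTION", "DELIVERED"},
    "IN_TRANSIT": {"OUT_FOR_DELIVERY", "EXCEPTION", "DELIVERED", "RETURNED"},
    "OUT_FOR_DELIVERY": {"DELIVERED", "EXCEPTION"},
    "EXCEPTION": {"IN_TRANSIT", "OUT_FOR_DELIVERY", "DELIVERED", "RETURNED"},
    "DELIVERED": set(),
    "RETURNED": set(),
    "CANCELLED": set(),
}

def apply_tracking_event(current: str, new: str) -> str:
    """Validate a carrier tracking update before persisting it."""
    if new not in SHIPMENT_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal shipment transition: {current} -> {new}")
    return new
```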
6.11 Integration Hub (Enhanced)
Scope: Central configuration and health monitoring for all external system integrations. This section enhances the Integration Hub defined in Module 5, Section 5.16 by expanding the integration registry to include Amazon SP-API, Google Merchant Center, and shipping carriers alongside the existing Shopify, payment processor, and email provider integrations.
Cross-Reference: See Module 5, Section 5.16 for the base Integration Hub definition, credential storage, and Shopify-specific configuration. See Module 3, Section 3.7 for Shopify product sync logic. See Module 4, Section 4.14 for Shopify inventory sync.
6.11.1 Integration Registry (Enhanced)
The integration_type enum is expanded to include all platform integrations defined in Module 6.
Enhanced integration_type Enum
| integration_type Enum Value | Description | Status |
|---|---|---|
SHOPIFY | Shopify e-commerce platform (product, inventory, order sync) | v1.0 |
AMAZON | Amazon Selling Partner API (listings, orders, FBA/FBM) | Planned v1.5 |
GOOGLE_MERCHANT | Google Merchant Center (product feeds, local inventory) | Planned v1.5 |
PAYMENT_PROCESSOR | Card and financing processor (authorization, settlement) | v1.0 |
EMAIL_PROVIDER | Email delivery service (SMTP, SendGrid, Mailgun) | v1.0 |
SHIPPING_CARRIER | Carrier rate lookup, label generation, tracking | Planned v2.0 |
ACCOUNTING | QuickBooks Online, Xero (journal entries, invoice sync) | Planned v2.0 |
CUSTOM | Custom webhook integration (tenant-defined endpoints) | Planned v2.0 |
Integration Data Model (Enhanced)
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
tenant_id | UUID | Yes | FK to tenants table – owning tenant |
integration_type | Enum | Yes | Enhanced enum from table above |
provider_name | String(100) | Yes | Provider identifier (e.g., “Shopify”, “Amazon SP-API”, “Google Merchant Center”) |
display_name | String(100) | Yes | Admin-friendly name (e.g., “Nexus Clothing Shopify Store”, “Nexus Amazon US”) |
status | Enum | Yes | CONNECTED, DISCONNECTED, ERROR, NOT_CONFIGURED, RATE_LIMITED |
is_enabled | Boolean | Yes | Whether the integration is actively processing (default: false until verified) |
last_sync_at | DateTime | No | Timestamp of the most recent successful sync operation |
last_error_at | DateTime | No | Timestamp of the most recent error |
last_error_message | String(500) | No | Human-readable error description |
error_count_24h | Integer | Yes | Rolling count of errors in the past 24 hours (default: 0) |
sync_latency_ms | Integer | No | Average sync latency in milliseconds over the last hour |
api_version | String(20) | No | Current API version in use (e.g., “2024-01”, “2022-04-01”) |
rate_limit_remaining | Integer | No | Remaining API calls before rate limit is hit |
rate_limit_reset_at | DateTime | No | Timestamp when rate limit bucket resets |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
New vs. the Module 5 base model: the RATE_LIMITED status value and the api_version, rate_limit_remaining, and rate_limit_reset_at fields.
6.11.2 Credentials Storage
Credentials for each integration are stored separately with encryption at rest. This model is identical to the one defined in Module 5, Section 5.16.2.
Integration Credentials Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
integration_id | UUID | Yes | FK to integrations table |
credential_key | String(100) | Yes | Credential identifier (e.g., api_key, api_secret, access_token, refresh_token, shop_url, merchant_id, seller_id) |
credential_value_encrypted | String(1000) | Yes | AES-256 encrypted credential value. Never returned in API responses. |
expires_at | DateTime | No | Credential expiry (e.g., OAuth token, SP-API refresh token). NULL for non-expiring credentials. |
created_at | DateTime | Yes | Record creation timestamp |
updated_at | DateTime | Yes | Last modification timestamp |
Credential Rotation:
- OAuth tokens with expires_at are automatically refreshed before expiry (5-minute buffer).
- Failed token refresh triggers a DISCONNECTED status and dashboard alert.
- Manual credential update requires a re-verification handshake before status returns to CONNECTED.
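The 5-minute refresh buffer can be sketched as a simple predicate run by the credential-rotation job. The buffer value and the "no expiry means no refresh" rule come from this section; the function itself is an assumption.

```python
from datetime import datetime, timedelta

REFRESH_BUFFER = timedelta(minutes=5)  # refresh before expiry, per the rotation rule

def needs_refresh(expires_at, now) -> bool:
    """True when a credential should be refreshed proactively.

    Credentials with no expiry (expires_at is None) are never refreshed.
    """
    if expires_at is None:
        return False
    return now >= expires_at - REFRESH_BUFFER
```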
6.11.3 Sync Log (Enhanced)
The sync log captures every sync operation across all integrations. The sync_type enum is expanded to include new operation types introduced by Amazon (SQS notifications, bulk feeds) and Google Merchant (scheduled product pushes).
Enhanced sync_type Enum
| sync_type Value | Description | Example |
|---|---|---|
WEBHOOK_IN | Incoming webhook from external system | Shopify orders/create webhook |
WEBHOOK_OUT | Outgoing webhook to external system | POS notifies custom endpoint of sale |
SCHEDULED_PULL | Scheduled data pull from external system | Amazon order poll every 120 seconds |
SCHEDULED_PUSH | Scheduled data push to external system | Google Merchant product feed update (2x daily) |
RECONCILIATION | Periodic full data comparison and correction | Shopify inventory reconciliation every 15 minutes |
MANUAL | Admin-triggered manual sync from Integration Hub | Admin clicks “Sync Now” for Shopify products |
BULK_OPERATION | Large-scale data operation via bulk API | Shopify bulk GraphQL operation, Amazon flat-file feed |
NOTIFICATION | Asynchronous notification processing | Amazon SQS message, Google Merchant disapproval alert |
Integration Sync Log Data Model
| Field | Type | Required | Description |
|---|---|---|---|
id | UUID | Yes | Primary key |
integration_id | UUID | Yes | FK to integrations table |
sync_type | Enum | Yes | Enhanced enum from table above |
direction | Enum | Yes | INBOUND, OUTBOUND |
entity_type | String(50) | Yes | Entity synced (e.g., PRODUCT, INVENTORY, ORDER, CUSTOMER, LISTING, FEED) |
entity_id | UUID | No | FK to the local entity record affected (NULL for bulk syncs) |
external_id | String(100) | No | External system identifier (e.g., Shopify product ID, Amazon ASIN, Google offer ID) |
status | Enum | Yes | SUCCESS, FAILED, PARTIAL, SKIPPED, RETRYING |
records_processed | Integer | Yes | Number of records processed in this sync operation |
records_failed | Integer | Yes | Number of records that failed processing |
error_details | Text | No | Detailed error message and stack trace for failed syncs |
duration_ms | Integer | Yes | Sync operation duration in milliseconds |
started_at | DateTime | Yes | Sync start timestamp |
completed_at | DateTime | Yes | Sync completion timestamp |
New vs. the Module 5 base model: the RETRYING status, the LISTING and FEED entity types, and the BULK_OPERATION and NOTIFICATION sync types.
6.11.4 Health Dashboard (Enhanced)
The Integration Hub dashboard provides a consolidated health view for all configured integrations across all six integration types.
flowchart LR
HUB["Integration Hub\n(Central Management)"]
SHOP["Shopify\nProduct + Inventory\n+ Orders"]
AMZN["Amazon SP-API\nListings + Orders\n+ FBA/FBM"]
GOOG["Google Merchant\nProduct Feeds\n+ Local Inventory"]
PAY["Payment Processor\nCard Processing\n+ Batch Settlement"]
EMAIL["Email Provider\nSMTP / SendGrid\n/ Mailgun"]
SHIP["Shipping Carrier\nRates + Labels\n+ Tracking"]
HUB -->|"Product sync\nInventory sync\nOrder webhooks"| SHOP
HUB -->|"Listings sync\nOrder poll\nSQS notifications"| AMZN
HUB -->|"Product feed\nLocal inventory\nDisapproval alerts"| GOOG
HUB -->|"Authorization\nSettlement\nRefunds"| PAY
HUB -->|"Transactional emails\nDigest notifications"| EMAIL
HUB -.->|"Rate lookup\nLabel generation\nTracking updates"| SHIP
style HUB fill:#7b2d8e,stroke:#5a1d6e,color:#fff
style SHOP fill:#2d6a4f,stroke:#1b4332,color:#fff
style AMZN fill:#ff9900,stroke:#cc7a00,color:#000
style GOOG fill:#4285f4,stroke:#3367d6,color:#fff
style PAY fill:#2d6a4f,stroke:#1b4332,color:#fff
style EMAIL fill:#2d6a4f,stroke:#1b4332,color:#fff
style SHIP fill:#6c757d,stroke:#495057,color:#fff
Enhanced Health Indicators per Integration
| Metric | Green | Yellow | Red |
|---|---|---|---|
| Status | CONNECTED | RATE_LIMITED | DISCONNECTED / ERROR |
| Error Count (24h) | 0 | 1-5 | > 5 |
| Last Sync | < 30 min ago | 30 min – 2 hrs | > 2 hrs |
| Sync Latency | < 2,000ms | 2,000 – 5,000ms | > 5,000ms |
| Rate Limit | > 50% remaining | 10-50% remaining | < 10% remaining |
| Validation Failures | 0 products | 1-10 products | > 10 products |
Dashboard Features:
- Real-time status indicator (green/yellow/red dot) per integration.
- Click-through to detailed sync log filtered by integration.
- “Sync Now” button for manual sync trigger (requires ADMIN role).
- Credential expiry warning banner (30 days before OAuth token expiry).
- Rate limit usage bar showing current consumption vs. limit.
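The per-metric thresholds above can be rolled up into a single dashboard dot color. The thresholds come from the health-indicator table; the "worst metric wins" aggregation and the function shape are illustrative assumptions.

```python
def health_color(status: str, errors_24h: int, minutes_since_sync: float,
                 latency_ms: float, rate_limit_remaining_pct: float) -> str:
    """Roll per-metric thresholds from the table above into one dot color.

    The worst individual metric wins: any red metric makes the whole
    integration red; otherwise any yellow metric makes it yellow.
    """
    def metric_colors():
        yield ("GREEN" if status == "CONNECTED"
               else "YELLOW" if status == "RATE_LIMITED" else "RED")
        yield ("GREEN" if errors_24h == 0
               else "YELLOW" if errors_24h <= 5 else "RED")
        yield ("GREEN" if minutes_since_sync < 30
               else "YELLOW" if minutes_since_sync <= 120 else "RED")
        yield ("GREEN" if latency_ms < 2000
               else "YELLOW" if latency_ms <= 5000 else "RED")
        yield ("GREEN" if rate_limit_remaining_pct > 50
               else "YELLOW" if rate_limit_remaining_pct >= 10 else "RED")
    colors = list(metric_colors())
    if "RED" in colors:
        return "RED"
    if "YELLOW" in colors:
        return "YELLOW"
    return "GREEN"
```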
6.12 Integration Business Rules (YAML)
Scope: Consolidated YAML configuration for all integration-related business rules across Module 6. This follows the same pattern as Module 5, Section 5.19 (Consolidated Business Rules). All values shown are defaults and can be overridden at tenant level.
# ============================================================
# Module 6: Integration Business Rules
# ============================================================
# All values shown are defaults and can be overridden at
# tenant level. This file is the single authoritative source
# for all integration-related configurable settings.
# ============================================================
integration_config:
# ----------------------------------------------------------
# SHOPIFY INTEGRATION
# ----------------------------------------------------------
shopify:
sync_mode: "pos_master" # pos_master | bidirectional
api_preference: "graphql" # graphql | rest
idempotency_required: true
third_party_pos: true # Shopify POS channel disabled; POS is external
source_of_truth: "pos" # pos | shopify (for product data)
track_inventory: true # Shopify inventory tracking enabled
bopis_enabled: true # Buy Online, Pick Up In Store
reconciliation_interval_minutes: 15
webhook_hmac_algorithm: "sha256"
max_variants_per_product: 100 # Shopify hard limit
max_option_dimensions: 3 # Shopify hard limit
bulk_concurrent_queries: 5
bulk_concurrent_mutations: 5
rate_limit_rest_bucket: 40 # Shopify REST leak bucket size
rate_limit_rest_leak_per_sec: 2 # Shopify REST leak rate
rate_limit_graphql_points_per_sec: 50
image_sync: "first_publish_only" # first_publish_only | always | never
customer_sync_enabled: true
order_sync_enabled: true
# ----------------------------------------------------------
# AMAZON SP-API INTEGRATION
# ----------------------------------------------------------
amazon:
enabled: false # Disabled by default; tenant must opt-in
sync_mode: "pos_master" # pos_master only (Amazon does not push product edits)
marketplace_id: "ATVPDKIKX0DER" # US marketplace (configurable per tenant)
region: "NA" # NA | EU | FE
order_poll_interval_seconds: 120 # Poll for new orders every 2 minutes
fba_enabled: false # Fulfillment by Amazon
fbm_enabled: true # Fulfillment by Merchant (store ships)
safety_buffer_default_qty: 10 # Reserve 10 units from Amazon availability
safety_buffer_default_pct: null # Percentage-based buffer (overrides qty if set)
notification_delivery: "sqs" # sqs | polling
catalog_api_version: "2022-04-01"
listings_api_version: "2021-08-01"
packaging_label_format: "4x6" # Thermal label format
seller_code_compliance: true # Enforce Amazon seller code rules
max_bullet_points: 5 # Amazon listing bullet point limit
max_bullet_point_length: 1000 # Characters per bullet point
max_search_terms_bytes: 250 # Amazon search term byte limit
fulfillment_default: "FBM" # FBM | FBA
# ----------------------------------------------------------
# GOOGLE MERCHANT CENTER INTEGRATION
# ----------------------------------------------------------
google_merchant:
enabled: false # Disabled by default; tenant must opt-in
sync_mode: "pos_master" # POS is source of truth for product data
api_version: "v1" # Content API version
local_inventory_enabled: false # Local Inventory Ads (LIA)
product_update_frequency: "2x_daily" # 2x_daily | daily | hourly
image_validation_strict: true # Enforce Google image requirements
gtin_required: true # GTIN (barcode) required for all products
price_match_validation: true # Verify POS price matches Google listing
disapproval_prevention: true # Pre-validate before pushing to Google
ssl_required: true # Landing page must be HTTPS
content_api_migration_deadline: "2026-08-18" # Merchant API migration deadline
v1beta_deadline: "2026-02-28" # v1beta sunset date
title_max_length: 150 # Google product title limit
description_max_length: 5000 # Google product description limit
gbp_integration_enabled: false # Google Business Profile integration
# ----------------------------------------------------------
# CROSS-PLATFORM PRODUCT VALIDATION
# ----------------------------------------------------------
product_validation:
title_max_length: 150 # Strictest platform limit
description_max_length: 5000 # Strictest platform limit
image_min_width: 1000 # Pixels (Google requirement)
image_min_height: 1000 # Pixels (Google requirement)
image_max_size_mb: 10
image_formats_allowed:
- "JPEG"
- "PNG"
watermarks_prohibited: true # Google and Amazon both prohibit
text_overlay_prohibited: true # Google prohibits text on images
white_background_required: true # Amazon main image requirement
barcode_required: true # GTIN/UPC/EAN required for marketplace listing
brand_required: true # Required by Google and Amazon
condition_required: true # Required by Google
condition_default: "new" # new | refurbished | used
sku_max_length: 50
weight_required: true # Required for shipping calculation
product_type_required: true # Google product category taxonomy
# ----------------------------------------------------------
# CROSS-PLATFORM INVENTORY SYNC
# ----------------------------------------------------------
inventory_sync:
safety_buffer_enabled: true # Hold back inventory from marketplaces
safety_buffer_default_qty: 0 # Default buffer (0 = no buffer)
oversell_prevention: true # Block sale if available_qty <= 0
reserve_on_order: true # Reserve inventory when order is placed
first_commit_wins: true # First system to commit gets the unit
sync_failure_freeze_minutes: 120 # Freeze marketplace qty on sync failure
dead_letter_retry_hours: 24 # Retry failed sync events for 24 hours
shopify_reconciliation_minutes: 15
amazon_reconciliation_minutes: 30
google_reconciliation_hours: 6
# ----------------------------------------------------------
# PAYMENT INTEGRATION
# ----------------------------------------------------------
payment:
pci_scope: "SAQ-A" # Semi-integrated; card data never touches POS
payment_timeout_seconds: 60
connection_timeout_seconds: 10
same_day_void: true
batch_close_time: "23:00" # Local time, configurable per location
batch_close_auto: true
terminal_failure_alert_threshold: 3 # Consecutive failures before alert
terminal_failure_alert_window_min: 15
reconciliation_variance_tolerance: 0.01 # Dollars
decline_log_retention_days: 90
# ----------------------------------------------------------
# EMAIL INTEGRATION
# ----------------------------------------------------------
email:
provider_type: "SMTP" # SMTP | SENDGRID | MAILGUN
daily_send_limit: 0 # 0 = unlimited
bounce_rate_alert_threshold_pct: 5
consecutive_failure_suppress: 3 # Suppress after N failures to same address
delivery_log_retention_days: 90
test_email_required_before_go_live: true
# ----------------------------------------------------------
# SHIPPING INTEGRATION (Future)
# ----------------------------------------------------------
shipping:
enabled: false # Planned for v2.0
default_carrier: null
address_validation_required: true
insurance_default: false
label_format: "4x6" # Thermal label format
tracking_poll_interval_minutes: 60
# ----------------------------------------------------------
# GLOBAL INTEGRATION SETTINGS
# ----------------------------------------------------------
global:
retry_max_attempts: 3
retry_backoff_base_seconds: 5
retry_backoff_multiplier: 3 # Exponential: 5s, 15s, 45s
circuit_breaker_threshold: 5 # Failures before circuit opens
circuit_breaker_window_seconds: 60
circuit_breaker_cooldown_seconds: 30
idempotency_window_hours: 24 # Idempotency key validity
credential_encryption: "AES-256"
webhook_verification: "HMAC-SHA256"
sync_log_retention_days: 90
health_check_interval_seconds: 60
rate_limit_buffer_pct: 10 # Stop at 90% of rate limit
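The global retry settings above produce the 5s / 15s / 45s schedule noted in the YAML comment. A minimal sketch, using the default values (tenant overrides would change the constants):

```python
RETRY_MAX_ATTEMPTS = 3
RETRY_BACKOFF_BASE_SECONDS = 5
RETRY_BACKOFF_MULTIPLIER = 3

def retry_delays() -> list[int]:
    """Exponential backoff schedule: base * multiplier^attempt."""
    return [RETRY_BACKOFF_BASE_SECONDS * RETRY_BACKOFF_MULTIPLIER ** attempt
            for attempt in range(RETRY_MAX_ATTEMPTS)]
```

After the third failed attempt, the circuit-breaker settings take over: five failures within a 60-second window open the circuit for the 30-second cooldown.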
6.13 Integration User Stories & Gherkin Acceptance Criteria
Scope: All user stories and Gherkin acceptance criteria for Module 6 (Integrations). Stories are organized into 7 epics covering all integration areas. Each epic includes user stories in standard format and Gherkin feature files with acceptance scenarios.
6.13.1 Integration User Story Epics
Epic 6.A: Shopify Integration
US-6.A.1: Product Sync to Shopify
- As a store manager, I want products created in POS to automatically appear on my Shopify store so that my online catalog stays current without manual duplicate entry.
- Constraint: Product must have all required fields (title, price, at least one image, barcode). Sync occurs within 30 seconds of product creation.
US-6.A.2: Real-Time Inventory Sync
- As a store manager, I want real-time inventory sync between POS and Shopify so that online customers never purchase items that are out of stock in-store.
- Constraint: Inventory updates propagate within 30 seconds. Reconciliation runs every 15 minutes.
US-6.A.3: Online Order Fulfillment
- As a store manager, I want online Shopify orders to appear in my POS for fulfillment so that my staff can pick, pack, and ship from the store.
- Constraint: Orders appear within 60 seconds of placement. Inventory is reserved immediately.
US-6.A.4: Sync Mode Configuration
- As a tenant admin, I want to configure Shopify sync mode (POS-master or bidirectional) so that I can control which system is authoritative for product data.
- Constraint: Inventory sync is always bidirectional regardless of product sync mode.
US-6.A.5: BOPIS Order Processing
- As a store manager, I want BOPIS (Buy Online, Pick Up In Store) orders from Shopify to appear as pickup orders in POS so that staff can stage items for customer collection.
- Constraint: BOPIS orders follow the Hold for Pickup workflow (Module 1, Section 1.11).
US-6.A.6: Shopify Conflict Resolution
- As a tenant admin, I want the system to automatically resolve sync conflicts between POS and Shopify so that data remains consistent without manual intervention.
- Constraint: In POS-master mode, POS-owned field changes in Shopify are overwritten on next sync cycle.
Epic 6.B: Amazon SP-API Integration
US-6.B.1: Amazon Account Connection
- As a tenant admin, I want to connect my Amazon Seller Central account to POS via OAuth so that I can manage Amazon listings from within the POS system.
- Constraint: Connection uses SP-API OAuth with LWA (Login with Amazon). Refresh token stored encrypted.
US-6.B.2: Amazon Listing Management
- As a store manager, I want products listed on Amazon to be managed from POS so that pricing, descriptions, and images are maintained in one place.
- Constraint: POS is source of truth. Amazon-specific fields (bullet points, search terms) are editable from POS.
US-6.B.3: Amazon FBM Order Routing
- As a store manager, I want Amazon FBM orders routed to the nearest store for fulfillment so that delivery times are minimized and inventory is balanced.
- Constraint: Store assignment uses the same proximity + stock algorithm as Shopify orders (Module 4, Section 4.14).
US-6.B.4: Safety Buffer Configuration
- As a tenant admin, I want to configure safety buffers for Amazon inventory so that I can reserve stock for in-store customers and prevent overselling.
- Constraint: Buffer can be absolute quantity or percentage. Applied per-product or globally.
US-6.B.5: FBA Inventory Monitoring
- As a store manager, I want to monitor FBA inventory levels from within POS so that I can see total inventory across all fulfillment channels.
- Constraint: FBA quantities are read-only in POS. Displayed separately from FBM/in-store quantities.
US-6.B.6: Amazon Order Polling
- As a tenant admin, I want the system to automatically poll for new Amazon orders so that orders are imported without manual action.
- Constraint: Polling interval configurable (default: 120 seconds). SQS notifications preferred when available.
Epic 6.C: Google Merchant Integration
US-6.C.1: Google Merchant Connection
- As a tenant admin, I want to connect Google Merchant Center to POS via OAuth so that product data feeds to Google Shopping automatically.
- Constraint: Uses Google Content API for Shopping. Service account or OAuth credentials stored encrypted.
US-6.C.2: Local Inventory Ads
- As a store manager, I want local store inventory to appear in Google Shopping searches so that nearby customers can see what is available at my store.
- Constraint: Requires Google Business Profile linked to Merchant Center. Location-level inventory synced.
US-6.C.3: Pre-Publish Validation
- As a tenant admin, I want products validated against Google requirements before sync so that disapprovals are prevented and my product listings maintain good standing.
- Constraint: Validation checks GTIN, images (1000x1000 min, no watermarks), price match, and required attributes.
US-6.C.4: Disapproval Dashboard
- As a store manager, I want a disapproval dashboard showing which products failed Google validation so that I can fix issues before they affect visibility.
- Constraint: Dashboard shows disapproval reason, affected product, date, and suggested fix.
US-6.C.5: Product Feed Scheduling
- As a tenant admin, I want to configure product feed update frequency so that Google always has current product data without excessive API usage.
- Constraint: Options are hourly, daily, or 2x daily (default). Changes trigger immediate incremental sync.
US-6.C.6: Price Consistency Enforcement
- As a tenant admin, I want the system to verify that POS prices match Google listing prices so that customers are not misled by stale pricing.
- Constraint: Price mismatch detected during reconciliation triggers an automatic price update push.
Epic 6.D: Cross-Platform Product Validation
US-6.D.1: Unified Validation Engine
- As a product manager, I want products validated against all platform requirements before publishing so that I can fix issues once rather than per-platform.
- Constraint: Validation runs against the strictest requirement across all enabled platforms.
US-6.D.2: Image Validation
- As a product manager, I want image validation that checks dimensions, file size, watermarks, and background requirements so that images pass all platform reviews.
- Constraint: Minimum 1000x1000px, max 10MB, JPEG/PNG only, no watermarks or text overlays, white background for Amazon main image.
US-6.D.3: Unified Validation Dashboard
- As a product manager, I want a unified validation dashboard showing platform readiness per product so that I can see at a glance which products are ready for each channel.
- Constraint: Dashboard shows green/yellow/red status per product per platform with drill-down to specific failures.
US-6.D.4: Validation on Product Save
- As a product manager, I want validation to run automatically when I save a product so that I receive immediate feedback on any issues.
- Constraint: Validation is non-blocking (product saves regardless) but warnings are displayed prominently.
US-6.D.5: Bulk Validation Report
- As a tenant admin, I want to run bulk validation across all products so that I can identify and fix issues before enabling a new marketplace.
- Constraint: Bulk validation runs asynchronously and produces a downloadable report.
Epic 6.E: Cross-Platform Inventory Sync
US-6.E.1: Safety Buffer Management
- As a tenant admin, I want to configure safety buffers per marketplace so that I can reserve stock for in-store customers while selling online.
- Constraint: Buffers are configurable per product, per marketplace. Default is configurable at tenant level.
US-6.E.2: Oversell Prevention
- As a store manager, I want the system to prevent overselling across all channels so that customers never purchase items that are not available.
- Constraint: First-commit-wins arbitration. Inventory reserved at time of order, not payment.
US-6.E.3: Sync Failure Handling
- As a tenant admin, I want inventory quantities frozen on marketplaces when sync fails so that stale data does not cause overselling.
- Constraint: Quantities are frozen for configurable period (default: 120 minutes). Dead letter queue retries for 24 hours.
US-6.E.4: Reconciliation Dashboard
- As a store manager, I want a reconciliation dashboard showing inventory discrepancies across channels so that I can identify and resolve sync issues.
- Constraint: Dashboard shows POS qty vs. each marketplace qty with variance highlighting.
US-6.E.5: Multi-Channel Available Quantity
- As a store manager, I want to see available quantity per channel from the product detail screen so that I understand how inventory is allocated.
- Constraint: Displays: Total On Hand, Shopify Available, Amazon Available, Google Available, In-Store Reserve, Safety Buffer.
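The first-commit-wins arbitration in US-6.E.2 can be sketched as an atomic reserve-at-order-time check. This is a minimal illustration, not the platform's implementation; the `InventoryRecord` type and `reserve` helper are hypothetical names.

```python
# Minimal sketch of first-commit-wins inventory reservation (US-6.E.2).
# Names (InventoryRecord, reserve) are illustrative, not the platform API.
from dataclasses import dataclass, field
from threading import Lock

@dataclass
class InventoryRecord:
    on_hand: int
    reserved: int = 0
    _lock: Lock = field(default_factory=Lock, repr=False)

    @property
    def available(self) -> int:
        return self.on_hand - self.reserved

    def reserve(self, qty: int) -> bool:
        """Atomically reserve stock at order time; the first commit wins."""
        with self._lock:
            if qty > self.available:
                return False          # the later channel loses the race
            self.reserved += qty
            return True

rec = InventoryRecord(on_hand=1)
first = rec.reserve(1)    # e.g. a Shopify order commits first -> True
second = rec.reserve(1)   # e.g. an Amazon order arrives next -> False
```

Note that reservation happens at order time, not payment time, per the constraint above.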
Epic 6.F: Payment Integration
US-6.F.1: Card Payment Processing
- As a cashier, I want to process card payments via the payment terminal without handling card data so that transactions are fast and PCI compliant.
- Constraint: Card data never enters POS system. Terminal communicates directly with processor. POS stores only token and masked card.
US-6.F.2: Processor Credential Configuration
- As a tenant admin, I want to configure payment processor credentials with test/production toggle so that I can verify the integration in sandbox before going live.
- Constraint: Credentials encrypted with AES-256. Validation handshake required before activating production mode.
US-6.F.3: Terminal Health Monitoring
- As a store manager, I want to view payment terminal health and decline rates so that I can identify and resolve terminal issues proactively.
- Constraint: Dashboard shows per-terminal metrics: transaction count, avg response time, error rate, decline rate.
US-6.F.4: Batch Settlement Management
- As a store manager, I want to view daily batch settlement reports and reconciliation status so that I can verify all card transactions settled correctly.
- Constraint: Auto-batch at configurable time. Variance > $0.01 triggers reconciliation alert.
Epic 6.G: Integration Hub
US-6.G.1: Integration Management
- As a tenant admin, I want a central dashboard to manage all external integrations so that I can connect, configure, and monitor all third-party services from one place.
- Constraint: Dashboard shows status, last sync, error count, and health indicator per integration.
US-6.G.2: Health Monitoring
- As a tenant admin, I want real-time health indicators for all integrations so that I can immediately see when an integration needs attention.
- Constraint: Green/yellow/red indicators based on status, error count, sync age, latency, and rate limit usage.
US-6.G.3: Sync Log Access
- As a tenant admin, I want to view detailed sync logs for each integration so that I can troubleshoot failed syncs and understand data flow.
- Constraint: Logs filterable by integration, sync type, status, date range. 90-day retention.
US-6.G.4: Manual Sync Trigger
- As a tenant admin, I want to manually trigger a sync for any integration so that I can force a refresh when needed without waiting for the scheduled cycle.
- Constraint: Requires ADMIN or OWNER role. Rate limited to one manual sync per integration per 5 minutes.
6.13.2 Gherkin Acceptance Criteria
Feature: Shopify Product Sync
As a store manager
I want products to sync between POS and Shopify
So that my online store always shows current product data
Background:
Given I am logged in as a user with "MANAGER" role
And Shopify integration is enabled with status "CONNECTED"
And sync_mode is set to "pos_master"
Scenario: New product syncs to Shopify on creation
Given I create a new product with SKU "NXJ-TSHIRT-BLK-M"
And the product has title "Classic Black T-Shirt - Medium"
And the product has base_price "$24.99"
And the product has a valid image (1200x1200px JPEG)
And the product has barcode "195962000123"
When the product is saved
Then the product should appear in Shopify within 30 seconds
And the Shopify product title should match "Classic Black T-Shirt - Medium"
And the Shopify product price should match "$24.99"
And a sync log entry should be created with sync_type "WEBHOOK_OUT" and status "SUCCESS"
Scenario: POS-owned field change in Shopify is overwritten
Given product "NXJ-TSHIRT-BLK-M" exists in both POS and Shopify
And sync_mode is "pos_master"
When someone changes the Shopify title to "Updated Title on Shopify"
And the next reconciliation cycle runs
Then the Shopify title should revert to the POS title "Classic Black T-Shirt - Medium"
And a sync log entry should be created with sync_type "RECONCILIATION"
Scenario: Product missing required fields is excluded from sync
Given I create a new product with SKU "NXJ-DRAFT-001"
And the product has title "Draft Product"
But the product has no image
When the product is saved
Then the product should NOT sync to Shopify
And a validation warning should display "Product excluded from Shopify sync: missing required image"
And the product should be saved locally without error
Feature: Shopify Inventory Sync
As a store manager
I want real-time inventory sync between POS and Shopify
So that online customers see accurate stock levels
Background:
Given Shopify integration is enabled with inventory_sync_enabled = true
And product "NXJ-TSHIRT-BLK-M" exists in both POS and Shopify
And the current POS quantity at "Georgetown Store" is 25
Scenario: POS sale decrements Shopify inventory
When a sale of 1 unit of "NXJ-TSHIRT-BLK-M" is completed at "Georgetown Store"
Then the POS quantity should update to 24
And the Shopify inventory level should update to 24 within 30 seconds
And a sync log entry should be created with entity_type "INVENTORY" and status "SUCCESS"
Scenario: Shopify order decrements POS inventory
When a Shopify order for 2 units of "NXJ-TSHIRT-BLK-M" is placed
Then the POS should receive the order via webhook within 60 seconds
And the POS quantity should decrease by 2 (from 25 to 23)
And the inventory status should show 2 units as "RESERVED" for the online order
Scenario: Reconciliation detects and corrects discrepancy
Given the POS quantity is 20 but the Shopify quantity shows 22
When the scheduled reconciliation runs (every 15 minutes)
Then the Shopify quantity should be corrected to 20 (POS is source of truth)
And a sync log entry should be created with sync_type "RECONCILIATION"
And the discrepancy should be logged with details "Corrected Shopify qty from 22 to 20"
Scenario: Inventory sync failure freezes marketplace quantity
Given the Shopify API is unreachable
When a POS sale occurs reducing quantity from 20 to 19
Then the POS should record the sync as "FAILED"
And the Shopify quantity should remain frozen at its last known value
And a retry should be queued with exponential backoff (5s, 15s, 45s)
And if still failing after 120 minutes, an alert should be sent to the tenant admin
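The retry schedule in the failure scenario (5s, 15s, 45s) is exponential backoff with a 3x multiplier, bounded by the 120-minute freeze window. A sketch, with the helper name assumed for illustration:

```python
# Sketch of the retry schedule above: exponential backoff with a 3x
# multiplier (5s, 15s, 45s, ...), capped by the 120-minute freeze window.
def backoff_delays(base: float = 5.0, factor: float = 3.0,
                   freeze_after_s: float = 120 * 60) -> list[float]:
    """Return retry delays until the cumulative wait exceeds the freeze window."""
    delays, elapsed, d = [], 0.0, base
    while elapsed + d <= freeze_after_s:
        delays.append(d)
        elapsed += d
        d *= factor
    return delays

print(backoff_delays()[:3])   # first three attempts: [5.0, 15.0, 45.0]
```

Once the schedule is exhausted, the admin alert fires and the dead letter queue takes over for the remaining 24 hours.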
Feature: Amazon Order Import
As a store manager
I want Amazon orders automatically imported into POS
So that I can fulfill them from my store inventory
Background:
Given Amazon SP-API integration is enabled with status "CONNECTED"
And fulfillment_default is "FBM"
And order_poll_interval_seconds is 120
Scenario: New FBM order is imported via polling
Given a customer places an Amazon order for "NXJ-TSHIRT-BLK-M" (qty: 1)
When the next order poll cycle runs
Then the order should appear in POS with source "AMAZON"
And the order status should be "PENDING_FULFILLMENT"
And inventory should be reserved (1 unit) at the assigned store
And a sync log entry should be created with sync_type "SCHEDULED_PULL" and entity_type "ORDER"
Scenario: FBM order is routed to nearest store with stock
Given "Georgetown Store" has 5 units and "HQ Warehouse" has 20 units
And the customer shipping address is in Washington, DC
When an Amazon FBM order is imported
Then the order should be assigned to "Georgetown Store" (nearest with stock)
And the fulfillment team at "Georgetown Store" should see the order in their queue
Scenario: Order for out-of-stock item triggers alert
Given "NXJ-TSHIRT-BLK-M" has 0 units available across all locations
When an Amazon order for this item is imported
Then the order should be created with status "PENDING_FULFILLMENT"
And a "STOCK_ALERT" notification should be sent to the store manager
And the order should be flagged with warning "Insufficient stock for fulfillment"
Feature: Amazon FBM Fulfillment
As a store manager
I want to fulfill Amazon FBM orders from my store
So that customers receive their orders on time
Background:
Given Amazon integration is connected
And I am logged in as a user with "MANAGER" role at "Georgetown Store"
Scenario: Ship and confirm FBM order
Given an Amazon FBM order "114-1234567-8901234" is assigned to my store
And the order contains 1 unit of "NXJ-TSHIRT-BLK-M"
When I pick and pack the item
And I generate a shipping label via the carrier integration
And I confirm shipment with tracking number "1Z999AA10123456784"
Then the order status should update to "SHIPPED" in POS
And Amazon should receive the shipment confirmation via SP-API
And the customer should receive a shipping notification from Amazon
And the inventory should be decremented by 1 unit at "Georgetown Store"
Scenario: FBA inventory is visible but read-only
Given FBA is enabled for product "NXJ-HOODIE-GRY-L"
And Amazon FBA has 50 units in their fulfillment center
When I view the product detail for "NXJ-HOODIE-GRY-L"
Then I should see "FBA Qty: 50" in the inventory breakdown
And the FBA quantity should be read-only (not editable from POS)
And the total network quantity should include FBA units
Feature: Google Local Inventory
As a store manager
I want local store inventory visible in Google Shopping
So that nearby customers can find products at my store
Background:
Given Google Merchant integration is connected
And local_inventory_enabled is true
And Google Business Profile is linked for "Georgetown Store"
Scenario: Local inventory appears in Google Shopping
Given "NXJ-TSHIRT-BLK-M" has 15 units at "Georgetown Store"
And the product passes all Google validation rules
When the scheduled product feed runs
Then Google Merchant should show "In stock" at "Georgetown Store"
And the price shown should match the POS base_price "$24.99"
Scenario: Out-of-stock local inventory updates Google
Given "NXJ-TSHIRT-BLK-M" has 0 units at "Georgetown Store"
When the next inventory sync to Google runs
Then Google Merchant should show "Out of stock" at "Georgetown Store"
And other locations with stock should still show "In stock"
Scenario: Product with missing GTIN is excluded from Google feed
Given product "NXJ-CUSTOM-001" has no barcode/GTIN
And gtin_required is true
When the product feed sync runs
Then "NXJ-CUSTOM-001" should be excluded from the Google feed
And a validation failure should be logged with reason "Missing required GTIN"
And the disapproval dashboard should show this product
Feature: Cross-Platform Product Validation
As a product manager
I want products validated against all platform requirements
So that I can publish to any channel with confidence
Background:
Given Shopify, Amazon, and Google Merchant integrations are all enabled
And I am logged in as a user with "PRODUCT_MANAGER" role
Scenario: Product passes all platform validations
Given product "NXJ-TSHIRT-BLK-M" has:
| Field | Value |
| title | Classic Black T-Shirt - Medium |
| description | Premium cotton crew neck t-shirt |
| price | $24.99 |
| barcode | 195962000123 |
| brand | Nexus Clothing |
| weight | 8 oz |
| image | 1200x1200 JPEG, no watermark, white background |
| condition | new |
When I save the product
Then the validation dashboard should show:
| Platform | Status |
| Shopify | Ready (green) |
| Amazon | Ready (green) |
| Google | Ready (green) |
Scenario: Product fails Amazon image validation
Given product "NXJ-DRESS-RED-S" has a main image with a colored background
And white_background_required is true for Amazon
When I save the product
Then the validation dashboard should show:
| Platform | Status |
| Shopify | Ready (green) |
| Amazon | Warning (yellow) - "Main image requires white background" |
| Google | Warning (yellow) - "Image may not meet quality standards" |
Scenario: Product exceeds Shopify variant limit
Given product "NXJ-MATRIX-SHOE" has 120 variants (sizes x colors x widths)
And max_variants_per_product is 100
When I save the product
Then the validation dashboard should show Shopify status as "Blocked (red)"
And the message should read "Exceeds Shopify 100-variant limit (120 variants)"
And the product should be excluded from Shopify sync
But the product should still be eligible for Amazon and Google sync
Scenario: Bulk validation report generation
Given there are 500 active products in the catalog
When I click "Run Bulk Validation" from the validation dashboard
Then a background job should start processing all 500 products
And I should see a progress indicator
And when complete, a downloadable CSV report should be available
And the report should contain one row per product per platform with validation status
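The "strictest requirement wins" behavior from US-6.D.1 and the scenarios above can be sketched as a per-platform rule check. Rule values below come from the scenarios (100-variant Shopify limit, 1000px minimum image, GTIN for Google, white background for Amazon); the function and field names are illustrative.

```python
# Sketch of cross-platform validation (US-6.D.1). Rule values are taken from
# the scenarios above; validate() and its product fields are illustrative.
PLATFORM_RULES = {
    "shopify": {"max_variants": 100},
    "amazon":  {"min_image_px": 1000, "white_background": True},
    "google":  {"min_image_px": 1000, "gtin_required": True},
}

def validate(product: dict, enabled: list[str]) -> dict[str, list[str]]:
    """Return per-platform failure lists; an empty list means green status."""
    failures: dict[str, list[str]] = {p: [] for p in enabled}
    for p in enabled:
        rules = PLATFORM_RULES[p]
        if product.get("variants", 0) > rules.get("max_variants", float("inf")):
            failures[p].append(f"Exceeds {rules['max_variants']}-variant limit")
        if min(product.get("image_px", (0, 0))) < rules.get("min_image_px", 0):
            failures[p].append("Image below minimum dimensions")
        if rules.get("gtin_required") and not product.get("barcode"):
            failures[p].append("Missing required GTIN")
        if rules.get("white_background") and not product.get("white_bg", False):
            failures[p].append("Main image requires white background")
    return failures

matrix_shoe = {"variants": 120, "image_px": (1200, 1200),
               "barcode": "195962000123", "white_bg": True}
result = validate(matrix_shoe, ["shopify", "amazon", "google"])
# Shopify blocked (variant limit); Amazon and Google remain eligible
```

A product failing one platform stays eligible for the others, matching the "NXJ-MATRIX-SHOE" scenario.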
Feature: Safety Buffer Inventory Management
As a tenant admin
I want to configure safety buffers per marketplace
So that I reserve stock for in-store customers while selling online
Background:
Given I am logged in as a user with "ADMIN" role
And product "NXJ-TSHIRT-BLK-M" has 50 total units on hand
Scenario: Safety buffer reduces marketplace available quantity
Given the Amazon safety buffer is set to 10 units
And the Shopify safety buffer is set to 0 units
When inventory sync runs for "NXJ-TSHIRT-BLK-M"
Then Shopify should show 50 available units
And Amazon should show 40 available units (50 - 10 buffer)
Scenario: Percentage-based safety buffer
Given the Amazon safety buffer is set to 20%
And total on-hand quantity is 50
When inventory sync runs
Then Amazon should show 40 available units (50 - 10 buffer, where 10 = 20% of 50)
Scenario: Safety buffer prevents overselling
Given the Amazon safety buffer is 10 units
And total on-hand is 12 units
And Amazon shows 2 available (12 - 10)
When an Amazon order for 3 units is placed
Then the order should be imported with a warning "Ordered qty (3) exceeds Amazon available (2)"
And inventory should be reserved for 3 units (allowing negative available on Amazon)
And the Amazon available quantity should update to 0
Scenario: Buffer recalculated on inventory change
Given the Amazon safety buffer is 10 units
And total on-hand is 50 (Amazon shows 40)
When a POS sale reduces on-hand to 45
Then Amazon available should update to 35 (45 - 10)
And this update should propagate within the sync interval
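The buffer arithmetic in these scenarios is a simple subtraction, with percentage buffers converted to units first. A minimal sketch, assuming the helper name and a floor at zero for the published quantity:

```python
# Minimal sketch of the safety-buffer math in the scenarios above. A buffer is
# either an absolute unit count or a percentage of on-hand stock; the helper
# name is illustrative, not the platform API.
def marketplace_available(on_hand: int, buffer_value: float,
                          buffer_type: str = "absolute") -> int:
    """Available qty pushed to a marketplace after applying its safety buffer."""
    if buffer_type == "percent":
        buffer_units = round(on_hand * buffer_value / 100)
    else:
        buffer_units = int(buffer_value)
    return max(on_hand - buffer_units, 0)

print(marketplace_available(50, 10))             # Amazon, absolute buffer: 40
print(marketplace_available(50, 20, "percent"))  # Amazon, 20% buffer: 40
print(marketplace_available(50, 0))              # Shopify, no buffer: 50
print(marketplace_available(45, 10))             # after a POS sale: 35
```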
Feature: Payment Terminal Flow
As a cashier
I want to process card payments securely via the terminal
So that customers can pay by card without handling card data
Background:
Given I am logged in as a user with "CASHIER" role
And a payment terminal "TRM-001-GM" is configured and active at my location
And a cart with total $45.00 is ready for payment
Scenario: Successful card payment via tap
When I click "Pay by Card"
And the terminal displays "Tap or Insert Card - $45.00"
And the customer taps their Visa card
Then the terminal should communicate directly with the processor
And the POS should receive: token, approval_code "AUTH4829", masked_card "****1234", brand "VISA"
And the POS should display "Approved - $45.00"
And the receipt should show "VISA ****1234"
And no card data should be stored in the POS database
Scenario: Card declined - insufficient funds
When I click "Pay by Card"
And the customer inserts their card
And the processor returns "DECLINED - Insufficient Funds"
Then the POS should display "Card Declined: Insufficient Funds"
And I should see options: "Try Another Card" | "Cash" | "Cancel"
And the decline should be logged with reason "Insufficient Funds"
Scenario: Terminal timeout
When I click "Pay by Card"
And the terminal does not respond within 60 seconds
Then the POS should display "Terminal not responding"
And I should see options: "Retry" | "Different Terminal" | "Cash" | "Cancel"
And the timeout should be logged as a terminal failure event
Feature: Integration Health Dashboard
As a tenant admin
I want to monitor the health of all integrations
So that I can quickly identify and resolve connectivity issues
Background:
Given I am logged in as a user with "ADMIN" role
And I navigate to the Integration Hub dashboard
Scenario: All integrations healthy
Given Shopify integration last synced 5 minutes ago with 0 errors
And Payment Processor status is "CONNECTED" with 0 errors
And Email Provider is verified and has 0 bounces
When I view the Integration Hub dashboard
Then Shopify should show a green health indicator
And Payment Processor should show a green health indicator
And Email Provider should show a green health indicator
Scenario: Integration in error state shows red
Given Shopify integration has had 8 errors in the past 24 hours
And the last sync was 3 hours ago
When I view the Integration Hub dashboard
Then Shopify should show a red health indicator
And the error count should display "8 errors (24h)"
And the last sync should display "3 hours ago" in red text
And an "Investigate" button should be available
Scenario: Rate-limited integration shows yellow
Given Amazon SP-API has 8% of rate limit remaining
And the rate_limit_reset_at is in 45 seconds
When I view the Integration Hub dashboard
Then Amazon should show a yellow health indicator
And the rate limit bar should show "8% remaining"
And a tooltip should display "Rate limit resets in 45 seconds"
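The green/yellow/red rollup exercised above can be sketched as a threshold function. The specific thresholds below are assumptions chosen to reproduce these scenarios, not fixed product values:

```python
# Sketch of the health-indicator rollup. Thresholds are illustrative
# assumptions that reproduce the dashboard scenarios above.
def health_color(errors_24h: int, sync_age_min: float,
                 rate_limit_pct_remaining: float) -> str:
    if errors_24h >= 5 or sync_age_min >= 120:
        return "red"      # sustained errors or a very stale sync
    if errors_24h > 0 or sync_age_min >= 30 or rate_limit_pct_remaining < 10:
        return "yellow"   # degraded: some errors, aging sync, or near rate limit
    return "green"

print(health_color(0, 5, 80))    # healthy Shopify -> green
print(health_color(8, 180, 80))  # 8 errors, 3h since last sync -> red
print(health_color(0, 5, 8))     # Amazon at 8% rate limit remaining -> yellow
```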
Feature: Integration Credential Management
As a tenant admin
I want to securely manage integration credentials
So that external services stay connected without exposing sensitive data
Background:
Given I am logged in as a user with "OWNER" role
And I navigate to the Integration Hub settings
Scenario: Configure Shopify credentials
When I click "Configure" on the Shopify integration card
And I enter shop URL "nexus-clothes.myshopify.com"
And I enter API key "shppa_abc123def456"
And I enter API secret "shpss_xyz789"
And I click "Verify & Save"
Then the system should perform a test API call to Shopify
And the credentials should be encrypted with AES-256 before storage
And the integration status should change to "CONNECTED"
And the API key should display as "shppa_****f456" in the UI
Scenario: Credentials never exposed in API response
Given Shopify integration is connected with saved credentials
When I make a GET request to /api/integrations/{shopify_id}
Then the response should contain credential_key values
But credential_value_encrypted should NOT be in the response
And a masked indicator "********" should be shown instead
Scenario: Failed credential verification
When I enter invalid Shopify API credentials
And I click "Verify & Save"
Then the system should display "Verification failed: Invalid API credentials"
And the integration status should remain "NOT_CONFIGURED"
And the invalid credentials should NOT be saved
Scenario: OAuth token auto-refresh
Given Amazon integration has a refresh token expiring in 4 minutes
When the token refresh check runs (5-minute buffer)
Then the system should automatically request a new access token
And the new token should be encrypted and stored
And the integration status should remain "CONNECTED"
And a sync log entry should record the token refresh
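The masked display in the credential scenarios ("shppa_****f456") keeps the recognizable key prefix and the last four characters. A sketch, with the helper name assumed for illustration; the stored value itself is always AES-256 encrypted at rest:

```python
# Sketch of the credential masking rule above. The mask_credential helper
# is illustrative; only the masked form is ever rendered in the UI.
def mask_credential(value: str, visible_tail: int = 4) -> str:
    """Keep the prefix up to the first '_' and the last 4 characters."""
    prefix = value.split("_", 1)[0] + "_" if "_" in value else ""
    return f"{prefix}****{value[-visible_tail:]}"

print(mask_credential("shppa_abc123def456"))   # -> shppa_****f456
```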
End of Module 6: Integrations & External Systems (Sections 6.1 – 6.13)
7. State Machine Reference
This section consolidates all entity state machines for quick reference.
7.1 Order States
| State | Description | Transitions To |
|---|---|---|
| DRAFT | Cart in progress | PENDING |
| PENDING | Awaiting payment | PAID, PARTIAL_PAID |
| PARTIAL_PAID | Partial payment received | PAID, PENDING |
| PAID | Full payment received | COMPLETED, HOLD_FOR_PICKUP |
| HOLD_FOR_PICKUP | Paid, awaiting pickup staging | READY_FOR_PICKUP |
| READY_FOR_PICKUP | Items staged for pickup | COMPLETED, HOLD_EXPIRED |
| HOLD_EXPIRED | Pickup deadline passed | CONTACT_CUSTOMER |
| CONTACT_CUSTOMER | Staff attempting contact | READY_FOR_PICKUP (extended), REFUNDED |
| COMPLETED | Transaction finalized | VOIDED (same day), PARTIALLY_RETURNED |
| VOIDED | Transaction reversed (same day) | Terminal |
| PARTIALLY_RETURNED | Some items returned | FULLY_RETURNED |
| FULLY_RETURNED | All items returned | Terminal |
| REFUNDED | Customer refunded (expired hold) | Terminal |
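The table above (and each table that follows) can be encoded as a transition map with a generic guard that rejects illegal moves. A sketch, transcribing the order states; the map and guard names are illustrative:

```python
# The 7.1 order-state table encoded as a transition map, with a generic
# guard. ORDER_TRANSITIONS and can_transition are illustrative names.
ORDER_TRANSITIONS = {
    "DRAFT": {"PENDING"},
    "PENDING": {"PAID", "PARTIAL_PAID"},
    "PARTIAL_PAID": {"PAID", "PENDING"},
    "PAID": {"COMPLETED", "HOLD_FOR_PICKUP"},
    "HOLD_FOR_PICKUP": {"READY_FOR_PICKUP"},
    "READY_FOR_PICKUP": {"COMPLETED", "HOLD_EXPIRED"},
    "HOLD_EXPIRED": {"CONTACT_CUSTOMER"},
    "CONTACT_CUSTOMER": {"READY_FOR_PICKUP", "REFUNDED"},
    "COMPLETED": {"VOIDED", "PARTIALLY_RETURNED"},
    "PARTIALLY_RETURNED": {"FULLY_RETURNED"},
    "VOIDED": set(), "FULLY_RETURNED": set(), "REFUNDED": set(),  # terminal
}

def can_transition(current: str, target: str) -> bool:
    return target in ORDER_TRANSITIONS.get(current, set())

print(can_transition("PAID", "HOLD_FOR_PICKUP"))  # True
print(can_transition("COMPLETED", "PENDING"))     # False (no reopening)
```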
7.2 Parked Sale States
| State | Description | Transitions To |
|---|---|---|
| ACTIVE | Cart in progress | PARKED, PENDING |
| PARKED | Sale parked for later | ACTIVE (retrieved), EXPIRED |
| EXPIRED | TTL exceeded (4 hours) | Terminal (inventory released) |
7.3 Gift Card States
| State | Description | Transitions To |
|---|---|---|
| INACTIVE | Card manufactured, not sold | ACTIVE |
| ACTIVE | Balance > $0, within expiry | DEPLETED, EXPIRED (where allowed) |
| DEPLETED | Balance = $0 | ACTIVE (reload), CASHED_OUT |
| CASHED_OUT | Cash out processed (CA) | Terminal |
| EXPIRED | Past expiry date (where allowed) | Terminal |
7.4 Layaway States
| State | Description | Transitions To |
|---|---|---|
| DEPOSIT_PAID | Initial deposit received | RESERVED |
| RESERVED | Inventory held | PAID_IN_FULL, CANCELLED, FORFEITED |
| PAID_IN_FULL | All payments complete | COMPLETED |
| COMPLETED | Items released to customer | Terminal |
| CANCELLED | Customer cancelled | Terminal |
| FORFEITED | Payment deadline missed | Terminal |
7.5 Special Order States
| State | Description | Transitions To |
|---|---|---|
| CREATED | Order initiated | DEPOSIT_PAID, CANCELLED |
| DEPOSIT_PAID | Deposit received | ORDERED, CANCELLED_REFUND |
| ORDERED | Sent to vendor | RECEIVED |
| RECEIVED | Item arrived | READY_FOR_PICKUP |
| READY_FOR_PICKUP | Staged for customer | COMPLETED, ABANDONED |
| COMPLETED | Customer picked up | Terminal |
| CANCELLED | No deposit, cancelled | Terminal |
| CANCELLED_REFUND | Deposit refunded | Terminal |
| ABANDONED | No pickup after 30 days | Terminal |
7.6 Transfer States
| State | Description | Transitions To |
|---|---|---|
| REQUESTED | Transfer initiated | PAID, CANCELLED |
| PAID | Customer paid in full | PICKING |
| PICKING | Source store processing | SHIPPED |
| SHIPPED | Handed to carrier | IN_TRANSIT |
| IN_TRANSIT | Carrier confirmed pickup | RECEIVED |
| RECEIVED | Arrived at destination | COMPLETED |
| COMPLETED | Customer notified/picked up | Terminal |
| CANCELLED | Cancelled before payment | Terminal |
| CANCELLED_REFUND | Cancelled after payment | Terminal |
7.7 Reservation States
| State | Description | Transitions To |
|---|---|---|
| REQUESTED | Reservation initiated | PAID, CANCELLED |
| PAID | Customer paid in full | RESERVED |
| RESERVED | Item held at store | PICKED_UP, EXPIRED |
| PICKED_UP | Customer collected | Terminal |
| EXPIRED | Deadline passed | REFUND_PENDING |
| REFUND_PENDING | Auto-refund triggered | REFUNDED |
| CANCELLED | Cancelled before payment | Terminal |
| REFUNDED | Refund processed | Terminal |
7.8 Ship-to-Customer States
| State | Description | Transitions To |
|---|---|---|
| REQUESTED | Shipment initiated | PAID, CANCELLED |
| PAID | Customer paid item + shipping | PICKING, CANCELLED_REFUND |
| PICKING | Source store processing | PACKED |
| PACKED | Items packed, awaiting label | SHIPPED |
| SHIPPED | Label generated, handed to carrier | IN_TRANSIT |
| IN_TRANSIT | Carrier pickup confirmed | DELIVERED |
| DELIVERED | Delivery confirmed | Terminal |
| CANCELLED | Cancelled before payment | Terminal |
| CANCELLED_REFUND | Cancelled after payment, full refund | Terminal |
7.9 Cash Drawer States
| State | Description | Transitions To |
|---|---|---|
| CLOSED | Drawer secured | OPENING |
| OPENING | Manager initiating open | OPEN |
| OPEN | Accepting transactions | COUNTING |
| COUNTING | End-of-day count in progress | BALANCED, VARIANCE_DETECTED |
| BALANCED | Count matches expected | CLOSED |
| VARIANCE_DETECTED | Count doesn’t match | MANAGER_REVIEW |
| MANAGER_REVIEW | Awaiting approval | BALANCED |
7.10 Coupon States
| State | Description | Transitions To |
|---|---|---|
| CREATED | Coupon generated | ACTIVE |
| ACTIVE | Available for use | REDEEMED, EXPIRED, DEPLETED |
| REDEEMED | Single-use completed | Terminal |
| EXPIRED | Past expiry date | Terminal |
| DEPLETED | Multi-use limit reached | Terminal |
7.11 Customer Tier States
| State | Description | Transitions To |
|---|---|---|
| BRONZE | New/base tier | SILVER |
| SILVER | Mid tier ($1,000+ annual) | GOLD, BRONZE |
| GOLD | Top tier ($5,000+ annual) | SILVER |
7.12 Offline Mode States
| State | Description | Transitions To |
|---|---|---|
| ONLINE | Network available | OFFLINE |
| OFFLINE | Network lost, queuing locally | SYNCING |
| SYNCING | Network restored, processing queue | ONLINE, CONFLICT_REVIEW |
| CONFLICT_REVIEW | Sync conflicts detected | SYNCING (resolved), ONLINE (override) |
7.13 Integration Sync States
| State | Description | Transitions To |
|---|---|---|
| IDLE | No sync operation in progress | SYNCING |
| SYNCING | Active sync operation underway | COMPLETED, FAILED, PARTIAL |
| COMPLETED | Sync finished successfully | IDLE |
| FAILED | Sync failed after all retries exhausted | IDLE (manual retry), SYNCING (auto-retry) |
| PARTIAL | Some records synced, others failed | IDLE (accept partial), SYNCING (retry failed) |
7.14 Integration Connection States
| State | Description | Transitions To |
|---|---|---|
| NOT_CONFIGURED | Integration not set up | CONNECTING |
| CONNECTING | Validating credentials and testing connection | CONNECTED, ERROR |
| CONNECTED | Active and operational | DISCONNECTED, ERROR, RATE_LIMITED |
| DISCONNECTED | Manually disabled by admin | CONNECTING |
| ERROR | Connection failed or credentials invalid | CONNECTING (re-validate), NOT_CONFIGURED (reset) |
| RATE_LIMITED | API rate limit exceeded, temporarily paused | CONNECTED (after cooldown) |
7.15 Amazon Order Fulfillment States (FBM)
| State | Description | Transitions To |
|---|---|---|
| PENDING | Order received from Amazon, awaiting payment confirmation | ASSIGNED, CANCELLED |
| ASSIGNED | Routed to a POS store location for fulfillment | PICKING, CANCELLED |
| PICKING | Store staff locating and scanning items | PACKED, CANCELLED |
| PACKED | Items packaged and ready for shipment | SHIPPED |
| SHIPPED | Carrier picked up, tracking number provided to Amazon | DELIVERED, EXCEPTION |
| DELIVERED | Carrier confirms delivery | Terminal |
| CANCELLED | Order cancelled by customer or seller | Terminal |
| EXCEPTION | Delivery exception (lost, damaged, refused) | SHIPPED (re-attempt), CANCELLED (refund) |
7.16 Product Sync Validation States
| State | Description | Transitions To |
|---|---|---|
| DRAFT | Product created, not yet validated for any channel | VALIDATING |
| VALIDATING | Running cross-platform validation rules | VALID, INVALID |
| VALID | All required fields pass validation for target platform(s) | SYNCING, INVALID (data changed) |
| INVALID | One or more validation rules failed | VALIDATING (after fix) |
| SYNCING | Pushing product data to external platform | SYNCED, SYNC_FAILED |
| SYNCED | Successfully published to external platform | VALID (re-validate on change), SYNCING (update) |
| SYNC_FAILED | Push to platform failed | SYNCING (retry), VALID (re-queue) |
| BLOCKED | Product excluded from sync (e.g., exceeds variant limits) | VALIDATING (after fix) |
Appendix M: Field Specifications Reference
This appendix summarizes the 44 field-level specifications gathered through requirements interviews. Complete Technical User Stories with field-level acceptance criteria are documented in the companion document: Technical-User-Stories-Field-Specs.md (60 stories, 5,285 lines).
M.1 SKU & Barcode Specifications
| Specification | Value |
|---|---|
| SKU Max Length | 20 characters |
| SKU Allowed Characters | Alphanumeric + dash + underscore ([A-Z0-9\-_]) |
| SKU Uniqueness | Unique per tenant (across all locations) |
| Barcode Types Supported | UPC-A (12 digit), EAN-13 (13 digit), Internal |
| Invalid Barcode Behavior | Show error + manual SKU entry option |
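A hedged sketch of how the M.1 rules might be enforced. The SKU regex follows the table directly; the barcode check uses the standard EAN-13 check-digit algorithm (a UPC-A code is an EAN-13 with a leading zero, so 12-digit codes are zero-padded first). Function names are illustrative:

```python
import re

# SKU rule from M.1: max 20 chars, uppercase alphanumeric + dash + underscore.
SKU_RE = re.compile(r"^[A-Z0-9_-]{1,20}$")

def is_valid_sku(sku: str) -> bool:
    return bool(SKU_RE.match(sku))

def ean13_check_ok(code: str) -> bool:
    """Verify the trailing check digit of an EAN-13 (or UPC-A) barcode.

    Digits in odd positions (1st, 3rd, ...) are weighted 1, even positions
    weighted 3; the check digit makes the total a multiple of 10.
    """
    if not code.isdigit() or len(code) not in (12, 13):
        return False
    digits = [int(c) for c in code.zfill(13)]
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]
```

A scan failing `ean13_check_ok` would trigger the "show error + manual SKU entry" path from the table above.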
M.2 Customer Contact Specifications
| Specification | Value |
|---|---|
| Phone Format | E.164 international (e.g., +15551234567 — no separators) |
| Required Customer Fields | First name + Last name + (Phone OR Email) |
| Address Format | Structured fields (Street/City/State/ZIP) |
| ZIP Code Validation | 5 digits or 9 digits (12345 or 12345-6789) |
| Customer Notes Max Length | 500 characters |
M.3 Pricing Specifications
| Specification | Value |
|---|---|
| Price Data Type | DECIMAL(10,2) |
| Price Range | $0.00 - $99,999.99 |
| Tax Rate Format | Percentage with 2 decimals (stored as 8.25) |
| Zero Price Handling | Allowed with mandatory reason code (SAMPLE, DONATION, PROMO, OTHER) |
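Because prices are DECIMAL(10,2) and the tax rate is stored as a percentage (8.25 means 8.25%), monetary math should use exact decimals rather than floats. A minimal sketch; the half-up rounding mode is an assumption, not something the table above specifies:

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
MAX_PRICE = Decimal("99999.99")

def line_tax(price: Decimal, tax_rate: Decimal) -> Decimal:
    """Tax for one line; tax_rate is a stored percentage (8.25 = 8.25%).

    Rounding to the cent with ROUND_HALF_UP is an assumption here.
    """
    if not (Decimal("0.00") <= price <= MAX_PRICE):
        raise ValueError("Price outside $0.00-$99,999.99")
    return (price * tax_rate / Decimal("100")).quantize(CENT, rounding=ROUND_HALF_UP)
```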
M.4 Inventory Reason Codes
| Category | Values |
|---|---|
| Adjustment | SHRINKAGE, DAMAGE, COUNT_CORRECTION, VENDOR_ERROR, FOUND_STOCK, SAMPLE, DONATION, EMPLOYEE_PURCHASE, OTHER |
| Transfer | REBALANCE, REPLENISHMENT, CUSTOMER_REQUEST, OVERSTOCK, CONSOLIDATION, OTHER |
| Non-PO Receive | SAMPLE, REPLACEMENT, FOUND_STOCK, CONSIGNMENT, DONATION, VENDOR_CREDIT_RETURN, RMA_RETURN, OTHER |
| Custom Codes | Predefined + Admin-created custom codes allowed |
M.5 Authentication Specifications
| Specification | Value |
|---|---|
| POS PIN Format | 4 digits numeric only (regex: ^\d{4}$) |
| Lockout Policy | 5 failed attempts → 15-minute lock, admin can unlock |
| Password Requirements | 8+ chars, 1 uppercase, 1 number |
| Manager Override | Manager must enter their own PIN (creates audit trail) |
M.6 Payment Specifications
| Specification | Value |
|---|---|
| Card Minimum | None (accept any amount) |
| Supported Card Types | Visa, Mastercard, Amex, Discover |
| Payment Decline Message | “Payment declined. Please try another payment method.” |
| Cash Maximum | $10,000 (IRS reporting threshold) |
M.7 Product Variant Specifications
| Specification | Value |
|---|---|
| Size Values | Predefined by category (Apparel: XS-3XL, Shoes: 5-15, Pants: 28-44), admin can add |
| Color Values | Predefined list (20+ standard) + admin custom additions |
| Max Dimension Values | 50 per dimension |
| Dimension Value Max Length | 30 characters |
M.8 Custom Field Specifications
| Specification | Value |
|---|---|
| Dropdown Max Options | 25 options per dropdown |
| Text Field Max Length | 255 characters |
| Number Field Precision | DECIMAL(10,4), max 999,999.9999 |
| Fields Per Entity | 50 max per entity type (Product/Customer/Order/Vendor) |
M.9 Approval Workflow Specifications
| Specification | Value |
|---|---|
| Threshold Type | Configurable per action ($ or %) |
| Timeout Action | Auto-reject after 48 hours |
| Self-Approval | Not allowed — different person must approve |
| Default Thresholds | Fully configurable by Tenant Admin during setup |
M.10 Receipt & Email Specifications
| Specification | Value |
|---|---|
| Receipt Line Length | 40 characters per line (80mm thermal) |
| Email Subject Max Length | 100 characters |
| Email Body Max Size | 50 KB (HTML) |
| Merge Field Syntax | {{FIELD_NAME}} |
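Merge fields use the {{FIELD_NAME}} syntax, which maps to a one-line regex substitution. In this sketch, unknown fields are deliberately left in place so a broken template is visible during preview; that fallback behavior is an assumption, not a stated requirement:

```python
import re

# Matches {{FIELD_NAME}} tokens: uppercase letters, digits, underscores.
MERGE_RE = re.compile(r"\{\{([A-Z0-9_]+)\}\}")

def render_template(body: str, fields: dict[str, str]) -> str:
    """Replace {{FIELD_NAME}} tokens; unknown fields are left intact."""
    return MERGE_RE.sub(lambda m: fields.get(m.group(1), m.group(0)), body)
```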
M.11 Time & Date Specifications
| Specification | Value |
|---|---|
| Time Format Display | 12-hour AM/PM (e.g., 9:00 AM) |
| Time Storage | 24-hour format |
| Clock-In Duration | Maximum 16 hours; auto-alert to manager if clock-out not recorded within 16 hours |
| Default Timezone | America/New_York (Eastern) |
| Date Format Display | MM/DD/YYYY |
| Date Storage | ISO format (YYYY-MM-DD) |
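Storing ISO dates and 24-hour times while displaying MM/DD/YYYY and 12-hour AM/PM is a pure formatting conversion. A minimal sketch (the helper name is illustrative):

```python
from datetime import datetime

def display_datetime(iso_date: str, hhmm: str) -> str:
    """Convert stored ISO date + 24-hour time to the M.11 display formats."""
    d = datetime.strptime(f"{iso_date} {hhmm}", "%Y-%m-%d %H:%M")
    # %I is zero-padded, so strip the leading zero to get "9:00 AM", not "09:00 AM"
    return d.strftime("%m/%d/%Y ") + d.strftime("%I:%M %p").lstrip("0")
```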
M.12 Error Message Specifications
| Specification | Value |
|---|---|
| Error Max Length | 80 characters |
| Error Code Format | Prefix with [ERR-XXXX] |
| Error Tone | Direct and instructive (e.g., “Enter a valid email address.”) |
| Error Display | Inline below each field |
M.13 Error Code Ranges
| Module | Error Code Range |
|---|---|
| Module 1: Sales | ERR-1001 to ERR-1099 |
| Module 2: Customers | ERR-2001 to ERR-2099 |
| Module 3: Catalog | ERR-3001 to ERR-3099 |
| Module 4: Inventory | ERR-4001 to ERR-4099 |
| Module 5: Setup | ERR-5001 to ERR-5099 |
| Module 6: Integrations (General) | ERR-6001 to ERR-6009 |
| Module 6: Shopify | ERR-6010 to ERR-6029 |
| Module 6: Amazon SP-API | ERR-6030 to ERR-6049 |
| Module 6: Google Merchant | ERR-6050 to ERR-6069 |
| Module 6: Product Validation | ERR-6070 to ERR-6079 |
| Module 6: Inventory Sync | ERR-6080 to ERR-6089 |
| Module 6: Payment Integration | ERR-6090 to ERR-6094 |
| Module 6: Email Integration | ERR-6095 to ERR-6097 |
| Module 6: Shipping Integration | ERR-6098 to ERR-6099 |
Companion Document: For complete Technical User Stories with field-level acceptance criteria, see:
Technical-User-Stories-Field-Specs.md
Document History
| Version | Date | Changes |
|---|---|---|
| 10.0 | 2026-01-26 | Initial unified specification |
| 11.0 | 2026-01-26 | Added: Gift Cards, Dedicated Exchanges, Price Tiers, Special Orders, Multi-Store Inventory (full payment required), Commissions, Return Policy Engine, Serial Numbers, Hold for Pickup, Cash Drawer Management, Price Check Mode, Coupons, Flexible Loyalty, Customer Groups, Customer Notes, Communication Preferences |
| 11.1 | 2026-01-26 | Added: State Machine Diagrams (8 entities), Gherkin Acceptance Criteria (Sales & Customers), Business Rules Configuration (YAML), State Machine Reference section |
| 12.0 | 2026-01-26 | Major Update: Added Section 1.16 Offline Operations (queue-and-sync), Section 1.17 Tax Calculation Engine (custom, Virginia-first with expansion design), Section 1.18 Payment Integration (SAQ-A semi-integrated). Fixed inconsistencies: Hold for Pickup state machine reconciled, Discount calculation order clarified (added loyalty redemptions), Credit limit calculation documented (includes pending layaways), Void vs Return distinction clarified (same-day vs after). Added missing user stories: Payment failures, Receipt reprinting, Privacy compliance (GDPR-style). Added Parked Sale state machine. Updated Commission rules for proportional reversal on returns. Added Gift Card jurisdiction compliance (California cash-out). Added Customer self-service and privacy workflows. New state machines: Parked Sales (3.2), Offline Mode (3.11). Updated all Gherkin acceptance criteria. |
| 13.0 | 2026-02-01 | Major Update: (1) Renamed all RFID references to Scanner for technology-agnostic input. (2) Removed Cash Rounding feature. (3) Added Affirm third-party financing (BNPL) as payment method. (4) Receipt scanning now required for return validation. (5) Added X-Report for mid-shift cash audits. (6) New Section 1.7.3 Ship to Customer from Other Location with carrier integration. (7) Card refund now offers staff choice between manual on terminal or automatic via token. (8) Multiple credit cards and cash+card(s) combinations supported. (9) Renamed Integrated Card to Credit Card for clarity. (10) Added Exchange to toggle mode (Sale/Return/Exchange). (11) Loyalty Redemption now applies after tax calculation. (12) Return/exchange policy is configurable in Settings/Setup, not hardcoded. (13) Offline mode notifies customers via email when transfer/ship/reserve items sold at source. (14) Added Reports & Email Templates sub-sections under all major sections (50 reports, 6 email templates). (15) Clarified Reservation vs Hold for Pickup distinction with BOPIS examples. (16) Added Online/In-Store return and exchange policy examples. |
| 14.0 | 2026-02-02 | Major Catalog Expansion: Expanded Section 3 (Catalog Module) from 8 subsections to 23 subsections. (1) New Pricing Engine with 5-level price hierarchy, price books, 4 promotion types, markdown workflows with accountability controls. (2) New Multi-Channel Management with visibility controls, inventory allocation, and channel-specific pricing. (3) New Shopify Integration with POS-master sync strategy, field-level ownership model, and optional bi-directional mode. (4) New Vendor RMA workflow (8-state machine). (5) New Reorder Management with velocity-based dynamic reorder points and auto-generated draft POs. (6) New Inventory Control with 6 statuses, 5 counting methods, approval-gated adjustments, and unified receiving workflow. (7) New Inter-Store Transfers with state machine and auto-rebalancing. (8) New Serial & Lot Tracking. (9) New Landed Cost & Weighted Average Costing. (10) New Product Movement History with stock ledger. (11) New Product Search & Discovery with full-text search, filters, substitutions. (12) New Label & Price Tag Printing with templates. (13) New Product Media management (images + video). (14) New Product Notes & Attachments with structured types. (15) New Catalog Permissions & Approvals (RBAC, field-level, approval workflows). (16) New Product Performance Analytics (ABC classification, embedded metrics). (17) Expanded Product Data Model with retail attributes, custom fields, UoM, shipping, templates, matrix management. (18) Expanded Categories to include formal seasons and reporting dimensions. (19) Expanded User Stories with 14 new epics (F through S) and comprehensive Gherkin acceptance criteria. (20) Added “Features Not Needed” section documenting explicit exclusions (warranties, consignment, expiration, assembly, recalls, product-level tax). |
| 15.0 | 2026-02-04 | New Inventory Module: (1) Created Module 4: Inventory Management with 19 sections (4.1-4.19). (2) Moved inventory content (PO, RMA, Transfers, Counts, Costing, Movement History) from Catalog Module 3 to dedicated Inventory Module 4. (3) Reduced Catalog Module from 23 to 15 sections (3.1-3.15). (4) New sections: POS & Sales Integration (reserve/commit model), Online Order Fulfillment (nearest-store), Offline Inventory Operations (queue + conflict resolution), Alerts & Notifications (5 types + 4 email templates), Inventory Dashboard & Reports (dedicated KPIs + 33 reports), Inventory Business Rules YAML. (5) Added 16 user story epics (4.A-4.P) with 42 stories and 10 Gherkin feature files (52 scenarios). (6) Added cross-references between Modules 1, 3, and 4. (7) Renumbered State Machine Reference to Section 5 and Business Rules to Section 6. (8) Added 20 new decisions (#35-54) covering receiving, counting, transfers, POS integration, offline ops, fulfillment, alerts, and dashboard. |
| 16.0 | 2026-02-04 | New Setup & Configuration Module: (1) Created Module 5: Setup & Configuration with 21 sections (5.1-5.21). (2) System settings with core, operational, and branding configuration. (3) Multi-currency support with USD base and manual exchange rates. (4) Flat location hierarchy (Location → Zones) with predefined and custom zones. (5) User profiles with predefined roles (Staff/Manager/Admin/Buyer/Owner) and configurable feature toggles. (6) Register management with device pairing and two profiles (Full POS, Mobile Checkout). (7) Central printer registry with register linking. (8) Simple flat tax rate per location. (9) Predefined + custom UoMs with conversion factors. (10) Payment method configuration per location with processor settings. (11) Per-entity custom fields (Product/Customer/Order/Vendor). (12) Per-action approval workflows with escalation. (13) Full receipt builder with email receipt template. (14) Central email template registry with SMTP configuration. (15) Integration hub for Shopify, payments, and email providers. (16) Loyalty settings split from Module 2 (rules in M2, settings in M5). (17) Configurable audit logging with retention and export. (18) Consolidated all Business Rules YAML from old Section 6 and Section 4.18 into Module 5.19, organized by module. (19) 14-step tenant onboarding wizard with go-live validation. (20) 17 user story epics (5.A-5.Q) with 51 Gherkin scenarios. (21) Removed old Section 6 (Business Rules Configuration). (22) Renumbered State Machine Reference to Section 6. (23) Added 17 new decisions (#55-71). |
| 17.0 | 2026-02-06 | Field Specifications & Technical User Stories: (1) Completed 12-round requirements interview gathering 44 field-level specifications. (2) Created 60 Technical User Stories with field-level acceptance criteria in companion document. (3) Added Appendix M: Field Specifications Reference summarizing all validation rules, data types, formats, and error codes. (4) Established error code ranges: ERR-1xxx (Sales), ERR-2xxx (Customers), ERR-3xxx (Catalog), ERR-4xxx (Inventory), ERR-5xxx (Setup). (5) Documented reason codes for inventory adjustments, transfers, and non-PO receiving. (6) Defined authentication specs (4-digit PIN, 8+ char password, 5 failures = 15-min lockout). (7) Specified product variant constraints (50 values per dimension, 30 chars each). (8) Set approval timeout to auto-reject after 48 hours. (9) Added 12 new decisions (#72-83) for field-level specifications. |
| 18.0 | 2026-02-17 | New Integrations Module & Multi-Platform Expansion: (1) Created Module 6: Integrations & External Systems with 13 sections (6.1-6.13). (2) Moved integration content from Modules 1, 3, 4, and 5 into Module 6 with redirect stubs at original locations. Moved sections: 3.7 Shopify Integration → 6.3, 4.14.3 Inventory Sync with Shopify → 6.3.14, 5.11.3 Payment Processor Configuration → 6.8.3, 5.15.1 Email Provider Configuration → 6.9.1, 5.16 Integrations Hub → 6.11. (3) New Section 6.2: Integration Architecture with provider abstraction, retry/backoff, circuit breaker, idempotency framework, rate limit management, and webhook processing pipeline. (4) Enhanced Shopify Integration (Section 6.3) with GraphQL API preference, @idempotent directive (mandatory 2026-04), Bulk Operations API, POS UI Extensions, 2026 rate limits, webhook topics catalog, third-party POS integration rules, sync rules & best practices (single source of truth, location config, real-time sync, omnichannel/BOPIS, staff security), and hardware compatibility. (5) New Amazon SP-API Integration (Section 6.4) with OAuth via LWA, Catalog Items API, Listings Items API, Orders API (2-min polling), FBA Inventory API (ASIN/SKU/FNSKU mapping, 7 inventory states), SQS/EventBridge notifications, per-endpoint rate limits, and comprehensive compliance & seller requirements (seller code of conduct, packaging/labeling, FBA vs FBM support, safety buffers, order routing). (6) New Google Merchant API Integration (Section 6.5) with service account auth, ProductInput/Product resource split, local inventory sync (storeCode-level), push notifications, rate limits, required product data fields (11 mandatory + 8 recommended), image quality requirements (8 rules), disapproval prevention rules (10 policies with automated pre-sync validation), Google Business Profile integration, and Content API migration plan (EOL 2026-08-18). (7) New Cross-Platform Product Data Requirements (Section 6.6) with unified validation matrix (strictest-rule-wins), image requirements matrix, pre-sync validation engine with data model, and platform-specific product attributes for Amazon, Google, and Shopify. (8) New Cross-Platform Inventory Sync Rules (Section 6.7) with real-time sync architecture, safety buffer configuration (3 calculation modes: FIXED/PERCENTAGE/MIN_RESERVE), oversell prevention (reserve-on-order, first-commit-wins), channel-specific inventory rules, and graduated sync failure handling. (9) Consolidated Payment Processor Integration (Section 6.8), Email Provider Integration (Section 6.9), Carrier & Shipping Integration (Section 6.10), and enhanced Integration Hub (Section 6.11) with AMAZON and GOOGLE_MERCHANT added to integration_type enum. (10) New Integration Business Rules YAML (Section 6.12) covering shopify, amazon, google_merchant, product_validation, inventory_sync, and global settings. (11) New Integration User Stories (Section 6.13) with 7 epics (6.A-6.G) and 10 Gherkin feature files covering Shopify sync, Amazon orders, Google local inventory, cross-platform validation, safety buffers, payment terminals, and integration hub. (12) Renumbered State Machine Reference from Section 6 to Section 7. (13) Added 4 new state machines: Integration Sync States (7.13), Integration Connection States (7.14), Amazon Order Fulfillment States (7.15), Product Sync Validation States (7.16). (14) Added ERR-6001 to ERR-6099 error code range to Appendix M.13 with sub-ranges per provider. (15) Added 16 new decisions (#84-99) covering module structure, API choices, compliance rules, validation strategy, safety buffers, and dual fulfillment support. |
| 19.0 | 2026-02-19 | Tax Redesign & Simplification Update: (1) Replaced flat tax_rate with tax_jurisdiction_id FK supporting 3-level compound taxes (State/County/City). (2) Added is_franchise Boolean to locations. (3) Removed zone tracking from all modules. (4) Removed role-based location access enforcement; user_locations informational only. (5) Simplified shift management to clock-in/clock-out. (6) Added register IP modification limit (2/365 days). (7) Restricted register retirement to OWNER with type-to-confirm. (8) Added Decisions #100-107. |
Decision Log
Decisions captured during BRD review and refinement:
| # | Decision | Choice | Rationale | Date |
|---|---|---|---|---|
| 1 | Offline Strategy | Queue-and-sync | Allows continued operation, sync on reconnect | 2026-01-26 |
| 2 | Tax Engine | Build custom | Full control over jurisdiction rules, expansion flexibility | 2026-01-26 |
| 3 | Payment PCI Scope | SAQ-A | Simplest compliance, card data never touches system | 2026-01-26 |
| 4 | Multi-tenant Isolation | Schema-based | Balance of isolation and operational efficiency | 2026-01-26 |
| 5 | Commission Reversal | Proportional on returns | Fair to employees, full reversal only on voids | 2026-01-26 |
| 6 | Geographic Scope | Virginia → US → International | Design for most restrictive jurisdiction from start | 2026-01-26 |
| 7 | Gift Card Default | No expiry (California rules) | Most restrictive as baseline, enable where permitted | 2026-01-26 |
| 8 | Discount Order | Added loyalty redemptions before tax | Complete calculation order documented | 2026-01-26 |
| 9 | Credit Limit | Include pending layaways | Accurate available credit calculation | 2026-01-26 |
| 10 | Void vs Return | Void = same day only | Clear distinction for commission handling | 2026-01-26 |
| 11 | Scanner Terminology | Replace RFID with Scanner | Technology-agnostic input device naming | 2026-02-01 |
| 12 | Cash Rounding | Removed | Not required for business operations | 2026-02-01 |
| 13 | Third-Party Financing | Affirm as BNPL provider | Customer financing option, store receives full payment immediately | 2026-02-01 |
| 14 | Receipt Validation | Mandatory scanning before returns | Prevents fraudulent returns, system validates receipt authenticity | 2026-02-01 |
| 15 | X-Report | Mid-shift cash audit (does not close drawer) | Enables shift handoffs and spot-checks without closing drawer | 2026-02-01 |
| 16 | Ship to Customer | Direct shipping from source store | Carrier API integration for real-time shipping cost calculation | 2026-02-01 |
| 17 | Card Refund Method | Staff choice: manual or auto via token | Flexibility for customer-present and customer-absent scenarios | 2026-02-01 |
| 18 | Multi-Card Payment | Multiple cards + cash+card(s) allowed | Each card token stored separately for individual refund processing | 2026-02-01 |
| 19 | Loyalty After Tax | Redemption applies after tax calculation | Loyalty discount reduces final total including tax | 2026-02-01 |
| 20 | Return Policy Config | Manually configured in Settings/Setup | Per-tenant, per-store, per-channel (online vs in-store) | 2026-02-01 |
| 21 | Offline Sold Notification | Email customer via TMPL-OFFLINE-SOLD | Customer informed when transfer/ship/reserve item unavailable | 2026-02-01 |
| 22 | Hold for Pickup Scope | In-store holds + BOPIS | Clear distinction from Reservation (different store) | 2026-02-01 |
| 23 | Pricing Model | Centralized + Price Books + Channel overrides | 5-level hierarchy enables flexible pricing without complexity | 2026-02-02 |
| 24 | Shopify Sync | POS-master default, optional bi-directional | Industry standard (Lightspeed, Retail Pro, SKU IQ all use POS-master); bi-directional option for 3rd party Shopify editors | 2026-02-02 |
| 25 | Shopify Field Ownership | Per-field direction model | Eliminates conflicts by assigning clear ownership; SEO stays in Shopify, product data stays in POS | 2026-02-02 |
| 26 | Inventory Sync | Always bidirectional | Inventory quantities sync both ways regardless of catalog sync mode | 2026-02-02 |
| 27 | Consignment | Not supported | All inventory purchased outright; no consignment tracking needed | 2026-02-02 |
| 28 | Warranties | Not supported | Warranty tracking handled outside POS system | 2026-02-02 |
| 29 | Product Expiration | Not applicable | Clothing/accessories business; no expiration dates needed | 2026-02-02 |
| 30 | Product Assembly | Not needed | Bundles are virtual pricing groupings only, not physical assembly | 2026-02-02 |
| 31 | Product Tax | Location-based only | No product-level tax variation; tax determined by store jurisdiction | 2026-02-02 |
| 32 | Reorder Strategy | Velocity-based dynamic | Dynamic reorder points from sales velocity, not static thresholds; seasonal adjustment | 2026-02-02 |
| 33 | Costing Method | Weighted average | Recalculated on every PO receive; used for COGS and margin | 2026-02-02 |
| 34 | Markdown Accountability | Formal workflow + approval | All price changes tracked with who/when/old/new/reason; manager approval required | 2026-02-02 |
| 35 | Receive Mode | Open receive | Staff sees expected qty; faster workflow; variances still recorded | 2026-02-04 |
| 36 | Receiving Discrepancies | Triple approach | Note variance + auto-RMA draft + quarantine damaged goods | 2026-02-04 |
| 37 | Non-PO Receiving | Allow with reason code | Supports samples, replacements, return-to-stock, found stock | 2026-02-04 |
| 38 | Over-shipment | Threshold-based | Allow up to configurable %; above requires manager approval | 2026-02-04 |
| 39 | Adjustment Approval | All require manager | Strongest control; every adjustment must be reviewed and approved | 2026-02-04 |
| 40 | Custom Reason Codes | Standard + tenant-defined | Standard set plus ability to add custom codes per tenant | 2026-02-04 |
| 41 | Count Freeze | Configurable per count | Manager chooses freeze or snapshot per count session | 2026-02-04 |
| 42 | Count Input | Scanner-primary | Barcode scan increments by 1; manual override for damaged barcodes | 2026-02-04 |
| 43 | Transfer Initiation | Both directions + auto-suggest | HQ push + store pull + system auto-suggests from supply imbalances | 2026-02-04 |
| 44 | Allocation Strategy | Manager manual | Manual allocation when multiple stores need scarce item | 2026-02-04 |
| 45 | Zone Tracking | Removed in v19.0 | Zones eliminated; inventory tracked per-location only | 2026-02-04 |
| 46 | Overstock Returns | Supported | Negotiated return of seasonal/end-of-line unsold goods to vendor | 2026-02-04 |
| 47 | Sale Decrement | Reserve + commit | Reserve on cart add, commit on payment; reserve for parked/held | 2026-02-04 |
| 48 | Offline Inventory | Queue + conflict resolution | Queue all changes offline; conflict resolution on reconnect | 2026-02-04 |
| 49 | Min Display Qty | Advisory only | Soft warning; doesn’t block sales or transfers | 2026-02-04 |
| 50 | Return to Stock | Auto to available | Customer returns auto-go to Available; staff marks damaged separately | 2026-02-04 |
| 51 | Online Fulfillment | Nearest store | Reserve inventory from store closest to customer shipping address | 2026-02-04 |
| 52 | PO Approval | Threshold-based | Auto-approve under configurable $; manager approval above | 2026-02-04 |
| 53 | Inventory Dashboard | Dedicated | Standalone dashboard with inventory KPIs separate from main admin | 2026-02-04 |
| 54 | Dead Stock | Alert + report only | Flag no-sales items; manager decides action manually | 2026-02-04 |
| 55 | Permission Model | Predefined roles + feature toggles | Balance between simplicity and flexibility; roles are fixed, feature access is configurable | 2026-02-04 |
| 56 | Location Hierarchy | Flat (Locations only) | Simple single-level sufficient for current scale; no multi-region complexity | 2026-02-04 |
| 57 | Register Config | Full (list + device + profiles) | Complete hardware management; two profiles (Full POS, Mobile) cover all use cases | 2026-02-04 |
| 58 | Tax Model | Originally flat rate per location | Upgraded to 3-level compound model (State/County/City) per Decision #100 | 2026-02-04 |
| 59 | UoM Approach | Predefined + custom with conversions | Standard units provided; custom UoMs with conversion factors for flexible product measurement | 2026-02-04 |
| 60 | Supplier Config | Payment terms + lead times only | Lean setup; full vendor data stays in Module 3 Section 3.8 | 2026-02-04 |
| 61 | Custom Fields | Per-entity, no field groups | Simple field management per entity type; no grouping or validation rules needed | 2026-02-04 |
| 62 | Approval Workflows | Per-action rules | Each approvable action configured independently; granular control without matrix complexity | 2026-02-04 |
| 63 | Email Templates | Central registry + SMTP | Single management point for all templates; SMTP/provider config centralized | 2026-02-04 |
| 64 | System Branding | Full suite (core + operational + branding) | Complete tenant customization including login page, receipt branding, report headers | 2026-02-04 |
| 65 | YAML Consolidation | All into Module 5 | Single source of truth; removes Section 6 and absorbs Section 4.18 | 2026-02-04 |
| 66 | Receipt Config | Full builder | Field selection, ordering, sizing, paper width — maximum receipt customization | 2026-02-04 |
| 67 | Payment Config | Methods + processors in Module 5 | Centralizes payment setup with per-location enable/disable and processor credentials | 2026-02-04 |
| 68 | Currency | Multi-currency, USD base, manual rates | Supports vendor POs in vendor currency; manual rate management sufficient | 2026-02-04 |
| 69 | Audit Logging | Configurable categories | Admin toggles which actions are logged; configurable retention and export | 2026-02-04 |
| 70 | Loyalty Settings | Split (rules M2, settings M5) | Business logic stays with customer module; configurable values centralized in Setup | 2026-02-04 |
| 71 | Tenant Onboarding | Step-by-step wizard | Documented 14-step setup flow for new tenant provisioning | 2026-02-04 |
| 72 | SKU Format | 20 chars, alphanumeric + dash/underscore | Industry standard; covers most retail SKU schemes | 2026-02-06 |
| 73 | Barcode Types | UPC-A, EAN-13, Internal only | Standard retail barcodes; internal for custom use | 2026-02-06 |
| 74 | Phone Format | E.164 international | Supports global customers with standardized format | 2026-02-06 |
| 75 | Price Precision | DECIMAL(10,2), max $99,999.99 | Standard retail pricing; sufficient for high-value items | 2026-02-06 |
| 76 | POS PIN Format | 4 digits numeric | Fast entry on touchscreen; familiar to retail staff | 2026-02-06 |
| 77 | Password Rules | 8+ chars, 1 upper, 1 number | Standard strength; balances security with usability | 2026-02-06 |
| 78 | Card Decline Message | Generic “Payment declined” | Protects customer privacy; reduces fraud hints | 2026-02-06 |
| 79 | Variant Dimensions | 50 values max, 30 chars each | Covers extensive size/color ranges with reasonable limits | 2026-02-06 |
| 80 | Custom Field Limits | 50 fields/entity, 25 dropdown options | Generous limits without performance impact | 2026-02-06 |
| 81 | Approval Timeout | Auto-reject after 48 hours | Prevents stuck requests; requester re-submits if needed | 2026-02-06 |
| 82 | Error Code Format | [ERR-XXXX] prefix | Easy support reference; traceable in logs | 2026-02-06 |
| 83 | Error Display | Inline below each field | Immediate visual feedback; industry standard UX | 2026-02-06 |
| 84 | Dedicated Integration Module | Create Module 6 for all integrations | Consolidates scattered integration content from 5 modules into single authoritative source; reduces duplication and cross-reference complexity | 2026-02-17 |
| 85 | Amazon SP-API Authentication | OAuth 2.0 via Login with Amazon (LWA) | Amazon’s required auth method; 1-hour tokens with automatic refresh; regional endpoint support | 2026-02-17 |
| 86 | Google Merchant API Version | Merchant API v1 (migrate from Content API) | Content API for Shopping EOL August 18, 2026; v1 is the successor API with ProductInput/Product resource split | 2026-02-17 |
| 87 | Shopify API Preference | GraphQL over REST | GraphQL has higher rate limits (50pts/sec vs 2req/sec), more efficient queries, better pagination; Shopify’s recommended approach | 2026-02-17 |
| 88 | Shopify Idempotency | @idempotent directive mandatory on all mutations | Required by Shopify starting 2026-04; prevents duplicate operations on retry; uses SHA-256 idempotency key | 2026-02-17 |
| 89 | Amazon Notifications | SQS push notifications | Real-time event delivery for order changes, inventory events, listing status; avoids polling overhead | 2026-02-17 |
| 90 | Google Local Inventory | Store-level local inventory sync | Google requires storeCode-level granularity; maps directly to POS per-location inventory model | 2026-02-17 |
| 91 | Integration Error Codes | ERR-6xxx range | Dedicated error code range for integration module; sub-ranges per provider (6010-6029 Shopify, 6030-6049 Amazon, 6050-6069 Google) | 2026-02-17 |
| 92 | Circuit Breaker | 5 failures in 60 seconds triggers OPEN | Standard resilience pattern; 30-second cooldown before HALF_OPEN probe; prevents cascade failures to external APIs | 2026-02-17 |
| 93 | Provider Abstraction | IIntegrationProvider interface | Common interface for all providers (Connect, Sync, GetStatus, ValidateCredentials); enables consistent error handling and monitoring | 2026-02-17 |
| 94 | Amazon Sync Direction | POS-master default | Consistent with Decision #24 (Shopify POS-master); all external channels receive product data from POS as source of truth | 2026-02-17 |
| 95 | Redirect Stubs | Cross-reference stubs in original locations | Moved sections leave redirect stubs (e.g., “See Module 6, Section 6.3”) to prevent broken references; stubs include brief scope statement | 2026-02-17 |
| 96 | Cross-Platform Validation | Strictest-rule-wins approach | POS enforces the most restrictive requirement across all platforms (e.g., 150-char title from Google, 1000x1000px image from Amazon); ensures products valid everywhere | 2026-02-17 |
| 97 | Safety Buffer | Configurable per-product per-channel inventory buffer | Formula: Channel Available = POS Available - Safety Buffer; prevents overselling during sync delays; critical for Amazon (2-min) and Google (30-min) latency | 2026-02-17 |
| 98 | Dual Fulfillment | Support both FBA and FBM for Amazon | FBA inventory tracked separately (Amazon manages); FBM uses POS pick-pack-ship workflow; per-product fulfillment method configuration | 2026-02-17 |
| 99 | Third-Party POS Rules | Shopify third-party POS integration compliance | Non-native POS must use OAuth, support real-time sync, not bypass checkout; POS is source of truth with field ownership model controlling data flow | 2026-02-17 |
| 100 | Compound Tax Model | 3-level compound (State/County/City) replaces flat rate per location | Virginia model requires state + regional + local stacking; single flat rate insufficient for multi-jurisdiction compliance | 2026-02-19 |
| 101 | Tax Jurisdiction FK | Location references tax_jurisdictions table instead of storing flat rate | Decouples tax configuration from location entity; enables shared jurisdictions and compound rate management | 2026-02-19 |
| 102 | Franchise Flag | is_franchise boolean on locations table | Distinguishes franchise vs company-owned locations for operational rules, reporting, and fee structures | 2026-02-19 |
| 103 | Location Access Informational | user_locations no longer restricts transaction processing; assignments used for defaults and reporting only | Simplifies permission model; all users can operate at any tenant location | 2026-02-19 |
| 104 | Register IP Limit | Max 2 IP address changes per rolling 365 days, tracked in register_ip_changes audit table | Prevents frequent device swapping; ensures hardware stability and audit traceability | 2026-02-19 |
| 105 | Register Retire Safety | OWNER-only with type-to-confirm ‘RETIRE’ | Prevents accidental permanent decommission; strongest safety for irreversible action | 2026-02-19 |
| 106 | Shift Simplification | Simple clock-in/clock-out replaces full shift management (shift types, assignments, handoff notes removed) | Full shift scheduling adds complexity without proportional value; clock-in/out sufficient for time tracking and payroll | 2026-02-19 |
| 107 | Zone Removal | Zones within locations removed; per-location inventory tracking only | Zone sub-divisions added complexity without sufficient operational value; per-location granularity sufficient for current scale | 2026-02-19 |
| 108 | RFID Scope | Counting only — strip lifecycle fields (sold_at, transferred_at) | RFID used exclusively for inventory counting; sales and receiving handled by barcode scanner; simplifies tag status to (active, void, lost) | 2026-02-25 |
| 109 | Build Custom Raptag | Build .NET MAUI app (not buy off-shelf) | Full control over RFID counting UX, offline-first with SQLite, Zebra SDK integration, multi-operator support | 2026-02-25 |
| 110 | Chunked Upload | 5,000 events per chunk with idempotent dedup | Enterprise scale (100K+ tags) requires chunked sync; UNIQUE(session_id, epc) constraint makes retries safe; resume via upload-status endpoint | 2026-02-25 |
| 111 | Multi-Operator Sessions | Up to 10 operators per session with section assignment | Large stores need parallel counting; session_operators table tracks who scanned where; server deduplicates by highest RSSI | 2026-02-25 |
| 112 | Auto-Save | 30-second SQLite checkpoint flush + session recovery | Protects against data loss from app crashes, battery death; recovery dialog on restart with Resume/Discard options | 2026-02-25 |
| 113 | EPC Serial Numbering | PostgreSQL SEQUENCE per tenant (not column-based) | Concurrent-safe serial assignment; avoids race conditions from last_serial_number column approach | 2026-02-25 |
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2026-02-25 |
| Updated | 2026-02-25 |
| Source | BRD v20.0 (19,900+ lines, 7 modules, 113 decisions) |
| Author | Claude Code |
| Status | Active |
| Part | II - Architecture |
| Chapter | 05 of 32 |
Change Log
| Version | Date | Changes |
|---|---|---|
| 4.0.0 | 2026-02-25 | BRD v20.0 integrated as Chapter 08: Architecture Components |
Next Chapter: Chapter 06: Database Strategy
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 06: Database Strategy
PostgreSQL 16 on Shared Infrastructure
6.1 Overview
This chapter defines the database strategy for the POS Platform, using PostgreSQL 16 on shared infrastructure. The strategy balances performance, isolation, and operational simplicity for a multi-tenant SaaS application.
Key Decisions
| Decision | Choice | Rationale |
|---|---|---|
| Database Engine | PostgreSQL 16 | JSONB support, excellent concurrency, mature ecosystem |
| Multi-Tenancy | Row-Level Security (RLS) | Database-enforced isolation, simpler ops, no schema sprawl |
| Shared Tables | shared schema | Platform-wide tenants, subscription plans, feature flags |
| Connection Pooling | PgBouncer | Essential for multi-tenant connection efficiency |
| Hosting | Shared container (postgres16) | Existing infrastructure, reduced ops complexity |
6.2 Infrastructure Architecture
Physical Deployment
┌─────────────────────────────────────────────────────────────────────────────┐
│ SYNOLOGY NAS (192.168.1.26) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ PostgreSQL 16 Container │ │
│ │ (postgres16) │ │
│ │ │ │
│ │ Port: 5433 (external) → 5432 (internal) │ │
│ │ Data: /volume1/docker/postgres/data │ │
│ │ Network: postgres_default │ │
│ │ │ │
│ │ ┌───────────────────────────────────────────────────────────────┐ │ │
│ │ │ pos_platform Database │ │ │
│ │ │ │ │ │
│ │ │ ┌──────────────┐ ┌─────────────────────────────────────┐ │ │ │
│ │ │ │ shared │ │ public schema │ │ │ │
│ │ │ │ schema │ │ (all tenant tables with RLS) │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ tenants │ │ products (tenant_id + RLS) │ │ │ │
│ │ │ │ plans │ │ orders (tenant_id + RLS) │ │ │ │
│ │ │ │ features │ │ customers (tenant_id + RLS) │ │ │ │
│ │ │ │ │ │ inventory (tenant_id + RLS) │ │ │ │
│ │ │ │ │ │ ... all other tenant tables │ │ │ │
│ │ │ └──────────────┘ └─────────────────────────────────────┘ │ │ │
│ │ │ │ │ │
│ │ └───────────────────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Other Databases: salessight_db, stanly_db, shopsyncflow_db, ... │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────┐ ┌──────────────────────────────────────┐ │
│ │ PgBouncer │◄───────►│ Application Containers │ │
│ │ Port: 6432 │ │ (pos-api, pos-admin, etc.) │ │
│ └─────────────────────┘ └──────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Database Creation
-- Create the main POS platform database
CREATE DATABASE pos_platform
WITH OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
TEMPLATE = template0;
-- Connect to the new database
\c pos_platform
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; -- UUID generation
CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- Cryptographic functions
CREATE EXTENSION IF NOT EXISTS "btree_gist"; -- GiST index support
CREATE EXTENSION IF NOT EXISTS "pg_trgm"; -- Trigram text search
-- Create the shared schema for platform-wide tables
CREATE SCHEMA IF NOT EXISTS shared;
-- Grant usage to the application role
-- (the pos_app role itself is created in Section 6.9)
GRANT USAGE ON SCHEMA shared TO pos_app;
GRANT USAGE ON SCHEMA public TO pos_app;
6.3 Row-Level Security (RLS) Architecture
Why Row-Level Security?
| Approach | Pros | Cons | Our Choice |
|---|---|---|---|
| Row-level (RLS) | Single schema, simpler ops, database-enforced isolation, no schema sprawl | All tenants share tables | Yes |
| Schema-per-tenant | Strong logical isolation, easy per-tenant backup | Many schemas, complex migrations per schema, connection overhead | No |
| Database-per-tenant | Maximum physical isolation | High resource usage, complex management | No |
Row-Level Security was selected because the BRD v19.0 data models already include tenant_id UUID on every tenant-scoped table (135+ occurrences across all modules). RLS enforces isolation at the database level as defense-in-depth, preventing accidental cross-tenant data access even if application code has bugs.
How RLS Works
All tenants share the same tables in the public schema. Every tenant-scoped table includes a tenant_id UUID NOT NULL column. PostgreSQL RLS policies automatically filter rows so each tenant can only see and modify their own data.
┌─────────────────────────────────────────────────────────────────┐
│ RLS Data Flow │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Request arrives for tenant "nexus" │
│ ┌─────────────────────────────────┐ │
│ │ POST /api/products │ │
│ │ Authorization: Bearer <jwt> │ │
│ └──────────────┬──────────────────┘ │
│ │ │
│ 2. Middleware extracts tenant_id from JWT │
│ │ │
│ 3. Application sets PostgreSQL session variable │
│ ┌─────────────────────────────────┐ │
│ │ SET app.current_tenant = │ │
│ │ 'a1b2c3d4-...-tenant-uuid' │ │
│ └──────────────┬──────────────────┘ │
│ │ │
│ 4. Query executes — RLS policy filters automatically │
│ ┌─────────────────────────────────┐ │
│ │ SELECT * FROM products; │ │
│ │ -- RLS adds: WHERE tenant_id │ │
│ │ -- = 'a1b2c3d4-...' │ │
│ └─────────────────────────────────┘ │
│ │
│ Result: Only nexus's products returned. Other tenants │
│ invisible even without WHERE clause in application code. │
│ │
└─────────────────────────────────────────────────────────────────┘
RLS Policy Implementation
-- Step 1: Add tenant_id to every tenant-scoped table
-- (Already present in schema design — see Chapter 07)
-- Step 2: Enable RLS on each tenant-scoped table
ALTER TABLE products ENABLE ROW LEVEL SECURITY;
ALTER TABLE products FORCE ROW LEVEL SECURITY;
-- Step 3: Create isolation policy
CREATE POLICY tenant_isolation ON products
USING (tenant_id = current_setting('app.current_tenant')::uuid);
-- The USING clause applies to SELECT, UPDATE, DELETE
-- For INSERT, add a WITH CHECK clause to prevent inserting for wrong tenant
CREATE POLICY tenant_insert ON products
FOR INSERT
WITH CHECK (tenant_id = current_setting('app.current_tenant')::uuid);
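A quick way to see both policies working from psql, as the pos_app role (superusers bypass RLS, so don't test as postgres). The tenant UUIDs and the products column list below are illustrative, not taken from the schema:

```sql
-- Scope the session to tenant A (illustrative UUID)
SET app.current_tenant = 'a1b2c3d4-0000-0000-0000-000000000001';
SELECT count(*) FROM products;   -- returns only tenant A's rows
-- Inserting a row tagged with a different tenant fails the WITH CHECK:
INSERT INTO products (tenant_id, sku, name)
VALUES ('ffffffff-0000-0000-0000-000000000002', 'X-1', 'Leak test');
-- ERROR:  new row violates row-level security policy for table "products"
```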
Application Tenant Context (C# Middleware)
// TenantMiddleware.cs — Sets PostgreSQL session variable for RLS
public class TenantMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<TenantMiddleware> _logger;
public TenantMiddleware(RequestDelegate next, ILogger<TenantMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(HttpContext context, PosDbContext dbContext)
{
var tenantId = ResolveTenantFromJwt(context);
if (tenantId == null)
{
context.Response.StatusCode = 401;
await context.Response.WriteAsJsonAsync(new { error = "Tenant not resolved" });
return;
}
        // Set the PostgreSQL variable that RLS policies read.
        // SET cannot take bind parameters, so use set_config() instead;
        // the third argument chooses scope (false = session, true = local
        // to the current transaction — prefer true behind a transaction-mode
        // pooler, see Section 6.4).
        await dbContext.Database.ExecuteSqlRawAsync(
            "SELECT set_config('app.current_tenant', {0}, false)",
            tenantId.ToString());
// Store in HttpContext for application-layer use
context.Items["TenantId"] = tenantId;
_logger.LogDebug("RLS context set for tenant: {TenantId}", tenantId);
await _next(context);
}
    private Guid? ResolveTenantFromJwt(HttpContext context)
    {
        var claim = context.User?.FindFirst("tenant_id");
        // TryParse avoids throwing on a missing or malformed claim value
        return claim != null && Guid.TryParse(claim.Value, out var id)
            ? id
            : (Guid?)null;
    }
}
Benefits of RLS
- Simpler operations: Single schema, no per-tenant schema migrations
- No schema sprawl: 100 tenants = same number of tables (not 100x)
- Simpler connection pooling: Shared pool for all tenants (no search_path switching)
- Database-enforced isolation: Even buggy application code cannot leak data
- Easier migrations: ALTER TABLE once, applies to all tenants
- Matches BRD data models: 135+ tenant_id FK occurrences already in BRD v19.0
Trade-offs
- Less physical isolation than schema-per-tenant (mitigated by RLS enforcement)
- All tenants share the same table structure (schema flexibility limited)
- RLS policies must be applied to every tenant-scoped table (automated via migration scripts)
- Per-tenant backup requires WHERE tenant_id = X exports instead of pg_dump -n schema
Reference: See Chapter 04, Section L.10A.4 for the full multi-tenancy decision analysis, comparison matrix, and C# middleware implementation details.
6.4 Connection Pooling Strategy
PgBouncer Configuration
; /etc/pgbouncer/pgbouncer.ini
[databases]
; Route all connections through pooler
pos_platform = host=postgres16 port=5432 dbname=pos_platform
[pgbouncer]
; Pool mode: transaction (best for multi-tenant with RLS)
pool_mode = transaction
; Pooler ports
listen_addr = 0.0.0.0
listen_port = 6432
; Authentication — scram-sha-256 matches the server-side method
; configured in pg_hba.conf (see Section 6.9)
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; Pool sizing
; With RLS, all tenants share the same pool (no per-schema pools needed)
default_pool_size = 20
max_client_conn = 1000
min_pool_size = 5
; Reserve connections for admin
reserve_pool_size = 5
reserve_pool_timeout = 3
; Connection limits
max_db_connections = 100
max_user_connections = 100
; Timeouts
server_connect_timeout = 5
server_idle_timeout = 60
server_lifetime = 3600
query_timeout = 30
; Logging
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1
stats_period = 60
Pool Mode Comparison
| Mode | Behavior | Use Case |
|---|---|---|
| Session | Connection per session | Long-running sessions |
| Transaction | Connection per transaction | Multi-tenant APIs |
| Statement | Connection per statement | Read replicas only |
Recommendation: Use transaction mode for the POS API to maximize connection reuse across tenants. With RLS, set the tenant context with SET LOCAL app.current_tenant (or set_config(..., true)) inside each transaction. A plain session-level SET persists on the pooled server connection after COMMIT and could leak tenant context to the next client; only transaction-scoped settings make connection reuse safe.
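A minimal sketch of the transaction-scoped pattern (the tenant UUID is illustrative). SET LOCAL reverts at COMMIT or ROLLBACK, so the pooled server connection carries no tenant state to the next client:

```sql
BEGIN;
SET LOCAL app.current_tenant = 'a1b2c3d4-0000-0000-0000-000000000001';
SELECT count(*) FROM products;   -- RLS filters to this tenant only
COMMIT;
-- Equivalent, bind-parameter-friendly form from application code:
-- SELECT set_config('app.current_tenant', $1, true);  -- true = txn-local
```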
Docker Compose Integration
# docker-compose.yml
services:
pgbouncer:
image: bitnami/pgbouncer:latest
container_name: pos-pgbouncer
environment:
- PGBOUNCER_DATABASE=pos_platform
- PGBOUNCER_PORT=6432
- PGBOUNCER_POOL_MODE=transaction
- PGBOUNCER_MAX_CLIENT_CONN=1000
- PGBOUNCER_DEFAULT_POOL_SIZE=20
- POSTGRESQL_HOST=postgres16
- POSTGRESQL_PORT=5432
- POSTGRESQL_USERNAME=pos_app
- POSTGRESQL_PASSWORD=${DB_PASSWORD}
ports:
- "6432:6432"
networks:
- postgres_default
depends_on:
- postgres16
healthcheck:
test: ["CMD", "pg_isready", "-h", "localhost", "-p", "6432"]
interval: 10s
timeout: 5s
retries: 5
networks:
postgres_default:
external: true
6.5 Backup and Restore
Backup Strategy Overview
| Backup Type | Frequency | Retention | Purpose |
|---|---|---|---|
| Full Database | Daily | 30 days | Disaster recovery |
| Tenant Data Export | On-demand | 90 days | Tenant migration, recovery, compliance |
| WAL Archives | Continuous | 7 days | Point-in-time recovery |
With RLS architecture, all tenant data resides in the same database and schema. Full database backups capture everything. Tenant-specific exports use WHERE tenant_id = X to extract individual tenant data.
Full Database Backup
#!/bin/bash
# /volume1/docker/scripts/backup-pos-full.sh
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/volume1/backup/pos_platform"
CONTAINER="postgres16"
DB_NAME="pos_platform"
# Create backup directory
mkdir -p "$BACKUP_DIR/full"
# Full database dump with compression
docker exec $CONTAINER pg_dump \
-U postgres \
-d $DB_NAME \
-Fc \
-Z 9 \
-f /tmp/pos_platform_${DATE}.dump
# Copy to backup location
docker cp $CONTAINER:/tmp/pos_platform_${DATE}.dump \
"$BACKUP_DIR/full/"
# Cleanup container temp file
docker exec $CONTAINER rm /tmp/pos_platform_${DATE}.dump
# Remove backups older than 30 days
find "$BACKUP_DIR/full" -name "*.dump" -mtime +30 -delete
echo "Full backup completed: pos_platform_${DATE}.dump"
Tenant-Specific Data Export
With RLS, per-tenant backup is a data export using SQL queries filtered by tenant_id:
#!/bin/bash
# /volume1/docker/scripts/export-tenant.sh
# Usage: ./export-tenant.sh <tenant-uuid>
TENANT_ID=$1
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/volume1/backup/pos_platform/tenants"
CONTAINER="postgres16"
DB_NAME="pos_platform"
if [ -z "$TENANT_ID" ]; then
echo "Usage: $0 <tenant-uuid>"
exit 1
fi
# Create backup directory
mkdir -p "$BACKUP_DIR/$TENANT_ID"
# Export tenant data using COPY with WHERE clause
# This exports each tenant-scoped table filtered by tenant_id
docker exec $CONTAINER psql -U postgres -d $DB_NAME -c "
-- Export products for this tenant
COPY (SELECT * FROM products WHERE tenant_id = '$TENANT_ID')
TO '/tmp/tenant_products.csv' WITH CSV HEADER;
-- Export customers for this tenant
COPY (SELECT * FROM customers WHERE tenant_id = '$TENANT_ID')
TO '/tmp/tenant_customers.csv' WITH CSV HEADER;
-- Export orders for this tenant
COPY (SELECT * FROM orders WHERE tenant_id = '$TENANT_ID')
TO '/tmp/tenant_orders.csv' WITH CSV HEADER;
-- ... repeat for all tenant-scoped tables
"
# Package into tar archive
docker exec $CONTAINER tar czf /tmp/tenant_${TENANT_ID}_${DATE}.tar.gz \
/tmp/tenant_*.csv
# Copy to backup location
docker cp $CONTAINER:/tmp/tenant_${TENANT_ID}_${DATE}.tar.gz \
"$BACKUP_DIR/$TENANT_ID/"
# Cleanup
docker exec $CONTAINER rm /tmp/tenant_*.csv /tmp/tenant_${TENANT_ID}_${DATE}.tar.gz
echo "Tenant export completed: tenant_${TENANT_ID}_${DATE}.tar.gz"
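Hand-listing every table in the export script invites drift as the schema grows. A hedged alternative is to discover the tenant-scoped tables from the catalog; this sketch assumes it runs as a role allowed to COPY to server-side files, and the tenant UUID is illustrative:

```sql
-- Export every public table that carries a tenant_id column,
-- discovered from information_schema instead of maintained by hand
DO $$
DECLARE
  t RECORD;
BEGIN
  FOR t IN
    SELECT table_name
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND column_name = 'tenant_id'
  LOOP
    EXECUTE format(
      'COPY (SELECT * FROM %I WHERE tenant_id = %L) TO %L WITH CSV HEADER',
      t.table_name,
      'a1b2c3d4-0000-0000-0000-000000000001',   -- illustrative tenant UUID
      '/tmp/tenant_' || t.table_name || '.csv');
  END LOOP;
END $$;
```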
Tenant Data Restore
-- Restore tenant data from export
-- Step 1: Delete existing tenant data (if re-importing)
BEGIN;
DELETE FROM order_items WHERE order_id IN (
SELECT id FROM orders WHERE tenant_id = 'target-tenant-uuid'
);
DELETE FROM orders WHERE tenant_id = 'target-tenant-uuid';
DELETE FROM inventory_levels WHERE tenant_id = 'target-tenant-uuid';
DELETE FROM variants WHERE product_id IN (
SELECT id FROM products WHERE tenant_id = 'target-tenant-uuid'
);
DELETE FROM products WHERE tenant_id = 'target-tenant-uuid';
DELETE FROM customers WHERE tenant_id = 'target-tenant-uuid';
-- ... repeat for all tenant-scoped tables in dependency order
-- Step 2: Import from CSV
COPY products FROM '/tmp/tenant_products.csv' WITH CSV HEADER;
COPY customers FROM '/tmp/tenant_customers.csv' WITH CSV HEADER;
COPY orders FROM '/tmp/tenant_orders.csv' WITH CSV HEADER;
-- ... repeat for all tables
COMMIT;
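One caveat worth stating explicitly: with FORCE ROW LEVEL SECURITY enabled, the restore must run either as a role with BYPASSRLS (such as pos_admin from Section 6.9) or with the tenant context set — otherwise the DELETE statements match zero rows and COPY FROM can fail the INSERT policy check. A sketch of the second option:

```sql
-- Keep RLS active and scope the transaction to the target tenant
BEGIN;
SET LOCAL app.current_tenant = 'target-tenant-uuid';
DELETE FROM customers WHERE tenant_id = 'target-tenant-uuid';
COPY customers FROM '/tmp/tenant_customers.csv' WITH CSV HEADER;
COMMIT;
```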
Tenant Migration (Between Databases)
-- Export tenant to SQL for migration to a different server
-- Uses pg_dump with row-level filter via a view
-- Step 1: Create temporary view filtered by tenant
CREATE TEMP VIEW export_products AS
SELECT * FROM products WHERE tenant_id = 'source-tenant-uuid';
-- Step 2: Use COPY to export
COPY (SELECT * FROM export_products) TO '/tmp/migration_products.csv' WITH CSV HEADER;
-- Step 3: On target server, COPY FROM with updated tenant_id if needed
-- Step 4: Update shared.tenants registry on target
6.6 Performance Considerations
RLS Performance
RLS policy overhead is minimal. PostgreSQL evaluates the USING clause as part of the query plan, effectively adding a WHERE tenant_id = X filter. With proper indexing, this has negligible impact:
-- Composite indexes with tenant_id as leading column
-- These are critical for RLS performance
CREATE INDEX idx_products_tenant ON products(tenant_id);
CREATE INDEX idx_products_tenant_sku ON products(tenant_id, sku);
CREATE INDEX idx_orders_tenant_created ON orders(tenant_id, created_at);
CREATE INDEX idx_inventory_tenant_location ON inventory_levels(tenant_id, location_id);
CREATE INDEX idx_customers_tenant_email ON customers(tenant_id, email);
-- The query planner uses these indexes to efficiently filter by tenant
-- before applying any additional WHERE clauses
RLS Performance Benchmarks (expected):
| Scenario | Without RLS | With RLS | Overhead |
|---|---|---|---|
| Simple SELECT | 0.5ms | 0.6ms | ~20% |
| JOIN 3 tables | 2.1ms | 2.3ms | ~10% |
| Aggregate query | 5.0ms | 5.2ms | ~4% |
The relative overhead shrinks as query complexity grows, since the tenant_id filter is a cheap equality check on an indexed column whose cost is amortized over the rest of the plan.
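To confirm the planner is actually using the composite indexes under RLS, inspect a plan with the tenant context set. The tenant UUID and SKU are illustrative, and the plan shown in comments is indicative of a healthy shape rather than exact output:

```sql
SET app.current_tenant = 'a1b2c3d4-0000-0000-0000-000000000001';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM products WHERE sku = 'SHIRT-001';
-- A healthy plan uses the composite index from above, e.g.:
--   Index Scan using idx_products_tenant_sku on products
--     Index Cond: ((tenant_id = '...') AND (sku = 'SHIRT-001'))
-- A Seq Scan here would indicate a missing tenant-leading index.
```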
Table Partitioning Strategy
For high-volume time-series tables, use declarative partitioning:
-- Partition inventory_transactions by month
CREATE TABLE inventory_transactions (
id BIGSERIAL,
tenant_id UUID NOT NULL,
variant_id INT NOT NULL,
location_id INT NOT NULL,
transaction_type VARCHAR(20) NOT NULL,
quantity_change INT NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
-- ... other columns
PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Create monthly partitions
CREATE TABLE inventory_transactions_2025_01
PARTITION OF inventory_transactions
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE inventory_transactions_2025_02
PARTITION OF inventory_transactions
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- RLS policies apply to the parent table and are inherited by partitions
ALTER TABLE inventory_transactions ENABLE ROW LEVEL SECURITY;
ALTER TABLE inventory_transactions FORCE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON inventory_transactions
USING (tenant_id = current_setting('app.current_tenant')::uuid);
-- Automate partition creation
CREATE OR REPLACE FUNCTION create_monthly_partitions()
RETURNS VOID AS $$
DECLARE
next_month DATE;
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
-- Create partitions for next 3 months
FOR i IN 0..2 LOOP
next_month := date_trunc('month', CURRENT_DATE + (i || ' months')::interval);
start_date := next_month;
end_date := next_month + '1 month'::interval;
partition_name := 'inventory_transactions_' || to_char(next_month, 'YYYY_MM');
-- Check if partition exists
IF NOT EXISTS (
SELECT 1 FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE n.nspname = 'public'
AND c.relname = partition_name
) THEN
EXECUTE format(
'CREATE TABLE %I PARTITION OF inventory_transactions
FOR VALUES FROM (%L) TO (%L)',
partition_name, start_date, end_date
);
END IF;
END LOOP;
END;
$$ LANGUAGE plpgsql;
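The function above can be invoked manually during maintenance, or scheduled if the pg_cron extension happens to be available — an assumption, since it is not part of the extension set installed in Section 6.2:

```sql
-- One-off invocation (creates partitions for the next 3 months)
SELECT create_monthly_partitions();
-- Scheduled at 03:00 on the 1st of each month, if pg_cron is installed:
-- CREATE EXTENSION IF NOT EXISTS pg_cron;
-- SELECT cron.schedule('partition-maintenance', '0 3 1 * *',
--                      'SELECT create_monthly_partitions()');
```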
Connection Pooling Metrics
Monitor these key metrics in PgBouncer:
-- Connect to PgBouncer admin console
-- psql -h localhost -p 6432 -U postgres pgbouncer
-- Show pool status
SHOW POOLS;
-- Key metrics to monitor:
-- cl_active: Active client connections
-- cl_waiting: Clients waiting for connection
-- sv_active: Active server connections
-- sv_idle: Idle server connections
-- sv_used: Server connections in use
-- Show statistics
SHOW STATS;
-- Alert thresholds:
-- cl_waiting > 10: Pool exhaustion risk
-- sv_active / default_pool_size > 0.8: Near capacity
Query Timeout Configuration
-- Set statement timeout to prevent long-running queries
SET statement_timeout = '30s';
-- For specific operations, extend timeout
SET LOCAL statement_timeout = '5m';
-- Connection-level setting in PgBouncer
; query_timeout = 30
-- PostgreSQL server-level (postgresql.conf)
statement_timeout = 30000 -- 30 seconds
lock_timeout = 10000 -- 10 seconds
idle_in_transaction_session_timeout = 60000 -- 1 minute
Memory Configuration
# postgresql.conf optimizations for multi-tenant RLS
# Shared memory (25% of RAM)
shared_buffers = 4GB
# Work memory per query (be conservative with many concurrent tenants)
work_mem = 64MB
# Maintenance operations
maintenance_work_mem = 512MB
# Effective cache size (75% of RAM)
effective_cache_size = 12GB
# Connection limits
max_connections = 200 # Higher limit, pooler handles distribution
# WAL settings
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 2GB
min_wal_size = 512MB
# Query planning
random_page_cost = 1.1 # SSD storage
effective_io_concurrency = 200 # SSD storage
6.7 High Availability Considerations
Replication Setup (Future)
# docker-compose-ha.yml (for future HA deployment)
# Note: the official postgres image exposes no replication env vars;
# the Bitnami image is used here because it does (consistent with the
# bitnami/pgbouncer image in Section 6.4)
services:
  postgres-primary:
    image: bitnami/postgresql:16
    environment:
      - POSTGRESQL_REPLICATION_MODE=master
      - POSTGRESQL_REPLICATION_USER=replicator
      - POSTGRESQL_REPLICATION_PASSWORD=${REPL_PASSWORD}
      - POSTGRESQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-primary-data:/bitnami/postgresql
  postgres-replica:
    image: bitnami/postgresql:16
    environment:
      - POSTGRESQL_REPLICATION_MODE=slave
      - POSTGRESQL_MASTER_HOST=postgres-primary
      - POSTGRESQL_REPLICATION_USER=replicator
      - POSTGRESQL_REPLICATION_PASSWORD=${REPL_PASSWORD}
      - POSTGRESQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-replica-data:/bitnami/postgresql
    depends_on:
      - postgres-primary
Read Replica Routing
# PgBouncer configuration for read replicas
[databases]
pos_platform = host=postgres-primary port=5432 dbname=pos_platform
pos_platform_ro = host=postgres-replica port=5432 dbname=pos_platform
Application routing:
// Database connection factory
public class DatabaseConnectionFactory
{
private readonly string _primaryConnectionString;
private readonly string _readonlyConnectionString;
public DatabaseConnectionFactory(IConfiguration config)
{
_primaryConnectionString = config.GetConnectionString("Primary")!;
_readonlyConnectionString = config.GetConnectionString("Readonly")!;
}
// Route based on operation type
// RLS works identically on both primary and replica —
// SET app.current_tenant must be called on each connection
public string GetConnectionString(bool readOnly = false)
{
return readOnly ? _readonlyConnectionString : _primaryConnectionString;
}
}
6.8 Monitoring and Alerting
Key Database Metrics
-- Database size per tenant (RLS approach)
SELECT
tenant_id,
t.name AS tenant_name,
pg_size_pretty(SUM(pg_column_size(p.*))) AS products_size
FROM products p
JOIN shared.tenants t ON t.id = p.tenant_id
GROUP BY tenant_id, t.name
ORDER BY SUM(pg_column_size(p.*)) DESC;
-- Approximate tenant data size across all tables
SELECT
t.name AS tenant_name,
COUNT(DISTINCT p.id) AS product_count,
COUNT(DISTINCT o.id) AS order_count,
COUNT(DISTINCT c.id) AS customer_count
FROM shared.tenants t
LEFT JOIN products p ON p.tenant_id = t.id
LEFT JOIN orders o ON o.tenant_id = t.id
LEFT JOIN customers c ON c.tenant_id = t.id
GROUP BY t.name
ORDER BY COUNT(DISTINCT o.id) DESC;
-- Active connections (all tenants share the same pool)
SELECT
COUNT(*) AS total_connections,
COUNT(*) FILTER (WHERE state = 'active') AS active,
COUNT(*) FILTER (WHERE state = 'idle') AS idle,
COUNT(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_txn
FROM pg_stat_activity
WHERE datname = 'pos_platform';
-- Table bloat check
SELECT
schemaname || '.' || relname AS table_name,
pg_size_pretty(pg_relation_size(schemaname || '.' || relname)) AS size,
n_dead_tup AS dead_tuples,
last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'public'
AND n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;
-- Slow queries (requires pg_stat_statements)
SELECT
query,
calls,
mean_exec_time::numeric(10,2) AS avg_ms,
total_exec_time::numeric(10,2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
Alerting Thresholds
| Metric | Warning | Critical | Action |
|---|---|---|---|
| Connection usage | 70% | 90% | Scale pool, investigate |
| Disk usage | 70% | 85% | Cleanup, expand storage |
| Replication lag | 10s | 60s | Check network, replica health |
| Long-running queries | 30s | 60s | Investigate, possibly kill |
| Dead tuples | 1M | 5M | Force vacuum |
| Cache hit ratio | <95% | <90% | Increase shared_buffers |
6.9 Security Configuration
Role-Based Access
-- Create application role (minimal privileges)
CREATE ROLE pos_app WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE pos_platform TO pos_app;
GRANT USAGE ON SCHEMA shared TO pos_app;
GRANT USAGE ON SCHEMA public TO pos_app;
-- Grant table access in public schema (RLS enforces tenant isolation)
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO pos_app;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO pos_app;
-- Default privileges for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO pos_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT ALL ON SEQUENCES TO pos_app;
-- RLS enforces that pos_app can only see rows for the current tenant
-- This is defense-in-depth: even with full table access, RLS filters rows
-- Create admin role (elevated privileges, bypasses RLS)
CREATE ROLE pos_admin WITH LOGIN PASSWORD 'admin_password' BYPASSRLS;
GRANT ALL PRIVILEGES ON DATABASE pos_platform TO pos_admin;
-- Create read-only role (for reporting, respects RLS)
CREATE ROLE pos_readonly WITH LOGIN PASSWORD 'readonly_password';
GRANT CONNECT ON DATABASE pos_platform TO pos_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA shared TO pos_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO pos_readonly;
-- RLS policies still apply — readonly user must SET app.current_tenant
RLS Security Notes
-- FORCE ROW LEVEL SECURITY ensures RLS applies even to table owners
-- Without FORCE, the table owner bypasses RLS
ALTER TABLE products FORCE ROW LEVEL SECURITY;
-- The pos_admin role has BYPASSRLS for administrative operations
-- (tenant migration, cross-tenant reporting, data cleanup)
-- This role should ONLY be used for administrative tasks, never by the API
-- Verify RLS is enabled on all tenant-scoped tables
SELECT
schemaname,
tablename,
rowsecurity
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;
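The listing above shows every table's RLS flag; the more actionable check is the inverse — tenant-scoped tables where RLS is not enabled. A sketch that cross-references the column catalog:

```sql
-- Tables that carry a tenant_id column but do NOT have RLS enabled.
-- This query should return zero rows in a healthy deployment.
SELECT c.table_name
FROM information_schema.columns c
JOIN pg_tables t
  ON t.schemaname = c.table_schema
 AND t.tablename  = c.table_name
WHERE c.table_schema = 'public'
  AND c.column_name = 'tenant_id'
  AND NOT t.rowsecurity;
```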
SSL/TLS Configuration
# postgresql.conf
ssl = on
ssl_cert_file = '/var/lib/postgresql/server.crt'
ssl_key_file = '/var/lib/postgresql/server.key'
ssl_ca_file = '/var/lib/postgresql/root.crt'
ssl_min_protocol_version = 'TLSv1.2'
# Require SSL for external connections
# pg_hba.conf
hostssl pos_platform pos_app 0.0.0.0/0 scram-sha-256
6.10 Quick Reference
Connection Strings
# Direct PostgreSQL connection
postgres://pos_app:password@192.168.1.26:5433/pos_platform
# Through PgBouncer (recommended)
postgres://pos_app:password@192.168.1.26:6432/pos_platform
# Application environment variables
DATABASE_URL=postgres://pos_app:password@pgbouncer:6432/pos_platform
DATABASE_URL_READONLY=postgres://pos_readonly:password@pgbouncer:6432/pos_platform_ro
Common Operations
# Connect to database
docker exec -it postgres16 psql -U postgres -d pos_platform
# List all tenants
docker exec postgres16 psql -U postgres -d pos_platform -c \
"SELECT id, name, slug, status FROM shared.tenants;"
# Check row counts per tenant for a table
docker exec postgres16 psql -U postgres -d pos_platform -c \
"SELECT tenant_id, COUNT(*) FROM products GROUP BY tenant_id;"
# Verify RLS is enabled
docker exec postgres16 psql -U postgres -d pos_platform -c \
"SELECT tablename, rowsecurity FROM pg_tables WHERE schemaname = 'public';"
# Export specific tenant data
./export-tenant.sh <tenant-uuid>
# Vacuum full (maintenance window only)
docker exec postgres16 psql -U postgres -d pos_platform -c "VACUUM FULL ANALYZE products;"
RLS Quick Setup for New Tables
-- Template: Enable RLS on a new tenant-scoped table
ALTER TABLE <table_name> ENABLE ROW LEVEL SECURITY;
ALTER TABLE <table_name> FORCE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON <table_name>
USING (tenant_id = current_setting('app.current_tenant')::uuid);
CREATE POLICY tenant_insert ON <table_name>
FOR INSERT
WITH CHECK (tenant_id = current_setting('app.current_tenant')::uuid);
CREATE INDEX idx_<table_name>_tenant ON <table_name>(tenant_id);
Next Chapter: Chapter 07: Schema Design - Detailed schema structure with 51 tables across 13 domains.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | III - Database |
| Chapter | 06 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 07: Schema Design
51 Tables Across 13 Domains
7.1 Overview
The POS Platform database consists of 51 tables organized into 13 functional domains. This chapter provides the complete schema design using a single-database, single-schema architecture with Row-Level Security (RLS) for tenant isolation.
All tenant-scoped tables include a tenant_id UUID NOT NULL column. PostgreSQL RLS policies automatically filter rows per tenant, ensuring data isolation at the database level.
Domain Summary
| Domain | Tenant-Scoped | Tables | Purpose |
|---|---|---|---|
| 1. Products & Variants | Yes | 5 | Product catalog with SKU/variant model |
| 2. Categories & Tags | Yes | 5 | Flexible product organization |
| 3. Product Attributes | Yes | 4 | Brand, gender, origin, fabric attributes |
| 4. Inventory & Locations | Yes | 3 | Multi-location inventory tracking |
| 5. Tax Configuration | Yes | 2 | Location-specific tax rates |
| 6. Orders & Customers | Yes | 3 | Transactions and customer profiles |
| 7. User Preferences | Yes | 1 | Per-user view settings |
| 8. Tenant Management | No (shared) | 3 | Platform tenant registry |
| 9. Authentication & Authorization | Mixed | 7 | Users, sessions, roles |
| 10. Offline Sync Infrastructure | Yes | 4 | Device sync and conflicts |
| 11. Cash Drawer Operations | Yes | 6 | Shift and cash management |
| 12. Payment Processing | Yes | 4 | Terminals and settlements |
| 13. RFID Module (Optional) | Yes | 7 | Tag printing and scanning |
| TOTAL | | 51 | |
7.2 Schema Architecture
Visual Diagram
┌─────────────────────────────────────────────────────────────────────────────┐
│ pos_platform │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ shared SCHEMA (6 tables) │ │
│ │ Platform-wide, NO tenant_id, NO RLS │ │
│ │ │ │
│ │ ┌─────────────┐ ┌───────────────────┐ ┌─────────────────────────┐ │ │
│ │ │ tenants │ │tenant_subscriptions│ │ tenant_modules │ │ │
│ │ │ (registry) │ │ (billing) │ │ (feature add-ons) │ │ │
│ │ └─────────────┘ └───────────────────┘ └─────────────────────────┘ │ │
│ │ ┌─────────────────┐ ┌─────────────────────┐ ┌───────────────────┐ │ │
│ │ │ users │ │ user_sessions │ │ password_resets │ │ │
│ │ │ (platform auth) │ │ (session tracking) │ │ (recovery) │ │ │
│ │ └─────────────────┘ └─────────────────────┘ └───────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ All tenant-scoped tables reference │
│ shared.tenants(id) via tenant_id FK │
│ │ │
│ ▼ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ public SCHEMA (45 tables, all with tenant_id + RLS) │ │
│ │ │ │
│ │ Domain 1-3: Catalog Domain 4-5: Inventory Domain 6: Sales │ │
│ │ ┌────────────────────┐ ┌───────────────────┐ ┌────────────────┐ │ │
│ │ │ products [T] │ │ locations [T] │ │ customers [T] │ │ │
│ │ │ variants [T] │ │ inventory_ [T] │ │ orders [T] │ │ │
│ │ │ brands [T] │ │ levels │ │ order_ [T] │ │ │
│ │ │ categories [T] │ │ inventory_ [T] │ │ items │ │ │
│ │ │ collections [T] │ │ transactions │ └────────────────┘ │ │
│ │ │ tags [T] │ │ taxes [T] │ │ │
│ │ │ product_ [T] │ │ location_tax [T] │ Domain 9: Auth │ │
│ │ │ collection │ └───────────────────┘ ┌────────────────┐ │ │
│ │ │ product_tag [T] │ │ roles [T] │ │ │
│ │ │ product_ [T] │ Domain 10: Sync │ role_ [T] │ │ │
│ │ │ groups │ ┌───────────────────┐ │ perms │ │ │
│ │ │ genders [T] │ │ devices [T] │ │ tenant_ [T] │ │ │
│ │ │ origins [T] │ │ sync_queue [T] │ │ users │ │ │
│ │ │ fabrics [T] │ │ sync_ [T] │ │ tenant_ [T] │ │ │
│ │ └────────────────────┘ │ conflicts │ │ settings │ │ │
│ │ │ sync_ [T] │ └────────────────┘ │ │
│ │ Domain 7: Prefs │ checkpoints │ │ │
│ │ ┌────────────────────┐ └───────────────────┘ Domain 11-12: Ops │ │
│ │ │ item_view_ [T] │ ┌────────────────┐ │ │
│ │ │ settings │ Domain 13: RFID │ shifts [T] │ │ │
│ │ └────────────────────┘ ┌───────────────────┐ │ cash_ [T] │ │ │
│ │ │ rfid_config [T] │ │ drawers │ │ │
│ │ [T] = has tenant_id │ rfid_printers[T] │ │ cash_ [T] │ │ │
│ │ column + RLS │ rfid_ [T] │ │ counts │ │ │
│ │ policy │ templates │ │ cash_ [T] │ │ │
│ │ │ rfid_print_ [T] │ │ movements │ │ │
│ │ │ jobs │ │ cash_drops[T] │ │ │
│ │ │ rfid_tags [T] │ │ cash_ [T] │ │ │
│ │ │ rfid_scan_ [T] │ │ pickups │ │ │
│ │ │ sessions │ │ payment_ [T] │ │ │
│ │ │ rfid_scan_ [T] │ │ terminals │ │ │
│ │ │ events │ │ payment_ [T] │ │ │
│ │ └───────────────────┘ │ attempts │ │ │
│ │ │ payment_ [T] │ │ │
│ │ │ batches │ │ │
│ │ │ payment_ [T] │ │ │
│ │ │ recon │ │ │
│ │ └────────────────┘ │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
7.3 Complete CREATE TABLE Statements
All tables reside in a single database. The shared schema holds platform-wide tables (no tenant_id). The public schema holds all tenant-scoped tables with tenant_id UUID NOT NULL and RLS policies.
Enable Required Extensions
-- Run once on database creation
\c pos_platform
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Create shared schema
CREATE SCHEMA IF NOT EXISTS shared;
Shared Schema Tables (6 tables — no tenant_id)
Table: tenants
-- Tenant registry for multi-tenant SaaS architecture
CREATE TABLE shared.tenants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
slug VARCHAR(100) NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'active',
tier VARCHAR(20) NOT NULL DEFAULT 'standard',
contact_email VARCHAR(255) NOT NULL,
contact_phone VARCHAR(20),
billing_email VARCHAR(255),
timezone VARCHAR(50) NOT NULL DEFAULT 'UTC',
currency_code CHAR(3) NOT NULL DEFAULT 'USD',
locale VARCHAR(10) NOT NULL DEFAULT 'en-US',
trial_ends_at TIMESTAMP,
metadata JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- Constraints
CONSTRAINT tenants_slug_unique UNIQUE (slug),
CONSTRAINT tenants_status_check CHECK (status IN ('provisioning', 'active', 'suspended', 'cancelled', 'trial')),
CONSTRAINT tenants_tier_check CHECK (tier IN ('free', 'starter', 'standard', 'enterprise'))
);
-- Indexes
CREATE INDEX idx_tenants_status ON shared.tenants(status);
CREATE INDEX idx_tenants_tier ON shared.tenants(tier);
CREATE INDEX idx_tenants_trial ON shared.tenants(trial_ends_at) WHERE trial_ends_at IS NOT NULL;
COMMENT ON TABLE shared.tenants IS 'Tenant/organization registry for multi-tenant SaaS architecture';
COMMENT ON COLUMN shared.tenants.slug IS 'URL-safe identifier used for subdomain routing';
COMMENT ON COLUMN shared.tenants.tier IS 'Subscription tier determining feature access';
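Since the slug drives subdomain routing, resolving the tenant for an incoming request reduces to a lookup backed by the tenants_slug_unique constraint. A sketch (the slug value is hypothetical):

```sql
-- Sketch: resolve tenant context from the request's subdomain
SELECT id, status, tier, timezone
FROM shared.tenants
WHERE slug = 'acme-retail'
  AND status IN ('active', 'trial');
```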
Table: tenant_subscriptions
-- Billing and subscription plan tracking
CREATE TABLE shared.tenant_subscriptions (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
plan_id VARCHAR(50) NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'active',
billing_cycle VARCHAR(20) NOT NULL,
price_cents INT NOT NULL,
currency_code CHAR(3) NOT NULL DEFAULT 'USD',
location_limit INT NOT NULL DEFAULT 5,
user_limit INT NOT NULL DEFAULT 10,
device_limit INT NOT NULL DEFAULT 20,
external_subscription_id VARCHAR(255),
current_period_start TIMESTAMP NOT NULL,
current_period_end TIMESTAMP NOT NULL,
cancelled_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- Constraints
CONSTRAINT subscriptions_status_check CHECK (status IN ('active', 'past_due', 'cancelled', 'paused')),
CONSTRAINT subscriptions_cycle_check CHECK (billing_cycle IN ('monthly', 'annual'))
);
-- Indexes
CREATE INDEX idx_tenant_subscriptions_tenant ON shared.tenant_subscriptions(tenant_id);
CREATE INDEX idx_tenant_subscriptions_status ON shared.tenant_subscriptions(status);
CREATE INDEX idx_tenant_subscriptions_period ON shared.tenant_subscriptions(current_period_end);
CREATE INDEX idx_tenant_subscriptions_external ON shared.tenant_subscriptions(external_subscription_id)
WHERE external_subscription_id IS NOT NULL;
COMMENT ON TABLE shared.tenant_subscriptions IS 'Billing and subscription plan tracking for each tenant';
COMMENT ON COLUMN shared.tenant_subscriptions.external_subscription_id IS 'Stripe/PayPal subscription ID for payment integration';
Table: tenant_modules
-- Optional module subscriptions (RFID, promotions, gift cards, etc.)
CREATE TABLE shared.tenant_modules (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
module_code VARCHAR(50) NOT NULL,
is_enabled BOOLEAN DEFAULT TRUE,
activated_at TIMESTAMP NOT NULL DEFAULT NOW(),
expires_at TIMESTAMP,
monthly_fee_cents INT,
trial_days_remaining INT,
configuration JSONB DEFAULT '{}',
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- Constraints
CONSTRAINT tenant_modules_unique UNIQUE (tenant_id, module_code),
CONSTRAINT tenant_modules_code_check CHECK (module_code IN (
'rfid', 'promotions', 'gift_cards', 'scheduling',
'loyalty_advanced', 'analytics', 'ecommerce', 'b2b'
))
);
-- Indexes
CREATE INDEX idx_tenant_modules_code ON shared.tenant_modules(module_code) WHERE is_enabled = TRUE;
CREATE INDEX idx_tenant_modules_expiring ON shared.tenant_modules(expires_at)
WHERE expires_at IS NOT NULL;
COMMENT ON TABLE shared.tenant_modules IS 'Optional add-on modules subscribed by each tenant';
COMMENT ON COLUMN shared.tenant_modules.module_code IS 'Identifier for the module (rfid, promotions, etc.)';
COMMENT ON COLUMN shared.tenant_modules.configuration IS 'Module-specific settings in JSON format';
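A feature gate for an optional module can check enablement and expiry in one query. This is a sketch; the $1 parameter is assumed to be bound to the tenant id by the application:

```sql
-- Sketch: is the RFID module currently active for a given tenant?
SELECT EXISTS (
    SELECT 1
    FROM shared.tenant_modules
    WHERE tenant_id = $1            -- bound by the application
      AND module_code = 'rfid'
      AND is_enabled
      AND (expires_at IS NULL OR expires_at > NOW())
) AS rfid_active;
```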
Table: users
-- Platform-wide user accounts (can belong to multiple tenants)
CREATE TABLE shared.users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email VARCHAR(255) NOT NULL,
password_hash VARCHAR(255) NOT NULL,
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
phone VARCHAR(20),
avatar_url VARCHAR(500),
is_platform_admin BOOLEAN DEFAULT FALSE,
email_verified BOOLEAN DEFAULT FALSE,
email_verified_at TIMESTAMP,
last_login_at TIMESTAMP,
failed_login_count INT DEFAULT 0,
locked_until TIMESTAMP,
mfa_enabled BOOLEAN DEFAULT FALSE,
mfa_secret VARCHAR(255),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- Constraints
CONSTRAINT users_email_unique UNIQUE (email),
CONSTRAINT users_email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
);
-- Indexes
CREATE INDEX idx_users_name ON shared.users(last_name, first_name);
CREATE INDEX idx_users_locked ON shared.users(locked_until) WHERE locked_until IS NOT NULL;
CREATE INDEX idx_users_login ON shared.users(last_login_at);
COMMENT ON TABLE shared.users IS 'Platform-wide user accounts supporting multi-tenant membership';
COMMENT ON COLUMN shared.users.password_hash IS 'Argon2id password hash (memory-hard algorithm)';
COMMENT ON COLUMN shared.users.mfa_secret IS 'Encrypted TOTP secret for 2FA';
Table: user_sessions
-- Active user sessions across all tenants
CREATE TABLE shared.user_sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES shared.users(id) ON DELETE CASCADE,
tenant_id UUID REFERENCES shared.tenants(id) ON DELETE CASCADE,
session_token VARCHAR(255) NOT NULL,
refresh_token VARCHAR(255),
device_id UUID,
ip_address INET NOT NULL,
user_agent VARCHAR(500),
device_type VARCHAR(20) NOT NULL DEFAULT 'web',
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
expires_at TIMESTAMP NOT NULL,
last_activity_at TIMESTAMP DEFAULT NOW(),
-- Constraints
CONSTRAINT sessions_token_unique UNIQUE (session_token),
CONSTRAINT sessions_refresh_unique UNIQUE (refresh_token),
CONSTRAINT sessions_device_type_check CHECK (device_type IN ('web', 'mobile', 'pos_terminal', 'api'))
);
-- Indexes
CREATE INDEX idx_sessions_user ON shared.user_sessions(user_id);
CREATE INDEX idx_sessions_tenant ON shared.user_sessions(tenant_id) WHERE tenant_id IS NOT NULL;
CREATE INDEX idx_sessions_expiry ON shared.user_sessions(expires_at) WHERE is_active = TRUE;
CREATE INDEX idx_sessions_device ON shared.user_sessions(device_id) WHERE device_id IS NOT NULL;
CREATE INDEX idx_sessions_activity ON shared.user_sessions(last_activity_at);
COMMENT ON TABLE shared.user_sessions IS 'Active session tracking with multi-tenant context';
COMMENT ON COLUMN shared.user_sessions.tenant_id IS 'Current tenant context (NULL for platform-level sessions)';
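The partial index on expires_at is there to support a periodic cleanup job. A minimal sketch of that job:

```sql
-- Sketch: deactivate expired sessions (run periodically)
UPDATE shared.user_sessions
SET is_active = FALSE
WHERE is_active = TRUE
  AND expires_at < NOW();
```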
Table: password_resets
-- Password reset token management
CREATE TABLE shared.password_resets (
id SERIAL PRIMARY KEY,
user_id UUID NOT NULL REFERENCES shared.users(id) ON DELETE CASCADE,
token_hash VARCHAR(255) NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
expires_at TIMESTAMP NOT NULL,
used_at TIMESTAMP,
ip_address INET NOT NULL,
-- Constraints
CONSTRAINT password_resets_token_unique UNIQUE (token_hash)
);
-- Indexes
CREATE INDEX idx_password_resets_user ON shared.password_resets(user_id);
CREATE INDEX idx_password_resets_expiry ON shared.password_resets(expires_at) WHERE used_at IS NULL;
COMMENT ON TABLE shared.password_resets IS 'Password reset token management with expiration';
COMMENT ON COLUMN shared.password_resets.token_hash IS 'SHA-256 hash of reset token (token sent to user)';
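Because token_hash stores only the SHA-256 of the token (the raw token is sent to the user and never persisted), issuance can hash with pgcrypto's digest(), which the extension setup in Section 7.3 enables. A sketch; the user id, token value, and IP address are hypothetical placeholders:

```sql
-- Sketch: issue a reset token valid for one hour (values are placeholders)
INSERT INTO shared.password_resets (user_id, token_hash, expires_at, ip_address)
VALUES (
    '22222222-2222-2222-2222-222222222222',                    -- hypothetical user id
    encode(digest('raw-token-sent-to-user', 'sha256'), 'hex'), -- store the hash only
    NOW() + INTERVAL '1 hour',
    '203.0.113.10'
);
```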
Public Schema Tables (45 tables — all with tenant_id + RLS)
The public schema contains all tenant-scoped tables. Every table includes:
- A tenant_id UUID NOT NULL column referencing shared.tenants(id)
- A composite index with tenant_id as the leading column
- RLS policies applied (see Section 7.4)
Below are representative examples. For complete CREATE TABLE statements for all 51 tables, see Chapter 08 (Entity Specifications).
Example: products
CREATE TABLE products (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
sku VARCHAR(50) NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
brand_id INT,
product_group_id INT,
gender_id INT,
origin_id INT,
fabric_id INT,
base_price DECIMAL(10,2) NOT NULL,
cost_price DECIMAL(10,2) NOT NULL,
is_active BOOLEAN DEFAULT TRUE,
has_variants BOOLEAN DEFAULT FALSE,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Tenant isolation index (CRITICAL for RLS performance)
CREATE INDEX idx_products_tenant ON products(tenant_id);
CREATE UNIQUE INDEX idx_products_tenant_sku ON products(tenant_id, sku) WHERE deleted_at IS NULL;
CREATE INDEX idx_products_tenant_brand ON products(tenant_id, brand_id);
CREATE INDEX idx_products_tenant_active ON products(tenant_id, is_active)
WHERE is_active = TRUE AND deleted_at IS NULL;
COMMENT ON TABLE products IS 'Product catalog — tenant-scoped with RLS';
Example: variants
CREATE TABLE variants (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
product_id INT NOT NULL REFERENCES products(id) ON DELETE CASCADE,
sku VARCHAR(50) NOT NULL,
size VARCHAR(20),
color VARCHAR(50),
price_adjustment DECIMAL(10,2) DEFAULT 0.00,
weight DECIMAL(10,3),
barcode VARCHAR(50),
is_active BOOLEAN DEFAULT TRUE,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Tenant isolation indexes
CREATE INDEX idx_variants_tenant ON variants(tenant_id);
CREATE UNIQUE INDEX idx_variants_tenant_sku ON variants(tenant_id, sku) WHERE deleted_at IS NULL;
CREATE UNIQUE INDEX idx_variants_tenant_barcode ON variants(tenant_id, barcode)
WHERE barcode IS NOT NULL AND deleted_at IS NULL;
CREATE INDEX idx_variants_tenant_product ON variants(tenant_id, product_id);
Example: orders
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
order_number VARCHAR(50) NOT NULL,
customer_id INT REFERENCES customers(id),
location_id INT NOT NULL,
employee_id UUID NOT NULL REFERENCES shared.users(id),
status VARCHAR(20) NOT NULL DEFAULT 'open',
subtotal DECIMAL(12,2) NOT NULL DEFAULT 0,
tax_total DECIMAL(12,2) NOT NULL DEFAULT 0,
discount_total DECIMAL(12,2) NOT NULL DEFAULT 0,
grand_total DECIMAL(12,2) NOT NULL DEFAULT 0,
payment_status VARCHAR(20) NOT NULL DEFAULT 'unpaid',
notes TEXT,
voided_at TIMESTAMP,
voided_by UUID REFERENCES shared.users(id),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Tenant isolation indexes
CREATE INDEX idx_orders_tenant ON orders(tenant_id);
CREATE UNIQUE INDEX idx_orders_tenant_number ON orders(tenant_id, order_number);
CREATE INDEX idx_orders_tenant_created ON orders(tenant_id, created_at);
CREATE INDEX idx_orders_tenant_customer ON orders(tenant_id, customer_id);
CREATE INDEX idx_orders_tenant_location ON orders(tenant_id, location_id);
CREATE INDEX idx_orders_tenant_status ON orders(tenant_id, status);
Example: customers
CREATE TABLE customers (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
email VARCHAR(255),
phone VARCHAR(20),
loyalty_points INT DEFAULT 0,
total_spent DECIMAL(12,2) DEFAULT 0,
visit_count INT DEFAULT 0,
notes TEXT,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Tenant isolation indexes
CREATE INDEX idx_customers_tenant ON customers(tenant_id);
CREATE UNIQUE INDEX idx_customers_tenant_email ON customers(tenant_id, email)
WHERE email IS NOT NULL;
CREATE INDEX idx_customers_tenant_name ON customers(tenant_id, last_name, first_name);
CREATE INDEX idx_customers_tenant_phone ON customers(tenant_id, phone)
WHERE phone IS NOT NULL;
Example: locations
CREATE TABLE locations (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
code VARCHAR(20) NOT NULL,
address_line1 VARCHAR(255),
address_line2 VARCHAR(255),
city VARCHAR(100),
state VARCHAR(50),
zip_code VARCHAR(20),
country CHAR(2) DEFAULT 'US',
phone VARCHAR(20),
is_active BOOLEAN DEFAULT TRUE,
timezone VARCHAR(50) DEFAULT 'UTC',
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Tenant isolation indexes
CREATE INDEX idx_locations_tenant ON locations(tenant_id);
CREATE UNIQUE INDEX idx_locations_tenant_code ON locations(tenant_id, code);
Example: inventory_levels
CREATE TABLE inventory_levels (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
variant_id INT NOT NULL REFERENCES variants(id),
location_id INT NOT NULL REFERENCES locations(id),
quantity_on_hand INT NOT NULL DEFAULT 0,
quantity_committed INT NOT NULL DEFAULT 0,
quantity_available INT GENERATED ALWAYS AS (quantity_on_hand - quantity_committed) STORED,
reorder_point INT,
reorder_quantity INT,
last_counted_at TIMESTAMP,
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT inventory_unique UNIQUE (tenant_id, variant_id, location_id)
);
-- Tenant isolation indexes
CREATE INDEX idx_inventory_tenant ON inventory_levels(tenant_id);
CREATE INDEX idx_inventory_tenant_location ON inventory_levels(tenant_id, location_id);
CREATE INDEX idx_inventory_tenant_variant ON inventory_levels(tenant_id, variant_id);
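Note that quantity_available is a stored generated column: PostgreSQL maintains it from the other two quantities, and it cannot be written directly. A sketch (the tenant id and integer ids are hypothetical placeholders):

```sql
-- Sketch: the generated column derives availability automatically
INSERT INTO inventory_levels (tenant_id, variant_id, location_id,
                              quantity_on_hand, quantity_committed)
VALUES ('11111111-1111-1111-1111-111111111111', 1, 1, 10, 3);

-- quantity_available reads as on_hand - committed (10 - 3 = 7);
-- an INSERT or UPDATE that targets quantity_available directly raises an error
SELECT quantity_available
FROM inventory_levels
WHERE variant_id = 1 AND location_id = 1;
```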
7.4 RLS Policy Definitions
Every tenant-scoped table in the public schema must have RLS enabled with isolation policies. The application sets app.current_tenant via middleware before any query executes.
Master RLS Setup Script
-- ============================================================
-- RLS POLICY SETUP
-- Apply to all tenant-scoped tables in public schema
-- ============================================================
-- Helper function to apply RLS to a table
CREATE OR REPLACE FUNCTION apply_rls_policy(p_table_name TEXT)
RETURNS VOID AS $$
BEGIN
-- Enable RLS
EXECUTE format('ALTER TABLE %I ENABLE ROW LEVEL SECURITY', p_table_name);
-- Force RLS even for table owners (defense-in-depth)
EXECUTE format('ALTER TABLE %I FORCE ROW LEVEL SECURITY', p_table_name);
-- SELECT, UPDATE, DELETE policy
EXECUTE format(
'CREATE POLICY tenant_isolation ON %I
USING (tenant_id = current_setting(''app.current_tenant'')::uuid)',
p_table_name
);
-- INSERT policy (prevent inserting rows for wrong tenant)
EXECUTE format(
'CREATE POLICY tenant_insert ON %I
FOR INSERT
WITH CHECK (tenant_id = current_setting(''app.current_tenant'')::uuid)',
p_table_name
);
RAISE NOTICE 'RLS policies applied to: %', p_table_name;
END;
$$ LANGUAGE plpgsql;
-- Apply RLS to all tenant-scoped tables
-- Domain 1: Products & Variants
SELECT apply_rls_policy('products');
SELECT apply_rls_policy('variants');
SELECT apply_rls_policy('product_collection');
SELECT apply_rls_policy('product_tag');
SELECT apply_rls_policy('brands');
-- Domain 2: Categories & Tags
SELECT apply_rls_policy('categories');
SELECT apply_rls_policy('collections');
SELECT apply_rls_policy('tags');
-- Domain 3: Product Attributes
SELECT apply_rls_policy('product_groups');
SELECT apply_rls_policy('genders');
SELECT apply_rls_policy('origins');
SELECT apply_rls_policy('fabrics');
-- Domain 4: Inventory & Locations
SELECT apply_rls_policy('locations');
SELECT apply_rls_policy('inventory_levels');
SELECT apply_rls_policy('inventory_transactions');
-- Domain 5: Tax Configuration
SELECT apply_rls_policy('taxes');
SELECT apply_rls_policy('location_tax');
-- Domain 6: Orders & Customers
SELECT apply_rls_policy('customers');
SELECT apply_rls_policy('orders');
SELECT apply_rls_policy('order_items');
-- Domain 7: User Preferences
SELECT apply_rls_policy('item_view_settings');
-- Domain 9: Auth (tenant-specific tables only)
SELECT apply_rls_policy('roles');
SELECT apply_rls_policy('role_permissions');
SELECT apply_rls_policy('tenant_users');
SELECT apply_rls_policy('tenant_settings');
-- Domain 10: Offline Sync
SELECT apply_rls_policy('devices');
SELECT apply_rls_policy('sync_queue');
SELECT apply_rls_policy('sync_conflicts');
SELECT apply_rls_policy('sync_checkpoints');
-- Domain 11: Cash Drawer Operations
SELECT apply_rls_policy('shifts');
SELECT apply_rls_policy('cash_drawers');
SELECT apply_rls_policy('cash_counts');
SELECT apply_rls_policy('cash_movements');
SELECT apply_rls_policy('cash_drops');
SELECT apply_rls_policy('cash_pickups');
-- Domain 12: Payment Processing
SELECT apply_rls_policy('payment_terminals');
SELECT apply_rls_policy('payment_attempts');
SELECT apply_rls_policy('payment_batches');
SELECT apply_rls_policy('payment_reconciliation');
-- Domain 13: RFID Module
SELECT apply_rls_policy('rfid_config');
SELECT apply_rls_policy('rfid_printers');
SELECT apply_rls_policy('rfid_print_templates');
SELECT apply_rls_policy('rfid_print_jobs');
SELECT apply_rls_policy('rfid_tags');
SELECT apply_rls_policy('rfid_scan_sessions');
SELECT apply_rls_policy('rfid_scan_events');
Verification Query
-- Verify RLS is enabled on all tenant-scoped tables
SELECT
schemaname,
tablename,
rowsecurity AS rls_enabled
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;
-- Verify policies exist
SELECT
schemaname,
tablename,
policyname,
cmd AS applies_to,
qual AS using_expression,
with_check
FROM pg_policies
WHERE schemaname = 'public'
ORDER BY tablename, policyname;
RLS Bypass for Admin Operations
-- The pos_admin role bypasses RLS for cross-tenant operations
-- ONLY use for: reporting, data migration, tenant cleanup
-- Example: Cross-tenant product count (admin only)
SET ROLE pos_admin;
SELECT tenant_id, COUNT(*) AS product_count
FROM products
GROUP BY tenant_id;
-- Reset to application role
RESET ROLE;
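The pos_admin role above is assumed to carry the BYPASSRLS attribute; FORCE ROW LEVEL SECURITY only extends the policies to table owners, so any other role that needs cross-tenant access must bypass RLS explicitly. A setup sketch, with role names and the grant structure as assumptions of this blueprint:

```sql
-- Sketch: role setup (names and grant structure are assumptions)
CREATE ROLE pos_app LOGIN;                -- application role, subject to RLS
CREATE ROLE pos_admin NOLOGIN BYPASSRLS;  -- cross-tenant maintenance role
GRANT pos_admin TO pos_app;               -- allows SET ROLE pos_admin when needed
```

Granting pos_admin to the application role is a convenience trade-off: it keeps admin queries in one connection pool, at the cost of relying on application discipline around SET ROLE / RESET ROLE.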
7.5 Seed Data
When a new tenant is provisioned, default data is inserted with the tenant’s tenant_id:
-- Seed default data for a new tenant
CREATE OR REPLACE FUNCTION seed_tenant_data(p_tenant_id UUID)
RETURNS VOID AS $$
DECLARE
v_owner_role_id INT;
v_admin_role_id INT;
v_manager_role_id INT;
v_staff_role_id INT;
v_buyer_role_id INT;
BEGIN
-- Seed default roles
INSERT INTO roles (tenant_id, name, display_name, description, is_system)
VALUES
(p_tenant_id, 'owner', 'Owner', 'Full access to all features and settings', TRUE),
(p_tenant_id, 'admin', 'Administrator', 'Administrative access excluding billing', TRUE),
(p_tenant_id, 'manager', 'Manager', 'Store management and reporting access', TRUE),
(p_tenant_id, 'staff', 'Staff', 'Sales and basic customer operations', TRUE),
(p_tenant_id, 'buyer', 'Buyer', 'Purchasing and vendor management access', TRUE);
-- Get role IDs for permission assignment
SELECT id INTO v_owner_role_id FROM roles WHERE tenant_id = p_tenant_id AND name = 'owner';
SELECT id INTO v_admin_role_id FROM roles WHERE tenant_id = p_tenant_id AND name = 'admin';
SELECT id INTO v_manager_role_id FROM roles WHERE tenant_id = p_tenant_id AND name = 'manager';
SELECT id INTO v_staff_role_id FROM roles WHERE tenant_id = p_tenant_id AND name = 'staff';
SELECT id INTO v_buyer_role_id FROM roles WHERE tenant_id = p_tenant_id AND name = 'buyer';
-- Seed role permissions (Owner gets all)
INSERT INTO role_permissions (tenant_id, role_id, permission, granted) VALUES
-- Owner permissions (all)
(p_tenant_id, v_owner_role_id, 'products.*', TRUE),
(p_tenant_id, v_owner_role_id, 'inventory.*', TRUE),
(p_tenant_id, v_owner_role_id, 'orders.*', TRUE),
(p_tenant_id, v_owner_role_id, 'customers.*', TRUE),
(p_tenant_id, v_owner_role_id, 'reports.*', TRUE),
(p_tenant_id, v_owner_role_id, 'settings.*', TRUE),
(p_tenant_id, v_owner_role_id, 'users.*', TRUE),
(p_tenant_id, v_owner_role_id, 'billing.*', TRUE),
-- Manager permissions
(p_tenant_id, v_manager_role_id, 'products.view', TRUE),
(p_tenant_id, v_manager_role_id, 'products.edit', TRUE),
(p_tenant_id, v_manager_role_id, 'inventory.*', TRUE),
(p_tenant_id, v_manager_role_id, 'orders.*', TRUE),
(p_tenant_id, v_manager_role_id, 'customers.*', TRUE),
(p_tenant_id, v_manager_role_id, 'reports.view', TRUE),
(p_tenant_id, v_manager_role_id, 'shifts.*', TRUE),
-- Staff permissions
(p_tenant_id, v_staff_role_id, 'products.view', TRUE),
(p_tenant_id, v_staff_role_id, 'orders.create', TRUE),
(p_tenant_id, v_staff_role_id, 'orders.view', TRUE),
(p_tenant_id, v_staff_role_id, 'customers.view', TRUE),
(p_tenant_id, v_staff_role_id, 'customers.create', TRUE),
(p_tenant_id, v_staff_role_id, 'shifts.open', TRUE),
(p_tenant_id, v_staff_role_id, 'shifts.close', TRUE),
-- Buyer permissions
(p_tenant_id, v_buyer_role_id, 'products.view', TRUE),
(p_tenant_id, v_buyer_role_id, 'products.edit', TRUE),
(p_tenant_id, v_buyer_role_id, 'inventory.view', TRUE),
(p_tenant_id, v_buyer_role_id, 'inventory.receive', TRUE),
(p_tenant_id, v_buyer_role_id, 'inventory.transfer', TRUE),
(p_tenant_id, v_buyer_role_id, 'reports.view', TRUE);
-- Seed default genders
INSERT INTO genders (tenant_id, name) VALUES
(p_tenant_id, 'Men'), (p_tenant_id, 'Women'), (p_tenant_id, 'Unisex'),
(p_tenant_id, 'Kids'), (p_tenant_id, 'Boys'), (p_tenant_id, 'Girls');
-- Seed default tenant settings
INSERT INTO tenant_settings (tenant_id, category, key, value, value_type, description) VALUES
(p_tenant_id, 'general', 'business_name', '"New Business"', 'string', 'Business display name'),
(p_tenant_id, 'general', 'timezone', '"UTC"', 'string', 'Default timezone'),
(p_tenant_id, 'pos', 'require_customer', 'false', 'boolean', 'Require customer for sales'),
(p_tenant_id, 'pos', 'allow_negative_inventory', 'false', 'boolean', 'Allow selling without stock'),
(p_tenant_id, 'pos', 'receipt_footer', '"Thank you for your business!"', 'string', 'Receipt footer message'),
(p_tenant_id, 'inventory', 'low_stock_threshold', '5', 'number', 'Low stock alert threshold'),
(p_tenant_id, 'cash', 'require_drawer_count', 'true', 'boolean', 'Require cash count at shift open/close'),
(p_tenant_id, 'loyalty', 'points_per_dollar', '1', 'number', 'Loyalty points earned per dollar spent');
RAISE NOTICE 'Seed data inserted for tenant: %', p_tenant_id;
END;
$$ LANGUAGE plpgsql;
7.6 Tenant Provisioning
With RLS architecture, provisioning a new tenant is significantly simpler than schema-per-tenant. No schema creation is needed — just insert a tenant record and seed default data.
Provisioning Script
-- ============================================================
-- TENANT PROVISIONING SCRIPT (RLS)
-- Much simpler than schema-per-tenant: no CREATE SCHEMA needed
-- ============================================================
-- Variables (replace with actual values)
\set tenant_name 'Acme Retail'
\set tenant_slug 'acme-retail'
\set contact_email 'admin@acmeretail.com'
-- Begin transaction
BEGIN;
-- Step 1: Create tenant record in shared schema
INSERT INTO shared.tenants (
name, slug, status, tier, contact_email
) VALUES (
:'tenant_name',
:'tenant_slug',
'provisioning',
'standard',
:'contact_email'
) RETURNING id AS tenant_id \gset
-- Step 2: Create subscription record
INSERT INTO shared.tenant_subscriptions (
tenant_id,
plan_id,
status,
billing_cycle,
price_cents,
location_limit,
user_limit,
device_limit,
current_period_start,
current_period_end
) VALUES (
:'tenant_id',
'standard_monthly',
'active',
'monthly',
9900, -- $99.00
5,
10,
20,
CURRENT_DATE,
CURRENT_DATE + INTERVAL '1 month'
);
-- Step 3: Seed default data (roles, permissions, settings)
-- The seed function uses RLS-compatible tenant_id on every row
SELECT seed_tenant_data(:'tenant_id'::uuid);
-- Step 4: Activate tenant
UPDATE shared.tenants
SET status = 'active'
WHERE id = :'tenant_id'::uuid;
-- Step 5: Verify creation
SELECT
t.name,
t.slug,
t.status,
(SELECT COUNT(*) FROM roles WHERE tenant_id = t.id) AS roles_created,
(SELECT COUNT(*) FROM tenant_settings WHERE tenant_id = t.id) AS settings_created
FROM shared.tenants t
WHERE t.id = :'tenant_id'::uuid;
COMMIT;
-- Success message
\echo 'Tenant provisioned successfully!'
\echo 'Tenant ID: ' :'tenant_id'
C# Provisioning Service
// TenantProvisioningService.cs
public class TenantProvisioningService
{
private readonly PosDbContext _dbContext;
private readonly ILogger<TenantProvisioningService> _logger;
public TenantProvisioningService(PosDbContext dbContext, ILogger<TenantProvisioningService> logger)
{
_dbContext = dbContext;
_logger = logger;
}
public async Task<Guid> ProvisionTenantAsync(CreateTenantRequest request)
{
await using var transaction = await _dbContext.Database.BeginTransactionAsync();
try
{
// Step 1: Create tenant record
var tenant = new Tenant
{
Name = request.Name,
Slug = request.Slug,
Status = "provisioning",
Tier = request.Tier,
ContactEmail = request.ContactEmail
};
_dbContext.Tenants.Add(tenant);
await _dbContext.SaveChangesAsync();
// Step 2: Create subscription
var subscription = new TenantSubscription
{
TenantId = tenant.Id,
PlanId = request.PlanId,
Status = "active",
BillingCycle = "monthly",
PriceCents = 9900,
CurrentPeriodStart = DateTime.UtcNow,
CurrentPeriodEnd = DateTime.UtcNow.AddMonths(1)
};
_dbContext.TenantSubscriptions.Add(subscription);
await _dbContext.SaveChangesAsync();
// Step 3: Seed default data (no schema creation needed!)
await _dbContext.Database.ExecuteSqlRawAsync(
"SELECT seed_tenant_data({0})", tenant.Id);
// Step 4: Activate
tenant.Status = "active";
await _dbContext.SaveChangesAsync();
await transaction.CommitAsync();
_logger.LogInformation(
"Tenant provisioned: {Name} ({Id})", tenant.Name, tenant.Id);
return tenant.Id;
}
catch (Exception ex)
{
await transaction.RollbackAsync();
_logger.LogError(ex, "Tenant provisioning failed for {Slug}", request.Slug);
throw;
}
}
}
RLS vs Schema-Per-Tenant Provisioning Comparison
| Step | Schema-Per-Tenant | RLS (Current) |
|---|---|---|
| 1. Create tenant record | INSERT into shared.tenants | INSERT into shared.tenants |
| 2. Create schema | CREATE SCHEMA tenant_XXXX | Not needed |
| 3. Create tables | Run DDL for 45 tables in new schema | Not needed (tables already exist) |
| 4. Set permissions | GRANT on new schema | Not needed (RLS handles isolation) |
| 5. Seed data | INSERT into tenant_XXXX.roles etc. | INSERT into roles with tenant_id |
| 6. Activate | UPDATE status | UPDATE status |
| Total time | ~5-10 seconds | ~500ms |
| Rollback complexity | DROP SCHEMA CASCADE | DELETE WHERE tenant_id = X |
7.7 Table Count by Domain
| Domain | Tables | Shared | Tenant (public + RLS) |
|---|---|---|---|
| 1. Products & Variants | 5 | 0 | 5 |
| 2. Categories & Tags | 5 | 0 | 5 |
| 3. Product Attributes | 4 | 0 | 4 |
| 4. Inventory & Locations | 3 | 0 | 3 |
| 5. Tax Configuration | 2 | 0 | 2 |
| 6. Orders & Customers | 3 | 0 | 3 |
| 7. User Preferences | 1 | 0 | 1 |
| 8. Tenant Management | 3 | 3 | 0 |
| 9. Auth & Authorization | 7 | 3 | 4 |
| 10. Offline Sync | 4 | 0 | 4 |
| 11. Cash Drawer | 6 | 0 | 6 |
| 12. Payment Processing | 4 | 0 | 4 |
| 13. RFID Module | 7 | 0 | 7 |
| TOTAL | 51 | 6 | 45 |
7.8 Quick Reference: Table List
Shared Schema Tables (6)
shared.tenants
shared.tenant_subscriptions
shared.tenant_modules
shared.users
shared.user_sessions
shared.password_resets
Public Schema Tables (45, all with tenant_id + RLS)
-- Domain 1: Products (5)
products, variants, product_collection, product_tag, brands
-- Domain 2: Categories (5)
categories, collections, tags (plus product_collection and product_tag, listed under Domain 1)
-- Domain 3: Attributes (4)
product_groups, genders, origins, fabrics
-- Domain 4: Inventory (3)
locations, inventory_levels, inventory_transactions
-- Domain 5: Tax (2)
taxes, location_tax
-- Domain 6: Orders (3)
customers, orders, order_items
-- Domain 7: Preferences (1)
item_view_settings
-- Domain 9: Auth (4 tenant-specific)
roles, role_permissions, tenant_users, tenant_settings
-- Domain 10: Sync (4)
devices, sync_queue, sync_conflicts, sync_checkpoints
-- Domain 11: Cash (6)
shifts, cash_drawers, cash_counts, cash_movements, cash_drops, cash_pickups
-- Domain 12: Payment (4)
payment_terminals, payment_attempts, payment_batches, payment_reconciliation
-- Domain 13: RFID (7)
rfid_config, rfid_printers, rfid_print_templates, rfid_print_jobs,
rfid_tags, rfid_scan_sessions, rfid_scan_events
Index Naming Convention
All tenant-scoped tables follow this composite index pattern:
-- Primary tenant isolation index
CREATE INDEX idx_<table>_tenant ON <table>(tenant_id);
-- Composite indexes for common queries (tenant_id always first)
CREATE INDEX idx_<table>_tenant_<column> ON <table>(tenant_id, <column>);
-- Unique constraints include tenant_id for proper scoping. A given column gets
-- either the plain or the unique form, never both: the shared name pattern
-- means the two index names would collide on the same column.
CREATE UNIQUE INDEX idx_<table>_tenant_<column> ON <table>(tenant_id, <column>);
Next Chapter: Chapter 08: Entity Specifications - Complete CREATE TABLE statements for all 51 tables organized by domain.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | III - Database |
| Chapter | 07 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 08: Entity Specifications
Complete SQL for All 51 Tables
8.1 Overview
This chapter provides complete CREATE TABLE statements for all 51 tables in the POS Platform database. Each table includes:
- Column definitions with data types
- Constraints (PRIMARY KEY, FOREIGN KEY, UNIQUE, CHECK)
- Default values
- Comments explaining purpose
Usage: Copy-paste these statements to create the database schema.
Note: This chapter combines complete SQL CREATE TABLE statements with Domain Model entity field references (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C). Domain Model sections provide business context, validation rules, and field descriptions. SQL sections provide implementation-ready schema.
Domain 1-2: Catalog (Products, Categories, Tags)
Domain Model: Product
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| PRODUCT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| sku | String(50) | Unique stock keeping unit |
| barcode | String(50) | UPC/EAN barcode (nullable) |
| name | String(255) | Display name |
| description | Text | Full description |
| category_id | UUID | FK to Category |
| brand | String(100) | Brand name |
| vendor | String(100) | Supplier/vendor name |
| cost | Decimal | Wholesale cost |
| price | Decimal | Retail price |
| compare_at_price| Decimal | Original price (for discounts) |
| tax_code | String(20) | Tax category code |
| is_taxable | Boolean | Subject to sales tax |
| track_inventory | Boolean | Enable inventory tracking |
| is_active | Boolean | Available for sale |
| shopify_id | String(50) | Shopify product ID (if synced) |
| image_url | String(500) | Primary product image |
| weight | Decimal | Weight in default unit |
| weight_unit | String(10) | lb, kg, oz, g |
| tags | String[] | Searchable tags |
| metadata | JSONB | Custom attributes |
| created_at | Timestamp | Creation timestamp |
| updated_at | Timestamp | Last update timestamp |
+------------------------------------------------------------------+
Domain Model: ProductVariant
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| PRODUCT_VARIANT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| product_id | UUID | FK to Product (required) |
| sku | String(50) | Unique variant SKU |
| barcode | String(50) | Variant barcode |
| name | String(255) | Variant name (e.g., "Large/Blue") |
| option1_name | String(50) | First option name (e.g., "Size") |
| option1_value | String(100) | First option value (e.g., "Large")|
| option2_name | String(50) | Second option name |
| option2_value | String(100) | Second option value |
| option3_name | String(50) | Third option name |
| option3_value | String(100) | Third option value |
| cost | Decimal | Variant cost (overrides product) |
| price | Decimal | Variant price (overrides product) |
| weight | Decimal | Variant weight |
| image_url | String(500) | Variant-specific image |
| shopify_variant_id | String(50) | Shopify variant ID |
| is_active | Boolean | Available for sale |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: Category
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| CATEGORY |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| name | String(255) | Category name |
| slug | String(255) | URL-friendly identifier |
| parent_id | UUID | FK to parent Category (nullable) |
| description | Text | Category description |
| image_url | String(500) | Category image |
| sort_order | Integer | Display order |
| is_active | Boolean | Show in UI |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: PricingRule
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| PRICING_RULE |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| name | String(255) | Rule name |
| type | String(50) | percentage, fixed, buy_x_get_y |
| value | Decimal | Discount value or percentage |
| product_id | UUID | Apply to specific product |
| category_id | UUID | Apply to category |
| customer_group | String(50) | Apply to customer group |
| min_quantity | Integer | Minimum quantity required |
| start_date | Timestamp | Rule start date |
| end_date | Timestamp | Rule end date |
| priority | Integer | Rule priority (higher wins) |
| is_active | Boolean | Rule is enabled |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
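The selection semantics implied by the PricingRule fields (active rules filtered by min_quantity, highest priority wins) can be sketched in Python. This is a sketch under assumptions: the buy_x_get_y type is omitted, and every name here is ours, not part of the schema.

```python
from dataclasses import dataclass
from decimal import Decimal
from typing import List

@dataclass
class PricingRule:
    name: str
    type: str        # 'percentage' or 'fixed' ('buy_x_get_y' omitted in this sketch)
    value: Decimal
    priority: int
    is_active: bool = True
    min_quantity: int = 1

def apply_best_rule(unit_price: Decimal, quantity: int,
                    rules: List[PricingRule]) -> Decimal:
    # Eligible = active rules whose minimum quantity is met.
    eligible = [r for r in rules if r.is_active and quantity >= r.min_quantity]
    if not eligible:
        return unit_price
    # "Rule priority (higher wins)" per the field table above.
    rule = max(eligible, key=lambda r: r.priority)
    if rule.type == 'percentage':
        discounted = unit_price * (Decimal('1') - rule.value / Decimal('100'))
    else:  # 'fixed' amount off
        discounted = unit_price - rule.value
    # Never discount below zero; round to cents.
    return max(discounted, Decimal('0')).quantize(Decimal('0.01'))
```

With a 10%-off rule at priority 1 and a $5-off-3+ rule at priority 2, a quantity of 3 picks the priority-2 rule even though both are eligible.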
brands
-- Brand/manufacturer reference data
CREATE TABLE brands (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
logo_url VARCHAR(500),
description TEXT,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT brands_tenant_name_unique UNIQUE (tenant_id, name)
);
CREATE INDEX idx_brands_tenant ON brands(tenant_id);
CREATE INDEX idx_brands_active ON brands(tenant_id, is_active) WHERE is_active = TRUE;
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE brands IS 'Brand/manufacturer reference data for product categorization';
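Each `-- RLS:` annotation in this chapter stands for a policy pair like the following, shown here for brands. The policy name is illustrative; the predicate is exactly the one in the annotation.

```sql
ALTER TABLE brands ENABLE ROW LEVEL SECURITY;

CREATE POLICY brands_tenant_isolation ON brands
    USING (tenant_id = current_setting('app.current_tenant')::uuid)
    WITH CHECK (tenant_id = current_setting('app.current_tenant')::uuid);
```

USING filters rows on read; WITH CHECK rejects writes that would insert or move a row into another tenant.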
product_groups
-- High-level product type categorization
CREATE TABLE product_groups (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(50) NOT NULL,
description TEXT,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT product_groups_tenant_name_unique UNIQUE (tenant_id, name)
);
CREATE INDEX idx_product_groups_tenant ON product_groups(tenant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE product_groups IS 'High-level product types (Tops, Bottoms, Accessories, etc.)';
genders
-- Target demographic for products
CREATE TABLE genders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(20) NOT NULL,
CONSTRAINT genders_tenant_name_unique UNIQUE (tenant_id, name)
);
CREATE INDEX idx_genders_tenant ON genders(tenant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE genders IS 'Target demographic (Men, Women, Unisex, Kids)';
origins
-- Country of origin for compliance tracking
CREATE TABLE origins (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
country VARCHAR(100) NOT NULL,
code VARCHAR(3),
CONSTRAINT origins_tenant_country_unique UNIQUE (tenant_id, country),
CONSTRAINT origins_tenant_code_unique UNIQUE (tenant_id, code)
);
CREATE INDEX idx_origins_tenant ON origins(tenant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE origins IS 'Country of origin for compliance and import tracking';
COMMENT ON COLUMN origins.code IS 'ISO 3166-1 alpha-3 country code';
fabrics
-- Material composition and care instructions
CREATE TABLE fabrics (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
care_instructions TEXT,
CONSTRAINT fabrics_tenant_name_unique UNIQUE (tenant_id, name)
);
CREATE INDEX idx_fabrics_tenant ON fabrics(tenant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE fabrics IS 'Fabric/material composition (100% Cotton, Polyester Blend, etc.)';
products
-- Master product record containing shared attributes
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
sku VARCHAR(50) NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
brand_id UUID REFERENCES brands(id) ON DELETE SET NULL,
product_group_id UUID REFERENCES product_groups(id) ON DELETE SET NULL,
gender_id UUID REFERENCES genders(id) ON DELETE SET NULL,
origin_id UUID REFERENCES origins(id) ON DELETE SET NULL,
fabric_id UUID REFERENCES fabrics(id) ON DELETE SET NULL,
base_price DECIMAL(10,2) NOT NULL,
cost_price DECIMAL(10,2) NOT NULL,
is_active BOOLEAN DEFAULT TRUE,
has_variants BOOLEAN DEFAULT FALSE,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT products_price_positive CHECK (base_price >= 0),
CONSTRAINT products_cost_positive CHECK (cost_price >= 0)
);
-- Indexes
CREATE UNIQUE INDEX idx_products_tenant_sku ON products(tenant_id, sku) WHERE deleted_at IS NULL;
CREATE INDEX idx_products_tenant ON products(tenant_id);
CREATE INDEX idx_products_brand ON products(tenant_id, brand_id);
CREATE INDEX idx_products_group ON products(tenant_id, product_group_id);
CREATE INDEX idx_products_active ON products(tenant_id, is_active) WHERE is_active = TRUE AND deleted_at IS NULL;
CREATE INDEX idx_products_deleted ON products(deleted_at) WHERE deleted_at IS NOT NULL;
CREATE INDEX idx_products_name_search ON products USING gin(to_tsvector('english', name));
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE products IS 'Master product catalog with shared attributes';
COMMENT ON COLUMN products.has_variants IS 'TRUE if product has size/color variants; inventory tracked at variant level';
COMMENT ON COLUMN products.deleted_at IS 'Soft delete timestamp (NULL = active)';
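As a usage sketch, the GIN index above serves full-text queries of this shape (the search phrase is illustrative; the RLS policy supplies the tenant filter):

```sql
SELECT id, name
FROM products
WHERE to_tsvector('english', name) @@ plainto_tsquery('english', 'denim jacket')
  AND deleted_at IS NULL
ORDER BY name;
```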
variants
-- Product variations (size, color) with own SKUs and inventory
CREATE TABLE variants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
product_id UUID NOT NULL REFERENCES products(id) ON DELETE CASCADE,
sku VARCHAR(50) NOT NULL,
size VARCHAR(20),
color VARCHAR(50),
price_adjustment DECIMAL(10,2) DEFAULT 0.00,
weight DECIMAL(10,3),
barcode VARCHAR(50),
is_active BOOLEAN DEFAULT TRUE,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
-- Indexes (tenant_id included in unique constraints for RLS compatibility)
CREATE UNIQUE INDEX idx_variants_tenant_sku ON variants(tenant_id, sku) WHERE deleted_at IS NULL;
CREATE UNIQUE INDEX idx_variants_tenant_barcode ON variants(tenant_id, barcode)
WHERE barcode IS NOT NULL AND deleted_at IS NULL;
CREATE INDEX idx_variants_tenant ON variants(tenant_id);
CREATE INDEX idx_variants_product ON variants(tenant_id, product_id);
CREATE INDEX idx_variants_size ON variants(size) WHERE size IS NOT NULL;
CREATE INDEX idx_variants_color ON variants(color) WHERE color IS NOT NULL;
CREATE INDEX idx_variants_deleted ON variants(deleted_at) WHERE deleted_at IS NOT NULL;
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE variants IS 'Product variants with size/color combinations and unique SKUs';
COMMENT ON COLUMN variants.price_adjustment IS 'Price modifier from base (can be negative for discounts)';
COMMENT ON COLUMN variants.barcode IS 'UPC/EAN barcode for POS scanning';
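A minimal sketch of how a client derives a variant's sell price from products.base_price and variants.price_adjustment; the function name is ours, not part of the schema.

```python
from decimal import Decimal

def effective_price(base_price: Decimal,
                    price_adjustment: Decimal = Decimal('0.00')) -> Decimal:
    """Variant sell price: product base_price plus the variant's adjustment.

    Negative adjustments discount below the base, matching the
    variants.price_adjustment column comment.
    """
    return (base_price + price_adjustment).quantize(Decimal('0.01'))
```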
categories
-- Hierarchical product categories
CREATE TABLE categories (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
parent_id INT REFERENCES categories(id) ON DELETE SET NULL,
description TEXT,
display_order INT DEFAULT 0,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT categories_tenant_name_unique UNIQUE (tenant_id, name)
);
-- Indexes
CREATE INDEX idx_categories_tenant ON categories(tenant_id);
CREATE INDEX idx_categories_parent ON categories(parent_id);
CREATE INDEX idx_categories_display ON categories(display_order);
CREATE INDEX idx_categories_active ON categories(tenant_id, is_active) WHERE is_active = TRUE;
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE categories IS 'Hierarchical product categories (Clothing > Mens > Shirts)';
COMMENT ON COLUMN categories.parent_id IS 'Self-reference for hierarchy; NULL = root category';
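One way to resolve the full path of each category from the parent_id self-reference is a recursive CTE; the RLS policy scopes the rows to the current tenant, so no explicit tenant filter is needed.

```sql
WITH RECURSIVE category_path AS (
    -- Anchor: root categories (no parent)
    SELECT id, name, parent_id, name::text AS path
    FROM categories
    WHERE parent_id IS NULL
    UNION ALL
    -- Recurse: append each child to its parent's path
    SELECT c.id, c.name, c.parent_id, cp.path || ' > ' || c.name
    FROM categories c
    JOIN category_path cp ON c.parent_id = cp.id
)
SELECT path FROM category_path ORDER BY path;
```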
collections
-- Marketing/seasonal product groupings
CREATE TABLE collections (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
description TEXT,
image_url VARCHAR(500),
is_active BOOLEAN DEFAULT TRUE,
start_date TIMESTAMP,
end_date TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT collections_tenant_name_unique UNIQUE (tenant_id, name),
CONSTRAINT collections_date_order CHECK (end_date IS NULL OR start_date IS NULL OR end_date > start_date)
);
-- Indexes
CREATE INDEX idx_collections_tenant ON collections(tenant_id);
CREATE INDEX idx_collections_active ON collections(is_active, start_date, end_date);
CREATE INDEX idx_collections_current ON collections(start_date, end_date)
WHERE is_active = TRUE;
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE collections IS 'Marketing collections (Summer 2025, Clearance, New Arrivals)';
tags
-- Flexible product tagging
CREATE TABLE tags (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(50) NOT NULL,
color VARCHAR(7),
CONSTRAINT tags_tenant_name_unique UNIQUE (tenant_id, name),
CONSTRAINT tags_color_hex CHECK (color IS NULL OR color ~ '^#[0-9A-Fa-f]{6}$')
);
CREATE INDEX idx_tags_tenant ON tags(tenant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE tags IS 'Freeform product tags for quick filtering';
COMMENT ON COLUMN tags.color IS 'Hex color code for UI display (#FF5733)';
product_collection
-- Junction table: products to collections (many-to-many)
CREATE TABLE product_collection (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
product_id UUID NOT NULL REFERENCES products(id) ON DELETE CASCADE,
collection_id INT NOT NULL REFERENCES collections(id) ON DELETE CASCADE,
display_order INT DEFAULT 0,
CONSTRAINT product_collection_unique UNIQUE (tenant_id, product_id, collection_id)
);
CREATE INDEX idx_product_collection_tenant ON product_collection(tenant_id);
CREATE INDEX idx_product_collection_product ON product_collection(product_id);
CREATE INDEX idx_product_collection_collection ON product_collection(collection_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE product_collection IS 'Links products to marketing collections';
product_tag
-- Junction table: products to tags (many-to-many)
CREATE TABLE product_tag (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
product_id UUID NOT NULL REFERENCES products(id) ON DELETE CASCADE,
tag_id INT NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
CONSTRAINT product_tag_unique UNIQUE (tenant_id, product_id, tag_id)
);
CREATE INDEX idx_product_tag_tenant ON product_tag(tenant_id);
CREATE INDEX idx_product_tag_product ON product_tag(product_id);
CREATE INDEX idx_product_tag_tag ON product_tag(tag_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE product_tag IS 'Links products to tags for flexible categorization';
Domain 3: Inventory
Domain Model: InventoryItem
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| INVENTORY_ITEM |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| product_id | UUID | FK to Product |
| variant_id | UUID | FK to ProductVariant |
| location_id | UUID | FK to Location (required) |
| quantity_on_hand| Integer | Current stock quantity |
| quantity_committed | Integer | Reserved for pending orders |
| quantity_available | Integer | Calculated: on_hand - committed |
| quantity_incoming | Integer | Expected from purchase orders |
| reorder_point | Integer | Alert when below this level |
| reorder_quantity| Integer | Default reorder amount |
| bin_location | String(50) | Physical bin/shelf location |
| last_counted_at | Timestamp | Last physical count |
| last_received_at| Timestamp | Last inventory receipt |
| last_sold_at | Timestamp | Last sale of this item |
| created_at | Timestamp | Creation timestamp |
| updated_at | Timestamp | Last update |
+------------------------------------------------------------------+
Domain Model: InventoryAdjustment
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| INVENTORY_ADJUSTMENT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| adjustment_number | String(50)| Human-readable ID |
| location_id | UUID | FK to Location |
| employee_id | UUID | FK to Employee (who adjusted) |
| reason | String(50) | count, damage, theft, return, etc.|
| notes | Text | Adjustment notes |
| status | String(20) | draft, pending, completed |
| created_at | Timestamp | Adjustment timestamp |
| completed_at | Timestamp | When finalized |
+------------------------------------------------------------------+
Domain Model: InventoryTransfer
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| INVENTORY_TRANSFER |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| transfer_number | String(50) | Human-readable ID |
| from_location_id| UUID | FK to source Location |
| to_location_id | UUID | FK to destination Location |
| employee_id | UUID | FK to Employee (initiator) |
| status | String(20) | draft, pending, in_transit, received |
| notes | Text | Transfer notes |
| shipped_at | Timestamp | When shipped |
| received_at | Timestamp | When received |
| received_by | UUID | FK to Employee (receiver) |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
locations
-- Physical stores, warehouses, and fulfillment centers
CREATE TABLE locations (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
code VARCHAR(10) NOT NULL,
name VARCHAR(100) NOT NULL,
type VARCHAR(20) NOT NULL,
address VARCHAR(255),
city VARCHAR(100),
state VARCHAR(50),
postal_code VARCHAR(20),
phone VARCHAR(20),
timezone VARCHAR(50) NOT NULL DEFAULT 'America/New_York',
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT locations_tenant_code_unique UNIQUE (tenant_id, code),
CONSTRAINT locations_type_check CHECK (type IN ('store', 'warehouse', 'online', 'popup'))
);
CREATE INDEX idx_locations_tenant ON locations(tenant_id);
CREATE INDEX idx_locations_type ON locations(type);
CREATE INDEX idx_locations_active ON locations(is_active) WHERE is_active = TRUE;
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE locations IS 'Physical and virtual locations for inventory tracking';
COMMENT ON COLUMN locations.code IS 'Short code (GM, HM, LM, NM, HQ)';
COMMENT ON COLUMN locations.type IS 'Location type: store, warehouse, online, popup';
inventory_levels
-- Current stock quantity per variant per location
CREATE TABLE inventory_levels (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
variant_id UUID NOT NULL REFERENCES variants(id) ON DELETE CASCADE,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE CASCADE,
quantity_on_hand INT DEFAULT 0,
quantity_reserved INT DEFAULT 0,
quantity_available INT GENERATED ALWAYS AS (quantity_on_hand - quantity_reserved) STORED,
reorder_point INT DEFAULT 0,
reorder_quantity INT DEFAULT 0,
last_counted TIMESTAMP,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- NOTE: a CHECK constraint cannot contain a subquery, so the per-tenant
-- 'allow_negative_inventory' setting (tenant_settings) must be enforced by a
-- BEFORE INSERT/UPDATE trigger or in the application layer, not here
CONSTRAINT inventory_levels_reserved_check CHECK (quantity_reserved >= 0)
);
-- Indexes
CREATE UNIQUE INDEX idx_inventory_levels_lookup ON inventory_levels(tenant_id, variant_id, location_id)
WHERE deleted_at IS NULL;
CREATE INDEX idx_inventory_levels_tenant ON inventory_levels(tenant_id);
CREATE INDEX idx_inventory_levels_location ON inventory_levels(location_id) WHERE deleted_at IS NULL;
CREATE INDEX idx_inventory_levels_low_stock ON inventory_levels(location_id, quantity_on_hand)
WHERE quantity_on_hand <= reorder_point AND deleted_at IS NULL;
CREATE INDEX idx_inventory_levels_variant ON inventory_levels(variant_id);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE inventory_levels IS 'Current inventory quantities per variant per location';
COMMENT ON COLUMN inventory_levels.quantity_available IS 'Computed: on_hand - reserved';
COMMENT ON COLUMN inventory_levels.reorder_point IS 'Low stock alert threshold';
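The generated quantity_available column and the low-stock partial-index predicate amount to this arithmetic; a sketch in Python with names of our choosing.

```python
def quantity_available(on_hand: int, reserved: int) -> int:
    # Mirrors the GENERATED ALWAYS AS (quantity_on_hand - quantity_reserved) column.
    return on_hand - reserved

def is_low_stock(on_hand: int, reorder_point: int) -> bool:
    # Mirrors the idx_inventory_levels_low_stock partial-index predicate:
    # quantity_on_hand <= reorder_point.
    return on_hand <= reorder_point
```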
inventory_transactions
-- Audit log for all inventory movements (append-only)
CREATE TABLE inventory_transactions (
id BIGSERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
variant_id UUID NOT NULL REFERENCES variants(id) ON DELETE RESTRICT,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
transaction_type VARCHAR(20) NOT NULL,
quantity_change INT NOT NULL,
quantity_before INT NOT NULL,
quantity_after INT NOT NULL,
reference_type VARCHAR(50),
reference_id INT,
notes TEXT,
user_id UUID REFERENCES shared.users(id) ON DELETE SET NULL,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT inventory_trans_type_check CHECK (transaction_type IN (
'sale', 'return', 'purchase', 'transfer_in', 'transfer_out',
'adjustment', 'count', 'damage', 'theft', 'found'
)),
CONSTRAINT inventory_trans_math CHECK (quantity_after = quantity_before + quantity_change)
);
-- Indexes (BRIN for time-series, B-tree for lookups)
CREATE INDEX idx_inventory_trans_tenant ON inventory_transactions(tenant_id);
CREATE INDEX idx_inventory_trans_date ON inventory_transactions USING BRIN (created_at);
CREATE INDEX idx_inventory_trans_variant ON inventory_transactions(variant_id, created_at DESC);
CREATE INDEX idx_inventory_trans_location ON inventory_transactions(location_id, created_at DESC);
CREATE INDEX idx_inventory_trans_reference ON inventory_transactions(reference_type, reference_id)
WHERE reference_type IS NOT NULL;
CREATE INDEX idx_inventory_trans_type ON inventory_transactions(transaction_type, created_at DESC);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE inventory_transactions IS 'Immutable audit log of all inventory changes';
COMMENT ON COLUMN inventory_transactions.transaction_type IS 'Type of movement: sale, return, purchase, transfer, adjustment';
COMMENT ON COLUMN inventory_transactions.reference_type IS 'Source document type (order, transfer, adjustment)';
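A sketch of how an application might assemble a ledger row that satisfies the inventory_trans_type_check and inventory_trans_math constraints before inserting it; the helper name and dict shape are illustrative.

```python
ALLOWED_TYPES = {
    'sale', 'return', 'purchase', 'transfer_in', 'transfer_out',
    'adjustment', 'count', 'damage', 'theft', 'found',
}

def next_transaction(quantity_before: int, quantity_change: int,
                     transaction_type: str) -> dict:
    """Build an append-only ledger row for inventory_transactions."""
    if transaction_type not in ALLOWED_TYPES:
        # Mirrors inventory_trans_type_check
        raise ValueError(f'unknown transaction_type: {transaction_type}')
    return {
        'transaction_type': transaction_type,
        'quantity_before': quantity_before,
        'quantity_change': quantity_change,
        # Mirrors inventory_trans_math: after = before + change
        'quantity_after': quantity_before + quantity_change,
    }
```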
Domain 4: Sales
Domain Model: Sale
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| SALE |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| sale_number | String(50) | Human-readable sale ID |
| location_id | UUID | FK to Location (required) |
| register_id | String(20) | Register identifier |
| employee_id | UUID | FK to Employee (cashier) |
| customer_id | UUID | FK to Customer (nullable) |
| status | String(20) | draft, completed, voided, refunded|
| subtotal | Decimal | Sum of line items before tax |
| discount_total | Decimal | Total discounts applied |
| tax_total | Decimal | Total tax amount |
| total | Decimal | Final total (subtotal-discount+tax)|
| payment_status | String(20) | pending, partial, paid, refunded |
| source | String(20) | pos, online, mobile |
| notes | Text | Sale notes |
| voided_at | Timestamp | When sale was voided |
| voided_by | UUID | Employee who voided |
| void_reason | Text | Reason for void |
| created_at | Timestamp | Sale timestamp |
| updated_at | Timestamp | Last update |
+------------------------------------------------------------------+
Domain Model: SaleLineItem
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| SALE_LINE_ITEM |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| sale_id | UUID | FK to Sale (required) |
| product_id | UUID | FK to Product |
| variant_id | UUID | FK to ProductVariant |
| sku | String(50) | SKU at time of sale |
| name | String(255) | Product name at time of sale |
| quantity | Integer | Quantity sold |
| unit_price | Decimal | Price per unit |
| unit_cost | Decimal | Cost per unit (for profit calc) |
| discount_amount | Decimal | Discount on this line |
| discount_reason | String(100) | Reason for discount |
| tax_amount | Decimal | Tax on this line |
| total | Decimal | Line total |
| is_refunded | Boolean | Line was refunded |
| refunded_at | Timestamp | When refunded |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: Payment
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| PAYMENT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| sale_id | UUID | FK to Sale (required) |
| payment_method | String(50) | cash, credit, debit, gift, store_credit |
| amount | Decimal | Payment amount |
| tendered | Decimal | Amount tendered (for cash) |
| change_given | Decimal | Change returned |
| reference | String(100) | Card last 4, check #, etc. |
| card_type | String(20) | visa, mastercard, amex, discover |
| auth_code | String(50) | Authorization code |
| status | String(20) | pending, completed, failed, refunded |
| gateway_response| JSONB | Full payment gateway response |
| created_at | Timestamp | Payment timestamp |
+------------------------------------------------------------------+
Domain Model: Refund
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| REFUND |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| sale_id | UUID | FK to original Sale |
| refund_number | String(50) | Human-readable refund ID |
| employee_id | UUID | FK to Employee (who processed) |
| reason | String(100) | Refund reason |
| subtotal | Decimal | Refund subtotal |
| tax_refunded | Decimal | Tax refunded |
| total | Decimal | Total refund amount |
| refund_method | String(50) | original, cash, store_credit |
| notes | Text | Additional notes |
| created_at | Timestamp | Refund timestamp |
+------------------------------------------------------------------+
Domain Model: RefundLineItem
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| REFUND_LINE_ITEM |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| refund_id | UUID | FK to Refund |
| sale_line_item_id | UUID | FK to original SaleLineItem |
| quantity | Integer | Quantity refunded |
| amount | Decimal | Refund amount for this line |
| restock | Boolean | Add back to inventory |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
customers
-- Customer profiles with loyalty tracking
CREATE TABLE customers (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
loyalty_number VARCHAR(20),
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
email VARCHAR(255),
phone VARCHAR(20),
address TEXT,
loyalty_points INT DEFAULT 0,
total_spent DECIMAL(12,2) DEFAULT 0,
visit_count INT DEFAULT 0,
first_visit TIMESTAMP,
last_visit TIMESTAMP,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
anonymized_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT customers_points_positive CHECK (loyalty_points >= 0),
CONSTRAINT customers_spent_positive CHECK (total_spent >= 0)
);
-- Indexes (uniques scoped to tenant_id per the Chapter 07 convention)
CREATE UNIQUE INDEX idx_customers_loyalty ON customers(tenant_id, loyalty_number)
WHERE loyalty_number IS NOT NULL AND deleted_at IS NULL;
CREATE UNIQUE INDEX idx_customers_email ON customers(tenant_id, email)
WHERE email IS NOT NULL AND deleted_at IS NULL;
CREATE INDEX idx_customers_tenant ON customers(tenant_id);
CREATE INDEX idx_customers_name ON customers(last_name, first_name) WHERE deleted_at IS NULL;
CREATE INDEX idx_customers_phone ON customers(phone) WHERE phone IS NOT NULL AND deleted_at IS NULL;
CREATE INDEX idx_customers_last_visit ON customers(last_visit DESC);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE customers IS 'Customer profiles with loyalty program tracking';
COMMENT ON COLUMN customers.anonymized_at IS 'GDPR: timestamp when PII was scrubbed';
orders
-- Transaction header with payment and status
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
order_number VARCHAR(20) NOT NULL,
customer_id INT REFERENCES customers(id) ON DELETE SET NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
user_id UUID REFERENCES shared.users(id) ON DELETE SET NULL,
shift_id INT REFERENCES shifts(id) ON DELETE SET NULL,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
subtotal DECIMAL(12,2) NOT NULL,
tax_amount DECIMAL(12,2) NOT NULL,
discount_amount DECIMAL(12,2) DEFAULT 0,
total_amount DECIMAL(12,2) NOT NULL,
payment_method VARCHAR(20) NOT NULL,
payment_reference VARCHAR(100),
notes TEXT,
deleted_at TIMESTAMP,
deleted_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
void_reason VARCHAR(255),
created_at TIMESTAMP DEFAULT NOW(),
completed_at TIMESTAMP,
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT orders_tenant_number_unique UNIQUE (tenant_id, order_number),
CONSTRAINT orders_status_check CHECK (status IN ('pending', 'completed', 'refunded', 'voided', 'on_hold')),
CONSTRAINT orders_payment_check CHECK (payment_method IN (
'cash', 'credit', 'debit', 'mobile', 'gift_card', 'store_credit', 'split', 'check'
)),
CONSTRAINT orders_amounts_positive CHECK (
subtotal >= 0 AND tax_amount >= 0 AND discount_amount >= 0 AND total_amount >= 0
),
CONSTRAINT orders_total_math CHECK (
total_amount = subtotal + tax_amount - discount_amount
)
);
-- Indexes
CREATE INDEX idx_orders_tenant ON orders(tenant_id);
CREATE INDEX idx_orders_date ON orders(created_at DESC);
CREATE INDEX idx_orders_location ON orders(location_id, created_at DESC);
CREATE INDEX idx_orders_customer ON orders(customer_id) WHERE customer_id IS NOT NULL;
CREATE INDEX idx_orders_shift ON orders(shift_id) WHERE shift_id IS NOT NULL;
CREATE INDEX idx_orders_status ON orders(status, created_at DESC);
CREATE INDEX idx_orders_number ON orders(order_number);
-- RLS: tenant_id = current_setting('app.current_tenant')::uuid
COMMENT ON TABLE orders IS 'Sales transaction headers with payment info';
COMMENT ON COLUMN orders.order_number IS 'Format: LOC-YYYYMMDD-SEQUENCE';
COMMENT ON COLUMN orders.void_reason IS 'Required explanation when status = voided';
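The orders_total_math and orders_amounts_positive constraints correspond to this arithmetic; a Python sketch with a function name of our choosing, useful for computing the header totals before the INSERT ever reaches the database.

```python
from decimal import Decimal

def order_total(subtotal: Decimal, tax_amount: Decimal,
                discount_amount: Decimal = Decimal('0')) -> Decimal:
    # Mirrors orders_total_math: total = subtotal + tax - discount
    total = subtotal + tax_amount - discount_amount
    # Mirrors orders_amounts_positive: every amount must be non-negative
    if min(subtotal, tax_amount, discount_amount, total) < 0:
        raise ValueError('order amounts must be non-negative')
    return total
```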
order_items
-- Line items with snapshots of product data at time of sale
CREATE TABLE order_items (
id SERIAL PRIMARY KEY,
order_id INT NOT NULL REFERENCES orders(id) ON DELETE CASCADE,
variant_id INT NOT NULL REFERENCES variants(id) ON DELETE RESTRICT,
sku VARCHAR(50) NOT NULL,
product_name VARCHAR(255) NOT NULL,
quantity INT NOT NULL,
unit_price DECIMAL(10,2) NOT NULL,
discount_amount DECIMAL(10,2) DEFAULT 0,
tax_amount DECIMAL(10,2) NOT NULL,
line_total DECIMAL(10,2) NOT NULL,
is_returned BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT order_items_quantity_positive CHECK (quantity > 0),
CONSTRAINT order_items_amounts_positive CHECK (
unit_price >= 0 AND discount_amount >= 0 AND tax_amount >= 0
),
CONSTRAINT order_items_total_math CHECK (
line_total = (unit_price * quantity) - discount_amount + tax_amount
)
);
-- Indexes
CREATE INDEX idx_order_items_order ON order_items(order_id);
CREATE INDEX idx_order_items_variant ON order_items(variant_id);
CREATE INDEX idx_order_items_sku ON order_items(sku);
CREATE INDEX idx_order_items_returned ON order_items(order_id) WHERE is_returned = TRUE;
COMMENT ON TABLE order_items IS 'Order line items with point-in-time price snapshots';
COMMENT ON COLUMN order_items.sku IS 'SKU snapshot at time of sale (product may change)';
COMMENT ON COLUMN order_items.product_name IS 'Name snapshot at time of sale';
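The order_items_total_math constraint means every insert must carry internally consistent amounts. A sample row (the IDs, SKU, and prices are illustrative):
```sql
-- (19.99 * 2) - 2.00 + 3.04 = 41.02, so the row passes order_items_total_math
INSERT INTO order_items
    (order_id, variant_id, sku, product_name, quantity,
     unit_price, discount_amount, tax_amount, line_total)
VALUES
    (1001, 42, 'TSHIRT-BLK-M', 'Black T-Shirt (M)', 2,
     19.99, 2.00, 3.04, 41.02);
```
Because sku and product_name are point-in-time snapshots, later edits to the product catalog never alter historical receipts.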
Domain 5: Customer Loyalty & Gift Cards
Domain Model: Customer
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| CUSTOMER |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| customer_number | String(20) | Human-readable customer ID |
| first_name | String(100) | First name |
| last_name | String(100) | Last name |
| email | String(255) | Email address (unique) |
| phone | String(20) | Phone number |
| company | String(255) | Company name |
| date_of_birth | Date | Birthday (for loyalty) |
| tax_exempt | Boolean | Tax exempt status |
| tax_exempt_id | String(50) | Tax exemption certificate |
| notes | Text | Customer notes |
| loyalty_points | Integer | Current loyalty points |
| loyalty_tier | String(20) | bronze, silver, gold, platinum |
| total_spent | Decimal | Lifetime spending |
| visit_count | Integer | Total visits |
| average_order | Decimal | Average order value |
| last_visit_at | Timestamp | Last visit timestamp |
| tags | String[] | Customer tags |
| marketing_consent | Boolean | Opted in for marketing |
| shopify_id | String(50) | Shopify customer ID |
| created_at | Timestamp | Creation timestamp |
| updated_at | Timestamp | Last update |
+------------------------------------------------------------------+
Domain Model: CustomerAddress
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| CUSTOMER_ADDRESS |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| customer_id | UUID | FK to Customer |
| address_type | String(20) | billing, shipping |
| is_default | Boolean | Default address for type |
| first_name | String(100) | Recipient first name |
| last_name | String(100) | Recipient last name |
| company | String(255) | Company name |
| address_line1 | String(255) | Street address line 1 |
| address_line2 | String(255) | Street address line 2 |
| city | String(100) | City |
| state | String(50) | State/Province |
| postal_code | String(20) | ZIP/Postal code |
| country | String(2) | Country code (ISO 3166-1) |
| phone | String(20) | Contact phone |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: StoreCredit
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| STORE_CREDIT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| customer_id | UUID | FK to Customer |
| code | String(50) | Unique credit code |
| original_amount | Decimal | Initial credit amount |
| current_balance | Decimal | Remaining balance |
| reason | String(100) | Reason for credit |
| issued_by | UUID | FK to Employee |
| expires_at | Timestamp | Expiration date (nullable) |
| is_active | Boolean | Credit is usable |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: LoyaltyTransaction
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| LOYALTY_TRANSACTION |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| customer_id | UUID | FK to Customer |
| sale_id | UUID | FK to Sale (if earned from sale) |
| type | String(20) | earn, redeem, adjustment, expire |
| points | Integer | Points (positive or negative) |
| balance_after | Integer | Balance after transaction |
| description | String(255) | Transaction description |
| created_by | UUID | FK to Employee |
| created_at | Timestamp | Transaction timestamp |
+------------------------------------------------------------------+
loyalty_accounts
-- Customer loyalty program accounts
CREATE TABLE loyalty_accounts (
id SERIAL PRIMARY KEY,
customer_id INT NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
tier VARCHAR(20) NOT NULL DEFAULT 'bronze',
points_balance INT DEFAULT 0,
lifetime_points INT DEFAULT 0,
tier_start_date DATE,
tier_expiry_date DATE,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT loyalty_accounts_customer_unique UNIQUE (customer_id),
CONSTRAINT loyalty_tier_check CHECK (tier IN ('bronze', 'silver', 'gold', 'platinum')),
CONSTRAINT loyalty_points_positive CHECK (points_balance >= 0 AND lifetime_points >= 0)
);
CREATE INDEX idx_loyalty_tier ON loyalty_accounts(tier) WHERE is_active = TRUE;
COMMENT ON TABLE loyalty_accounts IS 'Customer loyalty program tier and points tracking';
loyalty_transactions
-- Loyalty points earn/redeem history
CREATE TABLE loyalty_transactions (
id BIGSERIAL PRIMARY KEY,
loyalty_account_id INT NOT NULL REFERENCES loyalty_accounts(id) ON DELETE CASCADE,
order_id INT REFERENCES orders(id) ON DELETE SET NULL,
transaction_type VARCHAR(20) NOT NULL,
points INT NOT NULL,
points_balance_after INT NOT NULL,
description VARCHAR(255),
expires_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT loyalty_trans_type_check CHECK (transaction_type IN (
'earn', 'redeem', 'expire', 'adjust', 'bonus', 'transfer'
))
);
CREATE INDEX idx_loyalty_trans_account ON loyalty_transactions(loyalty_account_id, created_at DESC);
CREATE INDEX idx_loyalty_trans_order ON loyalty_transactions(order_id) WHERE order_id IS NOT NULL;
CREATE INDEX idx_loyalty_trans_expiry ON loyalty_transactions(expires_at)
WHERE expires_at IS NOT NULL AND transaction_type = 'earn';
COMMENT ON TABLE loyalty_transactions IS 'Audit trail of loyalty point changes';
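Earning points touches two tables: the running balance on loyalty_accounts and the audit row in loyalty_transactions. A data-modifying CTE keeps both in one statement (the customer ID, order ID, and point amount are illustrative):
```sql
WITH updated AS (
    UPDATE loyalty_accounts
    SET points_balance  = points_balance + 50,
        lifetime_points = lifetime_points + 50,
        updated_at      = NOW()
    WHERE customer_id = 123
    RETURNING id, points_balance
)
INSERT INTO loyalty_transactions
    (loyalty_account_id, order_id, transaction_type, points,
     points_balance_after, description)
SELECT id, 456, 'earn', 50, points_balance, 'Earned on order'
FROM updated;
```
The UPDATE's row lock serializes concurrent earns, so points_balance_after always agrees with the account row it describes.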
gift_cards
-- Gift card issuance and balance tracking
CREATE TABLE gift_cards (
id SERIAL PRIMARY KEY,
card_number VARCHAR(20) NOT NULL,
pin_hash VARCHAR(255),
initial_balance DECIMAL(10,2) NOT NULL,
current_balance DECIMAL(10,2) NOT NULL,
currency_code CHAR(3) DEFAULT 'USD',
issued_at TIMESTAMP DEFAULT NOW(),
expires_at TIMESTAMP,
issued_by UUID REFERENCES shared.users(id),
issued_location_id INT REFERENCES locations(id),
purchased_order_id INT REFERENCES orders(id),
is_active BOOLEAN DEFAULT TRUE,
deactivated_at TIMESTAMP,
deactivated_reason VARCHAR(255),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT gift_cards_number_unique UNIQUE (card_number),
CONSTRAINT gift_cards_balance_positive CHECK (initial_balance > 0 AND current_balance >= 0)
-- Note: no current_balance <= initial_balance check, because 'reload' transactions
-- may legitimately raise the balance above the amount originally issued
);
CREATE INDEX idx_gift_cards_number ON gift_cards(card_number);
CREATE INDEX idx_gift_cards_active ON gift_cards(is_active, expires_at);
COMMENT ON TABLE gift_cards IS 'Gift card issuance and balance management';
COMMENT ON COLUMN gift_cards.pin_hash IS 'Optional PIN for additional security (hashed)';
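Redeeming a card must be race-free: two registers charging the same card concurrently must not both succeed against the same balance. A sketch using a row lock (the card number, order ID, and amount are illustrative):
```sql
BEGIN;

-- Lock the card row so concurrent redemptions serialize
SELECT id, current_balance
FROM gift_cards
WHERE card_number = 'GC-2026-000123' AND is_active = TRUE
FOR UPDATE;

-- Application verifies current_balance >= 25.00, then:
UPDATE gift_cards
SET current_balance = current_balance - 25.00,
    updated_at = NOW()
WHERE card_number = 'GC-2026-000123';

INSERT INTO gift_card_transactions
    (gift_card_id, order_id, transaction_type, amount, balance_after)
SELECT id, 789, 'redeem', 25.00, current_balance
FROM gift_cards
WHERE card_number = 'GC-2026-000123';

COMMIT;
```
The gift_cards_balance_positive check is a backstop: an overdraft that slips past the application check still fails at the UPDATE.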
gift_card_transactions
-- Gift card usage history
CREATE TABLE gift_card_transactions (
id BIGSERIAL PRIMARY KEY,
gift_card_id INT NOT NULL REFERENCES gift_cards(id) ON DELETE CASCADE,
order_id INT REFERENCES orders(id) ON DELETE SET NULL,
transaction_type VARCHAR(20) NOT NULL,
amount DECIMAL(10,2) NOT NULL,
balance_after DECIMAL(10,2) NOT NULL,
location_id INT REFERENCES locations(id),
user_id UUID REFERENCES shared.users(id),
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT gift_card_trans_type CHECK (transaction_type IN (
'issue', 'redeem', 'reload', 'refund', 'adjust', 'expire'
)),
CONSTRAINT gift_card_trans_amount CHECK (amount > 0)
);
CREATE INDEX idx_gift_card_trans_card ON gift_card_transactions(gift_card_id, created_at DESC);
CREATE INDEX idx_gift_card_trans_order ON gift_card_transactions(order_id) WHERE order_id IS NOT NULL;
COMMENT ON TABLE gift_card_transactions IS 'Audit trail of gift card balance changes';
Domain 6-7: Returns & Reporting
returns
-- Return/exchange header
CREATE TABLE returns (
id SERIAL PRIMARY KEY,
return_number VARCHAR(20) NOT NULL,
original_order_id INT NOT NULL REFERENCES orders(id) ON DELETE RESTRICT,
customer_id INT REFERENCES customers(id) ON DELETE SET NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
user_id UUID REFERENCES shared.users(id) ON DELETE SET NULL,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
return_type VARCHAR(20) NOT NULL,
subtotal DECIMAL(12,2) NOT NULL,
tax_amount DECIMAL(12,2) NOT NULL,
refund_amount DECIMAL(12,2) NOT NULL,
refund_method VARCHAR(20) NOT NULL,
reason VARCHAR(255),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW(),
completed_at TIMESTAMP,
CONSTRAINT returns_number_unique UNIQUE (return_number),
CONSTRAINT returns_status_check CHECK (status IN ('pending', 'approved', 'completed', 'rejected')),
CONSTRAINT returns_type_check CHECK (return_type IN ('refund', 'exchange', 'store_credit')),
CONSTRAINT returns_method_check CHECK (refund_method IN (
'original_payment', 'cash', 'store_credit', 'gift_card'
))
);
CREATE INDEX idx_returns_order ON returns(original_order_id);
CREATE INDEX idx_returns_date ON returns(created_at DESC);
CREATE INDEX idx_returns_status ON returns(status);
COMMENT ON TABLE returns IS 'Return and exchange transaction headers';
return_items
-- Individual items being returned
CREATE TABLE return_items (
id SERIAL PRIMARY KEY,
return_id INT NOT NULL REFERENCES returns(id) ON DELETE CASCADE,
order_item_id INT NOT NULL REFERENCES order_items(id) ON DELETE RESTRICT,
variant_id INT NOT NULL REFERENCES variants(id) ON DELETE RESTRICT,
quantity INT NOT NULL,
unit_price DECIMAL(10,2) NOT NULL,
refund_amount DECIMAL(10,2) NOT NULL,
reason VARCHAR(50),
condition VARCHAR(20) DEFAULT 'sellable',
restocked BOOLEAN DEFAULT FALSE,
restocked_location_id INT REFERENCES locations(id),
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT return_items_quantity_positive CHECK (quantity > 0),
CONSTRAINT return_items_condition_check CHECK (condition IN (
'sellable', 'damaged', 'defective', 'other'
))
);
CREATE INDEX idx_return_items_return ON return_items(return_id);
CREATE INDEX idx_return_items_variant ON return_items(variant_id);
COMMENT ON TABLE return_items IS 'Individual items in a return transaction';
reports (User Preferences)
-- Saved report configurations
CREATE TABLE reports (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
report_type VARCHAR(50) NOT NULL,
description TEXT,
query_config JSONB NOT NULL,
schedule_config JSONB,
is_system BOOLEAN DEFAULT FALSE,
is_public BOOLEAN DEFAULT FALSE,
created_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_reports_type ON reports(report_type);
CREATE INDEX idx_reports_public ON reports(is_public) WHERE is_public = TRUE;
COMMENT ON TABLE reports IS 'Saved report configurations and schedules';
item_view_settings (User Preferences)
-- Personalized view preferences for inventory screens
CREATE TABLE item_view_settings (
id SERIAL PRIMARY KEY,
user_id UUID NOT NULL REFERENCES shared.users(id) ON DELETE CASCADE,
view_type VARCHAR(10) DEFAULT 'list',
visible_columns JSONB,
sort_preferences JSONB,
filter_defaults JSONB,
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT item_view_settings_user_unique UNIQUE (user_id),
CONSTRAINT item_view_settings_type_check CHECK (view_type IN ('list', 'grid', 'compact'))
);
COMMENT ON TABLE item_view_settings IS 'User-specific inventory view preferences';
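Because each user has at most one row (item_view_settings_user_unique), saving preferences is a natural upsert; the UUID and column list below are illustrative:
```sql
INSERT INTO item_view_settings (user_id, view_type, visible_columns, updated_at)
VALUES ('00000000-0000-0000-0000-000000000001', 'grid',
        '["sku", "name", "price", "on_hand"]'::jsonb, NOW())
ON CONFLICT (user_id)
DO UPDATE SET view_type       = EXCLUDED.view_type,
              visible_columns = EXCLUDED.visible_columns,
              updated_at      = NOW();
```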
Domain 8: Multi-tenant (Shared Schema)
See Chapter 07 (Schema Design) for complete shared schema tables: tenants, tenant_subscriptions, tenant_modules
Domain 9: Authentication & Authorization
Domain Model: Employee
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| EMPLOYEE |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| employee_number | String(20) | Human-readable employee ID |
| first_name | String(100) | First name |
| last_name | String(100) | Last name |
| email | String(255) | Email address (unique) |
| phone | String(20) | Phone number |
| pin_hash | String(255) | Hashed PIN for clock-in |
| role_id | UUID | FK to Role |
| home_location_id| UUID | FK to primary Location |
| hire_date | Date | Date of hire |
| termination_date| Date | Date of termination |
| hourly_rate | Decimal | Hourly pay rate |
| commission_rate | Decimal | Commission percentage |
| is_active | Boolean | Employee is active |
| last_login_at | Timestamp | Last login timestamp |
| created_at | Timestamp | Creation timestamp |
| updated_at | Timestamp | Last update |
+------------------------------------------------------------------+
Domain Model: Role
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| ROLE |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| name | String(100) | Role name |
| code | String(50) | Role code (admin, manager, etc.) |
| description | Text | Role description |
| is_system | Boolean | System role (cannot delete) |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: Permission
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| PERMISSION |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| code | String(100) | Permission code |
| name | String(255) | Permission name |
| category | String(50) | Grouping category |
| description | Text | What this permission allows |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: RolePermission
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| ROLE_PERMISSION |
+------------------------------------------------------------------+
| role_id | UUID | FK to Role |
| permission_id | UUID | FK to Permission |
| created_at | Timestamp | When assigned |
| PRIMARY KEY (role_id, permission_id) |
+------------------------------------------------------------------+
Domain Model: Shift
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| SHIFT |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| employee_id | UUID | FK to Employee |
| location_id | UUID | FK to Location |
| clock_in | Timestamp | Clock in time |
| clock_out | Timestamp | Clock out time |
| break_minutes | Integer | Total break time |
| notes | Text | Shift notes |
| status | String(20) | active, completed, edited |
| edited_by | UUID | FK to Employee (if edited) |
| edit_reason | Text | Reason for edit |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
roles (Tenant Schema)
-- Role definitions per tenant
CREATE TABLE roles (
id SERIAL PRIMARY KEY,
name VARCHAR(50) NOT NULL,
display_name VARCHAR(100) NOT NULL,
description TEXT,
is_system BOOLEAN DEFAULT FALSE,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT roles_name_unique UNIQUE (name)
);
COMMENT ON TABLE roles IS 'Role definitions customizable per tenant';
COMMENT ON COLUMN roles.is_system IS 'System roles (Owner, Admin, etc.) cannot be deleted';
role_permissions
-- Permission matrix linking roles to permissions
CREATE TABLE role_permissions (
id SERIAL PRIMARY KEY,
role_id INT NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
permission VARCHAR(100) NOT NULL,
granted BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT role_permissions_unique UNIQUE (role_id, permission)
);
CREATE INDEX idx_role_permissions_role ON role_permissions(role_id);
CREATE INDEX idx_role_permissions_permission ON role_permissions(permission);
COMMENT ON TABLE role_permissions IS 'Fine-grained permission assignments per role';
COMMENT ON COLUMN role_permissions.permission IS 'Permission ID (products.view, orders.create, etc.)';
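A permission check then reduces to a single indexed lookup joining the user's role assignment to the matrix (the user UUID and permission ID are illustrative):
```sql
SELECT EXISTS (
    SELECT 1
    FROM tenant_users tu
    JOIN role_permissions rp ON rp.role_id = tu.role_id
    WHERE tu.user_id    = '00000000-0000-0000-0000-000000000001'
      AND tu.is_active  = TRUE
      AND rp.permission = 'orders.create'
      AND rp.granted    = TRUE
) AS has_permission;
```
The idx_role_permissions_role and idx_tenant_users_role indexes keep this query cheap enough to run on every API request.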
tenant_users
-- User-tenant-role mapping
CREATE TABLE tenant_users (
id SERIAL PRIMARY KEY,
user_id UUID NOT NULL REFERENCES shared.users(id) ON DELETE CASCADE,
role_id INT NOT NULL REFERENCES roles(id) ON DELETE RESTRICT,
employee_id VARCHAR(20),
pin_hash VARCHAR(255),
hourly_rate DECIMAL(8,2),
commission_rate DECIMAL(5,4),
default_location_id INT REFERENCES locations(id) ON DELETE SET NULL,
is_active BOOLEAN DEFAULT TRUE,
hired_at DATE,
terminated_at DATE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT tenant_users_user_unique UNIQUE (user_id),
CONSTRAINT tenant_users_employee_unique UNIQUE (employee_id)
);
CREATE INDEX idx_tenant_users_role ON tenant_users(role_id);
CREATE INDEX idx_tenant_users_location ON tenant_users(default_location_id);
CREATE INDEX idx_tenant_users_active ON tenant_users(is_active) WHERE is_active = TRUE;
COMMENT ON TABLE tenant_users IS 'Links platform users to tenant with role assignment';
COMMENT ON COLUMN tenant_users.employee_id IS 'Short ID for quick POS login';
COMMENT ON COLUMN tenant_users.pin_hash IS 'Quick login PIN (not primary auth)';
tenant_settings
-- Per-tenant configuration settings
CREATE TABLE tenant_settings (
id SERIAL PRIMARY KEY,
category VARCHAR(50) NOT NULL,
key VARCHAR(100) NOT NULL,
value TEXT NOT NULL,
value_type VARCHAR(20) NOT NULL,
description TEXT,
is_secret BOOLEAN DEFAULT FALSE,
updated_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT tenant_settings_key_unique UNIQUE (category, key),
CONSTRAINT tenant_settings_type_check CHECK (value_type IN ('string', 'number', 'boolean', 'json'))
);
CREATE INDEX idx_tenant_settings_category ON tenant_settings(category);
COMMENT ON TABLE tenant_settings IS 'Key-value configuration settings per tenant';
COMMENT ON COLUMN tenant_settings.is_secret IS 'Mask value in UI (API keys, passwords)';
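Values are stored as TEXT and tagged with value_type, so readers cast on the way out; the category and key below are illustrative:
```sql
-- value_type = 'boolean' for this key guarantees the cast is safe
SELECT value::BOOLEAN AS auto_print_receipts
FROM tenant_settings
WHERE category = 'receipts' AND key = 'auto_print';
```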
Domain 10: Offline Sync Infrastructure
devices
-- POS terminals and device registration
CREATE TABLE devices (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(100) NOT NULL,
device_type VARCHAR(30) NOT NULL,
hardware_id VARCHAR(255) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
cash_drawer_id INT REFERENCES cash_drawers(id) ON DELETE SET NULL,
payment_terminal_id INT REFERENCES payment_terminals(id) ON DELETE SET NULL,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
last_sync_at TIMESTAMP,
last_seen_at TIMESTAMP,
app_version VARCHAR(20),
os_version VARCHAR(50),
ip_address INET,
push_token VARCHAR(500),
settings JSONB,
registered_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT devices_hardware_unique UNIQUE (hardware_id),
CONSTRAINT devices_type_check CHECK (device_type IN ('pos_terminal', 'tablet', 'mobile', 'kiosk')),
CONSTRAINT devices_status_check CHECK (status IN ('pending', 'active', 'disabled', 'lost'))
);
CREATE INDEX idx_devices_location ON devices(location_id);
CREATE INDEX idx_devices_status ON devices(status);
CREATE INDEX idx_devices_last_seen ON devices(last_seen_at);
COMMENT ON TABLE devices IS 'Registered POS devices and tablets';
COMMENT ON COLUMN devices.hardware_id IS 'Unique hardware fingerprint to prevent cloning';
sync_queue
-- Pending sync operations from offline devices
CREATE TABLE sync_queue (
id BIGSERIAL PRIMARY KEY,
device_id UUID NOT NULL REFERENCES devices(id) ON DELETE CASCADE,
idempotency_key UUID NOT NULL DEFAULT gen_random_uuid(),
operation_type VARCHAR(50) NOT NULL,
entity_type VARCHAR(50) NOT NULL,
entity_id VARCHAR(100) NOT NULL,
payload JSONB NOT NULL,
checksum VARCHAR(64) NOT NULL,
sequence_number BIGINT NOT NULL,
causality_version BIGINT NOT NULL DEFAULT 0,
priority INT DEFAULT 5,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
attempts INT DEFAULT 0,
error_message TEXT,
created_at TIMESTAMP DEFAULT NOW(),
processed_at TIMESTAMP,
CONSTRAINT sync_queue_idempotency_unique UNIQUE (idempotency_key),
CONSTRAINT sync_queue_status_check CHECK (status IN (
'pending', 'processing', 'completed', 'failed', 'conflict'
)),
CONSTRAINT sync_queue_priority_check CHECK (priority BETWEEN 1 AND 10)
);
CREATE INDEX idx_sync_queue_device ON sync_queue(device_id, sequence_number);
CREATE INDEX idx_sync_queue_status ON sync_queue(status, priority, created_at);
CREATE INDEX idx_sync_queue_entity ON sync_queue(entity_type, entity_id);
CREATE INDEX idx_sync_queue_pending ON sync_queue(device_id, processed_at) WHERE processed_at IS NULL;
COMMENT ON TABLE sync_queue IS 'Pending sync operations from offline devices';
COMMENT ON COLUMN sync_queue.idempotency_key IS 'Prevents duplicate processing on replay';
COMMENT ON COLUMN sync_queue.causality_version IS 'Lamport timestamp for event ordering';
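Workers drain the queue in priority order. FOR UPDATE SKIP LOCKED lets multiple workers claim disjoint batches without blocking one another (the batch size is illustrative):
```sql
WITH claimed AS (
    SELECT id
    FROM sync_queue
    WHERE status = 'pending'
    ORDER BY priority, created_at
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
UPDATE sync_queue q
SET status   = 'processing',
    attempts = attempts + 1
FROM claimed
WHERE q.id = claimed.id
RETURNING q.id, q.device_id, q.entity_type, q.payload;
```
On success the worker marks the row completed; on failure it records error_message and either retries or escalates to a sync_conflicts row.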
sync_conflicts
-- Conflict types enum
CREATE TYPE conflict_type_enum AS ENUM (
'update_update',
'update_delete',
'delete_update',
'version_mismatch',
'schema_change'
);
CREATE TYPE resolution_strategy_enum AS ENUM (
'keep_local',
'keep_server',
'merge',
'ignore',
'auto_local',
'auto_server'
);
-- Conflict tracking requiring resolution
CREATE TABLE sync_conflicts (
id SERIAL PRIMARY KEY,
sync_queue_id BIGINT NOT NULL REFERENCES sync_queue(id) ON DELETE CASCADE,
entity_type VARCHAR(50) NOT NULL,
entity_id VARCHAR(100) NOT NULL,
local_data JSONB NOT NULL,
server_data JSONB NOT NULL,
conflict_type conflict_type_enum NOT NULL,
resolution resolution_strategy_enum,
resolution_data JSONB,
resolution_notes TEXT,
resolved_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
resolved_at TIMESTAMP,
auto_resolved BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_sync_conflicts_entity ON sync_conflicts(entity_type, entity_id);
CREATE INDEX idx_sync_conflicts_unresolved ON sync_conflicts(created_at) WHERE resolved_at IS NULL;
CREATE INDEX idx_sync_conflicts_type ON sync_conflicts(conflict_type);
COMMENT ON TABLE sync_conflicts IS 'Sync conflicts requiring manual or automatic resolution';
COMMENT ON COLUMN sync_conflicts.auto_resolved IS 'TRUE if resolved by policy without human intervention';
sync_checkpoints
-- Sync progress tracking per device
CREATE TABLE sync_checkpoints (
id SERIAL PRIMARY KEY,
device_id UUID NOT NULL REFERENCES devices(id) ON DELETE CASCADE,
entity_type VARCHAR(50) NOT NULL,
direction VARCHAR(10) NOT NULL,
last_sync_at TIMESTAMP NOT NULL,
last_sequence BIGINT NOT NULL,
last_server_timestamp TIMESTAMP,
records_synced INT DEFAULT 0,
error_count INT DEFAULT 0,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT sync_checkpoints_unique UNIQUE (device_id, entity_type, direction),
CONSTRAINT sync_checkpoints_direction_check CHECK (direction IN ('push', 'pull'))
);
CREATE INDEX idx_sync_checkpoints_device ON sync_checkpoints(device_id);
COMMENT ON TABLE sync_checkpoints IS 'Tracks sync progress for incremental synchronization';
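A pull cycle reads the checkpoint, streams newer server rows to the device, and advances the checkpoint in the same transaction so a crash never skips records. A sketch (the device UUID, entity type, and record count are illustrative):
```sql
BEGIN;

-- 1. Where did this device leave off?
SELECT last_server_timestamp
FROM sync_checkpoints
WHERE device_id = '00000000-0000-0000-0000-0000000000aa'
  AND entity_type = 'orders'
  AND direction = 'pull'
FOR UPDATE;

-- 2. (Application fetches orders with updated_at > last_server_timestamp
--     and streams them to the device.)

-- 3. Advance the checkpoint only after the batch is delivered
UPDATE sync_checkpoints
SET last_sync_at          = NOW(),
    last_server_timestamp = NOW(),
    records_synced        = records_synced + 42,
    updated_at            = NOW()
WHERE device_id = '00000000-0000-0000-0000-0000000000aa'
  AND entity_type = 'orders'
  AND direction = 'pull';

COMMIT;
```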
Domain 11: Cash Drawer Operations
Domain Model: Location
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| LOCATION |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| code | String(10) | Short code (HQ, GM, LM) |
| name | String(255) | Full location name |
| type | String(20) | store, warehouse, popup |
| address_line1 | String(255) | Street address |
| address_line2 | String(255) | Suite/unit |
| city | String(100) | City |
| state | String(50) | State/Province |
| postal_code | String(20) | ZIP/Postal code |
| country | String(2) | Country code |
| phone | String(20) | Phone number |
| email | String(255) | Email address |
| timezone | String(50) | IANA timezone |
| currency | String(3) | Currency code |
| shopify_location_id | String(50) | Shopify location ID |
| is_active | Boolean | Location is operational |
| can_fulfill | Boolean | Can fulfill online orders |
| is_visible_online | Boolean | Show inventory online |
| fulfillment_priority | Integer| Order for fulfillment routing |
| opening_hours | JSONB | Weekly schedule |
| created_at | Timestamp | Creation timestamp |
| updated_at | Timestamp | Last update |
+------------------------------------------------------------------+
Domain Model: Register
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| REGISTER |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| location_id | UUID | FK to Location |
| register_number | String(20) | Register identifier |
| name | String(100) | Display name |
| receipt_footer | Text | Custom receipt message |
| is_active | Boolean | Register is operational |
| last_opened_at | Timestamp | Last opened |
| last_closed_at | Timestamp | Last closed |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: CashDrawer
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| CASH_DRAWER |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| register_id | UUID | FK to Register |
| employee_id | UUID | FK to Employee (opened by) |
| opened_at | Timestamp | When opened |
| closed_at | Timestamp | When closed |
| opening_balance | Decimal | Starting cash amount |
| closing_balance | Decimal | Ending cash amount |
| expected_balance| Decimal | Expected based on transactions |
| variance | Decimal | Difference (closing - expected) |
| notes | Text | Drawer notes |
| status | String(20) | open, closed, reconciled |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
Domain Model: TaxRate
Business context reference (formerly Ch 07: Domain Model, now in Ch 04: Architecture Styles, Section L.9C)
+------------------------------------------------------------------+
| TAX_RATE |
+------------------------------------------------------------------+
| id | UUID | Primary key |
| location_id | UUID | FK to Location (nullable=all) |
| name | String(100) | Tax name |
| rate | Decimal | Tax rate percentage |
| tax_code | String(20) | Tax category code |
| is_compound | Boolean | Calculated on tax-inclusive total |
| priority | Integer | Order of application |
| is_active | Boolean | Tax is active |
| created_at | Timestamp | Creation timestamp |
+------------------------------------------------------------------+
shifts
-- Shift lifecycle management
-- Note: shifts references cash_drawers, so create cash_drawers (below) first;
-- the reverse FK from cash_drawers back to shifts is added via ALTER TABLE afterward
CREATE TABLE shifts (
id SERIAL PRIMARY KEY,
shift_number VARCHAR(20) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
cash_drawer_id INT NOT NULL REFERENCES cash_drawers(id) ON DELETE RESTRICT,
device_id UUID REFERENCES devices(id) ON DELETE SET NULL,
opened_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
closed_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
status VARCHAR(20) NOT NULL DEFAULT 'open',
opened_at TIMESTAMP NOT NULL DEFAULT NOW(),
closed_at TIMESTAMP,
opening_cash DECIMAL(12,2) NOT NULL,
expected_cash DECIMAL(12,2),
actual_cash DECIMAL(12,2),
cash_variance DECIMAL(12,2),
total_sales DECIMAL(12,2) DEFAULT 0,
total_refunds DECIMAL(12,2) DEFAULT 0,
total_voids DECIMAL(12,2) DEFAULT 0,
transaction_count INT DEFAULT 0,
notes TEXT,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT shifts_number_unique UNIQUE (shift_number),
CONSTRAINT shifts_status_check CHECK (status IN ('open', 'closing', 'closed', 'reconciled')),
CONSTRAINT shifts_opening_positive CHECK (opening_cash >= 0)
);
CREATE INDEX idx_shifts_location ON shifts(location_id, opened_at DESC);
CREATE INDEX idx_shifts_drawer_open ON shifts(cash_drawer_id, status) WHERE status = 'open';
CREATE INDEX idx_shifts_date ON shifts(opened_at DESC);
COMMENT ON TABLE shifts IS 'Employee shift lifecycle with cash accountability';
COMMENT ON COLUMN shifts.shift_number IS 'Format: LOC-YYYYMMDD-SEQUENCE';
cash_drawers
-- Physical cash drawer registration
CREATE TABLE cash_drawers (
id SERIAL PRIMARY KEY,
name VARCHAR(50) NOT NULL,
drawer_number VARCHAR(20) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
current_shift_id INT, -- FK added after shifts table exists
status VARCHAR(20) NOT NULL DEFAULT 'available',
is_active BOOLEAN DEFAULT TRUE,
last_counted_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT cash_drawers_number_unique UNIQUE (drawer_number),
CONSTRAINT cash_drawers_status_check CHECK (status IN ('available', 'in_use', 'maintenance'))
);
-- Add FK after shifts table exists
ALTER TABLE cash_drawers ADD CONSTRAINT cash_drawers_shift_fk
FOREIGN KEY (current_shift_id) REFERENCES shifts(id) ON DELETE SET NULL;
CREATE INDEX idx_cash_drawers_location ON cash_drawers(location_id);
CREATE INDEX idx_cash_drawers_status ON cash_drawers(status, location_id);
COMMENT ON TABLE cash_drawers IS 'Physical cash drawer registration and status';
cash_counts
-- Detailed cash denomination counts
CREATE TABLE cash_counts (
id SERIAL PRIMARY KEY,
shift_id INT NOT NULL REFERENCES shifts(id) ON DELETE CASCADE,
count_type VARCHAR(20) NOT NULL,
counted_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
count_timestamp TIMESTAMP NOT NULL DEFAULT NOW(),
-- Coins
pennies INT DEFAULT 0,
nickels INT DEFAULT 0,
dimes INT DEFAULT 0,
quarters INT DEFAULT 0,
half_dollars INT DEFAULT 0,
dollar_coins INT DEFAULT 0,
-- Bills
ones INT DEFAULT 0,
twos INT DEFAULT 0,
fives INT DEFAULT 0,
tens INT DEFAULT 0,
twenties INT DEFAULT 0,
fifties INT DEFAULT 0,
hundreds INT DEFAULT 0,
-- Other
rolled_coins DECIMAL(10,2) DEFAULT 0,
other_amount DECIMAL(10,2) DEFAULT 0,
total_amount DECIMAL(12,2) NOT NULL,
notes TEXT,
CONSTRAINT cash_counts_type_check CHECK (count_type IN (
'opening', 'closing', 'drop', 'pickup', 'audit', 'mid_shift'
)),
CONSTRAINT cash_counts_total_positive CHECK (total_amount >= 0)
);
CREATE INDEX idx_cash_counts_shift ON cash_counts(shift_id, count_type);
CREATE INDEX idx_cash_counts_timestamp ON cash_counts(count_timestamp DESC);
COMMENT ON TABLE cash_counts IS 'Denomination-level cash counts for accountability';
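total_amount is denormalized, so it should always equal the sum of the denomination columns. A reconciliation query the application (or a validation trigger) might run to catch drift:
```sql
-- Rows where the stored total disagrees with the counted denominations
SELECT c.id, c.total_amount, d.computed_total
FROM cash_counts c,
LATERAL (
    SELECT (c.pennies * 0.01) + (c.nickels * 0.05) + (c.dimes * 0.10)
         + (c.quarters * 0.25) + (c.half_dollars * 0.50) + (c.dollar_coins * 1.00)
         + c.ones + (c.twos * 2) + (c.fives * 5) + (c.tens * 10)
         + (c.twenties * 20) + (c.fifties * 50) + (c.hundreds * 100)
         + c.rolled_coins + c.other_amount AS computed_total
) d
WHERE c.total_amount <> d.computed_total;
```
On PostgreSQL 12+, total_amount could instead be a GENERATED ALWAYS AS (...) STORED column, which would eliminate the possibility of drift entirely.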
cash_movements
-- Cash audit trail for all drawer operations
CREATE TABLE cash_movements (
id SERIAL PRIMARY KEY,
shift_id INT NOT NULL REFERENCES shifts(id) ON DELETE CASCADE,
cash_drawer_id INT NOT NULL REFERENCES cash_drawers(id) ON DELETE RESTRICT,
movement_type VARCHAR(30) NOT NULL,
amount DECIMAL(12,2) NOT NULL,
reference_type VARCHAR(50),
reference_id INT,
performed_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
approved_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
reason VARCHAR(255),
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT cash_movements_type_check CHECK (movement_type IN (
'sale_cash', 'refund_cash', 'paid_in', 'paid_out',
'cash_drop', 'cash_pickup', 'opening_float', 'closing_count', 'no_sale'
))
);
CREATE INDEX idx_cash_movements_shift ON cash_movements(shift_id);
CREATE INDEX idx_cash_movements_drawer ON cash_movements(cash_drawer_id, created_at DESC);
CREATE INDEX idx_cash_movements_type ON cash_movements(movement_type);
CREATE INDEX idx_cash_movements_reference ON cash_movements(reference_type, reference_id)
WHERE reference_type IS NOT NULL;
COMMENT ON TABLE cash_movements IS 'Immutable audit trail of all cash drawer operations';
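The movement types above are sufficient to derive the expected drawer balance for a shift. A sketch, assuming amounts are stored as positive values and signed here by movement type (adapt if the application stores signed amounts):

```sql
-- Expected cash in drawer for one shift: inflows minus outflows.
SELECT shift_id,
       SUM(CASE movement_type
             WHEN 'opening_float' THEN amount
             WHEN 'sale_cash'     THEN amount
             WHEN 'paid_in'       THEN amount
             WHEN 'refund_cash'   THEN -amount
             WHEN 'paid_out'      THEN -amount
             WHEN 'cash_drop'     THEN -amount
             WHEN 'cash_pickup'   THEN -amount
             ELSE 0  -- 'no_sale' and 'closing_count' do not move cash
           END) AS expected_cash
FROM cash_movements
WHERE shift_id = $1
GROUP BY shift_id;
```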
cash_drops
-- Cash drops from drawer to safe
CREATE TABLE cash_drops (
id SERIAL PRIMARY KEY,
shift_id INT NOT NULL REFERENCES shifts(id) ON DELETE CASCADE,
cash_drawer_id INT NOT NULL REFERENCES cash_drawers(id) ON DELETE RESTRICT,
drop_number VARCHAR(20) NOT NULL,
amount DECIMAL(12,2) NOT NULL,
dropped_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
witnessed_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
safe_bag_number VARCHAR(50),
counted_amount DECIMAL(12,2),
variance DECIMAL(12,2),
status VARCHAR(20) NOT NULL DEFAULT 'pending',
notes TEXT,
dropped_at TIMESTAMP NOT NULL DEFAULT NOW(),
verified_at TIMESTAMP,
CONSTRAINT cash_drops_number_unique UNIQUE (drop_number),
CONSTRAINT cash_drops_amount_positive CHECK (amount > 0),
CONSTRAINT cash_drops_status_check CHECK (status IN ('pending', 'verified', 'variance'))
);
CREATE INDEX idx_cash_drops_shift ON cash_drops(shift_id);
CREATE INDEX idx_cash_drops_status ON cash_drops(status) WHERE status = 'pending';
COMMENT ON TABLE cash_drops IS 'Cash drops from drawer to safe with verification tracking';
cash_pickups
-- Armored car pickup tracking
CREATE TABLE cash_pickups (
id SERIAL PRIMARY KEY,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
pickup_date DATE NOT NULL,
carrier VARCHAR(100) NOT NULL,
driver_name VARCHAR(100),
driver_id VARCHAR(50),
pickup_number VARCHAR(50),
expected_amount DECIMAL(12,2) NOT NULL,
bag_count INT NOT NULL,
bag_numbers TEXT[],
received_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
verified_amount DECIMAL(12,2),
variance DECIMAL(12,2),
status VARCHAR(20) NOT NULL DEFAULT 'picked_up',
notes TEXT,
picked_up_at TIMESTAMP NOT NULL DEFAULT NOW(),
deposited_at TIMESTAMP,
CONSTRAINT cash_pickups_number_unique UNIQUE (pickup_number),
CONSTRAINT cash_pickups_status_check CHECK (status IN (
'picked_up', 'in_transit', 'deposited', 'variance'
))
);
CREATE INDEX idx_cash_pickups_location ON cash_pickups(location_id, pickup_date DESC);
CREATE INDEX idx_cash_pickups_status ON cash_pickups(status);
COMMENT ON TABLE cash_pickups IS 'Armored car pickup and bank deposit tracking';
Domain 12: Payment Processing
payment_terminals
-- Payment device registration
CREATE TABLE payment_terminals (
id SERIAL PRIMARY KEY,
terminal_id VARCHAR(50) NOT NULL,
name VARCHAR(100) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
device_id UUID REFERENCES devices(id) ON DELETE SET NULL,
processor VARCHAR(50) NOT NULL,
terminal_type VARCHAR(30) NOT NULL,
model VARCHAR(100),
serial_number VARCHAR(100),
status VARCHAR(20) NOT NULL DEFAULT 'active',
supports_contactless BOOLEAN DEFAULT TRUE,
supports_emv BOOLEAN DEFAULT TRUE,
supports_swipe BOOLEAN DEFAULT TRUE,
last_transaction_at TIMESTAMP,
last_batch_at TIMESTAMP,
configuration JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT payment_terminals_id_unique UNIQUE (terminal_id),
CONSTRAINT payment_terminals_type_check CHECK (terminal_type IN (
'integrated', 'standalone', 'virtual', 'mobile'
)),
CONSTRAINT payment_terminals_status_check CHECK (status IN (
'active', 'offline', 'maintenance', 'disabled'
))
);
CREATE INDEX idx_payment_terminals_location ON payment_terminals(location_id);
CREATE INDEX idx_payment_terminals_status ON payment_terminals(status);
CREATE INDEX idx_payment_terminals_device ON payment_terminals(device_id) WHERE device_id IS NOT NULL;
COMMENT ON TABLE payment_terminals IS 'Payment device registration and configuration';
payment_attempts
-- Payment processing attempt tracking
CREATE TABLE payment_attempts (
id BIGSERIAL PRIMARY KEY,
order_id INT NOT NULL REFERENCES orders(id) ON DELETE RESTRICT,
terminal_id INT REFERENCES payment_terminals(id) ON DELETE SET NULL,
payment_method VARCHAR(30) NOT NULL,
card_type VARCHAR(20),
card_last_four CHAR(4),
card_entry_mode VARCHAR(20),
amount DECIMAL(12,2) NOT NULL,
tip_amount DECIMAL(10,2) DEFAULT 0,
total_amount DECIMAL(12,2) NOT NULL,
currency_code CHAR(3) NOT NULL DEFAULT 'USD',
status VARCHAR(20) NOT NULL,
processor_response_code VARCHAR(10),
processor_response_text VARCHAR(255),
authorization_code VARCHAR(20),
processor_transaction_id VARCHAR(100),
avs_result VARCHAR(10),
cvv_result VARCHAR(10),
emv_application_id VARCHAR(32),
emv_cryptogram VARCHAR(64),
risk_score INT,
created_at TIMESTAMP DEFAULT NOW(),
processed_at TIMESTAMP,
CONSTRAINT payment_attempts_method_check CHECK (payment_method IN (
'card', 'cash', 'gift_card', 'store_credit', 'check', 'mobile_pay'
)),
CONSTRAINT payment_attempts_status_check CHECK (status IN (
'pending', 'approved', 'declined', 'error', 'voided', 'refunded'
)),
CONSTRAINT payment_attempts_entry_check CHECK (card_entry_mode IS NULL OR card_entry_mode IN (
'chip', 'swipe', 'contactless', 'manual', 'ecommerce', 'fallback'
))
);
CREATE INDEX idx_payment_attempts_order ON payment_attempts(order_id);
CREATE INDEX idx_payment_attempts_status ON payment_attempts(status, created_at DESC);
CREATE INDEX idx_payment_attempts_processor ON payment_attempts(processor_transaction_id)
WHERE processor_transaction_id IS NOT NULL;
CREATE INDEX idx_payment_attempts_date ON payment_attempts(created_at DESC);
COMMENT ON TABLE payment_attempts IS 'Payment processing attempts with full audit trail';
COMMENT ON COLUMN payment_attempts.emv_cryptogram IS 'EMV TC/ARQC for chip transactions';
payment_batches
-- End-of-day settlement batch tracking
CREATE TABLE payment_batches (
id SERIAL PRIMARY KEY,
batch_number VARCHAR(50) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
terminal_id INT REFERENCES payment_terminals(id) ON DELETE SET NULL,
processor VARCHAR(50) NOT NULL,
batch_date DATE NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'open',
transaction_count INT NOT NULL DEFAULT 0,
gross_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
refund_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
net_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
fee_amount DECIMAL(10,2),
deposit_amount DECIMAL(12,2),
opened_at TIMESTAMP NOT NULL DEFAULT NOW(),
submitted_at TIMESTAMP,
settled_at TIMESTAMP,
deposit_reference VARCHAR(100),
notes TEXT,
CONSTRAINT payment_batches_number_unique UNIQUE (batch_number),
CONSTRAINT payment_batches_status_check CHECK (status IN (
'open', 'pending', 'settled', 'rejected'
)),
CONSTRAINT payment_batches_net_math CHECK (net_amount = gross_amount - refund_amount)
);
CREATE INDEX idx_payment_batches_location ON payment_batches(location_id, batch_date DESC);
CREATE INDEX idx_payment_batches_status ON payment_batches(status) WHERE status IN ('open', 'pending');
CREATE INDEX idx_payment_batches_date ON payment_batches(batch_date DESC);
COMMENT ON TABLE payment_batches IS 'End-of-day settlement batch tracking';
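When a payment is approved, the open batch's running totals should be updated in the same transaction so the `payment_batches_net_math` check continues to hold. A minimal sketch, with `$1` as the approved amount and `$2` the batch id (parameter positions are an assumption):

```sql
UPDATE payment_batches
SET transaction_count = transaction_count + 1,
    gross_amount      = gross_amount + $1,
    net_amount        = net_amount + $1   -- keeps net = gross - refunds satisfied
WHERE id = $2
  AND status = 'open';
```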
payment_reconciliation
-- Variance tracking between POS and processor/bank
CREATE TABLE payment_reconciliation (
id SERIAL PRIMARY KEY,
batch_id INT NOT NULL REFERENCES payment_batches(id) ON DELETE CASCADE,
reconciliation_date DATE NOT NULL,
pos_transaction_count INT NOT NULL,
processor_transaction_count INT NOT NULL,
transaction_count_variance INT NOT NULL,
pos_gross_amount DECIMAL(12,2) NOT NULL,
processor_gross_amount DECIMAL(12,2) NOT NULL,
gross_variance DECIMAL(12,2) NOT NULL,
pos_net_amount DECIMAL(12,2) NOT NULL,
bank_deposit_amount DECIMAL(12,2),
deposit_variance DECIMAL(12,2),
fee_variance DECIMAL(10,2),
status VARCHAR(20) NOT NULL DEFAULT 'pending',
variance_reason TEXT,
resolved_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
resolved_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT payment_recon_status_check CHECK (status IN (
'pending', 'matched', 'variance', 'resolved'
))
);
CREATE INDEX idx_payment_recon_batch ON payment_reconciliation(batch_id);
CREATE INDEX idx_payment_recon_date ON payment_reconciliation(reconciliation_date DESC);
CREATE INDEX idx_payment_recon_status ON payment_reconciliation(status)
WHERE status IN ('pending', 'variance');
COMMENT ON TABLE payment_reconciliation IS 'Payment reconciliation with variance tracking';
Domain 13: Taxes & RFID Module (Optional)

taxes
-- Tax rate definitions
CREATE TABLE taxes (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
code VARCHAR(20) NOT NULL,
rate DECIMAL(5,4) NOT NULL,
is_compound BOOLEAN DEFAULT FALSE,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT taxes_name_unique UNIQUE (name),
CONSTRAINT taxes_code_unique UNIQUE (code),
CONSTRAINT taxes_rate_check CHECK (rate >= 0 AND rate <= 1)
);
COMMENT ON TABLE taxes IS 'Tax rate definitions (rate as decimal: 0.0825 = 8.25%)';
location_tax
-- Junction: taxes to locations with date ranges
CREATE TABLE location_tax (
id SERIAL PRIMARY KEY,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE CASCADE,
tax_id INT NOT NULL REFERENCES taxes(id) ON DELETE CASCADE,
effective_from TIMESTAMP NOT NULL,
effective_to TIMESTAMP,
CONSTRAINT location_tax_dates CHECK (effective_to IS NULL OR effective_to > effective_from)
);
CREATE INDEX idx_location_tax_effective ON location_tax(location_id, effective_from, effective_to);
COMMENT ON TABLE location_tax IS 'Assigns tax rates to locations with effective date ranges';
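Resolving the taxes in effect for a sale is a point-in-time lookup against the date range, served by `idx_location_tax_effective`. A sketch, with `$1` as the location id and `$2` the sale timestamp:

```sql
SELECT t.code, t.rate, t.is_compound
FROM location_tax lt
JOIN taxes t ON t.id = lt.tax_id
WHERE lt.location_id = $1
  AND lt.effective_from <= $2
  AND (lt.effective_to IS NULL OR lt.effective_to > $2)
  AND t.is_active;
```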
rfid_config
-- Tenant RFID configuration (counting subsystem)
CREATE TABLE rfid_config (
id SERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
epc_company_prefix VARCHAR(24) NOT NULL,
epc_indicator CHAR(1) NOT NULL DEFAULT '0',
epc_filter CHAR(1) NOT NULL DEFAULT '3',
epc_partition INT NOT NULL DEFAULT 5,
min_rssi_threshold SMALLINT NOT NULL DEFAULT -70,
auto_save_interval_seconds INT NOT NULL DEFAULT 30,
chunk_upload_size INT NOT NULL DEFAULT 5000,
default_template_id UUID,
default_printer_id UUID,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_config_tenant_unique UNIQUE (tenant_id)
);
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
-- Serial numbers use PostgreSQL SEQUENCE per tenant:
-- CREATE SEQUENCE rfid_epc_serial_{tenant_short_id} START 1 INCREMENT 1 NO CYCLE;
-- Application calls nextval('rfid_epc_serial_{tenant_short_id}') during tag encoding.
COMMENT ON TABLE rfid_config IS 'Tenant RFID configuration for EPC encoding and scanning';
COMMENT ON COLUMN rfid_config.min_rssi_threshold IS 'Minimum RSSI (dBm) to accept tag reads; default -70';
rfid_printers
-- Registered RFID printers
CREATE TABLE rfid_printers (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE CASCADE,
printer_type VARCHAR(50) NOT NULL,
connection_type VARCHAR(20) NOT NULL,
ip_address INET,
port INT DEFAULT 9100,
mac_address VARCHAR(17),
serial_number VARCHAR(100),
firmware_version VARCHAR(50),
dpi INT DEFAULT 203,
label_width_mm INT NOT NULL,
label_height_mm INT NOT NULL,
rfid_position VARCHAR(20) DEFAULT 'center',
status VARCHAR(20) NOT NULL DEFAULT 'offline',
last_seen_at TIMESTAMP,
error_message TEXT,
is_default BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_printers_type_check CHECK (printer_type IN (
'zebra_zd621r', 'zebra_zd500r', 'sato_cl4nx', 'tsc_mx240p'
)),
CONSTRAINT rfid_printers_conn_check CHECK (connection_type IN ('network', 'usb', 'bluetooth'))
);
CREATE INDEX idx_rfid_printers_location ON rfid_printers(location_id);
CREATE INDEX idx_rfid_printers_status ON rfid_printers(status);
CREATE UNIQUE INDEX idx_rfid_printers_default ON rfid_printers(location_id) WHERE is_default = TRUE;
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_printers IS 'RFID-enabled printers registered per location';
rfid_print_jobs
-- Print job queue
CREATE TABLE rfid_print_jobs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
job_number VARCHAR(20) NOT NULL,
printer_id UUID NOT NULL REFERENCES rfid_printers(id) ON DELETE RESTRICT,
template_id UUID NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'queued',
priority INT DEFAULT 5,
total_tags INT NOT NULL,
printed_tags INT DEFAULT 0,
failed_tags INT DEFAULT 0,
error_message TEXT,
job_data JSONB NOT NULL,
created_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
started_at TIMESTAMP,
completed_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_print_jobs_number_unique UNIQUE (job_number),
CONSTRAINT rfid_print_jobs_status_check CHECK (status IN (
'queued', 'printing', 'completed', 'failed', 'cancelled'
)),
CONSTRAINT rfid_print_jobs_priority_check CHECK (priority BETWEEN 1 AND 10)
);
CREATE INDEX idx_rfid_print_jobs_status ON rfid_print_jobs(status, priority, created_at)
WHERE status IN ('queued', 'printing');
CREATE INDEX idx_rfid_print_jobs_printer ON rfid_print_jobs(printer_id, created_at DESC);
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_print_jobs IS 'RFID tag print job queue with progress tracking';
rfid_tags
-- Individual RFID tags linked to variants (counting subsystem — no lifecycle fields)
CREATE TABLE rfid_tags (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
epc VARCHAR(24) NOT NULL,
variant_id INT NOT NULL REFERENCES variants(id) ON DELETE RESTRICT,
serial_number BIGINT NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'active',
print_job_id UUID REFERENCES rfid_print_jobs(id) ON DELETE SET NULL,
printed_at TIMESTAMP,
printed_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
current_location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
last_scanned_at TIMESTAMP,
last_scanned_session_id UUID,
notes TEXT,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
-- Scope: counting only — no sold_at, sold_order_id, transferred_at fields
-- Sales and transfers tracked by core inventory system via barcode, not RFID
CONSTRAINT rfid_tags_epc_unique UNIQUE (tenant_id, epc),
CONSTRAINT rfid_tags_epc_format CHECK (epc ~ '^[0-9A-F]{24}$'),
CONSTRAINT rfid_tags_status_check CHECK (status IN ('active', 'void', 'lost'))
);
CREATE INDEX idx_rfid_tags_variant ON rfid_tags(tenant_id, variant_id, status) WHERE status = 'active';
CREATE INDEX idx_rfid_tags_location ON rfid_tags(tenant_id, current_location_id, status) WHERE status = 'active';
CREATE INDEX idx_rfid_tags_serial ON rfid_tags(tenant_id, serial_number);
CREATE INDEX idx_rfid_tags_scan ON rfid_tags(last_scanned_at DESC) WHERE last_scanned_at IS NOT NULL;
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_tags IS 'Individual RFID tags for inventory counting (EPC-level tracking)';
COMMENT ON COLUMN rfid_tags.epc IS 'SGTIN-96 Electronic Product Code (24 hex chars)';
rfid_scan_sessions
-- Inventory scan sessions (counting subsystem — no 'receiving' type)
CREATE TABLE rfid_scan_sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
session_number VARCHAR(20) NOT NULL,
location_id INT NOT NULL REFERENCES locations(id) ON DELETE RESTRICT,
session_type VARCHAR(30) NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'in_progress',
device_id UUID REFERENCES devices(id) ON DELETE SET NULL,
started_by UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
completed_by UUID REFERENCES shared.users(id) ON DELETE SET NULL,
started_at TIMESTAMP NOT NULL DEFAULT NOW(),
completed_at TIMESTAMP,
total_reads INT DEFAULT 0,
unique_tags INT DEFAULT 0,
matched_tags INT DEFAULT 0,
unmatched_tags INT DEFAULT 0,
expected_count INT,
variance INT,
variance_value DECIMAL(12,2),
notes TEXT,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_sessions_number_unique UNIQUE (tenant_id, session_number),
CONSTRAINT rfid_sessions_type_check CHECK (session_type IN (
'full_inventory', 'cycle_count', 'spot_check', 'find_item'
)),
-- NOTE: 'receiving' removed — RFID is counting only (see BRD Section 5.16)
CONSTRAINT rfid_sessions_status_check CHECK (status IN (
'in_progress', 'completed', 'cancelled', 'uploaded'
))
);
CREATE INDEX idx_rfid_sessions_location ON rfid_scan_sessions(tenant_id, location_id, started_at DESC);
CREATE INDEX idx_rfid_sessions_status ON rfid_scan_sessions(status) WHERE status = 'in_progress';
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_scan_sessions IS 'RFID inventory count sessions with variance calculation';
rfid_scan_events
-- Aggregated tag reads during scan sessions (one row per unique EPC per session)
CREATE TABLE rfid_scan_events (
id BIGSERIAL PRIMARY KEY,
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
session_id UUID NOT NULL REFERENCES rfid_scan_sessions(id) ON DELETE CASCADE,
epc VARCHAR(24) NOT NULL,
rfid_tag_id UUID REFERENCES rfid_tags(id) ON DELETE SET NULL,
rssi SMALLINT,
read_count INT DEFAULT 1,
antenna SMALLINT,
first_seen_at TIMESTAMP NOT NULL,
last_seen_at TIMESTAMP NOT NULL,
-- Idempotency: same EPC can only appear once per session
-- On duplicate upload (retry), use UPSERT:
-- ON CONFLICT (session_id, epc) DO UPDATE SET
-- rssi = GREATEST(excluded.rssi, rfid_scan_events.rssi),
-- read_count = rfid_scan_events.read_count + excluded.read_count,
-- last_seen_at = GREATEST(excluded.last_seen_at, rfid_scan_events.last_seen_at)
CONSTRAINT rfid_events_idempotent UNIQUE (session_id, epc)
);
-- Indexes
CREATE INDEX idx_rfid_events_session ON rfid_scan_events(session_id);
CREATE INDEX idx_rfid_events_epc ON rfid_scan_events(epc);
CREATE INDEX idx_rfid_events_tag ON rfid_scan_events(rfid_tag_id) WHERE rfid_tag_id IS NOT NULL;
CREATE INDEX idx_rfid_events_unknown ON rfid_scan_events(session_id) WHERE rfid_tag_id IS NULL;
CREATE INDEX idx_rfid_events_time ON rfid_scan_events USING BRIN (first_seen_at);
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_scan_events IS 'Aggregated RFID tag reads per session (pre-deduplicated by EPC)';
COMMENT ON COLUMN rfid_scan_events.rssi IS 'Best signal strength (-127 to 0 dBm)';
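Spelled out in full, the idempotent upload sketched in the table comments looks like this (column order in the VALUES list is an assumption; adapt to the upload payload):

```sql
INSERT INTO rfid_scan_events
    (tenant_id, session_id, epc, rfid_tag_id, rssi,
     read_count, antenna, first_seen_at, last_seen_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
ON CONFLICT (session_id, epc) DO UPDATE SET
    rssi         = GREATEST(EXCLUDED.rssi, rfid_scan_events.rssi),
    read_count   = rfid_scan_events.read_count + EXCLUDED.read_count,
    last_seen_at = GREATEST(EXCLUDED.last_seen_at, rfid_scan_events.last_seen_at);
```

A retried chunk upload re-sends the same rows; the conflict target absorbs them without double-counting unique tags, though `read_count` will accumulate across retries unless the client de-duplicates chunks.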
rfid_tag_templates
-- ZPL label templates for RFID tag printing
CREATE TABLE rfid_tag_templates (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
name VARCHAR(100) NOT NULL,
template_type VARCHAR(20) NOT NULL,
zpl_content TEXT NOT NULL,
variables JSONB NOT NULL DEFAULT '[]',
is_default BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_templates_type_check CHECK (template_type IN (
'hang_tag', 'price_tag', 'label'
))
);
CREATE INDEX idx_rfid_templates_tenant ON rfid_tag_templates(tenant_id);
CREATE UNIQUE INDEX idx_rfid_templates_default ON rfid_tag_templates(tenant_id, template_type)
WHERE is_default = TRUE;
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_tag_templates IS 'ZPL label templates for RFID tag printing';
rfid_tag_mappings
-- EPC prefix to product variant mappings for offline decoding
CREATE TABLE rfid_tag_mappings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
epc_prefix VARCHAR(20) NOT NULL,
variant_id INT NOT NULL REFERENCES variants(id) ON DELETE CASCADE,
sku VARCHAR(50) NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT rfid_mappings_prefix_unique UNIQUE (tenant_id, epc_prefix)
);
CREATE INDEX idx_rfid_mappings_sku ON rfid_tag_mappings(tenant_id, sku);
CREATE INDEX idx_rfid_mappings_variant ON rfid_tag_mappings(variant_id);
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE rfid_tag_mappings IS 'Maps EPC prefix ranges to product variants for offline decoding on Raptag mobile app';
session_operators
-- Multiple operators per RFID scan session with section assignments
CREATE TABLE session_operators (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id),
session_id UUID NOT NULL REFERENCES rfid_scan_sessions(id) ON DELETE CASCADE,
operator_id UUID NOT NULL REFERENCES shared.users(id) ON DELETE RESTRICT,
assigned_section TEXT,
device_id UUID REFERENCES devices(id) ON DELETE SET NULL,
joined_at TIMESTAMP NOT NULL DEFAULT NOW(),
left_at TIMESTAMP,
CONSTRAINT session_operators_unique UNIQUE (session_id, operator_id)
);
CREATE INDEX idx_session_operators_session ON session_operators(session_id);
CREATE INDEX idx_session_operators_user ON session_operators(operator_id);
-- RLS: tenant_id = current_setting('app.tenant_id')::uuid
COMMENT ON TABLE session_operators IS 'Multiple operators per RFID scan session with section assignments (max 10 per session)';
8.2 Table Count Summary
| Domain | Tables | Schema Location |
|---|---|---|
| 1-2. Catalog | 12 | tenant |
| 3. Inventory | 3 | tenant |
| 4. Sales | 3 | tenant |
| 5. Customer Loyalty | 4 | tenant |
| 6-7. Returns & Reporting | 3 | tenant |
| 8. Multi-tenant | 3 | shared |
| 9. Auth & Authorization | 4 | tenant |
| 10. Offline Sync | 4 | tenant |
| 11. Cash Drawer | 6 | tenant |
| 12. Payment Processing | 4 | tenant |
| 13. RFID + Tax | 12 | tenant |
| TOTAL | 51 | |
Next Chapter: Chapter 09: Indexes & Performance - Index strategy and query optimization.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | III - Database |
| Chapter | 08 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 09: Indexes & Performance
Query Optimization and Index Strategy
9.1 Overview
This chapter provides the complete indexing strategy for the POS Platform database, organized by query pattern. Proper indexing is critical for a multi-tenant POS system where response times directly impact customer experience.
Performance Targets
| Operation | Target | Critical Threshold |
|---|---|---|
| Product lookup by SKU | < 5ms | 20ms |
| Product lookup by barcode | < 5ms | 20ms |
| Inventory check (single location) | < 10ms | 50ms |
| Order creation | < 50ms | 200ms |
| Customer search by name | < 20ms | 100ms |
| Daily sales report | < 500ms | 2s |
| Inventory count by location | < 100ms | 500ms |
9.2 Index Types and When to Use Them
B-Tree Indexes (Default)
Best for: Equality comparisons, range queries, sorting
-- Equality lookup (most common)
CREATE INDEX idx_products_sku ON products(sku);
-- Range query support
CREATE INDEX idx_orders_date ON orders(created_at);
-- Composite for multiple conditions
CREATE INDEX idx_inventory_lookup ON inventory_levels(variant_id, location_id);
BRIN Indexes (Block Range)
Best for: Time-series data, append-only tables, large datasets
-- Inventory transactions (append-only, ordered by time)
CREATE INDEX idx_inventory_trans_date ON inventory_transactions USING BRIN (created_at);
-- RFID scan events (high-volume, time-ordered; same index defined in Chapter 8)
CREATE INDEX idx_rfid_events_time ON rfid_scan_events USING BRIN (first_seen_at);
-- Sync queue (sequential inserts)
CREATE INDEX idx_sync_queue_created ON sync_queue USING BRIN (created_at);
BRIN Benefits:
- Often 100x or more smaller than a B-tree on time-series data
- Cheaper inserts (minimal index maintenance)
- Well suited to append-only audit/event tables
BRIN Limitations:
- Effective only when physical row order correlates with the indexed column
- Less precise (identifies candidate block ranges, not individual rows)
GIN Indexes (Generalized Inverted)
Best for: JSONB columns, full-text search, arrays
-- JSONB configuration columns
CREATE INDEX idx_devices_settings ON devices USING GIN (settings);
CREATE INDEX idx_tenant_modules_config ON tenant_modules USING GIN (configuration);
-- Full-text product search
CREATE INDEX idx_products_search ON products USING GIN (
to_tsvector('english', name || ' ' || COALESCE(description, ''))
);
-- Array columns
CREATE INDEX idx_cash_pickups_bags ON cash_pickups USING GIN (bag_numbers);
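For the planner to use an expression index like `idx_products_search`, the query must repeat the index expression exactly. A sketch (the search term is a placeholder):

```sql
SELECT id, name
FROM products
WHERE to_tsvector('english', name || ' ' || COALESCE(description, ''))
      @@ plainto_tsquery('english', 'wool socks');
```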
Partial Indexes
Best for: Queries with consistent WHERE clauses, reducing index size
-- Only active products (common filter)
CREATE INDEX idx_products_active ON products(name)
WHERE is_active = TRUE AND deleted_at IS NULL;
-- Only pending sync items
CREATE INDEX idx_sync_queue_pending ON sync_queue(device_id, priority, created_at)
WHERE status = 'pending';
-- Only open shifts
CREATE INDEX idx_shifts_open ON shifts(location_id, cash_drawer_id)
WHERE status = 'open';
-- Only unresolved conflicts
CREATE INDEX idx_sync_conflicts_unresolved ON sync_conflicts(entity_type, created_at)
WHERE resolved_at IS NULL;
Covering Indexes (INCLUDE)
Best for: Avoiding table lookups for frequently accessed columns
-- Product lookup returns name and price without table access
CREATE INDEX idx_products_sku_covering ON products(sku)
INCLUDE (name, base_price, is_active)
WHERE deleted_at IS NULL;
-- Inventory lookup includes quantity
CREATE INDEX idx_inventory_covering ON inventory_levels(variant_id, location_id)
INCLUDE (quantity_on_hand, quantity_reserved)
WHERE deleted_at IS NULL;
-- Customer lookup by loyalty number
CREATE INDEX idx_customers_loyalty_covering ON customers(loyalty_number)
INCLUDE (first_name, last_name, loyalty_points)
WHERE loyalty_number IS NOT NULL AND deleted_at IS NULL;
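To confirm a covering index is actually avoiding heap fetches, check the plan for an Index Only Scan; zero heap hits also require a reasonably fresh visibility map (i.e. a recent VACUUM). The SKU below is hypothetical:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT name, base_price, is_active
FROM products
WHERE sku = 'TSHIRT-RED-M'   -- hypothetical SKU
  AND deleted_at IS NULL;
-- Expect: Index Only Scan using idx_products_sku_covering
```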
9.3 Index Strategy by Domain
Domain 1-2: Catalog (Products, Categories)
-- ============================================================
-- PRODUCT LOOKUP INDEXES
-- ============================================================
-- Primary product lookup by SKU (unique, filtered for soft delete)
CREATE UNIQUE INDEX idx_products_sku ON products(sku)
WHERE deleted_at IS NULL;
-- Product search by name (full-text)
CREATE INDEX idx_products_name_search ON products
USING GIN (to_tsvector('english', name));
-- Filter by brand (common in category pages)
CREATE INDEX idx_products_brand ON products(brand_id)
WHERE is_active = TRUE AND deleted_at IS NULL;
-- Filter by product group (department browsing)
CREATE INDEX idx_products_group ON products(product_group_id)
WHERE is_active = TRUE AND deleted_at IS NULL;
-- ============================================================
-- VARIANT LOOKUP INDEXES
-- ============================================================
-- Variant lookup by SKU (unique)
CREATE UNIQUE INDEX idx_variants_sku ON variants(sku)
WHERE deleted_at IS NULL;
-- POS barcode scan (unique, critical for checkout speed)
CREATE UNIQUE INDEX idx_variants_barcode ON variants(barcode)
WHERE barcode IS NOT NULL AND deleted_at IS NULL;
-- Product's variants list
CREATE INDEX idx_variants_product ON variants(product_id, size, color)
WHERE is_active = TRUE AND deleted_at IS NULL;
-- ============================================================
-- CATEGORY NAVIGATION INDEXES
-- ============================================================
-- Category hierarchy traversal
CREATE INDEX idx_categories_parent ON categories(parent_id)
WHERE is_active = TRUE;
-- Category sort order for UI
CREATE INDEX idx_categories_display ON categories(display_order, name)
WHERE is_active = TRUE;
-- ============================================================
-- COLLECTION & TAG INDEXES
-- ============================================================
-- Active collections (marketing pages)
CREATE INDEX idx_collections_active ON collections(is_active, start_date, end_date)
WHERE is_active = TRUE;
-- Products in collection
CREATE INDEX idx_product_collection_coll ON product_collection(collection_id, display_order);
-- Products with tag
CREATE INDEX idx_product_tag_tag ON product_tag(tag_id);
Domain 3: Inventory
-- ============================================================
-- INVENTORY LEVEL INDEXES
-- ============================================================
-- Primary lookup: variant + location (covered)
CREATE UNIQUE INDEX idx_inventory_lookup ON inventory_levels(variant_id, location_id)
INCLUDE (quantity_on_hand, quantity_reserved, reorder_point)
WHERE deleted_at IS NULL;
-- Location inventory list (for inventory screens)
CREATE INDEX idx_inventory_by_location ON inventory_levels(location_id, variant_id)
WHERE deleted_at IS NULL;
-- Low stock alerts (filtered, ordered by severity)
CREATE INDEX idx_inventory_low_stock ON inventory_levels(location_id, quantity_on_hand)
WHERE quantity_on_hand <= reorder_point
AND deleted_at IS NULL
AND reorder_point > 0;
-- Out of stock items
CREATE INDEX idx_inventory_out_of_stock ON inventory_levels(location_id)
WHERE quantity_on_hand <= 0
AND deleted_at IS NULL;
-- ============================================================
-- INVENTORY TRANSACTION INDEXES (BRIN + B-Tree)
-- ============================================================
-- Time-series primary index (BRIN for efficiency)
CREATE INDEX idx_inventory_trans_date ON inventory_transactions
USING BRIN (created_at);
-- Variant history (for product page history)
CREATE INDEX idx_inventory_trans_variant ON inventory_transactions(variant_id, created_at DESC);
-- Location activity (for location reports)
CREATE INDEX idx_inventory_trans_location ON inventory_transactions(location_id, created_at DESC);
-- Reference document lookup
CREATE INDEX idx_inventory_trans_ref ON inventory_transactions(reference_type, reference_id)
WHERE reference_type IS NOT NULL;
-- Transaction type filtering
CREATE INDEX idx_inventory_trans_type ON inventory_transactions(transaction_type, created_at DESC);
Domain 4: Sales (Orders, Customers)
-- ============================================================
-- ORDER INDEXES
-- ============================================================
-- Order number lookup (receipt reprint)
CREATE UNIQUE INDEX idx_orders_number ON orders(order_number);
-- Orders by date (primary reporting index)
CREATE INDEX idx_orders_date ON orders(created_at DESC);
-- Orders by location + date (store reports)
CREATE INDEX idx_orders_location_date ON orders(location_id, created_at DESC);
-- Customer order history
CREATE INDEX idx_orders_customer ON orders(customer_id, created_at DESC)
WHERE customer_id IS NOT NULL;
-- Shift reconciliation
CREATE INDEX idx_orders_shift ON orders(shift_id, status)
WHERE shift_id IS NOT NULL;
-- Order status filtering
CREATE INDEX idx_orders_status ON orders(status, created_at DESC)
WHERE status != 'completed'; -- Completed is default, filter for exceptions
-- ============================================================
-- ORDER ITEMS INDEXES
-- ============================================================
-- Line items for order
CREATE INDEX idx_order_items_order ON order_items(order_id);
-- Sales by variant (product performance)
CREATE INDEX idx_order_items_variant ON order_items(variant_id, created_at DESC);
-- Returns tracking
CREATE INDEX idx_order_items_returned ON order_items(order_id)
WHERE is_returned = TRUE;
-- ============================================================
-- CUSTOMER INDEXES
-- ============================================================
-- Loyalty card lookup (POS checkout)
CREATE UNIQUE INDEX idx_customers_loyalty ON customers(loyalty_number)
WHERE loyalty_number IS NOT NULL AND deleted_at IS NULL;
-- Email lookup (unique)
CREATE UNIQUE INDEX idx_customers_email ON customers(email)
WHERE email IS NOT NULL AND deleted_at IS NULL;
-- Phone lookup
CREATE INDEX idx_customers_phone ON customers(phone)
WHERE phone IS NOT NULL AND deleted_at IS NULL;
-- Name search (partial match supported)
CREATE INDEX idx_customers_name ON customers(last_name, first_name)
WHERE deleted_at IS NULL;
-- Customer value ranking
CREATE INDEX idx_customers_value ON customers(total_spent DESC)
WHERE deleted_at IS NULL;
-- Recent visitors
CREATE INDEX idx_customers_last_visit ON customers(last_visit DESC)
WHERE deleted_at IS NULL;
Domain 10: Offline Sync
-- ============================================================
-- SYNC QUEUE INDEXES
-- ============================================================
-- Idempotency check (unique, critical for exactly-once processing)
CREATE UNIQUE INDEX idx_sync_queue_idempotency ON sync_queue(idempotency_key);
-- Device sync sequence (primary sync ordering)
CREATE INDEX idx_sync_queue_device_seq ON sync_queue(device_id, sequence_number);
-- Pending queue (worker polling)
CREATE INDEX idx_sync_queue_pending ON sync_queue(status, priority, created_at)
WHERE status = 'pending';
-- Failed items for retry
CREATE INDEX idx_sync_queue_failed ON sync_queue(status, attempts, created_at)
WHERE status = 'failed' AND attempts < 5;
-- Entity lookup for conflict detection
CREATE INDEX idx_sync_queue_entity ON sync_queue(entity_type, entity_id);
-- ============================================================
-- SYNC CONFLICT INDEXES
-- ============================================================
-- Unresolved conflicts (admin dashboard)
CREATE INDEX idx_sync_conflicts_unresolved ON sync_conflicts(created_at DESC)
WHERE resolved_at IS NULL;
-- Conflicts by entity
CREATE INDEX idx_sync_conflicts_entity ON sync_conflicts(entity_type, entity_id);
-- Conflict type distribution
CREATE INDEX idx_sync_conflicts_type ON sync_conflicts(conflict_type)
WHERE resolved_at IS NULL;
-- ============================================================
-- DEVICE INDEXES
-- ============================================================
-- Hardware ID (unique, device registration)
CREATE UNIQUE INDEX idx_devices_hardware ON devices(hardware_id);
-- Devices by location
CREATE INDEX idx_devices_location ON devices(location_id, status);
-- Stale devices (monitoring)
CREATE INDEX idx_devices_last_seen ON devices(last_seen_at)
WHERE status = 'active';
Domain 11-12: Cash & Payment
-- ============================================================
-- SHIFT INDEXES
-- ============================================================
-- Shift number lookup
CREATE UNIQUE INDEX idx_shifts_number ON shifts(shift_number);
-- Open shifts by drawer (prevent duplicates)
CREATE UNIQUE INDEX idx_shifts_drawer_open ON shifts(cash_drawer_id)
WHERE status = 'open';
-- Shifts by location + date (reports)
CREATE INDEX idx_shifts_location_date ON shifts(location_id, opened_at DESC);
-- Unreconciled shifts
CREATE INDEX idx_shifts_unreconciled ON shifts(location_id, opened_at)
WHERE status IN ('closed', 'closing');
-- ============================================================
-- CASH MOVEMENT INDEXES
-- ============================================================
-- Movements by shift (reconciliation)
CREATE INDEX idx_cash_movements_shift ON cash_movements(shift_id, created_at);
-- Movements by type (auditing)
CREATE INDEX idx_cash_movements_type ON cash_movements(movement_type, created_at DESC);
-- Reference lookup
CREATE INDEX idx_cash_movements_ref ON cash_movements(reference_type, reference_id)
WHERE reference_type IS NOT NULL;
-- ============================================================
-- PAYMENT ATTEMPT INDEXES
-- ============================================================
-- Payments by order
CREATE INDEX idx_payment_attempts_order ON payment_attempts(order_id);
-- Payment status monitoring
CREATE INDEX idx_payment_attempts_status ON payment_attempts(status, created_at DESC);
-- Processor transaction lookup (chargebacks)
CREATE INDEX idx_payment_attempts_processor ON payment_attempts(processor_transaction_id)
WHERE processor_transaction_id IS NOT NULL;
-- Daily payment activity
CREATE INDEX idx_payment_attempts_date ON payment_attempts(created_at DESC);
-- ============================================================
-- PAYMENT BATCH INDEXES
-- ============================================================
-- Batch number lookup
CREATE UNIQUE INDEX idx_payment_batches_number ON payment_batches(batch_number);
-- Open batches (auto-close job)
CREATE INDEX idx_payment_batches_open ON payment_batches(location_id, batch_date)
WHERE status = 'open';
-- Pending settlement
CREATE INDEX idx_payment_batches_pending ON payment_batches(submitted_at)
WHERE status = 'pending';
Domain 13: RFID Module (Counting Subsystem)
-- ============================================================
-- RFID TAG INDEXES (all include tenant_id for RLS performance)
-- ============================================================
-- EPC lookup (unique per tenant, critical for scan performance)
CREATE UNIQUE INDEX idx_rfid_tags_epc ON rfid_tags(tenant_id, epc);
-- Tags by variant (product inventory counts)
CREATE INDEX idx_rfid_tags_variant ON rfid_tags(tenant_id, variant_id, status)
WHERE status = 'active';
-- Tags by location (location inventory counts)
CREATE INDEX idx_rfid_tags_location ON rfid_tags(tenant_id, current_location_id, status)
WHERE status = 'active';
-- Serial number sequence
CREATE INDEX idx_rfid_tags_serial ON rfid_tags(tenant_id, serial_number);
-- Recently scanned (for scan recency queries)
CREATE INDEX idx_rfid_tags_scanned ON rfid_tags(last_scanned_at DESC)
WHERE last_scanned_at IS NOT NULL;
-- ============================================================
-- RFID SCAN EVENT INDEXES (High-Volume, idempotent uploads)
-- ============================================================
-- Idempotency: one row per (session, epc) — prevents duplicate uploads
CREATE UNIQUE INDEX idx_rfid_events_idempotent ON rfid_scan_events(session_id, epc);
-- Session events
CREATE INDEX idx_rfid_events_session ON rfid_scan_events(session_id);
-- EPC lookup (match to tag)
CREATE INDEX idx_rfid_events_epc ON rfid_scan_events(epc);
-- Unknown tags (for investigation)
CREATE INDEX idx_rfid_events_unknown ON rfid_scan_events(session_id)
WHERE rfid_tag_id IS NULL;
-- Time-based partition key (if partitioning by first_seen_at)
CREATE INDEX idx_rfid_events_time ON rfid_scan_events USING BRIN (first_seen_at);
-- ============================================================
-- RFID PRINT JOB INDEXES
-- ============================================================
-- Job queue (worker polling)
CREATE INDEX idx_rfid_jobs_queue ON rfid_print_jobs(status, priority, created_at)
WHERE status IN ('queued', 'printing');
-- Jobs by printer
CREATE INDEX idx_rfid_jobs_printer ON rfid_print_jobs(printer_id, created_at DESC);
-- ============================================================
-- RFID TAG TEMPLATES & MAPPINGS
-- ============================================================
-- Templates by tenant
CREATE INDEX idx_rfid_templates_tenant ON rfid_tag_templates(tenant_id);
-- Default template per type per tenant
CREATE UNIQUE INDEX idx_rfid_templates_default ON rfid_tag_templates(tenant_id, template_type)
WHERE is_default = TRUE;
-- Tag mappings by SKU
CREATE INDEX idx_rfid_mappings_sku ON rfid_tag_mappings(tenant_id, sku);
-- Tag mappings by variant
CREATE INDEX idx_rfid_mappings_variant ON rfid_tag_mappings(variant_id);
-- ============================================================
-- SESSION OPERATORS (Multi-Operator Counting)
-- ============================================================
-- Operators by session
CREATE INDEX idx_session_operators_session ON session_operators(session_id);
-- Sessions by operator
CREATE INDEX idx_session_operators_user ON session_operators(operator_id);
9.4 Query Optimization Examples
Example 1: Product Lookup by Barcode
Query:
SELECT v.id, v.sku, p.name, p.base_price + v.price_adjustment AS price
FROM variants v
JOIN products p ON v.product_id = p.id
WHERE v.barcode = '012345678901'
AND v.deleted_at IS NULL
AND p.deleted_at IS NULL;
Optimization:
-- Covering index avoids table lookup for common columns
CREATE INDEX idx_variants_barcode_covering ON variants(barcode)
INCLUDE (sku, product_id, price_adjustment)
WHERE barcode IS NOT NULL AND deleted_at IS NULL;
-- Result: Index-only scan, < 1ms
EXPLAIN ANALYZE:
Index Only Scan using idx_variants_barcode_covering on variants v
Index Cond: (barcode = '012345678901'::character varying)
Heap Fetches: 0
Planning Time: 0.1 ms
Execution Time: 0.05 ms
Example 2: Daily Sales Report
Query:
SELECT
l.name AS location,
COUNT(o.id) AS transactions,
SUM(o.total_amount) AS sales,
SUM(o.tax_amount) AS tax
FROM orders o
JOIN locations l ON o.location_id = l.id
WHERE o.created_at >= '2025-01-01'
AND o.created_at < '2025-01-02'
AND o.status = 'completed'
GROUP BY l.name
ORDER BY sales DESC;
Optimization:
-- Composite index for date range + status + location
CREATE INDEX idx_orders_reporting ON orders(created_at, location_id)
INCLUDE (total_amount, tax_amount)
WHERE status = 'completed';
-- Result: Index scan with aggregate pushdown
EXPLAIN ANALYZE:
HashAggregate (cost=150..155 rows=5)
Group Key: l.name
-> Nested Loop (cost=0.5..140 rows=1200)
-> Index Scan using idx_orders_reporting on orders o
Index Cond: (created_at >= '...' AND created_at < '...')
-- No status filter needed: the partial index already restricts to completed orders
-> Index Scan using idx_locations_pkey on locations l
Index Cond: (id = o.location_id)
Planning Time: 0.5 ms
Execution Time: 12 ms
Example 3: Inventory Low Stock Alert
Query:
SELECT
v.sku,
p.name,
l.code AS location,
il.quantity_on_hand,
il.reorder_point,
il.reorder_quantity
FROM inventory_levels il
JOIN variants v ON il.variant_id = v.id
JOIN products p ON v.product_id = p.id
JOIN locations l ON il.location_id = l.id
WHERE il.quantity_on_hand <= il.reorder_point
AND il.reorder_point > 0
AND il.deleted_at IS NULL
AND l.is_active = TRUE
ORDER BY (il.reorder_point - il.quantity_on_hand) DESC
LIMIT 100;
Optimization:
-- Partial index for low stock condition
CREATE INDEX idx_inventory_low_stock_alert ON inventory_levels(
location_id,
(reorder_point - quantity_on_hand) DESC
)
INCLUDE (variant_id, quantity_on_hand, reorder_point, reorder_quantity)
WHERE quantity_on_hand <= reorder_point
AND reorder_point > 0
AND deleted_at IS NULL;
Example 4: Sync Queue Processing
Query:
SELECT id, device_id, operation_type, entity_type, entity_id, payload
FROM sync_queue
WHERE status = 'pending'
ORDER BY priority ASC, created_at ASC
LIMIT 50
FOR UPDATE SKIP LOCKED;
Optimization:
-- Partial index for pending items only
CREATE INDEX idx_sync_queue_worker ON sync_queue(priority, created_at)
INCLUDE (device_id, operation_type, entity_type, entity_id)
WHERE status = 'pending';
-- Result: Index scan over pending rows only; SKIP LOCKED keeps workers from blocking each other
-- (FOR UPDATE must visit the heap, so a true index-only scan is not possible here)
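The `FOR UPDATE SKIP LOCKED` pattern lets several workers drain the queue concurrently without blocking on each other's rows. A minimal in-memory sketch of those semantics (the `SyncItem` shape is illustrative; in production the claim happens inside PostgreSQL):

```python
from dataclasses import dataclass

@dataclass
class SyncItem:
    id: str
    priority: int
    created_at: str          # ISO timestamp, sorts lexicographically
    status: str = "pending"
    locked: bool = False     # stands in for a row lock held by another worker

def claim_batch(queue, batch_size=50):
    """Mimic SELECT ... FOR UPDATE SKIP LOCKED: take up to batch_size
    pending, unlocked items in (priority, created_at) order, marking
    them locked so a concurrent worker would skip them."""
    eligible = sorted(
        (i for i in queue if i.status == "pending" and not i.locked),
        key=lambda i: (i.priority, i.created_at),
    )
    batch = eligible[:batch_size]
    for item in batch:
        item.locked = True
    return batch

queue = [
    SyncItem("c", 2, "2025-01-15T10:00:02Z"),
    SyncItem("a", 1, "2025-01-15T10:00:00Z"),
    SyncItem("b", 1, "2025-01-15T10:00:01Z", locked=True),  # held by another worker
]
batch = claim_batch(queue, batch_size=2)
print([i.id for i in batch])  # ['a', 'c'] — the locked item "b" is skipped
```

Because claimed rows stay invisible to other workers until commit, each queue item is processed by exactly one worker per poll cycle.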
Example 5: RFID Tag Count by Location
Query:
SELECT
l.code,
l.name,
COUNT(*) FILTER (WHERE rt.status = 'active') AS active_tags,
COUNT(*) FILTER (WHERE rt.status = 'sold') AS sold_tags,
COUNT(*) AS total_tags
FROM locations l
LEFT JOIN rfid_tags rt ON rt.current_location_id = l.id
WHERE l.is_active = TRUE
GROUP BY l.id, l.code, l.name
ORDER BY active_tags DESC;
Optimization:
-- Pre-aggregated materialized view for dashboard
CREATE MATERIALIZED VIEW rfid_tag_counts AS
SELECT
current_location_id,
status,
COUNT(*) AS tag_count
FROM rfid_tags
GROUP BY current_location_id, status;
CREATE UNIQUE INDEX ON rfid_tag_counts(current_location_id, status);
-- Refresh periodically
REFRESH MATERIALIZED VIEW CONCURRENTLY rfid_tag_counts;
9.5 Performance Monitoring Queries
Index Usage Statistics
-- Most used indexes
SELECT
schemaname,
relname AS table_name,
indexrelname AS index_name,
idx_scan AS scans,
idx_tup_read AS tuples_read,
idx_tup_fetch AS tuples_fetched
FROM pg_stat_user_indexes
WHERE schemaname LIKE 'tenant_%'
ORDER BY idx_scan DESC
LIMIT 20;
Unused Indexes
-- Indexes never used (candidates for removal)
SELECT
schemaname,
relname AS table_name,
indexrelname AS index_name,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND schemaname LIKE 'tenant_%'
ORDER BY pg_relation_size(indexrelid) DESC;
Slow Queries
-- Enable pg_stat_statements first (requires shared_preload_libraries = 'pg_stat_statements'
-- in postgresql.conf and a server restart)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 20 slowest queries by mean time
SELECT
substring(query, 1, 100) AS query_preview,
calls,
round(mean_exec_time::numeric, 2) AS avg_ms,
round(total_exec_time::numeric, 2) AS total_ms,
rows
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
Table Bloat Check
-- Tables with significant dead tuples
SELECT
schemaname,
relname AS table_name,
n_live_tup AS live_rows,
n_dead_tup AS dead_rows,
round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
last_autovacuum,
last_autoanalyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;
Index Bloat Estimation
-- Surface the largest non-unique indexes (bloat candidates) with their usage stats
SELECT
current_database() AS db,
s.schemaname,
s.relname AS table_name,
s.indexrelname AS index_name,
pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
s.idx_scan AS scans,
s.idx_tup_read AS tuples_read
FROM pg_stat_user_indexes s
JOIN pg_index i ON s.indexrelid = i.indexrelid
WHERE NOT i.indisunique -- Exclude unique indexes
ORDER BY pg_relation_size(s.indexrelid) DESC
LIMIT 20;
9.6 Index Maintenance
Routine Maintenance Commands
-- Reindex a specific table
REINDEX TABLE tenant_0001.products;
-- Reindex entire schema
REINDEX SCHEMA tenant_0001;
-- Concurrent reindex (no lock, PostgreSQL 12+)
REINDEX TABLE CONCURRENTLY tenant_0001.products;
-- Vacuum and analyze (update statistics)
VACUUM ANALYZE tenant_0001.products;
-- Full vacuum (reclaim space, requires exclusive lock)
VACUUM FULL tenant_0001.products;
Automated Maintenance Configuration
-- postgresql.conf settings
autovacuum = on
autovacuum_vacuum_scale_factor = 0.1 -- Vacuum when 10% of rows are dead
autovacuum_analyze_scale_factor = 0.05 -- Analyze when 5% of rows change
autovacuum_vacuum_cost_delay = 2ms -- Reduce I/O impact
autovacuum_max_workers = 4 -- Parallel workers
-- For high-update tables (sync_queue, rfid_scan_events)
ALTER TABLE tenant_0001.sync_queue SET (
autovacuum_vacuum_scale_factor = 0.01,
autovacuum_analyze_scale_factor = 0.01
);
Index Creation Best Practices
-- Create indexes concurrently (no table lock)
CREATE INDEX CONCURRENTLY idx_orders_new ON orders(created_at);
-- Check for invalid indexes after concurrent creation
SELECT indexrelid::regclass AS index_name, indisvalid
FROM pg_index
WHERE NOT indisvalid;
-- If invalid, drop and recreate
DROP INDEX CONCURRENTLY idx_orders_new;
CREATE INDEX CONCURRENTLY idx_orders_new ON orders(created_at);
9.7 Performance Checklist
Before Deployment
- All primary key columns have indexes (automatic)
- All foreign key columns have indexes (manual)
- Unique constraints have backing indexes (automatic)
- High-frequency query patterns have covering indexes
- BRIN indexes on time-series tables
- Partial indexes for filtered queries
- GIN indexes on JSONB columns with queries
- Full-text indexes if text search is used
Regular Monitoring
- Check pg_stat_statements for slow queries weekly
- Review unused indexes monthly (remove if truly unused)
- Monitor table bloat (vacuum if > 20% dead tuples)
- Verify index usage after schema changes
- Run ANALYZE after bulk data loads
Query Optimization Workflow
- Identify slow query via pg_stat_statements or application logs
- Run EXPLAIN ANALYZE to see execution plan
- Check for sequential scans on large tables
- Identify missing indexes or suboptimal index choice
- Create index (CONCURRENTLY for production)
- Verify improvement with EXPLAIN ANALYZE
- Monitor for regression
9.8 Quick Reference: Common Index Patterns
| Pattern | Index Type | Example |
|---|---|---|
| Unique lookup | B-tree UNIQUE | CREATE UNIQUE INDEX ... ON orders(order_number) |
| Foreign key | B-tree | CREATE INDEX ... ON order_items(order_id) |
| Range query | B-tree | CREATE INDEX ... ON orders(created_at) |
| Time-series | BRIN | CREATE INDEX ... USING BRIN (created_at) |
| Full-text | GIN | CREATE INDEX ... USING GIN (to_tsvector(...)) |
| JSONB | GIN | CREATE INDEX ... USING GIN (settings) |
| Soft delete | Partial B-tree | CREATE INDEX ... WHERE deleted_at IS NULL |
| Status filter | Partial B-tree | CREATE INDEX ... WHERE status = 'pending' |
| Covering | INCLUDE | CREATE INDEX ...(sku) INCLUDE (name, price) |
End of Part III: Database
Next: Part IV: Backend - Chapter 10: API Design - API design and service layer implementation.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-22 |
| Author | Claude Code |
| Status | Active |
| Part | III - Database |
| Chapter | 09 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 10: API Design
RESTful API Architecture for Multi-Tenant POS
This chapter provides the complete API specification for the POS Platform, including all endpoints, request/response formats, and real-time communication patterns.
10.1 Base URL Structure
Tenant-Aware URL Pattern
https://{tenant}.pos-platform.com/api/v1/{resource}
Examples:
https://nexus.pos-platform.com/api/v1/items
https://acme-retail.pos-platform.com/api/v1/sales
https://fashion-outlet.pos-platform.com/api/v1/inventory
Alternative: Header-Based Tenancy
For single-domain deployments:
https://api.pos-platform.com/api/v1/{resource}
X-Tenant-Id: nexus
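Either way, middleware must resolve the tenant slug before any query runs. A minimal sketch of that resolution order — prefer the Host subdomain, fall back to the X-Tenant-Id header (treating api.* as the shared single-domain host is an assumption of this sketch, not part of the spec):

```python
def resolve_tenant(headers, base_domain="pos-platform.com"):
    """Resolve the tenant slug: prefer the subdomain of the Host header,
    fall back to X-Tenant-Id for single-domain deployments."""
    host = headers.get("Host", "").split(":")[0]  # drop any port suffix
    if host.endswith("." + base_domain):
        sub = host[: -len("." + base_domain)]
        if sub and sub != "api":  # assumption: api.* is the shared host
            return sub
    return headers.get("X-Tenant-Id")

print(resolve_tenant({"Host": "nexus.pos-platform.com"}))                        # nexus
print(resolve_tenant({"Host": "api.pos-platform.com", "X-Tenant-Id": "nexus"}))  # nexus
```

A request that resolves to no tenant should be rejected before reaching any controller.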
10.2 API Versioning Strategy
/api/v1/... ← Current (stable)
/api/v2/... ← Future (when breaking changes needed)
Version Header:
X-API-Version: 2025-01-15
10.3 Complete Endpoint Reference
10.3.1 Catalog Domain
Items (Products)
# List all items (paginated)
GET /api/v1/items?page=1&pageSize=50&category=apparel&active=true
# Get single item
GET /api/v1/items/{id}
# Get item by SKU
GET /api/v1/items/by-sku/{sku}
# Get item by barcode
GET /api/v1/items/by-barcode/{barcode}
# Create item
POST /api/v1/items
# Update item
PUT /api/v1/items/{id}
# Partial update
PATCH /api/v1/items/{id}
# Delete (soft delete)
DELETE /api/v1/items/{id}
# Bulk operations
POST /api/v1/items/bulk-create
PUT /api/v1/items/bulk-update
POST /api/v1/items/bulk-import
Create Item Request:
{
"sku": "NXJ-1001-BLK-M",
"barcode": "0123456789012",
"name": "Classic Oxford Shirt",
"description": "Premium cotton oxford shirt",
"categoryId": "cat_apparel_shirts",
"vendorId": "vendor_acme",
"brand": "ACME Apparel",
"cost": 24.99,
"price": 59.99,
"taxable": true,
"trackInventory": true,
"reorderPoint": 10,
"reorderQuantity": 50,
"tags": ["new-arrival", "oxford", "premium"],
"attributes": {
"color": "Black",
"size": "Medium",
"material": "100% Cotton"
},
"metadata": {
"seasonCode": "SS2025",
"fabricId": "fab_cotton_100"
}
}
Item Response:
{
"id": "item_01HQWXYZ123",
"tenantId": "tenant_nexus",
"sku": "NXJ-1001-BLK-M",
"barcode": "0123456789012",
"name": "Classic Oxford Shirt",
"description": "Premium cotton oxford shirt",
"categoryId": "cat_apparel_shirts",
"categoryName": "Shirts",
"vendorId": "vendor_acme",
"vendorName": "ACME Apparel",
"cost": 24.99,
"price": 59.99,
"taxable": true,
"trackInventory": true,
"isActive": true,
"reorderPoint": 10,
"reorderQuantity": 50,
"totalQuantityOnHand": 145,
"attributes": {
"color": "Black",
"size": "Medium",
"material": "100% Cotton"
},
"inventoryByLocation": [
{ "locationId": "loc_hq", "locationName": "Warehouse", "quantity": 100 },
{ "locationId": "loc_gm", "locationName": "Greenbrier", "quantity": 25 },
{ "locationId": "loc_lm", "locationName": "Lynnhaven", "quantity": 20 }
],
"createdAt": "2025-01-15T10:30:00Z",
"updatedAt": "2025-01-20T14:22:00Z",
"_links": {
"self": "/api/v1/items/item_01HQWXYZ123",
"category": "/api/v1/categories/cat_apparel_shirts",
"vendor": "/api/v1/vendors/vendor_acme",
"inventory": "/api/v1/inventory?itemId=item_01HQWXYZ123"
}
}
Categories
GET /api/v1/categories # List all (hierarchical)
GET /api/v1/categories/{id} # Get single
GET /api/v1/categories/{id}/items # Items in category
POST /api/v1/categories # Create
PUT /api/v1/categories/{id} # Update
DELETE /api/v1/categories/{id} # Delete
Vendors
GET /api/v1/vendors # List all
GET /api/v1/vendors/{id} # Get single
GET /api/v1/vendors/{id}/items # Items from vendor
POST /api/v1/vendors # Create
PUT /api/v1/vendors/{id} # Update
DELETE /api/v1/vendors/{id} # Delete
10.3.2 Sales Domain
# List sales (paginated, filtered)
GET /api/v1/sales?page=1&pageSize=50&locationId=loc_gm&from=2025-01-01&to=2025-01-31
# Get single sale
GET /api/v1/sales/{id}
# Create sale (complete transaction)
POST /api/v1/sales
# Process return
POST /api/v1/sales/{id}/return
# Void transaction
POST /api/v1/sales/{id}/void
# Get receipt
GET /api/v1/sales/{id}/receipt
# Reprint receipt
POST /api/v1/sales/{id}/receipt/print
Create Sale Request:
{
"locationId": "loc_gm",
"registerId": "reg_gm_01",
"employeeId": "emp_john",
"customerId": "cust_jane",
"lineItems": [
{
"itemId": "item_01HQWXYZ123",
"quantity": 2,
"unitPrice": 59.99,
"discountAmount": 0
},
{
"itemId": "item_02ABCDEF456",
"quantity": 1,
"unitPrice": 29.99,
"discountAmount": 5.00
}
],
"discounts": [
{
"type": "percentage",
"value": 10,
"reason": "Loyalty Member Discount"
}
],
"payments": [
{
"method": "credit_card",
"amount": 140.92,
"reference": "ch_3MqL0Z2eZvKYlo2C",
"cardLast4": "4242",
"cardBrand": "visa"
}
],
"tax": {
"stateTax": 5.99,
"countyTax": 2.50,
"cityTax": 1.96,
"totalTax": 10.45
},
"notes": "Customer requested gift receipt"
}
Sale Response:
{
"id": "sale_01HQWXYZ789",
"receiptNumber": "GM-20250115-0042",
"tenantId": "tenant_nexus",
"locationId": "loc_gm",
"locationName": "Greenbrier Mall",
"registerId": "reg_gm_01",
"employeeId": "emp_john",
"employeeName": "John Smith",
"customerId": "cust_jane",
"customerName": "Jane Doe",
"status": "completed",
"subtotal": 149.97,
"discountTotal": 19.50,
"tax": {
"stateTax": 5.99,
"countyTax": 2.50,
"cityTax": 1.96,
"totalTax": 10.45
},
"grandTotal": 140.92,
"lineItems": [
{
"id": "li_001",
"itemId": "item_01HQWXYZ123",
"sku": "NXJ-1001-BLK-M",
"name": "Classic Oxford Shirt",
"quantity": 2,
"unitPrice": 59.99,
"extendedPrice": 119.98,
"discountAmount": 0,
"netPrice": 119.98
}
],
"payments": [
{
"id": "pmt_001",
"method": "credit_card",
"amount": 140.92,
"status": "captured",
"cardLast4": "4242",
"cardBrand": "visa"
}
],
"createdAt": "2025-01-15T14:32:00Z",
"_links": {
"self": "/api/v1/sales/sale_01HQWXYZ789",
"receipt": "/api/v1/sales/sale_01HQWXYZ789/receipt",
"customer": "/api/v1/customers/cust_jane"
}
}
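The totals in the response follow mechanically from the request. A sketch of the arithmetic using Decimal with cent rounding and illustrative figures — note that the discount base (here, the post-line-discount subtotal) and the rounding rule are tenant business decisions, not something the API spec mandates:

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def sale_totals(line_items, order_discount_pct, total_tax):
    """Compute response totals from a Create Sale request.
    Assumption: percentage discounts apply after line-item discounts,
    and every intermediate amount is rounded to the cent."""
    subtotal = sum(
        (Decimal(str(li["unitPrice"])) * li["quantity"] for li in line_items),
        Decimal("0"),
    )
    line_discounts = sum(
        (Decimal(str(li["discountAmount"])) for li in line_items), Decimal("0")
    )
    order_discount = (
        (subtotal - line_discounts) * Decimal(str(order_discount_pct)) / 100
    ).quantize(CENT, rounding=ROUND_HALF_UP)
    discount_total = line_discounts + order_discount
    grand_total = subtotal - discount_total + Decimal(str(total_tax))
    return {
        "subtotal": subtotal.quantize(CENT),
        "discountTotal": discount_total.quantize(CENT),
        "grandTotal": grand_total.quantize(CENT),
    }

items = [
    {"unitPrice": 40.00, "quantity": 1, "discountAmount": 0},
    {"unitPrice": 10.00, "quantity": 2, "discountAmount": 2.00},
]
print(sale_totals(items, order_discount_pct=10, total_tax=4.00))
# subtotal 60.00, discountTotal 7.80 (2.00 line + 5.80 order), grandTotal 56.20
```

Keeping this calculation server-side (and echoing it in the response) lets POS clients verify their locally computed total before capturing payment.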
Return Request:
{
"lineItems": [
{
"originalLineItemId": "li_001",
"quantity": 1,
"reason": "wrong_size"
}
],
"refundMethod": "original_payment",
"employeeId": "emp_john"
}
10.3.3 Inventory Domain
# Get inventory levels
GET /api/v1/inventory?locationId=loc_gm&itemId=item_01HQWXYZ123
# Get inventory for all locations
GET /api/v1/inventory/by-item/{itemId}
# Adjust inventory
POST /api/v1/inventory/adjust
# Transfer between locations
POST /api/v1/inventory/transfer
# Start inventory count
POST /api/v1/inventory/count
# Submit count results
PUT /api/v1/inventory/count/{countId}
# Finalize count
POST /api/v1/inventory/count/{countId}/finalize
# Get adjustment history
GET /api/v1/inventory/adjustments?itemId={itemId}&from={date}&to={date}
Adjust Inventory Request:
{
"locationId": "loc_gm",
"adjustments": [
{
"itemId": "item_01HQWXYZ123",
"quantityChange": -2,
"reason": "damaged",
"notes": "Water damage from roof leak"
}
],
"employeeId": "emp_manager"
}
Transfer Request:
{
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"items": [
{
"itemId": "item_01HQWXYZ123",
"quantity": 20
},
{
"itemId": "item_02ABCDEF456",
"quantity": 15
}
],
"notes": "Weekly replenishment",
"employeeId": "emp_warehouse"
}
Transfer Response:
{
"id": "transfer_01HQWXYZ",
"transferNumber": "TRF-20250115-001",
"status": "pending",
"fromLocationId": "loc_hq",
"fromLocationName": "Warehouse",
"toLocationId": "loc_gm",
"toLocationName": "Greenbrier Mall",
"items": [
{
"itemId": "item_01HQWXYZ123",
"sku": "NXJ-1001-BLK-M",
"name": "Classic Oxford Shirt",
"quantity": 20
}
],
"createdAt": "2025-01-15T09:00:00Z",
"createdBy": "emp_warehouse"
}
10.3.4 Customers Domain
# Search customers
GET /api/v1/customers/search?q=jane&email=jane@example.com
# List customers (paginated)
GET /api/v1/customers?page=1&pageSize=50
# Get single customer
GET /api/v1/customers/{id}
# Create customer
POST /api/v1/customers
# Update customer
PUT /api/v1/customers/{id}
# Get customer purchase history
GET /api/v1/customers/{id}/purchases
# Get customer loyalty points
GET /api/v1/customers/{id}/loyalty
Customer Response:
{
"id": "cust_jane",
"firstName": "Jane",
"lastName": "Doe",
"email": "jane.doe@example.com",
"phone": "+1-555-123-4567",
"loyaltyTier": "gold",
"loyaltyPoints": 2450,
"totalPurchases": 45,
"totalSpent": 3245.67,
"lastVisit": "2025-01-15T14:32:00Z",
"preferredLocationId": "loc_gm",
"marketingOptIn": true,
"createdAt": "2024-03-15T10:00:00Z"
}
10.3.5 Employees Domain
# List employees
GET /api/v1/employees?locationId=loc_gm&active=true
# Get single employee
GET /api/v1/employees/{id}
# Clock in
POST /api/v1/employees/{id}/clock-in
# Clock out
POST /api/v1/employees/{id}/clock-out
# Get time entries
GET /api/v1/employees/{id}/time-entries?from=2025-01-01&to=2025-01-15
# Get sales performance
GET /api/v1/employees/{id}/performance?period=month
Clock-In Request:
{
"locationId": "loc_gm",
"pin": "1234",
"registerId": "reg_gm_01"
}
Clock-In Response:
{
"timeEntryId": "time_01HQWXYZ",
"employeeId": "emp_john",
"employeeName": "John Smith",
"locationId": "loc_gm",
"clockInTime": "2025-01-15T09:00:00Z",
"status": "clocked_in"
}
10.3.6 Reports Domain
# Sales Summary
GET /api/v1/reports/sales-summary?from=2025-01-01&to=2025-01-31&locationId=loc_gm
# Inventory Value
GET /api/v1/reports/inventory-value?locationId=loc_gm
# Employee Performance
GET /api/v1/reports/employee-performance?from=2025-01-01&to=2025-01-31
# Category Sales
GET /api/v1/reports/category-sales?from=2025-01-01&to=2025-01-31
# Top Sellers
GET /api/v1/reports/top-sellers?limit=20&period=month
# Slow Movers
GET /api/v1/reports/slow-movers?daysWithoutSale=30
Sales Summary Response:
{
"period": {
"from": "2025-01-01T00:00:00Z",
"to": "2025-01-31T23:59:59Z"
},
"summary": {
"totalTransactions": 1250,
"totalGrossSales": 89500.00,
"totalDiscounts": 4250.00,
"totalReturns": 1200.00,
"totalNetSales": 84050.00,
"totalTax": 6723.00,
"averageTransactionValue": 67.24,
"itemsSold": 3450
},
"byLocation": [
{
"locationId": "loc_gm",
"locationName": "Greenbrier Mall",
"transactions": 450,
"netSales": 32500.00
}
],
"byPaymentMethod": [
{ "method": "credit_card", "amount": 65000.00, "count": 980 },
{ "method": "cash", "amount": 15000.00, "count": 220 },
{ "method": "gift_card", "amount": 4050.00, "count": 50 }
]
}
10.3.7 RFID Domain (Optional Module — Counting Only)
Scope: RFID endpoints support inventory counting operations only. Receiving is handled by barcode Scanner endpoints. See BRD Section 5.16.6 for Scanner vs RFID distinction.
# Tag Printing
POST /api/v1/rfid/tags/print # Queue tags for printing
GET /api/v1/rfid/tags/print/{jobId} # Get print job status
# Session Management
POST /api/v1/rfid/scans/sessions # Create counting session
POST /api/v1/rfid/scans/sessions/{sessionId}/join # Join as additional operator
POST /api/v1/rfid/scans/sessions/{sessionId}/complete # Complete session → variance calc
# Chunked Upload (idempotent, ≤5,000 events/chunk)
POST /api/v1/rfid/scans/sessions/{sessionId}/chunks # Upload event chunk
GET /api/v1/rfid/scans/sessions/{sessionId}/upload-status # Check upload progress (for resume)
# Configuration (read-only for mobile app)
GET /api/v1/rfid/config # Tenant RFID configuration
GET /api/v1/rfid/products # Product catalog cache
GET /api/v1/rfid/tag-mappings # EPC → SKU mappings
Key Design Decisions:
- Chunked uploads: Sessions with 100,000+ tag reads are uploaded in chunks of 5,000 events. The server deduplicates via the UNIQUE(session_id, epc) constraint, making retries safe.
- Multi-operator: Up to 10 operators can join a single session via /join. Each scans an assigned section; the server merges results, keeping the highest RSSI per EPC.
- Offline-first: The mobile app creates sessions locally and uploads chunks when connectivity is available. GET /upload-status enables resume after a network failure.
See Appendix A, Section A.13 for full request/response schemas.
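The chunking and merge semantics above can be sketched in a few lines. This is an in-memory stand-in for the server's UNIQUE(session_id, epc) upsert; the event field names are illustrative:

```python
def chunk_events(events, chunk_size=5000):
    """Split a session's scan events into upload chunks of at most chunk_size."""
    return [events[i:i + chunk_size] for i in range(0, len(events), chunk_size)]

def merge_chunk(store, session_id, chunk):
    """Server-side merge: one row per (session_id, epc), keeping the
    highest RSSI. Re-merging the same chunk is a no-op, which is what
    makes re-uploading after a network failure safe."""
    for event in chunk:
        key = (session_id, event["epc"])
        if key not in store or event["rssi"] > store[key]["rssi"]:
            store[key] = event

store = {}
chunk = [
    {"epc": "E200-0001", "rssi": -60},
    {"epc": "E200-0002", "rssi": -72},
]
merge_chunk(store, "sess_1", chunk)
merge_chunk(store, "sess_1", chunk)  # retried upload: no duplicates created
merge_chunk(store, "sess_1", [{"epc": "E200-0001", "rssi": -55}])  # stronger read wins
print(len(store), store[("sess_1", "E200-0001")]["rssi"])  # 2 -55
```

The same keep-highest-RSSI rule resolves overlap between operators scanning adjacent sections.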
10.4 Pagination Pattern
All list endpoints use cursor-based or offset pagination:
Request:
GET /api/v1/items?page=2&pageSize=50&sortBy=name&sortOrder=asc
Response Envelope:
{
"data": [...],
"pagination": {
"page": 2,
"pageSize": 50,
"totalItems": 1250,
"totalPages": 25,
"hasNextPage": true,
"hasPreviousPage": true
},
"_links": {
"self": "/api/v1/items?page=2&pageSize=50",
"first": "/api/v1/items?page=1&pageSize=50",
"prev": "/api/v1/items?page=1&pageSize=50",
"next": "/api/v1/items?page=3&pageSize=50",
"last": "/api/v1/items?page=25&pageSize=50"
}
}
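The envelope fields derive mechanically from `page`, `pageSize`, and the total row count. A sketch of that derivation (helper name and link base are illustrative):

```python
import math

def pagination(page, page_size, total_items, base="/api/v1/items"):
    """Build the pagination envelope and HATEOAS links for an offset-paged query."""
    total_pages = max(1, math.ceil(total_items / page_size))
    link = lambda p: f"{base}?page={p}&pageSize={page_size}"
    env = {
        "page": page,
        "pageSize": page_size,
        "totalItems": total_items,
        "totalPages": total_pages,
        "hasNextPage": page < total_pages,
        "hasPreviousPage": page > 1,
    }
    links = {"self": link(page), "first": link(1), "last": link(total_pages)}
    if env["hasPreviousPage"]:
        links["prev"] = link(page - 1)
    if env["hasNextPage"]:
        links["next"] = link(page + 1)
    return env, links

env, links = pagination(page=2, page_size=50, total_items=1250)
print(env["totalPages"], links["next"])  # 25 /api/v1/items?page=3&pageSize=50
```

Omitting `prev`/`next` at the boundaries (rather than emitting null links) keeps clients from following dead pages.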
10.5 Error Response Format
All errors follow RFC 7807 Problem Details:
{
"type": "https://pos-platform.com/errors/validation-error",
"title": "Validation Error",
"status": 400,
"detail": "One or more validation errors occurred.",
"instance": "/api/v1/items",
"traceId": "00-abc123-def456-01",
"errors": {
"sku": ["SKU is required", "SKU must be unique"],
"price": ["Price must be greater than 0"]
}
}
Common Error Types:
| Status | Type | Description |
|---|---|---|
| 400 | validation-error | Request validation failed |
| 401 | authentication-required | No valid credentials |
| 403 | permission-denied | Insufficient permissions |
| 404 | resource-not-found | Entity does not exist |
| 409 | conflict | Duplicate or state conflict |
| 422 | business-rule-violation | Domain logic failure |
| 500 | internal-error | Server error |
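Every error response can be assembled by one small factory so the shape never drifts between endpoints. A sketch (the helper name and signature are illustrative, not part of the platform code):

```python
def problem(status, error_type, title, detail, instance, trace_id, errors=None):
    """Assemble an RFC 7807 problem-details body in the shape shown above."""
    body = {
        "type": f"https://pos-platform.com/errors/{error_type}",
        "title": title,
        "status": status,
        "detail": detail,
        "instance": instance,
        "traceId": trace_id,
    }
    if errors:
        body["errors"] = errors  # extension member: field name -> list of messages
    return body

p = problem(
    400, "validation-error", "Validation Error",
    "One or more validation errors occurred.", "/api/v1/items", "00-abc123-def456-01",
    errors={"price": ["Price must be greater than 0"]},
)
print(p["type"], p["status"])
```

Responses built this way should be served with Content-Type: application/problem+json per the RFC.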
10.6 SignalR Hub Events
Real-time events for POS clients:
Hub Endpoint
wss://{tenant}.pos-platform.com/hubs/pos
Event Catalog
// Server → Client Events
public interface IPosHubClient
{
// Inventory changes
Task InventoryUpdated(InventoryUpdateEvent data);
// Price changes
Task PriceUpdated(PriceUpdateEvent data);
// Item changes
Task ItemUpdated(ItemUpdateEvent data);
Task ItemCreated(ItemCreateEvent data);
Task ItemDeleted(string itemId);
// Register events
Task RegisterStatusChanged(RegisterStatusEvent data);
// Shift events
Task ShiftStarted(ShiftEvent data);
Task ShiftEnded(ShiftEvent data);
// Sync commands
Task SyncRequired(SyncCommand data);
Task CacheInvalidated(CacheInvalidateEvent data);
}
Event Payload Examples:
// InventoryUpdated
{
"eventType": "InventoryUpdated",
"timestamp": "2025-01-15T14:32:00Z",
"data": {
"itemId": "item_01HQWXYZ123",
"locationId": "loc_gm",
"previousQuantity": 25,
"newQuantity": 23,
"changeType": "sale"
}
}
// SyncRequired
{
"eventType": "SyncRequired",
"timestamp": "2025-01-15T14:32:00Z",
"data": {
"scope": "items",
"reason": "bulk_import",
"affectedCount": 150
}
}
10.7 Request/Response Headers
Required Request Headers
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
Content-Type: application/json
Accept: application/json
X-Request-Id: uuid-for-tracing
X-Location-Id: loc_gm # For POS operations
X-Register-Id: reg_gm_01 # For POS operations
Response Headers
X-Request-Id: uuid-for-tracing
X-Tenant-Id: tenant_nexus
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1705330800
10.8 Rate Limiting
| Endpoint Type | Limit | Window |
|---|---|---|
| Standard API | 1000/hour | Per tenant |
| Bulk Operations | 10/hour | Per tenant |
| Reports | 100/hour | Per tenant |
| Auth Endpoints | 20/minute | Per IP |
Response when rate limited:
{
"type": "https://pos-platform.com/errors/rate-limit-exceeded",
"title": "Rate Limit Exceeded",
"status": 429,
"detail": "You have exceeded the rate limit. Try again in 300 seconds.",
"retryAfter": 300
}
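The header values above fall out of a simple per-key counter. A fixed-window sketch (production deployments would typically keep these counters in a shared store such as Redis so all API nodes see one window — that choice is outside this sketch):

```python
import time

class FixedWindowLimiter:
    """Per-key fixed-window limiter producing the X-RateLimit-* header values."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (window_start, count)

    def check(self, key, now=None):
        now = time.time() if now is None else now
        start = int(now // self.window) * self.window
        window_start, count = self.counters.get(key, (start, 0))
        if window_start != start:
            window_start, count = start, 0  # new window: reset the counter
        allowed = count < self.limit
        if allowed:
            count += 1
        self.counters[key] = (window_start, count)
        return allowed, {
            "X-RateLimit-Limit": self.limit,
            "X-RateLimit-Remaining": max(0, self.limit - count),
            "X-RateLimit-Reset": window_start + self.window,
        }

limiter = FixedWindowLimiter(limit=2, window_seconds=3600)
print(limiter.check("tenant_nexus", now=1705329000)[0])  # True
print(limiter.check("tenant_nexus", now=1705329001)[0])  # True
print(limiter.check("tenant_nexus", now=1705329002)[0])  # False (limit hit)
```

When `check` returns False, the API responds 429 with the retryAfter body shown above, computed from X-RateLimit-Reset minus the current time.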
10.9 API Controller Implementation
// File: src/POS.Api/Controllers/ItemsController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using POS.Core.Catalog;
using POS.Core.Common;
namespace POS.Api.Controllers;
[ApiController]
[Route("api/v1/items")]
[Authorize]
public class ItemsController : ControllerBase
{
private readonly IItemService _itemService;
private readonly ITenantContext _tenantContext;
private readonly ILogger<ItemsController> _logger;
public ItemsController(
IItemService itemService,
ITenantContext tenantContext,
ILogger<ItemsController> logger)
{
_itemService = itemService;
_tenantContext = tenantContext;
_logger = logger;
}
[HttpGet]
[Authorize(Policy = "catalog.items.read")]
public async Task<ActionResult<PagedResult<ItemDto>>> GetItems(
[FromQuery] ItemQueryParams query,
CancellationToken ct)
{
var result = await _itemService.GetItemsAsync(query, ct);
return Ok(result);
}
[HttpGet("{id}")]
[Authorize(Policy = "catalog.items.read")]
public async Task<ActionResult<ItemDto>> GetItem(string id, CancellationToken ct)
{
var item = await _itemService.GetByIdAsync(id, ct);
if (item is null)
return NotFound(ProblemFactory.NotFound("Item", id));
return Ok(item);
}
[HttpGet("by-sku/{sku}")]
[Authorize(Policy = "catalog.items.read")]
public async Task<ActionResult<ItemDto>> GetItemBySku(string sku, CancellationToken ct)
{
var item = await _itemService.GetBySkuAsync(sku, ct);
if (item is null)
return NotFound(ProblemFactory.NotFound("Item", sku));
return Ok(item);
}
[HttpGet("by-barcode/{barcode}")]
[Authorize(Policy = "catalog.items.read")]
public async Task<ActionResult<ItemDto>> GetItemByBarcode(
string barcode,
CancellationToken ct)
{
var item = await _itemService.GetByBarcodeAsync(barcode, ct);
if (item is null)
return NotFound(ProblemFactory.NotFound("Item", barcode));
return Ok(item);
}
[HttpPost]
[Authorize(Policy = "catalog.items.write")]
public async Task<ActionResult<ItemDto>> CreateItem(
[FromBody] CreateItemRequest request,
CancellationToken ct)
{
var result = await _itemService.CreateAsync(request, ct);
return result.Match<ActionResult<ItemDto>>(
success => CreatedAtAction(
nameof(GetItem),
new { id = success.Id },
success),
error => BadRequest(ProblemFactory.FromError(error))
);
}
[HttpPut("{id}")]
[Authorize(Policy = "catalog.items.write")]
public async Task<ActionResult<ItemDto>> UpdateItem(
string id,
[FromBody] UpdateItemRequest request,
CancellationToken ct)
{
var result = await _itemService.UpdateAsync(id, request, ct);
return result.Match<ActionResult<ItemDto>>(
success => Ok(success),
error => error.Code switch
{
"NOT_FOUND" => NotFound(ProblemFactory.NotFound("Item", id)),
_ => BadRequest(ProblemFactory.FromError(error))
}
);
}
[HttpDelete("{id}")]
[Authorize(Policy = "catalog.items.delete")]
public async Task<IActionResult> DeleteItem(string id, CancellationToken ct)
{
var result = await _itemService.DeleteAsync(id, ct);
return result.Match<IActionResult>(
success => NoContent(),
error => NotFound(ProblemFactory.NotFound("Item", id))
);
}
[HttpPost("bulk-import")]
[Authorize(Policy = "catalog.items.bulk")]
[RequestSizeLimit(10_000_000)] // 10MB
public async Task<ActionResult<BulkImportResult>> BulkImport(
[FromBody] BulkImportRequest request,
CancellationToken ct)
{
var result = await _itemService.BulkImportAsync(request, ct);
return Ok(result);
}
}
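The controller actions above lean on `result.Match(...)` to fold success and failure into a single `ActionResult`. The real `Result<T>` type lives elsewhere in `POS.Core.Common`; a minimal sketch consistent with the usage in this controller (and with the `DomainError` type used in Chapter 11) might look like:

```csharp
// Illustrative sketch only; the platform's actual Result<T> may differ.
public sealed class Result<T>
{
    public bool IsSuccess { get; }
    public T? Value { get; }
    public DomainError? Error { get; }

    private Result(bool ok, T? value, DomainError? error)
        => (IsSuccess, Value, Error) = (ok, value, error);

    public static Result<T> Success(T value) => new(true, value, null);
    public static Result<T> Failure(DomainError error) => new(false, default, error);

    // Match funnels both outcomes into one return value, which is what
    // lets the controller actions stay expression-shaped.
    public TOut Match<TOut>(Func<T, TOut> onSuccess, Func<DomainError, TOut> onError)
        => IsSuccess ? onSuccess(Value!) : onError(Error!);
}
```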
10.10 Query Parameters and Filtering
// File: src/POS.Core/Common/ItemQueryParams.cs
namespace POS.Core.Common;
public record ItemQueryParams
{
public int Page { get; init; } = 1;
public int PageSize { get; init; } = 50;
public string? Search { get; init; }
public string? CategoryId { get; init; }
public string? VendorId { get; init; }
public bool? Active { get; init; }
public bool? TrackInventory { get; init; }
public decimal? MinPrice { get; init; }
public decimal? MaxPrice { get; init; }
public string SortBy { get; init; } = "name";
public string SortOrder { get; init; } = "asc";
}
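A request such as `GET /api/v1/items?search=coffee&active=true&sortBy=price&sortOrder=desc&page=2` binds directly to this record. Inside the service, the parameters might be applied roughly as below; this is a sketch, and `ApplySort` plus the exact `Item` property names are assumptions:

```csharp
// Sketch: translating ItemQueryParams into an EF Core query.
IQueryable<Item> query = db.Items.AsNoTracking();

if (!string.IsNullOrWhiteSpace(p.Search))
    query = query.Where(i => i.Name.Contains(p.Search) || i.Sku.Contains(p.Search));
if (p.CategoryId is not null) query = query.Where(i => i.CategoryId == p.CategoryId);
if (p.Active is not null)     query = query.Where(i => i.Active == p.Active);
if (p.MinPrice is not null)   query = query.Where(i => i.Price >= p.MinPrice);
if (p.MaxPrice is not null)   query = query.Where(i => i.Price <= p.MaxPrice);

var items = await query
    .ApplySort(p.SortBy, p.SortOrder)      // hypothetical sorting helper
    .Skip((p.Page - 1) * p.PageSize)       // page is 1-based
    .Take(p.PageSize)
    .ToListAsync(ct);
```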
Summary
This chapter defined the complete REST API structure for the POS Platform:
- Tenant-aware URL structure with subdomain routing
- Six domain areas: Catalog, Sales, Inventory, Customers, Employees, Reports
- Consistent patterns for pagination, errors, and HATEOAS links
- Real-time SignalR events for inventory and price updates
- Complete controller implementation with authorization policies
Next: Chapter 11: Service Layer covers the application services that implement this API.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-22 |
| Author | Claude Code |
| Status | Active |
| Part | IV - Backend |
| Chapter | 10 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 11: Service Layer
Clean Architecture Implementation for Multi-Tenant POS
This chapter provides the complete service layer architecture, including interfaces, implementations, unit of work patterns, and transaction handling.
11.1 Clean Architecture Overview
┌─────────────────────────────────────────────────────────────────┐
│ API Controllers │
│ ItemsController, SalesController, InventoryController, etc. │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Application Services │
│ IOrderService, IInventoryService, ICustomerService, etc. │
│ (Business logic, orchestration, validation) │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Domain Layer │
│ Entities, Value Objects, Domain Events, Business Rules │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Infrastructure Layer │
│ Repositories, DbContext, External Services, Messaging │
└─────────────────────────────────────────────────────────────────┘
11.2 Project Structure
src/
├── POS.Api/ # ASP.NET Core Web API
│ ├── Controllers/
│ ├── Middleware/
│ └── Program.cs
│
├── POS.Application/ # Application Services
│ ├── Interfaces/
│ │ ├── IOrderService.cs
│ │ ├── IInventoryService.cs
│ │ ├── ICustomerService.cs
│ │ ├── IItemService.cs
│ │ └── IReportService.cs
│ ├── Services/
│ │ ├── OrderService.cs
│ │ ├── InventoryService.cs
│ │ └── ...
│ ├── DTOs/
│ └── Validators/
│
├── POS.Domain/ # Domain Layer
│ ├── Entities/
│ ├── ValueObjects/
│ ├── Events/
│ └── Exceptions/
│
└── POS.Infrastructure/ # Infrastructure Layer
├── Persistence/
│ ├── PosDbContext.cs
│ ├── Repositories/
│ └── Configurations/
├── External/
└── Messaging/
11.3 Service Interfaces
11.3.1 IOrderService
// File: src/POS.Application/Interfaces/IOrderService.cs
using POS.Application.DTOs;
using POS.Domain.Common;
namespace POS.Application.Interfaces;
public interface IOrderService
{
// Query operations
Task<PagedResult<OrderSummaryDto>> GetOrdersAsync(
OrderQueryParams query,
CancellationToken ct = default);
Task<OrderDto?> GetByIdAsync(string orderId, CancellationToken ct = default);
Task<OrderDto?> GetByReceiptNumberAsync(
string receiptNumber,
CancellationToken ct = default);
// Command operations
Task<Result<OrderDto>> CreateOrderAsync(
CreateOrderRequest request,
CancellationToken ct = default);
Task<Result<OrderDto>> ProcessReturnAsync(
string orderId,
ProcessReturnRequest request,
CancellationToken ct = default);
Task<Result<OrderDto>> VoidOrderAsync(
string orderId,
VoidOrderRequest request,
CancellationToken ct = default);
// Receipt operations
Task<ReceiptDto> GetReceiptAsync(string orderId, CancellationToken ct = default);
Task<Result> PrintReceiptAsync(
string orderId,
PrintReceiptRequest request,
CancellationToken ct = default);
// Held orders (park/recall)
Task<Result<OrderDto>> HoldOrderAsync(
HoldOrderRequest request,
CancellationToken ct = default);
Task<IReadOnlyList<HeldOrderDto>> GetHeldOrdersAsync(
string locationId,
CancellationToken ct = default);
Task<Result<OrderDto>> RecallHeldOrderAsync(
string heldOrderId,
CancellationToken ct = default);
}
11.3.2 IInventoryService
// File: src/POS.Application/Interfaces/IInventoryService.cs
namespace POS.Application.Interfaces;
public interface IInventoryService
{
// Query operations
Task<InventoryLevelDto?> GetInventoryLevelAsync(
string itemId,
string locationId,
CancellationToken ct = default);
Task<IReadOnlyList<InventoryLevelDto>> GetInventoryByItemAsync(
string itemId,
CancellationToken ct = default);
Task<PagedResult<InventoryLevelDto>> GetInventoryByLocationAsync(
string locationId,
InventoryQueryParams query,
CancellationToken ct = default);
// Adjustment operations
Task<Result<AdjustmentDto>> AdjustInventoryAsync(
AdjustInventoryRequest request,
CancellationToken ct = default);
Task<Result<TransferDto>> CreateTransferAsync(
CreateTransferRequest request,
CancellationToken ct = default);
Task<Result<TransferDto>> ReceiveTransferAsync(
string transferId,
ReceiveTransferRequest request,
CancellationToken ct = default);
// Count operations
Task<Result<CountDto>> StartCountAsync(
StartCountRequest request,
CancellationToken ct = default);
Task<Result<CountDto>> UpdateCountAsync(
string countId,
UpdateCountRequest request,
CancellationToken ct = default);
Task<Result<CountDto>> FinalizeCountAsync(
string countId,
CancellationToken ct = default);
// History
Task<PagedResult<InventoryEventDto>> GetAdjustmentHistoryAsync(
InventoryHistoryQuery query,
CancellationToken ct = default);
// Internal (called by other services)
Task<Result> DeductInventoryAsync(
DeductInventoryCommand command,
CancellationToken ct = default);
Task<Result> RestoreInventoryAsync(
RestoreInventoryCommand command,
CancellationToken ct = default);
}
11.3.3 ICustomerService
// File: src/POS.Application/Interfaces/ICustomerService.cs
namespace POS.Application.Interfaces;
public interface ICustomerService
{
Task<PagedResult<CustomerSummaryDto>> GetCustomersAsync(
CustomerQueryParams query,
CancellationToken ct = default);
Task<CustomerDto?> GetByIdAsync(string customerId, CancellationToken ct = default);
Task<IReadOnlyList<CustomerSummaryDto>> SearchAsync(
string searchTerm,
int limit = 10,
CancellationToken ct = default);
Task<Result<CustomerDto>> CreateAsync(
CreateCustomerRequest request,
CancellationToken ct = default);
Task<Result<CustomerDto>> UpdateAsync(
string customerId,
UpdateCustomerRequest request,
CancellationToken ct = default);
Task<PagedResult<OrderSummaryDto>> GetPurchaseHistoryAsync(
string customerId,
PurchaseHistoryQuery query,
CancellationToken ct = default);
Task<LoyaltyInfoDto> GetLoyaltyInfoAsync(
string customerId,
CancellationToken ct = default);
Task<Result<LoyaltyInfoDto>> AddLoyaltyPointsAsync(
string customerId,
int points,
string reason,
CancellationToken ct = default);
Task<Result<LoyaltyInfoDto>> RedeemLoyaltyPointsAsync(
string customerId,
int points,
string orderId,
CancellationToken ct = default);
}
11.3.4 IItemService
// File: src/POS.Application/Interfaces/IItemService.cs
namespace POS.Application.Interfaces;
public interface IItemService
{
Task<PagedResult<ItemSummaryDto>> GetItemsAsync(
ItemQueryParams query,
CancellationToken ct = default);
Task<ItemDto?> GetByIdAsync(string itemId, CancellationToken ct = default);
Task<ItemDto?> GetBySkuAsync(string sku, CancellationToken ct = default);
Task<ItemDto?> GetByBarcodeAsync(string barcode, CancellationToken ct = default);
Task<Result<ItemDto>> CreateAsync(
CreateItemRequest request,
CancellationToken ct = default);
Task<Result<ItemDto>> UpdateAsync(
string itemId,
UpdateItemRequest request,
CancellationToken ct = default);
Task<Result> DeleteAsync(string itemId, CancellationToken ct = default);
Task<BulkImportResult> BulkImportAsync(
BulkImportRequest request,
CancellationToken ct = default);
Task<IReadOnlyList<ItemDto>> GetByIdsAsync(
IEnumerable<string> itemIds,
CancellationToken ct = default);
}
11.4 Unit of Work Pattern
// File: src/POS.Application/Interfaces/IUnitOfWork.cs
namespace POS.Application.Interfaces;
public interface IUnitOfWork : IDisposable
{
IItemRepository Items { get; }
IOrderRepository Orders { get; }
ICustomerRepository Customers { get; }
IInventoryRepository Inventory { get; }
IEmployeeRepository Employees { get; }
ILocationRepository Locations { get; }
Task<int> SaveChangesAsync(CancellationToken ct = default);
Task BeginTransactionAsync(CancellationToken ct = default);
Task CommitTransactionAsync(CancellationToken ct = default);
Task RollbackTransactionAsync(CancellationToken ct = default);
}
// File: src/POS.Infrastructure/Persistence/UnitOfWork.cs
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage;
using POS.Application.Interfaces;
namespace POS.Infrastructure.Persistence;
public class UnitOfWork : IUnitOfWork
{
private readonly PosDbContext _context;
private IDbContextTransaction? _transaction;
public IItemRepository Items { get; }
public IOrderRepository Orders { get; }
public ICustomerRepository Customers { get; }
public IInventoryRepository Inventory { get; }
public IEmployeeRepository Employees { get; }
public ILocationRepository Locations { get; }
public UnitOfWork(
PosDbContext context,
IItemRepository items,
IOrderRepository orders,
ICustomerRepository customers,
IInventoryRepository inventory,
IEmployeeRepository employees,
ILocationRepository locations)
{
_context = context;
Items = items;
Orders = orders;
Customers = customers;
Inventory = inventory;
Employees = employees;
Locations = locations;
}
public async Task<int> SaveChangesAsync(CancellationToken ct = default)
{
return await _context.SaveChangesAsync(ct);
}
public async Task BeginTransactionAsync(CancellationToken ct = default)
{
_transaction = await _context.Database.BeginTransactionAsync(ct);
}
public async Task CommitTransactionAsync(CancellationToken ct = default)
{
if (_transaction is not null)
{
await _transaction.CommitAsync(ct);
await _transaction.DisposeAsync();
_transaction = null;
}
}
public async Task RollbackTransactionAsync(CancellationToken ct = default)
{
if (_transaction is not null)
{
await _transaction.RollbackAsync(ct);
await _transaction.DisposeAsync();
_transaction = null;
}
}
public void Dispose()
{
_transaction?.Dispose();
_context.Dispose();
}
}
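The canonical usage shape is worth seeing in isolation before the full OrderService in 11.5: every write in a business operation goes through the same unit of work, so all of it commits or rolls back together. The fragment below is a sketch taken out of a service method (`order` and `ct` are assumed in scope):

```csharp
// Sketch: the canonical Unit of Work transaction pattern.
await _unitOfWork.BeginTransactionAsync(ct);
try
{
    await _unitOfWork.Orders.AddAsync(order, ct);
    // Mutations made through the other repositories (e.g. Inventory)
    // share the same DbContext, so they join this transaction.
    await _unitOfWork.SaveChangesAsync(ct);
    await _unitOfWork.CommitTransactionAsync(ct);
}
catch
{
    await _unitOfWork.RollbackTransactionAsync(ct);
    throw;
}
```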
11.5 Complete OrderService Implementation
// File: src/POS.Application/Services/OrderService.cs
using Microsoft.Extensions.Logging;
using POS.Application.DTOs;
using POS.Application.Interfaces;
using POS.Domain.Common;
using POS.Domain.Entities;
using POS.Domain.Events;
using POS.Domain.Exceptions;
namespace POS.Application.Services;
public class OrderService : IOrderService
{
private readonly IUnitOfWork _unitOfWork;
private readonly IInventoryService _inventoryService;
private readonly ICustomerService _customerService;
private readonly IPaymentService _paymentService;
private readonly IEventPublisher _eventPublisher;
private readonly ITenantContext _tenantContext;
private readonly ILogger<OrderService> _logger;
public OrderService(
IUnitOfWork unitOfWork,
IInventoryService inventoryService,
ICustomerService customerService,
IPaymentService paymentService,
IEventPublisher eventPublisher,
ITenantContext tenantContext,
ILogger<OrderService> logger)
{
_unitOfWork = unitOfWork;
_inventoryService = inventoryService;
_customerService = customerService;
_paymentService = paymentService;
_eventPublisher = eventPublisher;
_tenantContext = tenantContext;
_logger = logger;
}
public async Task<Result<OrderDto>> CreateOrderAsync(
CreateOrderRequest request,
CancellationToken ct = default)
{
_logger.LogInformation(
"Creating order for location {LocationId} with {ItemCount} items",
request.LocationId,
request.LineItems.Count);
try
{
await _unitOfWork.BeginTransactionAsync(ct);
// 1. Validate location and register
var location = await _unitOfWork.Locations.GetByIdAsync(
request.LocationId, ct);
if (location is null)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.NotFound("Location", request.LocationId));
}
// 2. Validate employee
var employee = await _unitOfWork.Employees.GetByIdAsync(
request.EmployeeId, ct);
if (employee is null)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.NotFound("Employee", request.EmployeeId));
}
// 3. Load items and validate inventory
var itemIds = request.LineItems.Select(li => li.ItemId).ToList();
var items = await _unitOfWork.Items.GetByIdsAsync(itemIds, ct);
var itemLookup = items.ToDictionary(i => i.Id);
foreach (var lineItem in request.LineItems)
{
if (!itemLookup.TryGetValue(lineItem.ItemId, out var item))
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.NotFound("Item", lineItem.ItemId));
}
// Check inventory if tracked
if (item.TrackInventory)
{
var inventory = await _inventoryService.GetInventoryLevelAsync(
item.Id, request.LocationId, ct);
if (inventory is null || inventory.QuantityOnHand < lineItem.Quantity)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.InsufficientInventory(
item.Sku,
lineItem.Quantity,
inventory?.QuantityOnHand ?? 0));
}
}
}
// 4. Create order entity
var order = new Order
{
Id = IdGenerator.NewId("order"),
TenantId = _tenantContext.TenantId,
LocationId = request.LocationId,
RegisterId = request.RegisterId,
EmployeeId = request.EmployeeId,
CustomerId = request.CustomerId,
ReceiptNumber = await GenerateReceiptNumberAsync(
request.LocationId, ct),
Status = OrderStatus.Completed,
CreatedAt = DateTime.UtcNow
};
// 5. Build line items
decimal subtotal = 0;
foreach (var li in request.LineItems)
{
var item = itemLookup[li.ItemId];
var lineItem = new OrderLineItem
{
Id = IdGenerator.NewId("li"),
OrderId = order.Id,
ItemId = item.Id,
Sku = item.Sku,
Name = item.Name,
Quantity = li.Quantity,
UnitPrice = li.UnitPrice ?? item.Price,
DiscountAmount = li.DiscountAmount,
Taxable = item.Taxable
};
lineItem.ExtendedPrice = lineItem.Quantity * lineItem.UnitPrice;
lineItem.NetPrice = lineItem.ExtendedPrice - lineItem.DiscountAmount;
subtotal += lineItem.NetPrice;
order.LineItems.Add(lineItem);
}
// 6. Apply order-level discounts
decimal discountTotal = 0;
foreach (var discount in request.Discounts ?? [])
{
var discountAmount = discount.Type == DiscountType.Percentage
? subtotal * (discount.Value / 100m)
: discount.Value;
discountTotal += discountAmount;
order.Discounts.Add(new OrderDiscount
{
Id = IdGenerator.NewId("disc"),
OrderId = order.Id,
Type = discount.Type,
Value = discount.Value,
Amount = discountAmount,
Reason = discount.Reason
});
}
// 7. Calculate tax
decimal taxableAmount = order.LineItems
.Where(li => li.Taxable)
.Sum(li => li.NetPrice);
// Apply discount proportionally to taxable amount
if (subtotal > 0 && discountTotal > 0)
{
var taxableRatio = taxableAmount / subtotal;
taxableAmount -= discountTotal * taxableRatio;
}
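// Worked example: subtotal = 100.00, taxableAmount = 80.00, discountTotal = 10.00
//   taxableRatio  = 80.00 / 100.00 = 0.8
//   taxableAmount = 80.00 - (10.00 * 0.8) = 72.00
//   at a tax rate of 0.0825: TaxAmount = Round(72.00 * 0.0825, 2) = 5.94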
var taxRate = location.TaxRate;
order.TaxAmount = Math.Round(taxableAmount * taxRate, 2);
// 8. Set totals
order.Subtotal = subtotal;
order.DiscountTotal = discountTotal;
order.GrandTotal = subtotal - discountTotal + order.TaxAmount;
// 9. Process payments
decimal paymentTotal = 0;
foreach (var payment in request.Payments)
{
var paymentResult = await _paymentService.ProcessPaymentAsync(
new ProcessPaymentCommand
{
OrderId = order.Id,
Method = payment.Method,
Amount = payment.Amount,
Reference = payment.Reference
}, ct);
if (!paymentResult.IsSuccess)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(paymentResult.Error!);
}
order.Payments.Add(new OrderPayment
{
Id = IdGenerator.NewId("pmt"),
OrderId = order.Id,
Method = payment.Method,
Amount = payment.Amount,
Status = PaymentStatus.Captured,
Reference = paymentResult.Value!.TransactionId,
CardLast4 = payment.CardLast4,
CardBrand = payment.CardBrand
});
paymentTotal += payment.Amount;
}
// 10. Validate payment covers total
if (paymentTotal < order.GrandTotal)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.InsufficientPayment(order.GrandTotal, paymentTotal));
}
order.ChangeGiven = paymentTotal - order.GrandTotal;
// 11. Deduct inventory
foreach (var lineItem in order.LineItems)
{
var item = itemLookup[lineItem.ItemId];
if (item.TrackInventory)
{
var deductResult = await _inventoryService.DeductInventoryAsync(
new DeductInventoryCommand
{
ItemId = lineItem.ItemId,
LocationId = request.LocationId,
Quantity = lineItem.Quantity,
Reason = InventoryChangeReason.Sale,
ReferenceId = order.Id,
ReferenceType = "Order"
}, ct);
if (!deductResult.IsSuccess)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(deductResult.Error!);
}
}
}
// 12. Award loyalty points
if (request.CustomerId is not null)
{
var pointsToAward = CalculateLoyaltyPoints(order.GrandTotal);
await _customerService.AddLoyaltyPointsAsync(
request.CustomerId,
pointsToAward,
$"Purchase: {order.ReceiptNumber}",
ct);
}
// 13. Save order
await _unitOfWork.Orders.AddAsync(order, ct);
await _unitOfWork.SaveChangesAsync(ct);
await _unitOfWork.CommitTransactionAsync(ct);
// 14. Publish domain events
await _eventPublisher.PublishAsync(new OrderCompletedEvent
{
OrderId = order.Id,
TenantId = order.TenantId,
LocationId = order.LocationId,
ReceiptNumber = order.ReceiptNumber,
GrandTotal = order.GrandTotal,
ItemCount = order.LineItems.Count,
CustomerId = order.CustomerId,
EmployeeId = order.EmployeeId,
OccurredAt = DateTime.UtcNow
}, ct);
_logger.LogInformation(
"Order {OrderId} created successfully. Receipt: {ReceiptNumber}, Total: {Total}",
order.Id,
order.ReceiptNumber,
order.GrandTotal);
return Result<OrderDto>.Success(MapToDto(order));
}
catch (Exception ex)
{
await _unitOfWork.RollbackTransactionAsync(ct);
_logger.LogError(ex, "Failed to create order");
throw;
}
}
public async Task<Result<OrderDto>> ProcessReturnAsync(
string orderId,
ProcessReturnRequest request,
CancellationToken ct = default)
{
_logger.LogInformation(
"Processing return for order {OrderId}",
orderId);
try
{
await _unitOfWork.BeginTransactionAsync(ct);
var originalOrder = await _unitOfWork.Orders.GetByIdAsync(orderId, ct);
if (originalOrder is null)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.NotFound("Order", orderId));
}
if (originalOrder.Status == OrderStatus.Voided)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.InvalidOperation("Cannot return a voided order"));
}
// Create return order
var returnOrder = new Order
{
Id = IdGenerator.NewId("order"),
TenantId = _tenantContext.TenantId,
LocationId = originalOrder.LocationId,
RegisterId = request.RegisterId,
EmployeeId = request.EmployeeId,
CustomerId = originalOrder.CustomerId,
ReceiptNumber = await GenerateReceiptNumberAsync(
originalOrder.LocationId, ct),
Status = OrderStatus.Completed,
OrderType = OrderType.Return,
OriginalOrderId = orderId,
CreatedAt = DateTime.UtcNow
};
decimal returnSubtotal = 0;
foreach (var returnItem in request.LineItems)
{
var originalLineItem = originalOrder.LineItems
.FirstOrDefault(li => li.Id == returnItem.OriginalLineItemId);
if (originalLineItem is null)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.NotFound("LineItem", returnItem.OriginalLineItemId));
}
if (returnItem.Quantity > originalLineItem.Quantity)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(
DomainError.InvalidOperation(
$"Return quantity {returnItem.Quantity} exceeds original quantity {originalLineItem.Quantity}"));
}
var returnLineItem = new OrderLineItem
{
Id = IdGenerator.NewId("li"),
OrderId = returnOrder.Id,
ItemId = originalLineItem.ItemId,
Sku = originalLineItem.Sku,
Name = originalLineItem.Name,
Quantity = -returnItem.Quantity,
UnitPrice = originalLineItem.UnitPrice,
DiscountAmount = 0,
Taxable = originalLineItem.Taxable,
ReturnReason = returnItem.Reason
};
returnLineItem.ExtendedPrice = returnLineItem.Quantity *
returnLineItem.UnitPrice;
returnLineItem.NetPrice = returnLineItem.ExtendedPrice;
returnSubtotal += returnLineItem.NetPrice;
returnOrder.LineItems.Add(returnLineItem);
// Restore inventory
var item = await _unitOfWork.Items.GetByIdAsync(
originalLineItem.ItemId, ct);
if (item?.TrackInventory == true)
{
await _inventoryService.RestoreInventoryAsync(
new RestoreInventoryCommand
{
ItemId = originalLineItem.ItemId,
LocationId = originalOrder.LocationId,
Quantity = returnItem.Quantity,
Reason = InventoryChangeReason.Return,
ReferenceId = returnOrder.Id,
ReferenceType = "Return"
}, ct);
}
}
// Calculate return tax
var location = await _unitOfWork.Locations.GetByIdAsync(
originalOrder.LocationId, ct);
decimal taxableReturnAmount = returnOrder.LineItems
.Where(li => li.Taxable)
.Sum(li => li.NetPrice);
returnOrder.TaxAmount = Math.Round(
Math.Abs(taxableReturnAmount) * location!.TaxRate, 2) * -1;
returnOrder.Subtotal = returnSubtotal;
returnOrder.GrandTotal = returnSubtotal + returnOrder.TaxAmount;
// Process refund
var refundResult = await _paymentService.ProcessRefundAsync(
new ProcessRefundCommand
{
OriginalOrderId = orderId,
RefundOrderId = returnOrder.Id,
Amount = Math.Abs(returnOrder.GrandTotal),
Method = request.RefundMethod
}, ct);
if (!refundResult.IsSuccess)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(refundResult.Error!);
}
returnOrder.Payments.Add(new OrderPayment
{
Id = IdGenerator.NewId("pmt"),
OrderId = returnOrder.Id,
Method = request.RefundMethod,
Amount = returnOrder.GrandTotal,
Status = PaymentStatus.Refunded,
Reference = refundResult.Value!.TransactionId
});
await _unitOfWork.Orders.AddAsync(returnOrder, ct);
await _unitOfWork.SaveChangesAsync(ct);
await _unitOfWork.CommitTransactionAsync(ct);
await _eventPublisher.PublishAsync(new OrderReturnedEvent
{
OrderId = returnOrder.Id,
OriginalOrderId = orderId,
TenantId = returnOrder.TenantId,
RefundAmount = Math.Abs(returnOrder.GrandTotal),
OccurredAt = DateTime.UtcNow
}, ct);
return Result<OrderDto>.Success(MapToDto(returnOrder));
}
catch (Exception ex)
{
await _unitOfWork.RollbackTransactionAsync(ct);
_logger.LogError(ex, "Failed to process return for order {OrderId}", orderId);
throw;
}
}
public async Task<Result<OrderDto>> VoidOrderAsync(
string orderId,
VoidOrderRequest request,
CancellationToken ct = default)
{
var order = await _unitOfWork.Orders.GetByIdAsync(orderId, ct);
if (order is null)
return Result<OrderDto>.Failure(DomainError.NotFound("Order", orderId));
if (order.Status == OrderStatus.Voided)
return Result<OrderDto>.Failure(
DomainError.InvalidOperation("Order is already voided"));
// Check void window (typically same day only).
// Note: this compares UTC calendar dates; a production system would
// compare against the location's business-day boundary instead.
if (order.CreatedAt.Date != DateTime.UtcNow.Date)
return Result<OrderDto>.Failure(
DomainError.InvalidOperation("Orders can only be voided on the same day"));
try
{
await _unitOfWork.BeginTransactionAsync(ct);
// Void all payments
foreach (var payment in order.Payments.Where(p =>
p.Status == PaymentStatus.Captured))
{
var voidResult = await _paymentService.VoidPaymentAsync(
payment.Reference!, ct);
if (!voidResult.IsSuccess)
{
await _unitOfWork.RollbackTransactionAsync(ct);
return Result<OrderDto>.Failure(voidResult.Error!);
}
payment.Status = PaymentStatus.Voided;
}
// Restore inventory
foreach (var lineItem in order.LineItems)
{
var item = await _unitOfWork.Items.GetByIdAsync(lineItem.ItemId, ct);
if (item?.TrackInventory == true)
{
await _inventoryService.RestoreInventoryAsync(
new RestoreInventoryCommand
{
ItemId = lineItem.ItemId,
LocationId = order.LocationId,
Quantity = lineItem.Quantity,
Reason = InventoryChangeReason.Void,
ReferenceId = order.Id,
ReferenceType = "VoidedOrder"
}, ct);
}
}
// Reverse loyalty points
if (order.CustomerId is not null)
{
var pointsToDeduct = CalculateLoyaltyPoints(order.GrandTotal);
await _customerService.AddLoyaltyPointsAsync(
order.CustomerId,
-pointsToDeduct,
$"Voided: {order.ReceiptNumber}",
ct);
}
order.Status = OrderStatus.Voided;
order.VoidedAt = DateTime.UtcNow;
order.VoidedBy = request.EmployeeId;
order.VoidReason = request.Reason;
await _unitOfWork.SaveChangesAsync(ct);
await _unitOfWork.CommitTransactionAsync(ct);
await _eventPublisher.PublishAsync(new OrderVoidedEvent
{
OrderId = order.Id,
TenantId = order.TenantId,
Reason = request.Reason,
VoidedBy = request.EmployeeId,
OccurredAt = DateTime.UtcNow
}, ct);
return Result<OrderDto>.Success(MapToDto(order));
}
catch (Exception ex)
{
await _unitOfWork.RollbackTransactionAsync(ct);
_logger.LogError(ex, "Failed to void order {OrderId}", orderId);
throw;
}
}
private async Task<string> GenerateReceiptNumberAsync(
string locationId,
CancellationToken ct)
{
var location = await _unitOfWork.Locations.GetByIdAsync(locationId, ct);
var prefix = location?.Code ?? "XX";
var date = DateTime.UtcNow.ToString("yyyyMMdd");
var sequence = await _unitOfWork.Orders.GetNextSequenceAsync(locationId, ct);
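// e.g. location code "GM" on 2025-01-15 with sequence 42 → "GM-20250115-0042"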
return $"{prefix}-{date}-{sequence:D4}";
}
private static int CalculateLoyaltyPoints(decimal amount)
{
return (int)Math.Floor(amount);
}
private static OrderDto MapToDto(Order order)
{
return new OrderDto
{
Id = order.Id,
ReceiptNumber = order.ReceiptNumber,
Status = order.Status.ToString(),
// ... map all properties
};
}
// ... other interface methods
}
11.6 Event Publishing Pattern
// File: src/POS.Application/Interfaces/IEventPublisher.cs
namespace POS.Application.Interfaces;
public interface IEventPublisher
{
Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
where TEvent : IDomainEvent;
Task PublishManyAsync<TEvent>(IEnumerable<TEvent> events, CancellationToken ct = default)
where TEvent : IDomainEvent;
}
// File: src/POS.Infrastructure/Messaging/EventPublisher.cs
using MassTransit;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Logging;
using POS.Api.Hubs;
using POS.Application.Interfaces;
using POS.Domain.Events;
namespace POS.Infrastructure.Messaging;
public class EventPublisher : IEventPublisher
{
private readonly IPublishEndpoint _publishEndpoint;
private readonly IHubContext<PosHub, IPosHubClient> _hubContext;
private readonly ILogger<EventPublisher> _logger;
public EventPublisher(
IPublishEndpoint publishEndpoint,
IHubContext<PosHub, IPosHubClient> hubContext,
ILogger<EventPublisher> logger)
{
_publishEndpoint = publishEndpoint;
_hubContext = hubContext;
_logger = logger;
}
public async Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
where TEvent : IDomainEvent
{
// Publish to message bus (for background processing)
await _publishEndpoint.Publish(@event, ct);
// Publish to SignalR (for real-time UI updates)
await PublishToSignalRAsync(@event, ct);
_logger.LogDebug(
"Published event {EventType} for tenant {TenantId}",
typeof(TEvent).Name,
@event.TenantId);
}
public async Task PublishManyAsync<TEvent>(
IEnumerable<TEvent> events,
CancellationToken ct = default)
where TEvent : IDomainEvent
{
foreach (var domainEvent in events)
{
await PublishAsync(domainEvent, ct);
}
}
private async Task PublishToSignalRAsync<TEvent>(TEvent @event, CancellationToken ct)
where TEvent : IDomainEvent
{
var tenantGroup = $"tenant:{@event.TenantId}";
switch (@event)
{
case OrderCompletedEvent order:
await _hubContext.Clients.Group(tenantGroup)
.OrderCompleted(new OrderCompletedNotification
{
OrderId = order.OrderId,
ReceiptNumber = order.ReceiptNumber,
GrandTotal = order.GrandTotal
});
break;
case InventoryUpdatedEvent inv:
await _hubContext.Clients.Group(tenantGroup)
.InventoryUpdated(new InventoryUpdateNotification
{
ItemId = inv.ItemId,
LocationId = inv.LocationId,
NewQuantity = inv.NewQuantity
});
break;
}
}
}
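The bus side of `PublishAsync` is handled by background consumers. MassTransit discovers `IConsumer<T>` implementations registered in DI; the sketch below shows one for `OrderCompletedEvent`, where `IEmailService` and its `SendReceiptAsync` method are hypothetical dependencies used only for illustration:

```csharp
// Sketch: a MassTransit consumer processing OrderCompletedEvent
// off the request path (the sale has already committed).
using MassTransit;

public class OrderCompletedConsumer : IConsumer<OrderCompletedEvent>
{
    private readonly IEmailService _email; // hypothetical dependency

    public OrderCompletedConsumer(IEmailService email) => _email = email;

    public async Task Consume(ConsumeContext<OrderCompletedEvent> context)
    {
        var evt = context.Message;
        // Email a digital receipt when the sale is tied to a customer.
        if (evt.CustomerId is not null)
            await _email.SendReceiptAsync(evt.CustomerId, evt.ReceiptNumber);
    }
}
```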
11.7 Dependency Injection Configuration
// File: src/POS.Api/Program.cs (partial)
public static class ServiceCollectionExtensions
{
public static IServiceCollection AddApplicationServices(
this IServiceCollection services)
{
// Application services
services.AddScoped<IOrderService, OrderService>();
services.AddScoped<IInventoryService, InventoryService>();
services.AddScoped<ICustomerService, CustomerService>();
services.AddScoped<IItemService, ItemService>();
services.AddScoped<IEmployeeService, EmployeeService>();
services.AddScoped<IReportService, ReportService>();
services.AddScoped<IPaymentService, PaymentService>();
// Infrastructure
services.AddScoped<IUnitOfWork, UnitOfWork>();
services.AddScoped<IEventPublisher, EventPublisher>();
// Repositories
services.AddScoped<IItemRepository, ItemRepository>();
services.AddScoped<IOrderRepository, OrderRepository>();
services.AddScoped<ICustomerRepository, CustomerRepository>();
services.AddScoped<IInventoryRepository, InventoryRepository>();
services.AddScoped<IEmployeeRepository, EmployeeRepository>();
services.AddScoped<ILocationRepository, LocationRepository>();
return services;
}
}
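The services above also depend on SignalR and the message bus being registered. A sketch of those registrations as they might appear alongside `AddApplicationServices` in `Program.cs`; the RabbitMQ transport and host name are assumptions about the deployment:

```csharp
// Sketch: supporting infrastructure registrations (Program.cs fragment).
services.AddSignalR();

services.AddMassTransit(x =>
{
    // Register all IConsumer<T> implementations in this assembly.
    x.AddConsumers(typeof(Program).Assembly);
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq"); // container hostname; adjust per environment
        cfg.ConfigureEndpoints(context);
    });
});
```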
Summary
This chapter defined the complete service layer architecture:
- Clean architecture with clear separation of concerns
- Service interfaces for all major domains
- Unit of Work pattern for transaction management
- Complete OrderService implementation with full transaction flow
- Event publishing for real-time updates and background processing
Next: Chapter 12: Security and Authentication covers authentication flows, JWT tokens, role-based access control, and tenant isolation.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-22 |
| Author | Claude Code |
| Status | Active |
| Part | IV - Backend |
| Chapter | 11 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 12: Security and Authentication
Multi-Mode Authentication for POS and Admin Portals
This chapter provides complete security implementation including dual authentication flows, JWT tokens, role-based access control, and tenant isolation.
12.1 Authentication Architecture Overview
┌───────────────────────────────────────────────────────────────────┐
│ Authentication Flows │
├───────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────────────┐ │
│ │ POS Client │ │ Admin Portal │ │
│ │ (Touch Screen) │ │ (Web Browser) │ │
│ └────────┬────────┘ └───────────┬─────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────────────┐ │
│ │ PIN Login │ │ Email/Password Login │ │
│ │ (4-6 digits) │ │ + Optional MFA │ │
│ └────────┬────────┘ └───────────┬─────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ JWT Token Issued │ │
│ │ • Short-lived for POS (8 hours = shift) │ │
│ │ • Longer for Admin (24 hours with refresh) │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└───────────────────────────────────────────────────────────────────┘
12.2 JWT Token Structure
{
"header": {
"alg": "RS256",
"typ": "JWT",
"kid": "key-2025-01"
},
"payload": {
"sub": "emp_john_smith",
"tid": "tenant_nexus",
"lid": "loc_gm",
"rid": "reg_gm_01",
"name": "John Smith",
"email": "john@nexus.com",
"roles": ["staff"],
"permissions": [
"pos.sale.create",
"pos.sale.void",
"pos.discount.apply",
"pos.customer.view",
"pos.customer.create"
],
"auth_method": "pin",
"iat": 1705320000,
"exp": 1705348800,
"iss": "https://auth.pos-platform.com",
"aud": "pos-api"
}
}
Token Claims Explained
| Claim | Description |
|---|---|
| sub | Subject (employee/user ID) |
| tid | Tenant ID |
| lid | Location ID (POS only) |
| rid | Register ID (POS only) |
| roles | Role names |
| permissions | Fine-grained permissions |
| auth_method | "pin" or "password" |
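A JWT payload is just base64url-encoded JSON, so clients can inspect claims locally; a minimal sketch (in Python for concision, using the example claims above) shows the decode and the 8-hour POS shift lifetime. Note that decoded claims must never be trusted server-side until the signature is verified.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying the signature."""
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a token-shaped string from a subset of the example payload above
payload = {"sub": "emp_john_smith", "tid": "tenant_nexus",
           "iat": 1705320000, "exp": 1705348800}
seg = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
token = f"header.{seg}.signature"

claims = decode_jwt_payload(token)
# exp - iat = 28,800 seconds: the 8-hour POS shift
assert claims["exp"] - claims["iat"] == 8 * 3600
```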
12.3 Role-Based Permission Matrix
Role Definitions
// File: src/POS.Domain/Security/Roles.cs
namespace POS.Domain.Security;
public static class Roles
{
public const string Staff = "staff";
public const string Manager = "manager";
public const string Admin = "admin";
public const string Buyer = "buyer";
public const string Owner = "owner";
}
Permission Catalog
// File: src/POS.Domain/Security/Permissions.cs
namespace POS.Domain.Security;
public static class Permissions
{
// POS Operations
public const string PosSaleCreate = "pos.sale.create";
public const string PosSaleVoid = "pos.sale.void";
public const string PosSaleReturn = "pos.sale.return";
public const string PosDiscountApply = "pos.discount.apply";
public const string PosDiscountOverride = "pos.discount.override";
public const string PosPriceOverride = "pos.price.override";
public const string PosDrawerOpen = "pos.drawer.open";
public const string PosDrawerCount = "pos.drawer.count";
public const string PosHoldRecall = "pos.hold.recall";
// Customer Operations
public const string CustomerView = "pos.customer.view";
public const string CustomerCreate = "pos.customer.create";
public const string CustomerUpdate = "pos.customer.update";
public const string CustomerDelete = "pos.customer.delete";
public const string CustomerLoyaltyAdjust = "pos.customer.loyalty.adjust";
// Inventory Operations
public const string InventoryView = "inventory.view";
public const string InventoryAdjust = "inventory.adjust";
public const string InventoryTransfer = "inventory.transfer";
public const string InventoryCount = "inventory.count";
public const string InventoryReceive = "inventory.receive";
// Catalog Operations
public const string CatalogItemView = "catalog.items.read";
public const string CatalogItemCreate = "catalog.items.write";
public const string CatalogItemUpdate = "catalog.items.write";
public const string CatalogItemDelete = "catalog.items.delete";
public const string CatalogItemBulk = "catalog.items.bulk";
// Reports
public const string ReportsView = "reports.view";
public const string ReportsExport = "reports.export";
public const string ReportsSalesDetail = "reports.sales.detail";
public const string ReportsEmployeePerformance = "reports.employee.performance";
// Administration
public const string AdminEmployees = "admin.employees";
public const string AdminLocations = "admin.locations";
public const string AdminSettings = "admin.settings";
public const string AdminIntegrations = "admin.integrations";
public const string AdminBilling = "admin.billing";
public const string AdminAuditLog = "admin.audit";
}
Role-Permission Mapping
| Permission | Staff | Manager | Admin | Buyer | Owner |
|---|---|---|---|---|---|
| pos.sale.create | X | X | X | - | X |
| pos.sale.void | - | X | X | - | X |
| pos.sale.return | - | X | X | - | X |
| pos.discount.apply | - | X | X | - | X |
| pos.discount.override | - | X | X | - | X |
| pos.price.override | - | X | X | - | X |
| pos.drawer.open | X | X | X | - | X |
| pos.drawer.count | X | X | X | - | X |
| pos.customer.view | X | X | X | - | X |
| pos.customer.create | X | X | X | - | X |
| pos.customer.update | - | X | X | - | X |
| pos.customer.delete | - | - | X | - | X |
| inventory.view | X | X | X | X | X |
| inventory.adjust | - | X | X | - | X |
| inventory.transfer | - | X | X | X | X |
| inventory.receive | - | X | X | X | X |
| inventory.count | - | X | X | X | X |
| catalog.items.read | X | X | X | X | X |
| catalog.items.write | - | X | X | X | X |
| catalog.items.delete | - | - | X | - | X |
| reports.view | - | X | X | X | X |
| reports.export | - | X | X | - | X |
| admin.employees | - | - | X | - | X |
| admin.locations | - | - | X | - | X |
| admin.settings | - | - | X | - | X |
| admin.billing | - | - | - | - | X |
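At token-issue time the matrix above collapses into a simple set union over the employee's roles; a minimal sketch (Python for concision; the mapping shown covers only a few illustrative rows, while the real mapping lives in the database):

```python
# Illustrative role -> permission mapping, mirroring a few matrix rows above
ROLE_PERMISSIONS = {
    "staff":   {"pos.sale.create", "pos.drawer.open",
                "pos.customer.view", "catalog.items.read"},
    "manager": {"pos.sale.create", "pos.sale.void",
                "pos.discount.apply", "reports.view"},
}

def resolve_permissions(roles):
    """Union the permission sets of every role the employee holds."""
    granted = set()
    for role in roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    return granted

perms = resolve_permissions(["staff"])
assert "pos.sale.create" in perms
assert "pos.sale.void" not in perms  # void requires manager or above
```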
12.4 Authentication Controller
// File: src/POS.Api/Controllers/AuthController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
namespace POS.Api.Controllers;
[ApiController]
[Route("api/v1/auth")]
public class AuthController : ControllerBase
{
private readonly IEmployeeService _employeeService;
private readonly IUserService _userService;
private readonly ITenantService _tenantService;
private readonly ITokenService _tokenService;
private readonly IAuditLogger _auditLogger;
private readonly ILogger<AuthController> _logger;
public AuthController(
IEmployeeService employeeService,
IUserService userService,
ITenantService tenantService,
ITokenService tokenService,
IAuditLogger auditLogger,
ILogger<AuthController> logger)
{
_employeeService = employeeService;
_userService = userService;
_tenantService = tenantService;
_tokenService = tokenService;
_auditLogger = auditLogger;
_logger = logger;
}
/// <summary>
/// PIN-based login for POS terminals
/// </summary>
[HttpPost("pin-login")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> PinLogin(
[FromBody] PinLoginRequest request,
CancellationToken ct)
{
// Validate tenant
var tenant = await _tenantService.GetBySubdomainAsync(
request.TenantSubdomain, ct);
if (tenant is null || !tenant.IsActive)
{
_logger.LogWarning(
"PIN login attempt for unknown tenant: {Tenant}",
request.TenantSubdomain);
return Unauthorized(new ProblemDetails
{
Title = "Invalid Credentials",
Detail = "The provided credentials are invalid."
});
}
// Validate location
var location = await _tenantService.GetLocationAsync(
tenant.Id, request.LocationId, ct);
if (location is null || !location.IsActive)
{
return Unauthorized(new ProblemDetails
{
Title = "Invalid Location",
Detail = "The specified location is not available."
});
}
// Validate employee PIN
var employee = await _employeeService.ValidatePinAsync(
tenant.Id, request.Pin, ct);
if (employee is null)
{
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = tenant.Id,
EventType = "AuthFailure",
Details = $"Failed PIN login attempt at {request.LocationId}",
IpAddress = HttpContext.Connection.RemoteIpAddress?.ToString()
}, ct);
return Unauthorized(new ProblemDetails
{
Title = "Invalid Credentials",
Detail = "The provided PIN is incorrect."
});
}
// Check employee has access to this location
if (!employee.LocationIds.Contains(request.LocationId) &&
!employee.Roles.Contains(Roles.Admin))
{
return Unauthorized(new ProblemDetails
{
Title = "Location Access Denied",
Detail = "You do not have access to this location."
});
}
// Generate token (8-hour shift duration)
var token = await _tokenService.GenerateTokenAsync(new TokenRequest
{
Subject = employee.Id,
TenantId = tenant.Id,
LocationId = request.LocationId,
RegisterId = request.RegisterId,
Name = employee.FullName,
Email = employee.Email,
Roles = employee.Roles,
AuthMethod = "pin",
ExpiresIn = TimeSpan.FromHours(8)
});
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = tenant.Id,
EmployeeId = employee.Id,
EventType = "PinLogin",
Details = $"Logged in at {request.LocationId}, register {request.RegisterId}",
IpAddress = HttpContext.Connection.RemoteIpAddress?.ToString()
}, ct);
_logger.LogInformation(
"Employee {EmployeeId} logged in at {LocationId}",
employee.Id, request.LocationId);
return Ok(new LoginResponse
{
Token = token.AccessToken,
ExpiresAt = token.ExpiresAt,
Employee = new EmployeeInfo
{
Id = employee.Id,
Name = employee.FullName,
Roles = employee.Roles,
Permissions = employee.Permissions
}
});
}
/// <summary>
/// Email/password login for Admin portal
/// </summary>
[HttpPost("login")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> Login(
[FromBody] LoginRequest request,
CancellationToken ct)
{
// Validate tenant
var tenant = await _tenantService.GetBySubdomainAsync(
request.TenantSubdomain, ct);
if (tenant is null || !tenant.IsActive)
{
return Unauthorized(new ProblemDetails
{
Title = "Invalid Credentials",
Detail = "The provided credentials are invalid."
});
}
// Validate user credentials
var user = await _userService.ValidateCredentialsAsync(
tenant.Id, request.Email, request.Password, ct);
if (user is null)
{
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = tenant.Id,
EventType = "AuthFailure",
Details = $"Failed login attempt for {request.Email}",
IpAddress = HttpContext.Connection.RemoteIpAddress?.ToString()
}, ct);
return Unauthorized(new ProblemDetails
{
Title = "Invalid Credentials",
Detail = "The provided credentials are invalid."
});
}
// Check if MFA is required
if (user.MfaEnabled)
{
if (string.IsNullOrEmpty(request.MfaCode))
{
return Ok(new LoginResponse
{
RequiresMfa = true,
MfaToken = await _tokenService.GenerateMfaTokenAsync(user.Id)
});
}
var mfaValid = await _userService.ValidateMfaCodeAsync(
user.Id, request.MfaCode, ct);
if (!mfaValid)
{
return Unauthorized(new ProblemDetails
{
Title = "Invalid MFA Code",
Detail = "The provided MFA code is incorrect."
});
}
}
// Generate tokens (24-hour access, 7-day refresh)
var token = await _tokenService.GenerateTokenAsync(new TokenRequest
{
Subject = user.Id,
TenantId = tenant.Id,
Name = user.FullName,
Email = user.Email,
Roles = user.Roles,
AuthMethod = "password",
ExpiresIn = TimeSpan.FromHours(24),
IncludeRefreshToken = true
});
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = tenant.Id,
UserId = user.Id,
EventType = "Login",
Details = "Admin portal login",
IpAddress = HttpContext.Connection.RemoteIpAddress?.ToString()
}, ct);
return Ok(new LoginResponse
{
Token = token.AccessToken,
RefreshToken = token.RefreshToken,
ExpiresAt = token.ExpiresAt,
User = new UserInfo
{
Id = user.Id,
Name = user.FullName,
Email = user.Email,
Roles = user.Roles,
Permissions = user.Permissions
}
});
}
/// <summary>
/// Refresh access token using refresh token
/// </summary>
[HttpPost("refresh")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> RefreshToken(
[FromBody] RefreshTokenRequest request,
CancellationToken ct)
{
var result = await _tokenService.RefreshTokenAsync(
request.RefreshToken, ct);
if (!result.IsSuccess)
{
return Unauthorized(new ProblemDetails
{
Title = "Invalid Token",
Detail = "The refresh token is invalid or expired."
});
}
return Ok(new LoginResponse
{
Token = result.Value!.AccessToken,
RefreshToken = result.Value.RefreshToken,
ExpiresAt = result.Value.ExpiresAt
});
}
/// <summary>
/// Logout and invalidate tokens
/// </summary>
[HttpPost("logout")]
[Authorize]
public async Task<IActionResult> Logout(CancellationToken ct)
{
var userId = User.FindFirstValue(ClaimTypes.NameIdentifier);
var tenantId = User.FindFirstValue("tid");
await _tokenService.RevokeAllTokensAsync(userId!, ct);
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = tenantId!,
UserId = userId,
EventType = "Logout",
IpAddress = HttpContext.Connection.RemoteIpAddress?.ToString()
}, ct);
return NoContent();
}
/// <summary>
/// Change PIN (for POS employees)
/// </summary>
[HttpPost("change-pin")]
[Authorize]
public async Task<IActionResult> ChangePin(
[FromBody] ChangePinRequest request,
CancellationToken ct)
{
var employeeId = User.FindFirstValue(ClaimTypes.NameIdentifier);
var tenantId = User.FindFirstValue("tid");
var result = await _employeeService.ChangePinAsync(
tenantId!, employeeId!, request.CurrentPin, request.NewPin, ct);
if (!result.IsSuccess)
{
return BadRequest(new ProblemDetails
{
Title = "PIN Change Failed",
Detail = result.Error!.Message
});
}
return NoContent();
}
/// <summary>
/// Validate current session
/// </summary>
[HttpGet("me")]
[Authorize]
public async Task<ActionResult<SessionInfo>> GetCurrentSession(
CancellationToken ct)
{
var userId = User.FindFirstValue(ClaimTypes.NameIdentifier);
var tenantId = User.FindFirstValue("tid");
var authMethod = User.FindFirstValue("auth_method");
if (authMethod == "pin")
{
var employee = await _employeeService.GetByIdAsync(
tenantId!, userId!, ct);
return Ok(new SessionInfo
{
UserId = userId!,
TenantId = tenantId!,
Name = employee!.FullName,
Roles = User.FindAll(ClaimTypes.Role).Select(c => c.Value).ToList(),
Permissions = User.FindAll("permission").Select(c => c.Value).ToList(),
LocationId = User.FindFirstValue("lid"),
RegisterId = User.FindFirstValue("rid"),
AuthMethod = authMethod
});
}
else
{
var user = await _userService.GetByIdAsync(tenantId!, userId!, ct);
return Ok(new SessionInfo
{
UserId = userId!,
TenantId = tenantId!,
Name = user!.FullName,
Email = user.Email,
Roles = User.FindAll(ClaimTypes.Role).Select(c => c.Value).ToList(),
Permissions = User.FindAll("permission").Select(c => c.Value).ToList(),
AuthMethod = authMethod!
});
}
}
}
12.5 Token Service Implementation
// File: src/POS.Infrastructure/Security/TokenService.cs
using Microsoft.Extensions.Options;
using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
namespace POS.Infrastructure.Security;
public class TokenService : ITokenService
{
private readonly JwtSettings _jwtSettings;
private readonly IRefreshTokenRepository _refreshTokenRepo;
private readonly IRolePermissionResolver _permissionResolver;
private readonly ILogger<TokenService> _logger;
public TokenService(
IOptions<JwtSettings> jwtSettings,
IRefreshTokenRepository refreshTokenRepo,
IRolePermissionResolver permissionResolver,
ILogger<TokenService> logger)
{
_jwtSettings = jwtSettings.Value;
_refreshTokenRepo = refreshTokenRepo;
_permissionResolver = permissionResolver;
_logger = logger;
}
public async Task<TokenResult> GenerateTokenAsync(TokenRequest request)
{
var permissions = await _permissionResolver.ResolvePermissionsAsync(
request.Roles);
var claims = new List<Claim>
{
new(JwtRegisteredClaimNames.Sub, request.Subject),
new(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString()),
new("tid", request.TenantId),
new("name", request.Name),
new("auth_method", request.AuthMethod)
};
if (!string.IsNullOrEmpty(request.Email))
claims.Add(new Claim(JwtRegisteredClaimNames.Email, request.Email));
if (!string.IsNullOrEmpty(request.LocationId))
claims.Add(new Claim("lid", request.LocationId));
if (!string.IsNullOrEmpty(request.RegisterId))
claims.Add(new Claim("rid", request.RegisterId));
foreach (var role in request.Roles)
claims.Add(new Claim(ClaimTypes.Role, role));
foreach (var permission in permissions)
claims.Add(new Claim("permission", permission));
var key = new SymmetricSecurityKey(
Convert.FromBase64String(_jwtSettings.SecretKey));
var credentials = new SigningCredentials(
key, SecurityAlgorithms.HmacSha256);
var expires = DateTime.UtcNow.Add(request.ExpiresIn);
var token = new JwtSecurityToken(
issuer: _jwtSettings.Issuer,
audience: _jwtSettings.Audience,
claims: claims,
expires: expires,
signingCredentials: credentials
);
var accessToken = new JwtSecurityTokenHandler().WriteToken(token);
var result = new TokenResult
{
AccessToken = accessToken,
ExpiresAt = expires
};
if (request.IncludeRefreshToken)
{
var refreshToken = GenerateRefreshToken();
await _refreshTokenRepo.StoreAsync(new RefreshTokenEntity
{
Token = refreshToken,
UserId = request.Subject,
TenantId = request.TenantId,
// Persist the claims RefreshTokenAsync needs to rebuild the access token
UserName = request.Name,
Email = request.Email,
Roles = request.Roles,
ExpiresAt = DateTime.UtcNow.AddDays(7),
CreatedAt = DateTime.UtcNow
});
result.RefreshToken = refreshToken;
}
return result;
}
public async Task<Result<TokenResult>> RefreshTokenAsync(
string refreshToken,
CancellationToken ct = default)
{
var stored = await _refreshTokenRepo.GetByTokenAsync(refreshToken, ct);
if (stored is null || stored.IsRevoked || stored.ExpiresAt < DateTime.UtcNow)
{
return Result<TokenResult>.Failure(
DomainError.InvalidToken("Refresh token is invalid or expired"));
}
// Revoke old refresh token
await _refreshTokenRepo.RevokeAsync(refreshToken, ct);
// Generate new tokens
var newToken = await GenerateTokenAsync(new TokenRequest
{
Subject = stored.UserId,
TenantId = stored.TenantId,
Name = stored.UserName,
Email = stored.Email,
Roles = stored.Roles,
AuthMethod = "password",
ExpiresIn = TimeSpan.FromHours(24),
IncludeRefreshToken = true
});
return Result<TokenResult>.Success(newToken);
}
public async Task RevokeAllTokensAsync(string userId, CancellationToken ct = default)
{
await _refreshTokenRepo.RevokeAllForUserAsync(userId, ct);
}
private static string GenerateRefreshToken()
{
var randomBytes = new byte[64];
using var rng = RandomNumberGenerator.Create();
rng.GetBytes(randomBytes);
return Convert.ToBase64String(randomBytes);
}
}
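The rotation rule in RefreshTokenAsync is the important invariant: a refresh token is single-use, so presenting it revokes it and issues a replacement, and any replay of the old token fails. A minimal sketch (Python for concision; the dict stands in for the refresh-token repository):

```python
import secrets
import time

# In-memory stand-in for the refresh token repository
store = {}

def issue_refresh_token(user_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    token = secrets.token_urlsafe(64)  # 64 random bytes, URL-safe base64
    store[token] = {"user": user_id,
                    "expires": time.time() + ttl_seconds,
                    "revoked": False}
    return token

def rotate(token: str):
    entry = store.get(token)
    if entry is None or entry["revoked"] or entry["expires"] < time.time():
        return None                          # invalid, revoked, or expired
    entry["revoked"] = True                  # old token is burned...
    return issue_refresh_token(entry["user"])  # ...and replaced

first = issue_refresh_token("user_1")
second = rotate(first)
assert second is not None
assert rotate(first) is None  # replaying the revoked token fails
```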
12.6 Tenant Context Middleware
// File: src/POS.Api/Middleware/TenantContextMiddleware.cs
using System.Security.Claims;
namespace POS.Api.Middleware;
public class TenantContextMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<TenantContextMiddleware> _logger;
public TenantContextMiddleware(
RequestDelegate next,
ILogger<TenantContextMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(
HttpContext context,
ITenantContext tenantContext,
ITenantService tenantService)
{
string? tenantId = null;
// 1. Try from JWT claims (authenticated requests)
if (context.User.Identity?.IsAuthenticated == true)
{
tenantId = context.User.FindFirstValue("tid");
}
// 2. Try from subdomain
if (string.IsNullOrEmpty(tenantId))
{
var host = context.Request.Host.Host;
var subdomain = GetSubdomain(host);
if (!string.IsNullOrEmpty(subdomain))
{
var tenant = await tenantService.GetBySubdomainAsync(
subdomain, context.RequestAborted);
tenantId = tenant?.Id;
}
}
// 3. Try from header (API integrations). Only trust this header when it
//    is paired with API-key authentication or stripped from untrusted
//    traffic at the gateway; otherwise clients could spoof a tenant.
if (string.IsNullOrEmpty(tenantId))
{
tenantId = context.Request.Headers["X-Tenant-Id"].FirstOrDefault();
}
if (!string.IsNullOrEmpty(tenantId))
{
tenantContext.SetTenant(tenantId);
// Add to response headers for debugging
context.Response.Headers["X-Tenant-Id"] = tenantId;
}
else if (!IsPublicEndpoint(context.Request.Path))
{
_logger.LogWarning(
"Unable to resolve tenant for path {Path}",
context.Request.Path);
context.Response.StatusCode = 400;
await context.Response.WriteAsJsonAsync(new ProblemDetails
{
Title = "Tenant Required",
Detail = "Unable to determine tenant context.",
Status = 400
});
return;
}
await _next(context);
}
private static string? GetSubdomain(string host)
{
var parts = host.Split('.');
if (parts.Length >= 3 && parts[0] != "www" && parts[0] != "api")
{
return parts[0];
}
return null;
}
private static bool IsPublicEndpoint(PathString path)
{
var publicPaths = new[]
{
"/health",
"/api/v1/auth/login",
"/api/v1/auth/pin-login",
"/swagger"
};
return publicPaths.Any(p =>
path.StartsWithSegments(p, StringComparison.OrdinalIgnoreCase));
}
}
// Tenant Context Interface and Implementation
public interface ITenantContext
{
string? TenantId { get; }
void SetTenant(string tenantId);
}
public class TenantContext : ITenantContext
{
public string? TenantId { get; private set; }
public void SetTenant(string tenantId)
{
TenantId = tenantId;
}
}
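The subdomain fallback in GetSubdomain is easy to get wrong at the edges (bare domains, reserved prefixes); a minimal sketch of the same rule, in Python for concision:

```python
def get_subdomain(host: str):
    """Mirror of GetSubdomain: the first label of a host with 3+ labels,
    unless it is a reserved prefix like 'www' or 'api'."""
    parts = host.split(".")
    if len(parts) >= 3 and parts[0] not in ("www", "api"):
        return parts[0]
    return None

assert get_subdomain("nexus.pos-platform.com") == "nexus"  # tenant subdomain
assert get_subdomain("www.pos-platform.com") is None       # reserved prefix
assert get_subdomain("pos-platform.com") is None           # bare domain
```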
12.7 API Key Authentication for Integrations
// File: src/POS.Api/Authentication/ApiKeyAuthenticationHandler.cs
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Options;
using System.Security.Claims;
using System.Text.Encodings.Web;
namespace POS.Api.Authentication;
public class ApiKeyAuthenticationHandler : AuthenticationHandler<ApiKeyAuthenticationOptions>
{
private const string ApiKeyHeaderName = "X-API-Key";
private readonly IApiKeyService _apiKeyService;
public ApiKeyAuthenticationHandler(
IOptionsMonitor<ApiKeyAuthenticationOptions> options,
ILoggerFactory logger,
UrlEncoder encoder,
IApiKeyService apiKeyService)
: base(options, logger, encoder)
{
_apiKeyService = apiKeyService;
}
protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
{
if (!Request.Headers.TryGetValue(ApiKeyHeaderName, out var apiKeyHeaderValues))
{
return AuthenticateResult.NoResult();
}
var providedApiKey = apiKeyHeaderValues.FirstOrDefault();
if (string.IsNullOrEmpty(providedApiKey))
{
return AuthenticateResult.NoResult();
}
var apiKey = await _apiKeyService.ValidateApiKeyAsync(
providedApiKey, Context.RequestAborted);
if (apiKey is null)
{
return AuthenticateResult.Fail("Invalid API key");
}
if (apiKey.ExpiresAt.HasValue && apiKey.ExpiresAt < DateTime.UtcNow)
{
return AuthenticateResult.Fail("API key has expired");
}
var claims = new List<Claim>
{
new(ClaimTypes.NameIdentifier, apiKey.Id),
new("tid", apiKey.TenantId),
new("api_key_name", apiKey.Name),
new("auth_method", "api_key")
};
foreach (var scope in apiKey.Scopes)
{
claims.Add(new Claim("scope", scope));
}
var identity = new ClaimsIdentity(claims, Scheme.Name);
var principal = new ClaimsPrincipal(identity);
var ticket = new AuthenticationTicket(principal, Scheme.Name);
// Update last used
await _apiKeyService.RecordUsageAsync(apiKey.Id, Context.RequestAborted);
return AuthenticateResult.Success(ticket);
}
}
public class ApiKeyAuthenticationOptions : AuthenticationSchemeOptions { }
12.8 Authorization Policies
// File: src/POS.Api/Extensions/AuthorizationExtensions.cs
using Microsoft.AspNetCore.Authorization;
namespace POS.Api.Extensions;
public static class AuthorizationExtensions
{
public static IServiceCollection AddPosAuthorization(
this IServiceCollection services)
{
services.AddAuthorization(options =>
{
// POS Operations
options.AddPolicy("pos.sale.create",
policy => policy.RequireClaim("permission", Permissions.PosSaleCreate));
options.AddPolicy("pos.sale.void",
policy => policy.RequireClaim("permission", Permissions.PosSaleVoid));
options.AddPolicy("pos.sale.return",
policy => policy.RequireClaim("permission", Permissions.PosSaleReturn));
options.AddPolicy("pos.discount.apply",
policy => policy.RequireClaim("permission", Permissions.PosDiscountApply));
// Inventory Operations
options.AddPolicy("inventory.view",
policy => policy.RequireClaim("permission", Permissions.InventoryView));
options.AddPolicy("inventory.adjust",
policy => policy.RequireClaim("permission", Permissions.InventoryAdjust));
// Catalog Operations
options.AddPolicy("catalog.items.read",
policy => policy.RequireClaim("permission", Permissions.CatalogItemView));
options.AddPolicy("catalog.items.write",
policy => policy.RequireClaim("permission", Permissions.CatalogItemCreate));
options.AddPolicy("catalog.items.delete",
policy => policy.RequireClaim("permission", Permissions.CatalogItemDelete));
options.AddPolicy("catalog.items.bulk",
policy => policy.RequireClaim("permission", Permissions.CatalogItemBulk));
// Reports
options.AddPolicy("reports.view",
policy => policy.RequireClaim("permission", Permissions.ReportsView));
// Admin
options.AddPolicy("admin.settings",
policy => policy.RequireClaim("permission", Permissions.AdminSettings));
// Role-based policies
options.AddPolicy("ManagerOrAbove",
policy => policy.RequireRole(
Roles.Manager, Roles.Admin, Roles.Owner));
options.AddPolicy("AdminOrOwner",
policy => policy.RequireRole(Roles.Admin, Roles.Owner));
// Integration API policy
options.AddPolicy("api.integration",
policy => policy.RequireAssertion(context =>
context.User.HasClaim("auth_method", "api_key") &&
context.User.HasClaim("scope", "integration")));
});
return services;
}
}
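Each RequireClaim policy above reduces to a membership test over the "permission" claims carried in the token; a minimal sketch of that evaluation, in Python for concision:

```python
# Each claim is a (type, value) pair, as in a ClaimsPrincipal
def satisfies_policy(claims, required_permission):
    """RequireClaim("permission", x) passes iff some claim is ("permission", x)."""
    return ("permission", required_permission) in claims

claims = [("permission", "pos.sale.create"), ("role", "staff")]
assert satisfies_policy(claims, "pos.sale.create")
assert not satisfies_policy(claims, "pos.sale.void")  # staff cannot void
```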
12.9 Password Hashing
// File: src/POS.Infrastructure/Security/PasswordHasher.cs
using System.Security.Cryptography;
namespace POS.Infrastructure.Security;
public class PasswordHasher : IPasswordHasher
{
private const int SaltSize = 16;
private const int HashSize = 32;
// Treat this as a configurable floor: OWASP's password storage guidance
// (2023) recommends 600,000+ iterations for PBKDF2-HMAC-SHA256.
private const int Iterations = 100000;
public string HashPassword(string password)
{
using var algorithm = new Rfc2898DeriveBytes(
password,
SaltSize,
Iterations,
HashAlgorithmName.SHA256);
var salt = algorithm.Salt;
var hash = algorithm.GetBytes(HashSize);
var hashBytes = new byte[SaltSize + HashSize];
Array.Copy(salt, 0, hashBytes, 0, SaltSize);
Array.Copy(hash, 0, hashBytes, SaltSize, HashSize);
return Convert.ToBase64String(hashBytes);
}
public bool VerifyPassword(string password, string hashedPassword)
{
var hashBytes = Convert.FromBase64String(hashedPassword);
var salt = new byte[SaltSize];
Array.Copy(hashBytes, 0, salt, 0, SaltSize);
using var algorithm = new Rfc2898DeriveBytes(
password,
salt,
Iterations,
HashAlgorithmName.SHA256);
var hash = algorithm.GetBytes(HashSize);
// Constant-time comparison avoids leaking how many leading bytes matched
return CryptographicOperations.FixedTimeEquals(
hash, hashBytes.AsSpan(SaltSize, HashSize));
}
}
public class PinHasher : IPinHasher
{
// An unsalted SHA-256 of a 4-6 digit PIN is brute-forceable in
// milliseconds (at most 10^6 candidates), so PINs get the same
// salted PBKDF2 treatment as passwords.
private const int SaltSize = 16;
private const int HashSize = 32;
private const int Iterations = 100000;
public string HashPin(string pin)
{
var salt = RandomNumberGenerator.GetBytes(SaltSize);
var hash = Rfc2898DeriveBytes.Pbkdf2(
pin, salt, Iterations, HashAlgorithmName.SHA256, HashSize);
var combined = new byte[SaltSize + HashSize];
Array.Copy(salt, 0, combined, 0, SaltSize);
Array.Copy(hash, 0, combined, SaltSize, HashSize);
return Convert.ToBase64String(combined);
}
public bool VerifyPin(string pin, string hashedPin)
{
var combined = Convert.FromBase64String(hashedPin);
var hash = Rfc2898DeriveBytes.Pbkdf2(
pin, combined.AsSpan(0, SaltSize), Iterations,
HashAlgorithmName.SHA256, HashSize);
return CryptographicOperations.FixedTimeEquals(
hash, combined.AsSpan(SaltSize, HashSize));
}
}
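The storage layout used by PasswordHasher — random salt concatenated with the PBKDF2-SHA256 digest, Base64-encoded — can be sketched end to end (Python for concision; the constants mirror those in the C# above):

```python
import hashlib
import hmac
import os

SALT_SIZE, HASH_SIZE, ITERATIONS = 16, 32, 100_000

def hash_password(password: str) -> bytes:
    """Layout mirrors PasswordHasher: salt || PBKDF2-SHA256(password, salt)."""
    salt = os.urandom(SALT_SIZE)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 ITERATIONS, HASH_SIZE)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, expected = stored[:SALT_SIZE], stored[SALT_SIZE:]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 ITERATIONS, HASH_SIZE)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

stored = hash_password("s3cret!")
assert verify_password("s3cret!", stored)
assert not verify_password("wrong", stored)
```

Because the salt is random per hash, two hashes of the same password differ, which defeats precomputed (rainbow-table) attacks.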
Summary
This chapter covered the complete security implementation:
- Dual authentication flows: PIN for POS, Email/Password for Admin
- JWT token structure with tenant, location, and permission claims
- Role-based permission matrix from Staff to Owner
- Complete AuthController with all authentication endpoints
- Tenant context middleware for multi-tenant isolation
- API key authentication for external integrations
Next: Chapter 13: Integration Patterns covers Shopify, payment processing, and external API integration.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-22 |
| Author | Claude Code |
| Status | Active |
| Part | IV - Backend |
| Chapter | 12 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 13: Integration Patterns
Shopify, Payment Processing, and External API Integration
This chapter provides complete implementation patterns for integrating with Shopify, payment processors (Stripe/Square), and external APIs with PCI-DSS compliance.
13.1 Integration Architecture
┌─────────────────────────────────────────────────────────────────────────────┐
│ POS Platform │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────────────┐ │
│ │ Shopify │ │ Payment │ │ Other Integrations │ │
│ │ Integration │ │ Processing │ │ (Accounting, etc.) │ │
│ └────────┬────────┘ └────────┬────────┘ └───────────┬─────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ Integration Service Layer │ │
│ │ • Webhook handlers • Payment abstraction │ │
│ │ • Retry logic • Token management │ │
│ │ • Event publishing • Audit logging │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Shopify │ │ Stripe API │ │ QuickBooks │
│ Admin API │ │ Square API │ │ Online │
└───────────────┘ └───────────────┘ └───────────────┘
13.2 Shopify Integration
13.2.1 Webhook Configuration
// File: src/POS.Infrastructure/Integrations/Shopify/ShopifyWebhookConfig.cs
namespace POS.Infrastructure.Integrations.Shopify;
public static class ShopifyWebhookTopics
{
// Order webhooks
public const string OrdersCreate = "orders/create";
public const string OrdersUpdated = "orders/updated";
public const string OrdersCancelled = "orders/cancelled";
public const string OrdersFulfilled = "orders/fulfilled";
public const string OrdersPaid = "orders/paid";
// Inventory webhooks
public const string InventoryLevelsUpdate = "inventory_levels/update";
public const string InventoryLevelsConnect = "inventory_levels/connect";
public const string InventoryLevelsDisconnect = "inventory_levels/disconnect";
// Product webhooks
public const string ProductsCreate = "products/create";
public const string ProductsUpdate = "products/update";
public const string ProductsDelete = "products/delete";
// Customer webhooks
public const string CustomersCreate = "customers/create";
public const string CustomersUpdate = "customers/update";
// Refund webhooks
public const string RefundsCreate = "refunds/create";
}
13.2.2 Webhook Controller
// File: src/POS.Api/Controllers/ShopifyWebhookController.cs
using Microsoft.AspNetCore.Mvc;
using System.Security.Cryptography;
using System.Text;
namespace POS.Api.Controllers;
[ApiController]
[Route("api/v1/webhooks/shopify")]
public class ShopifyWebhookController : ControllerBase
{
private readonly IShopifyWebhookHandler _webhookHandler;
private readonly IShopifyCredentialService _credentialService;
private readonly ILogger<ShopifyWebhookController> _logger;
public ShopifyWebhookController(
IShopifyWebhookHandler webhookHandler,
IShopifyCredentialService credentialService,
ILogger<ShopifyWebhookController> logger)
{
_webhookHandler = webhookHandler;
_credentialService = credentialService;
_logger = logger;
}
[HttpPost("{tenantId}")]
public async Task<IActionResult> HandleWebhook(
string tenantId,
CancellationToken ct)
{
// Read raw body for HMAC verification
Request.EnableBuffering();
using var reader = new StreamReader(Request.Body, leaveOpen: true);
var rawBody = await reader.ReadToEndAsync();
Request.Body.Position = 0;
// Verify HMAC signature
var hmacHeader = Request.Headers["X-Shopify-Hmac-Sha256"].FirstOrDefault();
if (string.IsNullOrEmpty(hmacHeader))
{
_logger.LogWarning("Missing HMAC header for tenant {TenantId}", tenantId);
return Unauthorized();
}
var credentials = await _credentialService.GetCredentialsAsync(tenantId, ct);
if (credentials is null)
{
_logger.LogWarning("No Shopify credentials for tenant {TenantId}", tenantId);
return NotFound();
}
if (!VerifyHmac(rawBody, hmacHeader, credentials.WebhookSecret))
{
_logger.LogWarning("Invalid HMAC for tenant {TenantId}", tenantId);
return Unauthorized();
}
// Extract webhook metadata
var topic = Request.Headers["X-Shopify-Topic"].FirstOrDefault();
var shopDomain = Request.Headers["X-Shopify-Shop-Domain"].FirstOrDefault();
var webhookId = Request.Headers["X-Shopify-Webhook-Id"].FirstOrDefault();
if (string.IsNullOrEmpty(topic))
{
_logger.LogWarning("Missing topic header for tenant {TenantId}", tenantId);
return BadRequest();
}
_logger.LogInformation(
"Received Shopify webhook {Topic} from {Shop} for tenant {TenantId}",
topic, shopDomain, tenantId);
// Queue for processing (respond quickly to Shopify)
await _webhookHandler.QueueWebhookAsync(new ShopifyWebhookEvent
{
TenantId = tenantId,
Topic = topic!,
ShopDomain = shopDomain!,
WebhookId = webhookId!,
Payload = rawBody,
ReceivedAt = DateTime.UtcNow
}, ct);
return Ok();
}
private static bool VerifyHmac(string body, string hmacHeader, string secret)
{
using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(body));
// Decode the header and compare in constant time to avoid timing attacks;
// a plain string comparison leaks how many leading bytes matched
Span<byte> headerBytes = stackalloc byte[32];
if (!Convert.TryFromBase64String(hmacHeader, headerBytes, out var written)
|| written != hash.Length)
{
return false;
}
return CryptographicOperations.FixedTimeEquals(hash, headerBytes);
}
}
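The controller queues a ShopifyWebhookEvent whose definition does not appear above. A minimal sketch consistent with the properties used there (the file path and record shape are assumptions):

```csharp
// File: src/POS.Application/Integrations/Shopify/ShopifyWebhookEvent.cs
// Hypothetical contract inferred from the controller above.
namespace POS.Application.Integrations.Shopify;
public sealed record ShopifyWebhookEvent
{
    public required string TenantId { get; init; }
    public required string Topic { get; init; }
    public required string ShopDomain { get; init; }
    public required string WebhookId { get; init; }  // Shopify's delivery ID, useful for deduplication
    public required string Payload { get; init; }    // raw JSON body, kept verbatim for audit
    public DateTime ReceivedAt { get; init; }
}
```

Because Shopify retries deliveries it considers failed, persisting WebhookId and checking it before processing makes the consumer idempotent.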
13.2.3 Webhook Handler Implementation
// File: src/POS.Infrastructure/Integrations/Shopify/ShopifyWebhookHandler.cs
using System.Text.Json;
using MassTransit;
namespace POS.Infrastructure.Integrations.Shopify;
public class ShopifyWebhookHandler : IShopifyWebhookHandler
{
private readonly IPublishEndpoint _publishEndpoint;
private readonly IInventoryService _inventoryService;
private readonly IOrderService _orderService;
private readonly IItemService _itemService;
private readonly ITenantContext _tenantContext;
private readonly ILogger<ShopifyWebhookHandler> _logger;
public ShopifyWebhookHandler(
IPublishEndpoint publishEndpoint,
IInventoryService inventoryService,
IOrderService orderService,
IItemService itemService,
ITenantContext tenantContext,
ILogger<ShopifyWebhookHandler> logger)
{
_publishEndpoint = publishEndpoint;
_inventoryService = inventoryService;
_orderService = orderService;
_itemService = itemService;
_tenantContext = tenantContext;
_logger = logger;
}
public async Task QueueWebhookAsync(
ShopifyWebhookEvent webhook,
CancellationToken ct)
{
// Publish to message queue for async processing
await _publishEndpoint.Publish(webhook, ct);
}
public async Task ProcessWebhookAsync(
ShopifyWebhookEvent webhook,
CancellationToken ct)
{
_tenantContext.SetTenant(webhook.TenantId);
try
{
switch (webhook.Topic)
{
case ShopifyWebhookTopics.OrdersCreate:
await HandleOrderCreatedAsync(webhook.Payload, ct);
break;
case ShopifyWebhookTopics.OrdersUpdated:
await HandleOrderUpdatedAsync(webhook.Payload, ct);
break;
case ShopifyWebhookTopics.OrdersCancelled:
await HandleOrderCancelledAsync(webhook.Payload, ct);
break;
case ShopifyWebhookTopics.InventoryLevelsUpdate:
await HandleInventoryUpdateAsync(webhook.Payload, ct);
break;
case ShopifyWebhookTopics.ProductsCreate:
case ShopifyWebhookTopics.ProductsUpdate:
await HandleProductUpdateAsync(webhook.Payload, ct);
break;
case ShopifyWebhookTopics.ProductsDelete:
await HandleProductDeleteAsync(webhook.Payload, ct);
break;
default:
_logger.LogWarning(
"Unhandled webhook topic: {Topic}",
webhook.Topic);
break;
}
}
catch (Exception ex)
{
_logger.LogError(ex,
"Error processing webhook {Topic} for tenant {TenantId}",
webhook.Topic, webhook.TenantId);
throw;
}
}
private async Task HandleOrderCreatedAsync(string payload, CancellationToken ct)
{
var order = JsonSerializer.Deserialize<ShopifyOrder>(payload,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
if (order is null) return;
_logger.LogInformation(
"Processing Shopify order {OrderNumber} ({OrderId})",
order.OrderNumber, order.Id);
// Import order to POS system
var importResult = await _orderService.ImportShopifyOrderAsync(
new ImportShopifyOrderCommand
{
ShopifyOrderId = order.Id.ToString(),
OrderNumber = order.OrderNumber,
CustomerEmail = order.Email,
CustomerName = $"{order.Customer?.FirstName} {order.Customer?.LastName}".Trim(),
TotalPrice = order.TotalPrice,
Currency = order.Currency,
LineItems = order.LineItems.Select(li => new ImportedLineItem
{
ShopifyLineItemId = li.Id.ToString(),
Sku = li.Sku,
Title = li.Title,
Quantity = li.Quantity,
Price = li.Price,
VariantId = li.VariantId?.ToString()
}).ToList(),
FulfillmentStatus = order.FulfillmentStatus,
FinancialStatus = order.FinancialStatus,
ShippingAddress = order.ShippingAddress != null
? new AddressDto
{
Address1 = order.ShippingAddress.Address1,
Address2 = order.ShippingAddress.Address2,
City = order.ShippingAddress.City,
Province = order.ShippingAddress.Province,
Zip = order.ShippingAddress.Zip,
Country = order.ShippingAddress.Country
}
: null,
CreatedAt = order.CreatedAt
}, ct);
if (!importResult.IsSuccess)
{
_logger.LogError(
"Failed to import Shopify order {OrderNumber}: {Error}",
order.OrderNumber, importResult.Error?.Message);
}
}
private async Task HandleInventoryUpdateAsync(string payload, CancellationToken ct)
{
var update = JsonSerializer.Deserialize<ShopifyInventoryLevel>(payload,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
if (update is null) return;
_logger.LogInformation(
"Processing inventory update for inventory item {InventoryItemId} at location {LocationId}",
update.InventoryItemId, update.LocationId);
// Find item by Shopify inventory item ID
var item = await _itemService.GetByShopifyInventoryItemIdAsync(
update.InventoryItemId.ToString(), ct);
if (item is null)
{
_logger.LogWarning(
"Item not found for Shopify inventory item {InventoryItemId}",
update.InventoryItemId);
return;
}
// Find POS location by Shopify location ID
var location = await _inventoryService.GetLocationByShopifyIdAsync(
update.LocationId.ToString(), ct);
if (location is null)
{
_logger.LogWarning(
"Location not found for Shopify location {LocationId}",
update.LocationId);
return;
}
// Update inventory (from Shopify, not triggering sync back)
await _inventoryService.SyncFromShopifyAsync(
new SyncInventoryCommand
{
ItemId = item.Id,
LocationId = location.Id,
Quantity = update.Available,
Source = "shopify_webhook",
ShopifyUpdatedAt = update.UpdatedAt
}, ct);
}
private async Task HandleProductUpdateAsync(string payload, CancellationToken ct)
{
var product = JsonSerializer.Deserialize<ShopifyProduct>(payload,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
if (product is null) return;
_logger.LogInformation(
"Processing product update for {ProductTitle} ({ProductId})",
product.Title, product.Id);
foreach (var variant in product.Variants)
{
var existingItem = await _itemService.GetByShopifyVariantIdAsync(
variant.Id.ToString(), ct);
if (existingItem is not null)
{
// Update existing item
await _itemService.UpdateFromShopifyAsync(
existingItem.Id,
new UpdateFromShopifyCommand
{
Name = $"{product.Title} - {variant.Title}",
Sku = variant.Sku,
Barcode = variant.Barcode,
Price = variant.Price,
CompareAtPrice = variant.CompareAtPrice,
Weight = variant.Weight,
WeightUnit = variant.WeightUnit
}, ct);
}
else
{
_logger.LogInformation(
"New Shopify variant {VariantId} not linked to POS item",
variant.Id);
}
}
}
private async Task HandleOrderCancelledAsync(string payload, CancellationToken ct)
{
var order = JsonSerializer.Deserialize<ShopifyOrder>(payload,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
if (order is null) return;
await _orderService.CancelShopifyOrderAsync(order.Id.ToString(), ct);
}
private async Task HandleProductDeleteAsync(string payload, CancellationToken ct)
{
var deleteEvent = JsonSerializer.Deserialize<ShopifyProductDelete>(payload,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
if (deleteEvent is null) return;
_logger.LogInformation(
"Shopify product {ProductId} deleted - marking POS items as inactive",
deleteEvent.Id);
await _itemService.DeactivateByShopifyProductIdAsync(
deleteEvent.Id.ToString(), ct);
}
}
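The switch above references ShopifyWebhookTopics constants that are not defined in this chapter. Shopify names webhook topics as resource/action strings, so a plausible definition is:

```csharp
// File: src/POS.Infrastructure/Integrations/Shopify/ShopifyWebhookTopics.cs
// Topic strings follow Shopify's "resource/action" naming convention.
namespace POS.Infrastructure.Integrations.Shopify;
public static class ShopifyWebhookTopics
{
    public const string OrdersCreate = "orders/create";
    public const string OrdersUpdated = "orders/updated";
    public const string OrdersCancelled = "orders/cancelled";
    public const string InventoryLevelsUpdate = "inventory_levels/update";
    public const string ProductsCreate = "products/create";
    public const string ProductsUpdate = "products/update";
    public const string ProductsDelete = "products/delete";
}
```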
13.2.4 Shopify API Client
// File: src/POS.Infrastructure/Integrations/Shopify/ShopifyClient.cs
using System.Net.Http.Json;
using System.Text.Json;
namespace POS.Infrastructure.Integrations.Shopify;
public class ShopifyClient : IShopifyClient
{
private readonly HttpClient _httpClient;
private readonly IShopifyCredentialService _credentialService;
private readonly ILogger<ShopifyClient> _logger;
public ShopifyClient(
HttpClient httpClient,
IShopifyCredentialService credentialService,
ILogger<ShopifyClient> logger)
{
_httpClient = httpClient;
_credentialService = credentialService;
_logger = logger;
}
public async Task<bool> UpdateInventoryLevelAsync(
string tenantId,
string inventoryItemId,
string locationId,
int quantity,
CancellationToken ct)
{
var credentials = await _credentialService.GetCredentialsAsync(tenantId, ct);
if (credentials is null)
throw new InvalidOperationException($"No Shopify credentials for tenant {tenantId}");
var baseUrl = $"https://{credentials.ShopDomain}/admin/api/2024-01";
var request = new HttpRequestMessage(HttpMethod.Post,
$"{baseUrl}/inventory_levels/set.json");
request.Headers.Add("X-Shopify-Access-Token", credentials.AccessToken);
request.Content = JsonContent.Create(new
{
inventory_item_id = long.Parse(inventoryItemId),
location_id = long.Parse(locationId),
available = quantity
});
var response = await _httpClient.SendAsync(request, ct);
if (!response.IsSuccessStatusCode)
{
var error = await response.Content.ReadAsStringAsync(ct);
_logger.LogError(
"Failed to update Shopify inventory: {StatusCode} - {Error}",
response.StatusCode, error);
return false;
}
return true;
}
public async Task<bool> FulfillOrderAsync(
string tenantId,
string orderId,
string locationId,
IEnumerable<FulfillmentLineItem> lineItems,
string? trackingNumber,
string? trackingCompany,
CancellationToken ct)
{
var credentials = await _credentialService.GetCredentialsAsync(tenantId, ct);
if (credentials is null)
throw new InvalidOperationException($"No Shopify credentials for tenant {tenantId}");
var baseUrl = $"https://{credentials.ShopDomain}/admin/api/2024-01";
// First, get fulfillment order
var fulfillmentOrderRequest = new HttpRequestMessage(HttpMethod.Get,
$"{baseUrl}/orders/{orderId}/fulfillment_orders.json");
fulfillmentOrderRequest.Headers.Add("X-Shopify-Access-Token", credentials.AccessToken);
var foResponse = await _httpClient.SendAsync(fulfillmentOrderRequest, ct);
if (!foResponse.IsSuccessStatusCode)
{
_logger.LogError("Failed to get fulfillment orders for order {OrderId}", orderId);
return false;
}
var foResult = await foResponse.Content.ReadFromJsonAsync<FulfillmentOrdersResponse>(ct);
var fulfillmentOrder = foResult?.FulfillmentOrders?.FirstOrDefault();
if (fulfillmentOrder is null)
{
_logger.LogWarning("No fulfillment order found for order {OrderId}", orderId);
return false;
}
// Create fulfillment
var fulfillmentRequest = new HttpRequestMessage(HttpMethod.Post,
$"{baseUrl}/fulfillments.json");
fulfillmentRequest.Headers.Add("X-Shopify-Access-Token", credentials.AccessToken);
var fulfillmentPayload = new
{
fulfillment = new
{
line_items_by_fulfillment_order = new[]
{
new
{
fulfillment_order_id = fulfillmentOrder.Id,
fulfillment_order_line_items = lineItems.Select(li => new
{
id = li.FulfillmentOrderLineItemId,
quantity = li.Quantity
}).ToArray()
}
},
tracking_info = !string.IsNullOrEmpty(trackingNumber) ? new
{
number = trackingNumber,
company = trackingCompany
} : null,
notify_customer = true
}
};
fulfillmentRequest.Content = JsonContent.Create(fulfillmentPayload);
var response = await _httpClient.SendAsync(fulfillmentRequest, ct);
if (!response.IsSuccessStatusCode)
{
var error = await response.Content.ReadAsStringAsync(ct);
_logger.LogError(
"Failed to create Shopify fulfillment: {StatusCode} - {Error}",
response.StatusCode, error);
return false;
}
return true;
}
}
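FulfillOrderAsync deserializes into a FulfillmentOrdersResponse that is not shown. A minimal DTO sketch covering only the fields the client reads; everything else Shopify returns is ignored during deserialization:

```csharp
// File: src/POS.Infrastructure/Integrations/Shopify/FulfillmentOrdersResponse.cs
// Hypothetical DTOs; only the properties used by ShopifyClient are modeled.
// Shopify's JSON uses snake_case, hence the explicit property names.
using System.Text.Json.Serialization;
namespace POS.Infrastructure.Integrations.Shopify;
public sealed class FulfillmentOrdersResponse
{
    [JsonPropertyName("fulfillment_orders")]
    public List<FulfillmentOrder>? FulfillmentOrders { get; set; }
}
public sealed class FulfillmentOrder
{
    [JsonPropertyName("id")]
    public long Id { get; set; }
}
```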
13.3 Payment Processing
13.3.1 PCI-DSS Compliance Pattern
┌─────────────────────────────────────────────────────────────────────────────┐
│ PCI-DSS Compliant Payment Flow │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. Card Data NEVER touches POS server │
│ 2. Use payment terminal or tokenization │
│ 3. Only store payment tokens │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Customer │ │ Payment │ │ Payment │ │
│ │ Card │───────►│ Terminal │───────►│ Processor │ │
│ └─────────────┘ └─────────────┘ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ POS │◄───────│ Token + │◄───────│ Response │ │
│ │ Server │ │ Last 4 │ │ (Success) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ Stored: payment_token, card_last_4, card_brand │
│ NOT Stored: card_number, cvv, expiry │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
13.3.2 Payment Service Interface
// File: src/POS.Application/Interfaces/IPaymentService.cs
namespace POS.Application.Interfaces;
public interface IPaymentService
{
Task<Result<PaymentResult>> ProcessPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct = default);
Task<Result<RefundResult>> ProcessRefundAsync(
ProcessRefundCommand command,
CancellationToken ct = default);
Task<Result> VoidPaymentAsync(
string transactionId,
CancellationToken ct = default);
Task<PaymentMethodsResult> GetAvailableMethodsAsync(
string locationId,
CancellationToken ct = default);
// Terminal operations
Task<TerminalStatus> GetTerminalStatusAsync(
string terminalId,
CancellationToken ct = default);
Task<Result<TerminalPaymentIntent>> CreateTerminalPaymentIntentAsync(
CreateTerminalPaymentCommand command,
CancellationToken ct = default);
Task<Result<PaymentResult>> CaptureTerminalPaymentAsync(
string paymentIntentId,
CancellationToken ct = default);
Task<Result> CancelTerminalPaymentAsync(
string paymentIntentId,
CancellationToken ct = default);
}
13.3.3 Stripe Terminal Integration
// File: src/POS.Infrastructure/Payments/StripePaymentService.cs
using Stripe;
using Stripe.Terminal;
namespace POS.Infrastructure.Payments;
public class StripePaymentService : IPaymentService
{
private readonly IPaymentCredentialService _credentialService;
private readonly ITenantContext _tenantContext;
private readonly IAuditLogger _auditLogger;
private readonly ILogger<StripePaymentService> _logger;
public StripePaymentService(
IPaymentCredentialService credentialService,
ITenantContext tenantContext,
IAuditLogger auditLogger,
ILogger<StripePaymentService> logger)
{
_credentialService = credentialService;
_tenantContext = tenantContext;
_auditLogger = auditLogger;
_logger = logger;
}
public async Task<Result<PaymentResult>> ProcessPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct = default)
{
var credentials = await _credentialService.GetStripeCredentialsAsync(
_tenantContext.TenantId!, ct);
if (credentials is null)
return Result<PaymentResult>.Failure(
DomainError.PaymentNotConfigured("Stripe"));
// NOTE: StripeConfiguration.ApiKey is process-wide. With many tenants sharing one
// process, prefer passing a per-request RequestOptions { ApiKey = ... } to each call.
StripeConfiguration.ApiKey = credentials.SecretKey;
try
{
switch (command.Method)
{
case PaymentMethod.CreditCard when command.TerminalId is not null:
return await ProcessTerminalPaymentAsync(command, ct);
case PaymentMethod.CreditCard when command.PaymentToken is not null:
return await ProcessTokenPaymentAsync(command, ct);
case PaymentMethod.Cash:
return await ProcessCashPaymentAsync(command, ct);
default:
return Result<PaymentResult>.Failure(
DomainError.InvalidPaymentMethod(command.Method.ToString()));
}
}
catch (StripeException ex)
{
_logger.LogError(ex, "Stripe payment failed: {Code}", ex.StripeError?.Code);
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(ex.StripeError?.Message ?? ex.Message));
}
}
private async Task<Result<PaymentResult>> ProcessTerminalPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct)
{
_logger.LogInformation(
"Processing terminal payment of {Amount} on terminal {TerminalId}",
command.Amount, command.TerminalId);
// Create PaymentIntent
var paymentIntentService = new PaymentIntentService();
var paymentIntent = await paymentIntentService.CreateAsync(
new PaymentIntentCreateOptions
{
Amount = (long)(command.Amount * 100), // Convert to cents
Currency = "usd",
PaymentMethodTypes = new List<string> { "card_present" },
CaptureMethod = "automatic",
Metadata = new Dictionary<string, string>
{
["order_id"] = command.OrderId,
["tenant_id"] = _tenantContext.TenantId!,
["location_id"] = command.LocationId
}
}, cancellationToken: ct);
// Hand the PaymentIntent to the physical reader for card-present capture
var readerService = new ReaderService();
await readerService.ProcessPaymentIntentAsync(
command.TerminalId,
new ReaderProcessPaymentIntentOptions
{
PaymentIntent = paymentIntent.Id
}, cancellationToken: ct);
// Poll until the payment succeeds, is canceled, or times out
var updatedIntent = await WaitForPaymentCompletionAsync(
paymentIntent.Id, TimeSpan.FromSeconds(60), ct);
if (updatedIntent.Status != "succeeded")
{
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(
$"Terminal payment failed with status: {updatedIntent.Status}"));
}
var charge = updatedIntent.LatestCharge;
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = _tenantContext.TenantId!,
EventType = "PaymentProcessed",
Details = $"Card payment {command.Amount:C} via terminal {command.TerminalId}",
ReferenceId = paymentIntent.Id,
ReferenceType = "StripePaymentIntent"
}, ct);
return Result<PaymentResult>.Success(new PaymentResult
{
Success = true,
TransactionId = paymentIntent.Id,
ChargeId = charge?.Id,
Amount = command.Amount,
CardLast4 = charge?.PaymentMethodDetails?.CardPresent?.Last4,
CardBrand = charge?.PaymentMethodDetails?.CardPresent?.Brand,
AuthorizationCode = charge?.AuthorizationCode
});
}
private async Task<Result<PaymentResult>> ProcessTokenPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct)
{
_logger.LogInformation(
"Processing token payment of {Amount}",
command.Amount);
var paymentIntentService = new PaymentIntentService();
var paymentIntent = await paymentIntentService.CreateAsync(
new PaymentIntentCreateOptions
{
Amount = (long)(command.Amount * 100),
Currency = "usd",
PaymentMethod = command.PaymentToken,
Confirm = true,
Expand = new List<string> { "latest_charge" }, // populate LatestCharge for the card details read below
Metadata = new Dictionary<string, string>
{
["order_id"] = command.OrderId,
["tenant_id"] = _tenantContext.TenantId!
}
}, cancellationToken: ct);
if (paymentIntent.Status != "succeeded")
{
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(
$"Payment failed with status: {paymentIntent.Status}"));
}
var charge = paymentIntent.LatestCharge;
return Result<PaymentResult>.Success(new PaymentResult
{
Success = true,
TransactionId = paymentIntent.Id,
ChargeId = charge?.Id,
Amount = command.Amount,
CardLast4 = charge?.PaymentMethodDetails?.Card?.Last4,
CardBrand = charge?.PaymentMethodDetails?.Card?.Brand
});
}
private Task<Result<PaymentResult>> ProcessCashPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct)
{
// Cash payments don't need external processing
var transactionId = $"CASH-{Guid.NewGuid():N}"[..24];
_logger.LogInformation(
"Recording cash payment of {Amount}",
command.Amount);
return Task.FromResult(Result<PaymentResult>.Success(new PaymentResult
{
Success = true,
TransactionId = transactionId,
Amount = command.Amount
}));
}
public async Task<Result<RefundResult>> ProcessRefundAsync(
ProcessRefundCommand command,
CancellationToken ct = default)
{
var credentials = await _credentialService.GetStripeCredentialsAsync(
_tenantContext.TenantId!, ct);
if (credentials is null)
return Result<RefundResult>.Failure(
DomainError.PaymentNotConfigured("Stripe"));
StripeConfiguration.ApiKey = credentials.SecretKey;
try
{
var refundService = new RefundService();
var refund = await refundService.CreateAsync(
new RefundCreateOptions
{
PaymentIntent = command.OriginalTransactionId,
Amount = (long)(command.Amount * 100),
Reason = MapRefundReason(command.Reason),
Metadata = new Dictionary<string, string>
{
["refund_order_id"] = command.RefundOrderId,
["original_order_id"] = command.OriginalOrderId
}
}, cancellationToken: ct);
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = _tenantContext.TenantId!,
EventType = "RefundProcessed",
Details = $"Refund {command.Amount:C} for order {command.OriginalOrderId}",
ReferenceId = refund.Id,
ReferenceType = "StripeRefund"
}, ct);
return Result<RefundResult>.Success(new RefundResult
{
Success = true,
TransactionId = refund.Id,
Amount = command.Amount,
Status = refund.Status
});
}
catch (StripeException ex)
{
_logger.LogError(ex, "Stripe refund failed: {Code}", ex.StripeError?.Code);
return Result<RefundResult>.Failure(
DomainError.RefundFailed(ex.StripeError?.Message ?? ex.Message));
}
}
public async Task<Result> VoidPaymentAsync(
string transactionId,
CancellationToken ct = default)
{
var credentials = await _credentialService.GetStripeCredentialsAsync(
_tenantContext.TenantId!, ct);
if (credentials is null)
return Result.Failure(DomainError.PaymentNotConfigured("Stripe"));
StripeConfiguration.ApiKey = credentials.SecretKey;
try
{
var paymentIntentService = new PaymentIntentService();
await paymentIntentService.CancelAsync(transactionId, cancellationToken: ct);
await _auditLogger.LogAsync(new AuditEvent
{
TenantId = _tenantContext.TenantId!,
EventType = "PaymentVoided",
ReferenceId = transactionId,
ReferenceType = "StripePaymentIntent"
}, ct);
return Result.Success();
}
catch (StripeException ex) when (ex.StripeError?.Code == "payment_intent_unexpected_state")
{
// Already captured - need to refund instead
var refundService = new RefundService();
await refundService.CreateAsync(
new RefundCreateOptions { PaymentIntent = transactionId },
cancellationToken: ct);
return Result.Success();
}
catch (StripeException ex)
{
_logger.LogError(ex, "Stripe void failed: {Code}", ex.StripeError?.Code);
return Result.Failure(
DomainError.VoidFailed(ex.StripeError?.Message ?? ex.Message));
}
}
private async Task<PaymentIntent> WaitForPaymentCompletionAsync(
string paymentIntentId,
TimeSpan timeout,
CancellationToken ct)
{
var paymentIntentService = new PaymentIntentService();
var startTime = DateTime.UtcNow;
while (DateTime.UtcNow - startTime < timeout)
{
var intent = await paymentIntentService.GetAsync(
paymentIntentId,
new PaymentIntentGetOptions { Expand = new List<string> { "latest_charge" } },
cancellationToken: ct);
if (intent.Status is "succeeded" or "canceled" or "requires_payment_method")
{
return intent;
}
await Task.Delay(1000, ct);
}
throw new TimeoutException("Payment processing timed out");
}
private static string MapRefundReason(RefundReason reason) => reason switch
{
RefundReason.CustomerRequest => "requested_by_customer",
RefundReason.Duplicate => "duplicate",
RefundReason.Fraudulent => "fraudulent",
_ => "requested_by_customer"
};
}
13.3.4 Square Integration Pattern
// File: src/POS.Infrastructure/Payments/SquarePaymentService.cs
using Square;
using Square.Models;
namespace POS.Infrastructure.Payments;
public class SquarePaymentService : IPaymentService
{
private readonly IPaymentCredentialService _credentialService;
private readonly ITenantContext _tenantContext;
private readonly ILogger<SquarePaymentService> _logger;
public SquarePaymentService(
IPaymentCredentialService credentialService,
ITenantContext tenantContext,
ILogger<SquarePaymentService> logger)
{
_credentialService = credentialService;
_tenantContext = tenantContext;
_logger = logger;
}
public async Task<Result<PaymentResult>> ProcessPaymentAsync(
ProcessPaymentCommand command,
CancellationToken ct = default)
{
var credentials = await _credentialService.GetSquareCredentialsAsync(
_tenantContext.TenantId!, ct);
if (credentials is null)
return Result<PaymentResult>.Failure(
DomainError.PaymentNotConfigured("Square"));
var client = new SquareClient.Builder()
.Environment(credentials.IsSandbox
? Square.Environment.Sandbox
: Square.Environment.Production)
.AccessToken(credentials.AccessToken)
.Build();
try
{
// Create terminal checkout for card-present
if (command.TerminalId is not null)
{
var checkoutRequest = new CreateTerminalCheckoutRequest.Builder(
Guid.NewGuid().ToString(),
new TerminalCheckout.Builder(
new Money.Builder()
.Amount((long)(command.Amount * 100))
.Currency("USD")
.Build(),
command.TerminalId)
.ReferenceId(command.OrderId)
.Build())
.Build();
var checkoutResponse = await client.TerminalApi.CreateTerminalCheckoutAsync(
checkoutRequest);
if (checkoutResponse.Errors?.Any() == true)
{
var error = checkoutResponse.Errors.First();
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(error.Detail));
}
var checkout = checkoutResponse.Checkout;
// Poll for completion
var completedCheckout = await WaitForCheckoutCompletionAsync(
client, checkout.Id, TimeSpan.FromSeconds(60), ct);
if (completedCheckout.Status != "COMPLETED")
{
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(
$"Checkout failed with status: {completedCheckout.Status}"));
}
return Result<PaymentResult>.Success(new PaymentResult
{
Success = true,
TransactionId = completedCheckout.PaymentIds?.FirstOrDefault(),
Amount = command.Amount,
CardLast4 = completedCheckout.CardDetails?.Card?.Last4,
CardBrand = completedCheckout.CardDetails?.Card?.CardBrand
});
}
else
{
// Token-based payment
var paymentRequest = new CreatePaymentRequest.Builder(
command.PaymentToken!,
Guid.NewGuid().ToString())
.AmountMoney(new Money.Builder()
.Amount((long)(command.Amount * 100))
.Currency("USD")
.Build())
.LocationId(credentials.LocationId)
.ReferenceId(command.OrderId)
.Build();
var paymentResponse = await client.PaymentsApi.CreatePaymentAsync(
paymentRequest);
if (paymentResponse.Errors?.Any() == true)
{
var error = paymentResponse.Errors.First();
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(error.Detail));
}
var payment = paymentResponse.Payment;
return Result<PaymentResult>.Success(new PaymentResult
{
Success = true,
TransactionId = payment.Id,
Amount = command.Amount,
CardLast4 = payment.CardDetails?.Card?.Last4,
CardBrand = payment.CardDetails?.Card?.CardBrand
});
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Square payment failed");
return Result<PaymentResult>.Failure(
DomainError.PaymentFailed(ex.Message));
}
}
private async Task<TerminalCheckout> WaitForCheckoutCompletionAsync(
SquareClient client,
string checkoutId,
TimeSpan timeout,
CancellationToken ct)
{
var startTime = DateTime.UtcNow;
while (DateTime.UtcNow - startTime < timeout)
{
var response = await client.TerminalApi.GetTerminalCheckoutAsync(checkoutId);
var checkout = response.Checkout;
if (checkout.Status is "COMPLETED" or "CANCELED")
{
return checkout;
}
await Task.Delay(1000, ct);
}
throw new TimeoutException("Checkout processing timed out");
}
// ... other interface methods
}
13.4 External API Patterns
13.4.1 Circuit Breaker Pattern with Polly v8
The Circuit Breaker pattern prevents cascading failures when external services (Shopify, payment processors) become unavailable. Polly v8 replaces the older policy syntax with a fluent resilience-pipeline API, which this section uses throughout.
Circuit Breaker States
+------------------------------------------------------------------+
| CIRCUIT BREAKER STATES |
+------------------------------------------------------------------+
| |
| CLOSED OPEN HALF-OPEN |
| (Normal Flow) (Fail Fast) (Test Recovery) |
| |
| ┌─────────┐ ┌─────────┐ ┌─────────┐ |
| │ Request │ │ Request │ │ Request │ |
| │ passes │ │ blocked │ │ limited │ |
| │ through │ │ (fast │ │ (test │ |
| │ │ │ fail) │ │ probe) │ |
| └────┬────┘ └────┬────┘ └────┬────┘ |
| │ │ │ |
| ▼ ▼ ▼ |
| ┌─────────┐ ┌─────────┐ ┌─────────┐ |
| │ Track │ │ Return │ │ If OK: │ |
| │ failures│ │ cached/ │ │ → CLOSED│ |
| │ If > 5: │ │ fallback│ │ If fail:│ |
| │ → OPEN │ │ After │ │ → OPEN │ |
| │ │ │ timeout:│ │ │ |
| │ │ │→HALF-OPN│ │ │ |
| └─────────┘ └─────────┘ └─────────┘ |
| |
+------------------------------------------------------------------+
Polly v8 Configuration
// File: src/POS.Infrastructure/Http/ResilienceConfiguration.cs
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http.Resilience;
using Polly;
using Polly.CircuitBreaker;
using Polly.Retry;
using Polly.Timeout;
namespace POS.Infrastructure.Http;
public static class ResilienceConfiguration
{
public static IServiceCollection AddResilientHttpClients(
this IServiceCollection services)
{
// Shopify client with full resilience pipeline
services.AddHttpClient<IShopifyClient, ShopifyClient>()
.AddResilienceHandler("shopify", ConfigureShopifyResilience);
// Payment processor with stricter circuit breaker
services.AddHttpClient<IStripeClient, StripeClient>()
.AddResilienceHandler("stripe", ConfigurePaymentResilience);
services.AddHttpClient<ISquareClient, SquareClient>()
.AddResilienceHandler("square", ConfigurePaymentResilience);
return services;
}
private static void ConfigureShopifyResilience(ResiliencePipelineBuilder<HttpResponseMessage> builder)
{
builder
// 1. Timeout for individual requests
.AddTimeout(new TimeoutStrategyOptions
{
Timeout = TimeSpan.FromSeconds(10),
OnTimeout = args =>
{
Log.Warning("Shopify request timed out after {Timeout}s",
args.Timeout.TotalSeconds);
return default;
}
})
// 2. Retry with exponential backoff
.AddRetry(new RetryStrategyOptions<HttpResponseMessage>
{
ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>()
.Handle<TimeoutRejectedException>()
.HandleResult(r => r.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
.HandleResult(r => (int)r.StatusCode >= 500),
MaxRetryAttempts = 3,
Delay = TimeSpan.FromSeconds(1),
BackoffType = DelayBackoffType.Exponential,
UseJitter = true, // Prevents thundering herd
OnRetry = args =>
{
Log.Warning(
"Retrying Shopify request. Attempt {Attempt} after {Delay}ms. " +
"Status: {StatusCode}",
args.AttemptNumber,
args.RetryDelay.TotalMilliseconds,
args.Outcome.Result?.StatusCode);
return default;
}
})
// 3. Circuit Breaker
.AddCircuitBreaker(new CircuitBreakerStrategyOptions<HttpResponseMessage>
{
ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>()
.Handle<TimeoutRejectedException>()
.HandleResult(r => (int)r.StatusCode >= 500),
// Open when 50% or more of calls fail within a 30-second window,
// once at least 5 calls have been sampled
FailureRatio = 0.5,
SamplingDuration = TimeSpan.FromSeconds(30),
MinimumThroughput = 5, // Minimum requests before the ratio is evaluated
// Stay open for 30 seconds before testing
BreakDuration = TimeSpan.FromSeconds(30),
OnOpened = args =>
{
Log.Error(
"Shopify circuit OPENED. Breaking for {Duration}s. " +
"Reason: {Exception}",
args.BreakDuration.TotalSeconds,
args.Outcome.Exception?.Message ?? "Server errors");
// Trigger alert
AlertService.SendCircuitBreakerAlert("Shopify", "OPEN");
return default;
},
OnClosed = args =>
{
Log.Information("Shopify circuit CLOSED. Service recovered.");
AlertService.SendCircuitBreakerAlert("Shopify", "CLOSED");
return default;
},
OnHalfOpened = args =>
{
Log.Information("Shopify circuit HALF-OPEN. Testing recovery...");
return default;
}
});
}
private static void ConfigurePaymentResilience(ResiliencePipelineBuilder<HttpResponseMessage> builder)
{
builder
// Longer timeout for payment operations
.AddTimeout(new TimeoutStrategyOptions
{
Timeout = TimeSpan.FromSeconds(30) // Card-present flows can take tens of seconds
})
// Fewer retries for payments (idempotency concerns)
.AddRetry(new RetryStrategyOptions<HttpResponseMessage>
{
ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>()
.HandleResult(r => r.StatusCode == System.Net.HttpStatusCode.TooManyRequests),
// Do NOT retry 5xx for payments - could cause double charges
MaxRetryAttempts = 2,
Delay = TimeSpan.FromSeconds(2),
BackoffType = DelayBackoffType.Linear
})
// Stricter circuit breaker for payments
.AddCircuitBreaker(new CircuitBreakerStrategyOptions<HttpResponseMessage>
{
FailureRatio = 0.3, // Open at 30% failure rate
SamplingDuration = TimeSpan.FromSeconds(60),
MinimumThroughput = 3,
BreakDuration = TimeSpan.FromMinutes(1),
OnOpened = args =>
{
Log.Fatal(
"PAYMENT CIRCUIT OPENED - Switching to fallback processor");
AlertService.SendCriticalAlert("Payment", "Circuit breaker opened");
return default;
}
});
}
}
Fallback Strategy for Circuit Breaker
// File: src/POS.Infrastructure/Http/ResilientShopifyClient.cs
public class ResilientShopifyClient : IShopifyClient
{
private readonly HttpClient _httpClient;
private readonly IShopifyCache _cache;
private readonly ILogger _logger;
private readonly ResiliencePipeline<HttpResponseMessage> _pipeline;
public async Task<ShopifyProduct?> GetProductAsync(
string tenantId,
string productId,
CancellationToken ct)
{
try
{
var response = await _pipeline.ExecuteAsync(async token =>
{
var request = CreateRequest(tenantId, $"/products/{productId}.json");
return await _httpClient.SendAsync(request, token);
}, ct);
if (response.IsSuccessStatusCode)
{
var product = await response.Content
.ReadFromJsonAsync<ShopifyProductResponse>(ct);
if (product?.Product is null)
return null;
// Cache successful response for fallback
await _cache.SetProductAsync(productId, product.Product, ct);
return product.Product;
}
return null;
}
catch (BrokenCircuitException)
{
// Circuit is open - use cached data as fallback
_logger.LogWarning(
"Shopify circuit open. Using cached product {ProductId}",
productId);
return await _cache.GetProductAsync(productId, ct);
}
catch (TimeoutRejectedException)
{
_logger.LogWarning("Shopify request timed out for product {ProductId}", productId);
return await _cache.GetProductAsync(productId, ct);
}
}
}
Circuit Breaker Alerts (Prometheus)
# prometheus/alerts/circuit-breaker.yml
groups:
- name: circuit-breaker-alerts
rules:
- alert: CircuitBreakerOpen
expr: polly_circuit_breaker_state{state="open"} == 1
for: 0m
labels:
severity: critical
annotations:
summary: "Circuit breaker OPEN for {{ $labels.service }}"
description: "External service {{ $labels.service }} is failing"
runbook_url: "https://wiki/runbooks/circuit-breaker"
- alert: CircuitBreakerHalfOpen
expr: polly_circuit_breaker_state{state="half_open"} == 1
for: 5m
labels:
severity: warning
annotations:
summary: "Circuit breaker stuck in HALF-OPEN for {{ $labels.service }}"
13.4.2 API Rate Limiting
Rate limiting protects the POS API from abuse and keeps outbound traffic from overwhelming external APIs. The implementation uses the Token Bucket algorithm for smooth traffic shaping: each client holds a bucket of tokens that refills at a fixed rate, so short bursts are absorbed while the sustained rate stays capped.
Rate Limiting Architecture
+------------------------------------------------------------------+
| RATE LIMITING LAYERS |
+------------------------------------------------------------------+
| |
| Layer 1: Global Rate Limit (per IP) |
| └── 1000 requests/minute per IP |
| |
| Layer 2: Tenant Rate Limit (per API key) |
| └── Based on subscription tier |
| └── Free: 100/min, Pro: 1000/min, Enterprise: 10000/min |
| |
| Layer 3: Endpoint Rate Limit (per route) |
| └── /api/payments: 10/min per tenant (prevent fraud) |
| └── /api/reports: 5/min (expensive queries) |
| |
| Layer 4: Outbound Rate Limit (to external APIs) |
| └── Shopify: 40 requests/second per store |
| └── Stripe: 100 requests/second per account |
| |
+------------------------------------------------------------------+
Token Bucket Implementation
// File: src/POS.Infrastructure/RateLimiting/TokenBucketRateLimiter.cs
using System.Threading.RateLimiting;
public static class RateLimitingConfiguration
{
public static IServiceCollection AddRateLimiting(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddRateLimiter(options =>
{
// Global rate limit by IP
options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(
httpContext =>
{
var clientIp = httpContext.Connection.RemoteIpAddress?.ToString() ?? "unknown";
return RateLimitPartition.GetTokenBucketLimiter(
partitionKey: clientIp,
factory: _ => new TokenBucketRateLimiterOptions
{
TokenLimit = 1000, // Bucket capacity (1000 req/min per IP, per the diagram above)
TokensPerPeriod = 1000, // Refill amount
ReplenishmentPeriod = TimeSpan.FromMinutes(1),
QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
QueueLimit = 10, // Queue up to 10 requests
AutoReplenishment = true
});
});
// Tenant-based rate limit (by API key)
options.AddPolicy("tenant", httpContext =>
{
var tenantId = httpContext.Request.Headers["X-Tenant-Id"].FirstOrDefault();
var tier = GetTenantTier(httpContext, tenantId);
return RateLimitPartition.GetTokenBucketLimiter(
partitionKey: tenantId ?? "anonymous",
factory: _ => tier switch
{
"enterprise" => new TokenBucketRateLimiterOptions
{
TokenLimit = 10000,
TokensPerPeriod = 10000,
ReplenishmentPeriod = TimeSpan.FromMinutes(1)
},
"pro" => new TokenBucketRateLimiterOptions
{
TokenLimit = 1000,
TokensPerPeriod = 1000,
ReplenishmentPeriod = TimeSpan.FromMinutes(1)
},
_ => new TokenBucketRateLimiterOptions
{
TokenLimit = 100,
TokensPerPeriod = 100,
ReplenishmentPeriod = TimeSpan.FromMinutes(1)
}
});
});
// Payment endpoint (stricter limit)
options.AddPolicy("payments", httpContext =>
{
var tenantId = httpContext.Request.Headers["X-Tenant-Id"].FirstOrDefault();
return RateLimitPartition.GetFixedWindowLimiter(
partitionKey: tenantId ?? "anonymous",
factory: _ => new FixedWindowRateLimiterOptions
{
PermitLimit = 10,
Window = TimeSpan.FromMinutes(1),
QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
QueueLimit = 2
});
});
// Reporting endpoint (expensive queries)
options.AddPolicy("reports", httpContext =>
{
var tenantId = httpContext.Request.Headers["X-Tenant-Id"].FirstOrDefault();
return RateLimitPartition.GetSlidingWindowLimiter(
partitionKey: tenantId ?? "anonymous",
factory: _ => new SlidingWindowRateLimiterOptions
{
PermitLimit = 5,
Window = TimeSpan.FromMinutes(1),
SegmentsPerWindow = 6, // 10-second segments
QueueLimit = 0 // No queuing for reports
});
});
// Custom rejection response
options.OnRejected = async (context, token) =>
{
context.HttpContext.Response.StatusCode = StatusCodes.Status429TooManyRequests;
var retryAfterSeconds =
context.Lease.TryGetMetadata(MetadataName.RetryAfter, out var retryAfter)
? retryAfter.TotalSeconds // TimeSpan is a non-nullable struct; read it directly
: 60d;
context.HttpContext.Response.Headers.RetryAfter =
((int)retryAfterSeconds).ToString();
await context.HttpContext.Response.WriteAsJsonAsync(new
{
error = "rate_limit_exceeded",
message = "Too many requests. Please retry after the specified time.",
retry_after_seconds = retryAfterSeconds
}, token);
};
});
return services;
}
}
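The extension method above only registers the limiters; they take effect once the rate-limiting middleware joins the pipeline. A minimal Program.cs sketch (UseRateLimiter is the built-in ASP.NET Core middleware and must run before the endpoints it protects; the wiring shown is illustrative):

```csharp
// File: src/POS.Api/Program.cs (excerpt - illustrative wiring)
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddRateLimiting(builder.Configuration); // from this chapter

var app = builder.Build();

// Rejected requests short-circuit here with 429 + Retry-After
// before reaching any controller
app.UseRateLimiter();

app.MapControllers();
app.Run();
```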
Applying Rate Limits to Controllers
// File: src/POS.Api/Controllers/PaymentsController.cs
[ApiController]
[Route("api/v1/payments")]
[EnableRateLimiting("payments")] // Apply payments rate limit
public class PaymentsController : ControllerBase
{
[HttpPost] // "payments" policy inherited from the controller-level attribute
public async Task<IActionResult> ProcessPayment(
[FromBody] PaymentRequest request)
{
// Rate limited to 10/minute per tenant
return Ok(await _paymentService.ProcessAsync(request));
}
}
[ApiController]
[Route("api/v1/reports")]
[EnableRateLimiting("reports")]
public class ReportsController : ControllerBase
{
[HttpGet("sales")]
public async Task<IActionResult> GetSalesReport(
[FromQuery] DateRange range)
{
// Rate limited to 5/minute per tenant
return Ok(await _reportService.GenerateSalesReport(range));
}
}
Outbound Rate Limiting for External APIs
// File: src/POS.Infrastructure/Http/OutboundRateLimiter.cs
public class ShopifyRateLimitedClient : IShopifyClient
{
private readonly HttpClient _httpClient;
private readonly RateLimiter _rateLimiter;
private readonly ILogger _logger;
public ShopifyRateLimitedClient(HttpClient httpClient, ILogger<ShopifyRateLimitedClient> logger)
{
_httpClient = httpClient;
_logger = logger;
// Shopify allows 40 requests per second per store
// Use sliding window to smooth out bursts
_rateLimiter = new SlidingWindowRateLimiter(new SlidingWindowRateLimiterOptions
{
PermitLimit = 40,
Window = TimeSpan.FromSeconds(1),
SegmentsPerWindow = 4, // 250ms segments
QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
QueueLimit = 100 // Queue excess requests
});
}
public async Task<HttpResponseMessage> SendAsync(
HttpRequestMessage request,
CancellationToken ct)
{
using var lease = await _rateLimiter.AcquireAsync(1, ct);
if (!lease.IsAcquired)
{
// IsAcquired is false only when the queue is full and the lease was rejected
_logger.LogWarning("Shopify rate limiter queue full. Rejecting request.");
throw new RateLimitExceededException("Shopify API rate limit exceeded");
}
var response = await _httpClient.SendAsync(request, ct);
// Check Shopify's rate limit headers
if (response.Headers.TryGetValues("X-Shopify-Shop-Api-Call-Limit", out var values))
{
var callLimit = values.First(); // e.g., "35/40"
var parts = callLimit.Split('/');
var current = int.Parse(parts[0]);
var max = int.Parse(parts[1]);
if (current > max * 0.8) // 80% threshold
{
_logger.LogWarning(
"Shopify API approaching limit: {Current}/{Max}",
current, max);
}
}
return response;
}
}
Rate Limit Headers in Responses
// File: src/POS.Api/Middleware/RateLimitHeaderMiddleware.cs
// Implements IMiddleware so the (context, next) signature is valid
public class RateLimitHeaderMiddleware : IMiddleware
{
public async Task InvokeAsync(HttpContext context, RequestDelegate next)
{
// Register the callback before the response starts - headers cannot
// be modified once the response body has begun streaming
context.Response.OnStarting(() =>
{
// IRateLimitFeature is a custom feature populated by our limiter policies
if (context.Features.Get<IRateLimitFeature>() is { } feature)
{
context.Response.Headers["X-RateLimit-Limit"] = feature.Limit.ToString();
context.Response.Headers["X-RateLimit-Remaining"] = feature.Remaining.ToString();
context.Response.Headers["X-RateLimit-Reset"] = feature.Reset.ToUnixTimeSeconds().ToString();
}
return Task.CompletedTask;
});
await next(context);
}
}
// Response example:
// HTTP/1.1 200 OK
// X-RateLimit-Limit: 100
// X-RateLimit-Remaining: 87
// X-RateLimit-Reset: 1706140800
13.4.3 Basic Retry Configuration (Legacy Reference)
For simpler scenarios that do not need the full Polly v8 resilience pipeline, the classic Microsoft.Extensions.Http.Polly extensions remain a lighter-weight option:
// File: src/POS.Infrastructure/Http/HttpClientConfiguration.cs
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;
namespace POS.Infrastructure.Http;
public static class HttpClientConfiguration
{
public static IServiceCollection AddExternalApiClients(
this IServiceCollection services)
{
// Shopify client with retry
services.AddHttpClient<IShopifyClient, ShopifyClient>()
.AddPolicyHandler(GetRetryPolicy())
.AddPolicyHandler(GetCircuitBreakerPolicy());
// Payment clients
services.AddHttpClient<IStripeClient, StripeClient>()
.AddPolicyHandler(GetRetryPolicy());
services.AddHttpClient<ISquareClient, SquareClient>()
.AddPolicyHandler(GetRetryPolicy());
return services;
}
private static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy()
{
return HttpPolicyExtensions
.HandleTransientHttpError()
.OrResult(msg => msg.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
.WaitAndRetryAsync(3, retryAttempt =>
TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
}
private static IAsyncPolicy<HttpResponseMessage> GetCircuitBreakerPolicy()
{
return HttpPolicyExtensions
.HandleTransientHttpError()
.CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));
}
}
13.4.4 Credential Management
// File: src/POS.Infrastructure/Security/CredentialService.cs
using Microsoft.Extensions.Caching.Memory;
using Azure.Security.KeyVault.Secrets;
namespace POS.Infrastructure.Security;
public class CredentialService : IPaymentCredentialService, IShopifyCredentialService
{
private readonly IIntegrationCredentialRepository _repository;
private readonly SecretClient? _keyVaultClient;
private readonly IMemoryCache _cache;
private readonly ILogger<CredentialService> _logger;
public async Task<StripeCredentials?> GetStripeCredentialsAsync(
string tenantId,
CancellationToken ct)
{
var cacheKey = $"stripe:{tenantId}";
if (_cache.TryGetValue(cacheKey, out StripeCredentials? cached))
return cached;
var integration = await _repository.GetByTypeAsync(
tenantId, IntegrationType.Stripe, ct);
if (integration is null)
return null;
// Decrypt secret key from Key Vault or encrypted storage
var secretKey = _keyVaultClient is not null
? (await _keyVaultClient.GetSecretAsync(
$"stripe-{tenantId}", cancellationToken: ct)).Value.Value
: DecryptSecret(integration.EncryptedSecretKey);
var credentials = new StripeCredentials
{
PublishableKey = integration.PublicKey,
SecretKey = secretKey,
WebhookSecret = integration.WebhookSecret
};
_cache.Set(cacheKey, credentials, TimeSpan.FromMinutes(15));
return credentials;
}
public async Task<ShopifyCredentials?> GetCredentialsAsync(
string tenantId,
CancellationToken ct)
{
var cacheKey = $"shopify:{tenantId}";
if (_cache.TryGetValue(cacheKey, out ShopifyCredentials? cached))
return cached;
var integration = await _repository.GetByTypeAsync(
tenantId, IntegrationType.Shopify, ct);
if (integration is null)
return null;
var accessToken = _keyVaultClient is not null
? (await _keyVaultClient.GetSecretAsync(
$"shopify-{tenantId}", cancellationToken: ct)).Value.Value
: DecryptSecret(integration.EncryptedSecretKey);
var credentials = new ShopifyCredentials
{
ShopDomain = integration.ExternalId,
AccessToken = accessToken,
WebhookSecret = integration.WebhookSecret
};
_cache.Set(cacheKey, credentials, TimeSpan.FromMinutes(15));
return credentials;
}
private static string DecryptSecret(string encryptedValue)
{
// Implementation depends on encryption strategy
// Could use DPAPI, AES, etc.
throw new NotImplementedException(
"Implement based on your encryption strategy");
}
}
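The DecryptSecret placeholder above is intentionally left open. As one illustrative option, here is a sketch assuming secrets were sealed with AES-256-GCM and stored as Base64(nonce || tag || ciphertext), with the key supplied from outside the database (environment variable, mounted secret, etc.). All names and the storage layout are assumptions, not the platform's mandated scheme:

```csharp
// File: src/POS.Infrastructure/Security/AesGcmSecretProtector.cs (hypothetical)
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class AesGcmSecretProtector
{
    // Layout: 12-byte nonce || 16-byte tag || ciphertext, Base64-encoded
    public static string Encrypt(string plaintext, byte[] key)
    {
        var nonce = RandomNumberGenerator.GetBytes(12);
        var plain = Encoding.UTF8.GetBytes(plaintext);
        var cipher = new byte[plain.Length];
        var tag = new byte[16];
        using var aes = new AesGcm(key, tagSizeInBytes: 16); // .NET 8 constructor
        aes.Encrypt(nonce, plain, cipher, tag);
        return Convert.ToBase64String(nonce.Concat(tag).Concat(cipher).ToArray());
    }

    public static string Decrypt(string encryptedValue, byte[] key)
    {
        var blob = Convert.FromBase64String(encryptedValue);
        var nonce = blob[..12];
        var tag = blob[12..28];
        var cipher = blob[28..];
        var plain = new byte[cipher.Length];
        using var aes = new AesGcm(key, tagSizeInBytes: 16);
        aes.Decrypt(nonce, cipher, plain, tag); // throws if the blob was tampered with
        return Encoding.UTF8.GetString(plain);
    }
}
```

Key rotation and per-tenant key derivation are deliberately out of scope here; when available, the Key Vault path above is the preferred storage.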
13.5 Integration Error Codes (ERR-6xxx)
All integration-related errors use the ERR-6xxx range per the BRD error code convention. See Chapter 05 (Architecture Components) for the full Module 6: Integrations & External Systems architecture, including Amazon SP-API, Google Merchant API, and enhanced Shopify patterns.
| Code Range | Domain | Description |
|---|---|---|
| ERR-6001–6009 | General | Cross-integration errors (auth failures, timeout, config missing) |
| ERR-6010–6029 | Shopify | Webhook verification, product sync, inventory sync, order import |
| ERR-6030–6049 | Amazon SP-API | Feed submission, listing sync, order pull, inventory push |
| ERR-6050–6069 | Google Merchant | Product feed, price update, availability sync |
| ERR-6070–6089 | Payment Processors | Stripe/Square terminal errors, batch settlement, refund failures |
| ERR-6090–6099 | Email/Shipping | Notification delivery, label generation, tracking sync |
Common Integration Error Constants
// File: src/POS.Domain/Errors/IntegrationErrors.cs
namespace POS.Domain.Errors;
public static class IntegrationErrors
{
// General (ERR-6001–6009)
public const string AuthFailed = "ERR-6001"; // OAuth/API key authentication failed
public const string Timeout = "ERR-6002"; // External API call timed out
public const string CircuitOpen = "ERR-6003"; // Circuit breaker is open
public const string ConfigMissing = "ERR-6004"; // Integration not configured for tenant
public const string RateLimited = "ERR-6005"; // External API rate limit exceeded
public const string MappingFailed = "ERR-6006"; // Data mapping/transform error
public const string WebhookVerifyFailed = "ERR-6007"; // Webhook signature verification failed
public const string DuplicateSync = "ERR-6008"; // Idempotency check — already synced
public const string ChannelDisabled = "ERR-6009"; // Sales channel disabled for tenant
// Shopify (ERR-6010–6029)
public const string ShopifyWebhookInvalid = "ERR-6010";
public const string ShopifyProductSyncFailed = "ERR-6011";
public const string ShopifyInventorySyncFailed = "ERR-6012";
public const string ShopifyOrderImportFailed = "ERR-6013";
public const string ShopifyFulfillmentFailed = "ERR-6014";
public const string ShopifyGraphQLError = "ERR-6015";
// Amazon SP-API (ERR-6030–6049)
public const string AmazonFeedFailed = "ERR-6030";
public const string AmazonListingSyncFailed = "ERR-6031";
public const string AmazonOrderPullFailed = "ERR-6032";
public const string AmazonInventoryPushFailed = "ERR-6033";
// Google Merchant (ERR-6050–6069)
public const string GoogleProductFeedFailed = "ERR-6050";
public const string GooglePriceUpdateFailed = "ERR-6051";
public const string GoogleAvailabilitySyncFailed = "ERR-6052";
// Payment Processors (ERR-6070–6089)
public const string StripeTerminalError = "ERR-6070";
public const string StripeSettlementFailed = "ERR-6071";
public const string SquareTerminalError = "ERR-6075";
public const string SquareSettlementFailed = "ERR-6076";
public const string PaymentRefundFailed = "ERR-6080";
public const string BatchCloseFailed = "ERR-6081";
}
Summary
This chapter covered complete integration patterns:
- Shopify Integration: Webhooks for orders, inventory, and products with HMAC verification
- Payment Processing: PCI-DSS compliant patterns with Stripe Terminal and Square
- Token-only storage: Never store card numbers, only payment tokens
- External API resilience: Retry policies and circuit breakers with Polly
- Credential management: Secure storage with caching
- Integration error codes: ERR-6xxx range covering all external system failures
See also: Chapter 05 (Architecture Components) defines the complete Module 6: Integrations & External Systems architecture with Amazon SP-API, Google Merchant API, enhanced Shopify integration, and the strictest-rule-wins cross-platform validation strategy.
Next: Part V: Frontend - Chapter 14: POS Client covers frontend implementation with the POS Client application.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | IV - Backend |
| Chapter | 13 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 14: POS Client Application
The Point of Sale Terminal
The POS Client is the primary interface for retail associates. It must be fast, reliable, and work offline when network connectivity is lost. This chapter provides complete specifications for building a production-grade POS terminal.
14.1 Technology Stack
| Component | Technology | Rationale |
|---|---|---|
| Framework | .NET MAUI or Blazor Hybrid | Cross-platform, native performance |
| Local Database | SQLite | Embedded, zero-config, reliable |
| State Management | Fluxor or custom MVVM | Predictable state changes |
| Hardware API | Platform Invoke (P/Invoke) | Direct hardware access |
| Sync Engine | Custom HTTP + SignalR | Real-time + batch sync |
14.2 Architecture Overview
┌─────────────────────────────────────────────────────────────────────┐
│ POS CLIENT APPLICATION │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Views │ │ ViewModels │ │ Services │ │ Hardware │ │
│ │ (XAML/ │◄─┤ (State + │◄─┤ (Business │◄─┤ Drivers │ │
│ │ Blazor) │ │ Commands) │ │ Logic) │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │ │
│ └────────────────┴────────────────┴────────────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ Local SQLite │ │
│ │ Database │ │
│ └────────┬────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ Sync Engine │ │
│ │ (Online/Queue) │ │
│ └────────┬────────┘ │
└──────────────────────────────────┬──────────────────────────────────┘
│
┌────────▼────────┐
│ Central API │
│ (When Online) │
└─────────────────┘
14.3 Screen Specifications
Screen 1: Login Screen
Purpose: Authenticate retail associates with fast PIN entry.
Route: /login
╔════════════════════════════════════════════════════════════════════╗
║ ║
║ ┌──────────────────────────┐ ║
║ │ │ ║
║ │ STORE LOGO │ ║
║ │ [128x128] │ ║
║ │ │ ║
║ └──────────────────────────┘ ║
║ ║
║ NEXUS CLOTHING ║
║ Greenbrier Mall (GM) ║
║ ║
║ ┌──────────────────────────┐ ║
║ │ │ ║
║ │ Enter Employee PIN │ ║
║ │ │ ║
║ │ ● ● ● ○ ○ ○ │ ║
║ │ │ ║
║ └──────────────────────────┘ ║
║ ║
║ ┌─────┬─────┬─────┐ ║
║ │ 1 │ 2 │ 3 │ ║
║ ├─────┼─────┼─────┤ ║
║ │ 4 │ 5 │ 6 │ ║
║ ├─────┼─────┼─────┤ ║
║ │ 7 │ 8 │ 9 │ ║
║ ├─────┼─────┼─────┤ ║
║ │ CLR │ 0 │ ENT │ ║
║ └─────┴─────┴─────┘ ║
║ ║
║ [Manager Override] ║
║ ║
║ ───────────────────────────────────────────────────────────── ║
║ Status: ● Online | Last Sync: 2 min ago | v1.2.0 ║
╚════════════════════════════════════════════════════════════════════╝
Components:
| Component | Specification |
|---|---|
| Logo | 128x128px, tenant-specific |
| Store Name | 24px, Bold, Primary color |
| Location | 14px, Secondary text |
| PIN Display | 6 circles, filled = entered |
| Numpad | 80x80px buttons, touch-friendly |
| Clear (CLR) | Resets PIN entry |
| Enter (ENT) | Submits PIN for validation |
| Manager Override | Opens manager auth dialog |
| Status Bar | Connection, sync, version |
Behavior:
- PIN validated locally first (hash comparison)
- Failed attempts: 3 max before lockout
- Lockout duration: 5 minutes (configurable)
- Idle timeout: 30 seconds of inactivity returns to the login screen
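The local validation path above can be sketched as follows. This is a minimal illustration assuming PIN hashes are synced to the terminal as PBKDF2 outputs with a per-user salt; the class name, iteration count, and members are hypothetical:

```csharp
// File: src/POS.Client/Services/LocalPinValidator.cs (hypothetical)
using System;
using System.Security.Cryptography;

public class LocalPinValidator
{
    private const int MaxAttempts = 3;                                           // lockout after 3 failures
    private static readonly TimeSpan LockoutDuration = TimeSpan.FromMinutes(5);  // configurable

    private int _failedAttempts;
    private DateTimeOffset? _lockedUntil;

    public bool IsLockedOut =>
        _lockedUntil is { } until && DateTimeOffset.UtcNow < until;

    public bool Validate(string enteredPin, byte[] storedHash, byte[] salt)
    {
        if (IsLockedOut) return false;

        // Derive a candidate hash and compare in constant time
        var candidate = Rfc2898DeriveBytes.Pbkdf2(
            enteredPin, salt, iterations: 100_000,
            HashAlgorithmName.SHA256, outputLength: storedHash.Length);

        if (CryptographicOperations.FixedTimeEquals(candidate, storedHash))
        {
            _failedAttempts = 0; // success resets the counter
            return true;
        }

        if (++_failedAttempts >= MaxAttempts)
            _lockedUntil = DateTimeOffset.UtcNow + LockoutDuration;
        return false;
    }
}
```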
Screen 2: Main Sale Screen
Purpose: Primary transaction interface for ringing up sales.
Route: /sale
╔════════════════════════════════════════════════════════════════════╗
║ NEXUS CLOTHING - GM Sarah M. 12/29/2024 2:45 PM ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌─────────────────────────────────────────────────────────────┐ ║
║ │ [Scan Item or Enter SKU...] [SEARCH] │ ║
║ └─────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────┐ ┌───────────────────────┐ ║
║ │ CART (3) │ │ TOTALS │ ║
║ ├──────────────────────────────────┤ ├───────────────────────┤ ║
║ │ │ │ │ ║
║ │ 1. Galaxy V-Neck Tee $29 │ │ Subtotal: $104.00 │ ║
║ │ Size: M | Color: Navy │ │ │ ║
║ │ Qty: 2 [-] [+] $58 │ │ Discount: -$10.00 │ ║
║ │ [DEL] │ │ │ ║
║ │ ─────────────────────────────────│ │ Tax (6%): $5.64 │ ║
║ │ 2. Slim Fit Chinos $46 │ │ │ ║
║ │ Size: 32 | Color: Khaki │ │ ───────────────────── │ ║
║ │ Qty: 1 [-] [+] $46 │ │ │ ║
║ │ [DEL] │ │ TOTAL: $99.64 │ ║
║ │ ─────────────────────────────────│ │ │ ║
║ │ │ │ │ ║
║ │ │ └───────────────────────┘ ║
║ │ │ ║
║ │ │ ┌───────────────────────┐ ║
║ │ │ │ [DISCOUNT] [HOLD] │ ║
║ │ │ │ │ ║
║ │ │ │ [CUSTOMER] [VOID] │ ║
║ │ │ │ │ ║
║ └──────────────────────────────────┘ │ ┌───────────────────┐ │ ║
║ │ │ │ │ ║
║ ┌──────────────────────────────────┐ │ │ PAY │ │ ║
║ │ Customer: John Smith │ │ │ $99.64 │ │ ║
║ │ Loyalty: Gold (2,450 pts) │ │ │ │ │ ║
║ │ [Remove Customer] │ │ └───────────────────┘ │ ║
║ └──────────────────────────────────┘ └───────────────────────┘ ║
║ ║
╠════════════════════════════════════════════════════════════════════╣
║ [F1 Help] [F2 Lookup] [F3 Returns] [F4 Reports] ● Online Rcpt#42 ║
╚════════════════════════════════════════════════════════════════════╝
Layout Regions:
| Region | Width | Content |
|---|---|---|
| Header | 100% | Store, associate, date/time |
| Search Bar | 100% | SKU/barcode entry with search |
| Cart Panel | 60% | Line items with quantity controls |
| Totals Panel | 40% | Running totals, discounts, tax |
| Action Buttons | 40% | Discount, Hold, Customer, Void |
| Pay Button | 40% | Large, prominent payment trigger |
| Customer Info | 60% | Attached customer details |
| Footer | 100% | Function keys, status, receipt # |
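The Totals panel arithmetic in the mockup applies the transaction discount before tax. A minimal sketch (the 6% rate is the mockup's example; real rates come from the tenant's tax configuration):

```csharp
using System;

public static class TotalsCalculator
{
    // Discount reduces the taxable base; tax is rounded to cents for the receipt
    public static (decimal Taxable, decimal Tax, decimal Total) Compute(
        decimal subtotal, decimal discount, decimal taxRate)
    {
        var taxable = subtotal - discount;
        var tax = Math.Round(taxable * taxRate, 2, MidpointRounding.AwayFromZero);
        return (taxable, tax, taxable + tax);
    }
}
```

Matching the mockup: ($104.00 - $10.00) * 6% = $5.64 tax, for a $99.64 total.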
Cart Item Layout:
┌─────────────────────────────────────────────────────────────┐
│ 1. Galaxy V-Neck Tee $29 │
│ Size: M | Color: Navy │
│ Qty: 2 [-] [+] $58 │
│ [DEL] │
└─────────────────────────────────────────────────────────────┘
Keyboard Shortcuts:
| Key | Action |
|---|---|
| F1 | Help overlay |
| F2 | Product lookup |
| F3 | Returns mode |
| F4 | Quick reports |
| F5 | Price check |
| F8 | Suspend sale |
| F9 | Recall sale |
| F12 | Manager functions |
| Enter | Add scanned item |
| Esc | Cancel current action |
Screen 3: Customer Lookup
Purpose: Find or create customer records for loyalty tracking.
Route: /customer-lookup (Modal overlay)
╔════════════════════════════════════════════════════════════════════╗
║ CUSTOMER LOOKUP [X] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ [Search by name, phone, email, or loyalty #...] │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ RESULTS (3 found) │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ │ ║
║ │ ○ John Smith │ ║
║ │ Phone: (555) 123-4567 │ ║
║ │ Email: john.smith@email.com │ ║
║ │ Loyalty: Gold (2,450 pts) | Last Visit: 12/15/2024 │ ║
║ │ │ ║
║ │ ─────────────────────────────────────────────────────────── │ ║
║ │ │ ║
║ │ ○ Johnny Smith Jr. │ ║
║ │ Phone: (555) 234-5678 │ ║
║ │ Email: johnny.jr@email.com │ ║
║ │ Loyalty: Silver (890 pts) | Last Visit: 11/20/2024 │ ║
║ │ │ ║
║ │ ─────────────────────────────────────────────────────────── │ ║
║ │ │ ║
║ │ ○ Jonathan Smithson │ ║
║ │ Phone: (555) 345-6789 │ ║
║ │ Email: j.smithson@work.com │ ║
║ │ Loyalty: None | Last Visit: 10/05/2024 │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [NEW CUSTOMER] [SELECT] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
New Customer Form:
╔════════════════════════════════════════════════════════════════════╗
║ NEW CUSTOMER [X] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ First Name * Last Name * ║
║ ┌──────────────────┐ ┌──────────────────────────────────────────┐ ║
║ │ John │ │ Smith │ ║
║ └──────────────────┘ └──────────────────────────────────────────┘ ║
║ ║
║ Phone * Email ║
║ ┌──────────────────────────┐ ┌────────────────────────────────┐ ║
║ │ (555) 123-4567 │ │ john.smith@email.com │ ║
║ └──────────────────────────┘ └────────────────────────────────┘ ║
║ ║
║ Address Line 1 ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ 123 Main Street │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ City State ZIP ║
║ ┌────────────────────┐ ┌─────────┐ ┌───────────────────────────┐║
║ │ Virginia Beach │ │ VA ▼ │ │ 23451 │║
║ └────────────────────┘ └─────────┘ └───────────────────────────┘║
║ ║
║ [ ] Enroll in Loyalty Program ║
║ [ ] Subscribe to email marketing ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [SAVE] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
Screen 4: Returns Processing
Purpose: Process merchandise returns and exchanges.
Route: /returns
╔════════════════════════════════════════════════════════════════════╗
║ RETURNS PROCESSING [Exit Return]║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ STEP 1: FIND ORIGINAL TRANSACTION ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ Receipt #: [________________] OR [Lookup by Customer] │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ ORIGINAL TRANSACTION #12345 12/20/2024 │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ Customer: John Smith │ ║
║ │ Payment: Visa ****4242 │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ │ ║
║ │ [x] 1. Galaxy V-Neck Tee (M, Navy) $29.00 │ ║
║ │ Reason: [Wrong Size ▼] │ ║
║ │ Condition: [Good - Resellable ▼] │ ║
║ │ │ ║
║ │ [ ] 2. Slim Fit Chinos (32, Khaki) $46.00 │ ║
║ │ │ ║
║ │ [ ] 3. Leather Belt (M) $35.00 │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────┐ ┌───────────────────────────────────┐║
║ │ RETURN SUMMARY │ │ REFUND TO │║
║ ├────────────────────────┤ ├───────────────────────────────────┤║
║ │ Items: 1 │ │ ○ Original Payment (Visa ****42) │║
║ │ Subtotal: $29.00 │ │ ○ Store Credit │║
║ │ Tax Refund: $1.74 │ │ ○ Cash │║
║ │ ────────────────── │ │ ○ Exchange (Add to New Sale) │║
║ │ TOTAL: $30.74 │ │ │║
║ └────────────────────────┘ └───────────────────────────────────┘║
║ ║
║ Manager Approval Required: [ ] Over $100 [ ] No Receipt ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [SCAN RETURN ITEMS] [PROCESS RETURN] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
Return Reasons (Configurable):
- Wrong Size
- Wrong Color
- Defective
- Changed Mind
- Gift Return
- Price Adjustment
- Other
Return Conditions:
- Good - Resellable
- Damaged - Cannot Resell
- Missing Tags - Markdown
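Since both lists are configurable per tenant, they model naturally as seed data rather than hard-coded enums. A sketch (the codes and the Resellable flag are assumptions for illustration):

```csharp
public record ReturnReason(string Code, string Label);
public record ReturnCondition(string Code, string Label, bool Resellable);

public static class ReturnDefaults
{
    // Default seed rows; tenants can add or disable entries
    public static readonly ReturnReason[] Reasons =
    {
        new("WRONG_SIZE", "Wrong Size"),
        new("WRONG_COLOR", "Wrong Color"),
        new("DEFECTIVE", "Defective"),
        new("CHANGED_MIND", "Changed Mind"),
        new("GIFT_RETURN", "Gift Return"),
        new("PRICE_ADJ", "Price Adjustment"),
        new("OTHER", "Other")
    };

    public static readonly ReturnCondition[] Conditions =
    {
        new("GOOD", "Good - Resellable", Resellable: true),
        new("DAMAGED", "Damaged - Cannot Resell", Resellable: false),
        new("NO_TAGS", "Missing Tags - Markdown", Resellable: true)
    };
}
```

The Resellable flag would drive whether a returned unit goes back into sellable inventory or into a damage/markdown bucket.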
Screen 5: Inventory Lookup
Purpose: Check stock levels across all locations.
Route: /inventory (Modal overlay)
╔════════════════════════════════════════════════════════════════════╗
║ INVENTORY LOOKUP [X] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ [Search by SKU, name, or scan barcode...] [SEARCH]│ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Galaxy V-Neck Tee $29.00 │ ║
║ │ SKU: NXJ1078-NAV-M │ ║
║ │ ────────────────────────────────────────────────────────────│ ║
║ │ │ ║
║ │ VARIANTS: │ ║
║ │ ┌────────────┬─────┬─────┬─────┬─────┬─────┬───────┐ │ ║
║ │ │ Size/Color │ HQ │ GM │ HM │ LM │ NM │ TOTAL │ │ ║
║ │ ├────────────┼─────┼─────┼─────┼─────┼─────┼───────┤ │ ║
║ │ │ S / Navy │ 12 │ 3 │ 2 │ 4 │ 1 │ 22 │ │ ║
║ │ │ M / Navy │ 15 │ 5*│ 3 │ 2 │ 0 │ 25 │ │ ║
║ │ │ L / Navy │ 8 │ 4 │ 1 │ 3 │ 2 │ 18 │ │ ║
║ │ │ XL / Navy │ 4 │ 2 │ 0 │ 1 │ 1 │ 8 │ │ ║
║ │ │ S / Black │ 10 │ 2 │ 3 │ 2 │ 2 │ 19 │ │ ║
║ │ │ M / Black │ 18 │ 6 │ 4 │ 5 │ 3 │ 36 │ │ ║
║ │ └────────────┴─────┴─────┴─────┴─────┴─────┴───────┘ │ ║
║ │ │ ║
║ │ * Current Location (GM) │ ║
║ │ │ ║
║ │ Last Updated: 12/29/2024 2:30 PM │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [REQUEST TRANSFER] [PRICE CHECK] [CLOSE] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
Screen 6: End of Day
Purpose: Close register, balance cash, generate reports.
Route: /end-of-day
╔════════════════════════════════════════════════════════════════════╗
║ END OF DAY - Close Register [Cancel] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ Register: REGISTER-01 (GM) Date: 12/29/2024 ║
║ Cashier: Sarah Miller Shift: 9:00 AM - 5:30 PM ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ SALES SUMMARY │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ │ ║
║ │ Total Transactions: 47 │ ║
║ │ Gross Sales: $3,245.67 │ ║
║ │ Returns: -$125.00 │ ║
║ │ Discounts: -$89.50 │ ║
║ │ ────────────────────────────────────── │ ║
║ │ Net Sales: $3,031.17 │ ║
║ │ Tax Collected: $181.87 │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ CASH COUNT │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ │ ║
║ │ Starting Cash: $200.00 │ ║
║ │ Cash Sales: $845.50 │ ║
║ │ Cash Returns: -$45.00 │ ║
║ │ ────────────────────────────────────── │ ║
║ │ Expected Cash: $1,000.50 │ ║
║ │ │ ║
║ │ Counted Cash: [_______________] <-- Enter amount │ ║
║ │ │ ║
║ │ Variance: $___.__ (Calculates automatically) │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ PAYMENT BREAKDOWN │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ Cash: $845.50 (28 trans) │ ║
║ │ Credit Card: $1,856.32 (15 trans) │ ║
║ │ Debit Card: $254.35 (3 trans) │ ║
║ │ Store Credit: $75.00 (1 trans) │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [PRINT REPORT] [RECOUNT] [CLOSE REGISTER] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
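The "calculates automatically" variance on this screen is simple arithmetic, sketched here (names hypothetical):

```csharp
public static class CashReconciliation
{
    public static decimal ExpectedCash(decimal startingCash, decimal cashSales, decimal cashReturns)
        => startingCash + cashSales - cashReturns;

    // Negative variance = drawer is short; positive = over
    public static decimal Variance(decimal countedCash, decimal expectedCash)
        => countedCash - expectedCash;
}
```

Matching the mockup: $200.00 + $845.50 - $45.00 = $1,000.50 expected.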
14.4 Payment Screen
Purpose: Process various payment methods.
╔════════════════════════════════════════════════════════════════════╗
║ PAYMENT [X] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ AMOUNT DUE: $99.64 ║
║ ║
║ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ║
║ │ │ │ │ │ │ ║
║ │ [CREDIT] │ │ [DEBIT] │ │ [CASH] │ ║
║ │ CARD │ │ CARD │ │ │ ║
║ │ │ │ │ │ │ ║
║ └─────────────────┘ └─────────────────┘ └─────────────────┘ ║
║ ║
║ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ║
║ │ │ │ │ │ │ ║
║ │ [GIFT] │ │ [STORE] │ │ [SPLIT] │ ║
║ │ CARD │ │ CREDIT │ │ PAYMENT │ ║
║ │ │ │ │ │ │ ║
║ └─────────────────┘ └─────────────────┘ └─────────────────┘ ║
║ ║
║ ═══════════════════════════════════════════════════════════════ ║
║ ║
║ CASH QUICK AMOUNTS: ║
║ ║
║ ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ ║
║ │ $20 │ │ $50 │ │ $100 │ │ $120 │ │ EXACT │ │ OTHER │ ║
║ └───────┘ └───────┘ └───────┘ └───────┘ └───────┘ └───────┘ ║
║ ║
║ Amount Tendered: $________ ║
║ Change Due: $________ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [PROCESS] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
14.5 State Management
Application State Model
public class PosState
{
// Authentication
public AuthState Auth { get; set; }
// Current Transaction
public TransactionState Transaction { get; set; }
// Cart Items
public List<CartItem> Cart { get; set; }
// Customer
public CustomerState Customer { get; set; }
// Register
public RegisterState Register { get; set; }
// Sync Status
public SyncState Sync { get; set; }
// UI State
public UiState Ui { get; set; }
}
public class TransactionState
{
public string TransactionId { get; set; }
public TransactionType Type { get; set; } // Sale, Return, Exchange
public TransactionStatus Status { get; set; }
public decimal Subtotal { get; set; }
public decimal DiscountTotal { get; set; }
public decimal TaxTotal { get; set; }
public decimal GrandTotal { get; set; }
public List<PaymentEntry> Payments { get; set; }
public decimal BalanceDue { get; set; }
}
State Actions
| Action | Description |
|---|---|
| AddToCart | Add item with quantity |
| UpdateQuantity | Change line item quantity |
| RemoveFromCart | Delete line item |
| ApplyDiscount | Add transaction/line discount |
| AttachCustomer | Link customer to sale |
| ProcessPayment | Record payment entry |
| VoidTransaction | Cancel entire transaction |
| SuspendSale | Park sale for later |
| RecallSale | Resume suspended sale |
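The actions above are pure state transitions over the cart and transaction totals. A minimal sketch of AddToCart plus total recomputation, with types simplified from the state model above (flat per-line tax is an assumption here; real tax calculation has its own rules):

```csharp
using System;
using System.Collections.Generic;

public record CartItem(string Sku, string Name, decimal UnitPrice, int Quantity, decimal TaxRate);

public static class CartReducer
{
    // AddToCart: merge into an existing line when the SKU is already in the cart.
    public static void AddToCart(List<CartItem> cart, CartItem item)
    {
        var i = cart.FindIndex(c => c.Sku == item.Sku);
        if (i >= 0)
            cart[i] = cart[i] with { Quantity = cart[i].Quantity + item.Quantity };
        else
            cart.Add(item);
    }

    // Recompute Subtotal / TaxTotal / GrandTotal after any cart action.
    public static (decimal Subtotal, decimal Tax, decimal Grand) Totals(IEnumerable<CartItem> cart)
    {
        decimal subtotal = 0m, tax = 0m;
        foreach (var c in cart)
        {
            var line = c.UnitPrice * c.Quantity;
            subtotal += line;
            tax += Math.Round(line * c.TaxRate, 2, MidpointRounding.AwayFromZero);
        }
        return (subtotal, tax, subtotal + tax);
    }
}
```

UpdateQuantity and RemoveFromCart follow the same pattern; every mutation ends by recomputing totals so TransactionState.GrandTotal never drifts from the cart contents.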
14.6 Sync Service Design
Sync Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ SYNC ENGINE │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ OUTBOUND │ │ INBOUND │ │ CONFLICT │ │
│ │ QUEUE │────▶│ HANDLER │────▶│ RESOLVER │ │
│ │ (SQLite) │ │ (API Sync) │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ LOCAL SQLITE DATABASE │ │
│ │ - Transactions (pending sync) │ │
│ │ - Products (cached catalog) │ │
│ │ - Customers (cached records) │ │
│ │ - Inventory (last known levels) │ │
│ │ - Sync metadata (timestamps, versions) │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
Sync Priorities
| Priority | Data Type | Frequency | Direction |
|---|---|---|---|
| 1 (Critical) | Transactions | Immediate | Outbound |
| 2 (High) | Inventory Changes | 5 min | Both |
| 3 (Medium) | Customers | 15 min | Both |
| 4 (Low) | Products | 1 hour | Inbound |
| 5 (Batch) | Reports | Daily | Outbound |
Conflict Resolution: When offline transactions sync, inventory conflicts (e.g., stock sold by another terminal) are resolved using the strategies defined in Chapter 05 Section 5.6 (Offline-First Architecture). The POS Client applies partial-fulfillment or last-write-wins depending on the entity type. See also Chapter 04 Section L.10A.1 for CRDT-based conflict resolution patterns.
Offline Queue Schema
CREATE TABLE sync_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
entity_type TEXT NOT NULL, -- 'transaction', 'customer', etc.
entity_id TEXT NOT NULL,
action TEXT NOT NULL, -- 'create', 'update', 'delete'
payload TEXT NOT NULL, -- JSON serialized data
priority INTEGER DEFAULT 5,
retry_count INTEGER DEFAULT 0,
created_at TEXT NOT NULL,
last_attempt TEXT,
status TEXT DEFAULT 'pending' -- 'pending', 'syncing', 'failed', 'synced'
);
CREATE INDEX idx_sync_queue_status ON sync_queue(status, priority);
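A background worker drains this queue in priority order whenever connectivity is available. One way to select the next batch under the schema above (the batch size and the retry ceiling of 5 are illustrative choices, not requirements):

```sql
-- Next batch: highest priority first (1 = critical), oldest first within a band.
-- Items that have failed repeatedly are left for manual review.
SELECT id, entity_type, entity_id, action, payload
FROM sync_queue
WHERE status = 'pending'
  AND retry_count < 5
ORDER BY priority ASC, created_at ASC
LIMIT 50;
```

The worker marks each selected row 'syncing', posts the payload, then either sets 'synced' or increments retry_count and returns the row to 'pending' ('failed' once the ceiling is hit). The idx_sync_queue_status index on (status, priority) supports exactly this access path.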
14.7 Hardware Integration
Receipt Printer
public interface IReceiptPrinter
{
Task<bool> PrintReceiptAsync(Receipt receipt);
Task<bool> OpenCashDrawerAsync();
Task<bool> CutPaperAsync();
Task<PrinterStatus> GetStatusAsync();
}
public class EpsonTM88Printer : IReceiptPrinter
{
    private readonly string _portName;

    public EpsonTM88Printer(string portName) => _portName = portName;

    public async Task<bool> PrintReceiptAsync(Receipt receipt)
    {
        var commands = new List<byte>();
        // Initialize printer
        commands.AddRange(new byte[] { 0x1B, 0x40 });       // ESC @
        // Center align
        commands.AddRange(new byte[] { 0x1B, 0x61, 0x01 }); // ESC a 1
        // Store header (double width/height)
        commands.AddRange(new byte[] { 0x1D, 0x21, 0x11 }); // GS ! 0x11
        commands.AddRange(Encoding.ASCII.GetBytes(receipt.StoreName + "\n"));
        // Reset text size
        commands.AddRange(new byte[] { 0x1D, 0x21, 0x00 }); // GS ! 0x00
        // ... additional formatting
        // Cut paper
        commands.AddRange(new byte[] { 0x1D, 0x56, 0x00 }); // GS V 0
        return await SendToPortAsync(commands.ToArray());
    }

    // OpenCashDrawerAsync, CutPaperAsync, GetStatusAsync, and SendToPortAsync
    // (the raw serial write to _portName) are omitted for brevity.
}
Barcode Scanner
public interface IBarcodeScanner
{
event EventHandler<BarcodeScannedEventArgs> BarcodeScanned;
Task StartListeningAsync();
Task StopListeningAsync();
}
public class HoneywellScanner : IBarcodeScanner
{
    public event EventHandler<BarcodeScannedEventArgs> BarcodeScanned;

    private SerialPort _port;

    public Task StartListeningAsync()
    {
        _port = new SerialPort("COM3", 9600);
        _port.DataReceived += OnDataReceived;
        _port.Open();
        return Task.CompletedTask;
    }

    public Task StopListeningAsync()
    {
        _port?.Close();
        return Task.CompletedTask;
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        // Scanners in serial mode terminate each scan with a newline
        var barcode = _port.ReadLine().Trim();
        BarcodeScanned?.Invoke(this, new BarcodeScannedEventArgs(barcode));
    }
}
Cash Drawer
public interface ICashDrawer
{
Task<bool> OpenAsync();
Task<bool> IsOpenAsync();
}
public class ApgCashDrawer : ICashDrawer
{
    private readonly IReceiptPrinter _printer;

    public ApgCashDrawer(IReceiptPrinter printer) => _printer = printer;

    public async Task<bool> OpenAsync()
    {
        // Most cash drawers open via the printer's kick command
        return await _printer.OpenCashDrawerAsync();
    }

    // IsOpenAsync omitted: drawer-open sensing is model-specific
}
14.8 Local Database Schema
-- Products (cached from central)
CREATE TABLE products (
id TEXT PRIMARY KEY,
sku TEXT NOT NULL UNIQUE,
barcode TEXT,
name TEXT NOT NULL,
description TEXT,
price REAL NOT NULL,
cost REAL,
category_id TEXT,
tax_rate REAL DEFAULT 0,
is_active INTEGER DEFAULT 1,
last_synced TEXT NOT NULL
);
-- Inventory (cached levels)
CREATE TABLE inventory (
product_id TEXT NOT NULL,
location_code TEXT NOT NULL,
quantity INTEGER NOT NULL,
last_synced TEXT NOT NULL,
PRIMARY KEY (product_id, location_code)
);
-- Customers (cached)
CREATE TABLE customers (
id TEXT PRIMARY KEY,
first_name TEXT NOT NULL,
last_name TEXT NOT NULL,
phone TEXT,
email TEXT,
loyalty_tier TEXT,
loyalty_points INTEGER DEFAULT 0,
last_synced TEXT NOT NULL
);
-- Transactions (local first, then synced)
CREATE TABLE transactions (
id TEXT PRIMARY KEY,
transaction_number INTEGER NOT NULL,
type TEXT NOT NULL,
status TEXT NOT NULL,
customer_id TEXT,
associate_id TEXT NOT NULL,
register_id TEXT NOT NULL,
subtotal REAL NOT NULL,
discount_total REAL DEFAULT 0,
tax_total REAL NOT NULL,
grand_total REAL NOT NULL,
created_at TEXT NOT NULL,
completed_at TEXT,
synced_at TEXT,
FOREIGN KEY (customer_id) REFERENCES customers(id)
);
-- Transaction Line Items
CREATE TABLE transaction_items (
id TEXT PRIMARY KEY,
transaction_id TEXT NOT NULL,
product_id TEXT NOT NULL,
sku TEXT NOT NULL,
name TEXT NOT NULL,
quantity INTEGER NOT NULL,
unit_price REAL NOT NULL,
discount REAL DEFAULT 0,
tax_amount REAL NOT NULL,
line_total REAL NOT NULL,
FOREIGN KEY (transaction_id) REFERENCES transactions(id)
);
-- Payments
CREATE TABLE payments (
id TEXT PRIMARY KEY,
transaction_id TEXT NOT NULL,
method TEXT NOT NULL,
amount REAL NOT NULL,
reference TEXT,
created_at TEXT NOT NULL,
FOREIGN KEY (transaction_id) REFERENCES transactions(id)
);
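This schema is sufficient to produce the payment breakdown shown on the register-close screen entirely from the local database. A sketch query; the 'completed' status value and the named parameters are assumptions about the data layer:

```sql
-- Per-method totals for the current shift on one register.
SELECT p.method,
       SUM(p.amount) AS total,
       COUNT(*)      AS transaction_count
FROM payments p
JOIN transactions t ON t.id = p.transaction_id
WHERE t.register_id = :register_id
  AND t.status = 'completed'
  AND t.created_at >= :shift_opened_at
GROUP BY p.method
ORDER BY total DESC;
```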
14.9 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| App Launch | < 3 seconds | Cold start to login screen |
| Item Scan | < 100ms | Barcode to cart display |
| Product Search | < 200ms | Keystroke to results |
| Payment Process | < 2 seconds | Button tap to receipt |
| Offline Switch | Instant | Seamless transition |
| Sync Latency | < 5 seconds | Transaction to central |
14.10 Security Considerations
| Concern | Mitigation |
|---|---|
| PIN Storage | Hashed with bcrypt, salted |
| Local DB | SQLCipher encryption |
| API Tokens | Secure storage (Keychain/DPAPI) |
| PCI Compliance | No card data stored locally |
| Session Timeout | Auto-logout after inactivity |
| Audit Trail | All actions logged with timestamp |
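The table calls for bcrypt-hashed, salted PINs. As a self-contained sketch of the shape of salted PIN verification — substituting PBKDF2 from the .NET standard library for bcrypt, which requires a third-party package — the stored value bundles iteration count, salt, and hash:

```csharp
using System;
using System.Security.Cryptography;

public static class PinHasher
{
    // Hash a PIN with a random per-user salt; store "iterations.salt.hash".
    public static string Hash(string pin, int iterations = 100_000)
    {
        byte[] salt = RandomNumberGenerator.GetBytes(16);
        byte[] hash = Rfc2898DeriveBytes.Pbkdf2(pin, salt, iterations, HashAlgorithmName.SHA256, 32);
        return $"{iterations}.{Convert.ToBase64String(salt)}.{Convert.ToBase64String(hash)}";
    }

    // Recompute with the stored salt and compare in constant time.
    public static bool Verify(string pin, string stored)
    {
        var parts = stored.Split('.');
        int iterations = int.Parse(parts[0]);
        byte[] salt = Convert.FromBase64String(parts[1]);
        byte[] expected = Convert.FromBase64String(parts[2]);
        byte[] actual = Rfc2898DeriveBytes.Pbkdf2(pin, salt, iterations, HashAlgorithmName.SHA256, 32);
        return CryptographicOperations.FixedTimeEquals(actual, expected);
    }
}
```

Because a 4-digit PIN has only 10,000 possibilities, hashing alone is not enough; the session-timeout and audit-trail mitigations in this table matter as much as the hash, and attempt lockout is still required.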
14.11 Summary
The POS Client Application is designed for:
- Speed: Sub-second response times for all common operations
- Reliability: Full offline capability with automatic sync
- Usability: Touch-friendly, keyboard shortcuts, minimal training
- Security: PIN auth, encrypted storage, audit logging
- Integration: Hardware support for printers, scanners, drawers
Cross-Reference: For detailed offline conflict resolution logic, see Chapter 05 Section 1.16.3.
Next: Chapter 15: Tenant Admin Portal covers the Merchant Dashboard.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | V - Frontend |
| Chapter | 14 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 15: Tenant Admin Portal (Merchant Dashboard)
The Back Office Command Center for Merchants
Architecture Note: RapOS uses a three-portal architecture based on industry best practices from leading POS vendors (Square, Toast, Stripe). See the Three-Portal Strategy section for the complete picture.
The Tenant Admin Portal (app.{domain}) is the web-based management interface for tenant users: store managers, regional managers, and tenant administrators. It gives them comprehensive control over their business's inventory, products, employees, and reporting.
This is NOT the Platform Admin Portal; that is a separate internal tool NexusDenim team members use to manage the multi-tenant platform itself.
15.1 Technology Stack
| Component | Technology | Rationale |
|---|---|---|
| Framework | Blazor Server | Real-time updates, shared .NET codebase |
| Styling | Custom CSS + Bootstrap 5 | Consistent design system |
| State | Blazor component state + services | Simple, reactive |
| Real-time | SignalR (built-in) | Live dashboard updates |
| Charts | Chart.js or ApexCharts | Interactive visualizations |
15.2 Architecture Overview
┌─────────────────────────────────────────────────────────────────────┐
│ ADMIN PORTAL (Blazor Server) │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ HEADER ││
│ │ [Logo] [Breadcrumb] [Notifications] [Profile] ││
│ └─────────────────────────────────────────────────────────────────┘│
│ ┌────────────┬────────────────────────────────────────────────────┐│
│ │ │ ││
│ │ SIDEBAR │ CONTENT AREA ││
│ │ │ ││
│ │ Dashboard │ ┌──────────────────────────────────────────┐ ││
│ │ Inventory │ │ │ ││
│ │ Products │ │ Page Content │ ││
│ │ Employees │ │ │ ││
│ │ Reports │ │ │ ││
│ │ Settings │ └──────────────────────────────────────────┘ ││
│ │ │ ││
│ └────────────┴────────────────────────────────────────────────────┘│
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ STATUS BAR ││
│ └─────────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────────┘
15.3 Navigation Structure
ADMIN PORTAL
│
├── Dashboard (/dashboard)
│ ├── KPIs
│ ├── Alerts
│ └── Activity Feed
│
├── Inventory (/inventory)
│ ├── Stock Levels (/inventory/levels)
│ ├── Transfers (/inventory/transfers)
│ │ ├── Create Transfer
│ │ └── Transfer History
│ ├── Counts (/inventory/counts)
│ │ ├── New Count
│ │ ├── In Progress
│ │ └── Completed
│ └── Adjustments (/inventory/adjustments)
│
├── Products (/products)
│ ├── Catalog (/products/catalog)
│ │ ├── Product List
│ │ ├── Product Detail
│ │ └── Add/Edit Product
│ ├── Categories (/products/categories)
│ ├── Pricing (/products/pricing)
│ └── Import/Export (/products/import)
│
├── Employees (/employees)
│ ├── Users (/employees/users)
│ ├── Roles (/employees/roles)
│ ├── Schedules (/employees/schedules)
│ └── Performance (/employees/performance)
│
├── Reports (/reports)
│ ├── Sales (/reports/sales)
│ ├── Inventory (/reports/inventory)
│ ├── RFID Analytics (/reports/rfid) ← Feature-flagged
│ │ ├── Scan Sessions
│ │ ├── Tag Lifecycle
│ │ └── Reconciliation
│ ├── Performance (/reports/performance)
│ └── Custom (/reports/custom)
│
├── Devices (/devices) ← NEW section
│ ├── POS Terminals (/devices/pos)
│ └── RFID Scanners (/devices/rfid) ← Feature-flagged
│ ├── Device List
│ ├── Claim Codes
│ └── Activity Log
│
└── Settings (/settings)
├── Locations (/settings/locations)
├── Integrations (/settings/integrations)
├── RFID Configuration (/settings/rfid) ← Feature-flagged
│ ├── Devices (/settings/rfid/devices)
│ ├── Printers (/settings/rfid/printers)
│ ├── Tag Config (/settings/rfid/tags)
│ └── Templates (/settings/rfid/templates)
├── System (/settings/system)
└── Audit Log (/settings/audit)
15.4 Screen Specifications
Screen 1: Dashboard
Purpose: Executive overview with KPIs, alerts, and activity monitoring.
Route: /dashboard
╔════════════════════════════════════════════════════════════════════════════════╗
║ ADMIN PORTAL [Bell] [?] [Admin User ▼] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ │ ║
║ Dashbrd │ DASHBOARD Today [Date Range ▼] ║
║ │ ║
║ ─────── │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ║
║ │ │ TODAY'S │ │ REVENUE │ │ ITEMS │ │ AVG ORDER │ ║
║ INVNTRY │ │ SALES │ │ │ │ SOLD │ │ VALUE │ ║
║ Levels │ │ │ │ │ │ │ │ │ ║
║ Transfr│ │ $12,450 │ │ $45,230 │ │ 423 │ │ $78.50 │ ║
║ Counts │ │ +12% ▲ │ │ +8% ▲ │ │ +15% ▲ │ │ +3% ▲ │ ║
║ Adjust │ └────────────┘ └────────────┘ └────────────┘ └────────────┘ ║
║ │ ║
║ ─────── │ ┌────────────────────────────────┐ ┌──────────────────────────────┐║
║ │ │ SALES TREND (Last 7 Days) │ │ ALERTS (5) │║
║ PRODUCT │ │ │ ├──────────────────────────────┤║
║ Catalog│ │ ╱╲ ╱╲ │ │ [!] Low Stock: NXJ1078 ││
║ Categry│ │ ╱╲ ╱ ╲ ╱ ╲ │ │ Only 3 units at GM ││
║ Pricing│ │ ╱ ╲╱ ╲ ╱ ╲ │ │ │║
║ Import │ │ ╱ ╲╱ ╲ │ │ [!] Price Mismatch: SKU-042 ││
║ │ │ ╱ ╲_ │ │ Shopify: $29, POS: $32 ││
║ ─────── │ │ │ │ │║
║ │ │ Mon Tue Wed Thu Fri Sat Sun │ │ [i] Transfer #1234 Ready ││
║ EMPLOYE │ └────────────────────────────────┘ │ From HQ to GM (5 items) ││
║ Users │ │ │║
║ Roles │ ┌────────────────────────────────┐ │ [i] Count #567 Pending ││
║ Sched │ │ STORE PERFORMANCE │ │ GM needs review ││
║ Perform│ ├────────────────────────────────┤ │ │║
║ │ │ Location │ Sales │ Trans │ % │ │ [!] Register offline: NM-02 ││
║ ─────── │ │───────────┼────────┼───────┼───│ │ Last seen: 15 min ago ││
║ │ │ GM │ $4,230 │ 47 │32%│ └──────────────────────────────┘║
║ REPORTS │ │ HM │ $3,890 │ 42 │29%│ ║
║ Sales │ │ LM │ $2,980 │ 35 │22%│ ┌──────────────────────────────┐║
║ Invntry│ │ NM │ $2,130 │ 28 │16%│ │ RECENT ACTIVITY │║
║ Perform│ └────────────────────────────────┘ ├──────────────────────────────┤║
║ Custom │ │ 2:45 PM - Sale #4521 ($89) ││
║ │ │ 2:42 PM - Return processed ││
║ ─────── │ │ 2:38 PM - New customer added ││
║ │ │ 2:35 PM - Inventory adjusted ││
║ SETTING │ │ 2:30 PM - Transfer completed ││
║ Locatns│ │ │║
║ Integr │ │ [View All Activity] │║
║ System │ └──────────────────────────────┘║
║ Audit │ ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ Connected: 4 registers | Last sync: 30 sec ago | System: Healthy ║
╚════════════════════════════════════════════════════════════════════════════════╝
Dashboard Components:
| Component | Specification |
|---|---|
| KPI Cards (4) | Stat value, trend arrow, percentage change |
| Sales Chart | Line chart, 7-day trend, interactive hover |
| Alerts Panel | Priority-sorted, color-coded, actionable |
| Store Table | Sortable columns, percentage bar |
| Activity Feed | Real-time, auto-scroll, clickable items |
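The trend arrows on the KPI cards compare the current period to the selected comparison period. The computation is simple but worth pinning down, since the previous-period value can be zero (the helper name is illustrative):

```csharp
using System;

public static class Kpi
{
    // Percentage change vs. the previous period, rounded to one decimal place.
    // Returns null when the previous value is zero: the change is undefined,
    // and the card should render a dash instead of an arrow.
    public static decimal? PercentChange(decimal current, decimal previous)
    {
        if (previous == 0m) return null;
        return Math.Round((current - previous) / previous * 100m, 1);
    }
}
```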
Screen 2: Inventory Management
Purpose: Monitor and manage stock levels across all locations.
Route: /inventory/levels
╔════════════════════════════════════════════════════════════════════════════════╗
║ INVENTORY > STOCK LEVELS [Refresh] [Export]║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ [Search by SKU, name, barcode...] Location: [All ▼] │ ║
║ │ │ ║
║ │ Category: [All Categories ▼] Status: [All ▼] Stock: [All Levels ▼] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ SKU │ PRODUCT │ CAT │ HQ │ GM │ HM │ LM │ NM │ ║
║ ├────────────┼──────────────────┼───────┼──────┼──────┼──────┼──────┼─────┤ ║
║ │ │ │ │ │ │ │ │ │ ║
║ │ NXJ1078 │ Galaxy V-Neck │ Tops │ 45 │ 3* │ 12 │ 8 │ 15 │ ║
║ │ │ │ │ │ LOW │ │ │ │ ║
║ │────────────┼──────────────────┼───────┼──────┼──────┼──────┼──────┼─────│ ║
║ │ NXP0892 │ Slim Fit Chinos │Bottms │ 32 │ 18 │ 14 │ 0* │ 22 │ ║
║ │ │ │ │ │ │ │ OUT │ │ ║
║ │────────────┼──────────────────┼───────┼──────┼──────┼──────┼──────┼─────│ ║
║ │ NXA0234 │ Leather Belt │Access │ 60 │ 25 │ 20 │ 15 │ 18 │ ║
║ │ │ │ │ │ │ │ │ │ ║
║ │────────────┼──────────────────┼───────┼──────┼──────┼──────┼──────┼─────│ ║
║ │ NXJ2156 │ Oxford Shirt │ Tops │ 28 │ 12 │ 8 │ 10 │ 6 │ ║
║ │ │ │ │ │ │ │ │ │ ║
║ │────────────┼──────────────────┼───────┼──────┼──────┼──────┼──────┼─────│ ║
║ │ NXP1045 │ Classic Jeans │Bottms │ 75 │ 30 │ 25 │ 22 │ 28 │ ║
║ │ │ │ │ │ │ │ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ Showing 1-25 of 1,245 items << < Page 1 of 50 > >> Items/page: [25▼]║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ BULK ACTIONS: [Create Transfer] [Request Recount] [Adjust Stock] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Inventory Features:
| Feature | Description |
|---|---|
| Multi-location Grid | Shows stock at all locations in one view |
| Status Indicators | LOW (yellow), OUT (red), OK (green) |
| Click-to-Filter | Click column headers to filter by location |
| Bulk Actions | Select multiple items for batch operations |
| Export | CSV/Excel download with current filters |
Transfer Creation Modal:
╔════════════════════════════════════════════════════════════════════╗
║ CREATE INVENTORY TRANSFER [X] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ From Location: [HQ - Headquarters ▼] ║
║ To Location: [GM - Greenbrier Mall ▼] ║
║ ║
║ ┌──────────────────────────────────────────────────────────────┐ ║
║ │ ITEMS TO TRANSFER │ ║
║ ├──────────────────────────────────────────────────────────────┤ ║
║ │ SKU │ Product │ Available │ Transfer Qty │ ║
║ │───────────┼───────────────────┼───────────┼──────────────────│ ║
║ │ NXJ1078 │ Galaxy V-Neck (M) │ 45 │ [ 10 ] │ ║
║ │ NXP0892 │ Slim Fit Chinos │ 32 │ [ 5 ] │ ║
║ └──────────────────────────────────────────────────────────────┘ ║
║ ║
║ [+ Add More Items] ║
║ ║
║ Notes: [________________________________] ║
║ ║
║ Priority: ○ Normal ○ Urgent ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [CREATE TRANSFER] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════╝
Screen 3: Product Catalog
Purpose: Manage product information, variants, and pricing.
Route: /products/catalog
╔════════════════════════════════════════════════════════════════════════════════╗
║ PRODUCTS > CATALOG [+ Add Product] [Import] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ [Search products...] │ ║
║ │ │ ║
║ │ Category: [All ▼] Status: [Active ▼] Price Range: [$0] to [$999] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ [x] │ PRODUCT │ SKU │ CATEGORY │ PRICE │ STOCK│ ACT│ ║
║ ├─────┼──────────────────────┼───────────┼──────────┼────────┼──────┼────┤ ║
║ │ │ │ │ │ │ │ │ ║
║ │ [ ] │ [img] Galaxy V-Neck │ NXJ1078 │ Tops │ $29.00 │ 83 │[Ed]│ ║
║ │ │ 3 variants │ │ │ │ │[De]│ ║
║ │─────┼──────────────────────┼───────────┼──────────┼────────┼──────┼────│ ║
║ │ [ ] │ [img] Slim Fit Chino │ NXP0892 │ Bottoms │ $46.00 │ 86 │[Ed]│ ║
║ │ │ 5 variants │ │ │ │ │[De]│ ║
║ │─────┼──────────────────────┼───────────┼──────────┼────────┼──────┼────│ ║
║ │ [ ] │ [img] Oxford Shirt │ NXJ2156 │ Tops │ $54.00 │ 64 │[Ed]│ ║
║ │ │ 4 variants │ │ │ │ │[De]│ ║
║ │─────┼──────────────────────┼───────────┼──────────┼────────┼──────┼────│ ║
║ │ [ ] │ [img] Leather Belt │ NXA0234 │ Access. │ $35.00 │ 138 │[Ed]│ ║
║ │ │ 3 variants │ │ │ │ │[De]│ ║
║ │─────┼──────────────────────┼───────────┼──────────┼────────┼──────┼────│ ║
║ │ [ ] │ [img] Classic Jeans │ NXP1045 │ Bottoms │ $59.00 │ 180 │[Ed]│ ║
║ │ │ 6 variants │ │ │ │ │[De]│ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ Selected: 0 | Total Products: 1,245 << < 1 of 50 > >> ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ WITH SELECTED: [Edit Category] [Update Pricing] [Archive] [Delete] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Product Detail/Edit Modal:
╔════════════════════════════════════════════════════════════════════════════════╗
║ EDIT PRODUCT [X] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [Basic Info] [Variants] [Pricing] [Inventory] [Media] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ ┌────────────────────┐ PRODUCT INFORMATION ║
║ │ │ ║
║ │ [Product Image] │ Name * ║
║ │ │ ┌──────────────────────────────────────────────────┐ ║
║ │ [Upload Image] │ │ Galaxy V-Neck Tee │ ║
║ │ │ └──────────────────────────────────────────────────┘ ║
║ └────────────────────┘ ║
║ SKU * Barcode ║
║ ┌────────────────────┐ ┌────────────────────────┐ ║
║ │ NXJ1078 │ │ 0657381512532 │ ║
║ └────────────────────┘ └────────────────────────┘ ║
║ ║
║ Category * Brand ║
║ ┌────────────────────┐ ┌────────────────────────────────────────────────┐ ║
║ │ Tops ▼ │ │ Nexus Originals │ ║
║ └────────────────────┘ └────────────────────────────────────────────────┘ ║
║ ║
║ Description ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ Classic V-neck tee made from premium cotton blend. Features a │ ║
║ │ relaxed fit and reinforced stitching for durability. │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ [x] Active [x] Visible on Website [ ] Featured ║
║ ║
║ ┌────────────────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [SAVE CHANGES] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════════════════╝
Screen 4: Employee Management
Purpose: Manage users, roles, permissions, and schedules.
Route: /employees/users
╔════════════════════════════════════════════════════════════════════════════════╗
║ EMPLOYEES > USERS [+ Add Employee] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [All Employees] [Active (24)] [Inactive (3)] [Pending (2)] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ [Search by name, email, employee ID...] Location: [All ▼] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ EMPLOYEE │ ROLE │ LOCATION │ STATUS │ ACTIONS│ ║
║ ├───────┼──────────────────┼───────────────┼──────────┼─────────┼────────┤ ║
║ │ │ │ │ │ │ │ ║
║ │ [img] │ Sarah Miller │ Store Manager │ GM │ Active │ [Ed] │ ║
║ │ │ sarah.m@nexus │ │ │ Online │ [...] │ ║
║ │───────┼──────────────────┼───────────────┼──────────┼─────────┼────────│ ║
║ │ [img] │ James Wilson │ Sales Assoc. │ GM │ Active │ [Ed] │ ║
║ │ │ james.w@nexus │ │ │ Offline │ [...] │ ║
║ │───────┼──────────────────┼───────────────┼──────────┼─────────┼────────│ ║
║ │ [img] │ Maria Garcia │ Asst. Manager │ HM │ Active │ [Ed] │ ║
║ │ │ maria.g@nexus │ │ │ Online │ [...] │ ║
║ │───────┼──────────────────┼───────────────┼──────────┼─────────┼────────│ ║
║ │ [img] │ David Chen │ Sales Assoc. │ LM │ Active │ [Ed] │ ║
║ │ │ david.c@nexus │ │ │ Offline │ [...] │ ║
║ │───────┼──────────────────┼───────────────┼──────────┼─────────┼────────│ ║
║ │ [img] │ Emma Johnson │ Sales Assoc. │ NM │ Pending │ [Ed] │ ║
║ │ │ emma.j@nexus │ │ │ Invite │ [...] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ Showing 1-25 of 29 employees << < 1 of 2 > >> ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Employee Detail Form:
╔════════════════════════════════════════════════════════════════════════════════╗
║ EDIT EMPLOYEE [X] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [Profile] [Permissions] [Schedule] [Performance] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ ┌────────────────────┐ PERSONAL INFORMATION ║
║ │ │ ║
║ │ [Photo] │ First Name * Last Name * ║
║ │ │ ┌───────────────────────┐ ┌───────────────────────┐ ║
║ │ [Change Photo] │ │ Sarah │ │ Miller │ ║
║ │ │ └───────────────────────┘ └───────────────────────┘ ║
║ └────────────────────┘ ║
║ Email * Phone ║
║ ┌───────────────────────┐ ┌───────────────────────┐ ║
║ │ sarah.m@nexuscloth.com│ │ (555) 123-4567 │ ║
║ └───────────────────────┘ └───────────────────────┘ ║
║ ║
║ EMPLOYMENT ║
║ ║
║ Employee ID Hire Date Status ║
║ ┌───────────────────┐ ┌───────────────────────┐ ┌───────────────────────┐ ║
║ │ EMP-00042 │ │ 03/15/2022 │ │ Active ▼ │ ║
║ └───────────────────┘ └───────────────────────┘ └───────────────────────┘ ║
║ ║
║ Role * Primary Location * ║
║ ┌───────────────────┐ ┌───────────────────────────────────────────────────┐ ║
║ │ Store Manager ▼ │ │ GM - Greenbrier Mall ▼ │ ║
║ └───────────────────┘ └───────────────────────────────────────────────────┘ ║
║ ║
║ Additional Locations (can work at): ║
║ [x] HQ - Headquarters [x] HM - Peninsula [ ] LM - Lynnhaven [ ] NM ║
║ ║
║ PIN: [****] [Reset PIN] ║
║ ║
║ ┌────────────────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ [SAVE CHANGES] [CANCEL] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════════════════╝
Screen 5: Reporting
Purpose: Generate and view sales, inventory, and performance reports.
Route: /reports/sales
╔════════════════════════════════════════════════════════════════════════════════╗
║ REPORTS > SALES [Schedule] [Export] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [Sales Summary] [By Product] [By Location] [By Employee] [By Time] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ Date Range: [12/01/2024] to [12/29/2024] Location: [All ▼] │ ║
║ │ │ ║
║ │ Compare to: [x] Previous Period [ ] Same Period Last Year │ ║
║ │ [Generate Report] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────┐ ┌────────────────────────────────────┐ ║
║ │ TOTAL REVENUE │ │ TRANSACTIONS │ ║
║ │ │ │ │ ║
║ │ $145,678.90 │ │ 1,847 │ ║
║ │ +12.3% vs prev │ │ +8.2% vs prev │ ║
║ └────────────────────────────────┘ └────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ REVENUE TREND │ ║
║ │ │ ║
║ │ $8K ┤ ╱╲ │ ║
║ │ │ ╱╲ ╱ ╲ ╱╲ │ ║
║ │ $6K ┤ ╱╲ ╱ ╲ ╱ ╲ ╱ ╲ │ ║
║ │ │ ╱╲ ╱ ╲ ╱ ╲ ╱ ╲ ╱ ╲ │ ║
║ │ $4K ┤ ╱╲ ╱ ╲ ╱ ╲ ╱ ╲ ╱ ╲_╱ ╲_ │ ║
║ │ │ ╱ ╲_╱ ╲__╱ ╲__╱ ╲_╱ │ ║
║ │ $2K ┤ │ ║
║ │ └────────────────────────────────────────────────────────────── │ ║
║ │ Dec 1 Dec 5 Dec 10 Dec 15 Dec 20 Dec 25 Dec 29 │ ║
║ │ │ ║
║ │ ── Current Period - - Previous Period │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ TOP SELLING PRODUCTS │ SALES BY LOCATION │ ║
║ ├───────────────────────────────────────────────┼─────────────────────────┤ ║
║ │ 1. Galaxy V-Neck Tee $12,450 (8.5%) │ GM ████████████ 35% │ ║
║ │ 2. Slim Fit Chinos $10,230 (7.0%) │ HM █████████ 28% │ ║
║ │ 3. Classic Jeans $ 9,875 (6.8%) │ LM ███████ 22% │ ║
║ │ 4. Oxford Shirt $ 8,920 (6.1%) │ NM █████ 15% │ ║
║ │ 5. Leather Belt $ 7,560 (5.2%) │ │ ║
║ └───────────────────────────────────────────────┴─────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Screen 6: Settings
Purpose: Configure locations, integrations, and system parameters.
Settings Categories: The full settings taxonomy is defined in the BRD Module 5 (Setup & Configuration), Sections 5.2-5.19. The Tenant Admin Portal exposes these settings across 5 tabs: Locations (5.2-5.5), Integrations (5.8-5.12), RFID (5.16), System (5.13-5.15), and Audit Log (5.17-5.19). Platform-level settings (tenant provisioning, billing, feature flags) are managed in the Platform Admin Portal only.
Route: /settings
╔════════════════════════════════════════════════════════════════════════════════╗
║ SETTINGS ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [Locations] [Integrations] [RFID] [System] [Audit Log] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ STORE LOCATIONS [+ Add Location] ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ CODE │ NAME │ ADDRESS │ STATUS │ REGISTERS │ ║
║ ├──────┼─────────────────────┼──────────────────────┼─────────┼───────────┤ ║
║ │ │ │ │ │ │ ║
║ │ HQ │ Headquarters │ 123 Warehouse Blvd │ Active │ 0 │ ║
║ │ │ │ Chesapeake, VA │ │ │ ║
║ │──────┼─────────────────────┼──────────────────────┼─────────┼───────────│ ║
║ │ GM │ Greenbrier Mall │ 1401 Greenbrier Pkwy │ Active │ 3 │ ║
║ │ │ │ Chesapeake, VA │ Online │ │ ║
║ │──────┼─────────────────────┼──────────────────────┼─────────┼───────────│ ║
║ │ HM │ Peninsula Town Ctr │ 4410 E Claiborne Sq │ Active │ 2 │ ║
║ │ │ │ Hampton, VA │ Online │ │ ║
║ │──────┼─────────────────────┼──────────────────────┼─────────┼───────────│ ║
║ │ LM │ Lynnhaven Mall │ 701 Lynnhaven Pkwy │ Active │ 2 │ ║
║ │ │ │ Virginia Beach, VA │ Online │ │ ║
║ │──────┼─────────────────────┼──────────────────────┼─────────┼───────────│ ║
║ │ NM │ Patrick Henry Mall │ 12300 Jefferson Ave │ Active │ 2 │ ║
║ │ │ │ Newport News, VA │ Offline │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ GENERAL SETTINGS ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Company Name: [Nexus Clothing ] │ ║
║ │ Tax Rate: [6.00 ] % │ ║
║ │ Currency: [USD - US Dollar ▼ ] │ ║
║ │ Timezone: [America/New_York ▼ ] │ ║
║ │ │ ║
║ │ Receipt Footer: [Thank you for shopping at Nexus! ] │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────────────────╢
║ │ [!] You have unsaved changes ║
║ │ [RESET] [SAVE SETTINGS] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════════════════╝
Integrations Tab:
╔════════════════════════════════════════════════════════════════════════════════╗
║ SETTINGS > INTEGRATIONS ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ SHOPIFY ● Connected ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Store Domain: nexuspremier.myshopify.com │ ║
║ │ API Version: 2024-01 │ ║
║ │ Last Sync: 12/29/2024 2:45 PM │ ║
║ │ Sync Status: ✓ Products ✓ Orders ✓ Inventory │ ║
║ │ │ ║
║ │ [Test Connection] [Sync Now] [Configure] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ QUICKBOOKS DESKTOP ● Connected ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Company File: NexusClothing.qbw │ ║
║ │ QB Version: QuickBooks POS v19 │ ║
║ │ Bridges Online: 4 of 5 │ ║
║ │ │ ║
║ │ [View Bridge Status] [Refresh] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ PAYMENT PROCESSOR ○ Not Connected ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Provider: [Select Payment Processor ▼] │ ║
║ │ │ ║
║ │ [Configure] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
RFID Tab (Feature-flagged for RFID subscribers):
╔════════════════════════════════════════════════════════════════════════════════╗
║ SETTINGS > RFID (Raptag) ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ [Devices] [Printers] [Tag Configuration] [Templates] ║
║ ═══════════════════════════════════════════════════════════════════════════ ║
║ ║
║ RFID SCANNERS [+ Generate Claim Code]║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ DEVICE │ LOCATION │ LAST SEEN │ STATUS │ ACTIONS │ ║
║ ├─────────────────┼─────────────┼──────────────────┼─────────┼────────────┤ ║
║ │ │ │ │ │ │ ║
║ │ Zebra MC3300 │ HQ │ 2 min ago │ ● Online│ [Release] │ ║
║ │ ID: RFID-001 │ │ │ │ │ ║
║ │─────────────────┼─────────────┼──────────────────┼─────────┼────────────│ ║
║ │ Zebra MC3300 │ GM │ 15 min ago │ ● Online│ [Release] │ ║
║ │ ID: RFID-002 │ │ │ │ │ ║
║ │─────────────────┼─────────────┼──────────────────┼─────────┼────────────│ ║
║ │ Zebra MC3300 │ HM │ 3 hours ago │ ○ Idle │ [Release] │ ║
║ │ ID: RFID-003 │ │ │ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ PENDING CLAIM CODES ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ CODE │ CREATED │ FOR LOCATION │ EXPIRES │ ACTION │ ║
║ ├───────────┼──────────────────┼───────────────┼──────────────┼──────────┤ ║
║ │ X7K9M2 │ Today 2:30 PM │ LM │ in 23 hours │ [Revoke] │ ║
║ │ P4N8Q1 │ Yesterday │ NM │ EXPIRED │ [Delete] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
RFID Printers Tab:
╔════════════════════════════════════════════════════════════════════════════════╗
║ SETTINGS > RFID > PRINTERS [+ Add Printer] ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ RFID TAG PRINTERS ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ PRINTER │ MODEL │ IP ADDRESS │ STATUS │ ACTIONS │ ║
║ ├─────────────────┼────────────┼────────────────┼───────────┼────────────┤ ║
║ │ │ │ │ │ │ ║
║ │ HQ Warehouse │ Zebra ZD500R│192.168.1.100 │ ● Ready │ [Test] │ ║
║ │ │ │ │ │ [Edit] │ ║
║ │ │ │ │ │ [Delete] │ ║
║ │─────────────────┼────────────┼────────────────┼───────────┼────────────│ ║
║ │ GM Stockroom │ Zebra ZD500R│192.168.2.100 │ ● Ready │ [Test] │ ║
║ │ │ │ │ │ [Edit] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ ADD NEW PRINTER │ ║
║ ├─────────────────────────────────────────────────────────────────────────┤ ║
║ │ │ ║
║ │ Printer Name: [HQ Warehouse Printer ] │ ║
║ │ Model: [Zebra ZD500R ▼ ] │ ║
║ │ IP Address: [192.168.1.100 ] │ ║
║ │ Location: [HQ - Headquarters ▼ ] │ ║
║ │ │ ║
║ │ [Test Connection] [Save Printer] │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Tag Configuration Tab (Advanced Settings):
╔════════════════════════════════════════════════════════════════════════════════╗
║ SETTINGS > RFID > TAG CONFIGURATION ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ EPC ENCODING CONFIGURATION ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Company EPC Prefix: 52E98DC418 (read-only) │ ║
║ │ ⓘ Contact support to change │ ║
║ │ │ ║
║ │ Prefix Length: 38 bits │ ║
║ │ Asset ID Bits: 20 bits (allows 1,048,576 unique SKUs) │ ║
║ │ Serial Bits: 38 bits (allows 274 billion tags per SKU) │ ║
║ │ │ ║
║ │ Next Serial Number: 400,001 │ ║
║ │ [Reset Counter] ⚠️ Use with caution │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ SCAN SETTINGS ║
║ ┌─────────────────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Variance Threshold: [5 ] % (alert if count differs by more) │ ║
║ │ Session Timeout: [30 ] minutes (auto-close inactive sessions) │ ║
║ │ Require Zone Selection: [x] Yes - cashiers must select zone first │ ║
║ │ Allow Overrides: [x] Yes - managers can override variances │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────────────────────╢
║ │ [RESET] [SAVE CONFIGURATION] ║
║ └────────────────────────────────────────────────────────────────────────────╢
╚════════════════════════════════════════════════════════════════════════════════╝
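The EPC layout in the Tag Configuration panel (38-bit company prefix + 20-bit asset ID + 38-bit serial = 96 bits) is plain bit packing. A minimal Python sketch, for illustration only; the production encoder lives in the .NET backend:

```python
PREFIX_BITS, ASSET_BITS, SERIAL_BITS = 38, 20, 38  # 38 + 20 + 38 = 96 bits

def encode_epc(prefix: int, asset_id: int, serial: int) -> str:
    """Pack the three fields into a 96-bit EPC, rendered as 24 hex chars."""
    assert asset_id < (1 << ASSET_BITS)   # 2^20 = 1,048,576 unique SKUs
    assert serial < (1 << SERIAL_BITS)    # 2^38 ~ 274 billion tags per SKU
    value = (prefix << (ASSET_BITS + SERIAL_BITS)) | (asset_id << SERIAL_BITS) | serial
    return f"{value:024X}"

def decode_epc(epc_hex: str) -> tuple[int, int, int]:
    """Inverse of encode_epc: recover (prefix, asset_id, serial)."""
    value = int(epc_hex, 16)
    serial = value & ((1 << SERIAL_BITS) - 1)
    asset_id = (value >> SERIAL_BITS) & ((1 << ASSET_BITS) - 1)
    return value >> (ASSET_BITS + SERIAL_BITS), asset_id, serial
```

The field widths give a total of exactly 96 bits, which is why the EPC renders as 24 hex characters.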
Implementation Note: The RFID settings section is feature-flagged and only visible to tenants with RFID subscription enabled. See ADR-010 for the architectural decision to embed RFID configuration in the Tenant Admin Portal rather than creating a separate RFID portal.
Cross-Reference: For comprehensive Setup & Configuration module specifications (Sections 5.1-5.19), see Chapter 05 Module 5. RFID configuration (see Chapter 05 Section 5.16) is a one-time tenant setup performed during initial Raptag deployment.
RFID Configuration Lifecycle: RFID device management follows a claim-code provisioning flow: an admin generates a six-character claim code in Settings > RFID > Devices, the operator enters it in the Raptag mobile app (Chapter 16), the device appears as “pending” until its first successful scan session, then transitions to “active”. Device decommissioning requires the OWNER role with type-to-confirm safety. Tag templates and EPC serial sequences are configured in Settings > RFID > Templates before any counting sessions can begin.
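The claim-code handshake can be sketched as follows. This is an illustrative Python sketch (the production stack is .NET); the record shape and field names are assumptions, and the 24-hour TTL default matches the expiry shown in the Pending Claim Codes mockup:

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

ALPHABET = string.ascii_uppercase + string.digits  # alphanumeric codes like "X7K9M2"

def generate_claim_code(location: str, ttl_hours: int = 24) -> dict:
    """Create a short-lived, single-use claim code for a given location."""
    return {
        "code": "".join(secrets.choice(ALPHABET) for _ in range(6)),
        "location": location,
        "expires_at": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        "claimed": False,
    }

def redeem_claim_code(record: dict, device_id: str) -> bool:
    """Bind a device to the code if it is unclaimed and unexpired.
    The device enters 'pending' until its first successful scan session."""
    if record["claimed"] or datetime.now(timezone.utc) >= record["expires_at"]:
        return False
    record.update(claimed=True, device_id=device_id, status="pending")
    return True
```

A second redemption attempt against the same code fails, which is the single-use property the Revoke/Delete actions in the mockup rely on.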
15.5 Responsive Design Considerations
Breakpoints
| Breakpoint | Width | Layout Adaptation |
|---|---|---|
| Desktop XL | 1400px+ | Full 3-column layout |
| Desktop | 1024-1399px | 2-column with collapsible panels |
| Tablet | 768-1023px | Hamburger menu, stacked cards |
| Mobile | < 768px | Single column, bottom nav |
Sidebar Behavior
/* Desktop: Fixed sidebar */
@media (min-width: 1024px) {
.sidebar {
width: 240px;
position: fixed;
height: calc(100vh - 56px - 40px);
}
}
/* Tablet: Overlay sidebar */
@media (max-width: 1023px) {
.sidebar {
position: fixed;
z-index: 1000;
transform: translateX(-100%);
transition: transform 0.3s ease;
}
.sidebar.open {
transform: translateX(0);
}
}
/* Mobile: Hide sidebar, use bottom nav */
@media (max-width: 767px) {
.sidebar { display: none; }
.bottom-nav { display: flex; }
}
Table Responsiveness
/* Horizontal scroll for data tables on smaller screens */
@media (max-width: 1023px) {
.data-table-container {
overflow-x: auto;
-webkit-overflow-scrolling: touch;
}
}
/* Card layout for mobile */
@media (max-width: 767px) {
.data-table tr {
display: block;
margin-bottom: 16px;
border: 1px solid var(--color-border);
border-radius: 8px;
}
.data-table td {
display: flex;
justify-content: space-between;
padding: 8px 12px;
}
.data-table td::before {
content: attr(data-label);
font-weight: 600;
}
}
15.6 Role-Based Access
| Feature | Admin | Manager | Supervisor | Associate |
|---|---|---|---|---|
| Dashboard | Full | Full | Limited | View Only |
| Inventory - View | Yes | Yes | Yes | Yes |
| Inventory - Transfer | Yes | Yes | Yes | No |
| Inventory - Adjust | Yes | Yes | Request | No |
| Products - View | Yes | Yes | Yes | Yes |
| Products - Edit | Yes | Yes | No | No |
| Products - Create | Yes | Yes | No | No |
| Employees - View | Yes | Yes | Own Team | Self |
| Employees - Edit | Yes | Own Store | No | No |
| Reports - All | Yes | Yes | Limited | No |
| Settings - View | Yes | Limited | No | No |
| Settings - Edit | Yes | No | No | No |
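A permission check against this matrix is a role-to-action lookup. A minimal Python sketch; the action names are illustrative, and scoped grants (Manager “Own Store”, Supervisor “Request”) are not modeled here, only plain allow/deny:

```python
# Subset of the feature matrix above; "Request" and "Own Store" scoping omitted.
PERMISSIONS: dict[str, set[str]] = {
    "inventory.transfer": {"Admin", "Manager", "Supervisor"},
    "inventory.adjust":   {"Admin", "Manager"},   # Supervisor may only request
    "products.edit":      {"Admin", "Manager"},
    "employees.edit":     {"Admin", "Manager"},   # Manager limited to own store
    "settings.edit":      {"Admin"},
}

def can(role: str, action: str) -> bool:
    """True if the role has direct (unscoped) permission for the action."""
    return role in PERMISSIONS.get(action, set())
```

Unknown actions deny by default, which keeps the matrix safe to extend.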
15.7 Real-Time Updates
The Admin Portal uses SignalR for live updates:
| Event | Hub Method | UI Update |
|---|---|---|
| Sale Completed | ReceiveSale | Dashboard KPIs, Activity Feed |
| Inventory Changed | ReceiveInventoryUpdate | Inventory grid, alerts |
| Bridge Status | ReceiveBridgeStatus | Status indicators |
| New Alert | ReceiveAlert | Alert panel, notification bell |
| Transfer Complete | ReceiveTransfer | Inventory grid, activity |
15.8 Summary
The Tenant Admin Portal provides:
- Dashboard: Real-time KPIs, alerts, and activity monitoring
- Inventory: Multi-location stock management with transfers
- Products: Full catalog CRUD with variants and pricing
- Employees: User management, roles, schedules
- Reports: Comprehensive sales and performance analytics
- Settings: Location, integration, and system configuration
15.9 Three-Portal Architecture
Overview
Based on industry best practices from leading SaaS POS vendors, RapOS implements a three-portal architecture with clear separation of concerns:
┌─────────────────────────────────────────────────────────────────────────────────┐
│ RAPOS THREE-PORTAL ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ PLATFORM ADMIN │ │ TENANT ADMIN │ │ POS TERMINAL │ │
│ │ admin.rapos.com │ │ app.rapos.com │ │ pos.rapos.com │ │
│ ├─────────────────────┤ ├─────────────────────┤ ├─────────────────────┤ │
│ │ │ │ │ │ │ │
│ │ • Tenant CRUD │ │ • Store Dashboard │ │ • Sale Screen │ │
│ │ • System Health │ │ • Inventory Mgmt │ │ • Payment Flow │ │
│ │ • Billing/Usage │ │ • Product Catalog │ │ • Receipt Print │ │
│ │ • Global Settings │ │ • Employee Mgmt │ │ • Offline Mode │ │
│ │ • Support Tickets │ │ • Reports │ │ • Hardware IO │ │
│ │ • Feature Flags │ │ • Settings │ │ • Sync Engine │ │
│ │ │ │ │ │ │ │
│ │ Users: ~5 │ │ Users: Thousands │ │ Users: Per Store │ │
│ │ (NexusDenim team) │ │ (Merchants) │ │ (Cashiers) │ │
│ └─────────────────────┘ └─────────────────────┘ └─────────────────────┘ │
│ │
│ Blazor Server Blazor Server .NET MAUI Hybrid │
│ Internal Only Multi-tenant Native + Offline │
│ │
└─────────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────┐
│ SHARED BACKEND SERVICES │
├───────────────────────────────────────┤
│ • Auth Service (JWT + PIN) │
│ • API Gateway (tenant routing) │
│ • Database (schema-per-tenant) │
│ • Event Bus (RabbitMQ) │
│ • File Storage (receipts, images) │
└───────────────────────────────────────┘
Portal Comparison
| Aspect | Platform Admin | Tenant Admin (This Chapter) | POS Terminal |
|---|---|---|---|
| URL | admin.rapos.com | app.rapos.com or {tenant}.rapos.com | Native app or pos.rapos.com |
| Technology | Blazor Server | Blazor Server | .NET MAUI Blazor Hybrid |
| Users | ~5 (NexusDenim staff) | Thousands (merchants) | Per-store (cashiers) |
| Access | Internal network / VPN | Public internet (multi-tenant) | In-store (offline-capable) |
| Authentication | Email + Password + MFA | Email + Password | PIN (fast clock-in) |
| Primary Focus | Platform operations | Business management | Revenue transactions |
| Offline Support | No | No | Yes (critical) |
| Hardware Integration | No | No | Printers, scanners, cash drawers |
Why Three Portals?
Research findings from Toast, Square, Stripe, Lightspeed, and Clover:
- Security Isolation: Platform admin tools should NEVER be accessible from the public tenant portal
- UX Optimization: Different user personas have different mental models and workflows
- Performance: Tenant portal doesn’t load platform-admin code; POS terminal doesn’t load admin UI
- Compliance: PCI DSS requires separation of duties and restricted access to administrative functions
- Scalability: Platform admin remains fast even with thousands of tenants
Subdomain Strategy
PRODUCTION DOMAINS
==================
Platform Admin: admin.pos.nexusdenim.com → admin.rapos.com (future)
Tenant Admin: app.pos.nexusdenim.com → app.rapos.com (future)
{tenant}.pos.nexusdenim.com → {tenant}.rapos.com
POS Terminal: Download from app portal → pos.rapos.com (PWA fallback)
API Gateway: api.pos.nexusdenim.com → api.rapos.com (future)
Update Server: updates.pos.nexusdenim.com → updates.rapos.com
DEVELOPMENT DOMAINS
===================
Platform Admin: admin.pos-dev.nexusdenim.com
Tenant Admin: app.pos-dev.nexusdenim.com
API Gateway: api.pos-dev.nexusdenim.com
Implementation Strategy
Phase 1 (MVP): Single codebase with feature flags
RapOS.Web/
├── Areas/
│ ├── Platform/ # Platform Admin pages (feature-flagged)
│ │ ├── Tenants/
│ │ ├── Billing/
│ │ └── System/
│ ├── Merchant/ # Tenant Admin pages (main focus)
│ │ ├── Dashboard/
│ │ ├── Inventory/
│ │ ├── Products/
│ │ └── Settings/
│ └── Shared/ # Shared components
Phase 2 (Post-MVP): Physical separation
apps/
├── platform-admin/ # Deployed to admin.rapos.com
├── merchant-portal/ # Deployed to app.rapos.com
├── pos-terminal/ # .NET MAUI native apps
└── shared/ # Shared component library
Platform Admin Portal (Internal)
Not covered in this chapter - The Platform Admin Portal is documented separately because:
- It’s internal tooling for ~5 users (NexusDenim team)
- It has different security requirements (VPN access)
- It manages cross-tenant concerns
Platform Admin Features:
- Tenant CRUD (create, suspend, delete tenants)
- System health monitoring
- Usage/billing dashboards
- Global feature flags
- Support ticket escalation
- Database maintenance tools
See: Chapter-20A-Platform-Admin.md (to be created)
15.10 Role Hierarchy Clarification
The role system spans all three portals:
ROLE HIERARCHY (Cross-Portal)
==============================
Level 5: SuperAdmin (Platform Admin only)
├── Manages ALL tenants
├── System configuration
└── Billing/subscriptions
Level 4: Admin (Tenant Admin)
├── Full tenant access
├── Manages all locations
└── Creates managers
Level 3: Manager (Tenant Admin)
├── Own store access
├── Employee management
└── Reports
Level 2: Cashier (POS Terminal only)
├── Sale transactions
├── Returns with approval
└── End-of-day counts
Level 1: Viewer (Tenant Admin)
├── Read-only access
└── View reports
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | V - Frontend |
| Chapter | 15 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 16: Mobile Raptag Application
RFID Inventory Management
The Raptag mobile application enables rapid inventory counting using RFID technology. Associates can scan entire racks of merchandise in seconds, dramatically reducing inventory count time and improving accuracy.
Configuration Note: RFID settings and device management are configured in the Tenant Admin Portal (Chapter 15), not in the mobile app itself. The mobile app downloads its configuration from the central API on startup. See Chapter 15: Settings > RFID for device registration, printer setup, and tag configuration.
16.1 Technology Stack
| Component | Technology | Rationale |
|---|---|---|
| Framework | .NET MAUI | Cross-platform, native RFID SDK access |
| RFID SDK | Zebra RFID SDK | Enterprise-grade, widely deployed |
| Local Database | SQLite | Offline-capable, lightweight |
| Sync | REST API + Background Service | Reliable batch uploads |
| Tag Printing | Zebra ZPL | Industry standard label format |
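Tag printing builds a small ZPL document per label. An illustrative Python sketch: the field positions and font sizes are placeholders, and the exact RFID write command (`^RFW`) varies by printer model and firmware, so treat this as a template rather than a verified label:

```python
def build_tag_zpl(sku: str, epc_hex: str) -> str:
    """Compose a minimal ZPL label: human-readable SKU plus EPC tag write.
    ^XA/^XZ open and close the label; ^RFW,H writes hex data to tag memory."""
    return (
        "^XA"
        f"^FO30,30^A0N,30,30^FD{sku}^FS"   # printed SKU text at (30,30)
        f"^RFW,H^FD{epc_hex}^FS"           # encode EPC into the inlay (hex)
        "^XZ"
    )
```

The resulting string would be sent raw to the printer's IP (port 9100 is the Zebra convention) by the Print Service.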
16.2 Architecture Overview
┌─────────────────────────────────────────────────────────────────────┐
│ RAPTAG MOBILE APPLICATION │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ UI LAYER ││
│ │ Login │ Dashboard │ Session │ Scanning │ Summary │ Sync ││
│ └─────────────────────────────────────────────────────────────────┘│
│ │ │
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ VIEW MODELS ││
│ │ LoginVM │ SessionVM │ ScanVM │ SummaryVM │ SyncVM ││
│ └─────────────────────────────────────────────────────────────────┘│
│ │ │
│ ┌───────────────┬───────────────┬───────────────┬─────────────────┐│
│ │ RFID Service │ Sync Service │ Print Service │ Session Service ││
│ │ (Zebra SDK) │ (HTTP/Queue) │ (ZPL/BT) │ (State Mgmt) ││
│ └───────────────┴───────────────┴───────────────┴─────────────────┘│
│ │ │
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ LOCAL SQLITE DATABASE ││
│ │ - Sessions - Tags ││
│ │ - Scan Records - Product Cache ││
│ │ - Sync Queue - Settings ││
│ └─────────────────────────────────────────────────────────────────┘│
│ │
└─────────────────────────────────────────────────────────────────────┘
│
┌────────▼────────┐
│ Central API │
│ (When Online) │
└─────────────────┘
16.3 Screen Specifications
Screen 1: Login
Purpose: Authenticate user and select operational context.
Route: /login
╔════════════════════════════════════════════════════════════════════╗
║ ║
║ ║
║ ┌──────────────────────────┐ ║
║ │ │ ║
║ │ ████████████████ │ ║
║ │ ██ RAPTAG ██ │ ║
║ │ ████████████████ │ ║
║ │ │ ║
║ └──────────────────────────┘ ║
║ ║
║ RFID Inventory System ║
║ ║
║ ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ Employee PIN │ ║
║ │ ┌────────────────────────────────────────────────────┐ │ ║
║ │ │ ● ● ● ● ○ ○ │ │ ║
║ │ └────────────────────────────────────────────────────┘ │ ║
║ │ │ ║
║ │ ┌───────┐ ┌───────┐ ┌───────┐ │ ║
║ │ │ 1 │ │ 2 │ │ 3 │ │ ║
║ │ └───────┘ └───────┘ └───────┘ │ ║
║ │ ┌───────┐ ┌───────┐ ┌───────┐ │ ║
║ │ │ 4 │ │ 5 │ │ 6 │ │ ║
║ │ └───────┘ └───────┘ └───────┘ │ ║
║ │ ┌───────┐ ┌───────┐ ┌───────┐ │ ║
║ │ │ 7 │ │ 8 │ │ 9 │ │ ║
║ │ └───────┘ └───────┘ └───────┘ │ ║
║ │ ┌───────┐ ┌───────┐ ┌───────┐ │ ║
║ │ │ CLR │ │ 0 │ │ GO │ │ ║
║ │ └───────┘ └───────┘ └───────┘ │ ║
║ │ │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: MC3390R | Battery: 85% | ● Offline ║
╚════════════════════════════════════════════════════════════════════╝
Components:
| Component | Specification |
|---|---|
| Logo | Raptag brand, centered |
| PIN Display | 6 digits with masked/filled indicators |
| Numpad | Large touch targets (64x64px min) |
| Clear (CLR) | Reset PIN entry |
| Go | Submit PIN for validation |
| Status Bar | Reader model, battery, connection |
Behavior:
- PIN validated against local cache (for offline)
- Sync user list on startup when online
- Auto-login from last session option
- Lock screen after 5 minutes of inactivity
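Validating a PIN against a local cache implies the device stores a salted PIN hash, never the PIN itself. A minimal sketch assuming PBKDF2; the actual scheme, salt handling, and iteration count are implementation choices:

```python
import hashlib
import hmac

def verify_pin_offline(pin: str, cached_salt: bytes, cached_hash: bytes) -> bool:
    """Compare the entered PIN against the locally cached salted hash.
    hmac.compare_digest avoids timing side channels on the comparison."""
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), cached_salt, 100_000)
    return hmac.compare_digest(digest, cached_hash)
```

The salt/hash pairs would be refreshed whenever the user list syncs on startup, so revoked PINs stop working at the next successful sync.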
Screen 2: Home Dashboard
Purpose: At-a-glance overview of assigned counts, recent activity, sync health, and device status.
Route: /home
╔════════════════════════════════════════════════════════════════════╗
║ RAPTAG [⚙] [Logout] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ Welcome, Sarah Miller ║
║ 📍 GM - Greenbrier Mall Today: December 29, 2024 ║
║ ║
║ MY ASSIGNED COUNTS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ ▶ Full Inventory - Section A Due: Today 5:00 PM ║
║ │ Expected: 505 items Assigned by: Manager ║
║ │ ║
║ │ ▶ Cycle Count - Section C Due: Tomorrow 10:00 AM ║
║ │ Expected: 312 items Assigned by: Manager ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ RECENT SESSIONS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ✓ #GM-1228-003 Spot Check 156 items Synced ║
║ │ ✓ #GM-1228-001 Cycle Count 289 items Synced ║
║ │ ● #GM-1227-002 Full Inventory 2,847 items Pending Upload ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ ┌─────────────────────────┐ ┌─────────────────────────────────╢
║ │ SYNC STATUS │ │ DEVICE HEALTH ║
║ │ ● 1 session pending │ │ Battery: 85% ● ║
║ │ Last sync: 5 min ago │ │ Reader: MC3390R Connected ║
║ │ │ │ Storage: 2.1 GB free ║
║ └─────────────────────────┘ └─────────────────────────────────╢
║ ║
║ ┌──────────────┐ ┌──────────────┐ ┌────────────┐ ┌────────────┐║
║ │ NEW SESSION │ │ JOIN SESSION │ │ PRINT TAGS │ │ SYNC NOW │║
║ └──────────────┘ └──────────────┘ └────────────┘ └────────────┘║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: MC3390R | ● Battery: 85% | ● Online ║
╚════════════════════════════════════════════════════════════════════╝
Dashboard Components:
| Component | Data Source | Refresh |
|---|---|---|
| Assigned Counts | GET /api/rfid/sessions?assigned_to={operator}&status=pending | On screen load |
| Recent Sessions | Local SQLite sessions table, last 5 | Real-time |
| Sync Status | Local sync_queue pending count | Real-time |
| Device Health | System APIs (battery, storage, Bluetooth) | Every 30s |
Quick Actions:
| Button | Action |
|---|---|
| New Session | Navigate to Screen 3 (Session Start) |
| Join Session | Navigate to Screen 3, scroll to “Join Existing Session” |
| Print Tags | Navigate to Tag Printing screen (18.5) |
| Sync Now | Trigger immediate sync of pending sessions |
Screen 3: Session Start
Purpose: Configure a new inventory counting session or join an existing one.
Route: /session/new
╔════════════════════════════════════════════════════════════════════╗
║ NEW INVENTORY SESSION [< Back] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ ────────────────────────────────────────────────────────────── ║
║ ║
║ LOCATION * ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ GM - Greenbrier Mall ▼ │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ COUNT TYPE * ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ ○ Full Inventory │ ║
║ │ Complete inventory of entire location │ ║
║ │ │ ║
║ │ ● Cycle Count │ ║
║ │ Count specific section/department │ ║
║ │ │ ║
║ │ ○ Spot Check │ ║
║ │ Quick verification of selected items │ ║
║ │ │ ║
║ │ ○ Find Item │ ║
║ │ Locate specific product by EPC/SKU │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ SECTION (Required for Cycle Count) ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ Section A - Men's Tops ▼ │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ NOTES (Optional) ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ Pre-inventory count for Q4 audit │ ║
║ │ │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ START NEW SESSION │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ ──────── OR ──────── ║
║ ║
║ JOIN EXISTING SESSION ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ ● #GM-2024-1229-001 Full Inventory 3 operators │ ║
║ │ Started: 1:30 PM Sections: B, D available │ ║
║ │ │ ║
║ │ ○ #GM-2024-1229-002 Cycle Count 1 operator │ ║
║ │ Started: 2:15 PM Section: E (Accessories) │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌────────────────────────────────────────────────────────────┐ ║
║ │ JOIN SELECTED SESSION │ ║
║ └────────────────────────────────────────────────────────────┘ ║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: Ready | ● Battery: 85% | ● Online ║
╚════════════════════════════════════════════════════════════════════╝
Count Types (RFID counting only — no receiving; see BRD Section 5.16.4):
| Type | Use Case | Expected Items |
|---|---|---|
| Full Inventory | Annual/semi-annual full count | 2,000-100,000+ |
| Cycle Count | Section/department audits | 200-1,000 |
| Spot Check | Discrepancy verification | 10-50 |
| Find Item | Locate specific product by EPC | 1 |
Note: “Receiving” is handled by the barcode Scanner in the POS Client (Ch 14), not RFID. See BRD Section 5.16.6 for the Scanner vs RFID distinction.
Sections (Configurable per location in Admin Portal):
- Section A - Men’s Tops
- Section B - Men’s Bottoms
- Section C - Women’s Tops
- Section D - Women’s Bottoms
- Section E - Accessories
- Backroom
- Display Window
Join Session Flow:
- App queries `GET /api/rfid/sessions?location={code}&status=active` for active sessions at the operator's location
- Operator selects a session and picks an available section
- App calls `POST /api/rfid/sessions/{id}/join` with operator details
- Server adds a row to the `session_operators` table
- Scanning screen opens with the session context pre-loaded
- Deduplication: if multiple operators scan the same EPC, the server keeps the read with the highest RSSI (see Chapter 05 Section 4.6.8 for RSSI-based dedup rules)
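The dedup rule (keep the highest-RSSI read per EPC) reduces to a single pass over the merged read stream. A minimal sketch; the read-record shape is illustrative:

```python
def dedupe_reads(reads: list[dict]) -> list[dict]:
    """Collapse multi-operator reads to one record per EPC, keeping the
    read with the strongest signal (highest RSSI, i.e. least negative dBm)."""
    best: dict[str, dict] = {}
    for r in reads:  # each r: {"epc": str, "rssi": int, "operator": ...}
        cur = best.get(r["epc"])
        if cur is None or r["rssi"] > cur["rssi"]:
            best[r["epc"]] = r
    return list(best.values())
```

Because RSSI is reported in negative dBm, "highest" means closest to zero, so the operator physically nearest the tag typically wins the tiebreak.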
Business Rules:
- Maximum 10 operators per session (see Chapter 05 Section 5.16.4)
- One active session per operator (must complete or leave current session before joining another)
- Section assignment is advisory (not enforced by reader hardware)
- Session creator is automatically the first operator
Screen 4: Scanning (Main Interface)
Purpose: The primary RFID scanning interface during an active session.
Route: /session/scan
╔════════════════════════════════════════════════════════════════════╗
║ SCANNING - Cycle Count [Pause] [End] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ Location: GM - Greenbrier Mall Section: A - Men's Tops ║
║ Started: 2:45 PM Duration: 00:12:34 ║
║ Operator: Sarah Miller Session: #GM-2024-1229-001 ║
║ ║
║ PROGRESS ║
║ ████████████████████░░░░░░░░░░ 62% 312 of 505 items ║
║ Est. remaining: ~8 min ║
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ LIVE SCAN ║
║ │ ║
║ │ ┌──────────────────────────────────────────┐ ║
║ │ │ │ ║
║ │ │ ████ SCANNING ████ │ ║
║ │ │ │ ║
║ │ │ Tags Read: 847 │ ║
║ │ │ Unique Items: 312 │ ║
║ │ │ Read Rate: 42/sec │ ║
║ │ │ │ ║
║ │ └──────────────────────────────────────────┘ ║
║ │ ║
║ │ [HOLD TRIGGER TO SCAN] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ RECENT SCANS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ ✓ NXJ1078-NAV-M Galaxy V-Neck (M, Navy) x3 ║
║ │ ✓ NXJ1078-NAV-L Galaxy V-Neck (L, Navy) x2 ║
║ │ ✓ NXP0892-KHK-32 Slim Fit Chinos (32, Khaki) x1 ║
║ │ ! UNKNOWN TAG E280116060000... x1 ║
║ │ ✓ NXA0234-BLK-M Leather Belt (M, Black) x4 ║
║ │ ║
║ │ [View All 312 Items] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ║
║ │ [MANUAL ADD] │ │ [FIND ITEM] │ │ [SETTINGS] │ ║
║ └─────────────────┘ └─────────────────┘ └─────────────────┘ ║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: Scanning | ● Battery: 82% | Signal: Strong ║
║ Auto-save: 30s ago | Checkpoint: 847 reads ║
╚════════════════════════════════════════════════════════════════════╝
Progress Tracking:
| Metric | Source | Display |
|---|---|---|
| Progress % | unique_items / expected_count × 100 | Progress bar + percentage |
| Items count | "312 of 505 items" | Current unique vs expected |
| Time estimate | (elapsed / progress%) × remaining% | "~8 min remaining" |
| Auto-save | last_checkpoint_at vs now | "Auto-save: 30s ago" |
Note: `expected_count` comes from the server when the session is created (based on the last known inventory at that location/section). If it is unavailable (offline session start), the progress bar is hidden and only raw counts are shown.
Scanning States:
IDLE STATE SCANNING STATE
┌────────────────────────┐ ┌────────────────────────┐
│ │ │ │
│ ○ ○ ○ ○ ○ ○ ○ ○ │ │ ████████████████ │
│ │ │ ████ ACTIVE ████ │
│ Ready to Scan │ │ ████████████████ │
│ │ │ │
│ Tags: 0 │ │ Tags: 847 │
│ │ │ Rate: 42/sec │
│ │ │ │
└────────────────────────┘ └────────────────────────┘
PAUSED STATE COMPLETED STATE
┌────────────────────────┐ ┌────────────────────────┐
│ │ │ │
│ ║ ║ PAUSED ║ ║ │ │ ✓ COMPLETE ✓ │
│ │ │ │
│ Session paused │ │ Session ended │
│ Tap to resume │ │ │
│ │ │ Total: 847 tags │
│ Tags: 312 │ │ 312 unique items │
│ │ │ │
└────────────────────────┘ └────────────────────────┘
Reader Signal Strength:
| Level | Icon | Read Rate |
|---|---|---|
| Strong | 4 bars | 40+ tags/sec |
| Good | 3 bars | 20-40 tags/sec |
| Fair | 2 bars | 10-20 tags/sec |
| Weak | 1 bar | < 10 tags/sec |
| None | X | No connection |
Quick Actions:
| Action | Purpose |
|---|---|
| Manual Add | Barcode scan for untagged items |
| Find Item | Locate specific SKU using reader |
| Settings | Adjust power, beep, vibration |
Screen 5: Session Summary
Purpose: Review results and submit completed count session.
Route: /session/summary
╔════════════════════════════════════════════════════════════════════╗
║ SESSION SUMMARY [< Back] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ SESSION #GM-2024-1229-001 ║
║ │ Cycle Count - Section A (Men's Tops) ║
║ │ Location: GM - Greenbrier Mall ║
║ │ Operator: Sarah Miller ║
║ │ Date: December 29, 2024 ║
║ │ Duration: 00:23:45 ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ SCAN RESULTS ║
║ ┌─────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ ┌─────────────────────┐ ┌─────────────────────┐ │ ║
║ │ │ TOTAL TAGS │ │ UNIQUE ITEMS │ │ ║
║ │ │ 847 │ │ 312 │ │ ║
║ │ └─────────────────────┘ └─────────────────────┘ │ ║
║ │ │ ║
║ │ ┌─────────────────────┐ ┌─────────────────────┐ │ ║
║ │ │ EXPECTED │ │ VARIANCE │ │ ║
║ │ │ 305 │ │ +7 (2.3%) │ │ ║
║ │ └─────────────────────┘ └─────────────────────┘ │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────┘ ║
║ ║
║ DISCREPANCIES View All ║
║ ┌─────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ ▲ OVER (5 items) │ ║
║ │ NXJ1078-NAV-M Expected: 12 Counted: 15 (+3) │ ║
║ │ NXP0892-KHK-32 Expected: 8 Counted: 10 (+2) │ ║
║ │ │ ║
║ │ ▼ SHORT (3 items) │ ║
║ │ NXJ2156-WHT-L Expected: 6 Counted: 4 (-2) │ ║
║ │ NXA0234-BRN-M Expected: 5 Counted: 4 (-1) │ ║
║ │ │ ║
║ │ ? UNKNOWN (2 tags) │ ║
║ │ E280116060000207523456789 │ ║
║ │ E280116060000207523456790 │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────┐ ┌─────────────────────────────────────┐ ║
║ │ │ │ │ ║
║ │ [RECOUNT SECTION] │ │ SUBMIT SESSION │ ║
║ │ │ │ │ ║
║ └─────────────────────┘ └─────────────────────────────────────┘ ║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: Idle | Battery: 78% | ● Online ║
╚════════════════════════════════════════════════════════════════════╝
Variance Thresholds (Configurable):
| Variance | Color | Action |
|---|---|---|
| 0% | Green | Auto-approve |
| 1-2% | Yellow | Review recommended |
| 3-5% | Orange | Manager review required |
| > 5% | Red | Recount required |
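The tiers above can be expressed as a small classifier. This is an illustrative TypeScript sketch, not platform API: the tier names are invented here, the boundaries are the (configurable) defaults from the table, and sub-1% non-zero variances — which the table leaves unspecified — are folded into the yellow tier.

```typescript
// Maps an absolute variance percentage to the action tier from the table above.
// Thresholds are the configurable defaults; tier names are illustrative.
type VarianceAction =
  | "auto_approve"        // 0%  (green)
  | "review_recommended"  // up to 2% (yellow)
  | "manager_review"      // up to 5% (orange)
  | "recount";            // > 5% (red)

function classifyVariance(variancePercent: number): VarianceAction {
  const v = Math.abs(variancePercent); // over and short both count
  if (v === 0) return "auto_approve";
  if (v <= 2) return "review_recommended";
  if (v <= 5) return "manager_review";
  return "recount";
}
```

For example, the +2.3% variance in the session summary above lands in the orange tier, requiring manager review.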
Screen 6: Sync Status
Purpose: Monitor data synchronization with central server.
Route: /sync
╔════════════════════════════════════════════════════════════════════╗
║ SYNC STATUS [< Menu] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ CONNECTION ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ ● Connected to Central Server ║
║ │ Server: api.nexuspos.com ║
║ │ Latency: 45ms ║
║ │ Last Sync: 2 minutes ago ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ PENDING UPLOADS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ ┌─────────────────────────────────────────────────────────┐║
║ │ │ Session #GM-2024-1229-001 Uploading │║
║ │ │ 312 items, 847 tag reads │║
║ │ │ Progress: ████████████░░░░░░░░ 62% │║
║ │ └─────────────────────────────────────────────────────────┘║
║ │ ║
║ │ ┌─────────────────────────────────────────────────────────┐║
║ │ │ Session #GM-2024-1228-003 Pending │║
║ │ │ 156 items, 423 tag reads │║
║ │ │ Waiting... │║
║ │ └─────────────────────────────────────────────────────────┘║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ RECENT SYNCS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ ✓ Product Catalog Updated 10 min ago 1,245 items ║
║ │ ✓ Tag Mappings Updated 10 min ago 8,432 tags ║
║ │ ✓ User List Updated 1 hour ago 24 users ║
║ │ ✓ Location Config Updated 1 hour ago 5 locations ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ ┌─────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ [SYNC NOW] │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────┘ ║
║ ║
║ Storage: 245 MB used of 2 GB | Last Full Sync: 12/29 9:00 AM ║
║ ║
║ ──────────────────────────────────────────────────────────────── ║
║ Reader: Idle | Battery: 78% | ● Online ║
╚════════════════════════════════════════════════════════════════════╝
16.4 Zebra RFID Reader Integration
Supported Devices
| Model | Form Factor | Range | Use Case |
|---|---|---|---|
| MC3390R | Handheld | 20 ft | Store counts |
| RFD40 | Sled | 12 ft | Attaches to phone |
| FX9600 | Fixed | 30 ft | Dock door receiving |
SDK Integration
public interface IRfidService
{
event EventHandler<TagReadEventArgs> TagRead;
event EventHandler<BatteryEventArgs> BatteryChanged;
event EventHandler<ReaderEventArgs> ReaderConnected;
event EventHandler<ReaderEventArgs> ReaderDisconnected;
Task<bool> ConnectAsync();
Task DisconnectAsync();
Task StartInventoryAsync();
Task StopInventoryAsync();
Task<ReaderStatus> GetStatusAsync();
Task SetPowerLevelAsync(int dbm);
}
public class ZebraRfidService : IRfidService
{
    // Not readonly: the reader is discovered and assigned in ConnectAsync().
    private RFIDReader _reader;

    public event EventHandler<TagReadEventArgs> TagRead;
    public event EventHandler<BatteryEventArgs> BatteryChanged;
    public event EventHandler<ReaderEventArgs> ReaderConnected;
    public event EventHandler<ReaderEventArgs> ReaderDisconnected;
public async Task<bool> ConnectAsync()
{
var readers = RFIDReader.GetAvailableReaders();
if (readers.Count == 0) return false;
_reader = readers[0];
_reader.Events.TagReadEvent += OnTagRead;
_reader.Events.ReaderAppearEvent += OnReaderAppear;
_reader.Events.ReaderDisappearEvent += OnReaderDisappear;
_reader.Events.BatteryEvent += OnBatteryChanged;
return await _reader.ConnectAsync();
}
public async Task StartInventoryAsync()
{
var config = new InventoryConfig
{
MemoryBank = MEMORY_BANK.MEMORY_BANK_EPC,
ReportUnique = true,
StopTrigger = new StopTrigger
{
StopTriggerType = STOP_TRIGGER_TYPE.STOP_TRIGGER_TYPE_TAG_OBSERVATION
}
};
await _reader.Inventory.PerformAsync(config);
}
private void OnTagRead(object sender, TagDataEventArgs e)
{
foreach (var tag in e.ReadEventData.TagData)
{
var epc = tag.TagID;
var rssi = tag.PeakRSSI;
TagRead?.Invoke(this, new TagReadEventArgs(epc, rssi));
}
}
}
Power Level Settings
| Power (dBm) | Range | Battery Impact | Use Case |
|---|---|---|---|
| 30 (Max) | 20+ ft | High | Full store |
| 25 | 15 ft | Medium | Zone count |
| 20 | 10 ft | Low | Spot check |
| 15 | 5 ft | Minimal | Single rack |
16.5 Tag Printing Workflow
Encoding New Tags
╔════════════════════════════════════════════════════════════════════╗
║ PRINT RFID TAGS [< Back] ║
╠════════════════════════════════════════════════════════════════════╣
║ ║
║ PRODUCT ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ NXJ1078-NAV-M ║
║ │ Galaxy V-Neck Tee - Navy, Medium ║
║ │ Price: $29.00 ║
║ │ Current Stock: 15 (GM) ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ PRINT SETTINGS ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ║
║ │ Quantity: [ 5 ] tags ║
║ │ ║
║ │ Tag Type: ○ Hang Tag (Apparel) ║
║ │ ● Price Tag (Standard) ║
║ │ ○ Label (Adhesive) ║
║ │ ║
║ │ Printer: [Zebra ZD621R ▼] ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ TAG PREVIEW ║
║ ┌────────────────────────────────────────────────────────────────╢
║ │ ┌─────────────────────────┐ ║
║ │ │ NEXUS CLOTHING │ ║
║ │ │ │ ║
║ │ │ Galaxy V-Neck Tee │ ║
║ │ │ Navy / Medium │ ║
║ │ │ │ ║
║ │ │ $29.00 │ ║
║ │ │ │ ║
║ │ │ ||||||||||||||||||| │ <- Barcode ║
║ │ │ NXJ1078-NAV-M │ ║
║ │ │ │ ║
║ │ │ [RFID ENCODED] │ <- Chip indicator ║
║ │ └─────────────────────────┘ ║
║ │ ║
║ └────────────────────────────────────────────────────────────────╢
║ ║
║ ┌─────────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ [PRINT 5 TAGS] │ ║
║ │ │ ║
║ └─────────────────────────────────────────────────────────────┘ ║
║ ║
╚════════════════════════════════════════════════════════════════════╝
ZPL Template
^XA
^FO50,50^A0N,30,30^FDNexus Clothing^FS
^FO50,100^A0N,40,40^FD%PRODUCT_NAME%^FS
^FO50,150^A0N,25,25^FD%VARIANT%^FS
^FO50,200^A0N,50,50^FD$%PRICE%^FS
^FO50,280^BY2^BCN,80,Y,N,N^FD%SKU%^FS
^RFW,H,2,4,1^FD%EPC%^FS
^RFR,H,0,12,1^FN0^FS
^XZ
Template Variables:
| Variable | Source | Example |
|---|---|---|
| %PRODUCT_NAME% | Product.Name | Galaxy V-Neck Tee |
| %VARIANT% | Size/Color | Navy / Medium |
| %PRICE% | Product.Price | 29.00 |
| %SKU% | Product.SKU | NXJ1078-NAV-M |
| %EPC% | Generated | E28011606000020752345 |
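Substituting the template variables is a simple string transform before the ZPL is sent to the printer. The helper below is a hypothetical sketch (not part of the platform API); it replaces any `%NAME%` placeholder found in the variables map and leaves unknown placeholders untouched so a misconfigured template fails visibly on the label rather than silently.

```typescript
// Fill %VAR% placeholders in a ZPL template from a variables map.
// Unknown placeholders are left as-is so template errors are visible.
function fillZplTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/%([A-Z_]+)%/g, (match, name) =>
    name in vars ? vars[name] : match);
}
```

For example, `fillZplTemplate("^FO50,280^BY2^BCN,80,Y,N,N^FD%SKU%^FS", { SKU: "NXJ1078-NAV-M" })` yields the barcode line with the SKU inlined.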
16.6 Local Database Schema
-- Sessions (with auto-save checkpoint support)
CREATE TABLE sessions (
id TEXT PRIMARY KEY,
server_session_id TEXT, -- ID from POST /sessions response
location_code TEXT NOT NULL,
count_type TEXT NOT NULL, -- full_inventory, cycle_count, spot_check, find_item
section TEXT, -- assigned section (replaces zone)
operator_id TEXT NOT NULL,
started_at TEXT NOT NULL,
ended_at TEXT,
status TEXT DEFAULT 'active', -- active, paused, completed, cancelled
notes TEXT,
expected_count INTEGER, -- from server, for progress tracking
last_checkpoint_at TEXT, -- auto-save: last SQLite flush timestamp
total_reads_at_checkpoint INTEGER DEFAULT 0, -- reads saved at last checkpoint
interrupted INTEGER DEFAULT 0, -- 1 if app crashed/closed during session
is_joined INTEGER DEFAULT 0, -- 1 if operator joined existing session
synced_at TEXT
);
-- Tag Reads (raw data, deduplicated locally by EPC)
CREATE TABLE tag_reads (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
epc TEXT NOT NULL,
rssi INTEGER,
read_count INTEGER DEFAULT 1, -- times this EPC was read
first_seen_at TEXT NOT NULL,
last_seen_at TEXT NOT NULL,
UNIQUE (session_id, epc), -- one row per EPC per session
FOREIGN KEY (session_id) REFERENCES sessions(id)
);
-- Session Items (aggregated: EPC → SKU resolution)
CREATE TABLE session_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
sku TEXT,
epc TEXT NOT NULL,
product_name TEXT,
quantity INTEGER DEFAULT 1,
expected_qty INTEGER,
status TEXT DEFAULT 'matched', -- matched, over, short, unknown
FOREIGN KEY (session_id) REFERENCES sessions(id)
);
-- Product Cache
CREATE TABLE products (
sku TEXT PRIMARY KEY,
name TEXT NOT NULL,
barcode TEXT,
price REAL,
category TEXT,
last_synced TEXT NOT NULL
);
-- Tag Mappings (EPC prefix → SKU for offline decoding)
CREATE TABLE tag_mappings (
epc_prefix TEXT PRIMARY KEY,
sku TEXT NOT NULL,
last_synced TEXT NOT NULL
);
-- Sync Queue (chunked upload tracking)
CREATE TABLE sync_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
entity_type TEXT NOT NULL, -- 'session_chunk', 'session_complete'
entity_id TEXT NOT NULL, -- session_id
action TEXT NOT NULL, -- 'upload_chunk', 'complete'
payload TEXT NOT NULL, -- JSON chunk data
chunk_index INTEGER, -- 0-based chunk number
total_chunks INTEGER, -- total expected chunks for this session
chunks_uploaded INTEGER DEFAULT 0, -- chunks successfully uploaded so far
retry_count INTEGER DEFAULT 0,
created_at TEXT NOT NULL,
status TEXT DEFAULT 'pending' -- pending, uploading, completed, failed
);
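The `UNIQUE (session_id, epc)` constraint on `tag_reads` means a re-read of the same tag updates one row rather than inserting a new one. A minimal in-memory sketch of that deduplication step (before a checkpoint flushes the map to SQLite) is shown below; the record shape mirrors the `tag_reads` columns, and keeping the strongest RSSI is an assumption, not a documented rule.

```typescript
// One record per EPC per session, mirroring the tag_reads columns.
interface TagRead {
  epc: string;
  rssi: number;
  readCount: number;   // times this EPC was read
  firstSeenAt: string;
  lastSeenAt: string;
}

// Repeated reads of the same tag bump read_count and last_seen_at
// instead of creating a new entry.
function recordRead(reads: Map<string, TagRead>, epc: string, rssi: number, at: string): void {
  const existing = reads.get(epc);
  if (existing) {
    existing.readCount += 1;
    existing.lastSeenAt = at;
    existing.rssi = Math.max(existing.rssi, rssi); // assumption: keep strongest signal
  } else {
    reads.set(epc, { epc, rssi, readCount: 1, firstSeenAt: at, lastSeenAt: at });
  }
}
```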
16.7 Offline Capabilities
| Feature | Offline Behavior |
|---|---|
| Login | Uses cached credentials |
| Session Start | Creates local session ID |
| Scanning | Full functionality |
| Product Lookup | Uses cached catalog |
| Session Summary | Calculates from local data |
| Submit | Queues for later sync |
| Sync Status | Shows pending items |
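Offline product lookup resolves scanned EPCs through the cached `tag_mappings` table (see 16.6), which keys on `epc_prefix`. One plausible resolution strategy is longest-prefix match — an assumption, since the chapter does not pin down the matching rule; if the platform uses a fixed prefix length, a direct substring lookup would replace the loop.

```typescript
// Resolve an EPC to a SKU via longest-prefix match over cached tag_mappings.
// Longest-match-wins is an assumption; a null result marks the tag "unknown".
function resolveSku(epc: string, mappings: Record<string, string>): string | null {
  let bestPrefix = "";
  for (const prefix of Object.keys(mappings)) {
    if (epc.startsWith(prefix) && prefix.length > bestPrefix.length) {
      bestPrefix = prefix;
    }
  }
  return bestPrefix ? mappings[bestPrefix] : null;
}
```

Tags that resolve to `null` surface in the session summary under UNKNOWN, as in the wireframe earlier in this chapter.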
Sync Priority
| Priority | Data Type | Frequency |
|---|---|---|
| 1 (Critical) | Completed sessions | Immediate when online |
| 2 (High) | Session chunks | Background chunked upload |
| 3 (Medium) | Product updates | Pull on app launch |
| 4 (Low) | User list | Daily refresh |
Chunked Upload Strategy
Sessions are uploaded in chunks of 5,000 events each (the default chunk_upload_size), which keeps even very large counts (100,000+ tag reads) syncing reliably:
Session: 47,000 tag reads → 10 chunks
Chunk 0: events[0..4999] → POST /sessions/{id}/chunks ✓
Chunk 1: events[5000..9999] → POST /sessions/{id}/chunks ✓
Chunk 2: events[10000..14999] → POST /sessions/{id}/chunks ✗ (network error)
...retry after reconnect...
Chunk 2: events[10000..14999] → POST /sessions/{id}/chunks ✓ (idempotent)
Chunk 3-9: ...
POST /sessions/{id}/complete → Trigger variance calculation
Resume logic: On network failure, call GET /sessions/{id}/upload-status to identify missing chunks and retry only those. Server deduplicates by (session_id, epc) UNIQUE constraint, making retries safe.
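The chunking and resume arithmetic from the walk-through can be sketched directly. The function names below are hypothetical; the chunk size and the upload-status shape (`chunks_received` as an index array) come from this section.

```typescript
// Split a session's deduplicated events into fixed-size chunks for
// POST /sessions/{id}/chunks. 5,000 is the default chunk_upload_size.
function toChunks<T>(events: T[], chunkSize = 5000): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < events.length; i += chunkSize) {
    chunks.push(events.slice(i, i + chunkSize));
  }
  return chunks;
}

// After a network failure, GET /sessions/{id}/upload-status reports which
// chunk indexes the server already has; only the gaps are retried.
function missingChunks(totalChunks: number, received: number[]): number[] {
  const got = new Set(received);
  const missing: number[] = [];
  for (let i = 0; i < totalChunks; i++) {
    if (!got.has(i)) missing.push(i);
  }
  return missing;
}
```

A 47,000-read session yields 10 chunks (nine full, one of 2,000); if chunk 2 fails mid-upload, `missingChunks(10, chunks_received)` returns `[2]` and only that chunk is re-sent — safe, because the server deduplicates retries.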
Session Auto-Save & Recovery
Auto-save protects against data loss from app crashes, battery death, or accidental closure.
Auto-Save Triggers:
| Trigger | Action |
|---|---|
| Every 30 seconds (configurable) | Flush tag_reads to SQLite, update last_checkpoint_at |
| Battery ≤ 20% | Force checkpoint + yellow warning |
| Battery ≤ 10% | Force checkpoint + orange warning + “Save & Exit” prompt |
| Battery ≤ 5% | Force checkpoint + auto-pause session |
| App backgrounded | Force checkpoint |
Recovery Flow (on app restart):
App Launch
│
├─── Check: Any sessions WHERE status='active' AND ended_at IS NULL?
│
├── NO → Normal flow → Home Dashboard
│
└── YES → Show Recovery Dialog
┌─────────────────────────────────────────────┐
│ │
│ ⚠️ Interrupted Session Found │
│ │
│ Session: #GM-2024-1229-001 │
│ Type: Full Inventory │
│ Tags Read: 2,847 │
│ Last Save: 3:42 PM (12 min ago) │
│ │
│ ┌─────────────┐ ┌─────────────────────┐ │
│ │ RESUME │ │ DISCARD SESSION │ │
│ └─────────────┘ └─────────────────────┘ │
│ │
└─────────────────────────────────────────────┘
- Resume: Reload cached tag_reads, reconnect the reader, and continue scanning from the checkpoint
- Discard: Mark the session cancelled in SQLite. Data is preserved locally but not uploaded to the server
Battery Warning Indicators (shown in status bar during scanning):
| Battery | Color | Icon | Action |
|---|---|---|---|
| > 20% | Green | ● | Normal operation |
| 11-20% | Yellow | ▲ | Warning badge, checkpoint forced |
| 6-10% | Orange | ▲▲ | “Save & Exit” prompt |
| ≤ 5% | Red | ◼ | Auto-pause, force checkpoint |
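The battery tiers above reduce to a small classifier. This is an illustrative TypeScript sketch of the threshold logic only — the tier names are invented here, and the actual app would pair each tier with its checkpoint/prompt side effects.

```typescript
// Maps a battery percentage to the warning tier from the table above.
type BatteryTier =
  | "normal"         // > 20%: green
  | "warning"        // 11-20%: yellow, checkpoint forced
  | "save_and_exit"  // 6-10%: orange, "Save & Exit" prompt
  | "auto_pause";    // <= 5%: red, auto-pause + force checkpoint

function batteryTier(percent: number): BatteryTier {
  if (percent > 20) return "normal";
  if (percent > 10) return "warning";
  if (percent > 5) return "save_and_exit";
  return "auto_pause";
}
```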
16.8 Configuration Architecture
Where Configuration Lives
┌─────────────────────────────────────────────────────────────────────────────────┐
│ RFID CONFIGURATION ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────────┤
│ │
│ TENANT ADMIN PORTAL (app.rapos.com) │
│ ┌───────────────────────────────────────────────────────────────────────────┐ │
│ │ Settings > RFID │ │
│ │ ├── Devices (claim codes, device list) ← ADMIN CONFIGURES HERE │ │
│ │ ├── Printers (IP addresses, templates) │ │
│ │ ├── Tag Config (EPC prefix, thresholds) │ │
│ │ └── Templates (label designs) │ │
│ └───────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────────────────────────────────────────────────────────┐ │
│ │ CENTRAL API (api.rapos.com) │ │
│ │ ├── GET /api/rfid/config ← Mobile app fetches on startup │ │
│ │ ├── GET /api/rfid/products ← Product catalog cache │ │
│ │ ├── GET /api/rfid/tag-mappings ← EPC → SKU mappings │ │
│ │ └── POST /api/rfid/sessions ← Upload scan sessions │ │
│ └───────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ RAPTAG MOBILE APP (This Chapter) │
│ ┌───────────────────────────────────────────────────────────────────────────┐ │
│ │ Local SQLite Database │ │
│ │ ├── cached_config ← Downloaded from API on startup │ │
│ │ ├── products ← Product catalog for offline use │ │
│ │ ├── tag_mappings ← EPC decoding for offline use │ │
│ │ └── sessions ← Locally created, synced when online │ │
│ └───────────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────┘
Configuration Flow
1. DEVICE REGISTRATION (One-time setup)
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Admin Portal│────▶│ Generate │────▶│ Display │
│ Settings │ │ Claim Code │ │ X7K9M2 │
└─────────────┘ └─────────────┘ └─────────────┘
│
▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Scanner │◀────│ Validate │◀────│ Enter Code │
│ Registered │ │ & Activate │ │ on Device │
└─────────────┘ └─────────────┘ └─────────────┘
> **Claim Code**: The code is a 6-character alphanumeric string with 24-hour expiry and one-time use. Before initializing the RFID reader, operators must enter the claim code generated in the Admin Portal. See Chapter 05 Section 5.16.1 for full claim code specifications.
2. ONGOING CONFIGURATION SYNC
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ App Launch │────▶│ Check API │────▶│ Download │
│ │ │ /rfid/config│ │ Updates │
└─────────────┘ └─────────────┘ └─────────────┘
│
▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Update │◀────│ Merge with │◀────│ Store in │
│ Local Cache │ │ Local Data │ │ SQLite │
└─────────────┘ └─────────────┘ └─────────────┘
What’s Configured Where
| Setting | Location | Notes |
|---|---|---|
| EPC Prefix | Tenant Admin > RFID > Tag Config | Read-only in app |
| Variance Threshold | Tenant Admin > RFID > Tag Config | Applied during session summary |
| Auto-Save Interval | Tenant Admin > RFID > Tag Config | Default 30s, synced to app |
| Chunk Upload Size | Tenant Admin > RFID > Tag Config | Default 5,000 events |
| RSSI Threshold | Tenant Admin > RFID > Tag Config | Default -70 dBm, filter phantom reads |
| Claim Codes | Tenant Admin > RFID > Devices | Generated per location |
| Printer IPs | Tenant Admin > RFID > Printers | Synced to app |
| Label Templates | Tenant Admin > RFID > Templates | Synced to app |
| Sections | Tenant Admin > RFID > Locations | Configurable per location |
| Power Level | Mobile App > Settings | User-adjustable per session |
| Sound/Vibration | Mobile App > Settings | User preference |
| Session Notes | Mobile App > Session Start | Per-session |
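The split in the table above implies a merge step on the device: tenant-managed settings come from `/api/rfid/config`, and user-adjustable ones may diverge locally. The sketch below assumes `allow_overrides` in the config response gates whether local preferences are honored at all — that interpretation is an assumption, as is the reduced field set; only the field names come from the API response shown next.

```typescript
// Subset of the /api/rfid/config response relevant to merging.
interface RfidConfig {
  auto_save_interval_seconds: number;
  chunk_upload_size: number;
  allow_overrides: boolean;
}

// Tenant config wins unless the tenant permits device-local overrides.
// (Interpretation of allow_overrides is an assumption.)
function effectiveConfig(server: RfidConfig, local: Partial<RfidConfig>): RfidConfig {
  if (!server.allow_overrides) return server;
  return { ...server, ...local, allow_overrides: server.allow_overrides };
}
```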
API Endpoints for Mobile App
// Configuration downloaded on startup
GET /api/rfid/config
Response: {
tenant_id: string,
epc_prefix: string,
variance_threshold: number,
auto_save_interval_seconds: number,
chunk_upload_size: number,
min_rssi_threshold: number,
allow_overrides: boolean,
session_timeout_minutes: number,
printers: [{
id: string,
name: string,
ip_address: string,
location_code: string,
model: string
}],
templates: [{
id: string,
name: string,
type: "hang_tag" | "price_tag" | "label",
zpl_content: string
}],
sections: [{
location_code: string,
sections: string[]
}]
}
// Product catalog for offline use
GET /api/rfid/products
Response: [{
sku: string,
name: string,
barcode: string,
price: number,
category: string
}]
// EPC → SKU mappings for offline decoding
GET /api/rfid/tag-mappings
Response: [{
epc_prefix: string,
sku: string
}]
// Create new counting session
POST /api/rfid/sessions
Body: {
location_code: string,
count_type: "full_inventory" | "cycle_count" | "spot_check" | "find_item",
section: string,
notes: string
}
Response: {
session_id: string,
expected_count: number,
sections_available: string[]
}
// Join existing session as additional operator
POST /api/rfid/sessions/{sessionId}/join
Body: {
operator_id: string,
device_id: string,
assigned_section: string
}
Response: {
session_id: string,
operator_count: number,
your_section: string,
session_started_at: string
}
// Upload scan events in chunks (≤5,000 events per chunk)
// Idempotent: duplicate (session_id, epc) pairs are deduplicated server-side
POST /api/rfid/sessions/{sessionId}/chunks
Body: {
chunk_index: number,
total_chunks: number,
operator_id: string,
device_id: string,
events: [{
epc: string,
rssi: number,
read_count: number,
first_seen_at: string,
last_seen_at: string
}]
}
Response: {
events_accepted: number,
events_deduplicated: number,
chunks_received: number,
chunks_expected: number
}
// Check upload progress (for resume after network failure)
GET /api/rfid/sessions/{sessionId}/upload-status
Response: {
session_id: string,
status: "uploading" | "complete" | "incomplete",
chunks_received: number[],
chunks_missing: number[],
total_events: number,
unique_epcs: number
}
// Complete session and trigger variance calculation
POST /api/rfid/sessions/{sessionId}/complete
Body: {
ended_at: string,
notes: string
}
Response: {
session_id: string,
status: "completed",
variance_percent: number,
review_required: boolean
}
16.9 Summary
The Raptag mobile application provides:
- Fast Authentication: PIN-based login with offline support
- Home Dashboard: At-a-glance view of assigned counts, sync status, and device health
- Flexible Counting: Multiple session types for different inventory needs
- Multi-Operator Sessions: Multiple operators scanning sections in parallel with server-side deduplication
- Real-time Scanning: Live tag counts with progress tracking and signal feedback
- Auto-Save & Recovery: 30-second SQLite checkpoints with crash recovery
- Accuracy Tracking: Variance calculation and discrepancy flagging
- Chunked Sync: Reliable batch uploads (5,000 events/chunk) with idempotent deduplication
- Tag Printing: Integrated label printing with SGTIN-96 encoding
Configuration is managed centrally in the Tenant Admin Portal (Chapter 15), with the mobile app downloading settings on startup. This ensures consistent configuration across all devices and enables remote management without physically accessing scanners.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | V - Frontend |
| Chapter | 16 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 17: UI Component Library
The Shared Design System
This chapter defines the complete UI component library shared across all POS Platform applications. These specifications ensure visual consistency, reduce development time, and enable rapid prototyping.
17.1 Design Tokens
Color Palette
Primary Colors
| Token | Hex | RGB | Usage |
|---|---|---|---|
--color-primary | #1976D2 | 25, 118, 210 | Main brand, primary buttons, links |
--color-primary-dark | #1565C0 | 21, 101, 192 | Hover states, headers |
--color-primary-light | #BBDEFB | 187, 222, 251 | Selected backgrounds, info panels |
--color-primary-50 | #E3F2FD | 227, 242, 253 | Subtle backgrounds |
Secondary Colors
| Token | Hex | RGB | Usage |
|---|---|---|---|
--color-secondary | #424242 | 66, 66, 66 | Secondary buttons, icons |
--color-secondary-dark | #212121 | 33, 33, 33 | Text, headings |
--color-secondary-light | #757575 | 117, 117, 117 | Secondary text, labels |
Status Colors
| Token | Hex | RGB | Usage |
|---|---|---|---|
--color-success | #4CAF50 | 76, 175, 80 | Success states, positive |
--color-success-light | #E8F5E9 | 232, 245, 233 | Success backgrounds |
--color-success-dark | #2E7D32 | 46, 125, 50 | Success text on light bg |
--color-warning | #FF9800 | 255, 152, 0 | Warning states, caution |
--color-warning-light | #FFF3E0 | 255, 243, 224 | Warning backgrounds |
--color-warning-dark | #E65100 | 230, 81, 0 | Warning text on light bg |
--color-error | #F44336 | 244, 67, 54 | Error states, destructive |
--color-error-light | #FFEBEE | 255, 235, 238 | Error backgrounds |
--color-error-dark | #C62828 | 198, 40, 40 | Error text on light bg |
--color-info | #2196F3 | 33, 150, 243 | Informational states |
--color-info-light | #E3F2FD | 227, 242, 253 | Info backgrounds |
--color-info-dark | #1565C0 | 21, 101, 192 | Info text on light bg |
Neutral Colors
| Token | Hex | Usage |
|---|---|---|
--color-white | #FFFFFF | Card backgrounds, content areas |
--color-gray-50 | #FAFAFA | Alternating row backgrounds |
--color-gray-100 | #F5F5F5 | Page backgrounds, disabled |
--color-gray-200 | #EEEEEE | Light borders, dividers |
--color-gray-300 | #E0E0E0 | Standard borders |
--color-gray-400 | #BDBDBD | Input borders, icons |
--color-gray-500 | #9E9E9E | Disabled text, placeholders |
--color-gray-600 | #757575 | Secondary text |
--color-gray-700 | #616161 | Icons, labels |
--color-gray-800 | #424242 | Body text |
--color-gray-900 | #212121 | Headings, primary text |
--color-black | #000000 | Maximum contrast |
Typography Scale
Font Families
--font-family-base: 'Segoe UI', -apple-system, BlinkMacSystemFont,
'Roboto', 'Helvetica Neue', Arial, sans-serif;
--font-family-mono: 'Cascadia Code', 'Fira Code', 'Consolas',
'Monaco', 'Courier New', monospace;
Font Sizes
| Token | Size | Line Height | Usage |
|---|---|---|---|
--font-size-xs | 11px | 1.4 | Captions, badges |
--font-size-sm | 12px | 1.4 | Secondary text, timestamps |
--font-size-base | 14px | 1.5 | Body text, inputs |
--font-size-md | 16px | 1.5 | Emphasized body |
--font-size-lg | 18px | 1.4 | Section headers |
--font-size-xl | 20px | 1.3 | Card titles |
--font-size-2xl | 24px | 1.3 | Page titles |
--font-size-3xl | 30px | 1.2 | Dashboard stats |
--font-size-4xl | 36px | 1.1 | Large numbers |
Font Weights
| Token | Weight | Usage |
|---|---|---|
--font-weight-light | 300 | Large titles |
--font-weight-normal | 400 | Body text |
--font-weight-medium | 500 | Buttons, emphasized |
--font-weight-semibold | 600 | Headers, labels |
--font-weight-bold | 700 | Stats, strong emphasis |
Spacing System
| Token | Value | Usage |
|---|---|---|
--space-0 | 0 | No spacing |
--space-1 | 4px | Tight, inline elements |
--space-2 | 8px | Component padding, gaps |
--space-3 | 12px | Card padding |
--space-4 | 16px | Section margins |
--space-5 | 20px | Larger gaps |
--space-6 | 24px | Panel padding |
--space-8 | 32px | Section spacing |
--space-10 | 40px | Large separations |
--space-12 | 48px | Page margins |
Border Radius
| Token | Value | Usage |
|---|---|---|
--radius-none | 0 | Sharp corners |
--radius-sm | 2px | Subtle rounding |
--radius-base | 4px | Inputs, buttons |
--radius-md | 6px | Cards |
--radius-lg | 8px | Panels, modals |
--radius-xl | 12px | Large cards |
--radius-full | 9999px | Pills, circles |
Shadows
| Token | Value | Usage |
|---|---|---|
--shadow-sm | 0 1px 2px rgba(0,0,0,0.05) | Subtle lift |
--shadow-base | 0 2px 4px rgba(0,0,0,0.1) | Standard cards |
--shadow-md | 0 4px 8px rgba(0,0,0,0.12) | Elevated cards |
--shadow-lg | 0 8px 16px rgba(0,0,0,0.15) | Dropdowns, popovers |
--shadow-xl | 0 12px 24px rgba(0,0,0,0.2) | Modals |
17.2 Component Specifications
1. StatCard
Purpose: Display key metrics with trend indicators on dashboards.
ASCII Wireframe:
┌────────────────────────────────────┐
│ [icon] │
│ │
│ LABEL │
│ 12,450 │
│ +12.3% vs previous │
│ │
└────────────────────────────────────┘
Variants:
STANDARD COMPACT INLINE
┌──────────────────┐ ┌──────────────────┐ ┌──────────────────────────┐
│ [icon] │ │ Orders 1,234 │ │ [icon] Orders: 1,234 +5% │
│ Orders │ │ +12% ▲ │ └──────────────────────────┘
│ 1,234 │ └──────────────────┘
│ +12% ▲ │
└──────────────────┘
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Title | string | required | Metric label |
Value | string | required | Primary value |
Icon | IconType | null | Optional icon |
Change | string | null | Change indicator (e.g., “+12%”) |
IsPositive | bool | true | Trend direction |
Color | string | “primary” | primary, success, warning, error |
Size | string | “standard” | standard, compact, inline |
Blazor Usage:
<StatCard Title="Today's Sales"
Value="$12,450"
Icon="IconType.DollarSign"
Change="+12.3%"
IsPositive="true"
Color="success" />
2. DataGrid
Purpose: Display tabular data with sorting, filtering, and pagination.
ASCII Wireframe:
┌─────────────────────────────────────────────────────────────────────┐
│ [x] │ ORDER # ▼ │ DATE │ CUSTOMER │ AMOUNT ▼ │ STATUS │
├─────┼────────────┼────────────┼───────────────┼──────────┼─────────┤
│ [ ] │ #1234 │ 12/29/2024 │ John Smith │ $99.00 │ ● New │
│ [x] │ #1235 │ 12/29/2024 │ Jane Doe │ $149.00 │ ● Done │
│ [ ] │ #1236 │ 12/28/2024 │ Bob Johnson │ $75.50 │ ! Error │
├─────┴────────────┴────────────┴───────────────┴──────────┴─────────┤
│ Showing 1-50 of 256 << < Page 1 of 6 > >> │
└─────────────────────────────────────────────────────────────────────┘
Column Types:
TEXT COLUMN NUMBER COLUMN STATUS COLUMN ACTION COLUMN
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ John Smith │ │ $99.00 │ │ ● Completed │ │ [Ed] [Del] │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
Left Right Center Center
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Items | IEnumerable | required | Data source |
Columns | List | required | Column definitions |
Selectable | bool | false | Enable row selection |
Sortable | bool | true | Enable column sorting |
Paginate | bool | true | Enable pagination |
PageSize | int | 25 | Items per page |
OnRowClick | EventCallback | null | Row click handler |
OnSelectionChange | EventCallback | null | Selection handler |
Column Definition:
public class Column<T>
{
public string Header { get; set; }
public Func<T, object> ValueFunc { get; set; }
public string Align { get; set; } = "left"; // left, center, right
public bool Sortable { get; set; } = true;
public string Width { get; set; } = "auto";
    public RenderFragment<T> Template { get; set; } // templated cell; exposes the row as @context
}
Blazor Usage:
<DataGrid Items="@orders" Selectable="true" OnRowClick="ViewOrder">
<Column Header="Order #" ValueFunc="@(o => o.OrderNumber)" />
<Column Header="Date" ValueFunc="@(o => o.Date.ToShortDateString())" />
<Column Header="Amount" ValueFunc="@(o => o.Total)" Align="right" />
<Column Header="Status">
<Template>
<StatusBadge Status="@context.Status" />
</Template>
</Column>
</DataGrid>
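The footer in the wireframe ("Showing 1-50 of 256 … Page 1 of 6") follows from simple pagination arithmetic. The helper below is an illustrative sketch of that math, not the component's internals.

```typescript
// Pagination math behind the DataGrid footer, e.g.
// pageInfo(256, 50, 1) -> "Showing 1-50 of 256, Page 1 of 6".
function pageInfo(totalItems: number, pageSize: number, page: number) {
  const totalPages = Math.max(1, Math.ceil(totalItems / pageSize));
  const first = (page - 1) * pageSize + 1;
  const last = Math.min(page * pageSize, totalItems); // final page may be short
  return { first, last, totalPages };
}
```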
3. StatusBadge
Purpose: Display color-coded status indicators.
ASCII Wireframe:
SUCCESS WARNING ERROR INFO NEUTRAL
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│● Active │ │● Pending│ │● Failed │ │● Syncing│ │● Draft │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
Green bg Orange bg Red bg Blue bg Gray bg
Size Variants:
SMALL MEDIUM (Default) LARGE
┌──────────┐ ┌─────────────┐ ┌────────────────┐
│ ● Active │ │ ● Active │ │ ● Active │
└──────────┘ └─────────────┘ └────────────────┘
11px font 13px font 15px font
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Status | string | required | Status text |
Variant | string | “info” | success, warning, error, info, neutral |
Size | string | “medium” | small, medium, large |
ShowDot | bool | true | Show status dot |
CSS Classes:
.status-badge {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 4px 8px;
border-radius: var(--radius-base);
font-size: var(--font-size-sm);
font-weight: var(--font-weight-medium);
}
.status-badge--success {
background: var(--color-success-light);
color: var(--color-success-dark);
}
.status-badge--warning {
background: var(--color-warning-light);
color: var(--color-warning-dark);
}
.status-badge--error {
background: var(--color-error-light);
color: var(--color-error-dark);
}
.status-badge--info {
background: var(--color-info-light);
color: var(--color-info-dark);
}
.status-badge--neutral {
background: var(--color-gray-100);
color: var(--color-gray-700);
}
.status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
background: currentColor;
}
Blazor Usage:
<StatusBadge Status="Active" Variant="success" />
<StatusBadge Status="Pending" Variant="warning" />
<StatusBadge Status="Failed" Variant="error" ShowDot="false" />
4. SearchInput
Purpose: Debounced search input with autocomplete support.
ASCII Wireframe:
EMPTY STATE WITH VALUE
┌────────────────────────────────┐ ┌────────────────────────────────┐
│ [O] Search products... │ │ [O] galaxy v-neck [X] │
└────────────────────────────────┘ └────────────────────────────────┘
WITH AUTOCOMPLETE LOADING STATE
┌────────────────────────────────┐ ┌────────────────────────────────┐
│ [O] galaxy v │ │ [O] galaxy v-neck [...] │
├────────────────────────────────┤ └────────────────────────────────┘
│ Galaxy V-Neck Tee │
│ Galaxy V-Neck Tank │
│ Galaxy Vintage Wash │
└────────────────────────────────┘
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Value | string | “” | Current value |
Placeholder | string | “Search…” | Placeholder text |
DebounceMs | int | 300 | Debounce delay |
AutoComplete | bool | false | Enable autocomplete |
Items | IEnumerable | null | Autocomplete items |
OnSearch | EventCallback | null | Search handler |
OnSelect | EventCallback | null | Selection handler |
Disabled | bool | false | Disable input |
Blazor Usage:
<SearchInput @bind-Value="searchTerm"
Placeholder="Search products..."
DebounceMs="300"
OnSearch="HandleSearch" />
<SearchInput @bind-Value="productSearch"
AutoComplete="true"
Items="@productSuggestions"
OnSelect="SelectProduct" />
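The DebounceMs behavior — fire the search only after the user pauses typing — is classic debouncing. The generic helper below illustrates the technique; it is a sketch of the idea, not the component's implementation.

```typescript
// Debounce: the wrapped callback fires only after `delayMs` of inactivity
// (SearchInput's DebounceMs, default 300). Each new call resets the timer.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

With a 300 ms delay, typing "galaxy v" keystroke by keystroke triggers a single search once the user stops, rather than one request per character.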
5. Modal
Purpose: Overlay dialog for forms, confirmations, and detail views.
ASCII Wireframe:
STANDARD MODAL
┌────────────────────────────────────────────────────────────┐
│ Modal Title [X] │
├────────────────────────────────────────────────────────────┤
│ │
│ Modal content goes here. │
│ │
│ This can include forms, text, images, or any other │
│ content that needs to be displayed in an overlay. │
│ │
├────────────────────────────────────────────────────────────┤
│ [Cancel] [Confirm] │
└────────────────────────────────────────────────────────────┘
CONFIRMATION MODAL (Compact)
┌─────────────────────────────────────────────┐
│ [!] Delete Item? [X] │
├─────────────────────────────────────────────┤
│ │
│ Are you sure you want to delete this item? │
│ This action cannot be undone. │
│ │
├─────────────────────────────────────────────┤
│ [Cancel] [Delete] │
└─────────────────────────────────────────────┘
FULLSCREEN MODAL (Mobile)
╔═════════════════════════════════════════════╗
║ [<] Modal Title ║
╠═════════════════════════════════════════════╣
║ ║
║ Full content area ║
║ (scrollable) ║
║ ║
╠═════════════════════════════════════════════╣
║ [Primary Action] ║
╚═════════════════════════════════════════════╝
Size Variants:
| Size | Width | Use Case |
|---|---|---|
small | 400px | Confirmations, alerts |
medium | 600px | Forms, details |
large | 800px | Complex forms, tables |
fullscreen | 100% | Mobile, immersive |
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Title | string | null | Modal title |
IsOpen | bool | false | Visibility state |
Size | string | “medium” | small, medium, large, fullscreen |
ShowClose | bool | true | Show close button |
CloseOnOverlay | bool | true | Close on backdrop click |
OnClose | EventCallback | null | Close handler |
ChildContent | RenderFragment | required | Modal body |
Footer | RenderFragment | null | Footer actions |
Blazor Usage:
<Modal Title="Edit Product"
IsOpen="@showModal"
Size="medium"
OnClose="CloseModal">
<ChildContent>
<EditForm Model="@product">
<!-- Form fields -->
</EditForm>
</ChildContent>
<Footer>
<Button Variant="secondary" OnClick="CloseModal">Cancel</Button>
<Button Variant="primary" OnClick="SaveProduct">Save</Button>
</Footer>
</Modal>
6. Toast
Purpose: Non-blocking notifications that auto-dismiss.
ASCII Wireframe:
SUCCESS TOAST ERROR TOAST
┌──────────────────────────┐ ┌──────────────────────────┐
│ [check] Product saved │ │ [X] Failed to save │
│ successfully │ │ Please try again │
│ [X] │ │ [X] │
└──────────────────────────┘ └──────────────────────────┘
WARNING TOAST INFO TOAST
┌──────────────────────────┐ ┌──────────────────────────┐
│ [!] Low inventory │ │ [i] Sync completed │
│ Check stock levels │ │ 245 items updated │
│ [X] │ │ [X] │
└──────────────────────────┘ └──────────────────────────┘
TOAST WITH ACTION
┌──────────────────────────────────────────┐
│ [!] Order requires attention │
│ Missing shipping address │
│ [View] [Dismiss]│
└──────────────────────────────────────────┘
Position Options:
TOP-RIGHT (Default) TOP-CENTER BOTTOM-RIGHT
┌─────────────────┐     ┌─────────────────┐
│           [T]   │     │      [T]        │
│           [T]   │     │      [T]        │     ┌─────────────────┐
│                 │     │                 │     │                 │
│                 │     │                 │     │           [T]   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Message | string | required | Toast message |
Title | string | null | Optional title |
Variant | string | “info” | success, warning, error, info |
Duration | int | 5000 | Auto-dismiss (ms), 0 = persist |
Position | string | “top-right” | Toast position |
ShowClose | bool | true | Show dismiss button |
Action | RenderFragment | null | Action buttons |
Toast Service:
public interface IToastService
{
void ShowSuccess(string message, string title = null);
void ShowError(string message, string title = null);
void ShowWarning(string message, string title = null);
void ShowInfo(string message, string title = null);
void Show(ToastOptions options);
void DismissAll();
}
Blazor Usage:
@inject IToastService Toast
<button @onclick="SaveProduct">Save</button>
@code {
async Task SaveProduct()
{
try
{
await productService.SaveAsync(product);
Toast.ShowSuccess("Product saved successfully");
}
catch
{
Toast.ShowError("Failed to save product", "Error");
}
}
}
7. LoadingSpinner
Purpose: Indicate loading states.
ASCII Wireframe:
SPINNER ONLY          WITH TEXT             OVERLAY
     ◐                 ◐  Loading...        ┌─────────────────┐
                                            │    ░░░░░░░░░    │
                                            │    ░   ◐   ░    │
                                            │    ░ Loading ░  │
                                            │    ░░░░░░░░░    │
                                            └─────────────────┘
Size Variants:
| Size | Diameter | Use Case |
|---|---|---|
small | 16px | Inline, buttons |
medium | 24px | Cards, sections |
large | 48px | Page, full overlay |
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Size | string | “medium” | small, medium, large |
Text | string | null | Loading text |
Overlay | bool | false | Full overlay mode |
Color | string | “primary” | Spinner color |
Blazor Usage:
<!-- Inline spinner -->
<LoadingSpinner Size="small" />
<!-- With text -->
<LoadingSpinner Text="Saving..." />
<!-- Full overlay -->
<LoadingSpinner Overlay="true" Text="Processing order..." />
<!-- In button -->
<Button Disabled="@isSaving">
@if (isSaving)
{
<LoadingSpinner Size="small" Color="white" />
<span>Saving...</span>
}
else
{
<span>Save</span>
}
</Button>
8. EmptyState
Purpose: Display meaningful placeholder when no data is available.
ASCII Wireframe:
STANDARD EMPTY STATE
┌─────────────────────────────────────────────────────┐
│ │
│ [ ICON ] │
│ │
│ No products found │
│ │
│ Try adjusting your search or filters to │
│ find what you're looking for. │
│ │
│ [Clear Filters] │
│ │
└─────────────────────────────────────────────────────┘
COMPACT EMPTY STATE WITH ACTION
┌─────────────────────────┐ ┌─────────────────────────┐
│ [icon] │ │ [icon] │
│ No items found │ │ No orders yet │
└─────────────────────────┘ │ │
│ [Create Order] │
└─────────────────────────┘
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Icon | IconType | null | Illustration icon |
Title | string | required | Empty state title |
Description | string | null | Explanatory text |
Action | RenderFragment | null | Action button(s) |
Size | string | “medium” | compact, medium, large |
Blazor Usage:
<EmptyState Icon="IconType.Box"
Title="No products found"
Description="Try adjusting your search or filters.">
<Action>
<Button Variant="secondary" OnClick="ClearFilters">Clear Filters</Button>
</Action>
</EmptyState>
17.3 Button Component
Purpose: Primary interactive element for triggering actions.
ASCII Wireframe:
PRIMARY SECONDARY TERTIARY/TEXT
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Save │ │ Cancel │ │ Learn More │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Solid background Outlined No border
DANGER WITH ICON LOADING
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Delete │ │ [+] Add Item │ │ [o] Saving... │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Red background Icon + text Spinner + text
Size Variants:
| Size | Height | Padding | Font Size |
|---|---|---|---|
small | 28px | 8px 12px | 12px |
medium | 36px | 10px 16px | 14px |
large | 44px | 12px 20px | 16px |
Props:
| Prop | Type | Default | Description |
|---|---|---|---|
Variant | string | “primary” | primary, secondary, tertiary, danger |
Size | string | “medium” | small, medium, large |
Icon | IconType | null | Leading icon |
IconPosition | string | “left” | left, right |
Loading | bool | false | Show loading state |
Disabled | bool | false | Disable button |
FullWidth | bool | false | 100% width |
OnClick | EventCallback | null | Click handler |
17.4 Form Components
TextInput
LABEL WITH INPUT ERROR STATE
┌────────────────────────────┐ ┌────────────────────────────┐
│ Email Address │ │ Email Address │
│ ┌────────────────────────┐ │ │ ┌────────────────────────┐ │
│ │ user@example.com │ │ │ │ invalid-email │ │
│ └────────────────────────┘ │ │ └────────────────────────┘ │
└────────────────────────────┘ │ Please enter a valid email │
└────────────────────────────┘
Select/Dropdown
CLOSED OPEN
┌────────────────────────────┐ ┌────────────────────────────┐
│ Select option [v] │ │ Option One [^] │
└────────────────────────────┘ ├────────────────────────────┤
│ Option One [check] │
│ Option Two │
│ Option Three │
└────────────────────────────┘
Checkbox
UNCHECKED CHECKED INDETERMINATE
[ ] Option One [x] Option Two [-] Select All
Radio Button
UNSELECTED SELECTED
( ) Option One (o) Option Two
17.5 Dark Mode Considerations
Color Mapping
| Light Mode | Dark Mode |
|---|---|
| #FFFFFF (white) | #1E1E1E (dark surface) |
| #F5F5F5 (gray-100) | #2D2D2D (elevated surface) |
| #212121 (text) | #FFFFFF (text) |
| #757575 (secondary) | #B0B0B0 (secondary) |
| #1976D2 (primary) | #64B5F6 (lighter primary) |
Dark Mode Tokens
:root[data-theme="dark"] {
--color-background: #121212;
--color-surface: #1E1E1E;
--color-surface-elevated: #2D2D2D;
--color-text-primary: #FFFFFF;
--color-text-secondary: #B0B0B0;
--color-text-disabled: #6B6B6B;
--color-border: #3D3D3D;
--color-primary: #64B5F6;
--color-primary-dark: #90CAF9;
}
Component Adjustments
| Component | Light | Dark |
|---|---|---|
| Cards | White bg, shadow | Dark surface, border |
| Inputs | White bg, gray border | Dark bg, light border |
| Badges | Colored bg | Reduced opacity bg |
| Buttons | Standard | Slightly elevated |
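The dark-mode tokens above key off a `data-theme` attribute on `:root`. A minimal sketch of how an app might decide which value to apply — assuming (this is not specified above) that an explicit user choice wins over the OS-level preference:

```typescript
// Sketch of theme resolution (assumed behavior, not from the spec):
// an explicit stored preference overrides the system preference.
type Theme = "light" | "dark";

function resolveTheme(stored: Theme | null, systemPrefersDark: boolean): Theme {
  return stored ?? (systemPrefersDark ? "dark" : "light");
}

// Surface colors drawn from the color-mapping table above.
const surfaceColor: Record<Theme, string> = {
  light: "#FFFFFF",
  dark: "#1E1E1E",
};
```

In the browser, `systemPrefersDark` would come from `window.matchMedia("(prefers-color-scheme: dark)").matches`, the stored choice from `localStorage`, and the result would be written to `document.documentElement.dataset.theme`.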
17.6 Accessibility Guidelines
Focus States
*:focus-visible {
outline: 2px solid var(--color-primary);
outline-offset: 2px;
}
/* High contrast mode */
@media (prefers-contrast: high) {
*:focus-visible {
outline-width: 3px;
}
}
Color Contrast
| Requirement | Ratio | Usage |
|---|---|---|
| AA Normal | 4.5:1 | Body text |
| AA Large | 3:1 | 18px+ text |
| AAA Normal | 7:1 | Enhanced |
| AAA Large | 4.5:1 | Enhanced large |
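The ratios in the table come from the WCAG 2.x contrast formula, (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. A self-contained TypeScript checker (the function names are ours, the math is WCAG's):

```typescript
// WCAG 2.x contrast ratio: (L1 + 0.05) / (L2 + 0.05), with L1 >= L2.
// Each sRGB channel is linearized before computing relative luminance.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const r = channel((n >> 16) & 0xff);
  const g = channel((n >> 8) & 0xff);
  const b = channel(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA thresholds from the table: 4.5:1 for body text, 3:1 for large text.
function meetsAA(fg: string, bg: string, largeText = false): boolean {
  return contrastRatio(fg, bg) >= (largeText ? 3 : 4.5);
}
```

For example, the light-mode text color `#212121` on white comfortably clears AA (roughly 16:1), while black on white is the maximum 21:1.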
ARIA Labels
<!-- Button with icon only -->
<Button Icon="IconType.Search" aria-label="Search products" />
<!-- Loading state -->
<LoadingSpinner aria-label="Loading content" role="status" />
<!-- Badge with context -->
<StatusBadge Status="Error"
aria-label="Order status: Error - requires attention" />
17.7 Summary
The Component Library provides:
- StatCard: Dashboard metrics with trends
- DataGrid: Sortable, filterable data tables
- StatusBadge: Color-coded status indicators
- SearchInput: Debounced search with autocomplete
- Modal: Overlay dialogs for forms and confirmations
- Toast: Non-blocking notifications
- LoadingSpinner: Loading state indicators
- EmptyState: Meaningful placeholders
All components follow:
- Consistent design tokens
- Responsive sizing
- Dark mode support
- Accessibility standards
Next: Part VI covers the Implementation Guide starting with Chapter 18: Development Environment.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | V - Frontend |
| Chapter | 17 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 18: Development Environment Setup
18.1 Overview
This chapter provides complete, step-by-step instructions for setting up your development environment for the POS platform. By the end, you will have a fully functional local development stack.
18.2 Prerequisites
Required Software
| Software | Version | Purpose |
|---|---|---|
| .NET SDK | 8.0+ | Backend development |
| PostgreSQL | 16+ | Primary database |
| Docker | 24.0+ | Containerization |
| Docker Compose | 2.20+ | Multi-container orchestration |
| Node.js | 20 LTS | Frontend tooling |
| Git | 2.40+ | Version control |
Hardware Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB |
| Storage | 20 GB free | 50 GB SSD |
| CPU | 4 cores | 8 cores |
18.3 Project Structure
/volume1/docker/pos-platform/
├── CLAUDE.md # AI assistant guidance
├── README.md # Quick start guide
├── .gitignore # Git ignore patterns
├── .env.example # Environment template
├── pos-platform.sln # .NET solution file
│
├── docker/
│ ├── docker-compose.yml # Development stack
│ ├── docker-compose.prod.yml # Production overrides
│ ├── Dockerfile # API container build
│ ├── Dockerfile.web # Web container build
│ └── .env # Docker environment (gitignored)
│
├── src/
│ ├── PosPlatform.Core/ # Domain layer
│ │ ├── Entities/ # Domain entities
│ │ ├── ValueObjects/ # Immutable value objects
│ │ ├── Events/ # Domain events
│ │ ├── Exceptions/ # Domain exceptions
│ │ ├── Interfaces/ # Repository interfaces
│ │ └── Services/ # Domain services
│ │
│ ├── PosPlatform.Infrastructure/ # Infrastructure layer
│ │ ├── Data/ # EF Core contexts
│ │ ├── Repositories/ # Repository implementations
│ │ ├── Services/ # External service integrations
│ │ ├── Messaging/ # Event bus, queues
│ │ └── MultiTenant/ # Tenant resolution
│ │
│ ├── PosPlatform.Api/ # API layer
│ │ ├── Controllers/ # REST endpoints
│ │ ├── Middleware/ # Request pipeline
│ │ ├── Filters/ # Action filters
│ │ ├── DTOs/ # Data transfer objects
│ │ └── Program.cs # Application entry
│ │
│ └── PosPlatform.Web/ # Blazor frontend
│ ├── Components/ # Blazor components
│ ├── Pages/ # Routable pages
│ ├── Services/ # Frontend services
│ └── wwwroot/ # Static assets
│
├── tests/
│ ├── PosPlatform.Core.Tests/ # Unit tests
│ ├── PosPlatform.Api.Tests/ # API integration tests
│ └── PosPlatform.E2E.Tests/ # End-to-end tests
│
└── database/
├── migrations/ # EF Core migrations
├── seed/ # Seed data scripts
└── init.sql # Database initialization
18.4 Step 1: Install Prerequisites
Linux (Ubuntu/Debian)
# Update package manager
sudo apt update && sudo apt upgrade -y
# Install .NET 8 SDK
wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt update
sudo apt install -y dotnet-sdk-8.0
# Verify .NET installation
dotnet --version
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for group changes
# Verify Docker
docker --version
docker compose version
# Install Node.js 20 LTS
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify Node.js
node --version
npm --version
# Install Git
sudo apt install -y git
git --version
macOS
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install .NET 8 SDK
brew install dotnet-sdk
# Install Docker Desktop
brew install --cask docker
# Install Node.js
brew install node@20
# Install Git
brew install git
Windows
# Install with winget (Windows Package Manager)
winget install Microsoft.DotNet.SDK.8
winget install Docker.DockerDesktop
winget install OpenJS.NodeJS.LTS
winget install Git.Git
# Alternatively, download installers from:
# - https://dotnet.microsoft.com/download
# - https://docker.com/products/docker-desktop
# - https://nodejs.org/
# - https://git-scm.com/
18.5 Step 2: Create Project Structure
Initialize Repository
# Create project directory
mkdir -p /volume1/docker/pos-platform
cd /volume1/docker/pos-platform
# Initialize Git repository
git init
git branch -M main
# Create initial structure
mkdir -p docker src tests database/migrations database/seed
Create .gitignore
cat > .gitignore << 'EOF'
# Build outputs
bin/
obj/
publish/
# IDE
.vs/
.vscode/
.idea/
*.user
*.suo
# Environment
.env
*.env.local
appsettings.*.json
!appsettings.json
!appsettings.Development.json
# Logs
logs/
*.log
# Docker
docker/.env
# Node
node_modules/
dist/
# Database
*.db
*.sqlite
# OS
.DS_Store
Thumbs.db
# Secrets
*.pem
*.key
secrets/
EOF
Create Solution File
# Create .NET solution
dotnet new sln -n pos-platform
# Create projects
dotnet new classlib -n PosPlatform.Core -o src/PosPlatform.Core
dotnet new classlib -n PosPlatform.Infrastructure -o src/PosPlatform.Infrastructure
dotnet new webapi -n PosPlatform.Api -o src/PosPlatform.Api
dotnet new blazor -n PosPlatform.Web -o src/PosPlatform.Web --interactivity Server  # .NET 8 folds the blazorserver template into `blazor`
# Create test projects
dotnet new xunit -n PosPlatform.Core.Tests -o tests/PosPlatform.Core.Tests
dotnet new xunit -n PosPlatform.Api.Tests -o tests/PosPlatform.Api.Tests
# Add projects to solution
dotnet sln add src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet sln add src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj
dotnet sln add src/PosPlatform.Api/PosPlatform.Api.csproj
dotnet sln add src/PosPlatform.Web/PosPlatform.Web.csproj
dotnet sln add tests/PosPlatform.Core.Tests/PosPlatform.Core.Tests.csproj
dotnet sln add tests/PosPlatform.Api.Tests/PosPlatform.Api.Tests.csproj
# Add project references
dotnet add src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add src/PosPlatform.Api/PosPlatform.Api.csproj reference src/PosPlatform.Infrastructure/PosPlatform.Infrastructure.csproj
dotnet add src/PosPlatform.Api/PosPlatform.Api.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add src/PosPlatform.Web/PosPlatform.Web.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add tests/PosPlatform.Core.Tests/PosPlatform.Core.Tests.csproj reference src/PosPlatform.Core/PosPlatform.Core.csproj
dotnet add tests/PosPlatform.Api.Tests/PosPlatform.Api.Tests.csproj reference src/PosPlatform.Api/PosPlatform.Api.csproj
18.6 Step 3: Docker Configuration
docker-compose.yml
# /volume1/docker/pos-platform/docker/docker-compose.yml
services:
# PostgreSQL Database
postgres:
image: postgres:16-alpine
container_name: pos-postgres
environment:
POSTGRES_USER: ${DB_USER:-pos_admin}
POSTGRES_PASSWORD: ${DB_PASSWORD:-PosDevPass2025!}
POSTGRES_DB: ${DB_NAME:-pos_platform}
ports:
- "5434:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ../database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-pos_admin} -d ${DB_NAME:-pos_platform}"]
interval: 10s
timeout: 5s
retries: 5
networks:
- pos-network
# Redis for Caching and Sessions
redis:
image: redis:7-alpine
container_name: pos-redis
ports:
- "6380:6379"
volumes:
- redis_data:/data
command: redis-server --appendonly yes
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- pos-network
# RabbitMQ for Event Bus
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: pos-rabbitmq
environment:
RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-pos_user}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS:-PosRabbit2025!}
ports:
- "5673:5672" # AMQP
- "15673:15672" # Management UI
volumes:
- rabbitmq_data:/var/lib/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "check_running"]
interval: 30s
timeout: 10s
retries: 5
networks:
- pos-network
# POS API (Development)
api:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: pos-api
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:8080
- ConnectionStrings__DefaultConnection=Host=postgres;Port=5432;Database=${DB_NAME:-pos_platform};Username=${DB_USER:-pos_admin};Password=${DB_PASSWORD:-PosDevPass2025!}
- Redis__ConnectionString=redis:6379
- RabbitMQ__Host=rabbitmq
- RabbitMQ__Username=${RABBITMQ_USER:-pos_user}
- RabbitMQ__Password=${RABBITMQ_PASS:-PosRabbit2025!}
ports:
- "5100:8080"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
volumes:
- ../src:/app/src:ro
- api_logs:/app/logs
networks:
- pos-network
# POS Web (Development)
web:
build:
context: ..
dockerfile: docker/Dockerfile.web
container_name: pos-web
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:8080
- ApiBaseUrl=http://api:8080
ports:
- "5101:8080"
depends_on:
- api
networks:
- pos-network
volumes:
postgres_data:
redis_data:
rabbitmq_data:
api_logs:
networks:
pos-network:
driver: bridge
Dockerfile for API
# /volume1/docker/pos-platform/docker/Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src
# Copy solution and project files
COPY *.sln ./
COPY src/PosPlatform.Core/*.csproj ./src/PosPlatform.Core/
COPY src/PosPlatform.Infrastructure/*.csproj ./src/PosPlatform.Infrastructure/
COPY src/PosPlatform.Api/*.csproj ./src/PosPlatform.Api/
# Restore dependencies
RUN dotnet restore src/PosPlatform.Api/PosPlatform.Api.csproj
# Copy source code
COPY src/ ./src/
# Build and publish
WORKDIR /src/src/PosPlatform.Api
RUN dotnet publish -c Release -o /app/publish --no-restore
# Runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
WORKDIR /app
# Install culture support
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
# Copy published app
COPY --from=build /app/publish .
# Create non-root user
RUN adduser -D -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 8080
ENTRYPOINT ["dotnet", "PosPlatform.Api.dll"]
Dockerfile for Web
# /volume1/docker/pos-platform/docker/Dockerfile.web
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src
# Copy solution and project files
COPY *.sln ./
COPY src/PosPlatform.Core/*.csproj ./src/PosPlatform.Core/
COPY src/PosPlatform.Web/*.csproj ./src/PosPlatform.Web/
# Restore dependencies
RUN dotnet restore src/PosPlatform.Web/PosPlatform.Web.csproj
# Copy source code
COPY src/ ./src/
# Build and publish
WORKDIR /src/src/PosPlatform.Web
RUN dotnet publish -c Release -o /app/publish --no-restore
# Runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
WORKDIR /app
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
COPY --from=build /app/publish .
RUN adduser -D -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 8080
ENTRYPOINT ["dotnet", "PosPlatform.Web.dll"]
Environment Template
# /volume1/docker/pos-platform/docker/.env.example
# Database
DB_USER=pos_admin
DB_PASSWORD=PosDevPass2025!
DB_NAME=pos_platform
# RabbitMQ
RABBITMQ_USER=pos_user
RABBITMQ_PASS=PosRabbit2025!
# API Keys (development)
JWT_SECRET=dev-jwt-secret-key-min-32-characters-long
ENCRYPTION_KEY=dev-encryption-key-32-chars-long
18.7 Step 4: Database Initialization
init.sql
-- /volume1/docker/pos-platform/database/init.sql
-- Create extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
-- Create shared schema for platform-wide data
CREATE SCHEMA IF NOT EXISTS shared;
-- Tenants table (platform-wide)
CREATE TABLE shared.tenants (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
code VARCHAR(10) NOT NULL UNIQUE,
name VARCHAR(100) NOT NULL,
domain VARCHAR(255),
status VARCHAR(20) NOT NULL DEFAULT 'active',
settings JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ
);
-- Platform users (super admins)
CREATE TABLE shared.platform_users (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email VARCHAR(255) NOT NULL UNIQUE,
password_hash VARCHAR(255) NOT NULL,
full_name VARCHAR(100) NOT NULL,
role VARCHAR(50) NOT NULL DEFAULT 'admin',
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Function to create tenant schema
CREATE OR REPLACE FUNCTION shared.create_tenant_schema(tenant_code VARCHAR)
RETURNS VOID AS $$
BEGIN
EXECUTE format('CREATE SCHEMA IF NOT EXISTS tenant_%s', tenant_code);
-- Create tenant-specific tables
EXECUTE format('
CREATE TABLE tenant_%s.locations (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
code VARCHAR(10) NOT NULL UNIQUE,
name VARCHAR(100) NOT NULL,
address JSONB,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW()
)', tenant_code);
EXECUTE format('
CREATE TABLE tenant_%s.users (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
employee_id VARCHAR(20) UNIQUE,
full_name VARCHAR(100) NOT NULL,
email VARCHAR(255),
pin_hash VARCHAR(255),
role VARCHAR(50) NOT NULL,
location_id UUID REFERENCES tenant_%s.locations(id),
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW()
)', tenant_code, tenant_code);
EXECUTE format('
CREATE TABLE tenant_%s.products (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
sku VARCHAR(50) NOT NULL UNIQUE,
name VARCHAR(255) NOT NULL,
description TEXT,
category_id UUID,
base_price DECIMAL(10,2) NOT NULL,
cost DECIMAL(10,2),
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ
)', tenant_code);
END;
$$ LANGUAGE plpgsql;
-- Insert default platform admin
INSERT INTO shared.platform_users (email, password_hash, full_name, role)
VALUES (
'admin@posplatform.local',
'$2a$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8/X4.vttYqBZq.kxVQ6', -- "admin123"
'Platform Administrator',
'super_admin'
);
-- Insert demo tenant
INSERT INTO shared.tenants (code, name, domain, status, settings)
VALUES (
'DEMO',
'Demo Retail Store',
'demo.posplatform.local',
'active',
'{"timezone": "America/New_York", "currency": "USD", "taxRate": 0.07}'
);
-- Create demo tenant schema
SELECT shared.create_tenant_schema('demo');
COMMENT ON SCHEMA shared IS 'Platform-wide shared data';
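Because `create_tenant_schema()` splices the tenant code directly into DDL via `format('%s', ...)`, the caller must only ever pass strictly validated, lowercased codes — note that the demo row stores the code `DEMO` while the schema is created as `tenant_demo`. A hedged sketch of app-side validation (this helper is an assumption for illustration, not part of init.sql):

```typescript
// Hypothetical app-side guard: since the tenant code is interpolated into
// CREATE SCHEMA / CREATE TABLE statements, restrict it to a safe identifier
// before it ever reaches SQL.
function toSchemaName(tenantCode: string): string {
  const code = tenantCode.toLowerCase();
  // Allow only 1-10 lowercase alphanumerics, matching the VARCHAR(10)
  // column and keeping the value safe to splice into an identifier.
  if (!/^[a-z0-9]{1,10}$/.test(code)) {
    throw new Error(`invalid tenant code: ${tenantCode}`);
  }
  return `tenant_${code}`;
}
```

With this guard, `toSchemaName("DEMO")` yields `tenant_demo`, consistent with the `SELECT shared.create_tenant_schema('demo')` call above, and anything containing quotes, semicolons, or whitespace is rejected outright.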
18.8 Step 5: IDE Setup
VS Code Configuration
// /volume1/docker/pos-platform/.vscode/settings.json
{
"editor.formatOnSave": true,
"editor.defaultFormatter": "ms-dotnettools.csharp",
"omnisharp.enableRoslynAnalyzers": true,
"omnisharp.enableEditorConfigSupport": true,
"dotnet.defaultSolution": "pos-platform.sln",
"files.exclude": {
"**/bin": true,
"**/obj": true,
"**/node_modules": true
},
"[csharp]": {
"editor.defaultFormatter": "ms-dotnettools.csharp"
}
}
// /volume1/docker/pos-platform/.vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Launch API",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build-api",
"program": "${workspaceFolder}/src/PosPlatform.Api/bin/Debug/net8.0/PosPlatform.Api.dll",
"args": [],
"cwd": "${workspaceFolder}/src/PosPlatform.Api",
"console": "internalConsole",
"stopAtEntry": false,
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
{
"name": "Launch Web",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build-web",
"program": "${workspaceFolder}/src/PosPlatform.Web/bin/Debug/net8.0/PosPlatform.Web.dll",
"args": [],
"cwd": "${workspaceFolder}/src/PosPlatform.Web",
"console": "internalConsole",
"stopAtEntry": false
}
]
}
Recommended VS Code Extensions
// /volume1/docker/pos-platform/.vscode/extensions.json
{
"recommendations": [
"ms-dotnettools.csharp",
"ms-dotnettools.csdevkit",
"ms-azuretools.vscode-docker",
"eamodio.gitlens",
"streetsidesoftware.code-spell-checker",
"editorconfig.editorconfig",
"humao.rest-client",
"mtxr.sqltools",
"mtxr.sqltools-driver-pg"
]
}
18.9 Step 6: Git Workflow
Branch Strategy
main # Production-ready code
|
+-- develop # Integration branch
|
+-- feature/* # New features
+-- bugfix/* # Bug fixes
+-- hotfix/* # Urgent production fixes
Initial Commit
cd /volume1/docker/pos-platform
# Stage all files
git add .
# Initial commit
git commit -m "Initial project structure with Docker development stack
- Created .NET 8 solution with 4 projects (Core, Infrastructure, Api, Web)
- Added docker-compose with PostgreSQL 16, Redis, RabbitMQ
- Configured multi-tenant database initialization
- Set up VS Code development environment
Generated with Claude Code"
# Create develop branch
git checkout -b develop
18.10 Quick Reference Commands
Start Development Stack
cd /volume1/docker/pos-platform/docker
# Copy environment file
cp .env.example .env
# Start all services
docker compose up -d
# View logs
docker compose logs -f
# Check status
docker compose ps
Database Access
# Connect to PostgreSQL
docker exec -it pos-postgres psql -U pos_admin -d pos_platform
# List schemas
\dn
# List tables in shared schema
\dt shared.*
# List tables in tenant schema
\dt tenant_demo.*
Build and Run Locally
cd /volume1/docker/pos-platform
# Restore dependencies
dotnet restore
# Build solution
dotnet build
# Run API (from project directory)
cd src/PosPlatform.Api
dotnet run
# Run tests
cd /volume1/docker/pos-platform
dotnet test
Stop and Clean
cd /volume1/docker/pos-platform/docker
# Stop services
docker compose down
# Stop and remove volumes (WARNING: deletes data)
docker compose down -v
# Remove unused images
docker image prune -f
18.11 Verification Checklist
After completing setup, verify each component:
- `dotnet --version` shows 8.0.x
- `docker compose ps` shows all containers healthy
- PostgreSQL accepts connections on port 5434
- Redis responds to ping on port 6380
- RabbitMQ management UI accessible at http://localhost:15673
- Solution builds without errors: `dotnet build`
- All tests pass: `dotnet test`
18.12 Diagrams as Code Strategy
Overview
To ensure “soft architecture” matches the actual code and enables rapid root-cause analysis, all architecture diagrams must be maintained as code alongside the source.
Primary Strategy
| Attribute | Selection |
|---|---|
| Approach | Diagrams as Code |
| Rationale | Prevent documentation drift; diagrams stay current |
| Storage | Git repository alongside source code |
Tooling Options
| Tool | Best For | Format |
|---|---|---|
| Structurizr | C4 Model, professional docs | DSL |
| Mermaid.js | Quick diagrams, GitHub-native | Markdown |
| PlantUML | Detailed UML, sequence diagrams | Text |
Recommended: Structurizr (C4 Model)
# Install Structurizr CLI
docker pull structurizr/cli
# Create workspace directory
mkdir -p /volume1/docker/pos-platform/docs/architecture
// /volume1/docker/pos-platform/docs/architecture/workspace.dsl
workspace "POS Platform" "Multi-tenant Point of Sale System" {
model {
// People
cashier = person "Cashier" "Processes sales transactions"
manager = person "Store Manager" "Manages inventory and reports"
admin = person "Platform Admin" "Manages tenants and system"
// External Systems
shopify = softwareSystem "Shopify" "E-commerce platform" "External"
paymentGateway = softwareSystem "Payment Gateway" "Stripe/Square" "External"
// POS Platform
posSystem = softwareSystem "POS Platform" "Multi-tenant retail POS" {
posClient = container "POS Client" "Desktop/tablet app" ".NET MAUI" "Client"
centralApi = container "Central API" "REST API" "ASP.NET Core" "API"
webPortal = container "Web Portal" "Admin dashboard" "Blazor" "Web"
database = container "Database" "PostgreSQL 16" "PostgreSQL" "Database"
kafka = container "Event Streaming" "Apache Kafka" "Kafka" "Queue"
redis = container "Cache" "Redis" "Redis" "Cache"
}
// Relationships
cashier -> posClient "Uses"
manager -> webPortal "Uses"
admin -> webPortal "Manages tenants"
posClient -> centralApi "API calls" "HTTPS"
webPortal -> centralApi "API calls" "HTTPS"
centralApi -> database "Reads/writes" "PostgreSQL"
centralApi -> kafka "Publishes events"
centralApi -> redis "Caches data"
centralApi -> shopify "Syncs inventory" "REST"
centralApi -> paymentGateway "Processes payments" "REST"
}
views {
systemContext posSystem "SystemContext" {
include *
autoLayout
}
container posSystem "Containers" {
include *
autoLayout
}
theme default
}
}
Generate Diagrams
# Export to PNG/SVG
docker run --rm -v $(pwd)/docs/architecture:/workspace structurizr/cli \
export -workspace /workspace/workspace.dsl -format plantuml
# Or use Structurizr Lite for local preview
docker run -it --rm -p 8888:8080 \
-v $(pwd)/docs/architecture:/usr/local/structurizr \
structurizr/lite
Alternative: Mermaid.js
For simpler diagrams, use Mermaid directly in markdown:
<!-- /volume1/docker/pos-platform/docs/architecture/system-overview.md -->
# System Overview
```mermaid
graph TB
subgraph Client["POS Client"]
UI[UI Layer]
SL[Service Layer]
DB[(SQLite)]
end
subgraph Cloud["Cloud Infrastructure"]
API[Central API]
PG[(PostgreSQL)]
K((Kafka))
end
UI --> SL
SL --> DB
SL --> API
API --> PG
API --> K
```
Claude Code Integration
Use Claude Code CLI to auto-generate diagram updates during refactoring:
# After code changes, regenerate diagrams
claude-code /architect-review --update-diagrams
# Or use the dev-team skill
/dev-team update architecture diagrams
Diagram Update Workflow
+------------------------------------------------------------------+
| DIAGRAM UPDATE WORKFLOW |
+------------------------------------------------------------------+
| |
| 1. Developer changes code structure |
| ↓ |
| 2. Pre-commit hook or CI checks for diagram drift |
| ↓ |
| 3. If drift detected, Claude Code suggests updates |
| ↓ |
| 4. Developer reviews and commits updated diagrams |
| ↓ |
| 5. CI generates PNG/SVG exports for documentation |
| |
+------------------------------------------------------------------+
18.13 Quality Assurance (QA) & Testing Strategy
Overview
To ensure end-to-end reliability for financial transactions, the platform implements a comprehensive testing strategy covering unit, integration, E2E, and load testing.
Testing Pyramid
/\
/ \
/ E2E \ Cypress/Playwright (Few, Slow)
/ \
/--------\
/Integration\ API Tests (Some, Medium)
/ \
/--------------\
/ Unit Tests \ xUnit (Many, Fast)
/ \
/--------------------\
Unit Testing
| Attribute | Selection |
|---|---|
| Framework | xUnit |
| Mocking | Moq |
| Assertions | FluentAssertions |
| Coverage Target | 80%+ for Core domain |
# Run unit tests
dotnet test tests/PosPlatform.Core.Tests
# With coverage report
dotnet test --collect:"XPlat Code Coverage"
Integration Testing
| Attribute | Selection |
|---|---|
| Framework | xUnit + WebApplicationFactory |
| Database | Testcontainers (PostgreSQL) |
| Scope | API endpoints, repository queries |
// tests/PosPlatform.Api.Tests/SalesControllerTests.cs
public class SalesControllerTests : IClassFixture<PosApiFactory>
{
private readonly HttpClient _client;
public SalesControllerTests(PosApiFactory factory)
{
_client = factory.CreateClient();
}
[Fact]
public async Task CreateSale_ValidRequest_Returns201()
{
// Arrange
var request = new CreateSaleRequest
{
LocationId = Guid.NewGuid(),
LineItems = new[] { new LineItemDto { Sku = "TEST001", Quantity = 1 } }
};
// Act
var response = await _client.PostAsJsonAsync("/api/v1/sales", request);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.Created);
}
}
E2E (End-to-End) Testing
| Attribute | Selection |
|---|---|
| Tool | Playwright (Primary) or Cypress |
| Scope | Full user flows: Login → Sale → Payment → Receipt |
| Environment | Dockerized test environment |
# Install Playwright (scaffolds config and installs @playwright/test)
npm init playwright@latest
# Install browser binaries (needed on fresh machines and CI runners)
npx playwright install --with-deps
// tests/e2e/cashier-flow.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Cashier Sales Flow', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('http://localhost:5101');
    await page.fill('[data-testid="pin-input"]', '1234');
    await page.click('[data-testid="login-button"]');
  });

  test('complete sale with cash payment', async ({ page }) => {
    // Scan item
    await page.fill('[data-testid="barcode-input"]', 'NXJ1078');
    await page.press('[data-testid="barcode-input"]', 'Enter');

    // Verify item added
    await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);
    await expect(page.locator('[data-testid="cart-total"]')).toContainText('$');

    // Process payment
    await page.click('[data-testid="pay-button"]');
    await page.click('[data-testid="cash-payment"]');
    await page.fill('[data-testid="cash-tendered"]', '50.00');
    await page.click('[data-testid="complete-sale"]');

    // Verify receipt
    await expect(page.locator('[data-testid="receipt-modal"]')).toBeVisible();
    await expect(page.locator('[data-testid="change-due"]')).toBeVisible();
  });

  test('void line item from cart', async ({ page }) => {
    // Add items
    await page.fill('[data-testid="barcode-input"]', 'NXJ1078');
    await page.press('[data-testid="barcode-input"]', 'Enter');
    await page.fill('[data-testid="barcode-input"]', 'NXJ1079');
    await page.press('[data-testid="barcode-input"]', 'Enter');

    // Void first item
    await page.click('[data-testid="cart-item"]:first-child [data-testid="void-item"]');
    await page.click('[data-testid="confirm-void"]');

    // Verify removed
    await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);
  });
});
Load Testing
| Attribute | Selection |
|---|---|
| Tool | k6 (Primary) or JMeter |
| Scenario | “Black Friday” - 500 concurrent transactions |
| Targets | p99 < 500ms, no errors |
# Install k6
brew install k6 # macOS
# or
docker pull grafana/k6
// tests/load/black-friday.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate } from 'k6/metrics';

const errorRate = new Rate('errors');

export const options = {
  scenarios: {
    black_friday: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 100 }, // Ramp up
        { duration: '5m', target: 500 }, // Peak load
        { duration: '2m', target: 0 },   // Ramp down
      ],
      gracefulRampDown: '30s',
    },
  },
  thresholds: {
    http_req_duration: ['p(99)<500'], // 99% of requests < 500ms
    errors: ['rate<0.01'],            // Error rate < 1%
  },
};

const BASE_URL = __ENV.API_URL || 'http://localhost:5100';

export default function () {
  // Simulate sale creation
  const salePayload = JSON.stringify({
    locationId: 'b5f8e9a0-1234-5678-9abc-def012345678',
    lineItems: [
      { sku: 'NXJ1078', quantity: 1, unitPrice: 29.99 },
      { sku: 'NXJ1079', quantity: 2, unitPrice: 19.99 },
    ],
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${__ENV.AUTH_TOKEN}`,
      'X-Tenant-Id': 'demo',
    },
  };

  const response = http.post(`${BASE_URL}/api/v1/sales`, salePayload, params);

  const success = check(response, {
    'status is 201': (r) => r.status === 201,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });

  errorRate.add(!success);
  sleep(1);
}
# Run load test
k6 run --env API_URL=http://localhost:5100 --env AUTH_TOKEN=xxx tests/load/black-friday.js
# Run with Docker
docker run --rm -i grafana/k6 run - <tests/load/black-friday.js
Code Versioning & Traceability
| Attribute | Selection |
|---|---|
| Platform | GitHub/GitLab |
| Versioning | Semantic Versioning (SemVer) |
| Tags | v1.0.0, v1.1.0, v2.0.0 |
# Version tagging workflow
git tag -a v1.0.0 -m "Version 1.0.0 - Initial Release"
git push origin v1.0.0
# Each POS terminal tracks deployed version
# API returns version in health check
curl http://localhost:5100/health
# {"status":"healthy","version":"1.2.3","commit":"abc123f"}
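Terminals can compare their deployed version against the one reported by the health check to decide when to prompt for an update. A minimal sketch of that comparison (the `compareSemVer` helper is illustrative, not part of the platform API; it assumes plain `major.minor.patch` strings with no pre-release tags):

```typescript
// Compare two SemVer strings (major.minor.patch, no pre-release tags).
// Returns -1, 0, or 1 — enough for a terminal to tell if it lags the API.
function compareSemVer(a: string, b: string): number {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i] ? -1 : 1;
  }
  return 0;
}

// A terminal on 1.2.3 checking against the API's reported 1.3.0:
const needsUpdate = compareSemVer('1.2.3', '1.3.0') < 0; // true: 1.2.3 < 1.3.0
```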
CI/CD Pipeline Testing
# .github/workflows/test.yml
name: Test Suite
on: [push, pull_request]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test tests/PosPlatform.Core.Tests --logger "trx"
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
        # Wait until PostgreSQL is ready before running tests
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test tests/PosPlatform.Api.Tests
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: docker compose -f docker/docker-compose.yml up -d
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/
  load-test:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - uses: grafana/k6-action@v0.3.1
        with:
          filename: tests/load/black-friday.js
Test Data Management
-- tests/seed/test-data.sql
-- Test tenant
INSERT INTO shared.tenants (id, code, name, status)
VALUES ('11111111-1111-1111-1111-111111111111', 'TEST', 'Test Tenant', 'active');
-- Test products
INSERT INTO tenant_test.products (sku, name, base_price)
VALUES
('TEST001', 'Test Product 1', 9.99),
('TEST002', 'Test Product 2', 19.99),
('TEST003', 'Test Product 3', 29.99);
-- Test user (PIN: 1234)
INSERT INTO tenant_test.users (employee_id, full_name, pin_hash, role)
VALUES ('E001', 'Test Cashier', '$2a$12$...', 'cashier');
Reference
For complete architecture characteristics and style selection rationale, see Part II: Architecture.
18.14 Chaos Engineering Strategy
Overview
Chaos Engineering validates system resilience by intentionally injecting failures. For a POS system handling financial transactions, this ensures the platform gracefully handles network partitions, service failures, and infrastructure issues.
| Attribute | Selection |
|---|---|
| Tool | LitmusChaos (Primary) or Gremlin |
| Environment | Staging only (never production for POS) |
| Goal | Validate offline-first, circuit breakers, failover |
Why Chaos Engineering for POS?
+------------------------------------------------------------------+
| RETAIL FAILURE SCENARIOS |
+------------------------------------------------------------------+
| |
| Scenario 1: Internet Outage During Sale |
| └── POS must complete transaction offline |
| └── Payment must queue for sync |
| |
| Scenario 2: Payment Processor Down |
| └── Circuit breaker must open |
| └── Fallback to secondary processor or cash |
| |
| Scenario 3: Database Connection Lost |
| └── Read operations from local cache |
| └── Write operations queued in local SQLite |
| |
| Scenario 4: Kafka Cluster Failure |
| └── Events stored in outbox table |
| └── Replay on recovery |
| |
+------------------------------------------------------------------+
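Scenario 2 depends on a circuit breaker opening once the payment processor starts failing. In the .NET services this behavior comes from Polly; the state machine it implements can be sketched as follows (the class, threshold, and cooldown values here are illustrative, not the platform's configuration):

```typescript
// Minimal circuit breaker: opens after N consecutive failures,
// half-opens after a cooldown, closes again on the next success.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  currentState(now = Date.now()): BreakerState {
    // After the cooldown, allow a single trial call through
    if (this.state === 'open' && now - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open';
    }
    return this.state;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    if (this.failures >= this.threshold) {
      this.state = 'open';
      this.openedAt = now;
    }
  }
}
```

During the payment-processor experiment below, the injected 500s should drive consecutive failures until the breaker opens and the fallback tender path engages.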
LitmusChaos Installation
# Install LitmusChaos in Kubernetes staging cluster
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v3.0.0.yaml
# Verify installation
kubectl get pods -n litmus
# Install chaos experiments
kubectl apply -f https://hub.litmuschaos.io/api/chaos/3.0.0?file=charts/generic/experiments.yaml
Chaos Experiment: Network Partition
Tests offline-first capability when POS client loses connection to central API.
# chaos-experiments/network-partition.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pos-network-partition
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-client"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-network-partition
      spec:
        components:
          env:
            # Target the central API
            - name: TARGET_SERVICE_PORT
              value: "8080"
            - name: NETWORK_INTERFACE
              value: "eth0"
            # Duration of network partition
            - name: TOTAL_CHAOS_DURATION
              value: "300" # 5 minutes
            # Affect all traffic to API
            - name: DESTINATION_HOSTS
              value: "pos-api.staging.svc.cluster.local"
        probe:
          - name: pos-offline-mode-check
            type: httpProbe
            mode: Continuous
            runProperties:
              probeTimeout: 5
              interval: 10
            httpProbe/inputs:
              url: "http://pos-client:8080/api/health/offline-status"
              method:
                get:
                  criteria: "=="
                  responseCode: "200"
                  responseTimeout: 3
---
# Expected Behavior Validation
apiVersion: v1
kind: ConfigMap
metadata:
  name: network-partition-expected
data:
  expected_behavior: |
    1. POS client detects connection loss within 5 seconds
    2. UI shows "Offline Mode" indicator
    3. Sales can be created and processed locally
    4. Payments queue in local SQLite
    5. Sync resumes automatically when connection restored
    6. No duplicate transactions on resync
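Expectation 6 — no duplicate transactions on resync — relies on client-generated idempotency keys: each offline sale carries an id minted on the terminal, and the server ingests each id at most once. A hedged sketch of the server-side guard (entity and class names are illustrative):

```typescript
// Offline sales carry a client-generated UUID; the server accepts each
// id at most once, so replaying the whole offline queue is safe.
interface OfflineSale {
  clientId: string;   // minted on the terminal before the outage
  totalCents: number;
}

class SaleIngestor {
  private seen = new Set<string>();
  accepted: OfflineSale[] = [];

  // Returns false when the sale was already ingested (a duplicate replay)
  ingest(sale: OfflineSale): boolean {
    if (this.seen.has(sale.clientId)) return false;
    this.seen.add(sale.clientId);
    this.accepted.push(sale);
    return true;
  }
}
```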
Chaos Experiment: Payment Processor Failure
Tests circuit breaker and fallback behavior.
# chaos-experiments/payment-processor-failure.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: payment-processor-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-api"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-http-modify-response
      spec:
        components:
          env:
            # Inject 500 errors for Stripe calls
            - name: TARGET_SERVICE_PORT
              value: "443"
            - name: TARGET_HOSTS
              value: "api.stripe.com"
            - name: RESPONSE_BODY
              value: '{"error": {"type": "api_error", "message": "Chaos injection"}}'
            - name: STATUS_CODE
              value: "500"
            - name: CHAOS_DURATION
              value: "120"
        probe:
          - name: circuit-breaker-open-check
            type: promProbe
            mode: OnChaos
            runProperties:
              probeTimeout: 30
              interval: 10
            promProbe/inputs:
              endpoint: "http://prometheus:9090"
              query: 'polly_circuit_breaker_state{service="stripe"} == 1'
              comparator:
                type: "int"
                criteria: "=="
                value: "1" # Circuit should be OPEN
Chaos Experiment: Database Latency
Tests system behavior under slow database conditions.
# chaos-experiments/database-latency.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: db-latency-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=pos-postgres"
    appkind: "statefulset"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            - name: NETWORK_INTERFACE
              value: "eth0"
            - name: NETWORK_LATENCY
              value: "2000" # 2 second latency
            - name: JITTER
              value: "500" # +/- 500ms jitter
            - name: TOTAL_CHAOS_DURATION
              value: "180"
        probe:
          - name: api-response-degradation
            type: httpProbe
            mode: Continuous
            httpProbe/inputs:
              url: "http://pos-api:8080/api/v1/products"
              method:
                get:
                  criteria: "<"
                  responseCode: "500" # Should not fail, just slow
                  responseTimeout: 10
Chaos Experiment: Kafka Broker Failure
Tests event sourcing resilience and outbox pattern.
# chaos-experiments/kafka-broker-failure.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kafka-chaos
  namespace: staging
spec:
  engineState: "active"
  appinfo:
    appns: "staging"
    applabel: "app=kafka"
    appkind: "statefulset"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "120"
            - name: CHAOS_INTERVAL
              value: "30"
            - name: FORCE
              value: "true"
        probe:
          - name: events-queued-in-outbox
            type: cmdProbe
            mode: Edge
            cmdProbe/inputs:
              command: |
                psql -h pos-postgres -U pos_admin -d pos_platform -c \
                  "SELECT COUNT(*) FROM event_outbox WHERE status = 'pending'"
              comparator:
                type: "int"
                criteria: ">="
                value: "1" # Events should queue
          - name: events-replayed-on-recovery
            type: cmdProbe
            mode: EOT
            cmdProbe/inputs:
              # After Kafka recovery, outbox should drain
              command: |
                psql -h pos-postgres -U pos_admin -d pos_platform -c \
                  "SELECT COUNT(*) FROM event_outbox WHERE status = 'pending'"
              comparator:
                type: "int"
                criteria: "=="
                value: "0" # All events processed
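The probes above assert that the outbox fills while Kafka is down and drains on recovery. The drain loop itself amounts to publishing pending rows in insertion order and marking each one sent only after the broker accepts it. A simplified synchronous sketch (the row shape follows the `event_outbox` query above; the publisher callback is a stand-in for the real Kafka producer):

```typescript
// Transactional-outbox drain: publish pending rows in order, marking each
// 'sent' only after the broker accepts it. Stops at the first failure so
// ordering is preserved and the row is retried on the next tick.
interface OutboxRow {
  id: number;
  payload: string;
  status: 'pending' | 'sent';
}

function drainOutbox(rows: OutboxRow[], publish: (payload: string) => boolean): number {
  let sent = 0;
  for (const row of rows) {
    if (row.status !== 'pending') continue;
    if (!publish(row.payload)) break; // broker still down — retry later
    row.status = 'sent';
    sent++;
  }
  return sent;
}
```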
Chaos Testing Schedule
# chaos-experiments/scheduled-chaos.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosSchedule
metadata:
  name: weekly-resilience-tests
  namespace: staging
spec:
  schedule:
    now: false
    repeat:
      timeRange:
        startTime: "2026-01-01T02:00:00Z" # Run at 2 AM
      properties:
        minChaosInterval: "168h" # Weekly
  chaosEngineTemplateSpec:
    engineState: "active"
    appinfo:
      appns: "staging"
      applabel: "app=pos-api"
      appkind: "deployment"
    experiments:
      - name: pod-network-partition
        spec:
          components:
            env:
              - name: TOTAL_CHAOS_DURATION
                value: "300"
Chaos Engineering CI/CD Integration
# .github/workflows/chaos-tests.yml
name: Chaos Engineering Tests
on:
  schedule:
    - cron: '0 3 * * 0' # Weekly on Sunday at 3 AM
  workflow_dispatch:
jobs:
  chaos-tests:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - name: Setup kubectl
        uses: azure/setup-kubectl@v3
      - name: Configure kubectl
        run: |
          echo "${{ secrets.STAGING_KUBECONFIG }}" > kubeconfig
          # An export would not survive into later steps; persist via GITHUB_ENV
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
      - name: Run Network Partition Test
        run: |
          kubectl apply -f chaos-experiments/network-partition.yaml
          kubectl wait --for=condition=complete chaosengine/pos-network-partition -n staging --timeout=600s
      - name: Validate Offline Mode Results
        run: |
          # Check experiment status
          RESULT=$(kubectl get chaosresult pos-network-partition-pod-network-partition -n staging -o jsonpath='{.status.experimentStatus.verdict}')
          if [ "$RESULT" != "Pass" ]; then
            echo "Chaos experiment FAILED: Network partition handling"
            kubectl logs -l app=pos-client -n staging --tail=100
            exit 1
          fi
      - name: Run Payment Processor Failure Test
        run: |
          kubectl apply -f chaos-experiments/payment-processor-failure.yaml
          kubectl wait --for=condition=complete chaosengine/payment-processor-chaos -n staging --timeout=300s
      - name: Generate Chaos Report
        run: |
          litmusctl get experiments -n staging -o json > chaos-report.json
      - name: Upload Chaos Report
        uses: actions/upload-artifact@v4
        with:
          name: chaos-engineering-report
          path: chaos-report.json
      - name: Notify on Failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          channel-id: 'chaos-alerts'
          slack-message: 'Chaos engineering tests FAILED in staging'
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
Chaos Engineering Runbook
# Chaos Engineering Runbook
## Pre-Chaos Checklist
- [ ] Staging environment is isolated from production
- [ ] No active deployments in progress
- [ ] Monitoring dashboards are active
- [ ] On-call engineer is notified
- [ ] Rollback procedures are documented
## During Chaos
1. Monitor Grafana dashboards for:
- Error rates
- Latency p99
- Circuit breaker states
- Queue depths
2. Validate expected behaviors:
- Offline mode activates
- Fallbacks engage
- No data loss
## Post-Chaos
1. Review chaos experiment results
2. Document any unexpected behaviors
3. Create tickets for resilience improvements
4. Update architecture documentation
18.15 Next Steps
With your development environment ready:
- Proceed to Chapter 19: Implementation Roadmap for the full build plan
- Begin Phase 1: Foundation in Chapter 20
- Reference this chapter (Chapter 18) when adding new developers to the project
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 18 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 19: Implementation Roadmap
19.1 Overview
This chapter presents the complete implementation roadmap for the POS platform, organized into 6 phases spanning 23 weeks. Each phase builds upon the previous, with clear milestones and dependencies.
Key Insight: The POS Client deserves its own dedicated phase (Phase 4) because it’s the revenue-generating touchpoint where customer transactions occur. The Web Portal handles back-office platform operations; the POS Client handles in-store customer transactions.
19.2 Phase Summary
| Phase | Name | Duration | Chapter | Key Deliverables |
|---|---|---|---|---|
| 1 | Foundation | Weeks 1-5 | Ch. 20 | Multi-tenant, Auth, Catalog, Security Hardening |
| 2 | Core Backend | Weeks 6-9 | Ch. 21 | Inventory, Sales, Payments, Cash APIs |
| 3 | Admin Portal | Weeks 10-12 | Ch. 22 | Web admin UI, Reports, Settings |
| 4 | POS Client | Weeks 13-18 | Ch. 23 | .NET MAUI Blazor Hybrid, Offline-first, Hardware |
| 5 | Integration | Weeks 19-21 | Ch. 22 | RFID/Raptag, Loyalty, End-to-end Testing |
| 6 | Production | Weeks 22-23 | Ch. 24-28 | Monitoring, Security, Deployment |
Total: 23 weeks
19.3 Gantt Chart
Week: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|
PHASE 1 - FOUNDATION (5 weeks)
Multi-Tenant [==|==]
Authentication [==|==]
Catalog [==|==]
Hardening [==]
PHASE 2 - CORE BACKEND (4 weeks)
Inventory [==|==]
Sales Domain [==|==]
Payments [==]
Cash Drawer [==]
PHASE 3 - ADMIN PORTAL (3 weeks)
Admin UI [==|==|==]
Reports [==|==]
Settings [==]
PHASE 4 - POS CLIENT (6 weeks) ★ DEDICATED PHASE
Setup/Core UI [==]
Sale Workflow [==]
Payments [==]
Offline/Sync [==]
Hardware [==]
Distribution [==]
PHASE 5 - INTEGRATION (3 weeks)
RFID/Raptag [==]
Loyalty [==]
E2E Testing [==]
PHASE 6 - PRODUCTION (2 weeks)
Monitoring [==]
Go-Live [==]
MILESTONES
M1: Tenant Demo *
M2: Auth + PIN Hardening *
M3: Catalog API *
M4: Inventory Sync *
M5: First Sale (API) *
M6: Admin Portal Live *
M7: POS Client Demo *
M8: POS Offline Working *
M9: Hardware Integrated *
M10: RFID Ready *
M11: Go-Live *
19.4 Phase 1: Foundation (Weeks 1-5)
Week 1-2: Multi-Tenant Infrastructure
Objective: Establish schema-per-tenant database isolation with automatic provisioning.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Tenant entity and repository | Tenant CRUD operations |
| 3-4 | Schema provisioning service | Automatic schema creation |
| 5 | Tenant resolution middleware | Request-scoped tenant context |
| 6-7 | Connection string routing | Dynamic connection per tenant |
| 8-9 | Tenant management API | REST endpoints for tenants |
| 10 | Integration tests | Tenant isolation verified |
Claude Commands:
/dev-team implement tenant entity with repository pattern
/architect-review multi-tenant database isolation strategy
/dev-team create tenant provisioning service
/dev-team implement tenant resolution middleware
/qa-team write tenant isolation integration tests
Success Criteria:
- New tenant creates isolated schema in < 5 seconds
- Tenant data completely isolated (cross-tenant queries blocked)
- Tenant context available in all service layers
- 100% test coverage on tenant resolution
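The resolution middleware's core decision can be sketched as follows: prefer an explicit `X-Tenant-Id` header, fall back to the `{tenant}.pos.*` subdomain described in Chapter 20. The function shape is illustrative — the production middleware lives in the .NET API pipeline:

```typescript
// Resolve a tenant code from a request, preferring an explicit
// X-Tenant-Id header, then the {tenant}.pos.* subdomain.
// Returns null when neither identifies a tenant.
function resolveTenant(host: string, headers: Record<string, string>): string | null {
  const headerTenant = headers['x-tenant-id'];
  if (headerTenant) return headerTenant.toLowerCase();
  // demo.pos.nexusdenim.com → "demo"; api-pos.nexusdenim.com → no match
  const match = host.match(/^([a-z0-9-]+)\.pos\./i);
  return match ? match[1].toLowerCase() : null;
}
```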
Week 2-3: Authentication System
Objective: Implement JWT-based authentication with PIN support for POS terminals.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | User entity with password hashing | BCrypt password storage |
| 3-4 | JWT token service | Access + refresh token generation |
| 5-6 | PIN-based authentication | 4-6 digit PIN for terminals |
| 7-8 | RBAC permission system | Role-based access control |
| 9-10 | Auth middleware | Token validation, user context |
Claude Commands:
/dev-team implement user entity with bcrypt password hashing
/dev-team create JWT token service with refresh token support
/dev-team implement PIN authentication for POS terminals
/security-review authentication implementation
/dev-team create authorization middleware with RBAC
Success Criteria:
- JWT tokens expire and refresh correctly
- PIN login works for cashier terminals
- Roles enforce API access restrictions
- Password reset flow functional
- Failed login attempts are rate-limited
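PIN authentication hinges on per-user salted hashing and constant-time comparison. A dependency-free sketch of that pair of operations — the production service uses BCrypt as stated above; Node's built-in scrypt stands in here to keep the example self-contained:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from 'node:crypto';

// Hash a terminal PIN with a fresh per-user salt, storing "salt:hash".
function hashPin(pin: string): string {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(pin, salt, 32).toString('hex');
  return `${salt}:${hash}`;
}

// Verify in constant time so response timing leaks nothing about the hash.
function verifyPin(pin: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(pin, salt, 32);
  return timingSafeEqual(candidate, Buffer.from(hash, 'hex'));
}
```

Rate-limiting failed attempts (the last success criterion) then wraps `verifyPin` with a per-user counter and lockout window.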
Week 3-4: Catalog Domain
Objective: Build product catalog with variants, categories, and pricing.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Product and Category entities | Domain models |
| 3-4 | Product variant support | Size, color, style variations |
| 5-6 | Pricing rules engine | Base price, markups, promotions |
| 7-8 | Product repository | CRUD with search, filtering |
| 9-10 | Catalog API endpoints | REST API for products |
Claude Commands:
/dev-team create product entity with variant support
/dev-team implement category hierarchy with nested sets
/dev-team create pricing rules engine
/dev-team implement product repository with full-text search
/dev-team create catalog API endpoints with pagination
Success Criteria:
- Products support unlimited variants
- Categories support infinite nesting
- Full-text search returns results in < 100ms
- Bulk import handles 10,000 products
- API returns paginated results
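The pricing rules engine's evaluation order — base price, then markup, then the best applicable promotion — can be sketched in integer cents so totals stay exact. The rule shapes below are illustrative, not the platform's schema:

```typescript
// Apply pricing rules in a fixed order: base price → markup → best promotion.
// All amounts are integer cents to avoid floating-point drift.
interface Promotion {
  percentOff: number;
}

function finalPriceCents(
  baseCents: number,
  markupPercent: number,
  promos: Promotion[],
): number {
  const marked = Math.round(baseCents * (1 + markupPercent / 100));
  // Promotions don't stack; the best single discount applies.
  const bestOff = promos.reduce((max, p) => Math.max(max, p.percentOff), 0);
  return Math.round(marked * (1 - bestOff / 100));
}
```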
19.5 Phase 2: Core (Weeks 5-10)
Week 5-6: Inventory Domain
Objective: Implement multi-location inventory with real-time tracking.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Inventory item entity | Stock levels per location |
| 3-4 | Stock movement tracking | Audit trail of all changes |
| 5-6 | Inventory adjustment service | Manual adjustments with reasons |
| 7-8 | Inter-store transfers | Transfer request workflow |
| 9-10 | Low stock alerts | Configurable thresholds |
Claude Commands:
/dev-team create inventory item entity with location quantities
/dev-team implement stock movement event sourcing
/dev-team create inventory adjustment service
/dev-team implement inter-store transfer workflow
/dev-team create low stock alert notification system
Dependencies: Catalog (products), Multi-tenant (locations)
Success Criteria:
- Stock levels accurate across all locations
- Every inventory change has audit record
- Transfers update both source and destination
- Alerts fire when stock below threshold
- Concurrent updates handled correctly
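Stock movement event sourcing means the quantity on hand is never stored as a mutable number; it is the sum of signed movements (receipts positive, sales and transfers-out negative), which is what makes every change auditable. A minimal sketch with illustrative field names:

```typescript
// Event-sourced stock level: current quantity at a location is the
// sum of signed movements recorded against it.
interface StockMovement {
  locationId: string;
  delta: number; // +receipts, -sales/transfers-out
}

function stockLevel(movements: StockMovement[], locationId: string): number {
  return movements
    .filter(m => m.locationId === locationId)
    .reduce((qty, m) => qty + m.delta, 0);
}
```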
Week 6-7: Sales Domain (Event Sourcing)
Objective: Build sale transaction processing with event-sourced state.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Sale aggregate root | Event-sourced sale entity |
| 3-4 | Sale events | ItemAdded, ItemRemoved, DiscountApplied |
| 5-6 | Sale projections | Current cart state, totals |
| 7-8 | Sale completion | Finalization workflow |
| 9-10 | Receipt generation | Digital and print receipts |
Claude Commands:
/dev-team create sale aggregate with event sourcing
/dev-team implement sale events (add, remove, discount)
/dev-team create sale projection service
/dev-team implement sale completion workflow
/dev-team create receipt generation service
Dependencies: Inventory (stock deduction), Catalog (product lookup)
Success Criteria:
- Sales can be reconstructed from events
- Cart updates in < 50ms
- Tax calculations accurate to penny
- Concurrent cart modifications handled
- Receipts generated in < 1 second
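"Sales can be reconstructed from events" means the aggregate's state is a pure function of its event stream: replaying the same events always yields the same cart. A hedged sketch with illustrative event shapes (the real aggregate is the C# Sale root):

```typescript
// Rebuild a cart's total by replaying its events, in integer cents.
type SaleEvent =
  | { type: 'ItemAdded'; sku: string; unitCents: number; qty: number }
  | { type: 'ItemRemoved'; sku: string }
  | { type: 'DiscountApplied'; percentOff: number };

function replaySale(events: SaleEvent[]): number {
  const lines = new Map<string, number>(); // sku → line total in cents
  let discount = 0;
  for (const e of events) {
    if (e.type === 'ItemAdded') lines.set(e.sku, e.unitCents * e.qty);
    else if (e.type === 'ItemRemoved') lines.delete(e.sku);
    else discount = e.percentOff;
  }
  const subtotal = Array.from(lines.values()).reduce((a, b) => a + b, 0);
  return Math.round(subtotal * (1 - discount / 100));
}
```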
Week 8-9: Payment Processing
Objective: Implement multi-tender payment with gateway integration.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Payment entity | Multi-tender support |
| 3-4 | Cash payment handler | Exact, over, change calculation |
| 5-6 | Card payment abstraction | Payment gateway interface |
| 7-8 | Split tender support | Multiple payment methods |
| 9-10 | Void and refund | Transaction reversal |
Claude Commands:
/dev-team create payment entity with multi-tender support
/dev-team implement cash payment handler with change calculation
/dev-team create payment gateway abstraction (Stripe/Square)
/dev-team implement split tender payment processing
/dev-team create void and refund transaction handlers
Dependencies: Sales (total calculation)
Success Criteria:
- Cash, card, and mixed payments work
- Change calculated correctly
- Failed payments don’t affect inventory
- Refunds trace to original sale
- Gateway timeouts handled gracefully
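Change and split-tender arithmetic should stay in integer cents so "accurate to the penny" holds without floating-point surprises. A minimal sketch (function names are illustrative):

```typescript
// Cash change: tendered minus total, both in integer cents.
function changeDue(totalCents: number, tenderedCents: number): number {
  if (tenderedCents < totalCents) throw new Error('insufficient tender');
  return tenderedCents - totalCents;
}

// Split tender: how much remains after the tenders applied so far.
function remainingAfterTenders(totalCents: number, tenders: number[]): number {
  const paid = tenders.reduce((a, b) => a + b, 0);
  return Math.max(totalCents - paid, 0);
}
```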
Week 9-10: Cash Drawer Operations
Objective: Manage physical cash with drawer sessions and blind counts.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Drawer session entity | Open, active, closed states |
| 3-4 | Cash in/out tracking | Expected vs actual |
| 5-6 | Blind count support | Cashier cannot see expected |
| 7-8 | Drawer reconciliation | Variance calculation |
| 9-10 | Shift handoff | Mid-shift cash pickup |
Claude Commands:
/dev-team create drawer session entity with state machine
/dev-team implement cash transaction tracking
/dev-team create blind count entry service
/dev-team implement drawer reconciliation with variance alerts
/dev-team create shift handoff workflow
Dependencies: Authentication (cashier identity), Sales (cash payments)
Success Criteria:
- Drawer opens with starting balance
- All cash movements tracked
- Blind count mode prevents cheating
- Variances flagged for review
- Shift reports accurate
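Reconciliation compares the cashier's blind count against an expected balance derived from the session's recorded cash movements — the cashier never sees the expected figure. A sketch of the variance calculation with illustrative field names:

```typescript
// Expected drawer balance from the session's cash movements;
// variance = counted − expected (negative means cash is short).
interface DrawerSession {
  openingCents: number;
  cashSalesCents: number;
  paidInCents: number;
  paidOutCents: number;
  pickupsCents: number; // mid-shift cash removed to the safe
}

function varianceCents(s: DrawerSession, countedCents: number): number {
  const expected =
    s.openingCents + s.cashSalesCents + s.paidInCents - s.paidOutCents - s.pickupsCents;
  return countedCents - expected;
}
```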
19.6 Phase 3: Support (Weeks 11-14)
Week 11-12: Customer Domain with Loyalty
Objective: Customer profiles, purchase history, and loyalty points.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Customer entity | Profile, contact info |
| 3-4 | Customer lookup | Phone, email, loyalty ID |
| 5-6 | Purchase history | Orders linked to customer |
| 7-8 | Loyalty program | Points earning and redemption |
| 9-10 | Customer API | CRUD and search endpoints |
Claude Commands:
/dev-team create customer entity with contact information
/dev-team implement customer lookup by phone, email, ID
/dev-team create purchase history tracking
/dev-team implement loyalty points system
/dev-team create customer API with search
Dependencies: Sales (purchase linkage)
Success Criteria:
- Customer lookup in < 200ms
- Points calculated on every purchase
- Points redemption decreases balance
- Purchase history complete
- GDPR data export works
Week 12-13: Offline Sync Infrastructure
Objective: Enable POS operation during network outages.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Local SQLite database | Offline storage |
| 3-4 | Queue service | Offline transaction queue |
| 5-6 | Sync protocol | Conflict resolution |
| 7-8 | Connectivity detection | Online/offline mode |
| 9-10 | Background sync | Automatic upload when online |
Claude Commands:
/dev-team implement local SQLite storage for offline mode
/dev-team create offline transaction queue service
/dev-team implement sync protocol with conflict resolution
/dev-team create connectivity detection service
/dev-team implement background sync with retry logic
Dependencies: Sales, Payments, Inventory
Success Criteria:
- POS operates fully offline
- Transactions queue locally
- Sync completes within 30 seconds online
- Conflicts resolved with last-write-wins
- No data loss during sync
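The stated conflict policy is last-write-wins: when the same record was modified both locally and centrally during an outage, the copy with the newer timestamp survives. A minimal sketch (record shape is illustrative; ties here go to the server copy, one reasonable convention):

```typescript
// Last-write-wins merge for a synced record: newer updatedAt wins,
// ties resolve in favor of the server copy.
interface SyncedRecord {
  id: string;
  value: string;
  updatedAt: number; // epoch millis
}

function mergeLww(server: SyncedRecord, client: SyncedRecord): SyncedRecord {
  return client.updatedAt > server.updatedAt ? client : server;
}
```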
Week 13-14: RFID Module (Optional)
Objective: RFID tag reading for inventory and sales.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | RFID reader abstraction | Device interface |
| 3-4 | Tag inventory scanning | Bulk inventory count |
| 5-6 | POS tag reading | Add items by RFID |
| 7-8 | Anti-theft detection | Unpaid item alerts |
| 9-10 | Tag encoding | Write product info to tags |
Claude Commands:
/dev-team create RFID reader abstraction interface
/dev-team implement bulk inventory scanning with RFID
/dev-team create POS RFID tag reading for sales
/dev-team implement anti-theft detection at exit
/dev-team create RFID tag encoding service
Dependencies: Inventory, Catalog
Success Criteria:
- Reader connects and reads tags
- Bulk scan counts 1000 items in < 60 seconds
- POS adds items by RFID instantly
- Alerts fire for unpaid items
- Tags written with product data
19.7 Phase 4: Production (Weeks 15-16)
Week 15: Monitoring and Alerting
Objective: Production observability with metrics, logs, and alerts.
| Day | Task | Deliverable |
|---|---|---|
| 1 | Structured logging | Serilog with context |
| 2 | Metrics collection | Prometheus endpoints |
| 3 | Health checks | Liveness and readiness |
| 4 | Grafana dashboards | Key metrics visualization |
| 5 | Alert rules | PagerDuty/Slack integration |
Claude Commands:
/dev-team implement structured logging with Serilog
/dev-team add Prometheus metrics endpoints
/dev-team create health check endpoints
/devops-team create Grafana dashboards
/devops-team configure alerting rules
Success Criteria:
- Logs include correlation IDs
- Key metrics exposed (latency, errors, saturation)
- Health checks report component status
- Dashboards show real-time data
- Alerts notify on-call team
Week 15: Security Hardening
Objective: Production security controls and compliance.
| Day | Task | Deliverable |
|---|---|---|
| 1 | Input validation | All endpoints validated |
| 2 | Rate limiting | API throttling |
| 3 | Secrets management | Vault integration |
| 4 | Security headers | CSP, HSTS, etc. |
| 5 | Penetration testing | Vulnerability scan |
Claude Commands:
/security-team review input validation coverage
/dev-team implement rate limiting middleware
/devops-team configure secrets management with Vault
/dev-team add security headers middleware
/security-team run penetration test scan
Success Criteria:
- No SQL injection vulnerabilities
- Rate limiting prevents abuse
- No secrets in code or logs
- Security headers configured
- Pen test findings remediated
Week 16: Production Deployment
Objective: Deploy to production with zero-downtime release.
| Day | Task | Deliverable |
|---|---|---|
| 1-2 | Production infrastructure | Kubernetes/Docker Swarm |
| 3 | Database migration | Schema applied |
| 4 | Blue-green deployment | Zero-downtime release |
| 5 | Go-live | Production traffic |
Claude Commands:
/devops-team provision production infrastructure
/devops-team run database migrations
/devops-team execute blue-green deployment
/qa-team run production smoke tests
/team go-live celebration
Success Criteria:
- Infrastructure provisioned and tested
- Database migrated without data loss
- Deployment completes in < 10 minutes
- Zero downtime during release
- Production accepting traffic
19.8 Module Dependencies
                ┌─────────────┐
                │ Multi-Tenant│
                └──────┬──────┘
                       │
       ┌───────────────┼───────────────┐
       │               │               │
┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐
│    Auth     │ │   Catalog   │ │  Locations  │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
       │               │               │
       │        ┌──────▼──────┐        │
       │        │  Inventory  │◄───────┘
       │        └──────┬──────┘
       │               │
┌──────▼───────────────▼──────┐
│            Sales            │
└──────┬───────────────┬──────┘
       │               │
┌──────▼──────┐ ┌──────▼──────┐
│  Payments   │ │  Customer   │
└──────┬──────┘ └─────────────┘
       │
┌──────▼──────┐
│ Cash Drawer │
└─────────────┘

┌─────────────┐
│    RFID     │  (Optional, independent)
└─────────────┘

┌─────────────┐
│ Offline Sync│  (Wraps: Sales, Payments, Inventory)
└─────────────┘
19.9 Risk Assessment
High Risk
| Risk | Impact | Mitigation |
|---|---|---|
| Multi-tenant data leak | Critical | Extensive testing, schema isolation |
| Payment processing failure | High | Retry logic, fallback methods |
| Offline sync data loss | High | Local backup, conflict resolution |
Medium Risk
| Risk | Impact | Mitigation |
|---|---|---|
| Performance degradation | Medium | Load testing, caching |
| RFID reader compatibility | Medium | Abstraction layer |
| Third-party API outages | Medium | Circuit breakers, fallbacks |
Low Risk
| Risk | Impact | Mitigation |
|---|---|---|
| UI complexity | Low | User testing, iteration |
| Documentation gaps | Low | Continuous documentation |
19.10 Resource Requirements
Team Composition
| Role | Count | Phase Focus |
|---|---|---|
| Senior Backend Developer | 2 | All phases |
| Frontend Developer | 1 | Phase 2-3 |
| DevOps Engineer | 1 | Phase 1, 4 |
| QA Engineer | 1 | All phases |
| Project Manager | 1 | All phases |
Infrastructure
| Resource | Development | Production |
|---|---|---|
| API Servers | 1 | 3 (min) |
| Database | Shared | Dedicated cluster |
| Cache | Shared | Dedicated Redis |
| Message Queue | Shared | Dedicated RabbitMQ |
19.11 Milestone Checklist
M1: Tenant Demo (Week 2)
- Tenant CRUD API working
- Schema provisioning automated
- Tenant isolation verified
M2: Auth Complete (Week 3)
- User registration and login
- JWT tokens functioning
- PIN login for terminals
M3: Catalog API (Week 4)
- Product CRUD complete
- Variant support working
- Search and filtering
M4: Inventory Sync (Week 6)
- Stock levels tracked
- Movements audited
- Transfers working
M5: First Sale (Week 7)
- Cart operations complete
- Sale finalization working
- Inventory decremented
M6: Payment Complete (Week 9)
- Multi-tender payments
- Card processing
- Refunds working
M7: Offline Ready (Week 13)
- Offline mode functional
- Sync protocol tested
- Conflict resolution verified
M8: Go-Live (Week 16)
- Production deployed
- Monitoring active
- Team trained
19.12 Next Steps
- Begin Chapter 20: Phase 1 Foundation for detailed week-by-week implementation
- Set up project tracking in GitHub Projects or Jira
- Schedule weekly demo sessions for stakeholder feedback
- Establish on-call rotation for Phase 4
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 19 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 20: Phase 1 - Foundation Implementation
20.1 Overview
Phase 1 establishes the foundational infrastructure: multi-tenant isolation, authentication, and catalog management. This 5-week phase (extended from 4 weeks based on security research) creates the base upon which all other modules build.
- Week 1-2: Multi-Tenant Infrastructure
- Week 3: Authentication (with security hardening)
- Week 4: Catalog Domain
- Week 5: Production Hardening (NEW)
20.2 Domain Configuration Strategy
Development Domain Setup
During development, we’ll use the existing nexusdenim.com domain with Cloudflare, allowing immediate development without waiting for a dedicated domain.
Subdomain Structure:
nexusdenim.com (existing infrastructure)
├── pos.nexusdenim.com → Platform admin portal
├── api-pos.nexusdenim.com → REST API gateway
├── {tenant}.pos.nexusdenim.com → Tenant-specific access
│ ├── nexus.pos.nexusdenim.com → Example: Nexus Clothing tenant
│ └── demo.pos.nexusdenim.com → Example: Demo tenant
│
└── (existing services - unchanged)
├── tasks.nexusdenim.com
└── orders-api.nexusdenim.com
Production Migration Path
When ready, migration to a dedicated domain (e.g., posplatform.com) requires:
| Component | Change Required | Effort |
|---|---|---|
| DNS Records | New Cloudflare zone | 10 min |
| Environment Variables | Update PLATFORM_DOMAIN | 1 min |
| Tenant Records | UPDATE query | 1 min |
| SSL Certificates | Cloudflare auto-generates | Automatic |
| Application Code | None (domain-agnostic) | 0 min |
Migration Script (run when switching domains):
-- Update all tenant domains from dev to production
UPDATE shared.tenants
SET domain = REPLACE(domain, 'nexusdenim.com', 'posplatform.com')
WHERE domain LIKE '%.nexusdenim.com';
Environment-Based Configuration
All domain references use environment variables for seamless migration:
// src/PosPlatform.Api/appsettings.json
{
"Platform": {
"Domain": "nexusdenim.com", // Change this ONE place
"AdminSubdomain": "pos", // pos.{domain}
"ApiSubdomain": "api-pos", // api-pos.{domain}
"TenantPattern": "{tenant}.pos" // {tenant}.pos.{domain}
},
"Jwt": {
"Issuer": "https://pos.nexusdenim.com",
"Audience": "https://pos.nexusdenim.com"
}
}
// src/PosPlatform.Core/Configuration/PlatformSettings.cs
namespace PosPlatform.Core.Configuration;
public class PlatformSettings
{
public string Domain { get; set; } = "nexusdenim.com";
public string AdminSubdomain { get; set; } = "pos";
public string ApiSubdomain { get; set; } = "api-pos";
public string TenantPattern { get; set; } = "{tenant}.pos";
public string AdminUrl => $"https://{AdminSubdomain}.{Domain}";
public string ApiUrl => $"https://{ApiSubdomain}.{Domain}";
public string GetTenantUrl(string tenantCode)
=> $"https://{TenantPattern.Replace("{tenant}", tenantCode.ToLowerInvariant())}.{Domain}";
}
Cloudflare Tunnel Configuration
Development Setup (nexusdenim.com):
# cloudflared/config.yml
tunnel: pos-platform-dev
credentials-file: /etc/cloudflared/credentials.json
ingress:
# Platform admin portal
- hostname: pos.nexusdenim.com
service: http://pos-web:8080
# API gateway
- hostname: api-pos.nexusdenim.com
service: http://pos-api:5100
# Wildcard for tenant subdomains
- hostname: "*.pos.nexusdenim.com"
service: http://pos-web:8080
- service: http_status:404
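Assuming `cloudflared` is installed and a tunnel named `pos-platform-dev` has already been created, the configuration above can be validated and started from the command line:

```shell
# Check that the ingress rules parse correctly
cloudflared tunnel --config cloudflared/config.yml ingress validate

# Dry-run: which service would a tenant hostname route to?
cloudflared tunnel --config cloudflared/config.yml ingress rule https://nexus.pos.nexusdenim.com

# Run the tunnel
cloudflared tunnel --config cloudflared/config.yml run pos-platform-dev
```

The `ingress rule` dry-run is a quick way to confirm the wildcard entry matches new tenant subdomains before any DNS changes go live.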
Tenant Resolution Updates
The middleware now uses configured domain patterns:
// Updated subdomain parsing to use configured domain
private string? ResolveFromSubdomain(HttpContext context, PlatformSettings settings)
{
var host = context.Request.Host.Host;
// Extract tenant from pattern: {tenant}.pos.{domain}
// e.g., nexus.pos.nexusdenim.com → nexus
var suffix = $".{settings.AdminSubdomain}.{settings.Domain}";
if (host.EndsWith(suffix, StringComparison.OrdinalIgnoreCase))
{
var tenant = host[..^suffix.Length];
if (!string.IsNullOrEmpty(tenant) && tenant != settings.AdminSubdomain)
return tenant.ToUpperInvariant();
}
return null;
}
SSL Certificate Configuration
Important: The 3-level subdomain pattern ({tenant}.pos.nexusdenim.com) requires Cloudflare Advanced Certificate Manager ($10/month) for wildcard SSL support. Universal SSL only covers 2-level wildcards.
Setup Steps:
- Enable Advanced Certificate Manager in Cloudflare dashboard
- Request a wildcard certificate for *.pos.nexusdenim.com
- Verify the certificate covers all tenant subdomains
Key Design Decisions
- Domain-Agnostic Code: All application code uses configuration, never hardcoded domains
- Environment Variables: Single source of truth for domain configuration
- Cloudflare Integration: Leverages existing Cloudflare infrastructure with ACM
- Wildcard Subdomains: Supports dynamic tenant creation without DNS changes
- Zero-Downtime Migration: Can run both domains simultaneously during transition
- SSL Strategy: Advanced Certificate Manager for multi-level wildcard support
20.3 Multi-Tenancy Architecture Decision
Why Custom Implementation Over Finbuckle.MultiTenant
During architecture planning, we evaluated Finbuckle.MultiTenant (the most popular .NET multi-tenancy library) against a custom implementation.
Finbuckle.MultiTenant Overview:
// What Finbuckle looks like
services.AddMultiTenant<TenantInfo>()
.WithHostStrategy() // tenant.example.com
.WithHeaderStrategy("X-Tenant") // X-Tenant-Code header
.WithEFCoreStore<AppDbContext, TenantInfo>();
Decision: Custom Implementation
| Factor | Finbuckle | Custom (Chosen) |
|---|---|---|
| Multi-tenant pattern | Designed for Row-Level Security (TenantId column) | Native PostgreSQL schema-per-tenant (search_path) |
| Control | Magic inside library | Every line is our code |
| Debugging | Stack traces through library internals | Direct, readable code path |
| POS-specific needs | Generic SaaS patterns | Tailored for retail (locations, registers, shifts) |
| Dependencies | External NuGet package | Zero external dependencies |
| Learning curve | Team learns library API | Team deeply understands multi-tenancy |
| Schema isolation | Requires workarounds | First-class PostgreSQL schema support |
When Finbuckle Would Make Sense
- 100+ tenants sharing tables with Row-Level Security
- Rapid prototyping where development speed > control
- Team unfamiliar with multi-tenant architecture patterns
Why Custom Makes Sense for This Platform
- Schema-per-tenant is our architecture - Finbuckle is optimized for RLS, not PostgreSQL schemas
- 5-50 tenants expected - Not the scale where library abstractions pay off
- Compliance requirements - Complete data isolation per tenant (schema separation)
- POS-specific patterns - Locations, cash drawers, shifts aren’t generic SaaS concepts
- Already designed - Switching to Finbuckle would be rework, not simplification
Our Implementation (Clean & Understandable)
// TenantResolutionMiddleware.cs - Resolve tenant from request
var tenantCode = ResolveTenantCode(context); // Header/Subdomain/JWT
var tenant = await _tenantRepository.GetByCodeAsync(tenantCode);
_tenantContext.SetTenant(tenant);
// TenantDbContext.cs - Dynamic schema binding
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
var schema = $"tenant_{_tenantContext.TenantCode.ToLowerInvariant()}";
modelBuilder.HasDefaultSchema(schema);
}
The result is roughly 60 lines of middleware we fully understand, versus a library dependency we would need to learn, debug, and work around to get schema-per-tenant behavior.
20.4 Week 1-2: Multi-Tenant Infrastructure
Day 1-2: Tenant Entity and Repository
Objective: Create the tenant domain model with repository pattern.
Claude Command:
/dev-team implement tenant entity with repository pattern
Implementation:
// src/PosPlatform.Core/Entities/Tenant.cs
using System;
using System.Collections.Generic;
namespace PosPlatform.Core.Entities;
public class Tenant
{
public Guid Id { get; private set; }
public string Code { get; private set; } = string.Empty;
public string Name { get; private set; } = string.Empty;
public string? Domain { get; private set; }
public TenantStatus Status { get; private set; }
public TenantSettings Settings { get; private set; } = new();
public DateTime CreatedAt { get; private set; }
public DateTime? UpdatedAt { get; private set; }
private Tenant() { } // EF Core
public static Tenant Create(string code, string name, string? domain = null)
{
if (string.IsNullOrWhiteSpace(code) || code.Length > 10)
throw new ArgumentException("Code must be 1-10 characters", nameof(code));
return new Tenant
{
Id = Guid.NewGuid(),
Code = code.ToUpperInvariant(),
Name = name,
Domain = domain,
Status = TenantStatus.Active,
CreatedAt = DateTime.UtcNow
};
}
public void UpdateSettings(TenantSettings settings)
{
Settings = settings ?? throw new ArgumentNullException(nameof(settings));
UpdatedAt = DateTime.UtcNow;
}
public void Suspend() => Status = TenantStatus.Suspended;
public void Activate() => Status = TenantStatus.Active;
}
public enum TenantStatus
{
Active,
Suspended,
Pending
}
public class TenantSettings
{
public string Timezone { get; set; } = "UTC";
public string Currency { get; set; } = "USD";
public decimal TaxRate { get; set; } = 0.0m;
public string? LogoUrl { get; set; }
public Dictionary<string, string> Custom { get; set; } = new();
}
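A quick usage sketch of the factory and settings methods above (the values are illustrative):

```csharp
// Create a tenant via the factory; the code is normalized to upper case.
var tenant = Tenant.Create("nexus", "Nexus Clothing", "nexus.pos.nexusdenim.com");
// tenant.Code == "NEXUS", tenant.Status == TenantStatus.Active

// Settings are replaced wholesale and stamp UpdatedAt.
tenant.UpdateSettings(new TenantSettings
{
    Timezone = "America/New_York",
    Currency = "USD",
    TaxRate = 0.0875m   // stored as a decimal fraction, not a percentage
});

tenant.Suspend();   // Status → Suspended; the middleware will reject requests
tenant.Activate();  // Status → Active
```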
// src/PosPlatform.Core/Interfaces/ITenantRepository.cs
using PosPlatform.Core.Entities;
namespace PosPlatform.Core.Interfaces;
public interface ITenantRepository
{
Task<Tenant?> GetByIdAsync(Guid id, CancellationToken ct = default);
Task<Tenant?> GetByCodeAsync(string code, CancellationToken ct = default);
Task<Tenant?> GetByDomainAsync(string domain, CancellationToken ct = default);
Task<IReadOnlyList<Tenant>> GetAllAsync(CancellationToken ct = default);
Task<Tenant> AddAsync(Tenant tenant, CancellationToken ct = default);
Task UpdateAsync(Tenant tenant, CancellationToken ct = default);
Task<bool> ExistsAsync(string code, CancellationToken ct = default);
}
// src/PosPlatform.Infrastructure/Repositories/TenantRepository.cs
using Microsoft.EntityFrameworkCore;
using PosPlatform.Core.Entities;
using PosPlatform.Core.Interfaces;
using PosPlatform.Infrastructure.Data;
namespace PosPlatform.Infrastructure.Repositories;
public class TenantRepository : ITenantRepository
{
private readonly PlatformDbContext _context;
public TenantRepository(PlatformDbContext context)
{
_context = context;
}
public async Task<Tenant?> GetByIdAsync(Guid id, CancellationToken ct = default)
=> await _context.Tenants.FindAsync(new object[] { id }, ct);
public async Task<Tenant?> GetByCodeAsync(string code, CancellationToken ct = default)
=> await _context.Tenants
.FirstOrDefaultAsync(t => t.Code == code.ToUpperInvariant(), ct);
public async Task<Tenant?> GetByDomainAsync(string domain, CancellationToken ct = default)
=> await _context.Tenants
.FirstOrDefaultAsync(t => t.Domain == domain.ToLowerInvariant(), ct);
public async Task<IReadOnlyList<Tenant>> GetAllAsync(CancellationToken ct = default)
=> await _context.Tenants.ToListAsync(ct);
public async Task<Tenant> AddAsync(Tenant tenant, CancellationToken ct = default)
{
await _context.Tenants.AddAsync(tenant, ct);
await _context.SaveChangesAsync(ct);
return tenant;
}
public async Task UpdateAsync(Tenant tenant, CancellationToken ct = default)
{
_context.Tenants.Update(tenant);
await _context.SaveChangesAsync(ct);
}
public async Task<bool> ExistsAsync(string code, CancellationToken ct = default)
=> await _context.Tenants.AnyAsync(t => t.Code == code.ToUpperInvariant(), ct);
}
Test Command:
# Run unit tests for tenant entity
dotnet test --filter "FullyQualifiedName~TenantTests"
Day 3-4: Schema Provisioning Service
Objective: Automatically create tenant-specific database schemas.
Claude Command:
/dev-team create tenant provisioning service with schema isolation
Implementation:
// src/PosPlatform.Core/Interfaces/ITenantProvisioningService.cs
namespace PosPlatform.Core.Interfaces;
public interface ITenantProvisioningService
{
Task ProvisionTenantAsync(string tenantCode, CancellationToken ct = default);
Task DeprovisionTenantAsync(string tenantCode, CancellationToken ct = default);
Task<bool> IsProvisionedAsync(string tenantCode, CancellationToken ct = default);
}
// src/PosPlatform.Infrastructure/MultiTenant/TenantProvisioningService.cs
using Microsoft.Extensions.Logging;
using Npgsql;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Infrastructure.MultiTenant;
public class TenantProvisioningService : ITenantProvisioningService
{
private readonly string _connectionString;
private readonly ILogger<TenantProvisioningService> _logger;
public TenantProvisioningService(
string connectionString,
ILogger<TenantProvisioningService> logger)
{
_connectionString = connectionString;
_logger = logger;
}
public async Task ProvisionTenantAsync(string tenantCode, CancellationToken ct = default)
{
var schemaName = GetSchemaName(tenantCode);
_logger.LogInformation("Provisioning tenant schema: {Schema}", schemaName);
await using var conn = new NpgsqlConnection(_connectionString);
await conn.OpenAsync(ct);
await using var transaction = await conn.BeginTransactionAsync(ct);
try
{
// Create schema
await ExecuteAsync(conn, $"CREATE SCHEMA IF NOT EXISTS {schemaName}", ct);
// Create locations table
await ExecuteAsync(conn, $@"
CREATE TABLE IF NOT EXISTS {schemaName}.locations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
code VARCHAR(10) NOT NULL,
name VARCHAR(100) NOT NULL,
address JSONB,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
CONSTRAINT uk_{schemaName}_locations_code UNIQUE (code)
)", ct);
// Create users table
await ExecuteAsync(conn, $@"
CREATE TABLE IF NOT EXISTS {schemaName}.users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
employee_id VARCHAR(20),
full_name VARCHAR(100) NOT NULL,
email VARCHAR(255),
password_hash VARCHAR(255),
pin_hash VARCHAR(255),
role VARCHAR(50) NOT NULL,
location_id UUID REFERENCES {schemaName}.locations(id),
is_active BOOLEAN DEFAULT true,
last_login_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW(),
CONSTRAINT uk_{schemaName}_users_email UNIQUE (email),
CONSTRAINT uk_{schemaName}_users_employee_id UNIQUE (employee_id)
)", ct);
// Create categories table
await ExecuteAsync(conn, $@"
CREATE TABLE IF NOT EXISTS {schemaName}.categories (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(100) NOT NULL,
parent_id UUID REFERENCES {schemaName}.categories(id),
sort_order INT DEFAULT 0,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW()
)", ct);
// Create products table
await ExecuteAsync(conn, $@"
CREATE TABLE IF NOT EXISTS {schemaName}.products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
sku VARCHAR(50) NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
category_id UUID REFERENCES {schemaName}.categories(id),
base_price DECIMAL(10,2) NOT NULL,
cost DECIMAL(10,2),
tax_rate DECIMAL(5,4) DEFAULT 0,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ,
CONSTRAINT uk_{schemaName}_products_sku UNIQUE (sku)
)", ct);
// Create product_variants table
await ExecuteAsync(conn, $@"
CREATE TABLE IF NOT EXISTS {schemaName}.product_variants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
product_id UUID NOT NULL REFERENCES {schemaName}.products(id),
sku VARCHAR(50) NOT NULL,
name VARCHAR(255) NOT NULL,
attributes JSONB NOT NULL DEFAULT '{{}}',
price_adjustment DECIMAL(10,2) DEFAULT 0,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
CONSTRAINT uk_{schemaName}_variants_sku UNIQUE (sku)
)", ct);
// Enable pg_trgm, required by the gin_trgm_ops index below
await ExecuteAsync(conn, "CREATE EXTENSION IF NOT EXISTS pg_trgm", ct);
// Create indexes
await ExecuteAsync(conn, $@"
CREATE INDEX IF NOT EXISTS idx_{schemaName}_products_category
ON {schemaName}.products(category_id);
CREATE INDEX IF NOT EXISTS idx_{schemaName}_products_name
ON {schemaName}.products USING gin(name gin_trgm_ops);
CREATE INDEX IF NOT EXISTS idx_{schemaName}_variants_product
ON {schemaName}.product_variants(product_id);
", ct);
await transaction.CommitAsync(ct);
_logger.LogInformation("Tenant schema provisioned: {Schema}", schemaName);
}
catch (Exception ex)
{
await transaction.RollbackAsync(ct);
_logger.LogError(ex, "Failed to provision tenant schema: {Schema}", schemaName);
throw;
}
}
public async Task DeprovisionTenantAsync(string tenantCode, CancellationToken ct = default)
{
var schemaName = GetSchemaName(tenantCode);
_logger.LogWarning("Deprovisioning tenant schema: {Schema}", schemaName);
await using var conn = new NpgsqlConnection(_connectionString);
await conn.OpenAsync(ct);
await ExecuteAsync(conn, $"DROP SCHEMA IF EXISTS {schemaName} CASCADE", ct);
}
public async Task<bool> IsProvisionedAsync(string tenantCode, CancellationToken ct = default)
{
var schemaName = GetSchemaName(tenantCode);
await using var conn = new NpgsqlConnection(_connectionString);
await conn.OpenAsync(ct);
await using var cmd = new NpgsqlCommand(
"SELECT EXISTS(SELECT 1 FROM information_schema.schemata WHERE schema_name = @schema)",
conn);
cmd.Parameters.AddWithValue("schema", schemaName);
var result = await cmd.ExecuteScalarAsync(ct);
return result is true;
}
private static string GetSchemaName(string tenantCode)
{
// The schema name is interpolated into DDL, so reject anything non-alphanumeric
if (string.IsNullOrWhiteSpace(tenantCode) || !tenantCode.All(char.IsLetterOrDigit))
throw new ArgumentException("Tenant code must be alphanumeric", nameof(tenantCode));
return $"tenant_{tenantCode.ToLowerInvariant()}";
}
private static async Task ExecuteAsync(NpgsqlConnection conn, string sql, CancellationToken ct)
{
await using var cmd = new NpgsqlCommand(sql, conn);
await cmd.ExecuteNonQueryAsync(ct);
}
}
Test Command:
# Test schema provisioning
curl -X POST http://localhost:5100/api/admin/tenants \
-H "Content-Type: application/json" \
-d '{"code": "TEST", "name": "Test Store"}'
# Verify schema exists
docker exec -it pos-postgres psql -U pos_admin -d pos_platform -c "\dn"
Day 5: Tenant Resolution Middleware
Objective: Resolve tenant from request and establish context.
Claude Command:
/dev-team implement tenant resolution middleware with request context
Implementation:
// src/PosPlatform.Core/Interfaces/ITenantContext.cs
using PosPlatform.Core.Entities;
namespace PosPlatform.Core.Interfaces;
public interface ITenantContext
{
Tenant? CurrentTenant { get; }
string? TenantCode { get; }
bool HasTenant { get; }
}
public interface ITenantContextSetter
{
void SetTenant(Tenant tenant);
void ClearTenant();
}
// src/PosPlatform.Infrastructure/MultiTenant/TenantContext.cs
using PosPlatform.Core.Entities;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Infrastructure.MultiTenant;
public class TenantContext : ITenantContext, ITenantContextSetter
{
private Tenant? _tenant;
public Tenant? CurrentTenant => _tenant;
public string? TenantCode => _tenant?.Code;
public bool HasTenant => _tenant != null;
public void SetTenant(Tenant tenant)
{
_tenant = tenant ?? throw new ArgumentNullException(nameof(tenant));
}
public void ClearTenant()
{
_tenant = null;
}
}
// src/PosPlatform.Api/Middleware/TenantResolutionMiddleware.cs
using PosPlatform.Core.Entities;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Api.Middleware;
public class TenantResolutionMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<TenantResolutionMiddleware> _logger;
public TenantResolutionMiddleware(
RequestDelegate next,
ILogger<TenantResolutionMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(
HttpContext context,
ITenantRepository tenantRepository,
ITenantContextSetter tenantContext)
{
// Skip tenant resolution for platform endpoints
if (context.Request.Path.StartsWithSegments("/api/admin") ||
context.Request.Path.StartsWithSegments("/health"))
{
await _next(context);
return;
}
var tenantCode = ResolveTenantCode(context);
if (string.IsNullOrEmpty(tenantCode))
{
context.Response.StatusCode = 400;
await context.Response.WriteAsJsonAsync(new { error = "Tenant not specified" });
return;
}
var tenant = await tenantRepository.GetByCodeAsync(tenantCode);
if (tenant == null)
{
context.Response.StatusCode = 404;
await context.Response.WriteAsJsonAsync(new { error = "Tenant not found" });
return;
}
if (tenant.Status != TenantStatus.Active)
{
context.Response.StatusCode = 403;
await context.Response.WriteAsJsonAsync(new { error = "Tenant is not active" });
return;
}
tenantContext.SetTenant(tenant);
_logger.LogDebug("Tenant resolved: {TenantCode}", tenant.Code);
try
{
await _next(context);
}
finally
{
tenantContext.ClearTenant();
}
}
private static string? ResolveTenantCode(HttpContext context)
{
// Priority 1: Header
if (context.Request.Headers.TryGetValue("X-Tenant-Code", out var headerValue))
return headerValue.ToString();
// Priority 2: Query string
if (context.Request.Query.TryGetValue("tenant", out var queryValue))
return queryValue.ToString();
// Priority 3: Subdomain (e.g., tenant1.posplatform.com)
var host = context.Request.Host.Host;
var parts = host.Split('.');
if (parts.Length >= 3)
return parts[0];
// Priority 4: JWT claim (if authenticated)
var tenantClaim = context.User?.FindFirst("tenant_code");
if (tenantClaim != null)
return tenantClaim.Value;
return null;
}
}
// Extension method for registration
public static class TenantMiddlewareExtensions
{
public static IApplicationBuilder UseTenantResolution(this IApplicationBuilder app)
{
return app.UseMiddleware<TenantResolutionMiddleware>();
}
}
Registration in Program.cs:
// Add to Program.cs
builder.Services.AddScoped<TenantContext>();
builder.Services.AddScoped<ITenantContext>(sp => sp.GetRequiredService<TenantContext>());
builder.Services.AddScoped<ITenantContextSetter>(sp => sp.GetRequiredService<TenantContext>());
// In middleware pipeline (after authentication, before controllers)
app.UseAuthentication();
app.UseTenantResolution();
app.UseAuthorization();
Day 6-7: Dynamic Connection Routing
Objective: Route database connections to tenant-specific schemas.
Claude Command:
/dev-team implement dynamic connection string routing per tenant
Implementation:
// src/PosPlatform.Infrastructure/Data/TenantDbContext.cs
using Microsoft.EntityFrameworkCore;
using PosPlatform.Core.Entities;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Infrastructure.Data;
public class TenantDbContext : DbContext
{
private readonly ITenantContext _tenantContext;
private readonly string _schemaName;
public TenantDbContext(
DbContextOptions<TenantDbContext> options,
ITenantContext tenantContext)
: base(options)
{
_tenantContext = tenantContext;
_schemaName = tenantContext.HasTenant
? $"tenant_{tenantContext.TenantCode!.ToLowerInvariant()}"
: "public";
}
public DbSet<Location> Locations => Set<Location>();
public DbSet<User> Users => Set<User>();
public DbSet<Category> Categories => Set<Category>();
public DbSet<Product> Products => Set<Product>();
public DbSet<ProductVariant> ProductVariants => Set<ProductVariant>();
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Set default schema for all entities
modelBuilder.HasDefaultSchema(_schemaName);
// Configure entities
modelBuilder.Entity<Location>(entity =>
{
entity.ToTable("locations");
entity.HasKey(e => e.Id);
entity.Property(e => e.Code).HasMaxLength(10).IsRequired();
entity.Property(e => e.Name).HasMaxLength(100).IsRequired();
entity.HasIndex(e => e.Code).IsUnique();
});
modelBuilder.Entity<User>(entity =>
{
entity.ToTable("users");
entity.HasKey(e => e.Id);
entity.Property(e => e.FullName).HasMaxLength(100).IsRequired();
entity.Property(e => e.Email).HasMaxLength(255);
entity.HasIndex(e => e.Email).IsUnique();
entity.HasIndex(e => e.EmployeeId).IsUnique();
entity.HasOne(e => e.Location)
.WithMany()
.HasForeignKey(e => e.LocationId);
});
modelBuilder.Entity<Category>(entity =>
{
entity.ToTable("categories");
entity.HasKey(e => e.Id);
entity.Property(e => e.Name).HasMaxLength(100).IsRequired();
entity.HasOne(e => e.Parent)
.WithMany(e => e.Children)
.HasForeignKey(e => e.ParentId)
.OnDelete(DeleteBehavior.Restrict);
});
modelBuilder.Entity<Product>(entity =>
{
entity.ToTable("products");
entity.HasKey(e => e.Id);
entity.Property(e => e.Sku).HasMaxLength(50).IsRequired();
entity.Property(e => e.Name).HasMaxLength(255).IsRequired();
entity.Property(e => e.BasePrice).HasPrecision(10, 2);
entity.Property(e => e.Cost).HasPrecision(10, 2);
entity.HasIndex(e => e.Sku).IsUnique();
entity.HasOne(e => e.Category)
.WithMany()
.HasForeignKey(e => e.CategoryId);
});
modelBuilder.Entity<ProductVariant>(entity =>
{
entity.ToTable("product_variants");
entity.HasKey(e => e.Id);
entity.Property(e => e.Sku).HasMaxLength(50).IsRequired();
entity.HasIndex(e => e.Sku).IsUnique();
entity.HasOne(e => e.Product)
.WithMany(p => p.Variants)
.HasForeignKey(e => e.ProductId);
});
}
}
// DI Registration in Program.cs
builder.Services.AddDbContext<TenantDbContext>((sp, options) =>
{
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
options.UseNpgsql(connectionString);
});
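One caveat worth flagging with this registration: EF Core caches the compiled model once per context type, so the schema chosen in `OnModelCreating` would by default apply only to the first tenant that touches `TenantDbContext`. A sketch of the standard fix, keying the model cache by schema, assuming a public `SchemaName` property is exposed on `TenantDbContext` (not shown above):

```csharp
// src/PosPlatform.Infrastructure/Data/TenantModelCacheKeyFactory.cs
// Keys EF Core's model cache by (context type, schema) so each tenant schema
// gets its own compiled model instead of reusing the first tenant's.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;

public class TenantModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create(DbContext context, bool designTime)
        => context is TenantDbContext tenantDb
            ? (context.GetType(), tenantDb.SchemaName, designTime)
            : (object)(context.GetType(), designTime);
}

// Registration alongside the existing AddDbContext call:
// options.UseNpgsql(connectionString)
//        .ReplaceService<IModelCacheKeyFactory, TenantModelCacheKeyFactory>();
```

Without this, the tenant isolation tests in section 20.4 would pass for the first tenant and silently read the wrong schema for every other one.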
Day 8-9: Tenant Management API
Objective: Create REST API for tenant CRUD operations.
Claude Command:
/dev-team create tenant management API with CRUD endpoints
Implementation:
// src/PosPlatform.Api/Controllers/Admin/TenantsController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using PosPlatform.Core.Entities;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Api.Controllers.Admin;
[ApiController]
[Route("api/admin/tenants")]
[Authorize(Roles = "super_admin")]
public class TenantsController : ControllerBase
{
private readonly ITenantRepository _tenantRepository;
private readonly ITenantProvisioningService _provisioningService;
private readonly ILogger<TenantsController> _logger;
public TenantsController(
ITenantRepository tenantRepository,
ITenantProvisioningService provisioningService,
ILogger<TenantsController> logger)
{
_tenantRepository = tenantRepository;
_provisioningService = provisioningService;
_logger = logger;
}
[HttpGet]
public async Task<ActionResult<IEnumerable<TenantDto>>> GetAll(CancellationToken ct)
{
var tenants = await _tenantRepository.GetAllAsync(ct);
return Ok(tenants.Select(TenantDto.FromEntity));
}
[HttpGet("{code}")]
public async Task<ActionResult<TenantDto>> GetByCode(string code, CancellationToken ct)
{
var tenant = await _tenantRepository.GetByCodeAsync(code, ct);
if (tenant == null)
return NotFound();
return Ok(TenantDto.FromEntity(tenant));
}
[HttpPost]
public async Task<ActionResult<TenantDto>> Create(
[FromBody] CreateTenantRequest request,
CancellationToken ct)
{
if (await _tenantRepository.ExistsAsync(request.Code, ct))
return Conflict(new { error = "Tenant code already exists" });
var tenant = Tenant.Create(request.Code, request.Name, request.Domain);
if (request.Settings != null)
tenant.UpdateSettings(request.Settings);
await _tenantRepository.AddAsync(tenant, ct);
// Provision database schema
await _provisioningService.ProvisionTenantAsync(tenant.Code, ct);
_logger.LogInformation("Tenant created: {Code}", tenant.Code);
return CreatedAtAction(
nameof(GetByCode),
new { code = tenant.Code },
TenantDto.FromEntity(tenant));
}
[HttpPut("{code}/settings")]
public async Task<IActionResult> UpdateSettings(
string code,
[FromBody] TenantSettings settings,
CancellationToken ct)
{
var tenant = await _tenantRepository.GetByCodeAsync(code, ct);
if (tenant == null)
return NotFound();
tenant.UpdateSettings(settings);
await _tenantRepository.UpdateAsync(tenant, ct);
return NoContent();
}
[HttpPost("{code}/suspend")]
public async Task<IActionResult> Suspend(string code, CancellationToken ct)
{
var tenant = await _tenantRepository.GetByCodeAsync(code, ct);
if (tenant == null)
return NotFound();
tenant.Suspend();
await _tenantRepository.UpdateAsync(tenant, ct);
_logger.LogWarning("Tenant suspended: {Code}", code);
return NoContent();
}
[HttpPost("{code}/activate")]
public async Task<IActionResult> Activate(string code, CancellationToken ct)
{
var tenant = await _tenantRepository.GetByCodeAsync(code, ct);
if (tenant == null)
return NotFound();
tenant.Activate();
await _tenantRepository.UpdateAsync(tenant, ct);
return NoContent();
}
}
// DTOs
public record CreateTenantRequest(
string Code,
string Name,
string? Domain,
TenantSettings? Settings);
public record TenantDto(
Guid Id,
string Code,
string Name,
string? Domain,
string Status,
TenantSettings Settings,
DateTime CreatedAt)
{
public static TenantDto FromEntity(Tenant tenant) => new(
tenant.Id,
tenant.Code,
tenant.Name,
tenant.Domain,
tenant.Status.ToString(),
tenant.Settings,
tenant.CreatedAt);
}
Day 10: Integration Tests
Objective: Verify tenant isolation through integration tests.
Claude Command:
/qa-team write tenant isolation integration tests
Implementation:
// tests/PosPlatform.Api.Tests/TenantIsolationTests.cs
using System.Net;
using System.Net.Http.Json;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;
namespace PosPlatform.Api.Tests;
public class TenantIsolationTests : IClassFixture<WebApplicationFactory<Program>>
{
private readonly HttpClient _client;
public TenantIsolationTests(WebApplicationFactory<Program> factory)
{
_client = factory.CreateClient();
}
[Fact]
public async Task Request_WithoutTenant_Returns400()
{
var response = await _client.GetAsync("/api/products");
Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
}
[Fact]
public async Task Request_WithInvalidTenant_Returns404()
{
_client.DefaultRequestHeaders.Add("X-Tenant-Code", "INVALID");
var response = await _client.GetAsync("/api/products");
Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
}
[Fact]
public async Task Products_FromDifferentTenants_AreIsolated()
{
// Create product in Tenant A
_client.DefaultRequestHeaders.Clear();
_client.DefaultRequestHeaders.Add("X-Tenant-Code", "TENANT_A");
var productA = new { Sku = "SKU-A", Name = "Product A", BasePrice = 10.00m };
await _client.PostAsJsonAsync("/api/products", productA);
// Create product in Tenant B
_client.DefaultRequestHeaders.Clear();
_client.DefaultRequestHeaders.Add("X-Tenant-Code", "TENANT_B");
var productB = new { Sku = "SKU-B", Name = "Product B", BasePrice = 20.00m };
await _client.PostAsJsonAsync("/api/products", productB);
// Verify Tenant A only sees their product
_client.DefaultRequestHeaders.Clear();
_client.DefaultRequestHeaders.Add("X-Tenant-Code", "TENANT_A");
var responseA = await _client.GetFromJsonAsync<ProductListResponse>("/api/products");
Assert.Single(responseA!.Items);
Assert.Equal("SKU-A", responseA.Items[0].Sku);
// Verify Tenant B only sees their product
_client.DefaultRequestHeaders.Clear();
_client.DefaultRequestHeaders.Add("X-Tenant-Code", "TENANT_B");
var responseB = await _client.GetFromJsonAsync<ProductListResponse>("/api/products");
Assert.Single(responseB!.Items);
Assert.Equal("SKU-B", responseB.Items[0].Sku);
}
}
public record ProductListResponse(List<ProductItem> Items);
public record ProductItem(string Sku, string Name, decimal BasePrice);
Test Command:
# Run integration tests
dotnet test tests/PosPlatform.Api.Tests --filter "FullyQualifiedName~TenantIsolation"
20.5 Week 3: Authentication System
Day 1-2: User Entity with Password Hashing
Claude Command:
/dev-team implement user entity with bcrypt password hashing
Implementation:
// src/PosPlatform.Core/Entities/User.cs
// Hashing uses the BCrypt.Net-Next NuGet package (calls below are fully qualified).
namespace PosPlatform.Core.Entities;
public class User
{
public Guid Id { get; private set; }
public string? EmployeeId { get; private set; }
public string FullName { get; private set; } = string.Empty;
public string? Email { get; private set; }
public string? PasswordHash { get; private set; }
public string? PinHash { get; private set; }
public UserRole Role { get; private set; }
public Guid? LocationId { get; private set; }
public Location? Location { get; private set; }
public bool IsActive { get; private set; }
public DateTime? LastLoginAt { get; private set; }
public DateTime CreatedAt { get; private set; }
private User() { }
public static User Create(
string fullName,
UserRole role,
string? email = null,
string? employeeId = null)
{
return new User
{
Id = Guid.NewGuid(),
FullName = fullName,
Email = email?.ToLowerInvariant(),
EmployeeId = employeeId,
Role = role,
IsActive = true,
CreatedAt = DateTime.UtcNow
};
}
public void SetPassword(string password)
{
if (string.IsNullOrWhiteSpace(password) || password.Length < 8)
throw new ArgumentException("Password must be at least 8 characters");
PasswordHash = BCrypt.Net.BCrypt.HashPassword(password, 12);
}
public bool VerifyPassword(string password)
{
if (string.IsNullOrEmpty(PasswordHash))
return false;
return BCrypt.Net.BCrypt.Verify(password, PasswordHash);
}
public void SetPin(string pin)
{
if (string.IsNullOrWhiteSpace(pin) || !pin.All(char.IsDigit) || pin.Length != 6)
throw new ArgumentException("PIN must be exactly 6 digits");
// Block common weak PINs
var weakPins = new[] { "000000", "111111", "123456", "654321", "012345" };
if (weakPins.Contains(pin))
throw new ArgumentException("PIN is too weak. Choose a more secure combination.");
PinHash = BCrypt.Net.BCrypt.HashPassword(pin, 10);
}
public bool VerifyPin(string pin)
{
if (string.IsNullOrEmpty(PinHash))
return false;
return BCrypt.Net.BCrypt.Verify(pin, PinHash);
}
public void AssignLocation(Guid locationId) => LocationId = locationId;
public void RecordLogin() => LastLoginAt = DateTime.UtcNow;
public void Deactivate() => IsActive = false;
public void Activate() => IsActive = true;
}
public enum UserRole
{
Cashier,
Supervisor,
Manager,
Admin
}
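The entity above assumes the BCrypt.Net-Next NuGet package. If you prefer a dependency-free alternative, .NET's built-in PBKDF2 (Rfc2898DeriveBytes.Pbkdf2, available since .NET 6) offers comparable slow-hash properties. A minimal standalone sketch, not a drop-in replacement for the BCrypt calls above:

```csharp
using System;
using System.Security.Cryptography;

// Illustrative only; 210,000 iterations of PBKDF2-SHA512 follows current
// OWASP guidance but should be reviewed before production use.
static string HashPassword(string password)
{
    byte[] salt = RandomNumberGenerator.GetBytes(16);
    byte[] hash = Rfc2898DeriveBytes.Pbkdf2(
        password, salt, 210_000, HashAlgorithmName.SHA512, 32);
    return $"{Convert.ToBase64String(salt)}.{Convert.ToBase64String(hash)}";
}

static bool VerifyPassword(string password, string stored)
{
    var parts = stored.Split('.');
    byte[] salt = Convert.FromBase64String(parts[0]);
    byte[] expected = Convert.FromBase64String(parts[1]);
    byte[] actual = Rfc2898DeriveBytes.Pbkdf2(
        password, salt, 210_000, HashAlgorithmName.SHA512, 32);
    // Constant-time comparison avoids leaking where the hashes diverge
    return CryptographicOperations.FixedTimeEquals(actual, expected);
}

var stored = HashPassword("correct horse battery");
Console.WriteLine(VerifyPassword("correct horse battery", stored)); // True
Console.WriteLine(VerifyPassword("wrong guess", stored));           // False
```

Either way, the important properties are a per-user salt, a deliberately slow derivation, and constant-time verification.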
Day 3-4: JWT Token Service
Claude Command:
/dev-team create JWT token service with refresh token support
Implementation:
// src/PosPlatform.Infrastructure/Services/JwtTokenService.cs
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Options;
using Microsoft.IdentityModel.Tokens;
using PosPlatform.Core.Entities;
namespace PosPlatform.Infrastructure.Services;
public interface IJwtTokenService
{
TokenResult GenerateTokens(User user, string tenantCode);
ClaimsPrincipal? ValidateToken(string token);
string GenerateRefreshToken();
}
public record TokenResult(
string AccessToken,
string RefreshToken,
DateTime ExpiresAt);
public class JwtTokenService : IJwtTokenService
{
private readonly JwtSettings _settings;
private readonly byte[] _key;
public JwtTokenService(IOptions<JwtSettings> settings)
{
_settings = settings.Value;
_key = Encoding.UTF8.GetBytes(_settings.SecretKey);
}
public TokenResult GenerateTokens(User user, string tenantCode)
{
var expiresAt = DateTime.UtcNow.AddMinutes(_settings.AccessTokenExpirationMinutes);
var claims = new List<Claim>
{
new(ClaimTypes.NameIdentifier, user.Id.ToString()),
new(ClaimTypes.Name, user.FullName),
new(ClaimTypes.Role, user.Role.ToString()),
new("tenant_code", tenantCode),
new("jti", Guid.NewGuid().ToString())
};
if (!string.IsNullOrEmpty(user.Email))
claims.Add(new Claim(ClaimTypes.Email, user.Email));
if (user.LocationId.HasValue)
claims.Add(new Claim("location_id", user.LocationId.Value.ToString()));
var tokenDescriptor = new SecurityTokenDescriptor
{
Subject = new ClaimsIdentity(claims),
Expires = expiresAt,
Issuer = _settings.Issuer,
Audience = _settings.Audience,
SigningCredentials = new SigningCredentials(
new SymmetricSecurityKey(_key),
SecurityAlgorithms.HmacSha256Signature)
};
var tokenHandler = new JwtSecurityTokenHandler();
var token = tokenHandler.CreateToken(tokenDescriptor);
return new TokenResult(
tokenHandler.WriteToken(token),
GenerateRefreshToken(),
expiresAt);
}
public ClaimsPrincipal? ValidateToken(string token)
{
var tokenHandler = new JwtSecurityTokenHandler();
try
{
var principal = tokenHandler.ValidateToken(token, new TokenValidationParameters
{
ValidateIssuerSigningKey = true,
IssuerSigningKey = new SymmetricSecurityKey(_key),
ValidateIssuer = true,
ValidIssuer = _settings.Issuer,
ValidateAudience = true,
ValidAudience = _settings.Audience,
ValidateLifetime = true,
ClockSkew = TimeSpan.Zero
}, out _);
return principal;
}
catch
{
return null;
}
}
public string GenerateRefreshToken()
{
var randomBytes = new byte[64];
using var rng = RandomNumberGenerator.Create();
rng.GetBytes(randomBytes);
return Convert.ToBase64String(randomBytes);
}
}
public class JwtSettings
{
public string SecretKey { get; set; } = string.Empty;
public string Issuer { get; set; } = "PosPlatform";
public string Audience { get; set; } = "PosPlatform";
public int AccessTokenExpirationMinutes { get; set; } = 60;
public int RefreshTokenExpirationDays { get; set; } = 7;
}
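JwtTokenService reads JwtSettings through IOptions, so the settings must be bound at startup. A sketch of the registration; the "Jwt" section name and the appsettings layout below are assumptions to match to your own configuration:

```csharp
// Program.cs -- bind JwtSettings and register the token service.
// Section name "Jwt" is an assumption; match it to your configuration layout.
builder.Services.Configure<JwtSettings>(builder.Configuration.GetSection("Jwt"));
builder.Services.AddSingleton<IJwtTokenService, JwtTokenService>();

// appsettings.json shape (SecretKey belongs in a secret store, never in
// source control, and must be at least 32 bytes for HMAC-SHA256):
// "Jwt": {
//   "SecretKey": "<random 256-bit value>",
//   "Issuer": "PosPlatform",
//   "Audience": "PosPlatform",
//   "AccessTokenExpirationMinutes": 60,
//   "RefreshTokenExpirationDays": 7
// }
```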
Day 5-6: PIN Authentication
Claude Command:
/dev-team implement PIN authentication for POS terminals
Implementation:
// src/PosPlatform.Api/Controllers/AuthController.cs
using System.Security.Claims;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using PosPlatform.Core.Interfaces;
using PosPlatform.Infrastructure.Services;
namespace PosPlatform.Api.Controllers;
[ApiController]
[Route("api/auth")]
public class AuthController : ControllerBase
{
private readonly IUserRepository _userRepository;
private readonly IJwtTokenService _tokenService;
private readonly ITenantContext _tenantContext;
public AuthController(
IUserRepository userRepository,
IJwtTokenService tokenService,
ITenantContext tenantContext)
{
_userRepository = userRepository;
_tokenService = tokenService;
_tenantContext = tenantContext;
}
[HttpPost("login")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> Login(
[FromBody] LoginRequest request,
CancellationToken ct)
{
var user = await _userRepository.GetByEmailAsync(request.Email, ct);
if (user == null || !user.IsActive)
return Unauthorized(new { error = "Invalid credentials" });
if (!user.VerifyPassword(request.Password))
return Unauthorized(new { error = "Invalid credentials" });
user.RecordLogin();
await _userRepository.UpdateAsync(user, ct);
var tokens = _tokenService.GenerateTokens(user, _tenantContext.TenantCode!);
return Ok(new LoginResponse(
tokens.AccessToken,
tokens.RefreshToken,
tokens.ExpiresAt,
UserDto.FromEntity(user)));
}
[HttpPost("login/pin")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> LoginWithPin(
[FromBody] PinLoginRequest request,
CancellationToken ct)
{
var user = await _userRepository.GetByEmployeeIdAsync(request.EmployeeId, ct);
if (user == null || !user.IsActive)
return Unauthorized(new { error = "Invalid credentials" });
if (!user.VerifyPin(request.Pin))
return Unauthorized(new { error = "Invalid PIN" });
user.RecordLogin();
await _userRepository.UpdateAsync(user, ct);
var tokens = _tokenService.GenerateTokens(user, _tenantContext.TenantCode!);
return Ok(new LoginResponse(
tokens.AccessToken,
tokens.RefreshToken,
tokens.ExpiresAt,
UserDto.FromEntity(user)));
}
[HttpPost("refresh")]
[AllowAnonymous]
public async Task<ActionResult<TokenResponse>> Refresh(
[FromBody] RefreshRequest request,
CancellationToken ct)
{
// Simplified flow: this re-validates the access token, which fails once it
// expires because ValidateLifetime is on. Production should instead look up
// the refresh token in the database (see "Refresh Token Rotation" later in
// this chapter).
var principal = _tokenService.ValidateToken(request.AccessToken);
if (principal == null)
return Unauthorized();
var userId = Guid.Parse(principal.FindFirst(ClaimTypes.NameIdentifier)!.Value);
var user = await _userRepository.GetByIdAsync(userId, ct);
if (user == null || !user.IsActive)
return Unauthorized();
var tokens = _tokenService.GenerateTokens(user, _tenantContext.TenantCode!);
return Ok(new TokenResponse(
tokens.AccessToken,
tokens.RefreshToken,
tokens.ExpiresAt));
}
[HttpPost("logout")]
[Authorize]
public IActionResult Logout()
{
// In production, invalidate refresh token in database
return NoContent();
}
}
// DTOs
public record LoginRequest(string Email, string Password);
public record PinLoginRequest(string EmployeeId, string Pin);
public record RefreshRequest(string AccessToken, string RefreshToken);
public record LoginResponse(string AccessToken, string RefreshToken, DateTime ExpiresAt, UserDto User);
public record TokenResponse(string AccessToken, string RefreshToken, DateTime ExpiresAt);
public record UserDto(Guid Id, string FullName, string? Email, string Role, Guid? LocationId)
{
public static UserDto FromEntity(User user) => new(
user.Id, user.FullName, user.Email, user.Role.ToString(), user.LocationId);
}
20.6 Week 3-4: Catalog Domain
Day 1-2: Product and Category Entities
Claude Command:
/dev-team create product entity with variant support
See entities defined in TenantDbContext above.
Day 5-6: Pricing Rules Engine
Claude Command:
/dev-team create pricing rules engine
Implementation:
// src/PosPlatform.Core/Services/PricingService.cs
namespace PosPlatform.Core.Services;
public interface IPricingService
{
decimal CalculatePrice(Product product, ProductVariant? variant, PricingContext context);
}
public class PricingService : IPricingService
{
public decimal CalculatePrice(Product product, ProductVariant? variant, PricingContext context)
{
var basePrice = product.BasePrice;
// Apply variant adjustment
if (variant != null)
basePrice += variant.PriceAdjustment;
// Apply promotions
foreach (var promo in context.ActivePromotions)
{
if (promo.AppliesTo(product))
basePrice = promo.Apply(basePrice);
}
// Apply customer discount
if (context.CustomerDiscount > 0)
basePrice *= (1 - context.CustomerDiscount);
return Math.Round(basePrice, 2);
}
}
public class PricingContext
{
public List<Promotion> ActivePromotions { get; set; } = new();
public decimal CustomerDiscount { get; set; }
public DateTime PriceDate { get; set; } = DateTime.UtcNow;
}
public abstract class Promotion
{
public abstract bool AppliesTo(Product product);
public abstract decimal Apply(decimal price);
}
public class PercentagePromotion : Promotion
{
public decimal DiscountPercent { get; set; }
public Guid? CategoryId { get; set; }
public override bool AppliesTo(Product product)
=> !CategoryId.HasValue || product.CategoryId == CategoryId;
public override decimal Apply(decimal price)
=> price * (1 - DiscountPercent / 100);
}
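To sanity-check the order of operations in CalculatePrice (variant adjustment, then promotions, then customer discount, then rounding), here is a standalone arithmetic walk-through with the entity types elided:

```csharp
using System;

// Product base price plus variant adjustment
decimal price = 20.00m + 2.00m;          // 22.00

// 10% percentage promotion (PercentagePromotion.Apply)
price *= 1 - 10m / 100;                  // 19.80

// 5% customer discount applied last, as in CalculatePrice
price *= 1 - 0.05m;                      // 18.81

decimal final = Math.Round(price, 2);
Console.WriteLine(final); // 18.81
```

Note that ordering matters: applying the customer discount before the promotion yields the same figure only because both are multiplicative; fixed-amount promotions would make the sequence significant.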
PIN Rate Limiting & Lockout (Security Enhancement)
Objective: Prevent brute-force attacks on 6-digit PINs.
Research Finding: A 6-digit PIN has only 10^6 combinations; without rate limiting, an attacker sustaining even 100 requests per second could exhaust the entire keyspace in under three hours. Add account lockout after repeated failed attempts.
Claude Command:
/dev-team implement PIN rate limiting with lockout
Implementation:
// src/PosPlatform.Core/Services/PinAttemptTracker.cs
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Logging;
namespace PosPlatform.Core.Services;
public interface IPinAttemptTracker
{
Task<bool> IsLockedOutAsync(string employeeId, CancellationToken ct = default);
Task RecordFailedAttemptAsync(string employeeId, CancellationToken ct = default);
Task ResetAttemptsAsync(string employeeId, CancellationToken ct = default);
Task<int> GetFailedAttemptsAsync(string employeeId, CancellationToken ct = default);
}
public class PinAttemptTracker : IPinAttemptTracker
{
private readonly IDistributedCache _cache;
private readonly ILogger<PinAttemptTracker> _logger;
private const int MaxAttempts = 5;
private const int LockoutMinutes = 15;
private const int ManagerResetThreshold = 10;
public PinAttemptTracker(IDistributedCache cache, ILogger<PinAttemptTracker> logger)
{
_cache = cache;
_logger = logger;
}
public async Task<bool> IsLockedOutAsync(string employeeId, CancellationToken ct = default)
{
var lockoutKey = $"pin_lockout:{employeeId}";
var lockout = await _cache.GetStringAsync(lockoutKey, ct);
return lockout != null;
}
public async Task RecordFailedAttemptAsync(string employeeId, CancellationToken ct = default)
{
var attemptsKey = $"pin_attempts:{employeeId}";
var lockoutKey = $"pin_lockout:{employeeId}";
// Get current attempts
var currentStr = await _cache.GetStringAsync(attemptsKey, ct);
var current = string.IsNullOrEmpty(currentStr) ? 0 : int.Parse(currentStr);
current++;
// Store updated attempts (expires in 1 hour)
await _cache.SetStringAsync(attemptsKey, current.ToString(),
new DistributedCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
}, ct);
_logger.LogWarning("PIN attempt {Attempt} for employee {EmployeeId}", current, employeeId);
// Lock out after MaxAttempts
if (current >= MaxAttempts)
{
await _cache.SetStringAsync(lockoutKey, DateTime.UtcNow.ToString("O"),
new DistributedCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(LockoutMinutes)
}, ct);
_logger.LogError("Employee {EmployeeId} locked out after {Attempts} failed PIN attempts",
employeeId, current);
}
// Alert for potential security incident
if (current >= ManagerResetThreshold)
{
_logger.LogCritical("Employee {EmployeeId} exceeded {Threshold} failed PIN attempts - requires manager reset",
employeeId, ManagerResetThreshold);
}
}
public async Task ResetAttemptsAsync(string employeeId, CancellationToken ct = default)
{
await _cache.RemoveAsync($"pin_attempts:{employeeId}", ct);
await _cache.RemoveAsync($"pin_lockout:{employeeId}", ct);
}
public async Task<int> GetFailedAttemptsAsync(string employeeId, CancellationToken ct = default)
{
var attemptsKey = $"pin_attempts:{employeeId}";
var currentStr = await _cache.GetStringAsync(attemptsKey, ct);
return string.IsNullOrEmpty(currentStr) ? 0 : int.Parse(currentStr);
}
}
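The tracker depends only on IDistributedCache, so any backing store satisfies it. A sketch of a Redis-backed registration, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package and a "Redis" connection string:

```csharp
// Program.cs -- PinAttemptTracker needs an IDistributedCache implementation.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
    options.InstanceName = "pos:"; // key prefix for this application
});
builder.Services.AddSingleton<IPinAttemptTracker, PinAttemptTracker>();

// For single-node development, AddDistributedMemoryCache() also satisfies
// the interface, but lockout state is then lost on restart and not shared
// between API instances -- fine locally, unsafe in production.
```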
Updated PIN Login Endpoint (the controller now also receives IPinAttemptTracker and ILogger&lt;AuthController&gt; through constructor injection):
[HttpPost("login/pin")]
[AllowAnonymous]
public async Task<ActionResult<LoginResponse>> LoginWithPin(
[FromBody] PinLoginRequest request,
CancellationToken ct)
{
// Check lockout first
if (await _pinTracker.IsLockedOutAsync(request.EmployeeId, ct))
{
_logger.LogWarning("PIN login attempt for locked-out employee {EmployeeId}", request.EmployeeId);
return Unauthorized(new { error = "Account temporarily locked. Try again in 15 minutes." });
}
var user = await _userRepository.GetByEmployeeIdAsync(request.EmployeeId, ct);
if (user == null || !user.IsActive)
{
await _pinTracker.RecordFailedAttemptAsync(request.EmployeeId, ct);
return Unauthorized(new { error = "Invalid credentials" });
}
if (!user.VerifyPin(request.Pin))
{
await _pinTracker.RecordFailedAttemptAsync(request.EmployeeId, ct);
var remaining = 5 - await _pinTracker.GetFailedAttemptsAsync(request.EmployeeId, ct); // 5 mirrors PinAttemptTracker.MaxAttempts; consider exposing it on the interface
return Unauthorized(new { error = $"Invalid PIN. {Math.Max(0, remaining)} attempts remaining." });
}
// Success - reset attempts
await _pinTracker.ResetAttemptsAsync(request.EmployeeId, ct);
user.RecordLogin();
await _userRepository.UpdateAsync(user, ct);
var tokens = _tokenService.GenerateTokens(user, _tenantContext.TenantCode!);
return Ok(new LoginResponse(
tokens.AccessToken,
tokens.RefreshToken,
tokens.ExpiresAt,
UserDto.FromEntity(user)));
}
Refresh Token Rotation (Security Enhancement)
Objective: Prevent token theft by implementing single-use refresh tokens with family tracking.
Research Finding: Without rotation, a stolen refresh token can be reused indefinitely within its validity period.
Claude Command:
/dev-team implement refresh token rotation with reuse detection
Implementation:
// src/PosPlatform.Core/Entities/RefreshToken.cs
using System.Security.Cryptography;
namespace PosPlatform.Core.Entities;
public class RefreshToken
{
public Guid Id { get; private set; }
public string Token { get; private set; } = string.Empty;
public Guid UserId { get; private set; }
public string FamilyId { get; private set; } = string.Empty; // Groups related tokens
public bool IsRevoked { get; private set; }
public bool IsUsed { get; private set; }
public DateTime ExpiresAt { get; private set; }
public DateTime CreatedAt { get; private set; }
public Guid? ReplacedByTokenId { get; private set; } // Chain tracking
private RefreshToken() { }
public static RefreshToken Create(Guid userId, int expirationDays = 7, string? familyId = null)
{
return new RefreshToken
{
Id = Guid.NewGuid(),
Token = GenerateSecureToken(),
UserId = userId,
FamilyId = familyId ?? Guid.NewGuid().ToString(),
ExpiresAt = DateTime.UtcNow.AddDays(expirationDays),
CreatedAt = DateTime.UtcNow
};
}
public RefreshToken Rotate()
{
if (IsRevoked || IsUsed)
throw new InvalidOperationException("Cannot rotate a revoked or used token");
IsUsed = true;
var newToken = Create(UserId, 7, FamilyId);
ReplacedByTokenId = newToken.Id;
return newToken;
}
public void Revoke() => IsRevoked = true;
public bool IsValid => !IsRevoked && !IsUsed && ExpiresAt > DateTime.UtcNow;
private static string GenerateSecureToken()
{
var randomBytes = new byte[64];
using var rng = RandomNumberGenerator.Create();
rng.GetBytes(randomBytes);
return Convert.ToBase64String(randomBytes);
}
}
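The reuse-detection logic is easier to reason about stripped to its invariants. A standalone sketch modeling the token family as plain data, with hashing, persistence, and expiry elided:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// token value -> (family, used) -- a stand-in for the refresh token table
var tokens = new Dictionary<string, (string Family, bool Used)>
{
    ["rt-1"] = ("fam-A", false),
};

// Legitimate refresh: rt-1 is marked used, rt-2 is issued in the same family
tokens["rt-1"] = ("fam-A", true);
tokens["rt-2"] = ("fam-A", false);

// An attacker now replays the stolen rt-1. It is already used -- that is the
// reuse signal -- so every token in fam-A is revoked, including the live rt-2.
bool reuseDetected = tokens["rt-1"].Used;
if (reuseDetected)
{
    foreach (var key in tokens.Where(kv => kv.Value.Family == "fam-A")
                              .Select(kv => kv.Key).ToList())
        tokens.Remove(key);
}

Console.WriteLine(reuseDetected); // True
Console.WriteLine(tokens.Count);  // 0
```

Revoking the whole family is deliberately aggressive: the legitimate user is forced to log in again, which is the correct trade when theft is suspected.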
// Updated token service method with rotation (assumes an IRefreshTokenRepository
// and ILogger are injected alongside the existing dependencies)
public async Task<TokenResult> RefreshTokenAsync(string accessToken, string refreshToken, CancellationToken ct)
{
var storedToken = await _tokenRepository.GetByTokenAsync(refreshToken, ct);
if (storedToken == null)
throw new SecurityException("Invalid refresh token");
// CRITICAL: Detect token reuse (indicates possible theft)
if (storedToken.IsUsed || storedToken.IsRevoked)
{
// Revoke entire family - security breach detected
await _tokenRepository.RevokeTokenFamilyAsync(storedToken.FamilyId, ct);
_logger.LogCritical("Refresh token reuse detected for family {FamilyId}. All tokens revoked.",
storedToken.FamilyId);
throw new SecurityException("Token reuse detected. All sessions terminated.");
}
if (!storedToken.IsValid)
throw new SecurityException("Refresh token expired");
// Rotate the token
var newRefreshToken = storedToken.Rotate();
await _tokenRepository.UpdateAsync(storedToken, ct);
await _tokenRepository.AddAsync(newRefreshToken, ct);
// Get user and generate new access token
var user = await _userRepository.GetByIdAsync(storedToken.UserId, ct);
if (user == null || !user.IsActive)
throw new SecurityException("User not found or inactive");
var accessTokenResult = GenerateAccessToken(user, _tenantContext.TenantCode!);
return new TokenResult(
accessTokenResult.Token,
newRefreshToken.Token,
accessTokenResult.ExpiresAt);
}
20.7 Week 5: Production Hardening (NEW)
This week was added based on security research findings. It addresses critical production-readiness gaps.
Day 1-2: Global Exception Handling & Logging
Claude Command:
/dev-team implement global exception handler with Serilog
Implementation:
// src/PosPlatform.Api/Middleware/GlobalExceptionMiddleware.cs
using System.Security;
using FluentValidation;
using Microsoft.AspNetCore.Mvc;
using PosPlatform.Core.Exceptions; // NotFoundException (assumed custom exception)
namespace PosPlatform.Api.Middleware;
public class GlobalExceptionMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<GlobalExceptionMiddleware> _logger;
public GlobalExceptionMiddleware(RequestDelegate next, ILogger<GlobalExceptionMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(HttpContext context)
{
try
{
await _next(context);
}
catch (ValidationException ex)
{
_logger.LogWarning(ex, "Validation error");
await WriteErrorResponse(context, 400, "Validation Error", ex.Message);
}
catch (NotFoundException ex)
{
_logger.LogInformation(ex, "Resource not found");
await WriteErrorResponse(context, 404, "Not Found", ex.Message);
}
catch (UnauthorizedAccessException ex)
{
_logger.LogWarning(ex, "Unauthorized access");
await WriteErrorResponse(context, 403, "Forbidden", ex.Message);
}
catch (SecurityException ex)
{
_logger.LogError(ex, "Security exception");
await WriteErrorResponse(context, 401, "Security Error", ex.Message);
}
catch (Exception ex)
{
_logger.LogError(ex, "Unhandled exception");
var correlationId = context.Items["CorrelationId"]?.ToString() ?? "unknown";
await WriteErrorResponse(context, 500, "Internal Server Error",
$"An unexpected error occurred. Reference: {correlationId}");
}
}
private static async Task WriteErrorResponse(HttpContext context, int statusCode, string title, string detail)
{
context.Response.StatusCode = statusCode;
context.Response.ContentType = "application/problem+json";
var problem = new ProblemDetails
{
Status = statusCode,
Title = title,
Detail = detail,
Instance = context.Request.Path
};
await context.Response.WriteAsJsonAsync(problem);
}
}
// src/PosPlatform.Api/Middleware/CorrelationIdMiddleware.cs
namespace PosPlatform.Api.Middleware;
public class CorrelationIdMiddleware
{
private readonly RequestDelegate _next;
private readonly ILogger<CorrelationIdMiddleware> _logger;
public CorrelationIdMiddleware(RequestDelegate next, ILogger<CorrelationIdMiddleware> logger)
{
_next = next;
_logger = logger;
}
public async Task InvokeAsync(HttpContext context)
{
var correlationId = context.Request.Headers["X-Correlation-ID"].FirstOrDefault()
?? Guid.NewGuid().ToString();
context.Items["CorrelationId"] = correlationId;
context.Response.Headers.Append("X-Correlation-ID", correlationId);
using (_logger.BeginScope(new Dictionary<string, object>
{
["CorrelationId"] = correlationId,
["TenantCode"] = context.Items["TenantCode"]?.ToString() ?? "unknown"
}))
{
await _next(context);
}
}
}
Day 3: Rate Limiting Per Tenant
Claude Command:
/dev-team implement per-tenant rate limiting
Implementation:
// Program.cs - Rate limiting configuration
builder.Services.AddRateLimiter(options =>
{
// Per-tenant rate limit
options.AddPolicy("per-tenant", context =>
{
var tenantCode = context.Request.Headers["X-Tenant-Code"].ToString();
if (string.IsNullOrEmpty(tenantCode))
tenantCode = "anonymous";
return RateLimitPartition.GetFixedWindowLimiter(tenantCode, _ =>
new FixedWindowRateLimiterOptions
{
PermitLimit = 1000,
Window = TimeSpan.FromMinutes(1),
QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
QueueLimit = 10
});
});
// Stricter limit for auth endpoints
options.AddPolicy("auth-limit", context =>
{
var ipAddress = context.Connection.RemoteIpAddress?.ToString() ?? "unknown";
return RateLimitPartition.GetFixedWindowLimiter(ipAddress, _ =>
new FixedWindowRateLimiterOptions
{
PermitLimit = 20,
Window = TimeSpan.FromMinutes(1),
QueueLimit = 0 // No queuing for auth
});
});
options.OnRejected = async (context, _) =>
{
context.HttpContext.Response.StatusCode = StatusCodes.Status429TooManyRequests;
await context.HttpContext.Response.WriteAsJsonAsync(new
{
error = "Too many requests. Please slow down.",
retryAfter = 60
});
};
});
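Defining a policy has no effect until it is attached to endpoints; the limiter is opt-in. A sketch of wiring the two policies up, assuming the names from the configuration above:

```csharp
// Program.cs -- blanket per-tenant limit on all controller endpoints:
app.MapControllers().RequireRateLimiting("per-tenant");

// Stricter IP-based limit on the auth controller via attribute:
// using Microsoft.AspNetCore.RateLimiting;
//
// [ApiController]
// [Route("api/auth")]
// [EnableRateLimiting("auth-limit")]
// public class AuthController : ControllerBase { ... }
```

Remember that app.UseRateLimiter() must run before the endpoints execute, as shown in the middleware-order listing later in this chapter.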
Day 4: Health Checks
Claude Command:
/dev-team implement health check endpoints
Implementation:
// Program.cs - Health checks configuration
// AddNpgSql requires the AspNetCore.HealthChecks.NpgSql package;
// UIResponseWriter comes from AspNetCore.HealthChecks.UI.Client.
builder.Services.AddHealthChecks()
.AddNpgSql(
builder.Configuration.GetConnectionString("DefaultConnection")!,
name: "database",
tags: new[] { "ready" })
.AddCheck<TenantProvisioningHealthCheck>("tenant_provisioning", tags: new[] { "ready" })
.AddCheck("memory", () =>
{
var allocated = GC.GetTotalMemory(false);
var maxMemory = 500 * 1024 * 1024; // 500MB threshold
return allocated < maxMemory
? HealthCheckResult.Healthy($"Memory: {allocated / 1024 / 1024}MB")
: HealthCheckResult.Degraded($"High memory: {allocated / 1024 / 1024}MB");
}, tags: new[] { "live" });
// Endpoints
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
Predicate = check => check.Tags.Contains("live")
});
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
Predicate = check => check.Tags.Contains("ready"),
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});
Day 5: Input Validation & CORS
Claude Command:
/dev-team implement FluentValidation and CORS configuration
Implementation:
// Program.cs - FluentValidation (FluentValidation.AspNetCore package) and CORS
builder.Services.AddValidatorsFromAssemblyContaining<CreateTenantRequestValidator>();
builder.Services.AddFluentValidationAutoValidation();
builder.Services.AddCors(options =>
{
options.AddPolicy("PosPolicy", policy =>
{
var allowedOrigins = builder.Configuration
.GetSection("Cors:AllowedOrigins")
.Get<string[]>() ?? Array.Empty<string>();
policy.WithOrigins(allowedOrigins)
.AllowAnyMethod()
.AllowAnyHeader()
.AllowCredentials()
.SetPreflightMaxAge(TimeSpan.FromMinutes(10));
});
});
// Middleware order
app.UseMiddleware<CorrelationIdMiddleware>();
app.UseMiddleware<GlobalExceptionMiddleware>();
app.UseCors("PosPolicy");
app.UseRateLimiter();
app.UseTenantResolution();
app.UseAuthentication();
app.UseAuthorization();
// src/PosPlatform.Api/Validators/CreateTenantRequestValidator.cs
using FluentValidation;
namespace PosPlatform.Api.Validators;
public class CreateTenantRequestValidator : AbstractValidator<CreateTenantRequest>
{
public CreateTenantRequestValidator()
{
RuleFor(x => x.Code)
.NotEmpty()
.Length(1, 10)
.Matches("^[A-Z0-9]+$")
.WithMessage("Code must be 1-10 uppercase alphanumeric characters");
RuleFor(x => x.Name)
.NotEmpty()
.MaximumLength(100);
RuleFor(x => x.Domain)
.MaximumLength(255)
.Matches(@"^[a-z0-9.-]+$")
.When(x => !string.IsNullOrEmpty(x.Domain));
}
}
20.8 Week 1-4 Review Checkpoint
Claude Command:
/architect-review multi-tenant isolation, authentication, and security implementation
Checklist:
- Tenant CRUD API functional
- Schema provisioning creates tables correctly
- Tenant middleware resolves from header/subdomain
- JWT authentication working
- 6-digit PIN login functional with rate limiting
- PIN lockout after 5 failed attempts
- Refresh token rotation implemented
- Integration tests pass
20.9 Week 5 Review Checkpoint
Checklist:
- Global exception handler returns consistent error format
- Correlation IDs in all log entries
- Per-tenant rate limiting active
- Auth endpoint rate limiting (20/min)
- Health check endpoints responding
- FluentValidation on all DTOs
- CORS properly configured
20.10 Implementation Roadmap Overview
The complete POS platform implementation spans 4 phases over 18 weeks:
Phase 1: Foundation (5 weeks) - THIS CHAPTER
Multi-tenant infrastructure, Auth, Catalog
Phase 2: Core Operations (4 weeks) - Chapter 21
Inventory, Sales, Payments, Cash APIs
Phase 3: Supporting Systems (3 weeks) - Chapter 22
RFID/Raptag, Reports, Integrations
Phase 4: Production Ready (6 weeks) - Chapter 23
POS Client, Monitoring, Security, Go-live
─────────────────────────────────────────────────────────
Total: 18 weeks
Key Insight: POS Client as Standalone Phase
“The Web Portal is the main portal for platform operations. The POS Client is the main portal for client operations.”
Both deserve equal architectural attention. Phase 4 dedicates 6 full weeks to the revenue-generating POS terminal application.
20.11 Next Steps
Proceed to Chapter 21: Phase 2 - Core Implementation for:
- Inventory domain with stock tracking
- Sales domain with event sourcing
- Payment processing
- Cash drawer operations
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 20 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 21: Phase 2 - Core Implementation
21.1 Overview
Phase 2 builds the core transactional capabilities: inventory management, sales processing with event sourcing, payment handling, and cash drawer operations. This four-week phase (Weeks 5-8) delivers the heart of the POS system.
21.2 Week 5-6: Inventory Domain
Day 1-2: Inventory Item Entity
Objective: Create inventory entity with multi-location stock tracking.
Claude Command:
/dev-team create inventory item entity with location quantities
Implementation:
// src/PosPlatform.Core/Entities/Inventory/InventoryItem.cs
namespace PosPlatform.Core.Entities.Inventory;
public class InventoryItem
{
public Guid Id { get; private set; }
public Guid ProductId { get; private set; }
public Guid? VariantId { get; private set; }
public string Sku { get; private set; } = string.Empty;
public Guid LocationId { get; private set; }
// Stock levels
public int QuantityOnHand { get; private set; }
public int QuantityReserved { get; private set; }
public int QuantityAvailable => QuantityOnHand - QuantityReserved;
// Thresholds
public int ReorderPoint { get; private set; }
public int ReorderQuantity { get; private set; }
public int MaxQuantity { get; private set; }
// Tracking
public DateTime LastCountedAt { get; private set; }
public DateTime LastReceivedAt { get; private set; }
public DateTime LastSoldAt { get; private set; }
public DateTime UpdatedAt { get; private set; }
private readonly List<StockMovement> _movements = new();
public IReadOnlyList<StockMovement> Movements => _movements.AsReadOnly();
private InventoryItem() { }
public static InventoryItem Create(
Guid productId,
Guid locationId,
string sku,
Guid? variantId = null)
{
return new InventoryItem
{
Id = Guid.NewGuid(),
ProductId = productId,
VariantId = variantId,
Sku = sku,
LocationId = locationId,
QuantityOnHand = 0,
QuantityReserved = 0,
ReorderPoint = 10,
ReorderQuantity = 50,
MaxQuantity = 200,
LastCountedAt = DateTime.MinValue,
LastReceivedAt = DateTime.MinValue,
LastSoldAt = DateTime.MinValue,
UpdatedAt = DateTime.UtcNow
};
}
public void ReceiveStock(int quantity, string reference, Guid userId)
{
if (quantity <= 0)
throw new ArgumentException("Quantity must be positive", nameof(quantity));
var movement = StockMovement.Create(
Id, MovementType.Receipt, quantity, QuantityOnHand, reference, userId);
QuantityOnHand += quantity;
LastReceivedAt = DateTime.UtcNow;
UpdatedAt = DateTime.UtcNow;
_movements.Add(movement);
}
public void SellStock(int quantity, string saleReference, Guid userId)
{
if (quantity <= 0)
throw new ArgumentException("Quantity must be positive", nameof(quantity));
if (quantity > QuantityAvailable)
throw new InvalidOperationException($"Insufficient stock. Available: {QuantityAvailable}");
var movement = StockMovement.Create(
Id, MovementType.Sale, -quantity, QuantityOnHand, saleReference, userId);
QuantityOnHand -= quantity;
QuantityReserved = Math.Max(0, QuantityReserved - quantity);
LastSoldAt = DateTime.UtcNow;
UpdatedAt = DateTime.UtcNow;
_movements.Add(movement);
}
public void ReserveStock(int quantity)
{
if (quantity > QuantityAvailable)
throw new InvalidOperationException($"Cannot reserve {quantity}. Available: {QuantityAvailable}");
QuantityReserved += quantity;
UpdatedAt = DateTime.UtcNow;
}
public void ReleaseReservation(int quantity)
{
QuantityReserved = Math.Max(0, QuantityReserved - quantity);
UpdatedAt = DateTime.UtcNow;
}
public void Adjust(int newQuantity, string reason, Guid userId)
{
var difference = newQuantity - QuantityOnHand;
var movement = StockMovement.Create(
Id, MovementType.Adjustment, difference, QuantityOnHand, reason, userId);
QuantityOnHand = newQuantity;
UpdatedAt = DateTime.UtcNow;
_movements.Add(movement);
}
public void RecordCount(int countedQuantity, Guid userId)
{
if (countedQuantity != QuantityOnHand)
{
var variance = countedQuantity - QuantityOnHand;
var movement = StockMovement.Create(
Id, MovementType.Count, variance, QuantityOnHand,
$"Physical count variance: {variance}", userId);
QuantityOnHand = countedQuantity;
_movements.Add(movement);
}
LastCountedAt = DateTime.UtcNow;
UpdatedAt = DateTime.UtcNow;
}
public bool NeedsReorder => QuantityOnHand <= ReorderPoint;
public bool IsOverstocked => QuantityOnHand > MaxQuantity;
}
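The reserve/sell bookkeeping is the part of this entity most worth double-checking. A trimmed standalone sketch of the quantity arithmetic, mirroring ReserveStock and SellStock with the audit trail elided:

```csharp
using System;

int onHand = 100, reserved = 0;
int Available() => onHand - reserved;   // QuantityAvailable

// ReserveStock(30): hold units for an order without removing them
reserved += 30;
Console.WriteLine(Available()); // 70

// SellStock(30): on-hand drops AND the matching reservation is released,
// so availability falls only by the units that actually left the shelf
int qty = 30;
onHand -= qty;
reserved = Math.Max(0, reserved - qty);

Console.WriteLine(onHand);      // 70
Console.WriteLine(Available()); // 70
```

If SellStock forgot to release the reservation, available stock would be undercounted by every fulfilled order, which is why the entity clamps the release with Math.Max.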
// src/PosPlatform.Core/Entities/Inventory/StockMovement.cs
namespace PosPlatform.Core.Entities.Inventory;
public class StockMovement
{
public Guid Id { get; private set; }
public Guid InventoryItemId { get; private set; }
public MovementType Type { get; private set; }
public int Quantity { get; private set; }
public int QuantityBefore { get; private set; }
public int QuantityAfter { get; private set; }
public string Reference { get; private set; } = string.Empty;
public Guid UserId { get; private set; }
public DateTime CreatedAt { get; private set; }
private StockMovement() { }
public static StockMovement Create(
Guid inventoryItemId,
MovementType type,
int quantity,
int quantityBefore,
string reference,
Guid userId)
{
return new StockMovement
{
Id = Guid.NewGuid(),
InventoryItemId = inventoryItemId,
Type = type,
Quantity = quantity,
QuantityBefore = quantityBefore,
QuantityAfter = quantityBefore + quantity,
Reference = reference,
UserId = userId,
CreatedAt = DateTime.UtcNow
};
}
}
public enum MovementType
{
Receipt, // Stock received from vendor
Sale, // Stock sold to customer
Return, // Customer return
Adjustment, // Manual adjustment
Transfer, // Inter-store transfer
Count, // Physical count variance
Damage, // Damaged/written off
Reserved // Reserved for order
}
Day 3-4: Stock Movement Event Sourcing
Objective: Implement event-sourced stock movements for complete audit trail.
Claude Command:
/dev-team implement stock movement event sourcing
Implementation:
// src/PosPlatform.Core/Events/Inventory/InventoryEvents.cs
namespace PosPlatform.Core.Events.Inventory;
public abstract record InventoryEvent(
Guid InventoryItemId,
Guid UserId,
DateTime OccurredAt);
public record StockReceivedEvent(
Guid InventoryItemId,
int Quantity,
string ReceiptReference,
decimal UnitCost,
Guid UserId,
DateTime OccurredAt) : InventoryEvent(InventoryItemId, UserId, OccurredAt);
public record StockSoldEvent(
Guid InventoryItemId,
int Quantity,
Guid SaleId,
decimal UnitPrice,
Guid UserId,
DateTime OccurredAt) : InventoryEvent(InventoryItemId, UserId, OccurredAt);
public record StockAdjustedEvent(
Guid InventoryItemId,
int QuantityChange,
int NewQuantity,
string Reason,
Guid UserId,
DateTime OccurredAt) : InventoryEvent(InventoryItemId, UserId, OccurredAt);
public record StockTransferredEvent(
Guid SourceInventoryItemId,
Guid DestinationInventoryItemId,
int Quantity,
string TransferReference,
Guid UserId,
DateTime OccurredAt) : InventoryEvent(SourceInventoryItemId, UserId, OccurredAt);
public record StockCountedEvent(
Guid InventoryItemId,
int CountedQuantity,
int SystemQuantity,
int Variance,
Guid UserId,
DateTime OccurredAt) : InventoryEvent(InventoryItemId, UserId, OccurredAt);
// src/PosPlatform.Infrastructure/Data/InventoryEventStore.cs
using Microsoft.EntityFrameworkCore;
using PosPlatform.Core.Events.Inventory;
using System.Text.Json;
namespace PosPlatform.Infrastructure.Data;
public interface IInventoryEventStore
{
Task AppendAsync(InventoryEvent @event, CancellationToken ct = default);
Task<IReadOnlyList<InventoryEvent>> GetEventsAsync(
Guid inventoryItemId,
DateTime? fromDate = null,
CancellationToken ct = default);
}
public class InventoryEventStore : IInventoryEventStore
{
private readonly TenantDbContext _context;
public InventoryEventStore(TenantDbContext context)
{
_context = context;
}
public async Task AppendAsync(InventoryEvent @event, CancellationToken ct = default)
{
var storedEvent = new StoredInventoryEvent
{
Id = Guid.NewGuid(),
InventoryItemId = @event.InventoryItemId,
EventType = @event.GetType().Name,
EventData = JsonSerializer.Serialize(@event, @event.GetType()),
UserId = @event.UserId,
CreatedAt = @event.OccurredAt
};
_context.Set<StoredInventoryEvent>().Add(storedEvent);
await _context.SaveChangesAsync(ct);
}
public async Task<IReadOnlyList<InventoryEvent>> GetEventsAsync(
Guid inventoryItemId,
DateTime? fromDate = null,
CancellationToken ct = default)
{
var query = _context.Set<StoredInventoryEvent>()
.Where(e => e.InventoryItemId == inventoryItemId);
if (fromDate.HasValue)
query = query.Where(e => e.CreatedAt >= fromDate.Value);
var stored = await query
.OrderBy(e => e.CreatedAt)
.ToListAsync(ct);
return stored
.Select(DeserializeEvent)
.Where(e => e != null)
.Cast<InventoryEvent>()
.ToList();
}
private static InventoryEvent? DeserializeEvent(StoredInventoryEvent stored)
{
var type = stored.EventType switch
{
nameof(StockReceivedEvent) => typeof(StockReceivedEvent),
nameof(StockSoldEvent) => typeof(StockSoldEvent),
nameof(StockAdjustedEvent) => typeof(StockAdjustedEvent),
nameof(StockTransferredEvent) => typeof(StockTransferredEvent),
nameof(StockCountedEvent) => typeof(StockCountedEvent),
_ => null
};
if (type == null) return null;
return JsonSerializer.Deserialize(stored.EventData, type) as InventoryEvent;
}
}
public class StoredInventoryEvent
{
public Guid Id { get; set; }
public Guid InventoryItemId { get; set; }
public string EventType { get; set; } = string.Empty;
public string EventData { get; set; } = string.Empty;
public Guid UserId { get; set; }
public DateTime CreatedAt { get; set; }
}
Day 5-6: Inventory Adjustment Service
Claude Command:
/dev-team create inventory adjustment service with reasons
Implementation:
// src/PosPlatform.Core/Services/InventoryAdjustmentService.cs
using PosPlatform.Core.Entities.Inventory;
using PosPlatform.Core.Events.Inventory;
using PosPlatform.Core.Interfaces;
using PosPlatform.Infrastructure.Data;
namespace PosPlatform.Core.Services;
public interface IInventoryAdjustmentService
{
Task<InventoryItem> AdjustQuantityAsync(
Guid inventoryItemId,
int newQuantity,
AdjustmentReason reason,
string? notes,
Guid userId,
CancellationToken ct = default);
Task<InventoryItem> RecordCountAsync(
Guid inventoryItemId,
int countedQuantity,
Guid userId,
CancellationToken ct = default);
}
public class InventoryAdjustmentService : IInventoryAdjustmentService
{
private readonly IInventoryRepository _repository;
private readonly IInventoryEventStore _eventStore;
public InventoryAdjustmentService(
IInventoryRepository repository,
IInventoryEventStore eventStore)
{
_repository = repository;
_eventStore = eventStore;
}
public async Task<InventoryItem> AdjustQuantityAsync(
Guid inventoryItemId,
int newQuantity,
AdjustmentReason reason,
string? notes,
Guid userId,
CancellationToken ct = default)
{
var item = await _repository.GetByIdAsync(inventoryItemId, ct)
?? throw new InvalidOperationException($"Inventory item {inventoryItemId} not found");
var oldQuantity = item.QuantityOnHand;
var reasonText = FormatReason(reason, notes);
item.Adjust(newQuantity, reasonText, userId);
await _repository.UpdateAsync(item, ct);
var @event = new StockAdjustedEvent(
inventoryItemId,
newQuantity - oldQuantity,
newQuantity,
reasonText,
userId,
DateTime.UtcNow);
await _eventStore.AppendAsync(@event, ct);
return item;
}
public async Task<InventoryItem> RecordCountAsync(
Guid inventoryItemId,
int countedQuantity,
Guid userId,
CancellationToken ct = default)
{
var item = await _repository.GetByIdAsync(inventoryItemId, ct)
?? throw new InvalidOperationException("Inventory item not found");
var systemQuantity = item.QuantityOnHand;
var variance = countedQuantity - systemQuantity;
item.RecordCount(countedQuantity, userId);
await _repository.UpdateAsync(item, ct);
var @event = new StockCountedEvent(
inventoryItemId,
countedQuantity,
systemQuantity,
variance,
userId,
DateTime.UtcNow);
await _eventStore.AppendAsync(@event, ct);
return item;
}
private static string FormatReason(AdjustmentReason reason, string? notes)
{
var reasonText = reason switch
{
AdjustmentReason.Damaged => "Damaged merchandise",
AdjustmentReason.Theft => "Theft/shrinkage",
AdjustmentReason.Expired => "Expired product",
AdjustmentReason.DataCorrection => "Data entry correction",
AdjustmentReason.VendorReturn => "Returned to vendor",
AdjustmentReason.Found => "Found stock",
AdjustmentReason.Other => "Other adjustment",
_ => "Unknown reason"
};
return string.IsNullOrWhiteSpace(notes)
? reasonText
: $"{reasonText}: {notes}";
}
}
public enum AdjustmentReason
{
Damaged,
Theft,
Expired,
DataCorrection,
VendorReturn,
Found,
Other
}
Day 7-8: Inter-Store Transfers
Claude Command:
/dev-team implement inter-store transfer workflow
Implementation:
// src/PosPlatform.Core/Entities/Inventory/TransferRequest.cs
namespace PosPlatform.Core.Entities.Inventory;
public class TransferRequest
{
public Guid Id { get; private set; }
public string TransferNumber { get; private set; } = string.Empty;
public Guid SourceLocationId { get; private set; }
public Guid DestinationLocationId { get; private set; }
public TransferStatus Status { get; private set; }
public Guid RequestedByUserId { get; private set; }
public DateTime RequestedAt { get; private set; }
public Guid? ApprovedByUserId { get; private set; }
public DateTime? ApprovedAt { get; private set; }
public Guid? ShippedByUserId { get; private set; }
public DateTime? ShippedAt { get; private set; }
public Guid? ReceivedByUserId { get; private set; }
public DateTime? ReceivedAt { get; private set; }
public string? Notes { get; private set; }
private readonly List<TransferItem> _items = new();
public IReadOnlyList<TransferItem> Items => _items.AsReadOnly();
private TransferRequest() { }
public static TransferRequest Create(
Guid sourceLocationId,
Guid destinationLocationId,
Guid requestedByUserId,
string? notes = null)
{
return new TransferRequest
{
Id = Guid.NewGuid(),
TransferNumber = GenerateTransferNumber(),
SourceLocationId = sourceLocationId,
DestinationLocationId = destinationLocationId,
Status = TransferStatus.Pending,
RequestedByUserId = requestedByUserId,
RequestedAt = DateTime.UtcNow,
Notes = notes
};
}
public void AddItem(Guid productId, Guid? variantId, string sku, int quantity)
{
if (Status != TransferStatus.Pending)
throw new InvalidOperationException("Cannot modify non-pending transfer");
var existing = _items.FirstOrDefault(i => i.Sku == sku);
if (existing != null)
{
existing.UpdateQuantity(existing.Quantity + quantity);
}
else
{
_items.Add(new TransferItem(Id, productId, variantId, sku, quantity));
}
}
public void Approve(Guid userId)
{
if (Status != TransferStatus.Pending)
throw new InvalidOperationException("Can only approve pending transfers");
Status = TransferStatus.Approved;
ApprovedByUserId = userId;
ApprovedAt = DateTime.UtcNow;
}
public void Ship(Guid userId)
{
if (Status != TransferStatus.Approved)
throw new InvalidOperationException("Can only ship approved transfers");
Status = TransferStatus.InTransit;
ShippedByUserId = userId;
ShippedAt = DateTime.UtcNow;
}
public void Receive(Guid userId, IEnumerable<ReceivedQuantity> receivedQuantities)
{
if (Status != TransferStatus.InTransit)
throw new InvalidOperationException("Can only receive in-transit transfers");
foreach (var received in receivedQuantities)
{
var item = _items.FirstOrDefault(i => i.Sku == received.Sku);
item?.RecordReceived(received.Quantity);
}
Status = TransferStatus.Completed;
ReceivedByUserId = userId;
ReceivedAt = DateTime.UtcNow;
}
public void Cancel(Guid userId, string reason)
{
if (Status == TransferStatus.Completed)
throw new InvalidOperationException("Cannot cancel completed transfer");
Status = TransferStatus.Cancelled;
Notes = string.IsNullOrWhiteSpace(Notes)
? $"Cancelled: {reason}"
: $"{Notes}\nCancelled: {reason}";
}
private static string GenerateTransferNumber()
=> $"TRF-{DateTime.UtcNow:yyyyMMdd}-{Guid.NewGuid().ToString()[..8].ToUpper()}";
}
public class TransferItem
{
public Guid Id { get; private set; }
public Guid TransferRequestId { get; private set; }
public Guid ProductId { get; private set; }
public Guid? VariantId { get; private set; }
public string Sku { get; private set; } = string.Empty;
public int Quantity { get; private set; }
public int QuantityReceived { get; private set; }
public int Variance => QuantityReceived - Quantity;
internal TransferItem(Guid transferId, Guid productId, Guid? variantId, string sku, int quantity)
{
Id = Guid.NewGuid();
TransferRequestId = transferId;
ProductId = productId;
VariantId = variantId;
Sku = sku;
Quantity = quantity;
}
internal void UpdateQuantity(int quantity) => Quantity = quantity;
internal void RecordReceived(int quantity) => QuantityReceived = quantity;
}
public record ReceivedQuantity(string Sku, int Quantity);
public enum TransferStatus
{
Pending,
Approved,
InTransit,
Completed,
Cancelled
}
Day 9-10: Low Stock Alerts
Claude Command:
/dev-team create low stock alert notification system
Implementation:
// src/PosPlatform.Core/Services/LowStockAlertService.cs
using PosPlatform.Core.Entities.Inventory;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Services;
public interface ILowStockAlertService
{
Task<IReadOnlyList<LowStockAlert>> GetAlertsAsync(
Guid? locationId = null,
CancellationToken ct = default);
Task ProcessAlertsAsync(CancellationToken ct = default);
}
public class LowStockAlertService : ILowStockAlertService
{
private readonly IInventoryRepository _repository;
private readonly INotificationService _notificationService;
public LowStockAlertService(
IInventoryRepository repository,
INotificationService notificationService)
{
_repository = repository;
_notificationService = notificationService;
}
public async Task<IReadOnlyList<LowStockAlert>> GetAlertsAsync(
Guid? locationId = null,
CancellationToken ct = default)
{
var lowStockItems = await _repository.GetBelowReorderPointAsync(locationId, ct);
return lowStockItems.Select(item => new LowStockAlert(
item.Id,
item.Sku,
item.LocationId,
item.QuantityOnHand,
item.ReorderPoint,
item.ReorderQuantity,
item.QuantityOnHand == 0 ? AlertSeverity.Critical : AlertSeverity.Warning
)).ToList();
}
public async Task ProcessAlertsAsync(CancellationToken ct = default)
{
var alerts = await GetAlertsAsync(ct: ct);
var criticalAlerts = alerts.Where(a => a.Severity == AlertSeverity.Critical);
foreach (var alert in criticalAlerts)
{
await _notificationService.SendAsync(new Notification
{
Type = NotificationType.LowStock,
Priority = NotificationPriority.High,
Title = $"Out of Stock: {alert.Sku}",
Message = $"SKU {alert.Sku} is out of stock at location {alert.LocationId}. Reorder quantity: {alert.ReorderQuantity}",
Data = new { alert.InventoryItemId, alert.LocationId }
}, ct);
}
}
}
public record LowStockAlert(
Guid InventoryItemId,
string Sku,
Guid LocationId,
int CurrentQuantity,
int ReorderPoint,
int SuggestedOrderQuantity,
AlertSeverity Severity);
public enum AlertSeverity
{
Info,
Warning,
Critical
}
21.3 Week 6-7: Sales Domain (Event Sourcing)
Day 1-2: Sale Aggregate Root
Objective: Create event-sourced sale entity.
Claude Command:
/dev-team create sale aggregate with event sourcing
Implementation:
// src/PosPlatform.Core/Entities/Sales/Sale.cs
using PosPlatform.Core.Events.Sales;
namespace PosPlatform.Core.Entities.Sales;
public class Sale
{
public Guid Id { get; private set; }
public string SaleNumber { get; private set; } = string.Empty;
public Guid LocationId { get; private set; }
public Guid CashierId { get; private set; }
public Guid? CustomerId { get; private set; }
public SaleStatus Status { get; private set; }
// Calculated totals
public decimal Subtotal { get; private set; }
public decimal TotalDiscount { get; private set; }
public decimal TaxAmount { get; private set; }
public decimal Total { get; private set; }
// Metadata
public DateTime StartedAt { get; private set; }
public DateTime? CompletedAt { get; private set; }
public DateTime? VoidedAt { get; private set; }
public Guid? VoidedByUserId { get; private set; }
public string? VoidReason { get; private set; }
private readonly List<SaleLineItem> _items = new();
public IReadOnlyList<SaleLineItem> Items => _items.AsReadOnly();
private readonly List<SalePayment> _payments = new();
public IReadOnlyList<SalePayment> Payments => _payments.AsReadOnly();
private readonly List<SaleEvent> _events = new();
public IReadOnlyList<SaleEvent> Events => _events.AsReadOnly();
private Sale() { }
public static Sale Start(Guid locationId, Guid cashierId, Guid? customerId = null)
{
var sale = new Sale
{
Id = Guid.NewGuid(),
SaleNumber = GenerateSaleNumber(),
LocationId = locationId,
CashierId = cashierId,
CustomerId = customerId,
Status = SaleStatus.InProgress,
StartedAt = DateTime.UtcNow
};
sale.Apply(new SaleStartedEvent(sale.Id, locationId, cashierId, DateTime.UtcNow));
return sale;
}
public void AddItem(
Guid productId,
Guid? variantId,
string sku,
string name,
int quantity,
decimal unitPrice,
decimal taxRate)
{
EnsureInProgress();
var existing = _items.FirstOrDefault(i => i.Sku == sku);
if (existing != null)
{
existing.UpdateQuantity(existing.Quantity + quantity);
}
else
{
var item = new SaleLineItem(Id, productId, variantId, sku, name, quantity, unitPrice, taxRate);
_items.Add(item);
}
RecalculateTotals();
Apply(new ItemAddedEvent(Id, sku, name, quantity, unitPrice, DateTime.UtcNow));
}
public void RemoveItem(string sku)
{
EnsureInProgress();
var item = _items.FirstOrDefault(i => i.Sku == sku);
if (item != null)
{
_items.Remove(item);
RecalculateTotals();
Apply(new ItemRemovedEvent(Id, sku, DateTime.UtcNow));
}
}
public void UpdateItemQuantity(string sku, int newQuantity)
{
EnsureInProgress();
var item = _items.FirstOrDefault(i => i.Sku == sku)
?? throw new InvalidOperationException($"Item {sku} not found");
if (newQuantity <= 0)
{
RemoveItem(sku);
return;
}
item.UpdateQuantity(newQuantity);
RecalculateTotals();
Apply(new ItemQuantityChangedEvent(Id, sku, newQuantity, DateTime.UtcNow));
}
public void ApplyDiscount(decimal discountAmount, string discountCode)
{
EnsureInProgress();
TotalDiscount = discountAmount;
RecalculateTotals();
Apply(new DiscountAppliedEvent(Id, discountAmount, discountCode, DateTime.UtcNow));
}
public void AddPayment(PaymentMethod method, decimal amount, string? reference = null)
{
EnsureInProgress();
var payment = new SalePayment(Id, method, amount, reference);
_payments.Add(payment);
Apply(new PaymentReceivedEvent(Id, method, amount, DateTime.UtcNow));
if (TotalPaid >= Total)
{
Complete();
}
}
public void Complete()
{
if (Status != SaleStatus.InProgress)
throw new InvalidOperationException("Sale is not in progress");
if (TotalPaid < Total)
throw new InvalidOperationException($"Payment incomplete. Due: {Total - TotalPaid:C}");
Status = SaleStatus.Completed;
CompletedAt = DateTime.UtcNow;
Apply(new SaleCompletedEvent(Id, Total, DateTime.UtcNow));
}
public void Void(Guid userId, string reason)
{
if (Status == SaleStatus.Voided)
throw new InvalidOperationException("Sale already voided");
Status = SaleStatus.Voided;
VoidedAt = DateTime.UtcNow;
VoidedByUserId = userId;
VoidReason = reason;
Apply(new SaleVoidedEvent(Id, userId, reason, DateTime.UtcNow));
}
public decimal TotalPaid => _payments.Sum(p => p.Amount);
public decimal BalanceDue => Total - TotalPaid;
public decimal ChangeDue => TotalPaid > Total ? TotalPaid - Total : 0;
private void RecalculateTotals()
{
Subtotal = _items.Sum(i => i.ExtendedPrice);
TaxAmount = _items.Sum(i => i.TaxAmount);
Total = Subtotal - TotalDiscount + TaxAmount;
}
private void EnsureInProgress()
{
if (Status != SaleStatus.InProgress)
throw new InvalidOperationException("Sale is not in progress");
}
private void Apply(SaleEvent @event)
{
_events.Add(@event);
}
private static string GenerateSaleNumber()
=> $"S-{DateTime.UtcNow:yyyyMMddHHmmss}-{Guid.NewGuid().ToString()[..4].ToUpper()}";
}
public enum SaleStatus
{
InProgress,
Completed,
Voided,
Suspended
}
Day 3-4: Sale Events
Claude Command:
/dev-team implement sale events (add, remove, discount, payment)
Implementation:
// src/PosPlatform.Core/Events/Sales/SaleEvents.cs
namespace PosPlatform.Core.Events.Sales;
public abstract record SaleEvent(Guid SaleId, DateTime OccurredAt);
public record SaleStartedEvent(
Guid SaleId,
Guid LocationId,
Guid CashierId,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record ItemAddedEvent(
Guid SaleId,
string Sku,
string Name,
int Quantity,
decimal UnitPrice,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record ItemRemovedEvent(
Guid SaleId,
string Sku,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record ItemQuantityChangedEvent(
Guid SaleId,
string Sku,
int NewQuantity,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record DiscountAppliedEvent(
Guid SaleId,
decimal DiscountAmount,
string DiscountCode,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record PaymentReceivedEvent(
Guid SaleId,
PaymentMethod Method,
decimal Amount,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record SaleCompletedEvent(
Guid SaleId,
decimal TotalAmount,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
public record SaleVoidedEvent(
Guid SaleId,
Guid VoidedByUserId,
string Reason,
DateTime OccurredAt) : SaleEvent(SaleId, OccurredAt);
Day 7-8: Sale Completion Workflow
Claude Command:
/dev-team implement sale completion workflow
Implementation:
// src/PosPlatform.Core/Services/SaleCompletionService.cs
using PosPlatform.Core.Entities.Sales;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Services;
public interface ISaleCompletionService
{
Task<SaleCompletionResult> CompleteSaleAsync(
Guid saleId,
CancellationToken ct = default);
}
public class SaleCompletionService : ISaleCompletionService
{
private readonly ISaleRepository _saleRepository;
private readonly IInventoryService _inventoryService;
private readonly IReceiptService _receiptService;
private readonly IEventPublisher _eventPublisher;
public SaleCompletionService(
ISaleRepository saleRepository,
IInventoryService inventoryService,
IReceiptService receiptService,
IEventPublisher eventPublisher)
{
_saleRepository = saleRepository;
_inventoryService = inventoryService;
_receiptService = receiptService;
_eventPublisher = eventPublisher;
}
public async Task<SaleCompletionResult> CompleteSaleAsync(
Guid saleId,
CancellationToken ct = default)
{
var sale = await _saleRepository.GetByIdAsync(saleId, ct)
?? throw new InvalidOperationException("Sale not found");
// Validate payment
if (sale.BalanceDue > 0)
{
return SaleCompletionResult.Failed($"Balance due: {sale.BalanceDue:C}");
}
// Deduct inventory
foreach (var item in sale.Items)
{
await _inventoryService.DeductStockAsync(
item.Sku,
sale.LocationId,
item.Quantity,
sale.SaleNumber,
sale.CashierId,
ct);
}
// Complete the sale
sale.Complete();
await _saleRepository.UpdateAsync(sale, ct);
// Generate receipt
var receipt = await _receiptService.GenerateAsync(sale, ct);
// Publish events
foreach (var @event in sale.Events)
{
await _eventPublisher.PublishAsync(@event, ct);
}
return SaleCompletionResult.Success(receipt.ReceiptNumber, sale.ChangeDue);
}
}
public record SaleCompletionResult(
bool IsSuccess,
string? ReceiptNumber,
decimal ChangeDue,
string? ErrorMessage)
{
public static SaleCompletionResult Success(string receiptNumber, decimal change)
=> new(true, receiptNumber, change, null);
public static SaleCompletionResult Failed(string error)
=> new(false, null, 0, error);
}
21.4 Week 8-9: Payment Processing
Day 1-2: Multi-Tender Payment Entity
Claude Command:
/dev-team create payment entity with multi-tender support
Implementation:
// src/PosPlatform.Core/Entities/Sales/SalePayment.cs
namespace PosPlatform.Core.Entities.Sales;
public class SalePayment
{
public Guid Id { get; private set; }
public Guid SaleId { get; private set; }
public PaymentMethod Method { get; private set; }
public decimal Amount { get; private set; }
public string? Reference { get; private set; }
public PaymentStatus Status { get; private set; }
public DateTime CreatedAt { get; private set; }
public string? ProcessorResponse { get; private set; }
internal SalePayment(
Guid saleId,
PaymentMethod method,
decimal amount,
string? reference = null)
{
Id = Guid.NewGuid();
SaleId = saleId;
Method = method;
Amount = amount;
Reference = reference;
Status = PaymentStatus.Pending;
CreatedAt = DateTime.UtcNow;
}
public void MarkApproved(string? processorResponse = null)
{
Status = PaymentStatus.Approved;
ProcessorResponse = processorResponse;
}
public void MarkDeclined(string? reason = null)
{
Status = PaymentStatus.Declined;
ProcessorResponse = reason;
}
public void MarkRefunded()
{
Status = PaymentStatus.Refunded;
}
}
public enum PaymentMethod
{
Cash,
CreditCard,
DebitCard,
GiftCard,
StoreCredit,
Check,
Other
}
public enum PaymentStatus
{
Pending,
Approved,
Declined,
Refunded,
Voided
}
Day 3-4: Cash Payment Handler
Claude Command:
/dev-team implement cash payment handler with change calculation
Implementation:
// src/PosPlatform.Core/Services/Payments/CashPaymentHandler.cs
namespace PosPlatform.Core.Services.Payments;
public interface ICashPaymentHandler
{
CashPaymentResult ProcessPayment(decimal amountDue, decimal amountTendered);
IReadOnlyList<CashDenomination> CalculateOptimalChange(decimal changeAmount);
}
public class CashPaymentHandler : ICashPaymentHandler
{
private static readonly decimal[] Denominations =
{
100.00m, 50.00m, 20.00m, 10.00m, 5.00m, 2.00m, 1.00m,
0.25m, 0.10m, 0.05m, 0.01m
};
public CashPaymentResult ProcessPayment(decimal amountDue, decimal amountTendered)
{
if (amountTendered < 0)
throw new ArgumentException("Amount tendered cannot be negative");
if (amountTendered < amountDue)
{
return new CashPaymentResult(
false,
amountTendered,
0,
amountDue - amountTendered,
Array.Empty<CashDenomination>());
}
var change = amountTendered - amountDue;
var changeDenominations = CalculateOptimalChange(change);
return new CashPaymentResult(
true,
amountTendered,
change,
0,
changeDenominations);
}
public IReadOnlyList<CashDenomination> CalculateOptimalChange(decimal changeAmount)
{
var result = new List<CashDenomination>();
// Normalize to two decimal places up front. With 0.01 in the denomination
// table, the greedy loop then always drives the remainder to exactly zero
// (decimal arithmetic is exact, so no floating-point cleanup is needed).
var remaining = Math.Round(changeAmount, 2);
foreach (var denom in Denominations)
{
if (remaining <= 0) break;
var count = (int)(remaining / denom);
if (count > 0)
{
result.Add(new CashDenomination(denom, count));
remaining -= count * denom;
}
}
return result;
}
}
public record CashPaymentResult(
bool IsFullPayment,
decimal AmountTendered,
decimal ChangeAmount,
decimal RemainingBalance,
IReadOnlyList<CashDenomination> ChangeDenominations);
public record CashDenomination(decimal Value, int Count)
{
public decimal Total => Value * Count;
}
21.5 Week 9-10: Cash Drawer Operations
Day 1-2: Drawer Session Entity
Claude Command:
/dev-team create drawer session entity with state machine
Implementation:
// src/PosPlatform.Core/Entities/CashDrawer/DrawerSession.cs
namespace PosPlatform.Core.Entities.CashDrawer;
public class DrawerSession
{
public Guid Id { get; private set; }
public Guid LocationId { get; private set; }
public Guid TerminalId { get; private set; }
public Guid OpenedByUserId { get; private set; }
public Guid? ClosedByUserId { get; private set; }
public DrawerSessionStatus Status { get; private set; }
public decimal OpeningBalance { get; private set; }
public decimal ExpectedBalance { get; private set; }
public decimal? CountedBalance { get; private set; }
public decimal? Variance { get; private set; }
public DateTime OpenedAt { get; private set; }
public DateTime? ClosedAt { get; private set; }
private readonly List<DrawerTransaction> _transactions = new();
public IReadOnlyList<DrawerTransaction> Transactions => _transactions.AsReadOnly();
private DrawerSession() { }
public static DrawerSession Open(
Guid locationId,
Guid terminalId,
Guid userId,
decimal openingBalance)
{
return new DrawerSession
{
Id = Guid.NewGuid(),
LocationId = locationId,
TerminalId = terminalId,
OpenedByUserId = userId,
Status = DrawerSessionStatus.Open,
OpeningBalance = openingBalance,
ExpectedBalance = openingBalance,
OpenedAt = DateTime.UtcNow
};
}
public void RecordCashSale(decimal amount, string saleReference)
{
EnsureOpen();
var txn = DrawerTransaction.CashIn(Id, amount, saleReference);
_transactions.Add(txn);
ExpectedBalance += amount;
}
public void RecordCashRefund(decimal amount, string refundReference)
{
EnsureOpen();
var txn = DrawerTransaction.CashOut(Id, amount, refundReference, "Cash Refund");
_transactions.Add(txn);
ExpectedBalance -= amount;
}
public void RecordPaidOut(decimal amount, string description, Guid userId)
{
EnsureOpen();
var txn = DrawerTransaction.PaidOut(Id, amount, description, userId);
_transactions.Add(txn);
ExpectedBalance -= amount;
}
public void RecordPaidIn(decimal amount, string description, Guid userId)
{
EnsureOpen();
var txn = DrawerTransaction.PaidIn(Id, amount, description, userId);
_transactions.Add(txn);
ExpectedBalance += amount;
}
public void RecordPickup(decimal amount, Guid userId)
{
EnsureOpen();
var txn = DrawerTransaction.Pickup(Id, amount, userId);
_transactions.Add(txn);
ExpectedBalance -= amount;
}
public void SubmitBlindCount(decimal countedAmount, Guid userId)
{
EnsureOpen();
CountedBalance = countedAmount;
Variance = countedAmount - ExpectedBalance;
Status = DrawerSessionStatus.Counted;
ClosedByUserId = userId;
}
public void Close(Guid userId)
{
if (Status == DrawerSessionStatus.Open)
throw new InvalidOperationException("Must submit count before closing");
if (Status == DrawerSessionStatus.Closed)
throw new InvalidOperationException("Drawer already closed");
Status = DrawerSessionStatus.Closed;
ClosedByUserId = userId;
ClosedAt = DateTime.UtcNow;
}
public decimal TotalCashIn => _transactions
.Where(t => t.Type == DrawerTransactionType.CashIn || t.Type == DrawerTransactionType.PaidIn)
.Sum(t => t.Amount);
public decimal TotalCashOut => _transactions
.Where(t => t.Type == DrawerTransactionType.CashOut ||
t.Type == DrawerTransactionType.PaidOut ||
t.Type == DrawerTransactionType.Pickup)
.Sum(t => t.Amount);
private void EnsureOpen()
{
if (Status != DrawerSessionStatus.Open)
throw new InvalidOperationException("Drawer is not open");
}
}
public enum DrawerSessionStatus
{
Open,
Counted,
Closed
}
// src/PosPlatform.Core/Entities/CashDrawer/DrawerTransaction.cs
namespace PosPlatform.Core.Entities.CashDrawer;
public class DrawerTransaction
{
public Guid Id { get; private set; }
public Guid SessionId { get; private set; }
public DrawerTransactionType Type { get; private set; }
public decimal Amount { get; private set; }
public string Reference { get; private set; } = string.Empty;
public string? Description { get; private set; }
public Guid? UserId { get; private set; }
public DateTime CreatedAt { get; private set; }
private DrawerTransaction() { }
public static DrawerTransaction CashIn(Guid sessionId, decimal amount, string reference)
=> Create(sessionId, DrawerTransactionType.CashIn, amount, reference);
public static DrawerTransaction CashOut(Guid sessionId, decimal amount, string reference, string description)
=> Create(sessionId, DrawerTransactionType.CashOut, amount, reference, description);
public static DrawerTransaction PaidOut(Guid sessionId, decimal amount, string description, Guid userId)
=> Create(sessionId, DrawerTransactionType.PaidOut, amount, Guid.NewGuid().ToString(), description, userId);
public static DrawerTransaction PaidIn(Guid sessionId, decimal amount, string description, Guid userId)
=> Create(sessionId, DrawerTransactionType.PaidIn, amount, Guid.NewGuid().ToString(), description, userId);
public static DrawerTransaction Pickup(Guid sessionId, decimal amount, Guid userId)
=> Create(sessionId, DrawerTransactionType.Pickup, amount, $"PICKUP-{DateTime.UtcNow:yyyyMMddHHmmss}", "Cash Pickup", userId);
private static DrawerTransaction Create(
Guid sessionId,
DrawerTransactionType type,
decimal amount,
string reference,
string? description = null,
Guid? userId = null)
{
return new DrawerTransaction
{
Id = Guid.NewGuid(),
SessionId = sessionId,
Type = type,
Amount = Math.Abs(amount),
Reference = reference,
Description = description,
UserId = userId,
CreatedAt = DateTime.UtcNow
};
}
}
public enum DrawerTransactionType
{
CashIn, // Cash received from sale
CashOut, // Cash returned (refund, change)
PaidIn, // Manual cash deposit
PaidOut, // Manual cash withdrawal
Pickup // Cash pickup during shift
}
21.6 Testing Checkpoints
Week 6 Checkpoint: Inventory
# Run inventory tests
dotnet test --filter "FullyQualifiedName~Inventory"
# Manual verification
curl -X POST http://localhost:5100/api/inventory/receive \
-H "Content-Type: application/json" \
-H "X-Tenant-Code: DEMO" \
-d '{"sku": "TEST-001", "quantity": 100, "reference": "PO-001"}'
Week 7 Checkpoint: Sales
# Create and complete a sale
curl -X POST http://localhost:5100/api/sales \
-H "Content-Type: application/json" \
-H "X-Tenant-Code: DEMO" \
-d '{"locationId": "...", "cashierId": "..."}'
# Add item
curl -X POST http://localhost:5100/api/sales/{saleId}/items \
-H "Content-Type: application/json" \
-H "X-Tenant-Code: DEMO" \
-d '{"sku": "TEST-001", "quantity": 1}'
# Add payment
curl -X POST http://localhost:5100/api/sales/{saleId}/payments \
-H "Content-Type: application/json" \
-H "X-Tenant-Code: DEMO" \
-d '{"method": "Cash", "amount": 50.00}'
21.7 Next Steps
Proceed to Chapter 22: Phase 3 - Support Implementation for:
- Customer domain with loyalty
- Offline sync infrastructure
- RFID module (optional)
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 21 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 22: Phase 3 - Support Implementation
22.1 Overview
Phase 3 adds support capabilities that enhance the core POS system: customer management with loyalty programs, offline operation support, and optional RFID integration. This 4-week phase (Weeks 11-14) builds features that differentiate the platform.
22.2 Week 11-12: Customer Domain with Loyalty
Day 1-2: Customer Entity
Objective: Create customer entity with profile and contact information.
Claude Command:
/dev-team create customer entity with contact information and profile
Implementation:
// src/PosPlatform.Core/Entities/Customers/Customer.cs
namespace PosPlatform.Core.Entities.Customers;
public class Customer
{
public Guid Id { get; private set; }
public string? CustomerNumber { get; private set; }
public string FirstName { get; private set; } = string.Empty;
public string LastName { get; private set; } = string.Empty;
public string FullName => $"{FirstName} {LastName}".Trim();
// Contact info
public string? Email { get; private set; }
public string? Phone { get; private set; }
public CustomerAddress? Address { get; private set; }
// Loyalty
public string? LoyaltyId { get; private set; }
public int LoyaltyPoints { get; private set; }
public CustomerTier Tier { get; private set; }
// Marketing
public bool EmailOptIn { get; private set; }
public bool SmsOptIn { get; private set; }
// Stats
public int TotalOrders { get; private set; }
public decimal TotalSpent { get; private set; }
public DateTime? LastPurchaseAt { get; private set; }
// Metadata
public bool IsActive { get; private set; }
public DateTime CreatedAt { get; private set; }
public DateTime? UpdatedAt { get; private set; }
private readonly List<CustomerNote> _notes = new();
public IReadOnlyList<CustomerNote> Notes => _notes.AsReadOnly();
private Customer() { }
public static Customer Create(
string firstName,
string lastName,
string? email = null,
string? phone = null)
{
var customer = new Customer
{
Id = Guid.NewGuid(),
CustomerNumber = GenerateCustomerNumber(),
FirstName = firstName,
LastName = lastName,
Email = email?.ToLowerInvariant(),
Phone = NormalizePhone(phone),
Tier = CustomerTier.Bronze,
IsActive = true,
CreatedAt = DateTime.UtcNow
};
// Auto-generate loyalty ID
customer.LoyaltyId = GenerateLoyaltyId();
return customer;
}
public void UpdateContact(string? email, string? phone)
{
Email = email?.ToLowerInvariant();
Phone = NormalizePhone(phone);
UpdatedAt = DateTime.UtcNow;
}
public void UpdateAddress(CustomerAddress address)
{
Address = address;
UpdatedAt = DateTime.UtcNow;
}
public void SetMarketingPreferences(bool emailOptIn, bool smsOptIn)
{
EmailOptIn = emailOptIn;
SmsOptIn = smsOptIn;
UpdatedAt = DateTime.UtcNow;
}
public void RecordPurchase(decimal amount, int pointsEarned)
{
TotalOrders++;
TotalSpent += amount;
LoyaltyPoints += pointsEarned;
LastPurchaseAt = DateTime.UtcNow;
// Update tier based on total spent
Tier = TotalSpent switch
{
>= 10000 => CustomerTier.Platinum,
>= 5000 => CustomerTier.Gold,
>= 1000 => CustomerTier.Silver,
_ => CustomerTier.Bronze
};
UpdatedAt = DateTime.UtcNow;
}
public bool RedeemPoints(int points)
{
if (points > LoyaltyPoints)
return false;
LoyaltyPoints -= points;
UpdatedAt = DateTime.UtcNow;
return true;
}
public void AddNote(string content, Guid userId)
{
_notes.Add(new CustomerNote(Id, content, userId));
UpdatedAt = DateTime.UtcNow;
}
public void Deactivate() => IsActive = false;
public void Reactivate() => IsActive = true;
private static string GenerateCustomerNumber()
// Random.Shared avoids per-call reseeding; the upper bound is exclusive,
// so 10000 yields the full 1000-9999 range.
=> $"C{DateTime.UtcNow:yyMMdd}{Random.Shared.Next(1000, 10000)}";
private static string GenerateLoyaltyId()
=> $"LYL{Guid.NewGuid().ToString()[..8].ToUpper()}";
private static string? NormalizePhone(string? phone)
{
if (string.IsNullOrWhiteSpace(phone))
return null;
// Remove non-digits
var digits = new string(phone.Where(char.IsDigit).ToArray());
// Format as (XXX) XXX-XXXX for US numbers
if (digits.Length == 10)
return $"({digits[..3]}) {digits[3..6]}-{digits[6..]}";
if (digits.Length == 11 && digits[0] == '1')
return $"({digits[1..4]}) {digits[4..7]}-{digits[7..]}";
return digits;
}
}
public class CustomerAddress
{
public string Street1 { get; set; } = string.Empty;
public string? Street2 { get; set; }
public string City { get; set; } = string.Empty;
public string State { get; set; } = string.Empty;
public string PostalCode { get; set; } = string.Empty;
public string Country { get; set; } = "US";
}
public class CustomerNote
{
public Guid Id { get; private set; }
public Guid CustomerId { get; private set; }
public string Content { get; private set; }
public Guid CreatedByUserId { get; private set; }
public DateTime CreatedAt { get; private set; }
public CustomerNote(Guid customerId, string content, Guid userId)
{
Id = Guid.NewGuid();
CustomerId = customerId;
Content = content;
CreatedByUserId = userId;
CreatedAt = DateTime.UtcNow;
}
}
public enum CustomerTier
{
Bronze,
Silver,
Gold,
Platinum
}
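The tier thresholds in RecordPurchase can be checked in isolation. This standalone sketch mirrors the switch expression above; TierFor is an illustrative helper, not part of the entity:

```csharp
using System;

// Standalone copy of the tier-threshold switch used in Customer.RecordPurchase.
static string TierFor(decimal totalSpent) => totalSpent switch
{
    >= 10000 => "Platinum",
    >= 5000  => "Gold",
    >= 1000  => "Silver",
    _        => "Bronze"
};

Console.WriteLine(TierFor(999m));    // Bronze: just under the Silver threshold
Console.WriteLine(TierFor(1000m));   // Silver: thresholds are inclusive (>=)
Console.WriteLine(TierFor(10000m));  // Platinum
```

Note that tiers only move up as TotalSpent grows; the entity never demotes a customer.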
Day 3-4: Customer Lookup Service
Objective: Provide fast customer lookup by phone, email, loyalty ID, or customer number.
Claude Command:
/dev-team implement customer lookup by phone, email, and loyalty ID
Implementation:
// src/PosPlatform.Core/Interfaces/ICustomerRepository.cs
using PosPlatform.Core.Entities.Customers;
namespace PosPlatform.Core.Interfaces;
public interface ICustomerRepository
{
Task<Customer?> GetByIdAsync(Guid id, CancellationToken ct = default);
Task<Customer?> GetByEmailAsync(string email, CancellationToken ct = default);
Task<Customer?> GetByPhoneAsync(string phone, CancellationToken ct = default);
Task<Customer?> GetByLoyaltyIdAsync(string loyaltyId, CancellationToken ct = default);
Task<Customer?> GetByCustomerNumberAsync(string customerNumber, CancellationToken ct = default);
Task<IReadOnlyList<Customer>> SearchAsync(
string searchTerm,
int limit = 20,
CancellationToken ct = default);
Task<Customer> AddAsync(Customer customer, CancellationToken ct = default);
Task UpdateAsync(Customer customer, CancellationToken ct = default);
}
// src/PosPlatform.Infrastructure/Repositories/CustomerRepository.cs
using Microsoft.EntityFrameworkCore;
using PosPlatform.Core.Entities.Customers;
using PosPlatform.Core.Interfaces;
using PosPlatform.Infrastructure.Data;
namespace PosPlatform.Infrastructure.Repositories;
public class CustomerRepository : ICustomerRepository
{
private readonly TenantDbContext _context;
public CustomerRepository(TenantDbContext context)
{
_context = context;
}
public async Task<Customer?> GetByIdAsync(Guid id, CancellationToken ct = default)
=> await _context.Customers
.Include(c => c.Notes)
.FirstOrDefaultAsync(c => c.Id == id, ct);
public async Task<Customer?> GetByEmailAsync(string email, CancellationToken ct = default)
=> await _context.Customers
.FirstOrDefaultAsync(c => c.Email == email.ToLowerInvariant(), ct);
public async Task<Customer?> GetByPhoneAsync(string phone, CancellationToken ct = default)
{
// Stored phones are formatted as (XXX) XXX-XXXX, so a digits-only pattern
// would never match; rebuild the stored format before comparing.
var digits = NormalizePhoneForSearch(phone);
if (digits.Length == 11 && digits[0] == '1')
digits = digits[1..];
var pattern = digits.Length == 10
? $"({digits[..3]}) {digits[3..6]}-{digits[6..]}"
: phone.Trim();
return await _context.Customers
.FirstOrDefaultAsync(c => c.Phone != null &&
EF.Functions.Like(c.Phone, $"%{pattern}%"), ct);
}
public async Task<Customer?> GetByLoyaltyIdAsync(string loyaltyId, CancellationToken ct = default)
=> await _context.Customers
.FirstOrDefaultAsync(c => c.LoyaltyId == loyaltyId.ToUpperInvariant(), ct);
public async Task<Customer?> GetByCustomerNumberAsync(string customerNumber, CancellationToken ct = default)
=> await _context.Customers
.FirstOrDefaultAsync(c => c.CustomerNumber == customerNumber, ct);
public async Task<IReadOnlyList<Customer>> SearchAsync(
string searchTerm,
int limit = 20,
CancellationToken ct = default)
{
var term = searchTerm.ToLowerInvariant();
return await _context.Customers
.Where(c => c.IsActive &&
(c.FirstName.ToLower().Contains(term) ||
c.LastName.ToLower().Contains(term) ||
(c.Email != null && c.Email.Contains(term)) ||
(c.Phone != null && c.Phone.Contains(term)) ||
(c.LoyaltyId != null && c.LoyaltyId.Contains(term.ToUpper())) ||
(c.CustomerNumber != null && c.CustomerNumber.Contains(term))))
.OrderBy(c => c.LastName)
.ThenBy(c => c.FirstName)
.Take(limit)
.ToListAsync(ct);
}
public async Task<Customer> AddAsync(Customer customer, CancellationToken ct = default)
{
await _context.Customers.AddAsync(customer, ct);
await _context.SaveChangesAsync(ct);
return customer;
}
public async Task UpdateAsync(Customer customer, CancellationToken ct = default)
{
_context.Customers.Update(customer);
await _context.SaveChangesAsync(ct);
}
private static string NormalizePhoneForSearch(string phone)
=> new string(phone.Where(char.IsDigit).ToArray());
}
// src/PosPlatform.Core/Services/CustomerLookupService.cs
using PosPlatform.Core.Entities.Customers;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Services;
public interface ICustomerLookupService
{
Task<Customer?> LookupAsync(string identifier, CancellationToken ct = default);
Task<IReadOnlyList<Customer>> QuickSearchAsync(string term, CancellationToken ct = default);
}
public class CustomerLookupService : ICustomerLookupService
{
private readonly ICustomerRepository _repository;
public CustomerLookupService(ICustomerRepository repository)
{
_repository = repository;
}
public async Task<Customer?> LookupAsync(string identifier, CancellationToken ct = default)
{
if (string.IsNullOrWhiteSpace(identifier))
return null;
identifier = identifier.Trim();
// Try loyalty ID first (fast, unique)
if (identifier.StartsWith("LYL", StringComparison.OrdinalIgnoreCase))
{
return await _repository.GetByLoyaltyIdAsync(identifier, ct);
}
// Try customer number
if (identifier.StartsWith("C", StringComparison.OrdinalIgnoreCase) &&
identifier.Length == 11)
{
return await _repository.GetByCustomerNumberAsync(identifier, ct);
}
// Try email
if (identifier.Contains('@'))
{
return await _repository.GetByEmailAsync(identifier, ct);
}
// Try phone (if mostly digits)
var digits = identifier.Count(char.IsDigit);
if (digits >= 7)
{
return await _repository.GetByPhoneAsync(identifier, ct);
}
// Fall back to search
var results = await _repository.SearchAsync(identifier, 1, ct);
return results.FirstOrDefault();
}
public async Task<IReadOnlyList<Customer>> QuickSearchAsync(
string term,
CancellationToken ct = default)
{
if (string.IsNullOrWhiteSpace(term) || term.Length < 2)
return Array.Empty<Customer>();
return await _repository.SearchAsync(term, 10, ct);
}
}
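The routing order in LookupAsync matters: a customer number is mostly digits, so it must be tested before the phone heuristic, and the loyalty prefix is checked first because it is cheapest and unique. This standalone sketch mirrors that order; ClassifyIdentifier is a hypothetical helper for illustration:

```csharp
using System;
using System.Linq;

// Mirrors the identifier routing in CustomerLookupService.LookupAsync:
// loyalty ID, customer number, email, phone, then free-text search.
static string ClassifyIdentifier(string id)
{
    id = id.Trim();
    if (id.StartsWith("LYL", StringComparison.OrdinalIgnoreCase))
        return "loyalty";
    if (id.StartsWith("C", StringComparison.OrdinalIgnoreCase) && id.Length == 11)
        return "customerNumber";   // C + yyMMdd + 4 digits = 11 characters
    if (id.Contains('@'))
        return "email";
    if (id.Count(char.IsDigit) >= 7)
        return "phone";
    return "search";
}

Console.WriteLine(ClassifyIdentifier("LYL3F9A2B1C"));      // loyalty
Console.WriteLine(ClassifyIdentifier("C2602251234"));      // customerNumber
Console.WriteLine(ClassifyIdentifier("jane@example.com")); // email
Console.WriteLine(ClassifyIdentifier("(555) 123-4567"));   // phone
Console.WriteLine(ClassifyIdentifier("Doe"));              // search
```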
Day 5-6: Purchase History
Objective: Track and query customer purchase history.
Claude Command:
/dev-team create purchase history tracking and queries
Implementation:
// src/PosPlatform.Core/Entities/Customers/CustomerPurchase.cs
namespace PosPlatform.Core.Entities.Customers;
public class CustomerPurchase
{
public Guid Id { get; private set; }
public Guid CustomerId { get; private set; }
public Guid SaleId { get; private set; }
public string SaleNumber { get; private set; } = string.Empty;
public Guid LocationId { get; private set; }
public decimal TotalAmount { get; private set; }
public int ItemCount { get; private set; }
public int PointsEarned { get; private set; }
public int PointsRedeemed { get; private set; }
public DateTime PurchasedAt { get; private set; }
private readonly List<CustomerPurchaseItem> _items = new();
public IReadOnlyList<CustomerPurchaseItem> Items => _items.AsReadOnly();
private CustomerPurchase() { }
public static CustomerPurchase Create(
Guid customerId,
Guid saleId,
string saleNumber,
Guid locationId,
decimal totalAmount,
int pointsEarned,
int pointsRedeemed,
IEnumerable<CustomerPurchaseItem> items)
{
var purchase = new CustomerPurchase
{
Id = Guid.NewGuid(),
CustomerId = customerId,
SaleId = saleId,
SaleNumber = saleNumber,
LocationId = locationId,
TotalAmount = totalAmount,
PointsEarned = pointsEarned,
PointsRedeemed = pointsRedeemed,
PurchasedAt = DateTime.UtcNow
};
foreach (var item in items)
{
item.PurchaseId = purchase.Id;
purchase._items.Add(item);
}
purchase.ItemCount = purchase._items.Sum(i => i.Quantity);
return purchase;
}
}
public class CustomerPurchaseItem
{
public Guid Id { get; set; }
public Guid PurchaseId { get; set; }
public string Sku { get; set; } = string.Empty;
public string ProductName { get; set; } = string.Empty;
public int Quantity { get; set; }
public decimal UnitPrice { get; set; }
public decimal TotalPrice { get; set; }
}
// src/PosPlatform.Core/Services/PurchaseHistoryService.cs
using PosPlatform.Core.Entities.Customers;
using PosPlatform.Core.Entities.Sales;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Services;
public interface IPurchaseHistoryService
{
Task RecordPurchaseAsync(Guid customerId, Sale sale, CancellationToken ct = default);
Task<IReadOnlyList<CustomerPurchase>> GetHistoryAsync(
Guid customerId,
DateTime? fromDate = null,
DateTime? toDate = null,
int limit = 50,
CancellationToken ct = default);
Task<CustomerPurchaseStats> GetStatsAsync(Guid customerId, CancellationToken ct = default);
}
public class PurchaseHistoryService : IPurchaseHistoryService
{
private readonly IPurchaseHistoryRepository _repository;
private readonly ICustomerRepository _customerRepository;
private readonly ILoyaltyService _loyaltyService;
public PurchaseHistoryService(
IPurchaseHistoryRepository repository,
ICustomerRepository customerRepository,
ILoyaltyService loyaltyService)
{
_repository = repository;
_customerRepository = customerRepository;
_loyaltyService = loyaltyService;
}
public async Task RecordPurchaseAsync(
Guid customerId,
Sale sale,
CancellationToken ct = default)
{
var customer = await _customerRepository.GetByIdAsync(customerId, ct)
?? throw new InvalidOperationException("Customer not found");
var pointsEarned = _loyaltyService.CalculatePoints(sale.Total, customer.Tier);
var items = sale.Items.Select(i => new CustomerPurchaseItem
{
Id = Guid.NewGuid(),
Sku = i.Sku,
ProductName = i.Name,
Quantity = i.Quantity,
UnitPrice = i.UnitPrice,
TotalPrice = i.ExtendedPrice
});
var purchase = CustomerPurchase.Create(
customerId,
sale.Id,
sale.SaleNumber,
sale.LocationId,
sale.Total,
pointsEarned,
0, // Points redeemed tracked separately
items);
await _repository.AddAsync(purchase, ct);
customer.RecordPurchase(sale.Total, pointsEarned);
await _customerRepository.UpdateAsync(customer, ct);
}
public async Task<IReadOnlyList<CustomerPurchase>> GetHistoryAsync(
Guid customerId,
DateTime? fromDate = null,
DateTime? toDate = null,
int limit = 50,
CancellationToken ct = default)
{
return await _repository.GetByCustomerAsync(customerId, fromDate, toDate, limit, ct);
}
public async Task<CustomerPurchaseStats> GetStatsAsync(
Guid customerId,
CancellationToken ct = default)
{
return await _repository.GetStatsAsync(customerId, ct);
}
}
public record CustomerPurchaseStats(
int TotalOrders,
decimal TotalSpent,
decimal AverageOrderValue,
int TotalItems,
string? TopCategory,
string? TopProduct,
DateTime? FirstPurchase,
DateTime? LastPurchase);
Day 7-8: Loyalty Points System
Objective: Implement point earning and redemption.
Claude Command:
/dev-team implement loyalty points earning and redemption system
Implementation:
// src/PosPlatform.Core/Services/LoyaltyService.cs
using PosPlatform.Core.Entities.Customers;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Services;
public interface ILoyaltyService
{
int CalculatePoints(decimal purchaseAmount, CustomerTier tier);
decimal CalculateRedemptionValue(int points);
Task<PointsRedemptionResult> RedeemPointsAsync(
Guid customerId,
int points,
Guid saleId,
CancellationToken ct = default);
LoyaltyTierBenefits GetTierBenefits(CustomerTier tier);
}
public class LoyaltyService : ILoyaltyService
{
private readonly ICustomerRepository _customerRepository;
private readonly ILoyaltyTransactionRepository _transactionRepository;
// Configuration (in production, load from settings)
private const decimal BasePointsPerDollar = 1.0m;
private const decimal PointValue = 0.01m; // Each point = $0.01
private static readonly Dictionary<CustomerTier, decimal> TierMultipliers = new()
{
{ CustomerTier.Bronze, 1.0m },
{ CustomerTier.Silver, 1.25m },
{ CustomerTier.Gold, 1.5m },
{ CustomerTier.Platinum, 2.0m }
};
public LoyaltyService(
ICustomerRepository customerRepository,
ILoyaltyTransactionRepository transactionRepository)
{
_customerRepository = customerRepository;
_transactionRepository = transactionRepository;
}
public int CalculatePoints(decimal purchaseAmount, CustomerTier tier)
{
var multiplier = TierMultipliers.GetValueOrDefault(tier, 1.0m);
var points = purchaseAmount * BasePointsPerDollar * multiplier;
return (int)Math.Floor(points);
}
public decimal CalculateRedemptionValue(int points)
{
return points * PointValue;
}
public async Task<PointsRedemptionResult> RedeemPointsAsync(
Guid customerId,
int points,
Guid saleId,
CancellationToken ct = default)
{
var customer = await _customerRepository.GetByIdAsync(customerId, ct)
?? throw new InvalidOperationException("Customer not found");
if (points > customer.LoyaltyPoints)
{
return PointsRedemptionResult.Failed(
$"Insufficient points. Available: {customer.LoyaltyPoints}");
}
var value = CalculateRedemptionValue(points);
if (!customer.RedeemPoints(points))
{
return PointsRedemptionResult.Failed("Failed to redeem points");
}
await _customerRepository.UpdateAsync(customer, ct);
// Record transaction
var transaction = new LoyaltyTransaction
{
Id = Guid.NewGuid(),
CustomerId = customerId,
Type = LoyaltyTransactionType.Redemption,
Points = -points,
BalanceAfter = customer.LoyaltyPoints,
Reference = saleId.ToString(),
CreatedAt = DateTime.UtcNow
};
await _transactionRepository.AddAsync(transaction, ct);
return PointsRedemptionResult.Success(points, value);
}
public LoyaltyTierBenefits GetTierBenefits(CustomerTier tier)
{
return tier switch
{
CustomerTier.Bronze => new LoyaltyTierBenefits(
"Bronze", 1.0m, 0, false, false),
CustomerTier.Silver => new LoyaltyTierBenefits(
"Silver", 1.25m, 5, true, false),
CustomerTier.Gold => new LoyaltyTierBenefits(
"Gold", 1.5m, 10, true, true),
CustomerTier.Platinum => new LoyaltyTierBenefits(
"Platinum", 2.0m, 15, true, true),
_ => new LoyaltyTierBenefits("Unknown", 1.0m, 0, false, false)
};
}
}
public record PointsRedemptionResult(
bool IsSuccess,
int PointsRedeemed,
decimal DiscountValue,
string? ErrorMessage)
{
public static PointsRedemptionResult Success(int points, decimal value)
=> new(true, points, value, null);
public static PointsRedemptionResult Failed(string error)
=> new(false, 0, 0, error);
}
public record LoyaltyTierBenefits(
string TierName,
decimal PointsMultiplier,
int DiscountPercentage,
bool FreeShipping,
bool EarlyAccess);
public class LoyaltyTransaction
{
public Guid Id { get; set; }
public Guid CustomerId { get; set; }
public LoyaltyTransactionType Type { get; set; }
public int Points { get; set; }
public int BalanceAfter { get; set; }
public string Reference { get; set; } = string.Empty;
public DateTime CreatedAt { get; set; }
}
public enum LoyaltyTransactionType
{
Earn,
Redemption,
Adjustment,
Expiration
}
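To sanity-check the arithmetic: points are computed as dollars times the base rate times the tier multiplier, then floored; each point redeems at $0.01. CalculatePoints below is a standalone copy of the service method's math, not the service itself:

```csharp
using System;

// Standalone copy of LoyaltyService.CalculatePoints:
// 1 base point per dollar, tier multiplier applied, fractional points floored.
static int CalculatePoints(decimal amount, decimal tierMultiplier)
    => (int)Math.Floor(amount * 1.0m * tierMultiplier);

// A $49.99 purchase at Gold (1.5x): 49.99 * 1.5 = 74.985 -> 74 points.
Console.WriteLine(CalculatePoints(49.99m, 1.5m));  // 74

// Redeeming 500 points at $0.01 per point is a $5.00 discount.
Console.WriteLine(500 * 0.01m);                    // 5.00
```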
Day 9-10: Customer API
Claude Command:
/dev-team create customer API endpoints with search and CRUD
Implementation:
// src/PosPlatform.Api/Controllers/CustomersController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using PosPlatform.Core.Entities.Customers;
using PosPlatform.Core.Interfaces;
using PosPlatform.Core.Services;
namespace PosPlatform.Api.Controllers;
[ApiController]
[Route("api/customers")]
[Authorize]
public class CustomersController : ControllerBase
{
private readonly ICustomerRepository _repository;
private readonly ICustomerLookupService _lookupService;
private readonly IPurchaseHistoryService _historyService;
private readonly ILoyaltyService _loyaltyService;
public CustomersController(
ICustomerRepository repository,
ICustomerLookupService lookupService,
IPurchaseHistoryService historyService,
ILoyaltyService loyaltyService)
{
_repository = repository;
_lookupService = lookupService;
_historyService = historyService;
_loyaltyService = loyaltyService;
}
[HttpGet("search")]
public async Task<ActionResult<IEnumerable<CustomerSummaryDto>>> Search(
[FromQuery] string q,
CancellationToken ct)
{
if (string.IsNullOrWhiteSpace(q))
return BadRequest("Search term required");
var customers = await _lookupService.QuickSearchAsync(q, ct);
return Ok(customers.Select(CustomerSummaryDto.FromEntity));
}
[HttpGet("lookup")]
public async Task<ActionResult<CustomerDto>> Lookup(
[FromQuery] string identifier,
CancellationToken ct)
{
var customer = await _lookupService.LookupAsync(identifier, ct);
if (customer == null)
return NotFound();
var benefits = _loyaltyService.GetTierBenefits(customer.Tier);
return Ok(CustomerDto.FromEntity(customer, benefits));
}
[HttpGet("{id:guid}")]
public async Task<ActionResult<CustomerDto>> GetById(Guid id, CancellationToken ct)
{
var customer = await _repository.GetByIdAsync(id, ct);
if (customer == null)
return NotFound();
var benefits = _loyaltyService.GetTierBenefits(customer.Tier);
return Ok(CustomerDto.FromEntity(customer, benefits));
}
[HttpPost]
public async Task<ActionResult<CustomerDto>> Create(
[FromBody] CreateCustomerRequest request,
CancellationToken ct)
{
// Check for existing customer
if (!string.IsNullOrEmpty(request.Email))
{
var existing = await _repository.GetByEmailAsync(request.Email, ct);
if (existing != null)
return Conflict(new { error = "Email already registered" });
}
var customer = Customer.Create(
request.FirstName,
request.LastName,
request.Email,
request.Phone);
if (request.Address != null)
customer.UpdateAddress(request.Address);
customer.SetMarketingPreferences(
request.EmailOptIn ?? false,
request.SmsOptIn ?? false);
await _repository.AddAsync(customer, ct);
var benefits = _loyaltyService.GetTierBenefits(customer.Tier);
return CreatedAtAction(
nameof(GetById),
new { id = customer.Id },
CustomerDto.FromEntity(customer, benefits));
}
[HttpPut("{id:guid}")]
public async Task<IActionResult> Update(
Guid id,
[FromBody] UpdateCustomerRequest request,
CancellationToken ct)
{
var customer = await _repository.GetByIdAsync(id, ct);
if (customer == null)
return NotFound();
if (request.Email != null || request.Phone != null)
customer.UpdateContact(request.Email ?? customer.Email, request.Phone ?? customer.Phone);
if (request.Address != null)
customer.UpdateAddress(request.Address);
if (request.EmailOptIn.HasValue || request.SmsOptIn.HasValue)
customer.SetMarketingPreferences(
request.EmailOptIn ?? customer.EmailOptIn,
request.SmsOptIn ?? customer.SmsOptIn);
await _repository.UpdateAsync(customer, ct);
return NoContent();
}
[HttpGet("{id:guid}/purchases")]
public async Task<ActionResult<IEnumerable<PurchaseDto>>> GetPurchases(
Guid id,
[FromQuery] DateTime? from,
[FromQuery] DateTime? to,
[FromQuery] int limit = 50,
CancellationToken ct = default)
{
var purchases = await _historyService.GetHistoryAsync(id, from, to, limit, ct);
return Ok(purchases.Select(PurchaseDto.FromEntity));
}
[HttpGet("{id:guid}/stats")]
public async Task<ActionResult<CustomerPurchaseStats>> GetStats(
Guid id,
CancellationToken ct)
{
var stats = await _historyService.GetStatsAsync(id, ct);
return Ok(stats);
}
[HttpPost("{id:guid}/redeem-points")]
public async Task<ActionResult<PointsRedemptionResult>> RedeemPoints(
Guid id,
[FromBody] RedeemPointsRequest request,
CancellationToken ct)
{
var result = await _loyaltyService.RedeemPointsAsync(
id, request.Points, request.SaleId, ct);
if (!result.IsSuccess)
return BadRequest(result);
return Ok(result);
}
}
// DTOs
public record CreateCustomerRequest(
string FirstName,
string LastName,
string? Email,
string? Phone,
CustomerAddress? Address,
bool? EmailOptIn,
bool? SmsOptIn);
public record UpdateCustomerRequest(
string? Email,
string? Phone,
CustomerAddress? Address,
bool? EmailOptIn,
bool? SmsOptIn);
public record RedeemPointsRequest(int Points, Guid SaleId);
public record CustomerSummaryDto(
Guid Id,
string FullName,
string? Email,
string? Phone,
string? LoyaltyId,
int LoyaltyPoints,
string Tier)
{
public static CustomerSummaryDto FromEntity(Customer c) => new(
c.Id, c.FullName, c.Email, c.Phone, c.LoyaltyId,
c.LoyaltyPoints, c.Tier.ToString());
}
public record CustomerDto(
Guid Id,
string CustomerNumber,
string FirstName,
string LastName,
string FullName,
string? Email,
string? Phone,
CustomerAddress? Address,
string? LoyaltyId,
int LoyaltyPoints,
string Tier,
LoyaltyTierBenefits TierBenefits,
int TotalOrders,
decimal TotalSpent,
DateTime? LastPurchaseAt)
{
public static CustomerDto FromEntity(Customer c, LoyaltyTierBenefits benefits) => new(
c.Id, c.CustomerNumber ?? "", c.FirstName, c.LastName, c.FullName,
c.Email, c.Phone, c.Address, c.LoyaltyId, c.LoyaltyPoints,
c.Tier.ToString(), benefits, c.TotalOrders, c.TotalSpent, c.LastPurchaseAt);
}
public record PurchaseDto(
Guid Id,
string SaleNumber,
decimal TotalAmount,
int ItemCount,
int PointsEarned,
DateTime PurchasedAt)
{
public static PurchaseDto FromEntity(CustomerPurchase p) => new(
p.Id, p.SaleNumber, p.TotalAmount, p.ItemCount,
p.PointsEarned, p.PurchasedAt);
}
22.3 Week 12-13: Offline Sync Infrastructure
Day 1-2: Local SQLite Storage
Objective: Implement local storage for offline operation.
Claude Command:
/dev-team implement local SQLite storage for offline mode
Implementation:
// src/PosPlatform.Core/Offline/OfflineStorage.cs
using Microsoft.Data.Sqlite;
using System.Text.Json;
namespace PosPlatform.Core.Offline;
public interface IOfflineStorage
{
Task InitializeAsync(CancellationToken ct = default);
Task StoreTransactionAsync(OfflineTransaction transaction, CancellationToken ct = default);
Task<IReadOnlyList<OfflineTransaction>> GetPendingTransactionsAsync(CancellationToken ct = default);
Task MarkSyncedAsync(Guid transactionId, CancellationToken ct = default);
Task DeleteSyncedAsync(CancellationToken ct = default);
}
public class SqliteOfflineStorage : IOfflineStorage
{
private readonly string _connectionString;
public SqliteOfflineStorage(string databasePath)
{
_connectionString = $"Data Source={databasePath}";
}
public async Task InitializeAsync(CancellationToken ct = default)
{
await using var conn = new SqliteConnection(_connectionString);
await conn.OpenAsync(ct);
var sql = @"
CREATE TABLE IF NOT EXISTS offline_transactions (
id TEXT PRIMARY KEY,
transaction_type TEXT NOT NULL,
payload TEXT NOT NULL,
created_at TEXT NOT NULL,
synced_at TEXT,
retry_count INTEGER DEFAULT 0,
last_error TEXT
);
CREATE INDEX IF NOT EXISTS idx_offline_synced
ON offline_transactions(synced_at);
";
await using var cmd = new SqliteCommand(sql, conn);
await cmd.ExecuteNonQueryAsync(ct);
}
public async Task StoreTransactionAsync(
OfflineTransaction transaction,
CancellationToken ct = default)
{
await using var conn = new SqliteConnection(_connectionString);
await conn.OpenAsync(ct);
var sql = @"
INSERT INTO offline_transactions (id, transaction_type, payload, created_at)
VALUES (@id, @type, @payload, @created)
";
await using var cmd = new SqliteCommand(sql, conn);
cmd.Parameters.AddWithValue("@id", transaction.Id.ToString());
cmd.Parameters.AddWithValue("@type", transaction.Type.ToString());
cmd.Parameters.AddWithValue("@payload", transaction.PayloadJson);
cmd.Parameters.AddWithValue("@created", transaction.CreatedAt.ToString("O"));
await cmd.ExecuteNonQueryAsync(ct);
}
public async Task<IReadOnlyList<OfflineTransaction>> GetPendingTransactionsAsync(
CancellationToken ct = default)
{
await using var conn = new SqliteConnection(_connectionString);
await conn.OpenAsync(ct);
var sql = @"
SELECT id, transaction_type, payload, created_at, retry_count, last_error
FROM offline_transactions
WHERE synced_at IS NULL
ORDER BY created_at ASC
";
await using var cmd = new SqliteCommand(sql, conn);
await using var reader = await cmd.ExecuteReaderAsync(ct);
var transactions = new List<OfflineTransaction>();
while (await reader.ReadAsync(ct))
{
transactions.Add(new OfflineTransaction
{
Id = Guid.Parse(reader.GetString(0)),
Type = Enum.Parse<OfflineTransactionType>(reader.GetString(1)),
PayloadJson = reader.GetString(2),
CreatedAt = DateTime.Parse(reader.GetString(3), null, System.Globalization.DateTimeStyles.RoundtripKind),
RetryCount = reader.GetInt32(4),
LastError = reader.IsDBNull(5) ? null : reader.GetString(5)
});
}
return transactions;
}
public async Task MarkSyncedAsync(Guid transactionId, CancellationToken ct = default)
{
await using var conn = new SqliteConnection(_connectionString);
await conn.OpenAsync(ct);
var sql = "UPDATE offline_transactions SET synced_at = @synced WHERE id = @id";
await using var cmd = new SqliteCommand(sql, conn);
cmd.Parameters.AddWithValue("@id", transactionId.ToString());
cmd.Parameters.AddWithValue("@synced", DateTime.UtcNow.ToString("O"));
await cmd.ExecuteNonQueryAsync(ct);
}
public async Task DeleteSyncedAsync(CancellationToken ct = default)
{
await using var conn = new SqliteConnection(_connectionString);
await conn.OpenAsync(ct);
var sql = "DELETE FROM offline_transactions WHERE synced_at IS NOT NULL";
await using var cmd = new SqliteCommand(sql, conn);
await cmd.ExecuteNonQueryAsync(ct);
}
}
public class OfflineTransaction
{
public Guid Id { get; set; }
public OfflineTransactionType Type { get; set; }
public string PayloadJson { get; set; } = string.Empty;
public DateTime CreatedAt { get; set; }
public DateTime? SyncedAt { get; set; }
public int RetryCount { get; set; }
public string? LastError { get; set; }
public T? GetPayload<T>() where T : class
{
return JsonSerializer.Deserialize<T>(PayloadJson);
}
}
public enum OfflineTransactionType
{
Sale,
Payment,
InventoryAdjustment,
CustomerCreate,
DrawerTransaction
}
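EnqueueAsync serializes payloads to JSON and GetPayload&lt;T&gt; restores them, so any payload type must round-trip cleanly through System.Text.Json. This sketch demonstrates the round-trip with DemoSalePayload, a hypothetical stand-in for payload types such as OfflineSalePayload:

```csharp
using System;
using System.Text.Json;

// Round-trips a payload the way the offline queue does: serialize on enqueue,
// deserialize in GetPayload<T>. DemoSalePayload is illustrative only.
var payload = new DemoSalePayload(Guid.NewGuid(), 42.50m);
string json = JsonSerializer.Serialize(payload);               // stored in the payload column
var restored = JsonSerializer.Deserialize<DemoSalePayload>(json);
Console.WriteLine(restored == payload);                        // True: records compare by value

record DemoSalePayload(Guid SaleId, decimal Total);
```

Records work well here because value equality makes "did the payload survive the round-trip?" a one-line check.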
Day 3-4: Offline Queue Service
Claude Command:
/dev-team create offline transaction queue service
Implementation:
// src/PosPlatform.Core/Offline/OfflineQueueService.cs
using Microsoft.Extensions.Logging;
using System.Text.Json;
namespace PosPlatform.Core.Offline;
public interface IOfflineQueueService
{
Task<bool> IsOnlineAsync(CancellationToken ct = default);
Task EnqueueAsync<T>(OfflineTransactionType type, T payload, CancellationToken ct = default);
Task<int> GetPendingCountAsync(CancellationToken ct = default);
Task ProcessQueueAsync(CancellationToken ct = default);
}
public class OfflineQueueService : IOfflineQueueService
{
private readonly IOfflineStorage _storage;
private readonly IConnectivityService _connectivity;
private readonly ISyncProcessor _syncProcessor;
private readonly ILogger<OfflineQueueService> _logger;
public OfflineQueueService(
IOfflineStorage storage,
IConnectivityService connectivity,
ISyncProcessor syncProcessor,
ILogger<OfflineQueueService> logger)
{
_storage = storage;
_connectivity = connectivity;
_syncProcessor = syncProcessor;
_logger = logger;
}
public async Task<bool> IsOnlineAsync(CancellationToken ct = default)
{
return await _connectivity.CheckConnectionAsync(ct);
}
public async Task EnqueueAsync<T>(
OfflineTransactionType type,
T payload,
CancellationToken ct = default)
{
var transaction = new OfflineTransaction
{
Id = Guid.NewGuid(),
Type = type,
PayloadJson = JsonSerializer.Serialize(payload),
CreatedAt = DateTime.UtcNow
};
await _storage.StoreTransactionAsync(transaction, ct);
_logger.LogInformation(
"Transaction queued for offline sync: {Type} {Id}",
type, transaction.Id);
}
public async Task<int> GetPendingCountAsync(CancellationToken ct = default)
{
var pending = await _storage.GetPendingTransactionsAsync(ct);
return pending.Count;
}
public async Task ProcessQueueAsync(CancellationToken ct = default)
{
if (!await IsOnlineAsync(ct))
{
_logger.LogDebug("Cannot process queue - offline");
return;
}
var pending = await _storage.GetPendingTransactionsAsync(ct);
if (pending.Count == 0)
return;
_logger.LogInformation("Processing {Count} pending transactions", pending.Count);
foreach (var transaction in pending)
{
try
{
await _syncProcessor.ProcessAsync(transaction, ct);
await _storage.MarkSyncedAsync(transaction.Id, ct);
_logger.LogInformation(
"Transaction synced: {Type} {Id}",
transaction.Type, transaction.Id);
}
catch (Exception ex)
{
_logger.LogError(ex,
"Failed to sync transaction {Id}. Retry count: {RetryCount}",
transaction.Id, transaction.RetryCount);
// Will retry on next sync cycle
}
}
// Clean up all transactions that have been synced
await _storage.DeleteSyncedAsync(ct);
}
}
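ProcessQueueAsync is designed to be called on a timer so failed transactions retry on the next cycle. In production this would live in a BackgroundService; the minimal drain loop below uses PeriodicTimer with the queue stubbed as a delegate so the loop itself is self-contained (RunSyncLoopAsync is a sketch, not part of the platform):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Periodic drain loop: call the queue processor once per tick, swallow
// transient failures so they retry next tick, and stop on cancellation.
static async Task RunSyncLoopAsync(
    Func<CancellationToken, Task> processQueue,
    TimeSpan interval,
    CancellationToken ct)
{
    using var timer = new PeriodicTimer(interval);
    while (await timer.WaitForNextTickAsync(ct))
    {
        try { await processQueue(ct); }
        catch (OperationCanceledException) { throw; }
        catch { /* transient failure: retry on the next tick */ }
    }
}

// Demo: run three ticks, then cancel from inside the processor.
int ticks = 0;
using var cts = new CancellationTokenSource();
try
{
    await RunSyncLoopAsync(
        _ => { if (++ticks == 3) cts.Cancel(); return Task.CompletedTask; },
        TimeSpan.FromMilliseconds(10), cts.Token);
}
catch (OperationCanceledException) { }
Console.WriteLine(ticks); // 3
```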
Day 5-6: Sync Protocol with Conflict Resolution
Claude Command:
/dev-team implement sync protocol with conflict resolution
Implementation:
// src/PosPlatform.Core/Offline/SyncProcessor.cs
using Microsoft.Extensions.Logging;
using PosPlatform.Core.Interfaces;
namespace PosPlatform.Core.Offline;
public interface ISyncProcessor
{
Task ProcessAsync(OfflineTransaction transaction, CancellationToken ct = default);
}
public class SyncProcessor : ISyncProcessor
{
private readonly ISaleRepository _saleRepository;
private readonly IInventoryService _inventoryService;
private readonly ICustomerRepository _customerRepository;
private readonly IConflictResolver _conflictResolver;
private readonly ILogger<SyncProcessor> _logger;
public SyncProcessor(
ISaleRepository saleRepository,
IInventoryService inventoryService,
ICustomerRepository customerRepository,
IConflictResolver conflictResolver,
ILogger<SyncProcessor> logger)
{
_saleRepository = saleRepository;
_inventoryService = inventoryService;
_customerRepository = customerRepository;
_conflictResolver = conflictResolver;
_logger = logger;
}
public async Task ProcessAsync(
OfflineTransaction transaction,
CancellationToken ct = default)
{
switch (transaction.Type)
{
case OfflineTransactionType.Sale:
await ProcessSaleAsync(transaction, ct);
break;
case OfflineTransactionType.InventoryAdjustment:
await ProcessInventoryAsync(transaction, ct);
break;
case OfflineTransactionType.CustomerCreate:
await ProcessCustomerAsync(transaction, ct);
break;
default:
_logger.LogWarning("Unknown transaction type: {Type}", transaction.Type);
break;
}
}
private async Task ProcessSaleAsync(
OfflineTransaction transaction,
CancellationToken ct)
{
var payload = transaction.GetPayload<OfflineSalePayload>();
if (payload == null) return;
// Check if sale already exists (idempotency)
var existing = await _saleRepository.GetByIdAsync(payload.SaleId, ct);
if (existing != null)
{
_logger.LogInformation("Sale {Id} already synced, skipping", payload.SaleId);
return;
}
// Validate inventory availability
foreach (var item in payload.Items)
{
var available = await _inventoryService.GetAvailableAsync(
item.Sku, payload.LocationId, ct);
if (available < item.Quantity)
{
// Conflict: inventory no longer available
var resolution = await _conflictResolver.ResolveInventoryConflictAsync(
item.Sku, item.Quantity, available, ct);
if (resolution.Action == ConflictAction.Reject)
{
throw new SyncConflictException(
$"Insufficient inventory for {item.Sku}");
}
// Adjust quantity if partial fulfillment allowed
item.Quantity = resolution.AdjustedQuantity;
}
}
// Create the sale
await _saleRepository.AddFromOfflineAsync(payload, ct);
}
private async Task ProcessInventoryAsync(
OfflineTransaction transaction,
CancellationToken ct)
{
var payload = transaction.GetPayload<OfflineInventoryPayload>();
if (payload == null) return;
// Get current server state
var currentQuantity = await _inventoryService.GetQuantityAsync(
payload.Sku, payload.LocationId, ct);
// Apply delta (relative adjustment)
var newQuantity = currentQuantity + payload.QuantityDelta;
if (newQuantity < 0)
{
// Clamp to zero: inventory is never allowed to go negative
_logger.LogWarning(
"Inventory for {Sku} would go negative, clamping to 0", payload.Sku);
newQuantity = 0;
}
await _inventoryService.SetQuantityAsync(
payload.Sku, payload.LocationId, newQuantity, payload.Reason, payload.UserId, ct);
}
private async Task ProcessCustomerAsync(
OfflineTransaction transaction,
CancellationToken ct)
{
var payload = transaction.GetPayload<OfflineCustomerPayload>();
if (payload == null) return;
// Check for duplicate by email
if (!string.IsNullOrEmpty(payload.Email))
{
var existing = await _customerRepository.GetByEmailAsync(payload.Email, ct);
if (existing != null)
{
_logger.LogInformation(
"Customer with email {Email} already exists, merging",
payload.Email);
// Merge: update existing customer
existing.UpdateContact(payload.Email, payload.Phone);
await _customerRepository.UpdateAsync(existing, ct);
return;
}
}
// Create new customer
var customer = Customer.Create(
payload.FirstName,
payload.LastName,
payload.Email,
payload.Phone);
await _customerRepository.AddAsync(customer, ct);
}
}
public interface IConflictResolver
{
Task<ConflictResolution> ResolveInventoryConflictAsync(
string sku,
int requested,
int available,
CancellationToken ct = default);
}
public class ConflictResolver : IConflictResolver
{
public Task<ConflictResolution> ResolveInventoryConflictAsync(
string sku,
int requested,
int available,
CancellationToken ct = default)
{
// Strategy: Partial fulfillment if any stock available
if (available > 0)
{
return Task.FromResult(new ConflictResolution(
ConflictAction.AdjustAndContinue,
available));
}
// No stock: reject the item
return Task.FromResult(new ConflictResolution(
ConflictAction.Reject,
0));
}
}
public record ConflictResolution(ConflictAction Action, int AdjustedQuantity);
public enum ConflictAction
{
Continue,
AdjustAndContinue,
Reject
}
public class SyncConflictException : Exception
{
public SyncConflictException(string message) : base(message) { }
}
// Payload models
public class OfflineSalePayload
{
public Guid SaleId { get; set; }
public Guid LocationId { get; set; }
public Guid CashierId { get; set; }
public List<OfflineSaleItem> Items { get; set; } = new();
public List<OfflinePayment> Payments { get; set; } = new();
public DateTime CreatedAt { get; set; }
}
public class OfflineSaleItem
{
public string Sku { get; set; } = string.Empty;
public int Quantity { get; set; }
public decimal UnitPrice { get; set; }
}
public class OfflinePayment
{
public string Method { get; set; } = string.Empty;
public decimal Amount { get; set; }
}
public class OfflineInventoryPayload
{
public string Sku { get; set; } = string.Empty;
public Guid LocationId { get; set; }
public int QuantityDelta { get; set; }
public string Reason { get; set; } = string.Empty;
public Guid UserId { get; set; }
}
public class OfflineCustomerPayload
{
public string FirstName { get; set; } = string.Empty;
public string LastName { get; set; } = string.Empty;
public string? Email { get; set; }
public string? Phone { get; set; }
}
Day 7-8: Connectivity Detection
Claude Command:
/dev-team create connectivity detection service
Implementation:
// src/PosPlatform.Core/Offline/ConnectivityService.cs
namespace PosPlatform.Core.Offline;
public interface IConnectivityService
{
event EventHandler<ConnectivityChangedEventArgs>? ConnectivityChanged;
bool IsOnline { get; }
Task<bool> CheckConnectionAsync(CancellationToken ct = default);
void StartMonitoring();
void StopMonitoring();
}
public class ConnectivityService : IConnectivityService, IDisposable
{
private readonly HttpClient _httpClient;
private readonly ILogger<ConnectivityService> _logger;
private readonly string _healthCheckUrl;
private readonly TimeSpan _checkInterval;
private Timer? _timer;
private bool _isOnline = true;
public event EventHandler<ConnectivityChangedEventArgs>? ConnectivityChanged;
public bool IsOnline => _isOnline;
public ConnectivityService(
HttpClient httpClient,
IConfiguration configuration,
ILogger<ConnectivityService> logger)
{
_httpClient = httpClient;
_logger = logger;
_healthCheckUrl = configuration["Api:HealthCheckUrl"] ?? "/health";
_checkInterval = TimeSpan.FromSeconds(
configuration.GetValue<int>("Connectivity:CheckIntervalSeconds", 30));
}
public async Task<bool> CheckConnectionAsync(CancellationToken ct = default)
{
try
{
var response = await _httpClient.GetAsync(_healthCheckUrl, ct);
var isOnline = response.IsSuccessStatusCode;
if (isOnline != _isOnline)
{
var previousState = _isOnline;
_isOnline = isOnline;
_logger.LogInformation(
"Connectivity changed: {Previous} -> {Current}",
previousState ? "Online" : "Offline",
isOnline ? "Online" : "Offline");
ConnectivityChanged?.Invoke(this, new ConnectivityChangedEventArgs(isOnline));
}
return isOnline;
}
catch (Exception ex)
{
_logger.LogDebug(ex, "Connectivity check failed");
if (_isOnline)
{
_isOnline = false;
ConnectivityChanged?.Invoke(this, new ConnectivityChangedEventArgs(false));
}
return false;
}
}
public void StartMonitoring()
{
_timer = new Timer(
async _ => await CheckConnectionAsync(),
null,
TimeSpan.Zero,
_checkInterval);
_logger.LogInformation(
"Connectivity monitoring started. Check interval: {Interval}s",
_checkInterval.TotalSeconds);
}
public void StopMonitoring()
{
_timer?.Dispose();
_timer = null;
_logger.LogInformation("Connectivity monitoring stopped");
}
public void Dispose()
{
StopMonitoring();
}
}
public class ConnectivityChangedEventArgs : EventArgs
{
public bool IsOnline { get; }
public ConnectivityChangedEventArgs(bool isOnline)
{
IsOnline = isOnline;
}
}
Day 9-10: Background Sync Service
Claude Command:
/dev-team implement background sync with retry logic
Implementation:
// src/PosPlatform.Infrastructure/Services/BackgroundSyncService.cs
using Microsoft.Extensions.Hosting;
namespace PosPlatform.Infrastructure.Services;
public class BackgroundSyncService : BackgroundService
{
private readonly IOfflineQueueService _queueService;
private readonly IConnectivityService _connectivity;
private readonly ILogger<BackgroundSyncService> _logger;
private readonly TimeSpan _syncInterval;
public BackgroundSyncService(
IOfflineQueueService queueService,
IConnectivityService connectivity,
IConfiguration configuration,
ILogger<BackgroundSyncService> logger)
{
_queueService = queueService;
_connectivity = connectivity;
_logger = logger;
_syncInterval = TimeSpan.FromSeconds(
configuration.GetValue<int>("Sync:IntervalSeconds", 60));
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("Background sync service started");
// Subscribe to connectivity changes for immediate sync
_connectivity.ConnectivityChanged += OnConnectivityChanged;
_connectivity.StartMonitoring();
while (!stoppingToken.IsCancellationRequested)
{
try
{
await _queueService.ProcessQueueAsync(stoppingToken);
}
catch (Exception ex)
{
_logger.LogError(ex, "Error processing sync queue");
}
await Task.Delay(_syncInterval, stoppingToken);
}
_connectivity.StopMonitoring();
_connectivity.ConnectivityChanged -= OnConnectivityChanged;
_logger.LogInformation("Background sync service stopped");
}
private async void OnConnectivityChanged(object? sender, ConnectivityChangedEventArgs e)
{
if (e.IsOnline)
{
_logger.LogInformation("Connection restored, triggering immediate sync");
try
{
await _queueService.ProcessQueueAsync(CancellationToken.None);
}
catch (Exception ex)
{
_logger.LogError(ex, "Error during immediate sync after reconnection");
}
}
}
}
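To activate the pieces above, they must be registered with the host's DI container. A minimal sketch of the wiring, assuming a Program.cs host builder, an `Api:BaseUrl` configuration key, and an `OfflineQueueService` concrete class (all three names are illustrative, not confirmed by this chapter):

```csharp
// Program.cs (sketch): wiring the offline sync stack.
// "Api:BaseUrl" is a hypothetical configuration key; ConnectivityService
// probes a relative health URL, so its HttpClient needs a BaseAddress.
builder.Services.AddSingleton<IConnectivityService>(sp =>
    new ConnectivityService(
        new HttpClient
        {
            BaseAddress = new Uri(builder.Configuration["Api:BaseUrl"]!),
            Timeout = TimeSpan.FromSeconds(5) // fail fast when the network is down
        },
        sp.GetRequiredService<IConfiguration>(),
        sp.GetRequiredService<ILogger<ConnectivityService>>()));

builder.Services.AddSingleton<IConflictResolver, ConflictResolver>();
builder.Services.AddSingleton<ISyncProcessor, SyncProcessor>();
// ISaleRepository, IInventoryService, ICustomerRepository and the offline
// storage are assumed registered elsewhere in the application.
builder.Services.AddSingleton<IOfflineQueueService, OfflineQueueService>();

// Runs ProcessQueueAsync on the configured interval and on reconnection.
builder.Services.AddHostedService<BackgroundSyncService>();
```

Registering `BackgroundSyncService` as a hosted service is what makes `ExecuteAsync` run for the lifetime of the process.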
22.4 Week 13-14: RFID Module (Optional)
Day 1-2: RFID Reader Abstraction
Claude Command:
/dev-team create RFID reader abstraction interface
Implementation:
// src/PosPlatform.Core/RFID/IRfidReader.cs
namespace PosPlatform.Core.RFID;
public interface IRfidReader : IDisposable
{
event EventHandler<TagReadEventArgs>? TagRead;
event EventHandler<ReaderStatusEventArgs>? StatusChanged;
string ReaderId { get; }
ReaderStatus Status { get; }
Task ConnectAsync(CancellationToken ct = default);
Task DisconnectAsync(CancellationToken ct = default);
Task StartInventoryAsync(CancellationToken ct = default);
Task StopInventoryAsync(CancellationToken ct = default);
Task<IReadOnlyList<RfidTag>> ReadTagsAsync(TimeSpan timeout, CancellationToken ct = default);
Task<bool> WriteTagAsync(string epc, byte[] data, CancellationToken ct = default);
}
public class TagReadEventArgs : EventArgs
{
public RfidTag Tag { get; }
public DateTime ReadAt { get; }
public TagReadEventArgs(RfidTag tag)
{
Tag = tag;
ReadAt = DateTime.UtcNow;
}
}
public class ReaderStatusEventArgs : EventArgs
{
public ReaderStatus Status { get; }
public string? Message { get; }
public ReaderStatusEventArgs(ReaderStatus status, string? message = null)
{
Status = status;
Message = message;
}
}
public class RfidTag
{
public string Epc { get; set; } = string.Empty;
public string? Tid { get; set; }
public int Rssi { get; set; }
public int ReadCount { get; set; }
public byte[]? UserData { get; set; }
public DateTime FirstSeen { get; set; }
public DateTime LastSeen { get; set; }
// Parsed product info (if encoded)
public string? Sku { get; set; }
public string? SerialNumber { get; set; }
}
public enum ReaderStatus
{
Disconnected,
Connecting,
Connected,
Reading,
Error
}
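Before the bulk-scanning service, it helps to see the abstraction in use. A hedged usage sketch: the `CreateReader` factory is hypothetical (a vendor-specific implementation would provide it); everything else uses `IRfidReader` exactly as defined above.

```csharp
// Sketch: event-driven capture with any IRfidReader implementation.
IRfidReader reader = CreateReader(); // hypothetical vendor-specific factory
var seen = new ConcurrentDictionary<string, RfidTag>();

reader.TagRead += (_, e) => seen[e.Tag.Epc] = e.Tag; // dedupe by EPC
reader.StatusChanged += (_, e) =>
    Console.WriteLine($"Reader {reader.ReaderId}: {e.Status} {e.Message}");

await reader.ConnectAsync();
await reader.StartInventoryAsync();
await Task.Delay(TimeSpan.FromSeconds(5)); // fixed scan window
await reader.StopInventoryAsync();
await reader.DisconnectAsync();

Console.WriteLine($"Unique tags seen: {seen.Count}");
```

The event-driven path suits continuous monitoring (e.g. a portal reader at a stockroom door); the polling `ReadTagsAsync` path used by the inventory service below suits bounded, operator-initiated scans.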
Day 3-4: Bulk Inventory Scanning
Claude Command:
/dev-team implement bulk inventory scanning with RFID
Implementation:
// src/PosPlatform.Core/RFID/RfidInventoryService.cs
namespace PosPlatform.Core.RFID;
public interface IRfidInventoryService
{
Task<InventoryScanResult> ScanInventoryAsync(
Guid locationId,
TimeSpan scanDuration,
CancellationToken ct = default);
Task<InventoryComparisonResult> CompareWithSystemAsync(
Guid locationId,
IReadOnlyList<RfidTag> scannedTags,
CancellationToken ct = default);
}
public class RfidInventoryService : IRfidInventoryService
{
private readonly IRfidReader _reader;
private readonly IInventoryRepository _inventoryRepository;
private readonly IRfidTagDecoder _tagDecoder;
private readonly ILogger<RfidInventoryService> _logger;
public RfidInventoryService(
IRfidReader reader,
IInventoryRepository inventoryRepository,
IRfidTagDecoder tagDecoder,
ILogger<RfidInventoryService> logger)
{
_reader = reader;
_inventoryRepository = inventoryRepository;
_tagDecoder = tagDecoder;
_logger = logger;
}
public async Task<InventoryScanResult> ScanInventoryAsync(
Guid locationId,
TimeSpan scanDuration,
CancellationToken ct = default)
{
var startTime = DateTime.UtcNow;
var allTags = new Dictionary<string, RfidTag>();
_logger.LogInformation(
"Starting RFID inventory scan at location {Location} for {Duration}s",
locationId, scanDuration.TotalSeconds);
await _reader.StartInventoryAsync(ct);
var endTime = DateTime.UtcNow.Add(scanDuration);
while (DateTime.UtcNow < endTime && !ct.IsCancellationRequested)
{
var tags = await _reader.ReadTagsAsync(TimeSpan.FromSeconds(1), ct);
foreach (var tag in tags)
{
if (allTags.TryGetValue(tag.Epc, out var existing))
{
existing.ReadCount += tag.ReadCount;
existing.LastSeen = tag.LastSeen;
existing.Rssi = Math.Max(existing.Rssi, tag.Rssi);
}
else
{
// Decode SKU from tag
tag.Sku = await _tagDecoder.DecodeSkuAsync(tag.Epc, ct);
allTags[tag.Epc] = tag;
}
}
}
await _reader.StopInventoryAsync(ct);
var elapsed = DateTime.UtcNow - startTime;
_logger.LogInformation(
"RFID scan complete. Found {Count} unique tags in {Elapsed}s",
allTags.Count, elapsed.TotalSeconds);
return new InventoryScanResult(
locationId,
allTags.Values.ToList(),
startTime,
elapsed);
}
public async Task<InventoryComparisonResult> CompareWithSystemAsync(
Guid locationId,
IReadOnlyList<RfidTag> scannedTags,
CancellationToken ct = default)
{
// Group scanned tags by SKU
var scannedBySku = scannedTags
.Where(t => !string.IsNullOrEmpty(t.Sku))
.GroupBy(t => t.Sku!)
.ToDictionary(g => g.Key, g => g.Count());
// Get system inventory
var systemInventory = await _inventoryRepository
.GetByLocationAsync(locationId, ct);
var systemBySku = systemInventory
.ToDictionary(i => i.Sku, i => i.QuantityOnHand);
var discrepancies = new List<InventoryDiscrepancy>();
// Find discrepancies
var allSkus = scannedBySku.Keys.Union(systemBySku.Keys);
foreach (var sku in allSkus)
{
var scanned = scannedBySku.GetValueOrDefault(sku, 0);
var system = systemBySku.GetValueOrDefault(sku, 0);
if (scanned != system)
{
discrepancies.Add(new InventoryDiscrepancy(
sku,
system,
scanned,
scanned - system));
}
}
return new InventoryComparisonResult(
locationId,
scannedTags.Count,
systemInventory.Sum(i => i.QuantityOnHand),
discrepancies);
}
}
public record InventoryScanResult(
Guid LocationId,
IReadOnlyList<RfidTag> Tags,
DateTime StartedAt,
TimeSpan Duration)
{
public int TotalTags => Tags.Count;
public int UniqueSkus => Tags.Where(t => t.Sku != null).Select(t => t.Sku).Distinct().Count();
}
public record InventoryComparisonResult(
Guid LocationId,
int ScannedCount,
int SystemCount,
IReadOnlyList<InventoryDiscrepancy> Discrepancies)
{
public int MatchCount => ScannedCount - Discrepancies.Sum(d => Math.Abs(d.Variance));
public decimal AccuracyPercent => SystemCount > 0
? (decimal)MatchCount / SystemCount * 100
: 100;
}
public record InventoryDiscrepancy(
string Sku,
int SystemQuantity,
int ScannedQuantity,
int Variance);
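`RfidInventoryService` depends on an `IRfidTagDecoder` that is not defined in this chapter. One possible shape is sketched below: the interface signature matches the single call site above, while the prefix-map lookup strategy is purely illustrative.

```csharp
public interface IRfidTagDecoder
{
    // Returns the SKU encoded in (or mapped from) the EPC, or null if unknown.
    Task<string?> DecodeSkuAsync(string epc, CancellationToken ct = default);
}

// Sketch: prefix-table lookup. A production decoder would instead parse the
// EPC binary header (e.g. SGTIN-96) and resolve the item reference to a SKU.
public class PrefixMapTagDecoder : IRfidTagDecoder
{
    private readonly IReadOnlyDictionary<string, string> _prefixToSku;

    public PrefixMapTagDecoder(IReadOnlyDictionary<string, string> prefixToSku)
        => _prefixToSku = prefixToSku;

    public Task<string?> DecodeSkuAsync(string epc, CancellationToken ct = default)
    {
        var sku = _prefixToSku
            .Where(kv => epc.StartsWith(kv.Key, StringComparison.OrdinalIgnoreCase))
            .OrderByDescending(kv => kv.Key.Length) // longest prefix wins
            .Select(kv => kv.Value)
            .FirstOrDefault();
        return Task.FromResult(sku);
    }
}
```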
22.5 Integration Testing
Customer Domain Tests
# Run customer tests
dotnet test --filter "FullyQualifiedName~Customer"
# Manual API test
curl -X POST http://localhost:5100/api/customers \
-H "Content-Type: application/json" \
-H "X-Tenant-Code: DEMO" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"firstName": "John",
"lastName": "Doe",
"email": "john@example.com",
"phone": "555-123-4567"
}'
Offline Sync Tests
# Simulate offline mode
# 1. Create sale while "offline"
# 2. Check pending queue
# 3. Restore connection
# 4. Verify sync completed
dotnet test --filter "FullyQualifiedName~OfflineSync"
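The four manual steps above translate directly into an automated xUnit test. A sketch against this chapter's interfaces: `FakeConnectivity`, `InMemoryStorage`, and `RecordingSyncProcessor` are assumed test doubles, and the queue service's concrete name and constructor order are assumed to match its field assignments.

```csharp
[Fact]
public async Task Queue_IsDrained_WhenConnectionIsRestored()
{
    var connectivity = new FakeConnectivity { Online = false }; // assumed test double
    var storage = new InMemoryStorage();                        // assumed test double
    var processor = new RecordingSyncProcessor();               // assumed test double
    var queue = new OfflineQueueService(
        storage, connectivity, processor,
        NullLogger<OfflineQueueService>.Instance);

    // 1. Create a sale while "offline"
    await queue.EnqueueAsync(
        OfflineTransactionType.Sale,
        new OfflineSalePayload { SaleId = Guid.NewGuid() });

    // 2. Check pending queue: nothing should sync while offline
    await queue.ProcessQueueAsync();
    Assert.Equal(1, await queue.GetPendingCountAsync());

    // 3. Restore connection
    connectivity.Online = true;

    // 4. Verify sync completed
    await queue.ProcessQueueAsync();
    Assert.Equal(0, await queue.GetPendingCountAsync());
    Assert.Single(processor.Processed);
}
```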
22.6 Performance Testing
Customer Lookup Performance
// tests/PosPlatform.Api.Tests/CustomerLookupPerformanceTests.cs
[Fact]
public async Task CustomerLookup_ShouldCompleteIn200ms()
{
var stopwatch = Stopwatch.StartNew();
var result = await _lookupService.LookupAsync("555-123-4567");
stopwatch.Stop();
Assert.NotNull(result);
Assert.True(stopwatch.ElapsedMilliseconds < 200,
$"Lookup took {stopwatch.ElapsedMilliseconds}ms, expected < 200ms");
}
22.7 Next Steps
Proceed to Chapter 23: Phase 4 - POS Client Implementation for:
- .NET MAUI Blazor Hybrid project setup
- Offline-first local SQLite database
- Touch-optimized sale screen UI
- Hardware integration and distribution
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 22 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 23: Phase 4 - POS Client Implementation
23.1 Overview
Phase 4 delivers the POS Client Terminal - the revenue-generating touchpoint where customer transactions occur. This 6-week dedicated phase (Weeks 14-19) builds the native, offline-capable application that cashiers and floor staff use daily.
Key Insight: The Web Admin Portal is the operational hub for platform management; the POS Client is the operational hub for in-store transactions. Both deserve equal architectural attention.
Why POS Client Needs Its Own Phase
| Aspect | Web Admin Portal | POS Client Terminal |
|---|---|---|
| Primary Users | Store managers, admins | Cashiers, floor staff |
| Technology | Blazor Server (web) | .NET MAUI Blazor Hybrid (native) |
| Connectivity | Always online | Offline-first required |
| Hardware | None | Printers, scanners, cash drawers, payment terminals |
| UI Complexity | Standard CRUD forms | Touch-optimized, configurable layouts |
| Sessions | Long (hours) | Short (per transaction) |
| Critical Path | Business management | Revenue generation |
23.2 Technology Decision: .NET MAUI Blazor Hybrid
Why Not PWA?
| Factor | PWA | Native/Hybrid | Winner |
|---|---|---|---|
| Offline reliability | iOS evicts data after 7 days unused | True SQLite, persistent | Native |
| Receipt printers | Requires bridge app | Direct P/Invoke | Native |
| Cash drawers | No Web API exists | Native access | Native |
| Barcode scanning | Safari lacks the Barcode Detection API | ZXing.Net.Maui | Native |
| Update deployment | Instant (web push) | Portal download | PWA |
| Development cost | Lower | Higher | PWA |
Verdict: PWA is insufficient for mission-critical POS. Native/Hybrid required.
Why .NET MAUI Blazor Hybrid?
| Rationale | Benefit |
|---|---|
| Aligns with stack | Backend uses ASP.NET Core + Blazor |
| Matches ADR-002 | Offline-first SQLite architecture maps directly |
| Single codebase | Android tablets, Windows back-office, macOS |
| Hardware access | Native APIs for printers, scanners |
| Skill reuse | Same Blazor components for web admin portal |
23.3 Phase 4 Scope
POS CLIENT SCOPE
================
1. UI/UX DESIGN
├── Touch-optimized sale screen
├── Product grid with categories
├── Cart management
├── Customer lookup
├── Retail Pro-style drag-and-drop layout configuration
└── Cashier vs Manager mode switching
2. CORE FEATURES
├── Sale processing (full workflow)
├── Payment handling (cash, card, split)
├── Receipt printing
├── Returns/exchanges
├── Discounts/promotions
├── Gift cards
├── Customer loyalty integration
└── End-of-day operations
3. HARDWARE INTEGRATION
├── Receipt printers (Epson, Star Micronics)
├── Barcode scanners (USB, Bluetooth)
├── Cash drawers (kick signals)
├── Payment terminals (Stripe Terminal)
├── Customer-facing displays
└── RFID readers (Raptag integration)
4. OFFLINE OPERATIONS
├── Local SQLite database
├── Transaction queue
├── Sync engine
├── Conflict resolution
└── Offline payment handling
5. DISTRIBUTION & UPDATES
├── Portal-based download
├── Registration flow
├── Auto-update mechanism
└── Version management
6. CONFIGURATION SYSTEM
├── Drag-and-drop UI builder (Retail Pro style)
├── Quick-access button configuration
├── Receipt template customization
├── Per-location settings
└── Hardware profile management
23.4 Week 14: Project Setup & Core UI
Day 1-2: .NET MAUI Blazor Hybrid Project
Objective: Create the project structure with offline-first architecture.
Claude Command:
/dev-team create .NET MAUI Blazor Hybrid project for POS client
Project Structure:
RapOS.PosClient/
├── RapOS.PosClient/
│ ├── App.xaml
│ ├── MauiProgram.cs
│ ├── Platforms/
│ │ ├── Android/
│ │ ├── iOS/
│ │ ├── MacCatalyst/
│ │ └── Windows/
│ ├── Resources/
│ └── wwwroot/
├── RapOS.PosClient.Core/
│ ├── Models/
│ ├── Services/
│ ├── Data/
│ └── Interfaces/
├── RapOS.PosClient.UI/
│ ├── Components/
│ │ ├── Layout/
│ │ ├── Sale/
│ │ ├── Products/
│ │ └── Shared/
│ ├── Pages/
│ └── Themes/
└── RapOS.PosClient.Hardware/
├── Printers/
├── Scanners/
├── CashDrawers/
└── Payments/
Implementation:
// MauiProgram.cs
using Microsoft.Extensions.Logging;
namespace RapOS.PosClient;
public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureFonts(fonts =>
{
fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
});
builder.Services.AddMauiBlazorWebView();
#if DEBUG
builder.Services.AddBlazorWebViewDeveloperTools();
builder.Logging.AddDebug();
#endif
// Core services
builder.Services.AddSingleton<ILocalDatabase, SqliteDatabase>();
builder.Services.AddSingleton<ISyncService, SyncService>();
builder.Services.AddSingleton<ITerminalContext, TerminalContext>();
// Hardware services
builder.Services.AddSingleton<IPrinterService, EscPosPrinterService>();
builder.Services.AddSingleton<IScannerService, BarcodeScannerService>();
builder.Services.AddSingleton<ICashDrawerService, CashDrawerService>();
// Sale services
builder.Services.AddScoped<ISaleService, SaleService>();
builder.Services.AddScoped<ICartService, CartService>();
builder.Services.AddScoped<IPaymentService, PaymentService>();
// Configuration
builder.Services.AddSingleton<ILayoutService, LayoutService>();
builder.Services.AddSingleton<ISettingsService, SettingsService>();
return builder.Build();
}
}
Day 3-4: SQLite Local Database Schema
Objective: Implement offline-first local database with sync support.
Claude Command:
/dev-team create SQLite schema for offline POS operations
Implementation:
// RapOS.PosClient.Core/Data/SqliteDatabase.cs
using Microsoft.Data.Sqlite;
using SQLitePCL;
namespace RapOS.PosClient.Core.Data;
public interface ILocalDatabase
{
Task InitializeAsync();
Task<SqliteConnection> GetConnectionAsync();
Task ExecuteAsync(string sql, object? parameters = null);
Task<T?> QuerySingleAsync<T>(string sql, object? parameters = null);
Task<List<T>> QueryAsync<T>(string sql, object? parameters = null);
}
public class SqliteDatabase : ILocalDatabase
{
private readonly string _dbPath;
private SqliteConnection? _connection;
public SqliteDatabase()
{
_dbPath = Path.Combine(
FileSystem.AppDataDirectory,
"rapos_pos.db");
}
public async Task InitializeAsync()
{
Batteries.Init();
_connection = new SqliteConnection($"Data Source={_dbPath}");
await _connection.OpenAsync();
await CreateTablesAsync();
}
private async Task CreateTablesAsync()
{
// Terminal configuration
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS terminal_config (
id INTEGER PRIMARY KEY,
tenant_code TEXT NOT NULL,
location_id TEXT NOT NULL,
terminal_id TEXT NOT NULL,
terminal_name TEXT NOT NULL,
api_endpoint TEXT NOT NULL,
api_key TEXT NOT NULL,
layout_config TEXT,
last_sync_at TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
)");
// Products cache
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS products (
id TEXT PRIMARY KEY,
sku TEXT NOT NULL,
name TEXT NOT NULL,
description TEXT,
category_id TEXT,
category_name TEXT,
base_price REAL NOT NULL,
tax_rate REAL DEFAULT 0,
image_url TEXT,
barcode TEXT,
is_active INTEGER DEFAULT 1,
quantity_on_hand INTEGER DEFAULT 0,
synced_at TEXT NOT NULL,
UNIQUE(sku)
)");
// Categories cache
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS categories (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
parent_id TEXT,
display_order INTEGER DEFAULT 0,
color TEXT,
icon TEXT,
synced_at TEXT NOT NULL
)");
// Customers cache
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS customers (
id TEXT PRIMARY KEY,
first_name TEXT,
last_name TEXT,
email TEXT,
phone TEXT,
loyalty_points INTEGER DEFAULT 0,
loyalty_tier TEXT,
synced_at TEXT NOT NULL
)");
// Local sales (pending sync)
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS sales (
id TEXT PRIMARY KEY,
local_id INTEGER, -- SQLite allows AUTOINCREMENT only on an INTEGER PRIMARY KEY
status TEXT NOT NULL DEFAULT 'pending',
customer_id TEXT,
cashier_id TEXT NOT NULL,
subtotal REAL NOT NULL,
tax_total REAL NOT NULL,
discount_total REAL DEFAULT 0,
grand_total REAL NOT NULL,
created_at TEXT NOT NULL,
completed_at TEXT,
synced_at TEXT,
sync_attempts INTEGER DEFAULT 0,
sync_error TEXT
)");
// Sale line items
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS sale_items (
id TEXT PRIMARY KEY,
sale_id TEXT NOT NULL,
product_id TEXT NOT NULL,
sku TEXT NOT NULL,
name TEXT NOT NULL,
quantity INTEGER NOT NULL,
unit_price REAL NOT NULL,
discount_amount REAL DEFAULT 0,
tax_amount REAL NOT NULL,
line_total REAL NOT NULL,
FOREIGN KEY (sale_id) REFERENCES sales(id)
)");
// Payments
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS payments (
id TEXT PRIMARY KEY,
sale_id TEXT NOT NULL,
method TEXT NOT NULL,
amount REAL NOT NULL,
reference TEXT,
card_last_four TEXT,
card_brand TEXT,
created_at TEXT NOT NULL,
FOREIGN KEY (sale_id) REFERENCES sales(id)
)");
// Sync queue for offline transactions
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS sync_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
entity_type TEXT NOT NULL,
entity_id TEXT NOT NULL,
operation TEXT NOT NULL,
payload TEXT NOT NULL,
priority INTEGER DEFAULT 0,
attempts INTEGER DEFAULT 0,
last_attempt_at TEXT,
error TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(entity_type, entity_id, operation)
)");
// Quick access buttons configuration
await ExecuteAsync(@"
CREATE TABLE IF NOT EXISTS quick_access_buttons (
id INTEGER PRIMARY KEY AUTOINCREMENT,
position INTEGER NOT NULL,
product_id TEXT,
category_id TEXT,
label TEXT NOT NULL,
color TEXT,
icon TEXT,
action_type TEXT NOT NULL
)");
// Create indexes for performance
await ExecuteAsync("CREATE INDEX IF NOT EXISTS idx_products_sku ON products(sku)");
await ExecuteAsync("CREATE INDEX IF NOT EXISTS idx_products_barcode ON products(barcode)");
await ExecuteAsync("CREATE INDEX IF NOT EXISTS idx_products_category ON products(category_id)");
await ExecuteAsync("CREATE INDEX IF NOT EXISTS idx_sales_status ON sales(status)");
await ExecuteAsync("CREATE INDEX IF NOT EXISTS idx_sync_queue_priority ON sync_queue(priority DESC, created_at ASC)");
}
public async Task<SqliteConnection> GetConnectionAsync()
{
if (_connection == null)
await InitializeAsync();
return _connection!;
}
public async Task ExecuteAsync(string sql, object? parameters = null)
{
    var conn = await GetConnectionAsync();
    using var cmd = conn.CreateCommand();
    cmd.CommandText = sql;
    // Bind anonymous-object properties as named parameters (@Name)
    if (parameters != null)
    {
        foreach (var prop in parameters.GetType().GetProperties())
            cmd.Parameters.AddWithValue("@" + prop.Name, prop.GetValue(parameters) ?? DBNull.Value);
    }
    await cmd.ExecuteNonQueryAsync();
}
public Task<T?> QuerySingleAsync<T>(string sql, object? parameters = null)
{
    // Dapper-style reflection mapping goes here; stubbed for brevity
    throw new NotImplementedException();
}
public Task<List<T>> QueryAsync<T>(string sql, object? parameters = null)
{
    // Dapper-style reflection mapping goes here; stubbed for brevity
    throw new NotImplementedException();
}
}
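A typical write path through `ILocalDatabase` inserts the sale row, then enqueues a sync record. A sketch, assuming `ExecuteAsync` binds the anonymous object's properties as named `@` parameters, and using a hypothetical `SaleRecord` model:

```csharp
// Sketch: persist a completed sale locally and queue it for background sync.
// SaleRecord is a hypothetical model; System.Text.Json is used for the payload.
public async Task SaveSaleOfflineAsync(ILocalDatabase db, SaleRecord sale)
{
    await db.ExecuteAsync(@"
        INSERT INTO sales (id, status, cashier_id, subtotal, tax_total, grand_total, created_at)
        VALUES (@Id, 'pending', @CashierId, @Subtotal, @TaxTotal, @GrandTotal, @CreatedAt)",
        new
        {
            Id = sale.Id.ToString(),
            CashierId = sale.CashierId.ToString(),
            sale.Subtotal,
            sale.TaxTotal,
            sale.GrandTotal,
            CreatedAt = DateTime.UtcNow.ToString("O") // ISO 8601 sorts lexically
        });

    // INSERT OR IGNORE keeps the queue idempotent thanks to the
    // UNIQUE(entity_type, entity_id, operation) constraint on sync_queue.
    await db.ExecuteAsync(@"
        INSERT OR IGNORE INTO sync_queue (entity_type, entity_id, operation, payload, created_at)
        VALUES ('sale', @EntityId, 'create', @Payload, CURRENT_TIMESTAMP)",
        new { EntityId = sale.Id.ToString(), Payload = JsonSerializer.Serialize(sale) });
}
```

Wrapping both statements in a single SQLite transaction would be a natural hardening step, so a crash between the two inserts cannot leave a sale that never syncs.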
Day 5: Main Sale Screen UI
Objective: Build the primary POS interface with product grid and cart.
Claude Command:
/dev-team create main sale screen component with responsive touch layout
Implementation:
@* RapOS.PosClient.UI/Pages/SaleScreen.razor *@
@page "/sale"
@inject ICartService CartService
@inject IProductService ProductService
@inject ICategoryService CategoryService
@inject IPaymentService PaymentService
@inject ILayoutService LayoutService
<div class="sale-screen @(IsDarkMode ? "dark" : "light")">
@* Header Bar *@
<header class="pos-header">
<div class="header-left">
<img src="images/rapos-logo.svg" class="logo" alt="RapOS" />
<span class="location-name">@CurrentLocation</span>
</div>
<div class="header-center">
<div class="search-box">
<span class="search-icon">🔍</span>
<input type="text"
@bind="SearchQuery"
@bind:event="oninput"
@onkeydown="HandleSearchKeydown"
placeholder="Search or scan barcode..." />
</div>
</div>
<div class="header-right">
<button class="mode-toggle" @onclick="ToggleMode">
@(IsManagerMode ? "Manager" : "Cashier")
</button>
<button class="theme-toggle" @onclick="ToggleTheme">
@(IsDarkMode ? "☀️" : "🌙")
</button>
<span class="cashier-name">@CashierName</span>
</div>
</header>
@* Main Content Area *@
<main class="pos-main">
@* Left Panel: Products *@
<section class="products-panel">
@* Category Quick Access *@
<div class="category-bar">
<button class="category-btn @(SelectedCategory == null ? "active" : "")"
@onclick="() => SelectCategory(null)">
All
</button>
@foreach (var category in Categories)
{
<button class="category-btn @(SelectedCategory?.Id == category.Id ? "active" : "")"
style="--cat-color: @category.Color"
@onclick="() => SelectCategory(category)">
@category.Name
</button>
}
</div>
@* Product Grid/List *@
<div class="products-container @(IsGridView ? "grid-view" : "list-view")">
@foreach (var product in FilteredProducts)
{
<div class="product-card" @onclick="() => AddToCart(product)">
@if (IsGridView)
{
<div class="product-image">
@if (!string.IsNullOrEmpty(product.ImageUrl))
{
<img src="@product.ImageUrl" alt="@product.Name" />
}
else
{
<span class="placeholder-icon">📦</span>
}
</div>
}
<div class="product-info">
<span class="product-name">@product.Name</span>
<span class="product-sku">@product.Sku</span>
<span class="product-price">@product.BasePrice.ToString("C")</span>
</div>
@if (product.QuantityOnHand <= 3)
{
<span class="low-stock-badge">Low Stock</span>
}
</div>
}
</div>
@* View Toggle *@
<div class="view-toggle">
<button class="@(IsGridView ? "active" : "")" @onclick="() => IsGridView = true">
Grid
</button>
<button class="@(!IsGridView ? "active" : "")" @onclick="() => IsGridView = false">
List
</button>
</div>
</section>
@* Right Panel: Cart *@
<section class="cart-panel">
@* Customer Section *@
<div class="customer-section">
@if (CurrentCustomer != null)
{
<div class="customer-info">
<span class="customer-name">@CurrentCustomer.FullName</span>
<span class="loyalty-points">@CurrentCustomer.LoyaltyPoints pts</span>
<button class="remove-customer" @onclick="ClearCustomer">✕</button>
</div>
}
else
{
<button class="add-customer-btn" @onclick="ShowCustomerLookup">
+ Add Customer
</button>
}
</div>
@* Cart Items *@
<div class="cart-items">
@if (!CartItems.Any())
{
<div class="empty-cart">
<span class="empty-icon">🛒</span>
<p>Cart is empty</p>
<p class="hint">Scan or select products to add</p>
</div>
}
else
{
@foreach (var item in CartItems)
{
<div class="cart-item">
<div class="item-info">
<span class="item-name">@item.Name</span>
<span class="item-sku">@item.Sku</span>
</div>
<div class="item-quantity">
<button @onclick="() => UpdateQuantity(item, -1)">−</button>
<span>@item.Quantity</span>
<button @onclick="() => UpdateQuantity(item, 1)">+</button>
</div>
<div class="item-price">
@item.LineTotal.ToString("C")
</div>
<button class="remove-item" @onclick="() => RemoveItem(item)">
🗑️
</button>
</div>
}
}
</div>
@* Totals *@
<div class="totals-section">
<div class="total-row">
<span>Subtotal</span>
<span>@Subtotal.ToString("C")</span>
</div>
@if (DiscountTotal > 0)
{
<div class="total-row discount">
<span>Discount</span>
<span>-@DiscountTotal.ToString("C")</span>
</div>
}
<div class="total-row">
<span>Tax</span>
<span>@TaxTotal.ToString("C")</span>
</div>
<div class="total-row grand-total">
<span>Total</span>
<span>@GrandTotal.ToString("C")</span>
</div>
</div>
@* Action Buttons *@
<div class="cart-actions">
<button class="action-btn secondary" @onclick="ShowDiscountDialog">
Discount
</button>
<button class="action-btn secondary" @onclick="HoldSale" disabled="@(!CartItems.Any())">
Hold
</button>
<button class="action-btn primary pay-btn"
@onclick="ProceedToPayment"
disabled="@(!CartItems.Any())">
Pay @GrandTotal.ToString("C")
</button>
</div>
</section>
</main>
@* Footer: Quick Access & Function Keys *@
<footer class="pos-footer">
<div class="quick-access">
@foreach (var btn in QuickAccessButtons)
{
<button class="quick-btn"
style="--btn-color: @btn.Color"
@onclick="() => HandleQuickAction(btn)">
@btn.Label
</button>
}
</div>
<div class="function-keys">
<button class="fn-key" @onclick="OpenDrawer">Open Drawer</button>
<button class="fn-key" @onclick="ShowReturns">Returns</button>
<button class="fn-key" @onclick="ShowHeldSales">Held Sales</button>
<button class="fn-key" @onclick="ReprintReceipt">Reprint</button>
@if (IsManagerMode)
{
<button class="fn-key manager" @onclick="ShowReports">Reports</button>
<button class="fn-key manager" @onclick="ShowSettings">Settings</button>
}
</div>
</footer>
@* Offline Indicator *@
@if (!IsOnline)
{
<div class="offline-banner">
<span>⚡ Offline Mode - @PendingSyncCount transactions pending</span>
</div>
}
</div>
@code {
private List<Product> Products = new();
private List<Category> Categories = new();
private List<CartItem> CartItems = new();
private List<QuickAccessButton> QuickAccessButtons = new();
private Category? SelectedCategory;
private Customer? CurrentCustomer;
private string SearchQuery = string.Empty;
private bool IsGridView = true;
private bool IsDarkMode = true;
private bool IsManagerMode = false;
private bool IsOnline = true;
private int PendingSyncCount = 0;
private string CurrentLocation => "Main Store";
private string CashierName => "Jane D.";
private IEnumerable<Product> FilteredProducts => Products
.Where(p => SelectedCategory == null || p.CategoryId == SelectedCategory.Id)
.Where(p => string.IsNullOrEmpty(SearchQuery) ||
p.Name.Contains(SearchQuery, StringComparison.OrdinalIgnoreCase) ||
p.Sku.Contains(SearchQuery, StringComparison.OrdinalIgnoreCase) ||
p.Barcode == SearchQuery);
private decimal Subtotal => CartItems.Sum(i => i.UnitPrice * i.Quantity);
private decimal DiscountTotal => CartItems.Sum(i => i.DiscountAmount);
private decimal TaxTotal => CartItems.Sum(i => i.TaxAmount);
private decimal GrandTotal => Subtotal - DiscountTotal + TaxTotal;
protected override async Task OnInitializedAsync()
{
Products = await ProductService.GetProductsAsync();
Categories = await CategoryService.GetCategoriesAsync();
QuickAccessButtons = await LayoutService.GetQuickAccessButtonsAsync();
// Check connectivity
IsOnline = await CheckConnectivity();
PendingSyncCount = await CartService.GetPendingSyncCountAsync();
}
private async Task AddToCart(Product product)
{
await CartService.AddItemAsync(product);
CartItems = await CartService.GetCartItemsAsync();
}
private async Task UpdateQuantity(CartItem item, int delta)
{
await CartService.UpdateQuantityAsync(item.Id, item.Quantity + delta);
CartItems = await CartService.GetCartItemsAsync();
}
private async Task RemoveItem(CartItem item)
{
await CartService.RemoveItemAsync(item.Id);
CartItems = await CartService.GetCartItemsAsync();
}
private Task ProceedToPayment()
{
    // Navigate to payment screen
    // PaymentService.InitiatePayment(GrandTotal, CartItems);
    return Task.CompletedTask;
}
// Additional methods for all functionality...
}
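The `FilteredProducts` projection combines a category filter with a text match across name, SKU, and exact barcode. The predicate logic is worth making concrete, since barcode matching is intentionally exact (a scan must hit one product) while name and SKU are case-insensitive substring matches. A quick Python sketch of the same rules (illustrative only; the field names mirror the Razor code, not a real API):

```python
def matches(product, selected_category_id, query):
    """Mirror of FilteredProducts: category filter AND text match."""
    if selected_category_id is not None and product["category_id"] != selected_category_id:
        return False
    if not query:
        return True  # empty search box: category filter alone applies
    q = query.lower()
    return (q in product["name"].lower()
            or q in product["sku"].lower()
            or product["barcode"] == query)  # barcode must match exactly

products = [
    {"name": "Espresso Beans", "sku": "COF-001", "barcode": "4006381333931", "category_id": 1},
    {"name": "Milk 1L",        "sku": "DRY-002", "barcode": "5000000000000", "category_id": 2},
]

# Empty query: only the category filter applies
assert [p["sku"] for p in products if matches(p, 1, "")] == ["COF-001"]
# A scanned barcode matches exactly, not as a substring of name or SKU
assert [p["sku"] for p in products if matches(p, None, "4006381333931")] == ["COF-001"]
```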
23.5 Week 15: Sale Workflow
Day 1-2: Product Browsing & Search
Objective: Implement efficient product lookup with category navigation.
Claude Command:
/dev-team create product service with caching and search
Implementation:
// RapOS.PosClient.Core/Services/ProductService.cs
namespace RapOS.PosClient.Core.Services;
public interface IProductService
{
Task<List<Product>> GetProductsAsync();
Task<List<Product>> SearchAsync(string query);
Task<Product?> GetByBarcodeAsync(string barcode);
Task<Product?> GetBySkuAsync(string sku);
Task RefreshCacheAsync();
}
public class ProductService : IProductService
{
private readonly ILocalDatabase _db;
private readonly IApiClient _api;
private readonly IConnectivityService _connectivity;
private List<Product>? _cachedProducts;
public ProductService(
ILocalDatabase db,
IApiClient api,
IConnectivityService connectivity)
{
_db = db;
_api = api;
_connectivity = connectivity;
}
public async Task<List<Product>> GetProductsAsync()
{
if (_cachedProducts != null)
return _cachedProducts;
_cachedProducts = await _db.QueryAsync<Product>(
"SELECT * FROM products WHERE is_active = 1 ORDER BY name");
return _cachedProducts;
}
public async Task<List<Product>> SearchAsync(string query)
{
if (string.IsNullOrWhiteSpace(query))
return await GetProductsAsync();
var searchPattern = $"%{query}%";
return await _db.QueryAsync<Product>(@"
SELECT * FROM products
WHERE is_active = 1
AND (name LIKE @pattern
OR sku LIKE @pattern
OR barcode = @exact)
ORDER BY
CASE WHEN barcode = @exact THEN 0
WHEN sku LIKE @pattern THEN 1
ELSE 2 END,
name
LIMIT 50",
new { pattern = searchPattern, exact = query });
}
public async Task<Product?> GetByBarcodeAsync(string barcode)
{
return await _db.QuerySingleAsync<Product>(
"SELECT * FROM products WHERE barcode = @barcode AND is_active = 1",
new { barcode });
}
public async Task<Product?> GetBySkuAsync(string sku)
{
return await _db.QuerySingleAsync<Product>(
"SELECT * FROM products WHERE sku = @sku AND is_active = 1",
new { sku });
}
public async Task RefreshCacheAsync()
{
if (!await _connectivity.IsOnlineAsync())
return;
var lastSync = await GetLastSyncTimeAsync();
var products = await _api.GetAsync<List<Product>>(
$"/api/products?modifiedAfter={lastSync:O}");
foreach (var product in products)
{
await _db.ExecuteAsync(@"
INSERT OR REPLACE INTO products
(id, sku, name, description, category_id, category_name,
base_price, tax_rate, image_url, barcode, is_active,
quantity_on_hand, synced_at)
VALUES
(@Id, @Sku, @Name, @Description, @CategoryId, @CategoryName,
@BasePrice, @TaxRate, @ImageUrl, @Barcode, @IsActive,
@QuantityOnHand, @now)",
// Flattened parameters: a nested { product, ... } object would not
// bind @Id, @Sku, etc. with Dapper-style parameter mapping.
new
{
    product.Id, product.Sku, product.Name, product.Description,
    product.CategoryId, product.CategoryName, product.BasePrice,
    product.TaxRate, product.ImageUrl, product.Barcode, product.IsActive,
    product.QuantityOnHand, now = DateTime.UtcNow
});
}
_cachedProducts = null; // Invalidate cache
}
private async Task<DateTime> GetLastSyncTimeAsync()
{
var result = await _db.QuerySingleAsync<string>(
"SELECT MAX(synced_at) FROM products");
// Timestamps are stored in round-trip ("O") format; parse them back the same way.
return DateTime.TryParse(result, null, System.Globalization.DateTimeStyles.RoundtripKind, out var dt)
    ? dt
    : DateTime.MinValue;
}
}
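The `CASE` expression in `SearchAsync` is what makes scanning feel instant: an exact barcode hit ranks first, a SKU match second, and a name match last, so the scanned product floats to the top of the result list. The same ordering can be sketched in Python to make the ranking concrete (illustrative only; an in-memory stand-in for the SQLite query):

```python
def search_rank(product, query):
    """0 = exact barcode, 1 = SKU pattern, 2 = name match (mirrors the SQL CASE)."""
    if product["barcode"] == query:
        return 0
    if query.lower() in product["sku"].lower():
        return 1
    return 2

def search(products, query):
    q = query.lower()
    hits = [p for p in products
            if q in p["name"].lower() or q in p["sku"].lower() or p["barcode"] == query]
    # ORDER BY rank, then name; LIMIT 50
    return sorted(hits, key=lambda p: (search_rank(p, query), p["name"]))[:50]

catalog = [
    {"name": "Widget",  "sku": "123-A", "barcode": "999"},
    {"name": "123 Kit", "sku": "KIT-9", "barcode": "111"},
    {"name": "Gadget",  "sku": "GAD-1", "barcode": "123"},
]
# Exact barcode "123" outranks the SKU match, which outranks the name match
assert [p["name"] for p in search(catalog, "123")] == ["Gadget", "Widget", "123 Kit"]
```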
Day 3-4: Barcode Scanning
Objective: Integrate camera and hardware barcode scanning.
Claude Command:
/dev-team implement barcode scanning with ZXing.Net.MAUI
Implementation:
// RapOS.PosClient.Hardware/Scanners/BarcodeScannerService.cs
using ZXing.Net.Maui;
namespace RapOS.PosClient.Hardware.Scanners;
public interface IScannerService
{
event EventHandler<string>? BarcodeScanned;
Task StartCameraScanAsync();
Task StopCameraScanAsync();
bool IsHardwareScannerConnected { get; }
}
public class BarcodeScannerService : IScannerService
{
private readonly IProductService _productService;
private readonly ICartService _cartService;
public event EventHandler<string>? BarcodeScanned;
public bool IsHardwareScannerConnected { get; private set; }
public BarcodeScannerService(
IProductService productService,
ICartService cartService)
{
_productService = productService;
_cartService = cartService;
// Listen for USB/Bluetooth scanner input
InitializeHardwareScanner();
}
private void InitializeHardwareScanner()
{
// USB scanners typically emit keyboard events
// Monitor for rapid sequential character input ending with Enter
#if WINDOWS
// Windows: Hook into keyboard events
SetupWindowsKeyboardHook();
#elif ANDROID
// Android: Use USB Host API for dedicated scanners
SetupAndroidUsbScanner();
#endif
}
public Task StartCameraScanAsync()
{
    // Camera scanning is handled by the ZXing component in the UI;
    // this just signals the view to show the camera overlay.
    return Task.CompletedTask;
}
public Task StopCameraScanAsync()
{
    // Hide the camera overlay.
    return Task.CompletedTask;
}
public async Task ProcessBarcodeAsync(string barcode)
{
BarcodeScanned?.Invoke(this, barcode);
var product = await _productService.GetByBarcodeAsync(barcode);
if (product != null)
{
await _cartService.AddItemAsync(product);
}
else
{
// Play error sound, show "Product not found" notification
await PlayErrorSoundAsync();
}
}
private async Task PlayErrorSoundAsync()
{
// Platform-specific audio playback
}
#if WINDOWS
private void SetupWindowsKeyboardHook()
{
// Implementation for Windows keyboard hook
// Detect scanner input pattern (rapid keys + Enter)
}
#endif
#if ANDROID
private void SetupAndroidUsbScanner()
{
// Implementation for Android USB Host API
}
#endif
}
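The comments above note that USB and Bluetooth scanners present as keyboards, emitting a rapid burst of characters terminated by Enter. A common way to separate scanner input from human typing is to buffer keystrokes and accept a burst only when every inter-key gap stays under a threshold. A language-agnostic sketch of that heuristic in Python (the 30 ms threshold and event shape are illustrative assumptions, not part of the C# code):

```python
SCANNER_MAX_GAP_MS = 30  # humans rarely type faster than ~30 ms between keys

def classify_burst(events):
    """events: list of (char, timestamp_ms) ending with ('\n', t).
    Returns the barcode string if the burst looks like scanner input, else None."""
    if len(events) < 4 or events[-1][0] != "\n":
        return None  # too short, or no terminating Enter
    gaps = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]
    if max(gaps) > SCANNER_MAX_GAP_MS:
        return None  # too slow: assume a human is typing
    return "".join(ch for ch, _ in events[:-1])

# A 10 ms-per-key burst reads as a scan...
scan = [(c, i * 10) for i, c in enumerate("4006381333931")] + [("\n", 130)]
assert classify_burst(scan) == "4006381333931"
# ...while 200 ms-per-key typing does not
typed = [(c, i * 200) for i, c in enumerate("499")] + [("\n", 600)]
assert classify_burst(typed) is None
```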
Camera Scanner Component:
@* RapOS.PosClient.UI/Components/Shared/CameraScannerOverlay.razor *@
@using ZXing.Net.Maui.Controls
<div class="scanner-overlay @(IsVisible ? "visible" : "")">
<div class="scanner-container">
<CameraBarcodeReaderView
    IsDetecting="true"
    BarcodesDetected="OnBarcodesDetected"
Options="@(new BarcodeReaderOptions
{
Formats = BarcodeFormat.Ean13 | BarcodeFormat.Code128 | BarcodeFormat.QrCode,
AutoRotate = true,
TryHarder = true
})" />
<div class="scanner-frame">
<div class="corner top-left"></div>
<div class="corner top-right"></div>
<div class="corner bottom-left"></div>
<div class="corner bottom-right"></div>
</div>
<button class="close-scanner" @onclick="Close">✕ Close</button>
</div>
</div>
@code {
[Parameter] public bool IsVisible { get; set; }
[Parameter] public EventCallback<string> OnScanned { get; set; }
[Parameter] public EventCallback OnClose { get; set; }
private void OnBarcodesDetected(object? sender, BarcodeDetectionEventArgs e)
{
var barcode = e.Results.FirstOrDefault()?.Value;
if (!string.IsNullOrEmpty(barcode))
{
OnScanned.InvokeAsync(barcode);
}
}
private async Task Close()
{
await OnClose.InvokeAsync();
}
}
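The overlay forwards whatever ZXing decodes. Because EAN-13 carries a built-in check digit, it can be cheap insurance to validate scans client-side before the product lookup, rejecting corrupt reads with the error sound instead of a "not found" message. A sketch of the standard EAN-13 checksum (an optional addition, not part of the component above):

```python
def ean13_valid(code: str) -> bool:
    """EAN-13 checksum: weight the first 12 digits 1,3,1,3,... left to right;
    the 13th digit must bring the weighted sum to a multiple of 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    weighted = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    check = (10 - weighted % 10) % 10
    return check == digits[12]

assert ean13_valid("4006381333931")      # well-formed EAN-13
assert not ean13_valid("4006381333930")  # corrupted check digit
assert not ean13_valid("12345")          # wrong length
```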
Day 5: Cart Operations
Objective: Implement full cart management with discounts.
Claude Command:
/dev-team create cart service with quantity controls and line discounts
Implementation:
// RapOS.PosClient.Core/Services/CartService.cs
namespace RapOS.PosClient.Core.Services;
public interface ICartService
{
Task<List<CartItem>> GetCartItemsAsync();
Task AddItemAsync(Product product, int quantity = 1);
Task UpdateQuantityAsync(Guid itemId, int newQuantity);
Task RemoveItemAsync(Guid itemId);
Task ApplyLineDiscountAsync(Guid itemId, decimal amount, DiscountType type);
Task ApplyCartDiscountAsync(decimal amount, DiscountType type);
Task ClearCartAsync();
Task<int> GetPendingSyncCountAsync();
CartTotals GetTotals();
}
public class CartService : ICartService
{
private readonly List<CartItem> _items = new();
private decimal _cartDiscount = 0;
private DiscountType _cartDiscountType = DiscountType.Amount;
public Task<List<CartItem>> GetCartItemsAsync()
{
return Task.FromResult(_items.ToList());
}
public Task AddItemAsync(Product product, int quantity = 1)
{
var existing = _items.FirstOrDefault(i => i.ProductId == product.Id);
if (existing != null)
{
existing.Quantity += quantity;
RecalculateItem(existing);
}
else
{
var item = new CartItem
{
Id = Guid.NewGuid(),
ProductId = product.Id,
Sku = product.Sku,
Name = product.Name,
Quantity = quantity,
UnitPrice = product.BasePrice,
TaxRate = product.TaxRate
};
RecalculateItem(item);
_items.Add(item);
}
return Task.CompletedTask;
}
public Task UpdateQuantityAsync(Guid itemId, int newQuantity)
{
var item = _items.FirstOrDefault(i => i.Id == itemId);
if (item == null) return Task.CompletedTask;
if (newQuantity <= 0)
{
_items.Remove(item);
}
else
{
item.Quantity = newQuantity;
RecalculateItem(item);
}
return Task.CompletedTask;
}
public Task RemoveItemAsync(Guid itemId)
{
_items.RemoveAll(i => i.Id == itemId);
return Task.CompletedTask;
}
public Task ApplyLineDiscountAsync(Guid itemId, decimal amount, DiscountType type)
{
var item = _items.FirstOrDefault(i => i.Id == itemId);
if (item == null) return Task.CompletedTask;
item.DiscountType = type;
item.DiscountValue = amount;
RecalculateItem(item);
return Task.CompletedTask;
}
public Task ApplyCartDiscountAsync(decimal amount, DiscountType type)
{
_cartDiscount = amount;
_cartDiscountType = type;
return Task.CompletedTask;
}
public Task ClearCartAsync()
{
_items.Clear();
_cartDiscount = 0;
return Task.CompletedTask;
}
public CartTotals GetTotals()
{
    var subtotal = _items.Sum(i => i.UnitPrice * i.Quantity);
    var lineDiscounts = _items.Sum(i => i.DiscountAmount);
    var cartDiscountAmount = _cartDiscountType == DiscountType.Percentage
        ? subtotal * (_cartDiscount / 100)
        : _cartDiscount;
    var netAfterLineDiscounts = subtotal - lineDiscounts;
    var taxableAmount = netAfterLineDiscounts - cartDiscountAmount;
    // Prorate the cart-level discount across lines by each line's share of the
    // net-of-line-discount amount, then tax each line's taxable share. Guard the
    // division so an empty cart does not divide by zero.
    var taxTotal = netAfterLineDiscounts > 0
        ? _items.Sum(i =>
            ((i.UnitPrice * i.Quantity - i.DiscountAmount) / netAfterLineDiscounts)
            * taxableAmount * i.TaxRate)
        : 0m;
    return new CartTotals
    {
        Subtotal = subtotal,
        LineDiscounts = lineDiscounts,
        CartDiscount = cartDiscountAmount,
        TotalDiscount = lineDiscounts + cartDiscountAmount,
        TaxTotal = taxTotal,
        GrandTotal = taxableAmount + taxTotal
    };
}
private void RecalculateItem(CartItem item)
{
var lineSubtotal = item.UnitPrice * item.Quantity;
item.DiscountAmount = item.DiscountType == DiscountType.Percentage
? lineSubtotal * (item.DiscountValue / 100)
: item.DiscountValue;
var taxableAmount = lineSubtotal - item.DiscountAmount;
item.TaxAmount = taxableAmount * item.TaxRate;
item.LineTotal = taxableAmount + item.TaxAmount;
}
public Task<int> GetPendingSyncCountAsync()
{
    // Stub: query the sync_queue table for the pending count once wired up.
    return Task.FromResult(0);
}
}
public enum DiscountType
{
Amount,
Percentage
}
public class CartTotals
{
public decimal Subtotal { get; set; }
public decimal LineDiscounts { get; set; }
public decimal CartDiscount { get; set; }
public decimal TotalDiscount { get; set; }
public decimal TaxTotal { get; set; }
public decimal GrandTotal { get; set; }
}
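The subtle part of the cart math is the cart-level discount: it has to be spread across lines before tax, because each line can carry its own tax rate. A worked example of one common proration scheme, spreading the cart discount by each line's share of the net-of-line-discount amount (Python with `Decimal` for exact money math; rounding policy is a business decision and is left out of the sketch):

```python
from decimal import Decimal

def totals(items, cart_discount=Decimal("0")):
    """items: dicts with unit_price, qty, line_discount, tax_rate.
    Prorates cart_discount by each line's net share, then taxes each share."""
    net = lambda i: i["unit_price"] * i["qty"] - i["line_discount"]
    net_total = sum(net(i) for i in items)
    taxable = net_total - cart_discount
    tax = (sum((net(i) / net_total) * taxable * i["tax_rate"] for i in items)
           if net_total else Decimal("0"))  # empty cart: no division by zero
    return taxable, tax, taxable + tax

items = [
    {"unit_price": Decimal("10.00"), "qty": 2, "line_discount": Decimal("0"),    "tax_rate": Decimal("0.10")},
    {"unit_price": Decimal("5.00"),  "qty": 1, "line_discount": Decimal("1.00"), "tax_rate": Decimal("0.20")},
]
# Net = 20 + 4 = 24; a 4.00 cart discount leaves 20.00 taxable.
# Tax = (20/24)*20*0.10 + (4/24)*20*0.20 = 1.666... + 0.666... = 2.333...
taxable, tax, grand = totals(items, Decimal("4.00"))
assert taxable == Decimal("20.00")
assert abs(tax - Decimal("2.3333333")) < Decimal("0.0001")
```

Note how the 20% line contributes proportionally less tax once the cart discount shrinks its taxable share; that proportionality is the whole point of prorating.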
23.6 Week 16: Payments & Transactions
Day 1-2: Cash & Card Payments
Objective: Implement payment handling with multiple methods.
Claude Command:
/dev-team create payment service supporting cash, card, and split payments
Implementation:
// RapOS.PosClient.Core/Services/PaymentService.cs
namespace RapOS.PosClient.Core.Services;
public interface IPaymentService
{
Task<PaymentResult> ProcessCashPaymentAsync(decimal amount, decimal tendered);
Task<PaymentResult> ProcessCardPaymentAsync(decimal amount);
Task<SaleCompletionResult> CompleteSaleAsync(List<Payment> payments);
Task<bool> CanProcessOfflineAsync(PaymentMethod method);
}
public class PaymentService : IPaymentService
{
private readonly ILocalDatabase _db;
private readonly ISyncService _sync;
private readonly IStripeTerminalService _stripeTerminal;
private readonly IPrinterService _printer;
private readonly ICashDrawerService _cashDrawer;
public PaymentService(
ILocalDatabase db,
ISyncService sync,
IStripeTerminalService stripeTerminal,
IPrinterService printer,
ICashDrawerService cashDrawer)
{
_db = db;
_sync = sync;
_stripeTerminal = stripeTerminal;
_printer = printer;
_cashDrawer = cashDrawer;
}
public async Task<PaymentResult> ProcessCashPaymentAsync(decimal amount, decimal tendered)
{
if (tendered < amount)
{
return PaymentResult.Failed("Insufficient payment amount");
}
var change = tendered - amount;
// Open cash drawer
await _cashDrawer.OpenAsync();
return PaymentResult.Success(new Payment
{
Id = Guid.NewGuid(),
Method = PaymentMethod.Cash,
Amount = amount,
Tendered = tendered,
Change = change,
CreatedAt = DateTime.UtcNow
});
}
public async Task<PaymentResult> ProcessCardPaymentAsync(decimal amount)
{
try
{
// Check if Stripe Terminal is connected
if (!_stripeTerminal.IsConnected)
{
return PaymentResult.Failed("Payment terminal not connected");
}
// Create payment intent
var intent = await _stripeTerminal.CreatePaymentIntentAsync(amount);
// Collect payment
var result = await _stripeTerminal.CollectPaymentAsync(intent);
if (!result.Success)
{
return PaymentResult.Failed(result.ErrorMessage ?? "Payment declined");
}
return PaymentResult.Success(new Payment
{
Id = Guid.NewGuid(),
Method = PaymentMethod.Card,
Amount = amount,
Reference = result.PaymentIntentId,
CardLastFour = result.CardLastFour,
CardBrand = result.CardBrand,
CreatedAt = DateTime.UtcNow
});
}
catch (Exception ex)
{
return PaymentResult.Failed($"Payment error: {ex.Message}");
}
}
public async Task<SaleCompletionResult> CompleteSaleAsync(List<Payment> payments)
{
var sale = await CreateSaleRecordAsync(payments);
// Save to local database
await SaveSaleLocallyAsync(sale);
// Queue for sync
await _sync.QueueForSyncAsync("sale", sale.Id, SyncOperation.Create, sale);
// Print receipt
await _printer.PrintReceiptAsync(sale);
// If cash payment, drawer is already open
// Clear cart happens in calling code
return new SaleCompletionResult
{
Success = true,
SaleId = sale.Id,
ReceiptNumber = sale.ReceiptNumber
};
}
public Task<bool> CanProcessOfflineAsync(PaymentMethod method)
{
// Cash is always available offline
// Cards require terminal which needs connectivity for most operations
return Task.FromResult(method == PaymentMethod.Cash);
}
private Task<Sale> CreateSaleRecordAsync(List<Payment> payments)
{
    // Implementation to create the sale from the current cart state
    throw new NotImplementedException();
}
private async Task SaveSaleLocallyAsync(Sale sale)
{
await _db.ExecuteAsync(@"
INSERT INTO sales (id, status, customer_id, cashier_id, subtotal,
tax_total, discount_total, grand_total, created_at)
VALUES (@Id, 'completed', @CustomerId, @CashierId, @Subtotal,
@TaxTotal, @DiscountTotal, @GrandTotal, @CreatedAt)",
sale);
foreach (var item in sale.Items)
{
await _db.ExecuteAsync(@"
INSERT INTO sale_items (id, sale_id, product_id, sku, name,
quantity, unit_price, discount_amount,
tax_amount, line_total)
VALUES (@Id, @SaleId, @ProductId, @Sku, @Name, @Quantity,
@UnitPrice, @DiscountAmount, @TaxAmount, @LineTotal)",
item);
}
foreach (var payment in sale.Payments)
{
await _db.ExecuteAsync(@"
INSERT INTO payments (id, sale_id, method, amount, reference,
card_last_four, card_brand, created_at)
VALUES (@Id, @SaleId, @Method, @Amount, @Reference,
@CardLastFour, @CardBrand, @CreatedAt)",
payment);
}
}
}
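`CompleteSaleAsync` accepts a list of payments, which is how split tenders work: each payment covers part of the outstanding balance until it reaches zero, and only a cash overpayment on the final tender produces change. A sketch of that allocation loop (Python, illustrative names; the C# service delegates the per-method processing shown earlier):

```python
from decimal import Decimal

def apply_payments(grand_total, tenders):
    """tenders: list of (method, amount). Returns (payments, remaining balance).
    Each tender covers min(amount, remaining); cash overpayment becomes change."""
    remaining = grand_total
    payments = []
    for method, amount in tenders:
        applied = min(amount, remaining)
        change = amount - applied if method == "cash" else Decimal("0")
        payments.append({"method": method, "amount": applied, "change": change})
        remaining -= applied
        if remaining == 0:
            break  # fully paid; ignore any further tenders
    return payments, remaining

# $50 total: $30 on card, then $25 cash -> $5.00 change, nothing outstanding
payments, remaining = apply_payments(
    Decimal("50.00"), [("card", Decimal("30.00")), ("cash", Decimal("25.00"))])
assert remaining == Decimal("0.00")
assert payments[1]["change"] == Decimal("5.00")
```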
Day 3-4: Stripe Terminal Integration
Objective: Integrate Stripe Terminal for card payments.
Claude Command:
/dev-team implement Stripe Terminal SDK integration for MAUI
Implementation:
// RapOS.PosClient.Hardware/Payments/StripeTerminalService.cs
namespace RapOS.PosClient.Hardware.Payments;
public interface IStripeTerminalService
{
bool IsConnected { get; }
Task InitializeAsync(string locationId);
Task<TerminalReader?> DiscoverAndConnectAsync();
Task<PaymentIntentResult> CreatePaymentIntentAsync(decimal amount);
Task<CollectPaymentResult> CollectPaymentAsync(string paymentIntentId);
Task DisconnectAsync();
}
public class StripeTerminalService : IStripeTerminalService
{
private readonly IConfiguration _config;
private readonly IApiClient _api;
private bool _initialized;
private TerminalReader? _connectedReader;
public bool IsConnected => _connectedReader != null;
public StripeTerminalService(IConfiguration config, IApiClient api)
{
_config = config;
_api = api;
}
public async Task InitializeAsync(string locationId)
{
if (_initialized) return;
// Get connection token from backend
var tokenResponse = await _api.PostAsync<ConnectionTokenResponse>(
"/api/terminals/stripe/connection-token",
new { locationId });
// Initialize Stripe Terminal SDK
// Note: Actual implementation depends on platform
#if ANDROID
await InitializeAndroidAsync(tokenResponse.Secret);
#elif WINDOWS
await InitializeWindowsAsync(tokenResponse.Secret);
#endif
_initialized = true;
}
public async Task<TerminalReader?> DiscoverAndConnectAsync()
{
// Discover available readers
var readers = await DiscoverReadersAsync();
if (!readers.Any())
{
return null;
}
// Auto-connect to first reader (or show picker)
var reader = readers.First();
await ConnectToReaderAsync(reader);
_connectedReader = reader;
return reader;
}
public async Task<PaymentIntentResult> CreatePaymentIntentAsync(decimal amount)
{
// Create payment intent on backend
var response = await _api.PostAsync<PaymentIntentResult>(
"/api/payments/create-intent",
new
{
amount = (long)Math.Round(amount * 100, MidpointRounding.AwayFromZero), // Convert to cents (round, don't truncate)
currency = "usd"
});
return response;
}
public async Task<CollectPaymentResult> CollectPaymentAsync(string paymentIntentId)
{
// This triggers the reader to collect card
#if ANDROID
return await CollectPaymentAndroidAsync(paymentIntentId);
#elif WINDOWS
return await CollectPaymentWindowsAsync(paymentIntentId);
#else
throw new PlatformNotSupportedException();
#endif
}
public async Task DisconnectAsync()
{
if (_connectedReader != null)
{
// Disconnect from reader
_connectedReader = null;
}
}
// Platform-specific implementations...
}
public class TerminalReader
{
public string Id { get; set; } = string.Empty;
public string SerialNumber { get; set; } = string.Empty;
public string Label { get; set; } = string.Empty;
public TerminalReaderType Type { get; set; }
public bool IsOnline { get; set; }
}
public enum TerminalReaderType
{
Bluetooth,
Internet,
USB
}
public class CollectPaymentResult
{
public bool Success { get; set; }
public string? PaymentIntentId { get; set; }
public string? CardLastFour { get; set; }
public string? CardBrand { get; set; }
public string? ErrorMessage { get; set; }
}
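`CreatePaymentIntentAsync` converts a decimal dollar amount to integer cents, because Stripe APIs take amounts in the currency's smallest unit. This conversion is a classic source of off-by-one-cent bugs when done with binary floats, which is one reason the C# code keeps money in `decimal`. A short Python demonstration of the pitfall and a safe conversion:

```python
from decimal import Decimal, ROUND_HALF_UP

# Naive float math truncates 19.99 to 1998 cents, losing a cent:
assert int(19.99 * 100) == 1998

def to_cents(amount: Decimal) -> int:
    """Exact decimal arithmetic, rounded half-up to the nearest cent."""
    return int((amount * 100).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

assert to_cents(Decimal("19.99")) == 1999
assert to_cents(Decimal("0.005")) == 1  # sub-cent amounts round, not truncate
```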
Day 5: Receipt Generation
Objective: Generate and print receipts.
Claude Command:
/dev-team create receipt generator with ESC/POS printer support
Implementation:
// RapOS.PosClient.Hardware/Printers/ReceiptGenerator.cs
namespace RapOS.PosClient.Hardware.Printers;
public interface IReceiptGenerator
{
byte[] GenerateReceipt(Sale sale);
byte[] GenerateReprint(Sale sale);
byte[] GenerateVoid(Sale sale, string reason);
}
public class EscPosReceiptGenerator : IReceiptGenerator
{
private readonly ITerminalContext _terminal;
private readonly ISettingsService _settings;
public EscPosReceiptGenerator(
ITerminalContext terminal,
ISettingsService settings)
{
_terminal = terminal;
_settings = settings;
}
public byte[] GenerateReceipt(Sale sale)
{
using var ms = new MemoryStream();
using var writer = new EscPosWriter(ms);
// Initialize printer
writer.Initialize();
// Header - Store Info
writer.SetAlignment(Alignment.Center);
writer.SetBold(true);
writer.WriteLine(_settings.StoreName);
writer.SetBold(false);
writer.WriteLine(_settings.StoreAddress);
writer.WriteLine(_settings.StorePhone);
writer.LineFeed();
// Transaction Info
writer.SetAlignment(Alignment.Left);
writer.WriteLine($"Receipt: {sale.ReceiptNumber}");
writer.WriteLine($"Date: {sale.CreatedAt:MM/dd/yyyy HH:mm}");
writer.WriteLine($"Cashier: {sale.CashierName}");
if (sale.Customer != null)
{
writer.WriteLine($"Customer: {sale.Customer.FullName}");
}
writer.PrintDivider();
// Line Items
foreach (var item in sale.Items)
{
// Product name
writer.WriteLine(item.Name);
// Quantity x Price = Total
var priceStr = item.UnitPrice.ToString("F2");
var qtyStr = $" {item.Quantity} x {priceStr}";
var totalStr = item.LineTotal.ToString("F2");
writer.WriteColumns(qtyStr, totalStr);
if (item.DiscountAmount > 0)
{
writer.WriteColumns(" Discount:", $"-{item.DiscountAmount:F2}");
}
}
writer.PrintDivider();
// Totals
writer.WriteColumns("Subtotal:", sale.Subtotal.ToString("F2"));
if (sale.DiscountTotal > 0)
{
writer.WriteColumns("Discount:", $"-{sale.DiscountTotal:F2}");
}
writer.WriteColumns("Tax:", sale.TaxTotal.ToString("F2"));
writer.SetBold(true);
writer.WriteColumns("TOTAL:", sale.GrandTotal.ToString("F2"));
writer.SetBold(false);
writer.LineFeed();
// Payments
foreach (var payment in sale.Payments)
{
var methodName = payment.Method switch
{
PaymentMethod.Cash => "Cash",
PaymentMethod.Card => $"Card ({payment.CardBrand} ...{payment.CardLastFour})",
_ => payment.Method.ToString()
};
writer.WriteColumns(methodName, payment.Amount.ToString("F2"));
if (payment.Method == PaymentMethod.Cash && payment.Change > 0)
{
writer.WriteColumns("Change:", payment.Change.ToString("F2"));
}
}
writer.LineFeed();
// Footer
writer.SetAlignment(Alignment.Center);
writer.WriteLine(_settings.ReceiptFooterMessage ?? "Thank you for your purchase!");
if (sale.Customer != null && sale.LoyaltyPointsEarned > 0)
{
writer.LineFeed();
writer.WriteLine($"Loyalty Points Earned: {sale.LoyaltyPointsEarned}");
writer.WriteLine($"Total Points: {sale.Customer.LoyaltyPoints + sale.LoyaltyPointsEarned}");
}
// Barcode for receipt lookup
writer.LineFeed();
writer.PrintBarcode(sale.ReceiptNumber, BarcodeType.Code128);
// Cut paper
writer.CutPaper();
return ms.ToArray();
}
public byte[] GenerateReprint(Sale sale)
{
// Same as receipt but with "REPRINT" header
using var ms = new MemoryStream();
using var writer = new EscPosWriter(ms);
writer.Initialize();
writer.SetAlignment(Alignment.Center);
writer.SetBold(true);
writer.WriteLine("*** REPRINT ***");
writer.SetBold(false);
writer.LineFeed();
// Rest is same as GenerateReceipt...
// Could refactor to share code
return ms.ToArray();
}
public byte[] GenerateVoid(Sale sale, string reason)
{
// Void receipt
using var ms = new MemoryStream();
using var writer = new EscPosWriter(ms);
writer.Initialize();
writer.SetAlignment(Alignment.Center);
writer.SetBold(true);
writer.WriteLine("*** VOID ***");
writer.SetBold(false);
writer.WriteLine($"Original Receipt: {sale.ReceiptNumber}");
writer.WriteLine($"Void Reason: {reason}");
writer.LineFeed();
writer.CutPaper();
return ms.ToArray();
}
}
public class EscPosWriter : IDisposable
{
private readonly Stream _stream;
// ESC/POS command constants
private static readonly byte[] CMD_INIT = { 0x1B, 0x40 };
private static readonly byte[] CMD_ALIGN_LEFT = { 0x1B, 0x61, 0x00 };
private static readonly byte[] CMD_ALIGN_CENTER = { 0x1B, 0x61, 0x01 };
private static readonly byte[] CMD_ALIGN_RIGHT = { 0x1B, 0x61, 0x02 };
private static readonly byte[] CMD_BOLD_ON = { 0x1B, 0x45, 0x01 };
private static readonly byte[] CMD_BOLD_OFF = { 0x1B, 0x45, 0x00 };
private static readonly byte[] CMD_CUT = { 0x1D, 0x56, 0x41, 0x00 };
private const int LINE_WIDTH = 42; // Standard 80mm receipt width
public EscPosWriter(Stream stream)
{
_stream = stream;
}
public void Initialize() => _stream.Write(CMD_INIT);
public void SetAlignment(Alignment align)
{
_stream.Write(align switch
{
Alignment.Left => CMD_ALIGN_LEFT,
Alignment.Center => CMD_ALIGN_CENTER,
Alignment.Right => CMD_ALIGN_RIGHT,
_ => CMD_ALIGN_LEFT
});
}
public void SetBold(bool bold) =>
_stream.Write(bold ? CMD_BOLD_ON : CMD_BOLD_OFF);
public void WriteLine(string text)
{
var bytes = Encoding.UTF8.GetBytes(text + "\n");
_stream.Write(bytes);
}
public void WriteColumns(string left, string right)
{
var padding = LINE_WIDTH - left.Length - right.Length;
var line = left + new string(' ', Math.Max(1, padding)) + right;
WriteLine(line);
}
public void LineFeed() => _stream.WriteByte(0x0A);
public void PrintDivider() =>
WriteLine(new string('-', LINE_WIDTH));
public void PrintBarcode(string data, BarcodeType type)
{
    // ESC/POS barcode commands: GS h (height), GS w (module width), GS k (print)
    _stream.Write(new byte[] { 0x1D, 0x68, 50 });  // Height: 50 dots
    _stream.Write(new byte[] { 0x1D, 0x77, 2 });   // Width: module 2
    _stream.Write(new byte[] { 0x1D, 0x6B, (byte)type });
    // Code 128 data must begin with a code-set selector; "{B" selects code set B.
    var payload = type == BarcodeType.Code128 ? "{B" + data : data;
    var bytes = Encoding.ASCII.GetBytes(payload);
    _stream.WriteByte((byte)bytes.Length);
    _stream.Write(bytes);
}
public void CutPaper() => _stream.Write(CMD_CUT);
public void Dispose() { }
}
public enum Alignment { Left, Center, Right }
public enum BarcodeType { Code128 = 73 }
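`WriteColumns` is the workhorse of receipt layout: it pads between a label and an amount so the amount ends at column 42 (`LINE_WIDTH`), degrading to a single separating space when the fields are too long. The padding arithmetic in Python, for a quick visual check (same 42-column width as the constant above):

```python
LINE_WIDTH = 42  # standard 80 mm receipt width in characters

def write_columns(left: str, right: str) -> str:
    """Pad so `right` ends at column 42; keep at least one space
    between fields even when they would otherwise collide."""
    padding = LINE_WIDTH - len(left) - len(right)
    return left + " " * max(1, padding) + right

line = write_columns("Subtotal:", "24.00")
assert len(line) == LINE_WIDTH
assert line.startswith("Subtotal:") and line.endswith("24.00")
# Overlong fields still get one separating space rather than colliding
assert write_columns("X" * 40, "9.99") == "X" * 40 + " " + "9.99"
```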
23.7 Week 17: Offline & Sync
Day 1-2: Local Transaction Queue
Objective: Queue transactions for sync when offline.
Claude Command:
/dev-team create sync queue service for offline transaction handling
Implementation:
// RapOS.PosClient.Core/Services/SyncService.cs
namespace RapOS.PosClient.Core.Services;
public interface ISyncService
{
Task QueueForSyncAsync<T>(string entityType, Guid entityId, SyncOperation operation, T payload);
Task ProcessQueueAsync();
Task<int> GetPendingCountAsync();
event EventHandler<SyncProgressEventArgs>? SyncProgress;
}
public class SyncService : ISyncService
{
private readonly ILocalDatabase _db;
private readonly IApiClient _api;
private readonly IConnectivityService _connectivity;
private readonly ILogger<SyncService> _logger;
private bool _isSyncing;
public event EventHandler<SyncProgressEventArgs>? SyncProgress;
public SyncService(
ILocalDatabase db,
IApiClient api,
IConnectivityService connectivity,
ILogger<SyncService> logger)
{
_db = db;
_api = api;
_connectivity = connectivity;
_logger = logger;
}
public async Task QueueForSyncAsync<T>(
string entityType,
Guid entityId,
SyncOperation operation,
T payload)
{
var priority = GetPriority(entityType);
var json = JsonSerializer.Serialize(payload);
await _db.ExecuteAsync(@"
INSERT OR REPLACE INTO sync_queue
(entity_type, entity_id, operation, payload, priority, created_at)
VALUES (@entityType, @entityId, @operation, @payload, @priority, @now)",
new
{
entityType,
entityId = entityId.ToString(),
operation = operation.ToString(),
payload = json,
priority,
now = DateTime.UtcNow.ToString("O")
});
// Try immediate sync if online
if (await _connectivity.IsOnlineAsync() && !_isSyncing)
{
_ = ProcessQueueAsync(); // Fire and forget
}
}
public async Task ProcessQueueAsync()
{
if (_isSyncing || !await _connectivity.IsOnlineAsync())
return;
_isSyncing = true;
try
{
var pending = await _db.QueryAsync<SyncQueueItem>(@"
SELECT * FROM sync_queue
WHERE attempts < 5
ORDER BY priority DESC, created_at ASC
LIMIT 50");
var total = pending.Count;
var processed = 0;
foreach (var item in pending)
{
try
{
await SyncItemAsync(item);
// Remove from queue on success
await _db.ExecuteAsync(
"DELETE FROM sync_queue WHERE id = @id",
new { item.Id });
processed++;
RaiseSyncProgress(processed, total, null);
}
catch (Exception ex)
{
_logger.LogError(ex,
"Failed to sync {EntityType} {EntityId}",
item.EntityType, item.EntityId);
// Update attempts and error
await _db.ExecuteAsync(@"
UPDATE sync_queue
SET attempts = attempts + 1,
last_attempt_at = @now,
error = @error
WHERE id = @id",
new
{
item.Id,
now = DateTime.UtcNow.ToString("O"),
error = ex.Message
});
RaiseSyncProgress(processed, total, ex.Message);
}
}
}
finally
{
_isSyncing = false;
}
}
private async Task SyncItemAsync(SyncQueueItem item)
{
var endpoint = GetEndpointForEntity(item.EntityType);
switch (item.Operation)
{
case "Create":
    // Payload is already serialized JSON; IApiClient must send it as a
    // raw body rather than re-serializing the string.
    await _api.PostAsync(endpoint, item.Payload);
break;
case "Update":
await _api.PutAsync($"{endpoint}/{item.EntityId}", item.Payload);
break;
case "Delete":
await _api.DeleteAsync($"{endpoint}/{item.EntityId}");
break;
}
// Update local record with synced timestamp
await MarkAsSyncedAsync(item.EntityType, item.EntityId);
}
private int GetPriority(string entityType)
{
// Sales have highest priority (revenue!)
return entityType switch
{
"sale" => 100,
"payment" => 90,
"customer" => 50,
"inventory_adjustment" => 40,
_ => 10
};
}
private string GetEndpointForEntity(string entityType)
{
return entityType switch
{
"sale" => "/api/sales",
"payment" => "/api/payments",
"customer" => "/api/customers",
_ => $"/api/{entityType}"
};
}
private async Task MarkAsSyncedAsync(string entityType, string entityId)
{
var table = entityType switch
{
"sale" => "sales",
_ => entityType
};
await _db.ExecuteAsync(
$"UPDATE {table} SET synced_at = @now WHERE id = @id",
new { id = entityId, now = DateTime.UtcNow.ToString("O") });
}
public async Task<int> GetPendingCountAsync()
{
var result = await _db.QuerySingleAsync<int>(
"SELECT COUNT(*) FROM sync_queue WHERE attempts < 5");
return result;
}
private void RaiseSyncProgress(int current, int total, string? error)
{
SyncProgress?.Invoke(this, new SyncProgressEventArgs
{
Current = current,
Total = total,
Error = error
});
}
}
public enum SyncOperation
{
Create,
Update,
Delete
}
public class SyncQueueItem
{
public int Id { get; set; }
public string EntityType { get; set; } = string.Empty;
public string EntityId { get; set; } = string.Empty;
public string Operation { get; set; } = string.Empty;
public string Payload { get; set; } = string.Empty;
public int Priority { get; set; }
public int Attempts { get; set; }
public string? LastAttemptAt { get; set; }
public string? Error { get; set; }
public string CreatedAt { get; set; } = string.Empty;
}
public class SyncProgressEventArgs : EventArgs
{
public int Current { get; set; }
public int Total { get; set; }
public string? Error { get; set; }
}
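The queue above stops retrying an item after five attempts but otherwise retries it on every sync pass. A common refinement is exponential backoff between attempts. The helper below is a hedged sketch, not part of the `SyncService` shown; the cap and base delay are illustrative:

```csharp
// Illustrative backoff policy for sync_queue retries (not part of SyncService above).
public static class SyncBackoff
{
    public const int MaxAttempts = 5;

    // 2^attempts minutes: 1, 2, 4, 8, 16 for attempts 0..4.
    public static TimeSpan DelayFor(int attempts) =>
        TimeSpan.FromMinutes(Math.Pow(2, Math.Min(attempts, MaxAttempts)));

    // Eligible when under the cap and the window since last_attempt_at has elapsed.
    public static bool ShouldRetry(int attempts, DateTime? lastAttemptUtc, DateTime nowUtc) =>
        attempts < MaxAttempts &&
        (lastAttemptUtc is null || nowUtc - lastAttemptUtc >= DelayFor(attempts));
}
```

The dequeue query would then skip rows for which ShouldRetry is false instead of filtering on the attempt count alone.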
Day 3-4: Product & Customer Cache
Objective: Sync and cache products/customers for offline access.
Claude Command:
/dev-team create background sync service for product and customer data
Implementation:
// RapOS.PosClient.Core/Services/BackgroundSyncService.cs
namespace RapOS.PosClient.Core.Services;
public interface IBackgroundSyncService
{
Task StartAsync();
Task StopAsync();
Task ForceSyncAsync();
}
public class BackgroundSyncService : IBackgroundSyncService
{
private readonly ILocalDatabase _db;
private readonly IApiClient _api;
private readonly IConnectivityService _connectivity;
private readonly IProductService _products;
private readonly ICategoryService _categories;
private readonly ICustomerService _customers;
private readonly ILogger<BackgroundSyncService> _logger;
private CancellationTokenSource? _cts;
private Timer? _syncTimer;
private const int SYNC_INTERVAL_MINUTES = 5;
public BackgroundSyncService(
ILocalDatabase db,
IApiClient api,
IConnectivityService connectivity,
IProductService products,
ICategoryService categories,
ICustomerService customers,
ILogger<BackgroundSyncService> logger)
{
_db = db;
_api = api;
_connectivity = connectivity;
_products = products;
_categories = categories;
_customers = customers;
_logger = logger;
}
public Task StartAsync()
{
_cts = new CancellationTokenSource();
var token = _cts.Token;
// Immediate sync on start
_ = RunSyncAsync(token);
// Periodic sync. RunSyncAsync catches its own exceptions, so the
// async timer callback cannot fault unobserved
_syncTimer = new Timer(
async _ => await RunSyncAsync(token),
null,
TimeSpan.FromMinutes(SYNC_INTERVAL_MINUTES),
TimeSpan.FromMinutes(SYNC_INTERVAL_MINUTES));
return Task.CompletedTask;
}
public async Task StopAsync()
{
_cts?.Cancel();
if (_syncTimer != null)
{
await _syncTimer.DisposeAsync();
}
}
public async Task ForceSyncAsync()
{
await RunSyncAsync(CancellationToken.None);
}
private async Task RunSyncAsync(CancellationToken ct)
{
if (!await _connectivity.IsOnlineAsync())
{
_logger.LogInformation("Skipping sync - offline");
return;
}
_logger.LogInformation("Starting background sync...");
try
{
// Sync in parallel
await Task.WhenAll(
SyncCategoriesAsync(ct),
SyncProductsAsync(ct),
SyncCustomersAsync(ct));
_logger.LogInformation("Background sync completed");
}
catch (OperationCanceledException)
{
_logger.LogInformation("Background sync cancelled");
}
catch (Exception ex)
{
_logger.LogError(ex, "Background sync failed");
}
}
private async Task SyncCategoriesAsync(CancellationToken ct)
{
var lastSync = await GetLastSyncAsync("categories");
// Escape the timestamp: the round-trip ("O") format contains ':' and '+'
var categories = await _api.GetAsync<List<Category>>(
$"/api/categories?modifiedAfter={Uri.EscapeDataString(lastSync.ToString("O"))}", ct);
foreach (var category in categories ?? new())
{
ct.ThrowIfCancellationRequested();
// Flatten the properties so the named parameters (@Id, @Name, ...) bind;
// a nested { category, now } object would not
await _db.ExecuteAsync(@"
INSERT OR REPLACE INTO categories
(id, name, parent_id, display_order, color, icon, synced_at)
VALUES (@Id, @Name, @ParentId, @DisplayOrder, @Color, @Icon, @now)",
new
{
category.Id,
category.Name,
category.ParentId,
category.DisplayOrder,
category.Color,
category.Icon,
now = DateTime.UtcNow.ToString("O")
});
}
await UpdateLastSyncAsync("categories");
}
private async Task SyncProductsAsync(CancellationToken ct)
{
// Products synced by ProductService.RefreshCacheAsync()
await _products.RefreshCacheAsync();
}
private async Task SyncCustomersAsync(CancellationToken ct)
{
var lastSync = await GetLastSyncAsync("customers");
// Only sync recently modified customers; escape the ':'/'+' in the "O" timestamp
var customers = await _api.GetAsync<List<Customer>>(
$"/api/customers?modifiedAfter={Uri.EscapeDataString(lastSync.ToString("O"))}&limit=1000", ct);
foreach (var customer in customers ?? new())
{
ct.ThrowIfCancellationRequested();
// Flatten the properties so the named parameters (@Id, @FirstName, ...) bind
await _db.ExecuteAsync(@"
INSERT OR REPLACE INTO customers
(id, first_name, last_name, email, phone,
loyalty_points, loyalty_tier, synced_at)
VALUES (@Id, @FirstName, @LastName, @Email, @Phone,
@LoyaltyPoints, @LoyaltyTier, @now)",
new
{
customer.Id,
customer.FirstName,
customer.LastName,
customer.Email,
customer.Phone,
customer.LoyaltyPoints,
customer.LoyaltyTier,
now = DateTime.UtcNow.ToString("O")
});
}
await UpdateLastSyncAsync("customers");
}
private async Task<DateTime> GetLastSyncAsync(string entityType)
{
// Read from sync_metadata, the table UpdateLastSyncAsync writes below.
// A table name cannot be bound as a SQL parameter, and MAX() yields a
// single (possibly NULL) row even before the first sync
var result = await _db.QuerySingleAsync<string>(
"SELECT MAX(last_sync) FROM sync_metadata WHERE entity_type = @entityType",
new { entityType });
return DateTime.TryParse(result, out var dt) ? dt : DateTime.MinValue;
}
private async Task UpdateLastSyncAsync(string entityType)
{
await _db.ExecuteAsync(@"
INSERT OR REPLACE INTO sync_metadata (entity_type, last_sync)
VALUES (@entityType, @now)",
new { entityType, now = DateTime.UtcNow.ToString("O") });
}
}
Day 5: Conflict Resolution
Objective: Handle sync conflicts with server-wins strategy.
Claude Command:
/dev-team implement conflict resolution for offline sync
Implementation:
// RapOS.PosClient.Core/Services/ConflictResolver.cs
namespace RapOS.PosClient.Core.Services;
public interface IConflictResolver
{
Task<ConflictResolution> ResolveAsync(SyncConflict conflict);
}
public class ConflictResolver : IConflictResolver
{
private readonly ILogger<ConflictResolver> _logger;
public ConflictResolver(ILogger<ConflictResolver> logger)
{
_logger = logger;
}
public async Task<ConflictResolution> ResolveAsync(SyncConflict conflict)
{
_logger.LogWarning(
"Conflict detected for {EntityType} {EntityId}: Local={LocalVersion}, Server={ServerVersion}",
conflict.EntityType,
conflict.EntityId,
conflict.LocalVersion,
conflict.ServerVersion);
// Resolution strategy depends on entity type
return conflict.EntityType switch
{
// Sales: Local wins (transaction already happened)
"sale" => ConflictResolution.KeepLocal,
// Inventory: Server wins (authoritative count)
"product" => ConflictResolution.UseServer,
// Customer: Merge (combine changes if possible)
"customer" => await MergeCustomerAsync(conflict),
// Default: Server wins
_ => ConflictResolution.UseServer
};
}
private Task<ConflictResolution> MergeCustomerAsync(SyncConflict conflict)
{
// For customers, merge only when changes don't overlap
var local = JsonSerializer.Deserialize<Customer>(conflict.LocalData);
var server = JsonSerializer.Deserialize<Customer>(conflict.ServerData);
if (local == null || server == null)
return Task.FromResult(ConflictResolution.UseServer);
// If only loyalty points differ, keep the server value: it already
// reflects every synced transaction
if (OnlyLoyaltyPointsDiffer(local, server))
{
return Task.FromResult(ConflictResolution.UseServer);
}
// Otherwise, server wins. (Every branch currently resolves to UseServer;
// true field-level merging is left as a future refinement.)
return Task.FromResult(ConflictResolution.UseServer);
}
private bool OnlyLoyaltyPointsDiffer(Customer local, Customer server)
{
return local.FirstName == server.FirstName &&
local.LastName == server.LastName &&
local.Email == server.Email &&
local.Phone == server.Phone &&
local.LoyaltyPoints != server.LoyaltyPoints;
}
}
public class SyncConflict
{
public string EntityType { get; set; } = string.Empty;
public string EntityId { get; set; } = string.Empty;
public int LocalVersion { get; set; }
public int ServerVersion { get; set; }
public string LocalData { get; set; } = string.Empty;
public string ServerData { get; set; } = string.Empty;
}
public enum ConflictResolution
{
KeepLocal,
UseServer,
Merge
}
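The resolver decides what to do once a conflict is known; detection itself usually happens when the server rejects an update whose base version is stale (optimistic concurrency, typically surfaced as HTTP 409). The wiring below is a hedged sketch only: `ApiConflictException`, its properties, the `force` parameter, and `ApplyServerStateAsync` are assumptions, not APIs defined in this chapter.

```csharp
// Hypothetical wiring between the sync loop and the conflict resolver.
try
{
    await _api.PutAsync($"{endpoint}/{item.EntityId}", item.Payload);
}
catch (ApiConflictException ex) // assumed: thrown by IApiClient on HTTP 409
{
    var resolution = await _resolver.ResolveAsync(new SyncConflict
    {
        EntityType = item.EntityType,
        EntityId = item.EntityId,
        LocalVersion = ex.LocalVersion,   // assumed fields carried on the exception
        ServerVersion = ex.ServerVersion,
        LocalData = item.Payload,
        ServerData = ex.ServerBody
    });

    if (resolution == ConflictResolution.KeepLocal)
    {
        // Re-submit, telling the server to accept the local state (assumed capability)
        await _api.PutAsync($"{endpoint}/{item.EntityId}?force=true", item.Payload);
    }
    else
    {
        // UseServer (and, for now, Merge): overwrite the local row with the server copy
        await ApplyServerStateAsync(item.EntityType, item.EntityId, ex.ServerBody);
    }
}
```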
23.8 Week 18: Hardware & Configuration
Day 1-2: Receipt Printer Integration
Objective: Connect to ESC/POS printers via USB, Bluetooth, and network.
Claude Command:
/dev-team implement multi-platform printer service for ESC/POS printers
Implementation:
// RapOS.PosClient.Hardware/Printers/PrinterService.cs
namespace RapOS.PosClient.Hardware.Printers;
public interface IPrinterService
{
Task<List<PrinterInfo>> DiscoverPrintersAsync();
Task<bool> ConnectAsync(PrinterInfo printer);
Task PrintAsync(byte[] data);
Task PrintReceiptAsync(Sale sale);
Task<bool> TestPrintAsync();
bool IsConnected { get; }
PrinterInfo? ConnectedPrinter { get; }
}
public class EscPosPrinterService : IPrinterService
{
private readonly IReceiptGenerator _receiptGenerator;
private readonly ISettingsService _settings;
private readonly ILogger<EscPosPrinterService> _logger;
private IPrinterConnection? _connection;
public bool IsConnected => _connection?.IsConnected ?? false;
public PrinterInfo? ConnectedPrinter { get; private set; }
public EscPosPrinterService(
IReceiptGenerator receiptGenerator,
ISettingsService settings,
ILogger<EscPosPrinterService> logger)
{
_receiptGenerator = receiptGenerator;
_settings = settings;
_logger = logger;
}
public async Task<List<PrinterInfo>> DiscoverPrintersAsync()
{
var printers = new List<PrinterInfo>();
#if WINDOWS
// Discover Windows printers
printers.AddRange(await DiscoverWindowsPrintersAsync());
#elif ANDROID
// Discover Bluetooth and USB printers
printers.AddRange(await DiscoverBluetoothPrintersAsync());
printers.AddRange(await DiscoverUsbPrintersAsync());
#endif
// Discover network printers (common across platforms)
printers.AddRange(await DiscoverNetworkPrintersAsync());
return printers;
}
public async Task<bool> ConnectAsync(PrinterInfo printer)
{
try
{
_connection = printer.ConnectionType switch
{
PrinterConnectionType.USB => new UsbPrinterConnection(printer),
PrinterConnectionType.Bluetooth => new BluetoothPrinterConnection(printer),
PrinterConnectionType.Network => new NetworkPrinterConnection(printer),
_ => throw new NotSupportedException($"Connection type {printer.ConnectionType} not supported")
};
await _connection.OpenAsync();
ConnectedPrinter = printer;
_logger.LogInformation("Connected to printer: {PrinterName}", printer.Name);
return true;
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to connect to printer: {PrinterName}", printer.Name);
return false;
}
}
public async Task PrintAsync(byte[] data)
{
if (_connection == null || !_connection.IsConnected)
{
throw new InvalidOperationException("Printer not connected");
}
await _connection.WriteAsync(data);
}
public async Task PrintReceiptAsync(Sale sale)
{
var receiptData = _receiptGenerator.GenerateReceipt(sale);
await PrintAsync(receiptData);
}
public async Task<bool> TestPrintAsync()
{
try
{
using var ms = new MemoryStream();
using var writer = new EscPosWriter(ms);
writer.Initialize();
writer.SetAlignment(Alignment.Center);
writer.WriteLine("*** TEST PRINT ***");
writer.LineFeed();
writer.WriteLine($"Printer: {ConnectedPrinter?.Name}");
writer.WriteLine($"Time: {DateTime.Now:yyyy-MM-dd HH:mm:ss}");
writer.LineFeed();
writer.WriteLine("If you can read this,");
writer.WriteLine("the printer is working!");
writer.LineFeed();
writer.CutPaper();
await PrintAsync(ms.ToArray());
return true;
}
catch (Exception ex)
{
_logger.LogError(ex, "Test print failed");
return false;
}
}
private async Task<List<PrinterInfo>> DiscoverNetworkPrintersAsync()
{
// Common receipt printer ports: 9100 (raw), 515 (LPR)
var printers = new List<PrinterInfo>();
// Check saved network printers
var savedPrinters = await _settings.GetNetworkPrintersAsync();
foreach (var saved in savedPrinters)
{
if (await IsReachableAsync(saved.IpAddress, saved.Port))
{
printers.Add(saved);
}
}
return printers;
}
private async Task<bool> IsReachableAsync(string host, int port)
{
try
{
using var client = new TcpClient();
var connectTask = client.ConnectAsync(host, port);
var timeoutTask = Task.Delay(TimeSpan.FromSeconds(2));
if (await Task.WhenAny(connectTask, timeoutTask) == connectTask)
{
return client.Connected;
}
return false;
}
catch
{
return false;
}
}
#if WINDOWS
private async Task<List<PrinterInfo>> DiscoverWindowsPrintersAsync()
{
// Use Windows printing API
return new List<PrinterInfo>();
}
#endif
#if ANDROID
private async Task<List<PrinterInfo>> DiscoverBluetoothPrintersAsync()
{
// Use Android Bluetooth API
return new List<PrinterInfo>();
}
private async Task<List<PrinterInfo>> DiscoverUsbPrintersAsync()
{
// Use Android USB Host API
return new List<PrinterInfo>();
}
#endif
}
public class PrinterInfo
{
public string Id { get; set; } = string.Empty;
public string Name { get; set; } = string.Empty;
public PrinterConnectionType ConnectionType { get; set; }
public string Address { get; set; } = string.Empty;
public string? IpAddress { get; set; }
public int Port { get; set; } = 9100;
}
public enum PrinterConnectionType
{
USB,
Bluetooth,
Network
}
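The `EscPosWriter` used in `TestPrintAsync` is a thin wrapper over raw ESC/POS byte sequences. For reference, these are the standard opcodes behind the calls above; the wrapper class itself is not shown in this chapter, so treat the mapping as indicative:

```csharp
// Standard ESC/POS command bytes behind the EscPosWriter calls.
public static class EscPosCommands
{
    public static readonly byte[] Initialize  = { 0x1B, 0x40 };       // ESC @  - reset printer
    public static readonly byte[] AlignLeft   = { 0x1B, 0x61, 0x00 }; // ESC a 0
    public static readonly byte[] AlignCenter = { 0x1B, 0x61, 0x01 }; // ESC a 1
    public static readonly byte[] LineFeed    = { 0x0A };             // LF
    public static readonly byte[] FullCut     = { 0x1D, 0x56, 0x00 }; // GS V 0
    public static readonly byte[] PartialCut  = { 0x1D, 0x56, 0x01 }; // GS V 1
}
```

Text is sent as code-page-encoded bytes between these commands. Most Epson and Star Micronics models honor this core set, though cut support varies by mechanism.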
Day 3-4: Drag-and-Drop Layout Builder
Objective: Implement Retail Pro-style configurable UI layouts.
Claude Command:
/dev-team create drag-and-drop layout designer for POS screen customization
Layout Designer Component:
@* RapOS.PosClient.UI/Components/Layout/LayoutDesigner.razor *@
@inject ILayoutService LayoutService
@inject IJSRuntime JS
<div class="layout-designer">
<aside class="widget-palette">
<h3>Available Widgets</h3>
@foreach (var widget in AvailableWidgets)
{
<div class="widget-item"
draggable="true"
@ondragstart="() => StartDrag(widget)">
<span class="widget-icon">@widget.Icon</span>
<span class="widget-name">@widget.Name</span>
</div>
}
</aside>
<main class="layout-canvas">
<div class="canvas-header">
<h3>Layout Canvas</h3>
<div class="canvas-actions">
<button @onclick="ResetLayout">Reset</button>
<button @onclick="SaveLayout" class="primary">Save Layout</button>
</div>
</div>
<div class="canvas-grid"
@ondragover="HandleDragOver"
@ondragover:preventDefault
@ondrop="HandleDrop">
@foreach (var cell in LayoutCells)
{
<div class="grid-cell @(cell.Widget != null ? "occupied" : "")"
data-row="@cell.Row"
data-col="@cell.Column"
style="grid-row: @(cell.Row + 1); grid-column: @(cell.Column + 1) / span @cell.ColSpan;">
@if (cell.Widget != null)
{
<div class="placed-widget"
style="background-color: @cell.Widget.Color">
<span class="widget-icon">@cell.Widget.Icon</span>
<span class="widget-name">@cell.Widget.Name</span>
<button class="remove-widget" @onclick="() => RemoveWidget(cell)">
✕
</button>
<div class="resize-handle" @onmousedown="() => StartResize(cell)"></div>
</div>
}
else
{
<span class="drop-hint">Drop widget here</span>
}
</div>
}
</div>
</main>
<aside class="layout-preview">
<h3>Preview</h3>
<div class="preview-container">
@* Miniature preview of the layout *@
<div class="preview-grid">
@foreach (var cell in LayoutCells.Where(c => c.Widget != null))
{
<div class="preview-widget"
style="grid-row: @(cell.Row + 1);
grid-column: @(cell.Column + 1) / span @cell.ColSpan;
background-color: @cell.Widget!.Color;">
</div>
}
</div>
</div>
</aside>
</div>
@code {
private List<WidgetDefinition> AvailableWidgets = new()
{
new() { Id = "product-grid", Name = "Product Grid", Icon = "📦", Color = "#4A90D9", DefaultColSpan = 2 },
new() { Id = "cart", Name = "Cart Panel", Icon = "🛒", Color = "#7B68EE", DefaultColSpan = 1 },
new() { Id = "quick-access", Name = "Quick Access", Icon = "⚡", Color = "#FFB347", DefaultColSpan = 3 },
new() { Id = "totals", Name = "Totals Panel", Icon = "💰", Color = "#77DD77", DefaultColSpan = 1 },
new() { Id = "customer", Name = "Customer Info", Icon = "👤", Color = "#FF6B6B", DefaultColSpan = 1 },
new() { Id = "categories", Name = "Category Bar", Icon = "📁", Color = "#DDA0DD", DefaultColSpan = 3 },
new() { Id = "search", Name = "Search Bar", Icon = "🔍", Color = "#87CEEB", DefaultColSpan = 2 }
};
private List<LayoutCell> LayoutCells = new();
private WidgetDefinition? DraggedWidget;
private const int GRID_ROWS = 4;
private const int GRID_COLS = 3;
protected override async Task OnInitializedAsync()
{
// Initialize grid cells
for (int row = 0; row < GRID_ROWS; row++)
{
for (int col = 0; col < GRID_COLS; col++)
{
LayoutCells.Add(new LayoutCell { Row = row, Column = col, ColSpan = 1 });
}
}
// Load existing layout
var savedLayout = await LayoutService.GetCurrentLayoutAsync();
if (savedLayout != null)
{
ApplyLayout(savedLayout);
}
}
private void StartDrag(WidgetDefinition widget)
{
DraggedWidget = widget;
}
private void HandleDragOver(DragEventArgs e)
{
// HTML5 drag-and-drop only delivers drop events when dragover is
// prevent-defaulted (in Blazor, via @ondragover:preventDefault in markup)
}
private void HandleDrop(DragEventArgs e)
{
if (DraggedWidget == null) return;
// Get drop target from event
// Place widget in cell
// This is simplified - real implementation needs JS interop for accurate drop position
}
private void RemoveWidget(LayoutCell cell)
{
cell.Widget = null;
cell.ColSpan = 1;
}
private void StartResize(LayoutCell cell)
{
// Start resize operation
}
private void ResetLayout()
{
foreach (var cell in LayoutCells)
{
cell.Widget = null;
cell.ColSpan = 1;
}
}
private async Task SaveLayout()
{
var layout = new PosLayout
{
Id = Guid.NewGuid(),
Name = "Custom Layout",
Cells = LayoutCells.Where(c => c.Widget != null).Select(c => new LayoutCellConfig
{
WidgetId = c.Widget!.Id,
Row = c.Row,
Column = c.Column,
ColSpan = c.ColSpan
}).ToList()
};
await LayoutService.SaveLayoutAsync(layout);
}
private void ApplyLayout(PosLayout layout)
{
foreach (var cell in layout.Cells)
{
var gridCell = LayoutCells.FirstOrDefault(c => c.Row == cell.Row && c.Column == cell.Column);
if (gridCell != null)
{
gridCell.Widget = AvailableWidgets.FirstOrDefault(w => w.Id == cell.WidgetId);
gridCell.ColSpan = cell.ColSpan;
}
}
}
}
public class WidgetDefinition
{
public string Id { get; set; } = string.Empty;
public string Name { get; set; } = string.Empty;
public string Icon { get; set; } = string.Empty;
public string Color { get; set; } = string.Empty;
public int DefaultColSpan { get; set; } = 1;
}
public class LayoutCell
{
public int Row { get; set; }
public int Column { get; set; }
public int ColSpan { get; set; } = 1;
public WidgetDefinition? Widget { get; set; }
}
public class PosLayout
{
public Guid Id { get; set; }
public string Name { get; set; } = string.Empty;
public List<LayoutCellConfig> Cells { get; set; } = new();
}
public class LayoutCellConfig
{
public string WidgetId { get; set; } = string.Empty;
public int Row { get; set; }
public int Column { get; set; }
public int ColSpan { get; set; }
}
Day 5: Quick Access Button Editor
Objective: Allow customization of quick-access buttons.
Claude Command:
/dev-team create quick access button configuration editor
Implementation:
@* RapOS.PosClient.UI/Components/Layout/QuickAccessEditor.razor *@
@inject IProductService ProductService
@inject ICategoryService CategoryService
@inject ILayoutService LayoutService
<div class="quick-access-editor">
<h3>Quick Access Buttons</h3>
<p class="hint">Configure up to 12 quick access buttons for fast product selection.</p>
<div class="button-grid">
@for (int i = 0; i < 12; i++)
{
var index = i;
var button = Buttons.ElementAtOrDefault(index);
<div class="button-slot @(button != null ? "configured" : "")"
@onclick="() => EditButton(index)">
@if (button != null)
{
<div class="button-preview" style="background-color: @button.Color">
<span class="button-icon">@button.Icon</span>
<span class="button-label">@button.Label</span>
</div>
<button class="remove-btn" @onclick:stopPropagation @onclick="() => RemoveButton(index)">
✕
</button>
}
else
{
<span class="empty-slot">+ Add Button</span>
}
</div>
}
</div>
@* Edit Modal *@
@if (IsEditing)
{
<div class="modal-overlay" @onclick="CancelEdit">
<div class="modal-content" @onclick:stopPropagation>
<h4>Configure Quick Access Button</h4>
<div class="form-group">
<label>Button Type</label>
<select @bind="EditingButton.ActionType">
<option value="product">Product</option>
<option value="category">Category</option>
<option value="function">Function</option>
</select>
</div>
@if (EditingButton.ActionType == "product")
{
<div class="form-group">
<label>Select Product</label>
<input type="text" @bind="ProductSearch" @bind:event="oninput"
placeholder="Search products..." />
@if (FilteredProducts.Any())
{
<div class="search-results">
@foreach (var product in FilteredProducts.Take(10))
{
<div class="search-result" @onclick="() => SelectProduct(product)">
@product.Name - @product.Sku
</div>
}
</div>
}
</div>
}
else if (EditingButton.ActionType == "category")
{
<div class="form-group">
<label>Select Category</label>
<select @bind="EditingButton.CategoryId">
@foreach (var category in Categories)
{
<option value="@category.Id">@category.Name</option>
}
</select>
</div>
}
else if (EditingButton.ActionType == "function")
{
<div class="form-group">
<label>Select Function</label>
<select @bind="EditingButton.FunctionId">
<option value="open_drawer">Open Cash Drawer</option>
<option value="no_sale">No Sale</option>
<option value="discount">Apply Discount</option>
<option value="held_sales">Held Sales</option>
<option value="returns">Returns</option>
</select>
</div>
}
<div class="form-group">
<label>Button Label</label>
<input type="text" @bind="EditingButton.Label" maxlength="15" />
</div>
<div class="form-group">
<label>Button Color</label>
<div class="color-picker">
@foreach (var color in AvailableColors)
{
<div class="color-option @(EditingButton.Color == color ? "selected" : "")"
style="background-color: @color"
@onclick="() => EditingButton.Color = color">
</div>
}
</div>
</div>
<div class="modal-actions">
<button @onclick="CancelEdit">Cancel</button>
<button class="primary" @onclick="SaveButton">Save</button>
</div>
</div>
</div>
}
</div>
@code {
private List<QuickAccessButton> Buttons = new();
private List<Product> Products = new();
private List<Category> Categories = new();
private bool IsEditing;
private int EditingIndex;
private QuickAccessButton EditingButton = new();
private string ProductSearch = string.Empty;
private readonly string[] AvailableColors =
{
"#4A90D9", "#7B68EE", "#FFB347", "#77DD77",
"#FF6B6B", "#DDA0DD", "#87CEEB", "#F0E68C"
};
private IEnumerable<Product> FilteredProducts =>
string.IsNullOrEmpty(ProductSearch)
? Enumerable.Empty<Product>()
: Products.Where(p =>
p.Name.Contains(ProductSearch, StringComparison.OrdinalIgnoreCase) ||
p.Sku.Contains(ProductSearch, StringComparison.OrdinalIgnoreCase));
protected override async Task OnInitializedAsync()
{
Buttons = await LayoutService.GetQuickAccessButtonsAsync();
Products = await ProductService.GetProductsAsync();
Categories = await CategoryService.GetCategoriesAsync();
}
private void EditButton(int index)
{
EditingIndex = index;
EditingButton = Buttons.ElementAtOrDefault(index)?.Clone() ?? new QuickAccessButton
{
Position = index,
ActionType = "product",
Color = AvailableColors[0]
};
IsEditing = true;
}
private void SelectProduct(Product product)
{
EditingButton.ProductId = product.Id;
EditingButton.Label = product.Name.Length > 15
? product.Name.Substring(0, 15)
: product.Name;
ProductSearch = string.Empty;
}
private async Task SaveButton()
{
EditingButton.Position = EditingIndex;
if (EditingIndex < Buttons.Count)
{
Buttons[EditingIndex] = EditingButton;
}
else
{
while (Buttons.Count <= EditingIndex)
{
Buttons.Add(null!);
}
Buttons[EditingIndex] = EditingButton;
}
await LayoutService.SaveQuickAccessButtonsAsync(Buttons.Where(b => b != null).ToList());
IsEditing = false;
}
private void CancelEdit()
{
IsEditing = false;
}
private async Task RemoveButton(int index)
{
if (index < Buttons.Count)
{
// Clear the slot in place; RemoveAt would shift every later button
// into the wrong position
Buttons[index] = null!;
await LayoutService.SaveQuickAccessButtonsAsync(Buttons.Where(b => b != null).ToList());
}
}
}
23.9 Week 19: Distribution & Polish
Day 1-2: Update Server (Adapt Raptag Pattern)
Objective: Deploy update distribution service based on existing Raptag pattern.
Claude Command:
/dev-team create POS client update server adapting Raptag pattern
Service Location: /volume1/docker/pos-platform/update-server/
Implementation: Reuse the existing Raptag update server at /volume1/docker/raptag/update-server/ and adapt it for multi-platform support.
Key additions:
- Platform detection (windows, android, macos)
- Tenant-specific version channels
- Rollback support
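Taken together, these additions imply a per-platform, per-tenant manifest from the update server's version-check endpoint. The record below sketches one plausible shape; the field names are assumptions to be reconciled with the existing Raptag contract:

```csharp
// Illustrative version-check response - align field names with the Raptag contract.
public record UpdateManifest(
    string Platform,       // "windows" | "android" | "macos"
    string Channel,        // tenant-specific channel, e.g. "<tenant>-stable"
    string LatestVersion,
    string MinimumVersion, // terminals below this must update before trading
    string DownloadUrl,
    string Sha256,         // verified before the package is applied
    string? RollbackTo);   // previous known-good version when a rollback is active
```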
Day 3-4: Terminal Registration Flow
Objective: Implement first-launch registration experience.
Claude Command:
/dev-team create terminal registration flow with QR code provisioning
Implementation:
@* RapOS.PosClient.UI/Pages/Registration.razor *@
@page "/register"
@inject ITerminalService TerminalService
@inject ILocalDatabase Database
@inject NavigationManager Navigation
<div class="registration-screen">
<div class="registration-card">
<img src="images/rapos-logo.svg" class="logo" alt="RapOS" />
<h1>Welcome to RapOS POS</h1>
@if (!IsRegistered)
{
@if (!IsConnecting)
{
<p class="instruction">
Enter the 6-digit registration code provided by your administrator.
</p>
<div class="code-input">
@for (int i = 0; i < 6; i++)
{
var index = i;
<input type="text"
maxlength="1"
class="code-digit"
@bind="CodeDigits[index]"
@oninput="(e) => HandleDigitInput(index, e)"
@ref="DigitInputs[index]" />
}
</div>
@if (!string.IsNullOrEmpty(ErrorMessage))
{
<div class="error-message">@ErrorMessage</div>
}
<button class="connect-btn"
@onclick="ConnectTerminal"
disabled="@(!IsCodeComplete)">
Connect Terminal
</button>
<p class="help-text">
Don't have a code? Contact your store administrator.
</p>
}
else
{
<div class="connecting">
<div class="spinner"></div>
<p>Connecting to @TenantName...</p>
<p class="status">@ConnectionStatus</p>
</div>
}
}
else
{
<div class="success">
<span class="success-icon">✓</span>
<h2>Terminal Registered!</h2>
<p>Connected to @TenantName</p>
<p>Location: @LocationName</p>
<div class="sync-status">
<div class="progress-bar">
<div class="progress" style="width: @(SyncProgress)%"></div>
</div>
<p>Syncing data... @SyncProgress%</p>
</div>
</div>
}
</div>
</div>
@code {
private string[] CodeDigits = new string[6];
private ElementReference[] DigitInputs = new ElementReference[6];
private bool IsConnecting;
private bool IsRegistered;
private string? ErrorMessage;
private string? TenantName;
private string? LocationName;
private string ConnectionStatus = "Verifying code...";
private int SyncProgress;
private bool IsCodeComplete => CodeDigits.All(d => !string.IsNullOrEmpty(d));
private string RegistrationCode => string.Join("", CodeDigits);
private void HandleDigitInput(int index, ChangeEventArgs e)
{
var value = e.Value?.ToString() ?? "";
if (value.Length > 0 && index < 5)
{
// Auto-advance to next input
// Requires JS interop to focus next input
}
}
private async Task ConnectTerminal()
{
if (!IsCodeComplete) return;
IsConnecting = true;
ErrorMessage = null;
try
{
// Step 1: Verify code with server
ConnectionStatus = "Verifying code...";
var result = await TerminalService.RegisterAsync(RegistrationCode);
if (!result.Success)
{
ErrorMessage = result.Error ?? "Invalid or expired registration code";
IsConnecting = false;
return;
}
TenantName = result.TenantName;
LocationName = result.LocationName;
// Step 2: Save configuration locally
ConnectionStatus = "Saving configuration...";
await SaveTerminalConfigAsync(result);
// Step 3: Initial data sync
ConnectionStatus = "Downloading products...";
await SyncInitialDataAsync();
IsRegistered = true;
// Navigate to main screen after delay
await Task.Delay(2000);
Navigation.NavigateTo("/sale");
}
catch (Exception ex)
{
ErrorMessage = $"Connection failed: {ex.Message}";
IsConnecting = false;
}
}
private async Task SaveTerminalConfigAsync(RegistrationResult result)
{
await Database.ExecuteAsync(@"
INSERT OR REPLACE INTO terminal_config
(id, tenant_code, location_id, terminal_id, terminal_name,
api_endpoint, api_key, created_at)
VALUES (1, @TenantCode, @LocationId, @TerminalId, @TerminalName,
@ApiEndpoint, @ApiKey, @now)",
new
{
result.TenantCode,
result.LocationId,
result.TerminalId,
result.TerminalName,
result.ApiEndpoint,
result.ApiKey,
now = DateTime.UtcNow.ToString("O")
});
// Store API key in secure storage
await SecureStorage.SetAsync("api_key", result.ApiKey);
}
private async Task SyncInitialDataAsync()
{
// Sync categories
SyncProgress = 10;
ConnectionStatus = "Syncing categories...";
await Task.Delay(500); // Simulated
// Sync products
SyncProgress = 30;
ConnectionStatus = "Syncing products...";
await Task.Delay(1000); // Simulated
// Sync customers
SyncProgress = 60;
ConnectionStatus = "Syncing customers...";
await Task.Delay(500); // Simulated
// Sync settings
SyncProgress = 80;
ConnectionStatus = "Applying settings...";
await Task.Delay(500); // Simulated
SyncProgress = 100;
ConnectionStatus = "Ready!";
}
}
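HandleDigitInput above notes that auto-advancing focus requires JS interop. One way to wire it is sketched below; it assumes an `@inject IJSRuntime JS` directive on the page and a `raposFocusNext` helper registered in the host page's JavaScript, neither of which is defined in this chapter:

```csharp
// Assumed JS helper in wwwroot/index.html:
//   window.raposFocusNext = (element) => element?.focus();
private async Task HandleDigitInputAsync(int index, ChangeEventArgs e)
{
    CodeDigits[index] = e.Value?.ToString() ?? "";
    // Auto-advance to the next digit box once this one is filled
    if (!string.IsNullOrEmpty(CodeDigits[index]) && index < 5)
    {
        await JS.InvokeVoidAsync("raposFocusNext", DigitInputs[index + 1]);
    }
}
```

The markup's @oninput binding would then point at this method instead of the synchronous stub.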
Day 5: End-to-End Testing & Performance
Objective: Validate complete workflows and optimize performance.
Claude Command:
/qa-team run end-to-end testing for POS client workflows
Test Scenarios:
// RapOS.PosClient.Tests/E2E/SaleWorkflowTests.cs
namespace RapOS.PosClient.Tests.E2E;
[TestClass]
public class SaleWorkflowTests
{
[TestMethod]
public async Task CompleteSale_WithCash_PrintsReceipt()
{
// Arrange
var cart = new CartService();
await cart.AddItemAsync(TestProducts.Shirt);
await cart.AddItemAsync(TestProducts.Jeans);
var payment = new PaymentService(/*...*/);
var sale = new SaleService(/*...*/);
// Act
var cashResult = await payment.ProcessCashPaymentAsync(99.99m, 100.00m);
var saleResult = await sale.CompleteSaleAsync(new[] { cashResult.Payment });
// Assert
Assert.IsTrue(saleResult.Success);
Assert.IsNotNull(saleResult.ReceiptNumber);
Assert.AreEqual(0.01m, cashResult.Payment.Change);
}
[TestMethod]
public async Task OfflineSale_QueuesForSync()
{
// Arrange
var connectivity = new Mock<IConnectivityService>();
connectivity.Setup(c => c.IsOnlineAsync()).ReturnsAsync(false);
var sync = new SyncService(/*...*/);
var cart = new CartService();
await cart.AddItemAsync(TestProducts.Shirt);
var sale = new SaleService(/*...*/);
// Act
var result = await sale.CompleteSaleAsync(/*...*/);
var pendingCount = await sync.GetPendingCountAsync();
// Assert
Assert.IsTrue(result.Success);
Assert.AreEqual(1, pendingCount);
}
[TestMethod]
public async Task BarcodeScanner_AddsProductToCart()
{
// Arrange
var scanner = new BarcodeScannerService(/*...*/);
var cart = new CartService();
// Act
await scanner.ProcessBarcodeAsync("1234567890123");
// Assert
var items = await cart.GetCartItemsAsync();
Assert.AreEqual(1, items.Count);
}
}
23.10 Phase 4 Deliverables Checklist
Week 14: Project Setup & Core UI
- .NET MAUI Blazor Hybrid project structure
- SQLite local database schema
- Main sale screen with product grid and cart
Week 15: Sale Workflow
- Product browsing and search
- Barcode scanning (camera + hardware)
- Cart operations with discounts
Week 16: Payments & Transactions
- Cash payment processing
- Card payment (Stripe Terminal)
- Receipt generation and printing
Week 17: Offline & Sync
- Local transaction queue
- Product/customer cache
- Background sync service
- Conflict resolution
Week 18: Hardware & Configuration
- Multi-platform printer support
- Drag-and-drop layout builder
- Quick access button editor
- Hardware profile management
Week 19: Distribution & Polish
- Update server deployment
- Terminal registration flow
- End-to-end testing
- Performance optimization
23.11 Success Criteria
| Metric | Target |
|---|---|
| Offline Duration | 72+ hours fully functional |
| Transaction Speed | < 2 seconds from scan to receipt |
| Sync Reliability | 99.9% successful sync rate |
| App Startup | < 3 seconds cold start |
| Hardware Support | Epson, Star Micronics, Zebra printers |
| Platform Coverage | Windows, Android, macOS |
23.12 Next Steps
Phase 4 is the final implementation phase. After completion:
- Part VII: Operations (Chapters 24-28)
- Deployment procedures (Chapter 24)
- Monitoring and alerting (Chapter 25)
- Security compliance (Chapter 26)
- Disaster recovery (Chapter 27)
- Tenant lifecycle management (Chapter 28)
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VI - Implementation Guide |
| Chapter | 23 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 24: Deployment Guide
24.1 Overview
This chapter provides complete deployment procedures for the POS Platform, including Docker containerization, environment configuration, deployment strategies, and rollback procedures.
24.2 Deployment Architecture
                               PRODUCTION ENVIRONMENT
+-----------------------------------------------------------------------------------+
|                           Load Balancer (Nginx/HAProxy)                           |
|                                 Port 443 (HTTPS)                                  |
+-----------------------------------------------------------------------------------+
          |                           |                           |
          v                           v                           v
+-------------------+       +-------------------+       +-------------------+
|    POS-API-01     |       |    POS-API-02     |       |    POS-API-03     |
|    (Container)    |       |    (Container)    |       |    (Container)    |
|     Port 8080     |       |     Port 8080     |       |     Port 8080     |
+-------------------+       +-------------------+       +-------------------+
          |                           |                           |
          +---------------------------+---------------------------+
                                      |
                                      v
+-----------------------------------------------------------------------------------+
|                                PostgreSQL Cluster                                 |
|                          Primary (Write) + Replica (Read)                         |
|                                     Port 5432                                     |
+-----------------------------------------------------------------------------------+
          |                           |                           |
          v                           v                           v
+-------------------+       +-------------------+       +-------------------+
|       Redis       |       |     RabbitMQ      |       |    Prometheus     |
|  (Cache/Session)  |       |    (Event Bus)    |       |     (Metrics)     |
|     Port 6379     |       |     Port 5672     |       |     Port 9090     |
+-------------------+       +-------------------+       +-------------------+
24.3 Container Images
Complete Dockerfile for API
# File: /pos-platform/docker/api/Dockerfile
# Multi-stage build for ASP.NET Core POS API
#=============================================
# Stage 1: Build Environment
#=============================================
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src
# Copy solution and project files for layer caching
COPY ["POS.sln", "./"]
COPY ["src/POS.Api/POS.Api.csproj", "src/POS.Api/"]
COPY ["src/POS.Core/POS.Core.csproj", "src/POS.Core/"]
COPY ["src/POS.Infrastructure/POS.Infrastructure.csproj", "src/POS.Infrastructure/"]
COPY ["src/POS.Application/POS.Application.csproj", "src/POS.Application/"]
# Restore dependencies (cached unless .csproj changes)
RUN dotnet restore "POS.sln"
# Copy remaining source code
COPY . .
# Build release version
WORKDIR "/src/src/POS.Api"
RUN dotnet build "POS.Api.csproj" -c Release -o /app/build
#=============================================
# Stage 2: Publish
#=============================================
FROM build AS publish
RUN dotnet publish "POS.Api.csproj" \
-c Release \
-o /app/publish \
--no-restore \
/p:UseAppHost=false \
/p:PublishTrimmed=false
#=============================================
# Stage 3: Runtime Environment
#=============================================
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
# Security: Run as non-root user
RUN addgroup -S posgroup && adduser -S posuser -G posgroup
WORKDIR /app
# Install health check dependencies
RUN apk add --no-cache curl
# Copy published application
COPY --from=publish /app/publish .
# Set ownership
RUN chown -R posuser:posgroup /app
# Switch to non-root user
USER posuser
# Expose port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
# Set environment
ENV ASPNETCORE_URLS=http://+:8080
ENV ASPNETCORE_ENVIRONMENT=Production
ENV DOTNET_RUNNING_IN_CONTAINER=true
# Entry point
ENTRYPOINT ["dotnet", "POS.Api.dll"]
Dockerfile for Frontend (Blazor WASM)
# File: /pos-platform/docker/frontend/Dockerfile
# Multi-stage build for Blazor WebAssembly
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["src/POS.Client/POS.Client.csproj", "src/POS.Client/"]
RUN dotnet restore "src/POS.Client/POS.Client.csproj"
COPY . .
WORKDIR "/src/src/POS.Client"
RUN dotnet publish "POS.Client.csproj" -c Release -o /app/publish
FROM nginx:alpine AS runtime
COPY --from=build /app/publish/wwwroot /usr/share/nginx/html
COPY docker/frontend/nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
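The Dockerfile above copies a `docker/frontend/nginx.conf` that is not shown elsewhere in this chapter. A minimal sketch of what that file needs to contain, assuming the standard `nginx:alpine` layout: serve the published `wwwroot`, fall back to `index.html` for client-side routes, and compress the comparatively large framework and `.wasm` payloads.

```nginx
# Sketch of docker/frontend/nginx.conf for the Blazor WASM client.
events { }

http {
    include       /etc/nginx/mime.types;   # includes application/wasm
    default_type  application/octet-stream;
    sendfile      on;

    # Compress framework JS/CSS and WebAssembly payloads.
    gzip on;
    gzip_types text/css application/javascript application/wasm;

    server {
        listen 80;
        root   /usr/share/nginx/html;
        index  index.html;

        location / {
            # SPA fallback: unknown paths are routed client-side.
            try_files $uri $uri/ /index.html;
        }
    }
}
```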
24.4 Docker Compose Configuration
Complete Production docker-compose.yml
# File: /pos-platform/docker/docker-compose.yml
# Production deployment configuration
version: '3.8'
services:
#=========================================
# POS API Service (Scalable)
#=========================================
pos-api:
build:
context: ..
dockerfile: docker/api/Dockerfile
image: pos-api:${TAG:-latest}
    # container_name intentionally omitted: a fixed name conflicts with
    # deploy.replicas and with "docker compose up --scale pos-api=N"
restart: unless-stopped
deploy:
replicas: 3
resources:
limits:
cpus: '2'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
update_config:
parallelism: 1
delay: 30s
failure_action: rollback
order: start-first
rollback_config:
parallelism: 1
delay: 10s
environment:
- ASPNETCORE_ENVIRONMENT=Production
- ConnectionStrings__DefaultConnection=${DB_CONNECTION_STRING}
- ConnectionStrings__ReadReplicaConnection=${DB_READ_CONNECTION_STRING}
- Redis__ConnectionString=${REDIS_CONNECTION_STRING}
- RabbitMQ__Host=${RABBITMQ_HOST}
- RabbitMQ__Username=${RABBITMQ_USER}
- RabbitMQ__Password=${RABBITMQ_PASSWORD}
- Jwt__Secret=${JWT_SECRET}
- Jwt__Issuer=${JWT_ISSUER}
- Jwt__Audience=${JWT_AUDIENCE}
- Payment__StripeApiKey=${STRIPE_API_KEY}
- Payment__StripeWebhookSecret=${STRIPE_WEBHOOK_SECRET}
- OTEL_EXPORTER_OTLP_ENDPOINT=http://prometheus:9090
ports:
- "${API_PORT:-8080}:8080"
networks:
- pos-network
depends_on:
postgres-primary:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
logging:
driver: "json-file"
options:
max-size: "50m"
max-file: "5"
volumes:
- pos-data-protection:/app/keys
- pos-logs:/app/logs
#=========================================
# PostgreSQL Primary (Write)
#=========================================
postgres-primary:
image: postgres:16-alpine
container_name: pos-postgres-primary
restart: unless-stopped
environment:
- POSTGRES_DB=pos_db
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- PGDATA=/var/lib/postgresql/data/pgdata
ports:
- "${DB_PORT:-5432}:5432"
networks:
- pos-network
volumes:
- postgres-data:/var/lib/postgresql/data
- ./postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
- ./postgres/postgresql.conf:/etc/postgresql/postgresql.conf:ro
command: postgres -c config_file=/etc/postgresql/postgresql.conf
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d pos_db"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '4'
memory: 8G
reservations:
cpus: '1'
memory: 2G
#=========================================
# PostgreSQL Replica (Read)
#=========================================
postgres-replica:
image: postgres:16-alpine
container_name: pos-postgres-replica
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- PGDATA=/var/lib/postgresql/data/pgdata
networks:
- pos-network
volumes:
- postgres-replica-data:/var/lib/postgresql/data
command: postgres -c hot_standby=on
depends_on:
postgres-primary:
condition: service_healthy
deploy:
resources:
limits:
cpus: '2'
memory: 4G
#=========================================
# Redis Cache
#=========================================
redis:
image: redis:7-alpine
container_name: pos-redis
restart: unless-stopped
    command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru --requirepass ${REDIS_PASSWORD}
    ports:
      - "${REDIS_PORT:-6379}:6379"
    networks:
      - pos-network
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
deploy:
resources:
limits:
cpus: '1'
memory: 1G
#=========================================
# RabbitMQ Message Broker
#=========================================
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: pos-rabbitmq
restart: unless-stopped
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_VHOST=pos
ports:
- "${RABBITMQ_PORT:-5672}:5672"
- "${RABBITMQ_MGMT_PORT:-15672}:15672"
networks:
- pos-network
volumes:
- rabbitmq-data:/var/lib/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
interval: 30s
timeout: 10s
retries: 3
deploy:
resources:
limits:
cpus: '1'
memory: 1G
#=========================================
# Nginx Load Balancer
#=========================================
nginx:
image: nginx:alpine
container_name: pos-nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
networks:
- pos-network
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- nginx-cache:/var/cache/nginx
depends_on:
- pos-api
healthcheck:
test: ["CMD", "nginx", "-t"]
interval: 30s
timeout: 10s
retries: 3
#=========================================
# Networks
#=========================================
networks:
pos-network:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16
#=========================================
# Volumes
#=========================================
volumes:
postgres-data:
driver: local
postgres-replica-data:
driver: local
redis-data:
driver: local
rabbitmq-data:
driver: local
nginx-cache:
driver: local
pos-data-protection:
driver: local
pos-logs:
driver: local
24.5 Environment Variables Reference
Complete .env Template
# File: /pos-platform/docker/.env.template
#=============================================
# ENVIRONMENT
#=============================================
ENVIRONMENT=Production
TAG=latest
#=============================================
# DATABASE - PostgreSQL
#=============================================
DB_HOST=postgres-primary
DB_PORT=5432
DB_NAME=pos_db
DB_USER=pos_admin
DB_PASSWORD=<GENERATE_STRONG_PASSWORD>
DB_CONNECTION_STRING=Host=postgres-primary;Port=5432;Database=pos_db;Username=pos_admin;Password=${DB_PASSWORD};Pooling=true;MinPoolSize=5;MaxPoolSize=100
DB_READ_CONNECTION_STRING=Host=postgres-replica;Port=5432;Database=pos_db;Username=pos_admin;Password=${DB_PASSWORD};Pooling=true
#=============================================
# CACHE - Redis
#=============================================
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=<GENERATE_STRONG_PASSWORD>
REDIS_CONNECTION_STRING=redis:6379,password=${REDIS_PASSWORD},abortConnect=false
#=============================================
# MESSAGE BROKER - RabbitMQ
#=============================================
RABBITMQ_HOST=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_USER=pos_admin
RABBITMQ_PASSWORD=<GENERATE_STRONG_PASSWORD>
RABBITMQ_VHOST=pos
#=============================================
# SECURITY - JWT
#=============================================
JWT_SECRET=<GENERATE_256_BIT_SECRET>
JWT_ISSUER=pos-platform
JWT_AUDIENCE=pos-clients
JWT_EXPIRY_MINUTES=60
JWT_REFRESH_EXPIRY_DAYS=7
#=============================================
# PAYMENT PROCESSING
#=============================================
STRIPE_API_KEY=sk_live_<YOUR_KEY>
STRIPE_WEBHOOK_SECRET=whsec_<YOUR_SECRET>
STRIPE_PUBLIC_KEY=pk_live_<YOUR_KEY>
#=============================================
# EXTERNAL SERVICES
#=============================================
SHOPIFY_API_KEY=<YOUR_KEY>
SHOPIFY_API_SECRET=<YOUR_SECRET>
QUICKBOOKS_CLIENT_ID=<YOUR_ID>
QUICKBOOKS_CLIENT_SECRET=<YOUR_SECRET>
#=============================================
# MONITORING
#=============================================
PROMETHEUS_PORT=9090
GRAFANA_PORT=3000
GRAFANA_ADMIN_PASSWORD=<GENERATE_STRONG_PASSWORD>
#=============================================
# LOGGING
#=============================================
LOG_LEVEL=Information
LOG_PATH=/app/logs
SERILOG_SEQ_URL=http://seq:5341
#=============================================
# API CONFIGURATION
#=============================================
API_PORT=8080
API_RATE_LIMIT_PER_MINUTE=100
API_CORS_ORIGINS=https://pos.yourcompany.com
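The `<GENERATE_STRONG_PASSWORD>` and `<GENERATE_256_BIT_SECRET>` placeholders should never be filled in by hand. One way to generate them, assuming `openssl` is available on the deployment host:

```shell
# Generate credential values for the .env placeholders.
# "openssl rand" uses a cryptographically secure generator.
DB_PASSWORD=$(openssl rand -base64 24)        # ~32-character password
REDIS_PASSWORD=$(openssl rand -base64 24)
RABBITMQ_PASSWORD=$(openssl rand -base64 24)
JWT_SECRET=$(openssl rand -hex 32)            # 32 bytes = 256-bit secret
echo "JWT_SECRET length: ${#JWT_SECRET}"      # 64 hex characters
```

Write the generated values into `.env` (or, better, into your secrets vault) rather than committing them anywhere.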
24.6 Deployment Checklist
Pre-Deployment Checklist
24.7 Pre-Deployment Verification
1. Code Readiness
- [ ] All tests passing (unit, integration, e2e)
- [ ] Code review approved
- [ ] Security scan completed (no critical/high vulnerabilities)
- [ ] Version number updated in csproj
- [ ] CHANGELOG.md updated
- [ ] Database migrations tested
2. Infrastructure Readiness
- [ ] Target environment accessible
- [ ] SSL certificates valid (> 30 days)
- [ ] Database backup completed (< 1 hour old)
- [ ] Sufficient disk space (> 20% free)
- [ ] Load balancer health checks configured
- [ ] DNS pointing to correct servers
3. Configuration Verification
- [ ] .env file populated with production values
- [ ] Secrets stored in secure vault
- [ ] Connection strings validated
- [ ] External API keys verified
4. Rollback Preparation
- [ ] Previous version image tagged and available
- [ ] Rollback script tested
- [ ] Database rollback script prepared (if schema changes)
- [ ] Rollback communication template ready
5. Team Readiness
- [ ] Deployment window communicated
- [ ] On-call engineer identified
- [ ] Customer support notified
- [ ] Monitoring dashboard accessible
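Several checklist items can be spot-checked from a shell before kicking off the deployment script. A minimal sketch — the certificate path is an example, and the 80%-used threshold mirrors the disk check in `deploy.sh`:

```shell
#!/bin/bash
# Pre-deployment spot checks: SSL validity (> 30 days), disk space (> 20% free).

CERT="/opt/pos-platform/nginx/ssl/fullchain.pem"   # example path
if [ -f "$CERT" ] && openssl x509 -checkend $((30 * 24 * 3600)) -noout -in "$CERT"; then
    echo "OK: certificate valid for at least 30 more days"
else
    echo "WARN: certificate missing or expires within 30 days"
fi

# Fifth column of df is "Use%"; strip the % sign for arithmetic.
USED=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$USED" -le 80 ]; then
    echo "OK: disk at ${USED}% used"
else
    echo "WARN: disk at ${USED}% used (checklist requires > 20% free)"
fi
```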
Deployment Script
#!/bin/bash
# File: /pos-platform/scripts/deploy.sh
# Production deployment script
set -e           # Exit on error
set -o pipefail  # Propagate failures through pipes (e.g. pg_dump | gzip)
#=============================================
# CONFIGURATION
#=============================================
DEPLOY_DIR="/opt/pos-platform"
DOCKER_COMPOSE="docker compose"
TAG=${1:-latest}
BACKUP_DIR="/backups/pos"
LOG_FILE="/var/log/pos-deploy.log"
#=============================================
# FUNCTIONS
#=============================================
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
check_prerequisites() {
log "Checking prerequisites..."
# Check Docker
if ! command -v docker &> /dev/null; then
log "ERROR: Docker not installed"
exit 1
fi
    # Check disk usage (require at least 20% free)
    USED_SPACE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
    if [ "$USED_SPACE" -gt 80 ]; then
        log "ERROR: Insufficient disk space (${USED_SPACE}% used)"
exit 1
fi
log "Prerequisites check passed"
}
backup_database() {
log "Creating database backup..."
    mkdir -p "$BACKUP_DIR"
    BACKUP_FILE="${BACKUP_DIR}/pos_db_$(date +%Y%m%d_%H%M%S).sql.gz"
    # Test the pipeline directly; with set -e, a bare "$?" check after a
    # failed pipeline would never be reached
    if $DOCKER_COMPOSE exec -T postgres-primary pg_dump -U pos_admin pos_db | gzip > "$BACKUP_FILE"; then
        log "Database backup created: $BACKUP_FILE"
    else
        log "ERROR: Database backup failed"
        exit 1
    fi
}
pull_images() {
log "Pulling new images (tag: $TAG)..."
$DOCKER_COMPOSE pull
log "Images pulled successfully"
}
deploy_with_zero_downtime() {
log "Starting zero-downtime deployment..."
# Scale up new containers first
$DOCKER_COMPOSE up -d --scale pos-api=4 --no-recreate
# Wait for new containers to be healthy
log "Waiting for health checks..."
sleep 60
# Verify new containers are healthy
    # "|| true" keeps set -e from aborting when grep finds no matches
    HEALTHY_COUNT=$($DOCKER_COMPOSE ps | grep -c "healthy" || true)
if [ "$HEALTHY_COUNT" -lt 3 ]; then
log "ERROR: Not enough healthy containers"
rollback
exit 1
fi
# Rolling update
$DOCKER_COMPOSE up -d --force-recreate
# Scale back to normal
$DOCKER_COMPOSE up -d --scale pos-api=3
log "Deployment completed successfully"
}
verify_deployment() {
log "Verifying deployment..."
# Check health endpoint
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
if [ "$HTTP_CODE" -eq 200 ]; then
log "Health check passed (HTTP $HTTP_CODE)"
else
log "ERROR: Health check failed (HTTP $HTTP_CODE)"
rollback
exit 1
fi
# Check version
VERSION=$(curl -s http://localhost:8080/health | jq -r '.version')
log "Deployed version: $VERSION"
}
rollback() {
log "ROLLBACK: Initiating rollback..."
    # Get previous image tag: the list is newest first, so after skipping
    # the "latest" alias the second remaining entry is the prior release
    PREVIOUS_TAG=$(docker images pos-api --format "{{.Tag}}" | grep -v '^latest$' | sed -n '2p' || true)
if [ -z "$PREVIOUS_TAG" ]; then
log "ERROR: No previous version found for rollback"
exit 1
fi
log "Rolling back to version: $PREVIOUS_TAG"
TAG=$PREVIOUS_TAG $DOCKER_COMPOSE up -d --force-recreate
log "Rollback completed"
}
#=============================================
# MAIN EXECUTION
#=============================================
main() {
log "=========================================="
log "POS Platform Deployment - Started"
log "Tag: $TAG"
log "=========================================="
cd "$DEPLOY_DIR"
check_prerequisites
backup_database
pull_images
deploy_with_zero_downtime
verify_deployment
log "=========================================="
log "Deployment completed successfully!"
log "=========================================="
}
# Run main function
main "$@"
24.8 Zero-Downtime Deployment Strategy
Rolling Update Process
┌─────────────────────────────────────────────────────────────────────────────┐
│ ZERO-DOWNTIME DEPLOYMENT │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 1: Initial State
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├──────► API-1 (v1.0) [HEALTHY] ◄── Receiving traffic │
│ ├──────► API-2 (v1.0) [HEALTHY] ◄── Receiving traffic │
│ └──────► API-3 (v1.0) [HEALTHY] ◄── Receiving traffic │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 2: Add New Container
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├──────► API-1 (v1.0) [HEALTHY] │
│ ├──────► API-2 (v1.0) [HEALTHY] │
│ ├──────► API-3 (v1.0) [HEALTHY] │
│ └─ - - ► API-4 (v2.0) [STARTING] ◄── Not yet in rotation │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 3: New Container Healthy
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├──────► API-1 (v1.0) [HEALTHY] │
│ ├──────► API-2 (v1.0) [HEALTHY] │
│ ├──────► API-3 (v1.0) [HEALTHY] │
│ └──────► API-4 (v2.0) [HEALTHY] ◄── Now receiving traffic │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 4: Drain Old Container
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├─ X ─► API-1 (v1.0) [DRAINING] ◄── Finishing existing requests │
│ ├──────► API-2 (v1.0) [HEALTHY] │
│ ├──────► API-3 (v1.0) [HEALTHY] │
│ └──────► API-4 (v2.0) [HEALTHY] │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 5: Replace Old Container
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├──────► API-1 (v2.0) [HEALTHY] ◄── Replaced │
│ ├──────► API-2 (v1.0) [DRAINING] │
│ ├──────► API-3 (v1.0) [HEALTHY] │
│ └──────► API-4 (v2.0) [HEALTHY] │
└─────────────────────────────────────────────────────────────────────────────┘
STEP 6: Complete (Scale back to 3)
┌─────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer │
│ │ │
│ ├──────► API-1 (v2.0) [HEALTHY] │
│ ├──────► API-2 (v2.0) [HEALTHY] │
│ └──────► API-3 (v2.0) [HEALTHY] │
│ │
│ Result: Zero downtime, all traffic served continuously │
└─────────────────────────────────────────────────────────────────────────────┘
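Each step of this rolling update depends on knowing when a container is actually healthy; `deploy.sh` approximates this with a fixed `sleep`. A polling helper is more robust — a sketch (the function name and defaults are ours, not part of the scripts above):

```shell
# Poll a health endpoint until it answers successfully, or give up after
# a timeout. Returns 0 once healthy, 1 on timeout.
wait_healthy() {
    local url="$1" timeout="${2:-120}" elapsed=0
    until curl -sf "$url" > /dev/null 2>&1; do
        sleep 5
        elapsed=$((elapsed + 5))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "ERROR: $url not healthy after ${timeout}s" >&2
            return 1
        fi
    done
    echo "healthy after ${elapsed}s"
}

# usage: wait_healthy http://localhost:8080/health 120 || exit 1
```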
24.9 Rollback Procedures
Automated Rollback Script
#!/bin/bash
# File: /pos-platform/scripts/rollback.sh
# Emergency rollback script
set -e
DEPLOY_DIR="/opt/pos-platform"
DOCKER_COMPOSE="docker compose"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ROLLBACK: $1"
}
#=============================================
# ROLLBACK TO PREVIOUS VERSION
#=============================================
rollback_containers() {
log "Starting container rollback..."
    # Get previous image tag: skip the "latest" alias; "docker images"
    # lists newest first, so the second remaining tag is the prior release
    PREVIOUS_TAG=$(docker images pos-api --format "{{.Tag}}" \
        | grep -v '^latest$' | sed -n '2p' || true)
if [ -z "$PREVIOUS_TAG" ]; then
log "ERROR: No previous version available"
exit 1
fi
log "Rolling back to: $PREVIOUS_TAG"
cd "$DEPLOY_DIR"
export TAG=$PREVIOUS_TAG
# Force recreate with previous version
$DOCKER_COMPOSE up -d --force-recreate pos-api
log "Containers rolled back to $PREVIOUS_TAG"
}
#=============================================
# ROLLBACK DATABASE (IF NEEDED)
#=============================================
rollback_database() {
BACKUP_FILE=$1
if [ -z "$BACKUP_FILE" ]; then
log "No database backup specified, skipping DB rollback"
return
fi
log "Rolling back database from: $BACKUP_FILE"
    # Restore from backup (a plain-format dump restores into the existing
    # database; take dumps with --clean, or drop/recreate objects first,
    # to avoid conflicts)
    zcat "$BACKUP_FILE" | $DOCKER_COMPOSE exec -T postgres-primary psql -U pos_admin -d pos_db
log "Database rolled back"
}
#=============================================
# VERIFY ROLLBACK
#=============================================
verify_rollback() {
log "Verifying rollback..."
sleep 30 # Wait for containers to stabilize
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
if [ "$HTTP_CODE" -eq 200 ]; then
log "Rollback verified successfully (HTTP $HTTP_CODE)"
else
log "ERROR: Rollback verification failed (HTTP $HTTP_CODE)"
log "CRITICAL: Manual intervention required!"
exit 1
fi
}
#=============================================
# MAIN
#=============================================
main() {
log "=========================================="
log "EMERGENCY ROLLBACK INITIATED"
log "=========================================="
rollback_containers
rollback_database "$1"
verify_rollback
log "=========================================="
log "ROLLBACK COMPLETED"
log "=========================================="
}
main "$@"
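`rollback_database` takes a backup file as its argument. A small helper for locating the newest dump in `deploy.sh`'s backup directory (the helper name is ours):

```shell
# Print the most recent database backup in a directory (newest mtime first).
# Default directory matches BACKUP_DIR in deploy.sh.
latest_backup() {
    ls -1t "${1:-/backups/pos}"/pos_db_*.sql.gz 2>/dev/null | head -1
}

# usage: ./rollback.sh "$(latest_backup /backups/pos)"
```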
24.10 Health Check Endpoints
Health Check Implementation
// File: /src/POS.Api/Health/HealthCheckEndpoints.cs
public static class HealthCheckEndpoints
{
public static void MapHealthChecks(this WebApplication app)
{
// Basic liveness probe (is the app running?)
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
Predicate = _ => false, // No checks, just confirms app is running
ResponseWriter = WriteResponse
});
// Readiness probe (is the app ready to serve traffic?)
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
Predicate = check => check.Tags.Contains("ready"),
ResponseWriter = WriteResponse
});
// Full health check (all dependencies)
app.MapHealthChecks("/health", new HealthCheckOptions
{
ResponseWriter = WriteResponse
});
}
private static async Task WriteResponse(
HttpContext context,
HealthReport report)
{
context.Response.ContentType = "application/json";
var response = new
{
status = report.Status.ToString(),
version = Assembly.GetExecutingAssembly()
.GetCustomAttribute<AssemblyInformationalVersionAttribute>()
?.InformationalVersion ?? "unknown",
timestamp = DateTime.UtcNow,
checks = report.Entries.Select(e => new
{
name = e.Key,
status = e.Value.Status.ToString(),
duration = e.Value.Duration.TotalMilliseconds,
description = e.Value.Description,
data = e.Value.Data
})
};
await context.Response.WriteAsJsonAsync(response);
}
}
Health Check Response Example
{
"status": "Healthy",
"version": "2.1.0",
"timestamp": "2025-12-29T10:30:00Z",
"checks": [
{
"name": "database",
"status": "Healthy",
"duration": 12.5,
"description": "PostgreSQL connection is healthy"
},
{
"name": "redis",
"status": "Healthy",
"duration": 3.2,
"description": "Redis cache is accessible"
},
{
"name": "rabbitmq",
"status": "Healthy",
"duration": 8.1,
"description": "RabbitMQ broker is connected"
},
{
"name": "disk",
"status": "Healthy",
"duration": 1.0,
"description": "Disk space: 45% used"
}
]
}
24.11 Summary
This chapter provides complete deployment procedures including:
- Docker Configuration: Multi-stage Dockerfile and production docker-compose.yml
- Environment Variables: Complete reference for all configuration
- Deployment Checklist: Pre-deployment verification steps
- Zero-Downtime Strategy: Rolling update process diagram
- Rollback Procedures: Automated rollback scripts
- Health Checks: Implementation and response format
Next Chapter: Chapter 25: Monitoring and Alerting
“Deploy with confidence. Rollback without fear.”
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VII - Operations |
| Chapter | 24 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 25: Monitoring and Alerting
25.1 Overview
This chapter defines the complete monitoring architecture for the POS Platform, including metrics collection, dashboards, alerting rules, and incident response procedures.
25.2 Monitoring Architecture
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ MONITORING STACK │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│    POS-API-1    │      │    POS-API-2    │      │    POS-API-3    │
│                 │      │                 │      │                 │
│  /metrics:8080  │      │  /metrics:8080  │      │  /metrics:8080  │
└────────┬────────┘      └────────┬────────┘      └────────┬────────┘
         │                        │                        │
         └────────────────────────┼────────────────────────┘
                                  │
                                  ▼
               ┌──────────────────────────────────────┐
               │              PROMETHEUS              │
               │           (Metrics Store)            │
               │                                      │
               │  - Scrape interval: 15s              │
               │  - Retention: 15 days                │
               │  - Port: 9090                        │
               └──────────────────┬───────────────────┘
                                  │
         ┌────────────────────────┼────────────────────────┐
         │                        │                        │
         ▼                        ▼                        ▼
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│     GRAFANA     │      │  ALERTMANAGER   │      │      LOKI       │
│  (Dashboards)   │      │    (Alerts)     │      │     (Logs)      │
│                 │      │                 │      │                 │
│   Port: 3000    │      │   Port: 9093    │      │   Port: 3100    │
└─────────────────┘      └────────┬────────┘      └─────────────────┘
                                  │
                   ┌──────────────┼──────────────┐
                   │              │              │
                   ▼              ▼              ▼
              ┌─────────┐    ┌─────────┐   ┌───────────┐
              │  Slack  │    │  Email  │   │ PagerDuty │
              └─────────┘    └─────────┘   └───────────┘
25.3 Key Metrics
Business SLIs (Service Level Indicators)
| Metric | Description | Target | Alert Threshold |
|---|---|---|---|
| Transaction Success Rate | % of transactions completed successfully | > 99.9% | < 99.5% |
| Avg Transaction Time | End-to-end transaction processing | < 2s | > 5s |
| Payment Success Rate | % of payments processed successfully | > 99.5% | < 99% |
| Order Fulfillment Rate | Orders fulfilled within SLA | > 98% | < 95% |
| API Availability | Uptime of API endpoints | > 99.9% | < 99.5% |
Infrastructure Metrics
| Category | Metric | Warning | Critical |
|---|---|---|---|
| CPU | Usage % | > 70% | > 90% |
| Memory | Usage % | > 75% | > 90% |
| Disk | Usage % | > 70% | > 85% |
| Disk | I/O Wait | > 20% | > 40% |
| Network | Packet Loss | > 0.1% | > 1% |
| Network | Latency (ms) | > 100ms | > 500ms |
Application Metrics
| Metric | Description | Warning | Critical |
|---|---|---|---|
| Error Rate | 5xx responses as % of all requests | > 1% | > 5% |
| Response Time (p99) | 99th percentile latency | > 500ms | > 2000ms |
| Response Time (p50) | Median latency | > 100ms | > 500ms |
| Request Rate | Requests per second | N/A (baseline) | > 200% of baseline |
| Queue Depth | Messages waiting in RabbitMQ | > 1000 | > 5000 |
| Active Connections | DB connections in use | > 80% of pool | > 95% of pool |
| Cache Hit Rate | Redis cache effectiveness | < 80% | < 60% |
25.4 Prometheus Configuration
Complete prometheus.yml
# File: /pos-platform/monitoring/prometheus/prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
cluster: 'pos-production'
environment: 'production'
#=============================================
# ALERTING CONFIGURATION
#=============================================
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
#=============================================
# RULE FILES
#=============================================
rule_files:
- "/etc/prometheus/rules/*.yml"
#=============================================
# SCRAPE CONFIGURATIONS
#=============================================
scrape_configs:
#-----------------------------------------
# Prometheus Self-Monitoring
#-----------------------------------------
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
#-----------------------------------------
# POS API Instances
#-----------------------------------------
- job_name: 'pos-api'
metrics_path: '/metrics'
static_configs:
- targets:
- 'pos-api-1:8080'
- 'pos-api-2:8080'
- 'pos-api-3:8080'
labels:
app: 'pos-api'
tier: 'backend'
relabel_configs:
- source_labels: [__address__]
target_label: instance
regex: '([^:]+):\d+'
replacement: '${1}'
#-----------------------------------------
# PostgreSQL Exporter
#-----------------------------------------
- job_name: 'postgres'
static_configs:
- targets: ['postgres-exporter:9187']
labels:
app: 'postgres'
tier: 'database'
#-----------------------------------------
# Redis Exporter
#-----------------------------------------
- job_name: 'redis'
static_configs:
- targets: ['redis-exporter:9121']
labels:
app: 'redis'
tier: 'cache'
#-----------------------------------------
# RabbitMQ Exporter
#-----------------------------------------
- job_name: 'rabbitmq'
static_configs:
- targets: ['rabbitmq:15692']
labels:
app: 'rabbitmq'
tier: 'messaging'
#-----------------------------------------
# Nginx Exporter
#-----------------------------------------
- job_name: 'nginx'
static_configs:
- targets: ['nginx-exporter:9113']
labels:
app: 'nginx'
tier: 'ingress'
#-----------------------------------------
# Node Exporter (Host Metrics)
#-----------------------------------------
- job_name: 'node'
static_configs:
- targets:
- 'node-exporter:9100'
labels:
tier: 'infrastructure'
#-----------------------------------------
# Docker Container Metrics
#-----------------------------------------
- job_name: 'cadvisor'
static_configs:
- targets: ['cadvisor:8080']
labels:
tier: 'containers'
25.5 Alert Rules
Complete Alert Rules Configuration
# File: /pos-platform/monitoring/prometheus/rules/alerts.yml
groups:
#=============================================
# P1 - CRITICAL (Page immediately)
#=============================================
- name: critical_alerts
rules:
#-----------------------------------------
# API Down
#-----------------------------------------
- alert: APIDown
expr: up{job="pos-api"} == 0
for: 1m
labels:
severity: P1
team: platform
annotations:
summary: "POS API instance {{ $labels.instance }} is down"
description: "API instance has been unreachable for more than 1 minute"
runbook_url: "https://wiki.internal/runbooks/api-down"
#-----------------------------------------
# Database Down
#-----------------------------------------
- alert: DatabaseDown
expr: pg_up == 0
for: 30s
labels:
severity: P1
team: platform
annotations:
summary: "PostgreSQL database is down"
description: "Database connection failed for 30 seconds"
runbook_url: "https://wiki.internal/runbooks/db-down"
#-----------------------------------------
# High Error Rate
#-----------------------------------------
- alert: HighErrorRate
expr: |
(
sum(rate(http_requests_total{status=~"5.."}[5m]))
/
sum(rate(http_requests_total[5m]))
) * 100 > 5
for: 2m
labels:
severity: P1
team: platform
annotations:
summary: "High error rate detected: {{ $value | printf \"%.2f\" }}%"
description: "Error rate exceeds 5% for more than 2 minutes"
runbook_url: "https://wiki.internal/runbooks/high-error-rate"
#-----------------------------------------
# Transaction Failure Spike
#-----------------------------------------
- alert: TransactionFailureSpike
expr: |
(
sum(rate(pos_transactions_failed_total[5m]))
/
sum(rate(pos_transactions_total[5m]))
) * 100 > 1
for: 5m
labels:
severity: P1
team: platform
annotations:
summary: "Transaction failure rate: {{ $value | printf \"%.2f\" }}%"
description: "More than 1% of transactions are failing"
runbook_url: "https://wiki.internal/runbooks/transaction-failures"
#=============================================
# P2 - HIGH (Page during business hours)
#=============================================
- name: high_alerts
rules:
#-----------------------------------------
# High Response Time
#-----------------------------------------
- alert: HighResponseTime
expr: |
histogram_quantile(0.99,
sum(rate(http_request_duration_seconds_bucket[5m])) by (le)
) > 2
for: 5m
labels:
severity: P2
team: platform
annotations:
summary: "P99 response time is {{ $value | printf \"%.2f\" }}s"
description: "99th percentile latency exceeds 2 seconds"
runbook_url: "https://wiki.internal/runbooks/high-latency"
#-----------------------------------------
# Database Connection Pool Exhaustion
#-----------------------------------------
- alert: DBConnectionPoolLow
expr: |
          sum(pg_stat_activity_count) / max(pg_settings_max_connections) * 100 > 80
for: 5m
labels:
severity: P2
team: platform
annotations:
summary: "DB connection pool at {{ $value | printf \"%.0f\" }}%"
description: "Database connections nearly exhausted"
runbook_url: "https://wiki.internal/runbooks/db-connections"
#-----------------------------------------
# Queue Backlog
#-----------------------------------------
- alert: QueueBacklog
expr: rabbitmq_queue_messages > 5000
for: 10m
labels:
severity: P2
team: platform
annotations:
summary: "Message queue backlog: {{ $value }} messages"
description: "RabbitMQ queue has significant backlog"
runbook_url: "https://wiki.internal/runbooks/queue-backlog"
#-----------------------------------------
# Memory Pressure
#-----------------------------------------
- alert: HighMemoryUsage
expr: |
(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 90
for: 5m
labels:
severity: P2
team: infrastructure
annotations:
summary: "Memory usage at {{ $value | printf \"%.0f\" }}%"
description: "System memory is critically low"
runbook_url: "https://wiki.internal/runbooks/memory-pressure"
#=============================================
# P3 - MEDIUM (Email/Slack notification)
#=============================================
- name: medium_alerts
rules:
#-----------------------------------------
# CPU Warning
#-----------------------------------------
- alert: HighCPUUsage
expr: |
100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 70
for: 15m
labels:
severity: P3
team: infrastructure
annotations:
summary: "CPU usage at {{ $value | printf \"%.0f\" }}%"
description: "CPU usage elevated for extended period"
#-----------------------------------------
# Disk Space Warning
#-----------------------------------------
- alert: DiskSpaceLow
expr: |
(1 - (node_filesystem_avail_bytes / node_filesystem_size_bytes)) * 100 > 70
for: 30m
labels:
severity: P3
team: infrastructure
annotations:
summary: "Disk usage at {{ $value | printf \"%.0f\" }}% on {{ $labels.mountpoint }}"
description: "Disk space running low"
#-----------------------------------------
# Cache Hit Rate Low
#-----------------------------------------
- alert: CacheHitRateLow
expr: |
rate(redis_keyspace_hits_total[5m]) /
(rate(redis_keyspace_hits_total[5m]) + rate(redis_keyspace_misses_total[5m])) * 100 < 80
for: 30m
labels:
severity: P3
team: platform
annotations:
summary: "Cache hit rate: {{ $value | printf \"%.0f\" }}%"
description: "Redis cache effectiveness is low"
#=============================================
# P4 - LOW (Log/Dashboard only)
#=============================================
- name: low_alerts
rules:
#-----------------------------------------
# SSL Certificate Expiry
#-----------------------------------------
- alert: SSLCertExpiringSoon
expr: |
(probe_ssl_earliest_cert_expiry - time()) / 86400 < 30
for: 1h
labels:
severity: P4
team: platform
annotations:
summary: "SSL cert expires in {{ $value | printf \"%.0f\" }} days"
description: "Certificate renewal needed soon"
#-----------------------------------------
# Container Restarts
#-----------------------------------------
- alert: ContainerRestarts
expr: |
increase(kube_pod_container_status_restarts_total[1h]) > 3
for: 1h
labels:
severity: P4
team: platform
annotations:
summary: "Container {{ $labels.container }} restarted {{ $value | printf \"%.0f\" }} times"
description: "Container may be unstable"
25.6 AlertManager Configuration
# File: /pos-platform/monitoring/alertmanager/alertmanager.yml
global:
smtp_smarthost: 'smtp.company.com:587'
smtp_from: 'alerts@pos-platform.com'
smtp_auth_username: 'alerts@pos-platform.com'
smtp_auth_password: '${SMTP_PASSWORD}'
slack_api_url: '${SLACK_WEBHOOK_URL}'
pagerduty_url: 'https://events.pagerduty.com/v2/enqueue'
#=============================================
# ROUTING
#=============================================
route:
group_by: ['alertname', 'severity']
group_wait: 30s
group_interval: 5m
repeat_interval: 4h
receiver: 'default-receiver'
routes:
#-----------------------------------------
# P1 - Critical: Page immediately
#-----------------------------------------
- match:
severity: P1
receiver: 'pagerduty-critical'
continue: true
- match:
severity: P1
receiver: 'slack-critical'
continue: true
#-----------------------------------------
# P2 - High: Page during business hours
#-----------------------------------------
- match:
severity: P2
receiver: 'pagerduty-high'
active_time_intervals:
- business-hours
continue: true
- match:
severity: P2
receiver: 'slack-high'
#-----------------------------------------
# P3 - Medium: Slack + Email
#-----------------------------------------
- match:
severity: P3
receiver: 'slack-medium'
continue: true
- match:
severity: P3
receiver: 'email-team'
#-----------------------------------------
# P4 - Low: Slack only
#-----------------------------------------
- match:
severity: P4
receiver: 'slack-low'
#=============================================
# TIME INTERVALS
#=============================================
time_intervals:
- name: business-hours
time_intervals:
- weekdays: ['monday:friday']
times:
- start_time: '09:00'
end_time: '18:00'
#=============================================
# RECEIVERS
#=============================================
receivers:
- name: 'default-receiver'
slack_configs:
- channel: '#pos-alerts'
send_resolved: true
- name: 'pagerduty-critical'
pagerduty_configs:
- service_key: '${PAGERDUTY_SERVICE_KEY}'
severity: critical
- name: 'pagerduty-high'
pagerduty_configs:
- service_key: '${PAGERDUTY_SERVICE_KEY}'
severity: error
- name: 'slack-critical'
slack_configs:
- channel: '#pos-critical'
send_resolved: true
color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
title: '{{ .Status | toUpper }}: {{ .CommonAnnotations.summary }}'
text: '{{ .CommonAnnotations.description }}'
actions:
- type: button
text: 'Runbook'
url: '{{ .CommonAnnotations.runbook_url }}'
- type: button
text: 'Dashboard'
url: 'https://grafana.internal/d/pos-overview'
- name: 'slack-high'
slack_configs:
- channel: '#pos-alerts'
send_resolved: true
color: 'warning'
- name: 'slack-medium'
slack_configs:
- channel: '#pos-alerts'
send_resolved: true
- name: 'slack-low'
slack_configs:
- channel: '#pos-info'
send_resolved: false
- name: 'email-team'
email_configs:
- to: 'platform-team@company.com'
send_resolved: true
25.7 Grafana Dashboard
POS Platform Overview Dashboard (JSON)
{
"dashboard": {
"id": null,
"uid": "pos-overview",
"title": "POS Platform Overview",
"tags": ["pos", "production"],
"timezone": "browser",
"refresh": "30s",
"time": {
"from": "now-1h",
"to": "now"
},
"panels": [
{
"id": 1,
"title": "Transaction Success Rate",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 0, "y": 0},
"targets": [
{
"expr": "(sum(rate(pos_transactions_success_total[5m])) / sum(rate(pos_transactions_total[5m]))) * 100",
"legendFormat": "Success Rate"
}
],
"options": {
"colorMode": "value",
"graphMode": "area"
},
"fieldConfig": {
"defaults": {
"unit": "percent",
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "red", "value": null},
{"color": "yellow", "value": 99},
{"color": "green", "value": 99.5}
]
}
}
}
},
{
"id": 2,
"title": "Requests per Second",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 4, "y": 0},
"targets": [
{
"expr": "sum(rate(http_requests_total[1m]))",
"legendFormat": "RPS"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps"
}
}
},
{
"id": 3,
"title": "P99 Response Time",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 8, "y": 0},
"targets": [
{
"expr": "histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))",
"legendFormat": "P99"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 0.5},
{"color": "red", "value": 2}
]
}
}
}
},
{
"id": 4,
"title": "Error Rate",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 12, "y": 0},
"targets": [
{
"expr": "(sum(rate(http_requests_total{status=~\"5..\"}[5m])) / sum(rate(http_requests_total[5m]))) * 100",
"legendFormat": "Errors"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 1},
{"color": "red", "value": 5}
]
}
}
}
},
{
"id": 5,
"title": "Active Transactions",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 16, "y": 0},
"targets": [
{
"expr": "pos_transactions_in_progress",
"legendFormat": "Active"
}
]
},
{
"id": 6,
"title": "API Health",
"type": "stat",
"gridPos": {"h": 4, "w": 4, "x": 20, "y": 0},
"targets": [
{
"expr": "count(up{job=\"pos-api\"} == 1)",
"legendFormat": "Healthy Instances"
}
],
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "red", "value": null},
{"color": "yellow", "value": 2},
{"color": "green", "value": 3}
]
}
}
}
},
{
"id": 10,
"title": "Request Rate by Endpoint",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 4},
"targets": [
{
"expr": "sum(rate(http_requests_total[5m])) by (endpoint)",
"legendFormat": "{{endpoint}}"
}
]
},
{
"id": 11,
"title": "Response Time Distribution",
"type": "heatmap",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 4},
"targets": [
{
"expr": "sum(increase(http_request_duration_seconds_bucket[1m])) by (le)",
"legendFormat": "{{le}}"
}
]
},
{
"id": 20,
"title": "Database Connections",
"type": "timeseries",
"gridPos": {"h": 6, "w": 8, "x": 0, "y": 12},
"targets": [
{
"expr": "pg_stat_activity_count",
"legendFormat": "Active"
},
{
"expr": "pg_settings_max_connections",
"legendFormat": "Max"
}
]
},
{
"id": 21,
"title": "Redis Operations",
"type": "timeseries",
"gridPos": {"h": 6, "w": 8, "x": 8, "y": 12},
"targets": [
{
"expr": "rate(redis_commands_processed_total[1m])",
"legendFormat": "Commands/sec"
}
]
},
{
"id": 22,
"title": "Queue Depth",
"type": "timeseries",
"gridPos": {"h": 6, "w": 8, "x": 16, "y": 12},
"targets": [
{
"expr": "rabbitmq_queue_messages",
"legendFormat": "{{queue}}"
}
]
},
{
"id": 30,
"title": "CPU Usage by Container",
"type": "timeseries",
"gridPos": {"h": 6, "w": 12, "x": 0, "y": 18},
"targets": [
{
"expr": "rate(container_cpu_usage_seconds_total{container!=\"\"}[5m]) * 100",
"legendFormat": "{{container}}"
}
],
"fieldConfig": {
"defaults": {"unit": "percent"}
}
},
{
"id": 31,
"title": "Memory Usage by Container",
"type": "timeseries",
"gridPos": {"h": 6, "w": 12, "x": 12, "y": 18},
"targets": [
{
"expr": "container_memory_usage_bytes{container!=\"\"} / 1024 / 1024",
"legendFormat": "{{container}}"
}
],
"fieldConfig": {
"defaults": {"unit": "decmbytes"}
}
}
]
}
}
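If you prefer provisioning this dashboard through Grafana's HTTP API instead of the UI, the sketch below posts it to `POST /api/dashboards/db`. It assumes the JSON above is saved to a file, and `GRAFANA_URL` and `API_TOKEN` are placeholders you replace with your own Grafana instance and service-account token:

```python
import json
import urllib.request

GRAFANA_URL = "http://grafana.internal:3000"   # placeholder — your Grafana URL
API_TOKEN = "glsa_replace_me"                  # placeholder — service-account token

def load_payload(path: str) -> dict:
    # The file above already wraps the panels in a top-level "dashboard" key,
    # so we only need to add "overwrite" for idempotent re-provisioning.
    with open(path) as f:
        payload = json.load(f)
    payload["overwrite"] = True
    return payload

def provision(path: str) -> None:
    req = urllib.request.Request(
        f"{GRAFANA_URL}/api/dashboards/db",
        data=json.dumps(load_payload(path)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    urllib.request.urlopen(req)  # raises urllib.error.HTTPError on non-2xx
```

Because `overwrite` is set, re-running the script updates the existing dashboard (matched by `uid`) rather than failing on a duplicate.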
25.8 Incident Response Runbooks
Runbook: API Down (P1)
# Runbook: API Down
**Alert**: APIDown
**Severity**: P1 (Critical)
**Impact**: Customers cannot complete transactions
## 25.9 Symptoms
- Health check endpoint returning non-200
- Load balancer showing unhealthy targets
- Transaction error rate spike
## 25.10 Immediate Actions (First 5 minutes)
1. **Verify the alert**
   ```bash
   curl -s http://pos-api:8080/health | jq
   docker ps | grep pos-api
   ```
2. **Check container logs**
   ```bash
   docker logs pos-api-1 --tail 100
   docker logs pos-api-2 --tail 100
   docker logs pos-api-3 --tail 100
   ```
3. **Check resource usage**
   ```bash
   docker stats --no-stream
   ```
4. **Restart unhealthy containers**
   ```bash
   docker restart pos-api-1  # Replace with affected container
   ```
25.11 Escalation
- If all containers down: Page Infrastructure Lead
- If database issue: Page Database Team
- If network issue: Page Network Team
25.12 Resolution Checklist
- Identify root cause
- Apply fix (restart, rollback, config change)
- Verify health checks passing
- Monitor for 15 minutes
- Update incident ticket
- Schedule postmortem if major outage
25.13 Common Causes
| Cause | Solution |
|---|---|
| OOM (Out of Memory) | Restart, investigate memory leak |
| Database connection failure | Check DB health, restart connections |
| Deployment failure | Rollback to previous version |
| Network partition | Check network, restart networking |
### Runbook: High Error Rate (P1)
# Runbook: High Error Rate
**Alert**: HighErrorRate
**Severity**: P1 (Critical)
**Impact**: Significant portion of requests failing
## 25.14 Symptoms
- 5xx error rate > 5%
- Customer complaints about failures
- Transaction success rate dropping
## 25.15 Immediate Actions
1. **Identify error patterns**
   ```bash
   # Check recent errors in logs
   docker logs pos-api-1 2>&1 | grep -i error | tail -50
   ```
   ```logql
   # Query Loki for error patterns
   {job="pos-api"} |= "error" | json | line_format "{{.message}}"
   ```
2. **Check which endpoints are failing**
   ```promql
   # In Grafana/Prometheus
   sum(rate(http_requests_total{status=~"5.."}[5m])) by (endpoint, status)
   ```
3. **Check dependent services**
   ```bash
   # Database
   docker exec pos-postgres-primary pg_isready
   # Redis
   docker exec pos-redis redis-cli ping
   # RabbitMQ
   curl -u admin:password http://localhost:15672/api/healthchecks/node
   ```
25.16 Root Cause Investigation
| Error Pattern | Likely Cause | Solution |
|---|---|---|
| 500 on /api/transactions | Database timeout | Check DB connections |
| 503 across all endpoints | Overload | Scale up or rate limit |
| 502 from nginx | Container crash | Restart containers |
| Timeout errors | Slow DB queries | Kill long queries, add indexes |
25.17 Recovery Steps
- If DB issue: Restart connection pool
- If overload: Enable aggressive rate limiting
- If code bug: Rollback deployment
- If external dependency: Enable circuit breaker
---
## 25.18 OpenTelemetry Integration
### Overview
The monitoring stack is enhanced with OpenTelemetry (OTel) for comprehensive observability that prevents vendor lock-in and enables "Trace-to-Code" root cause analysis.
### Primary Pattern
| Attribute | Selection |
|-----------|-----------|
| **Pattern** | OpenTelemetry "Trace-to-Code" Pipeline |
| **Rationale** | Industry-standard protocol; trace errors from store terminal directly to source code line |
| **Vendor Lock-in** | None - OTel is open standard |
### Technology Stack (The "LGTM" Stack)
+------------------------------------------------------------------+
|                         THE LGTM STACK                           |
+------------------------------------------------------------------+
|                                                                  |
|   L = Loki        (Log Aggregation)                              |
|   G = Grafana     (Visualization & Dashboards)                   |
|   T = Tempo       (Distributed Tracing)                          |
|   M = Prometheus  (Metrics Collection)  <- Already configured    |
|                                                                  |
+------------------------------------------------------------------+
| Component | Tool | Purpose | Port |
|-----------|------|---------|------|
| **L** - Logs | Loki | Log aggregation, search | 3100 |
| **G** - Grafana | Grafana | Unified dashboards | 3000 |
| **T** - Traces | Tempo (or Jaeger) | Distributed tracing | 4317 (OTLP), 16686 (UI) |
| **M** - Metrics | Prometheus | Metrics collection | 9090 |
### Docker Compose Addition
```yaml
# Add to docker-compose.monitoring.yml
services:
# ... existing prometheus, grafana, alertmanager ...
# Loki - Log Aggregation
loki:
image: grafana/loki:2.9.0
container_name: pos-loki
ports:
- "3100:3100"
volumes:
- loki_data:/loki
- ./loki/loki-config.yml:/etc/loki/local-config.yaml
command: -config.file=/etc/loki/local-config.yaml
networks:
- monitoring
# Tempo - Distributed Tracing
tempo:
image: grafana/tempo:2.3.0
container_name: pos-tempo
ports:
# 4317/4318 are not published on the host here; the OTel Collector below
# publishes them and forwards traces to tempo:4317 over the docker network
- "3200:3200" # Tempo query
volumes:
- tempo_data:/var/tempo
- ./tempo/tempo-config.yml:/etc/tempo/tempo.yaml
command: -config.file=/etc/tempo/tempo.yaml
networks:
- monitoring
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.89.0
container_name: pos-otel-collector
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "8888:8888" # Prometheus metrics
volumes:
- ./otel/otel-collector-config.yml:/etc/otel/config.yaml
command: --config=/etc/otel/config.yaml
networks:
- monitoring
volumes:
loki_data:
tempo_data:
```
OpenTelemetry Collector Configuration
# monitoring/otel/otel-collector-config.yml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 10s
send_batch_size: 1024
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
exporters:
# Send traces to Tempo
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
# Send metrics to Prometheus
prometheus:
endpoint: 0.0.0.0:8889
namespace: otel
# Send logs to Loki
loki:
endpoint: http://loki:3100/loki/api/v1/push
labels:
resource:
service.name: "service_name"
service.instance.id: "instance_id"
service:
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlp/tempo]
metrics:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [prometheus]
logs:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [loki]
.NET Application Instrumentation
// Program.cs - Add OpenTelemetry instrumentation
using System.Reflection;
using OpenTelemetry.Logs;
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
var builder = WebApplication.CreateBuilder(args);
// Define resource attributes
var resourceBuilder = ResourceBuilder.CreateDefault()
.AddService(
serviceName: "pos-api",
serviceVersion: Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "1.0.0",
serviceInstanceId: Environment.MachineName)
.AddAttributes(new Dictionary<string, object>
{
["deployment.environment"] = builder.Environment.EnvironmentName,
["tenant.id"] = "dynamic" // Set per-request
});
// Configure OpenTelemetry Tracing
builder.Services.AddOpenTelemetry()
.WithTracing(tracing => tracing
.SetResourceBuilder(resourceBuilder)
.AddSource("PosPlatform.*")
.AddAspNetCoreInstrumentation(options =>
{
options.RecordException = true;
options.EnrichWithHttpRequest = (activity, request) =>
{
activity.SetTag("tenant.id", request.Headers["X-Tenant-Id"].FirstOrDefault());
};
})
.AddHttpClientInstrumentation()
.AddEntityFrameworkCoreInstrumentation()
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://otel-collector:4317");
}))
.WithMetrics(metrics => metrics
.SetResourceBuilder(resourceBuilder)
.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddRuntimeInstrumentation()
.AddPrometheusExporter()
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://otel-collector:4317");
}));
// Configure OpenTelemetry Logging
builder.Logging.AddOpenTelemetry(logging => logging
.SetResourceBuilder(resourceBuilder)
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://otel-collector:4317");
}));
Custom Span Example (Trace-to-Code)
// SaleService.cs - Custom tracing for business operations
public class SaleService
{
private static readonly ActivitySource ActivitySource = new("PosPlatform.Sales");
private readonly ILogger<SaleService> _logger;
public SaleService(ILogger<SaleService> logger) => _logger = logger;
public async Task<Sale> CreateSaleAsync(CreateSaleCommand command)
{
// Create custom span with source code reference
using var activity = ActivitySource.StartActivity(
"CreateSale",
ActivityKind.Internal,
Activity.Current?.Context ?? default);
activity?.SetTag("sale.location_id", command.LocationId);
activity?.SetTag("sale.line_items_count", command.LineItems.Count);
activity?.SetTag("code.filepath", "SaleService.cs");
activity?.SetTag("code.lineno", 25);
activity?.SetTag("code.function", "CreateSaleAsync");
try
{
// Business logic
var sale = await ProcessSale(command);
activity?.SetTag("sale.id", sale.Id);
activity?.SetTag("sale.total", sale.Total);
activity?.SetStatus(ActivityStatusCode.Ok);
return sale;
}
catch (Exception ex)
{
activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
activity?.RecordException(ex);
_logger.LogError(ex, "Failed to create sale for location {LocationId}", command.LocationId);
throw;
}
}
}
Trace-to-Code Dashboard Query
# Grafana Tempo query - Find traces with errors from specific store
{
resource.service.name = "pos-api" &&
span.tenant.id = "NEXUS" &&
status = error
}
| select(
traceDuration,
resource.service.name,
span.code.filepath,
span.code.lineno,
span.code.function,
statusMessage
)
Observability Overload Mitigation
To prevent alert fatigue and noise:
| Strategy | Implementation |
|---|---|
| Sampling | Sample 10% of successful traces, 100% of errors |
| Aggregation | Batch traces before export (10s window) |
| Filtering | Exclude health check endpoints from tracing |
| Retention | Keep raw traces 7 days, aggregates 30 days |
# Sampling configuration in OTel Collector
processors:
probabilistic_sampler:
sampling_percentage: 10 # Sample 10% of traces
tail_sampling:
policies:
- name: always-sample-errors
type: status_code
status_code: {status_codes: [ERROR]}
- name: sample-successful
type: probabilistic
probabilistic: {sampling_percentage: 10}
Grafana Data Source Configuration
# grafana/provisioning/datasources/datasources.yml
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://prometheus:9090
isDefault: true
- name: Loki
type: loki
url: http://loki:3100
- name: Tempo
type: tempo
url: http://tempo:3200
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ['service.name']
tracesToMetrics:
datasourceUid: prometheus
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
Correlating Traces, Logs, and Metrics
With LGTM stack, you can jump between:
+------------------------------------------------------------------+
| OBSERVABILITY CORRELATION |
+------------------------------------------------------------------+
| |
| TRACE (Tempo) |
| ┌────────────────────────────────────────────────────────────┐ |
| │ TraceID: abc123 │ |
| │ Span: CreateSale (45ms) │ |
| │ └─ Span: ValidateInventory (12ms) │ |
| │ └─ Span: ProcessPayment (28ms) [ERROR] │ |
| │ └─ code.filepath: PaymentService.cs:142 │ |
| └────────────────────────────────────────────────────────────┘ |
| │ |
| │ Click "Logs for this span" |
| ▼ |
| LOGS (Loki) |
| ┌────────────────────────────────────────────────────────────┐ |
| │ 2026-01-24 10:15:32 ERROR Payment declined: Insufficient │ |
| │ 2026-01-24 10:15:32 INFO Rolling back transaction abc123 │ |
| └────────────────────────────────────────────────────────────┘ |
| │ |
| │ Click "Metrics for this time" |
| ▼ |
| METRICS (Prometheus) |
| ┌────────────────────────────────────────────────────────────┐ |
| │ payment_failures_total{reason="insufficient_funds"} = 47 │ |
| │ payment_latency_p99 = 2.3s │ |
| └────────────────────────────────────────────────────────────┘ |
| |
+------------------------------------------------------------------+
Reference
For the complete observability sampling strategy and its risk mitigations, see Section 25.19 below.
25.19 Observability Sampling Strategy
Overview
At production scale, collecting 100% of traces, metrics, and logs becomes prohibitively expensive. A thoughtful sampling strategy reduces costs while preserving visibility into errors and performance issues.
| Attribute | Selection |
|---|---|
| Approach | Head-based + Tail-based Sampling |
| Error Retention | 100% of errors sampled |
| Normal Traffic | 1-10% sampled based on volume |
| Cost Target | < $500/month for LGTM stack |
Sampling Strategy Matrix
+------------------------------------------------------------------+
| SAMPLING STRATEGY MATRIX |
+------------------------------------------------------------------+
| |
| SIGNAL TYPE SAMPLE RATE CONDITION |
| ───────────────────────────────────────────────────────────── |
| Traces (errors) 100% status_code >= 500 OR error=true|
| Traces (slow) 100% duration > 2s |
| Traces (normal) 5% All other traces |
| Traces (health) 0% /health, /metrics endpoints |
| |
| Metrics 100% Always (cheap to store) |
| Metrics (custom) Aggregated Sum/avg over 15s window |
| |
| Logs (ERROR+) 100% severity >= ERROR |
| Logs (WARN) 50% severity == WARN |
| Logs (INFO) 10% severity == INFO |
| Logs (DEBUG) 0% Production only; 100% in dev |
| Logs (health) 0% Health check logs suppressed |
| |
+------------------------------------------------------------------+
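The keep rates in the matrix combine into an overall sampling fraction. The sketch below estimates it using assumed traffic shares (1% errors, 2% slow, 5% payment routes — illustrative numbers, not platform measurements):

```python
# Assumed traffic shares — hypothetical values for illustration only
p_error, p_slow, p_payment = 0.01, 0.02, 0.05
p_rest = 1 - (p_error + p_slow + p_payment)   # everything else: 92%

# Errors, slow requests, and payments are kept at 100%; the rest at 5%
keep = p_error + p_slow + p_payment + p_rest * 0.05

print(f"overall trace keep rate ~ {keep:.1%}")   # ~ 12.6%
```

Even with 100% retention of the "interesting" traffic, the blended rate stays near 12%, which is what makes the cost reduction below possible.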
Head-Based Sampling
Decision made at trace start. Simple but may miss errors that occur later in the trace.
// Program.cs - Head-based sampling configuration
builder.Services.AddOpenTelemetry()
.WithTracing(tracing => tracing
.SetSampler(new ParentBasedSampler(new TraceIdRatioBasedSampler(0.05))) // 5% sampling
.AddAspNetCoreInstrumentation(options =>
{
// Always exclude health endpoints
options.Filter = httpContext =>
!httpContext.Request.Path.StartsWithSegments("/health") &&
!httpContext.Request.Path.StartsWithSegments("/metrics");
})
);
Tail-Based Sampling (Recommended)
Decision made after trace completes. Ensures all errors and slow requests are captured.
# otel-collector-config.yaml
processors:
# Tail-based sampling processor
tail_sampling:
decision_wait: 10s # Wait for span completion
num_traces: 100000 # Max traces in memory
expected_new_traces_per_sec: 1000
policies:
# Policy 1: Always sample errors (100%)
- name: errors-policy
type: status_code
status_code:
status_codes: [ERROR]
# Policy 2: Always sample slow requests (100%)
- name: latency-policy
type: latency
latency:
threshold_ms: 2000 # > 2 seconds
# Policy 3: Always sample payment operations (100%)
- name: payments-policy
type: string_attribute
string_attribute:
key: http.route
values:
- /api/v1/payments
- /api/v1/refunds
enabled_regex_matching: false
# Policy 4: Sample normal traffic (5%)
- name: probabilistic-policy
type: probabilistic
probabilistic:
sampling_percentage: 5
service:
pipelines:
traces:
receivers: [otlp]
processors: [tail_sampling, batch]
exporters: [otlp/tempo]
Log Sampling Configuration
# Loki pipeline configuration for log sampling
pipeline_stages:
# Drop health check logs entirely
- match:
selector: '{job="pos-api"} |~ "GET /health"'
action: drop
# Drop metrics endpoint logs
- match:
selector: '{job="pos-api"} |~ "GET /metrics"'
action: drop
# Sample INFO logs at 10%
- match:
selector: '{level="info"}'
stages:
- sampling:
rate: 0.1
# Sample WARN logs at 50%
- match:
selector: '{level="warn"}'
stages:
- sampling:
rate: 0.5
# Keep 100% of ERROR and above
- match:
selector: '{level=~"error|fatal|critical"}'
stages:
- sampling:
rate: 1.0
Application-Level Log Filtering
// Program.cs - Serilog with level-based filtering
builder.Host.UseSerilog((context, config) =>
{
config
.MinimumLevel.Information()
.MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
.MinimumLevel.Override("Microsoft.EntityFrameworkCore", LogEventLevel.Warning)
// Don't log health checks
.Filter.ByExcluding(Matching.WithProperty<string>("RequestPath", p =>
p.Contains("/health") || p.Contains("/metrics")))
// Sample INFO logs in production
.Filter.ByExcluding(e =>
e.Level == LogEventLevel.Information &&
Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Production" &&
Random.Shared.NextDouble() > 0.1) // Keep 10%
.WriteTo.Console()
.WriteTo.OpenTelemetry(options =>
{
options.Endpoint = "http://otel-collector:4317";
options.Protocol = OtlpProtocol.Grpc;
});
});
Sampling Cost Analysis
+------------------------------------------------------------------+
| MONTHLY COST COMPARISON |
+------------------------------------------------------------------+
| |
| SCENARIO: 10 API instances, 1000 req/sec, 30-day retention |
| |
| WITHOUT SAMPLING WITH SAMPLING |
| ───────────────────────── ───────────────────────── |
| Traces: Traces: |
| 2.6B traces/month 130M traces/month (5%) |
| Storage: ~2.6 TB Storage: ~130 GB |
| Cost: ~$2,000/month Cost: ~$100/month |
| |
| Logs: Logs: |
| 5B log lines/month 500M log lines (10% avg) |
| Storage: ~5 TB Storage: ~500 GB |
| Cost: ~$3,000/month Cost: ~$300/month |
| |
| TOTAL: ~$5,000/month TOTAL: ~$400/month |
| ───────────────────────────────────────────────────────────── |
| SAVINGS: 92% reduction with smart sampling |
| |
+------------------------------------------------------------------+
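The monthly figures in the box can be sanity-checked with a few lines of arithmetic. This sketch assumes one stored trace per request and roughly 1 KB per stored trace (assumptions for the estimate, not measured values):

```python
REQ_PER_SEC = 1_000        # scenario from the box above
DAYS = 30
BYTES_PER_TRACE = 1_000    # assumption: ~1 KB stored per trace

traces_per_month = REQ_PER_SEC * 86_400 * DAYS     # 86,400 seconds per day
sampled_traces = int(traces_per_month * 0.05)      # 5% sampling of normal traffic

raw_storage_tb = traces_per_month * BYTES_PER_TRACE / 1e12
sampled_storage_gb = sampled_traces * BYTES_PER_TRACE / 1e9

print(f"{traces_per_month:,} traces/month")        # 2,592,000,000 (~2.6B)
print(f"{sampled_traces:,} sampled at 5%")         # 129,600,000 (~130M)
print(f"~{raw_storage_tb:.1f} TB raw vs ~{sampled_storage_gb:.0f} GB sampled")
```

The same arithmetic applied to log lines reproduces the ~10x log-volume reduction shown in the right-hand column.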
Preserving Debug Capability
While sampling reduces volume, ensure debugging capability is preserved:
// Enable full sampling for specific requests via header
public class DynamicSamplingMiddleware
{
private readonly RequestDelegate _next;
public DynamicSamplingMiddleware(RequestDelegate next) => _next = next;
public async Task InvokeAsync(HttpContext context)
{
// Check for debug header
if (context.Request.Headers.TryGetValue("X-Force-Trace", out var forceTrace) &&
forceTrace == "true")
{
// Tag the activity so a tail-sampling policy can match and keep this trace
Activity.Current?.SetTag("sampling.priority", 1);
Activity.Current?.SetTag("debug.forced", true);
}
await _next(context);
}
}
// Usage: Add header to force sampling
// curl -H "X-Force-Trace: true" https://api.posplatform.io/api/v1/sales
Sampling Metrics
Monitor sampling effectiveness:
# prometheus/rules/sampling-rules.yml
groups:
- name: sampling-metrics
rules:
- record: otel:traces_sampled:rate5m
expr: sum(rate(otelcol_processor_tail_sampling_count_traces_sampled[5m]))
- record: otel:traces_dropped:rate5m
expr: sum(rate(otelcol_processor_tail_sampling_count_traces_dropped[5m]))
- record: otel:sampling:ratio
expr: |
otel:traces_sampled:rate5m / (otel:traces_sampled:rate5m + otel:traces_dropped:rate5m)
- alert: SamplingRateTooLow
expr: otel:sampling:ratio < 0.01
for: 15m
labels:
severity: warning
annotations:
summary: "Trace sampling rate is below 1%"
description: "Consider increasing sampling or checking for data loss"
- alert: ErrorsNotSampled
expr: |
rate(http_server_requests_total{status=~"5.."}[5m]) >
rate(otel_traces_sampled{has_error="true"}[5m]) * 1.1
for: 5m
labels:
severity: critical
annotations:
summary: "Errors may not be properly sampled"
description: "More HTTP 5xx errors than sampled error traces"
Sampling Decision Flowchart
┌─────────────────────────────────────────────────────────────────┐
│ SAMPLING DECISION FLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ New Request Arrives │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Health/Metrics │──Yes──► DROP (0%) │
│ │ endpoint? │ │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ ┌─────────────────┐ │
│ │ X-Force-Trace │──Yes──► SAMPLE (100%) │
│ │ header present? │ │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ [Request Processes...] │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Error occurred? │──Yes──► SAMPLE (100%) │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Duration > 2s? │──Yes──► SAMPLE (100%) │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Payment route? │──Yes──► SAMPLE (100%) │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Random 5%? │──Yes──► SAMPLE │
│ └────────┬────────┘ │
│ │ No │
│ ▼ │
│ DROP │
│ │
└─────────────────────────────────────────────────────────────────┘
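The same decision flow is easy to mirror in a few lines when validating collector policies offline. The function below is an illustrative sketch, not platform code; `coin` stands in for a pre-drawn uniform random value in [0, 1):

```python
def keep_trace(path: str, forced: bool = False, error: bool = False,
               duration_s: float = 0.0, coin: float = 1.0) -> bool:
    """Mirror of the sampling decision flow above, checks applied in order."""
    if path.startswith(("/health", "/metrics")):
        return False                                  # drop observability noise
    if forced:
        return True                                   # X-Force-Trace header
    if error or duration_s > 2.0:
        return True                                   # keep all errors / slow requests
    if path.startswith(("/api/v1/payments", "/api/v1/refunds")):
        return True                                   # keep all payment operations
    return coin < 0.05                                # 5% of everything else

# Examples:
assert keep_trace("/health") is False
assert keep_trace("/api/v1/sales", error=True) is True
assert keep_trace("/api/v1/payments") is True
assert keep_trace("/api/v1/sales", coin=0.5) is False
```

Note the ordering matters: the health-endpoint drop runs first, so even an erroring `/health` probe is never traced, matching the flowchart.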
25.20 Summary
This chapter provides complete monitoring coverage:
- Architecture: Prometheus + Grafana + AlertManager stack
- Metrics: Business SLIs and infrastructure metrics with thresholds
- Prometheus Config: Complete scrape configuration
- Alert Rules: P1-P4 severity levels with escalation
- Grafana Dashboard: Production-ready JSON dashboard
- Runbooks: Step-by-step incident response procedures
Next Chapter: Chapter 26: Security and Compliance
“You cannot improve what you do not measure.”
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VII - Operations |
| Chapter | 25 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 26: Security and Compliance
26.1 Overview
This chapter covers security architecture, PCI-DSS compliance requirements, data protection strategies, and security audit procedures for the POS Platform.
26.2 Security Architecture
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ SECURITY LAYERS │
└─────────────────────────────────────────────────────────────────────────────────────┘
INTERNET
│
┌────────────┴────────────┐
│ WAF / DDoS │ Layer 1: Edge Security
│ (Cloudflare/AWS) │ - Rate limiting
└────────────┬────────────┘ - Bot protection
│ - Geo-blocking
┌────────────┴────────────┐
│ Load Balancer │ Layer 2: TLS Termination
│ (TLS 1.3 only) │ - Certificate management
└────────────┬────────────┘ - HSTS enforcement
│
┌───────────────────────┼───────────────────────┐
│ │ │
┌────────┴────────┐ ┌────────┴────────┐ ┌────────┴────────┐
│ POS API │ │ POS API │ │ POS API │
│ (Container) │ │ (Container) │ │ (Container) │
│ │ │ │ │ │
│ Layer 3: │ │ - JWT Auth │ │ - Input Valid. │
│ Application │ │ - RBAC │ │ - Output Encod. │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
│ │ │
└───────────────────────┼───────────────────────┘
│
┌────────────┴────────────┐
│ Network Firewall │ Layer 4: Network
│ (Docker Network) │ - Microsegmentation
└────────────┬────────────┘ - No direct DB access
│
┌───────────────────────┼───────────────────────┐
│ │ │
┌────────┴────────┐ ┌────────┴────────┐ ┌────────┴────────┐
│ PostgreSQL │ │ Redis │ │ RabbitMQ │
│ │ │ │ │ │
│ Layer 5: │ │ - Encrypted │ │ - TLS enabled │
│ Data Layer │ │ - Auth required │ │ - Auth required │
│ │ │ │ │ │
│ - Encryption │ │ │ │ │
│ - Row-level sec │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
## 26.3 PCI-DSS Compliance Checklist
### Complete 12 Requirements
# PCI-DSS v4.0 Compliance Checklist for POS Platform
## 26.4 REQUIREMENT 1: Install and Maintain Network Security Controls
### 1.1 Network Security Policies
- [x] Firewall rules documented
- [x] Network diagram maintained
- [x] All connections reviewed quarterly
- [x] Traffic restrictions enforced
### 1.2 Network Configuration Standards
- [x] Default passwords changed on all devices
- [x] Unnecessary services disabled
- [x] Security patches applied within 30 days
- [x] Anti-spoofing measures implemented
### Implementation
```bash
# Docker network isolation
docker network create --driver bridge \
--subnet=172.28.0.0/16 \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
pos-secure-network
# Firewall rules (iptables)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -j DROP
```

## 26.5 REQUIREMENT 2: Apply Secure Configurations
### 2.1 System Configuration Standards
- Hardened container images (Alpine-based)
- Non-root container execution
- Minimal installed packages
- Security benchmarks applied (CIS)
### 2.2 Secure Defaults
- Default accounts disabled/removed
- Vendor defaults changed
- Unnecessary functionality removed
### Implementation

```dockerfile
# Secure Dockerfile practices
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine

# Remove unnecessary packages
RUN apk del --purge wget curl || true

# Read-only application files (must run before dropping privileges)
RUN chmod -R 555 /app

# Non-root user
RUN addgroup -S posgroup && adduser -S posuser -G posgroup
USER posuser
```
## 26.6 REQUIREMENT 3: Protect Stored Account Data
### 3.1 Data Retention Policy
- Card data retention minimized
- PAN stored only when necessary (we don’t store)
- Quarterly purge of unnecessary data
- Documented retention periods
### 3.2 Sensitive Authentication Data
- Full track data NOT stored ✓
- CVV/CVC NOT stored ✓
- PIN/PIN block NOT stored ✓
### 3.3 PAN Display Masking
- PAN masked on display (show last 4 only)
- Full PAN not logged
### 3.4 PAN Rendering Unreadable
- We use tokenization (no PAN stored)
- Stripe tokens reference only
### What We Store vs. Don’t Store
| Data Type | Stored? | Method | Location |
|---|---|---|---|
| Full PAN | NO | Tokenized | Stripe |
| Last 4 digits | YES | Masked | Local DB |
| CVV/CVC | NO | Never captured | N/A |
| Expiry Date | YES | Encrypted | Local DB |
| Cardholder Name | YES | Encrypted | Local DB |
| Track Data | NO | Never captured | N/A |
| PIN | NO | Never captured | N/A |
| Payment Token | YES | As-is | Local DB |
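The masking rule in 3.3 can be sketched in a few lines. This is an illustrative Python helper, not part of the platform codebase; the name `mask_pan` is ours:

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display: show at most the last 4 digits (PCI-DSS 3.4)."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:  # not a plausible card number
        raise ValueError("not a valid PAN")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4242 4242 4242 4242"))  # ************4242
```

The same helper should be the *only* path from a PAN to any UI, log line, or receipt template, so that an unmasked value can never leak through a forgotten code path.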
## 26.7 REQUIREMENT 4: Protect Data in Transit
### 4.1 Encryption Standards
- TLS 1.2+ for all transmissions
- TLS 1.3 preferred
- Strong cipher suites only
- Certificate validation enforced
### 4.2 Wireless Security
- WPA3 for wireless POS terminals
- No open wireless networks
- Wireless IDS monitoring
### Implementation

```nginx
# Nginx TLS configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

# HSTS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```
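The same TLS floor should be enforced on the client side of every internal connection, not just at the edge. A minimal sketch using only the Python standard library (illustrative; any HTTP client with a configurable TLS minimum works the same way):

```python
import ssl

# Build a client context that refuses anything below TLS 1.2,
# mirroring the server-side ssl_protocols directive above
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print("client TLS floor:", ctx.minimum_version.name)
```

Passing this context to `http.client` or `urllib` ensures a misconfigured peer that only offers TLS 1.0/1.1 fails the handshake instead of silently downgrading.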
## 26.8 REQUIREMENT 5: Protect from Malicious Software
### 5.1 Anti-Malware Deployment
- Container scanning in CI/CD
- Runtime malware detection
- Automatic signature updates
### 5.2 Anti-Phishing
- Email filtering enabled
- User awareness training
- SPF/DKIM/DMARC configured
### Implementation

```yaml
# CI/CD container scanning (GitHub Actions)
- name: Container Security Scan
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: pos-api:${{ github.sha }}
    severity: 'CRITICAL,HIGH'
    exit-code: '1'
```
## 26.9 REQUIREMENT 6: Develop and Maintain Secure Systems
### 6.1 Secure Development Lifecycle
- Security requirements in design phase
- Code review mandatory
- SAST (Static Analysis) in CI
- DAST (Dynamic Analysis) pre-release
### 6.2 Change Control
- All changes documented
- Security impact assessment
- Rollback procedures defined
- Separation of dev/test/prod
### 6.3 Vulnerability Management
- Known vulnerabilities addressed
- Security patches within 30 days (critical)
- Dependency scanning automated
## 26.10 REQUIREMENT 7: Restrict Access to System Components
### 7.1 Access Control Model
- Role-based access control (RBAC)
- Least privilege principle
- Access reviews quarterly
- Default deny policy
### 7.2 Access Control System
- Unique user IDs
- MFA for admin access
- Session timeout enforced
### Access Control Matrix
| Role | Transactions | Inventory | Reports | Users | Settings |
|---|---|---|---|---|---|
| Cashier | Create | View | None | None | None |
| Supervisor | All | All | Store | None | Store |
| Store Manager | All | All | Store | Store | Store |
| Regional Manager | View | View | Region | View | View |
| Admin | All | All | All | All | All |
| System | API Only | API Only | None | None | None |
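The default-deny semantics of the matrix can be captured in a tiny lookup. This is an illustrative Python sketch (the table and function names are ours, not the platform API); the key property is that an absent entry means *deny*, never *allow*:

```python
# Permission table mirroring part of the matrix above; "all" is a wildcard scope
PERMISSIONS = {
    "cashier":    {"transactions": {"create"}, "inventory": {"view"}},
    "supervisor": {"transactions": {"all"}, "inventory": {"all"},
                   "reports": {"store"}, "settings": {"store"}},
    "admin":      {"transactions": {"all"}, "inventory": {"all"},
                   "reports": {"all"}, "users": {"all"}, "settings": {"all"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Default-deny check (Req. 7.1): a permission must be granted explicitly."""
    granted = PERMISSIONS.get(role, {}).get(resource, set())
    return "all" in granted or action in granted

assert is_allowed("cashier", "transactions", "create")
assert not is_allowed("cashier", "reports", "view")   # absent entry => deny
```

Encoding the matrix as data (rather than scattered `if role ==` checks) also makes the quarterly access review a diff of one file.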
## 26.11 REQUIREMENT 8: Identify Users and Authenticate Access
### 8.1 User Identification
- Unique user IDs for all users
- Shared accounts prohibited
- User ID policy documented
### 8.2 Authentication Management
- Password complexity enforced
- Password history (12 passwords)
- Account lockout (5 failures)
- Session timeout (15 minutes inactive)
### 8.3 Multi-Factor Authentication
- MFA for remote access
- MFA for admin consoles
- MFA for cardholder data access
### Implementation

```csharp
// Password policy configuration
services.Configure<IdentityOptions>(options =>
{
    options.Password.RequiredLength = 12;
    options.Password.RequireDigit = true;
    options.Password.RequireLowercase = true;
    options.Password.RequireUppercase = true;
    options.Password.RequireNonAlphanumeric = true;
    options.Password.RequiredUniqueChars = 4;

    options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(30);
    options.Lockout.MaxFailedAccessAttempts = 5;
    options.Lockout.AllowedForNewUsers = true;
});
```
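The same policy expressed as a plain predicate is useful for client-side pre-validation and for unit-testing the rules themselves. A language-agnostic sketch in Python (the function name is ours; it mirrors the Identity options above):

```python
import string

def meets_policy(pw: str) -> bool:
    """Mirror of the policy above: 12+ chars, digit, lower, upper,
    symbol, and at least 4 unique characters."""
    return (
        len(pw) >= 12
        and any(c.isdigit() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c in string.punctuation for c in pw)
        and len(set(pw)) >= 4
    )

assert meets_policy("Str0ng!Passw0rd")
assert not meets_policy("short1!A")           # fails length
assert not meets_policy("alllowercase12345")  # no upper, no symbol
```

Keeping the two implementations in sync is worth a shared test vector file, so the UI never accepts a password the server will reject.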
## 26.12 REQUIREMENT 9: Restrict Physical Access
### 9.1 Physical Security
- Data center access controlled
- Visitor logs maintained
- Badge access for sensitive areas
### 9.2 Media Protection
- Media inventory maintained
- Secure media destruction
- Media transport security
### 9.3 POS Device Security
- Device inventory maintained
- Tamper-evident labels
- Regular device inspection
## 26.13 REQUIREMENT 10: Log and Monitor All Access
### 10.1 Audit Logging
- All access logged
- Log integrity protected
- Logs retained 1 year (3 months online)
### 10.2 Log Content
- User identification
- Event type
- Date/time
- Success/failure
- Affected resource
### 10.3 Log Review
- Daily log review
- Automated anomaly detection
- Incident correlation
### Implementation

```csharp
// Audit logging configuration
public class AuditLogEntry
{
    public Guid Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string UserId { get; set; }
    public string UserName { get; set; }
    public string EventType { get; set; }   // Login, Access, Modify, Delete
    public string Resource { get; set; }
    public string ResourceId { get; set; }
    public bool Success { get; set; }
    public string IpAddress { get; set; }
    public string UserAgent { get; set; }
    public string Details { get; set; }     // JSON of changes
}
```
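Requirement 10.1 also demands log *integrity*. One common technique is hash chaining: each entry stores the hash of its predecessor, so deleting or editing any record breaks verification. A minimal Python sketch of the idea (illustrative; production systems typically delegate this to an append-only log store or WORM storage):

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an audit entry whose hash chains to the previous one."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    sealed = {**entry, "prev": prev,
              "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    log.append(sealed)

def verify(log: list) -> bool:
    """Recompute the chain; any tampering yields a mismatch."""
    prev = "0" * 64
    for e in log:
        body = json.dumps({k: v for k, v in e.items()
                           if k not in ("prev", "hash")}, sort_keys=True)
        if e["prev"] != prev or \
           e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, {"user": "u1", "event": "Login", "success": True})
append_entry(log, {"user": "u1", "event": "Modify", "resource": "price"})
assert verify(log)
log[0]["event"] = "Delete"   # tampering is detected
assert not verify(log)
```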
## 26.14 REQUIREMENT 11: Test Security Regularly
### 11.1 Vulnerability Scanning
- Internal scans quarterly
- External scans quarterly
- Rescans after changes
### 11.2 Penetration Testing
- Annual penetration test
- Test after significant changes
- Remediation verified
### 11.3 Change Detection
- File integrity monitoring
- Configuration drift detection
- Unauthorized change alerts
### Vulnerability Scanning Schedule
| Scan Type | Frequency | Tool | Remediation SLA |
|---|---|---|---|
| Container scan | Every build | Trivy | Block if Critical |
| Dependency scan | Daily | Dependabot | 7 days |
| SAST | Every PR | SonarQube | Block if High |
| DAST | Weekly | OWASP ZAP | 14 days |
| External ASV | Quarterly | Qualys | 30 days |
| Internal Network | Quarterly | Nessus | 30 days |
## 26.15 REQUIREMENT 12: Support Security with Policies
### 12.1 Security Policy
- Information security policy documented
- Annual policy review
- Policy accessible to all staff
### 12.2 Risk Assessment
- Annual risk assessment
- Risk register maintained
- Risk treatment plans
### 12.3 Security Awareness
- Security training for all staff
- Annual refresher training
- Role-specific training
### 12.4 Incident Response
- Incident response plan
- Annual plan testing
- Breach notification procedures
---
## 26.16 Tokenization Flow
┌─────────────────────────────────────────────────────────────────────────────────────┐
│                              PAYMENT TOKENIZATION FLOW                              │
└─────────────────────────────────────────────────────────────────────────────────────┘
STEP 1: Customer Enters Card

┌───────────────┐
│  POS Client   │  Customer swipes/taps/enters card
│               │  Card data NEVER touches our servers
└───────┬───────┘
        │ Card data (encrypted)
        ▼
┌───────────────┐
│   Stripe.js   │  Client-side SDK handles card data
│   (Browser)   │  Tokenization happens in secure iframe
└───────┬───────┘
        │ HTTPS (TLS 1.3)
        ▼
┌───────────────┐
│    Stripe     │  PCI Level 1 certified
│   Servers     │  Card data stored securely
└───────┬───────┘
        │ Payment Token (tok_xxx)
        ▼
┌───────────────┐
│  POS Client   │  Receives token, NOT card data
│               │
└───────┬───────┘
        │ Token + amount
        ▼
┌───────────────┐
│    POS API    │  Our server sees ONLY token
│    Server     │  Never handles raw card data
└───────┬───────┘
        │ Charge request with token
        ▼
┌───────────────┐
│    Stripe     │  Processes payment
│   Servers     │  Returns charge ID
└───────┬───────┘
        │ Charge result
        ▼
┌───────────────┐
│    POS API    │  Stores transaction record
│    Server     │  Stores: token, last4, amount
└───────────────┘  Does NOT store: full PAN, CVV
WHAT WE STORE:
┌─────────────────────────────────────────────────────┐
│ Transaction Record                                  │
├─────────────────────────────────────────────────────┤
│ transaction_id:   “txn_abc123”                      │
│ stripe_charge_id: “ch_xyz789”                       │
│ stripe_token:     “tok_xxx”   (reference only)      │
│ card_last4:       “4242”      (masked)              │
│ card_brand:       “Visa”                            │
│ amount:           99.99                             │
│ status:           “completed”                       │
│ created_at:       “2025-12-29T10:30:00Z”            │
└─────────────────────────────────────────────────────┘

WHAT WE NEVER STORE:
┌─────────────────────────────────────────────────────┐
│ ❌ Full card number (PAN)                           │
│ ❌ CVV/CVC                                          │
│ ❌ PIN                                              │
│ ❌ Track data                                       │
│ ❌ Expiration date (optional, encrypted if stored)  │
└─────────────────────────────────────────────────────┘
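The flow's key invariant — the API server only ever accepts opaque tokens — can be defended in depth with a guard at the charge endpoint. An illustrative Python sketch (function and prefix are ours; the real call goes to the payment processor's SDK):

```python
import re

# Anything that looks like 13-19 consecutive digits is treated as raw card data
PAN_RE = re.compile(r"\b\d{13,19}\b")

def charge(token: str, amount_cents: int) -> dict:
    """Accept only opaque processor tokens; reject PAN-shaped input."""
    if PAN_RE.search(token.replace(" ", "").replace("-", "")):
        raise ValueError("raw card data must never reach this server")
    if not token.startswith("tok_"):
        raise ValueError("expected a payment token")
    return {"status": "queued", "token": token, "amount_cents": amount_cents}

assert charge("tok_abc123", 9999)["status"] == "queued"
try:
    charge("4242 4242 4242 4242", 9999)
    raise AssertionError("PAN was accepted")
except ValueError:
    pass
```

The same PAN-shaped-input check is worth adding to request logging middleware, so a misbehaving client cannot accidentally drag the server into PCI scope.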
---
## 26.17 Network Segmentation
┌─────────────────────────────────────────────────────────────────────────────────────┐
│                                NETWORK SEGMENTATION                                 │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────┐
│ INTERNET (Untrusted) │
└──────────────┬──────────────┘
│
┌──────────────┴──────────────┐
│ DMZ ZONE │
│ (172.28.1.0/24) │
│ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Nginx │ │ WAF │ │
│ │ (LB) │ │ │ │
│ └────┬─────┘ └────┬─────┘ │
└───────┼────────────┼────────┘
│ │
══════════════════════════════════════════════  Firewall
        │            │
┌───────┴────────────┴────────┐
│      APPLICATION ZONE       │
│       (172.28.2.0/24)       │
│                             │
│ ┌──────────┐  ┌──────────┐  │
│ │ POS-API  │  │ POS-API  │  │
│ │    1     │  │    2     │  │
│ └────┬─────┘  └────┬─────┘  │
└───────┼────────────┼────────┘
        │            │
══════════════════════════════════════════════  Firewall
        │            │
┌───────┴────────────┴────────┐
│          DATA ZONE          │
│       (172.28.3.0/24)       │
│                             │
│ ┌──────────┐  ┌──────────┐  │
│ │ Postgres │  │  Redis   │  │
│ └──────────┘  └──────────┘  │
│                             │
│ ┌──────────┐                │
│ │ RabbitMQ │                │
│ └──────────┘                │
└──────────────┬──────────────┘
               │
══════════════════════════════════════════════  Firewall
               │
┌──────────────┴──────────────┐
│       MANAGEMENT ZONE       │
│       (172.28.4.0/24)       │
│                             │
│ ┌──────────┐  ┌──────────┐  │
│ │ Grafana  │  │Prometheus│  │
│ └──────────┘  └──────────┘  │
└─────────────────────────────┘
FIREWALL RULES:
DMZ → Application:
  ALLOW: TCP 8080 (API) from Nginx only
  DENY:  All other traffic

Application → Data:
  ALLOW: TCP 5432 (Postgres) from API containers
  ALLOW: TCP 6379 (Redis) from API containers
  ALLOW: TCP 5672 (RabbitMQ) from API containers
  DENY:  All other traffic

Data → External:
  DENY:  All outbound traffic

Management → All:
  ALLOW: TCP 9090 (metrics scrape)
  ALLOW: SSH from jump host only
---
## 26.18 Breach Response Procedures
### Incident Response Plan
# Security Incident Response Plan
## 26.19 Phase 1: Detection & Identification (0-15 minutes)
### Indicators of Compromise
- Unusual database queries
- Spike in failed authentication
- Unexpected outbound traffic
- Data exfiltration alerts
- Customer reports of fraud
### Initial Assessment
1. Confirm incident is real (not false positive)
2. Classify severity:
- P1: Active breach, data exfiltration
- P2: Attempted breach, no data loss
- P3: Vulnerability discovered, no exploitation
### Notification Matrix
| Severity | Notify Immediately |
|----------|-------------------|
| P1 | CISO, CTO, Legal, CEO, Payment Processor |
| P2 | CISO, Security Team Lead, Engineering Lead |
| P3 | Security Team Lead |
---
## 26.20 Phase 2: Containment (15-60 minutes)
### Immediate Actions (P1)
1. **Isolate affected systems**
```bash
# Block external traffic
iptables -I INPUT -j DROP
# Preserve evidence
docker pause <container>
```

2. **Revoke compromised credentials**

   ```sql
   -- Revoke all API keys
   UPDATE api_keys SET revoked = true WHERE tenant_id = <affected>;

   -- Force password reset
   UPDATE users SET must_reset_password = true WHERE tenant_id = <affected>;
   ```

3. **Notify payment processor**
- Call Stripe incident hotline
- Provide transaction date range
- Request card replacement if needed
## 26.21 Phase 3: Eradication (1-24 hours)
### Evidence Collection
- Capture memory dump
- Export all logs (past 90 days)
- Capture network traffic
- Preserve container images
### Root Cause Analysis
- How did attacker gain access?
- What systems were accessed?
- What data was accessed/exfiltrated?
- How long was attacker present?
### Remediation
- Patch vulnerability
- Remove backdoors
- Reset all credentials
- Update security controls
## 26.22 Phase 4: Recovery (24-72 hours)
### System Restoration
- Deploy from known-good images
- Restore data from clean backup
- Implement additional monitoring
- Gradual traffic restoration
### Verification
- Security scan of restored systems
- Penetration test of fixed vulnerability
- Log analysis for lingering threats
## 26.23 Phase 5: Lessons Learned (1-2 weeks)
### Post-Incident Review
- Timeline of events
- What worked well
- What needs improvement
- Action items with owners
### Regulatory Notifications
| Regulation | Notification Period | Authority |
|---|---|---|
| PCI-DSS | Immediately | Payment brands, acquiring bank |
| GDPR | 72 hours | Supervisory authority |
| State Laws | Varies (30-90 days) | State AG, affected individuals |
### Communication Templates
#### Customer Notification (Email)
Subject: Important Security Notice
Dear [Customer Name],
We are writing to inform you of a security incident that may have
affected your information...
[Describe incident without technical details]
What We Are Doing:
- [Actions taken]
What You Should Do:
- Monitor your accounts
- Report suspicious activity
[Contact information]
[Credit monitoring offer if applicable]
---
## 26.24 Security Audit Checklist
# Quarterly Security Audit Checklist
## 26.25 Access Control Review
### User Accounts
- [ ] Review all user accounts for necessity
- [ ] Verify MFA enabled for all admin accounts
- [ ] Check for dormant accounts (no login > 90 days)
- [ ] Verify terminated employee access removed
- [ ] Review service account permissions
### API Keys & Tokens
- [ ] Rotate API keys > 90 days old
- [ ] Review API key permissions
- [ ] Check for exposed keys in code/logs
- [ ] Verify webhook secrets rotated
## 26.26 System Configuration
### Containers
- [ ] Scan all images for vulnerabilities
- [ ] Verify base images up to date
- [ ] Check for containers running as root
- [ ] Review exposed ports
### Database
- [ ] Verify encryption at rest enabled
- [ ] Check backup encryption
- [ ] Review database user permissions
- [ ] Test backup restoration
### Network
- [ ] Review firewall rules
- [ ] Check for unnecessary open ports
- [ ] Verify TLS configuration (SSL Labs A+)
- [ ] Test network segmentation
## 26.27 Logging & Monitoring
### Audit Logs
- [ ] Verify all security events logged
- [ ] Check log integrity (no gaps)
- [ ] Test log alerting
- [ ] Verify log retention (1 year)
### Monitoring
- [ ] Review alert thresholds
- [ ] Test incident response workflow
- [ ] Verify on-call rotation
- [ ] Check monitoring coverage
## 26.28 Vulnerability Management
### Scanning
- [ ] Review latest vulnerability scan results
- [ ] Verify critical findings remediated
- [ ] Check dependency vulnerabilities
- [ ] Review code analysis findings
### Patching
- [ ] Verify OS patches current
- [ ] Check application dependencies
- [ ] Review security advisories
- [ ] Test patch deployment process
## 26.29 Compliance
### PCI-DSS
- [ ] Review SAQ completion
- [ ] Verify ASV scan passing
- [ ] Check penetration test findings
- [ ] Update network diagram
### Data Protection
- [ ] Review data retention
- [ ] Verify data classification
- [ ] Check encryption standards
- [ ] Test data deletion process
## 26.30 Sign-off
| Role | Name | Date | Signature |
|------|------|------|-----------|
| Security Lead | | | |
| CTO | | | |
| Compliance Officer | | | |
## 26.31 Supply Chain Security (SCA)
### Overview
Modern software relies heavily on third-party dependencies. Software Composition Analysis (SCA) protects against malicious packages, vulnerable dependencies, and license compliance issues.
| Attribute | Selection |
|---|---|
| Primary Tool | Snyk or OWASP Dependency-Check |
| Strategy | “Package Firewall” - block vulnerable packages |
| Output | SBOM (Software Bill of Materials) |
| Integration | CI/CD pipeline gate |
### Threat Landscape
+------------------------------------------------------------------+
| SUPPLY CHAIN ATTACK VECTORS |
+------------------------------------------------------------------+
| |
| 1. Typosquatting lodash vs lodas (malicious) |
| 2. Dependency Confusion Private package name collision |
| 3. Compromised Packages Event-stream attack (2018) |
| 4. Abandoned Packages No security updates |
| 5. License Violations GPL in commercial products |
| |
+------------------------------------------------------------------+
### Snyk Configuration

```yaml
# .snyk policy file
version: v1.25.0

# Ignore specific vulnerabilities (with justification)
ignore:
  SNYK-DOTNET-SYSTEMTEXTJSON-5951292:
    - '*':
        reason: 'Risk accepted - not reachable in our code paths'
        expires: 2026-03-01

# Package policies
policies:
  - package-policy:
      licenses:
        - severity: high
          license: GPL-3.0
        - severity: medium
          license: LGPL-3.0

# Block packages with critical vulnerabilities
fail-on:
  - severity: critical
  - severities: [critical, high]
    type: license
```

```bash
# Run Snyk scan in CI
snyk test --severity-threshold=high --fail-on=all

# Monitor for new vulnerabilities
snyk monitor

# Generate SBOM
snyk sbom --format=cyclonedx+json > sbom.json
```
### OWASP Dependency-Check (Alternative)

```xml
<!-- pom.xml or as CLI tool -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>8.4.0</version>
  <configuration>
    <failBuildOnCVSS>7</failBuildOnCVSS>
    <format>ALL</format>
  </configuration>
</plugin>
```

```bash
# .NET projects
dotnet tool install --global dotnet-dependency-check
dependency-check --project "POS Platform" --scan ./src --format HTML
```
### Package Firewall (Proxy)

```yaml
# Artifactory or Nexus configuration
remote-repositories:
  nuget-proxy:
    url: https://api.nuget.org/v3/index.json
    blocked-packages:
      - name: "malicious-package"
        reason: "Known malware"
    vulnerability-policy:
      max-severity: HIGH
      fail-build: true
```
### SBOM (Software Bill of Materials)

```json
// Example SBOM output (CycloneDX format)
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "component": {
      "name": "pos-platform",
      "version": "1.2.3",
      "type": "application"
    }
  },
  "components": [
    {
      "type": "library",
      "name": "Newtonsoft.Json",
      "version": "13.0.3",
      "purl": "pkg:nuget/Newtonsoft.Json@13.0.3",
      "licenses": [{"license": {"id": "MIT"}}],
      "hashes": [{"alg": "SHA-256", "content": "abc123..."}]
    }
  ]
}
```
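Because the SBOM is plain structured data, license-policy checks reduce to a walk over `components`. A hedged Python sketch (the `DISALLOWED` list and function name are ours; real pipelines would use the Snyk/Dependency-Check policies shown above):

```python
# Walk a CycloneDX-shaped SBOM and flag disallowed licenses
DISALLOWED = {"GPL-3.0", "LGPL-3.0"}

def license_violations(sbom: dict) -> list:
    hits = []
    for comp in sbom.get("components", []):
        for lic in comp.get("licenses", []):
            lic_id = lic.get("license", {}).get("id")
            if lic_id in DISALLOWED:
                hits.append((comp["name"], lic_id))
    return hits

sbom = {"components": [
    {"name": "Newtonsoft.Json", "licenses": [{"license": {"id": "MIT"}}]},
    {"name": "copyleft-lib",    "licenses": [{"license": {"id": "GPL-3.0"}}]},
]}
assert license_violations(sbom) == [("copyleft-lib", "GPL-3.0")]
```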
### VEX (Vulnerability Exploitability eXchange)
VEX is the companion document to SBOM that communicates whether a vulnerability actually affects your product. While SBOM lists components, VEX explains exploitability status.
| Attribute | Selection |
|---|---|
| Purpose | Reduce vulnerability noise; focus on actual risks |
| Format | CycloneDX VEX, OpenVEX, or CSAF |
| Integration | Automated via CI/CD pipeline |
### Why VEX Matters
+------------------------------------------------------------------+
| SBOM vs VEX RELATIONSHIP |
+------------------------------------------------------------------+
| |
| SBOM says: "We use library X version 2.1.0" |
| |
| CVE Database says: "Library X has CVE-2025-12345 (CRITICAL)" |
| |
| VEX says: "CVE-2025-12345 is NOT AFFECTED because we don't |
| use the vulnerable deserialization function" |
| |
| Result: Security team focuses on REAL threats, not false alarms |
| |
+------------------------------------------------------------------+
### VEX Status Values
| Status | Description |
|---|---|
| Not Affected | Vulnerability doesn’t apply (code path not used) |
| Affected | Product IS vulnerable, needs remediation |
| Fixed | Vulnerability was fixed in this version |
| Under Investigation | Still assessing impact |
### OpenVEX Document Example

```json
// vex/pos-platform-vex.json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://posplatform.io/vex/2026-01-24",
  "author": "POS Platform Security Team",
  "timestamp": "2026-01-24T10:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": {
        "@id": "https://nvd.nist.gov/vuln/detail/CVE-2025-29384",
        "name": "CVE-2025-29384",
        "description": "Deserialization vulnerability in Newtonsoft.Json"
      },
      "products": [
        {
          "@id": "pkg:nuget/PosPlatform.Api@1.2.3",
          "identifiers": {
            "purl": "pkg:nuget/PosPlatform.Api@1.2.3"
          }
        }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path",
      "impact_statement": "The vulnerable TypeNameHandling feature is explicitly disabled in our configuration. We use TypeNameHandling.None which prevents the deserialization attack vector."
    },
    {
      "vulnerability": {
        "@id": "https://nvd.nist.gov/vuln/detail/CVE-2025-31456",
        "name": "CVE-2025-31456"
      },
      "products": [
        {
          "@id": "pkg:nuget/PosPlatform.Api@1.2.3"
        }
      ],
      "status": "affected",
      "action_statement": "Upgrade to Npgsql 8.0.5 in next release",
      "action_statement_timestamp": "2026-02-01T00:00:00Z"
    }
  ]
}
```
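The suppression logic that scanners apply with a VEX document is simple to state: drop any finding covered by a `not_affected` statement. An illustrative Python sketch of that filtering (our own simplified data shapes; `grype --vex` and `trivy --vex` do the production equivalent):

```python
def filter_findings(findings: list, vex: dict) -> list:
    """Drop scanner findings that a VEX statement marks not_affected."""
    suppressed = {
        s["vulnerability"]["name"]
        for s in vex.get("statements", [])
        if s.get("status") == "not_affected"
    }
    return [f for f in findings if f["cve"] not in suppressed]

vex = {"statements": [
    {"vulnerability": {"name": "CVE-2025-29384"}, "status": "not_affected"},
    {"vulnerability": {"name": "CVE-2025-31456"}, "status": "affected"},
]}
findings = [{"cve": "CVE-2025-29384"}, {"cve": "CVE-2025-31456"}]
assert filter_findings(findings, vex) == [{"cve": "CVE-2025-31456"}]
```

Note the asymmetry: only `not_affected` suppresses; `affected` and `under_investigation` findings must stay visible to the security team.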
### VEX Generation Workflow

```yaml
# .github/workflows/vex-generation.yml
name: Generate VEX

on:
  schedule:
    - cron: '0 6 * * 1'   # Weekly on Monday
  workflow_dispatch:

jobs:
  generate-vex:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate SBOM
        run: |
          dotnet tool install --global CycloneDX
          dotnet CycloneDX src/PosPlatform.Api/PosPlatform.Api.csproj -o sbom.json

      - name: Scan for vulnerabilities
        run: |
          snyk test --json > vulns.json

      - name: Generate VEX with analysis
        run: |
          # Use vexctl or custom script to generate VEX
          vexctl create --product pos-platform --version ${{ github.ref_name }} \
            --sbom sbom.json --vulns vulns.json --output vex.json

      - name: Upload VEX artifact
        uses: actions/upload-artifact@v4
        with:
          name: vex-document
          path: vex.json

      - name: Publish to VEX repository
        run: |
          # Store VEX documents for customer access
          aws s3 cp vex.json s3://security-docs/vex/pos-platform-${{ github.ref_name }}.json
```
### Integration with Vulnerability Scanners

```bash
# Grype with VEX filtering
grype sbom:./sbom.json --vex ./vex.json

# Trivy with VEX
trivy sbom ./sbom.json --vex ./vex.json --ignore-unfixed

# Result: only vulnerabilities NOT marked "not_affected" in the VEX are shown
```
## 26.32 EU Cyber Resilience Act (CRA) Compliance
### Overview
The EU Cyber Resilience Act (CRA) mandates cybersecurity requirements for products with digital elements sold in the EU market. For POS systems processing financial transactions, this is a mandatory compliance requirement.
| Attribute | Selection |
|---|---|
| Effective Date | 2027 (full enforcement); 2026 (reporting) |
| Scope | All products with digital elements in EU |
| Classification | Class II (POS = important product) |
### Key Requirements
+------------------------------------------------------------------+
| EU CRA KEY OBLIGATIONS |
+------------------------------------------------------------------+
| |
| 1. SECURE BY DESIGN |
| - Security from initial design phase |
| - Threat modeling mandatory |
| - No known exploitable vulnerabilities at release |
| |
| 2. VULNERABILITY HANDLING |
| - Coordinated vulnerability disclosure process |
| - Security updates for 5+ years (or product lifetime) |
| - Notify ENISA within 24 hours of exploited vulnerabilities |
| |
| 3. TRANSPARENCY |
| - SBOM required for all products |
| - Clear security information to users |
| - CE marking for compliant products |
| |
| 4. DOCUMENTATION |
| - Technical documentation |
| - Risk assessment |
| - Conformity assessment |
| |
+------------------------------------------------------------------+
### Product Classification
| Class | Examples | Requirements |
|---|---|---|
| Default | Simple IoT, basic software | Self-assessment |
| Class I | Password managers, VPNs | Harmonized standards OR third-party |
| Class II | POS systems, firewalls, HSMs | Third-party conformity assessment |
| Critical | Smart meters, medical devices | European certification |
POS Platform Classification: Class II (Important Product)
- Processes financial transactions
- Handles payment data
- Network-connected critical retail infrastructure
### CRA Compliance Checklist
# EU Cyber Resilience Act Compliance Checklist
## 26.33 Design Phase Requirements
- [ ] Threat model documented for all components
- [ ] Security requirements in design specifications
- [ ] STRIDE analysis completed
- [ ] Attack surface documented
## 26.34 Development Requirements
- [ ] Secure coding guidelines followed
- [ ] SAST integrated in CI/CD pipeline
- [ ] Dependencies scanned (SCA)
- [ ] No known vulnerabilities at release
## 26.35 Vulnerability Management
- [ ] Coordinated disclosure policy published
- [ ] Security contact (security.txt) available
- [ ] Vulnerability tracking system in place
- [ ] Patch timeline: Critical (24h), High (7d), Medium (30d)
## 26.36 Documentation
- [ ] SBOM generated for each release
- [ ] VEX documents maintained
- [ ] Technical documentation complete
- [ ] User security instructions provided
## 26.37 Incident Response
- [ ] ENISA notification process defined
- [ ] 24-hour notification capability
- [ ] Incident classification criteria
- [ ] Communication templates ready
## 26.38 Conformity Assessment
- [ ] Third-party assessment scheduled
- [ ] CE marking documentation prepared
- [ ] EU Declaration of Conformity drafted
### ENISA Notification Process

```csharp
// Services/EnisaNotificationService.cs
public class EnisaNotificationService
{
    private readonly IHttpClientFactory _httpClientFactory;
    private readonly ILogger<EnisaNotificationService> _logger;
    private const string ENISA_ENDPOINT = "https://enisa.europa.eu/cra/notifications";

    public async Task NotifyExploitedVulnerabilityAsync(
        VulnerabilityNotification notification,
        CancellationToken ct = default)
    {
        // CRA requires notification within 24 hours of discovering
        // an actively exploited vulnerability
        var payload = new
        {
            manufacturerId = "POS-PLATFORM-EU-001",
            productIdentifier = notification.ProductId,
            vulnerabilityId = notification.CveId,
            discoveryTimestamp = notification.DiscoveredAt.ToString("O"),
            exploitationEvidence = notification.ExploitationDetails,
            affectedVersions = notification.AffectedVersions,
            mitigationStatus = notification.MitigationStatus,
            estimatedPatchDate = notification.EstimatedPatchDate?.ToString("O"),
            contactEmail = "security@posplatform.io"
        };

        var client = _httpClientFactory.CreateClient("ENISA");
        var response = await client.PostAsJsonAsync(ENISA_ENDPOINT, payload, ct);

        if (!response.IsSuccessStatusCode)
        {
            _logger.LogCritical(
                "ENISA notification failed for {CveId}. Status: {Status}. " +
                "MANUAL NOTIFICATION REQUIRED within 24 hours.",
                notification.CveId,
                response.StatusCode);

            // Escalate to the security team, then stop: do not log success below
            await TriggerEscalationAsync(notification);
            return;
        }

        _logger.LogInformation(
            "ENISA notified of {CveId}. Reference: {Reference}",
            notification.CveId,
            await response.Content.ReadAsStringAsync(ct));
    }
}

public record VulnerabilityNotification(
    string ProductId,
    string CveId,
    DateTime DiscoveredAt,
    string ExploitationDetails,
    string[] AffectedVersions,
    string MitigationStatus,
    DateTime? EstimatedPatchDate);
```
### Security Support Period
Under CRA, manufacturers must provide security updates for:
- Minimum 5 years from product release, OR
- Expected product lifetime (whichever is longer)
```yaml
# Product Lifecycle Policy (CRA Compliant)
products:
  pos-platform:
    current_version: "1.2.3"
    release_date: "2026-03-01"
    security_support_until: "2031-03-01"   # 5 years minimum
    support_tiers:
      - tier: "active"
        description: "Feature updates + security patches"
        duration: "3 years"
      - tier: "security"
        description: "Security patches only"
        duration: "2 years"
      - tier: "extended"
        description: "Critical security only (paid)"
        duration: "negotiable"
    patch_slas:
      critical: "24 hours"
      high: "7 days"
      medium: "30 days"
      low: "90 days"
```
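The "5 years or expected lifetime, whichever is longer" rule is easy to get wrong when maintained by hand; computing it keeps the policy file honest. A small illustrative Python sketch (our own function, not a CRA tool):

```python
from datetime import date

def security_support_until(release: date, expected_lifetime_years: int) -> date:
    """CRA floor: security updates for 5 years or the expected product
    lifetime from release, whichever is longer (sketch; naive year math
    that would raise for a Feb 29 release date)."""
    def add_years(d: date, n: int) -> date:
        return d.replace(year=d.year + n)
    return max(add_years(release, 5), add_years(release, expected_lifetime_years))

assert security_support_until(date(2026, 3, 1), 3) == date(2031, 3, 1)  # 5y floor wins
assert security_support_until(date(2026, 3, 1), 7) == date(2033, 3, 1)  # lifetime wins
```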
### CE Marking & Declaration
# EU Declaration of Conformity
**Manufacturer**: POS Platform Inc.
**Address**: [Company Address]
**Product**: POS Platform - Multi-tenant Retail Point of Sale System
**Model**: POS-2026-PRO
**Version**: 1.2.3
This declaration of conformity is issued under the sole responsibility
of the manufacturer.
**Object of Declaration**:
The product described above is in conformity with the essential
requirements of the EU Cyber Resilience Act (Regulation 2024/XXX).
**Harmonized Standards Applied**:
- EN ISO/IEC 27001:2022 - Information Security Management
- EN ISO/IEC 62443-4-1:2018 - Secure Product Development Lifecycle
- CycloneDX 1.5 - SBOM Standard
**Conformity Assessment**:
Third-party conformity assessment performed by [Notified Body Name]
Certificate Number: [Certificate ID]
**Signed**:
[Name, Title]
[Date]
[Place]
## 26.39 GenAI Governance
### Overview
With AI-assisted code generation (GitHub Copilot, Claude Code), additional security controls are required to prevent AI-generated vulnerabilities from reaching production.
| Attribute | Selection |
|---|---|
| Policy | All AI-generated code must pass Deep SAST gate |
| Tools | SonarQube, CodeQL |
| Integration | Pre-commit hooks + CI pipeline |
### “Vibe Coding” Risks
+------------------------------------------------------------------+
| AI CODE GENERATION RISKS |
+------------------------------------------------------------------+
| |
| 1. Hallucinated APIs - Non-existent functions |
| 2. Insecure Patterns - SQL injection, hardcoded secrets |
| 3. Outdated Libraries - Training data from 2022 |
| 4. License Contamination - Copyleft code in proprietary |
| 5. Logic Errors - Subtle bugs that compile fine |
| |
+------------------------------------------------------------------+
### Deep SAST Configuration (SonarQube)

```properties
# sonar-project.properties
sonar.projectKey=pos-platform
sonar.projectName=POS Platform
sonar.sources=src
sonar.tests=tests

# Quality Gates
sonar.qualitygate.wait=true

# Rules for AI-generated code
sonar.issue.ignore.multicriteria=e1
sonar.issue.ignore.multicriteria.e1.ruleKey=csharpsquid:S1135
sonar.issue.ignore.multicriteria.e1.resourceKey=**/*Generated*.cs
```
### SonarQube Quality Gate Rules
| Rule | Threshold | Action |
|---|---|---|
| Blocker Issues | 0 | Block merge |
| Critical Issues | 0 | Block merge |
| Security Hotspots | 0 unreviewed | Block merge |
| Code Coverage | > 80% | Warning |
| Duplicated Lines | < 3% | Warning |
CI Pipeline Integration
# .github/workflows/security.yml
name: Security Scan
on: [push, pull_request]
jobs:
sast:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: SonarQube Scan
uses: SonarSource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
- name: Quality Gate Check
uses: SonarSource/sonarqube-quality-gate-action@master
timeout-minutes: 5
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
codeql:
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: csharp
- name: Build
run: dotnet build
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
Claude Code CLI Integration
#!/bin/bash
# .git/hooks/pre-commit
# Pre-commit hook for AI-generated code
# Check whether any staged content carries the Claude Code marker
# (grep the staged diff itself, not the file names)
if git diff --cached | grep -q "Generated by Claude"; then
echo "AI-generated code detected. Running deep SAST..."
# Run SonarScanner
sonar-scanner -Dsonar.qualitygate.wait=true
if [ $? -ne 0 ]; then
echo "ERROR: AI-generated code failed security scan"
exit 1
fi
fi
AI Code Review Checklist
# AI-Generated Code Review Checklist
Before approving AI-generated code:
## Security
- [ ] No hardcoded secrets or API keys
- [ ] No SQL injection vulnerabilities
- [ ] Input validation on all user inputs
- [ ] Output encoding for XSS prevention
- [ ] No insecure deserialization
## Quality
- [ ] Logic matches intended behavior
- [ ] Edge cases handled
- [ ] Error handling is appropriate
- [ ] No deprecated APIs used
## Compliance
- [ ] No copyleft licensed code copied
- [ ] SAST scan passed
- [ ] Unit tests included
26.43 File Integrity Monitoring (FIM)
Overview
File Integrity Monitoring detects unauthorized changes to critical system files and is mandated by PCI-DSS (Requirement 11.5). It is essential for detecting tampering and skimmer attacks on POS terminals.
| Attribute | Selection |
|---|---|
| Tool | Wazuh (Primary) or OSSEC |
| Scope | POS terminals, API servers, payment modules |
| PCI Requirement | 11.5 - Deploy FIM on critical systems |
Wazuh Architecture
+------------------------------------------------------------------+
| FIM ARCHITECTURE |
+------------------------------------------------------------------+
| |
| ┌─────────────────┐ ┌─────────────────┐ ┌───────────────┐ |
| │ POS Terminal │ │ API Server │ │ DB Server │ |
| │ (Wazuh Agent) │ │ (Wazuh Agent) │ │ (Wazuh Agent) │ |
| └────────┬────────┘ └────────┬────────┘ └───────┬───────┘ |
| │ │ │ |
| └──────────────────────┼──────────────────────┘ |
| │ |
| ▼ |
| ┌─────────────────────────┐ |
| │ Wazuh Manager │ |
| │ (Central Analysis) │ |
| └────────────┬────────────┘ |
| │ |
| ┌────────────┴────────────┐ |
| │ │ |
| ▼ ▼ |
| ┌───────────────┐ ┌────────────────┐ |
| │ Wazuh UI │ │ Alerting │ |
| │ Dashboard │ │ (Slack/Email) │ |
| └───────────────┘ └────────────────┘ |
| |
+------------------------------------------------------------------+
Wazuh Agent Configuration
<!-- /var/ossec/etc/ossec.conf -->
<ossec_config>
<syscheck>
<!-- Check every 12 hours -->
<frequency>43200</frequency>
<!-- Real-time monitoring for critical directories -->
<directories realtime="yes" check_all="yes">/opt/pos/bin</directories>
<directories realtime="yes" check_all="yes">/opt/pos/config</directories>
<directories realtime="yes" check_all="yes">/etc/pos</directories>
<!-- Payment module - highest priority -->
<directories realtime="yes" check_all="yes" report_changes="yes">
/opt/pos/payment
</directories>
<!-- Critical system files -->
<directories check_all="yes">/etc/passwd</directories>
<directories check_all="yes">/etc/shadow</directories>
<directories check_all="yes">/etc/sudoers</directories>
<!-- Ignore log files and temp -->
<ignore>/var/log</ignore>
<ignore>/tmp</ignore>
<ignore type="sregex">.log$</ignore>
</syscheck>
</ossec_config>
Docker Container Monitoring
# docker-compose.wazuh.yml
services:
wazuh-manager:
image: wazuh/wazuh-manager:4.7.0
container_name: wazuh-manager
ports:
- "1514:1514" # Agent communication
- "1515:1515" # Agent registration (enrollment)
- "55000:55000" # API
volumes:
- wazuh_data:/var/ossec/data
- wazuh_etc:/var/ossec/etc
environment:
- INDEXER_URL=https://wazuh-indexer:9200
wazuh-agent:
image: wazuh/wazuh-agent:4.7.0
container_name: wazuh-agent
environment:
- WAZUH_MANAGER=wazuh-manager
- WAZUH_AGENT_NAME=pos-api-1
volumes:
# Mount host paths to monitor
- /opt/pos:/opt/pos:ro
- /etc:/host_etc:ro
depends_on:
- wazuh-manager
Custom FIM Rules for POS
<!-- /var/ossec/etc/rules/local_rules.xml -->
<group name="pos_fim,">
<!-- Payment module changes - CRITICAL -->
<rule id="100001" level="15">
<if_sid>550</if_sid>
<match>/opt/pos/payment</match>
<description>CRITICAL: Payment module file modified</description>
<group>pci_dss_11.5,</group>
</rule>
<!-- Configuration changes - HIGH -->
<rule id="100002" level="12">
<if_sid>550</if_sid>
<match>/opt/pos/config</match>
<description>HIGH: POS configuration file modified</description>
<group>pci_dss_11.5,</group>
</rule>
<!-- Executable changes - HIGH -->
<rule id="100003" level="12">
<if_sid>550</if_sid>
<match>/opt/pos/bin</match>
<description>HIGH: POS executable modified</description>
<group>pci_dss_11.5,</group>
</rule>
<!-- Skimmer detection - patterns -->
<rule id="100010" level="15">
<if_sid>550</if_sid>
<regex>\.dll$|\.so$|\.exe$</regex>
<match>/opt/pos/payment</match>
<description>ALERT: Possible skimmer injection detected</description>
<group>pci_dss_11.5,attack,</group>
</rule>
</group>
Alert Configuration
# Wazuh alert integration
integrations:
- name: slack
hook_url: https://hooks.slack.com/services/xxx/yyy/zzz
level: 12 # High and Critical only
alert_format: json
rule_id:
- 100001
- 100010
- name: pagerduty
api_key: your-pagerduty-key
level: 15 # Critical only
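The level thresholds above (12 for Slack, 15 for PagerDuty) imply a simple routing function. A sketch of that logic, assuming routing runs in a shell wrapper (`route_alert` is an illustrative name, not a Wazuh built-in):

```shell
#!/usr/bin/env bash
# route_alert: map a Wazuh rule level to notification channels,
# mirroring the integration config above (>=12 Slack, >=15 also PagerDuty).
route_alert() {
  local level=$1
  if [ "$level" -ge 15 ]; then
    echo "slack pagerduty"   # Critical: page on-call and post to Slack
  elif [ "$level" -ge 12 ]; then
    echo "slack"             # High: Slack only
  else
    echo "log-only"          # Below threshold: archive in Wazuh only
  fi
}
```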
FIM Compliance Report
# Generate FIM compliance report for PCI audit
wazuh-reporting fim-report \
--start "2026-01-01" \
--end "2026-01-31" \
--format pdf \
--output /reports/fim-jan-2026.pdf
# Check current baseline
/var/ossec/bin/syscheck_control -l
# Force immediate scan
/var/ossec/bin/syscheck_control -u
26.44 PCI DSS v4.0.1 Container-Specific FIM
Overview
PCI DSS v4.0.1 introduces explicit requirements for containerized environments. Container FIM must monitor not just running containers but also:
- Container images
- Orchestrator configurations (Kubernetes)
- Container runtime configurations
| Attribute | Selection |
|---|---|
| Requirement | PCI DSS 4.0.1 - Requirement 11.5.1.1 |
| Tool | Wazuh + Falco (runtime) + Trivy (images) |
| Scope | Images, containers, K8s configs, runtime |
Container FIM Architecture
+------------------------------------------------------------------+
| CONTAINER FIM ARCHITECTURE |
+------------------------------------------------------------------+
| |
| ┌─────────────────┐ ┌─────────────────┐ ┌───────────────┐ |
| │ Image Registry │ │ Kubernetes │ │ Container │ |
| │ (Harbor/ACR) │ │ API Server │ │ Runtime │ |
| └────────┬────────┘ └────────┬────────┘ └───────┬───────┘ |
| │ │ │ |
| ▼ ▼ ▼ |
| ┌─────────────────┐ ┌─────────────────┐ ┌───────────────┐ |
| │ Trivy Scanner │ │ Wazuh K8s │ │ Falco │ |
| │ (Image FIM) │ │ Agent │ │ (Runtime) │ |
| │ │ │ │ │ │ |
| │ - Layer changes │ │ - ConfigMap │ │ - File access │ |
| │ - Vuln scanning │ │ - Secrets │ │ - Syscalls │ |
| │ - SBOM drift │ │ - RBAC changes │ │ - Network │ |
| └────────┬────────┘ └────────┬────────┘ └───────┬───────┘ |
| │ │ │ |
| └──────────────────────┼──────────────────────┘ |
| ▼ |
| ┌─────────────────────────┐ |
| │ Wazuh Manager │ |
| │ (Central Analysis) │ |
| └────────────┬────────────┘ |
| │ |
| ┌────────────┴────────────┐ |
| │ │ |
| ▼ ▼ |
| ┌───────────────┐ ┌────────────────┐ |
| │ SIEM / Wazuh │ │ PCI DSS │ |
| │ Dashboard │ │ Reports │ |
| └───────────────┘ └────────────────┘ |
| |
+------------------------------------------------------------------+
Image FIM with Trivy
# .github/workflows/image-fim.yml
name: Container Image FIM
on:
push:
branches: [main]
schedule:
- cron: '0 0 * * *' # Daily baseline check
jobs:
image-fim:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build image
run: docker build -t pos-api:${{ github.sha }} .
- name: Generate SBOM
uses: anchore/sbom-action@v0
with:
image: pos-api:${{ github.sha }}
output-file: sbom-${{ github.sha }}.json
- name: Compare with baseline SBOM
run: |
# Download baseline SBOM
aws s3 cp s3://security-baselines/pos-api/sbom-baseline.json baseline.json
# Compare SBOMs for drift
diff_result=$(diff <(jq -S . baseline.json) <(jq -S . sbom-${{ github.sha }}.json) || true)
if [ -n "$diff_result" ]; then
echo "SBOM DRIFT DETECTED"
echo "$diff_result"
# Log to Wazuh
curl -X POST https://wazuh-manager:55000/events \
-H "Authorization: Bearer $WAZUH_TOKEN" \
-d '{
"event": "container_image_drift",
"image": "pos-api",
"sha": "${{ github.sha }}",
"changes": '"$(echo "$diff_result" | jq -Rs .)"'
}'
fi
- name: Trivy vulnerability scan
uses: aquasecurity/trivy-action@master
with:
image-ref: pos-api:${{ github.sha }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Compare with baseline vulnerabilities
run: |
# Get current vulns
trivy image --format json pos-api:${{ github.sha }} > current-vulns.json
# Compare with baseline
NEW_VULNS=$(jq -r '.Results[].Vulnerabilities[]?.VulnerabilityID' current-vulns.json | \
grep -v -f baseline-vulns.txt | wc -l)
if [ "$NEW_VULNS" -gt 0 ]; then
echo "NEW VULNERABILITIES DETECTED: $NEW_VULNS"
# Alert through Wazuh
fi
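Outside CI, the same SBOM-drift check can run anywhere two package inventories exist. A minimal sketch, assuming each file lists one `name version` pair per line (the `sbom_drift` helper is an illustrative name; the real workflow above diffs full SBOM JSON instead):

```shell
#!/usr/bin/env bash
# sbom_drift: print packages present in the current SBOM listing but not
# in the baseline (i.e., additions or version changes since the baseline).
# Args: <baseline_file> <current_file>, one "name version" pair per line.
sbom_drift() {
  # comm -13 keeps lines unique to the second (current) input;
  # comm requires sorted input, so sort both inline
  comm -13 <(sort "$1") <(sort "$2")
}
```

An empty result means no drift; any output line is a candidate for the `container_image_drift` event.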
Kubernetes Configuration FIM
# k8s/wazuh-k8s-agent.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: wazuh-agent
namespace: security
spec:
selector:
matchLabels:
app: wazuh-agent
template:
metadata:
labels:
app: wazuh-agent
spec:
serviceAccountName: wazuh-agent
containers:
- name: wazuh-agent
image: wazuh/wazuh-agent:4.7.0
env:
- name: WAZUH_MANAGER
value: "wazuh-manager.security.svc"
volumeMounts:
# Monitor Kubernetes configs
- name: k8s-manifests
mountPath: /host/etc/kubernetes
readOnly: true
# Monitor container runtime
- name: containerd
mountPath: /host/run/containerd
readOnly: true
# Monitor host filesystem
- name: host-root
mountPath: /host
readOnly: true
securityContext:
privileged: true # Required for FIM
volumes:
- name: k8s-manifests
hostPath:
path: /etc/kubernetes
- name: containerd
hostPath:
path: /run/containerd
- name: host-root
hostPath:
path: /
<!-- Wazuh agent config for Kubernetes FIM -->
<ossec_config>
<syscheck>
<!-- Kubernetes manifests -->
<directories realtime="yes" check_all="yes" report_changes="yes">
/host/etc/kubernetes/manifests
</directories>
<!-- Kubernetes PKI -->
<directories realtime="yes" check_all="yes">
/host/etc/kubernetes/pki
</directories>
<!-- Container runtime config -->
<directories realtime="yes" check_all="yes">
/host/etc/containerd
</directories>
<!-- Kubelet config -->
<directories check_all="yes">
/host/var/lib/kubelet
</directories>
</syscheck>
</ossec_config>
Runtime FIM with Falco
# k8s/falco-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: falco
namespace: security
spec:
selector:
matchLabels:
app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
containers:
- name: falco
image: falcosecurity/falco:0.37.0
securityContext:
privileged: true
volumeMounts:
- name: falco-rules
mountPath: /etc/falco/rules.d
- name: dev
mountPath: /host/dev
- name: proc
mountPath: /host/proc
readOnly: true
volumes:
- name: falco-rules
configMap:
name: falco-pos-rules
- name: dev
hostPath:
path: /dev
- name: proc
hostPath:
path: /proc
---
apiVersion: v1
kind: ConfigMap
metadata:
name: falco-pos-rules
namespace: security
data:
pos-rules.yaml: |
# POS-specific Falco rules for container FIM
- rule: POS Container File Modified
desc: Detect file modifications in POS containers
condition: >
container.name startswith "pos-" and
(evt.type = open or evt.type = openat) and
evt.is_open_write = true and
fd.name startswith "/app/"
output: >
File modified in POS container
(user=%user.name container=%container.name file=%fd.name
command=%proc.cmdline)
priority: CRITICAL
tags: [pci_dss, fim, container]
- rule: POS Payment Module Access
desc: Detect any access to payment processing code
condition: >
container.name startswith "pos-" and
fd.name contains "payment" and
(evt.type = open or evt.type = openat)
output: >
Payment module accessed
(user=%user.name container=%container.name file=%fd.name
command=%proc.cmdline)
priority: WARNING
tags: [pci_dss, payment, fim]
- rule: Unexpected Process in POS Container
desc: Detect unexpected processes in POS containers
condition: >
container.name startswith "pos-" and
spawned_process and
not proc.name in (dotnet, pos-api, bash, sh)
output: >
Unexpected process in POS container
(user=%user.name container=%container.name proc=%proc.name
parent=%proc.pname cmdline=%proc.cmdline)
priority: CRITICAL
tags: [pci_dss, runtime, malware]
- rule: Container Configuration Modified
desc: Detect changes to container configs
condition: >
(evt.type = open or evt.type = openat) and
evt.is_open_write = true and
(fd.name contains "/etc/kubernetes" or
fd.name contains "/etc/containerd" or
fd.name contains "/var/lib/kubelet")
output: >
Container infrastructure config modified
(user=%user.name file=%fd.name command=%proc.cmdline)
priority: CRITICAL
tags: [pci_dss, k8s, fim]
Container FIM Alert Rules
<!-- /var/ossec/etc/rules/container_fim_rules.xml -->
<group name="container_fim,pci_dss_11.5,">
<!-- Container image drift detected -->
<rule id="100100" level="12">
<decoded_as>json</decoded_as>
<field name="event">container_image_drift</field>
<description>CONTAINER FIM: Image SBOM drift detected for $(image)</description>
<group>pci_dss_11.5.1.1,container_security,</group>
</rule>
<!-- New vulnerability in container image -->
<rule id="100101" level="14">
<decoded_as>json</decoded_as>
<field name="event">new_vulnerability</field>
<field name="severity">CRITICAL|HIGH</field>
<description>CONTAINER FIM: New $(severity) vulnerability in $(image)</description>
<group>pci_dss_11.5.1.1,vulnerability,</group>
</rule>
<!-- Kubernetes manifest changed -->
<rule id="100102" level="13">
<if_sid>550</if_sid>
<match>/etc/kubernetes/manifests</match>
<description>CONTAINER FIM: Kubernetes manifest modified</description>
<group>pci_dss_11.5.1.1,k8s_config,</group>
</rule>
<!-- Container runtime config changed -->
<rule id="100103" level="12">
<if_sid>550</if_sid>
<match>/etc/containerd</match>
<description>CONTAINER FIM: Container runtime config modified</description>
<group>pci_dss_11.5.1.1,runtime_config,</group>
</rule>
<!-- Falco: POS container file modified -->
<rule id="100110" level="15">
<decoded_as>json</decoded_as>
<field name="rule">POS Container File Modified</field>
<description>RUNTIME FIM: File modified in POS container - $(output)</description>
<group>pci_dss_11.5.1.1,runtime_fim,critical,</group>
</rule>
<!-- Falco: Unexpected process in container -->
<rule id="100111" level="15">
<decoded_as>json</decoded_as>
<field name="rule">Unexpected Process in POS Container</field>
<description>RUNTIME FIM: Unexpected process detected - $(output)</description>
<group>pci_dss_11.5.1.1,malware_detection,critical,</group>
</rule>
</group>
PCI DSS v4.0.1 Container FIM Compliance Mapping
| PCI DSS Requirement | Implementation |
|---|---|
| 11.5.1 | Wazuh FIM on host and container paths |
| 11.5.1.1 | Trivy SBOM comparison for image drift |
| 11.5.1.1 | Falco runtime file monitoring |
| 11.5.1.1 | Kubernetes manifest monitoring |
| 11.5.2 | Wazuh alerts on critical file changes |
| 11.5.2 | Real-time notification via Slack/PagerDuty |
Reference
For complete security strategy and risk mitigations, see:
- Appendix K: Architecture Characteristics - Security characteristic justification
- Appendix L: Architecture Styles Analysis - DevSecOps pipeline details
26.45 Summary
This chapter provides comprehensive security coverage:
- Security Architecture: Defense-in-depth layers
- PCI-DSS Compliance: Complete 12-requirement checklist
- Tokenization: Payment data flow and storage policies
- Network Segmentation: Zone-based security architecture
- Breach Response: Step-by-step incident procedures
- Audit Checklist: Quarterly security review process
- Supply Chain Security (NEW): SCA with Snyk, SBOM generation
- GenAI Governance (NEW): Deep SAST gates for AI-generated code
- File Integrity Monitoring (NEW): Wazuh FIM for PCI 11.5 compliance
- Container FIM (NEW): Trivy SBOM drift, Falco runtime monitoring, and Kubernetes config monitoring for PCI DSS v4.0.1
Next Chapter: Chapter 27: Disaster Recovery
“Security is not a product, but a process.”
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VII - Operations |
| Chapter | 26 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 27: Disaster Recovery
27.1 Overview
This chapter defines the disaster recovery strategy, backup procedures, failover architecture, and recovery processes for the POS Platform.
27.2 Recovery Objectives
RTO/RPO Requirements by Data Type
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ RECOVERY TIME OBJECTIVE (RTO) / RECOVERY POINT OBJECTIVE (RPO) │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌───────────────────┬─────────────┬─────────────┬──────────────────────────────────────┐
│ Data Category │ RTO │ RPO │ Justification │
├───────────────────┼─────────────┼─────────────┼──────────────────────────────────────┤
│ Transaction Data │ < 1 hour │ 0 (no loss) │ Revenue-critical, legal requirements │
│ Inventory Data │ < 4 hours │ < 1 hour │ Business operations │
│ Customer Data │ < 4 hours │ < 1 hour │ Order fulfillment │
│ Product Catalog │ < 8 hours │ < 24 hours │ Can rebuild from source │
│ Audit Logs │ < 24 hours │ < 1 hour │ Compliance requirements │
│ Analytics Data │ < 72 hours │ < 24 hours │ Non-critical, can rebuild │
│ Configuration │ Immediate │ 0 (no loss) │ Stored in Git │
└───────────────────┴─────────────┴─────────────┴──────────────────────────────────────┘
Recovery Tier Definitions:
┌─────────┬─────────────────────────────────────────────────────────────────────────────┐
│ TIER 1 │ MISSION CRITICAL │
│ │ RTO: < 1 hour | RPO: 0 │
│ │ - Active transactions │
│ │ - Payment processing │
│ │ - Real-time inventory │
│ │ Strategy: Synchronous replication, hot standby │
├─────────┼─────────────────────────────────────────────────────────────────────────────┤
│ TIER 2 │ BUSINESS CRITICAL │
│ │ RTO: < 4 hours | RPO: < 1 hour │
│ │ - Customer data │
│ │ - Order history │
│ │ - Inventory levels │
│ │ Strategy: Asynchronous replication, warm standby │
├─────────┼─────────────────────────────────────────────────────────────────────────────┤
│ TIER 3 │ IMPORTANT │
│ │ RTO: < 24 hours | RPO: < 24 hours │
│ │ - Product catalog │
│ │ - Reports │
│ │ - Historical analytics │
│ │ Strategy: Daily backups, cold standby │
├─────────┼─────────────────────────────────────────────────────────────────────────────┤
│ TIER 4 │ NON-CRITICAL │
│ │ RTO: < 72 hours | RPO: < 72 hours │
│ │ - Archived data │
│ │ - Legacy exports │
│ │ Strategy: Weekly backups, rebuild if needed │
└─────────┴─────────────────────────────────────────────────────────────────────────────┘
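The tier objectives above can be checked mechanically after each recovery drill. A hedged sketch (the `meets_rto` helper and its minute-based interface are assumptions for illustration):

```shell
#!/usr/bin/env bash
# meets_rto: compare an observed recovery time against the tier's RTO.
# Args: <tier 1-4> <actual_recovery_minutes>; prints PASS or FAIL.
meets_rto() {
  local tier=$1 actual=$2 max
  case "$tier" in
    1) max=60 ;;     # Mission critical: < 1 hour
    2) max=240 ;;    # Business critical: < 4 hours
    3) max=1440 ;;   # Important: < 24 hours
    4) max=4320 ;;   # Non-critical: < 72 hours
    *) echo "FAIL"; return 1 ;;
  esac
  if [ "$actual" -lt "$max" ]; then echo "PASS"; else echo "FAIL"; fi
}
```

Feeding this the measured times from the quarterly DR tests (Section 27.6) turns each drill into a pass/fail record for the audit trail.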
27.3 Backup Strategy
Database Backup Architecture
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ DATABASE BACKUP STRATEGY │
└─────────────────────────────────────────────────────────────────────────────────────┘
PostgreSQL Primary
│
┌────────────────┼────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌──────────┐ ┌─────────────────┐
│ Streaming │ │ WAL │ │ pg_dump │
│ Replication │ │ Archiving│ │ (Daily) │
│ (Real-time) │ │ (PITR) │ │ │
└────────┬────────┘ └────┬─────┘ └────────┬────────┘
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌──────────┐ ┌─────────────────┐
│ Hot Standby │ │ WAL │ │ Backup Storage │
│ (Same Region) │ │ Archive │ │ (Encrypted) │
│ │ │ (S3/NFS) │ │ │
└─────────────────┘ └──────────┘ └─────────────────┘
│ │ │
│ │ │
└───────────────┼────────────────┘
│
▼
┌─────────────────────┐
│ Offsite Backup │
│ (Different DC) │
│ S3 Cross-Region │
└─────────────────────┘
BACKUP SCHEDULE:
┌──────────────────┬───────────────┬─────────────────┬────────────────────────────────┐
│ Backup Type │ Frequency │ Retention │ Storage Location │
├──────────────────┼───────────────┼─────────────────┼────────────────────────────────┤
│ WAL Archiving │ Continuous │ 7 days │ Local NFS + S3 │
│ pg_dump (Full) │ Daily 2AM │ 30 days │ S3 (encrypted) │
│ pg_dump (Weekly) │ Sunday 3AM │ 90 days │ S3 + Glacier │
│ Monthly Archive │ 1st of month │ 1 year │ Glacier │
│ Yearly Archive │ Jan 1st │ 7 years │ Glacier Deep Archive │
└──────────────────┴───────────────┴─────────────────┴────────────────────────────────┘
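Which schedule a given calendar date falls under follows directly from the table. A minimal sketch (the weekday is passed in explicitly, e.g. from `date +%A`, to keep the helper deterministic; `backup_class` is an illustrative name):

```shell
#!/usr/bin/env bash
# backup_class: classify a date into the retention tier from the table above.
# Args: <YYYY-MM-DD> <weekday-name>; prints yearly, monthly, weekly, or daily.
backup_class() {
  local d=$1 weekday=$2
  if [ "${d:5}" = "01-01" ]; then
    echo "yearly"          # Jan 1st: 7-year Glacier Deep Archive
  elif [ "${d:8:2}" = "01" ]; then
    echo "monthly"         # 1st of month: 1-year Glacier archive
  elif [ "$weekday" = "Sunday" ]; then
    echo "weekly"          # Sunday: 90-day S3 + Glacier copy
  else
    echo "daily"           # Everything else: 30-day encrypted S3 copy
  fi
}
```

Note that precedence matters: Jan 1st wins over the 1st-of-month rule, and a Sunday that is also the 1st is archived as a monthly backup.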
Backup Scripts
#!/bin/bash
# File: /pos-platform/scripts/backup/daily-backup.sh
# Daily database backup script
set -e
#=============================================
# CONFIGURATION
#=============================================
BACKUP_DIR="/backups/postgres/daily"
S3_BUCKET="s3://pos-backups/postgres"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="pos_db_${DATE}.sql.gz"
LOG_FILE="/var/log/pos-backup.log"
#=============================================
# FUNCTIONS
#=============================================
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
send_alert() {
# Send to Slack on failure
curl -X POST "$SLACK_WEBHOOK_URL" \
-H 'Content-type: application/json' \
-d "{\"text\": \"BACKUP ALERT: $1\"}"
}
#=============================================
# BACKUP PROCESS
#=============================================
backup_database() {
log "Starting database backup..."
# Create backup with compression
docker exec postgres-primary pg_dump \
-U pos_admin \
-d pos_db \
--format=custom \
--compress=9 \
--file="/tmp/${BACKUP_FILE}"
# Copy from container
docker cp "postgres-primary:/tmp/${BACKUP_FILE}" "${BACKUP_DIR}/${BACKUP_FILE}"
# Verify backup integrity
    # (run the check inside `if` so a non-zero exit is not killed by set -e)
    if docker exec postgres-primary pg_restore \
        --list "/tmp/${BACKUP_FILE}" > /dev/null 2>&1; then
        log "Backup verified successfully"
    else
        log "ERROR: Backup verification failed"
        send_alert "Backup verification failed for ${BACKUP_FILE}"
        exit 1
    fi
log "Backup completed: ${BACKUP_FILE}"
}
upload_to_s3() {
log "Uploading to S3..."
# Encrypt and upload
aws s3 cp \
"${BACKUP_DIR}/${BACKUP_FILE}" \
"${S3_BUCKET}/daily/${BACKUP_FILE}" \
--sse aws:kms \
--sse-kms-key-id "$KMS_KEY_ID"
log "Upload completed"
}
cleanup_old_backups() {
log "Cleaning up old backups..."
# Local cleanup
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
# S3 cleanup (handled by lifecycle policy)
log "Cleanup completed"
}
#=============================================
# PER-TENANT BACKUP
#=============================================
backup_tenant_data() {
log "Starting per-tenant backups..."
# Get all active tenants
TENANTS=$(docker exec postgres-primary psql -U pos_admin -d pos_db -t -c \
"SELECT schema_name FROM tenants WHERE status = 'active';")
for TENANT in $TENANTS; do
TENANT=$(echo "$TENANT" | tr -d ' ')
TENANT_BACKUP="${BACKUP_DIR}/tenants/${TENANT}_${DATE}.sql.gz"
log "Backing up tenant: $TENANT"
docker exec postgres-primary pg_dump \
-U pos_admin \
-d pos_db \
--schema="${TENANT}" \
--format=custom \
--compress=9 \
--file="/tmp/tenant_${TENANT}.sql"
docker cp "postgres-primary:/tmp/tenant_${TENANT}.sql" "$TENANT_BACKUP"
# Upload tenant backup
aws s3 cp "$TENANT_BACKUP" \
"${S3_BUCKET}/tenants/${TENANT}/${TENANT}_${DATE}.sql.gz" \
--sse aws:kms
log "Tenant backup completed: $TENANT"
done
}
#=============================================
# MAIN
#=============================================
main() {
log "=========================================="
log "Daily Backup Started"
log "=========================================="
mkdir -p "$BACKUP_DIR/tenants"
backup_database
backup_tenant_data
upload_to_s3
cleanup_old_backups
log "=========================================="
log "Daily Backup Completed Successfully"
log "=========================================="
}
main "$@"
WAL Archiving Configuration
# File: /pos-platform/docker/postgres/postgresql.conf (excerpt)
# WAL Settings
wal_level = replica
archive_mode = on
archive_command = 'aws s3 cp %p s3://pos-backups/wal/%f --sse aws:kms'
archive_timeout = 60
# Replication Settings
max_wal_senders = 5
wal_keep_size = 1GB
hot_standby = on
# Recovery Settings (for standby)
restore_command = 'aws s3 cp s3://pos-backups/wal/%f %p'
recovery_target_timeline = 'latest'
27.4 Failover Architecture
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ MULTI-REGION FAILOVER ARCHITECTURE │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌─────────────────┐
│ DNS (Route53) │
│ Health-based │
│ Failover │
└────────┬────────┘
│
┌──────────────────┼──────────────────┐
│ │ │
▼ │ ▼
┌───────────────────┐ │ ┌───────────────────┐
│ PRIMARY REGION │ │ │ SECONDARY REGION │
│ (US-East-1) │ │ │ (US-West-2) │
│ │ │ │ │
│ ┌─────────────┐ │ │ │ ┌─────────────┐ │
│ │ Load │ │ │ │ │ Load │ │
│ │ Balancer │ │ │ │ │ Balancer │ │
│ └──────┬──────┘ │ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │
│ ┌──────┴──────┐ │ │ │ ┌──────┴──────┐ │
│ │ API (x3) │ │ │ │ │ API (x2) │ │
│ │ Active │ │ │ │ │ Standby │ │
│ └──────┬──────┘ │ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │
│ ┌──────┴──────┐ │ Sync │ │ ┌──────┴──────┐ │
│ │ PostgreSQL │ │◄─────────┼──────│ │ PostgreSQL │ │
│ │ PRIMARY │ │ (Async) │ │ │ REPLICA │ │
│ └─────────────┘ │ │ │ └─────────────┘ │
│ │ │ │ │
│ ┌─────────────┐ │ Sync │ │ ┌─────────────┐ │
│ │ Redis │ │◄─────────┼──────│ │ Redis │ │
│ │ PRIMARY │ │ │ │ │ REPLICA │ │
│ └─────────────┘ │ │ │ └─────────────┘ │
└───────────────────┘ │ └───────────────────┘
│
NORMAL OPERATION:
100% traffic → Primary
FAILOVER STATE:
100% traffic → Secondary
FAILOVER TRIGGERS:
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ Trigger │ Detection Time │ Failover Time │ Auto/Manual │
├──────────────────────────────────┼────────────────┼───────────────┼────────────────┤
│ Load balancer health check fail │ 30 seconds │ 1 minute │ Automatic │
│ Database connection failure │ 1 minute │ 5 minutes │ Automatic │
│ Region-wide outage (AWS) │ 5 minutes │ 10 minutes │ Automatic │
│ Planned maintenance │ N/A │ 0 (graceful) │ Manual │
│ Security incident │ Immediate │ 5 minutes │ Manual │
└─────────────────────────────────────────────────────────────────────────────────────┘
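The 30-second detection window in the first row corresponds to a consecutive-failure counter in the health checker. A hedged sketch assuming 10-second probes, so three consecutive failures trip the failover (`record_check` is an illustrative name, not a Route53 API):

```shell
#!/usr/bin/env bash
# record_check: consecutive-failure health check, as used for the
# "load balancer health check fail" trigger above (3 probes x 10s = 30s).
FAILS=0
THRESHOLD=3
record_check() {
  local http_code=$1
  if [ "$http_code" -eq 200 ]; then
    FAILS=0                # Any success resets the window
  else
    FAILS=$((FAILS + 1))
  fi
  if [ "$FAILS" -ge "$THRESHOLD" ]; then
    echo "FAILOVER"        # Route traffic to the secondary region
  else
    echo "OK"
  fi
}
```

Resetting on any success is deliberate: a single flapping probe should not accumulate toward a failover decision.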
27.5 Recovery Procedures
Complete Database Recovery
#!/bin/bash
# File: /pos-platform/scripts/recovery/full-db-recovery.sh
# Complete database recovery from backup
set -e
#=============================================
# RECOVERY MODES
#=============================================
# 1. full - Restore to latest available state
# 2. pitr - Point-in-time recovery to specific timestamp
# 3. tenant - Restore specific tenant only
RECOVERY_MODE=${1:-full}
TARGET_TIME=${2:-}
TENANT_ID=${3:-}
#=============================================
# CONFIGURATION
#=============================================
S3_BUCKET="s3://pos-backups"
WORK_DIR="/tmp/recovery_$(date +%s)"
LOG_FILE="/var/log/pos-recovery.log"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] RECOVERY: $1" | tee -a "$LOG_FILE"
}
#=============================================
# STEP 1: STOP SERVICES
#=============================================
stop_services() {
log "Stopping API services..."
docker-compose stop pos-api
log "Services stopped"
}
#=============================================
# STEP 2: DOWNLOAD BACKUP
#=============================================
download_backup() {
log "Downloading backup files..."
mkdir -p "$WORK_DIR"
# Get latest backup
LATEST_BACKUP=$(aws s3 ls "${S3_BUCKET}/postgres/daily/" | \
sort | tail -1 | awk '{print $4}')
aws s3 cp "${S3_BUCKET}/postgres/daily/${LATEST_BACKUP}" \
"${WORK_DIR}/backup.sql.gz"
log "Downloaded: ${LATEST_BACKUP}"
}
#=============================================
# STEP 3: VERIFY BACKUP INTEGRITY
#=============================================
verify_backup() {
log "Verifying backup integrity..."
    # The backup is a pg_dump custom-format archive (not an actual gzip
    # file despite the extension), so validate it with pg_restore --list
    docker cp "${WORK_DIR}/backup.sql.gz" postgres-primary:/tmp/backup.sql.gz
    if ! docker exec postgres-primary pg_restore --list /tmp/backup.sql.gz > /dev/null 2>&1; then
        log "ERROR: Backup file is corrupted"
        exit 1
    fi
log "Backup verified"
}
#=============================================
# STEP 4: PREPARE DATABASE
#=============================================
prepare_database() {
log "Preparing database for recovery..."
# Create recovery database
docker exec postgres-primary psql -U postgres -c \
"SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'pos_db';"
docker exec postgres-primary psql -U postgres -c \
"DROP DATABASE IF EXISTS pos_db_recovery;"
docker exec postgres-primary psql -U postgres -c \
"CREATE DATABASE pos_db_recovery;"
log "Recovery database prepared"
}
#=============================================
# STEP 5: RESTORE DATA
#=============================================
restore_data() {
log "Restoring data..."
# Copy backup to container
docker cp "${WORK_DIR}/backup.sql.gz" postgres-primary:/tmp/
    # Restore the custom-format archive with pg_restore
    # (it was taken with pg_dump --format=custom, so it cannot
    # be piped through gunzip | psql)
    docker exec postgres-primary pg_restore \
        -U postgres -d pos_db_recovery /tmp/backup.sql.gz
log "Data restored"
}
#=============================================
# STEP 6: POINT-IN-TIME RECOVERY (if needed)
#=============================================
apply_wal_logs() {
if [ "$RECOVERY_MODE" == "pitr" ]; then
log "Applying WAL logs until: $TARGET_TIME"
# Download WAL files
aws s3 sync "${S3_BUCKET}/wal/" "${WORK_DIR}/wal/" \
--exclude "*" \
--include "*.gz"
        # Apply WAL files (PostgreSQL 12+ recovery: the target time goes in
        # postgresql.auto.conf; recovery.signal is an empty marker file)
        docker exec postgres-primary bash -c "
            echo \"recovery_target_time = '$TARGET_TIME'\" >> /var/lib/postgresql/data/postgresql.auto.conf
            touch /var/lib/postgresql/data/recovery.signal
            pg_ctl -D /var/lib/postgresql/data restart
        "
log "PITR completed"
fi
}
#=============================================
# STEP 7: VERIFY RECOVERY
#=============================================
verify_recovery() {
log "Verifying recovery..."
# Check table counts
TABLES=$(docker exec postgres-primary psql -U postgres -d pos_db_recovery -t -c \
"SELECT COUNT(*) FROM information_schema.tables WHERE table_schema NOT IN ('pg_catalog', 'information_schema');")
log "Restored tables: $TABLES"
# Check transaction count
TX_COUNT=$(docker exec postgres-primary psql -U postgres -d pos_db_recovery -t -c \
"SELECT COUNT(*) FROM transactions;")
log "Restored transactions: $TX_COUNT"
# Check latest transaction
LATEST_TX=$(docker exec postgres-primary psql -U postgres -d pos_db_recovery -t -c \
"SELECT MAX(created_at) FROM transactions;")
log "Latest transaction: $LATEST_TX"
}
#=============================================
# STEP 8: SWAP DATABASES
#=============================================
swap_databases() {
log "Swapping databases..."
# Rename databases
docker exec postgres-primary psql -U postgres -c \
"ALTER DATABASE pos_db RENAME TO pos_db_old;"
docker exec postgres-primary psql -U postgres -c \
"ALTER DATABASE pos_db_recovery RENAME TO pos_db;"
log "Databases swapped"
}
#=============================================
# STEP 9: RESTART SERVICES
#=============================================
restart_services() {
log "Restarting services..."
docker-compose start pos-api
# Wait for health checks
sleep 30
# Verify health
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
if [ "$HTTP_CODE" -eq 200 ]; then
log "Services healthy"
else
log "ERROR: Services not healthy after recovery"
exit 1
fi
}
#=============================================
# STEP 10: CLEANUP
#=============================================
cleanup() {
log "Cleaning up..."
rm -rf "$WORK_DIR"
# Keep old database for 24 hours, then drop
    echo "docker exec postgres-primary psql -U postgres -c 'DROP DATABASE pos_db_old;'" | at now + 24 hours
log "Cleanup scheduled"
}
#=============================================
# MAIN
#=============================================
main() {
log "=========================================="
log "DATABASE RECOVERY STARTED"
log "Mode: $RECOVERY_MODE"
[ -n "$TARGET_TIME" ] && log "Target Time: $TARGET_TIME"
log "=========================================="
stop_services
download_backup
verify_backup
prepare_database
restore_data
apply_wal_logs
verify_recovery
swap_databases
restart_services
cleanup
log "=========================================="
log "DATABASE RECOVERY COMPLETED"
log "=========================================="
}
main "$@"
Tenant-Specific Recovery
#!/bin/bash
# File: /pos-platform/scripts/recovery/tenant-recovery.sh
# Restore specific tenant data
set -e
TENANT_ID=$1
BACKUP_DATE=${2:-latest}
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] TENANT RECOVERY: $1"
}
#=============================================
# FIND TENANT BACKUP
#=============================================
find_backup() {
log "Finding backup for tenant: $TENANT_ID"
if [ "$BACKUP_DATE" == "latest" ]; then
BACKUP_FILE=$(aws s3 ls "s3://pos-backups/tenants/${TENANT_ID}/" | \
sort | tail -1 | awk '{print $4}')
else
BACKUP_FILE="${TENANT_ID}_${BACKUP_DATE}.sql.gz"
fi
log "Using backup: $BACKUP_FILE"
}
#=============================================
# RESTORE TENANT SCHEMA
#=============================================
restore_tenant() {
log "Restoring tenant schema..."
# Download backup, then copy it into the database container
aws s3 cp "s3://pos-backups/tenants/${TENANT_ID}/${BACKUP_FILE}" /tmp/
docker cp "/tmp/${BACKUP_FILE}" "postgres-primary:/tmp/${BACKUP_FILE}"
# Drop existing schema (with confirmation in production)
docker exec postgres-primary psql -U postgres -d pos_db -c \
"DROP SCHEMA IF EXISTS tenant_${TENANT_ID} CASCADE;"
# Restore schema
docker exec postgres-primary bash -c \
"gunzip -c /tmp/${BACKUP_FILE} | psql -U postgres -d pos_db"
log "Tenant restored: $TENANT_ID"
}
#=============================================
# MAIN
#=============================================
main() {
if [ -z "$TENANT_ID" ]; then
echo "Usage: $0 <tenant_id> [backup_date]"
exit 1
fi
find_backup
restore_tenant
log "Recovery completed for tenant: $TENANT_ID"
}
main "$@"
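The tenant-recovery script restores data but does not prove the restore is complete. A minimal post-restore check compares restored row counts against counts captured at backup time. The helper below is a sketch: the psql query in the comment is illustrative, and the expected counts would come from a backup manifest.

```shell
# Post-restore verification sketch: compare restored counts to expected counts.
verify_counts() {
  local expected=$1 actual=$2 table=$3
  if [ "$expected" -eq "$actual" ]; then
    echo "OK: $table ($actual rows)"
  else
    echo "MISMATCH: $table expected=$expected actual=$actual"
    return 1
  fi
}

# In a real run, "actual" would come from the restored schema, e.g.:
#   actual=$(docker exec postgres-primary psql -U postgres -d pos_db -t -c \
#     "SELECT COUNT(*) FROM tenant_${TENANT_ID}.transactions;")
verify_counts 100 100 transactions   # prints "OK: transactions (100 rows)"
```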
27.6 DR Testing Schedule
# Disaster Recovery Test Schedule
## 27.7 Quarterly Tests
### Q1 (January)
| Test | Date | Duration | Owner |
|------|------|----------|-------|
| Full failover drill | Week 3 | 4 hours | Platform Team |
| Backup restoration test | Week 4 | 2 hours | DBA |
### Q2 (April)
| Test | Date | Duration | Owner |
|------|------|----------|-------|
| Tenant recovery test | Week 2 | 2 hours | Platform Team |
| Network failover test | Week 3 | 2 hours | Network Team |
### Q3 (July)
| Test | Date | Duration | Owner |
|------|------|----------|-------|
| Full failover drill | Week 3 | 4 hours | Platform Team |
| PITR recovery test | Week 4 | 3 hours | DBA |
### Q4 (October)
| Test | Date | Duration | Owner |
|------|------|----------|-------|
| Annual DR exercise | Week 2-3 | 8 hours | All Teams |
| Tabletop exercise | Week 4 | 2 hours | Leadership |
## 27.8 Monthly Tests
- Automated backup verification
- Replica lag monitoring
- Health check validation
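The monthly "automated backup verification" item can be scripted: list the newest daily backup, compute its age, and alert when it exceeds the expected cadence. The sketch below assumes a daily backup plus a two-hour grace window; the S3 listing in the comment is illustrative.

```shell
# Backup freshness sketch: age of the newest backup vs the expected daily cadence.
backup_age_hours() {
  local backup_epoch=$1 now_epoch=$2
  echo $(( (now_epoch - backup_epoch) / 3600 ))
}

check_freshness() {
  local age_hours=$1 max_hours=${2:-26}   # daily backup + 2h grace (assumption)
  if [ "$age_hours" -le "$max_hours" ]; then echo "FRESH"; else echo "STALE"; fi
}

# Real usage would derive the backup timestamp from the newest object, e.g.:
#   aws s3 ls s3://pos-backups/daily/ | sort | tail -1
check_freshness "$(backup_age_hours 0 7200)"   # 2h old -> prints "FRESH"
```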
## 27.9 Test Procedure
### Pre-Test Checklist
- [ ] Notify stakeholders
- [ ] Confirm maintenance window
- [ ] Verify backup freshness
- [ ] Prepare rollback plan
- [ ] Stage monitoring dashboards
### During Test
- [ ] Document all actions
- [ ] Record timestamps
- [ ] Note any issues
- [ ] Track RTO/RPO actual vs target
### Post-Test
- [ ] Generate test report
- [ ] Update runbooks if needed
- [ ] File improvement tickets
- [ ] Schedule follow-up for issues
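For the "RTO/RPO actual vs target" line item, the result can be computed mechanically from the drill's recorded timestamps. A sketch (epoch seconds in, pass/fail out):

```shell
# RTO comparison sketch: elapsed drill time vs the target, both in minutes.
rto_result() {
  local start_epoch=$1 end_epoch=$2 target_min=$3
  local actual_min=$(( (end_epoch - start_epoch) / 60 ))
  if [ "$actual_min" -le "$target_min" ]; then
    echo "PASS: ${actual_min}m (target ${target_min}m)"
  else
    echo "FAIL: ${actual_min}m (target ${target_min}m)"
  fi
}

rto_result 0 2700 60   # 45-minute drill vs 60-minute target -> PASS
```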
27.10 Communication Templates
Outage Notification Templates
# Template: Initial Outage Notification
## 27.11 Internal (Slack/Email)
Subject: [INCIDENT] POS Platform - Service Disruption
**Status**: Investigating
**Impact**: [High/Medium/Low]
**Start Time**: [YYYY-MM-DD HH:MM UTC]
**Affected Services**:
- [ ] Transaction Processing
- [ ] Inventory Management
- [ ] Order Fulfillment
- [ ] Reporting
**Current Actions**:
- Investigating root cause
- Engaged [Team Name]
**Next Update**: In 30 minutes or when status changes
---
# Template: Customer Notification
Subject: Service Status Update - POS Platform
Dear Valued Customer,
We are currently experiencing a service disruption affecting
[specific functionality]. Our team is actively working to
resolve this issue.
**What's Affected**:
[List specific features]
**What's Working**:
[List unaffected features]
**Workaround**:
[If applicable, provide workaround]
**Expected Resolution**:
We anticipate resolution within [timeframe].
We apologize for any inconvenience and will provide updates
as the situation progresses.
---
# Template: Resolution Notification
Subject: [RESOLVED] POS Platform - Service Restored
**Status**: Resolved
**Duration**: [X hours, Y minutes]
**Resolution Time**: [YYYY-MM-DD HH:MM UTC]
**Root Cause**:
[Brief description]
**Resolution**:
[What was done to fix]
**Preventive Measures**:
[What will prevent recurrence]
**Post-Incident Review**:
Scheduled for [date]
Thank you for your patience.
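The internal template above can be pushed to Slack automatically when an incident is opened. The sketch below assumes a SLACK_WEBHOOK_URL environment variable and a minimal `text`-only payload; anything beyond that is illustrative.

```shell
# Incident notification sketch: build the Slack payload for the template above.
build_incident_payload() {
  local impact=$1 start_time=$2
  printf '{"text":"[INCIDENT] POS Platform - Service Disruption\\nImpact: %s\\nStart Time: %s"}' \
    "$impact" "$start_time"
}

# Posting is a single webhook call (SLACK_WEBHOOK_URL is an assumption):
notify_slack() {
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$(build_incident_payload "$1" "$2")" "$SLACK_WEBHOOK_URL"
}

build_incident_payload "High" "2026-02-25 09:00 UTC"
```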
27.12 Summary
This chapter provides complete disaster recovery coverage:
- Recovery Objectives: RTO/RPO by data tier
- Backup Strategy: Daily dumps, WAL archiving, per-tenant backups
- Failover Architecture: Multi-region with automatic failover
- Recovery Procedures: Step-by-step scripts for full and tenant recovery
- DR Testing: Quarterly test schedule and procedures
- Communication: Templates for internal and customer notifications
Next Chapter: Chapter 28: Tenant Lifecycle Management
“Hope is not a strategy. Test your recovery procedures.”
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VII - Operations |
| Chapter | 27 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 28: Tenant Lifecycle Management
28.1 Overview
This chapter defines the complete tenant lifecycle for the POS Platform, including state transitions, onboarding workflows, offboarding procedures, and billing integration.
28.2 Tenant States
State Machine Diagram
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ TENANT STATE MACHINE │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌───────────────┐
│ PROSPECT │
│ (Pre-sale) │
└───────┬───────┘
│ Sales closes deal
│ Contract signed
▼
┌───────────────┐
┌───────►│ TRIAL │◄──────────┐
│ │ (14 days) │ │
│ └───────┬───────┘ │
│ │ │
│ │ Payment received │ Trial extended
│ ▼ │ (max 30 days)
│ ┌───────────────┐ │
│ │ PROVISIONING │───────────┘
│ │ (Setup phase) │
│ └───────┬───────┘
│ │
│ │ Setup complete
│ │ Go-live approved
│ ▼
│ ┌───────────────┐
Reactivate │ │ ACTIVE │
(payment │ │ (Production) │◄─────────────────────────┐
received) │ └───────┬───────┘ │
│ │ │
│ ┌───────────┼───────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ Payment Contract Compliance │
│ Failure Violation Issue │
│ │ │ │ │
│ └───────────┼───────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────┐ │
└────────│ SUSPENDED │──────────────────────────┘
│ (Read-only) │ Issue resolved
└───────┬───────┘
│
│ 30 days no resolution
│ OR cancellation request
▼
┌───────────────┐
│ CANCELLED │
│ (Grace period)│
│ (30 days) │
└───────┬───────┘
│
│ Grace period expired
▼
┌───────────────┐
│ ARCHIVED │
│(Data retained)│
│ (90 days) │
└───────┬───────┘
│
│ Retention expired
│ OR GDPR deletion
▼
┌───────────────┐
│ PURGED │
│(Permanently │
│ deleted) │
└───────────────┘
STATE DEFINITIONS:
┌─────────────────┬──────────────────────────────────────────────────────────────────┐
│ State │ Description │
├─────────────────┼──────────────────────────────────────────────────────────────────┤
│ PROSPECT │ Lead in sales pipeline, no system access │
│ TRIAL │ Free trial period, limited features │
│ PROVISIONING │ Database/schema being set up, training in progress │
│ ACTIVE │ Full production access, billing active │
│ SUSPENDED │ Read-only access, no transactions, billing paused │
│ CANCELLED │ No access, data preserved for grace period │
│ ARCHIVED │ No access, data compressed and stored offline │
│ PURGED │ All data permanently deleted │
└─────────────────┴──────────────────────────────────────────────────────────────────┘
State Transition Rules
// File: /src/POS.Core/Tenants/TenantStateMachine.cs
public class TenantStateMachine
{
private static readonly Dictionary<TenantState, TenantState[]> AllowedTransitions = new()
{
[TenantState.Prospect] = new[] { TenantState.Trial },
[TenantState.Trial] = new[] { TenantState.Provisioning, TenantState.Cancelled },
[TenantState.Provisioning] = new[] { TenantState.Active, TenantState.Trial },
[TenantState.Active] = new[] { TenantState.Suspended, TenantState.Cancelled },
[TenantState.Suspended] = new[] { TenantState.Active, TenantState.Cancelled },
[TenantState.Cancelled] = new[] { TenantState.Archived, TenantState.Active },
[TenantState.Archived] = new[] { TenantState.Purged },
[TenantState.Purged] = Array.Empty<TenantState>()
};
public bool CanTransition(TenantState from, TenantState to)
{
return AllowedTransitions.TryGetValue(from, out var allowed)
&& allowed.Contains(to);
}
public void Transition(Tenant tenant, TenantState newState, string reason)
{
if (!CanTransition(tenant.State, newState))
{
throw new InvalidStateTransitionException(
$"Cannot transition from {tenant.State} to {newState}");
}
var previousState = tenant.State;
tenant.State = newState;
tenant.StateChangedAt = DateTime.UtcNow;
tenant.StateChangeReason = reason;
// Emit domain event
tenant.AddDomainEvent(new TenantStateChangedEvent(
tenant.Id,
previousState,
newState,
reason
));
}
}
public enum TenantState
{
Prospect,
Trial,
Provisioning,
Active,
Suspended,
Cancelled,
Archived,
Purged
}
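Operational scripts (suspension jobs, archival cron) also need to respect the transition table. A shell mirror of `AllowedTransitions` is sketched below; the application-layer state machine above remains authoritative, and this duplicate table would have to be kept in sync with it.

```shell
# Transition guard sketch mirroring TenantStateMachine.AllowedTransitions.
can_transition() {
  case "$1:$2" in
    prospect:trial) return 0 ;;
    trial:provisioning|trial:cancelled) return 0 ;;
    provisioning:active|provisioning:trial) return 0 ;;
    active:suspended|active:cancelled) return 0 ;;
    suspended:active|suspended:cancelled) return 0 ;;
    cancelled:archived|cancelled:active) return 0 ;;
    archived:purged) return 0 ;;
    *) return 1 ;;   # purged is terminal; everything else is disallowed
  esac
}

if can_transition suspended active; then echo "allowed"; fi   # reactivation
```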
28.3 Onboarding Workflow
Complete Onboarding Process
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ TENANT ONBOARDING WORKFLOW │
└─────────────────────────────────────────────────────────────────────────────────────┘
PHASE 1: SALES HANDOFF (Day 0)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Sales Team CRM System Onboarding Queue │
│ │ │ │ │
│ │ 1. Win opportunity │ │ │
│ ├────────────────────────────────►│ │ │
│ │ │ 2. Create tenant record │ │
│ │ ├─────────────────────────────►│ │
│ │ 3. Assign onboarding manager │ │ │
│ │◄────────────────────────────────┤ │ │
│ │ │ │ │
│ Deliverables: │
│ □ Signed contract │
│ □ Payment method on file │
│ □ Business requirements document │
│ □ Primary contact information │
│ □ Assigned onboarding manager │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
PHASE 2: DATABASE PROVISIONING (Day 1)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Automated System │
│ │ │
│ │ 1. Create tenant schema │
│ │ CREATE SCHEMA tenant_xyz; │
│ │ │
│ │ 2. Run migrations │
│ │ Apply all schema migrations │
│ │ │
│ │ 3. Seed reference data │
│ │ - Payment methods │
│ │ - Tax categories │
│ │ - Default settings │
│ │ │
│ │ 4. Create admin user │
│ │ - Generate temporary password │
│ │ - Send welcome email │
│ │ │
│ Automated Checks: │
│ □ Schema created successfully │
│ □ All tables exist │
│ □ Admin user can login │
│ □ API key generated │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
PHASE 3: CONFIGURATION SETUP (Days 2-3)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Onboarding Manager + Customer │
│ │ │
│ │ 1. Company profile setup │
│ │ - Business name, address, logo │
│ │ - Tax settings (rates, exemptions) │
│ │ - Currency and locale │
│ │ │
│ │ 2. Location configuration │
│ │ - Add store locations │
│ │ - Assign location codes │
│ │ - Set business hours │
│ │ │
│ │ 3. Payment processor setup │
│ │ - Connect Stripe account │
│ │ - Configure payment methods │
│ │ - Test transactions │
│ │ │
│ │ 4. User provisioning │
│ │ - Create user accounts │
│ │ - Assign roles (Manager, Cashier, etc.) │
│ │ - Configure permissions │
│ │ │
│ │ 5. Hardware setup (if applicable) │
│ │ - Register POS terminals │
│ │ - Connect receipt printers │
│ │ - Pair barcode scanners │
│ │ │
│ Configuration Checklist: │
│ □ Company profile complete │
│ □ At least 1 location configured │
│ □ Payment processor connected and tested │
│ □ At least 1 manager user created │
│ □ Receipt template customized │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
PHASE 4: DATA MIGRATION (Days 3-7)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Migration Team + Customer │
│ │ │
│ │ 1. Data assessment │
│ │ - Review existing data sources │
│ │ - Identify data quality issues │
│ │ - Plan field mappings │
│ │ │
│ │ 2. Product catalog import │
│ │ - Import products from CSV/API │
│ │ - Map categories │
│ │ - Validate pricing │
│ │ │
│ │ 3. Customer data import │
│ │ - Import customer records │
│ │ - Deduplicate entries │
│ │ - Validate contact info │
│ │ │
│ │ 4. Inventory import │
│ │ - Import current stock levels │
│ │ - Map to locations │
│ │ - Validate quantities │
│ │ │
│ │ 5. Historical data (optional) │
│ │ - Import transaction history │
│ │ - Import for reporting only │
│ │ │
│ Migration Validation: │
│ □ Product count matches source │
│ □ Customer count matches (after dedup) │
│ □ Inventory totals reconcile │
│ □ Sample transactions verified │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
PHASE 5: TRAINING (Days 5-10)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Training Team + Customer Staff │
│ │ │
│ │ 1. Administrator training (2 hours) │
│ │ - System configuration │
│ │ - User management │
│ │ - Reports and analytics │
│ │ │
│ │ 2. Manager training (2 hours) │
│ │ - Day-to-day operations │
│ │ - Inventory management │
│ │ - Staff management │
│ │ │
│ │ 3. Cashier training (1 hour) │
│ │ - Transaction processing │
│ │ - Customer lookup │
│ │ - Returns and exchanges │
│ │ │
│ │ 4. Hands-on practice │
│ │ - Practice transactions │
│ │ - Test edge cases │
│ │ - Q&A session │
│ │ │
│ Training Completion: │
│ □ Admin training complete │
│ □ Manager training complete │
│ □ All cashiers trained │
│ □ Practice transactions successful │
│ □ Training materials provided │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
PHASE 6: GO-LIVE (Day 10-14)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ All Teams │
│ │ │
│ │ 1. Pre-go-live checklist │
│ │ - All configuration verified │
│ │ - Data migration validated │
│ │ - Staff trained and ready │
│ │ - Backup of old system │
│ │ │
│ │ 2. Go-live execution │
│ │ - Cutover at scheduled time │
│ │ - First transaction verified │
│ │ - Monitor for issues │
│ │ │
│ │ 3. Hypercare period (Days 1-7) │
│ │ - On-call support │
│ │ - Daily check-ins │
│ │ - Rapid issue resolution │
│ │ │
│ │ 4. Transition to BAU │
│ │ - Hand off to support team │
│ │ - Schedule first review │
│ │ - Close onboarding project │
│ │ │
│ Go-Live Criteria: │
│ □ Sign-off from customer │
│ □ First successful transaction │
│ □ End-of-day close successful │
│ □ Support handoff complete │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
Automated Provisioning Script
#!/bin/bash
# File: /pos-platform/scripts/tenants/provision-tenant.sh
# Automated tenant provisioning
set -e
TENANT_ID=$1
TENANT_NAME=$2
ADMIN_EMAIL=$3
PLAN_TYPE=${4:-standard}
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] PROVISION: $1"
}
#=============================================
# STEP 1: CREATE TENANT SCHEMA
#=============================================
create_schema() {
log "Creating schema for tenant: $TENANT_ID"
docker exec postgres-primary psql -U pos_admin -d pos_db << EOF
-- Create tenant schema
CREATE SCHEMA IF NOT EXISTS "tenant_${TENANT_ID}";
-- Set search path
SET search_path TO "tenant_${TENANT_ID}";
-- Run migrations (tables created here)
\i /migrations/001_create_tables.sql
\i /migrations/002_create_indexes.sql
\i /migrations/003_seed_reference_data.sql
-- Grant permissions
GRANT ALL PRIVILEGES ON SCHEMA "tenant_${TENANT_ID}" TO pos_app;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA "tenant_${TENANT_ID}" TO pos_app;
EOF
log "Schema created"
}
#=============================================
# STEP 2: SEED TENANT DATA
#=============================================
seed_data() {
log "Seeding tenant data..."
docker exec postgres-primary psql -U pos_admin -d pos_db << EOF
SET search_path TO "tenant_${TENANT_ID}";
-- Insert tenant record in shared schema
INSERT INTO shared.tenants (id, name, schema_name, state, plan_type, created_at)
VALUES ('${TENANT_ID}', '${TENANT_NAME}', 'tenant_${TENANT_ID}', 'provisioning', '${PLAN_TYPE}', NOW());
-- Insert default settings
INSERT INTO settings (key, value) VALUES
('company_name', '${TENANT_NAME}'),
('timezone', 'America/New_York'),
('currency', 'USD'),
('tax_rate', '0.0825'),
('receipt_footer', 'Thank you for your business!');
-- Insert default payment methods
INSERT INTO payment_methods (code, name, is_active) VALUES
('CASH', 'Cash', true),
('CARD', 'Credit/Debit Card', true),
('GIFT', 'Gift Card', true);
-- Insert default roles
INSERT INTO roles (name, permissions) VALUES
('admin', '["*"]'),
('manager', '["transactions", "inventory", "reports", "customers"]'),
('cashier', '["transactions", "customers"]');
EOF
log "Data seeded"
}
#=============================================
# STEP 3: CREATE ADMIN USER
#=============================================
create_admin() {
log "Creating admin user..."
# Generate temporary password
TEMP_PASSWORD=$(openssl rand -base64 12)
PASSWORD_HASH=$(echo -n "$TEMP_PASSWORD" | argon2 $(openssl rand -base64 16) -id -t 3 -m 16 -p 4 -l 32 -e)
docker exec postgres-primary psql -U pos_admin -d pos_db << EOF
SET search_path TO "tenant_${TENANT_ID}";
INSERT INTO users (email, password_hash, role, must_change_password, created_at)
VALUES ('${ADMIN_EMAIL}', '${PASSWORD_HASH}', 'admin', true, NOW());
EOF
# Send welcome email
send_welcome_email "$ADMIN_EMAIL" "$TEMP_PASSWORD"
log "Admin user created"
}
#=============================================
# STEP 4: GENERATE API KEY
#=============================================
generate_api_key() {
log "Generating API key..."
API_KEY=$(openssl rand -hex 32)
API_KEY_HASH=$(echo -n "$API_KEY" | sha256sum | cut -d' ' -f1)
docker exec postgres-primary psql -U pos_admin -d pos_db << EOF
INSERT INTO shared.api_keys (tenant_id, key_hash, name, created_at)
VALUES ('${TENANT_ID}', '${API_KEY_HASH}', 'Primary API Key', NOW());
EOF
# Store API key securely (send to customer)
echo "$API_KEY" > "/secure/keys/${TENANT_ID}.key"
chmod 400 "/secure/keys/${TENANT_ID}.key"
log "API key generated"
}
#=============================================
# STEP 5: UPDATE STATE
#=============================================
update_state() {
log "Updating tenant state to 'active'..."
docker exec postgres-primary psql -U pos_admin -d pos_db << EOF
UPDATE shared.tenants
SET state = 'active', activated_at = NOW()
WHERE id = '${TENANT_ID}';
EOF
log "Tenant activated"
}
#=============================================
# HELPER: SEND WELCOME EMAIL
#=============================================
send_welcome_email() {
EMAIL=$1
PASSWORD=$2
curl -X POST "$EMAIL_API_URL/send" \
-H "Authorization: Bearer $EMAIL_API_KEY" \
-H "Content-Type: application/json" \
-d "{
\"to\": \"$EMAIL\",
\"template\": \"welcome\",
\"data\": {
\"tenant_name\": \"$TENANT_NAME\",
\"login_url\": \"https://pos.example.com/login\",
\"temp_password\": \"$PASSWORD\"
}
}"
}
#=============================================
# MAIN
#=============================================
main() {
if [ -z "$TENANT_ID" ] || [ -z "$TENANT_NAME" ] || [ -z "$ADMIN_EMAIL" ]; then
echo "Usage: $0 <tenant_id> <tenant_name> <admin_email> [plan_type]"
exit 1
fi
log "=========================================="
log "Provisioning tenant: $TENANT_NAME"
log "=========================================="
create_schema
seed_data
create_admin
generate_api_key
update_state
log "=========================================="
log "Provisioning complete!"
log "=========================================="
}
main "$@"
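The Phase 2 "Automated Checks" list deserves its own smoke test after provision-tenant.sh finishes. The sketch below aggregates pass/fail results; the psql query in the comment is illustrative, and each check would be wired to a real query or login attempt.

```shell
# Post-provisioning smoke-test sketch for the Phase 2 automated checks.
smoke_check() {
  local name=$1 result=$2
  if [ "$result" = "ok" ]; then
    echo "PASS: $name"
  else
    echo "FAIL: $name"
    return 1
  fi
}

# A real check would query the database, e.g.:
#   docker exec postgres-primary psql -U pos_admin -d pos_db -t -c \
#     "SELECT 'ok' FROM information_schema.schemata WHERE schema_name = 'tenant_${TENANT_ID}';"
failures=0
smoke_check "schema created"    ok || failures=$((failures + 1))
smoke_check "admin user exists" ok || failures=$((failures + 1))
echo "failures: $failures"
```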
28.4 Offboarding Workflow
Offboarding Process
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ TENANT OFFBOARDING WORKFLOW │
└─────────────────────────────────────────────────────────────────────────────────────┘
STEP 1: CANCELLATION REQUEST (Day 0)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ □ Cancellation request received (email/portal/phone) │
│ □ Reason documented │
│ □ Contract terms reviewed (notice period, penalties) │
│ □ Retention offer made (if applicable) │
│ □ Final decision confirmed in writing │
│ □ Cancellation effective date set │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 2: DATA EXPORT (Days 1-7)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ □ Customer requests data export (GDPR right to portability) │
│ □ Generate export package: │
│ - Transactions (CSV) │
│ - Products (CSV) │
│ - Customers (CSV) │
│ - Inventory history (CSV) │
│ - Reports (PDF) │
│ □ Export package delivered securely │
│ □ Customer confirms receipt │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 3: ACCESS TERMINATION (Effective Date)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ □ Tenant state set to CANCELLED │
│ □ All user sessions terminated │
│ □ API keys revoked │
│ □ Webhook endpoints removed │
│ □ Payment processor disconnected │
│ □ Customer notified of access termination │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 4: GRACE PERIOD (30 Days)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ During this period: │
│ □ Data remains intact (no modifications) │
│ □ Customer can request reactivation │
│ □ Additional data exports available on request │
│ □ Billing stopped │
│ │
│ At end of grace period: │
│ □ Final notification sent │
│ □ State changed to ARCHIVED │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 5: DATA ARCHIVAL (Day 30)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ □ Create final backup │
│ □ Encrypt backup with archival key │
│ □ Move to cold storage (Glacier) │
│ □ Drop active schema │
│ □ Release database resources │
│ □ Archive tenant record │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 6: DATA RETENTION (90 Days)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Retained data: │
│ □ Transaction records (legal requirement) │
│ □ Audit logs │
│ □ Financial reports │
│ │
│ Purpose: │
│ □ Tax/audit compliance │
│ □ Legal disputes │
│ □ Fraud investigation │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
│
▼
STEP 7: GDPR DELETION (On Request or Day 120)
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ GDPR "Right to be Forgotten" Process: │
│ │
│ □ Deletion request received and verified │
│ □ Legal hold check (no active litigation) │
│ □ Tax record retention verified (if applicable, retain 7 years) │
│ □ Personal data identified: │
│ - Customer PII │
│ - Employee data │
│ - Contact information │
│ □ Pseudonymization applied where deletion not possible │
│ □ Backup copies identified and purged │
│ □ Deletion certificate generated │
│ □ Customer notified of completion │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
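Where deletion is not possible (for example, transaction rows retained for tax compliance), the checklist calls for pseudonymization. One approach replaces PII with a stable, irreversible token; the helper below sketches the token derivation, and the UPDATE in the comment is illustrative.

```shell
# Pseudonymization sketch: derive a stable, irreversible token from a record id.
pseudonym() {
  printf '%s' "$1" | sha256sum | cut -c1-16
}

# Application against retained rows would look roughly like:
#   docker exec postgres-primary psql -U pos_admin -d pos_db -c "
#     UPDATE tenant_${TENANT_ID}.customers
#     SET email = 'redacted-' || md5(id::text) || '@invalid',
#         name = 'REDACTED', phone = NULL, address = NULL;"
pseudonym "customer-42"   # same input always yields the same token
```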
Data Export Script
#!/bin/bash
# File: /pos-platform/scripts/tenants/export-tenant-data.sh
# Export all tenant data for offboarding
set -e
TENANT_ID=$1
OUTPUT_DIR="/exports/${TENANT_ID}"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] EXPORT: $1"
}
mkdir -p "$OUTPUT_DIR"
#=============================================
# EXPORT TRANSACTIONS
#=============================================
export_transactions() {
log "Exporting transactions..."
docker exec postgres-primary psql -U pos_admin -d pos_db -c "
COPY (
SELECT
t.id,
t.transaction_number,
t.created_at,
t.total,
t.tax,
t.payment_method,
t.status,
c.email as customer_email,
c.name as customer_name
FROM tenant_${TENANT_ID}.transactions t
LEFT JOIN tenant_${TENANT_ID}.customers c ON t.customer_id = c.id
ORDER BY t.created_at
) TO STDOUT WITH CSV HEADER
" > "${OUTPUT_DIR}/transactions.csv"
log "Exported $(( $(wc -l < ${OUTPUT_DIR}/transactions.csv) - 1 )) transactions"
}
#=============================================
# EXPORT PRODUCTS
#=============================================
export_products() {
log "Exporting products..."
docker exec postgres-primary psql -U pos_admin -d pos_db -c "
COPY (
SELECT
id,
sku,
name,
description,
price,
cost,
category,
barcode,
is_active,
created_at
FROM tenant_${TENANT_ID}.products
ORDER BY name
) TO STDOUT WITH CSV HEADER
" > "${OUTPUT_DIR}/products.csv"
log "Exported $(( $(wc -l < ${OUTPUT_DIR}/products.csv) - 1 )) products"
}
#=============================================
# EXPORT CUSTOMERS
#=============================================
export_customers() {
log "Exporting customers..."
docker exec postgres-primary psql -U pos_admin -d pos_db -c "
COPY (
SELECT
id,
email,
name,
phone,
address,
city,
state,
postal_code,
total_purchases,
last_purchase_at,
created_at
FROM tenant_${TENANT_ID}.customers
ORDER BY name
) TO STDOUT WITH CSV HEADER
" > "${OUTPUT_DIR}/customers.csv"
log "Exported $(( $(wc -l < ${OUTPUT_DIR}/customers.csv) - 1 )) customers"
}
#=============================================
# EXPORT INVENTORY
#=============================================
export_inventory() {
log "Exporting inventory..."
docker exec postgres-primary psql -U pos_admin -d pos_db -c "
COPY (
SELECT
i.product_id,
p.sku,
p.name as product_name,
l.name as location_name,
i.quantity,
i.last_updated
FROM tenant_${TENANT_ID}.inventory i
JOIN tenant_${TENANT_ID}.products p ON i.product_id = p.id
JOIN tenant_${TENANT_ID}.locations l ON i.location_id = l.id
ORDER BY p.name, l.name
) TO STDOUT WITH CSV HEADER
" > "${OUTPUT_DIR}/inventory.csv"
log "Exported inventory data"
}
#=============================================
# CREATE EXPORT PACKAGE
#=============================================
create_package() {
log "Creating export package..."
# Create manifest
cat > "${OUTPUT_DIR}/manifest.json" << EOF
{
"tenant_id": "${TENANT_ID}",
"export_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"files": [
{"name": "transactions.csv", "description": "All transactions"},
{"name": "products.csv", "description": "Product catalog"},
{"name": "customers.csv", "description": "Customer records"},
{"name": "inventory.csv", "description": "Current inventory levels"}
]
}
EOF
# Create password-protected zip (password delivered to the customer out-of-band)
zip -r -P "$EXPORT_PASSWORD" "${OUTPUT_DIR}.zip" "$OUTPUT_DIR"
# Upload the package, then generate a time-limited download link
aws s3 cp "${OUTPUT_DIR}.zip" "s3://pos-exports/${TENANT_ID}.zip"
DOWNLOAD_URL=$(aws s3 presign "s3://pos-exports/${TENANT_ID}.zip" --expires-in 604800)
log "Export package created"
log "Download URL (valid 7 days): $DOWNLOAD_URL"
}
#=============================================
# MAIN
#=============================================
main() {
if [ -z "$TENANT_ID" ]; then
echo "Usage: $0 <tenant_id>"
exit 1
fi
log "=========================================="
log "Exporting data for tenant: $TENANT_ID"
log "=========================================="
export_transactions
export_products
export_customers
export_inventory
create_package
log "=========================================="
log "Export complete!"
log "=========================================="
}
main "$@"
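Because the export travels outside the platform, the package should carry checksums so the customer can verify integrity after download. A sketch (directory and file names are illustrative):

```shell
# Checksum sketch for the export package: write and verify SHA256SUMS.
write_checksums() {
  ( cd "$1" && sha256sum *.csv > SHA256SUMS )
}

verify_checksums() {
  ( cd "$1" && sha256sum -c SHA256SUMS --quiet > /dev/null 2>&1 ) && echo "verified"
}

# Demonstration with a temporary directory standing in for $OUTPUT_DIR
demo=$(mktemp -d)
echo "id,total" > "$demo/transactions.csv"
write_checksums "$demo"
verify_checksums "$demo"   # prints "verified"
rm -rf "$demo"
```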
28.5 Billing Integration
Billing Events
// File: /src/POS.Core/Billing/BillingEvents.cs
public record TenantSubscriptionCreated(
string TenantId,
string PlanId,
string StripeSubscriptionId,
DateTime StartDate,
decimal MonthlyPrice
);
public record TenantPaymentReceived(
string TenantId,
string StripePaymentId,
decimal Amount,
DateTime PaidAt
);
public record TenantPaymentFailed(
string TenantId,
string StripePaymentId,
string FailureReason,
int AttemptCount,
DateTime NextRetryAt
);
public record TenantPlanChanged(
string TenantId,
string OldPlanId,
string NewPlanId,
DateTime EffectiveDate,
bool IsUpgrade
);
public record TenantSubscriptionCancelled(
string TenantId,
string Reason,
DateTime CancellationDate,
DateTime EffectiveEndDate
);
Stripe Webhook Handler
// File: /src/POS.Api/Webhooks/StripeWebhookController.cs
[ApiController]
[Route("webhooks/stripe")]
public class StripeWebhookController : ControllerBase
{
private readonly ITenantBillingService _billingService;
private readonly ILogger<StripeWebhookController> _logger;
private readonly string _webhookSecret;
public StripeWebhookController(
ITenantBillingService billingService,
ILogger<StripeWebhookController> logger,
IConfiguration configuration)
{
_billingService = billingService;
_logger = logger;
_webhookSecret = configuration["Stripe:WebhookSecret"];
}
[HttpPost]
public async Task<IActionResult> HandleWebhook()
{
var json = await new StreamReader(HttpContext.Request.Body).ReadToEndAsync();
Event stripeEvent;
try
{
stripeEvent = EventUtility.ConstructEvent(
json,
Request.Headers["Stripe-Signature"],
_webhookSecret);
}
catch (StripeException)
{
return BadRequest("Invalid Stripe signature");
}
switch (stripeEvent.Type)
{
case Events.InvoicePaid:
var invoice = stripeEvent.Data.Object as Invoice;
await HandleInvoicePaid(invoice);
break;
case Events.InvoicePaymentFailed:
var failedInvoice = stripeEvent.Data.Object as Invoice;
await HandlePaymentFailed(failedInvoice);
break;
case Events.CustomerSubscriptionDeleted:
var subscription = stripeEvent.Data.Object as Subscription;
await HandleSubscriptionCancelled(subscription);
break;
}
return Ok();
}
private async Task HandlePaymentFailed(Invoice invoice)
{
var tenantId = invoice.Metadata["tenant_id"];
var attemptCount = invoice.AttemptCount;
_logger.LogWarning(
"Payment failed for tenant {TenantId}, attempt {Attempt}",
tenantId, attemptCount);
if (attemptCount >= 3)
{
// Suspend tenant after 3 failed attempts
await _billingService.SuspendTenantAsync(
tenantId,
"Payment failed after 3 attempts"
);
}
}
}
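The suspend-after-three-failures rule in HandlePaymentFailed is easy to restate for ops tooling, and the endpoint itself can be exercised locally with the Stripe CLI (`stripe listen --forward-to localhost:8080/webhooks/stripe`, then `stripe trigger invoice.payment_failed`). The helper below is a sketch mirroring the controller's threshold:

```shell
# Mirror of the HandlePaymentFailed threshold: suspend at the third failure.
should_suspend() {
  [ "$1" -ge 3 ]
}

if should_suspend 3; then echo "suspend"; else echo "retry"; fi   # prints "suspend"
```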
28.6 Support Tier Definitions
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ SUPPORT TIER DEFINITIONS │
└─────────────────────────────────────────────────────────────────────────────────────┘
┌─────────────────┬──────────────┬──────────────┬──────────────┬──────────────────────┐
│ Feature │ Starter │ Professional │ Enterprise │ Premium │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ Monthly Price │ $49/month │ $149/month │ $499/month │ $999/month │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ Locations │ 1 │ 3 │ 10 │ Unlimited │
│ Users │ 3 │ 10 │ 50 │ Unlimited │
│ Transactions │ 1,000/mo │ 10,000/mo │ 100,000/mo │ Unlimited │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ Support Hours │ Business │ Extended │ 24/5 │ 24/7 │
│ Response Time │ 24 hours │ 8 hours │ 4 hours │ 1 hour │
│ Phone Support │ No │ Yes │ Yes │ Priority Line │
│ Dedicated CSM │ No │ No │ Yes │ Yes │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ Onboarding │ Self-service │ Guided │ White-glove │ Custom │
│ Training │ Videos │ Live session │ On-site │ Unlimited │
│ Data Migration │ Self-service │ Assisted │ Managed │ Managed │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ Integrations │ Basic │ Standard │ All │ All + Custom │
│ API Access │ Limited │ Full │ Full │ Full + Priority │
│ Custom Reports │ No │ 3/month │ 10/month │ Unlimited │
├─────────────────┼──────────────┼──────────────┼──────────────┼──────────────────────┤
│ SLA │ 99.5% │ 99.9% │ 99.95% │ 99.99% │
│ Backup Freq. │ Daily │ Daily │ Hourly │ Real-time │
│ Data Retention │ 1 year │ 2 years │ 5 years │ 7 years │
└─────────────────┴──────────────┴──────────────┴──────────────┴──────────────────────┘
RESPONSE TIME SLA BY SEVERITY:
┌───────────────┬─────────────┬─────────────┬─────────────┬─────────────────────────────┐
│ Severity │ Starter │ Professional│ Enterprise │ Premium │
├───────────────┼─────────────┼─────────────┼─────────────┼─────────────────────────────┤
│ P1 (Critical) │ 8 hours │ 4 hours │ 1 hour │ 15 minutes │
│ P2 (High) │ 24 hours │ 8 hours │ 4 hours │ 1 hour │
│ P3 (Medium) │ 48 hours │ 24 hours │ 8 hours │ 4 hours │
│ P4 (Low) │ 5 days │ 48 hours │ 24 hours │ 8 hours │
└───────────────┴─────────────┴─────────────┴─────────────┴─────────────────────────────┘
28.7 Summary
This chapter provides complete tenant lifecycle management:
- State Machine: 8 states with defined transitions
- Onboarding Workflow: 6-phase process with checklists
- Automated Provisioning: Scripts for schema creation and setup
- Offboarding Workflow: 7-step process including GDPR compliance
- Data Export: Complete export scripts for portability
- Billing Integration: Stripe webhook handlers
- Support Tiers: 4 tiers with feature comparison
Next Chapter: Chapter 29: Claude Code Command Reference
“The beginning and end of a customer relationship deserve equal attention.”
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VII - Operations |
| Chapter | 28 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 29: Claude Code Command Reference
29.1 Complete Command Guide for POS Development
This chapter provides a comprehensive reference for all Claude Code multi-agent commands used throughout POS platform development. Use this as your quick-reference guide during implementation.
29.2 Table of Contents
- 29.3 Quick Commands
- 29.4 Workflow Commands
- 29.5 Specialized Commands
- 29.6 Command Sequences by Task
- 29.7 Best Practices
- 29.8 Command Reference Card
- 29.9 Troubleshooting Commands
- 29.10 Summary
29.3 Quick Commands
Core Development Commands
| Command | Agents Used | Purpose |
|---|---|---|
| /o <task> | Auto-selected | Smart routing - Claude figures out the best approach |
| /dev-team | Editor + Engineer | Code implementation with automatic review |
| /design-team | Demo + Stylist | UI design with accessibility validation |
| /architect-review | Architect | Architecture validation and decisions |
| /engineer | Engineer (read-only) | Code review without modifications |
| /refactor-check | Engineer | Find code quality issues and duplication |
| /research | Researcher | Deep online investigation |
| /learn | Memory | Capture discoveries for future sessions |
| /cleanup | Orchestrator | Post-task organization and documentation |
When to Use Each Command
/o <task>
Use for: General tasks where you're unsure which agent is best
Example: /o add customer search to POS
Result: Claude analyzes and routes to appropriate agents
/dev-team
Use for: Any code implementation that needs review
Example: /dev-team implement tenant middleware
Result: Editor writes code, Engineer reviews it
/design-team
Use for: UI/UX work with accessibility
Example: /design-team create checkout flow mockup
Result: Demo creates design, Stylist validates accessibility
/architect-review
Use for: Validating major decisions
Example: /architect-review event sourcing for inventory
Result: Architect evaluates and documents ADR
/engineer
Use for: Read-only code review
Example: /engineer review PaymentService.cs
Result: Feedback without code changes
/refactor-check
Use for: Finding technical debt
Example: /refactor-check src/Services/
Result: List of duplication, violations, improvements
/research
Use for: External research
Example: /research PCI-DSS 4.0 changes for retail
Result: Comprehensive research with sources
/learn
Use for: Capturing learnings
Example: /learn EF Core tenant isolation pattern
Result: Saved to memory for future sessions
/cleanup
Use for: Finishing work sessions
Example: /cleanup after implementing inventory sync
Result: Documentation updated, files organized
29.4 Workflow Commands
Standard Workflows
| Command | Stages | Total Agents |
|---|---|---|
| /workflow | Plan, Edit, Review | 3 |
| /pos-workflow | Plan, Architect, Edit, Review | 4 |
| /auto-workflow | Doc, Plan, Implement, Review, Doc | 5 |
| /design-workflow | Research, Plan, Demo, Style, Implement, Review | 6 |
Workflow Details
/workflow - Basic Development Workflow
Stage 1: Plan
Agent: Planner
Output: Implementation plan with steps
Stage 2: Edit
Agent: Editor
Output: Code implementation
Stage 3: Review
Agent: Engineer
Output: Code review feedback
Use for: Standard feature implementation
Example:
/workflow add customer loyalty points calculation
/pos-workflow - Full POS Workflow
Stage 1: Plan
Agent: Planner
Output: Detailed implementation plan
Stage 2: Architect Review
Agent: Architect
Output: Architecture validation, ADR if needed
Stage 3: Edit
Agent: Editor
Output: Code implementation
Stage 4: Review
Agent: Engineer
Output: Code review with POS-specific checks
Use for: Major POS features requiring architecture validation
Example:
/pos-workflow implement offline payment queue
/auto-workflow - Fully Automated
Stage 1: Document (Pre)
Agent: Documenter
Output: Current state documentation
Stage 2: Plan
Agent: Planner
Output: Implementation strategy
Stage 3: Implement
Agent: Editor
Output: Code changes
Stage 4: Review
Agent: Engineer
Output: Quality validation
Stage 5: Document (Post)
Agent: Documenter
Output: Updated documentation
Use for: Complete features needing full documentation
Example:
/auto-workflow implement multi-store inventory transfer
/design-workflow - UI Development
Stage 1: Research
Agent: Researcher
Output: UX patterns, accessibility requirements
Stage 2: Plan
Agent: Planner
Output: Component structure plan
Stage 3: Demo
Agent: Demo Creator
Output: Visual mockups (ASCII/text)
Stage 4: Style
Agent: Stylist
Output: Accessibility validation, WCAG compliance
Stage 5: Implement
Agent: Editor
Output: Component code
Stage 6: Review
Agent: Engineer
Output: Code review
Use for: New UI components and screens
Example:
/design-workflow create product quick-add modal
29.5 Specialized Commands
Architecture Commands
# Create new ADR
/architect-review ADR for <decision topic>
# Validate existing architecture
/architect-review validate <component> against ADRs
# Review cross-cutting concerns
/architect-review security implications of <change>
Security Commands
# Security-focused review
/engineer security review <file or feature>
# PCI-DSS compliance check
/refactor-check PCI-DSS compliance in payment flow
# Authentication/authorization review
/architect-review auth flow for <feature>
Database Commands
# Schema review
/engineer review migration <migration-name>
# Performance analysis
/refactor-check database performance in <repository>
# Data integrity check
/architect-review data model for <entity>
Testing Commands
# Generate tests
/dev-team write tests for <feature>
# Review test coverage
/engineer review test coverage for <service>
# Integration test plan
/workflow plan integration tests for <module>
29.6 Command Sequences by Task
Task 1: Adding a New API Endpoint
Scenario: Add GET /api/v1/tenants/{tenantId}/customers/search
# Step 1: Review existing patterns
/engineer review existing customer endpoints
# Step 2: Plan and implement
/dev-team add customer search endpoint with pagination
# Step 3: Validate architecture
/architect-review customer search query patterns
# Step 4: Add tests
/dev-team write tests for customer search endpoint
# Step 5: Document
/cleanup update API documentation
Expected Files Modified:
- Controllers/CustomersController.cs
- Services/ICustomerService.cs
- Services/CustomerService.cs
- Tests/CustomerControllerTests.cs
Task 2: Creating a New Domain Entity
Scenario: Add LoyaltyProgram entity with points tracking
# Step 1: Architecture review
/architect-review domain model for loyalty program
# Step 2: Create entity and events
/dev-team create LoyaltyProgram entity with domain events
# Step 3: Add repository
/dev-team implement ILoyaltyProgramRepository
# Step 4: Create migration
/dev-team add EF Core migration for loyalty_programs
# Step 5: Review everything
/engineer review loyalty program implementation
# Step 6: Capture pattern
/learn loyalty program implementation pattern
Expected Files Created:
- Domain/Entities/LoyaltyProgram.cs
- Domain/Events/LoyaltyPointsEarnedEvent.cs
- Domain/Events/LoyaltyPointsRedeemedEvent.cs
- Infrastructure/Repositories/LoyaltyProgramRepository.cs
- Migrations/YYYYMMDDHHMMSS_AddLoyaltyProgram.cs
Task 3: Implementing a Background Job
Scenario: Daily inventory snapshot job
# Step 1: Research patterns
/research background job patterns in ASP.NET Core
# Step 2: Architecture decision
/architect-review background job hosting strategy
# Step 3: Implement job
/dev-team implement daily inventory snapshot job
# Step 4: Add scheduling
/dev-team configure Hangfire scheduling for snapshot job
# Step 5: Add monitoring
/dev-team add job health checks and metrics
# Step 6: Test
/dev-team write integration tests for snapshot job
Expected Files Created:
- Jobs/InventorySnapshotJob.cs
- Jobs/IInventorySnapshotJob.cs
- Configuration/HangfireConfig.cs
- Tests/InventorySnapshotJobTests.cs
Task 4: Adding Tests
Scenario: Improve test coverage for PaymentService
# Step 1: Analyze current coverage
/engineer review test coverage for PaymentService
# Step 2: Identify gaps
/refactor-check find untested paths in PaymentService
# Step 3: Unit tests
/dev-team write unit tests for PaymentService edge cases
# Step 4: Integration tests
/dev-team write integration tests for payment flow
# Step 5: Validate
/engineer review new payment tests
Test Categories to Cover:
- Unit tests for business logic
- Integration tests for database operations
- Mock tests for external payment gateway
- Edge case tests (failures, timeouts, partial payments)
Task 5: Security Review
Scenario: Pre-deployment security audit
# Step 1: Authentication review
/engineer security review authentication flow
# Step 2: Authorization review
/architect-review RBAC implementation
# Step 3: Data protection
/refactor-check sensitive data handling
# Step 4: Input validation
/engineer review input validation in controllers
# Step 5: Dependency audit
/research security vulnerabilities in dependencies
# Step 6: Document findings
/cleanup create security review report
Security Checklist Integration: See Chapter 31: Checklists for complete security review checklist.
Task 6: UI Mockup Creation
Scenario: Design new receipt customization screen
# Step 1: Research
/research receipt customization UX patterns
# Step 2: Create mockup
/design-team create receipt customization screen mockup
# Step 3: Accessibility review
/design-team validate accessibility for receipt editor
# Step 4: Get architecture input
/architect-review receipt template storage approach
# Step 5: Implement
/dev-team implement receipt customization component
# Step 6: Review
/engineer review receipt customization implementation
Design Artifacts:
- ASCII mockup in markdown
- Component hierarchy diagram
- Accessibility checklist (WCAG 2.1 AA)
- State management plan
29.7 Best Practices
Command Selection Guidelines
| Feature Size | Recommended Command |
|---|---|
| Quick fix | /dev-team |
| Small feature | /workflow |
| Major feature | /pos-workflow |
| New UI screen | /design-workflow |
| Architecture change | /architect-review first |
| Bug investigation | /engineer then /dev-team |
| Research needed | /research then /workflow |
Chaining Commands Effectively
Good Pattern: Research then implement
/research multi-tenant caching strategies
# Read output, understand options
/architect-review caching strategy for tenant data
# Get ADR created
/dev-team implement tenant cache with Redis
Good Pattern: Review then fix
/engineer review InventoryService
# Get feedback list
/dev-team fix InventoryService issues
# Address each point
Bad Pattern: Skipping review
/dev-team implement critical payment feature
# Missing: /architect-review and /engineer review
Memory and Learning
# After solving a tricky problem
/learn how we handled concurrent inventory updates
# After making an architecture decision
/learn tenant isolation middleware pattern
# After debugging a complex issue
/learn debugging tips for offline sync conflicts
Session Management
Start of Session:
# Check what's pending
/o what's the status of POS implementation?
# Review recent changes
/engineer review changes since last session
End of Session:
# Clean up
/cleanup
# Document progress
/learn progress on <feature> implementation
29.8 Command Reference Card
Print this section for quick reference:
+------------------------------------------------------------------+
| CLAUDE CODE QUICK REFERENCE |
+------------------------------------------------------------------+
| QUICK COMMANDS |
| /o <task> - Smart routing (figures out best approach) |
| /dev-team - Code with review (Editor + Engineer) |
| /design-team - UI with accessibility (Demo + Stylist) |
| /architect-review - Architecture validation |
| /engineer - Code review only (read-only) |
| /refactor-check - Find code quality issues |
| /research - Deep investigation |
| /learn - Capture discoveries |
| /cleanup - Post-task organization |
+------------------------------------------------------------------+
| WORKFLOWS |
| /workflow - Plan -> Edit -> Review |
| /pos-workflow - Plan -> Architect -> Edit -> Review |
| /auto-workflow - Doc -> Plan -> Implement -> Review -> Doc |
| /design-workflow - Research -> Plan -> Demo -> Style -> Impl |
+------------------------------------------------------------------+
| COMMON SEQUENCES |
| New Endpoint: /engineer review -> /dev-team -> /architect-review|
| New Entity: /architect-review -> /dev-team -> /engineer |
| Security Audit: /engineer security -> /refactor-check -> /cleanup |
| UI Component: /design-workflow (all-in-one) |
+------------------------------------------------------------------+
29.9 Troubleshooting Commands
When Things Go Wrong
# Agent seems confused about context
/o reset context and continue with <task>
# Need to undo changes
/engineer review what changed
# Then git reset or manual fix
# Command not producing expected results
/o explain what /dev-team does for <task>
# Clarify expectations
# Need more detail from agent
/o expand on <specific aspect>
Getting Unstuck
# Stuck on implementation approach
/research alternatives for <problem>
/architect-review compare approaches
# Stuck on debugging
/engineer analyze error in <file>
/research common causes of <error>
# Stuck on design
/design-team brainstorm approaches for <UI problem>
29.10 Summary
| Need | Command |
|---|---|
| Write code with review | /dev-team |
| Just review code | /engineer |
| Design UI | /design-team |
| Major architecture decision | /architect-review |
| Research something | /research |
| Full feature with docs | /auto-workflow |
| Remember something | /learn |
| Finish session | /cleanup |
| Not sure what to use | /o <task> |
This reference is designed to be printed and kept nearby during development sessions.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VIII - Reference |
| Chapter | 29 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 30: Glossary
30.1 Complete A-Z Reference for POS Platform Terminology
This glossary provides definitions for all technical terms, acronyms, and domain concepts used throughout this Blueprint Book.
30.2 A
ADR (Architecture Decision Record)
A document that captures an important architectural decision along with its context and consequences. ADRs provide a historical record of why decisions were made.
Example: “ADR-001: Use schema-per-tenant for data isolation”
Affirm
Buy-now-pay-later payment integration for customer financing at POS. Allows customers to split purchases into installments while the merchant receives full payment upfront.
Aggregate
In Domain-Driven Design, a cluster of domain objects that can be treated as a single unit. An aggregate has a root entity and enforces consistency boundaries.
Example: Order is an aggregate root containing OrderLines, Payments, and Discounts.
API Gateway
A server that acts as the single entry point for all client requests. It handles request routing, composition, and protocol translation.
Audit Log
An append-only record of all significant events in the system. Used for compliance, debugging, and analytics.
POS Context: Every inventory change, transaction, and user action is logged.
30.3 B
Background Job
A task that runs asynchronously outside the main request/response cycle. Used for scheduled tasks, long-running operations, and deferred processing.
POS Context: Daily inventory snapshots, report generation, sync operations.
Barcode
A machine-readable representation of data, typically printed on product labels. Common formats include UPC-A, EAN-13, Code 128, and QR codes.
Blazor
A .NET web framework for building interactive web UIs using C# instead of JavaScript. Stanly uses Blazor Server for its admin interface.
Bounded Context
In DDD, a logical boundary within which a particular domain model is defined and applicable. Different bounded contexts may have different models for the same real-world concept.
POS Context: “Inventory” context vs “Sales” context may model products differently.
Bridge
A software component that connects two different systems. In Stanly, the Bridge connects store computers running QuickBooks POS to the central Stanly server.
BRIN Index (Block Range Index)
A PostgreSQL index type optimized for large tables with naturally ordered data (like timestamps). More compact than B-tree for sequential data.
Usage: CREATE INDEX idx_events_created ON events USING BRIN (created_at);
30.4 C
Cash Drawer
The physical drawer containing cash, typically connected to a POS terminal. Opened programmatically when cash transactions occur.
Checkout
The process of completing a sale, including scanning items, applying discounts, collecting payment, and generating a receipt.
Circuit Breaker
A design pattern that prevents cascading failures by detecting failures and stopping attempts to invoke a failing service.
Example: If payment gateway fails 5 times, stop trying for 30 seconds.
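The failure-counting and cooldown behavior described above can be sketched in a few lines. Python is used purely for illustration (the platform's C# services would more likely use a resilience library such as Polly), and the class and threshold names are hypothetical:

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures; retries after `reset_after` seconds."""

    def __init__(self, max_failures=5, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func):
        # While open and inside the cooldown window, fail fast without calling out.
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open - failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

A caller simply wraps the gateway call in `breaker.call(...)`; the breaker decides whether the call is even attempted.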
Command (CQRS)
An operation that modifies state. Commands are imperative (“CreateOrder”, “ApplyDiscount”) and may be rejected if invalid.
Connection String
A string containing information needed to connect to a database, including server, port, database name, and credentials.
Example: Host=postgres16;Port=5432;Database=pos_db;Username=pos_user;Password=xxx
CORS (Cross-Origin Resource Sharing)
A security mechanism that allows or restricts web pages from making requests to a different domain than the one serving the page.
CQRS (Command Query Responsibility Segregation)
An architectural pattern that separates read operations (queries) from write operations (commands). Allows optimizing each path independently.
Customer Display
A secondary screen facing the customer showing items being scanned, prices, and transaction totals.
30.5 D
Dead Letter Queue
A queue where messages that cannot be processed are sent for later analysis. Prevents message loss and enables debugging.
Dependency Injection (DI)
A technique where objects receive their dependencies from external sources rather than creating them internally. Promotes loose coupling and testability.
Discrepancy
A difference between expected and actual values. In inventory, the difference between the system quantity and the physical count.
Discount Calculation
The process of applying percentage or fixed-amount discounts to line items, with rules for stacking, priority, and exclusions. The POS platform supports automatic (rule-based), manual (cashier-applied), and coupon-based discounts.
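A minimal sketch of priority-ordered discount application, in illustrative Python. The stacking rule shown (a non-stackable discount stops further discounts) is an assumption made for the example, not the platform's authoritative rule set:

```python
from decimal import Decimal, ROUND_HALF_UP

def apply_discounts(price, discounts):
    """Apply discounts in priority order (lowest number first).

    Each discount is a dict with 'priority', 'kind' ('percent' or 'fixed'),
    'value', and 'stackable'.  Once a non-stackable discount is applied,
    later discounts are skipped.
    """
    total = Decimal(price)
    for d in sorted(discounts, key=lambda d: d["priority"]):
        if d["kind"] == "percent":
            total -= total * Decimal(d["value"]) / 100
        else:
            total -= Decimal(d["value"])
        if not d["stackable"]:
            break
    total = max(total, Decimal("0"))  # never discount below zero
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Note the use of Decimal rather than float: money arithmetic at the POS must be exact.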
Docker
A platform for developing, shipping, and running applications in containers. Provides consistent environments across development and production.
Docker Compose
A tool for defining and running multi-container Docker applications using YAML configuration files.
Domain Event
A record that something significant happened in the domain. Events are named in past tense (“OrderCreated”, “PaymentReceived”).
Domain-Driven Design (DDD)
An approach to software development that focuses on modeling the business domain and using a ubiquitous language shared by developers and domain experts.
DTO (Data Transfer Object)
An object that carries data between processes. DTOs are simple containers with no business logic.
30.6 E
EF Core (Entity Framework Core)
Microsoft’s object-relational mapper (ORM) for .NET. Maps database tables to C# classes and handles CRUD operations.
EMV
A global standard for chip-based credit and debit card transactions. Named after Europay, Mastercard, and Visa.
Entity
In DDD, an object defined by its identity rather than its attributes. Entities have a unique identifier that persists over time.
Example: A Customer is an entity - even if their name changes, they’re still the same customer.
Event Sourcing
A pattern where state is stored as a sequence of events rather than current values. The current state is derived by replaying all events.
Benefit: Complete audit trail, ability to reconstruct any historical state.
Event Store
A database optimized for storing and retrieving events. Provides append-only storage and efficient event streaming.
30.7 F
Failover
The automatic switching to a backup system when the primary system fails.
Fiscal Printer
A specialized printer that generates legally compliant receipts with tax calculations. Required in some jurisdictions.
Fitness Function
An automated test that verifies architectural characteristics (performance, security, scalability) are maintained as the system evolves.
Example: “API response time must be < 200ms for 95th percentile”
Flyway
A database migration tool that manages schema versioning and applies migrations in order.
Fulfillment
The process of preparing and shipping an order to the customer.
30.8 G
GDPR (General Data Protection Regulation)
EU regulation on data protection and privacy. Requires consent for data collection, right to deletion, and data portability.
Gift Card
A prepaid stored-value card issued by a retailer. Can be physical or digital.
gRPC
A high-performance RPC framework using Protocol Buffers. Alternative to REST for service-to-service communication.
30.9 H
Hangfire
A .NET library for running background jobs. Provides scheduling, retry logic, and a dashboard.
Hardware Security Module (HSM)
A physical device that safeguards cryptographic keys. Used for PCI-DSS compliance.
Heartbeat
A periodic signal sent to indicate a system is alive and functioning. In Stanly, bridges send heartbeats every 60 seconds.
Horizontal Scaling
Adding more machines to handle increased load. Contrast with vertical scaling (adding resources to existing machines).
Hot Path
The code path executed for the most common operations. Must be optimized for performance.
30.10 I
Idempotency
The property where an operation produces the same result regardless of how many times it’s executed. Critical for retry logic.
Example: Creating an order with idempotency key ensures duplicates aren’t created on retry.
Idempotency Key
A unique identifier included with requests to enable idempotent operations.
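A toy sketch of how an idempotency key makes a create operation safe to retry; Python for illustration, all names hypothetical:

```python
class IdempotentOrderService:
    """Caches the first response per idempotency key so retries replay it unchanged."""

    def __init__(self):
        self._seen = {}    # idempotency key -> stored response
        self._orders = []  # the actual created orders

    def create_order(self, key, payload):
        if key in self._seen:
            # Retry with the same key: return the original response,
            # do NOT create a second order.
            return self._seen[key]
        self._orders.append(payload)
        response = {"order_id": len(self._orders), "status": "created"}
        self._seen[key] = response
        return response
```

In production the key-to-response map lives in the database (or a cache with a TTL), inside the same transaction as the order insert.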
Index (Database)
A data structure that improves query performance by providing quick lookup paths. Types include B-tree, BRIN, GIN, and GiST.
Integration Test
A test that verifies multiple components work together correctly. Tests real database, real services.
Inventory
The quantity and value of goods available for sale. Tracked by SKU and location.
30.11 J
Job
See Background Job.
JSON Web Token
See JWT.
JWT (JSON Web Token)
A compact, URL-safe means of representing claims between parties. Used for authentication in APIs.
Structure: Header.Payload.Signature (base64 encoded)
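The Header.Payload.Signature structure can be demonstrated by assembling an HS256 token by hand (an illustrative Python sketch; production code should use a maintained JWT library):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    # HS256 signature: HMAC-SHA256 over "header.payload"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

Splitting the result on "." yields exactly the three base64url segments described above; the payload segment decodes back to the original claims.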
30.12 K
Kiosk Mode
A locked-down interface mode where users can only access specific application features. Prevents tampering with settings.
30.13 L
Layaway
A payment plan where items are reserved and paid for over time before being picked up.
Load Balancer
A device or software that distributes network traffic across multiple servers.
Location
A physical place where inventory is stored and/or sold. Each location has separate inventory counts.
Stanly Locations: HQ, GM, HM, LM, NM
Logging
Recording application events for debugging, monitoring, and audit purposes.
30.14 M
Materialized View
A database view that stores query results physically. Faster to query but must be refreshed when source data changes.
Microservice
An architectural style where applications are composed of small, independent services that communicate over a network.
Middleware
Software that sits between the application and the network/OS, handling cross-cutting concerns like authentication, logging, and error handling.
Migration (Database)
A version-controlled change to database schema. Applied in order to evolve the database structure.
Multi-Tenancy
An architecture where a single instance of software serves multiple customers (tenants), with data isolation between them.
Strategies: Shared database, schema-per-tenant, database-per-tenant.
30.15 N
N+1 Query Problem
A performance anti-pattern where code executes N additional queries to fetch related data for N items. Solved with eager loading or batch queries.
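The anti-pattern and its fix are easiest to see with a fake database that counts queries (illustrative Python; in EF Core the equivalent fix is eager loading with Include or an explicit batched query):

```python
class FakeDb:
    """In-memory stand-in for a database that counts executed queries."""

    def __init__(self, lines):
        self.lines = lines  # order_id -> list of line dicts
        self.query_count = 0

    def get_orders(self, order_ids):
        self.query_count += 1
        return [{"id": oid} for oid in order_ids]

    def get_lines(self, order_id):
        self.query_count += 1
        return self.lines.get(order_id, [])

    def get_lines_in(self, order_ids):
        self.query_count += 1  # a single WHERE order_id IN (...) query
        return {oid: self.lines.get(oid, []) for oid in order_ids}

def fetch_naive(db, ids):
    orders = db.get_orders(ids)             # 1 query
    for o in orders:
        o["lines"] = db.get_lines(o["id"])  # +N queries: the N+1 problem
    return orders

def fetch_batched(db, ids):
    orders = db.get_orders(ids)             # 1 query
    lines = db.get_lines_in(ids)            # +1 batched query, total = 2
    for o in orders:
        o["lines"] = lines[o["id"]]
    return orders
```

For N orders the naive version issues N+1 queries while the batched version always issues 2.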
30.16 O
OAuth 2.0
An authorization framework that enables third-party applications to obtain limited access to user accounts.
Offline-First
A design approach where applications work without network connectivity and sync when connection is available.
ORM (Object-Relational Mapping)
A technique for converting data between incompatible type systems in object-oriented programming languages and relational databases.
Outbox Pattern
A pattern for reliable message publishing where messages are saved to a database table (outbox) before being published to a message broker.
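A minimal in-memory sketch of the pattern (illustrative Python; in the real system the outbox is a database table written in the same transaction as the business data, then drained by a background relay):

```python
class OrderStore:
    """Events are committed alongside the business data, then relayed separately."""

    def __init__(self):
        self.orders = []
        self.outbox = []     # unpublished messages
        self.published = []  # stands in for the message broker

    def create_order(self, order):
        # In a real system both writes happen in ONE database transaction,
        # so an event can never be lost between "saved" and "published".
        self.orders.append(order)
        self.outbox.append({"type": "OrderCreated", "order_id": order["id"]})

    def relay(self):
        # A background job drains the outbox to the broker, marking rows sent.
        while self.outbox:
            self.published.append(self.outbox.pop(0))
```

The key property: if the process crashes after `create_order` but before `relay`, the event is still in the outbox and will be published on the next relay run.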
30.17 P
Pagination
Dividing large result sets into smaller pages for display and transmission.
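A minimal offset-based pagination helper, sketched in Python for illustration:

```python
def paginate(items, page, page_size):
    """Classic offset pagination: return one page plus metadata for the client."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "page": page,
        "page_size": page_size,
        "total": total,
        "total_pages": -(-total // page_size),  # ceiling division
        "items": items[start:start + page_size],
    }
```

For very large tables, keyset (cursor) pagination on an indexed column is usually preferred over large OFFSET values, which the database must still scan past.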
Partitioning
Dividing a database table into smaller, more manageable pieces while maintaining a single logical table.
Types: Range partitioning (by date), list partitioning (by tenant).
Payment Gateway
A service that authorizes credit card payments and transfers funds.
PCI-DSS (Payment Card Industry Data Security Standard)
A set of security standards for organizations that handle credit card data.
Key Requirements: Network security, cardholder data protection, vulnerability management, access control, monitoring, security policy.
PLU (Price Look-Up)
A four- or five-digit number assigned to produce items for checkout identification.
POS (Point of Sale)
The place and system where a retail transaction is completed. Includes hardware (terminal, scanner, printer) and software.
PostgreSQL
An open-source relational database known for robustness, extensibility, and standards compliance.
Projection
In event sourcing, a read model built by processing events. Optimized for specific query patterns.
30.18 Q
QBXML
QuickBooks’ XML-based API format for communicating with QuickBooks Point of Sale.
Query (CQRS)
An operation that returns data without modifying state. Queries can be optimized independently from commands.
Queue
A data structure that holds items in order. Used for background jobs, message passing, and load leveling.
30.19 R
RBAC (Role-Based Access Control)
An access control method where permissions are assigned to roles, and roles are assigned to users.
POS Roles: SuperAdmin, TenantAdmin, Manager, Cashier, Auditor.
Read Replica
A database copy that handles read queries, reducing load on the primary database.
Receipt
A document acknowledging a transaction. Can be printed, emailed, or displayed digitally.
Reconciliation
The process of comparing two sets of records to ensure they match. Used for inventory and financial data.
Refund
A return of payment to a customer, typically for returned merchandise.
Repository Pattern
A design pattern that provides an abstraction layer between the domain and data mapping layers.
REST (Representational State Transfer)
An architectural style for web services using HTTP methods (GET, POST, PUT, DELETE) to operate on resources.
Retry Policy
A strategy for automatically retrying failed operations with configurable delays and limits.
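A sketch of a retry policy with exponential backoff (illustrative Python; on .NET this would typically be expressed as a library policy, e.g. with Polly). The `sleep` parameter is injectable so the delays can be observed in tests:

```python
import time

def retry(func, max_attempts=3, base_delay=0.1, sleep=time.sleep):
    """Retry `func` with exponential backoff: base_delay, 2x, 4x, ..."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # attempt limit reached: give up and surface the error
            sleep(base_delay * 2 ** (attempt - 1))
```

Real policies usually add jitter to the delays and retry only transient error types, not every exception.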
RFID (Radio-Frequency Identification)
Technology using radio waves to read/write data on tags attached to inventory items. Used for counting operations via the Raptag mobile application. In the POS platform, RFID is scoped to counting only (no lifecycle tracking). See Chapter 08 Section 5.16 for the complete RFID specification.
Row-Level Security (RLS)
A PostgreSQL feature that restricts which rows a user can access based on policies.
30.20 S
SaaS (Software as a Service)
A software distribution model where applications are hosted centrally and accessed via the internet.
Saga
A pattern for managing distributed transactions by defining a sequence of local transactions with compensating actions for rollback.
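The compensating-action idea can be sketched as a list of (action, compensation) pairs; an illustrative Python example with hypothetical step names:

```python
def run_saga(steps):
    """Run (action, compensation) pairs; on failure, undo completed steps in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            # Roll back everything that already committed, newest first.
            for undo in reversed(done):
                undo()
            raise
```

For an order saga this might be reserve-inventory / charge-card / ship: if shipping fails, the card is refunded and the inventory unreserved, each by its own local transaction.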
Schema
The structure of a database including tables, columns, types, and relationships.
Schema-Per-Tenant
A multi-tenancy strategy where each tenant has their own database schema within a shared database.
SDK (Software Development Kit)
A collection of tools and libraries for building applications for a specific platform.
Seeding
Populating a database with initial data required for the application to function.
Serilog
A .NET logging library with structured logging capabilities.
Service Bus
A messaging infrastructure that enables asynchronous communication between services.
Session
A server-side storage mechanism that maintains state across multiple requests from the same client.
Sharding
Distributing data across multiple databases based on a shard key (like tenant ID).
SignalR
A .NET library for adding real-time web functionality using WebSockets.
SKU (Stock Keeping Unit)
A unique identifier for a distinct product. Used for inventory tracking and sales analysis.
Example: “NXP0323” identifies a specific product variant.
Snapshot
A point-in-time copy of data. Used in event sourcing to avoid replaying all events.
Soft Delete
Marking records as deleted without physically removing them. Enables recovery and audit.
Implementation: is_deleted boolean column, excluded from normal queries.
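A toy repository showing the flag-and-filter behavior (illustrative Python; in EF Core this is typically a global query filter on the is_deleted column):

```python
class SoftDeleteRepo:
    """Rows are flagged rather than removed; normal queries filter the flag out."""

    def __init__(self):
        self.rows = []

    def add(self, row):
        self.rows.append({**row, "is_deleted": False})

    def delete(self, row_id):
        for row in self.rows:
            if row["id"] == row_id:
                row["is_deleted"] = True  # flag instead of a physical DELETE

    def all(self, include_deleted=False):
        # Normal reads exclude flagged rows; audits can opt in to see them.
        return [r for r in self.rows if include_deleted or not r["is_deleted"]]
```

Recovery is then just clearing the flag, and the audit trail keeps the full history.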
Split Payment
A transaction where payment is made using multiple payment methods (e.g., $50 cash + $30 credit card).
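The arithmetic of settling a split-tender transaction, sketched in Python for illustration:

```python
from decimal import Decimal

def settle(total, tenders):
    """Allocate tenders against a total; return (remaining_due, change_due)."""
    paid = sum(Decimal(t) for t in tenders)
    total = Decimal(total)
    remaining = max(total - paid, Decimal("0"))  # still owed by the customer
    change = max(paid - total, Decimal("0"))     # owed back to the customer
    return remaining, change
```

At most one of the two results is nonzero: either the basket is not yet fully tendered, or the final tender overshot and change is due.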
Swagger/OpenAPI
A specification for describing REST APIs. Enables automatic documentation and client generation.
30.21 T
Tailscale
A VPN service using WireGuard that creates secure mesh networks. Used for connecting store bridges to central Stanly.
Tenant
A customer organization in a multi-tenant system. Each tenant’s data is isolated from others.
Tenant ID
A unique identifier for a tenant, typically a UUID. Used to scope all data and operations.
Token
A piece of data representing identity or authorization. See JWT.
Transaction
In databases, a unit of work that is atomic (all or nothing). In retail, a sale or return event.
30.22 U
Ubiquitous Language
In DDD, a common language shared by developers and domain experts, used in code and conversations.
Unit of Work
A pattern that maintains a list of objects affected by a business transaction and coordinates writing out changes.
Unit Test
A test that verifies a single unit of code (function, method) in isolation.
UPC (Universal Product Code)
A barcode symbology used for tracking items in stores. 12-digit format in North America.
UUID (Universally Unique Identifier)
A 128-bit identifier that is unique across space and time. Format: 550e8400-e29b-41d4-a716-446655440000
30.23 V
Value Object
In DDD, an object defined by its attributes rather than identity. Two value objects with the same attributes are equal.
Example: Money(100, "USD") is a value object.
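In Python the same idea is naturally expressed as a frozen dataclass (illustrative; in C# this would be a record or an immutable class with value-based equality):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class Money:
    """A value object: immutable, and equal whenever amount and currency match."""
    amount: Decimal
    currency: str

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            raise ValueError("currency mismatch")
        # Operations return NEW values; the originals are never mutated.
        return Money(self.amount + other.amount, self.currency)
```

Because equality is structural, two Money(100, "USD") instances are interchangeable, which is exactly the defining property of a value object.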
Vault
A system for managing secrets (passwords, API keys, certificates). Examples: HashiCorp Vault, Azure Key Vault.
Vertical Scaling
Adding resources (CPU, RAM) to existing machines. Contrast with Horizontal Scaling.
View (Database)
A virtual table based on a SQL query. Can simplify complex queries and provide security.
Void
Canceling a transaction before it’s completed or settled.
VPN (Virtual Private Network)
A secure connection between networks over the internet. See Tailscale.
30.24 W
WebSocket
A protocol providing full-duplex communication over a single TCP connection. Used for real-time features.
Webhook
An HTTP callback that occurs when something happens. A way for apps to receive real-time notifications.
30.25 X
XSS (Cross-Site Scripting)
A security vulnerability where attackers inject malicious scripts into web pages viewed by other users.
xUnit
A .NET unit testing framework.
30.26 Y
YAML (YAML Ain’t Markup Language)
A human-readable data serialization format used for configuration files (docker-compose.yml, etc.).
30.27 Z
Zero Downtime Deployment
Deploying new versions without any interruption to users. Achieved through rolling updates, blue-green deployments, or canary releases.
30.28 Domain-Specific Terms (Retail/POS)
| Term | Definition |
|---|---|
| Basket | Collection of items a customer intends to purchase |
| Cash Float | Starting cash amount in the register at beginning of shift |
| Cash Up | End-of-day process of counting cash and reconciling with sales |
| Clerk | Employee operating the POS terminal |
| Comp | Complimentary item given free to customer |
| EoD | End of Day - daily closing procedures |
| House Account | Credit account for regular customers |
| Layaway | Payment plan where items are reserved until fully paid |
| Markdown | Price reduction on items |
| No Sale | Opening cash drawer without a transaction |
| On Hand | Current inventory quantity |
| Open Ticket | Transaction started but not completed |
| Over/Short | Difference between expected and actual cash |
| PLU | Price Look-Up code, typically used for produce and bulk items |
| Rain Check | Promise to sell at sale price when item is restocked |
| Shrinkage | Inventory loss due to theft, damage, or errors |
| SKU | Stock Keeping Unit - unique product identifier |
| Tender | Payment method (cash, credit, etc.) |
| Till | Cash drawer |
| Void | Cancel a line item or entire transaction |
| X-Read | Mid-day sales report without resetting totals |
| Z-Read | End-of-day report that resets totals |
Use Ctrl+F (or Cmd+F on Mac) to quickly find terms in this glossary.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VIII - Reference |
| Chapter | 30 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 31: Checklists
31.1 Ready-to-Use Checklists for POS Development and Operations
This chapter provides comprehensive checklists for common development, deployment, and operational tasks. Print these and use them to ensure nothing is missed.
31.2 Table of Contents
- New Feature Checklist
- Code Review Checklist
- Security Review Checklist
- API Endpoint Checklist
- Database Migration Checklist
- Deployment Checklist
- Go-Live Checklist
- Tenant Onboarding Checklist
- End-of-Day Checklist
- PCI-DSS Audit Checklist
31.3 New Feature Checklist
Use this checklist when implementing any new feature in the POS platform.
Planning Phase
- Requirements documented - Clear acceptance criteria defined
- Architecture review - `/architect-review` completed for major features
- ADR created - If architectural decision was made
- Database schema designed - Entity models and relationships defined
- API contracts defined - Endpoints, request/response formats documented
- UI mockups approved - For features with user interface changes
- Tenant impact assessed - How does this affect multi-tenancy?
Implementation Phase
- Domain entities created - Following DDD patterns
- Domain events defined - Named in past tense, all relevant events
- Repository interfaces - Abstraction layer defined
- Repository implementations - EF Core implementations
- Service interfaces - Business logic abstraction
- Service implementations - Business logic with proper logging
- API controllers - RESTful endpoints with proper authorization
- DTOs created - Request/response models separate from domain
- Validation added - FluentValidation or DataAnnotations
- Error handling - Proper exception handling and responses
Testing Phase
- Unit tests written - Cover all business logic branches
- Integration tests written - Test with real database
- API tests written - Verify endpoint contracts
- Tenant isolation tested - Verify data doesn’t leak
- Edge cases covered - Null values, empty lists, max values
- Error scenarios tested - Verify proper error responses
Documentation Phase
- API documentation updated - Swagger annotations complete
- README updated - If setup/configuration changed
- CHANGELOG updated - Feature listed with version
- User documentation - If user-facing feature
Final Review
- Code review completed - `/engineer review` passed
- Security review - No vulnerabilities introduced
- Performance verified - No N+1 queries, proper indexing
- Migrations tested - Applied and rolled back successfully
31.4 Code Review Checklist
Use this checklist when reviewing code (or when preparing code for review).
General Quality
- Code compiles - No build errors or warnings
- Tests pass - All existing and new tests green
- No dead code - Unused variables, methods removed
- No commented-out code - Remove or document why
- Meaningful names - Variables, methods, classes have clear names
- Small methods - Functions do one thing well
- Proper indentation - Consistent formatting
Architecture & Design
- Single Responsibility - Each class has one reason to change
- Dependency Injection - No `new` for services, use DI
- Interface segregation - No fat interfaces
- Proper layering - Controllers don’t contain business logic
- Repository pattern - Data access abstracted
- No circular dependencies - Clean dependency graph
Error Handling
- Exceptions caught appropriately - Not swallowing exceptions
- Meaningful error messages - Users and developers can understand
- Logging on errors - Stack traces logged for debugging
- Graceful degradation - Feature fails safely
Security
- Authorization checked - Proper `[Authorize]` attributes
- Input validated - All user input sanitized
- No SQL injection - Parameterized queries only
- No XSS vulnerabilities - Output encoded
- Secrets not hardcoded - Use configuration/vault
- Tenant isolation - Data scoped to tenant
Performance
- No N+1 queries - Use Include/eager loading
- Proper async/await - No blocking calls
- Appropriate caching - Frequently accessed data cached
- Database indexes - Queries use indexes
- No memory leaks - Disposable objects disposed
Documentation
- XML comments on public APIs - Summary, params, returns
- Complex logic documented - Why, not just what
- TODO items tracked - Linked to issues if deferred
31.5 Security Review Checklist
Use this checklist before deploying features that handle sensitive data.
Authentication
- Strong password policy - Minimum length, complexity
- Password hashing - bcrypt/Argon2, not MD5/SHA1
- Account lockout - After failed attempts
- Session management - Proper timeout, secure cookies
- Multi-factor authentication - For admin accounts
- JWT properly validated - Signature, expiration, issuer
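For the last item, a minimal JWT validation setup might look like the following (a sketch assuming the standard `Microsoft.AspNetCore.Authentication.JwtBearer` package; the `Jwt:*` configuration keys are placeholders, not the platform's actual settings):

```csharp
using System;
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            // The three checks from the checklist: signature, expiration, issuer
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!)),
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromMinutes(1), // limit expiry slack
            ValidateIssuer = true,
            ValidIssuer = builder.Configuration["Jwt:Issuer"],
            ValidateAudience = true,
            ValidAudience = builder.Configuration["Jwt:Audience"]
        };
    });
```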
Authorization
- Role-based access - RBAC properly implemented
- Least privilege - Users have minimum necessary permissions
- Authorization on all endpoints - No unprotected APIs
- Tenant isolation enforced - Users can’t access other tenants
- Resource ownership verified - Users can only modify own resources
Data Protection
- Sensitive data encrypted - At rest and in transit
- TLS/HTTPS enforced - No plaintext transmission
- PII minimized - Only collect what’s necessary
- Data retention policy - Old data purged
- Backup encryption - Backups are encrypted
Input Validation
- All input validated - Type, length, format
- Whitelist validation - Accept known good
- SQL injection prevented - Parameterized queries
- XSS prevented - Output encoding
- CSRF protection - Anti-forgery tokens
- File upload restrictions - Type, size limits
Logging & Monitoring
- Security events logged - Login, logout, failures
- PII not logged - Passwords, card numbers excluded
- Log integrity - Logs protected from tampering
- Alerting configured - Suspicious activity triggers alerts
- Audit trail - Who did what, when
Infrastructure
- Firewall configured - Only necessary ports open
- Dependencies updated - No known vulnerabilities
- Secrets in vault - Not in code or config files
- Container hardened - Non-root user, minimal image
- Network segmentation - Database not publicly accessible
31.6 API Endpoint Checklist
Use this checklist when adding or modifying API endpoints.
Design
- RESTful naming - Resource-based URLs
- Proper HTTP methods - GET, POST, PUT, DELETE used correctly
- Versioning - `/api/v1/` prefix
- Consistent naming - camelCase, plural resources
- Pagination - Large collections paginated
- Filtering/sorting - Query parameters for flexibility
Implementation
- Controller attribute - `[ApiController]` applied
- Route attribute - Explicit routes defined
- Authorization - `[Authorize]` with roles/policies
- Model binding - `[FromBody]`, `[FromQuery]` specified
- Validation - `ModelState` checked or auto-validation
- Response types - `[ProducesResponseType]` specified
- Cancellation token - Async methods accept token
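Put together, a controller skeleton satisfying the implementation items above might look like this (a hedged sketch; the route, DTO, and service names are illustrative, not the platform's actual types):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/v1/products")]
[Authorize(Roles = "Manager")]
public class ProductsController : ControllerBase
{
    private readonly IProductService _products; // injected via DI, never new'd

    public ProductsController(IProductService products) => _products = products;

    [HttpGet("{sku}")]
    [ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
    [ProducesResponseType(StatusCodes.Status404NotFound)]
    public async Task<IActionResult> GetBySku(
        [FromRoute] string sku, CancellationToken cancellationToken)
    {
        var product = await _products.GetBySkuAsync(sku, cancellationToken);
        return product is null ? NotFound() : Ok(product);
    }
}
```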
Request Handling
- Input validation - All inputs validated
- Idempotency - POST/PUT are idempotent where needed
- Rate limiting - Appropriate limits configured
- Request logging - Requests logged (excluding sensitive data)
- Content negotiation - Accept header respected
Response Format
- Consistent structure - Standard envelope if used
- Proper status codes - 200, 201, 400, 401, 403, 404, 500
- Error format - Standard error response structure
- No sensitive data - Passwords, tokens not in responses
- Proper content type - application/json
Documentation
- Swagger annotations - Summary, description, examples
- Request examples - Sample payloads documented
- Response examples - Success and error responses
- Authentication documented - How to authenticate
31.7 Database Migration Checklist
Use this checklist when creating and applying database migrations.
Before Creating Migration
- Schema reviewed - Changes discussed with team
- Backwards compatible - Can roll back if needed
- Data preservation - Existing data won’t be lost
- Performance impact - Large table changes planned
- Index strategy - New indexes identified
Creating Migration
- Meaningful name - Descriptive migration name
- Single responsibility - One logical change per migration
- Up and Down - Both directions implemented
- Idempotent - Can run multiple times safely
- Data migration - If data transformation needed
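As a reminder of the "Up and Down" item, an EF Core migration with both directions implemented looks roughly like this (table and column names are illustrative):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddLoyaltyPointsToCustomer : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<int>(
            name: "loyalty_points",
            table: "customers",
            nullable: false,
            defaultValue: 0); // default value preserves existing rows
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Rollback path: remove the column again
        migrationBuilder.DropColumn(
            name: "loyalty_points",
            table: "customers");
    }
}
```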
Testing Migration
- Local test - Applied to local database
- Rollback tested - Down migration works
- Data verified - Existing data intact
- Performance tested - Large tables migrate acceptably
- All tenants tested - Works for all tenant schemas
Deploying Migration
- Backup taken - Database backed up before migration
- Maintenance window - Users notified if downtime
- Migration logged - Record of when applied
- Verification query - Confirm migration successful
- Rollback plan - Know how to undo if problems
After Migration
- Application tested - Features work with new schema
- Performance checked - No query regressions
- Monitoring reviewed - No errors in logs
- Documentation updated - Schema docs reflect changes
31.8 Deployment Checklist
Use this checklist for every deployment to staging or production.
Pre-Deployment
- All tests passing - CI pipeline green
- Code reviewed - All changes approved
- Security scan - No new vulnerabilities
- Dependencies updated - If applicable
- CHANGELOG updated - Version and changes documented
- Rollback plan - Know how to revert if issues
Environment Preparation
- Configuration updated - Environment variables set
- Secrets rotated - If scheduled rotation
- Database migrations - Applied before deployment
- Feature flags - New features disabled initially
- Monitoring ready - Dashboards and alerts configured
Deployment Steps
- Notify stakeholders - Team aware of deployment
- Health check ready - Endpoint to verify deployment
- Deploy to staging first - Verify in staging environment
- Smoke tests passed - Critical paths work
- Deploy to production - Rolling update or blue-green
- Health check verified - All instances healthy
Post-Deployment
- Smoke tests in production - Critical paths verified
- Monitoring checked - No errors, performance normal
- User validation - Key users confirm functionality
- Deployment logged - Record version, time, deployer
- Documentation updated - If operational changes
If Problems Occur
- Assess impact - How many users affected?
- Decide rollback - Roll back or fix forward?
- Execute rollback - If decided, roll back quickly
- Notify stakeholders - Communicate status
- Root cause analysis - Document what went wrong
31.9 Go-Live Checklist
Use this comprehensive checklist before launching the POS system for a new tenant.
Infrastructure
- Production environment ready - All containers running
- Database provisioned - Tenant schema created
- SSL certificates - Valid and not expiring soon
- DNS configured - Custom domain if applicable
- Load balancer - Configured and tested
- Backup system - Automated backups running
- Disaster recovery - Tested and documented
Security
- Security audit complete - No critical findings
- PCI-DSS compliance - If handling cards
- Penetration testing - Completed without issues
- Access controls - Proper roles configured
- Secrets secured - In vault, not in code
Data
- Data migrated - From legacy system if applicable
- Data validated - Migrated data is correct
- Seed data - Default settings configured
- Test data removed - No test records in production
Integration
- Payment gateway - Connected and tested
- Shopify integration - If applicable, syncing
- QuickBooks integration - If applicable, bridges connected
- Email service - Transactional emails working
- SMS service - If applicable, verified
Training
- Admin training - Tenant admins trained
- Staff training - Cashiers trained
- Documentation - User guides available
- Support process - Help desk configured
Operational
- Monitoring active - All dashboards live
- Alerting configured - On-call schedule set
- Support team ready - Staff available for issues
- Escalation path - Know who to call for critical issues
Final Verification
- End-to-end test - Complete transaction flow
- Offline mode tested - Works without internet
- Receipt printing - Printers configured
- Cash drawer - Opens correctly
- Reports - Generate correctly
- Stakeholder sign-off - Approval to go live
31.10 Tenant Onboarding Checklist
Use this checklist when setting up a new tenant in the POS platform.
Account Setup
- Tenant record created - In system database
- Tenant ID generated - UUID assigned
- Tenant schema created - Database schema provisioned
- Admin user created - Initial admin account
- Password sent securely - Not in plain email
Configuration
- Business information - Name, address, tax ID
- Timezone configured - Correct timezone set
- Currency configured - Default currency set
- Tax rates - Local tax rates configured
- Receipt template - Customized with logo
- Email templates - Customized branding
Locations
- Locations created - All store locations added
- Location settings - Hours, addresses configured
- Inventory locations - Mapped to physical areas
- Fulfillment settings - Shipping from locations
Users
- User accounts created - All staff accounts
- Roles assigned - Proper permissions
- PIN codes set - For quick clock-in
- Training scheduled - Users know how to use system
Hardware
- POS terminals - Configured and tested
- Receipt printers - Installed and tested
- Barcode scanners - Connected and working
- Cash drawers - Opening on command
- Customer displays - If applicable, configured
Inventory
- Categories created - Product categories set up
- Products imported - From spreadsheet or legacy system
- Barcodes mapped - SKUs linked to barcodes
- Initial counts - Starting inventory recorded
- Pricing verified - All prices correct
Payments
- Payment methods - Cash, card, etc. enabled
- Payment gateway - Connected to tenant’s account
- Refund policy - Configured in system
- Gift cards - If applicable, enabled
Testing
- Test transaction - Complete sale end-to-end
- Test refund - Return processed correctly
- Test receipt - Prints correctly
- Test reports - Generate correctly
- Test sync - Data syncs to cloud
Final Steps
- Go-live date set - Scheduled with tenant
- Support contact - Tenant knows how to get help
- Documentation shared - User guides provided
- Billing configured - Subscription set up
31.11 End-of-Day Checklist
Use this checklist for daily store closing procedures.
Register Closure
- No open tickets - All pending transactions completed
- Z-report generated - End of day report printed
- Cash counted - Physical cash counted
- Over/short recorded - Discrepancy documented
- Cash deposited - Taken to safe or bank
Reconciliation
- Credit card batch - Batch closed and settled
- Gift card balance - Reconciled with system
- Returns verified - All returns have receipts
- Voids reviewed - Manager approval on voids
- Discounts reviewed - All discounts authorized
Inventory
- Received inventory - All receipts processed
- Transfers complete - Inter-store transfers logged
- Damaged items - Recorded in system
- Low stock noted - Reorder list generated
Equipment
- Registers logged out - All users signed out
- Printers - Paper refilled if needed
- Scanners - Charging if wireless
- Terminals - Shut down or locked
Security
- Safe locked - All valuables secured
- Doors locked - All entrances secured
- Alarm set - Security system armed
- Lights - Appropriate lights on/off
Data Backup
- Sync completed - All data uploaded to cloud
- Local backup - If offline backup required
- Verify sync - Confirm data in cloud dashboard
Manager Sign-Off
- Reports reviewed - Day’s performance checked
- Issues logged - Any problems documented
- Next day prep - Opening tasks noted
- Shift closed - System day closed
31.12 PCI-DSS Audit Checklist
Use this checklist to verify PCI-DSS compliance requirements.
Requirement 1: Network Security
- Firewall installed - Protecting cardholder data
- Default passwords changed - No vendor defaults
- Network segmentation - CDE isolated from other networks
- Firewall rules documented - All rules justified
- Inbound/outbound restricted - Minimal access
Requirement 2: Secure Configuration
- Hardening standards - Systems hardened
- Unnecessary services disabled - Minimal attack surface
- Security parameters - Properly configured
- One function per server - Where possible
- Non-console admin encrypted - SSH, TLS for admin
Requirement 3: Protect Stored Data
- Cardholder data minimized - Only store what’s needed
- PAN masked - Display only last 4 digits
- PAN encrypted - If stored (avoid if possible)
- Encryption keys managed - Secure key management
- Sensitive auth data - Not stored after authorization
Requirement 4: Encrypt Transmission
- TLS 1.2+ - For all cardholder data transmission
- Certificates valid - Not expired, trusted CA
- No fallback - Insecure protocols disabled
- Wireless encryption - WPA2/WPA3 for WiFi
Requirement 5: Anti-Malware
- Antivirus deployed - On all systems
- Signatures updated - Automatic updates
- Scans scheduled - Regular scans running
- Logs reviewed - Alerts investigated
Requirement 6: Secure Development
- Secure SDLC - Security in development lifecycle
- Code review - All changes reviewed
- Vulnerability testing - Regular security testing
- Patches applied - Critical patches within 30 days
- Change management - Formal change process
Requirement 7: Access Control
- Need to know - Access based on job function
- Access approval - Documented authorization
- Default deny - Unless explicitly allowed
- Privileged access limited - Minimal admin accounts
Requirement 8: User Identification
- Unique IDs - Each user has unique account
- Strong passwords - Complexity requirements
- MFA for remote - Two-factor for remote access
- Account lockout - After failed attempts
- Session timeout - Idle sessions terminated
Requirement 9: Physical Security
- Physical access controlled - To systems with card data
- Visitor procedures - Logged, escorted
- Media handling - Secure storage and destruction
- POS terminal security - Protected from tampering
Requirement 10: Logging & Monitoring
- Audit logs enabled - All access to card data
- Log integrity - Protected from modification
- Time synchronization - All systems synced
- Log review - Daily review process
- Log retention - At least 1 year, 3 months online
Requirement 11: Security Testing
- Vulnerability scans - Quarterly external scans
- Internal scans - Quarterly internal scans
- Penetration testing - Annual pen test
- IDS/IPS - Intrusion detection in place
- Change detection - File integrity monitoring
Requirement 12: Security Policies
- Security policy - Documented and published
- Risk assessment - Annual risk assessment
- User awareness - Security training program
- Incident response - Plan documented and tested
- Service providers - Compliant or managed
31.13 Using These Checklists
Digital Tracking
Create issues or tasks for each checklist item in your project management tool:
# Example: Create GitHub issues from checklist
/dev-team create issues from deployment checklist
Print and Check
Print physical copies for:
- End-of-Day Checklist (daily use)
- Tenant Onboarding (per new customer)
- Go-Live Checklist (major deployments)
Team Responsibility
Assign checklist sections to team members:
| Checklist | Owner |
|---|---|
| Code Review | Developer |
| Security Review | Security Lead |
| Deployment | DevOps |
| Go-Live | Project Manager |
| End-of-Day | Store Manager |
Checklists ensure consistency. Use them every time, not just when you remember.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VIII - Reference |
| Chapter | 31 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Chapter 32: Troubleshooting
32.1 Common Issues and Solutions for POS Platform
This chapter provides solutions for common problems encountered during development, deployment, and operation of the POS platform.
32.2 Table of Contents
- Database Connection Issues
- Tenant Isolation Failures
- Sync Conflicts
- Payment Processing Errors
- Offline Mode Problems
- Performance Issues
- Authentication Failures
- Integration Errors
- Build and Deployment Failures
32.3 Database Connection Issues
Issue: Container Cannot Connect to PostgreSQL
Symptoms:
- Application fails to start
- Error: “Connection refused” or “Host not found”
- EF Core throws `NpgsqlException`
Possible Causes:
- PostgreSQL container not running
- Container not on correct Docker network
- Wrong connection string
- Firewall blocking port
Diagnostic Steps:
# Check if postgres16 is running
docker ps | grep postgres16
# Check network connectivity from app container
docker exec <app-container> ping postgres16
# Test port accessibility
docker exec <app-container> nc -zv postgres16 5432
# View PostgreSQL logs
docker logs postgres16 --tail 100
Resolution:
- Container not running:

  cd /volume1/docker/postgres
  docker-compose up -d

- Network misconfiguration:

  # Verify network exists
  docker network ls | grep postgres_default
  # Create if missing
  docker network create postgres_default
  # Connect container to network
  docker network connect postgres_default <app-container>

- Wrong connection string:

  # Correct format from container:
  Host=postgres16;Port=5432;Database=pos_db;Username=pos_user;Password=xxx
  # Correct format from host:
  Host=localhost;Port=5433;Database=pos_db;Username=pos_user;Password=xxx
Prevention:
- Always specify `postgres_default` as an external network in docker-compose
- Use environment variables for connection strings
- Implement connection retry logic with exponential backoff
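The last prevention item can be sketched with Polly, which this chapter already uses for payment retries (the retry count and delays here are assumptions to tune per environment):

```csharp
using System;
using Npgsql;
using Polly;

// Retry transient connection failures with exponential backoff:
// waits 2, 4, 8, 16, 32 seconds between attempts before giving up.
var retryPolicy = Policy
    .Handle<NpgsqlException>()
    .WaitAndRetryAsync(
        retryCount: 5,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
        onRetry: (ex, delay, attempt, _) =>
            Console.WriteLine($"DB connect attempt {attempt} failed; retrying in {delay}"));

await retryPolicy.ExecuteAsync(async () =>
{
    await using var conn = new NpgsqlConnection(connectionString);
    await conn.OpenAsync();
});
```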
Issue: “Role does not exist” Error
Symptoms:
- Error: `FATAL: role "pos_user" does not exist`
Possible Causes:
- Database user not created
- Wrong username in connection string
Resolution:
# Create the user
docker exec -it postgres16 psql -U postgres << EOF
CREATE USER pos_user WITH PASSWORD 'secure_password';
CREATE DATABASE pos_db OWNER pos_user;
GRANT ALL PRIVILEGES ON DATABASE pos_db TO pos_user;
EOF
32.4 Tenant Isolation Failures
Issue: Data Leaking Between Tenants
Symptoms:
- User sees data from another tenant
- Queries return unexpected results
- Security audit fails
Possible Causes:
- Missing `TenantId` filter in query
- Background job not setting tenant
- DbContext not configured for tenant
Diagnostic Steps:
-- Check for records missing tenant_id
SELECT table_name
FROM information_schema.columns
WHERE column_name = 'tenant_id'
AND table_schema = 'public';
-- Find orphaned records
SELECT COUNT(*) FROM orders WHERE tenant_id IS NULL;
Resolution:
- Missing filter - Add global query filter:

  // In DbContext.OnModelCreating
  modelBuilder.Entity<Order>()
      .HasQueryFilter(o => o.TenantId == _tenantProvider.TenantId);

- Middleware issue:

  // Verify middleware order in Program.cs
  app.UseAuthentication();
  app.UseTenantMiddleware(); // Must be after auth
  app.UseAuthorization();

- Background job:

  // Always set tenant in background jobs
  using (var scope = _scopeFactory.CreateScope())
  {
      var tenantProvider = scope.ServiceProvider.GetRequiredService<ITenantProvider>();
      tenantProvider.SetTenant(tenantId);
      // ... do work
  }
Prevention:
- Enable Row-Level Security in PostgreSQL
- Add integration tests that verify isolation
- Review all queries for tenant filtering
- Use tenant-scoped DbContext factory
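The second prevention item, an isolation test, can be sketched with xUnit (a hedged sketch; `CreateClientForTenant` and `SeedOrderForTenant` are hypothetical test helpers, not existing platform code):

```csharp
using System.Net;
using System.Threading.Tasks;
using Xunit;

public class TenantIsolationTests
{
    [Fact]
    public async Task Orders_endpoint_never_returns_another_tenants_data()
    {
        // Hypothetical helper that issues a JWT scoped to the given tenant
        var tenantAClient = await CreateClientForTenant("tenant-a");
        var orderId = await SeedOrderForTenant("tenant-b");

        // Tenant A must not be able to read Tenant B's order;
        // 404 (not 403) avoids leaking that the resource exists
        var response = await tenantAClient.GetAsync($"/api/v1/orders/{orderId}");

        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}
```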
Issue: “Invalid TenantId” on Valid Request
Symptoms:
- 400 Bad Request with tenant errors
- User cannot access their own data
Possible Causes:
- Tenant ID not in JWT claims
- Tenant lookup failing
- Caching stale tenant data
Resolution:
// Debug: Log tenant resolution
_logger.LogDebug("Resolving tenant from claim: {TenantClaim}",
context.User.FindFirst("tenant_id")?.Value);
// Clear tenant cache
_cache.Remove($"tenant:{tenantId}");
32.5 Sync Conflicts
Issue: Offline Changes Overwritten
Symptoms:
- User makes offline edits, they disappear after sync
- Error: “Conflict detected”
- Data reverts to old state
Possible Causes:
- Last-write-wins without conflict detection
- Version mismatch
- Sync order incorrect
Diagnostic Steps:
-- Check version history
SELECT id, version, modified_at
FROM inventory_items
WHERE sku = 'ABC123'
ORDER BY version DESC;
-- Check event log
SELECT * FROM inventory_events
WHERE sku = 'ABC123'
ORDER BY created_at DESC LIMIT 10;
Resolution:
- Implement optimistic concurrency:

  public async Task<bool> UpdateAsync(Item item, int expectedVersion)
  {
      var affected = await _db.Items
          .Where(i => i.Id == item.Id && i.Version == expectedVersion)
          .ExecuteUpdateAsync(s => s
              .SetProperty(i => i.Name, item.Name)
              .SetProperty(i => i.Version, expectedVersion + 1));
      return affected > 0; // False if version mismatch
  }

- Queue offline changes with timestamps:

  // Store in local queue with client timestamp
  _localQueue.Enqueue(new SyncItem
  {
      Operation = "Update",
      ClientTimestamp = DateTimeOffset.UtcNow,
      Data = item
  });
Prevention:
- Use vector clocks or version vectors
- Implement merge strategies for specific entity types
- Show user when conflicts occur and let them choose
Issue: Sync Never Completes
Symptoms:
- “Syncing…” message never goes away
- Partial data sync
- Timeout errors
Possible Causes:
- Network interruption during sync
- Large payload timeout
- Server error during sync
Resolution:
// Implement chunked sync
public async Task SyncAsync()
{
var chunks = _localQueue.Chunk(100);
foreach (var chunk in chunks)
{
try
{
await _api.SyncBatchAsync(chunk);
_localQueue.MarkSynced(chunk);
}
catch (TimeoutException)
{
// Will retry next sync
break;
}
}
}
32.6 Payment Processing Errors
Issue: Payment Gateway Timeout
Symptoms:
- Payment hangs for 30+ seconds
- Error: “Request timeout”
- Uncertain if payment processed
Possible Causes:
- Network latency
- Gateway overloaded
- Invalid timeout configuration
Diagnostic Steps:
# Test gateway connectivity
curl -X GET https://api.paymentgateway.com/health -w "\nTime: %{time_total}s\n"
# Check recent payment attempts in logs
grep "payment" /var/log/pos/*.log | tail -50
Resolution:
- Implement idempotency:

  public async Task<PaymentResult> ProcessPaymentAsync(
      PaymentRequest request, string idempotencyKey)
  {
      // Check if already processed
      var existing = await _db.Payments
          .FirstOrDefaultAsync(p => p.IdempotencyKey == idempotencyKey);
      if (existing != null) return existing.ToResult();

      // Process with gateway
      var result = await _gateway.ChargeAsync(request);

      // Save with idempotency key
      await _db.Payments.AddAsync(new Payment
      {
          IdempotencyKey = idempotencyKey,
          Status = result.Status
      });
      return result;
  }

- Add timeout with retry:

  var policy = Policy
      .Handle<TimeoutException>()
      .RetryAsync(3, onRetry: (ex, count) =>
      {
          _logger.LogWarning("Payment retry {Count}: {Message}", count, ex.Message);
      });
  await policy.ExecuteAsync(() => _gateway.ChargeAsync(request));
Prevention:
- Always use idempotency keys
- Set reasonable timeouts (15-30 seconds)
- Implement circuit breaker for gateway calls
- Queue payments if offline
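The circuit-breaker item can also use Polly, alongside the retry policy this section shows (the thresholds here are assumptions to tune):

```csharp
using System;
using System.Net.Http;
using Polly;
using Polly.CircuitBreaker;

// After 3 consecutive gateway failures, stop calling for 30 seconds
// and fail fast rather than letting every sale wait on a dead gateway.
var breaker = Policy
    .Handle<HttpRequestException>()
    .Or<TimeoutException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(30));

try
{
    var result = await breaker.ExecuteAsync(() => _gateway.ChargeAsync(request));
}
catch (BrokenCircuitException)
{
    // Circuit is open: queue the payment for later instead of blocking the lane
    await _paymentQueue.EnqueueAsync(request);
}
```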
Issue: Card Declined
Symptoms:
- Payment rejected
- Error code from gateway
Common Decline Codes:
| Code | Meaning | Action |
|---|---|---|
| `insufficient_funds` | Not enough balance | Try different card |
| `card_declined` | Generic decline | Contact card issuer |
| `expired_card` | Card expired | Use different card |
| `incorrect_cvc` | Wrong CVV | Re-enter |
| `processing_error` | Gateway issue | Retry |
Resolution:
public string GetUserFriendlyMessage(string errorCode)
{
return errorCode switch
{
"insufficient_funds" => "Card declined. Please try a different payment method.",
"expired_card" => "This card has expired. Please use a different card.",
"incorrect_cvc" => "The security code is incorrect. Please verify and try again.",
_ => "Payment could not be processed. Please try again or use a different card."
};
}
32.7 Offline Mode Problems
Issue: Application Won’t Start Offline
Symptoms:
- App requires internet to launch
- Loading screen indefinitely
- Error: “Network request failed”
Possible Causes:
- Missing service worker
- No cached data
- API call in startup
Diagnostic Steps:
- Check browser DevTools > Application > Service Workers
- Check IndexedDB for cached data
- Monitor Network tab for failed requests
Resolution:
- Ensure service worker registered:

  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('SW registered'))
      .catch(err => console.error('SW failed', err));
  }

- Add offline fallback in startup:

  public async Task InitializeAsync()
  {
      try
      {
          await _api.FetchInitialData();
      }
      catch (HttpRequestException)
      {
          _logger.LogWarning("Offline - using cached data");
          await LoadFromCache();
      }
  }
Prevention:
- Cache essential data proactively
- Implement offline-first architecture
- Test app startup with network disabled
Issue: Offline Queue Growing Too Large
Symptoms:
- Local storage filling up
- App slowing down
- “Storage quota exceeded”
Possible Causes:
- Extended offline period
- Sync failing silently
- No queue size limit
Resolution:
// Implement queue management
public async Task AddToQueue(SyncItem item)
{
var queueSize = await _localDb.SyncQueue.CountAsync();
if (queueSize >= MAX_QUEUE_SIZE)
{
// Warn user
await _notifications.ShowAsync(
"Sync queue is full. Please connect to internet.");
// Optional: Remove oldest low-priority items
await _localDb.SyncQueue
.Where(q => q.Priority == Priority.Low)
.OrderBy(q => q.CreatedAt)
.Take(100)
.ExecuteDeleteAsync();
}
await _localDb.SyncQueue.AddAsync(item);
}
32.8 Performance Issues
Issue: Slow API Responses
Symptoms:
- API calls taking > 1 second
- Users complaining of lag
- Timeouts occurring
Possible Causes:
- N+1 query problem
- Missing database indexes
- Large payloads
- No caching
Diagnostic Steps:
-- Find slow queries (PostgreSQL 13+; older versions use mean_time/total_time)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
-- Check missing indexes
SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > idx_scan
ORDER BY seq_scan DESC;
Resolution:
- Fix N+1 queries:
// Bad
var orders = await _db.Orders.ToListAsync();
foreach (var order in orders)
    order.Items = await _db.OrderItems.Where(...).ToListAsync();
// Good
var orders = await _db.Orders
    .Include(o => o.Items)
    .ToListAsync();
- Add missing indexes:
CREATE INDEX idx_orders_tenant_date ON orders (tenant_id, created_at DESC);
CREATE INDEX idx_inventory_sku ON inventory_items (sku);
- Implement caching:
public async Task<Product> GetProductAsync(string sku)
{
    return await _cache.GetOrCreateAsync($"product:{sku}", async entry =>
    {
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
        return await _db.Products.FindAsync(sku);
    });
}
Prevention:
- Enable query logging in development
- Set up performance monitoring
- Establish response time budgets
Issue: Memory Usage Growing
Symptoms:
- Container memory increasing over time
- Out of memory errors
- Slow garbage collection
Possible Causes:
- Memory leak in code
- Unbounded caches
- Event handler accumulation
- Large objects in memory
Diagnostic Steps:
# Monitor container memory
docker stats <container-name>
# Get memory dump (if dotnet-dump installed)
dotnet-dump collect -p <process-id>
Resolution:
- Dispose resources properly:
// Use 'using' for disposables
await using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();
- Limit cache size:
services.AddMemoryCache(options =>
{
    options.SizeLimit = 1000; // Max entries
});
_cache.Set(key, value, new MemoryCacheEntryOptions
{
    Size = 1,
    SlidingExpiration = TimeSpan.FromMinutes(10)
});
- Unsubscribe from events:
public class MyComponent : IDisposable
{
    private readonly IDisposable _subscription;
    public MyComponent(IEventBus bus)
    {
        _subscription = bus.Subscribe<OrderCreated>(HandleOrder);
    }
    public void Dispose()
    {
        _subscription?.Dispose();
    }
}
32.9 Authentication Failures
Issue: JWT Token Rejected
Symptoms:
- 401 Unauthorized responses
- “Invalid token” errors
- User suddenly logged out
Possible Causes:
- Token expired
- Wrong signing key
- Clock skew between servers
- Token issued for different audience
Diagnostic Steps:
# Decode JWT (don't do this with sensitive tokens in production)
echo "<token>" | cut -d. -f2 | base64 -d | jq
# Check claims
# Look for: exp, iss, aud
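The decode step above can be wrapped in a small diagnostic helper. This TypeScript sketch (function names are illustrative) decodes the payload without verifying the signature and flags the claims that most often cause rejections:

```typescript
// Debugging aid only: decode a JWT payload (base64url) WITHOUT
// verifying the signature, then check exp/iss/aud.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split(".")[1];
  // JWTs use base64url; normalize before decoding
  const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(base64, "base64").toString("utf8"));
}

function diagnose(token: string, expectedIss: string, expectedAud: string): string[] {
  const claims = decodeJwtPayload(token);
  const problems: string[] = [];
  const now = Math.floor(Date.now() / 1000);
  if (typeof claims.exp === "number" && claims.exp < now) problems.push("token expired");
  if (claims.iss !== expectedIss) problems.push("unexpected issuer: " + String(claims.iss));
  if (claims.aud !== expectedAud) problems.push("unexpected audience: " + String(claims.aud));
  return problems;
}
```

An empty result means the usual suspects are clean and the problem is more likely the signing key or clock skew, covered below.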
Resolution:
- Token expired - Implement refresh flow:
if (response.StatusCode == HttpStatusCode.Unauthorized)
{
    var newToken = await RefreshTokenAsync();
    // Retry with new token
}
- Clock skew - Add tolerance:
services.AddAuthentication().AddJwtBearer(options =>
{
    options.TokenValidationParameters = new TokenValidationParameters
    {
        ClockSkew = TimeSpan.FromMinutes(5)
    };
});
- Wrong key - Verify signing key matches:
# Both services must use same key
echo $JWT_SIGNING_KEY | base64
Issue: User Cannot Log In
Symptoms:
- Login fails with valid credentials
- “Invalid username or password”
- Account not locked
Possible Causes:
- Password hashing mismatch
- User account disabled
- Tenant not active
- Case sensitivity issues
Resolution:
public async Task<LoginResult> LoginAsync(string email, string password)
{
// Case-insensitive email lookup
var user = await _db.Users
.FirstOrDefaultAsync(u => u.Email.ToLower() == email.ToLower());
if (user == null)
{
_logger.LogWarning("Login failed: user not found for {Email}", email);
return LoginResult.Failed("Invalid credentials");
}
if (!user.IsActive)
{
_logger.LogWarning("Login failed: user {Email} is inactive", email);
return LoginResult.Failed("Account is disabled");
}
if (!_hasher.Verify(password, user.PasswordHash))
{
_logger.LogWarning("Login failed: wrong password for {Email}", email);
return LoginResult.Failed("Invalid credentials");
}
return LoginResult.Success(GenerateToken(user));
}
32.10 Integration Errors
Issue: Shopify Webhook Not Received
Symptoms:
- Orders not appearing in POS
- Inventory not syncing
- Webhook endpoint returning errors
Possible Causes:
- Webhook not registered
- HMAC verification failing
- Endpoint not accessible
- SSL certificate issues
Diagnostic Steps:
# Check webhook registration
curl -X GET "https://{store}.myshopify.com/admin/api/2024-01/webhooks.json" \
-H "X-Shopify-Access-Token: {token}"
# Test endpoint accessibility
curl -X POST https://your-domain.com/webhooks/shopify \
-H "Content-Type: application/json" \
-d '{"test": true}'
Resolution:
- Register webhook:
curl -X POST "https://{store}.myshopify.com/admin/api/2024-01/webhooks.json" \
  -H "X-Shopify-Access-Token: {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "webhook": {
      "topic": "orders/create",
      "address": "https://your-domain.com/webhooks/shopify",
      "format": "json"
    }
  }'
- Fix HMAC verification (the method must be async to read the request body):
public async Task<bool> VerifyWebhookAsync(HttpRequest request)
{
    var hmacHeader = request.Headers["X-Shopify-Hmac-SHA256"];
    using var reader = new StreamReader(request.Body);
    var body = await reader.ReadToEndAsync();
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(_secret));
    var hash = Convert.ToBase64String(
        hmac.ComputeHash(Encoding.UTF8.GetBytes(body)));
    return hash == hmacHeader;
}
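For a Node-based receiver, the same check can be sketched in TypeScript (function name illustrative). Shopify signs the raw request body, so verify against the unparsed bytes and compare with a constant-time check:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Recompute the HMAC-SHA256 of the raw body and compare it to the
// base64 value Shopify sent in X-Shopify-Hmac-SHA256.
function verifyShopifyHmac(rawBody: string, hmacHeader: string, secret: string): boolean {
  const digest = createHmac("sha256", secret).update(rawBody, "utf8").digest();
  const given = Buffer.from(hmacHeader, "base64");
  // timingSafeEqual throws on length mismatch, so guard first
  return given.length === digest.length && timingSafeEqual(given, digest);
}
```

Reading the raw body matters: re-serializing parsed JSON can reorder keys or change whitespace, which silently breaks the digest.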
Issue: Bridge Not Connecting
Symptoms:
- Bridge status shows “Offline”
- Commands stuck in pending
- Heartbeats not received
Possible Causes:
- Tailscale VPN not connected
- Wrong server URL in bridge config
- Firewall blocking
- Bridge service not running
Diagnostic Steps:
# Check Tailscale status
tailscale status
# Test connectivity from bridge machine
curl http://100.124.10.65:2500/health
# Check bridge logs
Get-Content C:\ProgramData\StanlyBridge\logs\*.log -Tail 50
Resolution:
- Reconnect Tailscale:
tailscale up
- Verify bridge configuration:
// appsettings.json
{
  "ServerUrl": "http://100.124.10.65:2500",
  "StoreCode": "GM"
}
- Restart bridge service:
Restart-Service StanlyBridge
32.11 Build and Deployment Failures
Issue: Docker Build Fails
Symptoms:
- docker-compose up --build errors
- Missing dependencies
- “No such file or directory”
Possible Causes:
- Dockerfile syntax error
- Missing files in context
- Network issues downloading packages
- Incompatible base image
Resolution:
- Check .dockerignore:
# Make sure necessary files aren't ignored
# Bad:
*.json
# Good:
*.log
node_modules
- Multi-stage build issues:
# Ensure COPY --from references correct stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["src/App/App.csproj", "src/App/"]
RUN dotnet restore "src/App/App.csproj"
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
# 'build' must match the stage name above
COPY --from=build /app/publish .
- Clear Docker cache:
docker builder prune
docker-compose build --no-cache
Issue: Migration Fails on Deployment
Symptoms:
- Container starts but crashes
- “Database migration failed”
- Schema out of sync
Possible Causes:
- Migration order issue
- Conflicting migrations
- Database connection during migration
Resolution:
- Run migrations separately:
# Don't auto-migrate on startup
# Instead, run migrations explicitly
docker exec <container> dotnet ef database update
- Check migration history:
SELECT * FROM "__EFMigrationsHistory" ORDER BY "MigrationId";
- Reset if needed (dev only!):
# Remove all migrations and recreate
dotnet ef database drop
dotnet ef database update
Prevention:
- Test migrations on copy of production data
- Never modify published migrations
- Keep migrations small and focused
32.12 Quick Reference: Error Codes
| Error Code | Meaning | First Step |
|---|---|---|
| 400 | Bad Request | Check request body/params |
| 401 | Unauthorized | Check token validity |
| 403 | Forbidden | Check user permissions |
| 404 | Not Found | Check ID/resource exists |
| 409 | Conflict | Check version/concurrency |
| 422 | Validation Error | Check input constraints |
| 500 | Server Error | Check application logs |
| 502 | Bad Gateway | Check upstream services |
| 503 | Service Unavailable | Check service health |
| 504 | Gateway Timeout | Check network/timeouts |
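For an API client, a useful first-response rule over these codes is: retry only the statuses that signal a transient upstream problem; everything in the 4xx range (except rate limiting) needs a fixed request, not a retry. A minimal sketch:

```typescript
// Transient failures worth an automatic retry (with backoff, and after
// any Retry-After delay for 429). 4xx client errors are not retried.
function isRetryable(status: number): boolean {
  return status === 429 || status === 502 || status === 503 || status === 504;
}
```

Pair this with the "First Step" column above when deciding what to log before retrying.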
32.13 When All Else Fails
- Check the logs: docker logs <container> --tail 500
- Check the database: Direct query to verify data
- Check the network: docker network inspect
- Restart the container: Sometimes it just works
- Ask for help: Post in team chat with:
- Exact error message
- Steps to reproduce
- What you’ve already tried
- Relevant log snippets
The best debugging tool is a good night’s sleep. But if you need to fix it now, use these guides.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | VIII - Reference |
| Chapter | 32 of 32 |
This chapter is part of the POS Blueprint Book. All content is self-contained.
Appendix A: Complete API Reference
Version: 4.0.0
Last Updated: February 25, 2026
Base URL: https://api.pos-platform.com/api/v1
A.1 Overview
This appendix contains the complete API reference for the POS Platform, organized by domain. All endpoints require authentication unless marked as public.
Authentication
All authenticated requests must include a Bearer token:
Authorization: Bearer <jwt_token>
Role Hierarchy
| Role | Level | Capabilities |
|---|---|---|
| SuperAdmin | 5 | Full system access |
| Admin | 4 | Tenant-wide administration |
| Manager | 3 | Location management, overrides |
| Cashier | 2 | POS operations |
| Viewer | 1 | Read-only access |
Common Response Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Created |
| 204 | No Content |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 409 | Conflict |
| 422 | Validation Error |
| 429 | Rate Limited |
| 500 | Server Error |
A.2 Domain 1: Authentication
POST /auth/login
Description: Authenticate user and receive JWT token
Authentication: None (public)
Request Body:
{
"email": "user@example.com",
"password": "securePassword123",
"tenantId": "tenant_nexus"
}
Response: 200 OK
{
"token": "eyJhbGciOiJIUzI1NiIs...",
"refreshToken": "dGhpcyBpcyBhIHJlZnJl...",
"expiresAt": "2025-12-29T16:00:00Z",
"user": {
"id": "usr_abc123",
"email": "user@example.com",
"firstName": "John",
"lastName": "Doe",
"role": "cashier",
"locationId": "loc_gm",
"permissions": ["sales.create", "sales.void", "inventory.view"]
}
}
Errors: 401 Invalid credentials, 423 Account locked
POST /auth/refresh
Description: Refresh an expired access token
Authentication: None (requires valid refresh token)
Request Body:
{
"refreshToken": "dGhpcyBpcyBhIHJlZnJl..."
}
Response: 200 OK
{
"token": "eyJhbGciOiJIUzI1NiIs...",
"refreshToken": "bmV3IHJlZnJlc2ggdG9r...",
"expiresAt": "2025-12-29T18:00:00Z"
}
Errors: 401 Invalid or expired refresh token
POST /auth/logout
Description: Invalidate current session
Authentication: Bearer token (Any role)
Request Body: None
Response: 204 No Content
POST /auth/password/change
Description: Change current user’s password
Authentication: Bearer token (Any role)
Request Body:
{
"currentPassword": "oldPassword123",
"newPassword": "newSecurePassword456"
}
Response: 204 No Content
Errors: 400 Password requirements not met, 401 Current password incorrect
POST /auth/password/reset
Description: Request password reset email
Authentication: None (public)
Request Body:
{
"email": "user@example.com",
"tenantId": "tenant_nexus"
}
Response: 202 Accepted
{
"message": "If the email exists, a reset link has been sent"
}
A.3 Domain 2: Tenants
GET /tenants
Description: List all tenants (SuperAdmin only)
Authentication: Bearer token (SuperAdmin)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| status | string | Filter by status (active, suspended, trial) |
| page | int | Page number (default: 1) |
| limit | int | Items per page (default: 20, max: 100) |
Response: 200 OK
{
"data": [
{
"id": "tenant_nexus",
"name": "Nexus Clothing",
"subdomain": "nexus",
"status": "active",
"plan": "enterprise",
"createdAt": "2025-01-01T00:00:00Z",
"locationCount": 5,
"userCount": 25
}
],
"pagination": {
"page": 1,
"limit": 20,
"total": 45,
"pages": 3
}
}
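The `pages` value in pagination blocks like the one above is derived from `total` and `limit` (45 items at 20 per page gives 3 pages):

```typescript
// Page count is the ceiling of total / limit.
function pageCount(total: number, limit: number): number {
  return Math.ceil(total / limit);
}
```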
POST /tenants
Description: Create a new tenant
Authentication: Bearer token (SuperAdmin)
Request Body:
{
"name": "New Retail Store",
"subdomain": "newretail",
"plan": "professional",
"adminUser": {
"email": "admin@newretail.com",
"firstName": "Jane",
"lastName": "Smith",
"password": "initialPassword123"
},
"settings": {
"timezone": "America/New_York",
"currency": "USD",
"taxRate": 6.0
}
}
Response: 201 Created
{
"id": "tenant_newretail",
"name": "New Retail Store",
"subdomain": "newretail",
"status": "trial",
"trialEndsAt": "2025-01-28T00:00:00Z",
"adminUserId": "usr_admin123"
}
Errors: 409 Subdomain already exists, 422 Validation error
GET /tenants/
Description: Get tenant details
Authentication: Bearer token (SuperAdmin or tenant Admin)
Response: 200 OK
{
"id": "tenant_nexus",
"name": "Nexus Clothing",
"subdomain": "nexus",
"status": "active",
"plan": "enterprise",
"settings": {
"timezone": "America/New_York",
"currency": "USD",
"taxRate": 6.0,
"loyaltyEnabled": true,
"rfidEnabled": true
},
"usage": {
"locations": 5,
"users": 25,
"monthlyTransactions": 12500,
"storageUsedMB": 2048
},
"createdAt": "2025-01-01T00:00:00Z",
"updatedAt": "2025-12-29T10:00:00Z"
}
PATCH /tenants/
Description: Update tenant settings
Authentication: Bearer token (SuperAdmin or tenant Admin)
Request Body:
{
"name": "Nexus Clothing Inc.",
"settings": {
"taxRate": 6.5
}
}
Response: 200 OK (returns updated tenant)
POST /tenants/{tenantId}/suspend
Description: Suspend a tenant account
Authentication: Bearer token (SuperAdmin)
Request Body:
{
"reason": "Payment overdue",
"suspendAt": "2025-12-30T00:00:00Z"
}
Response: 200 OK
POST /tenants/{tenantId}/activate
Description: Reactivate a suspended tenant
Authentication: Bearer token (SuperAdmin)
Response: 200 OK
A.4 Domain 3: Locations
GET /locations
Description: List all locations for current tenant
Authentication: Bearer token (Viewer+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| status | string | Filter by status (active, inactive) |
| type | string | Filter by type (store, warehouse, popup) |
Response: 200 OK
{
"data": [
{
"id": "loc_gm",
"code": "GM",
"name": "Greenbrier Mall",
"type": "store",
"status": "active",
"address": {
"street": "1401 Greenbrier Pkwy",
"city": "Chesapeake",
"state": "VA",
"zip": "23320"
},
"phone": "757-555-0100",
"timezone": "America/New_York",
"shopifyLocationId": "19718045760"
}
]
}
POST /locations
Description: Create a new location
Authentication: Bearer token (Admin)
Request Body:
{
"code": "NL",
"name": "New Location",
"type": "store",
"address": {
"street": "123 Main St",
"city": "Norfolk",
"state": "VA",
"zip": "23510"
},
"phone": "757-555-0200",
"timezone": "America/New_York",
"settings": {
"fulfillmentPriority": 5,
"canShipOnline": true
}
}
Response: 201 Created
GET /locations/
Description: Get location details
Authentication: Bearer token (Viewer+)
Response: 200 OK
{
"id": "loc_gm",
"code": "GM",
"name": "Greenbrier Mall",
"type": "store",
"status": "active",
"address": {
"street": "1401 Greenbrier Pkwy",
"city": "Chesapeake",
"state": "VA",
"zip": "23320"
},
"phone": "757-555-0100",
"timezone": "America/New_York",
"settings": {
"fulfillmentPriority": 1,
"canShipOnline": true,
"showInventoryOnWeb": true
},
"registers": [
{
"id": "reg_01",
"name": "Register 1",
"status": "active"
}
],
"operatingHours": {
"monday": { "open": "10:00", "close": "21:00" },
"tuesday": { "open": "10:00", "close": "21:00" },
"wednesday": { "open": "10:00", "close": "21:00" },
"thursday": { "open": "10:00", "close": "21:00" },
"friday": { "open": "10:00", "close": "21:00" },
"saturday": { "open": "10:00", "close": "21:00" },
"sunday": { "open": "12:00", "close": "18:00" }
}
}
PATCH /locations/
Description: Update location details
Authentication: Bearer token (Admin)
Request Body:
{
"name": "Greenbrier Mall Store",
"settings": {
"fulfillmentPriority": 2
}
}
Response: 200 OK
A.5 Domain 4: Users & Employees
GET /users
Description: List all users for current tenant
Authentication: Bearer token (Admin)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| role | string | Filter by role |
| locationId | string | Filter by location |
| status | string | active, inactive, locked |
Response: 200 OK
{
"data": [
{
"id": "usr_abc123",
"email": "john.doe@example.com",
"firstName": "John",
"lastName": "Doe",
"role": "cashier",
"locationId": "loc_gm",
"status": "active",
"lastLoginAt": "2025-12-29T08:00:00Z"
}
]
}
POST /users
Description: Create a new user
Authentication: Bearer token (Admin)
Request Body:
{
"email": "newuser@example.com",
"firstName": "Jane",
"lastName": "Smith",
"role": "cashier",
"locationId": "loc_gm",
"pin": "1234",
"permissions": ["sales.create", "sales.void"]
}
Response: 201 Created
GET /users/
Description: Get user details
Authentication: Bearer token (Admin or self)
Response: 200 OK
PATCH /users/
Description: Update user details
Authentication: Bearer token (Admin)
Request Body:
{
"role": "manager",
"permissions": ["sales.create", "sales.void", "inventory.adjust"]
}
Response: 200 OK
DELETE /users/
Description: Deactivate user (soft delete)
Authentication: Bearer token (Admin)
Response: 204 No Content
POST /users/{userId}/reset-pin
Description: Reset user’s POS PIN
Authentication: Bearer token (Admin)
Request Body:
{
"newPin": "5678"
}
Response: 204 No Content
GET /employees/{employeeId}/timeclock
Description: Get employee time clock entries
Authentication: Bearer token (Manager+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| startDate | date | Start of date range |
| endDate | date | End of date range |
Response: 200 OK
{
"data": [
{
"id": "tc_001",
"employeeId": "usr_abc123",
"clockIn": "2025-12-29T08:00:00Z",
"clockOut": "2025-12-29T17:00:00Z",
"hoursWorked": 9.0,
"breaks": [
{
"start": "2025-12-29T12:00:00Z",
"end": "2025-12-29T12:30:00Z",
"type": "lunch"
}
]
}
]
}
POST /employees/{employeeId}/clock-in
Description: Clock in employee
Authentication: Bearer token (Cashier+ or self)
Request Body:
{
"locationId": "loc_gm",
"registerId": "reg_01"
}
Response: 201 Created
{
"id": "tc_002",
"employeeId": "usr_abc123",
"clockIn": "2025-12-29T08:00:00Z",
"locationId": "loc_gm"
}
POST /employees/{employeeId}/clock-out
Description: Clock out employee
Authentication: Bearer token (Cashier+ or self)
Response: 200 OK
{
"id": "tc_002",
"clockOut": "2025-12-29T17:00:00Z",
"hoursWorked": 9.0
}
A.6 Domain 5: Products & Catalog
GET /products
Description: List products in catalog
Authentication: Bearer token (Viewer+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| search | string | Search by name, SKU, barcode |
| categoryId | string | Filter by category |
| vendorId | string | Filter by vendor |
| status | string | active, discontinued, draft |
| page | int | Page number |
| limit | int | Items per page |
Response: 200 OK
{
"data": [
{
"id": "prod_abc123",
"name": "Classic V-Neck Tee",
"sku": "NXP0323",
"barcode": "657381512532",
"categoryId": "cat_shirts",
"vendorId": "vendor_abc",
"status": "active",
"basePrice": 29.99,
"cost": 12.50,
"variants": [
{
"id": "var_001",
"sku": "NXP0323-S-BLK",
"options": { "size": "S", "color": "Black" },
"price": 29.99,
"barcode": "657381512533"
}
],
"images": [
{
"url": "https://cdn.example.com/images/nxp0323.jpg",
"alt": "Classic V-Neck Tee",
"position": 1
}
]
}
],
"pagination": {
"page": 1,
"limit": 20,
"total": 5000
}
}
POST /products
Description: Create a new product
Authentication: Bearer token (Admin)
Request Body:
{
"name": "New Product",
"sku": "NXP9999",
"categoryId": "cat_shirts",
"vendorId": "vendor_abc",
"basePrice": 39.99,
"cost": 15.00,
"description": "Product description here",
"variants": [
{
"sku": "NXP9999-S-BLK",
"options": { "size": "S", "color": "Black" },
"price": 39.99,
"barcode": "657381599999"
}
]
}
Response: 201 Created
GET /products/
Description: Get product details
Authentication: Bearer token (Viewer+)
Response: 200 OK
PATCH /products/
Description: Update product
Authentication: Bearer token (Admin)
Request Body:
{
"basePrice": 34.99,
"status": "active"
}
Response: 200 OK
DELETE /products/
Description: Discontinue product (soft delete)
Authentication: Bearer token (Admin)
Response: 204 No Content
GET /products/{productId}/variants
Description: List all variants for a product
Authentication: Bearer token (Viewer+)
Response: 200 OK
POST /products/{productId}/variants
Description: Add variant to product
Authentication: Bearer token (Admin)
Request Body:
{
"sku": "NXP0323-XL-BLK",
"options": { "size": "XL", "color": "Black" },
"price": 29.99,
"barcode": "657381512599"
}
Response: 201 Created
GET /categories
Description: List product categories
Authentication: Bearer token (Viewer+)
Response: 200 OK
{
"data": [
{
"id": "cat_shirts",
"name": "Shirts",
"parentId": null,
"children": [
{
"id": "cat_tees",
"name": "T-Shirts",
"parentId": "cat_shirts"
},
{
"id": "cat_polos",
"name": "Polos",
"parentId": "cat_shirts"
}
]
}
]
}
GET /vendors
Description: List vendors
Authentication: Bearer token (Viewer+)
Response: 200 OK
A.7 Domain 6: Inventory
GET /inventory
Description: Get inventory levels across locations
Authentication: Bearer token (Viewer+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| locationId | string | Filter by location |
| variantId | string | Filter by variant |
| sku | string | Filter by SKU |
| belowReorder | boolean | Show only items below reorder point |
Response: 200 OK
{
"data": [
{
"variantId": "var_001",
"sku": "NXP0323-S-BLK",
"productName": "Classic V-Neck Tee - S Black",
"levels": [
{
"locationId": "loc_gm",
"locationName": "Greenbrier Mall",
"onHand": 15,
"available": 13,
"reserved": 2,
"reorderPoint": 5,
"reorderQty": 20
},
{
"locationId": "loc_hm",
"locationName": "Peninsula Town Center",
"onHand": 8,
"available": 8,
"reserved": 0,
"reorderPoint": 5,
"reorderQty": 20
}
],
"totalOnHand": 23,
"totalAvailable": 21
}
]
}
GET /inventory/locations/
Description: Get inventory for specific location
Authentication: Bearer token (Viewer+)
Response: 200 OK
POST /inventory/adjustments
Description: Create inventory adjustment
Authentication: Bearer token (Manager+)
Request Body:
{
"locationId": "loc_gm",
"adjustmentType": "cycle_count",
"items": [
{
"variantId": "var_001",
"systemQty": 15,
"countedQty": 13,
"reason": "shrinkage"
}
],
"notes": "Quarterly cycle count - Section A"
}
Response: 201 Created
{
"id": "adj_001",
"status": "completed",
"items": [
{
"variantId": "var_001",
"variance": -2,
"previousOnHand": 15,
"newOnHand": 13,
"costImpact": -25.00
}
],
"totalVariance": -2,
"totalCostImpact": -25.00
}
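The adjustment response reconciles as variance = countedQty − systemQty, and cost impact = variance × unit cost (−2 × the $12.50 cost from the catalog example = −$25.00). A sketch of that calculation (helper name illustrative):

```typescript
// Compute the quantity variance and its cost impact for one
// cycle-count line item.
function adjustmentImpact(systemQty: number, countedQty: number, unitCost: number) {
  const variance = countedQty - systemQty;
  return { variance, costImpact: variance * unitCost };
}
```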
GET /inventory/adjustments
Description: List inventory adjustments
Authentication: Bearer token (Manager+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| locationId | string | Filter by location |
| type | string | cycle_count, shrinkage, damage, correction |
| startDate | date | Start date |
| endDate | date | End date |
Response: 200 OK
POST /inventory/transfers
Description: Create inventory transfer request
Authentication: Bearer token (Manager+)
Request Body:
{
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"priority": "normal",
"reason": "low_stock",
"items": [
{
"variantId": "var_001",
"quantity": 10
}
],
"notes": "Restocking for weekend sale"
}
Response: 201 Created
{
"id": "xfer_001",
"status": "pending",
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"items": [
{
"variantId": "var_001",
"quantityRequested": 10
}
],
"expectedShipDate": "2025-12-30",
"expectedArrivalDate": "2025-12-31"
}
GET /inventory/transfers/
Description: Get transfer details
Authentication: Bearer token (Viewer+)
Response: 200 OK
POST /inventory/transfers/{transferId}/ship
Description: Mark transfer as shipped
Authentication: Bearer token (Manager+)
Request Body:
{
"items": [
{
"variantId": "var_001",
"quantityShipped": 10
}
],
"trackingNumber": "1Z999AA10123456784",
"carrier": "UPS"
}
Response: 200 OK
POST /inventory/transfers/{transferId}/receive
Description: Receive transfer at destination
Authentication: Bearer token (Manager+)
Request Body:
{
"items": [
{
"variantId": "var_001",
"quantityReceived": 10,
"quantityDamaged": 0
}
],
"notes": null
}
Response: 200 OK
A.8 Domain 7: Sales & Orders
POST /sales
Description: Create a new sale transaction
Authentication: Bearer token (Cashier+)
Request Body:
{
"locationId": "loc_gm",
"registerId": "reg_01",
"customerId": "cust_john_doe",
"lineItems": [
{
"variantId": "var_001",
"quantity": 2,
"unitPrice": 29.99,
"discountAmount": 0,
"discountReason": null
}
],
"discounts": [
{
"type": "percentage",
"value": 10,
"code": "SAVE10",
"appliesTo": "order"
}
],
"payments": [
{
"method": "card",
"amount": 53.98,
"reference": "tok_visa_4242"
}
]
}
Response: 201 Created
{
"id": "ord_xyz789",
"orderNumber": "ORD-2025-00001",
"receiptNumber": "GM-2025-001234",
"status": "completed",
"lineItems": [
{
"id": "li_001",
"variantId": "var_001",
"sku": "NXP0323-S-BLK",
"name": "Classic V-Neck Tee - S Black",
"quantity": 2,
"unitPrice": 29.99,
"lineTotal": 59.98
}
],
"subtotal": 59.98,
"discountTotal": 6.00,
"taxAmount": 3.24,
"total": 57.22,
"payments": [
{
"id": "pay_001",
"method": "card",
"amount": 57.22,
"status": "completed",
"authCode": "AUTH123456",
"lastFour": "4242"
}
],
"customerId": "cust_john_doe",
"loyaltyPointsEarned": 57,
"createdAt": "2025-12-29T14:30:00Z",
"createdBy": "usr_cashier1"
}
Errors: 400 Bad Request, 422 Validation Error, 402 Payment Failed
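The totals in the example response reconcile as follows: the 10% order discount on $59.98 rounds to $6.00, the tenant's 6% tax rate applies to the discounted subtotal, and one loyalty point per whole dollar yields 57 points. A worked check (rates taken from the example; cent rounding and the points rule are assumptions):

```typescript
// Round to cents, as the example amounts imply.
const round2 = (n: number): number => Math.round(n * 100) / 100;

const subtotal = 59.98;
const discount = round2(subtotal * 0.10);  // order-level 10% discount
const taxable = subtotal - discount;
const tax = round2(taxable * 0.06);        // tenant tax rate from settings
const total = round2(taxable + tax);
const pointsEarned = Math.floor(total);    // 1 point per whole dollar (assumed)
```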
GET /sales/
Description: Get sale details
Authentication: Bearer token (Cashier+)
Response: 200 OK
GET /sales
Description: List sales with filters
Authentication: Bearer token (Cashier+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| locationId | string | Filter by location |
| registerId | string | Filter by register |
| startDate | datetime | Start of date range |
| endDate | datetime | End of date range |
| customerId | string | Filter by customer |
| status | string | completed, voided, refunded |
| minAmount | decimal | Minimum total |
| maxAmount | decimal | Maximum total |
Response: 200 OK
POST /sales/{saleId}/void
Description: Void a sale (requires manager)
Authentication: Bearer token (Manager+)
Request Body:
{
"reason": "customer_changed_mind",
"managerPin": "1234"
}
Response: 200 OK
{
"id": "ord_xyz789",
"status": "voided",
"voidedAt": "2025-12-29T14:35:00Z",
"voidedBy": "usr_manager1",
"voidReason": "customer_changed_mind",
"refundAmount": 57.22
}
POST /returns
Description: Process a return
Authentication: Bearer token (Cashier+)
Request Body:
{
"originalOrderId": "ord_xyz789",
"originalReceiptNumber": "GM-2025-001234",
"locationId": "loc_gm",
"items": [
{
"originalLineItemId": "li_001",
"variantId": "var_001",
"quantityReturned": 1,
"reason": "wrong_size",
"condition": "resaleable"
}
],
"refundMethod": "original_payment"
}
Response: 201 Created
{
"id": "ret_abc123",
"returnReceiptNumber": "RET-GM-2025-0001",
"originalOrderId": "ord_xyz789",
"items": [
{
"variantId": "var_001",
"quantityReturned": 1,
"refundAmount": 28.61,
"inventoryRestocked": true
}
],
"totalRefund": 28.61,
"refundTransactionId": "refund_001",
"loyaltyPointsDeducted": 29,
"createdAt": "2025-12-29T15:00:00Z"
}
GET /returns/
Description: Get return details
Authentication: Bearer token (Cashier+)
Response: 200 OK
A.9 Domain 8: Customers & Loyalty
GET /customers
Description: List customers
Authentication: Bearer token (Cashier+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| search | string | Search by name, email, phone |
| tier | string | Filter by loyalty tier |
| tag | string | Filter by tag |
| hasEmail | boolean | Has email address |
| page | int | Page number |
| limit | int | Items per page |
Response: 200 OK
{
"data": [
{
"id": "cust_john_doe",
"customerNumber": "CUST-2025-00001",
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com",
"phone": "555-0100",
"loyalty": {
"tier": "gold",
"pointsBalance": 1250,
"lifetimeSpend": 2500.00
},
"tags": ["vip", "birthday_month"],
"createdAt": "2025-01-15T00:00:00Z"
}
]
}
POST /customers
Description: Create a new customer
Authentication: Bearer token (Cashier+)
Request Body:
{
"firstName": "Jane",
"lastName": "Smith",
"email": "jane.smith@example.com",
"phone": "555-0200",
"address": {
"street": "123 Main St",
"city": "Chesapeake",
"state": "VA",
"zip": "23320"
},
"marketingOptIn": true,
"smsOptIn": false,
"enrollInLoyalty": true
}
Response: 201 Created
GET /customers/
Description: Get customer details
Authentication: Bearer token (Cashier+)
Response: 200 OK
{
"id": "cust_john_doe",
"customerNumber": "CUST-2025-00001",
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com",
"phone": "555-0100",
"address": {
"street": "456 Oak Ave",
"city": "Virginia Beach",
"state": "VA",
"zip": "23451"
},
"loyalty": {
"programId": "loyalty_standard",
"tier": "gold",
"pointsBalance": 1250,
"pointsToNextTier": 750,
"lifetimeSpend": 2500.00,
"lifetimePoints": 3000
},
"preferences": {
"marketingOptIn": true,
"smsOptIn": true,
"preferredContactMethod": "email"
},
"tags": ["vip", "birthday_month"],
"purchaseHistory": {
"totalOrders": 25,
"totalSpend": 2500.00,
"averageOrderValue": 100.00,
"lastPurchase": "2025-12-28T14:00:00Z"
},
"createdAt": "2025-01-15T00:00:00Z",
"updatedAt": "2025-12-28T14:00:00Z"
}
PATCH /customers/
Description: Update customer details
Authentication: Bearer token (Cashier+)
Request Body:
{
"phone": "555-0300",
"preferences": {
"smsOptIn": true
}
}
Response: 200 OK
GET /customers/{customerId}/orders
Description: Get customer’s order history
Authentication: Bearer token (Cashier+)
Response: 200 OK
POST /customers/{customerId}/loyalty/redeem
Description: Redeem loyalty points
Authentication: Bearer token (Cashier+)
Request Body:
{
"points": 500,
"orderId": "ord_xyz790"
}
Response: 200 OK
{
"pointsRedeemed": 500,
"discountAmount": 5.00,
"previousBalance": 1250,
"newBalance": 750
}
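The example implies a redemption rate of 100 points per dollar (500 points → $5.00 discount). A sketch of the conversion, assuming that rate:

```typescript
// Rate inferred from the example response above (an assumption, not a
// documented constant): 100 points redeem for one dollar of discount.
const POINTS_PER_DOLLAR = 100;

function redemptionValue(points: number): number {
  return points / POINTS_PER_DOLLAR;
}
```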
POST /customers/merge
Description: Merge duplicate customer records
Authentication: Bearer token (Admin)
Request Body:
{
"survivingCustomerId": "cust_john_doe",
"mergeCustomerIds": ["cust_john_d", "cust_jdoe"],
"conflictResolutions": {
"email": "cust_john_doe"
}
}
Response: 200 OK
A.10 Domain 9: Payments
POST /payments/process
Description: Process a payment
Authentication: Bearer token (Cashier+)
Request Body:
{
"orderId": "ord_xyz789",
"method": "card",
"amount": 57.22,
"token": "tok_visa_4242",
"terminalId": "term_verifone_01"
}
Response: 200 OK
{
"id": "pay_001",
"status": "approved",
"amount": 57.22,
"authorizationCode": "AUTH123456",
"transactionId": "txn_gateway_abc",
"cardBrand": "visa",
"lastFour": "4242",
"entryMethod": "chip",
"batchId": "batch_2025-12-29"
}
Errors: 402 Payment Declined
POST /payments/refund
Description: Process a refund
Authentication: Bearer token (Manager+)
Request Body:
{
"originalPaymentId": "pay_001",
"amount": 28.61,
"reason": "return"
}
Response: 200 OK
GET /payments/batch/
Description: Get payment batch details
Authentication: Bearer token (Manager+)
Response: 200 OK
POST /payments/batch/{batchId}/settle
Description: Settle payment batch
Authentication: Bearer token (Manager+)
Response: 200 OK
A.11 Domain 10: Gift Cards
POST /giftcards
Description: Create/sell a gift card
Authentication: Bearer token (Cashier+)
Request Body:
{
"amount": 50.00,
"purchasedBy": "cust_john_doe",
"recipientEmail": "jane@example.com",
"recipientName": "Jane",
"message": "Happy Birthday!",
"type": "digital"
}
Response: 201 Created
{
"id": "gc_001",
"cardNumber": "6012XXXXXXXXXXXX1234",
"balance": 50.00,
"status": "active",
"expiresAt": null
}
GET /giftcards/{cardNumber}/balance
Description: Check gift card balance
Authentication: Bearer token (Cashier+)
Response: 200 OK
{
"cardNumber": "6012XXXXXXXXXXXX1234",
"balance": 50.00,
"status": "active",
"expiresAt": null
}
POST /giftcards/{cardNumber}/redeem
Description: Redeem gift card for payment
Authentication: Bearer token (Cashier+)
Request Body:
{
"orderId": "ord_xyz790",
"amount": 35.00
}
Response: 200 OK
A.12 Domain 11: Cash Management
POST /shifts/open
Description: Open a new shift
Authentication: Bearer token (Manager+)
Request Body:
{
"registerId": "reg_01",
"openingFloat": 267.50,
"floatBreakdown": {
"bills_20": 5,
"bills_10": 5,
"bills_5": 10,
"bills_1": 50,
"quarters": 40,
"dimes": 50,
"nickels": 40,
"pennies": 50
}
}
Response: 201 Created
{
"id": "shift_001",
"registerId": "reg_01",
"openedAt": "2025-12-29T08:00:00Z",
"openedBy": "usr_manager1",
"openingFloat": 267.50,
"status": "active"
}
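The openingFloat above is simply the dollar sum of the floatBreakdown. A minimal sketch of the reconciliation arithmetic (standard US denominations; the helper name is illustrative, and summing in integer cents avoids floating-point drift):

```python
# Value of each denomination key, in dollars (standard US currency).
DENOMINATIONS = {
    "bills_100": 100.00, "bills_50": 50.00, "bills_20": 20.00,
    "bills_10": 10.00, "bills_5": 5.00, "bills_1": 1.00,
    "quarters": 0.25, "dimes": 0.10, "nickels": 0.05, "pennies": 0.01,
}

def breakdown_total(breakdown: dict) -> float:
    """Sum a floatBreakdown / closingCount object to a dollar total.

    Works in integer cents internally so repeated float addition
    cannot introduce rounding drift.
    """
    cents = sum(round(DENOMINATIONS[k] * 100) * qty
                for k, qty in breakdown.items())
    return cents / 100

# The opening float from the request body above:
opening = {
    "bills_20": 5, "bills_10": 5, "bills_5": 10, "bills_1": 50,
    "quarters": 40, "dimes": 50, "nickels": 40, "pennies": 50,
}
print(breakdown_total(opening))  # → 267.5
```

The same helper verifies a closing count against expectedCash when the shift is reconciled.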
POST /shifts/{shiftId}/close
Description: Close shift and reconcile
Authentication: Bearer token (Manager+)
Request Body:
{
"closingCount": {
"bills_100": 2,
"bills_50": 3,
"bills_20": 15,
"bills_10": 10,
"bills_5": 20,
"bills_1": 75,
"quarters": 80,
"dimes": 100,
"nickels": 80,
"pennies": 100
}
}
Response: 200 OK
{
"id": "shift_001",
"closedAt": "2025-12-29T17:00:00Z",
"expectedCash": 725.50,
"actualCash": 723.00,
"variance": -2.50,
"varianceSeverity": "notable",
"summary": {
"cashSales": 458.00,
"cardSales": 1250.00,
"returns": 45.00,
"paidOuts": 25.00,
"tillDrops": 200.00
}
}
POST /shifts/{shiftId}/till-drop
Description: Record till drop to safe
Authentication: Bearer token (Cashier+)
Request Body:
{
"amount": 200.00,
"breakdown": {
"bills_100": 2
}
}
Response: 201 Created
POST /shifts/{shiftId}/paid-out
Description: Record paid out (petty cash)
Authentication: Bearer token (Manager+)
Request Body:
{
"amount": 25.00,
"category": "office_supplies",
"description": "Printer paper",
"receiptAttached": true
}
Response: 201 Created
GET /shifts/{shiftId}
Description: Get shift details
Authentication: Bearer token (Manager+)
Response: 200 OK
A.13 Domain 12: RFID (Optional Module — Counting Only)
Scope: RFID endpoints support inventory counting operations only. Receiving is handled by the barcode Scanner in the POS Client. See BRD Section 5.16.6 for the Scanner vs RFID distinction.
POST /rfid/tags/print
Description: Queue RFID tags for printing
Authentication: Bearer token (Manager+)
Request Body:
{
"printerId": "printer_zebra_01",
"items": [
{
"variantId": "var_001",
"quantity": 50
}
],
"templateId": "tmpl_standard"
}
Response: 202 Accepted
{
"jobId": "print_job_001",
"status": "queued",
"totalTags": 50
}
GET /rfid/tags/print/{jobId}
Description: Get print job status
Authentication: Bearer token (Manager+)
Response: 200 OK
POST /rfid/scans/sessions
Description: Create a new RFID counting session
Authentication: Bearer token (Cashier+)
Request Body:
{
"locationId": "loc_gm",
"sectionId": "section_a_mens_tops",
"sessionType": "cycle_count",
"notes": "Pre-inventory count for Q4 audit"
}
Session Types: full_inventory, cycle_count, spot_check, find_item
Response: 201 Created
{
"sessionId": "scan_001",
"status": "active",
"startedAt": "2025-12-29T10:00:00Z",
"expectedCount": 505,
"sectionsAvailable": ["section_a_mens_tops", "section_b_mens_bottoms"]
}
POST /rfid/scans/sessions/{sessionId}/join
Description: Join an existing session as an additional operator (multi-operator counting)
Authentication: Bearer token (Cashier+)
Request Body:
{
"operatorId": "user_002",
"deviceId": "device_mc3390r_02",
"assignedSection": "section_b_mens_bottoms"
}
Response: 200 OK
{
"sessionId": "scan_001",
"operatorCount": 3,
"yourSection": "section_b_mens_bottoms",
"sessionStartedAt": "2025-12-29T10:00:00Z"
}
Business Rules:
- Maximum 10 operators per session
- One active session per operator
- Section assignment is advisory (not hardware-enforced)
POST /rfid/scans/sessions/{sessionId}/chunks
Description: Upload scan events in chunks (≤5,000 events per chunk). Idempotent — duplicate (session_id, epc) pairs are deduplicated server-side using UPSERT with highest RSSI kept.
Authentication: Bearer token (Cashier+)
Request Body:
{
"chunkIndex": 0,
"totalChunks": 10,
"operatorId": "user_001",
"deviceId": "device_mc3390r_01",
"events": [
{
"epc": "E28011606000020752345678",
"rssi": -45,
"readCount": 3,
"firstSeenAt": "2025-12-29T10:05:00Z",
"lastSeenAt": "2025-12-29T10:05:12Z"
}
]
}
Response: 200 OK
{
"eventsAccepted": 4892,
"eventsDeduplicated": 108,
"chunksReceived": 1,
"chunksExpected": 10
}
Chunk Size: Maximum 5,000 events per request. For a 100,000-tag session, this requires 20 chunks.
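The chunking rule above can be sketched client-side. A minimal example (field names follow the request body above; the helper name is illustrative):

```python
import math

CHUNK_SIZE = 5000  # maximum events per /chunks request

def split_into_chunks(events: list) -> list:
    """Split a scan-event list into upload chunks of at most CHUNK_SIZE,
    each carrying the chunkIndex/totalChunks bookkeeping the server
    uses to track upload completeness."""
    total = math.ceil(len(events) / CHUNK_SIZE)
    return [
        {"chunkIndex": i,
         "totalChunks": total,
         "events": events[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]}
        for i in range(total)
    ]

# A 100,000-tag session splits into exactly 20 full chunks:
chunks = split_into_chunks([{"epc": f"E2{n:022d}"} for n in range(100_000)])
print(len(chunks))                 # → 20
print(len(chunks[-1]["events"]))   # → 5000
```

Because the endpoint is idempotent on (session_id, epc), a chunk can be re-sent safely after a timeout.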
GET /rfid/scans/sessions/{sessionId}/upload-status
Description: Check upload progress. Used by mobile app to resume after network failure — identifies which chunks are missing so only those need retrying.
Authentication: Bearer token (Cashier+)
Response: 200 OK
{
"sessionId": "scan_001",
"status": "incomplete",
"chunksReceived": [0, 1, 2, 4, 5],
"chunksMissing": [3, 6, 7, 8, 9],
"totalEvents": 14892,
"uniqueEpcs": 14540
}
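A client resuming after a network failure only needs to re-send the chunks the server has not acknowledged. A minimal sketch (helper name illustrative; totalChunks is known client-side from the original split):

```python
def chunks_to_retry(status: dict, total_chunks: int) -> list:
    """Chunks to re-upload after a network failure.

    Prefers the server's own chunksMissing field when present;
    otherwise derives it as the set difference against chunksReceived.
    """
    if "chunksMissing" in status:
        return sorted(status["chunksMissing"])
    return sorted(set(range(total_chunks)) - set(status["chunksReceived"]))

# From the upload-status response above:
status = {"chunksReceived": [0, 1, 2, 4, 5], "chunksMissing": [3, 6, 7, 8, 9]}
print(chunks_to_retry(status, 10))  # → [3, 6, 7, 8, 9]
```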
POST /rfid/scans/sessions/{sessionId}/complete
Description: Complete scan session and trigger variance calculation
Authentication: Bearer token (Cashier+)
Request Body:
{
"endedAt": "2025-12-29T10:30:00Z",
"notes": "Section A complete, 2 unknown tags flagged"
}
Response: 200 OK
{
"sessionId": "scan_001",
"status": "completed",
"summary": {
"totalTagsScanned": 47000,
"uniqueEpcs": 46540,
"expectedCount": 505,
"variance": 7,
"variancePercentage": 1.39,
"reviewRequired": false
},
"completedAt": "2025-12-29T10:30:00Z"
}
Variance Thresholds (configurable per tenant):
| Variance | Action |
|---|---|
| 0% | Auto-approve |
| 1-2% | Review recommended |
| 3-5% | Manager review required |
| > 5% | Recount required |
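The threshold table can be sketched as a simple lookup, treating each range as an inclusive upper bound (a simplifying assumption on our part; the table leaves the 0-1% and 2-3% gaps unspecified, and tenants may configure different cutoffs):

```python
def variance_action(variance_pct: float) -> str:
    """Map an absolute variance percentage to the default review action.

    Thresholds mirror the table above; in the real system they are
    configurable per tenant.
    """
    pct = abs(variance_pct)
    if pct == 0:
        return "auto_approve"
    if pct <= 2:
        return "review_recommended"
    if pct <= 5:
        return "manager_review_required"
    return "recount_required"

# The 1.39% variance from the completed session above:
print(variance_action(1.39))  # → review_recommended
```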
A.14 Domain 13: Sync & Offline
POST /sync/push
Description: Push offline changes to server
Authentication: Bearer token (Cashier+)
Request Body:
{
"deviceId": "dev_pos_01",
"lastSyncTimestamp": "2025-12-29T10:00:00Z",
"events": [
{
"localSequence": 1,
"eventType": "OrderCompleted",
"timestamp": "2025-12-29T10:30:00Z",
"payload": { }
}
],
"inventoryDeltas": [
{
"variantId": "var_001",
"locationId": "loc_gm",
"lastSyncQty": 15,
"delta": -2
}
]
}
Response: 200 OK
{
"success": true,
"syncedEvents": 5,
"conflicts": [
{
"type": "inventory",
"variantId": "var_001",
"resolution": "delta_merged",
"serverValue": 12,
"localDelta": -2,
"resolvedValue": 10
}
],
"serverTimestamp": "2025-12-29T12:00:00Z"
}
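The delta_merged resolution shown above composes the device's signed delta with the server's current value instead of overwriting it, so concurrent offline sales at different registers both survive the merge:

```python
def merge_inventory_delta(server_qty: int, local_delta: int) -> int:
    """Delta-merge conflict resolution for offline inventory.

    The device reports a signed delta relative to its last sync
    rather than an absolute quantity, so two devices that each
    sold stock while offline compose additively rather than one
    clobbering the other.
    """
    return server_qty + local_delta

# From the conflict above: the server moved to 12 while the
# offline device sold 2 units (delta -2).
print(merge_inventory_delta(12, -2))  # → 10
```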
GET /sync/pull
Description: Pull updates from server
Authentication: Bearer token (Cashier+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| since | datetime | Last sync timestamp |
| types | string[] | Event types to pull |
Response: 200 OK
GET /sync/status
Description: Get sync status for device
Authentication: Bearer token (Cashier+)
Response: 200 OK
{
"deviceId": "dev_pos_01",
"lastSync": "2025-12-29T12:00:00Z",
"pendingPush": 0,
"pendingPull": 15,
"status": "synced"
}
A.15 Domain 14: Reports
GET /reports/sales/daily
Description: Daily sales summary
Authentication: Bearer token (Manager+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| date | date | Report date |
| locationId | string | Filter by location |
Response: 200 OK
{
"date": "2025-12-29",
"summary": {
"grossSales": 5250.00,
"discounts": 250.00,
"returns": 150.00,
"netSales": 4850.00,
"tax": 291.00,
"transactionCount": 85,
"averageTicket": 57.06,
"unitsPerTransaction": 2.3
},
"byPaymentMethod": {
"cash": 1250.00,
"card": 3500.00,
"giftCard": 100.00
},
"byCategory": [
{ "category": "Shirts", "sales": 2500.00, "units": 75 },
{ "category": "Pants", "sales": 1500.00, "units": 30 }
],
"topItems": [
{ "sku": "NXP0323", "name": "Classic V-Neck", "units": 25, "sales": 749.75 }
]
}
GET /reports/inventory/valuation
Description: Inventory valuation report
Authentication: Bearer token (Manager+)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| locationId | string | Filter by location |
| asOfDate | date | Valuation date |
Response: 200 OK
GET /reports/employees/timeclock
Description: Employee time clock report
Authentication: Bearer token (Manager+)
Response: 200 OK
GET /reports/customers/loyalty
Description: Loyalty program report
Authentication: Bearer token (Manager+)
Response: 200 OK
A.16 Webhooks
Configuring Webhooks
Description: Register webhook endpoints
Authentication: Bearer token (Admin)
Request Body:
{
"url": "https://your-server.com/webhooks",
"events": [
"order.completed",
"order.refunded",
"inventory.low_stock",
"customer.created"
],
"secret": "whsec_your_secret_key"
}
Webhook Events
| Event | Description |
|---|---|
| order.completed | Sale completed |
| order.voided | Sale voided |
| order.refunded | Return processed |
| inventory.low_stock | Below reorder point |
| inventory.adjusted | Manual adjustment |
| customer.created | New customer |
| customer.updated | Customer modified |
| sync.conflict | Offline conflict detected |
Webhook Payload Format
{
"id": "evt_webhook_001",
"type": "order.completed",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"data": {
"orderId": "ord_xyz789",
"orderNumber": "ORD-2025-00001",
"total": 57.22
}
}
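Receivers should verify each delivery against the secret registered in A.16 before trusting the payload. A minimal sketch, assuming an HMAC-SHA256 hex digest of the raw request body delivered in a signature header (the exact header name and signing scheme are not specified in this appendix):

```python
import hashlib
import hmac

def verify_webhook(payload_bytes: bytes, signature: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare in
    constant time (compare_digest) to defeat timing attacks.

    Verify against the raw bytes, not re-serialized JSON: any
    re-serialization can change whitespace/key order and break
    the digest.
    """
    expected = hmac.new(secret.encode(), payload_bytes,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulate a delivery signed with the registered secret:
body = b'{"id":"evt_webhook_001","type":"order.completed"}'
secret = "whsec_your_secret_key"
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, secret))  # → True
```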
A.17 Rate Limits
| Endpoint Type | Rate Limit |
|---|---|
| Authentication | 10 requests/minute |
| Read operations | 1000 requests/minute |
| Write operations | 100 requests/minute |
| Bulk operations | 10 requests/minute |
| Webhooks | 1000 events/minute |
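Clients that exceed these limits receive HTTP 429 and should back off before retrying. A minimal retry sketch (the Retry-After header and the shape of the send callable are assumptions for illustration, not part of this API specification):

```python
import random
import time

def call_with_backoff(send, max_attempts: int = 5):
    """Retry a request on HTTP 429 with exponential backoff and jitter.

    `send` is any callable returning (status_code, headers, body).
    Honors a Retry-After header when the server provides one;
    otherwise falls back to 2**attempt seconds.
    """
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        delay = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herd
    raise RuntimeError("rate limit: retries exhausted")

# Simulated: one throttled response, then success.
responses = iter([(429, {"Retry-After": "0"}, ""), (200, {}, "ok")])
print(call_with_backoff(lambda: next(responses)))  # → (200, 'ok')
```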
A.18 Additional Endpoint References
Note (v5.0.0): The following endpoint groups are defined in Chapter 05 (Architecture Components) and are not fully duplicated here. Refer to the source chapter for complete request/response schemas.
- Tax Jurisdictions (/api/v1/tax-jurisdictions, /api/v1/tax-rates): Compound tax configuration with 3-level (State/County/City) support. See Chapter 05 Section 1.17 for the full specification.
- RFID Configuration & Counting (/api/v1/rfid/*): Tag templates, tag mappings, counting sessions, chunked sync upload. See Chapter 05 Section 5.16 for the complete RFID API specification, and Domain 12 (Section A.13) above for the endpoints already documented here.
- Integration Sync (/api/v1/integrations/*): Shopify, Amazon SP-API, Google Merchant channel sync endpoints. See Chapter 13 (Integrations) for the full specification.
A.19 API Versioning
The API uses URL versioning:
- Current version: v1
- URL format: /api/v1/{resource}
- Deprecated versions are supported for 12 months
- Version header: X-API-Version: 2025-12-29
This API reference covers 75+ endpoints across 14 domains. For additional details, see the OpenAPI specification at /api/v1/docs.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix A |
This appendix is part of the POS Blueprint Book. All content is self-contained.
Appendix B: Database Entity Relationship Diagram
Version: 5.0.0 Last Updated: February 25, 2026 Database: PostgreSQL 16 Total Tables: 51+
B.1 Overview
Note (v5.0.0): The POS Platform uses Row-Level Security (RLS) with tenant_id columns, NOT Schema-Per-Tenant. See Chapter 07 for the current database strategy and Chapter 05 Module 5 for tenant management. The ERD diagrams below should be interpreted with this context: all tenant-scoped tables include a tenant_id UUID NOT NULL column with RLS policies enforcing isolation.
Zone fields removed (BRD v19): Zone tracking has been removed as of BRD v19. Any zone-related fields shown in the diagrams below are no longer part of the current schema. See Chapter 05 Decision #107.
Additional tables not shown: tax_jurisdictions, tax_rates, rfid_tag_templates, rfid_tag_mappings, session_operators, register_ip_changes. See Chapter 05 and Chapter 09 for complete table definitions.
This appendix contains the complete Entity Relationship Diagram (ERD) for the POS Platform database. The schema is organized by domain with Row-Level Security (RLS) tenant isolation.
B.2 Schema Organization
pos_platform (database)
|
+-- shared tables
|     Contains: tenants, modules, system settings
|
+-- tenant-scoped tables (all other domains)
      Contains: every remaining table, each carrying a
      tenant_id UUID NOT NULL column; RLS policies enforce
      per-tenant isolation (see Chapter 07)
B.3 Complete Entity Relationship Diagram
╔═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
║ POS PLATFORM - COMPLETE ENTITY RELATIONSHIP DIAGRAM ║
║ 51 Tables | 14 Domains ║
╠═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 1: MULTI-TENANCY (shared schema) ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ tenants │ │ tenant_modules │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │───────────│ PK id UUID │ ║ ║
║ ║ │ name VARCHAR(100) │ 1:N │ FK tenant_id UUID │──┐ ║ ║
║ ║ │ subdomain VARCHAR(50) │ │ module_code VARCHAR │ │ ║ ║
║ ║ │ status ENUM │ │ enabled BOOLEAN │ │ ┌──────────────────────────┐ ║ ║
║ ║ │ plan ENUM │ │ config JSONB │ │ │ system_settings │ ║ ║
║ ║ │ schema_name VARCHAR │ │ activated_at TIMESTP │ │ ├──────────────────────────┤ ║ ║
║ ║ │ settings JSONB │ └──────────────────────────┘ ├────►│ PK id UUID │ ║ ║
║ ║ │ created_at TIMESTAMP │ │ │ FK tenant_id UUID │ ║ ║
║ ║ │ updated_at TIMESTAMP │ │ │ key VARCHAR(100) │ ║ ║
║ ║ └──────────────────────────┘ │ │ value JSONB │ ║ ║
║ ║ │ │ updated_at TIMESTAMP │ ║ ║
║ ║ │ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ tenant_id (implicit via schema) ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 2: LOCATIONS & REGISTERS ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ locations │ │ registers │ │ operating_hours │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ code VARCHAR(10) │ 1:N │ FK location_id UUID │ │ FK location_id UUID │◄──┤ ║ ║
║ ║ │ name VARCHAR(100) │ │ name VARCHAR(50) │ │ day_of_week INT │ ║ ║
║ ║ │ type ENUM │ │ status ENUM │ │ open_time TIME │ ║ ║
║ ║ │ status ENUM │ │ terminal_id VARCHAR │ │ close_time TIME │ ║ ║
║ ║ │ address_line1 VARCHAR │ │ last_active TIMESTAMP │ │ is_closed BOOLEAN │ ║ ║
║ ║ │ address_line2 VARCHAR │ │ config JSONB │ └──────────────────────────┘ ║ ║
║ ║ │ city VARCHAR(100) │ └──────────────────────────┘ ║ ║
║ ║ │ state VARCHAR(50) │ ║ ║
║ ║ │ zip VARCHAR(20) │ ║ ║
║ ║ │ country VARCHAR(2) │ ║ ║
║ ║ │ phone VARCHAR(20) │ ║ ║
║ ║ │ timezone VARCHAR(50) │ ║ ║
║ ║ │ shopify_location_id │ ║ ║
║ ║ │ settings JSONB │ ║ ║
║ ║ │ created_at TIMESTAMP │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ║ │ ║ ║
║ ╚══════════════╪═══════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ location_id ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 3: USERS & EMPLOYEES ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ users │ │ user_permissions │ │ user_sessions │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ email VARCHAR(255) │ 1:N │ FK user_id UUID │ │ FK user_id UUID │◄──┤ ║ ║
║ ║ │ password_hash VARCHAR │ │ permission VARCHAR │ │ token_hash VARCHAR │ ║ ║
║ ║ │ first_name VARCHAR │ │ granted_by UUID │ │ device_info JSONB │ ║ ║
║ ║ │ last_name VARCHAR │ │ granted_at TIMESTAMP │ │ ip_address INET │ ║ ║
║ ║ │ role ENUM │ └──────────────────────────┘ │ expires_at TIMESTAMP │ ║ ║
║ ║ │ pin_hash VARCHAR │ │ created_at TIMESTAMP │ ║ ║
║ ║ │ FK home_location_id UUID │◄─────────────────────────────────────────────└──────────────────────────┘ ║ ║
║ ║ │ status ENUM │ ║ ║
║ ║ │ last_login TIMESTAMP │ ┌──────────────────────────┐ ║ ║
║ ║ │ created_at TIMESTAMP │ │ time_clock_entries │ ║ ║
║ ║ └──────────────────────────┘ ├──────────────────────────┤ ║ ║
║ ║ │ │ PK id UUID │ ║ ║
║ ║ │ │ FK user_id UUID │◄──────────────────────────────────────┤ ║ ║
║ ║ │ │ FK location_id UUID │ ║ ║
║ ║ │ │ clock_in TIMESTAMP │ ║ ║
║ ║ │ │ clock_out TIMESTAMP │ ║ ║
║ ║ │ │ break_minutes INT │ ║ ║
║ ║ │ │ status ENUM │ ║ ║
║ ║ │ │ notes TEXT │ ║ ║
║ ║ │ └──────────────────────────┘ ║ ║
║ ╚══════════════╪═══════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ user_id ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 4: PRODUCTS & CATALOG ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ categories │ │ products │ │ product_variants │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │◄──────────│ PK id UUID │──────►│ PK id UUID │ ║ ║
║ ║ │ FK parent_id UUID (self) │ N:1 │ sku VARCHAR(50) │ 1:N │ FK product_id UUID │ ║ ║
║ ║ │ name VARCHAR(100) │ │ name VARCHAR(255) │ │ sku VARCHAR(50) │ ║ ║
║ ║ │ slug VARCHAR(100) │ │ description TEXT │ │ barcode VARCHAR(50) │ ║ ║
║ ║ │ sort_order INT │ │ FK category_id UUID │ │ options JSONB │ ║ ║
║ ║ │ is_active BOOLEAN │ │ FK vendor_id UUID │ │ price DECIMAL(10,2) │ ║ ║
║ ║ └──────────────────────────┘ │ base_price DECIMAL │ │ compare_price DECIMAL │ ║ ║
║ ║ │ cost DECIMAL(10,2) │ │ cost DECIMAL(10,2) │ ║ ║
║ ║ ┌──────────────────────────┐ │ tax_class VARCHAR │ │ weight DECIMAL │ ║ ║
║ ║ │ vendors │ │ status ENUM │ │ is_active BOOLEAN │ ║ ║
║ ║ ├──────────────────────────┤ │ shopify_product_id │ │ shopify_variant_id │ ║ ║
║ ║ │ PK id UUID │◄──────────│ created_at TIMESTAMP │ │ created_at TIMESTAMP │ ║ ║
║ ║ │ name VARCHAR(100) │ N:1 └──────────────────────────┘ └──────────────────────────┘ ║ ║
║ ║ │ code VARCHAR(20) │ │ │ ║ ║
║ ║ │ contact_name VARCHAR │ │ │ ║ ║
║ ║ │ email VARCHAR(255) │ │ │ ║ ║
║ ║ │ phone VARCHAR(20) │ ▼ ▼ ║ ║
║ ║ │ address JSONB │ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ payment_terms VARCHAR │ │ product_images │ │ variant_prices │ ║ ║
║ ║ │ is_active BOOLEAN │ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ └──────────────────────────┘ │ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ FK product_id UUID │ │ FK variant_id UUID │ ║ ║
║ ║ │ url VARCHAR(500) │ │ FK price_list_id UUID │ ║ ║
║ ║ │ alt_text VARCHAR │ │ price DECIMAL(10,2) │ ║ ║
║ ║ │ position INT │ │ effective_from DATE │ ║ ║
║ ║ └──────────────────────────┘ │ effective_to DATE │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ variant_id ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 5: INVENTORY ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ inventory_levels │ │ inventory_transactions │ │ inventory_reservations │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │ │ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ FK variant_id UUID │◄──────────│ FK variant_id UUID │ │ FK variant_id UUID │◄──┤ ║ ║
║ ║ │ FK location_id UUID │ 1:N │ FK location_id UUID │ │ FK location_id UUID │ ║ ║
║ ║ │ on_hand INT │ │ transaction_type ENUM │ │ FK order_id UUID │ ║ ║
║ ║ │ available INT │ │ quantity INT │ │ quantity INT │ ║ ║
║ ║ │ reserved INT │ │ previous_qty INT │ │ expires_at TIMESTAMP │ ║ ║
║ ║ │ reorder_point INT │ │ new_qty INT │ │ status ENUM │ ║ ║
║ ║ │ reorder_qty INT │ │ reference_type VARCHAR│ │ created_at TIMESTAMP │ ║ ║
║ ║ │ bin_location VARCHAR │ │ reference_id UUID │ └──────────────────────────┘ ║ ║
║ ║ │ updated_at TIMESTAMP │ │ cost DECIMAL(10,2) │ ║ ║
║ ║ │ UK (variant_id, loc_id) │ │ notes TEXT │ ║ ║
║ ║ └──────────────────────────┘ │ FK created_by UUID │ ║ ║
║ ║ │ │ created_at TIMESTAMP │ ║ ║
║ ║ │ └──────────────────────────┘ ║ ║
║ ║ │ ║ ║
║ ║ │ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ │ inventory_transfers │ │ transfer_line_items │ ║ ║
║ ║ │ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ │ PK id UUID │──────►│ PK id UUID │ ║ ║
║ ║ │ │ FK from_location_id UUID │ 1:N │ FK transfer_id UUID │ ║ ║
║ ║ │ │ FK to_location_id UUID │ │ FK variant_id UUID │ ║ ║
║ ║ └──────────►│ status ENUM │ │ qty_requested INT │ ║ ║
║ ║ │ priority ENUM │ │ qty_shipped INT │ ║ ║
║ ║ │ tracking_number VARCH │ │ qty_received INT │ ║ ║
║ ║ │ carrier VARCHAR │ │ qty_damaged INT │ ║ ║
║ ║ │ FK requested_by UUID │ └──────────────────────────┘ ║ ║
║ ║ │ FK shipped_by UUID │ ║ ║
║ ║ │ FK received_by UUID │ ║ ║
║ ║ │ shipped_at TIMESTAMP │ ║ ║
║ ║ │ received_at TIMESTAMP │ ║ ║
║ ║ │ created_at TIMESTAMP │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ variant_id, location_id ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 6: ORDERS & SALES ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ orders │ │ order_line_items │ │ order_discounts │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ order_number VARCHAR │ 1:N │ FK order_id UUID │ │ FK order_id UUID │◄──┤ ║ ║
║ ║ │ receipt_number VARCHAR│ │ FK variant_id UUID │ │ FK line_item_id UUID │ ║ ║
║ ║ │ FK location_id UUID │ │ sku VARCHAR │ │ discount_type ENUM │ ║ ║
║ ║ │ FK register_id UUID │ │ name VARCHAR │ │ discount_value DECIMAL│ ║ ║
║ ║ │ FK customer_id UUID │ │ quantity INT │ │ discount_amount DECIM │ ║ ║
║ ║ │ FK created_by UUID │ │ unit_price DECIMAL │ │ code VARCHAR │ ║ ║
║ ║ │ status ENUM │ │ discount_amount DECIM │ │ reason VARCHAR │ ║ ║
║ ║ │ subtotal DECIMAL │ │ tax_amount DECIMAL │ └──────────────────────────┘ ║ ║
║ ║ │ discount_total DECIM │ │ line_total DECIMAL │ ║ ║
║ ║ │ tax_total DECIMAL │ │ cost DECIMAL │ ║ ║
║ ║ │ total DECIMAL(10,2) │ │ fulfillment_status EN │ ║ ║
║ ║ │ channel ENUM │ └──────────────────────────┘ ║ ║
║ ║ │ source VARCHAR │ │ ║ ║
║ ║ │ notes TEXT │ │ ║ ║
║ ║ │ metadata JSONB │ │ ║ ║
║ ║ │ voided_at TIMESTAMP │ │ ║ ║
║ ║ │ FK voided_by UUID │ ▼ ║ ║
║ ║ │ void_reason VARCHAR │ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ created_at TIMESTAMP │ │ returns │ │ return_line_items │ ║ ║
║ ║ │ completed_at TIMESTP │ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ └──────────────────────────┘ │ PK id UUID │──────►│ PK id UUID │ ║ ║
║ ║ │ │ return_number VARCHAR │ 1:N │ FK return_id UUID │ ║ ║
║ ║ │ │ FK original_order_id UUID│ │ FK original_line_id UUID │ ║ ║
║ ║ │ │ FK location_id UUID │ │ FK variant_id UUID │ ║ ║
║ ║ │ │ FK customer_id UUID │ │ quantity INT │ ║ ║
║ ║ │ │ FK processed_by UUID │ │ refund_amount DECIMAL │ ║ ║
║ ║ │ │ status ENUM │ │ reason ENUM │ ║ ║
║ ║ │ │ refund_total DECIMAL │ │ condition ENUM │ ║ ║
║ ║ │ │ refund_method ENUM │ │ restocked BOOLEAN │ ║ ║
║ ║ │ │ created_at TIMESTAMP │ └──────────────────────────┘ ║ ║
║ ║ │ └──────────────────────────┘ ║ ║
║ ╚══════════════╪═══════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ │ ║
║ │ order_id ║
║ ▼ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 7: PAYMENTS ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ payments │ │ payment_refunds │ │ payment_batches │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ FK order_id UUID │ 1:N │ FK payment_id UUID │◄──────│ FK location_id UUID │ ║ ║
║ ║ │ payment_method ENUM │ │ FK return_id UUID │ N:1 │ batch_date DATE │ ║ ║
║ ║ │ amount DECIMAL(10,2) │ │ amount DECIMAL │ │ status ENUM │ ║ ║
║ ║ │ status ENUM │ │ status ENUM │ │ total_amount DECIMAL │ ║ ║
║ ║ │ authorization_code │ │ gateway_refund_id │ │ transaction_count INT │ ║ ║
║ ║ │ gateway_transaction_id│ │ created_at TIMESTAMP │ │ settled_at TIMESTAMP │ ║ ║
║ ║ │ card_brand VARCHAR │ └──────────────────────────┘ │ created_at TIMESTAMP │ ║ ║
║ ║ │ card_last_four VARCHAR│ └──────────────────────────┘ ║ ║
║ ║ │ entry_method ENUM │ │ ║ ║
║ ║ │ terminal_id VARCHAR │ │ ║ ║
║ ║ │ FK batch_id UUID │◄─────────────────────────────────────────────────────────┘ ║ ║
║ ║ │ tip_amount DECIMAL │ ║ ║
║ ║ │ metadata JSONB │ ║ ║
║ ║ │ created_at TIMESTAMP │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 8: CUSTOMERS & LOYALTY ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ customers │ │ loyalty_transactions │ │ customer_tags │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ customer_number VARCH │ 1:N │ FK customer_id UUID │ │ FK customer_id UUID │◄──┤ ║ ║
║ ║ │ first_name VARCHAR │ │ FK order_id UUID │ │ FK tag_id UUID │ ║ ║
║ ║ │ last_name VARCHAR │ │ transaction_type ENUM │ │ applied_at TIMESTAMP │ ║ ║
║ ║ │ email VARCHAR(255) │ │ points INT │ │ expires_at TIMESTAMP │ ║ ║
║ ║ │ phone VARCHAR(20) │ │ balance_after INT │ │ applied_by UUID │ ║ ║
║ ║ │ address JSONB │ │ description VARCHAR │ └──────────────────────────┘ ║ ║
║ ║ │ loyalty_tier ENUM │ │ created_at TIMESTAMP │ ║ ║
║ ║ │ loyalty_points INT │ └──────────────────────────┘ ┌──────────────────────────┐ ║ ║
║ ║ │ lifetime_spend DECIM │ │ tags │ ║ ║
║ ║ │ total_orders INT │ ├──────────────────────────┤ ║ ║
║ ║ │ marketing_opt_in BOOL │ │ PK id UUID │ ║ ║
║ ║ │ sms_opt_in BOOLEAN │ ┌──────────────────────────┐ │ name VARCHAR(50) │ ║ ║
║ ║ │ tax_exempt BOOLEAN │ │ customer_notes │ │ category VARCHAR │ ║ ║
║ ║ │ notes TEXT │ ├──────────────────────────┤ │ color VARCHAR(7) │ ║ ║
║ ║ │ metadata JSONB │ │ PK id UUID │ │ is_auto BOOLEAN │ ║ ║
║ ║ │ created_at TIMESTAMP │──────────►│ FK customer_id UUID │ └──────────────────────────┘ ║ ║
║ ║ │ updated_at TIMESTAMP │ 1:N │ FK created_by UUID │ ║ ║
║ ║ └──────────────────────────┘ │ note TEXT │ ║ ║
║ ║ │ created_at TIMESTAMP │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 9: GIFT CARDS ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ gift_cards │ │ gift_card_transactions │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ ║ ║
║ ║ │ card_number VARCHAR │ 1:N │ FK gift_card_id UUID │ ║ ║
║ ║ │ card_number_hash VARCH│ │ FK order_id UUID │ ║ ║
║ ║ │ initial_balance DECIM │ │ transaction_type ENUM │ ║ ║
║ ║ │ current_balance DECIM │ │ amount DECIMAL │ ║ ║
║ ║ │ status ENUM │ │ balance_after DECIMAL │ ║ ║
║ ║ │ type ENUM │ │ reference VARCHAR │ ║ ║
║ ║ │ purchased_at TIMESTAMP│ │ created_at TIMESTAMP │ ║ ║
║ ║ │ FK purchased_by UUID │ └──────────────────────────┘ ║ ║
║ ║ │ FK purchase_order_id UUID│ ║ ║
║ ║ │ recipient_email VARCH │ ║ ║
║ ║ │ recipient_name VARCHAR│ ║ ║
║ ║ │ message TEXT │ ║ ║
║ ║ │ expires_at TIMESTAMP │ ║ ║
║ ║ │ created_at TIMESTAMP │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 10: CASH MANAGEMENT ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ shifts │ │ cash_movements │ │ cash_counts │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │──────────►│ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ FK register_id UUID │ 1:N │ FK shift_id UUID │ │ FK shift_id UUID │◄──┤ ║ ║
║ ║ │ FK opened_by UUID │ │ movement_type ENUM │ │ count_type ENUM │ ║ ║
║ ║ │ FK closed_by UUID │ │ amount DECIMAL │ │ expected DECIMAL │ ║ ║
║ ║ │ status ENUM │ │ FK performed_by UUID │ │ actual DECIMAL │ ║ ║
║ ║ │ opening_float DECIMAL │ │ FK witnessed_by UUID │ │ variance DECIMAL │ ║ ║
║ ║ │ expected_cash DECIMAL │ │ reason VARCHAR │ │ breakdown JSONB │ ║ ║
║ ║ │ actual_cash DECIMAL │ │ reference_number VARC │ │ FK counted_by UUID │ ║ ║
║ ║ │ variance DECIMAL │ │ notes TEXT │ │ counted_at TIMESTAMP │ ║ ║
║ ║ │ opened_at TIMESTAMP │ │ created_at TIMESTAMP │ │ notes TEXT │ ║ ║
║ ║ │ closed_at TIMESTAMP │ └──────────────────────────┘ └──────────────────────────┘ ║ ║
║ ║ │ notes TEXT │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 11: RFID ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ rfid_tags │ │ rfid_scan_sessions │ │ rfid_scans │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │ │ PK id UUID │──────►│ PK id UUID │ ║ ║
║ ║ │ epc VARCHAR(64) │ │ FK location_id UUID │ 1:N │ FK session_id UUID │ ║ ║
║ ║ │ FK variant_id UUID │ │ zone_id VARCHAR │ │ FK tag_id UUID │ ║ ║
║ ║ │ serial_number BIGINT │ │ session_type ENUM │ │ epc VARCHAR(64) │ ║ ║
║ ║ │ status ENUM │ │ FK started_by UUID │ │ rssi INT │ ║ ║
║ ║ │ FK current_location UUID │ │ FK completed_by UUID │ │ antenna_id INT │ ║ ║
║ ║ │ FK printed_at_location │ │ status ENUM │ │ read_count INT │ ║ ║
║ ║ │ printed_at TIMESTAMP │ │ started_at TIMESTAMP │ │ first_seen TIMESTAMP │ ║ ║
║ ║ │ FK printed_by UUID │ │ completed_at TIMESTAMP│ │ last_seen TIMESTAMP │ ║ ║
║ ║ │ last_seen_at TIMESTP │ │ summary JSONB │ └──────────────────────────┘ ║ ║
║ ║ │ created_at TIMESTAMP │ └──────────────────────────┘ ║ ║
║ ║ │ UK epc │ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 12: EVENTS & SYNC ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ domain_events │ │ sync_queue │ │ conflict_resolutions │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │ │ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ event_type VARCHAR │ │ device_id VARCHAR │ │ conflict_type VARCHAR │ ║ ║
║ ║ │ aggregate_type VARCHAR│ │ direction ENUM │ │ entity_type VARCHAR │ ║ ║
║ ║ │ aggregate_id UUID │ │ event_type VARCHAR │ │ entity_id UUID │ ║ ║
║ ║ │ payload JSONB │ │ payload JSONB │ │ server_value JSONB │ ║ ║
║ ║ │ correlation_id UUID │ │ local_sequence INT │ │ local_value JSONB │ ║ ║
║ ║ │ causation_id UUID │ │ status ENUM │ │ resolved_value JSONB │ ║ ║
║ ║ │ version INT │ │ attempts INT │ │ resolution_method EN │ ║ ║
║ ║ │ created_at TIMESTAMP │ │ last_attempt TIMESTP │ │ FK resolved_by UUID │ ║ ║
║ ║ │ IX (aggregate_type, id) │ │ error_message TEXT │ │ resolved_at TIMESTAMP │ ║ ║
║ ║ │ IX (created_at) │ │ created_at TIMESTAMP │ │ notes TEXT │ ║ ║
║ ║ └──────────────────────────┘ └──────────────────────────┘ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
║ ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════╗ ║
║ ║ DOMAIN 13: AUDIT & LOGS ║ ║
║ ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════╣ ║
║ ║ ║ ║
║ ║ ┌──────────────────────────┐ ┌──────────────────────────┐ ║ ║
║ ║ │ audit_logs │ │ api_request_logs │ ║ ║
║ ║ ├──────────────────────────┤ ├──────────────────────────┤ ║ ║
║ ║ │ PK id UUID │ │ PK id UUID │ ║ ║
║ ║ │ action VARCHAR(50) │ │ method VARCHAR(10) │ ║ ║
║ ║ │ entity_type VARCHAR │ │ path VARCHAR(500) │ ║ ║
║ ║ │ entity_id UUID │ │ status_code INT │ ║ ║
║ ║ │ old_values JSONB │ │ duration_ms INT │ ║ ║
║ ║ │ new_values JSONB │ │ FK user_id UUID │ ║ ║
║ ║ │ FK performed_by UUID │ │ ip_address INET │ ║ ║
║ ║ │ ip_address INET │ │ user_agent VARCHAR │ ║ ║
║ ║ │ user_agent VARCHAR │ │ request_body JSONB │ ║ ║
║ ║ │ created_at TIMESTAMP │ │ created_at TIMESTAMP │ ║ ║
║ ║ │ IX (entity_type, id) │ │ IX (created_at) │ ║ ║
║ ║ │ IX (performed_by) │ │ IX (user_id) │ ║ ║
║ ║ │ IX (created_at) │ └──────────────────────────┘ ║ ║
║ ║ └──────────────────────────┘ ║ ║
║ ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ ║
║ ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
B.4 Table Summary by Domain
| Domain | Tables | Primary Tables |
|---|---|---|
| 1. Multi-Tenancy | 3 | tenants, tenant_modules, system_settings |
| 2. Locations | 3 | locations, registers, operating_hours |
| 3. Users | 4 | users, user_permissions, user_sessions, time_clock_entries |
| 4. Products | 6 | categories, vendors, products, product_variants, product_images, variant_prices |
| 5. Inventory | 5 | inventory_levels, inventory_transactions, inventory_reservations, inventory_transfers, transfer_line_items |
| 6. Orders | 6 | orders, order_line_items, order_discounts, returns, return_line_items |
| 7. Payments | 3 | payments, payment_refunds, payment_batches |
| 8. Customers | 5 | customers, loyalty_transactions, customer_tags, tags, customer_notes |
| 9. Gift Cards | 2 | gift_cards, gift_card_transactions |
| 10. Cash | 3 | shifts, cash_movements, cash_counts |
| 11. RFID | 3 | rfid_tags, rfid_scan_sessions, rfid_scans |
| 12. Events | 3 | domain_events, sync_queue, conflict_resolutions |
| 13. Audit | 2 | audit_logs, api_request_logs |
| TOTAL | 51 | |
B.5 Key Relationships
One-to-Many (1:N)
| Parent | Child | Foreign Key |
|---|---|---|
| tenants | tenant_modules | tenant_id |
| locations | registers | location_id |
| locations | operating_hours | location_id |
| users | user_permissions | user_id |
| users | user_sessions | user_id |
| users | time_clock_entries | user_id |
| categories | categories (self) | parent_id |
| categories | products | category_id |
| vendors | products | vendor_id |
| products | product_variants | product_id |
| products | product_images | product_id |
| product_variants | inventory_levels | variant_id |
| product_variants | inventory_transactions | variant_id |
| product_variants | order_line_items | variant_id |
| orders | order_line_items | order_id |
| orders | order_discounts | order_id |
| orders | payments | order_id |
| orders | returns | original_order_id |
| returns | return_line_items | return_id |
| payments | payment_refunds | payment_id |
| payment_batches | payments | batch_id |
| customers | orders | customer_id |
| customers | loyalty_transactions | customer_id |
| customers | customer_tags | customer_id |
| customers | customer_notes | customer_id |
| gift_cards | gift_card_transactions | gift_card_id |
| shifts | cash_movements | shift_id |
| shifts | cash_counts | shift_id |
| inventory_transfers | transfer_line_items | transfer_id |
| rfid_scan_sessions | rfid_scans | session_id |
Many-to-Many (M:N)
| Table A | Junction | Table B |
|---|---|---|
| customers | customer_tags | tags |
| product_variants | variant_prices | price_lists |
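A junction table carries one row per pairing and typically nothing else but timestamps. The sketch below shows how `customer_tags` could be declared; the `created_at` column and `ON DELETE CASCADE` behavior are illustrative assumptions, not taken from the schema pages above.

-- Hypothetical DDL for the customers <-> tags junction table
CREATE TABLE customer_tags (
    customer_id UUID NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
    tag_id      UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
    created_at  TIMESTAMP NOT NULL DEFAULT now(),
    -- one row per pairing; the composite PK doubles as the lookup index
    PRIMARY KEY (customer_id, tag_id)
);

A second index on `(tag_id, customer_id)` would serve the reverse lookup (all customers with a given tag) if that query path matters.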
B.6 Indexes
Critical Performance Indexes
-- Orders lookup
CREATE INDEX idx_orders_location_date ON orders(location_id, created_at DESC);
CREATE INDEX idx_orders_customer ON orders(customer_id);
CREATE INDEX idx_orders_receipt ON orders(receipt_number);
-- Inventory queries
CREATE INDEX idx_inventory_levels_variant_location
ON inventory_levels(variant_id, location_id);
CREATE INDEX idx_inventory_levels_location_reorder
ON inventory_levels(location_id) WHERE on_hand <= reorder_point;
-- Product search
CREATE INDEX idx_products_sku ON products(sku);
CREATE INDEX idx_product_variants_barcode ON product_variants(barcode);
CREATE INDEX idx_products_search ON products USING gin(to_tsvector('english', name));
-- Customer lookup
CREATE INDEX idx_customers_email ON customers(lower(email));
CREATE INDEX idx_customers_phone ON customers(phone);
CREATE INDEX idx_customers_search ON customers
USING gin(to_tsvector('english', first_name || ' ' || last_name));
-- Event sourcing
CREATE INDEX idx_domain_events_aggregate ON domain_events(aggregate_type, aggregate_id);
CREATE INDEX idx_domain_events_created ON domain_events(created_at);
-- Audit trail
CREATE INDEX idx_audit_logs_entity ON audit_logs(entity_type, entity_id);
CREATE INDEX idx_audit_logs_user ON audit_logs(performed_by);
CREATE INDEX idx_audit_logs_time ON audit_logs(created_at DESC);
-- RFID
CREATE UNIQUE INDEX idx_rfid_tags_epc ON rfid_tags(epc);
CREATE INDEX idx_rfid_tags_variant ON rfid_tags(variant_id);
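Note that `idx_customers_email` is an expression index: PostgreSQL will only consider it when the query predicate matches the indexed expression. A quick sketch of the difference:

-- Uses idx_customers_email: the predicate matches the indexed
-- expression lower(email), and the right-hand side folds to a constant.
SELECT id, first_name, last_name
FROM customers
WHERE lower(email) = lower('John.Doe@example.com');

-- Would NOT use the expression index: a bare equality on email
-- compares the raw column value, which was never indexed.
-- WHERE email = 'John.Doe@example.com'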
B.7 Partitioning Strategy
Time-Based Partitioning
-- Orders partitioned by month
CREATE TABLE orders (
id UUID,
created_at TIMESTAMP,
-- other columns
) PARTITION BY RANGE (created_at);
CREATE TABLE orders_2025_01 PARTITION OF orders
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE orders_2025_02 PARTITION OF orders
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- etc.
-- Domain events partitioned by month
CREATE TABLE domain_events (
id UUID,
created_at TIMESTAMP,
-- other columns
) PARTITION BY RANGE (created_at);
-- Audit logs partitioned by month
CREATE TABLE audit_logs (
id UUID,
created_at TIMESTAMP,
-- other columns
) PARTITION BY RANGE (created_at);
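A row whose `created_at` falls outside every declared range fails to insert. One way to guard against that (an operational choice, not prescribed above) is a `DEFAULT` catch-all partition, with monthly partitions still created ahead of time:

-- Catch-all for rows that arrive before their monthly partition exists.
-- Caveat: once a DEFAULT partition holds rows, creating a new monthly
-- partition forces a scan of the default to check for overlapping rows.
CREATE TABLE orders_default PARTITION OF orders DEFAULT;

CREATE TABLE orders_2025_03 PARTITION OF orders
    FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');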
B.8 Constraints Summary
Unique Constraints
| Table | Columns | Purpose |
|---|---|---|
| tenants | subdomain | Unique tenant subdomain |
| locations | code | Unique location code per tenant |
| users | email | Unique user email per tenant |
| products | sku | Unique SKU per tenant |
| product_variants | sku | Unique variant SKU per tenant |
| product_variants | barcode | Unique barcode per tenant |
| orders | order_number | Unique order number per tenant |
| orders | receipt_number | Unique receipt per tenant |
| customers | customer_number | Unique customer ID per tenant |
| gift_cards | card_number | Unique card number per tenant |
| rfid_tags | epc | Globally unique EPC |
| inventory_levels | variant_id, location_id | One record per variant-location |
Check Constraints
-- Positive quantities
ALTER TABLE inventory_levels ADD CONSTRAINT chk_on_hand_positive
CHECK (on_hand >= 0);
ALTER TABLE order_line_items ADD CONSTRAINT chk_quantity_positive
CHECK (quantity > 0);
-- Valid percentages
ALTER TABLE order_discounts ADD CONSTRAINT chk_discount_valid
CHECK (discount_value >= 0 AND discount_value <= 100);
-- Valid statuses
ALTER TABLE orders ADD CONSTRAINT chk_order_status
CHECK (status IN ('pending', 'completed', 'voided', 'refunded'));
-- Balance constraints
ALTER TABLE gift_cards ADD CONSTRAINT chk_balance_not_negative
CHECK (current_balance >= 0);
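Adding a CHECK constraint to a table that is already large and live scans every row under a strong lock. A hedged sketch of the two-step pattern that avoids this (the `reserved` column here is an illustrative assumption, not part of the documented schema):

-- Step 1: declare the constraint without checking existing rows
-- (only new writes are checked; the ALTER itself is near-instant).
ALTER TABLE inventory_levels
    ADD CONSTRAINT chk_reserved_positive CHECK (reserved >= 0) NOT VALID;

-- Step 2: validate existing rows later, under a weaker lock
-- that does not block reads or writes.
ALTER TABLE inventory_levels VALIDATE CONSTRAINT chk_reserved_positive;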
B.9 Data Types Reference
Custom ENUM Types
-- Tenant status
CREATE TYPE tenant_status AS ENUM ('active', 'suspended', 'trial', 'cancelled');
-- Location type
CREATE TYPE location_type AS ENUM ('store', 'warehouse', 'popup', 'mobile');
-- User role
CREATE TYPE user_role AS ENUM ('super_admin', 'admin', 'manager', 'cashier', 'viewer');
-- Order status
CREATE TYPE order_status AS ENUM ('pending', 'completed', 'voided', 'refunded');
-- Payment method
CREATE TYPE payment_method AS ENUM ('cash', 'card', 'gift_card', 'loyalty', 'other');
-- Payment status
CREATE TYPE payment_status AS ENUM ('pending', 'approved', 'declined', 'refunded');
-- Inventory transaction type
CREATE TYPE inv_transaction_type AS ENUM (
'sale', 'return', 'adjustment', 'transfer_out', 'transfer_in', 'receipt', 'shrinkage'
);
-- Cash movement type
CREATE TYPE cash_movement_type AS ENUM (
'till_drop', 'pickup', 'paid_in', 'paid_out', 'float_adjust'
);
-- RFID tag status
CREATE TYPE rfid_status AS ENUM ('active', 'sold', 'returned', 'void', 'lost');
-- Sync direction
CREATE TYPE sync_direction AS ENUM ('push', 'pull');
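ENUM types can be extended later, but only by appending or positioning new labels; existing values can never be removed or reordered. A sketch, using a hypothetical `'store_credit'` method not defined above:

-- Extending an ENUM in place. On older PostgreSQL versions,
-- ADD VALUE cannot run inside a transaction block.
ALTER TYPE payment_method ADD VALUE IF NOT EXISTS 'store_credit' AFTER 'loyalty';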
This ERD represents the complete database schema for the POS Platform with 51 tables across 13 domains.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix B |
This appendix is part of the POS Blueprint Book. All content is self-contained.
Appendix C: Domain Events Catalog
Version: 4.0.0
Last Updated: February 25, 2026
Total Events: 55+
C.1 Overview
This appendix contains the complete catalog of domain events for the POS Platform. These events form the foundation of the event-driven architecture, enabling real-time updates, audit trails, and offline synchronization.
C.2 Event Structure
All events follow this standard envelope:
{
"eventId": "evt_uuid",
"eventType": "EventName",
"timestamp": "2025-12-29T14:30:00.000Z",
"tenantId": "tenant_nexus",
"correlationId": "uuid",
"causationId": "uuid",
"version": 1,
"payload": { }
}
| Field | Type | Description |
|---|---|---|
| eventId | UUID | Unique event identifier |
| eventType | string | Event type name |
| timestamp | ISO 8601 | When event occurred |
| tenantId | string | Tenant identifier |
| correlationId | UUID | Links related events |
| causationId | UUID | Event that caused this event |
| version | int | Schema version |
| payload | object | Event-specific data |
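The envelope maps naturally onto the `domain_events` table from Appendix B. A minimal sketch of that mapping (the use of `gen_random_uuid()`, built in since PostgreSQL 13, and the literal payload are illustrative assumptions):

INSERT INTO domain_events
    (id, event_type, aggregate_type, aggregate_id,
     payload, correlation_id, causation_id, version, created_at)
VALUES
    (gen_random_uuid(),                            -- eventId
     'OrderCreated',                               -- eventType
     'order',                                      -- aggregate type
     gen_random_uuid(),                            -- the order's id
     '{"orderNumber": "ORD-2025-00001"}'::jsonb,   -- payload
     gen_random_uuid(),                            -- correlationId
     NULL,                                         -- causationId: none for a root event
     1,                                            -- schema version
     now());                                       -- timestamp

Querying `domain_events` by `correlation_id` then reconstructs the full chain of events behind a single order.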
C.3 Sales Events
1. OrderCreated
Trigger: Customer begins checkout
Producer: POS Terminal, Web Store
Consumers: Analytics, Inventory Reservation
{
"eventType": "OrderCreated",
"eventId": "evt_ord_001",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"orderNumber": "ORD-2025-00001",
"locationId": "loc_gm",
"registerId": "reg_01",
"createdBy": "usr_cashier1",
"customerId": "cust_john_doe",
"lineItems": [
{
"lineItemId": "li_001",
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"name": "Classic V-Neck Tee - M Black",
"quantity": 2,
"unitPrice": 29.99,
"discountAmount": 0,
"taxAmount": 4.80,
"lineTotal": 64.78
}
],
"subtotal": 59.98,
"discountTotal": 0,
"taxTotal": 4.80,
"total": 64.78,
"status": "pending",
"channel": "pos"
}
}
2. PaymentAttempted
Trigger: Customer initiates payment
Producer: Payment Terminal
Consumers: Payment Gateway, Fraud Detection
{
"eventType": "PaymentAttempted",
"eventId": "evt_pay_001",
"timestamp": "2025-12-29T14:31:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"paymentAttemptId": "pa_001",
"paymentMethod": "card",
"terminalId": "term_verifone_01",
"amount": 64.78,
"currency": "USD",
"cardPresent": true,
"entryMethod": "chip",
"cardBrand": "visa",
"lastFour": "4242"
}
}
3. PaymentCompleted
Trigger: Payment gateway confirms success
Producer: Payment Gateway Adapter
Consumers: Order Service, Receipt Service, Inventory
{
"eventType": "PaymentCompleted",
"eventId": "evt_pay_002",
"timestamp": "2025-12-29T14:31:15Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"paymentId": "pay_001",
"paymentAttemptId": "pa_001",
"amount": 64.78,
"authorizationCode": "AUTH123456",
"transactionId": "txn_gateway_abc",
"batchId": "batch_2025-12-29",
"cardBrand": "visa",
"lastFour": "4242",
"entryMethod": "chip",
"receiptData": {
"merchantName": "Nexus Clothing - Greenbrier",
"merchantId": "MID123456",
"approvalCode": "123456"
}
}
}
4. PaymentFailed
Trigger: Payment gateway declines
Producer: Payment Gateway Adapter
Consumers: Order Service, POS UI, Analytics
{
"eventType": "PaymentFailed",
"eventId": "evt_pay_003",
"timestamp": "2025-12-29T14:31:20Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"paymentAttemptId": "pa_001",
"failureReason": "insufficient_funds",
"failureCode": "DECLINED_05",
"retriable": true,
"suggestedAction": "Try different payment method",
"gatewayResponse": {
"code": "51",
"message": "Insufficient funds"
}
}
}
5. OrderCompleted
Trigger: All payments successful, order finalized
Producer: Order Service
Consumers: Inventory, Analytics, Loyalty, Receipt
{
"eventType": "OrderCompleted",
"eventId": "evt_ord_002",
"timestamp": "2025-12-29T14:31:30Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"orderNumber": "ORD-2025-00001",
"receiptNumber": "GM-2025-001234",
"locationId": "loc_gm",
"registerId": "reg_01",
"customerId": "cust_john_doe",
"lineItems": [
{
"lineItemId": "li_001",
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantity": 2,
"unitPrice": 29.99,
"lineTotal": 59.98
}
],
"payments": [
{
"paymentId": "pay_001",
"method": "card",
"amount": 64.78
}
],
"subtotal": 59.98,
"discountTotal": 0,
"taxTotal": 4.80,
"total": 64.78,
"loyaltyPointsEarned": 65,
"completedAt": "2025-12-29T14:31:30Z",
"completedBy": "usr_cashier1",
"shiftId": "shift_2025-12-29_am"
}
}
6. OrderVoided
Trigger: Manager voids order
Producer: POS Application
Consumers: Inventory, Analytics, Audit
{
"eventType": "OrderVoided",
"eventId": "evt_ord_003",
"timestamp": "2025-12-29T14:35:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"orderNumber": "ORD-2025-00001",
"voidReason": "customer_changed_mind",
"voidedBy": "usr_manager1",
"voidedAt": "2025-12-29T14:35:00Z",
"authorizationCode": "MGR-VOID-001",
"originalTotal": 64.78,
"refundRequired": false,
"inventoryReleased": true,
"lineItems": [
{
"variantId": "var_nxp0323_m_blk",
"quantity": 2
}
]
}
}
7. ReturnInitiated
Trigger: Customer requests return
Producer: POS Application
Consumers: Return Service, Inventory, Fraud
{
"eventType": "ReturnInitiated",
"eventId": "evt_ret_001",
"timestamp": "2025-12-29T15:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ret_abc123",
"payload": {
"returnId": "ret_abc123",
"originalOrderId": "ord_xyz789",
"originalReceiptNumber": "GM-2025-001234",
"locationId": "loc_hm",
"customerId": "cust_john_doe",
"returnItems": [
{
"originalLineItemId": "li_001",
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityReturned": 1,
"returnReason": "wrong_size",
"condition": "resaleable",
"refundAmount": 32.39
}
],
"totalRefund": 32.39,
"refundMethod": "original_payment",
"initiatedBy": "usr_cashier2"
}
}
8. ReturnCompleted
Trigger: Refund processed
Producer: Return Service
Consumers: Inventory, Payment, Loyalty, Analytics
{
"eventType": "ReturnCompleted",
"eventId": "evt_ret_002",
"timestamp": "2025-12-29T15:05:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ret_abc123",
"payload": {
"returnId": "ret_abc123",
"returnReceiptNumber": "RET-HM-2025-0001",
"originalOrderId": "ord_xyz789",
"refundTransactionId": "refund_txn_001",
"refundAmount": 32.39,
"refundMethod": "card",
"loyaltyPointsDeducted": 32,
"inventoryRestocked": [
{
"variantId": "var_nxp0323_m_blk",
"locationId": "loc_hm",
"quantityAdded": 1,
"condition": "resaleable"
}
],
"processedBy": "usr_cashier2",
"completedAt": "2025-12-29T15:05:00Z",
"shiftId": "shift_2025-12-29_pm"
}
}
9. ReceiptRequested
Trigger: Customer requests receipt
Producer: POS Application
Consumers: Receipt Service, Communication
{
"eventType": "ReceiptRequested",
"eventId": "evt_rcpt_001",
"timestamp": "2025-12-29T14:32:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"orderId": "ord_xyz789",
"receiptNumber": "GM-2025-001234",
"deliveryMethod": "email",
"destination": "john@example.com",
"includePromotions": true,
"loyaltyBalance": 1250,
"requestedBy": "usr_cashier1"
}
}
C.4 Inventory Events
10. StockReserved
Trigger: Order created, items reserved
Producer: Inventory Service
Consumers: Order Service, Stock Visibility
{
"eventType": "StockReserved",
"eventId": "evt_inv_001",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"reservationId": "res_001",
"orderId": "ord_xyz789",
"locationId": "loc_gm",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityReserved": 2,
"previousOnHand": 15,
"previousAvailable": 15,
"newAvailable": 13
}
],
"expiresAt": "2025-12-29T15:00:00Z"
}
}
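A reservation reduces what is sellable without touching `on_hand`; availability is therefore a derived quantity. A sketch of that derivation, assuming `inventory_reservations` carries `(variant_id, location_id, quantity, expires_at)` — those column names are an illustration, not confirmed by the ERD excerpt above:

-- Available-to-sell = on_hand minus live (unexpired) reservations
SELECT il.variant_id,
       il.location_id,
       il.on_hand,
       il.on_hand - COALESCE(SUM(r.quantity), 0) AS available
FROM inventory_levels il
LEFT JOIN inventory_reservations r
       ON r.variant_id  = il.variant_id
      AND r.location_id = il.location_id
      AND r.expires_at  > now()          -- ignore lapsed reservations
GROUP BY il.variant_id, il.location_id, il.on_hand;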
11. StockCommitted
Trigger: Payment completed
Producer: Inventory Service
Consumers: Reporting, Reorder, Sync
{
"eventType": "StockCommitted",
"eventId": "evt_inv_002",
"timestamp": "2025-12-29T14:31:30Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"commitId": "commit_001",
"reservationId": "res_001",
"orderId": "ord_xyz789",
"receiptNumber": "GM-2025-001234",
"locationId": "loc_gm",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantitySold": 2,
"previousOnHand": 15,
"newOnHand": 13,
"unitCost": 12.50,
"totalCostOfGoodsSold": 25.00
}
],
"transactionType": "sale"
}
}
12. StockReleased
Trigger: Order voided/abandoned
Producer: Inventory Service
Consumers: Order Service, Stock Visibility
{
"eventType": "StockReleased",
"eventId": "evt_inv_003",
"timestamp": "2025-12-29T14:40:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"releaseId": "rel_001",
"reservationId": "res_001",
"orderId": "ord_xyz789",
"locationId": "loc_gm",
"releaseReason": "order_voided",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityReleased": 2,
"previousAvailable": 13,
"newAvailable": 15
}
]
}
}
13. StockReceived
Trigger: Vendor shipment received
Producer: Receiving Service
Consumers: Inventory, AP, Reporting
{
"eventType": "StockReceived",
"eventId": "evt_inv_004",
"timestamp": "2025-12-29T09:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "po_12345",
"payload": {
"receiptId": "rcpt_001",
"purchaseOrderId": "po_12345",
"vendorId": "vendor_nike",
"locationId": "loc_hq",
"receivedBy": "usr_warehouse1",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityOrdered": 50,
"quantityReceived": 48,
"quantityDamaged": 2,
"previousOnHand": 100,
"newOnHand": 148,
"unitCost": 12.50,
"totalCost": 600.00
}
],
"totalItemsReceived": 48,
"totalCost": 600.00,
"discrepancyNotes": "2 units damaged in shipping"
}
}
14. StockAdjusted
Trigger: Manual adjustment (count, shrinkage)
Producer: Inventory Management
Consumers: Inventory, Reporting, Audit
{
"eventType": "StockAdjusted",
"eventId": "evt_inv_005",
"timestamp": "2025-12-29T11:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "adj_001",
"payload": {
"adjustmentId": "adj_001",
"locationId": "loc_gm",
"adjustedBy": "usr_manager1",
"adjustmentType": "cycle_count",
"authorizationCode": "MGR-ADJ-001",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"systemQuantity": 15,
"countedQuantity": 13,
"variance": -2,
"varianceReason": "shrinkage",
"previousOnHand": 15,
"newOnHand": 13,
"costImpact": -25.00
}
],
"totalVariance": -2,
"totalCostImpact": -25.00,
"notes": "Quarterly cycle count - Section A"
}
}
15. TransferRequested
Trigger: Store requests stock
Producer: Inventory Management
Consumers: Transfer Service, Notifications
{
"eventType": "TransferRequested",
"eventId": "evt_inv_006",
"timestamp": "2025-12-29T10:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "xfer_001",
"payload": {
"transferId": "xfer_001",
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"requestedBy": "usr_gm_manager",
"priority": "normal",
"requestReason": "low_stock",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityRequested": 10,
"sourceOnHand": 100,
"destinationOnHand": 3
}
],
"expectedShipDate": "2025-12-30",
"expectedArrivalDate": "2025-12-31"
}
}
16. TransferShipped
Trigger: Source location ships
Producer: Transfer Service
Consumers: Inventory, Tracking
{
"eventType": "TransferShipped",
"eventId": "evt_inv_007",
"timestamp": "2025-12-29T14:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "xfer_001",
"payload": {
"transferId": "xfer_001",
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"shippedBy": "usr_warehouse1",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityShipped": 10,
"previousFromOnHand": 100,
"newFromOnHand": 90
}
],
"trackingNumber": "1Z999AA10123456784",
"carrier": "UPS",
"shippedAt": "2025-12-29T14:00:00Z"
}
}
17. TransferReceived
Trigger: Destination receives
Producer: Transfer Service
Consumers: Inventory, Notifications
{
"eventType": "TransferReceived",
"eventId": "evt_inv_008",
"timestamp": "2025-12-30T09:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "xfer_001",
"payload": {
"transferId": "xfer_001",
"fromLocationId": "loc_hq",
"toLocationId": "loc_gm",
"receivedBy": "usr_gm_associate1",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityExpected": 10,
"quantityReceived": 10,
"quantityDamaged": 0,
"previousToOnHand": 3,
"newToOnHand": 13
}
],
"receivedAt": "2025-12-30T09:00:00Z",
"discrepancyNotes": null
}
}
18. StockRestocked
Trigger: Return item restocked
Producer: Return Service
Consumers: Inventory, Reporting
{
"eventType": "StockRestocked",
"eventId": "evt_inv_009",
"timestamp": "2025-12-29T15:05:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ret_abc123",
"payload": {
"restockId": "restock_001",
"returnId": "ret_abc123",
"originalOrderId": "ord_xyz789",
"locationId": "loc_hm",
"items": [
{
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"quantityRestocked": 1,
"condition": "resaleable",
"restockLocation": "sales_floor",
"previousOnHand": 20,
"newOnHand": 21
}
],
"restockedBy": "usr_cashier2"
}
}
C.5 Customer Events
19. CustomerCreated
Trigger: New customer registered
Producer: Customer Service
Consumers: Loyalty, Marketing, Analytics
{
"eventType": "CustomerCreated",
"eventId": "evt_cust_001",
"timestamp": "2025-12-29T14:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "cust_john_doe",
"payload": {
"customerId": "cust_john_doe",
"customerNumber": "CUST-2025-00001",
"createdAt": "2025-12-29T14:00:00Z",
"createdBy": "usr_cashier1",
"creationSource": "pos",
"locationId": "loc_gm",
"profile": {
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com",
"phone": "555-0100",
"marketingOptIn": true,
"smsOptIn": false
},
"loyalty": {
"enrolled": true,
"programId": "loyalty_standard",
"tierLevel": "bronze",
"pointsBalance": 0
}
}
}
20. CustomerUpdated
Trigger: Profile modified
Producer: Customer Service
Consumers: Sync, Marketing, Analytics
{
"eventType": "CustomerUpdated",
"eventId": "evt_cust_002",
"timestamp": "2025-12-29T15:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "cust_john_doe",
"payload": {
"customerId": "cust_john_doe",
"updatedBy": "usr_cashier2",
"updateSource": "pos",
"locationId": "loc_hm",
"changes": [
{
"field": "phone",
"previousValue": "555-0100",
"newValue": "555-0200",
"changedAt": "2025-12-29T15:30:00Z"
},
{
"field": "address.city",
"previousValue": null,
"newValue": "Chesapeake",
"changedAt": "2025-12-29T15:30:00Z"
}
]
}
}
21. CustomerMerged
Trigger: Duplicates consolidated
Producer: Customer Service
Consumers: Order, Loyalty, Analytics
{
"eventType": "CustomerMerged",
"eventId": "evt_cust_003",
"timestamp": "2025-12-29T16:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "merge_001",
"payload": {
"mergeId": "merge_001",
"survivingCustomerId": "cust_john_doe",
"mergedCustomerIds": ["cust_john_d", "cust_jdoe"],
"mergedBy": "usr_admin1",
"mergeReason": "duplicate_registration",
"dataConsolidation": {
"ordersTransferred": 5,
"loyaltyPointsCombined": 1500,
"previousTierLevels": ["bronze", "silver"],
"newTierLevel": "silver"
},
"conflictResolutions": [
{
"field": "email",
"values": ["john.doe@example.com", "jdoe@work.com"],
"resolution": "kept_primary",
"selectedValue": "john.doe@example.com"
}
]
}
}
22. LoyaltyPointsEarned
Trigger: Purchase completed
Producer: Loyalty Service
Consumers: Customer, Notifications, Analytics
{
"eventType": "LoyaltyPointsEarned",
"eventId": "evt_cust_004",
"timestamp": "2025-12-29T14:31:30Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"customerId": "cust_john_doe",
"orderId": "ord_xyz789",
"receiptNumber": "GM-2025-001234",
"locationId": "loc_gm",
"pointsEarned": 65,
"earnRate": 1.0,
"bonusMultiplier": 1.0,
"qualifyingAmount": 64.78,
"excludedAmount": 0,
"previousBalance": 1250,
"newBalance": 1315,
"tierLevel": "silver",
"pointsToNextTier": 685
}
}
23. LoyaltyPointsRedeemed
Trigger: Points used for discount
Producer: Loyalty Service
Consumers: Order, Analytics
{
"eventType": "LoyaltyPointsRedeemed",
"eventId": "evt_cust_005",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"customerId": "cust_john_doe",
"orderId": "ord_xyz789",
"locationId": "loc_gm",
"pointsRedeemed": 500,
"redemptionType": "discount",
"discountAmount": 5.00,
"redemptionRate": 100,
"previousBalance": 1750,
"newBalance": 1250,
"minimumBalanceRequired": 100
}
}
24. LoyaltyPointsDeducted
Trigger: Return processed
Producer: Loyalty Service
Consumers: Customer, Notifications
{
"eventType": "LoyaltyPointsDeducted",
"eventId": "evt_cust_006",
"timestamp": "2025-12-29T15:05:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ret_abc123",
"payload": {
"customerId": "cust_john_doe",
"returnId": "ret_abc123",
"originalOrderId": "ord_xyz789",
"pointsDeducted": 32,
"deductionReason": "return",
"refundAmount": 32.39,
"previousBalance": 1315,
"newBalance": 1283,
"tierImpact": "none"
}
}
25. LoyaltyTierChanged
Trigger: Threshold reached
Producer: Loyalty Service
Consumers: Customer, Marketing, Notifications
{
"eventType": "LoyaltyTierChanged",
"eventId": "evt_cust_007",
"timestamp": "2025-12-29T14:31:30Z",
"tenantId": "tenant_nexus",
"correlationId": "cust_john_doe",
"payload": {
"customerId": "cust_john_doe",
"previousTier": "silver",
"newTier": "gold",
"changeType": "upgrade",
"changeReason": "spending_threshold",
"qualifyingSpend": 2000.00,
"tierThreshold": 2000.00,
"effectiveDate": "2025-12-29",
"expirationDate": "2026-12-29",
"newBenefits": [
"1.5x points on all purchases",
"Free shipping on orders $50+",
"Early access to sales",
"Birthday triple points"
]
}
}
26. CustomerTagged
Trigger: Tag applied
Producer: Marketing Service
Consumers: Marketing Automation, Analytics
{
"eventType": "CustomerTagged",
"eventId": "evt_cust_008",
"timestamp": "2025-12-29T14:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "cust_john_doe",
"payload": {
"customerId": "cust_john_doe",
"tagId": "tag_vip_2025",
"tagName": "VIP 2025",
"tagCategory": "loyalty",
"taggedBy": "system",
"tagSource": "auto_rule",
"ruleId": "rule_vip_qualification",
"expiresAt": "2025-12-31T23:59:59Z",
"metadata": {
"qualificationReason": "annual_spend_over_5000"
}
}
}
27. CustomerOptInChanged
Trigger: Preference changed
Producer: Customer Service
Consumers: Marketing, Compliance
{
"eventType": "CustomerOptInChanged",
"eventId": "evt_cust_009",
"timestamp": "2025-12-29T14:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "cust_john_doe",
"payload": {
"customerId": "cust_john_doe",
"optInType": "sms_marketing",
"previousValue": false,
"newValue": true,
"changedBy": "cust_john_doe",
"changeSource": "self_service",
"ipAddress": "192.168.1.100",
"consentTimestamp": "2025-12-29T14:00:00Z",
"consentMethod": "checkbox",
"consentText": "I agree to receive promotional SMS messages"
}
}
C.6 Gift Card Events
28. GiftCardPurchased
Trigger: Gift card sold
Producer: Gift Card Service
Consumers: Customer, Financial, Analytics
{
"eventType": "GiftCardPurchased",
"eventId": "evt_gc_001",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "gc_001",
"payload": {
"giftCardId": "gc_001",
"cardNumber": "6012XXXXXXXXXXXX1234",
"purchasedBy": "cust_john_doe",
"recipientEmail": "jane@example.com",
"recipientName": "Jane Doe",
"orderId": "ord_gc001",
"locationId": "loc_gm",
"initialBalance": 50.00,
"purchaseAmount": 50.00,
"cardType": "digital",
"deliveryMethod": "email",
"activationDate": "2025-12-29",
"expirationDate": null,
"personalMessage": "Happy Birthday!"
}
}
29. GiftCardRedeemed
Trigger: Card used as payment
Producer: Gift Card Service
Consumers: Order, Financial
{
"eventType": "GiftCardRedeemed",
"eventId": "evt_gc_002",
"timestamp": "2025-12-29T15:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz790",
"payload": {
"giftCardId": "gc_001",
"cardNumber": "6012XXXXXXXXXXXX1234",
"redeemedBy": "cust_jane_doe",
"orderId": "ord_xyz790",
"locationId": "loc_hm",
"amountRedeemed": 35.00,
"previousBalance": 50.00,
"newBalance": 15.00,
"transactionType": "purchase"
}
}
30. GiftCardBalanceChecked
Trigger: Balance inquiry
Producer: Gift Card Service
Consumers: Analytics
{
"eventType": "GiftCardBalanceChecked",
"eventId": "evt_gc_003",
"timestamp": "2025-12-29T14:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "gc_001",
"payload": {
"giftCardId": "gc_001",
"cardNumber": "6012XXXXXXXXXXXX1234",
"currentBalance": 15.00,
"checkedBy": null,
"checkSource": "web",
"locationId": null
}
}
C.7 Employee Events
31. EmployeeClockedIn
Trigger: Employee starts shift
Producer: Time Clock Service
Consumers: Payroll, Reporting
{
"eventType": "EmployeeClockedIn",
"eventId": "evt_emp_001",
"timestamp": "2025-12-29T08:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tc_001",
"payload": {
"timeClockEntryId": "tc_001",
"employeeId": "usr_cashier1",
"locationId": "loc_gm",
"registerId": "reg_01",
"clockInTime": "2025-12-29T08:00:00Z",
"clockInMethod": "pin",
"scheduledStart": "2025-12-29T08:00:00Z",
"minutesEarly": 0,
"minutesLate": 0
}
}
32. EmployeeClockedOut
Trigger: Employee ends shift Producer: Time Clock Service Consumers: Payroll, Reporting
{
"eventType": "EmployeeClockedOut",
"eventId": "evt_emp_002",
"timestamp": "2025-12-29T17:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tc_001",
"payload": {
"timeClockEntryId": "tc_001",
"employeeId": "usr_cashier1",
"locationId": "loc_gm",
"clockOutTime": "2025-12-29T17:00:00Z",
"clockOutMethod": "pin",
"totalHoursWorked": 9.0,
"breakMinutes": 30,
"overtimeHours": 1.0,
"scheduledEnd": "2025-12-29T16:00:00Z"
}
}
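The duration fields in `EmployeeClockedOut` are derivable from the raw timestamps. A sketch, assuming `totalHoursWorked` is gross elapsed time (break minutes are tracked separately, as the sample implies) and overtime is measured against `scheduledEnd`:

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Event timestamps use a trailing 'Z'; normalize for fromisoformat.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def clock_out_metrics(clock_in: str, clock_out: str, scheduled_end: str) -> dict:
    """Derive the EmployeeClockedOut duration fields from raw timestamps."""
    worked = (parse_ts(clock_out) - parse_ts(clock_in)).total_seconds() / 3600
    overtime = max(0.0, (parse_ts(clock_out) - parse_ts(scheduled_end)).total_seconds() / 3600)
    return {"totalHoursWorked": round(worked, 2), "overtimeHours": round(overtime, 2)}

# The sample shift: 08:00 in, 17:00 out, scheduled to end at 16:00.
m = clock_out_metrics("2025-12-29T08:00:00Z", "2025-12-29T17:00:00Z",
                      "2025-12-29T16:00:00Z")
```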
33. BreakStarted
Trigger: Employee starts break Producer: Time Clock Service Consumers: Floor Coverage
{
"eventType": "BreakStarted",
"eventId": "evt_emp_003",
"timestamp": "2025-12-29T12:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tc_001",
"payload": {
"timeClockEntryId": "tc_001",
"employeeId": "usr_cashier1",
"locationId": "loc_gm",
"breakType": "lunch",
"breakStartTime": "2025-12-29T12:00:00Z",
"expectedDuration": 30
}
}
34. BreakEnded
Trigger: Employee returns from break Producer: Time Clock Service Consumers: Floor Coverage
{
"eventType": "BreakEnded",
"eventId": "evt_emp_004",
"timestamp": "2025-12-29T12:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tc_001",
"payload": {
"timeClockEntryId": "tc_001",
"employeeId": "usr_cashier1",
"locationId": "loc_gm",
"breakType": "lunch",
"breakEndTime": "2025-12-29T12:30:00Z",
"actualDuration": 30,
"overBreak": false
}
}
C.8 Cash Management Events
35. ShiftOpened
Trigger: Manager opens cash drawer Producer: Cash Management Service Consumers: Reporting, Audit
{
"eventType": "ShiftOpened",
"eventId": "evt_cash_001",
"timestamp": "2025-12-29T08:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "shift_001",
"payload": {
"shiftId": "shift_001",
"registerId": "reg_01",
"locationId": "loc_gm",
"openedBy": "usr_manager1",
"openedAt": "2025-12-29T08:00:00Z",
"openingFloat": 267.50,
"floatBreakdown": {
"bills_20": 5,
"bills_10": 5,
"bills_5": 10,
"bills_1": 50,
"quarters": 40,
"dimes": 50,
"nickels": 40,
"pennies": 50
},
"countVariance": 0
}
}
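The `openingFloat` should equal the sum of the `floatBreakdown` counts (any difference is reported as `countVariance`). A sketch of that check, using the denomination keys from the sample payloads:

```python
# Dollar value of each denomination key used in floatBreakdown / closingBreakdown.
DENOMINATIONS = {
    "bills_100": 100.00, "bills_50": 50.00, "bills_20": 20.00, "bills_10": 10.00,
    "bills_5": 5.00, "bills_1": 1.00,
    "quarters": 0.25, "dimes": 0.10, "nickels": 0.05, "pennies": 0.01,
}

def breakdown_total(breakdown: dict) -> float:
    """Sum a denomination-count map into a dollar amount."""
    return round(sum(DENOMINATIONS[k] * n for k, n in breakdown.items()), 2)

# The floatBreakdown from the ShiftOpened sample sums to the stated openingFloat.
opening_float = breakdown_total({
    "bills_20": 5, "bills_10": 5, "bills_5": 10, "bills_1": 50,
    "quarters": 40, "dimes": 50, "nickels": 40, "pennies": 50,
})  # 267.50
```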
36. TillDropped
Trigger: Cash removed to safe Producer: Cash Management Service Consumers: Reporting, Audit
{
"eventType": "TillDropped",
"eventId": "evt_cash_002",
"timestamp": "2025-12-29T14:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "shift_001",
"payload": {
"shiftId": "shift_001",
"dropId": "drop_001",
"registerId": "reg_01",
"locationId": "loc_gm",
"droppedBy": "usr_cashier1",
"dropAmount": 200.00,
"breakdown": {
"bills_100": 2
},
"drawerBalanceBefore": 467.50,
"drawerBalanceAfter": 267.50,
"dropReason": "excess_cash",
"dropSlipNumber": "DROP-2025-12-29-001"
}
}
37. CashPickedUp
Trigger: Manager removes cash Producer: Cash Management Service Consumers: Reporting, Audit
{
"eventType": "CashPickedUp",
"eventId": "evt_cash_003",
"timestamp": "2025-12-29T15:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "shift_001",
"payload": {
"shiftId": "shift_001",
"pickupId": "pickup_001",
"registerId": "reg_01",
"locationId": "loc_gm",
"performedBy": "usr_manager1",
"witnessedBy": "usr_cashier1",
"pickupAmount": 300.00,
"pickupReason": "bank_deposit",
"drawerBalanceBefore": 567.50,
"drawerBalanceAfter": 267.50
}
}
38. PaidOut
Trigger: Petty cash expense Producer: Cash Management Service Consumers: Reporting, AP, Audit
{
"eventType": "PaidOut",
"eventId": "evt_cash_004",
"timestamp": "2025-12-29T11:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "shift_001",
"payload": {
"shiftId": "shift_001",
"paidOutId": "paidout_001",
"registerId": "reg_01",
"locationId": "loc_gm",
"performedBy": "usr_manager1",
"amount": 25.00,
"category": "office_supplies",
"description": "Printer paper",
"vendorName": "Office Depot",
"receiptAttached": true,
"drawerBalanceBefore": 292.50,
"drawerBalanceAfter": 267.50
}
}
39. ShiftClosed
Trigger: End of day close Producer: Cash Management Service Consumers: Reporting, Audit
{
"eventType": "ShiftClosed",
"eventId": "evt_cash_005",
"timestamp": "2025-12-29T21:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "shift_001",
"payload": {
"shiftId": "shift_001",
"registerId": "reg_01",
"locationId": "loc_gm",
"closedBy": "usr_manager1",
"closedAt": "2025-12-29T21:00:00Z",
"expectedCash": 725.50,
"actualCash": 723.00,
"variance": -2.50,
"varianceSeverity": "notable",
"closingBreakdown": {
"bills_100": 2,
"bills_50": 3,
"bills_20": 15,
"bills_10": 10,
"bills_5": 20,
"bills_1": 75,
"quarters": 80,
"dimes": 100
},
"summary": {
"openingFloat": 267.50,
"cashSales": 458.00,
"cashReturns": -45.00,
"paidOuts": -25.00,
"paidIns": 0,
"tillDrops": -200.00,
"expectedClosing": 455.50
}
}
}
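The `summary.expectedClosing` figure is derived from the other summary fields, and `variance` is the counted cash minus that expectation. A minimal reconciliation sketch (the counted amount passed to `drawer_variance` is illustrative):

```python
def expected_closing(summary: dict) -> float:
    """Expected drawer cash at close, from the ShiftClosed summary fields.
    Returns, paid-outs, and till drops are already stored as negative amounts."""
    return round(summary["openingFloat"] + summary["cashSales"]
                 + summary["cashReturns"] + summary["paidOuts"]
                 + summary["paidIns"] + summary["tillDrops"], 2)

def drawer_variance(counted_cash: float, expected_cash: float) -> float:
    """Negative variance means the drawer came up short."""
    return round(counted_cash - expected_cash, 2)

# Summary figures from the ShiftClosed sample above.
expected = expected_closing({"openingFloat": 267.50, "cashSales": 458.00,
                             "cashReturns": -45.00, "paidOuts": -25.00,
                             "paidIns": 0, "tillDrops": -200.00})  # 455.50
```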
C.9 RFID Events
40. RfidTagPrinted
Trigger: Tag printed and encoded Producer: RFID Print Service Consumers: Tag Registry, Inventory
{
"eventType": "RfidTagPrinted",
"eventId": "evt_rfid_001",
"timestamp": "2025-12-29T08:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "print_job_001",
"payload": {
"tagId": "tag_001",
"epc": "30340123456789012345678901",
"variantId": "var_nxp0323_m_blk",
"sku": "NXP0323-M-BLK",
"serialNumber": 1234567,
"printJobId": "print_job_001",
"printerId": "printer_zebra_01",
"locationId": "loc_hq",
"printedBy": "usr_warehouse1",
"templateId": "tmpl_standard"
}
}
41. RfidScanSessionStarted
Trigger: Inventory scan begins Producer: RFID Mobile App Consumers: Scan Session Service
{
"eventType": "RfidScanSessionStarted",
"eventId": "evt_rfid_002",
"timestamp": "2025-12-29T10:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "scan_session_001",
"payload": {
"sessionId": "scan_session_001",
"locationId": "loc_gm",
"startedBy": "usr_associate1",
"deviceId": "rfid_handheld_01",
"sessionType": "cycle_count",
"expectedSkuCount": 150
}
}
42. RfidTagScanned
Trigger: Tag read during scan Producer: RFID Mobile App Consumers: Real-time Dashboard
{
"eventType": "RfidTagScanned",
"eventId": "evt_rfid_003",
"timestamp": "2025-12-29T10:05:23Z",
"tenantId": "tenant_nexus",
"correlationId": "scan_session_001",
"payload": {
"scanEventId": "scan_evt_001",
"sessionId": "scan_session_001",
"tagId": "tag_001",
"epc": "30340123456789012345678901",
"rssi": -45,
"antennaId": 1,
"readCount": 3,
"firstSeenAt": "2025-12-29T10:05:23Z",
"lastSeenAt": "2025-12-29T10:05:25Z"
}
}
43. RfidScanSessionCompleted
Trigger: Scan session ends Producer: RFID Mobile App Consumers: Inventory, Variance Report
{
"eventType": "RfidScanSessionCompleted",
"eventId": "evt_rfid_004",
"timestamp": "2025-12-29T10:30:00Z",
"tenantId": "tenant_nexus",
"correlationId": "scan_session_001",
"payload": {
"sessionId": "scan_session_001",
"locationId": "loc_gm",
"completedBy": "usr_associate1",
"duration": 1800,
"summary": {
"totalTagsScanned": 145,
"uniqueSkusFound": 142,
"expectedSkus": 150,
"varianceCount": 8,
"missingSkus": ["NXP0323-M-BLK", "NXP0324-L-WHT"],
"extraSkus": []
},
"variancePercentage": 5.33,
"requiresRecount": false,
"autoAdjust": false
}
}
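The variance figures in `RfidScanSessionCompleted` follow directly from the counts: variance is the gap between expected and found SKUs, expressed as a percentage of the expected count. A sketch, where the recount threshold is an assumed policy rather than a documented one:

```python
def scan_variance(unique_skus_found: int, expected_skus: int,
                  recount_threshold: float = 10.0) -> dict:
    """Variance stats for RfidScanSessionCompleted; threshold is an assumed policy."""
    variance_count = abs(expected_skus - unique_skus_found)
    pct = round(variance_count / expected_skus * 100, 2)
    return {"varianceCount": variance_count,
            "variancePercentage": pct,
            "requiresRecount": pct > recount_threshold}

# The sample session: 142 of 150 expected SKUs found.
result = scan_variance(142, 150)
```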
44. RfidTagStatusChanged
Trigger: Tag lifecycle change Producer: Various Services Consumers: Tag Registry, Analytics
{
"eventType": "RfidTagStatusChanged",
"eventId": "evt_rfid_005",
"timestamp": "2025-12-29T14:31:30Z",
"tenantId": "tenant_nexus",
"correlationId": "ord_xyz789",
"payload": {
"tagId": "tag_001",
"epc": "30340123456789012345678901",
"previousStatus": "active",
"newStatus": "sold",
"triggeredBy": "sale",
"referenceId": "ord_xyz789",
"referenceType": "order",
"locationId": "loc_gm",
"changedAt": "2025-12-29T14:31:30Z"
}
}
45. RfidChunkUploaded
Trigger: Sync chunk received from Raptag mobile app Producer: RFID Sync Service Consumers: Session Aggregator, Dashboard
{
"eventType": "RfidChunkUploaded",
"eventId": "evt_rfid_006",
"timestamp": "2026-02-25T10:35:00Z",
"tenantId": "tenant_nexus",
"correlationId": "scan_session_001",
"payload": {
"sessionId": "scan_session_001",
"chunkIndex": 3,
"eventCount": 5000,
"totalChunks": 5,
"uploadedBy": "usr_associate1"
}
}
46. RfidTagEncoded
Trigger: New RFID tag encoded with EPC Producer: Tag Encoding Service Consumers: Tag Registry, Inventory
{
"eventType": "RfidTagEncoded",
"eventId": "evt_rfid_007",
"timestamp": "2026-02-25T08:15:00Z",
"tenantId": "tenant_nexus",
"correlationId": "encode_batch_001",
"payload": {
"tagId": "tag_002",
"epc": "30340123456789012345678902",
"productId": "prod_nxp0323",
"variantId": "var_nxp0323_m_blk",
"templateId": "tmpl_standard",
"encodedBy": "usr_warehouse1",
"locationId": "loc_hq"
}
}
47. RfidConfigUpdated
Trigger: RFID settings changed by admin Producer: RFID Configuration Service Consumers: Raptag App, Dashboard
{
"eventType": "RfidConfigUpdated",
"eventId": "evt_rfid_008",
"timestamp": "2026-02-25T09:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "config_change_001",
"payload": {
"tenantId": "tenant_nexus",
"configChanges": {
"maxOperatorsPerSession": 10,
"chunkSize": 5000,
"autoSaveIntervalSeconds": 30
},
"changedBy": "usr_admin1"
}
}
C.10 Integration Events
48. IntegrationSyncStarted
Trigger: Channel sync begins Producer: Integration Sync Service Consumers: Dashboard, Audit
{
"eventType": "IntegrationSyncStarted",
"eventId": "evt_int_001",
"timestamp": "2026-02-25T02:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "sync_batch_001",
"payload": {
"channelType": "shopify",
"direction": "outbound",
"triggeredBy": "scheduler",
"itemCount": 250
}
}
49. IntegrationSyncCompleted
Trigger: Channel sync finishes successfully Producer: Integration Sync Service Consumers: Dashboard, Notification
{
"eventType": "IntegrationSyncCompleted",
"eventId": "evt_int_002",
"timestamp": "2026-02-25T02:05:30Z",
"tenantId": "tenant_nexus",
"correlationId": "sync_batch_001",
"payload": {
"channelType": "shopify",
"itemsSynced": 248,
"itemsFailed": 2,
"duration": 330,
"errors": []
}
}
50. IntegrationSyncFailed
Trigger: Channel sync fails Producer: Integration Sync Service Consumers: Alert Service, Dashboard
{
"eventType": "IntegrationSyncFailed",
"eventId": "evt_int_003",
"timestamp": "2026-02-25T02:01:15Z",
"tenantId": "tenant_nexus",
"correlationId": "sync_batch_002",
"payload": {
"channelType": "amazon_sp_api",
"errorCode": "ERR-6003",
"errorMessage": "API rate limit exceeded",
"retryCount": 3,
"nextRetryAt": "2026-02-25T02:06:15Z"
}
}
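The `nextRetryAt` in the failure event is scheduled relative to the failure timestamp. A sketch assuming the fixed five-minute interval the sample implies (an exponential backoff keyed on `retryCount` would be a common alternative):

```python
from datetime import datetime, timedelta

def next_retry_at(failed_at: str, retry_interval_minutes: int = 5) -> str:
    """Schedule the next sync attempt a fixed interval after the failure.
    Assumed policy: the 5-minute gap matches the IntegrationSyncFailed sample."""
    t = datetime.fromisoformat(failed_at.replace("Z", "+00:00"))
    t += timedelta(minutes=retry_interval_minutes)
    return t.isoformat().replace("+00:00", "Z")

nxt = next_retry_at("2026-02-25T02:01:15Z")  # "2026-02-25T02:06:15Z"
```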
51. IntegrationWebhookReceived
Trigger: External webhook received Producer: Webhook Receiver Consumers: Integration Router, Audit
{
"eventType": "IntegrationWebhookReceived",
"eventId": "evt_int_004",
"timestamp": "2026-02-25T14:22:00Z",
"tenantId": "tenant_nexus",
"correlationId": "webhook_001",
"payload": {
"source": "shopify",
"eventType": "orders/create",
"payloadHash": "sha256:abc123...",
"processedAt": "2026-02-25T14:22:01Z"
}
}
C.11 Tax Events
52. TaxJurisdictionCreated
Trigger: New tax jurisdiction added Producer: Tax Configuration Service Consumers: POS Client, Tax Calculator
{
"eventType": "TaxJurisdictionCreated",
"eventId": "evt_tax_001",
"timestamp": "2026-02-25T09:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tax_config_001",
"payload": {
"jurisdictionId": "jur_ny_state",
"type": "state",
"name": "New York State Tax",
"rate": 4.0,
"effectiveDate": "2026-03-01"
}
}
53. TaxRateUpdated
Trigger: Tax rate changed Producer: Tax Configuration Service Consumers: POS Client, Tax Calculator, Audit
{
"eventType": "TaxRateUpdated",
"eventId": "evt_tax_002",
"timestamp": "2026-02-25T09:05:00Z",
"tenantId": "tenant_nexus",
"correlationId": "tax_config_002",
"payload": {
"jurisdictionId": "jur_ny_county",
"oldRate": 4.5,
"newRate": 4.75,
"effectiveDate": "2026-04-01",
"changedBy": "usr_admin1"
}
}
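A tax calculator consuming these events combines the rates of every jurisdiction that applies at a location. A sketch assuming additive stacking (the usual US sales-tax model of state + county + city rates); the function names are illustrative:

```python
def combined_rate(jurisdiction_rates: list[float]) -> float:
    """US sales-tax jurisdictions typically stack additively (state + county + city)."""
    return round(sum(jurisdiction_rates), 4)

def tax_due(amount: float, rates: list[float]) -> float:
    return round(amount * combined_rate(rates) / 100, 2)

# NY state 4.0% plus the updated county rate of 4.75% from the events above.
rate = combined_rate([4.0, 4.75])   # 8.75
due = tax_due(100.00, [4.0, 4.75])  # 8.75
```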
C.12 Sync Events
54. SyncConflictDetected
Trigger: Offline sync conflict Producer: Sync Service Consumers: Conflict Resolution, Admin Dashboard
{
"eventType": "SyncConflictDetected",
"eventId": "evt_sync_001",
"timestamp": "2025-12-29T12:00:00Z",
"tenantId": "tenant_nexus",
"correlationId": "sync_batch_001",
"payload": {
"conflictId": "conflict_001",
"deviceId": "dev_pos_01",
"conflictType": "inventory_quantity",
"entityType": "inventory_level",
"entityId": "invlvl_001",
"variantId": "var_nxp0323_m_blk",
"locationId": "loc_gm",
"serverValue": {
"quantity": 12,
"lastUpdated": "2025-12-29T11:45:00Z"
},
"localValue": {
"quantity": 15,
"lastUpdated": "2025-12-29T10:30:00Z",
"delta": -2
},
"resolution": {
"method": "delta_merge",
"resolvedValue": 10,
"automated": true
},
"syncTimestamp": "2025-12-29T12:00:00Z"
}
}
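The `delta_merge` resolution in the payload replays the device's offline delta on top of the current server value instead of overwriting it with the stale absolute quantity. A sketch (the function name is illustrative):

```python
def resolve_delta_merge(server_qty: int, local_qty_before: int, local_delta: int) -> int:
    """delta_merge: apply the device's offline delta to the server value,
    rather than trusting the device's stale absolute quantity."""
    # local_qty_before is informational only; just the delta is replayed.
    return server_qty + local_delta

# The sample conflict: server says 12, device sold 2 offline (delta -2).
resolved = resolve_delta_merge(server_qty=12, local_qty_before=15, local_delta=-2)  # 10
```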
C.13 Event Summary by Domain
| Domain | Event Count | Events |
|---|---|---|
| Sales | 9 | OrderCreated, PaymentAttempted, PaymentCompleted, PaymentFailed, OrderCompleted, OrderVoided, ReturnInitiated, ReturnCompleted, ReceiptRequested |
| Inventory | 9 | StockReserved, StockCommitted, StockReleased, StockReceived, StockAdjusted, TransferRequested, TransferShipped, TransferReceived, StockRestocked |
| Customer | 9 | CustomerCreated, CustomerUpdated, CustomerMerged, LoyaltyPointsEarned, LoyaltyPointsRedeemed, LoyaltyPointsDeducted, LoyaltyTierChanged, CustomerTagged, CustomerOptInChanged |
| Gift Card | 3 | GiftCardPurchased, GiftCardRedeemed, GiftCardBalanceChecked |
| Employee | 4 | EmployeeClockedIn, EmployeeClockedOut, BreakStarted, BreakEnded |
| Cash | 5 | ShiftOpened, TillDropped, CashPickedUp, PaidOut, ShiftClosed |
| RFID | 8 | RfidTagPrinted, RfidScanSessionStarted, RfidTagScanned, RfidScanSessionCompleted, RfidTagStatusChanged, RfidChunkUploaded, RfidTagEncoded, RfidConfigUpdated |
| Integration | 4 | IntegrationSyncStarted, IntegrationSyncCompleted, IntegrationSyncFailed, IntegrationWebhookReceived |
| Tax | 2 | TaxJurisdictionCreated, TaxRateUpdated |
| Sync | 1 | SyncConflictDetected |
| TOTAL | 54 | |
This catalog documents all 54 domain events that power the POS Platform’s event-driven architecture.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix C |
This appendix is part of the POS Blueprint Book. All content is self-contained.
Appendix D: UI Mockups
Version: 1.0.0 Last Updated: December 29, 2025
D.1 Overview
This appendix contains complete ASCII wireframe mockups for all screens in the POS Platform. These serve as the definitive visual reference for implementation.
D.2 POS Client Application
1. Login Screen
+==============================================================================+
| |
| |
| ╔══════════════════════════════╗ |
| ║ ║ |
| ║ NEXUS CLOTHING ║ |
| ║ POINT OF SALE ║ |
| ║ ║ |
| ╚══════════════════════════════╝ |
| |
| |
| +------------------------------+ |
| | Employee ID or Email | |
| | [________________________] | |
| +------------------------------+ |
| |
| +------------------------------+ |
| | PIN | |
| | [**** ] | |
| +------------------------------+ |
| |
| |
| +------------------------------+ |
| | | |
| | [ SIGN IN ] | |
| | | |
| +------------------------------+ |
| |
| |
| Forgot PIN? Contact Manager |
| |
| |
| +-----------+ +-----------+ |
| | OFFLINE | | v2.1.0 | |
| +-----------+ +-----------+ |
| |
+==============================================================================+
Store: Greenbrier Mall (GM)
2. Main Sale Screen
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 2:30 PM |
+==============================================================================+
| |
| +----------------------------------+ +----------------------------------+ |
| | SEARCH / SCAN | | CURRENT SALE #0001234 | |
| | [SKU, Barcode, or Product... ] | | | |
| +----------------------------------+ | +------------------------------+| |
| | | Item Qty Price || |
| +----------------------------------+ | +------------------------------+| |
| | QUICK CATEGORIES | | | Classic V-Neck Tee 2 $59.98|| |
| | | | | NXP0323-M-BLK || |
| | +--------+ +--------+ +--------+| | | [-] [2] [+] [X Remove] || |
| | | SHIRTS | | PANTS | | ACCESS || | +------------------------------+| |
| | +--------+ +--------+ +--------+| | | Slim Fit Chinos 1 $79.99|| |
| | | | | NXP0456-32-KHK || |
| | +--------+ +--------+ +--------+| | | [-] [1] [+] [X Remove] || |
| | | SHOES | | SALE | | NEW || | +------------------------------+| |
| | +--------+ +--------+ +--------+| | | || |
| | | | | || |
| +----------------------------------+ | | || |
| | | || |
| +----------------------------------+ | +------------------------------+| |
| | RECENT ITEMS | | | |
| | | | +------------------------------+| |
| | +------+ +------+ +------+ | | | Subtotal: $139.97 || |
| | | | | | | | | | | Discount (10%): -$14.00|| |
| | | Tee | | Polo | |Chinos| | | | Tax (6%): $7.56 || |
| | |$29.99| |$44.99| |$79.99| | | +------------------------------+| |
| | +------+ +------+ +------+ | | | || |
| | | | | TOTAL: $133.53 || |
| | +------+ +------+ +------+ | | | || |
| | | | | | | | | | +------------------------------+| |
| | | Belt | | Socks| | Hat | | | | |
| | |$34.99| |$12.99| |$24.99| | +----------------------------------+ |
| | +------+ +------+ +------+ | |
| | | +----------------------------------+ |
| +----------------------------------+ | | |
| | [ CUSTOMER ] [ DISCOUNT ] | |
| +----------------------------------+ | | |
| | FUNCTIONS | | [ PAY $133.53 ] | |
| | | | | |
| | [Returns] [Hold] [Gift Card] | | [ VOID SALE ] [ HOLD SALE ] | |
| | [No Sale] [Time] [Manager] | | | |
| +----------------------------------+ +----------------------------------+ |
| |
+==============================================================================+
| Status: ONLINE | Shift: AM | Drawer: Open | Sales: 23 |
+==============================================================================+
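The totals panel in the sale screen applies a percentage discount to the subtotal, then tax to the discounted amount, rounding each step to the cent. A sketch using `Decimal` to avoid float drift (the function name is illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

CENTS = Decimal("0.01")

def sale_totals(line_totals: list[str], discount_pct: str, tax_pct: str) -> dict:
    """Totals-panel math: percentage discount, then tax on the discounted subtotal."""
    subtotal = sum(Decimal(t) for t in line_totals)
    discount = (subtotal * Decimal(discount_pct) / 100).quantize(CENTS, ROUND_HALF_UP)
    taxable = subtotal - discount
    tax = (taxable * Decimal(tax_pct) / 100).quantize(CENTS, ROUND_HALF_UP)
    return {"subtotal": str(subtotal), "discount": str(discount),
            "tax": str(tax), "total": str(taxable + tax)}

# The sale above: two $29.99 tees ($59.98) plus $79.99 chinos, 10% off, 6% tax.
totals = sale_totals(["59.98", "79.99"], "10", "6")
```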
3. Payment Screen
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 2:31 PM |
+==============================================================================+
| |
| +----------------------------------+ +----------------------------------+ |
| | PAYMENT | | ORDER SUMMARY #0001234 | |
| | | | | |
| | Amount Due: $133.53 | | Items: 3 | |
| | | | Subtotal: $139.97 | |
| | +------------------------------+| | Discount: -$14.00 | |
| | | || | Tax: $7.56 | |
| | | PAYMENT METHOD || | | |
| | | || | -------------------------------- | |
| | | +--------+ +--------+ || | TOTAL: $133.53 | |
| | | | | | | || | | |
| | | | CARD | | CASH | || | -------------------------------- | |
| | | | | | | || | | |
| | | +--------+ +--------+ || | Customer: John Doe | |
| | | || | Loyalty: Gold (1,250 pts) | |
| | | +--------+ +--------+ || | | |
| | | | | | | || | Points to earn: 134 | |
| | | | GIFT | | SPLIT | || | | |
| | | | CARD | | | || +----------------------------------+ |
| | | +--------+ +--------+ || |
| | | || +----------------------------------+ |
| | +------------------------------+| | PAYMENTS APPLIED | |
| | | | | |
| | +------------------------------+| | +----------------------------+ | |
| | | CARD SELECTED || | | Visa ****4242 $133.53 | | |
| | | || | +----------------------------+ | |
| | | Present, insert, or tap || | | |
| | | card on terminal || | Balance Due: $0.00 | |
| | | || | | |
| | | +------------------+ || +----------------------------------+ |
| | | | | || |
| | | | [PROCESSING] | || |
| | | | | || |
| | | +------------------+ || |
| | | || |
| | +------------------------------+| |
| | | |
| | [ CANCEL ] | |
| +----------------------------------+ |
| |
+==============================================================================+
4. Receipt Screen
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 2:32 PM |
+==============================================================================+
| |
| +----------------------------------+ |
| | | |
| | TRANSACTION COMPLETE | |
| | | |
| | +-----------------------+ | |
| | | | | |
| | | NEXUS CLOTHING | | |
| | | Greenbrier Mall | | |
| | | 1401 Greenbrier Pkwy| | |
| | | Chesapeake, VA 23320| | |
| | | (757) 555-0100 | | |
| | | | | |
| | | 12/29/25 2:32 PM | | |
| | | Receipt: GM-001234 | | |
| | | Cashier: John D. | | |
| | | | | |
| | | Classic V-Neck x2 | | |
| | | $59.98 | | |
| | | Slim Fit Chinos x1 | | |
| | | $79.99 | | |
| | | | | |
| | | Subtotal: $139.97 | | |
| | | Discount: -$14.00 | | |
| | | Tax: $7.56 | | |
| | | ----------------- | | |
| | | TOTAL: $133.53 | | |
| | | | | |
| | | Visa ****4242 | | |
| | | Auth: 123456 | | |
| | | | | |
| | | Loyalty: +134 pts | | |
| | | Balance: 1,384 pts | | |
| | | | | |
| | +-----------------------+ | |
| | | |
| | RECEIPT OPTIONS | |
| | | |
| | [ PRINT ] [ EMAIL ] [ SMS ] | |
| | | |
| | [ NO RECEIPT - NEW SALE ] | |
| | | |
| +----------------------------------+ |
| |
+==============================================================================+
5. Customer Lookup
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 2:25 PM |
+==============================================================================+
| |
| +------------------------------------------------------------------------+ |
| | CUSTOMER LOOKUP [ X ] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | Search: [ john doe ] [GO]| |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | SEARCH RESULTS (3 found) | |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [*] John Doe || |
| | | john.doe@example.com | (555) 555-0100 || |
| | | Gold Member | 1,250 points | Last visit: 12/15/25 || |
| | +--------------------------------------------------------------------+| |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [ ] John Doe Jr || |
| | | johnjr@example.com | (555) 555-0101 || |
| | | Bronze Member | 250 points | Last visit: 11/20/25 || |
| | +--------------------------------------------------------------------+| |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [ ] Johnny Doeson || |
| | | johnny.d@example.com | (555) 555-0102 || |
| | | Silver Member | 850 points | Last visit: 12/01/25 || |
| | +--------------------------------------------------------------------+| |
| | | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | SELECTED CUSTOMER DETAILS | |
| | | |
| | Name: John Doe Tier: Gold | |
| | Email: john.doe@example.com Points: 1,250 | |
| | Phone: (555) 555-0100 Lifetime Spend: $2,450.00 | |
| | | |
| | Recent Purchases: | |
| | - 12/15/25: $89.99 (Jacket) | |
| | - 12/01/25: $45.00 (Shirts x2) | |
| | - 11/20/25: $120.00 (Pants, Belt) | |
| | | |
| +------------------------------------------------------------------------+ |
| |
| [ CREATE NEW ] [ SELECT CUSTOMER ] [ CANCEL ] |
| |
+==============================================================================+
6. Returns Screen
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 3:00 PM |
+==============================================================================+
| |
| +------------------------------------------------------------------------+ |
| | PROCESS RETURN | |
| +------------------------------------------------------------------------+ |
| |
| +----------------------------------+ +----------------------------------+ |
| | FIND ORIGINAL RECEIPT | | RECEIPT DETAILS | |
| | | | | |
| | Receipt #: [GM-001200 ] | | Receipt: GM-001200 | |
| | - or - | | Date: 12/20/2025 | |
| | Scan item barcode | | Location: Greenbrier Mall | |
| | - or - | | Cashier: Jane S. | |
| | Customer lookup | | | |
| | | | Customer: John Doe | |
| | [ SEARCH ] | | Payment: Visa ****4242 | |
| | | | | |
| +----------------------------------+ +----------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | ITEMS FROM RECEIPT Select items | |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [X] Classic V-Neck Tee - M Black Qty: 1/2 $29.99 || |
| | | NXP0323-M-BLK || |
| | | Reason: [ Wrong Size v ] Condition: [ Resaleable v ] || |
| | +--------------------------------------------------------------------+| |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [ ] Classic V-Neck Tee - M Black Qty: 0/1 $29.99 || |
| | | NXP0323-M-BLK || |
| | +--------------------------------------------------------------------+| |
| | | |
| | +--------------------------------------------------------------------+| |
| | | [ ] Slim Fit Chinos - 32 Khaki Qty: 0/1 $79.99 || |
| | | NXP0456-32-KHK || |
| | +--------------------------------------------------------------------+| |
| | | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | RETURN SUMMARY | |
| | | |
| | Items to Return: 1 | |
| | Refund Amount: $29.99 + $1.80 tax = $31.79 | |
| | Refund Method: Original Payment (Visa ****4242) | |
| | Loyalty Points to Deduct: 30 | |
| | | |
| +------------------------------------------------------------------------+ |
| |
| [ CANCEL ] [ PROCESS RETURN $31.79 ] |
| |
+==============================================================================+
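The return-summary math refunds the item price plus the tax originally charged on it. A sketch; the loyalty deduction of one point per dollar (rounded) is an assumed policy inferred from the figures above, not a documented rule:

```python
from decimal import Decimal, ROUND_HALF_UP

CENTS = Decimal("0.01")

def return_summary(item_price: str, tax_pct: str) -> dict:
    """Refund = item price plus the tax charged on it. The loyalty deduction
    (1 point per dollar, rounded) is an assumed policy, not a documented one."""
    price = Decimal(item_price)
    tax = (price * Decimal(tax_pct) / 100).quantize(CENTS, ROUND_HALF_UP)
    refund = price + tax
    points = int(price.quantize(Decimal("1"), ROUND_HALF_UP))
    return {"refund": str(refund), "pointsToDeduct": points}

# The return above: one $29.99 tee at 6% tax.
summary = return_summary("29.99", "6")
```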
7. Inventory Lookup
+==============================================================================+
| NEXUS POS Greenbrier Mall Register 1 John D. 12/29/25 2:45 PM |
+==============================================================================+
| |
| +------------------------------------------------------------------------+ |
| | INVENTORY LOOKUP [ X ] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | Search: [ NXP0323 ] [GO] | |
| | | |
| | Filters: [All Categories v] [All Sizes v] [All Colors v] [In Stock v]| |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | PRODUCT: Classic V-Neck Tee SKU: NXP0323 | |
| | | |
| | Price: $29.99 Category: Shirts > T-Shirts | |
| | Vendor: ABC Apparel Last Received: 12/15/2025 | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | INVENTORY BY LOCATION | |
| | | |
| | +---------+---------+---------+---------+---------+---------+ | |
| | | Size | HQ | GM | HM | LM | NM | TOTAL | |
| | +---------+---------+---------+---------+---------+---------+ | |
| | | S-BLK | 25 | [8] | 5 | 3 | 7 | 48 | |
| | | M-BLK | 30 | [12] | 8 | 6 | 4 | 60 | |
| | | L-BLK | 20 | [5] | 7 | 4 | 9 | 45 | |
| | | XL-BLK | 15 | [3] | 2 | 1 | 3 | 24 | |
| | +---------+---------+---------+---------+---------+---------+ | |
| | | S-WHT | 20 | [6] | 4 | 5 | 5 | 40 | |
| | | M-WHT | 25 | [10] | 6 | 4 | 5 | 50 | |
| | | L-WHT | 18 | [4] | 5 | 3 | 6 | 36 | |
| | | XL-WHT | 12 | [2] | 1 | 2 | 2 | 19 | |
| | +---------+---------+---------+---------+---------+---------+ | |
| | | |
| | [Your Store] highlighted | |
| | | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | ACTIONS | |
| | | |
| | [ REQUEST TRANSFER ] [ ADD TO SALE ] [ VIEW HISTORY ] | |
| | | |
| +------------------------------------------------------------------------+ |
| |
+==============================================================================+
D.3 Admin Portal
8. Admin Dashboard
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| +------+ |
| | HOME | Dashboard | Inventory | Products | Employees | Reports |
+==============================================================================+
| |
| TODAY'S PERFORMANCE December 29, 2025 |
| |
| +-------------------+ +-------------------+ +-------------------+ |
| | TOTAL SALES | | TRANSACTIONS | | AVG TICKET | |
| | | | | | | |
| | $12,450 | | 185 | | $67.30 | |
| | | | | | | |
| | +15% vs LY | | +8% vs LY | | +6% vs LY | |
| +-------------------+ +-------------------+ +-------------------+ |
| |
| +-------------------+ +-------------------+ +-------------------+ |
| | RETURNS | | ITEMS SOLD | | CUSTOMERS | |
| | | | | | | |
| | $450 | | 425 | | 142 | |
| | | | | | | |
| | 3.6% of sales | | 2.3 per txn | | 28 new today | |
| +-------------------+ +-------------------+ +-------------------+ |
| |
| +------------------------------------+ +-------------------------------+ |
| | SALES BY LOCATION | | HOURLY SALES TODAY | |
| | | | | |
| | GM $$$$$$$$$$$$$ $4,250 (34%) | | $1.5k + | |
| | HM $$$$$$$$ $3,100 (25%) | | | ___ | |
| | LM $$$$$$$ $2,800 (22%) | | $1k + / \___ | |
| | NM $$$$$ $2,300 (19%) | | | / \___ | |
| | | | $500 +/ \ | |
| | Total: $12,450 | | +--+--+--+--+--+--+--+ | |
| | | | 9 10 11 12 1 2 3 | |
| +------------------------------------+ +-------------------------------+ |
| |
| +------------------------------------+ +-------------------------------+ |
| | TOP SELLING ITEMS | | ALERTS & NOTIFICATIONS | |
| | | | | |
| | 1. Classic V-Neck Tee 45 qty | | [!] Low stock: NXP0789 @ LM | |
| | 2. Slim Fit Chinos 32 qty | | [!] Low stock: NXP0456 @ NM | |
| | 3. Leather Belt 28 qty | | [i] Transfer received at GM | |
| | 4. Cotton Polo 25 qty | | [i] New customer signup: 28 | |
| | 5. Casual Sneakers 22 qty | | [$] Variance alert: GM -$5 | |
| | | | | |
| +------------------------------------+ +-------------------------------+ |
| |
+==============================================================================+
9. Inventory Management
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| Dashboard | +----------+ | Products | Employees | Reports | Settings |
| | |INVENTORY | | |
+==============================================================================+
| |
| INVENTORY MANAGEMENT |
| |
| +-----+----------+------------+-------------+----------+--------+ |
| |Levels|Transfers| Adjustments| Receiving | Counts | Alerts| |
| +-----+----------+------------+-------------+----------+--------+ |
| |
| +------------------------------------------------------------------------+ |
| | FILTER: Location [All Locations v] Category [All v] Status [All v] | |
| | Search: [ ] [GO] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | [ ] | SKU | Product Name | HQ | GM | HM | LM | NM |TOT | |
| +------------------------------------------------------------------------+ |
| | [ ] | NXP0323-S | Classic V-Neck - S BLK | 25 | 8 | 5 | 3 | 7 | 48 | |
| | [ ] | NXP0323-M | Classic V-Neck - M BLK | 30 | 12 | 8 | 6 | 4 | 60 | |
| | [X] | NXP0323-L | Classic V-Neck - L BLK | 20 | 5 | 7 | 4 | 9 | 45 | |
| | [ ] | NXP0323-XL | Classic V-Neck - XL BLK | 15 | 3 | 2 | 1 | 3 | 24 | |
| | [ ] | NXP0456-30 | Slim Fit Chinos - 30 | 18 | 6 | 4 | 5 | 3 | 36 | |
| | [ ] | NXP0456-32 | Slim Fit Chinos - 32 | 22 | 8 | 6 | 4 | 5 | 45 | |
| | [!] | NXP0789-M | Cotton Polo - M | 5 | 2 | 1 | 0 | 1 | 9 | |
| | [!] | NXP0789-L | Cotton Polo - L | 8 | 1 | 2 | 1 | 0 | 12 | |
| +------------------------------------------------------------------------+ |
| |
| Page 1 of 250 [< Prev] [1] [2] [3] ... [Next >] |
| |
| +------------------------------------------------------------------------+ |
| | BULK ACTIONS SELECTED: 1 item(s) | |
| | | |
| | [ Create Transfer ] [ Adjust Qty ] [ Export CSV ] [ Print Labels ] | |
| | | |
| +------------------------------------------------------------------------+ |
| |
| Legend: [!] = Below reorder point |
| |
+==============================================================================+
10. Product Catalog
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| Dashboard | Inventory | +--------+ | Employees | Reports | Settings |
| | |PRODUCTS| | |
+==============================================================================+
| |
| PRODUCT CATALOG [ + ADD PRODUCT ] |
| |
| +------------------------------------------------------------------------+ |
| | Search: [ ] Category: [All Categories v] | |
| | Vendor: [All Vendors v] Status: [Active v] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | | | | | | | |
| | IMAGE | PRODUCT | SKU | PRICE | STOCK | STATUS | |
| | | | | | | | |
| +------------------------------------------------------------------------+ |
| | +----+ | Classic V-Neck Tee | | | | | |
| | | | | 8 variants | NXP0323 | $29.99 | 322 | Active | |
| | +----+ | Category: Shirts | | | | [Edit] | |
| +------------------------------------------------------------------------+ |
| | +----+ | Slim Fit Chinos | | | | | |
| | | | | 6 variants | NXP0456 | $79.99 | 245 | Active | |
| | +----+ | Category: Pants | | | | [Edit] | |
| +------------------------------------------------------------------------+ |
| | +----+ | Cotton Polo | | | | | |
| | | | | 4 variants | NXP0789 | $44.99 | 42 | Active | |
| | +----+ | Category: Shirts | | | | [Edit] | |
| +------------------------------------------------------------------------+ |
| | +----+ | Leather Belt | | | | | |
| | | | | 3 variants | NXP0234 | $34.99 | 128 | Active | |
| | +----+ | Category: Accessories | | | | [Edit] | |
| +------------------------------------------------------------------------+ |
| | +----+ | Winter Jacket | | | | | |
| | | | | 4 variants | NXP0567 | $149.99| 18 | Draft | |
| | +----+ | Category: Outerwear | | | | [Edit] | |
| +------------------------------------------------------------------------+ |
| |
| Showing 1-5 of 1,250 products [< Prev] [1] [2] [3] ... [Next >] |
| |
+==============================================================================+
11. Employee Management
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| Dashboard | Inventory | Products | +---------+ | Reports | Settings |
| | |EMPLOYEES| | |
+==============================================================================+
| |
| EMPLOYEE MANAGEMENT [ + ADD EMPLOYEE ] |
| |
| +------------------------------------------------------------------------+ |
| | Search: [ ] Location: [All Locations v] | |
| | Role: [All Roles v] Status: [Active v] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | NAME | EMAIL | ROLE | LOCATION | STATUS | |
| +------------------------------------------------------------------------+ |
| | John Doe | john.d@nexus.com | Admin | All | Active | |
| | | Last login: Today 2:30 PM | [Edit] | |
| +------------------------------------------------------------------------+ |
| | Jane Smith | jane.s@nexus.com | Manager | GM | Active | |
| | | Last login: Today 8:15 AM | [Edit] | |
| +------------------------------------------------------------------------+ |
| | Mike Johnson | mike.j@nexus.com | Cashier | GM | Active | |
| | | Last login: Today 8:00 AM | [Edit] | |
| +------------------------------------------------------------------------+ |
| | Sarah Williams | sarah.w@nexus.com | Cashier | HM | Active | |
| | | Last login: Yesterday 5:00 PM | [Edit] | |
| +------------------------------------------------------------------------+ |
| | Tom Brown | tom.b@nexus.com | Cashier | LM |Inactive| |
| | | Last login: 12/15/2025 | [Edit] | |
| +------------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | CURRENTLY CLOCKED IN | |
| | | |
| | +----------+ +----------+ +----------+ +----------+ | |
| | | GM: 3 | | HM: 2 | | LM: 2 | | NM: 2 | | |
| | +----------+ +----------+ +----------+ +----------+ | |
| | | |
| | Jane S. (GM) - Since 8:15 AM Sarah W. (HM) - Since 9:00 AM | |
| | Mike J. (GM) - Since 8:00 AM Chris D. (HM) - Since 9:30 AM | |
| | Lisa M. (GM) - Since 10:00 AM | |
| | | |
| +------------------------------------------------------------------------+ |
| |
+==============================================================================+
12. Reports Dashboard
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| Dashboard | Inventory | Products | Employees | +-------+ | Settings |
| | |REPORTS| | |
+==============================================================================+
| |
| REPORTS |
| |
| +------------------------------------------------------------------------+ |
| | Date Range: [12/01/2025] to [12/29/2025] Location: [All v] | |
| | Compare: [Last Year v] | |
| +------------------------------------------------------------------------+ |
| |
| +----------------------------+ +----------------------------+ |
| | SALES REPORTS | | INVENTORY REPORTS | |
| | | | | |
| | > Daily Sales Summary | | > Current Stock Levels | |
| | > Sales by Category | | > Stock Valuation | |
| | > Sales by Employee | | > Inventory Movement | |
| | > Sales by Hour | | > Reorder Report | |
| | > Sales by Payment Type | | > Shrinkage Analysis | |
| | > Discount Analysis | | > Dead Stock Report | |
| | > Refund Report | | > Transfer History | |
| | | | | |
| +----------------------------+ +----------------------------+ |
| |
| +----------------------------+ +----------------------------+ |
| | CUSTOMER REPORTS | | EMPLOYEE REPORTS | |
| | | | | |
| | > Customer List | | > Time Clock Report | |
| | > New Customers | | > Sales by Employee | |
| | > Top Customers | | > Commission Report | |
| | > Loyalty Points Summary | | > Void/Return by Employee | |
| | > Customer Retention | | > Productivity Analysis | |
| | > Marketing Campaign | | | |
| | | | | |
| +----------------------------+ +----------------------------+ |
| |
| +------------------------------------------------------------------------+ |
| | QUICK REPORT: Daily Sales Summary [ Generate ] | |
| +------------------------------------------------------------------------+ |
| | | |
| | +------------------------------------------------------------------+ | |
| | | DATE | TRANS | GROSS | DISCOUNT | NET | vs LY| | |
| | +------------------------------------------------------------------+ | |
| | | 12/29/25 | 185 | $13,200 | -$750 | $12,450 | +15% | | |
| | | 12/28/25 | 172 | $11,800 | -$650 | $11,150 | +12% | | |
| | | 12/27/25 | 198 | $14,500 | -$900 | $13,600 | +18% | | |
| | +------------------------------------------------------------------+ | |
| | | |
| | [ Export PDF ] [ Export Excel ] [ Email Report ] | |
| | | |
| +------------------------------------------------------------------------+ |
| |
+==============================================================================+
13. Settings Page
+==============================================================================+
| NEXUS ADMIN John Doe | [Logout] |
+------------------------------------------------------------------------------+
| Dashboard | Inventory | Products | Employees | Reports | +--------+          |
|                                                          |SETTINGS|          |
+==============================================================================+
| |
| SETTINGS |
| |
| +----------------+ +----------------------------------------------------+ |
| | | | | |
| | > General | | GENERAL SETTINGS | |
| | Locations | | | |
| | Registers | | +------------------------------------------------+| |
| | Tax | | | COMPANY INFORMATION || |
| | | | | || |
| | > Sales | | | Company Name: [ Nexus Clothing ] || |
| | Receipts | | | Address: [ 1401 Greenbrier Pkwy ] || |
| | Discounts | | | City/State: [ Chesapeake ] [ VA v ] || |
| | Returns | | | ZIP: [ 23320 ] || |
| | | | | Phone: [ (757) 555-0100 ] || |
| | > Inventory | | | Email: [ info@nexusclothing.com ] || |
| | Reorder | | | || |
| | Transfers | | +------------------------------------------------+| |
| | Counting | | | |
| | | | +------------------------------------------------+| |
| | > Customers | | | REGIONAL SETTINGS || |
| | Loyalty | | | || |
| | Marketing | | | Timezone: [ America/New_York v ] || |
| | | | | Currency: [ USD - US Dollar v ] || |
| | > Payments | | | Date Format: [ MM/DD/YYYY v ] || |
| | Terminals | | | Start of Week: [ Sunday v ] || |
| | Gift Cards | | | || |
| | | | +------------------------------------------------+| |
| | > Users | | | |
| | Roles | | +------------------------------------------------+| |
| | Permissions | | | TAX SETTINGS || |
| | | | | || |
| | > Integration | | | Default Tax Rate: [ 6.0 ] % || |
| | Shopify | | | Tax Included in Price: [ ] Yes [X] No || |
| | QuickBooks | | | || |
| | API Keys | | +------------------------------------------------+| |
| | | | | |
| +----------------+ | [ SAVE CHANGES ] [ CANCEL ] | |
| | | |
| +----------------------------------------------------+ |
| |
+==============================================================================+
D.4 Mobile RFID App (Raptag)
14. Raptag Main Menu
+---------------------------+
| [=] RAPTAG [?] [!] |
+---------------------------+
| |
| Store: Greenbrier Mall |
| User: John Doe |
| |
+---------------------------+
| |
| +---------------------+ |
| | | |
| | SCAN INVENTORY | |
| | | |
| +---------------------+ |
| |
| +---------------------+ |
| | | |
| | RECEIVE SHIPMENT | |
| | | |
| +---------------------+ |
| |
| +---------------------+ |
| | | |
| | FIND ITEM | |
| | | |
| +---------------------+ |
| |
| +---------------------+ |
| | | |
| | VIEW HISTORY | |
| | | |
| +---------------------+ |
| |
+---------------------------+
| [ONLINE] v2.1.0 |
+---------------------------+
15. Raptag Scan Session
+---------------------------+
| [<] SCAN SESSION [X] |
+---------------------------+
| |
| Zone: Sales Floor |
| Started: 10:00 AM |
| Duration: 00:25:42 |
| |
+---------------------------+
| |
| +---------------------+ |
| | | |
| | SCANNING... | |
| | | |
| | ||||||||||||||| | |
| | | |
| +---------------------+ |
| |
| Tags Found: 145 |
| Unique SKUs: 142 |
| Expected: 150 |
| |
+---------------------------+
| |
| LAST SCANNED: |
| |
| NXP0323-M-BLK |
| Classic V-Neck Tee |
| RSSI: -42 dB |
| |
+---------------------------+
| |
| [ PAUSE ] [ COMPLETE ] |
| |
+---------------------------+
16. Raptag Scan Results
+---------------------------+
| [<] SCAN RESULTS |
+---------------------------+
| |
| Session Complete |
| Duration: 00:32:15 |
| |
+---------------------------+
| |
| +--------+ +--------+ |
| | FOUND | |MISSING | |
| | 142 | | 8 | |
| +--------+ +--------+ |
| |
| Variance: 5.3% |
| |
+---------------------------+
| |
| MISSING ITEMS: |
| |
| [!] NXP0323-M-BLK (2) |
| [!] NXP0324-L-WHT (1) |
| [!] NXP0456-32-KHK (2) |
| [!] NXP0789-S-NAV (3) |
| |
+---------------------------+
| |
| [EXPORT] [ADJUST INV] |
| |
| [ COMPLETE & SYNC ] |
| |
+---------------------------+
D.5 Component Library Reference
Buttons
+-----------------------------------------------------------------------------+
| BUTTON STYLES |
+-----------------------------------------------------------------------------+
PRIMARY: [ Button Text ] <- Blue background, white text
+----------------+
SECONDARY: [ Button Text ] <- Gray background, dark text
+----------------+
DANGER: [ Button Text ] <- Red background, white text
+----------------+
SUCCESS: [ Button Text ] <- Green background, white text
+----------------+
OUTLINE: [ Button Text ] <- Border only, no fill
+----------------+
DISABLED: [ Button Text ] <- Grayed out, no interaction
+----------------+
SIZES:
SMALL: [ Sm ]
MEDIUM: [ Medium ]
LARGE: [ Large ]
Form Elements
+-----------------------------------------------------------------------------+
| FORM ELEMENTS |
+-----------------------------------------------------------------------------+
TEXT INPUT:
+----------------------------+
| Label |
| [________________________]|
| Helper text goes here |
+----------------------------+
SELECT:
+----------------------------+
| Label |
| [ Selected Option v ] |
+----------------------------+
CHECKBOX:
[X] Checked option
[ ] Unchecked option
RADIO:
(*) Selected option
( ) Unselected option
TOGGLE:
[ OFF |====] or [====| ON ]
SEARCH:
+-----------------------------------+
| [Q] Search... [GO] |
+-----------------------------------+
DATE PICKER:
+----------------------------+
| [12/29/2025 [C]] |
+----------------------------+
Status Indicators
+-----------------------------------------------------------------------------+
| STATUS INDICATORS |
+-----------------------------------------------------------------------------+
BADGES:
[Active] <- Green
[Pending] <- Yellow
[Inactive] <- Gray
[Error] <- Red
[New] <- Blue
ALERTS:
+-------------------------------------------+
| [i] Info: This is an informational alert |
+-------------------------------------------+
+-------------------------------------------+
| [!] Warning: This requires attention |
+-------------------------------------------+
+-------------------------------------------+
| [X] Error: Something went wrong |
+-------------------------------------------+
+-------------------------------------------+
| [*] Success: Operation completed |
+-------------------------------------------+
PROGRESS:
[==================== ] 65%
Loading... [==== ]
Data Display
+-----------------------------------------------------------------------------+
| DATA DISPLAY |
+-----------------------------------------------------------------------------+
TABLE:
+--------+----------------+--------+--------+
| Header | Header | Header | Header |
+--------+----------------+--------+--------+
| Data | Data | Data | Action |
| Data | Data | Data | Action |
| Data | Data | Data | Action |
+--------+----------------+--------+--------+
CARD:
+----------------------------+
| CARD TITLE |
| |
| Card content goes here |
| with supporting text. |
| |
| [ Action ] |
+----------------------------+
STAT BOX:
+-------------------+
| LABEL |
| $12,450 |
| +15% vs LY |
+-------------------+
LIST:
+----------------------------+
| > Item 1 |
| > Item 2 |
| > Item 3 |
+----------------------------+
These mockups provide the definitive visual reference for implementing the POS Platform user interface.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix D |
This appendix is part of the POS Blueprint Book. All content is self-contained.
Appendix E: Code Templates
Version: 1.0.0 Last Updated: December 29, 2025 Language: C# (.NET 8.0)
E.1 Overview
This appendix contains copy-paste code templates for common patterns in the POS Platform. All templates follow the established architecture and coding standards.
E.2 Table of Contents
- Entity Template
- Repository Interface Template
- Repository Implementation Template
- Service Interface Template
- Service Implementation Template
- Controller Template
- DTO Templates
- Validator Template
- Event Handler Template
- Integration Test Template
- Unit Test Template
- Domain Event Template
E.3 Entity Template
// File: src/POS.Core/Entities/Product.cs
using System;
using System.Collections.Generic;
namespace POS.Core.Entities;
/// <summary>
/// Represents a product in the catalog.
/// </summary>
public class Product : BaseEntity, IAuditableEntity, ITenantEntity
{
/// <summary>
/// Gets or sets the tenant identifier.
/// </summary>
public Guid TenantId { get; set; }
/// <summary>
/// Gets or sets the SKU (Stock Keeping Unit).
/// </summary>
public required string Sku { get; set; }
/// <summary>
/// Gets or sets the product name.
/// </summary>
public required string Name { get; set; }
/// <summary>
/// Gets or sets the product description.
/// </summary>
public string? Description { get; set; }
/// <summary>
/// Gets or sets the category identifier.
/// </summary>
public Guid? CategoryId { get; set; }
/// <summary>
/// Gets or sets the vendor identifier.
/// </summary>
public Guid? VendorId { get; set; }
/// <summary>
/// Gets or sets the base price.
/// </summary>
public decimal BasePrice { get; set; }
/// <summary>
/// Gets or sets the cost price.
/// </summary>
public decimal Cost { get; set; }
/// <summary>
/// Gets or sets the product status.
/// </summary>
public ProductStatus Status { get; set; } = ProductStatus.Active;
/// <summary>
/// Gets or sets the Shopify product ID for integration.
/// </summary>
public string? ShopifyProductId { get; set; }
// Navigation properties
public virtual Category? Category { get; set; }
public virtual Vendor? Vendor { get; set; }
public virtual ICollection<ProductVariant> Variants { get; set; } = new List<ProductVariant>();
public virtual ICollection<ProductImage> Images { get; set; } = new List<ProductImage>();
// Audit properties
public DateTime CreatedAt { get; set; }
public Guid? CreatedBy { get; set; }
public DateTime? UpdatedAt { get; set; }
public Guid? UpdatedBy { get; set; }
}
/// <summary>
/// Product status enumeration.
/// </summary>
public enum ProductStatus
{
Draft,
Active,
Discontinued,
Archived
}
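The entity template above depends on `BaseEntity`, `ITenantEntity`, and `IAuditableEntity` from `POS.Core`, which are not reproduced in this appendix. The following is a minimal sketch of what the template requires from them, inferred from how `Product` uses them; treat the exact member lists as assumptions, not the canonical definitions (namespace declarations omitted for brevity):

```csharp
using System;

// Hypothetical minimal base types implied by the entity template above.

/// <summary>Common identity for all entities; the Id is generated client-side.</summary>
public abstract class BaseEntity
{
    public Guid Id { get; set; } = Guid.NewGuid();
}

/// <summary>Marks an entity as scoped to a tenant (enforced via global query filters).</summary>
public interface ITenantEntity
{
    Guid TenantId { get; set; }
}

/// <summary>Audit columns stamped by the DbContext when changes are saved.</summary>
public interface IAuditableEntity
{
    DateTime CreatedAt { get; set; }
    Guid? CreatedBy { get; set; }
    DateTime? UpdatedAt { get; set; }
    Guid? UpdatedBy { get; set; }
}
```

Because `Product` declares the audit properties directly, `IAuditableEntity` here is assumed to be a plain property contract that a `SaveChanges` interceptor populates.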
E.4 Repository Interface Template
// File: src/POS.Core/Interfaces/Repositories/IProductRepository.cs
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using POS.Core.Entities;
namespace POS.Core.Interfaces.Repositories;
/// <summary>
/// Repository interface for Product entity operations.
/// </summary>
public interface IProductRepository : IRepository<Product>
{
/// <summary>
/// Gets a product by SKU.
/// </summary>
/// <param name="sku">The SKU to search for.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The product if found, null otherwise.</returns>
Task<Product?> GetBySkuAsync(string sku, CancellationToken cancellationToken = default);
/// <summary>
/// Gets products by category.
/// </summary>
/// <param name="categoryId">The category identifier.</param>
/// <param name="includeVariants">Whether to include variants.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>List of products in the category.</returns>
Task<IReadOnlyList<Product>> GetByCategoryAsync(
Guid categoryId,
bool includeVariants = false,
CancellationToken cancellationToken = default);
/// <summary>
/// Gets products by vendor.
/// </summary>
/// <param name="vendorId">The vendor identifier.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>List of products from the vendor.</returns>
Task<IReadOnlyList<Product>> GetByVendorAsync(
Guid vendorId,
CancellationToken cancellationToken = default);
/// <summary>
/// Searches products by name or SKU.
/// </summary>
/// <param name="searchTerm">The search term.</param>
/// <param name="page">Page number (1-based).</param>
/// <param name="pageSize">Items per page.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>Paginated list of matching products.</returns>
Task<PagedResult<Product>> SearchAsync(
string searchTerm,
int page = 1,
int pageSize = 20,
CancellationToken cancellationToken = default);
/// <summary>
/// Checks if a SKU exists.
/// </summary>
/// <param name="sku">The SKU to check.</param>
/// <param name="excludeProductId">Product ID to exclude from check.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>True if SKU exists, false otherwise.</returns>
Task<bool> SkuExistsAsync(
string sku,
Guid? excludeProductId = null,
CancellationToken cancellationToken = default);
}
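The repository contract returns `PagedResult<T>`, which is also not shown in this appendix. A plausible minimal shape, assuming a simple immutable record whose constructor matches how the templates call it (`Items`, `TotalCount`, `Page`, `PageSize`; namespace omitted for brevity):

```csharp
using System;
using System.Collections.Generic;

/// <summary>Hypothetical sketch: immutable wrapper for one page of query results.</summary>
public sealed record PagedResult<T>(
    IReadOnlyList<T> Items,
    int TotalCount,
    int Page,
    int PageSize)
{
    /// <summary>Total number of pages for the given page size.</summary>
    public int TotalPages => PageSize <= 0 ? 0 : (int)Math.Ceiling(TotalCount / (double)PageSize);

    public bool HasPreviousPage => Page > 1;
    public bool HasNextPage => Page < TotalPages;
}
```

For example, a catalog of 1,250 products at 20 per page (matching the Product Catalog mockup in Appendix D) yields `TotalPages == 63`.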
E.5 Repository Implementation Template
// File: src/POS.Infrastructure/Repositories/ProductRepository.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using POS.Core.Entities;
using POS.Core.Interfaces.Repositories;
using POS.Infrastructure.Data;
namespace POS.Infrastructure.Repositories;
/// <summary>
/// Repository implementation for Product entity.
/// </summary>
public class ProductRepository : Repository<Product>, IProductRepository
{
public ProductRepository(ApplicationDbContext context) : base(context)
{
}
/// <inheritdoc />
public async Task<Product?> GetBySkuAsync(
string sku,
CancellationToken cancellationToken = default)
{
return await _dbSet
.Include(p => p.Variants)
.Include(p => p.Category)
.FirstOrDefaultAsync(p => p.Sku == sku, cancellationToken);
}
/// <inheritdoc />
public async Task<IReadOnlyList<Product>> GetByCategoryAsync(
Guid categoryId,
bool includeVariants = false,
CancellationToken cancellationToken = default)
{
var query = _dbSet
.Where(p => p.CategoryId == categoryId)
.Where(p => p.Status == ProductStatus.Active);
if (includeVariants)
{
query = query.Include(p => p.Variants);
}
return await query
.OrderBy(p => p.Name)
.ToListAsync(cancellationToken);
}
/// <inheritdoc />
public async Task<IReadOnlyList<Product>> GetByVendorAsync(
Guid vendorId,
CancellationToken cancellationToken = default)
{
return await _dbSet
.Where(p => p.VendorId == vendorId)
.Include(p => p.Variants)
.OrderBy(p => p.Name)
.ToListAsync(cancellationToken);
}
/// <inheritdoc />
public async Task<PagedResult<Product>> SearchAsync(
string searchTerm,
int page = 1,
int pageSize = 20,
CancellationToken cancellationToken = default)
{
var query = _dbSet
.Where(p => p.Status == ProductStatus.Active)
// NOTE: EF.Functions.ILike is PostgreSQL-specific (Npgsql EF Core provider);
// substitute EF.Functions.Like for other database providers.
.Where(p => EF.Functions.ILike(p.Name, $"%{searchTerm}%") ||
EF.Functions.ILike(p.Sku, $"%{searchTerm}%"));
var totalCount = await query.CountAsync(cancellationToken);
var items = await query
.Include(p => p.Variants)
.OrderBy(p => p.Name)
.Skip((page - 1) * pageSize)
.Take(pageSize)
.ToListAsync(cancellationToken);
return new PagedResult<Product>(items, totalCount, page, pageSize);
}
/// <inheritdoc />
public async Task<bool> SkuExistsAsync(
string sku,
Guid? excludeProductId = null,
CancellationToken cancellationToken = default)
{
var query = _dbSet.Where(p => p.Sku == sku);
if (excludeProductId.HasValue)
{
query = query.Where(p => p.Id != excludeProductId.Value);
}
return await query.AnyAsync(cancellationToken);
}
}
E.6 Service Interface Template
// File: src/POS.Core/Interfaces/Services/IProductService.cs
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using POS.Core.DTOs;
namespace POS.Core.Interfaces.Services;
/// <summary>
/// Service interface for product operations.
/// </summary>
public interface IProductService
{
/// <summary>
/// Gets a product by ID.
/// </summary>
Task<ProductDto?> GetByIdAsync(Guid id, CancellationToken cancellationToken = default);
/// <summary>
/// Gets a product by SKU.
/// </summary>
Task<ProductDto?> GetBySkuAsync(string sku, CancellationToken cancellationToken = default);
/// <summary>
/// Gets all products with optional filtering.
/// </summary>
Task<PagedResult<ProductDto>> GetAllAsync(
ProductFilterDto filter,
CancellationToken cancellationToken = default);
/// <summary>
/// Creates a new product.
/// </summary>
Task<ProductDto> CreateAsync(
CreateProductDto dto,
CancellationToken cancellationToken = default);
/// <summary>
/// Updates an existing product.
/// </summary>
Task<ProductDto> UpdateAsync(
Guid id,
UpdateProductDto dto,
CancellationToken cancellationToken = default);
/// <summary>
/// Deletes a product (soft delete).
/// </summary>
Task DeleteAsync(Guid id, CancellationToken cancellationToken = default);
/// <summary>
/// Adds a variant to a product.
/// </summary>
Task<ProductVariantDto> AddVariantAsync(
Guid productId,
CreateVariantDto dto,
CancellationToken cancellationToken = default);
/// <summary>
/// Updates a product variant.
/// </summary>
Task<ProductVariantDto> UpdateVariantAsync(
Guid variantId,
UpdateVariantDto dto,
CancellationToken cancellationToken = default);
/// <summary>
/// Searches products.
/// </summary>
Task<PagedResult<ProductDto>> SearchAsync(
string searchTerm,
int page = 1,
int pageSize = 20,
CancellationToken cancellationToken = default);
}
E.7 Service Implementation Template
// File: src/POS.Application/Services/ProductService.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using FluentValidation;
using Microsoft.Extensions.Logging;
using POS.Core.DTOs;
using POS.Core.Entities;
using POS.Core.Exceptions;
using POS.Core.Interfaces.Repositories;
using POS.Core.Interfaces.Services;
namespace POS.Application.Services;
/// <summary>
/// Service implementation for product operations.
/// </summary>
public class ProductService : IProductService
{
private readonly IProductRepository _productRepository;
private readonly IUnitOfWork _unitOfWork;
private readonly IMapper _mapper;
private readonly IValidator<CreateProductDto> _createValidator;
private readonly IValidator<UpdateProductDto> _updateValidator;
private readonly ILogger<ProductService> _logger;
private readonly IDomainEventDispatcher _eventDispatcher;
public ProductService(
IProductRepository productRepository,
IUnitOfWork unitOfWork,
IMapper mapper,
IValidator<CreateProductDto> createValidator,
IValidator<UpdateProductDto> updateValidator,
ILogger<ProductService> logger,
IDomainEventDispatcher eventDispatcher)
{
_productRepository = productRepository;
_unitOfWork = unitOfWork;
_mapper = mapper;
_createValidator = createValidator;
_updateValidator = updateValidator;
_logger = logger;
_eventDispatcher = eventDispatcher;
}
/// <inheritdoc />
public async Task<ProductDto?> GetByIdAsync(
Guid id,
CancellationToken cancellationToken = default)
{
var product = await _productRepository.GetByIdAsync(id, cancellationToken);
return product is null ? null : _mapper.Map<ProductDto>(product);
}
/// <inheritdoc />
public async Task<ProductDto?> GetBySkuAsync(
string sku,
CancellationToken cancellationToken = default)
{
var product = await _productRepository.GetBySkuAsync(sku, cancellationToken);
return product is null ? null : _mapper.Map<ProductDto>(product);
}
/// <inheritdoc />
public async Task<PagedResult<ProductDto>> GetAllAsync(
ProductFilterDto filter,
CancellationToken cancellationToken = default)
{
var result = await _productRepository.SearchAsync(
filter.SearchTerm ?? "",
filter.Page,
filter.PageSize,
cancellationToken);
return new PagedResult<ProductDto>(
_mapper.Map<List<ProductDto>>(result.Items),
result.TotalCount,
result.Page,
result.PageSize);
}
/// <inheritdoc />
public async Task<ProductDto> CreateAsync(
CreateProductDto dto,
CancellationToken cancellationToken = default)
{
// Validate
var validationResult = await _createValidator.ValidateAsync(dto, cancellationToken);
if (!validationResult.IsValid)
{
throw new ValidationException(validationResult.Errors);
}
// Check SKU uniqueness
if (await _productRepository.SkuExistsAsync(dto.Sku, null, cancellationToken))
{
throw new BusinessException($"SKU '{dto.Sku}' already exists.");
}
// Create entity
var product = _mapper.Map<Product>(dto);
product.Status = ProductStatus.Active;
await _productRepository.AddAsync(product, cancellationToken);
await _unitOfWork.SaveChangesAsync(cancellationToken);
_logger.LogInformation("Product created: {ProductId} - {Sku}", product.Id, product.Sku);
// Dispatch domain event
await _eventDispatcher.DispatchAsync(new ProductCreatedEvent(product.Id, product.Sku));
return _mapper.Map<ProductDto>(product);
}
/// <inheritdoc />
public async Task<ProductDto> UpdateAsync(
Guid id,
UpdateProductDto dto,
CancellationToken cancellationToken = default)
{
// Validate
var validationResult = await _updateValidator.ValidateAsync(dto, cancellationToken);
if (!validationResult.IsValid)
{
throw new ValidationException(validationResult.Errors);
}
// Get existing product
var product = await _productRepository.GetByIdAsync(id, cancellationToken);
if (product is null)
{
throw new NotFoundException($"Product with ID {id} not found.");
}
// Check SKU uniqueness if changed
if (dto.Sku != product.Sku &&
await _productRepository.SkuExistsAsync(dto.Sku, id, cancellationToken))
{
throw new BusinessException($"SKU '{dto.Sku}' already exists.");
}
// Update entity
_mapper.Map(dto, product);
_productRepository.Update(product);
await _unitOfWork.SaveChangesAsync(cancellationToken);
_logger.LogInformation("Product updated: {ProductId} - {Sku}", product.Id, product.Sku);
return _mapper.Map<ProductDto>(product);
}
/// <inheritdoc />
public async Task DeleteAsync(Guid id, CancellationToken cancellationToken = default)
{
var product = await _productRepository.GetByIdAsync(id, cancellationToken);
if (product is null)
{
throw new NotFoundException($"Product with ID {id} not found.");
}
// Soft delete - change status
product.Status = ProductStatus.Archived;
_productRepository.Update(product);
await _unitOfWork.SaveChangesAsync(cancellationToken);
_logger.LogInformation("Product archived: {ProductId} - {Sku}", product.Id, product.Sku);
}
/// <inheritdoc />
public async Task<ProductVariantDto> AddVariantAsync(
Guid productId,
CreateVariantDto dto,
CancellationToken cancellationToken = default)
{
var product = await _productRepository.GetByIdAsync(productId, cancellationToken);
if (product is null)
{
throw new NotFoundException($"Product with ID {productId} not found.");
}
var variant = _mapper.Map<ProductVariant>(dto);
variant.ProductId = productId;
product.Variants.Add(variant);
await _unitOfWork.SaveChangesAsync(cancellationToken);
_logger.LogInformation("Variant added: {VariantId} to Product {ProductId}",
variant.Id, productId);
return _mapper.Map<ProductVariantDto>(variant);
}
/// <inheritdoc />
public Task<ProductVariantDto> UpdateVariantAsync(
Guid variantId,
UpdateVariantDto dto,
CancellationToken cancellationToken = default)
{
// Implementation similar to UpdateAsync
throw new NotImplementedException();
}
/// <inheritdoc />
public async Task<PagedResult<ProductDto>> SearchAsync(
string searchTerm,
int page = 1,
int pageSize = 20,
CancellationToken cancellationToken = default)
{
var result = await _productRepository.SearchAsync(
searchTerm, page, pageSize, cancellationToken);
return new PagedResult<ProductDto>(
_mapper.Map<List<ProductDto>>(result.Items),
result.TotalCount,
result.Page,
result.PageSize);
}
}
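The service template throws `NotFoundException` and `BusinessException` from `POS.Core.Exceptions`, which are not reproduced in this appendix. A minimal sketch, assuming they are plain `Exception` subclasses that API exception-handling middleware later maps to HTTP 404 and 400/422 responses (namespace omitted for brevity):

```csharp
using System;

// Hypothetical minimal versions of the exception types used by the service template.

/// <summary>Thrown when a requested entity does not exist; typically mapped to HTTP 404.</summary>
public class NotFoundException : Exception
{
    public NotFoundException(string message) : base(message) { }
}

/// <summary>Thrown when a business rule is violated (e.g., duplicate SKU);
/// typically mapped to HTTP 400 or 422.</summary>
public class BusinessException : Exception
{
    public BusinessException(string message) : base(message) { }
}
```

Keeping these as distinct types lets a single exception-handling middleware translate service-layer failures into the correct status codes without the services referencing ASP.NET Core.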
E.8 Controller Template
// File: src/POS.API/Controllers/ProductsController.cs
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using POS.Core.DTOs;
using POS.Core.Interfaces.Services;
namespace POS.API.Controllers;
/// <summary>
/// API controller for product operations.
/// </summary>
[ApiController]
[Route("api/v1/[controller]")]
[Authorize]
[Produces("application/json")]
public class ProductsController : ControllerBase
{
private readonly IProductService _productService;
private readonly ILogger<ProductsController> _logger;
public ProductsController(
IProductService productService,
ILogger<ProductsController> logger)
{
_productService = productService;
_logger = logger;
}
/// <summary>
/// Gets all products with optional filtering.
/// </summary>
/// <param name="filter">Filter parameters.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>Paginated list of products.</returns>
[HttpGet]
[ProducesResponseType(typeof(PagedResult<ProductDto>), StatusCodes.Status200OK)]
public async Task<ActionResult<PagedResult<ProductDto>>> GetAll(
[FromQuery] ProductFilterDto filter,
CancellationToken cancellationToken)
{
var result = await _productService.GetAllAsync(filter, cancellationToken);
return Ok(result);
}
/// <summary>
/// Gets a product by ID.
/// </summary>
/// <param name="id">The product ID.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The product if found.</returns>
[HttpGet("{id:guid}")]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<ActionResult<ProductDto>> GetById(
Guid id,
CancellationToken cancellationToken)
{
var product = await _productService.GetByIdAsync(id, cancellationToken);
if (product is null)
{
return NotFound();
}
return Ok(product);
}
/// <summary>
/// Gets a product by SKU.
/// </summary>
/// <param name="sku">The product SKU.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The product if found.</returns>
[HttpGet("sku/{sku}")]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<ActionResult<ProductDto>> GetBySku(
string sku,
CancellationToken cancellationToken)
{
var product = await _productService.GetBySkuAsync(sku, cancellationToken);
if (product is null)
{
return NotFound();
}
return Ok(product);
}
/// <summary>
/// Creates a new product.
/// </summary>
/// <param name="dto">The product data.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The created product.</returns>
[HttpPost]
[Authorize(Policy = "CanManageProducts")]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status201Created)]
[ProducesResponseType(typeof(ValidationProblemDetails), StatusCodes.Status400BadRequest)]
[ProducesResponseType(typeof(ProblemDetails), StatusCodes.Status409Conflict)]
public async Task<ActionResult<ProductDto>> Create(
[FromBody] CreateProductDto dto,
CancellationToken cancellationToken)
{
var product = await _productService.CreateAsync(dto, cancellationToken);
return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
}
/// <summary>
/// Updates an existing product.
/// </summary>
/// <param name="id">The product ID.</param>
/// <param name="dto">The updated product data.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The updated product.</returns>
[HttpPut("{id:guid}")]
[Authorize(Policy = "CanManageProducts")]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(ValidationProblemDetails), StatusCodes.Status400BadRequest)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<ActionResult<ProductDto>> Update(
Guid id,
[FromBody] UpdateProductDto dto,
CancellationToken cancellationToken)
{
var product = await _productService.UpdateAsync(id, dto, cancellationToken);
return Ok(product);
}
/// <summary>
/// Deletes a product (soft delete).
/// </summary>
/// <param name="id">The product ID.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>No content on success.</returns>
[HttpDelete("{id:guid}")]
[Authorize(Policy = "CanManageProducts")]
[ProducesResponseType(StatusCodes.Status204NoContent)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<IActionResult> Delete(
Guid id,
CancellationToken cancellationToken)
{
await _productService.DeleteAsync(id, cancellationToken);
return NoContent();
}
/// <summary>
/// Adds a variant to a product.
/// </summary>
/// <param name="id">The product ID.</param>
/// <param name="dto">The variant data.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The created variant.</returns>
[HttpPost("{id:guid}/variants")]
[Authorize(Policy = "CanManageProducts")]
[ProducesResponseType(typeof(ProductVariantDto), StatusCodes.Status201Created)]
[ProducesResponseType(typeof(ValidationProblemDetails), StatusCodes.Status400BadRequest)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<ActionResult<ProductVariantDto>> AddVariant(
Guid id,
[FromBody] CreateVariantDto dto,
CancellationToken cancellationToken)
{
var variant = await _productService.AddVariantAsync(id, dto, cancellationToken);
return CreatedAtAction(nameof(GetById), new { id }, variant);
}
/// <summary>
/// Searches products by name or SKU.
/// </summary>
/// <param name="q">Search query.</param>
/// <param name="page">Page number.</param>
/// <param name="pageSize">Page size.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>Matching products.</returns>
[HttpGet("search")]
[ProducesResponseType(typeof(PagedResult<ProductDto>), StatusCodes.Status200OK)]
public async Task<ActionResult<PagedResult<ProductDto>>> Search(
[FromQuery] string q,
[FromQuery] int page = 1,
[FromQuery] int pageSize = 20,
CancellationToken cancellationToken = default)
{
var result = await _productService.SearchAsync(q, page, pageSize, cancellationToken);
return Ok(result);
}
}
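To make the controller's contract concrete, here is what a successful create call looks like on the wire. The token, GUID, and JSON property casing are illustrative; actual casing and URL casing follow the host's System.Text.Json and routing configuration.

```http
POST /api/v1/products HTTP/1.1
Authorization: Bearer <access-token>
Content-Type: application/json

{
  "sku": "TEST-001",
  "name": "Test Product",
  "basePrice": 29.99,
  "cost": 12.50
}

HTTP/1.1 201 Created
Location: /api/v1/products/3fa85f64-5717-4562-b3fc-2c963f66afa6
Content-Type: application/json

{
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "sku": "TEST-001",
  "name": "Test Product",
  "status": "Active",
  "basePrice": 29.99,
  "cost": 12.50
}
```

Note the `Location` header produced by `CreatedAtAction(nameof(GetById), ...)`: clients can follow it directly to fetch the new resource.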
E.9 DTO Templates
// File: src/POS.Core/DTOs/ProductDtos.cs
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
namespace POS.Core.DTOs;
/// <summary>
/// Product data transfer object.
/// </summary>
public record ProductDto
{
public Guid Id { get; init; }
public required string Sku { get; init; }
public required string Name { get; init; }
public string? Description { get; init; }
public Guid? CategoryId { get; init; }
public string? CategoryName { get; init; }
public Guid? VendorId { get; init; }
public string? VendorName { get; init; }
public decimal BasePrice { get; init; }
public decimal Cost { get; init; }
public string Status { get; init; } = "Active";
public List<ProductVariantDto> Variants { get; init; } = new();
public List<ProductImageDto> Images { get; init; } = new();
public DateTime CreatedAt { get; init; }
public DateTime? UpdatedAt { get; init; }
}
/// <summary>
/// Product variant data transfer object.
/// </summary>
public record ProductVariantDto
{
public Guid Id { get; init; }
public required string Sku { get; init; }
public string? Barcode { get; init; }
public Dictionary<string, string> Options { get; init; } = new();
public decimal Price { get; init; }
public decimal? CompareAtPrice { get; init; }
public decimal Cost { get; init; }
public bool IsActive { get; init; }
}
/// <summary>
/// Product image data transfer object.
/// </summary>
public record ProductImageDto
{
public Guid Id { get; init; }
public required string Url { get; init; }
public string? AltText { get; init; }
public int Position { get; init; }
}
/// <summary>
/// DTO for creating a new product.
/// </summary>
public record CreateProductDto
{
[Required]
[StringLength(50)]
public required string Sku { get; init; }
[Required]
[StringLength(255)]
public required string Name { get; init; }
[StringLength(2000)]
public string? Description { get; init; }
public Guid? CategoryId { get; init; }
public Guid? VendorId { get; init; }
[Range(0, 999999.99)]
public decimal BasePrice { get; init; }
[Range(0, 999999.99)]
public decimal Cost { get; init; }
public List<CreateVariantDto>? Variants { get; init; }
}
/// <summary>
/// DTO for updating an existing product.
/// </summary>
public record UpdateProductDto
{
[Required]
[StringLength(50)]
public required string Sku { get; init; }
[Required]
[StringLength(255)]
public required string Name { get; init; }
[StringLength(2000)]
public string? Description { get; init; }
public Guid? CategoryId { get; init; }
public Guid? VendorId { get; init; }
[Range(0, 999999.99)]
public decimal BasePrice { get; init; }
[Range(0, 999999.99)]
public decimal Cost { get; init; }
public string? Status { get; init; }
}
/// <summary>
/// DTO for creating a product variant.
/// </summary>
public record CreateVariantDto
{
[Required]
[StringLength(50)]
public required string Sku { get; init; }
[StringLength(50)]
public string? Barcode { get; init; }
public Dictionary<string, string> Options { get; init; } = new();
[Range(0, 999999.99)]
public decimal Price { get; init; }
[Range(0, 999999.99)]
public decimal? CompareAtPrice { get; init; }
[Range(0, 999999.99)]
public decimal Cost { get; init; }
}
/// <summary>
/// DTO for updating a product variant.
/// </summary>
public record UpdateVariantDto
{
[Required]
[StringLength(50)]
public required string Sku { get; init; }
[StringLength(50)]
public string? Barcode { get; init; }
public Dictionary<string, string>? Options { get; init; }
[Range(0, 999999.99)]
public decimal? Price { get; init; }
[Range(0, 999999.99)]
public decimal? CompareAtPrice { get; init; }
[Range(0, 999999.99)]
public decimal? Cost { get; init; }
public bool? IsActive { get; init; }
}
/// <summary>
/// Product filter DTO.
/// </summary>
public record ProductFilterDto
{
public string? SearchTerm { get; init; }
public Guid? CategoryId { get; init; }
public Guid? VendorId { get; init; }
public string? Status { get; init; }
[Range(1, int.MaxValue)]
public int Page { get; init; } = 1;
[Range(1, 100)]
public int PageSize { get; init; } = 20;
}
/// <summary>
/// Paginated result wrapper.
/// </summary>
public record PagedResult<T>
{
public IReadOnlyList<T> Items { get; init; }
public int TotalCount { get; init; }
public int Page { get; init; }
public int PageSize { get; init; }
public int TotalPages => (int)Math.Ceiling(TotalCount / (double)PageSize);
public bool HasNextPage => Page < TotalPages;
public bool HasPreviousPage => Page > 1;
public PagedResult(IReadOnlyList<T> items, int totalCount, int page, int pageSize)
{
Items = items;
TotalCount = totalCount;
Page = page;
PageSize = pageSize;
}
}
E.10 Validator Template
// File: src/POS.Application/Validators/CreateProductValidator.cs
using System;
using System.Threading;
using System.Threading.Tasks;
using FluentValidation;
using POS.Core.DTOs;
using POS.Core.Interfaces.Repositories;
namespace POS.Application.Validators;
/// <summary>
/// Validator for CreateProductDto.
/// </summary>
public class CreateProductValidator : AbstractValidator<CreateProductDto>
{
private readonly IProductRepository _productRepository;
private readonly ICategoryRepository _categoryRepository;
public CreateProductValidator(
IProductRepository productRepository,
ICategoryRepository categoryRepository)
{
_productRepository = productRepository;
_categoryRepository = categoryRepository;
RuleFor(x => x.Sku)
.NotEmpty()
.WithMessage("SKU is required.")
.MaximumLength(50)
.WithMessage("SKU cannot exceed 50 characters.")
.Matches(@"^[A-Z0-9\-]+$")
.WithMessage("SKU must contain only uppercase letters, numbers, and hyphens.")
.MustAsync(BeUniqueSku)
.WithMessage("SKU already exists.");
RuleFor(x => x.Name)
.NotEmpty()
.WithMessage("Product name is required.")
.MaximumLength(255)
.WithMessage("Product name cannot exceed 255 characters.");
RuleFor(x => x.Description)
.MaximumLength(2000)
.WithMessage("Description cannot exceed 2000 characters.");
RuleFor(x => x.BasePrice)
.GreaterThanOrEqualTo(0)
.WithMessage("Base price must be zero or greater.");
RuleFor(x => x.Cost)
.GreaterThanOrEqualTo(0)
.WithMessage("Cost must be zero or greater.")
.LessThanOrEqualTo(x => x.BasePrice)
.When(x => x.BasePrice > 0)
.WithMessage("Cost should not exceed the base price.");
RuleFor(x => x.CategoryId)
.MustAsync(CategoryExists)
.When(x => x.CategoryId.HasValue)
.WithMessage("Category does not exist.");
RuleForEach(x => x.Variants)
.SetValidator(new CreateVariantValidator());
}
private async Task<bool> BeUniqueSku(string sku, CancellationToken cancellationToken)
{
return !await _productRepository.SkuExistsAsync(sku, null, cancellationToken);
}
private async Task<bool> CategoryExists(Guid? categoryId, CancellationToken cancellationToken)
{
if (!categoryId.HasValue) return true;
return await _categoryRepository.ExistsAsync(categoryId.Value, cancellationToken);
}
}
/// <summary>
/// Validator for CreateVariantDto.
/// </summary>
public class CreateVariantValidator : AbstractValidator<CreateVariantDto>
{
public CreateVariantValidator()
{
RuleFor(x => x.Sku)
.NotEmpty()
.WithMessage("Variant SKU is required.")
.MaximumLength(50)
.WithMessage("Variant SKU cannot exceed 50 characters.");
RuleFor(x => x.Barcode)
.MaximumLength(50)
.WithMessage("Barcode cannot exceed 50 characters.")
.Matches(@"^[0-9]*$")
.When(x => !string.IsNullOrEmpty(x.Barcode))
.WithMessage("Barcode must contain only numbers.");
RuleFor(x => x.Price)
.GreaterThanOrEqualTo(0)
.WithMessage("Price must be zero or greater.");
RuleFor(x => x.CompareAtPrice)
.GreaterThan(x => x.Price)
.When(x => x.CompareAtPrice.HasValue)
.WithMessage("Compare at price must be greater than regular price.");
RuleFor(x => x.Cost)
.GreaterThanOrEqualTo(0)
.WithMessage("Cost must be zero or greater.");
}
}
E.11 Event Handler Template
// File: src/POS.Application/EventHandlers/OrderCompletedEventHandler.cs
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Logging;
using POS.Core.Events;
using POS.Core.Interfaces.Services;
namespace POS.Application.EventHandlers;
/// <summary>
/// Handles the OrderCompleted domain event.
/// </summary>
public class OrderCompletedEventHandler : INotificationHandler<OrderCompletedEvent>
{
private readonly IInventoryService _inventoryService;
private readonly ILoyaltyService _loyaltyService;
private readonly IAnalyticsService _analyticsService;
private readonly INotificationService _notificationService;
private readonly ILogger<OrderCompletedEventHandler> _logger;
public OrderCompletedEventHandler(
IInventoryService inventoryService,
ILoyaltyService loyaltyService,
IAnalyticsService analyticsService,
INotificationService notificationService,
ILogger<OrderCompletedEventHandler> logger)
{
_inventoryService = inventoryService;
_loyaltyService = loyaltyService;
_analyticsService = analyticsService;
_notificationService = notificationService;
_logger = logger;
}
/// <summary>
/// Handles the OrderCompleted event.
/// </summary>
public async Task Handle(
OrderCompletedEvent notification,
CancellationToken cancellationToken)
{
_logger.LogInformation(
"Processing OrderCompleted event for Order {OrderId}",
notification.OrderId);
try
{
// Commit inventory reservations
await _inventoryService.CommitReservationsAsync(
notification.OrderId,
notification.LineItems,
cancellationToken);
// Award loyalty points if customer attached
if (notification.CustomerId.HasValue)
{
await _loyaltyService.AwardPointsAsync(
notification.CustomerId.Value,
notification.OrderId,
notification.Total,
cancellationToken);
}
// Record analytics
await _analyticsService.RecordSaleAsync(
notification.OrderId,
notification.LocationId,
notification.Total,
notification.LineItems.Count,
cancellationToken);
// Send receipt notification if requested
if (notification.SendReceipt)
{
await _notificationService.SendReceiptAsync(
notification.OrderId,
notification.CustomerEmail,
notification.ReceiptMethod,
cancellationToken);
}
_logger.LogInformation(
"Successfully processed OrderCompleted event for Order {OrderId}",
notification.OrderId);
}
catch (Exception ex)
{
_logger.LogError(
ex,
"Error processing OrderCompleted event for Order {OrderId}",
notification.OrderId);
// Re-throw to trigger retry logic
throw;
}
}
}
E.12 Integration Test Template
// File: tests/POS.IntegrationTests/Controllers/ProductsControllerTests.cs
using System;
using System.Net;
using System.Net.Http.Json;
using System.Threading.Tasks;
using FluentAssertions;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.DependencyInjection;
using POS.API;
using POS.Core.DTOs;
using POS.IntegrationTests.Fixtures;
using Xunit;
namespace POS.IntegrationTests.Controllers;
/// <summary>
/// Integration tests for ProductsController.
/// </summary>
[Collection("Database")]
public class ProductsControllerTests : IClassFixture<WebApplicationFactory<Program>>, IAsyncLifetime
{
private readonly WebApplicationFactory<Program> _factory;
private readonly HttpClient _client;
private readonly DatabaseFixture _dbFixture;
public ProductsControllerTests(
WebApplicationFactory<Program> factory,
DatabaseFixture dbFixture)
{
_factory = factory.WithWebHostBuilder(builder =>
{
builder.ConfigureServices(services =>
{
// Configure test database
dbFixture.ConfigureServices(services);
});
});
_client = _factory.CreateClient();
_dbFixture = dbFixture;
}
public async Task InitializeAsync()
{
await _dbFixture.ResetDatabaseAsync();
await AuthenticateAsync();
}
public Task DisposeAsync() => Task.CompletedTask;
private async Task AuthenticateAsync()
{
var loginDto = new { Email = "test@example.com", Password = "Test123!" };
var response = await _client.PostAsJsonAsync("/api/v1/auth/login", loginDto);
var result = await response.Content.ReadFromJsonAsync<LoginResult>();
_client.DefaultRequestHeaders.Authorization =
new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", result?.Token);
}
[Fact]
public async Task GetAll_ReturnsProducts()
{
// Arrange
await SeedProductsAsync();
// Act
var response = await _client.GetAsync("/api/v1/products");
// Assert
response.StatusCode.Should().Be(HttpStatusCode.OK);
var result = await response.Content.ReadFromJsonAsync<PagedResult<ProductDto>>();
result.Should().NotBeNull();
result!.Items.Should().NotBeEmpty();
result.TotalCount.Should().BeGreaterThan(0);
}
[Fact]
public async Task GetById_ExistingProduct_ReturnsProduct()
{
// Arrange
var productId = await CreateTestProductAsync();
// Act
var response = await _client.GetAsync($"/api/v1/products/{productId}");
// Assert
response.StatusCode.Should().Be(HttpStatusCode.OK);
var product = await response.Content.ReadFromJsonAsync<ProductDto>();
product.Should().NotBeNull();
product!.Id.Should().Be(productId);
}
[Fact]
public async Task GetById_NonExistingProduct_ReturnsNotFound()
{
// Arrange
var nonExistingId = Guid.NewGuid();
// Act
var response = await _client.GetAsync($"/api/v1/products/{nonExistingId}");
// Assert
response.StatusCode.Should().Be(HttpStatusCode.NotFound);
}
[Fact]
public async Task Create_ValidProduct_ReturnsCreated()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "TEST-001",
Name = "Test Product",
Description = "A test product",
BasePrice = 29.99m,
Cost = 12.50m
};
// Act
var response = await _client.PostAsJsonAsync("/api/v1/products", createDto);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.Created);
var product = await response.Content.ReadFromJsonAsync<ProductDto>();
product.Should().NotBeNull();
product!.Sku.Should().Be("TEST-001");
product.Name.Should().Be("Test Product");
// Verify location header
response.Headers.Location.Should().NotBeNull();
}
[Fact]
public async Task Create_DuplicateSku_ReturnsConflict()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "DUPLICATE-SKU",
Name = "First Product",
BasePrice = 29.99m,
Cost = 12.50m
};
await _client.PostAsJsonAsync("/api/v1/products", createDto);
var duplicateDto = new CreateProductDto
{
Sku = "DUPLICATE-SKU",
Name = "Second Product",
BasePrice = 39.99m,
Cost = 15.00m
};
// Act
var response = await _client.PostAsJsonAsync("/api/v1/products", duplicateDto);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.Conflict);
}
[Fact]
public async Task Create_InvalidData_ReturnsBadRequest()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "", // Invalid - empty
Name = "", // Invalid - empty
BasePrice = -10m, // Invalid - negative
Cost = 12.50m
};
// Act
var response = await _client.PostAsJsonAsync("/api/v1/products", createDto);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.BadRequest);
}
[Fact]
public async Task Update_ValidData_ReturnsUpdatedProduct()
{
// Arrange
var productId = await CreateTestProductAsync();
var updateDto = new UpdateProductDto
{
Sku = "UPDATED-SKU",
Name = "Updated Product Name",
BasePrice = 39.99m,
Cost = 15.00m
};
// Act
var response = await _client.PutAsJsonAsync(
$"/api/v1/products/{productId}",
updateDto);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.OK);
var product = await response.Content.ReadFromJsonAsync<ProductDto>();
product!.Name.Should().Be("Updated Product Name");
product.BasePrice.Should().Be(39.99m);
}
[Fact]
public async Task Delete_ExistingProduct_ReturnsNoContent()
{
// Arrange
var productId = await CreateTestProductAsync();
// Act
var response = await _client.DeleteAsync($"/api/v1/products/{productId}");
// Assert
response.StatusCode.Should().Be(HttpStatusCode.NoContent);
// Verify product is soft-deleted
var getResponse = await _client.GetAsync($"/api/v1/products/{productId}");
var product = await getResponse.Content.ReadFromJsonAsync<ProductDto>();
product!.Status.Should().Be("Archived");
}
private async Task SeedProductsAsync()
{
for (int i = 1; i <= 5; i++)
{
var dto = new CreateProductDto
{
Sku = $"SEED-{i:D3}",
Name = $"Seeded Product {i}",
BasePrice = 29.99m,
Cost = 12.50m
};
await _client.PostAsJsonAsync("/api/v1/products", dto);
}
}
private async Task<Guid> CreateTestProductAsync()
{
var dto = new CreateProductDto
{
Sku = $"TEST-{Guid.NewGuid():N}".Substring(0, 20),
Name = "Test Product",
BasePrice = 29.99m,
Cost = 12.50m
};
var response = await _client.PostAsJsonAsync("/api/v1/products", dto);
var product = await response.Content.ReadFromJsonAsync<ProductDto>();
return product!.Id;
}
}
record LoginResult(string Token);
E.13 Unit Test Template
// File: tests/POS.UnitTests/Services/ProductServiceTests.cs
using System;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using FluentAssertions;
using FluentValidation;
using FluentValidation.Results;
using Microsoft.Extensions.Logging;
using Moq;
using POS.Application.Services;
using POS.Core.DTOs;
using POS.Core.Entities;
using POS.Core.Events;
using POS.Core.Exceptions;
using POS.Core.Interfaces.Repositories;
using POS.Core.Interfaces.Services;
using Xunit;
namespace POS.UnitTests.Services;
/// <summary>
/// Unit tests for ProductService.
/// </summary>
public class ProductServiceTests
{
private readonly Mock<IProductRepository> _productRepositoryMock;
private readonly Mock<IUnitOfWork> _unitOfWorkMock;
private readonly Mock<IMapper> _mapperMock;
private readonly Mock<IValidator<CreateProductDto>> _createValidatorMock;
private readonly Mock<IValidator<UpdateProductDto>> _updateValidatorMock;
private readonly Mock<ILogger<ProductService>> _loggerMock;
private readonly Mock<IDomainEventDispatcher> _eventDispatcherMock;
private readonly ProductService _sut;
public ProductServiceTests()
{
_productRepositoryMock = new Mock<IProductRepository>();
_unitOfWorkMock = new Mock<IUnitOfWork>();
_mapperMock = new Mock<IMapper>();
_createValidatorMock = new Mock<IValidator<CreateProductDto>>();
_updateValidatorMock = new Mock<IValidator<UpdateProductDto>>();
_loggerMock = new Mock<ILogger<ProductService>>();
_eventDispatcherMock = new Mock<IDomainEventDispatcher>();
_sut = new ProductService(
_productRepositoryMock.Object,
_unitOfWorkMock.Object,
_mapperMock.Object,
_createValidatorMock.Object,
_updateValidatorMock.Object,
_loggerMock.Object,
_eventDispatcherMock.Object);
}
[Fact]
public async Task GetByIdAsync_ExistingProduct_ReturnsProductDto()
{
// Arrange
var productId = Guid.NewGuid();
var product = new Product
{
Id = productId,
Sku = "TEST-001",
Name = "Test Product"
};
var productDto = new ProductDto
{
Id = productId,
Sku = "TEST-001",
Name = "Test Product"
};
_productRepositoryMock
.Setup(x => x.GetByIdAsync(productId, It.IsAny<CancellationToken>()))
.ReturnsAsync(product);
_mapperMock
.Setup(x => x.Map<ProductDto>(product))
.Returns(productDto);
// Act
var result = await _sut.GetByIdAsync(productId);
// Assert
result.Should().NotBeNull();
result!.Id.Should().Be(productId);
result.Sku.Should().Be("TEST-001");
}
[Fact]
public async Task GetByIdAsync_NonExistingProduct_ReturnsNull()
{
// Arrange
var productId = Guid.NewGuid();
_productRepositoryMock
.Setup(x => x.GetByIdAsync(productId, It.IsAny<CancellationToken>()))
.ReturnsAsync((Product?)null);
// Act
var result = await _sut.GetByIdAsync(productId);
// Assert
result.Should().BeNull();
}
[Fact]
public async Task CreateAsync_ValidDto_CreatesAndReturnsProduct()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "NEW-001",
Name = "New Product",
BasePrice = 29.99m,
Cost = 12.50m
};
var product = new Product
{
Id = Guid.NewGuid(),
Sku = "NEW-001",
Name = "New Product"
};
var productDto = new ProductDto
{
Id = product.Id,
Sku = "NEW-001",
Name = "New Product"
};
_createValidatorMock
.Setup(x => x.ValidateAsync(createDto, It.IsAny<CancellationToken>()))
.ReturnsAsync(new ValidationResult());
_productRepositoryMock
.Setup(x => x.SkuExistsAsync("NEW-001", null, It.IsAny<CancellationToken>()))
.ReturnsAsync(false);
_mapperMock
.Setup(x => x.Map<Product>(createDto))
.Returns(product);
_mapperMock
.Setup(x => x.Map<ProductDto>(product))
.Returns(productDto);
// Act
var result = await _sut.CreateAsync(createDto);
// Assert
result.Should().NotBeNull();
result.Sku.Should().Be("NEW-001");
_productRepositoryMock.Verify(
x => x.AddAsync(It.IsAny<Product>(), It.IsAny<CancellationToken>()),
Times.Once);
_unitOfWorkMock.Verify(
x => x.SaveChangesAsync(It.IsAny<CancellationToken>()),
Times.Once);
_eventDispatcherMock.Verify(
x => x.DispatchAsync(It.IsAny<ProductCreatedEvent>()),
Times.Once);
}
[Fact]
public async Task CreateAsync_DuplicateSku_ThrowsBusinessException()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "EXISTING-SKU",
Name = "New Product",
BasePrice = 29.99m,
Cost = 12.50m
};
_createValidatorMock
.Setup(x => x.ValidateAsync(createDto, It.IsAny<CancellationToken>()))
.ReturnsAsync(new ValidationResult());
_productRepositoryMock
.Setup(x => x.SkuExistsAsync("EXISTING-SKU", null, It.IsAny<CancellationToken>()))
.ReturnsAsync(true);
// Act
var act = () => _sut.CreateAsync(createDto);
// Assert
await act.Should().ThrowAsync<BusinessException>()
.WithMessage("*EXISTING-SKU*already exists*");
}
[Fact]
public async Task CreateAsync_InvalidDto_ThrowsValidationException()
{
// Arrange
var createDto = new CreateProductDto
{
Sku = "",
Name = "",
BasePrice = -10m,
Cost = 12.50m
};
var validationResult = new ValidationResult(new[]
{
new ValidationFailure("Sku", "SKU is required."),
new ValidationFailure("Name", "Name is required.")
});
_createValidatorMock
.Setup(x => x.ValidateAsync(createDto, It.IsAny<CancellationToken>()))
.ReturnsAsync(validationResult);
// Act
var act = () => _sut.CreateAsync(createDto);
// Assert
await act.Should().ThrowAsync<ValidationException>();
}
[Fact]
public async Task UpdateAsync_ExistingProduct_UpdatesAndReturnsProduct()
{
// Arrange
var productId = Guid.NewGuid();
var updateDto = new UpdateProductDto
{
Sku = "UPDATED-SKU",
Name = "Updated Name",
BasePrice = 39.99m,
Cost = 15.00m
};
var existingProduct = new Product
{
Id = productId,
Sku = "OLD-SKU",
Name = "Old Name"
};
var updatedProductDto = new ProductDto
{
Id = productId,
Sku = "UPDATED-SKU",
Name = "Updated Name"
};
_updateValidatorMock
.Setup(x => x.ValidateAsync(updateDto, It.IsAny<CancellationToken>()))
.ReturnsAsync(new ValidationResult());
_productRepositoryMock
.Setup(x => x.GetByIdAsync(productId, It.IsAny<CancellationToken>()))
.ReturnsAsync(existingProduct);
_productRepositoryMock
.Setup(x => x.SkuExistsAsync("UPDATED-SKU", productId, It.IsAny<CancellationToken>()))
.ReturnsAsync(false);
_mapperMock
.Setup(x => x.Map<ProductDto>(existingProduct))
.Returns(updatedProductDto);
// Act
var result = await _sut.UpdateAsync(productId, updateDto);
// Assert
result.Should().NotBeNull();
result.Sku.Should().Be("UPDATED-SKU");
_productRepositoryMock.Verify(
x => x.Update(It.IsAny<Product>()),
Times.Once);
_unitOfWorkMock.Verify(
x => x.SaveChangesAsync(It.IsAny<CancellationToken>()),
Times.Once);
}
[Fact]
public async Task UpdateAsync_NonExistingProduct_ThrowsNotFoundException()
{
// Arrange
var productId = Guid.NewGuid();
var updateDto = new UpdateProductDto
{
Sku = "UPDATED-SKU",
Name = "Updated Name",
BasePrice = 39.99m,
Cost = 15.00m
};
_updateValidatorMock
.Setup(x => x.ValidateAsync(updateDto, It.IsAny<CancellationToken>()))
.ReturnsAsync(new ValidationResult());
_productRepositoryMock
.Setup(x => x.GetByIdAsync(productId, It.IsAny<CancellationToken>()))
.ReturnsAsync((Product?)null);
// Act
var act = () => _sut.UpdateAsync(productId, updateDto);
// Assert
await act.Should().ThrowAsync<NotFoundException>()
.WithMessage($"*{productId}*not found*");
}
[Fact]
public async Task DeleteAsync_ExistingProduct_SoftDeletesProduct()
{
// Arrange
var productId = Guid.NewGuid();
var product = new Product
{
Id = productId,
Sku = "TO-DELETE",
Name = "Product to Delete",
Status = ProductStatus.Active
};
_productRepositoryMock
.Setup(x => x.GetByIdAsync(productId, It.IsAny<CancellationToken>()))
.ReturnsAsync(product);
// Act
await _sut.DeleteAsync(productId);
// Assert
product.Status.Should().Be(ProductStatus.Archived);
_productRepositoryMock.Verify(
x => x.Update(product),
Times.Once);
_unitOfWorkMock.Verify(
x => x.SaveChangesAsync(It.IsAny<CancellationToken>()),
Times.Once);
}
}
E.14 Domain Event Template
// File: src/POS.Core/Events/OrderCompletedEvent.cs
using System;
using System.Collections.Generic;
using MediatR;
namespace POS.Core.Events;
/// <summary>
/// Domain event raised when an order is completed.
/// </summary>
public record OrderCompletedEvent : INotification
{
/// <summary>
/// Gets the event ID.
/// </summary>
public Guid EventId { get; init; } = Guid.NewGuid();
/// <summary>
/// Gets the timestamp when the event occurred.
/// </summary>
public DateTime Timestamp { get; init; } = DateTime.UtcNow;
/// <summary>
/// Gets the order ID.
/// </summary>
public required Guid OrderId { get; init; }
/// <summary>
/// Gets the order number.
/// </summary>
public required string OrderNumber { get; init; }
/// <summary>
/// Gets the receipt number.
/// </summary>
public required string ReceiptNumber { get; init; }
/// <summary>
/// Gets the location ID.
/// </summary>
public required Guid LocationId { get; init; }
/// <summary>
/// Gets the register ID.
/// </summary>
public Guid? RegisterId { get; init; }
/// <summary>
/// Gets the customer ID.
/// </summary>
public Guid? CustomerId { get; init; }
/// <summary>
/// Gets the customer email.
/// </summary>
public string? CustomerEmail { get; init; }
/// <summary>
/// Gets the line items.
/// </summary>
public required IReadOnlyList<OrderLineItemEvent> LineItems { get; init; }
/// <summary>
/// Gets the payment details.
/// </summary>
public required IReadOnlyList<PaymentEvent> Payments { get; init; }
/// <summary>
/// Gets the subtotal.
/// </summary>
public decimal Subtotal { get; init; }
/// <summary>
/// Gets the discount total.
/// </summary>
public decimal DiscountTotal { get; init; }
/// <summary>
/// Gets the tax total.
/// </summary>
public decimal TaxTotal { get; init; }
/// <summary>
/// Gets the order total.
/// </summary>
public required decimal Total { get; init; }
/// <summary>
/// Gets the loyalty points earned.
/// </summary>
public int LoyaltyPointsEarned { get; init; }
/// <summary>
/// Gets whether to send receipt.
/// </summary>
public bool SendReceipt { get; init; }
/// <summary>
/// Gets the receipt delivery method.
/// </summary>
public string? ReceiptMethod { get; init; }
/// <summary>
/// Gets the user who completed the order.
/// </summary>
public required Guid CompletedBy { get; init; }
/// <summary>
/// Gets the shift ID.
/// </summary>
public Guid? ShiftId { get; init; }
}
/// <summary>
/// Order line item event data.
/// </summary>
public record OrderLineItemEvent
{
public required Guid LineItemId { get; init; }
public required Guid VariantId { get; init; }
public required string Sku { get; init; }
public required string Name { get; init; }
public required int Quantity { get; init; }
public required decimal UnitPrice { get; init; }
public decimal DiscountAmount { get; init; }
public decimal TaxAmount { get; init; }
public required decimal LineTotal { get; init; }
public decimal Cost { get; init; }
}
/// <summary>
/// Payment event data.
/// </summary>
public record PaymentEvent
{
public required Guid PaymentId { get; init; }
public required string Method { get; init; }
public required decimal Amount { get; init; }
public string? AuthorizationCode { get; init; }
public string? LastFour { get; init; }
}
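For illustration, a sketch of constructing these records (the SKU, prices, and totals below are made-up sample values; every `required` member must be set in the object initializer, while the others fall back to their defaults):

```csharp
// Illustrative construction of the event records defined above.
var lineItem = new OrderLineItemEvent
{
    LineItemId = Guid.NewGuid(),
    VariantId = Guid.NewGuid(),
    Sku = "TEE-BLK-M",              // sample values only
    Name = "Black T-Shirt (M)",
    Quantity = 2,
    UnitPrice = 19.99m,
    DiscountAmount = 2.00m,
    TaxAmount = 3.04m,
    LineTotal = 41.02m,             // (2 * 19.99) - 2.00 + 3.04
    Cost = 8.50m
};

var payment = new PaymentEvent
{
    PaymentId = Guid.NewGuid(),
    Method = "card",
    Amount = 41.02m,
    AuthorizationCode = "A12345",
    LastFour = "4242"
};
```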
E.15 Usage Notes
- Entity Template: Inherit from `BaseEntity` and implement tenant/audit interfaces as needed.
- Repository Interface: Define only operations specific to the entity; generic CRUD is in `IRepository<T>`.
- Repository Implementation: Use Entity Framework Core’s `DbSet` and LINQ for queries.
- Service Interface: Keep it focused on business operations, not CRUD.
- Service Implementation: Handle validation, business rules, and coordinate between repositories.
- Controller Template: Use `[FromBody]` for complex objects, `[FromQuery]` for filters.
- DTOs: Use records for immutability; separate Create/Update/Response DTOs.
- Validators: Use FluentValidation with async rules for database checks.
- Event Handlers: Handle one event type per handler; keep handlers focused.
- Integration Tests: Use `WebApplicationFactory` and test against a real database.
- Unit Tests: Use Moq for dependencies; test business logic in isolation.
- Domain Events: Use MediatR `INotification`; include all relevant data in the event.
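The event-handler and domain-event notes above can be sketched as a MediatR event/handler pair. The `SaleCompletedNotification` record, its properties, and the one-point-per-dollar rule are illustrative assumptions, not types from the Blueprint:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical domain event carrying all data the handler needs.
public record SaleCompletedNotification(Guid OrderId, Guid TenantId, decimal Total)
    : INotification;

// One event type per handler; this one only awards loyalty points.
public sealed class AwardLoyaltyPointsHandler
    : INotificationHandler<SaleCompletedNotification>
{
    public Task Handle(SaleCompletedNotification notification, CancellationToken ct)
    {
        // Illustrative earn rule: 1 point per whole dollar spent.
        var pointsEarned = (int)decimal.Floor(notification.Total);

        // Persistence is elided; a real handler would delegate here, e.g.:
        // await _loyaltyService.AwardAsync(notification.TenantId,
        //     notification.OrderId, pointsEarned, ct);
        return Task.CompletedTask;
    }
}
```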
These templates provide the foundation for consistent, maintainable code across the POS Platform.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix E |
This appendix is part of the POS Blueprint Book. All content is self-contained.
Appendix F: BRD-to-Code Module Mapping
Version: 4.0.0 Last Updated: February 25, 2026 BRD Version: 20.0 (19,900+ lines, 7 modules, 113 decisions)
F.1 Purpose & How to Use This Document
This appendix maps every business capability in the Business Requirements Document (BRD v20.0) to a specific code-level service in the POS Platform implementation. It bridges the gap between business requirements (written for stakeholders) and the modular monolith implementation (written for developers).
Who Should Use This
| Audience | Use Case |
|---|---|
| Developers | Find which service to create/modify when implementing a BRD feature |
| Architects | Verify module boundaries and dependency direction rules |
| QA Engineers | Trace test coverage back to BRD sections |
| Product Owners | Understand how business features map to technical components |
How to Read the Tables
Each service entry includes:
- Service Name: Technology-agnostic logical name (e.g., `sale.cart.command.service`)
- BRD Section(s): Which BRD section(s) this service implements
- Capability: What business function this service performs
- Pattern: CQRS Command, Query, CRUD, Event Handler, Rule Engine, etc.
- Owns Tables: Database tables this service is the authoritative writer for
- Publishes/Consumes Events: Domain events for inter-service communication
F.2 Architecture Context
The POS Platform follows an Event-Driven Modular Monolith architecture (selected in Chapter 04) with the following pattern assignments per module:
| Module | CQRS | Event Sourcing | Pattern |
|---|---|---|---|
| Module 1: Sales | Full CQRS | Full ES | Separate command/query services |
| Module 2: Customers | Standard CRUD | None | Repository pattern with caching |
| Module 3: Catalog | Standard CRUD | None | Read-heavy, Redis cache |
| Module 4: Inventory | Materialized read model | ES for audit trail | Command/query split for PO, transfers |
| Module 5: Setup | Standard CRUD | None | Configuration data, direct access |
| Module 6: Integrations | Standard CRUD | Audit-trail-only ES | Extractable gateway |
Design Principles
- Maximum Granularity: One service per business capability, not one service per module
- Single Responsibility: Each service does ONE thing well
- DDD Boundaries: Services own their aggregates; cross-module access via events or public API
- No God Services: Break coarse-grained services (e.g., `IOrderService`) into focused capabilities
Reference: Chapter 04 (Architecture Styles Analysis), Section L.4 for full architecture rationale.
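The “No God Services” principle can be illustrated with a before/after sketch (the member signatures below are hypothetical, chosen only to show the split):

```csharp
using System;
using System.Threading.Tasks;

// Anti-pattern: one coarse-grained interface spanning many capabilities.
public interface IOrderService
{
    Task AddLineItemAsync(Guid orderId, Guid variantId, int qty);
    Task ApplyDiscountAsync(Guid orderId, decimal amount);
    Task TakePaymentAsync(Guid orderId, decimal amount);
    Task VoidAsync(Guid orderId);
}

// Preferred: one interface per business capability, mirroring the
// service granularity in the tables that follow.
public interface ICartCommandService        // sale.cart.command.service
{
    Task AddLineItemAsync(Guid orderId, Guid variantId, int qty);
}

public interface IDiscountCommandService    // sale.discount.command.service
{
    Task ApplyDiscountAsync(Guid orderId, decimal amount);
}
```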
F.3 Module Overview
The BRD defines 6 business modules. The code architecture maps these to six code modules plus cross-cutting concerns and an optional RFID module:
| # | BRD Module | Code Module | Services | Pattern |
|---|---|---|---|---|
| 1 | Sales (1.1-1.20) | modules/sales/ | 37 | Full CQRS + ES |
| 2 | Customers (2.1-2.8) | modules/customers/ | 7 | CRUD |
| 3 | Catalog (3.1-3.15) | modules/catalog/ | 20 | CRUD + Cache |
| 4 | Inventory (4.1-4.19) | modules/inventory/ | 23 | Materialized + ES audit |
| 5 | Setup (5.1-5.21) | modules/setup/ | 21 | CRUD |
| 6 | Integrations (6.1-6.13) | modules/integrations/ | 20 | CRUD + Audit ES |
| X | Cross-cutting (Ch 07, 14) | cross-cutting/ | 8 | Mixed |
| R | RFID/Raptag (Ch 10 D13) | modules/rfid/ | 6 | CRUD + Command |
| | TOTAL | | 142 | |
F.4 Module 1: Sales – Service Breakdown (37 Services, Full CQRS+ES)
BRD Sections: 1.1-1.20
F.4.1 Cart & Checkout Commands
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 1 | sale.cart.command.service | 1.1 | Create cart, add/remove line items, attach customer | Command | orders (draft state) | SaleCreated, SaleLineItemAdded, SaleLineItemRemoved | – |
| 2 | sale.cart.query.service | 1.1 | Get active cart, list items, calculate running totals | Query | (reads orders, order_items) | – | SaleLineItemAdded, SaleLineItemRemoved |
| 3 | sale.park.command.service | 1.1, 1.1.1 | Park/retrieve/expire held sales, manage TTL, soft-reserve inventory | Command | orders (parked state) | SaleParked, SaleRetrieved, SaleExpired | – |
| 4 | sale.park.query.service | 1.1 | List parked sales for terminal/location | Query | (reads orders WHERE status=parked) | – | SaleParked, SaleRetrieved |
F.4.2 Discount & Pricing Commands
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 5 | sale.discount.command.service | 1.2 | Apply/remove line discounts, global discounts, enforce calculation order | Command | (writes to order discount fields) | DiscountApplied, DiscountRemoved | – |
| 6 | sale.promotion.engine.service | 1.2, 1.14 | Evaluate automatic promos (Buy X Get Y), validate coupon codes, stack rules | Rule Engine | pricing_rules (read) | PromotionTriggered, CouponRedeemed | SaleLineItemAdded |
| 7 | sale.price-override.command.service | 1.2 | Manual price override with manager auth, reason code | Command | (writes to order_items.unit_price) | PriceOverridden | – |
F.4.3 Payment & Settlement Commands
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 8 | sale.payment.command.service | 1.3 | Process split tenders, validate payment covers total, calculate change | Command | payment_attempts | PaymentReceived, PaymentFailed | – |
| 9 | sale.payment.card.service | 1.18 | SAQ-A semi-integrated card flow: initiate terminal, receive token + auth | Integration | payment_attempts (card entries) | CardPaymentAuthorized, CardPaymentDeclined | – |
| 10 | sale.payment.cash.service | 1.3 | Cash tendering, change calculation, drawer interaction | Stateful | (writes to cash_movements) | CashPaymentReceived | – |
| 11 | sale.payment.giftcard.service | 1.3, 1.5 | Check GC balance, apply partial/full, deduct | Command | gift_card_transactions | GiftCardRedeemed | – |
| 12 | sale.payment.storecredit.service | 1.3 | Check credit balance, apply on-account, validate credit limit | Command | (reads/writes customer store_credit) | StoreCreditApplied | – |
| 13 | sale.payment.affirm.service | 1.3 | Third-party financing flow: create session, handle webhook | Integration | payment_attempts (affirm entries) | AffirmLoanApproved | – |
| 14 | sale.finalize.command.service | 1.3 | Finalize order: write record, deduct inventory, award loyalty, record commission | Command (Orchestrator) | orders (completed state) | SaleCompleted | PaymentReceived |
F.4.4 Post-Sale Commands
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 15 | sale.void.command.service | 1.4, 1.4.1 | Void same-day order: reverse inventory, loyalty, commission; check eligibility | Command | orders (voided state) | SaleVoided | – |
| 16 | sale.return.command.service | 1.4 | Process return: validate receipt, apply policy, issue refund | Command | returns, return_items | ReturnInitiated, ReturnCompleted | – |
| 17 | sale.exchange.command.service | 1.4 | Dedicated exchange: items OUT + items IN, calculate difference | Command | returns (exchange type), orders | ExchangeProcessed | – |
| 18 | sale.return-policy.engine.service | 1.9 | Evaluate return eligibility: time window, receipt validation, manager override | Rule Engine | (reads tenant_settings, return policy config) | ReturnPolicyEvaluated | – |
F.4.5 Gift Card Commands
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 19 | sale.giftcard.command.service | 1.5 | Sell/activate gift cards, reload, deactivate, check compliance | Command | gift_cards, gift_card_transactions | GiftCardIssued, GiftCardActivated, GiftCardReloaded | – |
| 20 | sale.giftcard.query.service | 1.5 | Balance lookup, transaction history, expiration check | Query | (reads gift_cards, gift_card_transactions) | – | GiftCardRedeemed, GiftCardIssued |
F.4.6 Special Order & Layaway
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 21 | sale.specialorder.command.service | 1.6 | Create/manage special orders and back orders | Command | orders (special_order type) | SpecialOrderCreated, SpecialOrderFulfilled | InventoryReceived |
| 22 | sale.layaway.command.service | 1.3.2 | Create layaway, accept deposits, release inventory on final payment | Stateful | orders (layaway state) | LayawayCreated, LayawayPaymentReceived, LayawayCompleted, LayawayCancelled | – |
| 23 | sale.hold-for-pickup.command.service | 1.11 | Hold/stage/expire pickup orders including BOPIS | Stateful | orders (hold states) | HoldCreated, HoldStaged, HoldPickedUp, HoldExpired | – |
F.4.7 Cash Drawer Operations
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 24 | sale.cashdrawer.command.service | 1.12 | Open/close drawer, paid in/out, cash drops, no-sale | Command | cash_drawers, cash_movements, cash_drops | DrawerOpened, DrawerClosed, DrawerCashDrop, DrawerPaidIn, DrawerPaidOut | CashPaymentReceived |
| 25 | sale.cashdrawer.count.service | 1.12 | Denomination-level cash counts (opening, closing, mid-shift, audit) | Command | cash_counts | CashCounted | – |
| 26 | sale.cashdrawer.pickup.service | 1.12 | Armored car pickup tracking, bank deposit reconciliation | Command | cash_pickups | CashPickupCompleted | – |
| 27 | sale.shift.command.service | 1.12 | Clock-in/out to shift, link to drawer, track totals | Stateful | shifts | ShiftOpened, ShiftClosed | DrawerOpened, DrawerClosed, SaleCompleted |
F.4.8 Tax Engine
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 28 | sale.tax.calculation.service | 1.17 | Calculate compound 3-level tax (State/County/City), handle exemptions | Calculation | (reads taxes, location_tax) | TaxCalculated | – |
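A minimal sketch of the compound calculation above, assuming each level's rate applies to the running total (base plus tax accrued so far) with per-level rounding to cents; the actual rates and rounding policy come from the `taxes` and `location_tax` configuration, not this code:

```csharp
using System;

public static class TaxMath
{
    // Compound three-level tax: each successive level taxes the taxed total.
    public static decimal CalculateCompoundTax(
        decimal taxableAmount, decimal stateRate, decimal countyRate, decimal cityRate)
    {
        decimal runningBase = taxableAmount;
        decimal totalTax = 0m;
        foreach (var rate in new[] { stateRate, countyRate, cityRate })
        {
            var levelTax = decimal.Round(
                runningBase * rate, 2, MidpointRounding.AwayFromZero);
            totalTax += levelTax;
            runningBase += levelTax; // compounding step
        }
        return totalTax;
    }
}

// Example: $100.00 at 6% state, 1% county, 0.5% city
// state 6.00, county 1.06, city 0.54 -> total 7.60
```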
F.4.9 Commission & Loyalty
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 29 | sale.commission.command.service | 1.8 | Calculate and record commissions per sale, proportional reversal on return | Command | (commission fields on orders) | CommissionRecorded, CommissionReversed | SaleCompleted, ReturnCompleted, SaleVoided |
| 30 | sale.loyalty.command.service | 1.15 | Award/redeem loyalty points, tier calculation, bonus rules | Command | loyalty_transactions, loyalty_accounts | LoyaltyPointsEarned, LoyaltyPointsRedeemed, LoyaltyTierChanged | SaleCompleted |
F.4.10 Queries & Read Models
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 31 | sale.history.query.service | 1.4 | Sales history with filters (date, user, status, location) | Query | (reads orders, order_items) | – | SaleCompleted, SaleVoided, ReturnCompleted |
| 32 | sale.receipt.query.service | 1.4 | Generate/reprint receipt data, email receipt | Query | (reads orders, order_items, payments) | ReceiptEmailed | – |
| 33 | sale.receipt.validate.service | 1.4 | Validate receipt barcode authenticity, match to order | Query | (reads orders) | – | – |
| 34 | sale.daily-summary.projection.service | 1.1.2, 1.3.4 | Materialized daily sales summary, hourly heatmap | Event Handler | (writes read model views) | – | SaleCompleted, SaleVoided, ReturnCompleted |
| 35 | sale.price-check.query.service | 1.13 | Price check mode: lookup product price without sale context | Query | (reads products, pricing_rules) | – | – |
F.4.11 Offline & Serial Tracking
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 36 | sale.offline.sync.service | 1.16 | Queue offline transactions, sync on reconnect, conflict resolution | Stateful | sync_queue (sale entries) | OfflineSaleSynced, SyncConflictDetected | – |
| 37 | sale.serial-tracking.command.service | 1.10 | Associate serial numbers with sale line items, validate uniqueness | Command | (serial_number field on order_items) | SerialNumberSold | – |
F.5 Module 2: Customers – Service Breakdown (7 Services, CRUD)
BRD Sections: 2.1-2.8
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 38 | customer.profile.crud.service | 2.1 | Create/update/delete customer profiles, manage PII | CRUD | customers | CustomerCreated, CustomerUpdated, CustomerDeleted | – |
| 39 | customer.search.query.service | 2.1 | Search customers by name, email, phone, loyalty number | Query | (reads customers) | – | CustomerCreated, CustomerUpdated |
| 40 | customer.group.crud.service | 2.2 | Manage customer groups/tiers (VIP, Wholesale, etc.), auto-tier rules | CRUD | (customer group/tier fields) | CustomerGroupAssigned, CustomerTierChanged | SaleCompleted |
| 41 | customer.notes.crud.service | 2.3 | Customer notes, preferences, internal flags | CRUD | (notes fields on customers) | – | – |
| 42 | customer.communication.crud.service | 2.4 | Marketing consent, preferred channels, opt-in/out | CRUD | (communication preference fields) | CommunicationPreferenceChanged | – |
| 43 | customer.merge.command.service | 2.5 | Merge duplicate customer records, reassign history | Command | customers (merge target) | CustomersMerged | – |
| 44 | customer.privacy.command.service | 2.5, 2.6 | GDPR anonymization, data export, deletion request | Command | customers (anonymized_at) | CustomerAnonymized, CustomerDataExported | – |
F.6 Module 3: Catalog – Service Breakdown (20 Services, CRUD+Cache)
BRD Sections: 3.1-3.15
F.6.1 Product Management
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 45 | catalog.product.crud.service | 3.1 | Create/update/delete products, manage attributes, soft delete | CRUD | products | ProductCreated, ProductUpdated, ProductDeleted | – |
| 46 | catalog.variant.crud.service | 3.1 | Create/update/delete variants (size/color), matrix management | CRUD | variants | VariantCreated, VariantUpdated, VariantDeleted | – |
| 47 | catalog.product.query.service | 3.1 | Get product by ID/SKU/barcode, list with pagination/filters | Query | (reads products, variants) | – | ProductCreated, ProductUpdated |
| 48 | catalog.bulk-import.command.service | 3.1 | Bulk CSV/Excel import of products and variants | Command | products, variants | BulkImportCompleted | – |
| 49 | catalog.product.lifecycle.service | 3.2 | Manage product lifecycle: draft, active, discontinued, archived | Stateful | products (lifecycle states) | ProductActivated, ProductDiscontinued, ProductArchived | – |
F.6.2 Categorization & Tagging
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 50 | catalog.category.crud.service | 3.5 | Manage hierarchical categories, sort order | CRUD | categories | CategoryCreated, CategoryUpdated | – |
| 51 | catalog.collection.crud.service | 3.5 | Marketing/seasonal collections with date ranges | CRUD | collections, product_collection | CollectionCreated, CollectionUpdated | – |
| 52 | catalog.tag.crud.service | 3.5 | Freeform product tags | CRUD | tags, product_tag | – | – |
| 53 | catalog.brand.crud.service | 3.1 | Brand reference data management | CRUD | brands | – | – |
F.6.3 Pricing
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 54 | catalog.pricing.crud.service | 3.3 | Manage pricing rules, price books, tier pricing | CRUD | pricing_rules | PricingRuleCreated, PricingRuleUpdated | – |
| 55 | catalog.pricing.calculation.service | 3.3 | Calculate effective price (hierarchy: price book > tier > promo > base) | Calculation | (reads pricing_rules, products) | – | – |
| 56 | catalog.markdown.command.service | 3.3 | Schedule markdowns, automatic clearance pricing | Command | pricing_rules (markdown type) | MarkdownApplied, MarkdownExpired | – |
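The effective-price hierarchy above (price book > tier > promo > base) amounts to a first-non-null resolution, sketched here with hypothetical parameter names; the real calculation service reads its candidates from `pricing_rules`:

```csharp
public static class Pricing
{
    // First non-null candidate wins, in priority order.
    public static decimal ResolveEffectivePrice(
        decimal basePrice,
        decimal? promoPrice,
        decimal? tierPrice,
        decimal? priceBookPrice)
        => priceBookPrice ?? tierPrice ?? promoPrice ?? basePrice;
}
```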
F.6.4 Barcode & Label
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 57 | catalog.barcode.service | 3.4 | Generate/validate/lookup UPC/EAN barcodes | CRUD | (barcode fields on products/variants) | – | – |
| 58 | catalog.label.print.service | 3.10 | Generate label/price tag print jobs, template selection | Command | (label print queue) | LabelPrintJobCreated | – |
F.6.5 Search, Media & Vendor
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 59 | catalog.search.service | 3.9 | Full-text product search, faceted filtering, suggestions | Query | (reads products via FTS indexes) | – | ProductCreated, ProductUpdated |
| 60 | catalog.media.crud.service | 3.11 | Product image management, upload, reorder | CRUD | (image fields on products/variants) | – | – |
| 61 | catalog.vendor.crud.service | 3.8 | Vendor/supplier management, lead times, min order quantities | CRUD | (vendor reference tables) | VendorCreated, VendorUpdated | – |
| 62 | catalog.notes.crud.service | 3.12 | Product notes and attachments | CRUD | (notes/attachments on products) | – | – |
| 63 | catalog.permissions.service | 3.13 | Catalog approval workflows, permission checks | Rule Engine | (reads role_permissions) | CatalogChangeApproved, CatalogChangeRejected | ProductUpdated |
| 64 | catalog.analytics.query.service | 3.14 | Product performance analytics, sales velocity, margin analysis | Query | (reads products, order_items, inventory) | – | SaleCompleted |
F.7 Module 4: Inventory – Service Breakdown (23 Services, Materialized+ES Audit)
BRD Sections: 4.1-4.19
F.7.1 Stock Level Queries
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 65 | inventory.level.query.service | 4.1, 4.2 | Get current stock by variant/location, available qty calculation | Query | (reads inventory_levels materialized view) | – | InventoryAdjusted, InventorySold, InventoryReceived |
| 66 | inventory.level.adjustment.service | 4.7 | Manual adjustments: count, damage, theft, found, with reason codes | Command | inventory_levels, inventory_transactions | InventoryAdjusted | – |
| 67 | inventory.status-model.service | 4.2 | Manage inventory statuses: available, reserved, committed, in_transit, damaged | Stateful | inventory_levels (status fields) | InventoryStatusChanged | – |
F.7.2 Purchase Orders
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 68 | inventory.po.command.service | 4.3 | Create/edit/approve/cancel purchase orders | Command | (purchase_orders table) | PurchaseOrderCreated, PurchaseOrderApproved, PurchaseOrderCancelled | – |
| 69 | inventory.po.query.service | 4.3 | List/search POs, status tracking, ETA display | Query | (reads purchase_orders) | – | PurchaseOrderCreated, PurchaseOrderApproved |
| 70 | inventory.receiving.command.service | 4.4 | Receive against PO: full/partial, inspection, discrepancy handling | Command | inventory_levels, inventory_transactions | InventoryReceived, ReceivingDiscrepancyLogged | PurchaseOrderApproved |
F.7.3 Reorder Management
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 71 | inventory.reorder.engine.service | 4.5 | Auto-reorder point monitoring, suggested PO generation | Rule Engine | (reads inventory_levels, reorder configs) | ReorderPointReached, ReorderSuggested | InventoryAdjusted, InventorySold |
F.7.4 Counting & Auditing
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 72 | inventory.count.command.service | 4.6 | Physical inventory counts: full, cycle, spot check | Command | inventory_transactions (count type) | InventoryCounted | – |
| 73 | inventory.count.query.service | 4.6 | Count session management, variance reports | Query | (reads count sessions/results) | – | InventoryCounted |
F.7.5 Transfers
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 74 | inventory.transfer.command.service | 4.8 | Create/ship/receive inter-store transfers | Command | (transfer tables), inventory_transactions | InventoryTransferred, TransferShipped, TransferReceived | – |
| 75 | inventory.transfer.query.service | 4.8 | List transfers, track in-transit, ETA | Query | (reads transfer tables) | – | TransferShipped, TransferReceived |
F.7.6 Vendor Returns & Costing
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 76 | inventory.rma.command.service | 4.9 | Vendor RMA: create, ship back, track credit | Command | (RMA tables) | VendorRMACreated, VendorRMAShipped | – |
| 77 | inventory.costing.calculation.service | 4.11 | Landed cost calculation: freight, duty, insurance allocation | Calculation | (cost fields on inventory_transactions) | LandedCostCalculated | InventoryReceived |
| 78 | inventory.serial-lot.command.service | 4.10 | Serial/lot number assignment, tracking, recall support | Command | (serial/lot fields) | SerialNumberAssigned, LotCreated | – |
F.7.7 Movement History & Dashboard
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 79 | inventory.movement.query.service | 4.12 | Stock ledger, movement history by variant/location | Query | (reads inventory_transactions) | – | all Inventory* events |
| 80 | inventory.dashboard.projection.service | 4.17 | Inventory dashboard materialized views: stock value, aging, velocity | Event Handler | (writes dashboard read models) | – | InventoryAdjusted, InventorySold, InventoryReceived |
F.7.8 POS Integration & Fulfillment
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 81 | inventory.sale-deduction.event-handler.service | 4.13 | Deduct inventory on sale completion, restore on void/return | Event Handler | inventory_levels, inventory_transactions | InventorySold, InventoryRestored | SaleCompleted, SaleVoided, ReturnCompleted |
| 82 | inventory.reservation.command.service | 4.13 | Soft-reserve inventory for pending sales, holds, layaways | Command | inventory_levels (quantity_reserved) | InventoryReserved, InventoryReservationReleased | SaleParked, HoldCreated, LayawayCreated |
| 83 | inventory.fulfillment.command.service | 4.14 | Online order fulfillment: pick, pack, ship from store | Command | (fulfillment fields on orders) | FulfillmentStarted, FulfillmentShipped | – |
F.7.9 Offline & Alerts
| # | Service Name | BRD Section(s) | Capability | Pattern | Owns Tables | Publishes Events | Consumes Events |
|---|---|---|---|---|---|---|---|
| 84 | inventory.offline.sync.service | 4.15 | Offline inventory operations queue, sync, conflict resolution | Stateful | sync_queue (inventory entries) | OfflineInventorySynced | – |
| 85 | inventory.alert.service | 4.16 | Low stock alerts, reorder notifications, expiring lot alerts | Event Handler | – | LowStockAlert, ReorderAlert | InventoryAdjusted, InventorySold |
| 86 | inventory.rules.engine.service | 4.18 | Business rules evaluation from YAML config (negative stock, auto-transfer) | Rule Engine | (reads tenant_settings) | – | – |
| 87 | inventory.report.query.service | 4.17 | Inventory reports: valuation, aging, shrinkage, turnover | Query | (reads inventory_levels, inventory_transactions) | – | – |
F.8 Module 5: Setup & Configuration – Service Breakdown (21 Services, CRUD)
BRD Sections: 5.1-5.21
F.8.1 System Settings
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 88 | setup.settings.crud.service | 5.2 | System settings, branding, locale, defaults | CRUD |
| 89 | setup.currency.crud.service | 5.3 | Multi-currency configuration, exchange rates | CRUD |
F.8.2 Location Management
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 90 | setup.location.crud.service | 5.4 | Create/update/deactivate locations, assign type, set hours | CRUD |
F.8.3 User & Role Management
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 91 | setup.user.crud.service | 5.5 | User profile management, activation/deactivation | CRUD |
| 92 | setup.role.crud.service | 5.5 | Role management, permission assignment matrix | CRUD |
| 93 | setup.timetracking.command.service | 5.6 | Clock-in/clock-out, break management | Command |
F.8.4 Register & Hardware
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 94 | setup.register.crud.service | 5.7 | Register management, IP limits (max 2/365 days), retire (OWNER-only) | CRUD |
| 95 | setup.printer.crud.service | 5.8 | Printer/peripheral registration, connection management | CRUD |
| 96 | setup.device.crud.service | 5.7 | Device registration, hardware fingerprint, status management | CRUD |
F.8.5 Tax Configuration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 97 | setup.tax.crud.service | 5.9 | Tax rate definitions, location-tax assignments, effective dates | CRUD |
F.8.6 Payment & UoM Configuration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 98 | setup.payment-method.crud.service | 5.11 | Payment method configuration, processor settings | CRUD |
| 99 | setup.uom.crud.service | 5.10 | Units of measure management, conversion rules | CRUD |
F.8.7 Custom Fields & Workflows
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 100 | setup.customfield.crud.service | 5.12 | Custom field definitions, validation rules, entity assignment | CRUD |
| 101 | setup.approval-workflow.crud.service | 5.13 | Approval workflow configuration, threshold rules | CRUD |
F.8.8 Receipt & Email
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 102 | setup.receipt.crud.service | 5.14 | Receipt template configuration, header/footer customization | CRUD |
| 103 | setup.email-template.crud.service | 5.15 | Email template management, variable substitution | CRUD |
F.8.9 Audit & Onboarding
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 104 | setup.audit.config.service | 5.18 | Audit log configuration, retention policies | CRUD |
| 105 | setup.rules.engine.service | 5.19 | Business rules YAML configuration, validation | CRUD |
| 106 | setup.loyalty.config.service | 5.17 | Loyalty program configuration: earn rate, tiers, expiry | CRUD |
| 107 | setup.onboarding.wizard.service | 5.20 | Tenant onboarding wizard: step tracking, initial data seeding | Stateful |
| 108 | setup.integrations-hub.config.service | 5.16 | Integration connections configuration (Setup side of Module 6) | CRUD |
F.9 Module 6: Integrations – Service Breakdown (20 Services, CRUD+Audit ES)
BRD Sections: 6.1-6.13
F.9.1 Core Integration Infrastructure
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 109 | integration.provider.registry.service | 6.2 | Provider registration, IIntegrationProvider management | CRUD |
| 110 | integration.circuit-breaker.service | 6.2 | Circuit breaker state machine (CLOSED/OPEN/HALF_OPEN) per provider | Stateful |
| 111 | integration.outbox.relay.service | 6.2 | Transactional outbox polling, event publication via LISTEN/NOTIFY | Event Handler |
| 112 | integration.idempotency.service | 6.2 | Idempotency key tracking for at-least-once delivery | Stateful |
| 113 | integration.webhook.pipeline.service | 6.2 | Inbound webhook receipt, signature validation, routing | Integration |
| 114 | integration.dead-letter.service | 6.2 | Failed integration message capture, retry, replay | Event Handler |
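The per-provider CLOSED/OPEN/HALF_OPEN state machine can be sketched as follows; the failure threshold and cool-down values are illustrative parameters, not the platform's configured defaults:

```csharp
using System;

public enum CircuitState { Closed, Open, HalfOpen }

// Sketch: CLOSED -> OPEN after repeated failures, OPEN -> HALF_OPEN after
// a cool-down, HALF_OPEN -> CLOSED on a successful probe (or back to OPEN
// on a failed one).
public sealed class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failureCount;
    private DateTimeOffset _openedAt;

    public CircuitState State { get; private set; } = CircuitState.Closed;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
        => (_failureThreshold, _openDuration) = (failureThreshold, openDuration);

    public bool AllowRequest(DateTimeOffset now)
    {
        if (State == CircuitState.Open && now - _openedAt >= _openDuration)
            State = CircuitState.HalfOpen;   // let one probe call through
        return State != CircuitState.Open;
    }

    public void RecordSuccess()
    {
        _failureCount = 0;
        State = CircuitState.Closed;
    }

    public void RecordFailure(DateTimeOffset now)
    {
        if (State == CircuitState.HalfOpen || ++_failureCount >= _failureThreshold)
        {
            State = CircuitState.Open;       // trip (or re-trip after a failed probe)
            _openedAt = now;
            _failureCount = 0;
        }
    }
}
```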
F.9.2 Shopify Integration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 115 | integration.shopify.product-sync.service | 6.3 | Bidirectional product/variant sync via GraphQL, bulk operations | Integration |
| 116 | integration.shopify.inventory-sync.service | 6.3 | Inventory level sync with safety buffers, oversell prevention | Integration |
| 117 | integration.shopify.order-sync.service | 6.3 | Online order ingestion, BOPIS flow, fulfillment updates | Integration |
| 118 | integration.shopify.webhook-handler.service | 6.3 | Shopify webhook processing: orders/create, products/update, etc. | Event Handler |
F.9.3 Amazon SP-API Integration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 119 | integration.amazon.catalog-sync.service | 6.4 | Amazon catalog/listings management, compliance validation | Integration |
| 120 | integration.amazon.order-sync.service | 6.4 | Amazon order polling (2-min interval), FBM fulfillment | Integration |
| 121 | integration.amazon.inventory-sync.service | 6.4 | Amazon inventory feed, FBA + FBM channel quantities | Integration |
F.9.4 Google Merchant Integration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 122 | integration.google.product-sync.service | 6.5 | Google Merchant product data feed, disapproval prevention | Integration |
| 123 | integration.google.inventory-sync.service | 6.5 | Local inventory ads feed (2x/day batch) | Integration |
F.9.5 Cross-Platform Orchestration
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 124 | integration.cross-platform.validation.service | 6.6 | Strictest-rule-wins validation across all channels | Rule Engine |
| 125 | integration.cross-platform.inventory-orchestrator.service | 6.7 | Safety buffer computation, channel allocation, saga compensation | Stateful (Saga) |
F.9.6 Payment, Email & Shipping
| # | Service Name | BRD Section(s) | Capability | Pattern |
|---|---|---|---|---|
| 126 | integration.payment-processor.service | 6.8 | Payment processor gateway abstraction (Stripe, Square) | Integration |
| 127 | integration.email.service | 6.9 | Email sending via provider abstraction (SendGrid, SES) | Integration |
| 128 | integration.shipping.service | 6.10 | Carrier rate lookup, label generation, tracking | Integration |
F.10 Cross-Cutting Services (8 Services)
These services satisfy architecture requirements from the Blueprint (Ch 07, Ch 14) rather than direct BRD business sections. They provide infrastructure that all modules depend on.
| # | Service Name | Blueprint Reference | Capability | Pattern |
|---|---|---|---|---|
| 129 | crosscutting.event-store.service | Ch 07 L.4A.1 | Append events, optimistic concurrency, snapshot management | Stateful |
| 130 | crosscutting.tenant.middleware.service | Ch 07 L.10A.4 | Tenant resolution from JWT, RLS policy enforcement | Stateful |
| 131 | crosscutting.auth.service | Ch 14 | JWT validation, PIN authentication, permission checks | Stateful |
| 132 | crosscutting.audit-log.service | Ch 07 L.4A | Cross-cutting audit trail: who, what, when, before/after | Event Handler |
| 133 | crosscutting.notification.service | Ch 07 | Push notifications, in-app alerts, SignalR real-time | Event Handler |
| 134 | crosscutting.sync.orchestrator.service | Ch 07 L.10A.1 | Offline sync coordination: device registration, conflict resolution | Stateful |
| 135 | crosscutting.report.engine.service | Various | Saved report execution, scheduling, export (CSV/PDF) | Query |
| 136 | crosscutting.state-machine.service | Ch 07 L.4A | Database-driven state machine: validate transitions, log history | Rule Engine |
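To illustrate service #130, here is a minimal sketch of tenant resolution from validated JWT claims. The claim name `tenant_id` and the function shape are assumptions for illustration, not definitions from the BRD; the resolved id would then scope every query (e.g., via a PostgreSQL session setting that the RLS policies read):

```typescript
// Hypothetical sketch of crosscutting.tenant.middleware.service's core step.
// The claim name "tenant_id" is an assumption, not from the BRD.
interface JwtClaims {
  sub: string;
  tenant_id?: string;
}

// Extract the tenant id from already-validated JWT claims,
// rejecting tokens that carry no tenant context.
function resolveTenant(claims: JwtClaims): string {
  if (!claims.tenant_id) throw new Error("missing tenant_id claim");
  return claims.tenant_id;
}

// Downstream, the id would be applied per request, e.g.:
//   SET app.current_tenant = '<tenant-id>'  -- enforced by RLS policies
console.log(resolveTenant({ sub: "user-1", tenant_id: "tenant-42" }));
```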
Optional: RFID Module (Raptag, 6 Services)
| # | Service Name | Blueprint Reference | Capability | Pattern |
|---|---|---|---|---|
| 137 | rfid.config.crud.service | Ch 10 D13 | RFID tenant configuration (EPC prefix, serial counter) | CRUD |
| 138 | rfid.tag.crud.service | Ch 10 D13 | Tag lifecycle: create, activate, sell, transfer, void | CRUD |
| 139 | rfid.printer.crud.service | Ch 10 D13 | Printer registration, status monitoring | CRUD |
| 140 | rfid.printjob.command.service | Ch 10 D13 | Print job queue management, progress tracking | Command |
| 141 | rfid.scan.command.service | Ch 10 D13 | Scan session management: start, record reads, complete | Command |
| 142 | rfid.inventory-reconciliation.service | Ch 10 D13 | Compare RFID scan results against expected inventory | Calculation |
F.11 Module Dependency Matrix
F.11.1 Inter-Module Dependencies
Module 1 (Sales) --> Module 3 (Catalog):
sale.cart.command.service --> catalog.product.query.service (lookup product/variant)
sale.promotion.engine.service --> catalog.pricing.calculation.service (resolve price)
sale.price-check.query.service --> catalog.product.query.service (read product)
Module 1 (Sales) --> Module 4 (Inventory):
sale.finalize.command.service --> inventory.sale-deduction (via SaleCompleted event)
sale.park.command.service --> inventory.reservation (soft-reserve on park)
sale.void.command.service --> inventory.sale-deduction (via SaleVoided, restores)
sale.return.command.service --> inventory.sale-deduction (via ReturnCompleted, restores)
Module 1 (Sales) --> Module 2 (Customers):
sale.cart.command.service --> customer.profile.crud.service (attach customer)
sale.loyalty.command.service --> customer.profile.crud.service (read loyalty)
sale.payment.storecredit.service --> customer.profile.crud.service (check credit)
Module 1 (Sales) --> Module 5 (Setup):
sale.tax.calculation.service --> setup.tax.crud.service (read tax rates)
sale.shift.command.service --> setup.user.crud.service (validate employee)
sale.cashdrawer.command.service --> setup.register.crud.service (validate register)
Module 1 (Sales) --> Module 6 (Integrations):
sale.payment.card.service --> integration.payment-processor.service (card auth)
sale.receipt.query.service --> integration.email.service (email receipt)
Module 3 (Catalog) --> Module 4 (Inventory):
catalog.product.lifecycle.service --> inventory.level.query.service (check stock)
Module 4 (Inventory) --> Module 3 (Catalog):
inventory.po.command.service --> catalog.vendor.crud.service (vendor details)
inventory.reorder.engine.service --> catalog.product.query.service (product details)
Module 4 (Inventory) --> Module 5 (Setup):
inventory.level.query.service --> setup.location.crud.service (location details)
inventory.rules.engine.service --> setup.settings.crud.service (business rules)
Module 6 (Integrations) --> Module 3 (Catalog):
integration.shopify.product-sync --> catalog.product.query.service (read products)
integration.amazon.catalog-sync --> catalog.product.query.service (read products)
integration.google.product-sync --> catalog.product.query.service (read products)
integration.cross-platform.validation --> catalog.product.query.service (validate)
Module 6 (Integrations) --> Module 4 (Inventory):
integration.shopify.inventory-sync --> inventory.level.query.service (read stock)
integration.amazon.inventory-sync --> inventory.level.query.service (read stock)
integration.google.inventory-sync --> inventory.level.query.service (read stock)
integration.cross-platform.inventory-orchestrator --> inventory.level.query (allocate)
Module 6 (Integrations) --> Module 1 (Sales):
integration.shopify.order-sync --> sale.finalize.command.service (create order)
Cross-Cutting --> All Modules:
crosscutting.tenant.middleware.service --> ALL (RLS enforcement)
crosscutting.auth.service --> ALL (permission checks)
crosscutting.audit-log.service --> ALL (via event subscription)
crosscutting.event-store.service --> Module 1, 4, 6 (ES-enabled modules)
F.11.2 Dependency Direction Rules
| Rule | Description |
|---|---|
| Allowed | Module 1 --> Module 2, 3, 4, 5 (sales orchestrates) |
| Allowed | Module 6 --> Module 3, 4 (integrations read catalog/inventory) |
| Allowed | Module 4 --> Module 3 (inventory references catalog) |
| Forbidden | Module 2 --> Module 1 (customers cannot call sales) |
| Forbidden | Module 3 --> Module 1 (catalog cannot call sales) |
| Forbidden | Module 5 --> Module 1, 2, 3, 4 (setup is pure configuration) |
| Event-Only | Module 4 <-- Module 1 (inventory reacts to sale events, not direct calls) |
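The Event-Only rule is the subtlest of the three: Inventory never exposes an API that Sales calls; it only subscribes to sale events. A minimal sketch of that inversion, with illustrative names (the event shape and in-process bus are assumptions, not BRD definitions):

```typescript
// Hypothetical sketch of the Event-Only rule between Module 1 and Module 4.
// Event shape and the in-process bus are illustrative assumptions.
type SaleCompleted = { saleId: string; lines: { sku: string; qty: number }[] };

const handlers: ((e: SaleCompleted) => void)[] = [];
const subscribe = (h: (e: SaleCompleted) => void) => handlers.push(h);
const publish = (e: SaleCompleted) => handlers.forEach((h) => h(e));

// Module 4 (Inventory) reacts to the event; Module 1 never calls it directly.
const stock = new Map<string, number>([["SKU-1", 10]]);
subscribe((e) =>
  e.lines.forEach((l) => stock.set(l.sku, (stock.get(l.sku) ?? 0) - l.qty))
);

// Module 1 (Sales) only publishes — it holds no reference to inventory code.
publish({ saleId: "S-1", lines: [{ sku: "SKU-1", qty: 3 }] });
console.log(stock.get("SKU-1")); // 7
```

Because the dependency points only from event to subscriber, Module 1 compiles and runs with Module 4 absent, which is exactly what the matrix requires.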
F.12 Service-to-BRD Traceability Matrix
Every BRD top-level section (x.y) maps to at least one service. Full bidirectional traceability:
Coverage Statistics
| BRD Module | Sections | Services | Coverage |
|---|---|---|---|
| Module 1: Sales (1.1-1.20) | 59 subsections | 37 services | 100% |
| Module 2: Customers (2.1-2.8) | 10 subsections | 7 services | 100% |
| Module 3: Catalog (3.1-3.15) | 48 subsections | 20 services | 100% |
| Module 4: Inventory (4.1-4.19) | 55 subsections | 23 services | 100% |
| Module 5: Setup (5.1-5.21) | 63 subsections | 21 services | 100% |
| Module 6: Integrations (6.1-6.13) | 28 subsections | 20 services | 100% |
| TOTAL | 263 subsections | 128 services | 100% |
Orphaned Capabilities: None
All BRD sections 1.1-6.13 have at least one mapped service.
Architecture-Only Services (14)
Cross-cutting (#129-136) and RFID (#137-142) services are justified by Blueprint architecture requirements (Ch 07, Ch 10, Ch 14) rather than BRD sections.
F.13 CQRS Pattern Reference
F.13.1 When to Split Command/Query
| Criterion | Split (CQRS) | Keep Together (CRUD) |
|---|---|---|
| Write and read models differ significantly | Yes | – |
| Audit trail required (Event Sourcing) | Yes | – |
| Read-heavy with denormalized views | Yes (materialized projections) | – |
| Simple entity CRUD with no complex reads | – | Yes |
| Configuration data | – | Yes |
F.13.2 CQRS Event Flow
1. Client sends Command (e.g., CreateSaleCommand)
|
v
2. Command Handler validates business rules
|
v
3. Aggregate produces Domain Events (e.g., SaleCreated, SaleLineItemAdded)
|
v
4. Events appended to Event Store (events table)
|
+---> 5a. Projection Handlers update Read Models (materialized views)
+---> 5b. Audit Log Handler writes to audit_log
+---> 5c. Outbox Relay publishes to external subscribers
+---> 5d. Integration Handler triggers sync (Module 6)
|
v
6. Query reads from optimized Read Model (not event store)
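The six steps above can be walked through in a compressed sketch. All names here (event types, the in-memory store, the projection map) are illustrative stand-ins for the real event store and read models, not definitions from the Blueprint:

```typescript
// Hypothetical end-to-end walk-through of steps 1–6. Names are illustrative.
type DomainEvent =
  | { type: "SaleCreated"; saleId: string }
  | { type: "SaleLineItemAdded"; saleId: string; sku: string; qty: number };

const eventStore: DomainEvent[] = []; // step 4: append-only events table stand-in
const readModel = new Map<string, number>(); // step 5a: denormalized projection

// Steps 2–3: the command handler validates business rules, then the
// aggregate emits domain events rather than mutating state directly.
function handleCreateSale(saleId: string, sku: string, qty: number): void {
  if (qty <= 0) throw new Error("qty must be positive"); // business rule
  const events: DomainEvent[] = [
    { type: "SaleCreated", saleId },
    { type: "SaleLineItemAdded", saleId, sku, qty },
  ];
  eventStore.push(...events); // step 4: append to the event store
  events.forEach(project); // step 5a: projection handlers update read models
}

function project(e: DomainEvent): void {
  if (e.type === "SaleLineItemAdded")
    readModel.set(e.saleId, (readModel.get(e.saleId) ?? 0) + e.qty);
}

handleCreateSale("S-100", "SKU-9", 2);
console.log(readModel.get("S-100")); // step 6: queries hit the projection → 2
```

Note that the query at step 6 never touches `eventStore`; it reads the projection, which is what keeps read paths fast regardless of event history length.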
F.13.3 Domain Events Summary
| Aggregate | Module | Event Count |
|---|---|---|
| Sale | 1 (Sales) | 25 |
| Return | 1 (Sales) | 5 |
| Gift Card | 1 (Sales) | 4 |
| Layaway/Hold | 1 (Sales) | 6 |
| Cash Drawer | 1 (Sales) | 6 |
| Inventory | 4 (Inventory) | 12 |
| Customer | 2 (Customers) | 8 |
| Employee | 5 (Setup) | 4 |
| Integration | 6 (Integrations) | 10 |
| TOTAL | – | 80 |
F.14 Folder Structure Reference (Technology-Agnostic)
src/
+-- modules/
| +-- sales/
| | +-- commands/
| | | +-- cart/
| | | +-- checkout/
| | | +-- payment/
| | | +-- return/
| | | +-- cash-drawer/
| | | +-- gift-card/
| | | +-- layaway/
| | | +-- hold/
| | +-- queries/
| | +-- events/
| | +-- event-handlers/
| | +-- domain/
| | | +-- aggregates/
| | | +-- value-objects/
| | | +-- rules/
| | +-- repositories/
| | +-- dtos/
| |
| +-- customers/
| | +-- commands/
| | +-- queries/
| | +-- domain/
| | +-- repositories/
| |
| +-- catalog/
| | +-- commands/
| | | +-- product/
| | | +-- variant/
| | | +-- pricing/
| | | +-- category/
| | | +-- bulk-import/
| | +-- queries/
| | +-- domain/
| | +-- cache/
| |
| +-- inventory/
| | +-- commands/
| | | +-- adjustment/
| | | +-- purchase-order/
| | | +-- receiving/
| | | +-- transfer/
| | | +-- count/
| | | +-- reservation/
| | | +-- fulfillment/
| | +-- queries/
| | +-- events/
| | +-- event-handlers/
| | +-- domain/
| |
| +-- setup/
| | +-- commands/
| | +-- queries/
| |
| +-- integrations/
| | +-- core/
| | | +-- provider-registry/
| | | +-- circuit-breaker/
| | | +-- outbox-relay/
| | | +-- idempotency/
| | | +-- webhook-pipeline/
| | | +-- dead-letter/
| | +-- providers/
| | | +-- shopify/
| | | +-- amazon/
| | | +-- google/
| | | +-- payment/
| | | +-- email/
| | | +-- shipping/
| | +-- orchestration/
| | +-- anti-corruption-layer/
| |
| +-- rfid/
| +-- commands/
| +-- queries/
| +-- domain/
|
+-- cross-cutting/
| +-- event-store/
| +-- tenant/
| +-- auth/
| +-- audit/
| +-- sync/
| +-- notifications/
| +-- reporting/
| +-- state-machine/
|
+-- shared/
| +-- domain/
| +-- interfaces/
| +-- infrastructure/
|
+-- api/
+-- controllers/
+-- startup/
F.15 Summary Statistics
| Metric | Value |
|---|---|
| BRD Modules | 6 |
| Code Modules | 7 + cross-cutting + RFID |
| Total Services | 142 |
| Module 1 (Sales) | 37 services |
| Module 2 (Customers) | 7 services |
| Module 3 (Catalog) | 20 services |
| Module 4 (Inventory) | 23 services |
| Module 5 (Setup) | 21 services |
| Module 6 (Integrations) | 20 services |
| Cross-Cutting | 8 services |
| RFID (Optional) | 6 services |
| Domain Events | 80 |
| State Machines | 19 |
| BRD Decisions Mapped | 107/107 (100%) |
| BRD Coverage | 100% (all sections mapped) |
| Orphaned Capabilities | 0 |
Service Pattern Distribution
| Pattern | Count | % |
|---|---|---|
| CRUD | 35 | 24.6% |
| Command | 28 | 19.7% |
| Query | 14 | 9.9% |
| Integration | 13 | 9.2% |
| Stateful | 12 | 8.5% |
| Event Handler | 10 | 7.0% |
| Rule Engine | 7 | 4.9% |
| Calculation | 5 | 3.5% |
| Other (Orchestrator, Saga) | 18 | 12.7% |
| TOTAL | 142 | 100% |
Migration from Current Service Layer
The current Chapter 14 defines 5 coarse-grained services. This mapping decomposes them:
| Current Service (Ch 14) | Decomposed Into | Count |
|---|---|---|
| IOrderService | sale.cart.*, sale.park.*, sale.discount.*, sale.payment.*, sale.finalize.*, sale.void.*, sale.return.*, sale.exchange.*, sale.layaway.*, sale.hold-for-pickup.*, sale.giftcard.*, sale.receipt.*, sale.history.* | 23 |
| IInventoryService | inventory.level.*, inventory.po.*, inventory.receiving.*, inventory.transfer.*, inventory.count.*, inventory.rma.*, inventory.reservation.*, inventory.fulfillment.* | 23 |
| ICustomerService | customer.profile.*, customer.search.*, customer.group.*, customer.merge.*, customer.privacy.*, sale.loyalty.* | 7 |
| IItemService | catalog.product.*, catalog.variant.*, catalog.search.*, catalog.bulk-import.* | 20 |
| IReportService | crosscutting.report.engine.service, sale.daily-summary.*, inventory.dashboard.*, catalog.analytics.* | 4 |
| (new services) | setup.*, integration.*, cross-cutting, RFID | 55 |
RFID Decisions (BRD v20.0, #108-113)
BRD v20.0 added 6 new decisions for the RFID Counting Subsystem:
| # | Decision | Summary |
|---|---|---|
| 108 | RFID scope limited to counting only | No lifecycle tracking (sold_at, transferred_at stripped). Tag status limited to: active, void, lost |
| 109 | EPC serial generation via PostgreSQL SEQUENCE | Per-tenant SEQUENCE for serial numbers, not column-based last_serial_number |
| 110 | Chunked sync with 5,000 events per chunk | UNIQUE(session_id, epc) idempotency, resume via upload-status endpoint |
| 111 | RSSI-based multi-operator dedup | When multiple operators scan the same tag, highest RSSI wins for section assignment |
| 112 | Auto-save with 30-second SQLite checkpoint | Crash recovery dialog, battery-triggered saves on Raptag mobile app |
| 113 | Maximum 10 operators per counting session | Section assignment per operator, session_operators join table |
Related services: rfid.config.crud.service, rfid.tag.crud.service, rfid.printer.crud.service, rfid.printjob.command.service, rfid.scan.command.service, rfid.inventory-reconciliation.service
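Decision #111 (RSSI-based multi-operator dedup) reduces to a per-EPC max over signal strength. A minimal sketch of that reducer, with illustrative types (the read shape and field names are assumptions, not from the BRD):

```typescript
// Hypothetical sketch of decision #111: when multiple operators read the same
// EPC, the read with the highest RSSI wins for section assignment.
// The TagRead shape is an illustrative assumption.
interface TagRead {
  epc: string;
  rssi: number; // dBm — less negative means a stronger signal
  section: string;
}

function dedupByRssi(reads: TagRead[]): Map<string, TagRead> {
  const best = new Map<string, TagRead>();
  for (const r of reads) {
    const cur = best.get(r.epc);
    if (!cur || r.rssi > cur.rssi) best.set(r.epc, r); // keep strongest read
  }
  return best;
}

const winner = dedupByRssi([
  { epc: "E-1", rssi: -62, section: "A" },
  { epc: "E-1", rssi: -48, section: "B" }, // stronger signal → section B wins
]).get("E-1");
console.log(winner?.section); // "B"
```

Running this reducer server-side after chunked sync (decision #110) keeps the dedup deterministic regardless of the order in which operator chunks arrive.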
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2026-02-24 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Section | Appendix F |
| BRD Version | 20.0 |
This appendix is part of the POS Blueprint Book. All content is self-contained.